Dataset schema (field, type, and length/value statistics):
- id: string, length 10
- number: int64, 1 to 25.6k
- forum: string, length 10
- title: string, length 5 to 214
- abstract: string, length 26 to 4.31k
- content_TLDR: string, length 1 to 250
- content_keywords: string, length 6 to 1.02k
- content_pdf: string, length 49
- content_primary_area: string, 21 classes
- content_supplementary_material: string, length 56
- signatures: string, length 47 to 51
0aEhB0XqMH
24,900
0aEhB0XqMH
Learning-Domain Decomposition: Interpreting Training Dynamics via Loss Vectors
Deep neural networks achieve high performance, but it is still not well understood how they learn during training and when they forget what has been learned. In this study, we propose Learning-Domain Decomposition (LDD), a method that analyzes training dynamics based on per-sample loss vectors. LDD applies sparse dictionary learning to the differences of loss vectors across training steps. This enables the extraction of learning-domains, which represent common patterns learned by the model, and clarifies when they are acquired or forgotten in a bottom-up manner. We further evaluate the contribution of each domain to generalization by quantifying its effect on validation loss. Experiments on the MNIST dataset with a simple CNN show that easy samples are learned early but later degrade generalization, while ambiguous samples are repeatedly forgotten and relearned and ultimately contribute to generalization. In addition, data pruning based on the degree of contribution to multiple domains (domain multiplicity) allows training with 5% of the data while achieving performance comparable to or better than training with the full dataset. These findings demonstrate that LDD provides both an interpretable perspective on training dynamics and a practical tool for efficient data selection.
null
['Training Dynamics', 'Loss Vectors', 'Interpretability', 'Data Pruning']
/pdf/4222cceeb6455955f54ad8cdead8fe065adeb0dc.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission24900/Authors']
EJ3uh7XPgJ
24,899
EJ3uh7XPgJ
On Non-interactive Evaluation of Animal Communication Translators
If you had an AI Whale-to-English translator, how could you validate whether or not it is working? Does one need to interact with the animals or rely on grounded observations such as temperature? We provide theoretical and proof-of-concept experimental evidence suggesting that interaction and even observations may not be necessary for sufficiently complex languages. One may be able to evaluate translators solely by their English outputs, offering potential advantages in terms of safety, ethics, and cost. This is an instance of machine translation quality evaluation (MTQE) without any reference translations available. A key challenge is identifying "hallucinations," false translations that may appear fluent and plausible. We propose using segment-by-segment translation together with the classic NLP shuffle test to evaluate translators. The idea is to translate animal communication, turn by turn, and evaluate how often the resulting translations make more sense in their original order than in a permuted order. Proof-of-concept experiments on data-scarce human languages and constructed languages demonstrate the potential utility of this evaluation methodology. These human-language experiments serve solely to validate our reference-free metric under data scarcity. It is found to correlate highly with a standard evaluation based on reference translations, which are available in our experiments. We also perform a theoretical analysis suggesting that interaction may not be necessary nor efficient in the early stages of learning to translate.
If you had an AI Whale-to-English translator, how could you validate whether or not it is working?
['Machine translation', 'Reference-free evaluation', 'Semantic Order', 'Low-resource learning', 'Active learning theory', 'Animal communication']
/pdf/54192c81bfd1cc6cb9de8faa75d42d8233ce60ad.pdf
other topics in machine learning (i.e., none of the above)
/attachment/d7368bb8501e0861807d2814cd0c578761987dc2.zip
['ICLR.cc/2026/Conference/Submission24899/Authors']
qcz3g6mH3L
24,895
qcz3g6mH3L
ROSARL: Reward-Only Safe Reinforcement Learning
An important problem in reinforcement learning is designing agents that learn to solve tasks safely in an environment. A common solution is to define either a penalty in the reward function or a cost to be minimised when reaching unsafe states. However, designing reward or cost functions is non-trivial, and their difficulty can increase with the complexity of the problem. To address this, we investigate the concept of a *Minmax* penalty, the smallest penalty for unsafe states that leads to safe optimal policies, regardless of task rewards. We derive upper and lower bounds on this penalty by considering both environment *diameter* and *controllability*. Additionally, we propose a simple algorithm for agents to estimate this penalty while learning task policies. Our experiments demonstrate the effectiveness of this approach in enabling agents to learn safe policies in high-dimensional continuous control environments.
null
['Reinforcement Learning', 'Deep Reinforcement Learning', 'Safe RL', 'Constrained RL', 'Reward shaping']
/pdf/bafc582802c24fa147a79b2d0826bb27e97cea11.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission24895/Authors']
J0NqBn5RQq
24,894
J0NqBn5RQq
Learning Music Style For Piano Arrangement Through Cross-Modal Bootstrapping
What is music style? Though often described using text labels such as "swing," "classical," or "emotional," the real style remains implicit and hidden in concrete music examples. In this paper, we introduce a cross-modal framework that learns implicit music styles from raw audio and applies the styles to symbolic music generation. Inspired by BLIP-2, our model leverages a Querying Transformer (Q-Former) to extract style representations from a large, pre-trained audio language model (LM), and further applies them to condition a symbolic LM for generating piano arrangements. We adopt a two-stage training strategy: contrastive learning to align auditory style with symbolic expression, followed by generative modelling to perform music arrangement. Our model generates piano performances jointly conditioned on a lead sheet (content) and a reference audio example (style), enabling controllable and stylistically faithful arrangement. Experiments demonstrate the effectiveness of our approach in piano cover generation, style transfer, and audio-to-MIDI retrieval, achieving substantial improvements in style-aware alignment and music quality.
We present a cross-modal framework that enables style-faithful symbolic piano cover arrangement from music audio.
['music generation', 'audio-to-symbolic alignment', 'piano cover generation', 'style transfer', 'Q-Former']
/pdf/f7303d87dcc7835858aa551d0ac5ea7d97aa8389.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission24894/Authors']
YWLMoSmakk
24,893
YWLMoSmakk
CR-Guided Transformers: Coherence-Based Redundancy Identification and Regularization
Current Transformer-based language models demonstrate excellent performance across various tasks. However, these models commonly produce redundant transformations in middle-to-deep layers: the transformation between a layer's input and output exhibits pronounced linear correlation or contains nearly irrelevant components. This paper attributes the root cause to current training paradigms, which emphasize prediction accuracy while neglecting the effectiveness of the nonlinear transformations in model layers. Based on this observation, we propose criteria for identifying redundant transformations. To quantify the degree of redundancy, we further propose a Coherence-based Redundancy (CR) measure. Specifically, we treat the input and output of a model layer as sequence distributions. We leverage characteristic functions and the Fourier transform to map the distributions to frequency-domain representations. Finally, we compute coherence in the complex plane and assess the effectiveness of transformations on a [0,1] coherence scale. To suppress redundant transformations at layer outputs, we propose two schemes: tree-structured residual paths and a coherence-based redundancy loss. These approaches guide middle-to-deep layers to produce effective transformations while supervising and regularizing against redundant outputs. Our pre-training experiments on a 12-layer Llama3-130M demonstrate that the proposed methods significantly reduce redundant transformations. With training settings held constant, the 12-layer model outperforms the 14-layer baseline.
null
['Redundancy Identification', 'Coherence-based Redundancy measure', 'Redundancy Regularization']
/pdf/91c3857fdd5f850e1c9ee98e384c8d1956534b8e.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission24893/Authors']
6fFDsMY3ry
24,892
6fFDsMY3ry
Surrogate-Based Quantification of Policy Uncertainty in Generative Flow Networks
Generative flow networks are able to sample, via sequential construction, high-reward, complex objects according to a reward function. However, such reward functions are often estimated approximately from noisy data, leading to epistemic uncertainty in the learnt policy. We present an approach to quantify this uncertainty by constructing a surrogate model composed of a polynomial chaos expansion, fit on a small ensemble of trained flow networks. This model learns the relationship between reward functions, parametrised in a low-dimensional space, and the probability distributions over actions at each step along a trajectory of the flow network. The surrogate model can then be used for inexpensive Monte Carlo sampling to estimate the uncertainty in the policy given uncertain rewards. We illustrate the performance of our approach on discrete and continuous grid-worlds, symbolic regression, and a Bayesian structure learning task.
Quantifying policy uncertainty in generative flow networks with uncertain reward via a PCE-surrogate model
['uncertainty quantification', 'GFlowNets', 'generative modelling', 'polynomial chaos expansions']
/pdf/76b231c9d44d75157213d42e73d1f753291fbdbe.pdf
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
null
['ICLR.cc/2026/Conference/Submission24892/Authors']
neG0h10Be5
24,890
neG0h10Be5
Few-Shot Adversarial Low-Rank Fine-Tuning of Vision-Language Models
Vision-Language Models (VLMs) such as CLIP have shown remarkable performance in cross-modal tasks through large-scale contrastive pre-training. To adapt these large transformer-based models efficiently for downstream tasks, Parameter-Efficient Fine-Tuning (PEFT) techniques like Low-Rank Adaptation (LoRA) have emerged as scalable alternatives to full fine-tuning, especially in few-shot scenarios. However, like traditional deep neural networks, VLMs are highly vulnerable to adversarial attacks, where imperceptible perturbations can significantly degrade model performance. Adversarial training remains the most effective strategy for improving model robustness in PEFT. In this work, we propose AdvCLIP-LoRA, to our knowledge the first method designed to enhance the adversarial robustness of CLIP models fine-tuned with LoRA in few-shot settings. Our method formulates training as a minimax optimization over low-rank adapters and adversarial perturbations, enabling robust adaptation with a small trainable footprint. Across eight datasets and two backbones (ViT-B/16 and ViT-B/32), AdvCLIP-LoRA achieves state-of-the-art performance in few-shot classification, adversarial base-to-new generalization, and cross-dataset transfer, delivering higher adversarial robustness than prompt tuning baselines without sacrificing much clean accuracy. These findings highlight AdvCLIP-LoRA as a practical approach for robust adaptation of VLMs in resource-constrained settings.
Adversarial training via low-rank adaptation for VLMs in few-shot settings.
['Adversarial Training', 'Minimax Optimization', 'Low-Rank Adaptation', 'Vision Language Models']
/pdf/745a82e1778fc90cd9cbaab05f9cc2d664957393.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24890/Authors']
rBt9aW3Mx7
24,889
rBt9aW3Mx7
Complexity- and Statistics-Guided Anomaly Detection in Time Series Foundation Models
This paper introduces a methodology for anomaly detection in time series using Time Series Foundation Models (TFMs). While TFMs have achieved strong success in forecasting, their role in anomaly detection remains underexplored. We identify two key challenges when applying TFMs to reconstruction-based anomaly detection and propose solutions. The first challenge is overgeneralization, where TFMs reconstruct both normal and abnormal data with similar accuracy, masking true anomalies. We find that this effect often occurs in data with strong low-frequency components. To address it, we propose a complexity metric, $\alpha$, that reflects how difficult the data is for TFMs and design a Complexity-Aware Ensemble (CAE) that adaptively balances TFMs with a statistical model. The second challenge is overstationarization, caused by instance normalization layers that improve forecasting accuracy but remove essential statistical features such as mean and variance, which are critical for anomaly detection. We resolve this by reintroducing these features into the reconstruction process without retraining the TFMs. Experiments on 23 univariate benchmark datasets demonstrate that our method significantly outperforms both deep learning and statistical baselines. Furthermore, we show that our complexity-based metric, $\alpha$, provides a theoretical foundation for improved anomaly detection, and we briefly explore prediction-based anomaly detection using TFMs.
We propose solutions based on a complexity measure α that captures high-frequency complexity and restores statistical features removed by RevIN, leading to theoretical and empirical improvements in anomaly detection.
['Timeseries anomaly detection', 'Timeseries foundation model', 'Reconstruction based anomaly detection']
/pdf/448c4a04ec89f09788e3a87f27848d91b951844c.pdf
learning on time series and dynamical systems
/attachment/d388a5ac138e242cdf03691d823a6df6069ae7cb.zip
['ICLR.cc/2026/Conference/Submission24889/Authors']
HHQjNDiWoR
24,888
HHQjNDiWoR
Can LLMs Serve as Causal Inference Agents? A Study on Post-Training Methods
Despite the potential of Large Language Models (LLMs) to democratize causal inference, they currently struggle with quantitative reasoning. This paper investigates whether post-training can transform an LLM into a practical and accessible causal inference agent for non-professionals. To facilitate this, we first introduce the DeepCausal dataset, a novel collection of seven computational causal inference tasks designed for both training and evaluation. We then propose DeepCausal, an LLM-based agent that enables users to perform complex causal analysis using natural language. Our core methodology involves a comprehensive comparison of online and offline post-training techniques. We find that while offline training equips LLMs with fundamental causal concepts, online post-training is crucial for teaching them how to apply these rules to solve problems, resulting in a significantly more effective, robust, and generalizable model. Our extensive experiments demonstrate that DeepCausal effectively performs causal effect estimation, providing clear, interpretable explanations in natural language. By lowering the technical barrier, our work makes complex causal analysis accessible to a broader audience and establishes the viability of using post-trained LLMs for sophisticated causal reasoning.
null
['Large Language Models (LLMs)', 'Causal Inference', 'Post-training']
/pdf/5b5c34303357050b135c7a1437d103a132577ff7.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24888/Authors']
qiduLvfi63
24,886
qiduLvfi63
RAH-LORA: TRAINING-FREE CALIBRATION OF HIGH-INFLUENCE ATTENTION HEADS IN MLLMS
Multimodal large language models (MLLMs) suffer from a coordination failure during training: attention heads optimize independently despite sharing inputs, leading many to develop suboptimal specialization patterns. We identify that numerous attention heads exhibit high downstream influence yet minimal cross-modal interaction, acting as performance bottlenecks that propagate misaligned patterns throughout the network. To address this, we introduce RAH-LoRA (Representative Anchor Head Low-Rank Adaptation), a training-free calibration method that realigns these problematic heads by transferring successful patterns from high-performing anchors. Our key insight is that the transformer's residual architecture enables safe pattern transfer between heads operating in the same representation space. RAH-LoRA identifies bottleneck heads using our proposed metrics (Instruction-conditioned Saliency and Causal Attention Flow), constructs representative patterns from similar well-performing heads, and applies controlled low-rank updates with theoretical guarantees on output stability. The method requires only forward passes on unlabeled data, completing calibration in minutes on a single GPU. Experiments demonstrate consistent improvements across vision-language benchmarks, with gains strongly correlated to the identified influence-saliency gap, validating that targeting high-influence, low-cross-modal heads yields amplified benefits.
null
['MLLM', 'training-free adaptation']
/pdf/89cb1b4da3db8ac913d26a523b217ec24b53386f.pdf
foundation or frontier models, including LLMs
/attachment/5dfdb5db2e92b64ffb278810f45962433521bf55.pdf
['ICLR.cc/2026/Conference/Submission24886/Authors']
EiEbn6FZsK
24,885
EiEbn6FZsK
URS: A Unified Neural Routing Solver for Cross-Problem Zero-Shot Generalization
Multi-task neural routing solvers have emerged as a promising paradigm for their ability to solve multiple vehicle routing problems (VRPs) using a single model. However, existing neural solvers typically rely on predefined problem constraints or require per-problem fine-tuning, which substantially limits their zero-shot generalization ability to unseen VRP variants. To address this critical bottleneck, we propose URS, a unified neural routing solver capable of zero-shot generalization across a wide range of unseen VRPs using a single model without any fine-tuning. The key component of URS is the unified data representation (UDR), which replaces problem enumeration with data unification, thereby broadening the problem coverage and reducing reliance on domain expertise. In addition, we propose a Mixed Bias Module (MBM) to efficiently learn the geometric and relational biases inherent in various problems. On top of the proposed UDR, we further develop a parameter generator that adaptively adjusts the decoder and bias weights of MBM to enhance zero-shot generalization. Moreover, we propose an LLM-driven constraint satisfaction mechanism, which translates raw problem descriptions into executable stepwise masking functions to ensure solution feasibility. Extensive experiments demonstrate that URS can consistently produce high-quality solutions for more than 100 distinct VRP variants without any fine-tuning, which includes more than 90 unseen variants. To the best of our knowledge, URS is the first neural solver capable of handling over 100 VRP variants with a single model.
We propose a unified neural routing solver capable of consistently producing high-quality solutions for more than 100 distinct VRP variants without any fine-tuning.
['Vehicle Routing Problem', 'Reinforcement Learning', 'Neural Combinatorial Optimization', 'Zero-Shot Generalization', 'Unified Routing Solver']
/pdf/3d1cddac559dec242f2e52a8f57bc90d6ccb689b.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission24885/Authors']
ZkiVWvWwic
24,883
ZkiVWvWwic
Dynadiff: Single-stage Decoding of Images from Continuously Evolving fMRI
Brain-to-image decoding has been recently propelled by the progress in generative AI models and the availability of large ultra-high field functional Magnetic Resonance Imaging (fMRI). However, current approaches depend on complicated multi-stage pipelines and preprocessing steps that typically collapse the temporal dimension of brain recordings, thereby limiting time-resolved brain decoders. Here, we introduce Dynadiff (Dynamic Neural Activity Diffusion for Image Reconstruction), a new single-stage diffusion model designed for reconstructing images from dynamically evolving fMRI recordings. Our approach offers three main contributions. First, Dynadiff simplifies training as compared to existing approaches. Second, our model outperforms state-of-the-art models on time-resolved fMRI signals, especially on high-level semantic image reconstruction metrics, while remaining competitive on preprocessed fMRI data that collapse time. Third, this approach allows a precise characterization of the evolution of image representations in brain activity.
null
['brain decoding', 'neuroimaging', 'image generation', 'visual perception']
/pdf/56382e4b0f29c3cdd8e04f0afe5ef3460cb2361d.pdf
applications to neuroscience & cognitive science
/attachment/09e5f48b4b365cb7c2c781f23fa14785681e4b36.zip
['ICLR.cc/2026/Conference/Submission24883/Authors']
FmxRzlu0rT
24,880
FmxRzlu0rT
Learning Posterior Predictive Distributions for Node Classification from Synthetic Graph Priors
One of the most challenging problems in graph machine learning is generalizing across graphs with diverse properties. Graph neural networks (GNNs) face a fundamental limitation: they rely on labeled training data for each individual graph, a requirement that hinders universal node classification due to the heterogeneity inherent in graphs, such as differences in homophily levels, community structures, and feature distributions across datasets. Inspired by the success of large language models (LLMs) that achieve in-context learning through massive-scale pre-training on diverse datasets, we introduce NodePFN. This universal node classification method generalizes to arbitrary graphs without graph-specific training. NodePFN learns posterior predictive distributions (PPDs) by training only on thousands of synthetic graphs generated from carefully designed priors. Our synthetic graph generation covers real-world graphs through the use of random networks with controllable homophily levels and structural causal models for complex feature-label relationships. We develop a dual-branch architecture combining context-query attention mechanisms with local message passing to enable graph-aware in-context learning. Extensive evaluation on 23 benchmarks demonstrates that a single pre-trained NodePFN achieves 71.27% average accuracy. These results validate that universal graph learning patterns can be effectively learned from synthetic priors, establishing a new paradigm for generalization in node classification.
null
['graph machine learning', 'node classification']
/pdf/a56692fbf56a2786bad998508353d730ee5678df.pdf
learning on graphs and other geometries & topologies
null
['ICLR.cc/2026/Conference/Submission24880/Authors']
TeDkzf34hs
24,879
TeDkzf34hs
Dynamical properties of dense associative memory
Dense associative memory, a fundamental instance of modern Hopfield networks, can store a large number of memory patterns as equilibrium states of recurrent networks. While the stationary-state storage capacity has been investigated, its dynamical properties have not yet been discussed. In this paper, we analyze the dynamics using an exact approach based on generating functional analysis. We present results on convergence properties of memory retrieval, such as the convergence time and the size of the attraction basins. Our analysis enables a quantitative evaluation of the convergence time and the storage capacity of dense associative memory, which is useful for model design. Unlike the traditional Hopfield model, the retrieval of a pattern does not act as additional noise to itself, suggesting that the structure of modern networks makes recall more robust. Furthermore, the methodology presented here can be applied to other energy-based models, and thus has the potential to contribute to the design of future architectures.
We analyze the retrieval process using generating functional analysis and discuss the convergence time, the size of the attraction basins, and related dynamical properties.
['Hopfield networks', 'dense associative memory', 'dynamics', 'convergence time', 'attraction basin', 'generating functional analysis']
/pdf/47ac1a57346853f471b6a02c4d9d9c1ab3d8d6e2.pdf
learning theory
null
['ICLR.cc/2026/Conference/Submission24879/Authors']
nQiV6kQpEc
24,878
nQiV6kQpEc
Attacking and Securing Masking Scheme for TEE-Based Model Protection
Deep learning (DL) models are being increasingly adopted across a wide range of applications. Many inference models are deployed on edge devices to enable efficient and low-latency computation. However, such deployment exposes security risks, including the potential leakage of model parameters. To address these security risks, several researchers have proposed protection schemes for deployed models based on Trusted Execution Environments (TEEs). In this paper, we analyze a common weakness of existing TEE-based protection schemes, namely the insecurity of the masking mechanism. Existing masking schemes not only provide limited security guarantees but also incur high computational and storage complexity. Motivated by these inherent weaknesses, we develop a targeted differential attack that can accurately recover the parameters of linear layers in ReLU-based neural networks. Furthermore, we propose an improved masking scheme that achieves higher security and efficiency by generating substantially more mask combinations under the same computational cost, thereby considerably strengthening TEE-based model protection.
null
['TEE-Based Protection Schemes', 'ReLU-Based Neural Networks', 'Masking Schemes', 'Differential Attack']
/pdf/a34dd19c9d3051cb51823e7461ebb4286f62a40a.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/72a1f9ffd79b1838c52b5e3de69fd32201b00e2a.zip
['ICLR.cc/2026/Conference/Submission24878/Authors']
MsZa6NgqWJ
24,877
MsZa6NgqWJ
Comprehensive Benchmark for Tailored Small Molecule-Binding Aptamer Design
Despite their growing role as recognition elements in diagnostics, therapeutics, and biosensing, aptamers remain underserved by computational design tools compared to antibodies and protein binders. Current pipelines are fragmented and predominantly protein-focused, leaving small-molecule aptamer discovery underexplored. A key bottleneck has been the absence of a unified benchmark dataset that would allow systematic evaluation of predictive and generative models. To address this gap, we introduce the first comprehensive benchmark for aptamer–small molecule interactions, integrating seven curated sources into 2,210 annotated pairs covering 1,430 unique aptamers (DNA and RNA) and 496 chemically diverse ligands. More than half of the entries include quantitative binding affinities, enabling not only binary classification but also regression. To demonstrate the utility of this resource, we establish baseline results across shallow and deep learning models under multiple splitting protocols. Our analysis yields two key insights: (i) the coverage and diversity of aptamer sequences are sufficient to support robust modeling, ensuring that receptor-side representation is not the limiting factor; and (ii) the main challenge arises from the ligand space, where a relatively small number of molecules display high structural diversity, limiting model transferability. Because the ultimate goal is designing aptamers for previously unseen molecules, the observed limitations in ligand transferability point directly to the representation problem, reinforcing the necessity of a common benchmark to address it. By providing a standardized corpus, evaluation protocols, and reproducible baselines, our work establishes a foundation for systematic progress in aptamer–small molecule prediction.
We introduce a unified benchmark for aptamer–small molecule interactions, showing that aptamer sequence diversity is well covered while ligand representation remains the main challenge for predictive modeling and practical applications.
['aptamer', 'small molecule', 'binding', 'prediction', 'benchmark']
/pdf/da82ef42c78a2f5a4c936c2d669cb539ac3851e1.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission24877/Authors']
OiwMgMjeRz
24,875
OiwMgMjeRz
Shared Recurrent Memory Transformer for Multi-agent Lifelong Pathfinding
Coordination in decentralized multi-agent reinforcement learning (MARL) necessitates that agents share information about their behavior and intentions. Existing approaches rely on communication protocols with domain or resource constraints or centralized training that poorly scales to large agent populations. We introduce the Shared Recurrent Memory Transformer (SRMT), which enables coordination through unconstrained communication. SRMT provides a global memory workspace where agents broadcast their learned working memory states and query others' memory representations to exchange information and coordinate while maintaining decentralized training and execution. We evaluate SRMT on the Partially Observable Multi-Agent Pathfinding (PO-MAPF) problem, where coordination is vital for optimal path planning and deadlock avoidance. We demonstrate that shared memory enables emergent coordination even when the reward function provides minimal or no guidance. On the specifically constructed Bottleneck task that requires negotiation, SRMT consistently outperforms communicative and memory-augmented baselines, particularly under sparse reward signals, and successfully generalizes to longer corridors unseen during training. On POGEMA maps, SRMT scales with increasing agent population and map size, achieving competitive performance with recent MARL, hybrid, and planning-based methods while requiring no domain-specific heuristics. These results demonstrate that a transformer with shared recurrent memory enhances coordination in decentralized multi-agent systems.
null
['Multi-agent System', 'Shared Memory', 'Transformer']
/pdf/494d40f4d440c457023c115607e2d996791b2a89.pdf
applications to robotics, autonomy, planning
/attachment/bbc9e551313c2bd3e8cfdbf14c8b9f549a5416e0.zip
['ICLR.cc/2026/Conference/Submission24875/Authors']
TtlBX0cT3C
24,873
TtlBX0cT3C
Single-Cell Spatial Proteomics Clustering by Decoupling Spatiality and Expression
Single-cell spatial proteomics can reveal protein expression patterns while preserving the spatial structure of tissues, providing valuable insights into cellular functions and disease mechanisms. Spatial proteomics data clustering is a fundamental step in such studies, but it remains in the preliminary exploration phase, facing at least two prominent challenges: i) Functional regions within tissues often exhibit inherent area variations and imbalanced cell quantities, leading the model to favor features of majority classes, thus overshadowing the characteristics of minority ones. ii) Cellular identity is influenced by both intrinsic protein expression and the external spatial microenvironment; however, the heterogeneity and potential conflicts between these two information sources make it difficult to effectively identify subtle yet biologically significant cellular states. To overcome these issues, we propose a deep clustering framework named spClust. Our approach first introduces a spatially constrained synthetic minority oversampling technique to generate biologically meaningful cells of minority classes, alleviating the feature bias caused by cell type imbalance. Furthermore, we construct a spatiality adjacency graph and an expression similarity graph between cells, forming a decoupled dual-view contrastive learning architecture. We then define an adaptive mechanism to fuse the dual-view features and to assign soft cluster labels using dynamic prototypes, and further optimize labels by maximizing the modularity loss. Extensive experiments on spatial proteomics datasets demonstrate that spClust effectively identifies minority cells and improves the distinction of different cells, confirming its effectiveness and superiority.
null
['spatial proteomics clustering', 'cross-view contrastive learning', 'imbalanced learning', 'unsupervised learning']
/pdf/752429a459b6bcbced7d4108b4702161f4159f49.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission24873/Authors']
wnCJLnRBtb
24,869
wnCJLnRBtb
Context Similarity Structure Shapes the Emergence of Reliable In-Context and In-Weights Mixtures
We aim to train models that co-develop in-context learning (ICL) and in-weights learning (IWL), and flexibly switch between them based on context relevance. Such models should exploit closely related in-context examples while relying on IWL when examples are irrelevant. Although LLMs exhibit both modes, standard task-specific fine-tuning often erodes ICL, motivating IC-Train, a form of fine-tuning with in-context examples. When trained under IC-Train, prior work has shown that emergence of ICL depends on factors such as task diversity and training duration. We show that an overlooked factor is the similarity structure between target inputs and context examples. Of the two existing modes of context-target pairing, random context leads to IWL dominance, while only similar examples in context cause ICL to degenerate to copying labels without regard to relevance. To address this, we propose Contrastive-Context, which enforces two types of contrasts: (1) a mix of similar and random examples within a context to evolve a correct form of ICL, and (2) varying grades of similarity across contexts to evolve IWL-ICL mixtures. With experiments on real sequence-to-sequence learning tasks on four models, we show that Contrastive-Context strengthens ICL while preserving IWL, outperforming random and nearest-neighbor sampling in both in-domain and out-of-domain evaluation. Theoretical analysis and diagnostic probes confirm that contrasted contexts yield stable ICL–IWL mixtures, avoiding collapse into pure ICL, IWL, or copying. Our results establish similarity structure as a key driver of reliable ICL when fine-tuning an LLM for a task.
null
['In-Context Learning', 'Transformers', 'Sequence to Sequence Learning', 'Continuous Adaptation', 'In-Weights Learning']
/pdf/383b395d78e83f2d70cd6d4d6b207d4ece4363fa.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission24869/Authors']
ZQ9uqllSts
24,868
ZQ9uqllSts
Long-Context Modeling with Dynamic Hierarchical Sparse Attention for On-Device LLMs
The quadratic cost of attention hinders the scalability of long-context LLMs, particularly in resource-constrained settings. While attention is known to be often sparse, existing static sparse methods such as sliding windows or global tokens cannot adapt to task- or input-dependent variations in attention. While there are recently proposed dynamic approaches for sparse attention, they still depend on predefined templates or heuristic mechanisms that reduce generality and may prune tokens that remain contextually important. As such, we introduce Dynamic Hierarchical Sparse Attention (DHSA), a data-driven framework that dynamically predicts attention sparsity online without any retraining of the base LLM. Our proposed DHSA adaptively segments sequences into variable-length chunks, then computes chunk representations by aggregating the token embeddings within each chunk. To avoid the bias introduced by varying chunk lengths, we apply a length-normalized aggregation that scales the averaged embeddings by the square root of the chunk size. Finally, DHSA upsamples the chunk-level similarity scores to the token level to produce importance scores that determine which token-level interactions are preserved. Our experiments with Needle-in-a-Haystack and LongBench show that DHSA matches dense attention in accuracy while reducing prefill latency by 20–60% and peak memory usage by 35% at 8K compared to eager attention. On Llama-3.1-8B (4-bit), DHSA scales to 100K context with high accuracy and competitive latency on a single 24 GB GPU, where dense kernels fail between 16K and 64K. Compared to representative sparsity baselines, DHSA achieves consistently higher accuracy, yielding 12–20% relative gains, with comparable prefill cost. These results highlight DHSA as an efficient and adaptable solution for long-context inference for on-device LLMs.
null
['Long-context language models', 'Sparse attention', 'Dynamic sparsity', 'On-device inference', 'Efficient LLMs']
/pdf/09940cefdeae773f720c0f39c06a817c421c8674.pdf
foundation or frontier models, including LLMs
/attachment/7d90f0743a19e7ac831dd6fdd83fbbcf29a5f609.zip
['ICLR.cc/2026/Conference/Submission24868/Authors']
kQRjcBmFAD
24,867
kQRjcBmFAD
Chain-of-Thought Hijacking
Reasoning models are widely used to improve task performance by allocating more inference-time compute, and prior work suggests it may also strengthen safety by improving refusal. Yet we find the opposite: the same reasoning can be used to bypass safety. We introduce Chain-of-Thought Hijacking, a jailbreak attack on reasoning models. The attack pads harmful requests with long sequences of harmless reasoning. Across HarmBench, CoT Hijacking reaches a 99% attack success rate (ASR) on Gemini 2.5 Pro, far exceeding prior jailbreak methods. Our mechanistic analysis shows that mid layers encode the strength of safety checking, while late layers encode the verification outcome. Long benign CoT dilutes both signals by shifting attention away from harmful tokens. Targeted ablations of attention heads identified by this analysis causally increased ASR, confirming their role in a safety subnetwork. These results show that the most interpretable form of reasoning—explicit CoT—can itself become a jailbreak vector when combined with final-answer cues. We release prompts, outputs, and judge decisions to facilitate replication.
we introduce CoT hijacking, a new jailbreak for reasoning models
['safety', 'jailbreaks', 'chain-of-thought']
/pdf/cada7f5dee584fa1b3ce3ac36e3493d2094e6883.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24867/Authors']
6XdUI9Cibc
24,866
6XdUI9Cibc
ENFORCE: Nonlinear Constrained Learning with Adaptive-depth Neural Projection
Ensuring neural networks adhere to domain-specific constraints is crucial for addressing safety and ethical concerns while also enhancing inference accuracy. Despite the nonlinear nature of most real-world tasks, existing methods are predominantly limited to affine or convex constraints. We introduce ENFORCE, a neural network architecture that uses an adaptive projection module (AdaNP) to enforce nonlinear equality constraints in the predictions. We mathematically prove that our projection mapping is 1-Lipschitz under mild assumptions, making it well-suited for stable training. We evaluate ENFORCE on multiple tasks, including function fitting, a real-world engineering simulation, and learning optimization problems. For the latter, we introduce a class of scalable optimization problems as a benchmark for nonlinear constrained learning. The predictions of our new architecture satisfy $N_C$ equality constraints that are nonlinear in both the inputs and outputs of the neural network, while maintaining scalability with a tractable computational complexity of $\mathcal{O}(N_C^3)$ at training and inference time.
ENFORCE is a neural network architecture that uses an adaptive projection module (AdaNP) to enforce nonlinear equality constraints in predictions, improving safety, accuracy, and efficiency in optimization and regression tasks.
['Constrained learning', 'Hard-constrained neural networks', 'Proxy optimization', 'Trustworthy AI', 'Physics-informed machine learning']
/pdf/ccc87d99b1dfa3ac123220c3197caf8c9a7a8141.pdf
optimization
/attachment/91e635d9dc3843e4621223bd5281e1cb0c437eeb.zip
['ICLR.cc/2026/Conference/Submission24866/Authors']
wiPldcWBPp
24,865
wiPldcWBPp
FINEdits : Precise Image Editing with Inferred Masks and Light Fine-tuning
Image editing with diffusion models faces a fundamental trade-off between edit fidelity and preservation of unedited regions. Training-free methods often suffer from imperfect inversion that degrades reconstruction quality, while training-based approaches require substantial computational resources and carefully curated datasets. We present FINEdits, a method that addresses these limitations through two key innovations: (1) automatic mask inference using cross-attention maps to explicitly preserve non-edited regions, and (2) lightweight fine-tuning to improve inversion quality without semantic drift. Our masking approach leverages transformer attention mechanisms to automatically identify editing regions using a parameter-free K-means clustering method, eliminating the need for manual hyperparameter tuning. To handle the inversion quality degradation at early timesteps required for large edits, we introduce a light fine-tuning strategy that balances reconstruction fidelity with semantic preservation. We introduce EditFFHQ, a new benchmark dataset of 2000 face images with sequential editing instructions, enabling quantitative evaluation of identity preservation and edit quality. Extensive experiments demonstrate that FINEdits achieves superior identity preservation while maintaining competitive edit fidelity and image quality. Our method provides an effective solution for precise image editing that preserves visual consistency without requiring extensive retraining or manual parameter adjustment.
we propose a new image editing method that better preserves elements from the original image
['computer vision', 'generative modeling', 'diffusion models', 'image editing']
/pdf/3e74d427cdca1dce566276f901715761cd7c4c82.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission24865/Authors']
GrsofC2FqF
24,864
GrsofC2FqF
Detection of unknown unknowns in autonomous systems
Unknown unknowns (U2s) are deployment-time scenarios absent from development/testing. Unlike conventional anomalies, U2s are not out-of-distribution (OOD); they stem from changes in underlying system dynamics without a distribution shift from normal data. Thus, existing multi-variate time series anomaly detection (MTAD) methods—which rely on distribution-shift cues—are ill-suited for U2 detection. Specifically: (i) we show most anomaly datasets exhibit distribution shift between normal and anomalous data and therefore are not representative of U2s; (ii) we introduce eight U2 benchmarks where training data contain OOD anomalies but no U2s, while test sets contain both OOD anomalies and U2s; (iii) we demonstrate that state-of-the-art (SOTA) MTAD results often depend on impractical enhancements: point adjustment (PA) (uses ground truth to flip false negatives to true positives, inflating precision) and threshold learning with data leakage (TL) (tuning thresholds on test data and labels); (iv) with PA+TL, even untrained deterministic methods can match or surpass MTAD baselines; (v) without PA/TL, existing MTAD methods degrade sharply on U2 benchmarks. Finally, we present sparse model identification–enhanced anomaly detection (SPIE-AD), a model-recovery-and-conformance, zero-shot MTAD approach that outperforms baselines on all eight U2 benchmarks and on six additional real-world MTAD datasets—without PA or TL.
We formalize U2 (non-OOD dynamic changes without distribution shift), release 8 U2 benchmarks, and propose SPIE-AD—a zero-shot U2 detection method.
['unknown unknowns', 'autonomous systems', 'conformal bounds']
/pdf/9ae888cc8a84d10a6f39e70b6fd964b4b93cccec.pdf
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
null
['ICLR.cc/2026/Conference/Submission24864/Authors']
57jcZv7Kuq
24,863
57jcZv7Kuq
Character-Level Perturbations Amplify LLM Jailbreak Attacks
Contemporary large language models (LLMs) exhibit remarkable capabilities, yet their subword tokenization mechanisms suffer from a vulnerability, whereby small character-level perturbations can re-partition text into unfamiliar subwords, degrading model performance across various tasks. Building on this, we show that this tokenization vulnerability also compromises safety mechanisms in jailbreak scenarios. We introduce a simple, model- and template-agnostic character-level jailbreak method and demonstrate that minimal character-level perturbations effectively increase the success rates of both simple and complex jailbreak attacks across multiple LLMs. We reveal that these perturbations lead to over-fragmented tokenization and token representation drift, resulting in substantial divergence in the semantic representations of words. Furthermore, our analysis using word-level semantic recovery and sentence-level spelling error detection and correction shows that models struggle to reconstruct the original semantics for perturbed content. In addition, layer-wise probe classifiers also fail to reliably detect the harmful intent of perturbed jailbreak prompts, further exposing the models' vulnerability in comprehending adversarially perturbed input. Finally, we find that in certain cases, perturbations reduce rather than increase attack success, as the corrupted spans fit less naturally into the template. Together, our findings demonstrate that tokenization-induced vulnerabilities compromise safety mechanisms, underscoring the need for investigation into mitigation strategies.
null
['large language models', 'tokenization vulnerability', 'character-level perturbations', 'jailbreak attacks', 'safety mechanisms']
/pdf/8eeb9671b57266fc26091b459e7b6572b7a3ce4d.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24863/Authors']
seuufmeTI3
24,862
seuufmeTI3
On the Convergence of LoRA-Based Federated Learning: A Unified Analysis of Aggregation-Broadcast Operators
Federated Learning (FL) enables collaborative model training across decentralized data sources while preserving data privacy. However, the increasing scale of Machine Learning (ML) models poses significant communication and computation challenges in FL. Low-Rank Adaptation (LoRA) has recently been integrated into FL as a Parameter-Efficient Fine-Tuning (PEFT) strategy, substantially lowering communication costs by transmitting only a small set of trainable parameters. Nevertheless, how to aggregate LoRA-updated local models on the server remains a critical and understudied problem. This paper presents a comprehensive theoretical analysis of LoRA-based FL frameworks. We first classify existing aggregation schemes into two main categories: Sum-Product (SP) and Product-Sum (PS). We then introduce the Aggregation-Broadcast Operator (ABO) as a general class encompassing all aggregation-broadcast methods. Any method in this class ensures local or global convergence as long as the corresponding Weak or Strong Convergence Condition is satisfied. In particular, we prove that the SP and PS aggregation methods satisfy the weak and strong convergence conditions, respectively, but differ in their ability to achieve the optimal convergence rate. Moreover, we conducted extensive experiments on standard open datasets to verify our theoretical findings. AI Acknowledgment: We acknowledge that AI tools were employed to assist in paper writing and polishing the text to improve readability.
Theoretical convergence analysis for LoRA-enabled distributed fine-tuning.
['federated learning', 'low rank adaptation', 'convergence analysis', 'fine tuning']
/pdf/3779ec681b0e02b33047e7ed2fa85b1a171acfc4.pdf
foundation or frontier models, including LLMs
/attachment/241f9f48953e7298839a5ba40c975a8f03c14418.zip
['ICLR.cc/2026/Conference/Submission24862/Authors']
KBXrByWLcb
24,861
KBXrByWLcb
SPARC: SURVIVAL PSEUDO-LABEL ADAPTIVE REFINEMENT AND CALIBRATION
Accurate survival prediction is critical for oncology, public health, and reliability engineering, yet existing methods remain constrained by limited follow-up, heavy censoring, and static pseudo-labeling practices. In many clinical datasets, including our reconstructed cohort of $N = 50{,}155$ patients with observed follow-up of only 74.742 months (58.8\% deceased, 41.2\% censored), long-term outcomes remain unobserved, preventing reliable 10-year (120-month) survival estimation. We address this gap by introducing a dynamic pseudo-label refinement and calibration framework that transforms incomplete follow-up into extended, biologically consistent survival trajectories. Starting from a hybrid Weibull–Kaplan–Meier initialization, pseudo-labels are iteratively corrected under survival-theoretic constraints and clinical plausibility rules, including enforcing zero survival beyond death and monotonic survival probabilities for censored patients. These refined labels are propagated through a deep ensemble trained with variance-penalizing objectives and monitored via diagnostic feedback for stability and uncertainty calibration. This process enables survival labels to evolve adaptively, rather than remain static preprocessing artifacts, and produces clinically plausible estimates well beyond the observed horizon. Applied to the $50{,}155$-patient cohort, the framework achieved rapid convergence and outstanding predictive performance ($R^2 = 0.9964$, MAE = 0.0066, C-index = 0.9915), with predictions tightly calibrated, biologically consistent, and robust under long-term censoring. We further validated our proposed framework on two public datasets, Metabric ($N = 2509$) \cite{Metabric_Kaggle} and Malignant Melanoma ($N = 205$) \cite{finalfit2023survival}, achieving strong results ($R^2 = 0.9924$ \& $0.9781$, MAE = 0.0142 \& 0.0247, C-index = 0.9633 \& 0.8459) with follow-up extended to 480 and 240 months, respectively.
Thus, by bridging the 74.742-month follow-up limit with reliable 120-month projections on our dataset, our work establishes adaptive pseudo-label refinement as a principled foundation for long-horizon, interpretable, and clinically reliable survival modeling. Moreover, we will publicly release our dataset and code at \url{https://doi.org/10.5281/zenodo.17163267} and \url{https://anonymous.4open.science/r/Dynamic-Pseudo-Labeling-D2AB/}, respectively, for the research community.
We propose an adaptive pseudo-label refinement and calibration framework that enables accurate, biologically consistent long-term survival prediction despite heavy censoring and limited follow-up.
['Survival Analysis', 'Pseudo-Label Refinement', 'Calibration', 'Censoring', 'Long-Term Survival Prediction', 'Deep Ensemble Learning', 'Weibull–Kaplan–Meier Initialization', 'Clinical Reliability', 'Oncology', 'Healthcare AI']
/pdf/95906b885552fe9decdc2e06c9248e85309a722b.pdf
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
/attachment/cefa4980faddbddd7b6a2fcb9c0087e6babaede6.zip
['ICLR.cc/2026/Conference/Submission24861/Authors']
sC9gjsv7p7
24,860
sC9gjsv7p7
A2A: Mechanistic Analysis for Efficient Layer Selection in Activation Steering
Activation steering has emerged as an effective and economical technique for behavior control in large language models (LLMs). Despite growing interest, existing methods typically rely on exhaustive layer-wise interventions to identify effective steering locations. This process is computationally expensive and lacks interpretability. To mitigate this problem, we propose Attribution-to-Action (A2A), an efficient layer selection framework that leverages mechanistic interpretability to identify layers where steering is most impactful. Specifically, A2A first constructs an attribution graph that traces how internal pathways contribute to model outputs, guided by a small set of contrastive behavior data. Subsequently, edge-level attribution weights are aggregated at the node level and then combined within each layer to derive an importance ranking. Steering vectors are applied only to the top-ranked layers, effectively reducing the search space. Experiments on behavior control tasks such as personality conditioning and model jailbreaking demonstrate that A2A achieves performance comparable to exhaustive search while requiring significantly fewer interventions and offering improved interpretability.
null
['Mechanistic Interpretability', 'Activation Steering', 'Attribution Graph', 'Large Language Models', 'Efficient Intervention']
/pdf/e6e8d1cad6c03926acf0fe8def819265ea30a825.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission24860/Authors']
VMlajIH1oF
24,859
VMlajIH1oF
Differentiable Top-k: From One-Hot to k-Hot
The one-hot representation, argmax operator, and its differentiable relaxation, softmax, are ubiquitous in machine learning. These building blocks lie at the heart of everything from the cross-entropy loss and attention mechanism to differentiable sampling. Their $k$-hot counterparts, however, are not as universal. In this paper, we consolidate the literature on differentiable top-$k$, showing how the $k$-capped simplex connects relaxed top-$k$ operators and $\pi$ps sampling to form an intuitive generalization of one-hot sampling. In addition, we propose sigmoid top-$k$, a scalable relaxation of the top-$k$ operator that is fully differentiable and defined for continuous $k$. We validate our approach empirically and demonstrate its computational efficiency.
We propose a framework for differentiable top-k by generalizing from one-hot to k-hot.
['top-k', 'k-hot', 'subset', 'relaxed', 'differentiable', 'sampling']
/pdf/a7fd95924697370c361585b89d9a7ce16bb3a1e8.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission24859/Authors']
zBgjWTWgCh
24,856
zBgjWTWgCh
Exploring Expert Concentration for Parameter-efficient Fine-tuning of Mixture-of-Expert LLMs
Scaling large language models (LLMs) with the Mixture-of-Experts (MoE) architecture has emerged as a powerful alternative to dense models. However, fine-tuning MoE models for domain- or task-specific adaptation remains challenging: full-model tuning is prohibitively expensive, while existing parameter-efficient fine-tuning (PEFT) methods, mostly adapted from dense models, suffer from unstable optimization due to MoE’s sparse expert activation. In this work, we conduct an empirical study on the fine-tuning dynamics of MoE models. We first introduce the Domain Advantage Score (DAS), a simple yet effective metric for identifying domain-relevant experts. Our findings uncover an expert concentration phenomenon: during domain-specific fine-tuning, the overall DAS of the top experts consistently increases, indicating a progressive enhancement of domain concentration. Building on this, we propose a lightweight two-stage PEFT framework: (1) fine-tuning only the attention and router layers to sharpen expert specialization, and (2) selectively fine-tuning parameters on the identified experts. This approach updates only a small fraction of parameters while achieving performance on par with full fine-tuning, and it effectively preserves the model's general capabilities. Experiments on nine benchmarks show the effectiveness and efficiency of our method. Our code and data will be publicly released.
we propose a lightweight two-stage PEFT framework that first tunes attention and routers, then selectively fine-tunes expert modules, achieving near full-tuning accuracy with only a small fraction of parameters.
['MoE', 'PEFT']
/pdf/fda28f715ca7b788020563964bd9f0f6c0af0a31.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24856/Authors']
lw3t6x4mJq
24,852
lw3t6x4mJq
Sample Efficient Forced Dynamics Recovery
Recovering governing equations of dynamical systems from limited samples is critical for deploying autonomous systems under real-world resource constraints. Classical sparse regression methods (e.g., SINDy-MPC) and physics-informed neural networks achieve good fits when oversampled, but their accuracy degrades sharply when data is available only at the Nyquist rate. We provide an information-theoretic analysis showing that reconstruction error fundamentally decomposes into a data-fit component (linear in sampling frequency) and a model-estimation component (nonlinear in frequency), bounded by the Cramér–Rao lower bound. This motivates MRIDHA, a model recovery framework that constrains the equation search space to physically consistent structures by embedding continuous-time latent variable nodes that enforce stability and time-constant properties. Across nine simulated and three real-world benchmarks—including automated insulin delivery, EEG reconstruction, and a 16D quadcopter—MRIDHA significantly outperforms SINDy-MPC and PINN-SR at Nyquist-rate sampling, demonstrating improved sample efficiency, robustness to input uncertainty, and scalability. Our results establish both new theoretical limits and a practical method for sample-efficient recovery of forced dynamics.
We introduce MRIDHA, a constrained model recovery framework that enforces physical consistency and achieves state-of-the-art reconstruction of forced dynamics at Nyquist-rate sampling across six simulation and three real-world benchmarks.
['equation discovery', 'liquid time constant networks', 'SINDY']
/pdf/d0b1cd523b1f999644873312a3000dd1c3391df9.pdf
learning on time series and dynamical systems
null
['ICLR.cc/2026/Conference/Submission24852/Authors']
eJ58DsnY4i
24,850
eJ58DsnY4i
A Neuro-symbolic Approach to Epistemic Deep Learning for Hierarchical Image Classification
Deep neural networks achieve strong recognition performance, but they often produce overconfident predictions and fail to respect structural constraints in data. We propose a neuro-symbolic framework that augments Swin Transformers with focal set reasoning and differentiable fuzzy logics. Rather than treating labels as isolated categories, the model induces focal sets by modelling overlaps in the learned embedding space, which helps capture epistemic alternatives beyond single labels. These focal sets form the basis of a belief-theoretic layer that uses fuzzy membership functions and $t$-norm conjunctions to encourage consistency between fine- and coarse-grained predictions. A learnable loss further balances calibration, mass regularisation, and logical consistency, allowing the model to adaptively trade off symbolic structure with data-driven evidence. In experiments on hierarchical image classification, our framework maintains accuracy on par with transformer baselines while providing more calibrated and interpretable predictions, reducing overconfidence, and enforcing high logical consistency across hierarchical outputs. Overall, our results suggest that combining focal set reasoning with fuzzy logics provides a practical step toward deep learning models that are both accurate and epistemically aware.
We propose a neurosymbolic epistemic framework that integrates fuzzy logic t-norms and focal sets into Swin Transformers. The approach maintains competitive accuracy while improving calibration, logical consistency, and interpretability.
['Neurosymbolic AI', 'Epistemic AI', 'Fuzzy Logic', 'T-norm Functions', 'Belief Functions', 'Swin Transformer', 'Image Classification', 'Vision Transformer', 'Multilabel Classification', 'Random Sets', 'Logical Consistency']
/pdf/15eed5411fdd827f8e89ec9e75b75dc9972a52f5.pdf
neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
/attachment/a773eb72271984acb08a2583d343d576238e29ea.zip
['ICLR.cc/2026/Conference/Submission24850/Authors']
yGJrvSU6wK
24,849
yGJrvSU6wK
COSMO-INR: Complex Sinusoidal Modulation for Implicit Neural Representations
Implicit neural representations (INRs) have recently emerged as a powerful paradigm for modeling data, offering a continuous alternative to traditional discrete signal representations. Their ability to compactly encode complex signals has led to strong performance across a wide range of computer vision tasks. In previous studies, it has been repeatedly shown that INR performance has a strong correlation with the activation functions used in its multilayer perceptrons. Although numerous competitive activation functions for INRs have been proposed, the theoretical foundations underlying their effectiveness remain poorly understood. Moreover, key challenges persist, including spectral bias (the reduced sensitivity to high-frequency signal content), limited robustness to noise, and difficulties in jointly capturing both local and global features. In this paper, we explore the underlying mechanism of INR signal representation, leveraging harmonic analysis and Chebyshev Polynomials. Through a rigorous mathematical proof, we show that modulating activation functions using a complex sinusoidal term yields better and complete spectral support throughout the INR network. To support our theoretical framework, we present empirical results over a wide range of experiments using Chebyshev analysis. We further develop a new activation function, leveraging the new theoretical findings to highlight its feasibility in INRs. We also incorporate a regularized deep prior, extracted from the signal via a task-specific model, to adjust the activation functions. This integration further improves convergence speed and stability across tasks. 
Through a series of experiments including image reconstruction (with an average PSNR improvement of +5.67 dB over the nearest counterpart across a diverse image dataset), denoising (with a +0.46 dB increase in PSNR), super-resolution (with a +0.64 dB improvement over the nearest State-Of-The-Art (SOTA) method for 6X super-resolution), inpainting, and 3D shape reconstruction, we demonstrate the advantages of the proposed activation over existing SOTA activation functions.
A study on the effect of complex sinusoidal modulation on INR activation functions
['Implicit Neural Networks', 'Chebyshev Polynomials', 'Raised cosine filter', 'Spectral bias']
/pdf/572d84e13b2ee2009bfdefa68db4ed44ce635e58.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission24849/Authors']
pAgiqavopA
24,848
pAgiqavopA
Tri-Factor Saliency: A Low-Dimensional Representation for Efficient and Diversity-Aware Video Token Pruning
The quadratic computational overhead of self-attention severely limits the application of Large Vision-Language Models (LVLMs) to long-form video. While training-free token pruning offers a promising avenue for acceleration, current methods still struggle to balance token diversity and pruning efficiency. Query-based approaches prune tokens irrelevant to a specific prompt, but consequently sacrifice the intrinsic diversity of the video content. Conversely, methods that preserve diversity by clustering or matching based on the raw, high-dimensional token features incur prohibitive computational costs, making them impractical for long video inputs. In this work, we challenge the assumption that preserving diversity necessitates expensive computations in the original high-dimensional feature space. We hypothesize that a low-dimensional yet informative representation engineered for pruning can achieve comparable results with a fraction of the overhead. To validate this, we propose a framework that first projects the original token features into a highly informative 3D "saliency-space." This projection is achieved via our Tri-Factor Saliency (TFS) model, which computes three largely orthogonal sub-features from a local spatio-temporal neighborhood: (1) Dynamic Saliency, which captures the magnitude of movement; (2) Regional Saliency, which identifies coherent objects that stand out from their background; and (3) Focal Saliency, which pinpoints unpredictable, fine-grained details. This low-dimensional representation enables subsequent entity-aware clustering and diversity-preserving stratified sampling to be performed with minimal computational cost. Our experiments show that this approach allows for the pruning of up to 75% of tokens while retaining 95% of the original model's performance on video understanding benchmarks.
Our work demonstrates that a well-designed, low-dimensional perceptual projection can effectively replace expensive high-dimensional feature matching for video token pruning, charting a new course that achieves both high efficiency and strong diversity preservation.
null
['video token compression', 'video understanding']
/pdf/13c0e5f612e17f492890cfafbbdbecdaeea68f41.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission24848/Authors']
N4l4Jp50R4
24,847
N4l4Jp50R4
Enough is as good as a feast: A Comprehensive Analysis of How Reinforcement Learning Mitigates Task Conflicts in LLMs
Model merging plays a crucial role in consolidating multiple specialized models into a single, unified model, especially in the era of large language models (LLMs). Recent research has primarily focused on developing strategies to enhance merging performance with the trained models, while the impact of training paradigms, such as supervised fine-tuning (SFT) and reinforcement learning (RL), on the effectiveness of model merging remains underexplored. In this study, we systematically explore the merging behavior of RL-trained LLMs compared to those trained with traditional SFT. Through comprehensive evaluations across five representative tasks, we find that RL significantly reduces task conflicts and results in less performance degradation after merging, making RL-trained models particularly well-suited for this process. To unearth the reasons behind the superior suitability of RL for model merging, we conduct extensive empirical experiments and theoretical analyses. Our findings highlight three key factors: (1) On-policy training data in RL constrain gradient updates to smaller magnitudes, reducing the risk of overwriting the model's existing knowledge for other tasks. (2) The RL optimization objective, which favors "\textit{enough is as good as a feast}", progressively reduces the magnitude of parameter updates as the model converges, thereby alleviating inter-task conflicts. (3) Joint optimization of positive and negative examples in RL steers the model towards an unbiased task-specific parameter subspace, ensuring robust performance while further preventing parameter conflicts.
We provide a comprehensive analysis of how reinforcement learning mitigates task conflicts in LLMs
['Large language model', 'Reinforcement learning', 'Model merging']
/pdf/87ebf4c0c7c4bb54d83072e4e087c2f117e41115.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24847/Authors']
QWvrz4qzqU
24,846
QWvrz4qzqU
Accuracy at Lower Cost: Rethinking Client Selection in Federated Learning
Federated learning (FL) enables collaborative model training across multiple clients without sharing raw data, thereby ensuring privacy. A critical performance factor for FL is client selection. Under independent and identically distributed (IID) data, clients are chosen at random, which can lead to reduced accuracy, slower convergence, and higher communication cost. In this work, we present a systematic empirical study of client selection, revealing that random participation can significantly degrade performance. Motivated by these findings, we introduce a multi-objective optimization strategy that jointly balances model accuracy and communication cost under IID partitioning. For fast evaluation, we propose a dataset complexity-aware surrogate regressor that predicts the FL outcomes (e.g., accuracy or loss) for image classification tasks, thereby avoiding costly full model training. Using the predicted client configuration (number of selected and available clients) resulting from multi-objective optimization on a new dataset, and without requiring any additional training, our framework achieves 98.9\% of the maximum attainable accuracy while incurring only 38.75\% of the maximum communication cost. Moreover, it identifies a diminishing‑returns regime that preserves 99.9\% of peak accuracy while reducing cost to 63.12\%. These results demonstrate that both the performance and variance of FL can be estimated solely by dataset complexity and client dataset size, enabling the identification of client configurations that best balance accuracy and communication costs.
Optimal Client Selection for Federated Learning by balancing Accuracy and Communication Cost
['Federated Learning', 'Client Selection Problem']
/pdf/902a3357ff0308fd0ead745789b08804528336d3.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission24846/Authors']
yz4NpKm7Yx
24,845
yz4NpKm7Yx
Attention Smoothing: Correcting Causal Bias in Autoregressive Language Models
Autoregressive large language models (LLMs) suffer from causal bias: once attention states are cached under the causal mask, they cannot be revised, leading to information solidification and path-dependent errors. This structural limitation undermines contextual fidelity and amplifies hallucinations. We introduce Attention Smoothing, a decoding-time framework that revises attention after the entire context is observed. Our method models token-to-token information flow as an absorbing Markov chain, computes token-level surprisal scores, and derives a smoothed posterior attention distribution that corrects the causal bias. The framework is model-agnostic, training-free, and can be seamlessly integrated into existing inference pipelines. Experiments on multiple hallucination and factuality benchmarks show that Attention Smoothing consistently improves contextual faithfulness across model scales, highlighting the importance of managing information flow for more reliable LLM generation.
null
['Large Language Models', 'Hallucination Mitigation', 'Attention Smoothing']
/pdf/6da0e96f5e7389a315ff429ce9759b35fbefe6f7.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission24845/Authors']
UJqXhFFzKu
24,844
UJqXhFFzKu
Learning Continually at Peak Performance with Continuous Continual Backpropagation
Training neural networks under non-stationary data distributions, as in continual supervised and reinforcement learning, is hindered by loss of plasticity and representation collapse. While recent approaches employ periodic, full neuron reinitialization to sustain gradient flow and restore plasticity, they often sacrifice asymptotic performance and still suffer frequent collapse. To address these limitations, we propose Continuous Continual Backpropagation (CCBP), which instead continuously, partially resets units. Empirically, CCBP preserves the long-term performance of standard optimizers while maintaining the plasticity of reset-based methods, and uniquely prevents policy collapse. Ablations further show how CCBP can be tuned to smoothly trade off plasticity and asymptotic performance, highlighting gradual reinitialization as a promising direction for continual deep learning.
We introduce Continuous Continual Backpropagation (CCBP), which applies utility-scaled partial resets in place of full neuron resets to maintain peak performance throughout Continual Reinforcement Learning
['Continual Learning', 'Reinforcement Learning', 'Plasticity', 'Dormant Neurons', 'Optimizer', 'Continual Reinforcement Learning']
/pdf/7a4c827fdaf317a4157c11eeac44cdc875d80924.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission24844/Authors']
0B5K9pIdSK
24,843
0B5K9pIdSK
TiTok: Transfer Token-level Knowledge via Contrastive Excess to Transplant LoRA
Large Language Models (LLMs) are widely applied in real world scenarios, but fine-tuning them comes with significant computational and storage costs. Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA mitigate these costs, but the adapted parameters are dependent on the base model and cannot be transferred across different backbones. One way to address this issue is through knowledge distillation, but its effectiveness inherently depends on training data. Recent work such as TransLoRA avoids this by generating synthetic data, but this adds complexity because it requires training an additional discriminator model. In this paper, we propose TiTok, a new framework that enables effective LoRA Transplantation through Token-level knowledge transfer. Specifically, TiTok captures task-relevant information through a contrastive excess between a source model with and without LoRA. This excess highlights informative tokens and enables selective filtering of synthetic data, all without additional models or overhead. Experiments on three benchmarks across multiple transfer settings show that the proposed method is consistently effective, achieving average performance gains of +4–8% compared to baselines overall.
We propose a new framework TiTok, which enables effective LoRA transplantation through token-level knowledge transfer
['Large Language Models', 'Knowledge Transfer', 'PEFT']
/pdf/36dec6a7418f68bdb0d5b16bf31ccf444eadf348.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24843/Authors']
LLWIaUZvEu
24,842
LLWIaUZvEu
VGPO: Fine-Tuning Speech Autoregressive Diffusion Models with Value Guided Policy Optimization
Autoregressive diffusion models (ARDMs), which generate continuous latent sequences, have recently achieved state-of-the-art zero-shot text-to-speech (TTS) performance. However, fine-tuning these models with reinforcement learning (RL) to directly optimize user-defined reward functions remains an open challenge. In this work, we propose Value-Guided Policy Optimization (VGPO), an actor-critic RL algorithm tailored to ARDMs. We train a causal value model to predict expected future rewards and update the ARDM using gradients from this value model. To validate VGPO, we fine-tune the recently introduced DiTAR model and evaluate it on two tasks: improving F0 variance to enhance expressiveness; and optimizing text log-probability to improve the model's robustness to challenging long text. VGPO can achieve significant improvement in zero-shot TTS expressiveness and robustness, while maintaining naturalness and speaker similarity.
null
['text-to-speech', 'speech synthesis', 'diffusion model', 'continuous-valued language model', 'reward optimization']
/pdf/1a7a4176ff60351f9891f12be1851b2220d0c3ec.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24842/Authors']
QuTLcwGQTJ
24,840
QuTLcwGQTJ
ACLIB-GNN: Incorporating Adversarial Causal Learning with Information Bottlenecks for Interpretable Graph Neural Networks
Graph Neural Networks (GNNs) excel in node classification but face critical interpretability challenges. Though existing explanation methods that include post-hoc and self-interpretable approaches are widely adopted, they still struggle to enhance prediction through explanation effectively. Moreover, causal graph learning demonstrates the capacity to identify causal features that bolster predictive performance, but its utilization in node classification tasks has remained notably limited, primarily due to the non-trivial challenges of handling localized heterogeneity and contextual noise in node-level tasks. To address these gaps, we propose ACLIB-GNN, a novel framework unifying adversarial causal learning and the node information bottleneck. By leveraging graph attention to minimize noncausal feature interference and adversarial training to maximize mutual information between explanatory subgraphs and labels, it explicitly disentangles causal features from shortcut signals, balancing transparency and performance. On four benchmark datasets, ACLIB-GNN outperforms state-of-the-art baselines via causal subgraphs to enhance classification accuracy and provides superior explanatory power. Ablation studies validate the synergistic effect of its core components. Notably, the framework generalizes effectively to graph classification tasks. ACLIB-GNN offers a scalable and trustworthy solution for interpretable node classification tasks based on causal graph learning.
We propose an interpretable GNN that integrates adversarial causal learning and information bottlenecks for node classification tasks.
['causal graph learning', 'GNN Interpretability', 'adversarial learning', 'node classification']
/pdf/87882ec3c9f6444ce19aec52ee807525f3f40746.pdf
interpretability and explainable AI
/attachment/bee54d2ba42ee5bf4939178db761e96d92ee5dfc.zip
['ICLR.cc/2026/Conference/Submission24840/Authors']
vVMT9ODgwR
24,837
vVMT9ODgwR
Oh-A-DINO: Understanding and Enhancing Attribute-Level Information in Self-Supervised Object-Centric Representations
Object-centric understanding is fundamental to human vision and required for complex reasoning. Traditional methods define slot-based bottlenecks to learn object properties explicitly, while recent self-supervised vision models like DINO have shown emergent object understanding. We investigate the effectiveness of self-supervised representations from models such as CLIP, DINOv2 and DINOv3, as well as slot-based approaches, for multi-object instance retrieval, where specific objects must be faithfully identified in a scene. This scenario is increasingly relevant as pre-trained representations are deployed in downstream tasks, e.g., retrieval, manipulation, and goal-conditioned policies that demand fine-grained object understanding. Our findings reveal that self-supervised vision models and slot-based representations excel at identifying edge-derived geometry (shape, size) but fail to preserve non-geometric surface-level cues (colour, material, texture), which are critical for disambiguating objects when reasoning about or selecting them in such tasks. We show that learning an auxiliary latent space over segmented patches, where VAE regularisation enforces compact, disentangled object-centric representations, recovers these missing attributes. Augmenting the self-supervised methods with such latents improves retrieval across all attributes, suggesting a promising direction for making self-supervised representations more reliable in downstream tasks that require precise object-level reasoning.
We show that slot-based and pre-trained DINO representations struggle to retrieve object-level attributes in multi-object scenes, and improve them by augmenting DINO with learned object-centric features.
['object-centric learning', 'self-supervised learning', 'multi-object instance retrieval', 'representation fusion', 'fine-tuning', 'representation learning', 'pre-trained representations', 'latent spaces']
/pdf/3e98b084c43f435252d74eaa2e2844d57a3c64d1.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
/attachment/e2928c406f6f018ea25f10abe5b38ff320973ff6.zip
['ICLR.cc/2026/Conference/Submission24837/Authors']
fIpDd5UlFP
24,835
fIpDd5UlFP
AA-SVD: Anchored and Adaptive SVD for Large Model Compression
Pretrained large-language and vision-language models have demonstrated remarkable capabilities over the years, but their ever-increasing size poses challenges for deployment and accessibility. Model compression offers a path toward democratizing access, yet many existing approaches either require costly retraining or result in substantial performance degradation. To address this, we introduce a fast SVD-based truncation framework for compressing pretrained networks that enables rapid compression of billion-parameter models without retraining. Unlike existing SVD-based approaches that optimize only on the original inputs—ignoring distribution shifts from upstream compression and thus propagating errors forward—or those that rely only on shifted inputs and risk drifting away from the original outputs, our approach accounts for both. By anchoring each compressed layer to the original outputs while explicitly modeling input distribution shifts, our method identifies optimal low-rank approximations that maintain functional equivalence with the uncompressed network, thereby preserving the behavior of the full model. Experiments across language and vision-language models of varying scales demonstrate that our method not only achieves favorable trade-offs between compression ratio and task accuracy, but also outperforms existing baselines, particularly at low compression ratios—where the gap widens as compression becomes more aggressive—offering a practical solution for efficient, large-scale model deployment.
null
['Compression', 'SVD', 'Efficient LLM', 'Low-rank decomposition']
/pdf/21917e67b3149960b5a21e6393fa16fb8ed2281e.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24835/Authors']
HQMVRQUEaM
24,834
HQMVRQUEaM
Scalable Multilingual Multimodal Machine Translation with Speech-Text Fusion
Multimodal Large Language Models (MLLMs) have achieved notable success in enhancing translation performance by integrating multimodal information. However, existing research primarily focuses on image-guided methods, whose applicability is constrained by the scarcity of multilingual image-text pairs. The speech modality overcomes this limitation due to its natural alignment with text and the abundance of existing speech datasets, which enable scalable language coverage. In this paper, we propose a \textbf{Speech-guided Multimodal Machine Translation (SMMT)} framework that integrates speech and text as fused inputs into an MLLM to improve translation quality. To mitigate reliance on low-resource data, we introduce a \textbf{Self-Evolution Mechanism}. The core components of this framework include a text-to-speech model, responsible for generating synthetic speech, and an MLLM capable of classifying synthetic speech samples and iteratively optimizing itself using positive samples. Experimental results demonstrate that our framework surpasses all existing methods on the Multi30K multimodal machine translation benchmark, achieving new state-of-the-art results. Furthermore, on general machine translation datasets, particularly the FLORES-200, it achieves average state-of-the-art performance in 108 translation directions. Ablation studies on CoVoST-2 confirm that differences between synthetic and authentic speech have negligible impact on translation quality. We will open-source our model to support the wider community.
A Speech-guided framework that leverages synthetic speech and a self-evolution mechanism to improve translation
['Speech', 'Multimodal Machine Translation']
/pdf/5f2f0b374851ad2d090b9e6580baca842d98ca30.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24834/Authors']
s5riWPseUm
24,833
s5riWPseUm
Refining Bias and Reward in LLM Recommender Agents through Meta-Controlled Tool Invocation
Large language model (LLM) agents have recently been brought to recommender systems given their flexible tool-use capabilities. Although existing approaches adopt the reasoning and acting paradigms for profiling, planning, and memory augmentation, they remain ad hoc and overlook core recommendation challenges in agent-environment interactions, including debiasing and reward estimation in offline learning scenarios. In this paper, we introduce BARO (Bias And Reward Optimization), a meta-controlled, tool-augmented LLM agent framework that explicitly addresses these challenges. BARO employs a two-stage recommendation process: a coarse recommender generates a candidate slate based on user history, and a meta-controller adaptively invokes three specialized tools to refine the recommendation results: a bias detector assesses and mitigates bias in the candidate set, a reward estimator calibrates noisy offline rewards, and an action grounder selects final recommendations from the candidate pool. This design injects bias correction and reward refinement directly into the agent’s decision loop in the recommendations. Empirical results on two benchmark datasets demonstrate that BARO achieves consistent improvements over state-of-the-art methods in metrics such as accuracy, diversity, and fairness. The code will be made publicly available upon acceptance.
We design a tool-augmented LLM agent for offline recommendation, where a meta-controller adaptively invokes bias detection, reward refinement, and action grounding to improve both accuracy and fairness.
['LLM Agent', 'Offline Reinforcement Learning for Recommender Systems']
/pdf/043a4988d93bcfe676a44f3c62c911d6386a342e.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission24833/Authors']
F8cje9T5r2
24,832
F8cje9T5r2
The Few Govern the Many: Unveiling Few-Layer Dominance for Time Series Models
Time series (TS) forecasting plays a vital role in practice, but remains a highly challenging task. The outstanding performance of large-scale models across multiple domains has driven the advancement of large-scale TS models, providing an effective pathway for forecasting tasks. However, performance degradation has been observed in large-scale TS models, demonstrating the puzzling phenomenon that bigger is not always better. We trained two categories of large-scale TS models, LLM4TS and TSFMs, across four scales, examining how architecture, model size, data volume and distribution, and training strategies influence model performance. Due to the lack of in-depth studies on representations in large-scale TS models, we examined the evolution of representations from both inter-layer and intra-layer perspectives. Our analysis reveals that only a small subset of layers play a critical role in learning, while the majority contribute minimally—a phenomenon we term few-layer dominance. Building on this insight, we propose a method to identify critical layers, allowing models to achieve on-par performance while improving inference efficiency. Validation on existing large-scale TS models confirms the universality of few-layer dominance and the reliability of the critical-layer identification method.
null
['Time Series Forecasting', 'Large Model', 'Time Series Foundation Models']
/pdf/c66c0ea52094e40b0fe9c46266310376089b3cef.pdf
learning on time series and dynamical systems
null
['ICLR.cc/2026/Conference/Submission24832/Authors']
eWuRZ51gpr
24,831
eWuRZ51gpr
SWAN: Sparse Winnowed Attention for Reduced Inference Memory via Decompression-Free KV-Cache Compression
Large Language Models (LLMs) face a significant bottleneck during autoregressive inference due to the massive memory footprint of the Key-Value (KV) cache. Existing compression techniques like token eviction, quantization, or low-rank methods often risk information loss, have fixed limits, or introduce significant computational overhead from explicit decompression steps. In this work, we introduce SWAN, a novel, fine-tuning-free framework that eliminates this overhead. Our method uses an offline orthogonal matrix to rotate and prune the KV-cache, which is then used directly in the attention computation without any reconstruction. Our extensive experiments demonstrate that SWAN, augmented with a small dense buffer, offers a robust trade-off, maintaining performance close to the uncompressed baseline even at aggressive 50-60\% memory savings per-token on KV-cache. A key advantage is its runtime-tunable compression level, allowing operators to dynamically adjust the memory footprint, a flexibility absent in methods requiring fixed offline configurations. This combination of a decompression-free design, high performance under compression, and adaptability makes SWAN a practical and efficient solution for serving LLMs with long contexts.
We developed SWAN, a new method to shrink the large memory footprint of the LLM KV-cache during inference with small performance loss.
['Transformers', 'KV-Cache', 'Compression', 'Dimensionality Reduction', 'LLM Inference']
/pdf/a1b28ca8c1fe20ada2f76c7b9b73625e4f6965ff.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24831/Authors']
mfZlixxP35
24,830
mfZlixxP35
Coresets for Mixtures of (arbitrarily large) Gaussians
An $\varepsilon$-coreset for $k$-Gaussian Mixture Models (k-GMMs) of an input set $P \subseteq \mathbb{R}^d$ of points, is a small weighted set $C \subseteq P$, such that the negative log-likelihood $L(P, \theta)$ of every $k$-GMM $\theta$ is provably approximated by $L(C, \theta)$, up to a multiplicative factor of $1 \pm \varepsilon$, for a given $\varepsilon > 0$. Existing coresets \cite{NIPS11,JMLR18} approximate only ``semi-spherical'' k-GMMs, whose covariance matrices are similar to the identity matrix. This work provides the first algorithm that computes a coreset for arbitrarily large k-GMMs. This is achieved by forging new links to projective clustering and modern techniques in computational geometry. Experimental results on real-world datasets that demonstrate the efficacy of our approach are also provided.
Core-sets for any Mixture of Gaussians
['Coresets', 'Mixture of Gaussians', 'Sketches']
/pdf/08271d7745b71bd12c6f0cd2ccb052d35a573238.pdf
learning theory
null
['ICLR.cc/2026/Conference/Submission24830/Authors']
vLy6kvtfV6
24,827
vLy6kvtfV6
SAFE: Benchmarking AI Weather Prediction Fairness with Stratified Assessments of Forecasts over Earth
The dominant paradigm in machine learning is to assess model performance based on average loss across all samples in some test set. However, this approach fails to account for the non-uniform patterns of human development and geography that exist across Earth. We introduce Stratified Assessments of Forecasts over Earth (SAFE), a package for elucidating the stratified performance of a set of predictions made over Earth. SAFE integrates various domains of data to perform stratification on different attributes associated with gridpoints: territory (usually country), global subregion, income, and landcover (land or water). This allows us to examine the performance of models for each individual stratum of the different attributes (e.g., the accuracy in every individual country). To demonstrate its importance, we utilize SAFE to benchmark a zoo of state-of-the-art AI-based weather prediction models, finding that they all exhibit disparities in forecasting skill across every attribute. We use this to seed a benchmark of model forecast fairness through stratification at different lead times for various climatic variables. By moving beyond globally-averaged metrics, we for the first time ask: where do models perform best or worst, and which models are most fair? To support further work in this direction, the SAFE package is made available at https://anonymous.4open.science/r/safe-E7C7.
AI weather prediction models exhibit biases in forecast performance based on geographic region, income, landcover, and lead time.
['fairness', 'weather', 'climate', 'artificial intelligence', 'machine learning']
/pdf/d7776304e52cf44cbb892c5271e83fb4d50a92bd.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24827/Authors']
hAurIMOhOW
24,826
hAurIMOhOW
Learning to Reduce Search Space for Generalizable Neural Routing Solver
Constructive neural combinatorial optimization (NCO) has attracted growing research attention due to its ability to solve complex routing problems without relying on handcrafted rules. However, existing NCO methods face significant challenges in generalizing to large-scale problems due to high computational complexity and inefficient capture of structural patterns. To address this issue, we propose a novel learning-based search space reduction method that adaptively selects a small set of promising candidate nodes at each step of the constructive NCO process. Unlike traditional methods that rely on fixed heuristics, our selection model dynamically prioritizes nodes based on learned patterns, significantly reducing the search space while maintaining solution quality. Experimental results demonstrate that our method, trained solely on 100-node instances from a uniform distribution, generalizes remarkably well to large-scale Traveling Salesman Problem (TSP) and Capacitated Vehicle Routing Problem (CVRP) instances with up to 1 million nodes from the uniform distribution and over 80K nodes from other distributions.
We propose a novel learning-based search space reduction framework, significantly reducing the search space while maintaining the solution quality of large-scale routing instances.
['Vehicle Routing Problem', 'Reinforcement Learning', 'Neural Combinatorial Optimization', 'Large-scale Generalization']
/pdf/a46f4067373a55d34aa8f68e8440c11facb90a56.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission24826/Authors']
tMzuMWiE82
24,823
tMzuMWiE82
The Social Welfare Function Leaderboard: When LLM Agents Allocate Social Welfare
Large language models (LLMs) are increasingly entrusted with high-stakes decisions that affect human welfare. However, the principles and values that guide these models when distributing scarce societal resources remain largely unexamined. To address this, we introduce the {\bf Social Welfare Function (SWF) Benchmark}, a dynamic simulation environment where an LLM acts as a sovereign allocator, distributing tasks to a heterogeneous community of recipients. The benchmark is designed to create a persistent trade-off between maximizing collective efficiency (measured by Return on Investment) and ensuring distributive fairness (measured by the Gini coefficient). We evaluate 20 state-of-the-art LLMs and present the first leaderboard for social welfare allocation. Our findings reveal three key insights: (i) A model's general conversational ability, as measured by popular leaderboards, is a poor predictor of its allocation skill. (ii) Most LLMs exhibit a strong default utilitarian orientation, prioritizing group productivity at the expense of severe inequality. (iii) Allocation strategies are highly vulnerable, easily perturbed by output-length constraints and social-influence framing. These results highlight the risks of deploying current LLMs as societal decision-makers and underscore the need for specialized benchmarks and targeted alignment for AI governance.
We introduce the Social Welfare Function (SWF) Benchmark, a dynamic simulation environment where an LLM acts as a sovereign allocator, and highlight the risks of deploying current LLMs as societal decision-makers.
['LLM Agents', 'Fairness', 'Social Consideration']
/pdf/aa69178262747b4c7d34dba2bbe105b7f6fc5c20.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/a22565d68dc7a3f7c4eb43ad5924bdd1e32ce4f5.zip
['ICLR.cc/2026/Conference/Submission24823/Authors']
f9BuANYtJf
24,820
f9BuANYtJf
GRAF: Multi-turn Jailbreaking via Global Refinement and Active Fabrication
Large Language Models (LLMs) have demonstrated remarkable performance across diverse tasks. Nevertheless, they still pose notable safety risks due to potential misuse for malicious purposes. Jailbreaking, which seeks to induce models to generate harmful content through single-turn or multi-turn attacks, plays a crucial role in uncovering underlying security vulnerabilities. However, prior methods, including sophisticated multi-turn approaches, often struggle to adapt to the evolving dynamics of dialogue as interactions progress. To address this challenge, we propose \textbf{GRAF} (Jailbreaking via \textbf{G}lobally \textbf{R}efining and \textbf{A}daptively \textbf{F}abricating), a novel multi-turn jailbreaking method that globally refines the attack trajectory at each interaction. In addition, we actively fabricate model responses to suppress safety-related warnings, thereby increasing the likelihood of eliciting harmful outputs in subsequent queries. Extensive experiments across six state-of-the-art LLMs demonstrate the superior effectiveness of our approach compared to existing single-turn and multi-turn jailbreaking methods.
null
['LLM safety', 'LLM jailbreak', 'Multi-turn jailbreaking']
/pdf/d6a780bf98a5a3ea2e90453fa234feef8a94236d.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24820/Authors']
fkyebMiRHv
24,819
fkyebMiRHv
Are Small Language Models the Silver Bullet to Low-Resource Languages Machine Translation?
Small language models (SLMs) represent parameter-efficient variants of large language models, designed to achieve computational efficiency while retaining core linguistic competencies. This study investigates the persistent challenges associated with translation performance in low-resource languages (LRLs) through a systematic evaluation of SLMs across 200 languages. In contrast to prior research, which has only marginally addressed LRL-oriented distillation, this work provides empirical evidence that transferring knowledge from large-scale teacher models to compact SLMs (2B/3B parameters) using predominantly monolingual LRL data yields substantial translation improvements, at times even surpassing models of up to 70B parameters. The primary contributions of this work can be summarized as follows: (1) the introduction of the first comprehensive quantitative benchmark evaluating SLMs over 200 languages with explicit emphasis on LRL limitations; (2) the demonstration that knowledge distillation for LRLs enhances translation quality without provoking catastrophic forgetting, while also elucidating key design priorities—prioritizing full-scale models over LoRA-based strategies, privileging data quality over data volume, and favoring decoder-only architectures as teachers over encoder–decoder frameworks; and (3) the confirmation of the robustness and transferability of these improvements across a wide spectrum of LRLs, thereby establishing a scalable and cost-effective methodology for addressing fairness disparities in multilingual translation. Overall, this study offers a rigorous validation of the feasibility and methodological best practices for applying SLMs in the context of LRLs, thereby laying an empirical foundation for their reliable deployment in low-resource language scenarios.
null
['Low-resource language', 'Small language models', 'Luxembourgish', 'Monolingual distillation']
/pdf/38b16480cd6f775fc943d53bd0dd9051b9e8ec98.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24819/Authors']
USjSdem7WO
24,818
USjSdem7WO
Just a Simple Transformation is Enough for Data Protection in Split Learning
Split Learning (SL) aims to enable collaborative training of deep learning models while maintaining privacy protection. However, the SL procedure still has components that are vulnerable to attacks by malicious parties. In our work, we consider feature reconstruction attacks --- a common risk targeting input data compromise. We theoretically claim that feature reconstruction attacks cannot succeed without knowledge of the prior distribution on data. Consequently, we demonstrate that even simple model architecture transformations can significantly impact the protection of input data during SL. Confirming these findings with experimental results, we show that MLP-based models are resistant to state-of-the-art feature reconstruction attacks.
null
['Privacy', 'Split Learning', 'Feature Reconstruction attacks']
/pdf/6303b6199c065612139eccba36edd1c93e9cf75c.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24818/Authors']
vzXyVNCGAL
24,814
vzXyVNCGAL
Modulating LLM Behavior via Context-Specific Activation Steering
LLMs achieve strong capabilities, yet precisely steering their responses under ever-shifting safety requirements remains unresolved. Current activation engineering methods embed a static premise — that prompt categories elicit distinct activation patterns — and coerce each input into a hand-crafted semantic category. This premise fails when adversarial prompt variations (e.g. jailbreaks) perturb the activations, yielding collateral suppression or undetected risks. We contend that activation steering should be determined on-the-fly by the input itself within the semantic space, rather than being predetermined by rigid, hand-crafted categories. In this paper, we propose Context-Specific Steering (COS-Steering), which maps the full safety-steering activation subspace and lets each input locate its own steering coordinates. COS-Steering recovers this subspace by compressing a pool of steering signals into a compact set of basis vectors via an SAE. A lightweight module then reads the input activation and outputs weights for these basis vectors to perform context-specific steering. To evaluate robustness against distribution shift, we test COS-Steering in a mixed-attack setting, which combines multiple attack methods, across datasets and models. Compared to baselines, COS-Steering preserves strong refusal on harmful prompts while introducing negligible side-effects on benign queries.
Adaptive activation steering delivers context-specific safety steering on-the-fly within an SAE-compressed subspace, determined by the input itself.
['Activation Engineering', 'Safety', 'Alignment', 'Steering Vector']
/pdf/bb91370ba19663f61131c10fac27536689f69512.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24814/Authors']
dDBT2N3wLG
24,811
dDBT2N3wLG
SCMF: Lightweight Retrieval-Augmented Generation via Retrieval Vector Compression
With the widespread adoption of Retrieval-Augmented Generation (RAG) in knowledge-intensive tasks, efficiency bottlenecks become increasingly evident: storing and retrieving large-scale high-dimensional embeddings incur substantial storage and computation costs. To address this challenge, we propose the Semantic Compressed Memory Framework (SCMF), a lightweight and traceable indexing paradigm tailored for large-scale RAG. SCMF first projects document embeddings into a low-dimensional semantic space, and then discretizes them into compact Semantic Memory Units (SMUs) via Residual Vector Quantization (RVQ). Each SMU is explicitly linked to its corresponding Raw Knowledge Unit (RKU) through a semantic inverted index, which enables efficient CRUD operations while preserving the traceability of retrieval results. During retrieval, SCMF performs Approximate Nearest Neighbor (ANN) search in the SMU space, followed by a two-stage re-ranking strategy that combines sparse retrieval (BM25) and dense retrieval, thereby achieving efficient and accurate evidence localization. Experimental results demonstrate that SCMF substantially reduces storage costs and retrieval latency while preserving explicit traceability to the original knowledge units, significantly outperforming mainstream vector indexing methods.
We propose SCMF, a semantic compressed memory framework that accelerates retrieval in RAG by PCA+RVQ compression while preserving knowledge traceability.
['Retrieval-Augmented Generation', 'Semantic Memory', 'Vector Quantization', 'Residual Vector Quantization', 'PCA Compression', 'Efficient Retrieval']
/pdf/e193d1a7f0eb6832478a9b8cf028290ab1092a7d.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24811/Authors']
aSKrSpVrCu
24,810
aSKrSpVrCu
PCDVQ: Enhancing Vector Quantization for Large Language Models via Polar Coordinate Decoupling
Large Language Models (LLMs) face significant challenges in edge deployment due to their massive parameter scale. Vector Quantization (VQ), a clustering-based quantization method, serves as a prevalent solution to this issue thanks to its extremely low bit-width (even 2-bit) and considerable accuracy. Since a vector is a quantity in mathematics and physics that has both direction and magnitude, existing VQ works typically quantize them in a coupled manner. However, we find that direction exhibits significantly greater sensitivity to quantization than magnitude. For instance, when separately clustering the directions and magnitudes of weight vectors in LLaMA-2-7B, the accuracy drops on zero-shot tasks are 46.5\% and 2.3\%, respectively. This gap even increases as the number of clustering centers is reduced. Further, Euclidean distance, a common metric to assess vector similarity in current VQ works, places greater emphasis on reducing the magnitude error. This property runs contrary to the above finding, unavoidably leading to larger quantization errors. To these ends, this paper proposes Polar Coordinate Decoupled Vector Quantization (PCDVQ), an effective and efficient VQ framework consisting of two key modules: 1) Polar Coordinate Decoupling (PCD), which transforms vectors into their polar coordinate representations and performs independent quantization of the direction and magnitude parameters. 2) Distribution Aligned Codebook Construction (DACC), which optimizes the direction and magnitude codebooks in accordance with the source distribution. Experimental results show that PCDVQ outperforms baseline methods at the 2-bit level by at least 1.5\% zero-shot accuracy, establishing a novel paradigm for accurate and highly compressed LLMs.
null
['Large Language Models', 'Vector Quantization']
/pdf/e0d3be6edd72fca98f8be21a12a4879d387e2407.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24810/Authors']
Q4Wk1EIVSV
24,809
Q4Wk1EIVSV
Addressing Exogenous Variability in Cooperative Multi-Agent Reinforcement Learning
Multi-agent reinforcement learning (MARL) has advanced the control of many cooperative multi-agent systems. However, most approaches are trained against a single fixed adversarial strategy, leaving teams fragile to adversarial strategy shifts at test time. To address these limitations, in this paper we recast cooperative MARL from a new perspective as an Exogenous Dec-POMDP, separating agent-controllable endogenous dynamics from environment-driven exogenous dynamics in order to learn policies that adapt to exogenous shifts while preserving coordination. Our framework is composed of two main components: (i) learning exogenous dynamics and (ii) updating the policy with two complementary goals - coordination to achieve high team return and causal influence on future exogenous evolution. We implement the framework under centralized training with decentralized execution as a practical algorithm, named Learning Exogenous Influence for Coordination and Adaptation (LEICA), and evaluate it on SMAX with distinct train/test adversarial strategies. Experimental results show that our approach drastically improves performance at test time against unseen opponent strategies while achieving high training-time performance, demonstrating its ability to handle exogenous shift and improve training stability.
LEICA: learn exogenous influence, weight counterfactual shaping by sensitivity, and get robust, well-coordinated MARL under exogenous shifts.
['Multi-Agent Reinforcement Learning', 'Exogenous Dec-POMDP', 'Influence-based Coordination']
/pdf/a153b36ea168b02b0805f67b36f384c8db388ad3.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission24809/Authors']
XDIqAGxSrE
24,808
XDIqAGxSrE
Influence Guided Sampling for Domain Adaptation of Text Retrievers
General-purpose open-domain dense retrieval systems must usually be trained with a large, eclectic mix of corpora and search tasks. How should these diverse corpora and tasks be sampled for training? Conventional approaches are to sample them uniformly, or proportional to their instance population sizes, or depend on human-level expert supervision. It is well known that the training data sampling strategy can greatly impact model performance. However, how to find the optimal strategy has not been adequately studied in the context of embedding models. We propose Inf-DDS, a novel reinforcement learning–driven sampling framework that adaptively reweighs training datasets guided by influence‑based reward signals and is much more lightweight w.r.t. GPU consumption. Our technique iteratively refines the sampling policy, prioritizing sampling from datasets that maximize the model performance on a target development set. We evaluate the efficacy of our sampling strategy on a wide range of text retrieval tasks, demonstrating strong improvements in retrieval performance and better adaptation compared to existing gradient-based sampling methods, while also being *1.5×–4×* cheaper than them in terms of GPU compute needed. Our sampling strategy achieves a **5.03** absolute *NDCG@10* improvement while training a multilingual *bge-m3-dense* model and an absolute *NDCG@10* improvement of **0.94** while training *sentence-transformers/all-MiniLM-L6-v2*, even when starting from expert-assigned weights on a large pool of training datasets.
We address domain adaptation in text retrievers via data reweighting and propose an efficient influence-based sampling strategy.
['retrieval', 'domain adaptation', 'dataset sampling']
/pdf/3d186529006da180340fecb9f8de555b35c6c03d.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24808/Authors']
MPhnlqdU9Z
24,804
MPhnlqdU9Z
MIRA: Quantifying Neural Network Monitorability via Feature Space Analysis
Monitoring neural networks is increasingly important for detecting potential failures in safety-critical applications. Although out-of-distribution (OoD) detection and uncertainty estimation have been widely studied, they often rely on the assumption that neural networks learn high-quality features. However, this assumption may not hold in practice, potentially leading to undetected failures. In this work, we introduce the concept of monitorability, which captures the intrinsic ability of a model to highlight potential inference errors through internal activations. We provide a formal definition of monitorability and propose the Monitorability via Input peRturbAtion (MIRA) Score, a practical measure that quantifies this property without requiring access to external OoD data. Our method accounts for the behavior of the model near the decision boundary by applying norm-bounded input perturbations and evaluates how distinguishable the resulting internal representations are by using Mahalanobis distance. Since no established baseline exists for monitorability, we validate MIRA by comparing it against the best achievable OoD detection performance across three representative methods. Through experiments across multiple architectures and domain applications, we show that the MIRA Score correlates with the strongest actual detection performance, providing a tool for evaluating and comparing monitorability across different models. To the best of our knowledge, this is the first formalization and quantitative measure of monitorability. Our findings offer both theoretical grounding and empirical insight into the conditions under which model failures become detectable.
We propose the MIRA Score, a metric that quantifies a neural network’s ability to expose its own failures by perturbing inputs and measuring feature separability.
['Neural Networks', 'Monitorability', 'Out-of-Distribution Detection', 'Anomaly Detection', 'Runtime Monitoring', 'Activation Patterns']
/pdf/39aeaf204c0c02f739ec2c9b74a8a5e0bfbb78bc.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission24804/Authors']
8bM7MkxJee
24,802
8bM7MkxJee
From movement to cognitive maps: recurrent neural networks reveal how locomotor development shapes hippocampal spatial coding
The hippocampus contains neurons whose firing correlates with an animal's location and orientation in space. Collectively, these neurons are held to support a cognitive map of the environment, enabling the recall of and navigation to specific locations. Although recent studies have characterised the timelines of spatial neuron development, no unifying mechanistic model has yet been proposed. Moreover, the processes driving the emergence of spatial representations in the hippocampus remain unclear (Tan et al., 2017). Here, we combine computational analysis of postnatal locomotor development with a recurrent neural network (RNN) model of hippocampal function to demonstrate how changes in movement statistics -- and the resulting sensory experiences -- shape the formation of spatial tuning. First, we identify distinct developmental stages in rat locomotion during open-field exploration using published experimental data. Then, we train shallow RNNs to predict upcoming visual stimuli from concurrent visual and vestibular inputs, exposing them to trajectories that reflect progressively maturing locomotor patterns. Our findings reveal that these changing movement statistics drive the sequential emergence of spatially tuned units, mirroring the developmental timeline observed in rats. The models generate testable predictions about how spatial tuning properties mature -- predictions we confirm through analysis of hippocampal recordings. Critically, we demonstrate that replicating the specific statistics of developmental locomotion -- rather than merely accelerating sensory change -- is essential for the emergence of an allocentric spatial representation. These results establish a mechanistic link between embodied sensorimotor experience and the ontogeny of hippocampal spatial neurons, with significant implications for neurodevelopmental research and predictive models of navigational brain circuits.
null
['recurrent neural network', 'spatial representations', 'hippocampus', 'development', 'locomotion', 'rats']
/pdf/3f2dec28d8650483865f3b5d633c8bfe12089dd1.pdf
applications to neuroscience & cognitive science
/attachment/a8a5f43eb07b7c1af69e12af9f45c1f0c8922820.zip
['ICLR.cc/2026/Conference/Submission24802/Authors']
qsaZxgKAtk
24,800
qsaZxgKAtk
Missing Pattern Recognized Diffusion Imputation Model for Missing Not at Random
Missing data frequently arises across diverse domains, including the time-series and image domains. In the real world, missing occurrences often depend on the unobservable values themselves, which is referred to as Missing Not at Random (MNAR). To address this, numerous generative models have been proposed, with diffusion models in particular demonstrating strong capabilities in out-of-sample imputation. However, most existing diffusion-based imputation approaches overlook the MNAR setting and instead rely on restrictive assumptions about the missing process, thereby limiting their applicability to practical scenarios. In this work, we introduce the Missing Pattern Recognized Diffusion Imputation Model (PRDIM), a novel framework that explicitly captures the missing pattern and precisely imputes unobserved values. PRDIM iteratively maximizes the likelihood of the joint distribution of observed values and the missing mask under an Expectation-Maximization (EM) algorithm. To this end, we first employ a pattern recognizer, which approximates the underlying missing pattern and provides guidance at every inference step toward more plausible imputations with respect to the missing information. In various experimental settings, we demonstrate that PRDIM achieves state-of-the-art performance compared to previous diffusion imputation approaches under the MNAR setting.
null
['Imputation', 'Generative Models']
/pdf/e46630368cdd58b4c137713c6433e2e40c42282f.pdf
generative models
/attachment/f7c971272f6612b86bdece63cef84781bbcdc5e7.zip
['ICLR.cc/2026/Conference/Submission24800/Authors']
Q8g97TA2YC
24,799
Q8g97TA2YC
Beyond Exploration–Exploitation: An Identification-Aware Bayesian Optimization Method under Noisy Evaluations
In this study, we investigate black-box optimization problems with heteroscedastic noise, a setting commonly encountered in hyperparameter tuning for machine learning models. Bayesian optimization (BO) is a popular framework for such problems, with prior work primarily focusing on designing acquisition functions or surrogate models to balance exploration and exploitation. However, a critical yet underexplored issue is the identification problem: BO algorithms often locate promising solutions but fail to reliably identify and return them to users. We take the first step toward addressing this challenge. We formally define the identification error within a standard BO framework and derive a myopic acquisition function that directly minimizes this error. A surprising theoretical result shows that the acquisition function for minimizing identification error is equivalent to the difference between two widely used criteria: the knowledge gradient (KG) and expected improvement (EI). Building on this insight, we propose a novel acquisition function, Identification-Error Aware Acquisition (IDEA), and establish its asymptotic no-regret property. The effectiveness of IDEA is demonstrated on benchmark test functions.
Identification-Aware BO
['black-box optimization', 'Bayesian optimization', 'acquisition function', 'identification problem']
/pdf/95895eb12ce5d51e394dc1b1b2d9eb461feed2bd.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission24799/Authors']
xNUgwvJ115
24,796
xNUgwvJ115
BagelScore: Visual-Language Evaluation Made Easy
Evaluation remains a fundamental challenge in multimodal learning. Existing metrics such as CLIPScore, LPIPS, and FID reduce assessment to embedding similarity or perceptual distance, which systematically fails to capture semantic correctness or editing plausibility, while GPT-based scoring remains subjective and inconsistent. We argue that the emergence of bottleneck-free unified multimodal models enables a new evaluation paradigm: their internal reasoning and generative dynamics can serve as principled signals. Building on BAGEL, we propose two complementary metrics. BagelScore focuses on image understanding and image-text matching, outperforming traditional metrics like CLIPScore, LPIPS, FID, and GPT-based heuristics by directly evaluating the semantic alignment between images and captions using the unified model's reasoning capabilities. EditingScore, the first evaluation metric specifically designed for assessing image editing quality, quantifies the difficulty of learning the transformation in the latent space of a generative model. EditingScore is validated on Edit-1K, the first benchmark dataset specifically created for image editing quality evaluation. Together, BagelScore and EditingScore provide a unified, reasoning-based paradigm for multimodal evaluation.
null
['Multimodal Learning', 'Image Editing']
/pdf/f77c4c74637e6ad1be8e6788daf95bebcab00e71.pdf
applications to computer vision, audio, language, and other modalities
/attachment/f19df0e98657c7cdb8004beddf052df3552e57d6.pdf
['ICLR.cc/2026/Conference/Submission24796/Authors']
m2rgUNmnDI
24,794
m2rgUNmnDI
Depth-consistent Motion Blur Augmentation
Motion blur is a ubiquitous phenomenon commonly encountered in lightweight, handheld cameras. Addressing this degradation is essential for preserving visual fidelity and ensuring the robustness of vision models for scene understanding tasks. In the literature, robustness to motion blur has been generally treated like other degradations; this despite the complex space-variant nature of motion blur due to scene dynamics and its inherent dependence on scene geometry and depth. While some recent works addressing this issue have introduced space-variant blur due to scene dynamics, they fall back on space-invariant blurring to model camera egomotion which is imperfect. This work proposes an efficient methodology to generate space-variant depth-consistent blur to model camera egomotion by leveraging depth foundation models. We refer to our approach as Depth-consistent Motion Blur Augmentation (DMBA). To demonstrate the effectiveness of DMBA in improving robustness to realistic motion blur, we provide experiments for the tasks of semantic segmentation and self-supervised monocular depth estimation. We include results for standard networks on the Cityscapes dataset for semantic segmentation and the KITTI dataset for monocular depth estimation. We also illustrate the improved generalizability of our method to complex real-world scenes by evaluating on commonly used datasets GoPro and REDS that contain real motion blur.
null
['Motion Blur', 'Augmentation', 'Segmentation', 'Depth estimation']
/pdf/8089e34778f12802a3d6a0ee95a9146db64b70ed.pdf
applications to computer vision, audio, language, and other modalities
/attachment/943bdd292d0f330a459954fd797e392ec6d95de2.pdf
['ICLR.cc/2026/Conference/Submission24794/Authors']
3x4SDbXbgl
24,792
3x4SDbXbgl
Computer Agent Arena: Toward Human-Centric Evaluation and Analysis of Computer-Use Agents
As Computer-Use Agents (CUAs) proliferate and grow increasingly capable, evaluation has become more challenging: static, manually curated benchmarks are narrow in domain, contamination-prone, and environment-heavy, and they diverge substantially from user-driven, real-world evaluation. We present Computer Agent Arena, an open-source platform for head-to-head CUA evaluation and a dynamic methodology that converts human preferences into structured feedback in realistic environments. The system (i) simulates real-world computer use via cloud-hosted, diverse, and dynamic environment initializations and customizations; (ii) ensures authentic, fair comparison by faithfully reproducing open-source CUAs and executing anonymously in matched, controlled environments; and (iii) extends evaluation beyond pairwise preference and correctness to capability- and behavior-oriented signals. Across 2,201 high-quality votes over 12 agents—spanning multi-app interactions, ambiguous instructions, and open-ended queries—we observe striking ranking reversals relative to static benchmarks. Further analysis shows that overall correctness mainly drives human preference; beyond that, agent-human interaction and self-correction boost user preference, even when overall task completion is comparable. Our error analysis reveals agent behavior errors, such as long-horizon memory and fine-grained action failures that static benchmarks fail to evaluate. We also contrast pure GUI agents with universal digital agents capable of tool use and coding, and discuss the trade-offs of these different design philosophies. We open source the full platform, collected dataset, and code of Computer Agent Arena to support future research on the evaluation and development of CUA.
null
['Computer-Use Agent', 'Visual Language Model', 'Human-in-the-loop', 'Evaluation']
/pdf/c89502c51b47570070dc2ff01ed59675c43abd51.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission24792/Authors']
4KVeb0Vv13
24,791
4KVeb0Vv13
(Token-Level) \textbf{InfoRMIA}: Stronger Membership Inference and Privacy Assessment for LLMs
Machine learning models are known to leak sensitive information, as they inevitably memorize (parts of) their training data. More alarmingly, large language models (LLMs) are now trained on nearly all available data, which amplifies the magnitude of information leakage and raises serious privacy risks. Hence, it is more crucial than ever to quantify privacy risk before the release of LLMs. The standard method to quantify privacy is via membership inference attacks, where the state-of-the-art approach is the Robust Membership Inference Attack (RMIA). In this paper, we present InfoRMIA, a principled information-theoretic formulation of membership inference. Our method consistently outperforms RMIA across benchmarks while also offering improved computational efficiency. In the second part of the paper, we identify the limitations of treating sequence-level membership inference as the gold standard for measuring leakage. We propose a new perspective for studying membership and memorization in LLMs: token-level signals and analyses. We show that a simple token-based InfoRMIA can pinpoint which tokens are memorized within generated outputs, thereby localizing leakage from the sequence level down to individual tokens, while achieving stronger sequence-level inference power on LLMs. This new scope rethinks privacy in LLMs and can lead to more targeted mitigation, such as exact unlearning.
We introduce a new state-of-the-art membership inference attack, InfoRMIA, which dominates RMIA on all benchmarks. We also propose a token-level attack framework that has high power and can pinpoint information leakage at the token level.
['membership inference attack', 'mia', 'privacy', 'llm privacy', 'memorization']
/pdf/631257434707e4096a5c97b3c52c9c11a8076282.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24791/Authors']
n0CiqI3Rz2
24,790
n0CiqI3Rz2
Robust discovery of governing equations through symmetry
Discovering governing equations of dynamical systems directly from data remains a fundamental challenge, especially under noise and data scarcity. We propose a symmetry-inspired symbolic regression (SI-SR) framework that automatically identifies intrinsic physical invariances and embeds them into a symmetry-constrained variable set, enhancing robustness and promoting sparsity. The framework combines a validation step for symmetry confirmation with symbolic regression for expressive nonlinear modelling. We evaluate SI-SR on canonical partial differential equations (PDEs) and variable-coefficient systems, with systematic comparisons against state-of-the-art baselines. Results show that leveraging symmetry reduces redundancy and enables the recovery of compact, accurate models. This establishes symmetry as a powerful inductive bias for data-driven equation discovery.
Robust equation discovery via symmetry identification
['Dynamical systems', 'Equation discovery', 'Symmetry', 'Noisy and sparse data']
/pdf/45c0874a44306fa4eea3281ac39355fd0e379778.pdf
learning on time series and dynamical systems
null
['ICLR.cc/2026/Conference/Submission24790/Authors']
Ky02MLYr2r
24,789
Ky02MLYr2r
MLBF-PRS: A Machine Learning Model Development and Benchmarking Framework for Polygenic Risk Scores
In contrast to other genomic tasks, the development of machine learning-based individual-level, genome-wide predictive models, typically termed polygenic risk scores (PRS), has shown little improvement from the use of complex machine learning (ML) methods. This disparity can be attributed to challenges in accessibility, comparability across studies, and a lack of development and evaluation guidelines that enable reproducibility. Sequence-based genomic tasks benefit from benchmarks, which have proven to be fruitful in the advancement of machine learning model development across domains. To overcome the challenges present in the development of ML-based PRS models, we introduce MLBF-PRS, a novel framework as a catalyst to promote and accelerate the development of ML-based solutions. The framework provides flexible Nextflow DSL2 pipelines that enable parallel comparison of ML models (SVMs, random forests, neural networks) against established statistical PRS methods, comprehensive quality control and data preparation modules following PRS-specific best practices, and automated tracking of model parameters, trained weights, and configurations to ensure full reproducibility. We describe the usage of MLBF-PRS to showcase how this framework provides accessibility, where, in most cases, the setup and evaluation of PRS models can be time-consuming and require navigation of multiple software tools. The standardised and reproducible dataset-specific benchmarking through MLBF-PRS offers a practical alternative to traditional open benchmarks. We make our framework openly available and continue expanding its capabilities.
null
['Machine Learning', 'PRS', 'PGS', 'Benchmarks', 'Nextflow', 'Pipeline', 'Polygenic score', 'Polygenic risk score']
/pdf/62eae8b8e2051f2391ac9f19f8e048217d044c15.pdf
infrastructure, software libraries, hardware, systems, etc.
/attachment/4444e08b11b896cedb500766e8831e7e5da2330e.zip
['ICLR.cc/2026/Conference/Submission24789/Authors']
iBLHGdBImw
24,787
iBLHGdBImw
From Numerical Solvers to Graph Surrogates: Physics-Informed Losses for Data-Efficient CFD Modeling
Graph neural networks (GNN) represent a promising method for creating robust and physically interpretable surrogate models for fluid dynamics. These surrogates offer a significant advantage over traditional computational fluid dynamics (CFD) solvers based on numerical methods because they require much less computational cost. In a GNN designed as a surrogate model for spatio-temporal partial differential equations, message passing can be interpreted as the propagation of physical quantities such as velocity, pressure, and temperature. The complexity of the Navier-Stokes equations, however, can limit the generalizability of existing models and lead to long training times. We show that including a physics-informed loss function based on the numerical methods used to generate the training data, specifically the finite volume method, can reduce the amount of data needed to train an accurate physics-informed surrogate compared with a purely data-driven baseline. By reducing the dataset size by 20\% and applying this approach, we achieved a 33\% reduction in convergence time. For larger datasets, model accuracy improved by up to 7.4\% within the same timeframe. Our method also avoids interpolation between cell centers and vertices, which can introduce errors from numerical discretization. Applying this soft constraint during training can support the development of future CFD surrogate GNN models that perform well even with smaller datasets.
null
['Physics Informed Neural Networks', 'Graph Neural Networks', 'Fluid Dynamics Surrogate']
/pdf/d9030d18b4c981fb65fb7231aa580cea92efbdd6.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission24787/Authors']
wPr54hJYuF
24,786
wPr54hJYuF
MA-EgoQA: Question Answering over Egocentric Videos from Multiple Embodied Agents
As embodied models become powerful, humans will collaborate with multiple embodied AI agents at their workplace or home in the future. To ensure better communication between human users and the multi-agent system, it is crucial to interpret incoming information from agents in parallel and refer to the appropriate context for each query. Existing challenges are to effectively compress and communicate high volumes of individual sensory inputs in the form of video and to correctly aggregate multiple egocentric videos to construct system-level memory. In this work, we first formally define a novel problem of understanding multiple long-horizon egocentric videos simultaneously collected from embodied agents. To facilitate research in this direction, we introduce MultiAgent-EgoQA (MA-EgoQA), a benchmark designed to systemically evaluate existing models in our scenario. MA-EgoQA provides 1.7k questions unique to multiple egocentric streams, spanning five categories: social interaction, task coordination, theory-of-mind, temporal reasoning, and environmental interaction. We further propose a simple baseline model for MA-EgoQA named EgoMAS, which leverages shared memory across embodied agents and agent-wise dynamic retrieval. Through comprehensive evaluation across diverse baselines and EgoMAS on MA-EgoQA, we find that current approaches are unable to effectively handle multiple egocentric streams, highlighting the need for future advances in this direction.
null
['Egocentric Video Understanding', 'Multi-Agent System']
/pdf/efaf08bb93fdee7943bddae368a8443ed2739c87.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission24786/Authors']
anAHXnrTVW
24,784
anAHXnrTVW
Relative-Based Scaling Law for Neural Language Models
Scaling laws aim to accurately predict model performance across different scales. Existing scaling-law studies almost exclusively rely on cross-entropy as the evaluation metric. However, cross-entropy provides only a partial view of performance: it measures the absolute probability assigned to the correct token, but ignores the relative ordering between correct and incorrect tokens. Yet, relative ordering is crucial for language models, such as in greedy-sampling scenarios. To address this limitation, we investigate scaling from the perspective of relative ordering. We first propose the Relative-Based Probability (RBP) metric, which quantifies the probability that the correct token is ranked among the top predictions. Building on this metric, we establish the Relative-Based Scaling Law, which characterizes how RBP improves with increasing model size. Through extensive experiments on four datasets and four model families spanning five orders of magnitude, we demonstrate the robustness and accuracy of this law. Finally, we illustrate the broad application of this law with two examples, namely providing a deeper explanation of emergence phenomena and facilitating the search for fundamental theories of scaling laws. In summary, the Relative-Based Scaling Law complements the cross-entropy perspective and contributes to a more complete understanding of scaling large language models. Thus, it offers valuable insights for both practical development and theoretical exploration.
We propose Relative-Based Scaling Law for neural language models.
['Scaling Law', 'Neural Language Models']
/pdf/c3972534b004d7a08b2ec75f74b9effa52e45a27.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24784/Authors']
PNRGSW9jPz
24,783
PNRGSW9jPz
EvA: An Evidence-First Audio Understanding Paradigm for LALMs
While Large Audio Language Models (LALMs) have demonstrated remarkable capabilities in audio understanding tasks, their performance degrades sharply in complex acoustic scenes, revealing a fundamental limitation in their perceptual grounding. In this work, we first identify a critical failure mode that exposes this limitation: state-of-the-art LALMs paradoxically struggle more with simple evidence-extraction tasks than with complex reasoning ones. We diagnose this as a breakdown in acoustic evidence grounding, a problem rooted in systemic information loss during feature encoding and fusion. To address this, we introduce EvA (Evidence-First Audio), a new paradigm that prioritizes maximizing the fidelity of acoustic evidence. EvA's dual-encoder architecture combines Whisper with CED-Base, a ViT-based general audio encoder, and pioneers a structure-preserving, two-stage fusion process. First, it enriches evidence by hierarchically aggregating multi-level features from within the CED-Base encoder. Second, it integrates this representation with Whisper's output via a time-aligned, inject-and-add mechanism that guarantees perfect temporal integrity. To facilitate training for this paradigm, we co-develop EvA-Perception, a large-scale open-source dataset with high-temporal-precision annotations. Our resulting model establishes a new open-source state-of-the-art on multiple challenging benchmarks, including MMAU, MMAR, and MMSU. Crucially, EvA achieves its most significant gains on perception-heavy subsets, validating our hypothesis that addressing the evidence bottleneck is key to unlocking the next level of audio understanding.
null
['Large Audio Language Model', 'Audio Understanding']
/pdf/7dfd122442c022c398559436dea15702857d4933.pdf
applications to computer vision, audio, language, and other modalities
/attachment/a547ac3e0ce7454df8af924d2d75fa4d5381da24.zip
['ICLR.cc/2026/Conference/Submission24783/Authors']
1spOYCVPPg
24,781
1spOYCVPPg
It’s Not You, It’s Clipping: A Soft Trust-Region via Probability Smoothing for LLM RL
Training large language models (LLMs) with reinforcement learning (RL) methods such as PPO and GRPO commonly relies on ratio clipping to stabilise updates. While effective at preventing instability, clipping discards information and introduces gradient discontinuities. We propose Probability Smoothing Policy Optimisation (PSPO), which smooths the current policy’s probabilities toward the old (behaviour) policy before computing the importance ratio, analogous to label smoothing. Unlike clipping, PSPO preserves gradient signal, while interpolation toward the old policy creates a soft trust region that discourages large, destabilising updates, with formal guarantees. We instantiate PSPO within GRPO (GR-PSPO) and fine-tune Qwen2.5-0.5B/1.5B on GSM8K, evaluating on GSM8K test and the cross-dataset generalisation on SVAMP, ASDiv, and MATH-500. Relative to unclipped GRPO (single iteration; no data reuse, ratio always = 1), GR-PSPO attains similar accuracy but produces clearer, more concise, and more logically coherent responses (LLM-as-Judge). Compared to clipped GRPO, GR-PSPO substantially improves performance in both the 0.5B and 1.5B models, with a boost of over 20% on GSM8K (39.7% vs. 17.6% for 0.5B, 59.4% vs. 37.8% for 1.5B).
null
['Policy Optimization', 'PPO', 'GRPO', 'Clipping', 'Trust Region', 'Probability Smoothing', 'Soft Trust Region', 'LLM', 'Reasoning', 'Mathematical Problem Solving', 'fine-tuning']
/pdf/93ea150d3e8d4988105fb5c6ca3a1e0d0440ba7c.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission24781/Authors']
JbjiTt7Exh
24,779
JbjiTt7Exh
CAdam: Confidence-Based Optimization for Online Learning
Modern recommendation systems frequently employ online learning to dynamically update their models with freshly collected data. The most commonly used optimizer for updating neural networks in these contexts is the Adam optimizer, which integrates momentum ($m_t$) and adaptive learning rate ($v_t$). However, the volatile nature of online learning data, characterized by its frequent distribution shifts and presence of noise, poses significant challenges to Adam's standard optimization process: (1) Adam may use outdated momentum and the average of squared gradients, resulting in slower adaptation to distribution changes, and (2) Adam's performance is adversely affected by data noise. To mitigate these issues, we introduce CAdam, a confidence-based optimization strategy that assesses the consistency between the momentum and the gradient for each parameter dimension before deciding on updates. If momentum and gradient are in sync, CAdam proceeds with parameter updates according to Adam's original formulation; if not, it temporarily withholds updates and monitors potential shifts in data distribution in subsequent iterations. This method allows CAdam to distinguish between the true distributional shifts and mere noise, and to adapt more quickly to new data distributions. In various settings with distribution shift or noise, our experiments demonstrate that CAdam surpasses other well-known optimizers, including the original Adam. Furthermore, in large-scale A/B testing within a live recommendation system, CAdam significantly enhances model performance compared to Adam, leading to substantial increases in the system's gross merchandise volume (GMV).
null
['Online Learning', 'Optimization']
/pdf/5e5b6600c7f59f286608e30a29ab129ad76a4999.pdf
other topics in machine learning (i.e., none of the above)
/attachment/e9b81de535a9684b726dacdf3ecb4a39c1c2f825.zip
['ICLR.cc/2026/Conference/Submission24779/Authors']
m06fRxiGi9
24,778
m06fRxiGi9
VRIQ: Benchmarking and Analyzing Visual-Reasoning IQ of VLMs
Recent progress in Vision Language Models (VLMs) has raised the question of whether they can reliably perform nonverbal reasoning. To this end, we introduce VRIQ (Visual Reasoning IQ), a novel benchmark designed to assess and analyze the visual reasoning ability of VLMs. We evaluate models on two sets of tasks: abstract puzzle-style and natural-image reasoning tasks. We find that on abstract puzzles, performance remains near random with an average accuracy of around 28\%, while natural tasks yield better but still weak results with 45\% accuracy. We also find that tool-augmented reasoning demonstrates only modest improvements. To uncover the source of this weakness, we introduce diagnostic probes targeting perception and reasoning. Our analysis demonstrates that around 56\% of failures arise from perception alone, 43\% from both perception and reasoning, and only a mere 1\% from reasoning alone. This motivates us to design fine-grained diagnostic probe questions targeting specific perception categories (e.g., shape, count, position, 3D/depth), revealing that certain categories cause more failures than others. Our benchmark and analysis establish that current VLMs, even with visual reasoning tools, remain unreliable abstract reasoners, mostly due to perception limitations, and offer a principled basis for improving visual reasoning in multimodal systems.
null
['Visual Reasoning', 'Vision Language Models', 'Benchmark', 'IQ']
/pdf/e65fa664e93389ed629df61aae20dd77acc393e3.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission24778/Authors']
VSDV0SWwOC
24,777
VSDV0SWwOC
LS-Merge: Merging Language Models in Latent Space
Model merging in weight space is an efficient way to reuse pretrained models, but existing methods typically assume matching architectures or sizes, making heterogeneous merges brittle or infeasible. We address this limitation by encoding model weights into a smooth latent space, enabling cross-architecture operations, and performing the merge in the latent space before decoding back to weights. This approach faces two major challenges. First, LLMs contain billions of parameters, which makes latent encoding computationally demanding. Second, using high compression ratios often hinders the encoder’s ability to generalize to unseen weights. We tackle these issues with a transformer-based variational autoencoder (VAE) trained in a two-stage compression curriculum with structured layer-aware chunking: the model first learns a high-capacity latent representation and then distills to a compact code, improving both stability and out-of-distribution generalization. To align heterogeneous models, we introduce a dimensionality-matching projection that allows interpolation between models of different sizes. Empirically, latent-space interpolation is consistently more robust than direct weight-space averaging and yields stronger downstream performance when merging models of different sizes. Together, these components provide a scalable, architecture-agnostic recipe for model merging.
Merging Language Models in Latent Space
['LS-Merge', 'LLM merging', 'latent space', 'weight space learning']
/pdf/2d6c926c9a0e59e561ac2d47db8f24abaeccf458.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission24777/Authors']
7ei1uOYUY0
24,775
7ei1uOYUY0
From Scarcity to Efficiency: Preference-Guided Learning for Sparse-Reward Multi-Agent Reinforcement Learning
We study the problem of online multi-agent reinforcement learning (MARL) in environments with sparse rewards, where reward feedback is not provided at each interaction but only revealed at the end of a trajectory. This setting, though realistic, presents a fundamental challenge: the lack of intermediate rewards hinders standard MARL algorithms from effectively guiding policy learning. To address this issue, we propose a novel framework that integrates online inverse preference learning with multi-agent on-policy optimization into a unified architecture. At its core, our approach introduces an implicit multi-agent reward learning model, built upon a preference-based value-decomposition network, which produces both global and local reward signals. These signals are further used to construct dual advantage streams, enabling differentiated learning targets for the centralized critic and decentralized actors. In addition, we demonstrate how large language models (LLMs) can be leveraged to provide preference labels that enhance the quality of the learned reward model. Empirical evaluations on state-of-the-art benchmarks, including MAMuJoCo and SMACv2, show that our method achieves superior performance compared to existing baselines, highlighting its effectiveness in addressing sparse-reward challenges in online MARL.
We propose a novel MARL framework that tackles sparse rewards by integrating online Inverse Preference Learning with LLM-generated preferences.
['Multi-Agent Reinforcement Learning', 'Preference-Based Reinforcement Learning', 'Inverse Preference Learning', 'Value Decomposition']
/pdf/2c51a0c054224de916fe3a27a985d268592f67e9.pdf
reinforcement learning
/attachment/8425f724729626fe47f483c64ab6080a06611f27.zip
['ICLR.cc/2026/Conference/Submission24775/Authors']
P7wBg0vPTh
24,774
P7wBg0vPTh
RLVER: Reinforcement Learning with Verifiable Emotion Rewards for Empathetic Agents
Large language models (LLMs) excel at logical and algorithmic reasoning, yet their emotional intelligence (EQ) still lags far behind their cognitive prowess. While reinforcement learning from verifiable rewards (RLVR) has advanced in other domains, its application to dialogue—especially for emotional intelligence—remains underexplored. In this work, we introduce RLVER, the first end-to-end reinforcement learning framework that leverages verifiable emotion rewards from simulated users to cultivate higher-order empathetic abilities in LLMs. Within this framework, self-consistent affective simulated users engage in dialogue rollouts and produce deterministic emotion scores during conversations, serving as reward signals to guide the LLM's learning. Fine-tuning the publicly available Qwen2.5-7B-Instruct model with PPO boosts its Sentient-Benchmark score from 13.3 to 79.2 while largely preserving mathematical and coding competence. Extensive experiments reveal that: (i) RLVER consistently improves multiple dialogue capabilities; (ii) Thinking and non-thinking models show distinct trends—thinking models excel in empathy and insight, while non-thinking models favor action; (iii) GRPO often yields stable gains, while PPO can push certain capabilities to a higher ceiling; (iv) More challenging environments are not always better—moderate ones can yield stronger outcomes. Our results show that RLVER is a practical route toward emotionally intelligent and broadly capable language agents.
null
['Large language models', 'Reinforcement Learning', 'Agent']
/pdf/b2ca928b3b59a98f86ccd808874c43e3a41d618a.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission24774/Authors']
ErnnE2UNI2
24,773
ErnnE2UNI2
Minor First, Major Last: A Depth-Induced Implicit Bias of Sharpness-Aware Minimization
We study the implicit bias of sharpness-aware minimization (SAM) when training $L$-layer linear diagonal networks on linearly separable binary classification. For linear models ($L=1$), both $\ell_\infty$- and $\ell_2$-SAM recover the $\ell_2$ max-margin classifier, matching gradient descent (GD). However, for depth $L = 2$, the behavior changes drastically—even on a single-example dataset where we can analyze the dynamics. For $\ell_\infty$-SAM, the limit direction depends critically on initialization and can converge to $0$ or to any standard basis vector; this is in stark contrast to GD, whose limit aligns with the basis vector of the dominant coordinate in the data. For $\ell_2$-SAM, we uncover a phenomenon we call *sequential feature discovery*, in which the predictor initially relies on minor coordinates and gradually shifts to larger ones as training proceeds or initialization grows. Our theoretical analysis attributes this phenomenon to $\ell_2$-SAM’s gradient normalization factor applied in its perturbation, which amplifies minor coordinates early and allows major ones to dominate later. Synthetic and real-data experiments corroborate our findings.
null
['sharpness-aware minimization', 'implicit bias', 'gradient flow']
/pdf/e312c1e389906af24bf4dd3572ca0c1e0e97f392.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission24773/Authors']
0fvVI2rORC
24,772
0fvVI2rORC
Can Large Language Models Model Programs Formally?
In the digital age, ensuring the correctness, safety, and reliability of software through formal verification is paramount, particularly as software increasingly underpins critical infrastructure. Formal verification, split into theorem proving and model checking, provides a feasible and reliable path. Unlike theorem proving, which has yielded notable advances, model checking has received less attention due to the difficulty of automatic program modeling. To fill this gap, we introduce \name, a benchmark and an accompanying pipeline for evaluating and improving LLMs' program modeling capability by modeling Python programs into verification-ready model checking specifications checkable by its accompanying model checker. \name comprises 400 Python programs derived from three well-known benchmarks (HumanEval, MBPP, and LiveCodeBench). Our extensive experiments reveal significant limitations in LLMs' program modeling and further provide inspiring directions.
null
['model checking', 'large language model', 'formal verification']
/pdf/7040fc763a6cdd56f6b65b8e2a5ba25f43dfc650.pdf
foundation or frontier models, including LLMs
/attachment/d7b30887d9069bf7556a34bc98c2c55e2060d9a1.zip
['ICLR.cc/2026/Conference/Submission24772/Authors']
5Taa8ZaZ5o
24,768
5Taa8ZaZ5o
GAMformer: Bridging Tabular Foundation Models and Interpretable Machine Learning
While interpretability is crucial for machine learning applications in safety-critical domains and regulatory compliance, existing tabular foundation models like TabPFN lack the transparency needed for these applications. Generalized Additive Models (GAMs) provide the needed interpretability through their additive structure, but traditional GAM methods rely on iterative learning algorithms (such as splines, boosted trees, or neural networks) that are fundamentally incompatible with the in-context learning paradigm of foundation models. In this paper, we introduce GAMformer, the first tabular foundation model for GAMs that bridges the gap between the power of foundation models and the interpretability requirements of real-world applications. GAMformer estimates GAM shape functions in a single forward pass using in-context learning, representing a significant departure from conventional iterative approaches. Building on previous research applying in-context learning to tabular data, we train GAMformer exclusively on synthetically generated tables. Our experiments demonstrate that GAMformer performs comparably to other leading GAMs across various classification benchmarks while maintaining full interpretability.
GAMformer is the first tabular foundation model for GAMs, estimating shape functions in a single forward pass, performing well on real datasets despite training only on synthetic causal data.
['GAMs', 'interpretability', 'tabular deep learning', 'glassbox', 'generalized additive models']
/pdf/ba5c85fbd2134327ae24c601be39dc125591ed2f.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission24768/Authors']
vNIfmUPPfw
24,767
vNIfmUPPfw
Channel-Similarity Aware Spike Encoding for Multivariate Time-Series Forecasting
Spiking Neural Networks (SNNs) have attracted increasing attention for multivariate time-series forecasting due to their intrinsic energy efficiency and suitability for modeling temporal signals. However, existing SNN-based studies have largely focused on temporal dynamics, restricting their scope to spike encoding and temporal modeling. In contrast, recent advances in artificial neural networks demonstrate that modeling inter-channel similarity can substantially enhance forecasting performance. Despite this, such perspectives remain underexplored in the context of SNNs. In this work, we introduce a method that explicitly models inter-channel similarity through channel clustering and integrates it into temporal spike encoding. Specifically, we employ attention-based clustering to quantify channel similarity as cluster memberships, and leverage the Straight-Through Estimator (STE) to enable both end-to-end optimization and seamless integration into spike-based encoding. We evaluate the proposed approach on six benchmark datasets spanning diverse domains and temporal characteristics, using recurrent-, transformer-, and convolution-based SNN backbones. Experimental results show consistent improvements over baseline SNNs, achieving relative reductions in RRSE ranging from 3.0\% to 6.5\%. These findings highlight the potential of inter-channel similarity modeling as a complementary dimension to temporal dynamics in advancing the forecasting capabilities of SNNs.
We propose a spike-form channel clustering method that injects inter-channel similarity into temporal spike encoding, yielding up to 20.1% forecasting improvement across diverse SNN backbones and datasets.
['Spiking Neural Network', 'Spike Encoding', 'Inter-Channel Similarity']
/pdf/03c4f808f44903c6b41db22355da8bd4065237d0.pdf
learning on time series and dynamical systems
null
['ICLR.cc/2026/Conference/Submission24767/Authors']
SVUAHG0pNe
24,764
SVUAHG0pNe
Few-Shot Class-Incremental Learning based on Hierarchical Dual-Stream Interaction and Associative Memory Fusion
Few-Shot Class-Incremental Learning (FSCIL) aims to learn novel classes from limited examples while preserving previously acquired knowledge. Current methods face two challenges: (1) Collapsed intra-class variance, where enhancing base-class separability limits generalization; and (2) Boundary instability, where few novel samples distort feature distribution and cause catastrophic forgetting. To address these challenges, we propose a cognition-inspired framework that employs a dual-stream network to extract a unified representation space with strong generalization and a hierarchical fusion mechanism with associative memory to improve old and new feature distribution. This framework comprises two key modules for rapid adaptation and long-term stability. The Hierarchical Dual-Stream Interaction Network (HDIN) decouples feature learning into a ResNet-based local stream for fine-grained detail extraction and a ViT-based global stream for long-range semantic dependencies. These streams are dynamically integrated via channel-adaptive attention to harmonize multi-scale information, simulating cognitive-level feature integration. The Associative-Enhanced Hierarchical Memory Fusion (AE-HMF) module simulates cortical memory consolidation by Gaussian sampling from class prototypes as associative memories and performing cross-layer feature interactions. Experiments on CIFAR100, miniImageNet, and CUB200 show that under the setting of no large-scale pretraining or data expansion techniques, our approach achieves the lowest Performance Decline Rates (DR) across all benchmarks, delivering a state-of-the-art balance between accuracy and forgetting. This work establishes a cognition-inspired, unified framework that effectively promotes the generalization capability and reduces catastrophic forgetting in FSCIL.
We propose a cognition-inspired framework with a dual-stream network and associative memory fusion to address intra-class variance collapse and boundary instability in FSCIL, achieving better generalization and lower catastrophic forgetting.
['Continual Learning', 'Few-shot Learning', 'Few-Shot Class-Incremental Learning', 'Image Classification', 'Brain-inspired']
/pdf/d0ed8846d4f75a0429f3d6d337a8f1613b860345.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission24764/Authors']
0yjmQKUiUr
24,762
0yjmQKUiUr
Topology aware optimization of soft prompts
Soft prompt tuning achieves excellent performance in few-shot tasks. However, soft prompt tuning lacks interpretability, and traditional prompt tuning methods fail to analyze its internal structural features or optimize from this perspective. To address this limitation, this research proposes a topology-aware optimization method focused on the internal structure of soft prompts. By introducing persistent homology methods from topological data analysis (TDA), we characterize the structural evolution features of soft prompts during training, discovering that changes in connectivity persistence and redundancy affect soft prompt tuning performance. When both structural connectivity and persistent homology entropy simultaneously approach convergence, soft prompts can more easily guide models to output correct reasoning chains. Based on this phenomenon, we developed a new loss function with specific topological structure analysis, called TDA for Softprompt Loss Function (TSLoss), which introduces topological measurement tools through TDA to quantify connectivity and redundancy between semantic units, learning information related to topological structure transformations trending toward structural stability. Extensive experiments demonstrate that TSLoss can significantly accelerate the convergence speed of prompt tuning, outperforming traditional prompt tuning methods, and providing an interpretable research direction for soft prompt tuning from a new perspective.
null
['soft prompts', 'parameter-efficient fine-tuning', 'large language models']
/pdf/31e052792933a972c4dfd2d06ceec95eb20a473e.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24762/Authors']
z1yPH2Rska
24,761
z1yPH2Rska
Diagonal Batching Unlocks Parallelism in Recurrent Memory Transformers for Long Contexts
Long-context inference with Transformers is constrained by quadratic attention and linear memory growth. Many linear-time alternatives require pretraining from scratch, whereas Recurrent Memory Transformers (RMTs) convert pretrained models into segment-recurrent variants via finetuning without modifying the original model architecture. However, their sequential memory updates underutilize GPUs. We show that RMT-style architectures with layer-level memory (PRMTs) (e.g., ARMT) can be among the most latency-efficient linear approaches when scheduled properly. We introduce Diagonal Batching, a compute-reordering scheme that preserves exact recurrence while exposing inter-step parallelism by executing "diagonals" concurrently with grouped layers. On LLaMA (1B/3B/8B) up to 131,072 tokens on A100/H100, Diagonal Batching achieves up to 3.3× lower latency than full-attention inference and 1.8× over a sequential ARMT baseline, with no custom CUDA kernels. With the right scheduling, PRMTs achieve linear scaling with context length and stand out as competitive, scalable architectures among linear recurrent models.
null
['Transformer inference', 'Efficient transformer architectures', 'Recurrent Memory Transformers', 'diagonal batching', 'inference scheduling', 'LLM inference acceleration', 'efficient deep learning', 'long-context sequence modeling']
/pdf/992d63439a1694b7c4c2ff0366bbea4d489ecf56.pdf
foundation or frontier models, including LLMs
/attachment/d8f12b3ccc7d73709a10c4931a0daa9d43acf45a.zip
['ICLR.cc/2026/Conference/Submission24761/Authors']
kIpCoJNULS
24,760
kIpCoJNULS
Tensor Train Diffusion: A Fast Solver for High-Dimensional Sampling
Diffusion models offer a powerful framework for sampling from complex probability densities by learning to reverse a noising process. A common approach involves solving for the time-reversed stochastic differential equation (SDE), which requires the score function of the evolving sample distribution. The logarithm of this distribution's density is governed by a Hamilton-Jacobi-Bellman (HJB) type partial differential equation (PDE). However, current methods for solving this PDE, such as PINNs or trajectory-based techniques, often suffer from long training times and significant sensitivity to hyperparameter tuning. In this work, we introduce a novel and efficient solver for the underlying HJB equation based on the functional tensor train (FTT) format. The FTT representation leverages latent low-rank structures to efficiently approximate high-dimensional functions, enabling both model compression and rapid computation. By integrating this efficient representation with a backward-in-time iterative scheme derived from backward stochastic differential equations (BSDEs), we develop a fast, robust and accurate sampling method. Our approach overcomes primary bottlenecks of existing techniques, enabling high-fidelity sampling from challenging target distributions with improved efficiency.
We approximate the Hamilton Jacobi Bellman PDE with tensor trains; the solution can be used for sampling from unnormalized probability densities
['diffusion-based sampling', 'unnormalized probability density', 'tensor trains', 'Hamilton Jacobi Bellman PDE', 'PDE approximation', 'BSDE']
/pdf/06fea4ecf7f69a119a487d27347a9c6827619fb8.pdf
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
null
['ICLR.cc/2026/Conference/Submission24760/Authors']
10TkrLM8bW
24,759
10TkrLM8bW
The Hunger Game Debate: On the Emergence of Over-Competition in Multi-Agent Systems
LLM-based multi-agent systems demonstrate great potential for tackling complex problems, but how competition shapes their behavior remains underexplored. This paper investigates the over-competition in multi-agent debate, where agents under extreme pressure exhibit unreliable, harmful behaviors that undermine both collaboration and task performance. To study this phenomenon, we propose HATE, the Hunger Game Debate, a novel experimental framework that simulates debates under a zero-sum competition arena. Our experiments, conducted across a range of LLMs and tasks, reveal that competitive pressure significantly stimulates over-competition behaviors and degrades task performance, causing discussions to derail. We further explore the impact of environmental feedback by adding variants of judges, indicating that objective, task-focused feedback effectively mitigates the over-competition behaviors. We also probe the post-hoc kindness of LLMs and form a leaderboard to characterize top LLMs, providing insights for understanding and governing the emergent social dynamics of AI community.
This paper presents the first study of competitive incentives in multi-agent debates, quantifying over-competition behaviors of state-of-the-art LLMs.
['Large language models', 'Multi-agent debate', 'Evaluation']
/pdf/31e3e88609ade2ade3d62333b355bc6838ec983c.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/a99d01949f1bab20f88b2e4b0abc36133a3e0d19.zip
['ICLR.cc/2026/Conference/Submission24759/Authors']
gSPkuTTWgU
24,756
gSPkuTTWgU
Is Graph Unlearning Ready for Practice? A Benchmark on Efficiency, Utility, and Forgetting
Graph Neural Networks (\textsc{Gnn}s) are increasingly being deployed in sensitive, user-centric applications where regulations such as the GDPR mandate the ability to remove data upon request. This has spurred interest in graph unlearning, the task of removing the influence of specific training data from a trained \textsc{Gnn} without retraining from scratch. While several unlearning techniques have recently emerged, the field lacks a principled benchmark to assess whether these methods truly provide a practical alternative to retraining and, if so, how to choose among them for different workloads. In this work, we present the first systematic benchmark for \textsc{Gnn} unlearning, structured around three core desiderata: \emph{efficiency} (is unlearning faster than retraining?), \emph{utility} (does the unlearned model preserve predictive performance and align with the retrained gold standard?), and \emph{forgetting} (does the model genuinely eliminate the influence of removed data?). Through extensive experiments across diverse datasets and deletion scenarios, we deliver a unified assessment of existing approaches, surfacing their trade-offs and limitations. Crucially, our findings show that most unlearning techniques are not yet practical for large-scale graphs. At the same time, our benchmarking yields actionable guidelines on when unlearning can be a viable alternative to retraining and how to select among methods for different workloads, thereby charting a path for future research toward more practical, scalable, and trustworthy graph unlearning.
Our benchmark shows that graph unlearning can rival retraining in select scenarios, but in most cases, it remains less reliable than retraining from scratch
['graph unlearning', 'GNN', 'graph neural network']
/pdf/4b8981fbc7d8e56388f67091f4bd7a564e2eae52.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission24756/Authors']
yVhFlnwG3k
24,755
yVhFlnwG3k
PromptFE: Automated Feature Engineering by Prompting
Automated feature engineering (AutoFE) liberates data scientists from the burden of manual feature construction. The semantic information of datasets contains rich context for feature engineering but has been underutilized in many existing AutoFE works. We present PromptFE, a novel AutoFE framework that leverages large language models (LLMs) to automatically construct features in a compact string format and generate semantic explanations based on dataset descriptions. By learning the performance of constructed features in context, the LLM iteratively improves feature construction. We demonstrate through experiments on real-world datasets the superior performance of PromptFE over state-of-the-art AutoFE methods. We verify the impact of dataset semantic information and provide a comprehensive study of the LLM-based feature construction process.
A novel AutoFE framework that leverages LLMs to automatically construct features in a compact string format and generate semantic explanations utilizing dataset descriptions and performance feedback.
['Automated Feature Engineering', 'Large Language Models']
/pdf/71b136eae4be98005ad9eb56774751359e4e6876.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24755/Authors']
QjWQxmKGlL
24,754
QjWQxmKGlL
Towards Distribution-Aware Active Learning for Data-Efficient Neural Architecture Predictor
As the neural predictor (NP) provides a fast evaluation for neural architectures, it is highly sought after in neural architecture search (NAS). However, the high computational cost involved in generating training data results in its scarcity, which in turn limits the accuracy of the NP. Active learning (AL) has the potential to address this issue by prioritizing the most informative samples, yet existing methods struggle with selection bias when faced with imbalanced data distributions, often prioritizing diversity over representativeness. In this paper, we redefine the sample selection mechanism in AL and propose a Distribution-aware Active Learning framework for Neural Predictor (called DARE). The goal is to select samples that not only ensure diversity but also exhibit a high degree of generalizability, making them more representative of the underlying data distribution. Our approach first extracts architecture representations via a graph-based encoder enhanced with a consistency-driven objective. Then, a two-stage selection strategy identifies both globally diverse and locally reliable samples through progressive representation learning and refinement. For non-uniform data distributions, we further introduce an adaptive mechanism that anchors sampling to key regions with high similarity density, avoiding performance degradation caused by outliers. Extensive experiments have shown that the proposed distribution-aware active learning strategy samples a higher-quality training dataset for NPs, allowing the neural architecture predictor to achieve state-of-the-art results.
null
['neural architecture predictor', 'active learning', 'neural architecture search']
/pdf/bdfa19598232245ee202cb1fab6b66c11cb067b4.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission24754/Authors']
9JCknIXbDo
24,753
9JCknIXbDo
Graph Generation via Temporal-Aware Biased Walks
Some real networks keep a fixed structure (e.g., roads, sensors and their connections) while node or edge signals evolve over time. Existing graph generators either model topology changes (i.e., edge additions/deletions) or focus only on static graph properties (such as degree distributions or motifs), without considering how temporal signals shape the generated structure. Approaching the problem from an unconventional perspective, we introduce TANGEM, a generator for temporally attributed graphs that integrates a temporal similarity matrix into biased random walks, thereby coupling signals with structure to generate graphs that highlight patterns reflecting how nodes co-activate over time. We evaluate TANGEM using an approach that separates structural fidelity (clustering, spectral metrics) from downstream temporal consistency, allowing us to clearly isolate the impact of the topology generator itself. In time series benchmarks, TANGEM consistently outperforms strong baselines in structural metrics while remaining lightweight, learning from a single graph. These results show that adding temporal bias to structural sampling produces more realistic graphs and establishes TANGEM as a basis for future models that further integrate evolving signals and structure.
We propose a graph generation framework that uses a strategically designed Temporal-Aware Biased Random Walk sampling procedure to synthesize large, structurally faithful graphs.
['Graph Generation', 'Biased Random Walk', 'Transformer']
/pdf/153cfaeb33cf1a06991dcc028c1ece82d96aa0d1.pdf
learning on graphs and other geometries & topologies
/attachment/637fdaec7d459afb57729468df8332ae5e8d9090.zip
['ICLR.cc/2026/Conference/Submission24753/Authors']
CaqVssw7rN
24,751
CaqVssw7rN
Particles Don’t Care About Z: Towards Scaling Entropy Estimation of Unnormalized Densities
Computing the differential entropy of distributions known only up to a normalization constant is a long-standing challenge with broad theoretical and practical significance. While variational inference is the most scalable approach for density approximation from samples, its potential in settings where only the unnormalized density is available remains largely underexplored. The central difficulty lies in constructing variational distributions that simultaneously ($i$) exploit the structure of the unnormalized density, ($ii$) are expressive enough to capture complex target distributions, ($iii$) remain computationally tractable, and ($iv$) support efficient sampling. Recently, \citet{messaoud2024s} introduced P-SVGD, a particle-based variational method leveraging Stein Variational Gradient Descent dynamics, which satisfies all of these constraints and demonstrates promising results in low-dimensional setups. We show, however, that P-SVGD does not scale to high dimensions due to fundamental algorithmic flaws: ($i$) misdiagnosed sensitivity to SVGD hyperparameters, ($ii$) violation of the global invertibility assumption in entropy derivation, and ($iii$) omission of a critical trace-of-Hessian term, along with sub-optimal heuristics, including a divergence-based sampling check that induces mode collapse and loose informal bounds with no practical value. These issues severely limit both correctness and scalability. We propose MET-SVGD, a principled extension of P-SVGD that addresses these flaws, providing a general framework for SVGD hyperparameter selection with global invertibility and convergence guarantees. This enables accurate and more scalable entropy estimation in high-dimensional setups. Empirically, on entropy estimation benchmarks, MET-SVGD achieves up to a 12$\times$ and 16$\times$ accuracy improvement over, respectively, P-SVGD and the most scalable baselines from the SVGD literature. On CIFAR-10 Energy-Based image generation, it improves FID by $80.4\%$ compared to P-SVGD and achieves 64$\times$ improved stability. In Maximum-Entropy reinforcement learning, MET-SVGD yields up to $16\%$ better returns than P-SVGD. We will make our code publicly available: \url{https://tinyurl.com/2esyfx8j}.
We propose a variational method to estimate the entropy of distributions known up to a normalization constant.
['Stein Variational Gradient Descent', 'Sampling', 'Variational Inference', 'Entropy']
/pdf/48f48ad56db58f3a1456f00150313be1405c54db.pdf
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
/attachment/a5990295b4eef99e4ff350d91ab1291f4108d03b.pdf
['ICLR.cc/2026/Conference/Submission24751/Authors']
8USxc43D3I
24,750
8USxc43D3I
HardcoreLogic: Challenging Large Reasoning Models with Long-tail Logic Puzzle Games
Large Reasoning Models (LRMs) have demonstrated impressive performance on complex tasks, including logical puzzle games that require deriving solutions satisfying all constraints. However, whether they can flexibly apply appropriate rules to varying conditions, particularly when faced with non-canonical game variants, remains an open question. Existing corpora focus on popular puzzles like 9x9 Sudoku, risking overfitting to canonical formats and memorization of solution patterns, which can mask deficiencies in understanding novel rules or adapting strategies to new variants. To address this, we introduce **HardcoreLogic**, a challenging benchmark of over 5,000 puzzles across 10 games, designed to test the robustness of LRMs on the "long-tail" of logical games. HardcoreLogic systematically transforms canonical puzzles through three dimensions: **Increased Complexity (IC)**, **Uncommon Elements (UE)**, and **Unsolvable Puzzles (UP)**, reducing reliance on shortcut memorization. Evaluations on a diverse set of LRMs reveal significant performance drops, even for models achieving top scores on existing benchmarks, indicating heavy reliance on memorized stereotypes. While increased complexity is the dominant source of difficulty, models also struggle with subtle rule variations that do not necessarily increase puzzle difficulty. Our systematic error analysis on solvable and unsolvable puzzles further highlights gaps in genuine reasoning. Overall, HardcoreLogic exposes the limitations of current LRMs and establishes a benchmark for advancing high-level logical reasoning.
We propose HardcoreLogic, a logic puzzle game benchmark with non-canonical long-tail puzzles that evaluates the reasoning capability robustness of LLM/LRMs.
['long-tail benchmark', 'logic puzzle games', 'large reasoning model']
/pdf/2b1f5ef12c92ac62919baf27e8b8fe7e4df70f2c.pdf
datasets and benchmarks
/attachment/94e0448353ccae0cb8f87dd93f8b2209e63e60fc.zip
['ICLR.cc/2026/Conference/Submission24750/Authors']
Vw8iG1RcXc
24,748
Vw8iG1RcXc
AlphaCon: In-Context Adaptation for Dynamic Alpha Generation
Finding predictive signals known as alphas for stock returns is a central challenge in quantitative finance. This challenge is complicated by the non-stationary nature of financial markets. Conventional automated methods learn a single static model from historical data, and may perform poorly when market regimes shift. In this work, we reformulate this task as a problem of in-context adaptation. Our goal is to train a single universal model that can adapt its generation process to different market conditions at inference time. We introduce AlphaCon, a novel framework that uses recent data as context to guide alpha generation without requiring retraining. The model learns this adaptive capability through a specialized two-level training procedure, where an outer loop optimizes the context encoder across diverse historical market tasks, and an inner loop refines the generation agents within each task. The generation process itself is structured as a two-stage proposal and refinement loop enhanced by a learnable advice mechanism. We train the entire framework using reinforcement learning. Experiments show that AlphaCon, trained once, significantly outperforms strong baselines that require periodic retraining. This demonstrates robust performance across diverse market regimes.
We propose a framework enabling in-context adaptation to generate tailored alphas at inference time without retraining, using a two-stage proposal-refinement process trained via two-level RL
['In-Context Adaptation', 'Reinforcement Learning', 'Alpha Generation', 'Quantitative Finance', 'Large Language Model']
/pdf/67dc8a9d8a5d4d4763e1d5de57592b39bca079b6.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission24748/Authors']
e1osUquspZ
24,746
e1osUquspZ
Select and Schedule: An Efficient Hierarchical Optimizer for Blocking Job Shop Scheduling Problem with Massive Jobs
The Blocking Job Shop Scheduling Problem (BJSP) is a widely studied variant of the classic Job Shop Scheduling Problem. In BJSP, the blocking constraint requires a job to remain on its current machine until the next machine is available. This constraint substantially increases problem complexity, which in turn limits most existing scheduling algorithms to small-scale instances. However, we observe that this blocking constraint also has merit: it naturally restricts the number of jobs processed concurrently, thereby reducing the number of candidate jobs that must be considered at almost any decision point. Building on this insight, we propose a novel hierarchical optimization framework. The higher layer employs a neural network to select a small subset of jobs from a large candidate pool, while the lower layer uses a solver to schedule the selected jobs. Compared with traditional approaches that directly schedule large sets of jobs, our method achieves significantly lower computational complexity and scales almost linearly with the number of jobs. This scalability enables us to efficiently handle larger instances that were previously intractable. Experimental results demonstrate that, on large-scale benchmarks and under comparable runtime budgets, our approach improves solution quality by an average of 11\%, while continuing to deliver high-quality solutions within reasonable runtimes for even larger instances.
null
['Blocking Job Shop Scheduling Problem', 'Efficient', 'Massive Jobs']
/pdf/4d5cd8eacb6b3532668e6c60631b7cbec18726d2.pdf
optimization
/attachment/d7eff95b269837821668923a56a932a7e6c36f8b.zip
['ICLR.cc/2026/Conference/Submission24746/Authors']
BqOmsYIe7M
24,743
BqOmsYIe7M
Efficient Credal Prediction through Decalibration
A reliable representation of uncertainty is essential for the application of modern machine learning methods in safety-critical settings. In this regard, the use of credal sets (i.e., convex sets of probability distributions) has recently been proposed as a suitable approach to representing epistemic uncertainty. However, as with other approaches to epistemic uncertainty, training credal predictors is computationally complex and usually involves (re-)training an ensemble of models. The resulting computational complexity prevents their adoption for complex models such as foundation models and multi-modal systems. To address this problem, we propose an efficient method for credal prediction that is grounded in the notion of relative likelihood and inspired by techniques for the calibration of probabilistic classifiers. For each class label, our method predicts a range of plausible probabilities in the form of an interval. To produce the lower and upper bounds of these intervals, we propose a technique that we refer to as decalibration. Extensive experiments show that our method yields credal sets with strong coverage and efficiency and performs well on out-of-distribution detection tasks. Notably, we demonstrate credal prediction on models such as TabPFN and CLIP—architectures for which the construction of credal sets was previously infeasible.
Efficient credal prediction based on plausible probability intervals for computationally complex models (e.g. TabPFN, CLIP,…).
['efficient uncertainty representation', 'credal sets', 'relative likelihood']
/pdf/ce62a62480e161edd2dfdcb37b9ad209e9b2bf73.pdf
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
/attachment/16042bbe66c9b56a0b297deae58fb0e1fe8073d1.zip
['ICLR.cc/2026/Conference/Submission24743/Authors']
qUo6yqYNHQ
24,742
qUo6yqYNHQ
Soup Kitchen: Mixing Exotic Model Soups across Labels, Losses, and Data
Model soups split a model into multiple models (by fine-tuning) then merge them back into one model (by mixing) to improve accuracy, robustness, and more. How to fine-tune and mix these multiple models, or ingredients, deserves closer examination to keep turning more train-time computation into more improvement. In this work we fine-tune novel ingredients and analyze their mixtures to produce more exotic soups for visual recognition that nevertheless work. For a soup to be possible, the ingredients are known to require a common initialization for fine-tuning, but they vary in their fine-tuning configurations. In existing soups, ingredients vary in their optimization noise, hyperparameters, datasets, and output rewards or input perturbations. However, all known soups are mixed from supervised ingredients with the same loss on labeled data. We show for the first time that 1. ingredients can be fine-tuned without labels by self-supervision and vary across self-supervision hyperparameters (e.g. masking rate), 2. soups can be mixed across supervised losses and self-supervised losses (e.g. MAE and MoCoV3), 3. soups can be mixed across tasks and partitions of the training data, and 4. ingredients fine-tuned by self-supervision on the test data are possible and improve predictions. Our exotic soups provide $1–3\%$ improvements on ImageNet variants and up to $10\%$ improvement on VTAB with remarkable consistency across our novel ingredients from self-supervising, partitioning, and adapting to the test data.
We fine-tune and mix novel model soups of ingredients from supervised and self-supervised learning across different losses, data, and tasks to show they are possible and improve predictions.
['soups', 'transfer', 'generalization', 'self-supervision']
/pdf/f5c24f18e08507a24df2577e07ec3aa567e64309.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission24742/Authors']
sPRK6XefjY
24,739
sPRK6XefjY
On the Lipschitz Continuity of Set Aggregation Functions and Neural Networks for Sets
The Lipschitz constant of a neural network is connected to several important properties of the network such as its robustness and generalization. It is thus useful in many settings to estimate the Lipschitz constant of a model. Prior work has focused mainly on estimating the Lipschitz constant of multi-layer perceptrons and convolutional neural networks. Here we focus on data modeled as sets or multisets of vectors and on neural networks that can handle such data. These models typically apply some permutation invariant aggregation function, such as the sum, mean or max operator, to the input multisets to produce a single vector for each input sample. In this paper, we investigate whether these aggregation functions are Lipschitz continuous with respect to three distance functions for unordered multisets, and we compute their Lipschitz constants. In the general case, we find that each aggregation function is Lipschitz continuous with respect to only one of the three distance functions. Then, we build on these results to derive upper bounds on the Lipschitz constant of neural networks that can process multisets of vectors, while we also study their stability to perturbations and generalization under distribution shifts. To empirically verify our theoretical analysis, we conduct a series of experiments on datasets from different domains.
null
['set aggregation functions', 'Lipschitz continuity', 'stability']
/pdf/74505badd4d9e18ffe82f06b1f109662d46544e7.pdf
other topics in machine learning (i.e., none of the above)
/attachment/599ff37925ff21b6ae7603af8ac673b913de6408.zip
['ICLR.cc/2026/Conference/Submission24739/Authors']
i2fkuh0uzA
24,738
i2fkuh0uzA
Aligning Rotational and Hierarchical Geometry in Molecular Representation Learning with Product-Manifold Latent Spaces
Learning effective molecular representations requires capturing two fundamental but largely disjoint aspects of the structure of molecules: rotational symmetries in 3D conformations and the hierarchical organization of chemical scaffolds. We introduce a new paradigm of product-manifold representation learning with product-manifold message passing on $\mathrm{SO}(3) \times \mathbb{H}^d$, which couples equivariant geometric features with hyperbolic embeddings of chemical hierarchy. Our construction preserves $\mathrm{SO}(3)$-equivariance in the geometric channel and uses an $\mathrm{E}(3)$‑invariant readout for scalar properties while enabling curvature-aware aggregation in the hyperbolic channel, with cross-coupling restricted to scalar invariants to maintain symmetry. Unlike prior approaches that fuse equivariant and hierarchical encoders via concatenation or stacking, our method defines message passing directly on the product manifold, yielding a unified representation. We outline how such models could be evaluated on molecular property prediction, scaffold-split generalization, and generative design, and discuss how embeddings in $\mathrm{SO}(3) \times \mathbb{H}^d$ provide a natural surrogate space for manifold Bayesian optimization, enabling more sample-efficient discovery of high-value molecules compared to Euclidean BO. Together, these results suggest a principled path toward unifying physical symmetries and chemical hierarchies within a single geometric learning framework.
null
['molecular representation learning', 'molecular machine learning', 'equivariant graph neural networks', 'rotational symmetry', 'SO(3)-equivariance', 'hyperbolic geometry', 'product manifold', 'hierarchical chemical scaffolds', 'geometric deep learning', 'message passing', 'manifold Bayesian optimization', 'scaffold split generalization', 'molecular property prediction', 'generative molecular design', 'curvature-aware aggregation', 'symmetry-preserving learning']
/pdf/1ac4ea7fb4789555c73bae97e572ceccd498b44d.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
/attachment/245e5c9be9f0e41a6fe3658e538855b8466f72f3.zip
['ICLR.cc/2026/Conference/Submission24738/Authors']