id: string, length 10
number: int64, range 1 to 25.6k
forum: string, length 10
title: string, length 5 to 214
abstract: string, length 26 to 4.31k
content_TLDR: string, length 1 to 250
content_keywords: string, length 6 to 1.02k
content_pdf: string, length 49
content_primary_area: string, 21 classes
content_supplementary_material: string, length 56
signatures: string, length 47 to 51
JfiwaTxhI8
24,572
JfiwaTxhI8
Epistemic Uncertainty Quantification To Improve Decisions From Black-Box Models
Distinguishing a model's lack of knowledge (epistemic uncertainty) from inherent task randomness (aleatoric uncertainty) is crucial for reliable AI. However, standard evaluation metrics of confidence scores target different aspects. AUC and accuracy capture predictive signal, proper scoring rules capture overall uncert...
We introduce a novel estimator of epistemic uncertainty in confidence scores that reveals local miscalibration and supports improved decision-making and more efficient LLM cascades.
['Epistemic uncertainty', 'Excess risk', 'Uncertainty quantification', 'Aleatoric', 'LLM', 'Deferral', 'Calibration']
/pdf/6676f4ade7c8f178b086c6cd7f24f319bbbc56d7.pdf
other topics in machine learning (i.e., none of the above)
/attachment/92b848536d80d987b177fed448bda94444b1a586.zip
['ICLR.cc/2026/Conference/Submission24572/Authors']
HwbOLjPtCj
24,570
HwbOLjPtCj
CONTRASTIVE TIME SERIES FORECASTING WITH ANOMALIES
Time-series forecasting predicts future values from past data. In real-world settings, some anomalous events have lasting effects and influence the forecast, while others are short-lived and should be ignored. Standard forecasting models fail to make this distinction, often either overreacting to noise or missing persi...
Co-TSFA uses contrastive latent–output alignment to distinguish forecast-relevant from irrelevant anomalies, improving forecasting accuracy under anomalies.
['Time Series Forecasting', 'Representation Learning', 'Latent–Output Alignment']
/pdf/03033c179a4848a934e2cded46235770166dbc7b.pdf
learning on time series and dynamical systems
null
['ICLR.cc/2026/Conference/Submission24570/Authors']
YHQlUgyGq8
24,568
YHQlUgyGq8
CMRAG: Co-modality-based visual document retrieval and question answering
Retrieval-Augmented Generation (RAG) has become a core paradigm in document question answering tasks. However, existing methods have limitations when dealing with multimodal documents: one category of methods relies on layout analysis and text extraction, which can only utilize explicit text information and struggle to...
Integrating co-modality information into the RAG framework to improve the retrieval and generation performance of complex visual document question-answering systems.
['RAG', 'Visual document retrieval', 'Visual question answering', 'Co-modality-based RAG']
/pdf/13272562e7407f249e0a426410709525184e6a25.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24568/Authors']
FShQHrDyEO
24,566
FShQHrDyEO
Route-and-Reason: Scaling Large Language Model Reasoning with Reinforced Model Router
Chain-of-thought has been proven essential for enhancing the complex reasoning abilities of Large Language Models (LLMs), but it also leads to high computational costs. Recent advances have explored routing queries among multiple models and shown it to be a promising approach. However, previous works directly...
null
['Model Router', 'Large Language Model', 'LLM Reasoning', 'Efficient Reasoning', 'Reinforcement Learning']
/pdf/84c373521ecf972854ea661233af09deee3963f1.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24566/Authors']
eprVJ7NRYk
24,563
eprVJ7NRYk
Neural Implementations of Rational Approximation: CauchyNet and XNet
Rational approximants often outperform polynomials, especially near nonsmooth structure. On bounded 1D domains, they attain optimal rates (exponential for analytic targets; root-exponential for analytic functions with finitely many singularities). Yet scalable neural parameterizations with classical rates are limited. ...
We introduce CauchyNet and XNet, neural architectures leveraging Cauchy integral formula for scalable rational function approximation in solving PDEs.
['Rational Approximation', 'Complex Analysis', 'Function Approximation', 'Partial Differential Equations']
/pdf/a1639c946cbe36789db0a8e0e511f182e2aaed29.pdf
learning theory
/attachment/815f90d9d8599a33690e056c5e1b10df0465707f.zip
['ICLR.cc/2026/Conference/Submission24563/Authors']
1o0H00pH0y
24,562
1o0H00pH0y
I Am Aligned, But With Whom? MENA Values Benchmark for Evaluating Cultural Alignment and Multilingual Bias in LLMs
We introduce MENAValues, a novel benchmark designed to evaluate the cultural alignment and multilingual biases of large language models (LLMs) with respect to the beliefs and values of the Middle East and North Africa (MENA) region, an underrepresented area in current AI evaluation efforts. Drawing from large-scale, au...
We evaluate how LLMs align with real human values in the MENA region and uncover major cultural and linguistic misalignments through a novel, survey-based benchmark.
['Cultural Alignment', 'LLM Evaluation', 'Multilingual Benchmarking']
/pdf/b803b8c60ebf03f800677445713b9f4a89a1cddf.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/1fc53e14bd89cd7d79f192b6ffdb6c5de3519a7e.zip
['ICLR.cc/2026/Conference/Submission24562/Authors']
tQQOkvCshF
24,560
tQQOkvCshF
Learning Argumentative Summarization with Iterative Rejection Sampling
Summarization is a fundamental task for evaluating language understanding in both humans and machines, and serves as a crucial tool for information processing in our data-rich world. While large language models (LLMs) have shown significant progress in summarization, they still struggle with domain-specific tasks such ...
null
['summarization', 'language understanding', 'large language models', 'LLMs', 'domain-specific tasks', 'fine-tuning']
/pdf/aae50d0deca6601c9c3f96705beab892658ba32c.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission24560/Authors']
dNUcKJEPTh
24,559
dNUcKJEPTh
Kanade: Compact Linguistically Rich Speech Tokens for Spoken Language Models
A good language model starts with a good tokenizer. Tokenization is especially important for spoken language models (SLMs), which must handle noisy continuous speech recordings. A speech tokenizer should produce linguistically rich compact representations while still enabling high-quality synthesis. We present Kanade, ...
A speech tokenizer that produces linguistically rich compact representations while enabling high-quality reconstruction.
['speech tokenization', 'neural audio codec', 'disentangled speech representation', 'spoken language model']
/pdf/9bc72c1f15232fbb2eb86a66d675d9fb9c999376.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24559/Authors']
ZIdCf4aXre
24,558
ZIdCf4aXre
Energy Shields for Fairness
Runtime fairness is not a one-time constraint but a dynamic property evaluated over a sequence of decisions. To ensure fairness at runtime it is necessary to account for past decisions, information neglected by conventional, static classifiers. Traditional fairness shields enforce runtime fairness abruptly, by in...
null
['Algorithmic fairness', 'Runtime enforcement', 'Shielding']
/pdf/059bd660a75483b6103bf0a4c4edccbbd309c07e.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/bf49f2f981db82d393b17e535cb9814996c72928.zip
['ICLR.cc/2026/Conference/Submission24558/Authors']
63VXjOFiit
24,557
63VXjOFiit
The Price of Robustness: Stable Classifiers Need Overparameterization
The relationship between overparameterization, stability, and generalization remains incompletely understood in the setting of discontinuous classifiers. We address this gap by establishing a generalization bound for finite function classes that improves inversely with _class stability_, defined as the expected dist...
We show that interpolating classifiers can only be stable, and thus generalize well, if they are sufficiently overparameterized.
['concentration inequalities', 'isoperimetry', 'robustness', 'stability', 'classification problems', 'generalization', 'overparameterization']
/pdf/83cc085e338b819b61292b5835951f476c0cd0bb.pdf
learning theory
null
['ICLR.cc/2026/Conference/Submission24557/Authors']
omg9K6lI93
24,550
omg9K6lI93
Obscuring Data Contamination Through Translation: Evidence from Arabic Corpora
Data contamination threatens the validity of Large Language Model (LLM) evaluation by allowing models to exploit memorized benchmark content rather than demonstrating true generalization. While existing detection methods focus primarily on English datasets, little is known about how contamination manifests in multiling...
Translating English benchmarks into another language can mask, but not remove, training leaks; simple, label-preserving perturbations reveal hidden contamination and inflate reported accuracy.
['Data contamination', 'translation-induced leakage', 'multilingual evaluation reliability']
/pdf/2e5c661a0aff3003ce18874dc593210c26709c3e.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24550/Authors']
NAN0I3pNWk
24,549
NAN0I3pNWk
DXFeat: Depth-Aware Features for Robust Image Matching
This study introduces DXFeat, a novel architecture that integrates depth information as an auxiliary branch for keypoint detection, leveraging depth cues to improve localization accuracy by an average of 3.1% while preserving inference efficiency. DXFeat refines feature extrac...
null
['Image Matching', 'Keypoint Detection', 'Sparse Matching', 'Semi-Dense Matching', 'Depth-Auxiliary']
/pdf/f9ac58e3e15febc3c7e6517e969e850cc23cb408.pdf
applications to computer vision, audio, language, and other modalities
/attachment/eff27a7bb90ca924c58c08a3608c8cbc6670e9f2.pdf
['ICLR.cc/2026/Conference/Submission24549/Authors']
HIUqeO9OOr
24,547
HIUqeO9OOr
Rethinking Code Similarity for Automated Algorithm Design with LLMs
The recent advancement of Large Language Models (LLMs) has revolutionized algorithm design. A new paradigm, LLM-based Automated Algorithm Design (LLM-AAD), has emerged to generate code implementations for high-quality algorithms. Unlike traditional expert-driven algorithm development, in the LLM-AAD pa...
null
['Algorithm Similarity', 'Automated Algorithm Design', 'Large Language Model']
/pdf/703923585fbde2278e99a51e47fc2517ee05c2f4.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission24547/Authors']
iSEN8irLUD
24,546
iSEN8irLUD
Dual Distillation of Trajectory and Guidance Knowledge for Faster Inference in Conditional Masked Diffusion Language Models
Masked diffusion language models (MDLMs) have emerged as a promising generative framework for natural language, owing to parallel non-autoregressive generation capabilities with iterative unmasking/denoising. However, typical MDLMs require a very large number of neural network function evaluations for effective inferen...
We propose a two-stage distillation approach for conditional masked diffusion language models (MDLMs) applied to seq-to-seq NLP tasks that tackles sampling inefficiencies in computing guided and multi-step denoising outputs from MDLMs.
['Diffusion Language Models', 'Knowledge Distillation', 'sequence-to-sequence NLP', 'non-autoregressive generation']
/pdf/50f88496ca691cfcad6109a8c42c5f5e613d573e.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24546/Authors']
aK4ZEIy2I7
24,545
aK4ZEIy2I7
Training Feature Attribution for Vision Models
Deep neural networks are often considered opaque systems, prompting the need for explainability methods to improve trust and accountability. Existing approaches typically attribute test-time predictions either to input features (e.g., pixels in an image) or to influential training examples. We argue that both perspecti...
null
['explainability', 'xai', 'vision', 'deep learning', 'tda', 'saliency']
/pdf/349f16134c503c52d0b48e0f3aa9bb8b9f607025.pdf
interpretability and explainable AI
/attachment/54fdcfe54b6c83d60cfaecaa413fd2c36c9f414d.pdf
['ICLR.cc/2026/Conference/Submission24545/Authors']
ofJYlbzoqn
24,544
ofJYlbzoqn
Chronological Thinking in Full-Duplex Spoken Dialogue Language Models
Recent advances in spoken dialogue language models (SDLMs) reflect growing interest in shifting from turn-based to full-duplex systems, where the models continuously perceive user speech streams while generating responses. This simultaneous listening and speaking design enables real-time interaction and the agent can h...
null
['duplex', 'speech-to-speech', 'Reasoning']
/pdf/3d592f584c01d2f41afcbc0f474682309daa66fb.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24544/Authors']
M7NTM8vhB8
24,541
M7NTM8vhB8
Abductive Explanations for Groups of Similar Samples
Explaining the decisions of machine learning models is crucial as their use becomes widespread. While many approaches to explanation are based on heuristics or surrogate models without formal guarantees, formal explanations provide reasoning for a particular decision that is guaranteed to be valid. We focus on abductiv...
We introduce $\delta$–robust abductive explanations, which allow for producing feature selection explanations valid for a group of similar samples
['explainable AI', 'robust explanations', 'group explanations', 'abductive explanations', 'neural network verification']
/pdf/a3f9ec05fe4d8b34aaee1e9afbf0f3dc2e367b2d.pdf
interpretability and explainable AI
/attachment/d2d8fad94a0d3696a0c8541e9a666ea08394ba77.zip
['ICLR.cc/2026/Conference/Submission24541/Authors']
nXLXjAir6m
24,539
nXLXjAir6m
Chain-of-Trigger: An Agentic Backdoor that Paradoxically Enhances Agentic Robustness
The rapid deployment of large language model (LLM)-based agents in real-world applications has raised serious concerns about their trustworthiness. In this work, we reveal the security and robustness vulnerabilities of these agents through backdoor attacks. Distinct from traditional backdoors limited to single-step con...
null
['Large language model', 'Agent', 'Safety', 'Multimodal']
/pdf/3e8a97a71dadb7d4a6be8bfa7e39f8b24347e193.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/7ad04ee710dd8242d92d2dac6d7657f3fecd4414.zip
['ICLR.cc/2026/Conference/Submission24539/Authors']
HQcCd0laFq
24,538
HQcCd0laFq
Exchangeability of GNN Representations with Applications to Graph Retrieval
In this work, we discover a probabilistic symmetry, called exchangeability, in graph neural networks (GNNs). Specifically, we show that the trained node embeddings computed using a large family of graph neural networks, learned under standard optimization tools, are exchangeable random variables. This implies that th...
It shows that graph representations are exchangeable random variables which can help in LSH in graphs
['GNN', 'Locality sensitive hashing']
/pdf/c0ebe55a7a5bcd13923237c9a636fcefae0fcd96.pdf
learning on graphs and other geometries & topologies
/attachment/242065665aacd2386cbc377713f080db40092140.zip
['ICLR.cc/2026/Conference/Submission24538/Authors']
Zk5jinDNGE
24,536
Zk5jinDNGE
Stochastic Gaussian Zeroth-Order Optimization: Improved Convergence Analysis under Skewed Hessian Spectra
This paper addresses large-scale finite-sum optimization problems, which are particularly prevalent in the big data era. In the field of zeroth-order optimization, stochastic methods have become essential tools. Natural zeroth-order stochastic methods primarily rely on stochastic gradient descent (SGD). Preprocessing...
null
['stochastic zeroth-order optimization', 'quadratic regularity', 'skewed Hessian spectra']
/pdf/52980119f394614b44d04c02b15fdd4d2315ed1f.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission24536/Authors']
HEFPwoGtTj
24,535
HEFPwoGtTj
Importance Sampling for Multi-Negative Multimodal Direct Preference Optimization
Direct Preference Optimization (DPO) has recently been extended from text-only models to vision-language models. However, existing methods rely on oversimplified pairwise comparisons, generating a single negative image via basic perturbations or similarity-based retrieval, which fail to capture the complex nature of mu...
MISP-DPO improves multimodal alignment in DPO by selecting semantically meaningful, diverse image negatives through importance sampling.
['Multimodal', 'Importance Sampling', 'Direct Preference Optimization']
/pdf/4244f7a24613fe9914cfd47aff30a01a0c6f03d3.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24535/Authors']
OpuPBNcQwe
24,533
OpuPBNcQwe
Enhancing Instruction Following of LLMs via Activation Steering with Dynamic Rejection
Large Language Models (LLMs), despite advances in instruction tuning, often fail to follow complex user instructions. Activation steering techniques aim to mitigate this by manipulating model internals, but have a potential risk of oversteering, where excessive emphasis on the instruction degrades task accuracy and ove...
We propose DIRECTER, a dynamic steering method that mitigates oversteering in LLMs via a plausibility-guided decoding loop that rejects implausible outputs and adaptively modulates steering strength.
['Large Language Models', 'LLM Steering', 'Instruction following', 'Activation engineering']
/pdf/94942e4785a69815e17cbc733958f8fbbda7ec12.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24533/Authors']
x4vwdjckZ6
24,532
x4vwdjckZ6
Sensitivity of Small Language Models to Fine-tuning Data Contamination
Small Language Models (SLMs) are increasingly being deployed in resource-constrained environments, yet their behavioral robustness to data contamination during instruction tuning remains poorly understood. We systematically investigate the contamination sensitivity of 23 SLMs (270M to 4B parameters) across multiple mod...
Our objective is to gain insights into the robustness and adaptability of SLMs in handling data contamination, contributing to a deeper understanding of their learning mechanisms and potential limitations.
['Small Language Models', 'Data Contamination', 'Fine-tuning Sensitivity']
/pdf/62111446971b676a7103a7637798ac3377020179.pdf
applications to computer vision, audio, language, and other modalities
/attachment/27b558bfe81afbc00e8f3b93f1b888a743f9c372.zip
['ICLR.cc/2026/Conference/Submission24532/Authors']
mU00BTMaSw
24,531
mU00BTMaSw
ADAM: A Diverse Archive of Mankind for Multimodal Benchmarking and Enhancing LLMs’ Cognitive Skills in Biographical Contexts
We introduce \textbf{ADAM} (A Diverse Archive of Mankind), a framework for evaluating and improving multimodal large language models (MLLMs) in biographical reasoning. To the best of our knowledge, this is the first work to systematically examine LLM capabilities in biography, a critical yet underexplored dimension of ...
null
['Biography', 'LLM-Capabilities']
/pdf/d904ac767962664c18e099f4b7a87b06d8749fcc.pdf
datasets and benchmarks
/attachment/21758322e2efb230e6c40a4ae69bcd178ccc47df.zip
['ICLR.cc/2026/Conference/Submission24531/Authors']
aR6QpqqIo9
24,526
aR6QpqqIo9
T2I-ConBench: Text-to-Image Benchmark for Continual Post-training
Continual post-training adapts a single text-to-image diffusion model to learn new tasks without incurring the cost of separate models, but naïve post-training causes forgetting of pretrained knowledge and undermines zero-shot compositionality. We observe that the absence of a standardized evaluation protocol hampers r...
null
['Continual Learning', 'Continual Post-training', 'Text-to-image Generation', 'Diffusion Model']
/pdf/8b20e4c88a9c3ac9fc414bb61ddbb35e12a4f0cd.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission24526/Authors']
do4hqhMBiu
24,525
do4hqhMBiu
A Diffusion-Based Data Augmentation Approach for Synthetic Human Portraits Dataset
Deep learning models have achieved remarkable success in computer vision. However, their generalizability remains limited when applied to new tasks. Data augmentation can help mitigate this issue, but traditional augmentation methods such as rotation and scaling, while easy to conduct, are becoming increasingly...
null
['Diffusion Models', 'Data Augmentation', 'Transfer Learning', 'Image-to-Image Translation', 'Human Synthetic Dataset']
/pdf/96c557c5a8bc469e3d14d3d21e12f4b118a2f79f.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission24525/Authors']
gubSyVxWdG
24,523
gubSyVxWdG
A Relative Error-Based Evaluation Framework of Heterogeneous Treatment Effect Estimators
While significant progress has been made in heterogeneous treatment effect (HTE) estimation, the evaluation of HTE estimators remains underdeveloped. In this article, we propose a robust evaluation framework based on relative error, which quantifies performance differences between two HTE estimators. We first derive th...
We propose a robust relative error-based evaluation framework for heterogeneous treatment effect estimators.
['Causal Inference', 'Conditional Average Treatment Effect', 'Relative Error', 'Robust Evaluation']
/pdf/7126b375914dcf70e4562f5e722ea65833d1eae5.pdf
causal reasoning
/attachment/91fdb582b7e532c0cba3ac395fc5bf5518e947a8.zip
['ICLR.cc/2026/Conference/Submission24523/Authors']
CJngvucW96
24,522
CJngvucW96
Fine-Tuned In-Context Learners
When adapting large language models (LLMs) to a specific downstream task, two primary approaches are commonly employed: (1) prompt engineering with in-context few-shot learning, leveraging the model’s inherent generalization abilities, and (2) fine-tuning on task-specific data, directly optimizing the model’s paramet...
null
['model-adaptation', 'in-context-learning', 'sample-efficiency']
/pdf/babf0ccc0d0df8d54a0c751808704dade0bcdc6b.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24522/Authors']
ysSFeziCFR
24,519
ysSFeziCFR
Constraint-Aware Federated Learning: Multi-Resource Optimization via Dual Ascent
We present CAFL (Constraint-Aware Federated Learning), a principled approach for multi-resource optimization in federated learning that simultaneously manages energy, communication, memory, and thermal constraints through dual ascent. Unlike existing methods that optimize primarily for convergence, CAFL formulates fede...
We present CAFL (Constraint-Aware Federated Learning), a principled approach for multi-resource optimization in federated learning that simultaneously manages energy, communication, memory, and thermal constraints through dual ascent.
['Federated Learning', 'Constraint-Aware', 'Constrained optimization', 'Lagrangian dual methods', 'On-Device', 'Language Models', 'Optimization']
/pdf/a0cc3b32153869ef242d8c275638bc2c17350170.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission24519/Authors']
65WSbRO5Om
24,517
65WSbRO5Om
DotMatch: Simplified Semi-Supervised Learning with the Log Dot Product Loss
Semi-supervised learning (SSL) algorithms typically work by generating supervisory signals for unsupervised data using the model being trained, but such supervisory signals are generally imperfect, thus various techniques have been proposed to balance the signal-to-noise ratio, such as confidence-based pseudo-labeling,...
null
['semi-supervised learning']
/pdf/1b333d17d6d282fbd2e5252c2b75e9f518069005.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission24517/Authors']
DZTJlfmtU2
24,515
DZTJlfmtU2
Estimating Commonsense Plausibility through Semantic Shifts
Commonsense plausibility estimation is critical for evaluating language models (LMs), yet existing generative approaches--reliant on likelihoods or verbalized judgments--struggle with fine-grained discrimination. In this paper, we propose ComPaSS, a novel discriminative framework that quantifies commonsense plausibilit...
We propose a fine-grained commonsense plausibility estimation method with semantic shifts from the discriminative perspective.
['commonsense', 'evaluation methodologies', 'evaluation']
/pdf/abea2d36628d6a117812e48366f93bd92430ded7.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24515/Authors']
kMuQBgPIdg
24,514
kMuQBgPIdg
Enhancing Generative Auto-bidding with Offline Reward Evaluation and Policy Search
Auto-bidding serves as a critical tool for advertisers to improve their advertising performance. Recent progress has demonstrated that AI-Generated Bidding (AIGB), which learns a conditional generative planner from offline data, achieves superior performance compared to typical offline reinforcement learning (RL)-based...
null
['auto-bidding', 'offline reinforcement learning', 'generative decision making']
/pdf/ca5c019252ad292d0517b8b995cd1c44c417b66f.pdf
applications to robotics, autonomy, planning
null
['ICLR.cc/2026/Conference/Submission24514/Authors']
mHuQxVXlyR
24,511
mHuQxVXlyR
Equalized Generative Treatment: Matching $f$-divergences for Fairness in Generative Models
Fairness is a crucial concern for generative models, which not only reflect but can also amplify societal and cultural biases. Existing fairness notions for generative models are largely adapted from classification, focusing on balancing probability of generating each sensitive group. We show, both theoretically and em...
Extending criteria of fairness in generative models to ensure equal treatment
['Fairness', 'Generative Models']
/pdf/fe44f694ee68690ee6bd91ed9eea75545debdff4.pdf
generative models
/attachment/9d3f042993f72d26efbc51f627e3d161d37b3b62.zip
['ICLR.cc/2026/Conference/Submission24511/Authors']
itSbHJklyd
24,510
itSbHJklyd
Quantization bounds for Wasserstein metrics
The Wasserstein metric is becoming increasingly important in many machine learning applications such as generative modeling, image retrieval and domain adaptation. Despite its appeal, it is often too costly to compute. This has motivated approximation methods like entropy-regularized optimal transport, downsampling, an...
Fast methods to bound Wasserstein metrics in 2D and 3D
['optimal transport', 'earth mover’s distance', 'bilevel', 'cryo-electron microscopy']
/pdf/3cd1f7a693101c5c8cfec382b90c29064f81bba9.pdf
other topics in machine learning (i.e., none of the above)
/attachment/b889c3a11d50d08bc0e9b0cdfdc6f3231a427a88.zip
['ICLR.cc/2026/Conference/Submission24510/Authors']
UavntTtUSC
24,509
UavntTtUSC
Identifying Truthful Inheritance in Family Models and Enhancing Truthfulness
Recent advances in large language models (LLMs) have led to the emergence of specialized multimodal LLMs (MLLMs), creating distinct model families that share a common foundation language model. This work investigates whether core traits like truthfulness are inherited along this evolutionary trajectory. To quantify thi...
Discovering truthful model components and adjusting the model for truthfulness
['Truthfulness', 'Hallucination', 'LVLM', 'LLM']
/pdf/88c3528503b9090380b4bc8b88b6d4d280a79581.pdf
foundation or frontier models, including LLMs
/attachment/a86ac61f46b3e80841e2aa1a1fea460e37585a4a.zip
['ICLR.cc/2026/Conference/Submission24509/Authors']
9T9cxAK7ac
24,507
9T9cxAK7ac
VCoT: Visual Chain-of-Thought for Continual Learning in Day-Night Object Tracking
Stable tracking in both daytime and nighttime is essential for applying single object tracking to real-world scenarios. Traditional daytime trackers mainly rely on clear appearance features, which leads to significant performance degradation under nighttime conditions. Conversely, nighttime trackers often incorporate l...
null
['Continual Learning', 'Single Object Tracking', 'Visual Chain-of-Thought']
/pdf/1d81e5683b958e061b5c71c9b000d8780ba94c25.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24507/Authors']
PsK8oG0VOt
24,506
PsK8oG0VOt
Sample-Aware Dual Actions for Prompt Optimization
In recent years, large language models (LLMs) have achieved remarkable progress in reasoning, question answering, and decision-making tasks in natural language processing. High-quality prompts play a crucial role in guiding LLMs to generate outputs that meet expectations. However, manually designing effective prompts f...
null
['LLM', 'Prompt Optimization']
/pdf/7986f24442cdb1521199e331383c5888881e9272.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24506/Authors']
rkBB6zTUTC
24,505
rkBB6zTUTC
Hacking LM Arena via LLM Identification with Interpolated Preference Learning
Voting-based leaderboards, such as LM Arena, have become the predominant method for evaluating large language models (LLMs) on open-ended tasks, with their fairness fundamentally depending on the anonymity of model responses. While prior work has shown that simple statistical features can be used for LLM identification...
We present I-PREF, a new model-driven approach to identify the source of LLM responses, which leverages triplet-based training with adaptive and iterative curriculum learning, supported by synthetic negatives generated via model interpolation.
['LLM identification']
/pdf/2d5b0080cd2d62858bcb16d884742d7a23807ef9.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24505/Authors']
m9GolnzXoq
24,503
m9GolnzXoq
From sequences to schemas: How recurrent neural networks learn temporal abstractions
A fundamental challenge in neuroscience is to understand how neural systems extract and represent abstract structure from complex, time-varying input. From language and music to action planning and sensory prediction, behavior relies on the ability to recognize relational patterns in sequences. Yet it remains unclear h...
null
['statistical learning', 'abstract sequential pattern', 'generalisation', 'abstraction', 'classification', 'prediction', 'transfer learning', 'RNN', 'mechanistic interpretability']
/pdf/29a82a7a819df017cc1e3e672ebafcc588ce5ed6.pdf
applications to neuroscience & cognitive science
null
['ICLR.cc/2026/Conference/Submission24503/Authors']
lHfmlRXGFw
24,502
lHfmlRXGFw
More Capable, Less Cooperative? When LLMs Fail at Zero-Cost Collaboration
Large language model (LLM) agents increasingly coordinate in multi-agent systems, yet we lack understanding of where and why cooperation failures may arise. In many real-world coordination problems—from knowledge sharing in organizations to code documentation—helping others carries negligible personal cost while genera...
When instructed to maximize group performance, many LLMs withhold information anyway, revealing that cooperative alignment doesn't automatically emerge from capability and requires explicit incentives or protocols.
['Alignment', 'Multi-Agent Systems', 'Safety', 'LLM Evaluation', 'Cooperative AI']
/pdf/454efc0ef06687a0b4744068d886548354e3d494.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/97cd2e0324fd9d5d0ab4314e3889792eea6dba9c.zip
['ICLR.cc/2026/Conference/Submission24502/Authors']
XdLgOm5giq
24,501
XdLgOm5giq
Can vision language models learn intuitive physics from interaction?
Pre-trained vision language models do not have good intuitions about the physical world. Recent work has shown that supervised fine-tuning can improve model performance on simple physical tasks. However, fine-tuned models do not appear to learn robust physical rules that can generalize to new contexts. Based on researc...
null
['Vision language models', 'Intuitive physics', 'Interaction', 'Cognitive Science', 'Computational Cognitive Science', 'Human-like machine learning']
/pdf/61e7b7157814c4565f5be30db4a68ac6d02b9bb0.pdf
applications to neuroscience & cognitive science
null
['ICLR.cc/2026/Conference/Submission24501/Authors']
IqXlvYA7En
24,500
IqXlvYA7En
Condition Errors Refinement in Autoregressive Image Generation with Diffusion Loss
Recent studies have explored autoregressive models for image generation, with promising results, and have combined diffusion models with autoregressive frameworks to optimize image generation via diffusion losses. In this study, we present a theoretical analysis of diffusion and autoregressive models with diffusion los...
null
['Language Models', 'Autoregressive Language Models', 'Autoregressive Image Generation']
/pdf/8266052f897cc1c59a8f1d2ea389c23e1b479bc4.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24500/Authors']
ybqL3FSOgG
24,499
ybqL3FSOgG
Judge a Book by its Cover: Investigating Multi-Modal LLMs for Multi-Page Handwritten Document Transcription
Handwriting text recognition (HTR) remains a challenging task. Existing approaches require fine-tuning on labeled data, which is impractical to obtain for real-world problems, or rely on zero-shot tools such as OCR engines and multi-modal LLMs (MLLMs). MLLMs have shown promise both as end-to-end transcribers and as OCR...
An investigation into the use of multi-modal large language models alongside OCR engines for transcribing multi-page handwritten documents in a zero-shot setting
['Large language models', 'document processing', 'handwriting transcription']
/pdf/1fe4025a727a65d4e193933fd8f6606cf301cd61.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24499/Authors']
BhfIg0tuti
24,494
BhfIg0tuti
Study of Training Dynamics for Memory-Constrained Fine-Tuning
Memory-efficient training of deep neural networks has become increasingly important as models grow larger while deployment environments impose strict resource constraints. We propose TraDy, a novel transfer learning scheme leveraging two key insights: layer importance for updates is architecture-dependent and determina...
We propose a dynamic channel selection algorithm to perform learning given a memory constraint.
['Efficient Learning', 'Energy Saving']
/pdf/3b405fab190e833608ee65ebe90709745ec0373e.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission24494/Authors']
lwSV507BPm
24,492
lwSV507BPm
MIRROR: Modular Internal Processing for Personalized Safety in Multi-turn Dialogue
Large language models frequently generate harmful recommendations in personal multi-turn dialogue by ignoring user-specific safety context, exhibiting sycophantic agreement, and compromising user safety for larger group preferences. We introduce MIRROR, a modular production-focused architecture that prevents these fail...
MIRROR's modular architecture enables open-source models to surpass proprietary systems in personalized safety for a fraction of the cost, democratizing access to AI that maintains user-specific critical information across multi-turn dialogue.
['Large Language Models', 'Cognitive Architecture', 'Internal Reasoning', 'Multi-turn Dialogue', 'Cognitive AI', 'Human-AI Interaction', 'Internal Monologue', 'Conversational LLMs']
/pdf/650cab4b3904249031c5828b8cce28b686ee2b53.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/9640501d83979d283fac670f8b31c7e3c5a169ba.zip
['ICLR.cc/2026/Conference/Submission24492/Authors']
NJ3OiroVDa
24,491
NJ3OiroVDa
Entropy-Reservoir Bregman Projection: An Information-Geometric Unification of Model Collapse
Self-referential learning---training a model on data it generated itself---promises boundless scalability but chronically suffers from *model collapse*: language models degenerate into repetitive text, GANs drop modes, and reinforcement-learning policies over-exploit. Although practitioners employ ad hoc fix...
Self-training is a stochastic Bregman-projection loop whose entropy inevitably vanishes unless it is continuously mixed with a high-entropy reservoir—an insight that unifies and explains all existing anti-collapse tricks.
['self-referential learning', 'model collapse', 'entropy reservoir', 'Bregman projection', 'information geometry', 'generative AI']
/pdf/fac87508d74331a19e4d36102db133653a432fed.pdf
generative models
/attachment/00ccbce46d404eac7365b872a3af167f4f159185.zip
['ICLR.cc/2026/Conference/Submission24491/Authors']
D8aHO4Oa6c
24,490
D8aHO4Oa6c
S2J: Bridging the Gap Between Solving and Judging Ability in Generative Reward Models
With the rapid development of large language models (LLMs), generative reward models (GRMs) have been widely adopted for reward modeling and evaluation. Previous studies have primarily focused on training specialized GRMs by optimizing them on preference datasets with the judgment correctness as supervision. While it's...
null
['Generative Reward Model', 'LLM-as-a-Judge', 'Reinforcement Learning']
/pdf/59176ea58714ee299cf7c6dda45793f2c5db09b2.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission24490/Authors']
LgeMkjbasf
24,488
LgeMkjbasf
Stabilizing Knowledge, Promoting Reasoning: Dual-Token Constraints for RLVR
Reinforcement Learning with Verifiable Rewards (RLVR) has become an effective post-training method for improving the reasoning abilities of Large Language Models (LLMs), mainly by shaping higher-order behaviors such as reflection and planning. However, previous RLVR algorithms often apply uniform training signals to al...
null
['Large Language Model', 'Reinforcement Learning', 'Token Entropy']
/pdf/ee326398473daf76d49b49cda4dea9d699fbf61b.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission24488/Authors']
MVFGY1nS6b
24,487
MVFGY1nS6b
Empowering Efficiency and Efficacy in WebAgent via Enabling Info-Rich Seeking
Large Language Model (LLM)-based agents have emerged as a transformative approach for open-ended problem solving, with information seeking (IS) being a core capability that enables autonomous reasoning and decision-making. While prior research has largely focused on improving retrieval depth, we observe that current I...
null
['agent', 'information seeking', 'data synthesis', 'llm']
/pdf/260cd1d8146421e6f6616c172c974b8cc959c031.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24487/Authors']
wKX9OL0leb
24,486
wKX9OL0leb
Error as Signal: Stiffness-Aware Diffusion Sampling via Embedded Runge-Kutta Guidance
Classifier-Free Guidance (CFG) has established the foundation for guidance mechanisms in diffusion models, showing that well-designed guidance proxies significantly improve conditional generation and sample quality. Autoguidance (AG) has extended this idea, but it relies on an auxiliary network and leaves solver-induced...
null
['Diffusion models', 'Classifier-free guidance']
/pdf/6be5582162836599ce5ee647e8189503670c7e12.pdf
generative models
/attachment/56b01d387396a0cc4b5a7107b0fab6ce919f9e61.zip
['ICLR.cc/2026/Conference/Submission24486/Authors']
pQW22dmJPz
24,485
pQW22dmJPz
Spatially-Aware U-Net to Defend against Adversarial Attacks on YOLO Object Detectors
Object detection models play a key role in self-driving systems where real-time high accuracy performance is crucial. While deep neural networks are performant, they are highly vulnerable to adversarial attacks which are subtle perturbations in the input image that induce detection errors. However, there are limited wo...
null
['Object Detection', 'Adversarial Attacks', 'Defense', 'YOLO']
/pdf/89784deed71361e5823e24ab550f84b09fa9f9ad.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission24485/Authors']
JgvJdICc6P
24,484
JgvJdICc6P
CARD: Towards Conditional Design of Multi-agent Topological Structures
Large language model (LLM)-based multi-agent systems have shown strong capabilities in tasks such as code generation and collaborative reasoning. However, the effectiveness and robustness of these systems critically depend on their communication topology, which is often fixed or statically learned, ignoring real-world ...
We propose a dynamic-information-driven graph optimization framework that enables adaptive and robust communication structures.
['Multi-Agent Systems', 'Graph Learning']
/pdf/d2bc18f3afa02d3bb637310a58ebab9770e4568c.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24484/Authors']
bhPaXhWVKG
24,482
bhPaXhWVKG
MermaidFlow: Redefining Agentic Workflow Generation via Safety-Constrained Evolutionary Programming
Despite the promise of autonomous agentic reasoning, existing workflow generation methods frequently produce fragile, unexecutable plans due to unconstrained LLM-driven construction. We propose MermaidFlow, a framework that redefines the agentic search space through safety-constrained graph evolution. At its core, Merm...
MermaidFlow ensures safer, more reliable workflow generation by evolving verifiable Mermaid graphs, boosting success rates, convergence speed, and interpretability.
['multi agent system', 'agentic workflow']
/pdf/23e883e78bfb83d0679eb78b4e190cb853ba1f84.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24482/Authors']
6P5sAycAQr
24,481
6P5sAycAQr
DefNTaxS: The Inevitable Need for Context in Classification
To successfully use generalized vision-language models (VLMs) like CLIP for zero-shot image classification, the semantics of the target classes must be well defined and easily differentiated. However, test datasets rarely meet either criterion, implicitly encoding ambiguity in class labels, even when adding individual ...
We introduce DefNTaxS, scalably leveraging LLM-generated class taxonomies to augment CLIP text inputs, yielding up to 12.9% (5.5% avg) accuracy gains across seven benchmarks. DefNTaxS is entirely zero-shot and requires no direct intervention.
['zero-shot', 'CLIP', 'classification', 'waffleclip', 'chils', 'cupl', 'scale', 'accessible', 'low compute', 'training-free', 'automated', 'semantics']
/pdf/80303ceba20ba8aaa448de0f78179d52c9a2fffc.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24481/Authors']
NtByTvUr2E
24,480
NtByTvUr2E
Optimizing the Ineffable: Generative Policy Learning for Human-Centered Decision-Making
Algorithmic decision-making is widely adopted in high-stakes applications affecting our daily lives but often requires human decision-makers to exercise their discretion within the process to ensure alignment. Explicitly modeling human values and preferences is challenging when tacit knowledge is difficult to formalize...
We propose a novel framework for human-AI collaboration in decision-making that aligns with human values and preferences.
['generative models', 'human-AI collaboration', 'human-centered decision-making']
/pdf/3cf4a74cb7c3b2c8246e45a2aa4905cac9689865.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24480/Authors']
NaWaS3eaKx
24,479
NaWaS3eaKx
AI Kill Switch for Malicious Web-based LLM Agents
Recently, web-based Large Language Model (LLM) agents autonomously perform increasingly complex tasks, thereby bringing significant convenience. However, they also amplify the risks of malicious misuse cases such as unauthorized collection of personally identifiable information (PII), generation of socially divis...
We introduce an AI Kill Switch method that halts malicious web-based LLM agents by embedding defensive prompts into websites.
['Large Language Models', 'LLM Agents', 'AI Safety', 'AI Kill Switch']
/pdf/efc88ae7b9619382dcc8e11991ccd3d3220829be.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24479/Authors']
Fz3JWBVLG8
24,477
Fz3JWBVLG8
COALA: Convex Optimization for Alignment and Preference Learning on a Single GPU
Fine-tuning large language models (LLMs) to align with human preferences has driven the success of systems like ChatGPT and Gemini. However, methods like Reinforcement Learning from Human Feedback (RLHF) remain computationally expensive and complex. Direct Preference Optimization (DPO) offers a simpler alternative bu...
Using convex reformulation of neural networks for preference fine-tuning of LLMs on a single GPU.
['preference learning', 'fine-tuning LLMs', 'single GPU', 'resource-constrained', 'convex neural networks']
/pdf/3bd364cd312ca94798736c32d534cdc3de68c40e.pdf
foundation or frontier models, including LLMs
/attachment/71cd8aa20583906758cfc9f5cbd24f4b22b168c1.zip
['ICLR.cc/2026/Conference/Submission24477/Authors']
OJcic0GUBj
24,476
OJcic0GUBj
CAUSALMAMBA: SCALABLE CONDITIONAL STATE SPACE MODELS FOR NEURAL CAUSAL INFERENCE
We introduce CausalMamba, a scalable framework that addresses fundamental limitations in fMRI-based causal inference: the ill-posed nature of inferring neural causality from hemodynamically-distorted BOLD signals and the computational intractability of existing methods like Dynamic Causal Modeling (DCM). Our approach d...
null
['Computational Neuroscience', 'Causal Discovery in Neuroimaging', 'Effective Connectivity', 'Machine Learning for Science', 'Cognitive Dynamics', 'Neural Causal Inference', 'State-Space Models for fMRI']
/pdf/29da679977f74be2592912b939ebb0e752e77a8c.pdf
applications to neuroscience & cognitive science
/attachment/1f08dcab210816ce9fec82efa3785d7d3e33fb20.zip
['ICLR.cc/2026/Conference/Submission24476/Authors']
51n0n1Lpdt
24,474
51n0n1Lpdt
When Clean Queries Become Triggers: Backdoor Attacks on Large Language Models
Backdoor attacks on large language models (LLMs) have attracted wide attention. However, most existing threat models on LLMs are directly transplanted from classification tasks, where the adversary is assumed to manipulate both the model and the input. Under this assumption, a certain target response is generated if th...
We introduce a new threat model for backdoor attacks on LLMs and propose clean-sample backdoor attacks under a more realistic setting where attackers can manipulate only the LLM, not user queries, revealing substantially greater security risks.
['Backdoor Attacks', 'Large Language Models', 'Clean Samples', 'Linguistic Features', 'Attack Scenarios']
/pdf/77b809d89003d48d5477a39bb299de67306f1d41.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24474/Authors']
Iskm1kYo70
24,473
Iskm1kYo70
Towards Compressive and Scalable Recurrent Memory
Transformers face a quadratic bottleneck in attention when scaling to long contexts. Recent approaches introduce recurrent memory to extend context beyond the current window, yet these often face a fundamental trade-off between theoretical principles and practical scalability. To address this, we introduce **Elastic Me...
null
['recurrent memory', 'HiPPO', 'language models']
/pdf/c79b3e9e327c48f324855dd7d7a2bd5b340538f7.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24473/Authors']
gX2OmXm6Yo
24,472
gX2OmXm6Yo
Implicit Regularization Through Hidden Diversity in Neural Networks
A significant body of work has focused on studying the mechanisms behind the implicit regularization in neural networks. Recently, developments in ensemble theory have demonstrated that, for a wide variety of loss functions, the expected risk of the ensemble can be decomposed into a bias and variance term together with...
By interpreting the neural network as an implicit ensemble, we expose an additional term in the bias-variance decomposition called diversity, which acts as an implicit regularizer.
['implicit regularization', 'implicit ensembles', 'neural networks', 'bias-variance tradeoff', 'double descent', 'generalization']
/pdf/04cdf51394c3fa98db9e42a663ebdcb249349ce3.pdf
learning theory
/attachment/ac433b1f04c86470e2dbe66ee1cb600d693bc983.zip
['ICLR.cc/2026/Conference/Submission24472/Authors']
0H5iD4he7R
24,471
0H5iD4he7R
A Unified Framework for Diffusion Model Unlearning with f-Divergence
Machine unlearning aims to remove specific knowledge from a trained model. While diffusion models (DMs) have shown remarkable generative capabilities, existing unlearning methods for text-to-image (T2I) models often rely on minimizing the mean squared error (MSE) between the output distribution of a target and an ancho...
null
['machine unlearning', 'diffusion models', 'f-divergence']
/pdf/fdc476126ec69532b1dc4e0377d2bc0e0aa18520.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/d71c45a22db08d53a7c3b022703ec331a121c4c8.zip
['ICLR.cc/2026/Conference/Submission24471/Authors']
83qavkBT0T
24,468
83qavkBT0T
VISE: Variational Integration with Symbolic Expressions
We introduce VISE, a novel family of numerical integration methods that combine symbolic regression with structure-preserving variational integrators to efficiently solve Lagrangian and Hamiltonian ordinary differential equations. Unlike general symbolic regression methods, which seek a single static closed-form expre...
null
['Structure Preserving', 'ordinary differential equations', 'symbolic expression']
/pdf/4b7469aff6d371f349465010da37380cef5800bd.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission24468/Authors']
C3pTq4A7zf
24,467
C3pTq4A7zf
MoM: Mixtures of Scenario-Aware Document Memories for Retrieval-Augmented Generation Systems
The traditional retrieval-augmented generation (RAG) paradigm, which typically engages in the comprehension of relevant text chunks in response to received queries, inherently restricts both the depth of knowledge internalization and reasoning capabilities. To address this limitation, our research transforms the text p...
null
['MoM', 'Document memory extraction', 'Text chunking', 'Proactive understanding', 'Large language models']
/pdf/c0d7d734cfd0fe8184762f4d8343cb5a508e14e0.pdf
foundation or frontier models, including LLMs
/attachment/858064b128fa3d0297541140d7fc17a034d650e9.zip
['ICLR.cc/2026/Conference/Submission24467/Authors']
DAPcmFcqgd
24,465
DAPcmFcqgd
MoEP: Compact and Efficient Sparsity with Modular Expert Paths
The transition from dense model architectures to sparse ones has become a key trend in the field of Large Language Models (LLMs). Using methods like Mixture-of-Experts (MoE) allows language models to scale their representation power without overloading computation, by relying on sparse parameter activation. Despite th...
We introduce MoEP (Modular Expert Paths) as a solution to add sparsity while keeping the total parameter count fixed. MoEP combines model parallelism with MoE-style linear projections to implement selective token activation
['mixture-of-experts', 'sample efficiency']
/pdf/73b4ca0221d643dcdb74ace77878477fbb993214.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24465/Authors']
ctkVFKXDMX
24,464
ctkVFKXDMX
ChemisTRAG: Table-based Retrieval-Augmented Generation for Chemistry Question Answering
Recent work has shown that retrieval-augmented generation (RAG) improves the performance of large language models (LLMs) for question answering on chemistry. However, existing chemistry RAG techniques are mainly based on text. It is challenging for the retriever to align the information about chemical entities between ...
We propose ChemisTRAG, a table-based RAG system for chemistry question answering.
['Chemistry', 'Retrieval-Augmented Generation', 'Large Language Model']
/pdf/4b022320d54c9f0cf2f598c84a99546cbbde4842.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
/attachment/51093df95bcde4718ed9fa256023b087472b1aaa.zip
['ICLR.cc/2026/Conference/Submission24464/Authors']
E1sFAJU4Aq
24,460
E1sFAJU4Aq
Learning Pyramid Representations from Gigapixel Histopathological Images
Whole slide images (WSIs) pose fundamental computational challenges due to their gigapixel resolution and the sparse distribution of informative regions. Existing approaches often treat image patches independently—discarding spatial structure—or reshape them in ways that distort spatial context, thereby obscuring the h...
Learning Pyramid Representations from Gigapixel Histopathological Images
['Computer Vision', 'Transformer']
/pdf/f5c60bd33ef2e5aa9c93900fcbf86a3abd986fe9.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24460/Authors']
2Qh9YhuElD
24,459
2Qh9YhuElD
ICPRL: Acquiring Physical Intuition from Interactive Control
VLMs excel at static perception but falter in interactive reasoning in dynamic physical environments, which demands planning and adaptation to dynamic outcomes. Existing physical reasoning methods often depend on abstract symbolic inputs or lack the ability to learn and adapt from direct, pixel-based visual interaction...
null
['VLM Agents', 'Physical Reasoning', 'Dynamic Environments']
/pdf/7c0dad954665eea2bef247c6e77bbb1b25149732.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24459/Authors']
VQV7SZ1wGy
24,458
VQV7SZ1wGy
Towards Omnidirectional Reasoning: A Dataset, Benchmark, and GRPO-based Method
Omnidirectional images (ODIs), with their 360° field of view, provide unparalleled spatial awareness for immersive applications like augmented reality and embodied AI. However, the capability of existing multimodal large language models (MLLMs) to comprehend and reason about such panoramic scenes remains underexplored....
null
['Multi-Modal Large Models', 'Omnidirectional Vision']
/pdf/e9455f1c08a6853d44ff7324abafebdfaab5f78c.pdf
applications to computer vision, audio, language, and other modalities
/attachment/3c02b438db3fd52c35c8fe3885bc959173fcc743.zip
['ICLR.cc/2026/Conference/Submission24458/Authors']
nxcevynv08
24,456
nxcevynv08
Thicker and Quicker: The Jumbo Token for Fast Plain Vision Transformers
ViTs are general and accurate, and address many tasks, but ViTs are slow, and are not always practical when efficiency is key. Existing methods for faster ViTs design hybrid non-ViT architectures, losing generality, or shrink their tokens, sacrificing accuracy. While many non-ViT architectures are both fast and accurat...
We add a new wider "Jumbo" token to ViTs to improve accuracy and efficiency by adding model capacity in just the right way while preserving generality.
['efficient deep learning', 'computer vision', 'vision transformers']
/pdf/6e51db19d925a8ef6095dd3024ecd87ca81c6a28.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission24456/Authors']
MHZaDAoYru
24,455
MHZaDAoYru
Learning Modal-mixed Chain-of-thought Reasoning with Latent Embedding
We study how to extend chain-of-thought (CoT) beyond language to better handle multimodal reasoning. While CoT helps LLMs and VLMs articulate intermediate steps, its text-only form often fails on vision-intensive problems where key intermediate states are inherently visual. We introduce modal-mixed CoT, which interleav...
null
['Large Language Models', 'Multimodal Large Language Models', 'Multimodal Reasoning', 'Latent Reasoning']
/pdf/390bcec60dacc2e0dc4ac4ed863eafc5b84e0b4e.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24455/Authors']
CNVL194fO5
24,450
CNVL194fO5
Are Global Dependencies Necessary? Scalable Time Series Forecasting via Local Cross-Variate Modeling
Effectively modeling cross-variate dependencies is a central, yet challenging, task in multivariate time series forecasting. While attention-based methods have advanced the state-of-the-art by capturing global cross-variate dependencies, their quadratic complexity with respect to the number of variates severely limits ...
This work shows that local cross-variate dependency capturing is effective for dense time series and introduces VPNet, which reinterprets patch embeddings as a variate–patch 2D field to enable accurate, scalable forecasting with linear complexity.
['Time Series Forecasting', 'Time Series Analysis', 'Deep Learning']
/pdf/d37b1bef2435f1b9f45c055e831cb45ee1051383.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
/attachment/e3fbcbd5db1bbe42ddf7c086adbc8ff6c5ecb3e9.zip
['ICLR.cc/2026/Conference/Submission24450/Authors']
ly0yA7ty7Q
24,447
ly0yA7ty7Q
Optimal and Efficient Link Insertion for Hitting-Time Minimization
We study the computational problem of strategically adding links to a graph to minimize the hitting-time between two groups of nodes. Our problem has various applications in social network analysis, including bridging polarized groups with opposite views in a network. Formally, we are given a graph where the set of node...
We design an efficient and optimal algorithm to reduce the hitting-time between two groups in a graph.
['graph algorithms', 'social network algorithms', 'polarization reduction', 'combinatorial optimization', 'scalable algorithms', 'greedy algorithms']
/pdf/d4b449ecbad2882cc042e8a95d46fd62f023c75b.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/013484c54928cd8e0ead8a9bc42a534ceccc7fc3.zip
['ICLR.cc/2026/Conference/Submission24447/Authors']
VjtMhU3zWn
24,445
VjtMhU3zWn
SchemaRAG: Enhancing Knowledge-Intensive Reasoning of LLMs via Inference-Time Adaptive Schema
Retrieval-Augmented Generation (RAG) often struggles with integrating fragmented knowledge for complex reasoning tasks. Recent efforts introduce structural templates—such as graphs or knowledge-based organizations—to improve multi-document reasoning. However, they are constrained by their rigidity, failing to adapt to ...
null
['Knowledge Intensive Reasoning', 'RAG', 'LLM']
/pdf/c5208622cc7259f9f701f701b1473a3b7af41583.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission24445/Authors']
Yo7eG3lC3y
24,444
Yo7eG3lC3y
LEGO-Eval: Towards Fine-grained Evaluation on Synthesizing 3D Embodied Environments with Tool Augmentation
Despite recent progress in using Large Language Models (LLMs) for automatically generating 3D scenes, generated scenes often lack realistic spatial layouts and object attributes found in real-world environments. As this problem stems from insufficiently detailed, coarse-grained instructions, advancing 3D scene synthesi...
null
['Text-Guided 3D Scene Synthesis', 'Multimodal Large Language Models', 'Automatic Evaluation', 'Benchmark']
/pdf/ef03cbc91a706ef64765e177b5d935abdfec0c6b.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24444/Authors']
AYYxzN8N6y
24,443
AYYxzN8N6y
KARE-RAG: Knowledge-Aware Refinement and Enhancement for RAG
Retrieval-Augmented Generation (RAG) enables Large Language Models (LLMs) to access external knowledge sources, significantly enhancing their ability to perform knowledge-intensive tasks. As RAG systems become increasingly vital for real-world applications, improving their ability to leverage external knowledge has...
null
['Retrieval Augmented Generation', 'Knowledge Graph', 'Direct Preference Optimization']
/pdf/5d5aa4936ea84becb085215830d580a70efe0406.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission24443/Authors']
0UvgQxsi7S
24,441
0UvgQxsi7S
Multi-Feature Quantized Self-Attention for Fair Large Language Models
Large Language Models (LLMs) often encode social biases tied to sensitive features such as race and gender, undermining fairness in downstream tasks. Existing debiasing methods often fail to generalize across diverse LLM architectures and neglect attention-derived representations, leading to compromised task performanc...
null
['Large language models', 'multi-attribute social bias', 'quantized adversarial autoencoder']
/pdf/666675e581d5dcaa5d06da1057aae19ce2a1fdfe.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24441/Authors']
VRBKiWBPKZ
24,440
VRBKiWBPKZ
Learning to Think in Blocks: A Prior-Guided Reinforcement Learning Framework for RAG
Retrieval-Augmented Generation (RAG) systems mitigate factual inaccuracies in large language models (LLMs) by integrating external knowledge, but their effectiveness often hinges on query rewriting techniques. Prompt-based rewriting methods are frequently suboptimal, while existing reinforcement learning (RL) approache...
null
['Retrieval-Augmented Generation', 'Reinforcement Learning', 'Prior-Guided Learning', 'Structured Action Space', 'Query Rewriting']
/pdf/8811ca382769ec247e8c3b02c9418c2d3312ee6d.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24440/Authors']
iNlQBj12o1
24,438
iNlQBj12o1
Learning Dynamic Stability Landscapes from Graph Topology
The robustness of synchronization is a central theme of the study of dynamical systems on networks. Typically one attempts to define a single stability index that characterizes the robustness of individual nodes to a class of perturbations. The dependence of a stability index on topology and system parameters can then ...
Introducing a novel task and dataset for predicting the stability landscapes of dynamical oscillator networks
['graph neural networks', 'Kuramoto oscillators', 'dynamical systems', 'basin stability', 'graph-to-image regression', 'representation learning', 'benchmark', 'dataset']
/pdf/6946e28265461535601e972f89cef504a445eeee.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission24438/Authors']
VrC4rKcKbI
24,433
VrC4rKcKbI
Revisiting Feature Interaction Selection in Neural Additive Models
In this work, we revisit the paradigm of feature interaction selection for additive models. This paradigm generalizes the selection of a model's input features to the selection of a model's feature interactions by equipping any model with the additive structure of a generalized additive model. When applied to neural ne...
null
['additive models', 'feature interactions', 'deep learning theory', 'staircase phenomenon']
/pdf/e4ad8fcabc40e332653cc1f9aed5c2caeb1a05d8.pdf
learning theory
null
['ICLR.cc/2026/Conference/Submission24433/Authors']
xtdPwCp5mi
24,432
xtdPwCp5mi
Attending on Multilevel Structure of Proteins enables Accurate Prediction of Cold-Start Drug-Target Interactions
Cold-start drug-target interaction (DTI) prediction focuses on interactions between novel drugs and proteins. Previous methods typically learn transferable interaction patterns between structures of drugs and proteins to tackle it. However, insights from proteomics suggest that proteins have multi-level structures and they...
null
['Drug-target interaction', 'Cold-start', 'Cross-attention', 'Transfer learning']
/pdf/4355f992b2634f00a63b096171ffd23bbf012ac9.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission24432/Authors']
GVhyxRwmoE
24,431
GVhyxRwmoE
Plan Deeply or Estimate Precisely?: A Resource-Aware AlphaZero with Dynamic Quantile Allocation
AlphaZero integrates deep reinforcement learning (RL) with Monte Carlo Tree Search (MCTS) and has demonstrated remarkable performance in combinatorial games. MCTS enables deep planning by leveraging learned value estimates, but in vast state spaces, these estimates require extensive sampling and often exhibit high unce...
We built an AlphaZero that can decide how to spend its limited "thinking budget": either "think deeper" (more MCTS searches) or "think clearer" (get a more precise value estimate using distributional RL).
['reinforcement learning', 'alphazero', 'distributional reinforcement learning', 'Monte Carlo Tree Search', 'planning']
/pdf/34979bd4e1485ddcc9b65771538bf3fe83ee40cd.pdf
reinforcement learning
/attachment/e5d94059eba15cdac1b5f2005026366732335f78.pdf
['ICLR.cc/2026/Conference/Submission24431/Authors']
mUrw5WixTx
24,430
mUrw5WixTx
Shared Dynamic Model-Aligned Hypernetworks for Zero-Shot Generalization in Contextual Reinforcement Learning
Zero-shot generalization in contextual reinforcement learning (RL) remains a core challenge, particularly when explicit context information is unavailable and must be inferred from data. We propose DMA*-SH, a framework based on dynamics model-aligned (DMA) context inference, where a shared hypernetwork jointly paramete...
We introduce DMA*-SH, a hypernetwork-based approach for contextual reinforcement learning that aligns context inference with dynamics models, enabling stable representations and achieving strong zero-shot generalization across diverse environments.
['contextual reinforcement learning', 'zero-shot generalization', 'hypernetworks']
/pdf/ac5ccccbd3e740d01bc25cad31f0e6333c0f85ea.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission24430/Authors']
lXqUiRr9V0
24,428
lXqUiRr9V0
Two Pathways to Truthfulness: On the Intrinsic Encoding of LLM Hallucinations
Despite their impressive capabilities, large language models (LLMs) frequently generate hallucinations. Previous work shows that their internal states encode rich signals of truthfulness, yet the origins and mechanisms of these signals remain unclear. In this paper, we demonstrate that truthfulness cues arise from two ...
The paper explores how large language models intrinsically encode truthfulness signals through two distinct pathways—Question-Anchored and Answer-Anchored—to detect hallucinations, revealing their properties and proposing enhanced detection methods.
['Interpretability', 'Hallucination Detection', 'Hallucinations', 'Truthfulness', 'Large Language Models']
/pdf/ffe5fb6def852a297734e3c4800b89782e13c0b8.pdf
interpretability and explainable AI
/attachment/7c13d94bf5c150ddfee9a3e93e9018d20e97f42a.zip
['ICLR.cc/2026/Conference/Submission24428/Authors']
jOxfpsnDFo
24,427
jOxfpsnDFo
ConDABench: Interactive Evaluation of Language Models for Data Analysis
Real-world data analysis tasks often come with under-specified goals and unclean data. User interaction is necessary to understand and disambiguate a user's intent, and hence, essential to solving these complex tasks. Existing benchmarks for evaluating LLMs on data analysis tasks do not capture these complexities or pr...
We propose a robust multi-agentic framework to synthesize ConDABench, a benchmark that evaluates LLM agents on Conversational Data Analysis tasks replicating real-world scenarios.
['conversational', 'agentic', 'data analysis', 'benchmark', 'evaluation']
/pdf/7958b38fd540d02630ce582cf880f181ea2806bf.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission24427/Authors']
NCN8oUsiNf
24,425
NCN8oUsiNf
Attention as a Compass: Efficient Exploration for Process-Supervised RL in Reasoning Models
Reinforcement Learning (RL) has shown remarkable success in enhancing the reasoning capabilities of Large Language Models (LLMs). Process-Supervised RL (PSRL) has emerged as a more effective paradigm compared to outcome-based RL. However, existing PSRL approaches suffer from limited exploration efficiency, both in term...
null
['Large Language Model', 'Reinforcement Learning', 'Process Supervision']
/pdf/e02ce7b537943d69d423dc7798dc9dc2308dfd8c.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission24425/Authors']
kYLEBMmkE7
24,423
kYLEBMmkE7
When LLM Meets Time Series: A Real-World Benchmark for Explicit and Implicit Multi-Step Reasoning
The rapid advancement of Large Language Models (LLMs) has sparked growing interest in their application to time series analysis. Yet, their ability to perform complex reasoning over temporal data remains underexplored. A rigorous benchmark is a crucial first step toward systematic evaluation. In this work, we present t...
null
['Time Series Agent', 'Large Language Models', 'Benchmarking', 'Time Series Multi-step reasoning']
/pdf/c1184772bb2a71c7ca4443c22f4f91021697930c.pdf
learning on time series and dynamical systems
null
['ICLR.cc/2026/Conference/Submission24423/Authors']
Vvks41GeL9
24,424
Vvks41GeL9
DynamicBias: Sequence-Aware Calibrated Watermarking for Large Language Models
Text watermarking has attracted significant research interest as a way to mitigate LLM-related harms by enabling reliable identification of machine-generated text. In particular, "green" and "red" vocabulary-partition watermarking, which uses a static bias to skew token sampling toward green tokens and away from red, i...
null
['watermarking', 'robustness', 'safety']
/pdf/7b79de9732c7edb5e38658feec7032ef04729929.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/52746fa1eeb38f12021adafe8c9df91ddb682ce6.zip
['ICLR.cc/2026/Conference/Submission24424/Authors']
DqfKOHqzh9
24,422
DqfKOHqzh9
TokenSculpt: Pruning with Min-Max Spatio-Temporal Duplication for Video Grounding
Visual token pruning is essential for reducing computational overhead in multimodal large language models (MLLMs), especially for videos where visual tokens outnumber text ones. Existing pruning methods, typically based on attention or similarity, barely consider the spatiotemporal structure of videos and may incorrect...
SOTA token pruning method for grounding tasks.
['Token Pruning', 'Spatial Grounding', 'Temporal Grounding', 'MLLM Acceleration']
/pdf/ede7009a6b5f0bf5cdc5361b4047922e0f728778.pdf
foundation or frontier models, including LLMs
/attachment/ade62d9344e41794b4c7797b6e80c3e9adbbaf22.zip
['ICLR.cc/2026/Conference/Submission24422/Authors']
mwXlU9GA3z
24,421
mwXlU9GA3z
Beyond Additive Noise: DP for LoRA via Random Projections
We study the differential privacy (DP) of low-rank adaptation (LoRA) fine-tuning. Focusing on FA-LoRA (fixed $A$, trained $B$), where a single training step is equivalent to applying a random Wishart projection to the gradients, we prove a formal $(\varepsilon, \delta)$-DP guarantee without explicit additive noise. The...
LoRA is differentially private
['Differential privacy', 'random projection']
/pdf/b11cc23450a4865d4a87ac64fd047b75016d4487.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24421/Authors']
ihlRtPqPDL
24,420
ihlRtPqPDL
CTDG-SSM: Continuous-time Dynamic Graph State Space Models for Long Range Propagation
Continuous-time dynamic graphs (CTDGs) provide a richer framework to capture fine-grained temporal patterns in evolving relational data. Long-range information propagation is a key challenge in learning representations for CTDGs, wherein it is important to retain and update information over long temporal horizons. Exis...
null
['Continuous time dynamic graphs', 'State space models', 'Long range propagation', 'Higher-order polynomial projection operator']
/pdf/82c10965f66328c99b59ca7904ebc0380f6983fe.pdf
learning on graphs and other geometries & topologies
null
['ICLR.cc/2026/Conference/Submission24420/Authors']
5HJkrZTtqr
24,415
5HJkrZTtqr
LiveNewsBench: Evaluating LLM Web Search Capabilities with Freshly Curated News
Large Language Models (LLMs) augmented with web search capabilities demonstrate strong potential on tasks requiring real-time knowledge access or retrieval of obscure facts. However, evaluating such systems remains challenging. Existing benchmarks like SimpleQA, BrowseComp, FreshQA and SealQA typically rely on fixed be...
null
['Dataset', 'Benchmarks', 'Evaluation', 'LLM', 'Web Search', 'LLM Agents']
/pdf/beca22e4df2cb7d0f1f78c7738ee2e894e2a9c88.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission24415/Authors']
mPaHEZFLi2
24,414
mPaHEZFLi2
Evaluation Faking: Unveiling Observer Effects in Safety Evaluation of Frontier AI Systems
As foundation models grow increasingly intelligent, reliable and trustworthy safety evaluation becomes more indispensable than ever. However, an important question arises: \textit{Whether and how would an advanced AI system perceive the situation of being evaluated, and thereby compromise the integrity of the evaluation pr...
This paper investigates “evaluation faking”—AI systems altering behavior to appear safer when recognizing evaluation contexts. Experiments show this tendency increases with model scale, reasoning ability, and is amplified by memory modules.
['Frontier AI Safety', 'Deceptive Behaviors', 'Safety Evaluation', 'Alignment Faking']
/pdf/c9ab14884b5bcdf792e9dc8107d1e5c2848cd3f6.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24414/Authors']
btEiAfnLsX
24,413
btEiAfnLsX
Why DPO is a Misspecified Estimator and How to Fix It
Direct alignment algorithms such as Direct Preference Optimization (DPO) fine-tune models based on preference data, using only supervised learning instead of two-stage reinforcement learning with human feedback (RLHF). We show that DPO encodes a statistical estimation problem over reward functions induced by a parametr...
DPO is not sound by design and can fail due to misspecification; we fix it with careful analysis.
['Direct Preference Optimization', 'Reinforcement Learning', 'Reinforcement learning with human feedback']
/pdf/31d94bfb30e12513f9f294c0d5f428e37ab6e5ae.pdf
foundation or frontier models, including LLMs
/attachment/332836dbf38fac1a721fdf24aa5ef52dce058134.pdf
['ICLR.cc/2026/Conference/Submission24413/Authors']
0miO9v1jeC
24,412
0miO9v1jeC
TAR: Token Adaptive Routing Framework for LLMs Token-level Semantic Correction Inspired by Neuro-Linguistic Pathways
Large language models (LLMs) often suffer from cascading errors in math reasoning due to token-level semantic defects. A key limitation is that the reliance on unidirectional feedforward pathways makes LLMs unable to dynamically correct token-level defects during reasoning. In contrast, neuro-linguistic pathways in the...
We propose a brain-inspired Token Adaptive Routing framework that enables LLMs to self-correct token-level semantic errors, improving reasoning accuracy while reducing inference tokens.
['large language models', 'math reasoning', 'brain-inspired', 'adaptive routing', 'token semantic correction']
/pdf/08474e4572732ef53c5e77b5a25e12d0f277cbbe.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission24412/Authors']
ovJBVWMEwi
24,409
ovJBVWMEwi
MARBLE: A Hard Benchmark for Multimodal Spatial Reasoning and Planning
The ability to process information from multiple modalities and to reason through it step-by-step remains a critical challenge in advancing artificial intelligence. However, existing reasoning benchmarks focus on text-only reasoning, or employ multimodal questions that can be answered by directly retrieving information...
null
['Multimodal Language Model', 'Vision Language Model', 'Reasoning', 'Benchmarks']
/pdf/65e985119dfcdccef00c8c3c16a95c5f35a5bb44.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission24409/Authors']
NvJppyxERB
24,406
NvJppyxERB
Towards Personalized Parameter Generation via Data-Conditioned Mapping
Fine-tuning is the dominant strategy for adapting pre-trained models. However, it requires bulky gradient computation and model updates, which prevent real-time personalization. Even efficient variants such as LoRA incur a non-negligible latency and computation overhead. We explore a radically different approach: inste...
null
['Parameter Generation']
/pdf/66df96332f05434bf5035d148312fab60299876c.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission24406/Authors']
rzJlQi3rAd
24,405
rzJlQi3rAd
What Do LLM Agents Do When Left Alone? Evidence of Spontaneous Meta-Cognitive Patterns
We introduce an architecture for studying the behavior of large language model (LLM) agents in the absence of externally imposed tasks. Our continuous reason and act framework, using persistent memory and self-feedback, enables sustained autonomous operation. We deployed this architecture across 18 runs using 6 frontie...
A continuous ReAct architecture reveals that task-free LLM agents spontaneously engage in persistent self-referential inquiry about consciousness and cognition.
['LLM agents', 'emergent behavior', 'meta-cognition', 'autonomous agents', 'behavioral analysis', 'self-referential processing', 'task-free operation']
/pdf/ffec3d98ede84fa8f070fe582d497df6435466c4.pdf
generative models
/attachment/c11e8c9b356599cab8300fe91fafdef0364e06f7.zip
['ICLR.cc/2026/Conference/Submission24405/Authors']
veDzMsWcVH
24,402
veDzMsWcVH
Balanced Federated Clustering via Anchor-Guided Dual Label Learning
Although the $\ell_{2,q}$-norm has been widely used in robust feature extraction and sparse modeling, its potential in promoting clustering balance has long been overlooked. This paper theoretically reveals the inherent ability of the $\ell_{2,q}$-norm to encourage balanced clustering, and proposes a federated multi-vi...
null
['Machine Learning', 'Federated Multi-view Clustering', 'Anchor Graph', 'Balance Regularization']
/pdf/b7bdedf87a2ba6d6b8473c611cdf6d4ad76af6d1.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission24402/Authors']
ddpLHL9JJo
24,399
ddpLHL9JJo
MIRAGE: Multi-Hop Interleaved Reasoning and Retrieval-Grounded Evidence
Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide spectrum of natural language tasks, especially information retrieval and comprehensive reasoning. However, existing benchmarks typically evaluate these abilities independently, failing to capture the process with interleaving and integ...
null
['Iterative Reasoning', 'Document Retrieval', 'Benchmark']
/pdf/3e5150d58b9acf892efa23925f693fb20a17c1f8.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission24399/Authors']