id stringlengths 10 10 | url stringlengths 42 42 | title stringlengths 5 214 | average_rating float64 -1 8.5 | average_confidence float64 -1 5 | ratings listlengths 0 9 | confidences listlengths 0 9 | reviewers_num int64 0 9 | keywords listlengths 1 42 | abstract stringlengths 26 4.31k | tldr stringlengths 0 250 | primary_area stringclasses 21 values | pdf_url stringlengths 40 40 | submission_date timestamp[s]date 2025-09-01 19:59:51 2025-09-20 20:18:08 | total_reviews int64 0 18 | reviews listlengths 0 9 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
vxkzW4ljeX | https://openreview.net/forum?id=vxkzW4ljeX | A universal compression theory: Lottery ticket hypothesis and superpolynomial scaling laws | 5.5 | 3 | [4, 6, 8, 4] | [3, 3, 2, 4] | 4 | ["Neural scaling law", "model compression", "lottery ticket hypothesis", "deep learning theory"] | When training large-scale models, the performance typically scales with the number of parameters and the dataset size according to a slow power law. A fundamental theoretical and practical question is whether comparable performance can be achieved with significantly smaller models and substantially less data. In this w... | We prove that permutation symmetry enables polylogarithmic compression of neural networks and datasets, thus establishing the dynamical lottery ticket hypothesis and boosting neural scaling laws | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=vxkzW4ljeX | 2025-09-19T05:07:02 | 4 | [{"id": "vvIZ8RIzRX", "forum": "vxkzW4ljeX", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission14172/Reviewer_YxjE", "reviewer_name": "Reviewer_YxjE", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This ... |
fwCoRzh0Dw | https://openreview.net/forum?id=fwCoRzh0Dw | InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU | 4 | 3 | [6, 4, 2] | [2, 3, 4] | 3 | ["Sparse Attention", "Efficient Attention", "Context Extrapolation", "KV Cache Offloading"] | In modern large language models (LLMs), handling very long context lengths presents significant challenges as it causes slower inference speeds and increased memory costs. Additionally, most existing pre-trained LLMs fail to generalize beyond their original training sequence lengths. To enable efficient and practical l... | InfiniteHiP extends the servable model context length beyond VRAM and pretrained model context limitation. | infrastructure, software libraries, hardware, systems, etc. | https://openreview.net/pdf?id=fwCoRzh0Dw | 2025-09-17T09:29:23 | 3 | [{"id": "1VQ0xZHvLL", "forum": "fwCoRzh0Dw", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission8178/Reviewer_SD7R", "reviewer_name": "Reviewer_SD7R", "rating": 6, "confidence": 2, "soundness": 3, "contribution": 3, "presentation": 2, "summary": "Infini... |
5rjSeZCM6l | https://openreview.net/forum?id=5rjSeZCM6l | FedSumUp:Secure Federated Learning Without Client-Side Training for Resource-Constrained Edge Devices | 3.5 | 3.25 | [4, 2, 4, 4] | [3, 3, 3, 4] | 4 | ["Federated Learning", "Data Condensation", "Server-Side Optimization", "Privacy-Preserving", "Edge Devices", "Variational Autoencoder"] | Horizontal Federated Learning (HFL) enables multiple clients with private data to collaboratively train a global model without sharing their local data. As a research branch of HFL, Federated Data Condensation with Distribution Matching (FDCDM) introduces a novel collaborative paradigm where clients upload small synthe... | | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=5rjSeZCM6l | 2025-09-20T12:40:47 | 4 | [{"id": "GcXZTsH254", "forum": "5rjSeZCM6l", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission23402/Reviewer_VTkQ", "reviewer_name": "Reviewer_VTkQ", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ... |
qN0Il4dtGg | https://openreview.net/forum?id=qN0Il4dtGg | HARMAP: Hierarchical Atomic Representation for Materials Property Prediction | 3.5 | 3 | [2, 2, 4, 6] | [4, 3, 3, 2] | 4 | ["AI for Materials", "Atomic Representation", "Material Property Prediction"] | Accurate prediction of material properties is a key step toward rapid materials discovery and cost-effective exploration of vast chemical spaces. Recent advances in machine learning (ML) offer a data-driven alternative that enables fast and scalable property estimation. However, prevailing graph-based pipelines use one... | A Hierarchical Atomic Representation for Materials Property prediction. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=qN0Il4dtGg | 2025-09-10T21:25:01 | 4 | [{"id": "Kr0LTtqs14", "forum": "qN0Il4dtGg", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission3745/Reviewer_CDzq", "reviewer_name": "Reviewer_CDzq", "rating": 2, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This p... |
0hLuQAT3fV | https://openreview.net/forum?id=0hLuQAT3fV | Universal Image Immunization against Diffusion-based Image Editing via Semantic Injection | 5 | 3.5 | [4, 4, 4, 8] | [3, 4, 4, 3] | 4 | ["Diffusion Model", "AI Safety", "Image Immunization", "Adversarial Attack", "Image Editing"] | Recent advances in diffusion models have enabled powerful image editing capabilities guided by natural language prompts, unlocking new creative possibilities. However, they introduce significant ethical and legal risks, such as deepfakes and unauthorized use of copyrighted visual content. To address these risks, image ... | | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=0hLuQAT3fV | 2025-09-12T19:50:27 | 4 | [{"id": "Cp6SNqZd08", "forum": "0hLuQAT3fV", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission4421/Reviewer_nGCo", "reviewer_name": "Reviewer_nGCo", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "This p... |
3sJ4zKToW6 | https://openreview.net/forum?id=3sJ4zKToW6 | Consistent Low-Rank Approximation | 6.666667 | 3.333333 | [4, 8, 8] | [3, 2, 5] | 3 | ["low-rank approximation", "online algorithms", "consistency", "recourse"] | We introduce and study the problem of consistent low-rank approximation, in which rows of an input matrix $\mathbf{A}\in\mathbb{R}^{n\times d}$ arrive sequentially and the goal is to provide a sequence of subspaces that well-approximate the optimal rank-$k$ approximation to the submatrix $\mathbf{A}^{(t)}$ that has arr... | | optimization | https://openreview.net/pdf?id=3sJ4zKToW6 | 2025-09-19T05:52:21 | 3 | [{"id": "G9M6d2dYmo", "forum": "3sJ4zKToW6", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission14297/Reviewer_ex4U", "reviewer_name": "Reviewer_ex4U", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "This ... |
OyIJvyyB3R | https://openreview.net/forum?id=OyIJvyyB3R | LLM2Fx-Tools: Tool Calling for Music Post-Production | 5.5 | 3.5 | [4, 8, 6, 4] | [3, 3, 4, 4] | 4 | ["Music Post Production", "Fx Chain Generation", "Tool Calling"] | This paper introduces LLM2Fx-Tools, a multimodal tool-calling framework that generates executable sequences of audio effects (Fx-chain) for music post-production. LLM2Fx-Tools uses a large language model (LLM) to understand audio inputs, select audio effects types, determine their order, and estimate parameters, guided... | LLM2Fx-Tools is a framework that uses a multimodal LLM to automatically generate executable audio effect chains (as tools), chain-of-thought reasoning, and natural language responses. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=OyIJvyyB3R | 2025-09-19T13:42:11 | 4 | [{"id": "B7fQjc5nan", "forum": "OyIJvyyB3R", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission16141/Reviewer_Rbd9", "reviewer_name": "Reviewer_Rbd9", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This ... |
rcsZNV9A5j | https://openreview.net/forum?id=rcsZNV9A5j | Flash Multi-Head Feed-Forward Network | 5 | 3.75 | [6, 4, 4, 6] | [3, 4, 4, 4] | 4 | ["Machine Learning Systems", "Machine Learning", "Software-Hardware Codesign", "Natural Language Processing", "Transformer", "Deep Learning", "Model Architecture"] | We explore Multi-Head FFN (MH-FFN) as a replacement of FFN in the Transformer architecture, motivated by the structural similarity between single-head attention and FFN. While multi-head mechanisms enhance expressivity in attention, naively applying them to FFNs faces two challenges: memory consumption scaling with the... | We propose a novel multi-head FFN that achieves better transformer model performance while using 3-5x less memory and running 1.00-1.08x faster than standard SwiGLU FFNs. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=rcsZNV9A5j | 2025-09-16T16:13:44 | 4 | [{"id": "TygVX9zSRX", "forum": "rcsZNV9A5j", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission7175/Reviewer_i2pJ", "reviewer_name": "Reviewer_i2pJ", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 2, "summary": "This p... |
eS4MAmmCHy | https://openreview.net/forum?id=eS4MAmmCHy | PEL-NAS: Search Space Partitioned Architecture Prompt Co-evolutionary LLM-driven Hardware-Aware Neural Architecture Search | 3.5 | 4 | [4, 4, 4, 2] | [4, 4, 4, 4] | 4 | ["Large Language Model", "Hardware-aware", "Neural Architecture Search"] | Hardware-Aware Neural Architecture Search (HW-NAS) requires joint optimization of accuracy and latency under device constraints. Traditional supernet-based methods require multiple GPU days per dataset. Large Language Model (LLM)-driven approaches avoid training a large supernet and can provide quick feedback, but we ... | | infrastructure, software libraries, hardware, systems, etc. | https://openreview.net/pdf?id=eS4MAmmCHy | 2025-09-18T03:16:21 | 4 | [{"id": "r5WN4tP0vh", "forum": "eS4MAmmCHy", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission9721/Reviewer_ygWA", "reviewer_name": "Reviewer_ygWA", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "This p... |
MgVNhx5uaa | https://openreview.net/forum?id=MgVNhx5uaa | ATOM-Bench: From Atoms to Conclusions in Objective Evaluation of Large Multimodal Models Reasoning | 3 | 3.75 | [2, 4, 4, 2] | [4, 4, 3, 4] | 4 | ["multimodal Large Language Models", "benchmark", "chain of thought"] | Chain-of-Thought (CoT) reasoning has significantly enhanced the ability of Large Multimodal Models (LMMs) to tackle complex image–text tasks, establishing itself as a cornerstone of multimodal learning. Despite significant progress, the impact of CoT on LMMs still lacks objective evaluation and in-depth research. Curre... | We introduce ATOM-Bench, a diagnostic benchmark for evaluating Chain-of-Thought reasoning in Large Multimodal Models via objective atomic questions, spanning 2,920 QAs over 570 real-world images, to address challenges of reasoning reliability. | datasets and benchmarks | https://openreview.net/pdf?id=MgVNhx5uaa | 2025-09-18T21:58:39 | 4 | [{"id": "qyea8A8FPG", "forum": "MgVNhx5uaa", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission11801/Reviewer_sAqG", "reviewer_name": "Reviewer_sAqG", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "The p... |
wztR0XcNW9 | https://openreview.net/forum?id=wztR0XcNW9 | TopoCore: Unifying Topology Manifolds and Persistent Homology for Data Pruning | 4 | 3 | [4, 2, 6] | [3, 3, 3] | 3 | ["Coreset Selection", "Topological Data Analysis", "Persistent Homology", "Architectural Transferability", "Data-Efficient Learning", "Manifold Learning", "Pretrained Models"] | Geometric coreset selection methods, while practical for leveraging pretrained models, are fundamentally unstable. Their reliance on extrinsic geometric metrics makes them highly sensitive to variations in feature embeddings, leading to poor performance when transferring across different network architectures or when d... | | learning on graphs and other geometries & topologies | https://openreview.net/pdf?id=wztR0XcNW9 | 2025-09-18T02:54:05 | 3 | [{"id": "p1cclI53pH", "forum": "wztR0XcNW9", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission9698/Reviewer_Sq9q", "reviewer_name": "Reviewer_Sq9q", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "The pa... |
WnRzN4U8Y8 | https://openreview.net/forum?id=WnRzN4U8Y8 | WIMFRIS: WIndow Mamba Fusion and Parameter Efficient Tuning for Referring Image Segmentation | 5 | 4.5 | [4, 6, 4, 6] | [5, 5, 3, 5] | 4 | ["Referring image segmentation", "parameter efficient tuning", "computer vision"] | Existing Parameter-Efficient Tuning (PET) methods for Referring Image Segmentation (RIS) primarily focus on layer-wise feature alignment, often neglecting the crucial role of a neck module for the intermediate fusion of aggregated multi-scale features, which creates a significant performance bottleneck. To address this... | This paper introduces WIMFRIS, a framework that achieves state-of-the-art in referring image segmentation by proposing a novel HMF neck module to efficiently fuse text with visual features , overcoming a key performance bottleneck in prior methods. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=WnRzN4U8Y8 | 2025-09-20T14:00:25 | 4 | [{"id": "l3NeqmvthW", "forum": "WnRzN4U8Y8", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission23757/Reviewer_N61Y", "reviewer_name": "Reviewer_N61Y", "rating": 4, "confidence": 5, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "The p... |
zDI2G8t0of | https://openreview.net/forum?id=zDI2G8t0of | A Statistical Benchmark for Diffusion Posterior Sampling Algorithms | 5.5 | 4 | [4, 8, 4, 6] | [4, 5, 3, 4] | 4 | ["Diffusion models", "Bayesian inverse problems", "statistical evaluation", "Gibbs sampling"] | We propose a statistical benchmark for diffusion posterior sampling (DPS) algorithms in linear inverse problems. Our test signals are discretized Lévy processes whose posteriors admit efficient Gibbs methods. These Gibbs methods provide gold-standard posterior samples for direct, distribution-level comparisons with (DP... | We made an evaluation pipeline for diffusion posterior sampling algorithms for Bayesian linear inverse problems that relies on the construction of posteriors with known posteriors that we can efficiently sample from. | datasets and benchmarks | https://openreview.net/pdf?id=zDI2G8t0of | 2025-09-19T00:36:58 | 4 | [{"id": "qh8Nh3DeU4", "forum": "zDI2G8t0of", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission13084/Reviewer_tkeZ", "reviewer_name": "Reviewer_tkeZ", "rating": 4, "confidence": 4, "soundness": 2, "contribution": 3, "presentation": 3, "summary": "- The... |
Bq5lSYZl4L | https://openreview.net/forum?id=Bq5lSYZl4L | Conversational Orientation Reasoning: Egocentric-to-Allocentric Navigation with Multimodal Chain-of-Thought | 2 | 2.666667 | [2, 2, 2] | [4, 3, 1] | 3 | ["conversational AI", "multimodal reasoning", "chain-of-thought", "spatial reasoning", "egocentric navigation"] | Conversational agents must translate egocentric utterances (e.g., “on my right”) into allocentric orientations (N/E/S/W). This challenge is particularly critical in indoor or complex facilities where GPS signals are weak and detailed maps are unavailable. While chain-of-thought (CoT) prompting has advanced reasoning in... | We introduce the Conversational Orientation Reasoning (COR) benchmark and propose a multimodal chain-of-thought framework for egocentric-to-allocentric orientation reasoning. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=Bq5lSYZl4L | 2025-09-18T16:58:03 | 3 | [{"id": "pd0onPVjy7", "forum": "Bq5lSYZl4L", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission10972/Reviewer_dPzU", "reviewer_name": "Reviewer_dPzU", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 1, "summary": "This ... |
Fz0KFsZE6C | https://openreview.net/forum?id=Fz0KFsZE6C | OpenSIR: Open-Ended Self-Improving Reasoner | 4 | 3.75 | [4, 4, 4, 4] | [3, 4, 4, 4] | 4 | ["large language model", "math reasoning", "self-play", "reinforcement learning"] | Recent advances in large language model (LLM) reasoning through reinforcement learning rely on annotated datasets for verifiable rewards, potentially limiting models' ability to exceed human-level performance. While self-play offers a promising alternative, existing approaches depend on external verifiers or cannot lea... | | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=Fz0KFsZE6C | 2025-09-19T23:25:06 | 4 | [{"id": "k8yimgxXcV", "forum": "Fz0KFsZE6C", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission19344/Reviewer_LBsx", "reviewer_name": "Reviewer_LBsx", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ... |
QpqBqCTtW4 | https://openreview.net/forum?id=QpqBqCTtW4 | Unifying Stable Optimization and Reference Regularization in RLHF | 5 | 2.75 | [6, 4, 4, 6] | [4, 2, 3, 2] | 4 | ["RLHF", "LLM", "Alignment"] | Reinforcement Learning from Human Feedback (RLHF) has advanced alignment capabilities significantly but remains hindered by two core challenges: reward hacking and stable optimization. Current solutions independently address these issues through separate regularization strategies, specifically a KL-divergence penalty a... | | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=QpqBqCTtW4 | 2025-09-03T09:45:48 | 4 | [{"id": "yqzBpTdgiX", "forum": "QpqBqCTtW4", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission1200/Reviewer_7Noe", "reviewer_name": "Reviewer_7Noe", "rating": 6, "confidence": 4, "soundness": 2, "contribution": 3, "presentation": 3, "summary": "The pa... |
kWl13kRJTQ | https://openreview.net/forum?id=kWl13kRJTQ | AC-Sampler: Accelerate and Correct Diffusion Sampling with Metropolis-Hastings Algorithm | 4.666667 | 3.666667 | [4, 6, 4] | [3, 4, 4] | 3 | ["Diffusion model", "Metropolis-Hastings Algorithm", "Langevin Dynamics"] | Diffusion-based generative models have recently achieved state-of-the-art performance in high-fidelity image synthesis. These models learn a sequence of denoising transition kernels that gradually transform a simple prior distribution into a complex data distribution. However, requiring many transitions not only slows ... | Accelerate and Correct Diffusion Sampling with Metropolis-Hastings Algorithm | generative models | https://openreview.net/pdf?id=kWl13kRJTQ | 2025-09-19T09:41:43 | 3 | [{"id": "WL3lfb3jFc", "forum": "kWl13kRJTQ", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission14955/Reviewer_MQvF", "reviewer_name": "Reviewer_MQvF", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "Dear ... |
nRl7D1D3qf | https://openreview.net/forum?id=nRl7D1D3qf | Spatial Sign based Direct Sparse Linear Discriminant Analysis for High Dimensional Data | 3.333333 | 3.666667 | [2, 4, 4] | [4, 3, 4] | 3 | ["High dimensional data", "Linear discriminant analysis", "Spatial-sign"] | Robust high-dimensional classification under heavy-tailed distributions without losing efficiency, is a central challenge in modern statistics and machine learning. However, most existing linear discriminant analysis (LDA) methods are sensitive to deviations from normality and may suffer from suboptimal performance in ... | | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=nRl7D1D3qf | 2025-09-18T22:55:01 | 3 | [{"id": "iGiMRo6ObX", "forum": "nRl7D1D3qf", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission12368/Reviewer_pWtR", "reviewer_name": "Reviewer_pWtR", "rating": 2, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "This ... |
JxmjzC6syB | https://openreview.net/forum?id=JxmjzC6syB | Benchmarking Stochastic Approximation Algorithms for Fairness-Constrained Training of Deep Neural Networks | 3.5 | 3.75 | [4, 2, 6, 2] | [4, 4, 4, 3] | 4 | ["Fair Machine Learning", "stochastic approximation", "Augmented Lagrangian", "Sequential Quadratic Programming", "benchmarking"] | The ability to train Deep Neural Networks (DNNs) with constraints is instrumental in improving the fairness of modern machine-learning models. Many algorithms have been analysed in recent years, and yet there is no standard, widely accepted method for the constrained training of DNNs. In this paper, we provide a challe... | We provide a benchmark for comparing stochastic approximation algorithms, based on real-world fairness-constrained learning problems. | datasets and benchmarks | https://openreview.net/pdf?id=JxmjzC6syB | 2025-09-20T18:06:49 | 4 | [{"id": "FJkAp0M492", "forum": "JxmjzC6syB", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission24989/Reviewer_1Hbo", "reviewer_name": "Reviewer_1Hbo", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "The p... |
kXhPkDaFbJ | https://openreview.net/forum?id=kXhPkDaFbJ | ProtoKV: Long-context Knowledges Are Already Well-Organized Before Your Query | 5 | 3 | [4, 4, 6, 6] | [3, 2, 3, 4] | 4 | ["Large Language Model", "KV Cache"] | Modern Large Language Models (LLMs) face fundamental challenges in processing long text sequences due to the quadratic complexity of attention mechanisms. Key-Value (KV) cache retention strategies mitigate this issue by selectively preserving salient KV pairs for autoregressive generation. However, existing methods fai... | We discovered a new paradigm for key distribution in LLMs and used it to guide the KV cache compression strategy. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=kXhPkDaFbJ | 2025-09-14T16:29:35 | 4 | [{"id": "cq17GdgvB8", "forum": "kXhPkDaFbJ", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission5042/Reviewer_mS4R", "reviewer_name": "Reviewer_mS4R", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "The au... |
qAfbeMal0m | https://openreview.net/forum?id=qAfbeMal0m | TimeExpert: Boosting Long Time Series Forecasting with Temporal Mix of Experts | 2.5 | 3.75 | [2, 4, 2, 2] | [3, 4, 4, 4] | 4 | ["Time-Series", "Mix of Experts", "Lag Effects"] | Transformer-based architectures dominate time series modeling by enabling global attention over all timestamps, yet their rigid “one-size-fits-all” context aggregation fails to address two critical challenges in real-world data: (1) inherent lag effects, where the relevance of historical timestamps to a query varies dy... | | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=qAfbeMal0m | 2025-09-01T22:36:11 | 4 | [{"id": "mayMhGKz9x", "forum": "qAfbeMal0m", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission382/Reviewer_uwnJ", "reviewer_name": "Reviewer_uwnJ", "rating": 2, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "The aut... |
TvpaeQVTGQ | https://openreview.net/forum?id=TvpaeQVTGQ | A Fast, Reliable, and Secure Programming Language for LLM Agents with Code Actions | 5.5 | 2.25 | [6, 6, 4, 6] | [2, 2, 4, 1] | 4 | ["llm", "agent", "code actions", "code generation"] | Modern large language models (LLMs) are often deployed as agents, calling external tools adaptively to solve tasks. Rather than directly calling tools, it can be more effective for LLMs to write code to perform the tool calls, enabling them to automatically generate complex control flow such as conditionals and loops. ... | We propose a new language for LLM agents to use for actions, and we show its benefits over Python in terms of performance, reliability, and security. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=TvpaeQVTGQ | 2025-09-19T04:23:36 | 4 | [{"id": "uwDUZ3rzdg", "forum": "TvpaeQVTGQ", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission14018/Reviewer_FyqM", "reviewer_name": "Reviewer_FyqM", "rating": 6, "confidence": 2, "soundness": 4, "contribution": 3, "presentation": 3, "summary": "This ... |
K0idbmzcgc | https://openreview.net/forum?id=K0idbmzcgc | OS-W2S: An Automatic Labeling Engine for Language-Guided Open-Set Aerial Object Detection | 4.8 | 3 | [8, 2, 4, 6, 4] | [3, 3, 2, 3, 4] | 5 | ["Open-Set Aerial Object Detection", "Automatic Label Engine", "Multi-instance Open-set Aerial Dataset"] | In recent years, language-guided open-set aerial object detection has gained significant attention due to its better alignment with real-world application needs. However, due to limited datasets, most existing language-guided methods primarily focus on vocabulary-level descriptions, which fail to meet the demands of fi... | | datasets and benchmarks | https://openreview.net/pdf?id=K0idbmzcgc | 2025-09-19T08:13:11 | 5 | [{"id": "jv63TR5pJc", "forum": "K0idbmzcgc", "review_number": 5, "reviewer_id": "ICLR.cc/2026/Conference/Submission14647/Reviewer_6Dr8", "reviewer_name": "Reviewer_6Dr8", "rating": 8, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "The p... |
KUn4IBIZC7 | https://openreview.net/forum?id=KUn4IBIZC7 | MotifGrIm: Motif-Based Multi-Granularity Graph-Image Pretraining for Molecular Representation Learning | 2.5 | 4.5 | [2, 2, 2, 4] | [5, 4, 5, 4] | 4 | ["Multi-Modal Contrastive Learning", "Molecular Representation Learning", "Graph Neural Network"] | Molecular representation learning is widely considered as a crucial task in computer-aided molecular applications and design. Recently, many studies have explored pretraining models on unlabeled data to learn molecular structures and enhance the performance of downstream tasks. However, existing methods mainly focus on... | | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=KUn4IBIZC7 | 2025-09-19T13:26:21 | 4 | [{"id": "dpFgmibU9f", "forum": "KUn4IBIZC7", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission16081/Reviewer_dFht", "reviewer_name": "Reviewer_dFht", "rating": 2, "confidence": 5, "soundness": 2, "contribution": 1, "presentation": 2, "summary": "This ... |
Bp2VlfYAMc | https://openreview.net/forum?id=Bp2VlfYAMc | TIPS: A Text-Image Pairs Synthesis Framework for Robust Text-based Person Retrieval | 5 | 4 | [2, 6, 4, 8] | [4, 4, 4, 4] | 4 | ["Text-based Person Retrieval", "Text-Image Pairs Synthesis", "Diffusion Model", "Identity Preservation", "Test-Time Augmentation"] | Text-based Person Retrieval (TPR) faces critical challenges in practical applications, including zero-shot adaptation, few-shot adaptation, and robustness issues. To address these challenges, we propose a Text-Image Pairs Synthesis (TIPS) framework, which is capable of generating high-fidelity and diverse pedestrian te... | | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=Bp2VlfYAMc | 2025-09-20T01:35:00 | 4 | [{"id": "ZigShWA1Ae", "forum": "Bp2VlfYAMc", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission20172/Reviewer_Ao47", "reviewer_name": "Reviewer_Ao47", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 1, "summary": "This ... |
JPLRtQINNy | https://openreview.net/forum?id=JPLRtQINNy | Domain Bridging: Enabling Adaptation without Peeking at Target Data | 3.333333 | 4 | [4, 2, 4] | [4, 5, 3] | 3 | ["domain bridging", "evaluation-based adaptation", "zeroth-order optimization", "proprietary target data"] | Adapting models to target domains with proprietary data remains a challenging problem. One possible setup to enable adaptation is to allow target domain owners to privately evaluate candidate models on their own data. For example, model providers consider how to adjust models to better fit the unseen target data, relyi... | Domain Bridging introduces an efficient framework that learns source data perturbations to bridge domain gaps, enabling effective model fine-tuning for target domains without requiring direct access to proprietary target data. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=JPLRtQINNy | 2025-09-17T18:39:58 | 3 | [{"id": "tSIq7SSmfc", "forum": "JPLRtQINNy", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission8980/Reviewer_RAvJ", "reviewer_name": "Reviewer_RAvJ", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "The pa... |
jAYHFBdQ0M | https://openreview.net/forum?id=jAYHFBdQ0M | Johnson-Lindenstrauss Transforms in Distributed Optimization | 3.5 | 3 | [2, 4, 4, 4] | [4, 4, 2, 2] | 4 | ["optimization", "distributed optimization", "communication compresson"] | Increasing volumes of data and models in the machine learning demand efficient methods. Distributed optimization addresses these challenges, for instance, by utilizing compression mechanisms, that reduce the number of bits transmitted. One of the known techniques, that diminish the dimension of the database are Johnson... | | optimization | https://openreview.net/pdf?id=jAYHFBdQ0M | 2025-09-17T16:49:35 | 4 | [{"id": "wYRuKyGPEl", "forum": "jAYHFBdQ0M", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission8812/Reviewer_ecJS", "reviewer_name": "Reviewer_ecJS", "rating": 2, "confidence": 4, "soundness": 1, "contribution": 1, "presentation": 2, "summary": "The pa... |
bjNKvuBMqJ | https://openreview.net/forum?id=bjNKvuBMqJ | Solving robust MDPs as a sequence of static RL problems | 3.5 | 3.25 | [4, 2, 4, 4] | [3, 4, 3, 3] | 4 | ["Robust reinforcement learning"] | Designing control policies whose performance level is guaranteed to remain above a given threshold in a span of environments is a critical feature for the adoption of reinforcement learning (RL) in real-world applications. The search for such robust policies is a notoriously difficult problem, related to the so-called ... | We propose IWOCS, a method for robust MDPs that finds worst-case transitions, separates policy optimization from adversarial dynamics, and matches state-of-the-art deep RL performance. | reinforcement learning | https://openreview.net/pdf?id=bjNKvuBMqJ | 2025-09-19T15:48:19 | 4 | [{"id": "fLHf4O2p5e", "forum": "bjNKvuBMqJ", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission16724/Reviewer_E1gk", "reviewer_name": "Reviewer_E1gk", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ... |
2Aj7sA2vbb | https://openreview.net/forum?id=2Aj7sA2vbb | MADGen:Minority Attribute Discovery in Text-to-Image Generative Models | 4 | 3.666667 | [6, 4, 2] | [3, 4, 4] | 3 | ["Bias identification", "Bias mitigation", "Fairness", "Diffusion models"] | Text-to-image diffusion models achieve impressive generation quality but also inherit and amplify biases from training data, resulting in biased coverage of semantic attributes. Prior work addresses this in two ways. Closed-set approaches mitigate biases in predefined fairness categories (e.g., gender, race), assuming ... | A framework to identify minority or underrepresented attributes in the intermediate representations of diffusion models. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=2Aj7sA2vbb | 2025-09-14T17:38:00 | 4 | [{"id": "7NFMIySpiC", "forum": "2Aj7sA2vbb", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission5074/Reviewer_Z4F1", "reviewer_name": "Reviewer_Z4F1", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 4, "summary": "This w... |
zq40cmz1JD | https://openreview.net/forum?id=zq40cmz1JD | When Speculation Spills Secrets: Side Channels via Speculative Decoding in LLMs | 5 | 3.5 | [
6,
6,
4,
4
] | [
4,
4,
3,
3
] | 4 | [
"Large Language Models",
"Speculative Decoding",
"Side Channel Attack",
"Privacy"
] | Deployed large language models (LLMs) often rely on speculative decoding, a technique that generates and verifies multiple candidate tokens in parallel, to improve throughput and latency. In this work, we reveal a new side-channel whereby input-dependent patterns of correct and incorrect speculations can be inferred by... | We develop a side channel attack leaking private user inputs by exploiting speculative decoding optimizations in LLM inference. | infrastructure, software libraries, hardware, systems, etc. | https://openreview.net/pdf?id=zq40cmz1JD | 2025-09-19T04:10:50 | 4 | [
{
"id": "YAO1FTCAHu",
"forum": "zq40cmz1JD",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13973/Reviewer_mQbv",
"reviewer_name": "Reviewer_mQbv",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This ... |
if1Ndb6RWD | https://openreview.net/forum?id=if1Ndb6RWD | Information-based Value Iteration Networks for Decision Making Under Uncertainty | 3.5 | 3 | [
2,
6,
2,
4
] | [
4,
4,
2,
2
] | 4 | [
"Reinforcement Learning",
"value iteration networks",
"planning under uncertainty"
] | Deep neural networks that incorporate classic reinforcement learning methods, such as value iteration, into their structure significantly outperform randomly structured networks in learning and generalization. These networks, however, are mostly limited to environments with no or very low amounts of uncertainty. In thi... | We proposed a novel deep architecture for decision making under uncertainty based on planning for reward maximization and information gathering. | reinforcement learning | https://openreview.net/pdf?id=if1Ndb6RWD | 2025-09-19T04:03:42 | 4 | [
{
"id": "QkxtFCaSOb",
"forum": "if1Ndb6RWD",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13955/Reviewer_wGbv",
"reviewer_name": "Reviewer_wGbv",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
dlaNQM6YbZ | https://openreview.net/forum?id=dlaNQM6YbZ | The Flaw of Averages: Quantifying Uniformity of Performance on Benchmarks | 4.5 | 3.25 | [
6,
6,
2,
4
] | [
3,
3,
4,
3
] | 4 | [
"Benchmark reliability",
"meta-evaluation of benchmarks",
"evaluation reliability",
"diagnostic evaluation"
] | Benchmarks shape scientific conclusions about model capabilities and steer model development. This creates a feedback loop: stronger benchmarks drive better models, and better models demand more discriminative benchmarks. Ensuring benchmark reliability is therefore essential for trustworthy evaluation and meaningful pr... | datasets and benchmarks | https://openreview.net/pdf?id=dlaNQM6YbZ | 2025-09-19T08:12:50 | 4 | [
{
"id": "9tq7VP8KiW",
"forum": "dlaNQM6YbZ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14643/Reviewer_F1zp",
"reviewer_name": "Reviewer_F1zp",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "Bench... | |
ErED2dvR7Z | https://openreview.net/forum?id=ErED2dvR7Z | Cascaded Flow Matching for Heterogeneous Tabular Data with Mixed-Type Features | 2.5 | 3.5 | [
2,
2,
2,
4
] | [
4,
2,
4,
4
] | 4 | [
"tabular data",
"flow matching",
"generative modeling",
"synthetic data"
] | Advances in generative modeling have recently been adapted to heterogeneous tabular data. However, generating mixed-type features that combine discrete values with an otherwise continuous distribution remains challenging. We advance the state-of-the-art in diffusion-based generative models for heterogeneous tabular da... | A cascaded flow matching framework that generates details in tabular data conditioned on low-resolution features. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=ErED2dvR7Z | 2025-09-19T21:57:07 | 4 | [
{
"id": "BXQ9NHnc2f",
"forum": "ErED2dvR7Z",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18697/Reviewer_5EfX",
"reviewer_name": "Reviewer_5EfX",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 1,
"summary": "Mixed... |
9PpLnRAZjN | https://openreview.net/forum?id=9PpLnRAZjN | End-to-End One Step Flow Matching via Flow Fitting | 4 | 4.25 | [
2,
6,
4,
4
] | [
5,
5,
4,
3
] | 4 | [
"Flow matching",
"Single step generative models"
] | Diffusion and flow-matching models have demonstrated impressive performance in generating diverse, high-fidelity images by learning transformations from noise to data. However, their reliance on multi-step sampling requires repeated neural network evaluations, leading to high computational cost. We propose FlowFit, a f... | generative models | https://openreview.net/pdf?id=9PpLnRAZjN | 2025-09-19T15:39:30 | 4 | [
{
"id": "EsTXV9su34",
"forum": "9PpLnRAZjN",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16679/Reviewer_me4z",
"reviewer_name": "Reviewer_me4z",
"rating": 2,
"confidence": 5,
"soundness": 1,
"contribution": 2,
"presentation": 1,
"summary": "This ... | |
wcInjlUp8V | https://openreview.net/forum?id=wcInjlUp8V | CoTabBench: A Real-World Benchmark for Question Answering over Weakly-Structured and Heterogeneous Tables | 4 | 4 | [
2,
4,
4,
6
] | [
4,
4,
4,
4
] | 4 | [
"Table Question Answering",
"Large Language Models",
"Benchmark",
"Real-World Data"
] | Recent advancements in Large Language Models (LLMs) have significantly propelled their capabilities in table-based question answering. However, existing benchmarks predominantly feature well-structured tables, failing to address the complexities of real-world data, which is often weakly-structured and contains highly h... | To address the fact that LLMs fail on complex, real-world tables, we created CoTabBench: a comprehensive benchmark and dataset designed to push models beyond simple structured data and foster more robust table understanding. | datasets and benchmarks | https://openreview.net/pdf?id=wcInjlUp8V | 2025-09-17T16:36:52 | 4 | [
{
"id": "m7pRuiRqVr",
"forum": "wcInjlUp8V",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8787/Reviewer_TtPK",
"reviewer_name": "Reviewer_TtPK",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 1,
"presentation": 3,
"summary": "This p... |
Pxd5mjwznl | https://openreview.net/forum?id=Pxd5mjwznl | Difference back propagation with inverse sigmoid function | 0 | 4.666667 | [
0,
0,
0
] | [
4,
5,
5
] | 3 | [
"Machine Learning",
"AI",
"Algorithm",
"Back Propagation"
] | Since the proposal of neural networks, the derivative-based back propagation algorithm has been the default setting. However, the derivative for a non-linear function is an approximation for the difference of the function values, and it would be a more precise way to do back propagation using the difference directly ins... | We propose a new back propagation algorithm that calculates the back propagation updates using the difference instead of the derivative from the activation function | optimization | https://openreview.net/pdf?id=Pxd5mjwznl | 2025-09-19T11:05:56 | 3 | [
{
"id": "dI3B1VuSWi",
"forum": "Pxd5mjwznl",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15419/Reviewer_ZEzY",
"reviewer_name": "Reviewer_ZEzY",
"rating": 0,
"confidence": 4,
"soundness": 1,
"contribution": 1,
"presentation": 2,
"summary": "The s... |
sJI2JCggyD | https://openreview.net/forum?id=sJI2JCggyD | Delta Activations: A Representation for Finetuned Large Language Models | 3.333333 | 3.666667 | [
2,
4,
4
] | [
4,
4,
3
] | 3 | [
"Representation",
"LLM",
"post-training",
"finetuning"
] | The success of powerful open source Large Language Models (LLMs) has enabled the community to create a vast collection of post-trained models adapted to specific tasks and domains. However, navigating and understanding these models remains challenging due to inconsistent metadata and unstructured repositories. We intro... | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=sJI2JCggyD | 2025-09-08T22:33:52 | 3 | [
{
"id": "TP1UBBYsj8",
"forum": "sJI2JCggyD",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3142/Reviewer_KPrH",
"reviewer_name": "Reviewer_KPrH",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 4,
"summary": "This p... | |
uS2FiaAkCz | https://openreview.net/forum?id=uS2FiaAkCz | Towards Monotonic Improvement in In-Context Reinforcement Learning | 3 | 3.5 | [
6,
4,
0,
2
] | [
3,
3,
4,
4
] | 4 | [
"Reinforcement Learning",
"Meta-RL",
"In-context Reinforcement Learning",
"Transformers",
"Learning to Learn"
] | In-Context Reinforcement Learning (ICRL) has emerged as a promising paradigm for developing agents that can rapidly adapt to new tasks by leveraging past experiences as context, without updating their parameters. Recent approaches train large sequence models on monotonic policy improvement data from online RL, aiming t... | reinforcement learning | https://openreview.net/pdf?id=uS2FiaAkCz | 2025-09-16T22:52:04 | 4 | [
{
"id": "7RMkPfkIZB",
"forum": "uS2FiaAkCz",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7740/Reviewer_8tXY",
"reviewer_name": "Reviewer_8tXY",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This p... | |
xindJJLSr1 | https://openreview.net/forum?id=xindJJLSr1 | ReWatch-R1: Boosting Complex Video Reasoning in Large Vision-Language Models through Agentic Data Synthesis | 5 | 3.833333 | [
6,
6,
6,
4,
4,
4
] | [
4,
3,
4,
5,
3,
4
] | 6 | [
"Video Reasoning",
"Large Vision-Language Models (LVLMs)",
"Agentic Data Synthesis",
"Multi-Agent ReAct",
"Reinforcement Learning with Verifiable Reward (RLVR)",
"Chain-of-Thought (CoT)"
] | While Reinforcement Learning with Verifiable Reward (RLVR) significantly advances image reasoning in Large Vision-Language Models (LVLMs), its application to complex video reasoning remains underdeveloped. This gap stems primarily from a critical data bottleneck: existing datasets lack the challenging, multi-hop questi... | We introduce an agent-based pipeline to synthesize a high-quality video reasoning dataset (ReWatch) and a novel reinforcement learning reward (O&R) to train LVLMs, achieving state-of-the-art performance. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=xindJJLSr1 | 2025-09-19T20:00:47 | 6 | [
{
"id": "gSNBwjmODT",
"forum": "xindJJLSr1",
"review_number": 6,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18045/Reviewer_tQx8",
"reviewer_name": "Reviewer_tQx8",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
mjLMdY0xul | https://openreview.net/forum?id=mjLMdY0xul | Efficient Self-Evaluation for Diffusion Language Models via Sequence Regeneration | 3.333333 | 4 | [
4,
2,
4
] | [
4,
4,
4
] | 3 | [
"Diffusion Large Language Models"
] | Diffusion large language models (dLLMs) have recently attracted significant attention for their ability to enhance diversity, controllability, and parallelism. However, their non-sequential, bidirectionally masked generation makes quality assessment difficult, underscoring the need for effective self-evaluation. In thi... | We propose a simple yet effective self-evaluation confidence quantification method for diffusion large language models (dLLMs), and introduce a flexible-length dLLM generation framework based on it. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=mjLMdY0xul | 2025-09-04T21:14:57 | 3 | [
{
"id": "yP2WOCnO6x",
"forum": "mjLMdY0xul",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2115/Reviewer_Ya5E",
"reviewer_name": "Reviewer_Ya5E",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This p... |
qVadFFSfrI | https://openreview.net/forum?id=qVadFFSfrI | Diagnosing and Remedying Knowledge Deficiencies in LLMs via Label-free Curricular Meaningful Learning | 6 | 3.5 | [
4,
6,
6,
8
] | [
3,
4,
4,
3
] | 4 | [
"Deficiency Diagnosis",
"Data Synthesis",
"LLMs Reasoning"
] | Large Language Models (LLMs) have demonstrated impressive generalization ability by learning from extensive unlabeled text. However, they still exhibit reasoning mistakes, which can affect their trustworthiness and reliability. Although users can interact with LLMs and provide diverse and comprehensive queries to expos... | Diagnose the knowledge deficiencies of LLMs and remedy them with a novel approach. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=qVadFFSfrI | 2025-09-19T23:56:45 | 4 | [
{
"id": "RXsidSYRgJ",
"forum": "qVadFFSfrI",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19580/Reviewer_jb7t",
"reviewer_name": "Reviewer_jb7t",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This ... |
7FfZc9MePg | https://openreview.net/forum?id=7FfZc9MePg | PersonBias: A Lightweight Framework for Personalized Bias Mitigation in Large Language Models | 3 | 3.5 | [
2,
4,
4,
2
] | [
3,
4,
3,
4
] | 4 | [
"Personalized Debiasing",
"Dynamic Intervention",
"Large Language Models",
"Bias-Utility Trade-off"
] | Social bias in large language model (LLM) outputs has emerged as a critical challenge in artificial intelligence. While existing bias detection methods pursue comprehensive identification and elimination of implicit biases, this \textit{one-size-fits-... | We introduce PersonBias, a plug-and-play module that detects and mitigates social biases in LLM outputs by dynamically adapting to individual user preferences, balancing fairness with response quality. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=7FfZc9MePg | 2025-09-17T14:57:48 | 4 | [
{
"id": "W4XPLxbMIt",
"forum": "7FfZc9MePg",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8599/Reviewer_5vGB",
"reviewer_name": "Reviewer_5vGB",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 1,
"presentation": 3,
"summary": "This p... |
l7Vb3yxmuz | https://openreview.net/forum?id=l7Vb3yxmuz | WINA: Weight Informed Neuron Activation for Accelerating Large Language Model Inference | 5.666667 | 3 | [
6,
6,
6,
6,
4,
6
] | [
2,
3,
3,
3,
3,
4
] | 6 | [
"Sparse Activation",
"Efficient Inference"
] | The ever-increasing computational demands of large language models (LLMs) make efficient inference a central challenge. While recent advances leverage specialized architectures or selective activation, they typically require (re)training or architectural modifications, limiting their broad applicability. Training-free ... | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=l7Vb3yxmuz | 2025-09-17T05:11:00 | 6 | [
{
"id": "liI8CjRvSr",
"forum": "l7Vb3yxmuz",
"review_number": 6,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8065/Reviewer_zwra",
"reviewer_name": "Reviewer_zwra",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The pa... | |
qrKymA0zuY | https://openreview.net/forum?id=qrKymA0zuY | Explaining Multimodal LLMs via Intra-Modal Token Interactions | 4 | 3.5 | [
4,
6,
4,
2
] | [
4,
4,
3,
3
] | 4 | [
"XAI",
"Multimodal LLM"
] | Multimodal Large Language Models (MLLMs) have achieved remarkable success across diverse vision-language tasks, yet their internal decision-making mechanisms remain insufficiently understood. Existing interpretability research has primarily focused on cross-modal attribution, identifying which image regions the model a... | interpretability and explainable AI | https://openreview.net/pdf?id=qrKymA0zuY | 2025-09-20T07:30:34 | 4 | [
{
"id": "Ys70HRJ3G3",
"forum": "qrKymA0zuY",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22000/Reviewer_UEeF",
"reviewer_name": "Reviewer_UEeF",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This ... | |
VPju7xAxb1 | https://openreview.net/forum?id=VPju7xAxb1 | Comprehend and Talk: Text to Speech Synthesis via Dual Language Modeling | 2 | 4.75 | [
2,
2,
2,
2
] | [
4,
5,
5,
5
] | 4 | [
"Text to Speech; Speech Signal Processing; Speech Language Modeling; Audio Language Models"
] | Existing Large Language Model (LLM) based autoregressive (AR) text-to-speech (TTS) systems, while achieving state-of-the-art quality, still face critical challenges. The foundation of this LLM-based paradigm is the discretization of the continuous speech waveform into a sequence of discrete tokens by neural audio codec... | Propose a two stage method for audio language modeling | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=VPju7xAxb1 | 2025-09-14T20:08:12 | 5 | [
{
"id": "oiVs7XYTj7",
"forum": "VPju7xAxb1",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5123/Reviewer_9G9x",
"reviewer_name": "Reviewer_9G9x",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This p... |
vJBMYahZY5 | https://openreview.net/forum?id=vJBMYahZY5 | MSearcher: Self-Reflective Search Agent Empowered by Monte Carlo Tree Search Based Data Synthesis | 4.5 | 3.75 | [
4,
4,
4,
6
] | [
4,
4,
4,
3
] | 4 | [
"Data Construction",
"Monte Carlo Tree Search",
"Post Training",
"Reinforcement Learning",
"Question Answering"
] | Recent advances in reinforcement learning (RL) have enabled large language models (LLMs) to perform multi-turn chain-of-thought (CoT) reasoning with tool use, where web search serves as the most critical tool for answering complex questions. However, most existing methods apply RL directly to off-the-shelf models witho... | reinforcement learning | https://openreview.net/pdf?id=vJBMYahZY5 | 2025-09-20T18:20:21 | 4 | [
{
"id": "3zB7qa4SOb",
"forum": "vJBMYahZY5",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25063/Reviewer_RG1c",
"reviewer_name": "Reviewer_RG1c",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... | |
Eqbay04527 | https://openreview.net/forum?id=Eqbay04527 | HICO-GT: Hidden Community Based Tokenized Graph Transformer for Node Classification | 3.5 | 3.75 | [
2,
4,
4,
4
] | [
3,
4,
4,
4
] | 4 | [
"graph Transformer",
"node classification",
"hidden community detection"
] | Graph Transformers have been proven to be effective for the node classification task, of which tokenized graph Transformer is one of the most powerful approaches. When constructing tokens, existing methods focus on collecting multi-view node information as the Transformer's input. However, if a type of tokens only incl... | learning on graphs and other geometries & topologies | https://openreview.net/pdf?id=Eqbay04527 | 2025-09-19T16:34:10 | 4 | [
{
"id": "wThICaGWDK",
"forum": "Eqbay04527",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16978/Reviewer_8rpR",
"reviewer_name": "Reviewer_8rpR",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 1,
"presentation": 3,
"summary": "This ... | |
WtbXgc9GVA | https://openreview.net/forum?id=WtbXgc9GVA | LoRA meets Riemannion: Muon Optimizer for Parametrization-independent Low-Rank Adapters | 4 | 3.6 | [
4,
6,
2,
4,
4
] | [
4,
3,
4,
3,
4
] | 5 | [
"Low-rank Adaption",
"Fine-tuning",
"Smooth manifolds",
"Riemannian optimization",
"Fixed matrix rank manifold",
"LLM",
"Diffusion Models"
] | This work presents a novel, fully Riemannian framework for Low-Rank Adaptation (LoRA) that geometrically treats low-rank adapters by optimizing them directly on the fixed-rank manifold. This formulation eliminates the parametrization ambiguity present in standard Euclidean optimizers. Our framework integrates three key... | generative models | https://openreview.net/pdf?id=WtbXgc9GVA | 2025-09-20T02:34:41 | 5 | [
{
"id": "hdoJDubxke",
"forum": "WtbXgc9GVA",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20503/Reviewer_wX7P",
"reviewer_name": "Reviewer_wX7P",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... | |
0fcVDzkGK2 | https://openreview.net/forum?id=0fcVDzkGK2 | DIVIDE-AND-DENOISE: A GAME THEORETIC METHOD FOR FAIRLY COMPOSING DIFFUSION MODELS | 2.666667 | 3.333333 | [
0,
4,
4
] | [
3,
4,
3
] | 3 | [
"Diffusion Models",
"Fair Composition",
"Game-Theoretic",
"Text-to-Image"
] | The widespread availability of large-scale pre-trained generative models raises a question: how can we best leverage them beyond their original training distributions? Two strategies provide partial answers. Composition combines multiple diffusion models, typically through linear averaging of their predictions, to pr... | a game-theoretic approach to compositional sampling from multiple pre-trained diffusion models | generative models | https://openreview.net/pdf?id=0fcVDzkGK2 | 2025-09-18T16:22:23 | 3 | [
{
"id": "5zG3M4XOSY",
"forum": "0fcVDzkGK2",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10873/Reviewer_nmYV",
"reviewer_name": "Reviewer_nmYV",
"rating": 0,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This ... |
vEh1ceS154 | https://openreview.net/forum?id=vEh1ceS154 | Partition Generative Modeling: Masked Modeling Without Masks | 7 | 3 | [
6,
6,
8,
8
] | [
3,
2,
4,
3
] | 4 | [
"masked generative modeling",
"discrete diffusion",
"masked diffusion language modeling",
"diffusion language modeling"
] | Masked generative models (MGMs) are widely used to capture complex data and enable faster generation than autoregressive models (AR) through parallel decoding. However, MGMs typically operate on fixed-length inputs, which can be inefficient: early in sampling, most tokens are masked and carry little information, leadin... | We show that it is possible to train masked generative models without using MASK tokens, resulting in efficiency gains at inference. | generative models | https://openreview.net/pdf?id=vEh1ceS154 | 2025-09-17T01:36:32 | 4 | [
{
"id": "LabNEsk09h",
"forum": "vEh1ceS154",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7931/Reviewer_YDst",
"reviewer_name": "Reviewer_YDst",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The au... |
wSGle6ag5I | https://openreview.net/forum?id=wSGle6ag5I | Improving Diffusion Models for Class-imbalanced Training Data via Capacity Manipulation | 6 | 3.5 | [
6,
6,
6,
6
] | [
4,
3,
3,
4
] | 4 | [
"Imbalance",
"Diffusion Models"
] | While diffusion models have achieved remarkable performance in image generation, they often struggle with the imbalanced datasets frequently encountered in real-world applications, resulting in significant performance degradation on minority classes. In this paper, we identify model capacity allocation as a key and pre... | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=wSGle6ag5I | 2025-09-05T11:15:04 | 4 | [
{
"id": "y03pIN2wNq",
"forum": "wSGle6ag5I",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2255/Reviewer_NJJB",
"reviewer_name": "Reviewer_NJJB",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This p... | |
XGODWn7HeJ | https://openreview.net/forum?id=XGODWn7HeJ | Toward Principled Flexible Scaling for Self-Gated Neural Activation | 6.666667 | 4 | [
8,
6,
6
] | [
4,
4,
4
] | 3 | [
"Neural Activation Functions",
"Principled Neural Activation Modeling",
"Neural Activation Interpretation",
"Non-local Information Modeling"
] | Neural networks necessitate nonlinearities to achieve universal approximability. Traditional activation functions introduce nonlinearities through rigid feature rectifications. Recent self-gated variants improve traditional methods in fitting flexibility by incorporating learnable content-aware factors and non-local de... | We identify, elucidate, and address the underexplored non-local tension problem and introduce FleS, a self-gated activation function that enhances discriminative visual recognition through adaptive scaling. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=XGODWn7HeJ | 2025-09-19T20:16:30 | 3 | [
{
"id": "tkYg6DEAsm",
"forum": "XGODWn7HeJ",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18129/Reviewer_rpDy",
"reviewer_name": "Reviewer_rpDy",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The p... |
vIcqXbhU0Y | https://openreview.net/forum?id=vIcqXbhU0Y | Coherent Local Explanations for Mathematical Optimization | 3.333333 | 4 | [
4,
4,
2
] | [
4,
4,
4
] | 3 | [
"Optimization",
"Explainability",
"Interpretability",
"Sensitivity Analysis",
"Regression"
] | The surge of explainable artificial intelligence methods seeks to enhance transparency and explainability in machine learning models. At the same time, there is a growing demand for explaining decisions taken through complex algorithms used in mathematical optimization. However, current explanation methods do not take ... | optimization | https://openreview.net/pdf?id=vIcqXbhU0Y | 2025-09-19T17:14:41 | 3 | [
{
"id": "DbMQxAveE3",
"forum": "vIcqXbhU0Y",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17192/Reviewer_LWrw",
"reviewer_name": "Reviewer_LWrw",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This ... | |
3CLscEAR9X | https://openreview.net/forum?id=3CLscEAR9X | ArtAug: Iterative Enhancement of Text-to-Image Models via Synthesis–Understanding Interaction | 4.5 | 3.5 | [
2,
6,
6,
4
] | [
4,
3,
3,
4
] | 4 | [
"Diffusion models",
"alignment",
"image synthesis"
] | The emergence of diffusion models has significantly advanced image synthesis. Recent studies of model interaction and self-corrective reasoning approaches in large language models offer new insights for enhancing text-to-image models. Inspired by these studies, we propose a novel method called ArtAug for enhancing text... | A paper on enhancement methods for text-to-image models. | generative models | https://openreview.net/pdf?id=3CLscEAR9X | 2025-09-18T10:03:37 | 4 | [
{
"id": "7d5GHfTHbh",
"forum": "3CLscEAR9X",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10112/Reviewer_seWm",
"reviewer_name": "Reviewer_seWm",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "Inspi... |
RnNqSYqEcm | https://openreview.net/forum?id=RnNqSYqEcm | Online Multi-objective Convex Optimization: A Unified Framework and Joint Gradient Descent | 3 | 3.5 | [
2,
4,
2,
4
] | [
4,
3,
3,
4
] | 4 | [
"online multi-objective convex optimization",
"Pareto front",
"primal-dual method"
] | Online Convex Optimization (OCO) usually addresses the learning task with a single objective; however, in real-world applications, multiple conflicting objectives often need to be optimized simultaneously. In this paper, we present an Online Multi-objective Convex Optimization (OMCO) framework with a novel multi-object... | optimization | https://openreview.net/pdf?id=RnNqSYqEcm | 2025-09-04T17:10:44 | 4 | [
{
"id": "EMfLr82i5c",
"forum": "RnNqSYqEcm",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2016/Reviewer_Ggst",
"reviewer_name": "Reviewer_Ggst",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "The pa... | |
q1Waov7fd2 | https://openreview.net/forum?id=q1Waov7fd2 | Normalized Matching Transformer | 2 | 3.75 | [
2,
2,
2,
2
] | [
4,
3,
4,
4
] | 4 | [
"Keypoint Matching",
"Graph Matching",
"Normalized Transformer",
"Hyperspherical Learning"
] | We introduce the Normalized Matching Transformer (NMT), a deep learning approach for efficient and accurate sparse keypoint matching between image pairs. NMT consists of a strong visual backbone, geometric feature refinement via SplineCNN, followed by a normalized transformer for computing matching features. Central to... | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=q1Waov7fd2 | 2025-09-17T17:55:07 | 4 | [
{
"id": "YCKIg78f08",
"forum": "q1Waov7fd2",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8931/Reviewer_Ygxx",
"reviewer_name": "Reviewer_Ygxx",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "This p... | |
TRM3GP3u2O | https://openreview.net/forum?id=TRM3GP3u2O | PSRT: Accelerating LRM-based Guard Models via Prefilled Safe Reasoning Traces | 4 | 3.75 | [
4,
6,
4,
2
] | [
5,
3,
4,
3
] | 4 | [
"AI Safety",
"LRM",
"Inference acceleration",
"Guard Model"
] | Large Reasoning Models (LRMs) have demonstrated remarkable performance on tasks such as mathematics and code generation. Motivated by these strengths, recent work has empirically demonstrated the effectiveness of LRMs as guard models in improving harmful query detection. However, LRMs typically generate long reasoning ... | We replace the LRM-based guard model’s reasoning process with a prefilled safe reasoning trace, thereby preserving its capability while significantly reducing the computational overhead. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=TRM3GP3u2O | 2025-09-17T11:35:47 | 4 | [
{
"id": "v56KOxMeMh",
"forum": "TRM3GP3u2O",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8365/Reviewer_1p4m",
"reviewer_name": "Reviewer_1p4m",
"rating": 4,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The to... |
FcuJY1dK7s | https://openreview.net/forum?id=FcuJY1dK7s | Reasoning Scaffolding: Distilling the Flow of Thought from LLMs | 5.5 | 3.75 | [
6,
6,
6,
4
] | [
3,
4,
4,
4
] | 4 | [
"LLM Reasoning Distillation",
"Large Reasoning Model",
"Reasoning Scaffolding",
"Semantic Signals"
] | The prevailing approach to distilling reasoning from Large Language Models (LLMs)—behavioral cloning from textual rationales—is fundamentally limited. It teaches Small Language Models (SLMs) to mimic surface-level patterns rather than the underlying algorithmic structure of thought, resulting in a critical lack of logi... | We introduce Reasoning Scaffolding, a new reasoning distillation framework that transfers reasoning patterns—not just text—from large to small language models, resulting in stronger small reasoning models. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=FcuJY1dK7s | 2025-09-18T17:04:36 | 4 | [
{
"id": "W5UVveaKeR",
"forum": "FcuJY1dK7s",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10987/Reviewer_qrY1",
"reviewer_name": "Reviewer_qrY1",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This ... |
eHxQc2Q0aw | https://openreview.net/forum?id=eHxQc2Q0aw | Stability and Generalization for Bellman Residuals | 4 | 3.25 | [
2,
2,
6,
6
] | [
4,
3,
2,
4
] | 4 | [
"statistical learning theory",
"algorithmic stability",
"generalization analysis",
"offline reinforcement learning",
"inverse reinforcement learning"
] | Offline reinforcement learning and offline inverse reinforcement learning aim to recover near–optimal value functions or reward models from a fixed batch of logged trajectories, yet current practice still struggles to enforce Bellman consistency. Bellman residual minimization (BRM) has emerged as an attractive remedy, ... | Our analysis yields an $\mathcal{O}(1/n)$ on-average argument-stability bound for Bellman residual minimization—doubling the best known sample-complexity exponent for convex–concave saddle problems. | learning theory | https://openreview.net/pdf?id=eHxQc2Q0aw | 2025-09-14T17:17:33 | 5 | [
{
"id": "dVNEQM20Wb",
"forum": "eHxQc2Q0aw",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5065/Reviewer_MpYr",
"reviewer_name": "Reviewer_MpYr",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The pa... |
0miO9v1jeC | https://openreview.net/forum?id=0miO9v1jeC | TAR: Token Adaptive Routing Framework for LLMs Token-level Semantic Correction Inspired by Neuro-Linguistic Pathways | 3 | 3 | [
2,
2,
4,
4
] | [
4,
3,
3,
2
] | 4 | [
"large language models; math reasoning; brain-inspired; adaptive routing; token semantic correction"
] | Large language models (LLMs) often suffer from cascading errors in math reasoning due to token-level semantic defects. A key limitation is that the reliance on unidirectional feedforward pathways makes LLMs unable to dynamically correct token-level defects during reasoning. In contrast, neuro-linguistic pathways in the... | We propose a brain-inspired Token Adaptive Routing framework that enables LLMs to self-correct token-level semantic errors, improving reasoning accuracy while reducing inference tokens. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=0miO9v1jeC | 2025-09-20T16:22:51 | 4 | [
{
"id": "voMGPoVWiW",
"forum": "0miO9v1jeC",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24412/Reviewer_JgFs",
"reviewer_name": "Reviewer_JgFs",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 1,
"summary": "The p... |
WzLjwv8KAn | https://openreview.net/forum?id=WzLjwv8KAn | Which Cultural Lens Do Models Adopt? On Cultural Positioning Bias and Agentic Mitigation in LLMs | 2.5 | 3.5 | [
2,
2,
4,
2
] | [
3,
4,
3,
4
] | 4 | [
"Bias",
"Culture",
"LLM",
"Generation",
"Agent"
] | Large language models (LLMs) have unlocked a wide range of downstream generative applications.
However, we found that they also risk perpetuating subtle fairness issues tied to culture, positioning their generations from the perspectives of the mainstream US culture while demonstrating salient externality towards non-... | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=WzLjwv8KAn | 2025-09-20T15:05:18 | 5 | [
{
"id": "Ks50sMgkwg",
"forum": "WzLjwv8KAn",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24027/Reviewer_N7qu",
"reviewer_name": "Reviewer_N7qu",
"rating": 2,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... | |
avdPTUXdPG | https://openreview.net/forum?id=avdPTUXdPG | Dissecting Demystifying Region-Based Representations in MLLMs | 3 | 3 | [
4,
4,
2,
2
] | [
3,
3,
3,
3
] | 4 | [
"Vision Language Models",
"Multimodal Models"
] | Multimodal Large Language Models (MLLMs) typically process visual information as a flat sequence of image patch tokens, which is computationally expensive and lacks explicit semantic structure. This paper provides a systematic, vision-centric analysis of region-based representations, which group patches into semantical... | Dissecting Demystifying Region-Based Representations in MLLMs | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=avdPTUXdPG | 2025-09-19T22:59:05 | 4 | [
{
"id": "PY4oNc5j0b",
"forum": "avdPTUXdPG",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19156/Reviewer_xyPG",
"reviewer_name": "Reviewer_xyPG",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This ... |
vK6iDcs8SM | https://openreview.net/forum?id=vK6iDcs8SM | BulletGen: Improving 4D Reconstruction with Bullet-Time Generation | 4 | 3.75 | [
4,
6,
4,
2
] | [
4,
4,
3,
4
] | 4 | [
"4D reconstruction",
"bullet-time",
"generative models"
] | Transforming casually captured, monocular videos into fully immersive dynamic experiences is a highly ill-posed task, and comes with significant challenges, e.g., reconstructing unseen regions, and dealing with the ambiguity in monocular depth estimation. In this work we introduce BulletGen, an approach that takes adva... | We improve 4D reconstruction from monocular videos by augmenting with bullet-time reconstructions from a generative model. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=vK6iDcs8SM | 2025-09-18T23:08:10 | 4 | [
{
"id": "oHmENQ7Rpb",
"forum": "vK6iDcs8SM",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12477/Reviewer_q93b",
"reviewer_name": "Reviewer_q93b",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This ... |
lNcc1TypMd | https://openreview.net/forum?id=lNcc1TypMd | Beyond Log Likelihood: Probability-Based Objectives for Supervised Fine-Tuning across the Model Capability Continuum | 5 | 3.75 | [
4,
6,
6,
4
] | [
3,
4,
4,
4
] | 4 | [
"Post-Training",
"SFT",
"training objectives"
] | Supervised fine-tuning (SFT) is the standard approach for post-training large language models (LLMs), yet it often shows limited generalization. We trace this limitation to its default training objective: negative log likelihood (NLL). While NLL is classically optimal when training from scratch, post-training operates... | We revisit supervised fine-tuning (SFT) for large language models, introducing a model-capability continuum that shows negative log-likelihood is not universally optimal and characterizes when alternative objectives succeed or fail. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=lNcc1TypMd | 2025-09-19T08:22:36 | 4 | [
{
"id": "ZOgxijcz1o",
"forum": "lNcc1TypMd",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14671/Reviewer_FZpg",
"reviewer_name": "Reviewer_FZpg",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "LLMs ... |
BZ1vutP53o | https://openreview.net/forum?id=BZ1vutP53o | TEN-DM: Topology-Enhanced Diffusion Model for Spatio-Temporal Event Prediction | 4 | 3.666667 | [
6,
2,
4
] | [
3,
4,
4
] | 3 | [
"Spatio-temporal point process",
"Diffusion model",
"Topological data analysis"
] | Spatio-temporal point process (STPP) data appear in many domains. A natural way to model them is to describe how the instantaneous event rate varies over space and time given the observed history which enables interpretation, interaction detection, and forecasting. Traditional parametric kernel-based models, while hist... | learning on time series and dynamical systems | https://openreview.net/pdf?id=BZ1vutP53o | 2025-09-19T14:14:20 | 3 | [
{
"id": "kl72bsgome",
"forum": "BZ1vutP53o",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16261/Reviewer_rwpn",
"reviewer_name": "Reviewer_rwpn",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The p... | |
yirunib8l8 | https://openreview.net/forum?id=yirunib8l8 | Depth Anything 3: Recovering the Visual Space from Any Views | 7 | 3.5 | [
8,
8,
6,
6
] | [
3,
4,
4,
3
] | 4 | [
"Depth Estimation"
] | We present Depth Anything 3 (DA3), a model that predicts spatially consistent geometry from an arbitrary number of visual inputs, with or without known camera poses.
In pursuit of minimal modeling, DA3 yields two key insights:
a single plain transformer (e.g., vanilla DINOv2 encoder) is sufficient as a backbone withou... | Depth Anything 3 uses a single vanilla DINOv2 transformer to take arbitrary input views and outputs consistent depth and ray maps, delivering leading pose, geometry, and visual rendering performance. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=yirunib8l8 | 2025-09-12T02:22:07 | 4 | [
{
"id": "88WiRwkmUt",
"forum": "yirunib8l8",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4157/Reviewer_xgar",
"reviewer_name": "Reviewer_xgar",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The pa... |
DDaaA4Uldp | https://openreview.net/forum?id=DDaaA4Uldp | XTransfer: Modality-Agnostic Few-Shot Model Transfer for Human Sensing at the Edge | 4 | 3.5 | [
4,
2,
6,
4
] | [
3,
3,
4,
4
] | 4 | [
"Human Sensing",
"Cross-Modality Few-Shot Model Transfer",
"Edge AI"
] | Deep learning for human sensing on edge systems presents significant potential for smart applications. However, its training and development are hindered by the limited availability of sensor data and resource constraints of edge systems. While transferring pre-trained models to different sensing applications is promis... | This paper proposes a pioneering and scalable method that enables modality-agnostic few-shot model transfer for advancing human sensing on edge systems. | transfer learning, meta learning, and lifelong learning | https://openreview.net/pdf?id=DDaaA4Uldp | 2025-09-18T22:42:35 | 4 | [
{
"id": "R0qel6PfsU",
"forum": "DDaaA4Uldp",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12243/Reviewer_BTLu",
"reviewer_name": "Reviewer_BTLu",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This ... |
YM3SskmtCE | https://openreview.net/forum?id=YM3SskmtCE | ATTS: Asynchronous Test-Time Scaling via Conformal Prediction | 6 | 3 | [
8,
8,
2
] | [
3,
2,
4
] | 3 | [
"Conformal Prediction",
"Test-Time Scaling",
"Speculative Decoding"
] | Large language models (LLMs) benefit from test-time scaling but are often hampered by high inference latency. Speculative decoding is a natural way to accelerate the scaling process; however, scaling along both the parallel and sequential dimensions poses significant challenges, including substantial memory-bound execu... | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=YM3SskmtCE | 2025-09-18T14:52:30 | 10 | [
{
"id": "HqbK5DjFzW",
"forum": "YM3SskmtCE",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10643/Reviewer_LsxV",
"reviewer_name": "Reviewer_LsxV",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The a... | |
MnQD69han5 | https://openreview.net/forum?id=MnQD69han5 | DFVEdit: Conditional Delta Flow Vector for Zero-shot Video Editing | 4 | 3.5 | [
4,
4,
4,
4
] | [
3,
4,
3,
4
] | 4 | [
"zeroshot",
"video editing",
"training free",
"video transformer"
] | The advent of Video Diffusion Transformers (Video DiTs) marks a milestone in video generation. However, directly applying existing video editing methods to Video DiTs often incurs substantial computational overhead, due to resource-intensive attention modification or fine-tuning. To alleviate this problem, we present D... | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=MnQD69han5 | 2025-09-19T10:03:48 | 4 | [
{
"id": "igcn3kldv7",
"forum": "MnQD69han5",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15070/Reviewer_eh2p",
"reviewer_name": "Reviewer_eh2p",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The a... | |
bl9hFm04Lc | https://openreview.net/forum?id=bl9hFm04Lc | Can AI Truly Represent Your Voice in Deliberations? A Comprehensive Study of Large-Scale Opinion Aggregation with LLMs | 5 | 3.5 | [
6,
4,
4,
6
] | [
4,
4,
3,
3
] | 4 | [
"Human Study; Reliable LLM; Public Deliberation; Computational Social Science; Large-Scale Evaluation"
] | Large-scale public deliberations generate thousands of free-form contributions that must be synthesized into representative and neutral summaries for policy use. While LLMs have been shown as a promising tool to generate summaries for large-scale deliberations, they also risk underrepresenting minority perspectives and... | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=bl9hFm04Lc | 2025-09-04T00:20:50 | 4 | [
{
"id": "RKsweVwgMJ",
"forum": "bl9hFm04Lc",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1768/Reviewer_GiWi",
"reviewer_name": "Reviewer_GiWi",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The au... | |
DzecbBEmud | https://openreview.net/forum?id=DzecbBEmud | Differentially and Integrally Attentive Convolutional-based Photoplethysmography Signal Quality Classification | 2.5 | 4 | [
2,
2,
4,
2
] | [
4,
5,
3,
4
] | 4 | [
"Differential Attention",
"Differential Integral Attention",
"Signal Quality",
"Photoplethysmography",
"Wearables"
] | Photoplethysmography (PPG) is a non-intrusive and cost-effective optical technology that detects changes in blood volume within tissues, providing insights into the body’s physiological dynamics over time. By analyzing PPG data as a time series, valuable information about cardiovascular health and other physiological p... | Improving signal quality classification in photoplethysmography-based health applications using differential and integral attention | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=DzecbBEmud | 2025-09-16T02:18:58 | 4 | [
{
"id": "g59Nl6Pi7G",
"forum": "DzecbBEmud",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission6230/Reviewer_7LK8",
"reviewer_name": "Reviewer_7LK8",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 3,
"summary": "This p... |
bppDDqbO3V | https://openreview.net/forum?id=bppDDqbO3V | Dissecting the Role of Positional Encoding in Length Generalization | 4.5 | 3 | [
2,
4,
6,
6
] | [
4,
2,
3,
3
] | 4 | [
"Mechanistic Interpretation",
"Positional Encoding",
"Length Generalization",
"Iteration Head",
"Reasoning Tasks."
] | Length generalization (LG) is a persistent challenge for Transformers. Despite recent studies improving the models' LG capability, its underlying mechanisms are still underexplored. To better understand LG, we propose that LG requires alignment of the model’s inductive bias with the task’s computational structure, and ... | Exploring the mechanism of Positional Encoding in Length Generalization on Reasoning Tasks | interpretability and explainable AI | https://openreview.net/pdf?id=bppDDqbO3V | 2025-09-19T19:05:25 | 4 | [
{
"id": "yvMFEH7FbZ",
"forum": "bppDDqbO3V",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17734/Reviewer_nP5W",
"reviewer_name": "Reviewer_nP5W",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
ehtVTpcjES | https://openreview.net/forum?id=ehtVTpcjES | T³: Test-Time Model Merging in VLMs for Zero-Shot Medical Imaging Analysis | 3.5 | 3.5 | [
2,
4,
2,
6
] | [
4,
3,
3,
4
] | 4 | [
"medical imaging",
"vision language models",
"zero-shot generalization",
"model merging",
"healthcare"
] | In medical imaging, vision-language models face a critical duality: \textit{pretrained} networks offer broad robustness but lack subtle, modality-specific characteristics, while fine-tuned \textit{expert} models achieve high in-distribution accuracy yet falter under modality shift. Existing model-merging techniques, de... | We propose a sample-wise test-time model merging in vision-language models validating enhanced performance across four medical imaging classification tasks on a practical cross-dataset evaluation medical benchmark. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=ehtVTpcjES | 2025-09-19T00:00:27 | 5 | [
{
"id": "NMIjszVuSH",
"forum": "ehtVTpcjES",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12886/Reviewer_VAaC",
"reviewer_name": "Reviewer_VAaC",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This ... |
XNk56rmmiy | https://openreview.net/forum?id=XNk56rmmiy | Towards Adaptive ML Benchmarks: Web-Agent-Driven Construction, Domain Expansion, and Metric Optimization | 3.333333 | 3.333333 | [
2,
2,
6
] | [
4,
4,
2
] | 3 | [
"Benchmark",
"Large Language Models",
"Language Agents",
"End-to-End Machine Learning",
"Evaluation Framework",
"Data Science Automation"
] | Recent advances in large language models (LLMs) have enabled the emergence of general-purpose agents for automating end-to-end machine learning (ML) workflows, including data analysis, feature engineering, model training, and competition solving. However, existing benchmarks remain limited in task coverage, domain dive... | datasets and benchmarks | https://openreview.net/pdf?id=XNk56rmmiy | 2025-09-18T21:19:00 | 3 | [
{
"id": "IoNKb2h43G",
"forum": "XNk56rmmiy",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11551/Reviewer_HQoU",
"reviewer_name": "Reviewer_HQoU",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "The p... | |
4T9ncuf08p | https://openreview.net/forum?id=4T9ncuf08p | Dataset Regeneration for Cross Domain Recommendation | 6 | 3 | [
6,
4,
8
] | [
3,
3,
3
] | 3 | [
"Recommender System",
"Cross-domain recommendation",
"Dataset Regeneration"
] | Cross-domain recommendation (CDR) has emerged as an effective strategy to mitigate data sparsity and cold-start challenges by transferring knowledge from a source domain to a target domain. Despite recent progress, two key issues remain: (i) Sparse overlap. In real-world datasets such as Amazon, the proportion of users... | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=4T9ncuf08p | 2025-09-19T14:22:37 | 3 | [
{
"id": "LCnihkLKE7",
"forum": "4T9ncuf08p",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16305/Reviewer_p3jn",
"reviewer_name": "Reviewer_p3jn",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This ... | |
xFdT63wm5e | https://openreview.net/forum?id=xFdT63wm5e | Unified Continuous Generative Models for Denoising-based Diffusion | 5.5 | 3.5 | [
4,
6,
6,
6
] | [
3,
3,
3,
5
] | 4 | [
"generative modeling",
"denoising diffusion",
"consistency model",
"image generation"
] | Recent advances in continuous generative models, encompassing multi-step processes such as diffusion and flow matching (typically requiring $8$-$1000$ steps) and few-step methods such as consistency models (typically $1$-$8$ steps), have yielded impressive generative performance.
However, existing work often treats... | generative models | https://openreview.net/pdf?id=xFdT63wm5e | 2025-09-20T17:57:38 | 4 | [
{
"id": "c2rOfFPh8s",
"forum": "xFdT63wm5e",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24943/Reviewer_iL3K",
"reviewer_name": "Reviewer_iL3K",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This ... | |
RDAhLHEHDm | https://openreview.net/forum?id=RDAhLHEHDm | Lost in Tokenization: Context as the Key to Unlocking Biomolecular Understanding in Scientific LLMs | 6.5 | 3.5 | [
6,
6,
6,
8
] | [
3,
3,
4,
4
] | 4 | [
"Biomolecular learning",
"Protein sequence"
] | Scientific Large Language Models (Sci-LLMs) have emerged as a promising frontier for accelerating biological discovery. However, these models face a fundamental challenge when processing raw biomolecular sequences: the tokenization dilemma. Whether treating sequences as a specialized language, risking the loss of funct... | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=RDAhLHEHDm | 2025-09-16T23:39:46 | 4 | [
{
"id": "mP2ddusOo8",
"forum": "RDAhLHEHDm",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7811/Reviewer_oqHE",
"reviewer_name": "Reviewer_oqHE",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This p... | |
q6kXd8Gpfj | https://openreview.net/forum?id=q6kXd8Gpfj | LearNAT: Learning NL2SQL with AST-guided Task Decomposition for Large Language Models | 6 | 4.333333 | [
4,
6,
8
] | [
5,
4,
4
] | 3 | [
"Large Language Model",
"Text-to-SQL"
] | Natural Language to SQL (NL2SQL) aims to translate natural language queries into executable SQL statements, offering non-expert users intuitive access to databases. While recent approaches leveraging large-scale private LLMs such as GPT-4 have achieved state-of-the-art results, they face two critical challenges: the la... | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=q6kXd8Gpfj | 2025-09-20T10:39:26 | 3 | [
{
"id": "SLzmpRkouv",
"forum": "q6kXd8Gpfj",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22830/Reviewer_kPT4",
"reviewer_name": "Reviewer_kPT4",
"rating": 4,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This ... | |
EbgCEd8gyN | https://openreview.net/forum?id=EbgCEd8gyN | Sysformer: Safeguarding Frozen Large Language Models with Adaptive System Prompts | 5 | 3.25 | [
6,
4,
6,
4
] | [
3,
3,
3,
4
] | 4 | [
"Large Language Models",
"AI Safety",
"Jailbreaks",
"Guardrails",
"Frozen Model adaptation"
] | As large language models (LLMs) are deployed in safety-critical settings, it is essential to ensure that their responses comply with safety standards. Prior research has revealed that LLMs often fail to grasp the notion of safe behaviors, resulting in either unjustified refusals to harmless prompts or the generation of... | We present Sysformer, a transformer-based mechanism to adapt system prompt based on the user prompts to boost the robustness of LLMs. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=EbgCEd8gyN | 2025-09-18T23:43:05 | 4 | [
{
"id": "DnGSgQPPsM",
"forum": "EbgCEd8gyN",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12772/Reviewer_BNqo",
"reviewer_name": "Reviewer_BNqo",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
VYQuICALXj | https://openreview.net/forum?id=VYQuICALXj | Cross-Modal Redundancy and the Geometry of Vision–Language Embeddings | 5 | 3.5 | [
8,
4,
6,
2
] | [
3,
3,
3,
5
] | 4 | [
"multimodal",
"concepts",
"sparse autoencoder",
"modality gap",
"applications of interpretability"
] | Vision–language models (VLMs) align images and text with remarkable success, yet the geometry of their shared embedding space remains poorly understood.
To probe this geometry, we begin from the Iso-Energy Assumption, which exploits cross-modal redundancy: a concept that is truly shared should exhibit the same average... | Understanding the geometry of multimodality through a concept-based approach, leading to applications like semantic vector arithmetic and modality gap free embeddings. | interpretability and explainable AI | https://openreview.net/pdf?id=VYQuICALXj | 2025-09-18T18:16:51 | 4 | [
{
"id": "P2FGxaMJlL",
"forum": "VYQuICALXj",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11140/Reviewer_U7cA",
"reviewer_name": "Reviewer_U7cA",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This ... |
32QQlzm9ft | https://openreview.net/forum?id=32QQlzm9ft | REFLEX-Med: Reinforcement for Label-Free Explainability in Unified Medical Reasoning | 3.666667 | 3.5 | [
4,
4,
2,
6,
2,
4
] | [
4,
4,
4,
2,
3,
4
] | 6 | [
"medical reasoning",
"large vision-language models",
"explainability"
] | Clinicians urgently need explanations they can audit, not merely fluent chains. Yet prevailing practices conflate interpretability with subjective human/LLM rationales, with post-hoc visuals loosely aligned to answers, or with answer rationale consistency. These proxies are annotation-hungry, bias-prone, and crucially ... | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=32QQlzm9ft | 2025-09-13T17:07:44 | 6 | [
{
"id": "7tRBIbg4Sf",
"forum": "32QQlzm9ft",
"review_number": 9,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4734/Reviewer_vYv4",
"reviewer_name": "Reviewer_vYv4",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This p... | |
wWkyL8D9xd | https://openreview.net/forum?id=wWkyL8D9xd | FastFlow: Accelerating The Generative Flow Matching Models with Bandit Inference | 5.5 | 3.5 | [
4,
6,
6,
6
] | [
4,
3,
3,
4
] | 4 | [
"generative modelling",
"faster inference."
] | Flow-matching models deliver state-of-the-art fidelity in image and video generation, but the inherent sequential denoising process renders them slower. Existing acceleration methods like distillation, trajectory truncation, and consistency approaches are static, require retraining, and often fail to generalize across ... | Adaptive inference method for accelerating flow matching based visual generation. | generative models | https://openreview.net/pdf?id=wWkyL8D9xd | 2025-09-20T18:17:51 | 4 | [
{
"id": "AGajkCDJso",
"forum": "wWkyL8D9xd",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25044/Reviewer_D8Ff",
"reviewer_name": "Reviewer_D8Ff",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This ... |
vv8EcCoBfr | https://openreview.net/forum?id=vv8EcCoBfr | Bilateral Information-aware Test-time Adaptation for Vision-Language Models | 4.333333 | 4.166667 | [
6,
6,
4,
4,
4,
2
] | [
3,
5,
5,
4,
4,
4
] | 6 | [
"Test-time Adaptation",
"Vision Language Model"
] | Test-time adaptation (TTA) fine-tunes models using new data encountered during inference, which enables the vision-language models to handle test data with covariant shifts. Unlike training-time adaptation, TTA does not require a test-distributed validation set or consider the worst-case distribution within a given tol... | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=vv8EcCoBfr | 2025-09-17T09:29:50 | 6 | [
{
"id": "dSxbk5YjnL",
"forum": "vv8EcCoBfr",
"review_number": 6,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8180/Reviewer_kaur",
"reviewer_name": "Reviewer_kaur",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The au... | |
1AYy3T3Xjk | https://openreview.net/forum?id=1AYy3T3Xjk | A Process-Level Method for Creativity Evaluation in LLM-Assisted Learning | 2.5 | 3.5 | [
2,
2,
4,
2
] | [
4,
3,
3,
4
] | 4 | [
"LLM",
"Creativity assessment",
"Process-level evaluation"
] | Interpretable creativity assessment remains challenging, and the adoption of large language models (LLMs) in education amplifies issues of subjectivity and opacity. This study presents a process-level evaluation approach for LLM-assisted learning that attributes learner-versus-model contributions from multi-turn studen... | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=1AYy3T3Xjk | 2025-09-20T07:12:15 | 4 | [
{
"id": "WtYVSc2PGG",
"forum": "1AYy3T3Xjk",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21916/Reviewer_PNVN",
"reviewer_name": "Reviewer_PNVN",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The p... | |
IsOMU137M3 | https://openreview.net/forum?id=IsOMU137M3 | scCMIA: Self-supervised Dual Model for Mitigating Information Loss in Single-cell Cross-Modal Alignment | 3 | 3.75 | [
4,
2,
2,
4
] | [
3,
4,
4,
4
] | 4 | [
"Single-cell",
"Self-supervised",
"Alignment",
"Reconstruction",
"scRNA",
"scATAC"
] | Recent technological advances in single-cell sequencing have enabled simultaneous profiling of multiple omics modalities within individual cells. Despite these advancements, challenges such as high noise levels and information loss during computational integration persist. While existing methods align different modalit... | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=IsOMU137M3 | 2025-09-19T00:19:32 | 4 | [
{
"id": "jW5h2MOgxY",
"forum": "IsOMU137M3",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12979/Reviewer_JfMn",
"reviewer_name": "Reviewer_JfMn",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This ... | |
LwjUKEWAvt | https://openreview.net/forum?id=LwjUKEWAvt | SafetyChat: Learning to Generate Physical Safety Warnings in Instructional Assistants | 4 | 3.5 | [
4,
4,
6,
2
] | [
3,
4,
3,
4
] | 4 | [
"Physical Safety",
"Instructional AI Assistant",
"LLM"
] | While large language models (LLMs) excel in language generation and conversational abilities, their broader utility hinges on meeting additional requirements to ensure reliability and safety. Recent research has explored areas such as minimizing hallucinations, grounding outputs in credible sources, and safeguarding us... | A new physical safety task for LLM chat assistant, a new dataset, and strong alignment results. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=LwjUKEWAvt | 2025-09-19T21:35:40 | 4 | [
{
"id": "zyVNBiJZ57",
"forum": "LwjUKEWAvt",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18539/Reviewer_6je4",
"reviewer_name": "Reviewer_6je4",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This ... |
6L3yCjx9s3 | https://openreview.net/forum?id=6L3yCjx9s3 | Dimension-Adaptive MCTS: Optimal Sample Complexity for Continuous Action Planning | 4.5 | 3 | [
6,
4,
4,
4
] | [
3,
4,
3,
2
] | 4 | [
"Monte-Carlo Tree Search; Continuous Reinforcement Learning Planning"
] | We study continuous-action Monte Carlo Tree Search (MCTS) in a $d$-dimensional action space when the
optimal action-value function $Q^*(s,\cdot)$ is $\beta$-Hölder continuous with constant~$L$. We show that a
dimension-adaptive $\varepsilon$-net schedule combined with power-mean backups and a polynomial exploration
... | reinforcement learning | https://openreview.net/pdf?id=6L3yCjx9s3 | 2025-09-19T23:10:22 | 4 | [
{
"id": "7HT5wVygqT",
"forum": "6L3yCjx9s3",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19231/Reviewer_pCZE",
"reviewer_name": "Reviewer_pCZE",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This ... | |
PRHNKeaZpP | https://openreview.net/forum?id=PRHNKeaZpP | Human-in-the-Loop Policy Optimization for Preference-Based Multi-Objective Reinforcement Learning | 4 | 3.75 | [
4,
4,
4,
4
] | [
4,
4,
3,
4
] | 4 | [
"Multi-objective reinforcement learning",
"human-in-the-loop",
"preference learning"
] | Multi-objective reinforcement learning (MORL) seeks policies that effectively balance conflicting objectives. However, presenting many diverse policies without accounting for the decision maker’s (DM’s) preferences can overwhelm the decision-making process. On the other hand, accurately specifying preferences in advanc... | We propose PBMORL, a human-in-the-loop MORL framework that learns preferences from limited feedback to efficiently discover high-quality, preference-aligned policies. | reinforcement learning | https://openreview.net/pdf?id=PRHNKeaZpP | 2025-09-18T22:48:56 | 4 | [
{
"id": "6bI8VtVUo8",
"forum": "PRHNKeaZpP",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12302/Reviewer_yZQN",
"reviewer_name": "Reviewer_yZQN",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This ... |
G3uNHQpP7J | https://openreview.net/forum?id=G3uNHQpP7J | Multi-Domain Transferable Graph Gluing for Building Graph Foundation Models | 6 | 3.5 | [
4,
8,
6,
6
] | [
2,
4,
4,
4
] | 4 | [
"Multi-domain graph pre-training",
"graph neural network",
"graph foundation model",
"Riemannian geometry"
] | Multi-domain graph pre-training integrates knowledge from diverse domains to enhance performance in the target domains, which is crucial for building graph foundation models. Despite initial success, existing solutions often fall short of answering a fundamental question: how is knowledge integrated or transferred acro... | From differential geometry perspective, we present a novel framework that merges multi-domain graphs into a unified, smooth manifold with geometric consistency, enabling quantifiable transferability and geometric scaling behavior. | learning on graphs and other geometries & topologies | https://openreview.net/pdf?id=G3uNHQpP7J | 2025-09-15T23:32:49 | 4 | [
{
"id": "Zw6xvN1iuH",
"forum": "G3uNHQpP7J",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission6004/Reviewer_w9Kq",
"reviewer_name": "Reviewer_w9Kq",
"rating": 4,
"confidence": 2,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This p... |
7AXP2RYw2N | https://openreview.net/forum?id=7AXP2RYw2N | Video-MTR: Reinforced Multi-Turn Reasoning for Long Video Understanding | 4.666667 | 4 | [
6,
4,
4
] | [
3,
5,
4
] | 3 | [
"Long-form video understanding;MLLM; multi-turn reasoning"
] | Long-form video understanding, characterized by long-range temporal dependencies and multiple events, remains a challenge. Existing methods often rely on static reasoning or external visual-language models (VLMs), which face issues like complexity and sub-optimal performance due to the lack of end-to-end training. In t... | leveraging end-to-end RL to enable MLLMs to perform multi-turn reasoning. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=7AXP2RYw2N | 2025-09-17T11:49:28 | 3 | [
{
"id": "oVe9T3jgzW",
"forum": "7AXP2RYw2N",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8384/Reviewer_StqD",
"reviewer_name": "Reviewer_StqD",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The pa... |
khBHJz2wcV | https://openreview.net/forum?id=khBHJz2wcV | Physics-Constrained Fine-Tuning of Flow-Matching Models for Generation and Inverse Problems | 3 | 3.75 | [
4,
6,
2,
0
] | [
4,
3,
4,
4
] | 4 | [
"Generative Modeling",
"Physics‑Informed Machine Learning",
"Inverse Problems",
"Parameter Identification"
] | We present a framework for fine-tuning flow-matching generative models to enforce physical constraints and solve inverse problems in scientific systems. Starting from a model trained on low-fidelity or observational data, we apply a differentiable post-training procedure that minimizes weak-form residuals of governing ... | generative models | https://openreview.net/pdf?id=khBHJz2wcV | 2025-09-19T19:19:43 | 4 | [
{
"id": "150xxQEo77",
"forum": "khBHJz2wcV",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17809/Reviewer_Mbyw",
"reviewer_name": "Reviewer_Mbyw",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The p... | |
O4Oy7NsSwG | https://openreview.net/forum?id=O4Oy7NsSwG | Topology and geometry of the learning space of ReLU networks: connectivity and singularities | 5.5 | 3.25 | [
4,
6,
6,
6
] | [
4,
3,
4,
2
] | 4 | [
"learning dynamics",
"topology",
"neural networks",
"ReLU networks",
"geometry",
"symmetry",
"loss landscape",
"gradient",
"singularity",
"connectedness"
] | Understanding the properties of the parameter space in feed-forward ReLU networks is critical for effectively analyzing and guiding training dynamics. After initialization, training under gradient flow decisively restricts the parameter space to an algebraic variety that emerges from the homogeneous nature of the ReLU ... | learning theory | https://openreview.net/pdf?id=O4Oy7NsSwG | 2025-09-13T19:39:01 | 4 | [
{
"id": "QDl0LSwCbp",
"forum": "O4Oy7NsSwG",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4772/Reviewer_pzFJ",
"reviewer_name": "Reviewer_pzFJ",
"rating": 4,
"confidence": 4,
"soundness": 4,
"contribution": 2,
"presentation": 4,
"summary": "The pa... | |
iITycdPaOd | https://openreview.net/forum?id=iITycdPaOd | Structure before the Machine: Input Space is the Prerequisite for Concepts | 3 | 3.5 | [
4,
2,
4,
2
] | [
3,
4,
4,
3
] | 4 | [
"Spectral Principal Paths",
"Linear Representation Hypothesis",
"Representation Learning"
] | High-level representations have become a central focus in enhancing AI transparency and control, shifting attention from individual neurons or circuits to structured semantic directions that align with human-interpretable concepts. Motivated by the Linear Representation Hypothesis (LRH), we propose the Input-Space Line... | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=iITycdPaOd | 2025-09-19T02:08:45 | 4 | [
{
"id": "3KtAkpoZn3",
"forum": "iITycdPaOd",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13528/Reviewer_UAP6",
"reviewer_name": "Reviewer_UAP6",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "The p... | |
oRmo4p1KEE | https://openreview.net/forum?id=oRmo4p1KEE | QuadGPT: Native Quadrilateral Mesh Generation with Autoregressive Models | 5.5 | 3.75 | [
8,
4,
4,
6
] | [
4,
3,
4,
4
] | 4 | [
"Autoregressive Quad Mesh Generation",
"Reinforcement Learning",
"Topology Optimization"
] | The generation of quadrilateral-dominant meshes is a cornerstone of professional 3D content creation.
However, existing generative models generate quad meshes by first generating triangle meshes and then merging triangles into quadrilaterals with some specific rules, which typically produces quad meshes with poor topo... | A novel method that directly generates quad-dominant meshes with superior topology, overcoming the limitations of conversion-based approaches. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=oRmo4p1KEE | 2025-09-01T20:54:56 | 4 | [
{
"id": "w9Icudi0Iw",
"forum": "oRmo4p1KEE",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission213/Reviewer_87rP",
"reviewer_name": "Reviewer_87rP",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 4,
"presentation": 3,
"summary": "This pa... |
0aBAAS0rRT | https://openreview.net/forum?id=0aBAAS0rRT | Map as a Prompt: Learning Multi-Modal Spatial-Signal Foundation Models for Cross-scenario Wireless Localization | 5.333333 | 2.666667 | [
6,
4,
6
] | [
2,
3,
3
] | 3 | [
"Wireless Localization",
"Foundation Models",
"Self-Supervised Learning",
"Fine-Tuning",
"6G Networks"
] | Accurate and robust wireless localization is a critical enabler for emerging 5G/6G applications, including autonomous driving, extended reality, and smart manufacturing. Despite its importance, achieving precise localization across diverse environments remains challenging due to the complex nature of wireless signals a... | We propose SigMap, a foundation model that uses self-supervised learning with cycle-adaptive masking and map-conditioned prompting to achieve accurate and generalizable wireless localization across diverse scenarios. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=0aBAAS0rRT | 2025-09-17T17:40:02 | 3 | [
{
"id": "otZhJeUNq0",
"forum": "0aBAAS0rRT",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8908/Reviewer_ct8k",
"reviewer_name": "Reviewer_ct8k",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This p... |
Zz2gtWX8wn | https://openreview.net/forum?id=Zz2gtWX8wn | ReviewScore: Misinformed Peer Review Detection with Large Language Models | 4.5 | 3 | [
8,
2,
4,
4
] | [
3,
3,
3,
3
] | 4 | [
"Peer Review Evaluation",
"Argument Evaluation",
"Critical Thinking",
"Logic",
"Large Language Models",
"Neurosymbolic Approaches"
] | Peer review serves as a backbone of academic research, but in most AI conferences, the review quality is degrading as the number of submissions explodes. To reliably detect low-quality reviews, we define misinformed review points as either "weaknesses" in a review that contain incorrect premises, or "questions" in a re... | We introduce ReviewScore, a new evaluation of peer review quality, focusing on detecting misinformed review points. | datasets and benchmarks | https://openreview.net/pdf?id=Zz2gtWX8wn | 2025-09-18T06:55:12 | 4 | [
{
"id": "42tS1loj06",
"forum": "Zz2gtWX8wn",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9929/Reviewer_GJmX",
"reviewer_name": "Reviewer_GJmX",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The pa... |
zwLpUxiqSE | https://openreview.net/forum?id=zwLpUxiqSE | Space Filling Curves as Spatial Priors for Small or Data-Scarce Vision Transformers | 4.5 | 3.5 | [
6,
6,
2,
4
] | [
3,
4,
4,
3
] | 4 | [
"space filling curves",
"ViT",
"spatial priors"
] | Vision Transformers (ViTs) have become a dominant backbone in computer vision, yet their attention mechanism lacks inherent spatial inductive biases, which are especially crucial in small models and low-data regimes. Inspired by the masking in Linear Transformers and the scanning patterns of Vision SSMs, we propose VIO... | A new attention mechanism for vision backbones using Space Filling Curves improving both fine-tuning and pre-training of ViTs. | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=zwLpUxiqSE | 2025-09-20T01:56:33 | 4 | [
{
"id": "HX5jH4abzP",
"forum": "zwLpUxiqSE",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20303/Reviewer_8zFr",
"reviewer_name": "Reviewer_8zFr",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The m... |
jeTiBeW3iZ | https://openreview.net/forum?id=jeTiBeW3iZ | Memorization Through the Lens of Sample Gradients | 5 | 3.75 | [
6,
6,
2,
6
] | [
3,
3,
5,
4
] | 4 | [
"Memorization",
"Sample Gradients"
] | Deep neural networks are known to often memorize underrepresented, hard examples, with implications for generalization and privacy. Feldman & Zhang (2020) defined a rigorous notion of memorization.
However it is prohibitively expensive to compute at scale because it requires training models both with and without the ... | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=jeTiBeW3iZ | 2025-09-18T23:24:19 | 4 | [
{
"id": "z3GZqYPWjF",
"forum": "jeTiBeW3iZ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12621/Reviewer_P7vQ",
"reviewer_name": "Reviewer_P7vQ",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "The p... | |
eWBu4tY9ta | https://openreview.net/forum?id=eWBu4tY9ta | Safeguarding Multimodal Knowledge Copyright in the RAG-as-a-Service Environment | 4.666667 | 3.333333 | [
4,
4,
6
] | [
3,
3,
4
] | 3 | [
"Watermark",
"VLM",
"Dataset Copyright Protection"
] | As Retrieval-Augmented Generation (RAG) evolves into service-oriented platforms (Rag-as-a-Service) with shared knowledge bases, protecting the copyright of contributed data becomes essential. Existing watermarking methods in RAG focus solely on textual knowledge, leaving image knowledge unprotected. In this work, we pr... | An effective watermarking framework for protecting the copyright of multimodal knowledge, especially image knowledge, in RaaS. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=eWBu4tY9ta | 2025-09-19T16:34:38 | 3 | [
{
"id": "38wcTQEGqV",
"forum": "eWBu4tY9ta",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16979/Reviewer_k6AT",
"reviewer_name": "Reviewer_k6AT",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "This ... |
T9ikO8tXfY | https://openreview.net/forum?id=T9ikO8tXfY | Breaking Down and Building Up: Mixture of Skill-Based Vision-and-Language Navigation Agents | 4 | 4 | [
4,
4,
4
] | [
4,
3,
5
] | 3 | [
"Vision-and-Language Navigation",
"Skill-Based Agents",
"Mixture-of-Experts"
] | Vision-and-Language Navigation (VLN) poses significant challenges for agents to interpret natural language instructions and navigate complex 3D environments. While recent progress has been driven by large-scale pre-training and data augmentation, current methods still struggle to generalize to unseen scenarios, particu... | We propose SkillNav, a modular framework that decomposes navigation into interpretable atomic skills and uses a vision-language model router to achieve state-of-the-art generalization in vision-and-language navigation. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=T9ikO8tXfY | 2025-09-19T07:48:19 | 3 | [
{
"id": "nXQ34dRGRC",
"forum": "T9ikO8tXfY",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14576/Reviewer_WcPc",
"reviewer_name": "Reviewer_WcPc",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The p... |