| year (int64) | id (string) | rating (list) | decision (string) | reviewer_comments (list) | _raw_metadata (dict) |
|---|---|---|---|---|---|
2026
|
00F7BfXLYJ
|
[
4,
4,
4,
4
] |
[
{
"content": "This paper addresses the limitations of current Multimodal Large Language Models (MLLMs) in deep logical reasoning for video understanding—such as feed-forward processing constraints (lack of self-correction), poor test-time scaling, and hallucinations. Inspired by cybernetic principles (control, communication, self-regulation), it proposes CyberV, a training-free, test-time adaptive scaling framework that redesigns video MLLMs into closed-loop adaptive systems.",
"id": "turFNyeA8W",
"rating": 4
},
{
"content": "CyberV proposes a test-time, control-theoretic framework to boost logical reasoning in video understanding without any additional training. It runs a Best-of-N (BoN) set of reasoning paths (base + multiple CoT variants), uses a “Sensor” to measure attention drift between base and CoT answers (from the last-layer attention of the answer token to video/subtitle segments), and a “Controller” (Score Forest) to aggregate multi-signals (attention retention, confidence, stability, rank, repetition) into a TopScore that decides whether to stop or trigger feedback. When uncertain, CyberV performs targeted inference feedback by extracting key frames from segments with the largest negative drift (optionally with dense temporal sampling or spatial zoom-in) and re-injects them for a second round (N=1) to correct evidence usage. Across VideoMMMU, MMVU-MCQ, and MMR-V, the method consistently improves accuracy—often substantially for small open-source MLLMs—and avoids the perception degradation that naïve CoT can cause on perception-centric benchmarks. The approach emphasizes a lightweight, training-free, closed-loop that couples evidence perception with reasoning, showing strong performance-efficiency trade-offs (e.g., peak gains around N=8) and pointing to future work on more robust feedback selection and broader free-form generation.",
"id": "BbADxAAQx6",
"rating": 4
},
{
"content": "This paper designed a test-time scaling framework inspired by cybernetics, consisting of a MLLM, a sensor and a controller which are working together to determin the execution path of MLLM in multimodal reasoning. Experiments suggests that this framework can significantly improves the accuracy of esxisting MLLMs on certain benchmarks.",
"id": "Qs3Vw5qFe3",
"rating": 4
},
{
"content": "This paper introduces **CyberV**, an approach that leverages cybernetic structures to enhance the reasoning performance of Multi-Modal Large Language Models (MLLMs).",
"id": "f2QI7mx6wj",
"rating": 4
}
] |
{
"cdate": 1757998013559,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025cyberv,\ntitle={CyberV: A Cybernetic Framework for Enhancing Logical Reasoning in Video Understanding},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=00F7BfXLYJ},\nnote={under review}\n}"
},
"abstract": {
"value": "Current Multimodal Large Language Models (MLLMs) may struggle with tasks requiring deep logical reasoning about video content, primarily stemming from the feed-forward processing nature, which limits their ability for self-correction and iterative refinement. To address these limitations, we propose a novel framework inspired by cybernetic principles, redesigning video MLLMs as adaptive systems capable of self-monitoring, self-correction, and dynamic resource allocation during inference. Our approach, CyberV, introduces a cybernetic loop consisting of an MLLM Inference System, a Sensor, and a Controller. Specifically, the sensor monitors MLLM forward processes. It collects intermediate interpretations, such as attention drift, then the controller determines when and how to trigger self-correction and generate feedback to guide the next round. This test-time adaptive scaling framework enhances frozen MLLMs without requiring training or additional components. Experiments demonstrate significant improvements on complex reasoning benchmarks: CyberV boosts Qwen2.5-VL-7B by 8.3% and InternVL3-8B by 5.5% on VideoMMMU, surpassing the competitive proprietary model GPT-4o. When applied to Qwen2.5-VL-72B, it yields a 10.0% improvement, achieving performance even comparable to human experts. Furthermore, on other reasoning-focused benchmarks, our method shows consistent gains of 4.6% on the multiple-choice question section of MMVU and 2.4% on MMR-V, highlighting its robustness in enhancing logical reasoning for video understanding. The code will be released to support further research."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Video Understanding",
"Multimodal Large Language Models",
"Test-Time Scaling"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/6befca6b66a747daaa91eea1475167c914c23565.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "CyberV: A Cybernetic Framework for Enhancing Logical Reasoning in Video Understanding"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "00F7BfXLYJ",
"id": "00F7BfXLYJ",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission6845/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897888857,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission6845/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission6845/Authors"
]
}
|
|
2026
|
00HNN8O7Ni
|
[
4,
2,
2,
4
] |
[
{
"content": "This paper proposed a new reinforcement learning framework of synthesizing hardware circuits based on the feedback from model checking results.\nThe experiments are based on open datasets and the results are outperform supervised learning baselines.\n\nPros:\n1. The integration of model checking results and circuit synthesis is interesting.\n\nCons:\n1. Using feedback from formal methods for learning is not novel, the novelty of the method is limited.\n2. The experiment results are limited and not convincing.",
"id": "XWl4ZN0lS1",
"rating": 4
},
{
"content": "This paper proposes an approach for synthesizing circuits from linear temporal logic (LTL) specifications using machine learning. The method builds on prior work by integrating model checker feedback and adding a search component for circuit size optimization. The approach is evaluated on several datasets.",
"id": "TeuZ9Av2LB",
"rating": 2
},
{
"content": "This paper addresses the limitations of existing deep learning approaches to reactive synthesis—where supervised learning is confined to imitating synthesis tools and reinforcement learning has slow convergence. It proposes a hybrid method that initializes models via supervised learning, then refines them using model checking feedback to prioritize correct circuit synthesis over tool imitation.\n\nReactive synthesis, which constructs systems satisfying linear temporal logic specifications (critical for hardware design), is computationally hard (2EXPTIME-complete), leading traditional tools to timeout even for small specs. The paper’s hybrid framework first trains an initial model ($M_0$) on 200,000 Strix-generated specification-circuit pairs (supervised phase). In the second phase, it verifies the model’s predicted circuits ($\\hat{C}$) with nuXmv: if $\\hat{C}$ meets the spec, it reinforces the model with $(\\varphi, \\hat{C})$; if not, it falls back to the dataset’s correct circuit ($C$).\n\nThree core variants extend the framework: 1) \"Reinforcing Learned Semantics\" boosts generalization by leveraging correct non-dataset predictions; 2) \"Expert Iteration\" uses beam search (top-k predictions) to improve performance and minimize circuit size with 54% smaller than Strix on average; 3) \"Iterating on Open Problems\" samples unsolvable Timeouts dataset to exceed tool capabilities.\n\nExperiments on hierarchical transformers and fine-tuned CodeT5 show state-of-the-art results: CodeT5 with expert iteration hits 89.3% on Testset and 51.9% on Timeouts. The method advances reactive synthesis by combining efficiency, correctness, and scalability beyond traditional tools.",
"id": "LRlqwKLS5Y",
"rating": 2
},
{
"content": "Reactive synthesis is the problem of synthesizing finite-state models from temporal logic specifications. This paper explores if deep learning can be used to solve this problem. Compared to earlier attempts to use ML for reactive synthesis, the new ideas include use of a model checker to give feedback to update the model, use of top-k predictions for improving the quality of learnt solutions, and iterating on problems that model fails to solve. The methods are implemented and evaluated on benchmarks for synthesis competitions.",
"id": "pm3iJpCRUd",
"rating": 4
}
] |
{
"cdate": 1758322705432,
"content": {
"TLDR": {
"value": "We propose a deep learning approach for reactive synthesis that first initializes a model with imitation learning and then continues training by reinforcing formally verified solutions."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025learning,\ntitle={Learning Reactive Synthesis from Model Checking Feedback},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=00HNN8O7Ni},\nnote={under review}\n}"
},
"abstract": {
"value": "Deep learning applications to formal verification typically fall into one of two categories: employing reinforcement learning that suffers from slow convergence, or supervised learning that suffers from limited exploration. For reactive synthesis, the problem of automatically constructing a system that satisfies a formal specification, existing approaches fall into the latter category. In this paper, we propose a hybrid approach that only initializes the model with supervised learning and then continues training by reinforcing formally verified predictions. We show that by training the model to synthesize correct solutions rather than fixating on the supervised data, performance substantially improves. We can further utilize our approach to optimize for size without any performance degradation. Finally, we show that we can iteratively reinforce on open problems that synthesis tools are unable to solve. Our approach is demonstrated for both deep neural networks trained from scratch and pre-trained models fine-tuned on reactive synthesis, establishing new state-of-the-art results for learning reactive synthesis."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Temporal Logic",
"Reactive Synthesis",
"Expert Iteration"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/34d3a3eeb460a6177f52996e217332dfd2836e22.pdf"
},
"primary_area": {
"value": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Learning Reactive Synthesis from Model Checking Feedback"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "00HNN8O7Ni",
"id": "00HNN8O7Ni",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission21857/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896899730,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission21857/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission21857/Authors"
]
}
|
|
2026
|
00UQtHqB2k
|
[
2,
6,
2,
4
] |
[
{
"content": "The paper proposes a unified way to evaluate group fairness through sparsity. It studies links among Maximum Pairwise Difference, the Gini Index, and a PQ Index and argues that higher sparsity means lower fairness. Based on this view, it replaces the pairwise step in common criteria with a sparsity measure and defines S-SP and S-EO for classification and regression, with formulas and properties for PQ. Experiments across several datasets and bias mitigation methods show similar trends to MPD-style metrics and some differences in intersectional settings. The paper positions the work as an evaluation framework rather than a training algorithm.",
"id": "HQDVgNXwzo",
"rating": 2
},
{
"content": "The paper presents a novel framework for fairness evaluation based on sparsity. The authors first propose the use of the PQ index, originally introduced for pruning, as a sparsity measure for fairness evaluation, in a manner similar to the Gini Index. They then describe the properties of this index in comparison to the Gini Index, including differences with respect to the Maximum Pairwise Difference (MPD). The paper further outlines currently used fairness metrics based on MPD and suggests replacing MPD with alternative sparsity measures such as the Gini or PQ index.\nThe authors demonstrate that the behavior of the proposed metrics aligns with that of standard fairness metrics when applied to a binary sensitive attribute and bias mitigation algorithms. Moreover, they show that these sparsity-based metrics are better suited for capturing fairness in scenarios where the sensitive attribute consists of multiple groups. This is because both the Gini and PQ indices consider the full vector of group values, rather than just the maximum and minimum, and thus capture disparities more effectively.",
"id": "S7pg08xnu9",
"rating": 6
},
{
"content": "This paper experimentally examines the use of the PQ-index [1] in place of max pairwise distances (MPD) in two fairness criteria (statistical parity and equalized odds). The comparison is performed on 6 datasets used for fair classification and regression. Experimental results show that the baseline and sparsity-based measures of fairness have similar tradeoff curves between model performance and fairness. Experiments examining intersectional fairness were done on a single dataset. Authors claim these results suggest that sparsity-based fairness metrics may be more sensitive to heterogeneity in the groups.",
"id": "HxkFe3LDw4",
"rating": 2
},
{
"content": "This paper proposes a unified framework for evaluating algorithmic fairness through sparsity measures. The authors theoretically analyze the PQ Index as a sparsity measure, establish its relationships with MPD, and reformulate classical fairness metrics (SP and EO) in terms of sparsity. Experiments on multiple datasets and with several bias mitigation methods demonstrate empirical alignment between sparsity-based and traditional fairness measures.",
"id": "eRZZAU8odl",
"rating": 4
}
] |
{
"cdate": 1758232139112,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025toward,\ntitle={Toward Unifying Group Fairness Evaluation from a Sparsity Perspective},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=00UQtHqB2k},\nnote={under review}\n}"
},
"abstract": {
"value": "Ensuring algorithmic fairness remains a significant challenge in machine learning, particularly as models are increasingly applied across diverse domains. While numerous fairness criteria exist, they often lack generalizability across different machine learning problems. This paper examines the connections and differences among various sparsity measures in promoting fairness and proposes a unified sparsity-based framework for evaluating algorithmic fairness. The framework aligns with existing fairness criteria and demonstrates broad applicability to a wide range of machine learning tasks. We demonstrate the effectiveness of the proposed framework as an evaluation metric through extensive experiments on a variety of datasets and bias mitigation methods. This work provides a novel perspective to algorithmic fairness by framing it through the lens of sparsity and social equity, offering potential for broader impact on fairness research and applications."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Fairness",
"Sparsity",
"Unified Framework"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/219ccddd225cef5a883ca674d9f1b6bc2e08423c.pdf"
},
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/fde30f02a6849cd5c614e87efe679a0e788d23bb.zip"
},
"title": {
"value": "Toward Unifying Group Fairness Evaluation from a Sparsity Perspective"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "00UQtHqB2k",
"id": "00UQtHqB2k",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission14292/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897378369,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission14292/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission14292/Authors"
]
}
|
|
2026
|
017F77AYeQ
|
[
2,
2,
4,
0
] |
[
{
"content": "The paper proposes SMART-3D, a mask token modeling approach for 3D generation.",
"id": "gZowcvNNqh",
"rating": 2
},
{
"content": "The paper proposes an framework that merges masked autoregressive generation with diffusion modeling and linear attention, addressing key efficiency bottlenecks in 3D shape generation. However, technically novelty and evaluation are limited.",
"id": "kE0H4cZdnO",
"rating": 2
},
{
"content": "This paper introduces SMART-3D (Scaling Masked AutoRegressive Transformers for 3D generation) for 3D shape generation. The framework combines the modeling capability of autoregressive models with the efficiency of masked generation strategies. It uses progressive masked decoding to enable parallel decoding and reduce sampling steps, and employs a linear attention mechanism to lower computational complexity, achieving state-of-the-art performance in both generation quality and speed.",
"id": "WIDwzbIezO",
"rating": 4
},
{
"content": "-",
"id": "dS8t6uDrPN",
"rating": 0
}
] |
{
"cdate": 1758113495159,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025smartd,\ntitle={{SMART}-3D: Scaling Masked AutoRegressive Transformer for Efficient 3D Shape Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=017F77AYeQ},\nnote={under review}\n}"
},
"abstract": {
"value": "Autoregressive models have shown promise in 3D shape generation by modeling complex spatial dependencies between discrete shape tokens. However, their sequential nature and token-by-token sampling limit scalability and generation speed, especially for high-resolution shapes. In this work, we propose SMART-3D (Scaling Masked AutoRegressive Transformers for 3D generation), a novel framework that combines the modeling capacity of autoregressive transformers with the efficiency of masked generation. By introducing a hierarchical token representation and a progressive masked generation schedule, SMART-3D enables parallel decoding of 3D structures without sacrificing autoregressive fidelity. We further optimize the model with spatially-aware masking and lightweight transformer blocks, allowing generation of detailed 3D shapes with significantly reduced computational overhead. Experiments on ShapeNet, ModelNet, and ShapeNet-55 datasets demonstrate that SMART-3D achieves state-of-the-art performance in both generation quality and speed, outperforming previous competitive baselines. Our approach offers a scalable and practical solution for high-fidelity 3D shape synthesis in real-world applications."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Autoregressive models",
"3D shape generation"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/676ed3977332fe4f530434b6e3796debb83cbe57.pdf"
},
"primary_area": {
"value": "generative models"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "SMART-3D: Scaling Masked AutoRegressive Transformer for Efficient 3D Shape Generation"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "017F77AYeQ",
"id": "017F77AYeQ",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission9157/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897740443,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission9157/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission9157/Authors"
]
}
|
|
2026
|
023yMrtHQP
|
[
4,
4,
4
] |
[
{
"content": "This paper introduces a prompting framework, named Expectation–Evidence Prompting (EEP), for large language models to enhance factual verification. Drawing from the Strategic Use of Evidence technique in cognitive psychology, EEP involves generating two sets of expectations, supportive and refutational, and comparing them to observed evidence using a semantic consistency function. The framework is also extended to a supervised learning setup with cross-entropy loss and regularization. Evaluated on three benchmarks using GPT-3.5-turbo, EEP outperforms baselines like Chain-of-Thought, Self-Ask, and Decompose.",
"id": "9JIFVlrjLv",
"rating": 4
},
{
"content": "This paper introduces Expectation–Evidence Prompting (EEP), a cognitive science inspired framework for factual verification in large language models (LLMs). Instead of directly mapping claims to truth labels, EEP guides the model to generate supportive and refutational expectations about what evidence should exist if a claim were true or false. These expectations are then compared to observed evidence using a semantic consistency function, producing support and refutation scores. This is evaluated with a variety of methods including Implicit LLM reasoning, embedding similarity and Natural Language Inference. A claim is accepted, rejected, or abstained from based on thresholded scores. The authors motivate EEP with parallels to the Strategic Use of Evidence (SUE) technique in investigative psychology and evaluate it on FEVER, PubHealth, and SciFact. EEP achieves competitive results, notably 86.3 macro-F1 on FEVER (+3.6 over CoT), 82.1 precision on PubHealth, and 76.1 F1 on the SUPPORTS class in SciFact. EEP thus formalizes a bidirectional reasoning mechanism that improves interpretability and robustness compared to Chain-of-Thought (CoT), Self-Ask, and DECOMP prompting.",
"id": "I6x7K1kcyF",
"rating": 4
},
{
"content": "This paper introduces Expectation–Evidence Prompting (EEP), a cognitively inspired prompting framework for factual verification with large language models (LLMs). Drawing on the Strategic Use of Evidence (SUE) technique from cognitive psychology, EEP prompts the LLM to generate both supportive and refutational expectations for a claim, then explicitly compares these with observed evidence to make a structured three-way decision: support, refute, or abstain. The method is evaluated on three standard fact-checking benchmarks (FEVER, PubHealth, SciFact) and compared to strong prompting baselines (Standard, Chain-of-Thought, Self-Ask, DECOMP). EEP achieves state-of-the-art macro-F1 on FEVER and strong precision on PubHealth, with consistent gains in main metrics.",
"id": "WiPOdGfIDz",
"rating": 4
}
] |
{
"cdate": 1758292986416,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025expectationevidence,\ntitle={Expectation{\\textendash}Evidence Prompting: Structuring Verification by Comparing Expected and Observed Evidence},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=023yMrtHQP},\nnote={under review}\n}"
},
"abstract": {
"value": "Large language models (LLMs) often fail in factual verification due to hallucinations, unreliable truthfulness judgments, and opaque reasoning. We identify a structural limitation underlying these failures: LLMs directly compare claims with evidence without accounting for expected refutational alternatives. Specifically, we demonstrate that this omission leads to ambiguity in contradiction detection and unreliable abstention. Leveraging this observation, we introduce Expectation-Evidence Prompting (EEP), a cognitively inspired strategy that first generates supportive and refutational expectations from a claim and then aligns them with observed evidence. This bidirectional reasoning process enforces logical symmetry, reduces bias toward agreement, and provides a principled abstention mechanism. Across three fact-checking benchmarks: FEVER, PubHealth, and SciFact, EEP achieves consistent gains over strong prompting baselines, including an 86.3 macro-F1 on FEVER (+3.6 over Chain-of-Thought), 82.1 precision on PubHealth (highest among all methods), and 76.1 F1 on the Supports class in SciFact. These results demonstrate that embedding expectation evidence alignment into prompt design yields more interpretable, robust, and trustworthy factual reasoning in LLMs."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Large Language Models (LLMs)",
"Factual Verification",
"Prompt Engineering",
"Cognitive Psychology–Inspired Prompting",
"Expectation–Evidence Alignment",
"Contradiction Detection",
"Abstention Mechanism"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/da7fb984ac74ee03e0b7788c1519b84d690a4cbf.pdf"
},
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Expectation–Evidence Prompting: Structuring Verification by Comparing Expected and Observed Evidence"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "023yMrtHQP",
"id": "023yMrtHQP",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission19036/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897064617,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission19036/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission19036/Authors"
]
}
|
|
2026
|
02NbD16OnA
|
[
4,
4,
4,
6
] |
[
{
"content": "This paper introduces DECEPTIONDECODED, a multimodal news benchmark with explicitly defined creator intent to support misleading intent detection, source attribution, and desire inference. It reveals that current VLMs fail to reason about intent beyond surface alignment and stylistic cues.",
"id": "fn4fwYc83Q",
"rating": 4
},
{
"content": "This paper introduces DECEPTIONDECODED, a benchmark dataset for analyzing misleading creator intent in multimodal news. The dataset contains 12,000 image–caption–article triplets, each grounded in verified VisualNews articles, with both misleading and non-misleading variants generated under predefined “creator intents.” They evaluate 14 vision–language models, including GPT-4o, Claude-3.7, Gemini-2.5-Pro, and Qwen2.5-VL. The results indicate that even state-of-the-art models perform poorly on intent reasoning, tending to rely on surface-level cues such as image-text consistency or stylistic polish.",
"id": "9WlB8Dphn2",
"rating": 4
},
{
"content": "This paper introduces DECEPTIONDECODED, a novel benchmark designed to evaluate Vision-Language Models (VLMs) in detecting creator intent behind misleading multimodal news content. The dataset is constructed using a synthetic, intent-guided framework that generates manipulations grounded in real news, ensuring relevance and control over deception intent. The study evaluates state-of-the-art VLMs under various input conditions (e.g., image+text, text+article) and with authenticity cues (helpful or adversarial hints).",
"id": "3M7P7dMXHb",
"rating": 4
},
{
"content": "This paper introduces DECEPTIONDECODED, a large-scale benchmark for understanding and detecting misleading creator intent in multimodal news. This work centers on modeling the combination of desired influence and execution plan behind deceptive news creation. The benchmark comprises 12,000 image–caption–article triplets, each grounded in trustworthy news contexts from VisualNews and simulated through intent-guided generation using GPT-4o and FLUX.1. It supports three intent-centric tasks: (1) misleading intent detection, (2) misleading source attribution, and (3) creator desire inference. Comprehensive evaluations of 14 VLMs reveal that even leading models struggle to reason about creator intent. Fine-tuning on DECEPTIONDECODED improves performance on external MMD benchmarks (e.g., MMFakeBench), underscoring its transferability.",
"id": "8qt7sRcLNz",
"rating": 6
}
] |
{
"cdate": 1756910313383,
"content": {
"TLDR": {
"value": "We reveal that state-of-the-art VLMs remain blind to misleading creator intent, establishing the need for intent-aware benchmarks and models as the next frontier in multimodal misinformation detection."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025seeing,\ntitle={Seeing Through Deception: Uncovering Misleading Creator Intent in Multimodal News with Vision-Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=02NbD16OnA},\nnote={under review}\n}"
},
"abstract": {
"value": "The impact of misinformation arises not only from factual inaccuracies but also from the misleading narratives that creators deliberately embed. Interpreting such creator intent is therefore essential for multimodal misinformation detection (MMD) and effective information governance. To this end, we introduce DeceptionDecoded, a large-scale benchmark of 12,000 image–caption pairs grounded in trustworthy reference articles, created using an intent-guided simulation framework that models both the desired influence and the execution plan of news creators. The dataset captures both misleading and non-misleading cases, spanning manipulations across visual and textual modalities, and supports three intent-centric tasks: (1) misleading intent detection, (2) misleading source attribution, and (3) creator desire inference. We evaluate 14 state-of-the-art vision–language models (VLMs) and find that they struggle with intent reasoning, often relying on shallow cues such as surface-level alignment, stylistic polish, or heuristic authenticity signals. These results highlight the limitations of current VLMs and position DeceptionDecoded as a foundation for developing intent-aware models that go beyond shallow cues in MMD."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"multimodal misinformation detection",
"vision-language models",
"creator intent"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9be01177d5da89276e95a5c85b7ef81c5e6a455e.pdf"
},
"primary_area": {
"value": "datasets and benchmarks"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Seeing Through Deception: Uncovering Misleading Creator Intent in Multimodal News with Vision-Language Models"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "02NbD16OnA",
"id": "02NbD16OnA",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission1711/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898192988,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission1711/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission1711/Authors"
]
}
|
|
2,026
|
02cEkpURXH
|
[
2,
2,
6,
4
] |
[
{
"content": "This paper proposes a KD–based training strategy for OOD generalization. The authors first argue that training compact student models via simple KD from a teacher with strong OOD performance can often surpass standalone algorithmic DG methods. They further note that prior OOD-oriented KD approaches predominantly focus on the teacher’s design or the teacher–student relationship, leaving the design of the student model underexplored. To address this, the authors introduce a forecaster that quantifies per-sample difficulty using auxiliary models built on the student’s internal representations together with uncertainty measures. The KD loss is then reweighted on a per-sample basis according to the predicted difficulty. Experiments on four DomainBed datasets with ResNet-18 demonstrate the effectiveness of the proposed approach.",
"id": "LrWms20vTu",
"rating": 2
},
{
"content": "The paper proposes an adaptive KD framework for domain generalization where a lightweight forecaster uses early-layer readouts (auxiliary heads) and uncertainty features (entropy, confidence margin) to reweight per-instance contributions of supervised loss vs. teacher KL during student training. The forecaster is trained interleaved with the student and discarded at inference, so deployment cost matches vanilla KD.",
"id": "emYIxBo6Yd",
"rating": 2
},
{
"content": "This paper addresses out-of-distribution (OOD) generalization in knowledge distillation by proposing an adaptive framework that uses early layer predictions to dynamically weight the loss components. The authors introduce a \"forecaster\" meta-network that leverages auxiliary classifiers at intermediate layers, along with uncertainty measures (entropy and confidence margin), to predict sample difficulty and reweight the balance between supervised loss and distillation loss on a per-instance basis. The method is evaluated on domain generalization benchmarks (OfficeHome, PACS, VLCS, TerraIncognita) and shows consistent improvements over vanilla KD (+1.0-1.2% average accuracy) while adding no inference overhead.",
"id": "oKvq6yCVz8",
"rating": 6
},
{
"content": "The paper proposes a student-centric, adaptive KD scheme that learns an instance-wise weight to balance cross-entropy vs. KL terms using a lightweight “forecaster” fed by early-layer readouts (stacked intermediate logits) plus uncertainty signals (entropy and a confidence margin). The forecaster is trained with a correctness-prediction objective and its outputs are stabilized via a batch-standardized sigmoid adjustment before modulating the student loss; training alternates between updating the student/auxiliary heads on train splits and the forecaster on a held-out validation split, and all auxiliaries are discarded at inference. Reported results indicate consistent OOD gains over vanilla KD and DG baselines across multiple benchmarks.",
"id": "2cO0cHlmMu",
"rating": 4
}
] |
{
"cdate": 1758311939461,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025early,\ntitle={Early Layer Readouts for Robust Knowledge Distillation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=02cEkpURXH},\nnote={under review}\n}"
},
"abstract": {
"value": "Domain generalization (DG) aims to learn a model that can generalize to unseen i.e. out-of-distribution (OOD) test domain. While large-capacity networks trained with sophisticated DG algorithms tend to achieve high robustness, they tend to be impractical in deployment. Typically, Knowledge distillation (KD) can alleviate this via an efficient transfer of knowledge from a robust teacher to a smaller student network. Throughout our experiments, we find that vanilla KD already provides strong OOD performance, often outperforming standalone DG algorithms. Motivated by this observation, we propose an adaptive distillation strategy that utilizes early layer predictions and uncertainty measures to learn a meta network that effectively rebalances supervised and distillation losses as per sample difficulty. Our method adds no inference overhead and consistently outperforms canonical ERM, vanilla KD, and competing DG algorithms across OOD generalization benchmarks."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"domain generalization",
"knowledge distillation",
"early layer readouts"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2bb11bab4ab35adbf1f2a9ad3d46d601f3b0111c.pdf"
},
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Early Layer Readouts for Robust Knowledge Distillation"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "02cEkpURXH",
"id": "02cEkpURXH",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission20949/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896950334,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission20949/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission20949/Authors"
]
}
|
|
2,026
|
02mBAZjFzp
|
[
4,
4,
4,
6
] |
[
{
"content": "This paper introduces VRPAGENT, a framework for discovering heuristic operators for Vehicle Routing Problems (VRPs) using large language models (LLMs). The method combines LLM-generated “destroy” and “order” operators with a Large Neighborhood Search (LNS) metaheuristic, leveraging genetic algorithms (GAs) to iteratively evolve improved operators. Although the research motivation and validation results seem feasible, the approach is almost identical to existing LLM-guided heuristic frameworks, which weakens the overall contribution of the paper.",
"id": "uG0zaS46hU",
"rating": 4
},
{
"content": "This paper proposes a framework for automated heuristic discovery in VRPs using LLMs called VRPAgent. VRPAgent integrates LLM-generated problem-specific operators within a Large Neighborhood Search (LNS) metaheuristic and refines them through a genetic algorithm that employs elitism, biased crossover, and code-length penalty mechanisms.\n\nKey features include generating problem-specific destroy and insert heuristics via LLMs, and evolving these operators over multiple generations to maximize solution quality while controlling code complexity. The method is evaluated across standard VRPs (capacitated, time windows, prize-collecting), consistently discovering heuristics that outperform handcrafted and previous LLM/learning-based methods on large benchmark instances using only CPU resources.\n\nThe approach offers interpretability, practical efficiency, and a reproducible pipeline for discovering and improving heuristics for combinatorial optimization, highlighting a new path for LLM-driven algorithmic design in operations research.\n\nThe contributions include:\n1. A hybrid metaheuristic framework (LLM-in-the-loop LNS) for VRPs where LLMs generate, mutate, and combine code for local operators.\n2. A genetic algorithm with code-length penalties to evolve and select the best LLM-generated operators.\n3. Demonstrating state-of-the-art or superior performance compared to both expert-designed heuristic solvers and recent neural/LLM solutions on several large VRP benchmarks, with superior interpretability and scalability",
"id": "D0O7X821Fg",
"rating": 4
},
{
"content": "Designing effective heuristics for VRP problems based on the Large Neighborhood Search (LNS) algorithm typically requires extensive human expertise and trial-and-error. To address this issue, the paper proposes using large language models (LLMs) to automatically design heuristic operators. Building on the concept of genetic algorithms, the LLM generates diverse heuristic candidates, retains the best-performing ones according to the solution results, and performs heuristic modifications and explorations to further improve performance. The proposed method is validated on multiple types of VRP problems, demonstrating a significant overall performance advantage compared with other AI-enhanced LNS approaches.",
"id": "jD0850R4NE",
"rating": 4
},
{
"content": "This paper presents VRPAGENT, a framework that uses Large Language Models (LLMs) to automatically discover heuristic operators for Vehicle Routing Problems (VRPs). The approach embeds LLM-generated problem-specific operators within a Large Neighborhood Search (LNS) metaheuristic and refines them through a genetic algorithm with elitism and biased crossover. The authors evaluate their method on three VRP variants (CVRP, VRPTW, PCVRP) and demonstrate state-of-the-art performance using only a single CPU core at test time.",
"id": "ODFKpFC7tV",
"rating": 6
}
] |
{
"cdate": 1758296070926,
"content": {
"TLDR": {
"value": "We introduce VRPAgent, a framework that leverages LLMs and evolutionary search to discover novel heuristic operators for vehicle routing problems, achieving state-of-the-art performance across multiple VRP variants."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025vrpagent,\ntitle={{VRPA}gent: {LLM}-Driven Discovery of Heuristic Operators for Vehicle Routing Problems},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=02mBAZjFzp},\nnote={under review}\n}"
},
"abstract": {
"value": "Designing high-performing heuristics for vehicle routing problems (VRPs) is a complex task that requires both intuition and deep domain knowledge. Large language model (LLM)-based code generation has recently shown promise across many domains, but it still falls short of producing heuristics that rival those crafted by human experts. In this paper, we propose VRPAgent, a framework that integrates LLM-generated components into a metaheuristic and refines them through a novel genetic search. By using the LLM to generate problem-specific operators, embedded within a generic metaheuristic framework, VRPAgent keeps tasks manageable, guarantees correctness, and still enables the discovery of novel and powerful strategies. Across multiple problems, including the capacitated VRP, the VRP with time windows, and the prize-collecting VRP, our method discovers heuristic operators that outperform handcrafted methods and recent learning-based approaches while requiring only a single CPU core. To our knowledge, VRPAgent is the first LLM-based paradigm to advance the state-of-the-art in VRPs, highlighting a promising future for automated heuristics discovery."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"automated algorithm design",
"evolutionary search",
"vehicle routing problem",
"LLM agent",
"heuristic discovery"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/35f37aa40fad450cb00124cdc83059fbb4cb843f.pdf"
},
"primary_area": {
"value": "optimization"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "VRPAgent: LLM-Driven Discovery of Heuristic Operators for Vehicle Routing Problems"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "02mBAZjFzp",
"id": "02mBAZjFzp",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission19416/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897040045,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission19416/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission19416/Authors"
]
}
|
|
2,026
|
02mgFnnfqG
|
[
4,
8,
6,
6
] |
[
{
"content": "The paper presents LiveMoments, a method for selecting and restoring a new low-quality (LQ) key photo from a short clip surrounding some key high-quality (HQ) photo. To this end, the authors build a model based on latent flow models and learnable networks for the HQ key image, the LQ candidate, and the motion between the two frames modeled as optical flow. The authors also propose to perform image space motion alignment based on image patches. The authors train the model using open source high-quality data and introduce three benchmarks for evaluation, a synthetic one and two real-world Live Photo datasets.",
"id": "PmPY4GqdRf",
"rating": 4
},
{
"content": "The paper introduces LiveMoments for reselected key photo restoration in Live Photos. It adopts a dual branch diffusion architecture with a ReferenceNet and a RestorationNet, and adds a unified Motion Alignment module that injects flow guided priors at latent and image levels. The authors build three benchmarks and propose relative no reference metrics tailored to the task. Experiments on synthetic and real Live Photo datasets demonstrate consistent gains over RefISR, RefVSR, and diffusion based SISR baselines.",
"id": "q7t5PLY0Y2",
"rating": 8
},
{
"content": "I think the paper introduces a practical task: restoring a reselected low-quality Live Photo frame using the original high-quality (HQ) key photo as a reference. The method, LiveMoments, uses a dual-branch diffusion transformer (ReferenceNet + RestorationNet) with cross-attention fusion and a unified motion-alignment module: (i) latent-level motion embeddings from RAFT flow injected as attention bias; (ii) image-level Patch Correspondence Retrieval (PCR) for tile-wise inference at 4K. Datasets include SynLive260 (synthetic) and real vivoLive144 / iPhoneLive90, plus a relative no-reference metric that normalizes to the HQ reference. Results show consistent perceptual gains on real data.",
"id": "1ujaAv5W14",
"rating": 6
},
{
"content": "This paper introduces the task of Reselected Key Photo Restoration for Live Photos, \nwhere a user-selected frame from the short video is restored using the original high-quality key photo as reference. \nThe paper formulates this as a reference-guided diffusion problem and proposes a dual-branch architecture \ncombining a RestorationNet for the degraded frame and a ReferenceNet for the original photo, fused via cross-attention. \nA unified Motion Alignment module enables alignment both in the latent space through motion-guided attention \nand in the image space via a Patch Correspondence Retrieval (PCR) strategy.\nExperiments demonstrate significant quantitative and visual gains over baselines.",
"id": "LEq7GBrfwn",
"rating": 6
}
] |
{
"cdate": 1757934812324,
"content": {
"TLDR": {
"value": "We are the first to restore reselected key photos in Live Photos, achieving perceptual fidelity beyond existing solutions in real-world scenes."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025livemoments,\ntitle={LiveMoments: Reselected Key Photo Restoration in Live Photos via Reference-guided Diffusion},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=02mgFnnfqG},\nnote={under review}\n}"
},
"abstract": {
"value": "Live Photo captures both a high-quality key photo and a short video clip to preserve the precious dynamics around the captured moment. \nWhile users may choose alternative frames as the key photo to capture better expressions or timing, these frames often exhibit noticeable quality degradation, as the photo capture ISP pipeline delivers significantly higher image quality than the video pipeline. This quality gap highlights the need for dedicated restoration techniques to enhance the reselected key photo. To this end, we propose LiveMoments, a reference-guided image restoration framework tailored for the reselected key photo in Live Photos. Our method employs a two-branch neural network: a reference branch that extracts structural and textural information from the original high-quality key photo, and a main branch that restores the reselected frame using the guidance provided by the reference branch. Furthermore, we introduce a unified Motion Alignment module that incorporates motion guidance for spatial alignment at both the latent and image levels. Experiments on real and synthetic Live Photos demonstrate that LiveMoments significantly improves perceptual quality and fidelity over existing solutions, especially in scenes with fast motion or complex structures."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Live Photo",
"Reference-based Image Restoration",
"Conditional Image Generation",
"Motion Alignment"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/bbbb05b5353518a72b45118dfb2eecd0c3ed7f78.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "LiveMoments: Reselected Key Photo Restoration in Live Photos via Reference-guided Diffusion"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "02mgFnnfqG",
"id": "02mgFnnfqG",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission5782/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897954152,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission5782/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission5782/Authors"
]
}
|
|
2,026
|
032sg6mGp9
|
[
4,
4,
6,
6
] |
[
{
"content": "This paper introduces a multinomial mixture modelling approach to address the identifiability problem in learning from noisy labels (LNL). The authors theoretically prove that LNL becomes identifiable when each sample has at least 2C−1 independent noisy labels, enabling the unique recovery of clean label distributions without relying on heuristic assumptions. To make this feasible in practice, they propose generating additional pseudo noisy labels from nearest neighbours and applying an Expectation–Maximization algorithm to infer clean labels. Extensive experiments on synthetic, web-controlled, and real-world noisy datasets demonstrate that the proposed method accurately estimates clean labels and achieves performance competitive with state-of-the-art LNL techniques.",
"id": "ur3yGYd6qM",
"rating": 4
},
{
"content": "This paper addresses the long-standing issue of identifiability in learning from noisy labels (LNL). The authors show that, under a multinomial mixture modeling approach, the LNL problem becomes identifiable if at least $2C-1$ independent and identically distributed (i.i.d.) noisy labels are available per instance (where $C$ is the number of classes). As manually acquiring such redundancy is impractical, the paper proposes estimating additional noisy labels via nearest-neighbour augmentation in feature space. Then the paper use an Expectation-Maximisation (EM) algorithm to estimate the clean label distributions. This algorithm works on the mixture model. The experiments show strong results on both synthetic and real-world datasets. This paper also ran many ablation studies. These studies back up our theoretical ideas and design decisions.",
"id": "AVKWRBfTig",
"rating": 4
},
{
"content": "The paper tackles the fundamental issue of identifiability in learning with noisy labels (LNL).\nThe authors demonstrate that when each sample is annotated with at least 2C−1 i.i.d. noisy labels (where C is the number of classes), the true clean-label distribution becomes identifiable under a multinomial mixture model.\nSince collecting that many labels per sample is infeasible in practice, the authors propose a practical algorithm that approximates i.i.d. noisy labels using KNN and LLC, followed by an EM procedure to recover clean posterior estimates.\nExtensive experiments on multiple benchmarks show that this surrogate approach is both theoretically justified and empirically effective.",
"id": "Q8Lv3taOc5",
"rating": 6
},
{
"content": "This paper studies the foundational identifiability problem in learning from noisy labels (LNL). It establishes that the standard single-label LNL setting is non-identifiable in theory, meaning the clean label distribution cannot be recovered without additional assumptions. The key contribution is proving that if each instance has at least 2C−1 i.i.d. noisy labels (where C is the number of classes), then clean labels are identifiable when modeling noisy labels as a multinomial mixture. Extensive experiments on synthetic and real-world noisy-label benchmarks support the theoretical claims, showing competitive or improved performance relative to state-of-the-art baselines such as DivideMix, HOC, and others.",
"id": "T4t5H3p3uv",
"rating": 6
}
] |
{
"cdate": 1758285923748,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025identifiability,\ntitle={Identifiability in Noisy Label Learning: A Multinomial Mixture Modelling Approach},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=032sg6mGp9},\nnote={under review}\n}"
},
"abstract": {
"value": "Learning from noisy labels (LNL) is crucial in deep learning, in which one of the approaches is to identify clean-label samples from poorly-annotated datasets. Such an identification is challenging because the conventional LNL problem, which assumes only one noisy label per instance, is non-identifiable, i.e., clean labels cannot be estimated theoretically without additional heuristics. This paper presents a novel data-driven approach that addresses this issue without requiring any heuristics about clean samples. We discover that the LNL problem becomes identifiable if there are at least $2C - 1$ i.i.d. noisy labels per instance, where $C$ is the number of classes. Our finding relies on the assumption of i.i.d. noisy labels and multinomial mixture modelling, making it easier to interpret than previous studies that require full-rank noisy-label transition matrices. To fulfil this condition without additional manual annotations, we propose a method that automatically generates additional i.i.d. noisy labels through nearest neighbours. These noisy labels are then used in the Expectation-Maximisation algorithm to infer clean labels. Our method demonstrably estimates clean labels accurately across various label noise benchmarks, including synthetic, web-controlled, and real-world datasets. Furthermore, the model trained with our method performs competitively with many state-of-the-art methods."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"label noise learning",
"expectation-maximisation",
"mixture models"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/39e718f6250a4d1ffcf2cdc9270d45e29131db80.pdf"
},
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Identifiability in Noisy Label Learning: A Multinomial Mixture Modelling Approach"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "032sg6mGp9",
"id": "032sg6mGp9",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission18276/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897114753,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission18276/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission18276/Authors"
]
}
|
|
2,026
|
03Ek1qDZmI
|
[
4,
4,
4,
2
] |
[
{
"content": "This paper introduces SSTP, a sample selection framework for trajectory prediction. The primary motivation is to address two challenges in existing large-scale datasets: the high computational cost of training and the imbalance where common, low-density scenarios dominate over rare, safety-critical high-density ones. The proposed method consists of two stages. First, it partitions the dataset based on scene density (number of agents) and pre-trains a model to extract gradient information. Second, it uses a submodular selection objective with these gradient-based scores to select a compact and representative subset, while explicitly up-sampling high-density scenarios. Experiments on the Argoverse 1 and 2 datasets show that training on a 50% subset selected by SSTP can achieve comparable performance to training on the full dataset, while significantly improving performance in high-density scenes.",
"id": "MFO5ZWKx5H",
"rating": 4
},
{
"content": "This paper proposes SSTP, a two-stage sample selection framework that constructs a compact yet density-balanced dataset for trajectory prediction. It consists of two stages: (i) first partition the data by scene density; and (ii) select a compact and density-balanced subset via gradient-based scores and a submodular objective. The goal is to reduce training time and mitigate long-tail imbalance. On Argoverse 1 and 2 Datasets and several backbones (HiVT, HPNet, QCNet, DeMo), SSTP claims comparable average metrics to full-data training, and improving high-density performance with around 50% budget.",
"id": "qHWZoAZ0KL",
"rating": 4
},
{
"content": "The paper aims to address an important problem of reducing dependency on large-scale datasets in trajectory prediction, particularly under imbalanced data distributions.",
"id": "YgKroHHCl9",
"rating": 4
},
{
"content": "This paper proposes SSTP, a framework designed to improve data efficiency and scene-density balance in trajectory prediction. The authors observe that existing large-scale trajectory prediction datasets are heavily imbalanced, with low-density scenarios dominating and high-density cases underrepresented. SSTP tackles this issue through two-stage process: density-based partitioning of the dataset and gradient-based submodular selection to identify representative samples within each partition. Experiments on Argoverse 1 and Argoverse 2 show that SSTP achieves comparable performance to full-dataset training while reducing training cost and improving performance in high-density scenarios.",
"id": "h99kgB8KYg",
"rating": 2
},
{
"content": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.",
"id": "ZrgOVhMZcB",
"rating": null
}
] |
{
"cdate": 1757189578927,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@misc{\nyang2025sstp,\ntitle={{SSTP}: Efficient Sample Selection for Trajectory Prediction},\nauthor={Ruining Yang and Yi Xu and Yun Fu and Lili Su},\nyear={2025},\nurl={https://openreview.net/forum?id=03Ek1qDZmI}\n}"
},
"abstract": {
"value": "Trajectory prediction is a core task in autonomous driving. However, training advanced trajectory prediction models on existing large-scale datasets is both time-consuming and computationally expensive. More critically, these datasets are highly imbalanced in scenario density, with normal driving scenes (low-moderate traffic) overwhelmingly dominating the datasets, while high-density and safety-critical cases are underrepresented. As a result, models tend to overfit low/moderate-density scenarios and perform poorly in high-density scenarios. To address these challenges, we propose the SSTP framework, which constructs a compact yet density-balanced dataset tailored to trajectory prediction. SSTP consists of two main stages: (1) Extraction, where a baseline model is pretrained for a few epochs to obtain stable gradient estimates, and the dataset is partitioned by scenario density. (2) Selection, where gradient-based scores and a submodular objective select representative samples within each density category, while biased sampling emphasizes rare high-density interactions to avoid dominance by low-density cases. This approach significantly reduces the dataset size and mitigates scenario imbalance, without sacrificing prediction accuracy. Experiments on the Argoverse 1 and Argoverse 2 datasets with recent state-of-the-art models show that SSTP achieves comparable performance to full-dataset training using only half the data while delivering substantial improvements in high-density traffic scenes and significantly reducing training time. Robust trajectory prediction depends not only on data scale but also on balancing scene density to ensure reliable performance under complex multi agent interactions. The code is available at https://anonymous.4open.science/r/SSTP_v2-69E5/README.md."
},
"anonymous_url": null,
"authorids": {
"value": [
"~Ruining_Yang1",
"~Yi_Xu9",
"~Yun_Fu1",
"~Lili_Su1"
]
},
"authors": {
"value": [
"Ruining Yang",
"Yi Xu",
"Yun Fu",
"Lili Su"
]
},
"code_of_ethics": null,
"keywords": {
"value": [
"data efficiency",
"trajectory prediction"
]
},
"no_acknowledgement_section": null,
"paperhash": {
"value": "yang|sstp_efficient_sample_selection_for_trajectory_prediction"
},
"pdf": {
"value": "/pdf/55bd982183b342ab8876bf09c69dfa0fea486112.pdf"
},
"primary_area": {
"value": "applications to robotics, autonomy, planning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "SSTP: Efficient Sample Selection for Trajectory Prediction"
},
"venue": {
"value": "ICLR 2026 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Withdrawn_Submission"
}
},
"forum": "03Ek1qDZmI",
"id": "03Ek1qDZmI",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission2669/-/Full_Submission",
"ICLR.cc/2026/Conference/-/Withdrawn_Submission"
],
"license": "CC BY 4.0",
"mdate": 1762981127212,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission2669/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission2669/Authors"
]
}
|
|
2,026
|
03MfCNn3pF
|
[
2,
4,
2,
6
] |
[
{
"content": "This paper presents PersonalQ, a two-stage system for personalized diffusion model serving. Check-in selects the intended personalized checkpoint via metadata reasoning and LLM-based prompt clarification, while Trigger-Aware Quantization (TAQ) preserves trigger-token features during quantization to maintain generation quality. Experiments on 1,000 checkpoints show improved selection accuracy and memory reduction.",
"id": "YFSuFNpwRu",
"rating": 2
},
{
"content": "The authors explore a setup where a system consists of hundreds of LoRA checkpoints obtained through fine-tuning of a diffusion model. A user interacts with this system via natural language prompts, without employing specific trigger words associated with individual LoRAs. Firstly, the ambiguity of selecting the best-fit LoRA is addressed through LLM interaction with LoRA-related metadata and clarification questions that are posed to the user. Furthermore, memory constraints are discussed through a new quantization strategy, TAQ, which omits quantization for trigger-word-related K/V rows. This approach is motivated by the observation that trigger-word-related tokens are particularly vulnerable to quantization error.",
"id": "qiuWkGz332",
"rating": 4
},
{
"content": "This paper addresses the important and practical problem of how to use the large, community-driven repositories of personalized generative models according to user intent. The authors identify that personalized models are highly sensitive to quantization, particularly their \"trigger tokens\" (which invoke specific objects or styles), and that naive quantization degrades quality.\nTo overcome this, they propose TAQ (Trigger-Aware Quantization). Concurrently, they propose \"Check-in,\" a retrieval and selection framework to find desired checkpoints from large repositories based on user queries, and introduce the \"Repo-Prompt\" benchmark to evaluate such retrieval methods. The authors report that TAQ achieves quality close to full precision despite weight reduction, and that \"Check-in\" achieves an 89% win rate in human preference studies.",
"id": "P8eaiMc2Y0",
"rating": 2
},
{
"content": "This manuscript proposes PersonalQ, an interesting framework that addresses ambiguous user prompt matching and quantization-induced model degradation in personalized text-to-image model deployment. The authors introduce Check-In for checkpoint analysis and Trigger-Aware Quantization (TAQ) for high-quality inference. The authors also introduce the Repo-Prompt benchmark, and experiments on the benchmark demonstrate the superiority of the proposed method.",
"id": "6Bqe71rjef",
"rating": 6
}
] |
{
"cdate": 1757994763056,
"content": {
"TLDR": {
"value": "PersonalQ enables efficient serving of personalized diffusion models at scale through intelligent checkpoint selection and trigger-token-aware quantization that preserves personalization quality while reducing memory footprint."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025personalq,\ntitle={PersonalQ: Select, Quantize, and Serve Personalized Diffusion Models for Efficient Inference},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=03MfCNn3pF},\nnote={under review}\n}"
},
"abstract": {
"value": "Personalized text-to-image generation enables users to create custom AI models that generate their unique concepts—specific objects or artistic styles—achieving unprecedented creative control. However, deploying a large repository of personalized checkpoints faces two critical challenges: (1) ambiguous user prompts make it difficult to match the intended checkpoint in large repositories, and (2) standard post-training quantization methods degrade personalized diffusion checkpoints’ image quality. We analyze the importance of reasoning over checkpoint metadata and clarifying user prompts for intent-aligned checkpoint selection. Additionally, we find that trigger tokens for personalized diffusion play a crucial role in quantization. To address the challenges, we propose PersonalQ, a unified system with two components: Check-in analyzes checkpoint repositories and clarifies user intent for intent-aligned selection, and TAQ (Trigger-Aware Quantization), which protects the trigger-token-related representation to deliver high-quality inference from the chosen checkpoint under quantization. On our Repo-Prompts benchmark, PersonalQ achieves an 89% checkpoint-selection preference win rate and a 4.42/5 intent score. Across benchmarks, TAQ reduces inference memory by up to 75% while maintaining strong text-image alignment (CLIP score 0.297 vs. 0.315 at full precision) and image fidelity (FID 11.03 at W8A8 vs. 10.96 at full precision), enabling scalable deployment of personalized models without compromising quality."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Personalized text-to-image generation"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/50f61b6537bdaf1e298c0bcf4390b40ad56a54eb.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/4878d33f88b5ea78ce8e4633adfff8251e992811.zip"
},
"title": {
"value": "PersonalQ: Select, Quantize, and Serve Personalized Diffusion Models for Efficient Inference"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "03MfCNn3pF",
"id": "03MfCNn3pF",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission6759/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897895805,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission6759/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission6759/Authors"
]
}
|
|
2,026
|
03QzvMzxVM
|
[
2,
4,
4,
4
] |
[
{
"content": "This work presents Robust-NLL, which serves as a plug-and-play loss replacing vanilla NLL loss for robust uncertainty-aware training against label-space outliers. The proposed loss function uses softmax reweighting over sample losses to filter out outliers. The author also provides theoretical analysis and empirical verification of their proposed method.",
"id": "ObgeLTHjtu",
"rating": 2
},
{
"content": "The authors study uncertainty estimation for regression.\n\nThey propose Robust-NLL, a simple and intuitive modification of the standard NLL loss that weighs each loss term with a softmax weight computed across the batch. Robust-NLL is supposed to make the model training more robust to outliers in the train labels.\n\nThey evaluate Robust-NLL on two synthetic 1D regression examples, and on a visual localization dataset. They compare the performance with standard NLL and two NLL variants.",
"id": "H8bucPeIyD",
"rating": 4
},
{
"content": "This paper proposes a robust uncertainty-aware learning method in which the NLL loss of each training sample is weighted through a temperature-dependent softmax distribution. They provide theoretical analysis of their proposed approach and demonstrate their method's efficacy in three different tasks ranging from simple linear regression to visual localization.",
"id": "Nhxc4sQpSR",
"rating": 4
},
{
"content": "This paper introduces Robust-NLL, a modified loss function that improves uncertainty estimation in neural networks when training data contains outliers. The method uses Boltzmann weighting to down-weight noisy samples while maintaining compatibility with standard training procedures—requiring no architectural changes or additional parameters. Experiments on synthetic and real-world tasks show improvements in both prediction accuracy and uncertainty calibration compared to standard negative log-likelihood training.",
"id": "Ldzrt1maqB",
"rating": 4
}
] |
{
"cdate": 1758019401870,
"content": {
"TLDR": {
"value": "We introduce Robust-NLL for modeling uncertainty under the presence of outliers."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025robust,\ntitle={Robust Uncertainty-Aware Learning via Boltzmann-weighted {NLL}},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=03QzvMzxVM},\nnote={under review}\n}"
},
"abstract": {
"value": "Uncertainty estimation is critical for deploying deep learning models in high-stakes applications such as autonomy and decision-making. While prior works on data uncertainty modeling estimate aleatoric uncertainty by minimizing the negative log-likelihood (NLL) loss, they often fail under the presence of outliers. To address this limitation, we introduce Robust-NLL, a drop-in replacement for vanilla NLL that filters noisy or adversarial samples. Robust-NLL learns robust uncertainty estimates in neural networks through a Boltzmann-weighted NLL loss that requires no architectural changes, additional parameters, or iterative procedures, and acts as a plug-and-play loss function that maintains full differentiability and mini-batch compatibility. We evaluate our approach on synthetic regression tasks and real-world visual localization benchmarks with injected outliers. Experimental results demonstrate that simply replacing NLL with Robust-NLL consistently improves both prediction accuracy and reliability of uncertainty estimates, achieving substantial performance gains across diverse tasks and architectures."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"robust estimation",
"uncertainty estimation"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/444e8304cd012c1ab5fb9f3ae96a85fe575c79e2.pdf"
},
"primary_area": {
"value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Robust Uncertainty-Aware Learning via Boltzmann-weighted NLL"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "03QzvMzxVM",
"id": "03QzvMzxVM",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission7389/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897855752,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission7389/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission7389/Authors"
]
}
|
|
2,026
|
03ccrSpjOx
|
[
4,
4,
4,
6
] |
[
{
"content": "The paper studies how deliberation format shapes value expression and consensus in LLM-LLM debates over everyday moral dilemmas. Using 1,000 AITA cases, the authors run pairwise and three-way debates among GPT-4.1, Claude 3.7 Sonnet, and Gemini 2.0 Flash in two settings: synchronous (parallel) and round-robin (sequential). They quantify model inertia and conformity via a multinomial model, analyze verdict change rates, and classify values in explanations using a pruned set of 48 values drawn from “Values in the Wild” (Anthropic) with a separate judge model. Prompt tweaks that explicitly encourage consensus increase revision but do not dramatically raise consensus rates. The paper argues that sociotechnical alignment depends on interaction protocol, not only on single-turn outputs.",
"id": "PaMZ7VFZkc",
"rating": 4
},
{
"content": "This work studies values elicited from multi-agent debate verdicts, arriving at interesting conclusions across multiple deliberating formats and models. Experiments are done on 1000 questions from the AITA reddit community, with debates from models in {GPT-4.1, Claude 3.7 Sonnet, and Gemini 2.0 Flash}. Results cover aspects including consensus-forming, values orientations, effects of deliberation format and effects of system-prompt-steering.",
"id": "3W5XtvSQuy",
"rating": 4
},
{
"content": "This paper collects 1k everyday dilemmas from Reddit's r/AITA community as the basis for simulating LLM debates. They developed two settings for a pair of models (synchronous: each model gives its verdict in parallel; head-to-head: models respond one by one). They tested three models (GPT-4.1, Claude 3.7 Sonnet, and Gemini 2.0 Flash) for order effects and verdict revision. They show some behavioural differences (e.g. Gemini 2.0 Flash prioritized empathy more).",
"id": "3E32I36km9",
"rating": 4
},
{
"content": "The proposed approach leverages debate tactics to determine if deliberative dynamics in multi-turn settings impact the socio-technical evaluation of LLMs. In particular, the authors leverage everyday situations from the Reddit AITA community as seed situations. Their findings report how deliberation impacts model behavior.",
"id": "hVZiWqQ28N",
"rating": 6
}
] |
{
"cdate": 1758148909076,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025deliberative,\ntitle={Deliberative Dynamics and Value Alignment in {LLM} Debates},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=03ccrSpjOx},\nnote={under review}\n}"
},
"abstract": {
"value": "As large language models (LLMs) are increasingly deployed in sensitive everyday contexts -- offering personal advice, mental health support, and moral guidance -- understanding their elicited values in navigating complex moral reasoning is essential. Most evaluations study this sociotechnical alignment through single-turn prompts, but it is unclear if these findings extend to multi-turn settings where values emerge through dialogue, revision, and consensus. We address this gap using LLM debate to examine deliberative dynamics and value alignment in multi-turn settings by prompting subsets of three models (GPT-4.1, Claude 3.7 Sonnet, and Gemini 2.0 Flash) to collectively assign blame in 1,000 everyday dilemmas from Reddit's \"Am I the Asshole\" community. We use both synchronous (parallel responses) and round-robin (sequential responses) formats to test order effects and verdict revision. Our findings show striking behavioral differences. In the synchronous setting, GPT showed strong inertia (0.6-3.1% revision rates) while Claude and Gemini were far more flexible (28-41%). Value patterns also diverged: GPT emphasized personal autonomy and direct communication, while Claude and Gemini prioritized empathetic dialogue. Certain values proved especially effective at driving verdict changes. We further find that deliberation format had a strong impact on model behavior: GPT and Gemini stood out as highly conforming relative to Claude, with their verdict behavior strongly shaped by order effects. These results show how deliberation format and model-specific behaviors shape moral reasoning in multi-turn interactions, underscoring that sociotechnical alignment depends on how systems structure dialogue as much as on their outputs."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"sociotechnical alignment",
"multi-agent debate",
"multi-turn interaction"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/53b15162b8d0641d663ed2799ca10373fb23b76b.pdf"
},
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Deliberative Dynamics and Value Alignment in LLM Debates"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "03ccrSpjOx",
"id": "03ccrSpjOx",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission9918/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897686075,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission9918/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission9918/Authors"
]
}
|
|
2,026
|
03fFxN6Orj
|
[
4,
2,
4
] |
[
{
"content": "This paper proposed the Adviser-Actor-Critic (AAC) framework, targeting steady-state error reduction for high-precision robotic control tasks in reinforcement learning. AAC augments standard actor-critic architectures with an additional “adviser” module, implemented as a PI controller, that generates dynamically adjusted “virtual goals” to help the actor refine actions and reduce residual errors. The authors present a clear control-theoretic motivation, rigorous mathematical proof of zero steady-state error for constant references, and comprehensive empirical validation on both simulated (Gymnasium-Robotics benchmark tasks) and real-world (quadcopter attitude control) robotic platforms. Experimental results indicate that AAC achieves significant improvements in steady-state tracking error relative to baselines, including >80% error reduction across several benchmark tasks.",
"id": "PxTfOAWPdF",
"rating": 4
},
{
"content": "The paper introduces Adviser-Actor-Critic (AAC), a framework that adds a classical PI controller (adviser) to a standard goal-conditioned reinforcement learning (RL) agent to reduce steady-state error (SSE) in robotic control tasks. The adviser modifies the goal given to the RL agent, creating a \"virtual goal\" that pushes the agent to overcompensate for and thereby eliminate residual tracking errors.",
"id": "AnZBZMVoG2",
"rating": 2
},
{
"content": "The paper proposes Adviser-Actor-Critic (AAC), a hybrid reinforcement learning and control framework that introduces an “adviser” which generates virtual goals to compensate steady-state tracking errors. The adviser is instantiated as a proportional–integral controller that proposes a virtual goal to a goal-conditioned policy. The method is evaluated in six gymnasium-robotics environments and on a real quadcopter attitude-control task, reporting sizable reductions in steady-state error. The paper also presents a theoretical argument for steady-state error elimination under several assumptions.",
"id": "h0YMdMVPhG",
"rating": 4
},
{
"content": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.",
"id": "Phq3Vz4xJs",
"rating": null
}
] |
{
"cdate": 1758271601146,
"content": {
"TLDR": {
"value": "Adviser-Actor-Critic (AAC) combines reinforcement learning with a novel adviser to generate virtual goals, effectively reducing steady-state errors by over 80% in high-precision robotic control tasks."
},
"_bibtex": {
"value": "@misc{\nchen2025adviseractorcritic,\ntitle={Adviser-Actor-Critic: Reducing Steady-State Error in Reinforcement Learning for Robotics Control},\nauthor={Donghe Chen and Jiaxuan Yue and Yubin Peng and Tengjie Zheng and Han Wang and Chaoran Qu and Lin Cheng},\nyear={2025},\nurl={https://openreview.net/forum?id=03fFxN6Orj}\n}"
},
"abstract": {
"value": "High-precision control tasks present substantial challenges for reinforcement learning (RL) algorithms, frequently resulting in suboptimal performance attributed to network approximation inaccuracies and inadequate sample quality. While existing RL frameworks can achieve task completion at coarse precision levels, steady-state tracking errors remain a critical limitation that prevents achieving sub-hardware-level precision. We introduce Adviser-Actor-Critic (AAC), designed to address this precision control dilemma by combining the precision of feedback control theory with the adaptive learning capability of RL and featuring an Adviser that mentors the actor to refine control actions, thereby enhancing the precision of goal attainment. Through extensive benchmark environments from gymnasium-robotics, coupled with real-world quadcopter attitude control, AAC significantly outperforms standard RL algorithms in precision-critical tasks while demonstrating an average $>80\\%$ steady-state error reduction compared to baseline methods."
},
"anonymous_url": null,
"authorids": {
"value": [
"~Donghe_Chen1",
"~Jiaxuan_Yue2",
"~Yubin_Peng1",
"~Tengjie_Zheng1",
"~Han_Wang17",
"~Chaoran_Qu1",
"~Lin_Cheng7"
]
},
"authors": {
"value": [
"Donghe Chen",
"Jiaxuan Yue",
"Yubin Peng",
"Tengjie Zheng",
"Han Wang",
"Chaoran Qu",
"Lin Cheng"
]
},
"code_of_ethics": null,
"keywords": {
"value": [
"reinforcement learning",
"robotics",
"control system"
]
},
"no_acknowledgement_section": null,
"paperhash": {
"value": "chen|adviseractorcritic_reducing_steadystate_error_in_reinforcement_learning_for_robotics_control"
},
"pdf": {
"value": "/pdf/635d6df0d70e8cc046d12fa468fe1667715b0a02.pdf"
},
"primary_area": {
"value": "reinforcement learning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Adviser-Actor-Critic: Reducing Steady-State Error in Reinforcement Learning for Robotics Control"
},
"venue": {
"value": "ICLR 2026 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Withdrawn_Submission"
}
},
"forum": "03fFxN6Orj",
"id": "03fFxN6Orj",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/-/Withdrawn_Submission"
],
"license": "CC BY 4.0",
"mdate": 1762955287461,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission17048/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission17048/Authors"
]
}
|
|
2,026
|
03jzVlLxEe
|
[
6,
6,
4,
4
] |
[
{
"content": "The authors propose **NERVE**, a noise- and variability-robust EEG foundation model designed to address key challenges in EEG analysis, including low signal-to-noise ratios (SNR), high inter-sample variability, and spatial dependencies arising from electrode placement in acquisition systems. The proposed framework consists of three core components. First, a **noise-robust neural tokenizer** encodes EEG patches into discrete neural tokens. Second, a **variability-robust pretraining strategy** enforces alignment and uniformity in the representation space to improve robustness against distributional shifts. Third, an **electrode-position–aware (EPA) transformer** serves as the backbone for both the tokenizer and the foundation model, explicitly modeling the spatial structure of EEG channels.",
"id": "5Z3MN1JjSC",
"rating": 6
},
{
"content": "The paper proposes NERVE, a novel EEG foundation model designed to address key acquisition-related challenges of EEG signals: low signal-to-noise ratio, high inter- and intra-subject variability, and spatial dependencies among electrodes. By introducing a noise-robust neural tokenizer, a variability-robust pretraining objective, and an electrode-position-aware transformer architecture, NERVE demonstrates competitive performance across multiple BCI tasks and improved robustness compared to existing foundation models.",
"id": "wolwjwfHoQ",
"rating": 6
},
{
"content": "This paper highlights the importance of robustness to noise and intra-subject variability in EEG foundation models. To address these challenges, the authors designed specialized modules—such as the EPA transformer and the noise-robust tokenizer—as well as tailored learning objectives, including masked codebook reconstruction with KoLeo regularization, to enhance model robustness. Their robustness analysis reveals that existing EEG foundation models often produce unstable representations for the same class and struggle to disentangle subject-specific from class-specific information. In contrast, the proposed approach demonstrates improved stability and resilience to variability. Overall, the paper raises important awareness of the diverse sources of noise, variability, and artifacts that EEG foundation models must effectively account for.",
"id": "wApgpd5kOO",
"rating": 4
},
{
"content": "This paper proposes NERVE, a noise- and variability-robust EEG foundation model that explicitly addresses three acquisition-related challenges: low signal-to-noise ratio, high inter- and intra-subject variability, and spatial dependencies among electrodes. NERVE introduces a noise-robust neural tokenizer trained via denoising temporal–spectral prediction, a variability-robust pre-training objective using KoLeo regularization, and an electrode-position-aware (EPA) transformer to capture spatial structure. Evaluated on multiple downstream BCI tasks, NERVE demonstrates competitive performance and improved robustness to noise and variability compared to existing EEG foundation models.",
"id": "dxvAusftiR",
"rating": 4
}
] |
{
"cdate": 1758337883115,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025nerve,\ntitle={{NERVE}: Noise-Variability-Robust {EEG} Foundation Model with Electrode-Brain Interactions},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=03jzVlLxEe},\nnote={under review}\n}"
},
"abstract": {
"value": "Electroencephalography (EEG) is an indispensable modality for measuring and recording brain electrical activity, with broad applications in brain–computer interfaces (BCI) and healthcare. While early EEG models predominantly adopted supervised learning methods due to the scarcity of large-scale datasets and the heterogeneity across tasks and datasets, the recent success of large foundation models has driven increasing efforts to build EEG foundation models. However, most existing studies focus on handling signals with varying formats while overlooking inherent characteristics of EEG signals during acquisition, including low signal-to-noise ratios (SNR), high variability across samples, and spatial dependencies arising from electrode placement within the acquisition system. To address these challenges, we propose NERVE, a novel noise-variability-robust EEG foundation model with electrode-brain interactions. Specifically, pre-training of NERVE begins with learning a noise-robust neural tokenizer that encodes EEG patches into discrete neural tokens. The tokenizer is trained through denoising temporal–spectral prediction to reconstruct temporal and frequency information of the original signal from noise-augmented inputs. NERVE is further pretrained to predict the neural codes of masked EEG patches, integrated with a variability-robust objective that promotes uniform EEG representations. To incorporate spatial structure in EEG, we propose an electrode-position-aware transformer as the backbone for both the tokenizer and the foundation model. It enables the model to capture spatial dependencies among electrodes and brain regions via attention mechanisms. NERVE demonstrates competitive performance across diverse BCI tasks and improved robustness to noise and variability compared to existing EEG foundation models."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Foundation model",
"Electroencephalography",
"EEG",
"Self-supervised learning",
"Pre-training"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2af8f2986c76341d381f0b7aced096521dd9722f.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "NERVE: Noise-Variability-Robust EEG Foundation Model with Electrode-Brain Interactions"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "03jzVlLxEe",
"id": "03jzVlLxEe",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission22991/-/Full_Submission",
"ICLR.cc/2026/Conference/-/Edit"
],
"license": "CC BY 4.0",
"mdate": 1759896837180,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission22991/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission22991/Authors"
]
}
|
|
2,026
|
03qTI3NKqi
|
[
4,
4,
4,
4
] |
[
{
"content": "This work found that previous soft prompts often disrupted information flow and reduced reasoning performance. They argue that soft prompts should not be limited to the activation and guidance stages but should be inserted into appropriate stages to ensure smooth information flow between layers. Therefore, they proposed a Dynamic Hierarchical Awareness Mechanism (DHAM) to ensure effective coordination between the various stages of reasoning.",
"id": "iWYZVN0FL8",
"rating": 4
},
{
"content": "This paper investigates the role of soft prompt tuning in improving reasoning performance of large language models (LLMs). While previous works show that soft prompts can effectively activate prior knowledge and facilitate early reasoning, this paper observes that maintaining strong prompt influence in later reasoning stages can disrupt information flow and degrade performance. To address this issue, the paper proposes a Dynamic Hierarchy-Aware Mechanism (DHAM) that dynamically regulates soft prompts across reasoning stages. Specifically, DHAM performs hierarchical clustering to identify stage-specific representations and adaptively activates soft prompts based on semantic alignment, thereby ensuring smoother and more coherent information transmission through model layers. Experimental results demonstrate consistent improvements across different models and reasoning benchmarks. Ablation studies suggest that using CKA-based clustering and a moderate number of reasoning stages achieves the best performance, supporting the paper’s hypothesis of stable information flow as a key factor for effective reasoning.",
"id": "M0dqnNM2Ef",
"rating": 4
},
{
"content": "This paper identifies that static soft prompts (SP) can disrupt information flow when injected into middle or late layers. To address this, the paper proposes the Dynamic Hierarchy-Aware Mechanism (DHAM), which uses CKA-based clustering to group layers into functional stages and injects distinct prompts at each stage. This hierarchical alignment is shown to stabilize information flow and improve reasoning performance. However, clearer experimental evidence should be provided.",
"id": "fHO7wdUZYl",
"rating": 4
},
{
"content": "This paper proposes a novel method called Dynamic Hierarchical Awareness Mechanism (DHAM), which aims to address the issues of incoherent information flow and performance degradation in large language models (LLMs) during complex reasoning tasks due to the static injection of soft prompts. The authors found through analysis that improper prompt injection can cause severe oscillations in information propagation between model layers, disrupting the coherence of reasoning. To this end, DHAM first automatically divides the model's Transformer layers into several functionally similar semantic stages using Centered Kernel Alignment (CKA) and hierarchical clustering. Subsequently, it injects trainable soft prompts only at the starting layers of each stage, achieving phased and dynamic guidance of the information flow. Experiments show that this stage-aware injection strategy, especially the injection in the early stages, can effectively promote the smooth transfer of information and significantly improve the model's accuracy on complex reasoning tasks such as GSM8K and MATH.",
"id": "qKJXXFLd9Z",
"rating": 4
}
] |
{
"cdate": 1758191821554,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025unlocking,\ntitle={Unlocking Coherent Reasoning in {LLM}s with Hierarchical Soft Prompts},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=03qTI3NKqi},\nnote={under review}\n}"
},
"abstract": {
"value": "Large language models (LLMs) exhibit strong reasoning capabilities in complex tasks. Soft prompt tuning, as a lightweight approach, injects trainable vectors into the input to guide the reasoning process and enhance model performance. Prior studies show that soft prompts effectively activate prior knowledge and improve problem understanding in the early stages of reasoning. However, when they continue to exert strong influence in the middle and later stages, they often disrupt the information flow and degrade reasoning performance. Based on this observation, we argue that the role of soft prompts should not be confined to a single stage of activation and guidance. Instead, they should be inserted at appropriate stages to ensure smooth information transmission across layers. Existing methods, however, typically rely on one-shot static injection and cannot dynamically regulate prompts across stages, leading to functional mismatches during reasoning. To address this limitation, we propose a dynamic hierarchy-aware mechanism (DHAM). This mechanism first employs hierarchical clustering to derive stage-specific representations, and then leverages the semantic guidance capability of soft prompts to adaptively align and activate them, ensuring effective coordination across reasoning stages. \nDHAM yields consistent gains across models and benchmarks (e.g., 29.5\\%→43.8\\% on Llama-2-13B/GSM8K), with ablations showing CKA clustering and moderate stage numbers (e.g., $G=3/4$) perform best, consistent with the stable information flow hypothesis."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Large Language Models",
"Complex Reasoning",
"Soft Prompt Tuning"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/511e5f43840e80d2617f1692ac8a2bf18b3b16d7.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Unlocking Coherent Reasoning in LLMs with Hierarchical Soft Prompts"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "03qTI3NKqi",
"id": "03qTI3NKqi",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission11167/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897603181,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission11167/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission11167/Authors"
]
}
|
|
2,026
|
03u504EDJp
|
[
2,
4,
6,
2,
2
] |
[
{
"content": "This paper introduces APO, a new framework for distilling reasoning capabilities from multiple MLLMs that exhibit conceptual drift, defined as variability in their reasoning behaviors or conclusions. The core idea is that APO aggregates all available reasoning trajectories and learns to prefer the self-distillation as positive signals against all negative trajectories. This approach treats distillation as a preference optimization problem, aligning the student model’s reasoning trajectory with the highest-quality outputs among drifting teachers, in a “learn-compare-critique” paradigm. The method is tested on a newly constructed dataset, CXR-MAX, based on chest X-ray interpretation, and shows improvements in accuracy.",
"id": "ogYTGtXZT6",
"rating": 2
},
{
"content": "This paper discusses the concept drift problem in knowledge distillation of multimodal large language models (MLLM). Through the analysis of the connection between concept drift and knowledge distillation, the authors introduce the “learn–compare–critique” paradigm to tackle the issue. The resulting method, autonomous preference optimization (APO), trains the student with self relection over the drifting inference for concept alignment. Experiments demonstrate the effectiveness of APO on knowledge distillation tasks. The authors also contribute to a large-scale dataset called CXR-MAX.",
"id": "tQNQ14YJUU",
"rating": 4
},
{
"content": "The paper studies the problem of knowledge distillation from multiple multimodal large language models (MLLMs). The authors observe that the reasoning trajectories of different teacher models can change inconsistently across models or over time, and that such concept drift can propagate to student models during distillation. To address this issue, the paper proposes a “learn–compare–critique” pipeline. The student model first learns from multiple MLLM teachers; then it performs self-distillation to align and identify inconsistent teacher outputs. Finally, through a preference optimization step, the student reinforces alignment with stable reasoning outputs while down-weighting drifted or biased outputs.\n\nFor experiments, the authors construct the CXR-MAX dataset, which is an extension of the MIMIC-CXR dataset by adding reasoning trajectories about clinical chest X-ray interpretation from multiple MLLM teachers. Results show that the proposed method outperforms other existing distillation methods, while achieving performance comparable to or exceeding that of the original teacher models.",
"id": "Eu39zKqsIm",
"rating": 6
},
{
"content": "- This paper aims to address the challenge of knowledge distillation from multiple, heterogeneous MLLMs. The main challenge is the concept drift problem, where the teacher models provide conflicting information that can confuse the student model.\n- To tackle this, this work proposes a novel three-stage \"learn-compare-critique\" paradigm called Autonomous Preference Optimization (APO). The student model first learn a broad knowledge via standard supervised distillation from all teachers. Second, it compares and aggregates the teachers' outputs and performs self-distillation to generate a unified reasoning trajectory. Finally, it critiques the initial knowledge by using the consensus trajectory as a preferred sample and the individual teacher outputs as negative samples using a simple contrastive learning loss.",
"id": "uJUjaPDF61",
"rating": 2
},
{
"content": "This paper addresses the underexplored problem of knowledge distillation from multiple drifting MLLMs, where inconsistent reasoning trajectories across teachers cause concept drift and bias propagation. The authors propose APO, a “learn–compare–critique” paradigm that enables the student model to self-distill and align reasoning concepts autonomously. Experiments show that this method has certain effectiveness.",
"id": "YpLGCI4tXU",
"rating": 2
}
] |
{
"cdate": 1756744193214,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025learning,\ntitle={Learning from All: Concept Alignment for Autonomous Distillation from Multiple Drifting {MLLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=03u504EDJp},\nnote={under review}\n}"
},
"abstract": {
"value": "This paper identifies a critical yet underexplored challenge in distilling from multi-modal large language models (MLLMs): the reasoning trajectories generated by multiple drifting teachers exhibit concept drift, whereby their reasoning distributions evolve unpredictably and transmit biases to the student model, ultimately compromising its performance. To tackle this issue, we pioneer a theoretical connection between concept drift and knowledge distillation, casting the non-stationary reasoning dynamics from multiple MLLM teachers as next-token prediction of multi-stream reasoning trajectories. Guided by concept drift, we introduce the “learn–compare–critique” paradigm, culminating in autonomous preference optimization (APO). Under the active guidance of the teachers, the student model first learns and self-distils preferred thinking by comparing multiple teachers. It then engages in critical reflection over the drifting inference from teachers, performing concept alignment through APO, ultimately yielding a robust, consistent, and generalizable model. Extensive experiments demonstrate our superior performance of consistency, robustness and generalization within knowledge distillation. Besides, we also contributed a large-scale dataset CXR-MAX (Multi-teachers Alignment X-rays), comprising 170,982 distilled reasoning trajectories derived from publicly accessible MLLMs based on MIMIC-CXR. Our code and data are public at: https://anonymous.4open.science/r/Autonomous-Distillation/."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"concept drift",
"transfer learning",
"multi view",
"knowledge distillation",
"multi modal large language model"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/fe4866ea94ed809fb98d3d8b49b15b242306766f.pdf"
},
"primary_area": {
"value": "learning theory"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/d3f2bf191b959b040fec6edae75de60b04403059.pdf"
},
"title": {
"value": "Learning from All: Concept Alignment for Autonomous Distillation from Multiple Drifting MLLMs"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "03u504EDJp",
"id": "03u504EDJp",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission525/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898255701,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission525/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission525/Authors"
]
}
|
|
2,026
|
040ClRXMf3
|
[
6,
8,
2,
8
] |
[
{
"content": "This paper proposes a new algorithm to extract cardinal-minimal sufficient explanations for Neural Additive Models (NAMs).\nIt does so by exploiting key design choices of NAMs, showing how this family of models supports explanations with guarantees.\n\nThis is achieved as follows. First, the paper introduces a method to rank features based on how much they influence the final prediction. Then, after this ranking is obtained, an algorithm is discussed to exploit this order to efficiently explore which features to remove from the current sufficient explanation until a cardinal-minimal explanation is obtained.",
"id": "MthqrzzFcv",
"rating": 6
},
{
"content": "This paper presents a novel algorithm for computing provably cardinality-minimal explanations for Neural Additive Models (NAMs). The authors focus on post-hoc, per-instance explanations: given a trained NAM f and an input x, they seek to compute a subset of features S \\subseteq [n]that is sufficient to guarantee the same prediction under bounded perturbations of the remaining features (an\n\\epsilon-ball). Among all sufficient subsets, the goal is to find one of minimum cardinality (the global optimum).\n\nThe paper provides a novel contribution to the state-of-the-art in the broad area of explainability with provable guarantees (in this case, minimality). The paper focuses on NAMs, which to the best of my knowledge it is still a Still a niche but growing area in the interpretability subfield. They are Not widely used in industry production pipelines yet. but research interest persists. In fact, (NAMs) occupy an interesting middle ground in machine learning — they’re not mainstream, but they are important in specific contexts where interpretability and nonlinear modelling both matter. Their main limitation is that in a pure NAM, features don’t interact directly because the model assumes additivity. This means that the effect of each feature x_i on the output y is independent of any other feature x_j, which can be a strong limitation in some practical settings.\n\nThe proposed algorithm proceeds in two stages. In the first stage each univariate subnetwork f_i(x_i) is verified independently to estimate its influence on the model’s decision. This is done via parallelised binary search over feature importance intervals. In Stage 2, after sorting features by importance, a binary search identifies the globally cardinal-minimal sufficient subset of features that provably determines the model’s prediction. 
This reduces complexity from exponentially many calls to the network to logarithmically many.\n\nExperiments on standard tabular benchmarks demonstrate feasibility and show smaller, faster provable explanations than prior methods; sampling-based visualisations were also shown to be unreliable in some cases, whereas the proposed method always produces verifiably sufficient explanations.\n\nOverall, I am supportive of this paper. It makes a meaningful and well-justified contribution to formal explainability by showing that NAMs enable efficient computation of globally minimal sufficient explanations -- something previously infeasible for general neural networks. With minor revisions, I feel that this paper is a valuable contribution to the state of the art.",
"id": "vX0lZAngLb",
"rating": 8
},
{
"content": "This paper focuses on explainable artificial intelligence and aims to provide concise explanations for the predictions made by Neural Additive Models (NAMs). The primary issue addressed in this study is as follows: given a classifier $ f $ represented by a NAM, an input data instance $ x $ that requires an explanation, and a ball $ B $ centered at $ x $, the goal is to identify a feature subset $ S $ of the minimum size. This subset must ensure that for every instance $ z $ within the ball $ B $, if the values of $ z $ and $ x $, restricted to the features in $ S $, are indistinguishable, then the classifications made by $ f $ for both $ z $ and $ x $ are the same. Such an explanation $ S $ is referred to as a (ball-restricted) minimum-size abductive explanation or a minimum-size sufficient reason.\n\nTo address this problem, the authors propose a two-stage method. In the first stage, the univariate functions $ f_i $ are sorted based on their importance intervals. In the second stage, a minimal-size explanation $ S $ is derived using a greedy approach. The paper includes formal proofs for the correctness and complexity of this method, and it presents comparative experiments conducted on four different datasets that support the theoretical findings.",
"id": "jO4h06yElo",
"rating": 2
},
{
"content": "A computationally-efficient, novel method to compute explanations with provable guarantees for Neural Additive Models (NAMs). The explanations are guaranteed to be the smallest in size, globally. The method claims to be efficient in generating such certified explanations.",
"id": "EWw1viy5FE",
"rating": 8
}
] |
{
"cdate": 1758298867680,
"content": {
"TLDR": {
"value": "Our approach constructs provably sufficient and (globally) cardinal-minimal explanations for neural additive models with improved runtime complexity."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025provably,\ntitle={Provably Explaining Neural Additive Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=040ClRXMf3},\nnote={under review}\n}"
},
"abstract": {
"value": "Despite significant progress in post-hoc explanation methods for neural networks, many remain heuristic and lack provable guarantees. A key approach for obtaining explanations with provable guarantees is by identifying a *(globally) cardinal-minimal* subset of input features which by itself is *provably sufficient* to determine the prediction. However, for standard neural networks, this task is often computationally infeasible, as it demands a worst-case *exponential* number of verification queries in the number of input features, each of which is NP-hard. In this work, we show that for Neural Additive Models (NAMs), a recent and more interpretable neural network family, we can *efficiently* generate explanations with such guarantees. We present a new model-specific algorithm for NAMs that generates provably (globally) cardinal-minimal explanations using only a *logarithmic* number of verification queries in the number of input features, after a parallelized preprocessing step with logarithmic runtime in the required precision is applied to each small univariate NAM component. Our algorithm not only makes the task of obtaining (globally) cardinal minimal explanations feasible, but even outperforms existing algorithms designed to find *(locally) subset-minimal* explanations -- which may be larger and less informative but easier to compute -- despite our algorithm solving a much more difficult task. Our experiments demonstrate that, compared to previous algorithms, our approach provides provably smaller explanations than existing works and substantially reduces the computation time. Moreover, we show that our generated provable explanations offer benefits that are unattainable by standard sampling-based techniques typically used to interpret NAMs."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"explainability",
"XAI",
"explainable AI",
"formal verification",
"sufficient explanations"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d5a73d9cf5e02a90d26e33e9057769ff66ff64fa.pdf"
},
"primary_area": {
"value": "interpretability and explainable AI"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/688a5ff66ccb15d28a06f568b0f04b60f4413e61.zip"
},
"title": {
"value": "Provably Explaining Neural Additive Models"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "040ClRXMf3",
"id": "040ClRXMf3",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission19723/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897022892,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission19723/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission19723/Authors"
]
}
|
|
2,026
|
04HwYGgp2w
|
[
6,
8,
6,
6
] |
[
{
"content": "In this paper,the authors introduces ImageDoctor, a unified,multi-aspect evaluation framework for Text-to Image(T2I) models. Unlike previous methods that provide a single scalar, ImageDoctor assesses image quality across four dimensions: plausibility, semantic alignment, aesthetics, and overall quality.ImageDoctor also provides pixel-level flaw indicators in the form of heatmaps, which highlight misaligned or implausible regions, and can be used as a dense reward for T2I model preference alignment. The model is built on a multi-modal large language models(MLLMs) and adopts a “look-think-predict” paradigm. Training involves a two-phase process: cold start and reinforcement finetuning with Group Relative Policy Optimization(GRPO) using tailored rewards. Furthermore, the paper proposes DenseFlow-GRPO, which utilizes ImageDoctor’s dense, pixel-level heatmaps as a dense reward signal. Experiments demonstrates that ImageDoctor achieves strong alignment with human preference across multiple datasets. Furthermore,when used as a reward model for preference tuning, ImageDoctor achieves an improvement of 10% over scalar-based reward models.",
"id": "yxpL47YNqW",
"rating": 6
},
{
"content": "This paper proposes a novel VLM-based evaluation framework for text-to-image generation, named ImageDoctor. ImageDoctor not only provides multi-dimensional scoring capabilities, such as aesthetics and text-image alignment, but also offers pixel-level localization of flawed regions, enabling it to actively identify areas of misalignment and visual implausibility. Notably, the latter capability introduces a fresh perspective for reward modeling in text-to-image generation. Combined with the authors' proposed DenseFlow-GRPO method, which leverages pixel-level supervision signals for reinforcement learning, the framework effectively enhances the performance of image generation models.",
"id": "t3QGxQNWMn",
"rating": 8
},
{
"content": "This paper proposes ImageDoctor, a unified framework for Text-to-Image (T2I) evaluation that simultaneously outputs multi-aspect scores and spatially grounded heatmaps, offering richer and more interpretable feedback than traditional single-scalar assessments. The paper also introduces DenseFlow-GRPO, a method for T2I model fine-tuning, with experimental results demonstrating the value of pixel-level feedback in improving evaluation accuracy and eliminating local artifacts.",
"id": "ldm65H1lzA",
"rating": 6
},
{
"content": "This paper presents ImageDoctor, a unified and interpretable evaluation framework for text-to-image generation. ImageDoctor provides multi-dimensional feedback and introduces pixel-level diagnostic heatmaps for grounded and fine-grained evaluation. The model adopts a \"look-think-predict\" paradigm. Experimental results show that ImageDoctor achieves state-of-the-art correlation with human judgments and improves text-to-image generation quality.",
"id": "w94QtyzdrS",
"rating": 6
}
] |
{
"cdate": 1757544654492,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025imagedoctor,\ntitle={ImageDoctor: Diagnosing Text-to-Image Generation via Grounded Image Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=04HwYGgp2w},\nnote={under review}\n}"
},
"abstract": {
"value": "The rapid advancement of text-to-image (T2I) models has increased the need for reliable human preference modeling, a demand further amplified by recent progress in reinforcement learning for preference alignment. However, existing approaches typically quantify the quality of a generated image using a single scalar, limiting their ability to provide comprehensive and interpretable feedback on image quality. To address this, we introduce ImageDoctor, a unified multi-aspect T2I model evaluation framework that assesses image quality across four complementary dimensions: plausibility, semantic alignment, aesthetics, and overall quality. ImageDoctor also provides pixel-level flaw indicators in the form of heatmaps, which highlight misaligned or implausible regions, and can be used as a dense reward for T2I model preference alignment. Inspired by the diagnostic process, we improve the detail sensitivity and reasoning capability of ImageDoctor by introducing a ``look-think-predict\" paradigm, where the model first localizes potential flaws, then generates reasoning, and finally concludes the evaluation with quantitative scores. Built on top of a vision-language model and trained through a combination of supervised fine-tuning and reinforcement learning, ImageDoctor demonstrates strong alignment with human preference across multiple datasets, establishing its effectiveness as an evaluation metric. Furthermore, when used as a reward model for preference tuning, ImageDoctor significantly improves generation quality—achieving an improvement of 10% over scalar-based reward models."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Image reward model"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ab62de115d368d82b0351f14bb9466e9bbe97c92.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "ImageDoctor: Diagnosing Text-to-Image Generation via Grounded Image Reasoning"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "04HwYGgp2w",
"id": "04HwYGgp2w",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission3835/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898067519,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission3835/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission3835/Authors"
]
}
|
|
2,026
|
04JkPDiCnp
|
[
2,
6,
4,
2
] |
[
{
"content": "This paper introduces InternAgent-DR, a multi-agent deep-research framework that models scientific reasoning as a dynamic structured knowledge flow. Instead of relying on a linear task sequence, InternAgent-DR represents research workflows as directed acyclic graphs whose nodes correspond to subtasks such as search, solve, and answer, and whose edges encode knowledge dependencies. The system integrates three major components: a Knowledge Flow Planner that incrementally expands the research graph, a Knowledge Collector that executes outermost nodes through LLM-based agents equipped with tools, and a Knowledge Flow Refiner that dynamically modifies the graph based on intermediate results. This design enables both hierarchical decomposition and adaptive refinement of complex research tasks. Extensive experiments on GAIA, GPQA-diamond, HLE, and TRQA benchmarks demonstrate that InternAgent-DR achieves state-of-the-art performance, surpassing existing open- and closed-source deep-research systems such as OpenAI-DR, OWL, and Manus. Ablation studies confirm the effectiveness of structured planning and flow refinement, and case studies show interpretability and reproducibility advantages.",
"id": "F0Sq86M0oo",
"rating": 2
},
{
"content": "This paper introduces InternAgent-DR, a multi-agent system for complex scientific reasoning and problem-solving. It models research as a dynamic structured knowledge flow, where nodes represent subtasks and edges encode dependencies, enabling adaptive planning, reasoning, and refinement. The framework integrates three modules—Knowledge Flow Planner, Knowledge Collector, and Flow Refiner—to iteratively expand, execute, and adjust research plans. Experiments on benchmarks such as GAIA, GPQA, HLE, and TRQA show state-of-the-art performance, suggesting improved adaptability and reasoning depth compared to both single-agent and static multi-agent systems",
"id": "snn8W0Ai2l",
"rating": 6
},
{
"content": "This paper proposes InternAgent-DR, a deep-research system that constructs and evolves a dynamic structured knowledge flow. Instead of linear task pipelines, the method builds a DAG-structured research graph to explicitly model subproblem dependencies, support parallel exploration, and adapt structure during execution. The system includes (i) a flow planner, (ii) a knowledge collector with tool-augmented LLM agents, and (iii) a flow refiner for graph-level self-revision. Experiments on GAIA, GPQA, HLE, and TRQA show state-of-the-art or competitive performance. Ablations indicate benefits from both structured planning and dynamic refinement.",
"id": "qJmSCE5pAH",
"rating": 4
},
{
"content": "The paper proposes InternAgent-DR **,** a multi-agent deep research system that builds and continually refines a knowledge flow (planner → collector → refiner) to coordinate subtasks and dependencies. Experiments on GAIA, HLE, GPQA, and TRQA report strong or SOTA results, with ablations showing benefits from structured planning and dynamic refinement.",
"id": "2JPa5Z3Bum",
"rating": 2
}
] |
{
"cdate": 1756820032542,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025internagentdr,\ntitle={InternAgent-{DR}: Advancing deep research with dynamic structured knowledge flow},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=04JkPDiCnp},\nnote={under review}\n}"
},
"abstract": {
"value": "Deep research is an inherently challenging task that demands both breadth and depth of thinking. It involves navigating diverse knowledge spaces and reasoning over complex, multi-step dependencies, which presents substantial challenges for agentic systems. To address this, we propose InternAgent-DR (Deep Research), a multi-agent framework that actively constructs and evolves a dynamic structured knowledge flow to drive subtask execution and reasoning. InternAgent-DR is capable of strategically planning and expanding the knowledge flow to enable parallel exploration and hierarchical task decomposition, while also adjusting the knowledge flow in real time based on feedback from intermediate reasoning outcomes and insights. InternAgent-DR achieves state-of-the-art performance on both general and scientific benchmarks, including GAIA, HLE, GPQA and TRQA, demonstrating its effectiveness in multi-disciplinary research scenarios and its potential to advance scientific discovery."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"deep research",
"multi-agent",
"reasoning model"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1733de55f54fb9280e4bfee98aaf47ded2d07fd1.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "InternAgent-DR: Advancing deep research with dynamic structured knowledge flow"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "04JkPDiCnp",
"id": "04JkPDiCnp",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission830/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898239693,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission830/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission830/Authors"
]
}
|
|
2,026
|
04Tfwy3LLC
|
[
2,
6,
4,
8
] |
[
{
"content": "The paper relates to the pruning of LLM layers. The paper consists of three main parts:\n1. Discussion of criteria for identifying prunable layers\n2. Comparison between LoRA and partial fine-tuning methods for recovering accuracy after pruning\n3. Theoretical analysis of gradient flow in the presence Pre-Layer Normalization, and how this affects layers by depth\n\nThe main observation in the paper is the relative unimportance of deep layers, and the fact that pruning the last layers is a more useful heuristic than other more elaborate importance estimators (c.f. Magnitude, Taylor, PPL, BI).\nThis claim is supported by Table 1, which shows superior results for the \"reverse order\" method, at a 20% pruning ratio, for Qwen1.5-7B, Llama-3.1-8B-It and Vicuna-7B-v1.5\n\nA parallel finding is the fact that partial fine-tuning of the last one or two layers yields a greater accuracy recovery than full LoRA fine-tuning.\nThis claim is supported by Table 2.\n\nIn the last paragraph of the main body of the paper, the theoretical analysis of gradient flow and show that Pre-LN architectures inherently weaken the gradients and contributions of deeper layers due to the normalization step scaling them down.",
"id": "US7LMRU6C4",
"rating": 2
},
{
"content": "This paper re-evaluates layer pruning methods for Large Language Models (LLMs), addressing whether complex metrics are needed to identify redundant layers and if LoRA is the optimal fine-tuning choice after pruning. Through extensive experiments across various metrics, LLMs, and fine-tuning methods, the paper reveals that a simple \"backward pruning\" (removing the last few layers directly) often outperforms more complex indicators. Furthermore, \"partial layer fine-tuning\" (tuning only the last few layers and the output layer) is found to be more effective and faster than LoRA for performance recovery. This paper provide a theoretical framework based on gradient flow to explain why deeper layers in Pre-LN Transformers contribute less, validating their approach. Pruned models based on these findings significantly surpass existing methods across benchmarks.",
"id": "ULnrI4m9Iy",
"rating": 6
},
{
"content": "This paper re-evaluates layer pruning for Pre-LN LLMs and shows that a simple strategy that prunes layers in reverse order and then fine-tune only the LM head plus the last 1-3 layers consistently matches or even outperforms more complicated pruning methods on a few standard benchmarks (PIQA, HellaSwag, WinoGrande, ARC-e/c, OBQA, MMLU, CMMLU). The empirical study is broad (several LLaMA and Qwen-style models) and scales up to LLaMA-3-70B. The authors give gradient-flow explanation for why deeper layers in Pre-LN are matter less, and they also find that this approach can beat the usual \"prune + LoRA\" recovery. This makes the paper especially useful for users who just want a reliable pruning recipe without complex per-layer scoring.",
"id": "nII3u1uhJm",
"rating": 4
},
{
"content": "The paper is about empirical benchmarking and methodological clarification for layer pruning.\n\nBenchmarks 7 layer-selection metrics and 6 fine-tuning methods across Vicuna-7B, Qwen-7B, and Llama-3.x models.\n\nFinds that reverse-order pruning (dropping last layers) consistently outperforms complex importance metrics.\n\nShows partial-layer fine-tuning (LM head + last 1–3 layers) surpasses LoRA/QLoRA for accuracy and training cost.\n\nExtends tests to Llama-3-70B.\n\nReports 2-19 pp improvement over prior layer-pruning baselines.\n\nAdds a gradient-flow derivation explaining why deep layers matter less.\n\nNotes that iterative prune–tune cycles provide no benefit over one-shot pruning.",
"id": "mCAFX6HKkP",
"rating": 8
}
] |
{
"cdate": 1757254648198,
"content": {
"TLDR": {
"value": "This paper presents a theoretical and empirical analysis of layer pruning in Large Language Models, aiming to improve and refine pruning strategies."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025reassessing,\ntitle={Reassessing Layer Pruning in {LLM}s: New Insights and Methods},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=04Tfwy3LLC},\nnote={under review}\n}"
},
"abstract": {
"value": "Although large language models (LLMs) have achieved remarkable success across various domains, their considerable scale necessitates substantial computational resources, posing significant challenges for deployment in resource-constrained environments. Layer pruning, as a simple yet effective compression method, removes layers of a model directly, reducing computational overhead. However, what are the best practices for layer pruning in LLMs? Are sophisticated layer selection metrics truly effective? Does the LoRA (Low-Rank Approximation) family, widely regarded as a leading method for pruned model fine-tuning, truly meet expectations when applied to post-pruning fine-tuning? To answer these questions, we dedicate thousands of GPU hours to benchmarking layer pruning in LLMs and gaining insights across multiple dimensions. Our results demonstrate that a simple approach, i.e., pruning the final layers followed by fine-tuning the lm\\_head and the remaining last three layers, yields remarkably strong performance. These pruning strategies are further supported by theoretical analyses based on the gradient flow. Following this guide, our method surpasses existing state-of-the-art pruning methods by $5.62\\%$–$17.27\\%$ on Llama-3.1-8B-It, by $2.36\\%$–$19.45\\%$ on Llama-3-8B and by $4.34\\%$–$9.59\\%$ on Llama-3-70B. The code is available on GitHub."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Large Language Model",
"Layer Pruning",
"Model Compression"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c6ed1e0f689d0744c27ac966827d51d77a626dce.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Reassessing Layer Pruning in LLMs: New Insights and Methods"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "04Tfwy3LLC",
"id": "04Tfwy3LLC",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898126388,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission2804/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission2804/Authors"
]
}
|
|
2,026
|
04h40hEgTj
|
[
6,
6,
2,
4
] |
[
{
"content": "In this paper, the authors aimed at creating a family of toy models for exploring the known challenge of long-context learning for LLM. The proposed toy model have different time series data interleaved with distinct labels. The authors found that LLM developed two distinct learning mechanisms in performing next token prediction on the toy model. The first mechanism focuses on identity regime change in the data, and the second one perform next token prediction based the data observed. The two mechanism also seem to follow different learning dynamic, and the second one developed earlier than the first.",
"id": "xGetJAj2RR",
"rating": 6
},
{
"content": "ICL is a well studied phenomenon in the ML community. Various tasks, such as MQAR and regression, have been proposed to test the ICL capabilities of models in the past. The beauty of each is it both tests the model's ability to perform lookup operations (MQAR) and more complex operations only depending on the previous token (regression). This work combines these into a task using linear dynamical systems, where each system is marked in-context by a specific query label. Two observations are seen: the model uses the open-query label to perform the correct task, and the model uses past elements in the sequence to continue the task. These observations are validated by configuring the systems and states to align, allowing for a clear test of these observations in a controlled setting. Further investigating that these different mechanisms exist within these learned models, a mechanistic study is conducted separating out two circuits from within the model that have markedly distinct performance on the two different subtasks of recall and execution.",
"id": "Akv2lLYAWU",
"rating": 6
},
{
"content": "This paper studies mechanisms through which transformers can perform in-context prediction. \nIn models trained on a novel synthetic task, the paper discovers two mechanisms (\"label-based\" and “observation-based”).\nA further experiment on OLMo checkpoints provides further evidence from a translation task.",
"id": "5QYEAhhMIz",
"rating": 2
},
{
"content": "This paper proposes a new methodology to study in-context behaviors in transformer models. They create a sequence which consists of segments of observations drawn from different distributions. Each segment begins with a special token, termed \"symbolic punctuation label\" (SPL), so model must choose between inferring the next observation based on the SPL or the observations in the context. They provide experimental evidence suggesting that the latter choice develops earlier in training than the first.",
"id": "fvnfFvblDP",
"rating": 4
}
] |
{
"cdate": 1758340263445,
"content": {
"TLDR": {
"value": "We introduce a new family of toy problems that combine features of linear-regression-style continuous in-context learning (ICL) with discrete associative recall and find distinct learning dynamics for different prediction mechanisms."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025decomposing,\ntitle={Decomposing Prediction Mechanisms for In-context Recall},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=04h40hEgTj},\nnote={under review}\n}"
},
"abstract": {
"value": "We introduce a new family of toy problems to explore challenges with long context learning and associative recall in transformer models. Our setup involves interleaved segments of observations from randomly drawn linear deterministic dynamical systems. Each system is associated with a discrete symbolic label that must be learned in-context since these associations randomly shuffle between training instances.\n\nVia out-of-distribution experiments we find that learned next-token prediction for this toy problem involves at least two separate mechanisms. One \"label-based\" mechanism uses the discrete symbolic labels to do the associative recall required to predict the start of a resumption of a previously seen system's observations. The second ``observation-based'' mechanism largely ignores the discrete symbolic labels and performs a prediction based on the state observations previously seen in context. These two mechanisms have different learning dynamics: the second mechanism develops much earlier than the first.\n\nThe behavior of our toy model suggested concrete experiments that we performed with OLMo training checkpoints on an ICL translation task. We see a similar phenomenon: the model learns to continue a translation task in-context earlier than it decisively learns to in-context identify the meaning of a symbolic label telling it to translate."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"emergence",
"in-context learning",
"time-series",
"associative recall",
"learning dynamics"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/874dd26fa4acf6f26e690461d6232071b158fd84.pdf"
},
"primary_area": {
"value": "interpretability and explainable AI"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Decomposing Prediction Mechanisms for In-context Recall"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "04h40hEgTj",
"id": "04h40hEgTj",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission23149/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896830101,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission23149/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission23149/Authors"
]
}
|
|
2,026
|
053vZMxDB5
|
[
2,
8,
4
] |
[
{
"content": "This paper presents a reinforcement learning (RL) approach for learning from signal temporal logic (STL) to make learning more feasible for long-horizon tasks. The novel model-free approach divides and flattens complex STL formulas and searches for time-variable actualizations via Metropolis-Hastings (MH) sampling to enable efficient learning. The proposed method is compared with a range of existing approaches across several environments. I believe the idea is original and shows promise for improving over existing methods for STL learning. However, the paper still needs substantial work; specifically, a more thorough technical analysis and a systematic description of the proposed approach, as well as clearer explanations and presentation of the experimental results.",
"id": "Jnq4Ep2xfC",
"rating": 2
},
{
"content": "The paper proposes Temporal Grounded Policy Optimization (TGPO), a hierarchical reinforcement learning framework for solving control problems specified using Signal Temporal Logic (STL). STL enables rich task specifications with temporal and spatial constraints, but its non-Markovian structure and sparse reward signals make it difficult to handle with standard RL algorithms. TGPO decomposes STL formulas into subgoals with invariant constraints, and introduces a two-level architecture: a high-level “temporal grounding” component assigns time variables to each subgoal, while a low-level time-conditioned policy learns to satisfy them using dense, stage-wise rewards. The framework includes a critic-guided Bayesian time allocation step using Metropolis–Hastings sampling, which focuses exploration on promising temporal schedules.\nExperiments across five environments (2D navigation, unicycle, Franka Panda, quadrotor, and Ant) show that TGPO and its Bayesian variant (TGPO*) outperform several baselines—τ-MDP, F-MDP, RNN, Grad, and CEM—particularly on complex, high-dimensional, and long-horizon STL tasks.",
"id": "lGJIfeabQm",
"rating": 8
},
{
"content": "This paper presents a new reinforcement learning method to learn control policies for some types of STL specifications. The proposed method consists of first sampling time assignments for decomposed subgoals and then learn policies to achieve these subgoals conditioned on the time assignments.",
"id": "S02eqipeGp",
"rating": 4
}
] |
{
"cdate": 1756884774931,
"content": {
"TLDR": {
"value": "We design a Reinforcement Learning framework based on time variables and task decomposition to solve Signal Temporal Logic tasks"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025tgpo,\ntitle={{TGPO}: Temporal Grounded Policy Optimization for Signal Temporal Logic Tasks},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=053vZMxDB5},\nnote={under review}\n}"
},
"abstract": {
"value": "Learning control policies for complex, long-horizon tasks is a central challenge in robotics and autonomous systems. Signal Temporal Logic (STL) offers a powerful and expressive language for specifying such tasks, but its non-Markovian nature and inherent sparse reward make it difficult to be solved via standard Reinforcement Learning (RL) algorithms. Prior RL approaches focus only on limited STL fragments or use STL robustness scores as sparse terminal rewards. In this paper, we propose TGPO, Temporal Grounded Policy Optimization, to solve general STL tasks. TGPO decomposes STL into timed subgoals and invariant constraints and provides a hierarchical framework to tackle the problem. The high-level component of TGPO proposes concrete time allocations for these subgoals, and the low-level time-conditioned policy learns to achieve the sequenced subgoals using a dense, stage-wise reward signal. During inference, we sample various time allocations and select the most promising assignment for the policy network to rollout the solution trajectory. To foster efficient policy learning for complex STL with multiple subgoals, we leverage the learned critic to guide the high-level temporal search via Metropolis-Hastings sampling, focusing exploration on temporally feasible solutions. We conduct experiments on five environments, ranging from low-dimensional navigation to manipulation, drone, and quadrupedal locomotion. Under a wide range of STL tasks, TGPO significantly outperforms state-of-the-art baselines (especially for high-dimensional and long-horizon cases), with an average of 31.6% improvement in task success rate compared to the best baseline."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Reinforcement Learning; Signal Temporal Logic"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/524211ceccea6ca532fc8ec47c9c896c13dd9fa7.pdf"
},
"primary_area": {
"value": "applications to robotics, autonomy, planning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "TGPO: Temporal Grounded Policy Optimization for Signal Temporal Logic Tasks"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "053vZMxDB5",
"id": "053vZMxDB5",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission1461/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898207954,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission1461/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission1461/Authors"
]
}
|
|
2,026
|
05NHmcEpNk
|
[
8,
4,
8
] |
[
{
"content": "This paper introduces CT-MLE, a model-based algorithm for continuous-time reinforcement learning (CTRL) that uses maximum likelihood estimation (MLE) of the state marginal density instead of directly modeling system dynamics.\nThe key idea is to achieve instance-dependent adaptivity, where the algorithm’s regret scales with the total reward variance rather than with fixed measurement schedules.\nThe authors derive theoretical guarantees, showing that regret can become independent of the measurement strategy when observation frequency adapts to problem complexity.\nAdditionally, they propose a randomized measurement schedule to enhance sample efficiency without additional measurement cost.",
"id": "AVfMqCDRyx",
"rating": 8
},
{
"content": "This paper analyzes the continuous-time RL setting where the dynamics is modelled as an SDE with both a drift and a diffusion terms. In this setting, the authors present an algorithm for minimizing the regret during interaction with the environment. Crucially, the algorithm is based on constructing two confidence sets around the max likelihood estimate of the dynamics, and then acting optimistically w.r.t. them. The paper next provides a theoretical analysis of the algorithm.",
"id": "dw7T6ras6J",
"rating": 4
},
{
"content": "This paper studies instance-dependent guarantees for continuous-time reinforcement learning (CTRL). Under some conditions, it establishes an instance-dependent second-order regret bound for CTRL. The results provides some new insights for CTRL, including robustness on choice of measurements and weaker horizon dependence compared with prior related works.",
"id": "pNcHIDegAr",
"rating": 8
},
{
"content": "Thank you for your insightful comments. Here we list our main revisions to our paper and highlight which are they for:\n\n**1.** We add a proof sketch in the starting from line 396 to 420 in the revised paper. (**Q1** for Reviewer mhMX)\n\n**2.** In line 340-345, We extended our setting from finite function class to infinite ones, by introducing the brackting number. We have revised our main theorem and corresponding lemmas in line 387-395 and line 1122-1128 accordingly.\n\nIn line 376-384 We have also added a new example of continuous-time dynamics that shows a low eluder dimension and low bracketing numbers. (**Q2** for Reviewer mhMX, **Q2** for Reviewer LDX4, **Q2** for Reviewer Zkqu).\n\n**3.** In line 942-971, we have explained why the continuous-time decomsposition as shown in (4.1) holds. (**Q4** for Reviewer LDX4)\n\n**4.** In line 2046-2054, we have added additional abalation study to study the robustness of our algorithm to the function approximator class (**Q3** for Reviewer Zkqu)",
"id": "6EJJ5epY35",
"rating": null
}
] |
{
"cdate": 1758213925539,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025instancedependent,\ntitle={Instance-Dependent Continuous-Time Reinforcement Learning via Maximum Likelihood Estimation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=05NHmcEpNk},\nnote={under review}\n}"
},
"abstract": {
"value": "Continuous-time reinforcement learning (CTRL) provides a natural framework for sequential decision-making in dynamic environments where interactions evolve continuously over time. While CTRL has shown growing empirical success, its ability to adapt to varying levels of problem difficulty remains poorly understood. In this work, we investigate the instance-dependent behavior of CTRL and introduce a simple, model-based algorithm built on maximum likelihood estimation (MLE) with a general function approximator. Unlike existing approaches that estimate system dynamics directly, our method estimates the state marginal density to guide learning. We establish instance-dependent performance guarantees by deriving a regret bound that scales with the total reward variance and measurement resolution. Notably, the regret becomes independent of the specific measurement strategy when the observation frequency adapts appropriately to the problem’s complexity. To further improve performance, our algorithm incorporates a randomized measurement schedule that enhances sample efficiency without increasing measurement cost. These results highlight a new direction for designing CTRL algorithms that automatically adjust their learning behavior based on the underlying difficulty of the environment."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Continuous-time reinforcement learning"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9f4ff6eac9d7af34e021903665ab4988e2f46ad6.pdf"
},
"primary_area": {
"value": "learning theory"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Instance-Dependent Continuous-Time Reinforcement Learning via Maximum Likelihood Estimation"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "05NHmcEpNk",
"id": "05NHmcEpNk",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission13133/-/Full_Submission",
"ICLR.cc/2026/Conference/Submission13133/-/Rebuttal_Revision"
],
"license": "CC BY 4.0",
"mdate": 1763388667273,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission13133/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission13133/Authors"
]
}
|
|
2,026
|
05PqjBzN6S
|
[
4,
2,
6
] |
[
{
"content": "This paper addresses the problem of determining when sufficient data is available to safely retrain a model after a sudden concept drift. The authors propose CALIPER, a model-agnostic and data-only test to estimate this required post-drift data size. The core idea is grounded in the concept of \"state dependence\" in dynamical systems. CALIPER employs a lightweight weighted local regression (WLR) to probe the local predictability of the post-drift data window. A retraining trigger is issued when the WLR's prediction error exhibits a monotonically non-increasing trend as the locality parameter increases, conditioned on a sufficient effective sample size (ESS). The authors provide theoretical analysis linking this trigger to state dependence and learnability, and empirical results across four datasets and three model families show that CALIPER outperforms fixed-window and incremental update strategies.",
"id": "eNjd7SQMz6",
"rating": 4
},
{
"content": "The paper proposes a method for determining the right time to retrain/adapt a model after concept drift has occurred. The proposed method is computational efficient because it only uses the data from the data stream together with some hyperparameters.",
"id": "QohygaGUI4",
"rating": 2
},
{
"content": "This paper focuses on handling the sudden drift in streaming data and tries to explore when to retrain after drift. A method called CALIPER has been developed for detecting concept drift occurrence and stable retraining. And a theoretical analysis of the proposed method has been given for fundamental support. The experiment on several datasets and benchmarks has been conducted, and the experiment results show the performance of the proposed method.",
"id": "wuX6eItFHt",
"rating": 6
}
] |
{
"cdate": 1758350444098,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025when,\ntitle={When to Retrain after Drift: A Data-Only Test of Post-Drift Data Size Sufficiency},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=05PqjBzN6S},\nnote={under review}\n}"
},
"abstract": {
"value": "Sudden concept drift makes previously trained predictors unreliable, yet deciding when to retrain and what post-drift data size is sufficient is rarely addressed. We propose CALIPER —a detector- and model-agnostic, data-only test that estimates the post-drift data size required for stable retraining. CALIPER exploits state dependence in streams generated by dynamical systems: we run a single-pass weighted local regression over the post-drift window and track a one-step proxy error as a function of a locality parameter $\\theta$. When an effective sample size gate is satisfied, a monotonically non-increasing trend in this error with increasing a locality parameter indicates that the data size is sufficiently informative for retraining.\nWe also provide a theoretical analysis of our CALIPER, and we show that the algorithm has a low per-update time and memory. Across datasets from four heterogeneous domains, three learner families, and two detectors, CALIPER consistently matches or exceeds the best fixed data size for retraining while incurring negligible overhead and often outperforming incremental updates. CALIPER closes the gap between drift detection and data-sufficient adaptation in streaming learning."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Concept drift",
"Stream learning",
"Data sufficiency",
"Time series"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c36ecf51e14859470565e33d2e39e69232a4cb26.pdf"
},
"primary_area": {
"value": "learning on time series and dynamical systems"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "When to Retrain after Drift: A Data-Only Test of Post-Drift Data Size Sufficiency"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "05PqjBzN6S",
"id": "05PqjBzN6S",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission23926/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896790097,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission23926/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission23926/Authors"
]
}
|
|
2,026
|
05SHW9ai9e
|
[
4,
2,
4,
4
] |
[
{
"content": "To address DocQA limitations (single-modality bias, isolated RAG, long-document overload), this paper proposes MDocAgent—a framework integrating dual RAG (text via ColBERTv2, image via ColPali) and 5 collaborative agents (General, Critical, Text, Image, Summarizing). Evaluated on 5 benchmarks (MMLongBench, FetaTab, etc.), it outperforms baselines: Top-1 accuracy 0.407 (new SOTA), Top-4 0.465. Ablation confirms all agents are necessary. Key contributions: \"dual RAG + multi-agent\" architecture, critical info extraction to reduce agent attention dispersion, and validation for complex multi-modal docs.",
"id": "10HA4uLhex",
"rating": 4
},
{
"content": "This paper introduces MDocAgent, a multi-modal multi-agent framework for document question answering (DocQA). Unlike traditional LLM-based or LVLM-based RAG systems that typically focus on a single modality (text or image), MDocAgent integrates both textual and visual information through five collaborative agents.\nThe system leverages dual RAG pipelines (ColBERTv2 for text and ColPali for images) to retrieve the most relevant segments and pages, and then coordinates these agents through staged reasoning and synthesis.\nExperiments across five benchmarks (MMLongBench, LongDocURL, PaperTab, PaperText, and FetaTab) show an average improvement of 12.1% over current state-of-the-art RAG methods (like M3DocRAG).",
"id": "Tp7nrTkJGa",
"rating": 2
},
{
"content": "This paper presents MDocAgent, a multi-modal, multi-agent framework for document understanding and question answering. The system integrates both text- and image-based retrieval (via ColBERT and ColPali) and coordinates several specialized agents (text, image, critical, and summarizing) to perform collaborative reasoning over multimodal documents. Experimental results on multiple DocQA benchmarks show consistent improvements over existing baselines.",
"id": "H8t7t7HedT",
"rating": 4
},
{
"content": "This paper proposes a multi-agent RAG framework to enhance document VQA. The motivation to integrate multimodal information for RAG-based document understanding is clear and relevant. The authors explore using multiple retrievers combined with different prompting strategies to progressively integrate information and improve performance. While the experimental results demonstrate potential, the paper’s novelty is limited. The approach mainly relies on prompt-based fusion of retrieval results from different modalities without introducing substantial methodological innovation. Moreover, as a training-free framework, the experimental validation remains limited.",
"id": "1rxGJ4nT1l",
"rating": 4
}
] |
{
"cdate": 1758214136657,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025mdocagent,\ntitle={{MD}ocAgent: A Multi-Modal Multi-Agent Framework for Document Question Answering},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=05SHW9ai9e},\nnote={under review}\n}"
},
"abstract": {
"value": "Document Question Answering (DocQA) is a very common task. Existing methods using Large Language Models (LLMs) or Large Vision Language Models (LVLMs) and Retrieval Augmented Generation (RAG) often prioritize information from a single modal, failing to effectively integrate textual and visual cues. These approaches struggle with complex multi-modal reasoning, limiting their performance on real-world documents. We present MDocAgent (A Multi-Modal Multi-Agent Framework for Document Question Answering), a novel RAG and multi-agent framework that leverages both text and image. Our system employs five specialized agents: a general agent, a critical agent, a text agent, an image agent and a summarizing agent. These agents engage in multi-modal context retrieval, combining their individual insights to achieve a more comprehensive understanding of the document's content. This collaborative approach enables the system to synthesize information from both textual and visual components, leading to improved accuracy in question answering. Preliminary experiments on five benchmarks like MMLongBench, LongDocURL demonstrate the effectiveness of our MDocAgent, achieve an average improvement of 12.1% compared to current state-of-the-art method. This work contributes to the development of more robust and comprehensive DocQA systems capable of handling the complexities of real-world documents containing rich textual and visual information."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Multimodal",
"DocQA",
"RAG",
"LVLM"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2ddcd015efb50efa2aa66b781add39ffb4dc6e92.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "MDocAgent: A Multi-Modal Multi-Agent Framework for Document Question Answering"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "05SHW9ai9e",
"id": "05SHW9ai9e",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission13150/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897460751,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission13150/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission13150/Authors"
]
}
|
|
2,026
|
05THHF0w3y
|
[
0,
2,
4,
4
] |
[
{
"content": "The paper proposes a new method for LLM reasoning, R-Capsule, where LLMs first output high-level plans which are in a latent space and then textual detailed steps and finally the answer. The authors choose several benchmarks on math reasoning (such as GSM-8k) and commensense reasoning (such as strategyQA). They tested four models: GPT-2 (0.2B), LLaMA-3 (1B), LLaMA3.1 (7B) and Qwen-3 (8B) and compared with baselines such as standard SFT, SFT+CoT, iCoT, coconut, etc.",
"id": "HYfkctmIss",
"rating": 0
},
{
"content": "This paper introduces the \"Reasoning Capsule\" (R-Capsule), a framework to improve the efficiency of CoT reasoning. The core idea is to compress the high-level plan into a small set of latent tokens (the \"capsule\") which then conditions the generation of explicit execution steps. This method is grounded in the Information Bottleneck principle, using a structural bottleneck to enforce minimality (compression) and a dual-loss (task accuracy + plan reconstruction) to ensure sufficiency. Experiments on math and commonsense benchmarks show R-Capsule improves both accuracy and efficiency (fewer tokens, lower latency) over strong CoT baselines.",
"id": "yB8K8rBKem",
"rating": 2
},
{
"content": "This paper introduces the \"Reasoning Capsule\" (R-Capsule), a novel framework designed to improve the efficiency and accuracy of large language models (LLMs) in complex reasoning tasks. The core idea is to address the high latency and verbosity of standard Chain-of-Thought (CoT) prompting by decoupling the reasoning process into a high-level plan and low-level execution steps. Instead of generating an explicit textual plan, the model learns to compress it into a small set of latent tokens—the Reasoning Capsule.\n\nThe method is theoretically grounded in the Information Bottleneck (IB) principle. The capsule is encouraged to be minimal through a low-capacity architectural bottleneck and sufficient through a dual training objective. This objective combines a primary task loss (for answer accuracy) with an auxiliary plan-reconstruction loss, where a separate, shallow decoder is trained to reconstruct the original textual plan from the capsule. This reconstruction loss grounds the latent representation, making it more interpretable and preventing the model from learning uninformative shortcuts.\n\nExperiments on mathematical and commonsense reasoning benchmarks (GSM8K, StrategyQA, etc.) with various model sizes (from GPT-2 to 8B models) show that R-Capsule consistently outperforms standard CoT fine-tuning and other baselines in accuracy, while significantly reducing the number of generated tokens and inference latency.",
"id": "5AV74hOxJy",
"rating": 4
},
{
"content": "This paper proposes R-Capsule, a framework that compresses the high-level plan of a reasoning chain into a small number of learned latent tokens, while leaving execution lightweight or explicit. The design is motivated by an Information Bottleneck objective: a low-capacity projection enforces minimality, and a plan-reconstruction loss encourages sufficiency and a semantically grounded latent (via a shallow decoder). Experiments on GSM8K, MultiArith, AQuA, StrategyQA, and CSQA2 with small/medium base models (e.g., GPT-2 ~150M, Llama-3-1B, and Qwen3-8B; limited results for 7B/8B) show modest accuracy gains over CoT-SFT and reduced token counts/latency.",
"id": "mg0F5OMupx",
"rating": 4
}
] |
{
"cdate": 1757406324840,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025rcapsule,\ntitle={R-Capsule: Compressing High-Level Plans for Efficient Large Language Model Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=05THHF0w3y},\nnote={under review}\n}"
},
"abstract": {
"value": "Chain-of-Thought (CoT) prompting has enabled Large Language Models (LLMs) to tackle complex reasoning tasks by generating explicit step-by-step rationales. However, this verbosity incurs significant computational overhead in terms of latency and memory, and can lead to error propagation over long reasoning chains. We propose the \\textbf{Reasoning Capsule}, a novel framework that captures the efficiency of latent reasoning while retaining the transparency of explicit CoT. Our core idea is to compress the high-level strategic plan of a reasoning process into a compact, low-dimensional latent representation---the Reasoning Capsule---while leaving the low-level execution steps explicit. This hybrid approach is grounded in the Information Bottleneck principle, where we learn a capsule that is a \\emph{minimal sufficient statistic} for the reasoning task. Minimality is enforced structurally via a low-dimensional bottleneck, ensuring efficiency. Sufficiency is enforced via a dual-objective function: a primary task loss for answer accuracy and an auxiliary reconstruction loss that ensures the capsule faithfully represents the original textual plan. This reconstruction objective grounds the latent space, making the compressed plan interpretable and robust against uninformative shortcuts. Our framework unifies efficiency, accuracy, and interpretability, significantly reducing the token footprint of reasoning while maintaining or improving performance on complex reasoning benchmarks."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Large Language Model",
"latent reasoning"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/41ed6938581c932dbdf98a17f0863c19cb7cfbde.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "R-Capsule: Compressing High-Level Plans for Efficient Large Language Model Reasoning"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "05THHF0w3y",
"id": "05THHF0w3y",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission3349/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898094406,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission3349/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission3349/Authors"
]
}
|
|
2,026
|
05hNleYOcG
|
[
2,
4,
2,
2
] |
[
{
"content": "The paper introduces PLAGUE, a plug-and-play framework for designing multi-turn jailbreak attacks on large language models (LLMs). Inspired by lifelong-learning and agentic architectures, PLAGUE divides the attack process into three stages — Planner, Primer, and Finisher — enabling adaptable and modular multi-turn red-teaming. The framework supports integration with prior attacks like GOAT, Crescendo, and ActorBreaker, and achieves significant improvements in attack success rates (ASR) across top-tier models. It also incorporates reflection, memory-based retrieval, and rubric-based evaluation to enhance contextual adaptation.",
"id": "tNNFEkSguZ",
"rating": 2
},
{
"content": "This paper introduces PLAGUE, a multi-stage framework for the automated generation of multi-turn jailbreak attacks against Large Language Models (LLMs). The framework decomposes the attack process into three distinct phases: a Planner, a Primer for context-building, and a Finisher for the final attack. The core design aims to enhance the success rate, diversity, and adaptability of multi-turn attacks through a plug-and-play modular architecture combined with a lifelong learning memory mechanism.",
"id": "ymKvgkHJvh",
"rating": 4
},
{
"content": "**NOTE: This paper violates the conference formatting guidelines by substantially reducing the page margins to fit more content. I would recommend a desk rejection due to this severe format violation. Nevertheless, I provide my technical evaluation below and defer the final desk-rejection decision to the AC and PC.**\n\n\nPLAGUE is a plug-and-play, lifelong-learning framework for generating modular multi-turn jailbreaks against black-box LLMs: it builds an n-step plan by retrieving successful past strategies (Planner), escalates context with benign-seeming intermediate prompts (Primer), and then executes the final exploit (Finisher), while using rubriced reflection, backtracking, and a memory of successful strategies to adapt over time. Evaluated on the HarmBench benchmark, PLAGUE outperforms prior multi-turn and single-turn methods, achieving ASRs such as 81.4% on OpenAI o3, 67.3% on Claude Opus 4.1, and up to 97.8% on Deepseek-R1, while remaining computationally efficient within a six-turn budget; the authors note ethical risks but argue the framework aids systematic vulnerability evaluation and defense development.",
"id": "twNOgBALCS",
"rating": 2
},
{
"content": "This paper introduces PLAGUE, a modular, memory-augmented multi-round jailbreak framework that coordinates a three-stage Planner–Primer–Finisher pipeline, achieving state-of-the-art attack-success rates on several mainstream LLMs.",
"id": "utFk1lpGtz",
"rating": 2
}
] |
{
"cdate": 1758135059535,
"content": {
"TLDR": {
"value": "Agentic framework for discovering novel potent multi-turn jailbreak attacks that achieve an attack success rate of 67.3% on Claude Opus 4.1"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025plague,\ntitle={{PLAGUE}: Plug-and-play Framework for Lifelong Adaptive Generation of Multi-turn Exploits},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=05hNleYOcG},\nnote={under review}\n}"
},
"abstract": {
"value": "Large Language Models (LLMs) are improving at an exceptional rate. With the advent of agentic workflows, multi-turn dialogue has become the de facto mode of interaction with LLMs for completing long and complex tasks. While LLM capabilities continue to improve, they remain increasingly susceptible to jailbreaking, especially in multi-turn scenarios where harmful intent can be subtly injected across the conversation to produce nefarious outcomes. While single-turn attacks have been extensively explored, adaptability, efficiency and effectiveness continue to remain key challenges for their multi-turn counterparts. To address these gaps, we present PLAGUE, a novel plug-and-play framework for designing multi-turn attacks inspired by lifelong-learning agents. PLAGUE dissects the lifetime of a multi-turn attack into three carefully designed phases (Primer, Planner and Finisher) that enable a systematic and information-rich exploration of the multi-turn attack family. Evaluations show that red-teaming agents designed using PLAGUE achieve state-of-the-art jailbreaking results, improving attack success rates (ASR) by more than 30% across leading models in a lesser or comparable query budget. Particularly, PLAGUE enables an ASR (based on StrongReject) of 81.4% on OpenAI's o3 and 67.3% on Claude's Opus 4.1, two models that are considered highly resistant to jailbreaks in safety literature. Our work offers tools and insights to understand the importance of plan initialization, context optimization, and lifelong learning in crafting multi-turn attacks for a comprehensive model vulnerability evaluation."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"LLM Red-Teaming",
"Agentic AI"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/de8dc0979b8266f26b81ee913344d9abba387bb0.pdf"
},
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/da1b9d173949372d38df20cfd54baf183ccdf1be.zip"
},
"title": {
"value": "PLAGUE: Plug-and-play Framework for Lifelong Adaptive Generation of Multi-turn Exploits"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "05hNleYOcG",
"id": "05hNleYOcG",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission9695/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897703848,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission9695/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission9695/Authors"
]
}
|
|
2,026
|
05pfP2khzx
|
[
2,
2,
4
] |
[
{
"content": "This paper introduces VIDEOREPAIR, a video refinement framework to correct text-video misalignments. It has three steps: 1. detect misalignment. Finding the issue and region with MLLM. 2. Plan the refinement including preserve the correct parts and construct prompts that could be used to re-generate the target parts. 3. regenerate the incorrect parts. \nThe method is evaluated on two benchmark EvalCrafter and T2V-CompBench on three different text to video models.",
"id": "3ygO9k7VKw",
"rating": 2
},
{
"content": "To address the challenge that current text-to-video (T2V) models often fail to align with complex text prompts,the authors propose VideoRepair, a training-free, self-correcting, and model-agnostic video refinement framework. VideoRepair automatically detects fine-grained text–video misalignments and performs targeted, localized corrections. The key contributions are as follows:\n- Misalignment detection, which identifies both faithful and misaligned regions within generated videos;\n- Refinement planning, which preserves correctly generated entities, segments their corresponding regions across frames, and constructs targeted prompts for misaligned areas;\n- Localized refinement, which selectively regenerates problematic regions while preserving faithful content through joint optimization of preserved and newly generated areas.",
"id": "nfBvAALDzB",
"rating": 2
},
{
"content": "This paper addresses the text-video misalignment problem under complex cues in T2V generation by proposing a model-agnostic, training-free, refined framework, VIDEOREPAIR. Its core achieves self-correction through a two-stage process: first, it utilizes a multimodal large model (MLLM) to generate fine-grained spatiotemporal problem detection, identifying misaligned regions and locking in the correct content; then, through region-preserving segmentation and target cue construction, it locally regenerates the problem region and integrates the global content.",
"id": "ycBToBY7Bj",
"rating": 4
},
{
"content": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.",
"id": "6dOs45S72Q",
"rating": null
}
] |
{
"cdate": 1758222291968,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@misc{\nlee2025selfcorrecting,\ntitle={Self-Correcting Text-to-Video Generation with Misalignment Detection and Localized Refinement},\nauthor={Daeun Lee and Jaehong Yoon and Jaemin Cho and Mohit Bansal},\nyear={2025},\nurl={https://openreview.net/forum?id=05pfP2khzx}\n}"
},
"abstract": {
"value": "Recent text-to-video (T2V) diffusion models have made remarkable progress in\ngenerating high-quality and diverse videos. However, they often struggle to align\nwith complex text prompts, particularly when multiple objects, attributes, or spatial\nrelations are specified. We introduce VideoRepair, the first self-correcting,\ntraining-free, and model-agnostic video refinement framework that automatically\ndetects fine-grained text–video misalignments and performs targeted, localized\ncorrections. Our key insight is that even misaligned videos usually contain correctly\nrendered regions that should be preserved rather than regenerated. Building on this\nobservation, VideoRepair proposes a novel region-preserving refinement strategy\nwith three stages: (i) misalignment detection, where systematic MLLM-based evaluation\nwith automatically generated spatio-temporal questions identifies faithful\nand misaligned regions; (ii) refinement planning, which preserves correctly generated\nentities, segments their regions across frames, and constructs targeted prompts\nfor misaligned areas; and (iii) localized refinement, which selectively regenerates\nproblematic regions while preserving faithful content through joint optimization\nof preserved and newly generated areas. This self-correcting, region-preserving\nstrategy converts evaluation signals into actionable guidance for refinement, enabling\nefficient and interpretable corrections. On two challenging benchmarks,\nEvalCrafter and T2V-CompBench, VideoRepair achieves substantial improvements\nover recent baselines across diverse alignment metrics. Comprehensive\nablations further demonstrate the efficiency, robustness, and interpretability of our\nframework."
},
"anonymous_url": null,
"authorids": {
"value": [
"~Daeun_Lee2",
"~Jaehong_Yoon1",
"~Jaemin_Cho1",
"~Mohit_Bansal2"
]
},
"authors": {
"value": [
"Daeun Lee",
"Jaehong Yoon",
"Jaemin Cho",
"Mohit Bansal"
]
},
"code_of_ethics": null,
"keywords": {
"value": [
"Video Generation",
"Multi-agent"
]
},
"no_acknowledgement_section": null,
"paperhash": {
"value": "lee|selfcorrecting_texttovideo_generation_with_misalignment_detection_and_localized_refinement"
},
"pdf": {
"value": "/pdf/92074a4083fee85665efd54a5e543a7af3d7095e.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/d49f3262bd569432cfbb01e316e81fba9e473798.zip"
},
"title": {
"value": "Self-Correcting Text-to-Video Generation with Misalignment Detection and Localized Refinement"
},
"venue": {
"value": "ICLR 2026 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Withdrawn_Submission"
}
},
"forum": "05pfP2khzx",
"id": "05pfP2khzx",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission13771/-/Full_Submission",
"ICLR.cc/2026/Conference/-/Withdrawn_Submission"
],
"license": "CC BY 4.0",
"mdate": 1762964082540,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission13771/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission13771/Authors"
]
}
|
|
2,026
|
05uq3XUJaT
|
[
2,
2,
4
] |
[
{
"content": "This paper introduces a listwise fine-tuning method for LLM-based text reranking. The method improves three limitations of existing LLM rankers (single-token compression, shallow scoring heads, and pairwise objectives).",
"id": "DvaKUEhgPp",
"rating": 2
},
{
"content": "This paper presents ListRank to address limitations in existing reranking approaches. The method includes three extra modules compared to the Qwen3-Reranker-4B backbone: (1) attention pooling, (2) a gated MLP, and (3) ListRank Loss. The model is trained on a RankGPT-refined subset of the MS MARCO passage ranking dataset. Experimental results show that ListRank achieves comparable performance on MS MARCO dev, TREC DL19, and DL20 benchmarks with a 4B model. Ablation studies confirm that each component contributes to performance.",
"id": "tqvEbUa5Yi",
"rating": 2
},
{
"content": "This paper proposes ListRank, a new framework designed for large language model (LLM)-based text retrieval and reranking tasks. The main contribution lies in addressing limitations of current LLM-based reranking approaches through three key innovations: A customized attention-based fusion of token-level representations. A multi-layer perceptron (MLP) module for enhanced feature transformation. A ListRank loss designed to model listwise ordering, thereby improving the fine-grained relevance order of candidate documents in a ranking task. The experimental results on MS MARCO and TREC datasets show that ListRank outperforms existing state-of-the-art reranking models in terms of mean reciprocal rank (MRR) and normalized discounted cumulative gain (nDCG) at 10.",
"id": "2ZQqLSLjjV",
"rating": 4
},
{
"content": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.",
"id": "ZEel6wh69o",
"rating": null
}
] |
{
"cdate": 1757411444566,
"content": {
"TLDR": {
"value": "We propose a method to improve the fine-tuning performance of text ranking models by leveraging feature fusion, incorporating customized MLP modules, and optimizing with a listwise loss."
},
"_bibtex": {
"value": "@misc{\nsong2025finetuning,\ntitle={Fine-tuning large language models for text ranking with listwise constraints},\nauthor={Jiawen Song and Bingfei Zhang and Sai Gao and Xueyao Zhang and Wenqing Xu and Guanyu Chen and Junwei Xing and Hui Li and Yunpeng Peng and Zhi Zang},\nyear={2025},\nurl={https://openreview.net/forum?id=05uq3XUJaT}\n}"
},
"abstract": {
"value": "With the rapid adoption of large language models (LLMs) across diverse applications, retrieval augmentation has become a key factor for improving downstream performance. Recent advances show that LLM-based retrieval can substantially enhance ranking quality. In this work, we present a novel LLM-based retrieval framework optimized along three complementary dimensions: (1) a customized attention-based fusion of hidden-layer representations, (2) a dedicated multi-layer perceptron (MLP) module for enriched feature transformation, and (3) a new list-wise learning objective, ListRank loss, to capture fine-grained relevance order. Experimental results demonstrate that our model achieves state-of-the-art performance. The model is publicly available for download on HuggingFace."
},
"anonymous_url": null,
"authorids": {
"value": [
"~Jiawen_Song1",
"~Bingfei_Zhang1",
"~Sai_Gao1",
"~Xueyao_Zhang2",
"~Wenqing_Xu3",
"~Guanyu_Chen14",
"~Junwei_Xing1",
"~Hui_Li58",
"~Yunpeng_Peng2",
"~Zhi_Zang1"
]
},
"authors": {
"value": [
"Jiawen Song",
"Bingfei Zhang",
"Sai Gao",
"Xueyao Zhang",
"Wenqing Xu",
"Guanyu Chen",
"Junwei Xing",
"Hui Li",
"Yunpeng Peng",
"Zhi Zang"
]
},
"code_of_ethics": null,
"keywords": {
"value": [
"Feature fusion",
"listwise",
"LLM",
"rank"
]
},
"no_acknowledgement_section": null,
"paperhash": {
"value": "song|finetuning_large_language_models_for_text_ranking_with_listwise_constraints"
},
"pdf": {
"value": "/pdf/438531bfdc6d7eff6df3c9f4faf576cb9faa1f30.pdf"
},
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Fine-tuning large language models for text ranking with listwise constraints"
},
"venue": {
"value": "ICLR 2026 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Withdrawn_Submission"
}
},
"forum": "05uq3XUJaT",
"id": "05uq3XUJaT",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Edit",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission3367/-/Full_Submission",
"ICLR.cc/2026/Conference/-/Withdrawn_Submission"
],
"license": "CC BY 4.0",
"mdate": 1763361432756,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission3367/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission3367/Authors"
]
}
|
|
2,026
|
0694m9ixnv
|
[
4,
6,
2
] |
[
{
"content": "This paper introduces Instruction Distillation, a new paradigm for improving the quality of low-quality instruction-following data. The authors propose a dataset called MIXTURE that maps multiple low-quality or redundant text inputs to a distilled high-quality target. Building on this dataset, they develop LM-Mixup, a reinforcement learning framework that fine-tunes language models using GRPO with three rewards. The method aims to transform low-quality, redundant, or noisy samples into information-dense outputs. Experimental results show that LM-Mixup outperforms SFT and several strong data selection baselines.",
"id": "H1wFP40ufY",
"rating": 4
},
{
"content": "The paper introduces a new task: instruction distillation, i.e., combining multiple low-quality instructions into a high-quality instruction. The authors then create a dataset for this task, where they trains a model with GRPO. They prove that the trained model is useful by applying it to improve the low-quality training data of other models. They observe an improvement on the performance when replacing the low-quality training data with distilled ones.",
"id": "8dqhL4443S",
"rating": 6
},
{
"content": "This paper introduces LM-mixup, a method for augmenting low-quality instruction data by distilling multiple imperfect inputs into high-quality outputs using a language model fine-tuned with reinforcement learning. The authors construct Mixture, a 144K-sample dataset, and train LM-mixup using GRPO with multi-dimensional rewards. Experiments show that training on a small mixup-augmented subset (∼3% of full data) can match or exceed full-dataset training and compete with data selection baselines on OpenLLM benchmarks.",
"id": "4xzSp8wRGS",
"rating": 2
}
] |
{
"cdate": 1758008662115,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025lmmixup,\ntitle={{LM}-mixup: Text Data Augmentation via Language Model based Mixup},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0694m9ixnv},\nnote={under review}\n}"
},
"abstract": {
"value": "Instruction tuning is crucial for aligning Large Language Models (LLMs), yet the quality of instruction-following data varies significantly. While high-quality data is paramount, it is often scarce; conversely, abundant low-quality data is frequently discarded, leading to substantial information loss. Existing data augmentation methods struggle to augment this low-quality data effectively, and the evaluation of such techniques remains poorly defined. To address this, we formally define the task of *Instruction Distillation*: distilling multiple low-quality and redundant inputs into high-quality and coherent instruction-output pairs. Specifically, we introduce a comprehensive data construction pipeline to create *MIXTURE*, a 144K-sample dataset pairing low-quality or semantically redundant imperfect instruction clusters with their high-quality distillations. We then introduce *LM-Mixup*, by first performing supervised fine-tuning on *MIXTURE* and then optimizing it with reinforcement learning. This process uses three complementary reward signals: quality, semantic alignment, and format compliance, via Group Relative Policy Optimization (GRPO). We demonstrate that *LM-Mixup* effectively augments imperfect datasets: fine-tuning LLMs on its distilled data, which accounts for only about 3% of the entire dataset, not only surpasses full-dataset training but also competes with state-of-the-art high-quality data selection methods across multiple benchmarks. Our work establishes that low-quality data is a valuable resource when properly distilled and augmented with *LM-Mixup*, significantly enhancing the efficiency and performance of instruction-tuned LLMs."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Instruction distillation",
"LM mixup"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/063db25688cafc17b63b0a73cc99a225f64ae83e.pdf"
},
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "LM-mixup: Text Data Augmentation via Language Model based Mixup"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0694m9ixnv",
"id": "0694m9ixnv",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission7123/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897871663,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission7123/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission7123/Authors"
]
}
|
|
2,026
|
06I7jcrkW2
|
[
6,
6,
4,
8
] |
[
{
"content": "This paper tackles the important and challenging problem of accelerating Real-Time TDDFT (RT-TDDFT) computations using deep learning. \nSpecifically, it adopts an autoregressive framework to accelerate the propagations of RT-TDDFT, where the wavefunctions of previous steps are input into the network for the prediction of the next steps' wavefunctions. The paper proposes two model architectures (OrbEvo-FullWF and OrbEvo-DM) with different electronic state interacting strategies and compares their performance on their self-generated TDDFT dataset.\n\nI think the paper is in a good shape, with nontrivial contributions for a novel application (RT-TDDFT) and specifically designed models (OrbEvo-FullWF and OrbEvo-DM). Nonetheless, there exist several concerns, which should be addressed before acceptance.",
"id": "nfuK4RDRJ7",
"rating": 6
},
{
"content": "This paper proposes *Orbital Transformers*, an equivariant graph Transformer designed to directly predict the *time evolution of Kohn–Sham wavefunctions* in real-time time-dependent density functional theory (RT-TDDFT). Unlike prior approaches that predict energies, Hamiltonians, or spectral observables, this model learns the mapping $C(t) \\to C(t+\\Delta t)$ (or $\\Delta C_t$) directly, effectively learning the quantum propagation operator. The authors introduce an SO(2)-equivariant attention mechanism that takes the external electric field direction as the reference axis, and use FiLM-style conditioning to inject both the field’s direction and time-dependent amplitude. A local autoregressive temporal modeling scheme, along with pushforward training, enables the model to track the dynamic evolution of the system stably over several femtoseconds. Experiments on RT-TDDFT trajectories of QM9 and MD17 molecules under external fields show that the model accurately reproduces dipole dynamics and orbital evolution.",
"id": "bgy26jEHMo",
"rating": 6
},
{
"content": "The paper proposed a new model and method that learns the time-dependent DFT's properties, and has shown that the new proposed method, combined with a serious method improvement, can predict nicely the properties from TDDFT.",
"id": "0HKETKMwSW",
"rating": 4
},
{
"content": "This paper introduces OrbEvo, an equivariant graph transformer framework for learning the time evolution of Kohn–Sham wavefunctions in real-time time-dependent density functional theory (RT-TDDFT). Unlike prior works such as OrbFormer, which focus on static ground-state properties, OrbEvo aims to learn the dynamics of electronic states under external electric fields.\nThe authors propose two model variants: OrbEvo-FullWF, which aggregates wavefunction features through pooling across occupied states, and OrbEvo-DM, which computes density-matrix-based interactions between states via tensor contraction. The model employs SO(2)-equivariant conditioning to represent field-induced symmetry breaking and a pushforward training scheme to stabilize long-horizon rollout. Experiments on QM9 and MD17 demonstrate that OrbEvo-DM outperforms the pooling-based variant, capturing physically consistent time-dependent dipole moments and absorption spectra.",
"id": "BXaxWyNQ81",
"rating": 8
}
] |
{
"cdate": 1758291547393,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025orbital,\ntitle={Orbital Transformers for Predicting Wavefunctions in Time-Dependent Density Functional Theory},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=06I7jcrkW2},\nnote={under review}\n}"
},
"abstract": {
"value": "We aim to learn wavefunctions simulated by time-dependent density functional theory (TDDFT), which can be efficiently represented as linear combination coefficients of atomic orbitals. In real-time TDDFT, the electronic wavefunctions of a molecule evolve over time in response to an external excitation, enabling first-principles predictions of physical properties such as optical absorption, electron dynamics, and high-order response. However, conventional real-time TDDFT relies on time-consuming propagation of all occupied states with fine time steps. In this work, we propose OrbEvo, which is based on an equivariant graph transformer architecture and learns to evolve the full electronic wavefunction coefficients across time steps. First, to account for external field, we design an equivariant conditioning to encode both strength and direction of external electric field and break the symmetry from SO(3) to SO(2). Furthermore, we design two OrbEvo models, OrbEvo-FullWF and OrbEvo-DM, using wavefunction pooling and density matrix as interaction method, respectively. Motivated by the central role of the density functional in TDDFT, OrbEvo-DM encodes the density matrix aggregated from all occupied electronic states into feature vectors via tensor contraction, providing a more intuitive approach to learn the time evolution operator. We adopt a training strategy specifically tailored to limit the error accumulation of time-dependent wavefunctions over autoregressive rollout. To evaluate our approach, we generate TDDFT datasets consisting of 5,000 different molecules in the QM9 dataset and 1,500 molecular configurations of the malonaldehyde molecule in the MD17 dataset. Results show that our OrbEvo model accurately captures quantum dynamics of excited states under external field, including time-dependent wavefunctions, time-dependent dipole moment, and optical absorption spectra characterized by dipole oscillator strength. 
It also shows strong generalization capability on the diverse molecules in the QM9 dataset."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Machine learning density functional theory",
"Time dependent neural PDE solver"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b9b9470edaaf38e546adf996fb79f0e4341c771e.pdf"
},
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Orbital Transformers for Predicting Wavefunctions in Time-Dependent Density Functional Theory"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "06I7jcrkW2",
"id": "06I7jcrkW2",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission18854/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897077611,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission18854/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission18854/Authors"
]
}
|
|
2,026
|
06bDxmgdE0
|
[
4,
2,
4
] |
[
{
"content": "This paper introduces a novel and highly significant large-scale, multitask benchmark for evaluating speech understanding capabilities across 11 Southeast Asian (SEA) languages. This work directly addresses the critical lack of non-English evaluation frameworks, as current benchmarks are heavily English-centric, leaving a significant linguistic region severely underrepresented.\n\nThe benchmark comprises over 97,000 samples and 597 hours of curated audio data, covering 9 distinct tasks categorized into Speech Processing (e.g., ASR, ST), Paralinguistic Analysis (e.g., Age Recognition), and Temporal Understanding (e.g., SQA, TLoc). The paper provides a crucial evaluation of leading open-source and proprietary audio-language models, revealing a massive performance gap compared to English-centric performance.",
"id": "njfEXvEnQj",
"rating": 4
},
{
"content": "The paper proposes SEASpeechBench, a benchmark for audio LLMs on Southeast Asian speech that covers 9 tasks, categorized as \"speech processing\" (content-related), paralinguistic, and temporal understanding. Overall results show that existing models have poor performance, especially on low-resource languages and temporal reasoning, emotion recognition, and speech translation tasks. Failure modes of existing models are also showcased, such as tending to over-produce content for temporal understanding, and sensitivity to prompts in different languages.",
"id": "tin7jQrfjb",
"rating": 2
},
{
"content": "The paper presents SEA-SpeechBench, a new benchmark for speech understanding tasks in 11 Southeast Asian languages. The benchmark tests the abilities of speech language models (SLMs) across 3 axes: speech processing, paralinguistics, and temporal understanding. The authors evaluated several open-source and proprietary SLMs on the benchmark, showing that models struggle to perform on these languages, despite recent improvements in English. The authors find that prompting in low-resource languages instead of English degrades performance, showing that model capabilities do not necessarily reflect real-life use cases.",
"id": "Gh08P0fRf4",
"rating": 4
}
] |
{
"cdate": 1758092746264,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025seaspeechbench,\ntitle={{SEA}-SpeechBench: A Large-Scale Multitask Benchmark for Speech Understanding Across Southeast Asia},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=06bDxmgdE0},\nnote={under review}\n}"
},
"abstract": {
"value": "The rapid advancement of audio and multimodal large language models has unlocked transformative speech understanding capabilities, yet evaluation frameworks remain predominantly English-centric, leaving Southeast Asian (SEA) languages critically underrepresented. We introduce SEA-SpeechBench, the first large-scale multitask benchmark that evaluates speech understanding in 11 SEA languages through more than 97,000 samples and 597 hours of curated audio data. Our benchmark comprises 9 diverse tasks across 3 categories: speech processing (automatic speech recognition, speech translation, spoken question answering), paralinguistic analysis (emotion, gender, age, speaker recognition), and temporal understanding, a novel dimension featuring timestamped content queries and temporal localization within extended audio sequences up to 3 minutes. We implement multilingual prompting in both native SEA languages and English to reflect user interactions with audio-language models. \nEvaluation of leading open-source and proprietary systems reveals marked performance gaps. Across all models, performance remains underwhelming on temporal reasoning, emotion recognition, and speech translation, with most scores falling below 20. Prompting in low-resource languages such as Burmese, Lao, Tamil, and Khmer lag behind English by over 5%.\nOur findings expose critical model limitations and underscore the need for inclusive model development."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Southeast Asian Languages",
"Multilingual Speech Benchmark",
"Audio–language Models"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5c41f3c5d166884396fa71337e5d8815b0a06417.pdf"
},
"primary_area": {
"value": "datasets and benchmarks"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/3b437ba45f2f0be6c4610c6c40d04907323b6921.pdf"
},
"title": {
"value": "SEA-SpeechBench: A Large-Scale Multitask Benchmark for Speech Understanding Across Southeast Asia"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "06bDxmgdE0",
"id": "06bDxmgdE0",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission8619/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897772905,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission8619/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission8619/Authors"
]
}
|
|
2,026
|
06upBSlAUy
|
[
4,
6,
4
] |
[
{
"content": "The paper proposes Stabilized and Improved Preference Optimization (SIPO), a framework designed to address two fundamental challenges in applying Direct Preference Optimization (DPO) to diffusion models: training instability and off-policy bias. The authors first conduct a systematic analysis of the diffusion process, identifying that training instability is primarily caused by high gradient variances originating from early time steps that have low importance weights.",
"id": "PrDKaAiwDb",
"rating": 4
},
{
"content": "This paper addresses the instability observed in applying Direct Preference Optimization (DPO) to diffusion models. The authors propose two complementary improvements:\n(1) DPO-C&M, which introduces timestep-dependent importance masking and gradient clipping to mitigate gradient explosion and overemphasis on uninformative steps; and\n(2) SIPO, which modifies the DPO objective by applying clipped importance reweighting to the log-likelihood ratio term and reformulates the loss as KL minimization toward a reward-shaped target distribution. By skipping early diffusion steps and leveraging importance sampling, SIPO aims to improve training stability and convergence behavior.",
"id": "5FpsCgkElj",
"rating": 6
},
{
"content": "The paper proposes SIPO (Stabilized and Improved Preference Optimization) for aligning diffusion models with human (or AI) preferences. The key ideas are: (1) identify that early timesteps in diffusion contribute high-variance, low-importance gradients; (2) introduce DPO-C&M (clipping & masking) using timestep-wise importance weights to stabilize training; and (3) further correct off-policy bias via importance-weighted DPO with clipped, timestep-aware weights. Experiments on SD1.5/SDXL for T2I and CogVideoX/Wan for T2V claim improved stability and accuracy over Diffusion-DPO and other baselines, with lower sensitivity to β and better human evals.",
"id": "oFK61ZRYdB",
"rating": 4
}
] |
{
"cdate": 1758341139376,
"content": {
"TLDR": {
"value": "We propose a stabilized and improved preference optimization framework for aligning diffusion generative models with human perferences."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025sipo,\ntitle={{SIPO}: Stabilized and Improved Preference Optimization for Aligning Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=06upBSlAUy},\nnote={under review}\n}"
},
"abstract": {
"value": "Preference learning has garnered extensive attention as an effective technique for aligning diffusion models with human preferences in visual generation tasks. However, existing alignment approaches such as Diffusion-DPO suffer from two fundamental challenges: training instability caused by high gradient variances at various timesteps and high parameter sensitivities, and off-policy bias arising from the discrepancy between the optimization data and the policy model's distribution. Our first contribution is a systematical analysis of the diffusion trajectories across different timesteps and identify that the instability primarily originates from early timesteps with low importance weights. To address these issues, we propose SIPO, a Stabilized and Improved preference Optimization framework for aligning diffusion models with human preferences. Concretely, a key gradient, \\emph{i.e.,} DPO-C\\&M is introduced to facilitate stabilize training by clipping and masking uninformative timesteps. Followed by a timestep aware importance re-weighting paradigm to fully correct off-policy bias and emphasize informative updates throughout the alignment process. Extensive experiments on various baseline models, including image generation models on SD1.5, SDXL, and video generation models CogVideoX-2B, CogVideoX-5B, and Wan2.1-1.3B, demonstrate that our SIPO consistently promotes stabilized training and outperforms existing alignment methods, with meticulous adjustments on parameters.\nOverall, these results highlight the importance of timestep-aware alignment and and provide valuable guidelines for improved preference optimization in diffusion models."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Diffusion",
"DPO",
"image generate",
"video generate"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/38259828575fb003f42adcd3a998c716c9e1533f.pdf"
},
"primary_area": {
"value": "generative models"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/0e37b8ed7b6e3832740d3b2b86fda314d0c201d4.zip"
},
"title": {
"value": "SIPO: Stabilized and Improved Preference Optimization for Aligning Diffusion Models"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "06upBSlAUy",
"id": "06upBSlAUy",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission23238/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896824854,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission23238/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission23238/Authors"
]
}
|
|
2,026
|
072P11r1wu
|
[
0,
10,
2,
2
] |
[
{
"content": "Benign and harmful overfitting have been extensively studied in the past few years in many settings and models. More recently, there has been interest in analyzing benign overfitting in simple transformers. This work aims to extend the previous works on benign/harmful overfitting in transformers by providing an analysis for a more realistic setting. They also provide experiments to support their theory.",
"id": "c3ShQghMW2",
"rating": 0
},
{
"content": "This is a theoretical work on (small) transformers and benign/harmful overfitting.\nThe authors analyze the generalization error and find the behavior to be grouped into three stages of training.\nThey also back up their theory with experiments that cohere with the theoretical prediction.\nThe theory is for data that are length-2 sequences where the tokens are drawn from a gaussian mixture model with orthogonal centroids. The first token is the signal and the second token is purely noise.",
"id": "56Wmngwvp5",
"rating": 10
},
{
"content": "The paper presents a theoretical study of a two-layer nonlinear transformer under label-flip noise, provides stage-wise error bounds for two regimes (benign vs harmful overfitting), and supports claims with synthetic experiments that visualize the three training phases.",
"id": "oINof9pl7i",
"rating": 2
},
{
"content": "The paper analyzes a two‑layer transformer with softmax self‑attention trained by gradient descent on logistic loss and evaluated with 0–1 test error under label‑flipping noise (flip rate α). The authors give stage‑wise (Phase 1/2/3) error bounds for benign and harmful overfitting and propose a critical condition.",
"id": "tzi8C9X57W",
"rating": 2
}
] |
{
"cdate": 1758292502092,
"content": {
"TLDR": {
"value": "We present generalization bounds for a two-layer Transformer under benign overfitting and harmful overfitting."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025understanding,\ntitle={Understanding Generalization in Transformers: Error Bounds and Training Dynamics Under Benign and Harmful Overfitting},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=072P11r1wu},\nnote={under review}\n}"
},
"abstract": {
"value": "Transformers serve as the foundational architecture for many successful large-scale models, demonstrating the ability to overfit the training data while maintaining strong generalization on unseen data, a phenomenon known as benign overfitting. However, existing research has not sufficiently explored generalization and training dynamics of transformers under benign overfitting. This paper addresses this gap by analyzing a two-layer transformer's training dynamics, convergence, and generalization under labeled noise. Specifically, we present generalization error bounds for benign and harmful overfitting under varying signal-to-noise ratios (SNR), where the training dynamics are categorized into three distinct stages, each with its corresponding error bounds. Additionally, we conduct extensive experiments to identify key factors in transformers that influence test losses. Our experimental results align closely with the theoretical predictions, validating our findings."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Transformer",
"Benign overfiting",
"Feature learning theory",
"Generalization error bounds",
"Signal-to-noise ratio"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/231fabf71a2994eee63b8ef73378fafe7e9c94c6.pdf"
},
"primary_area": {
"value": "learning theory"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/005086072521ed959ef9357196c2aea60a6faaab.zip"
},
"title": {
"value": "Understanding Generalization in Transformers: Error Bounds and Training Dynamics Under Benign and Harmful Overfitting"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "072P11r1wu",
"id": "072P11r1wu",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission18976/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897069595,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission18976/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission18976/Authors"
]
}
|
|
2,026
|
073WQjmWKU
|
[
6,
6,
4,
6,
8,
4
] |
[
{
"content": "This paper presents COMPACT, a data-efficient visual instruction tuning (VIT) framework that synthesizes training samples with controlled compositional complexity. The authors introduce the k-value, representing the number of atomic visual capabilities (e.g., object recognition, spatial reasoning) required to answer a question. By generating high-k samples using Gemini-2.0-Flash and combining them with a small subset of LLaVA-665K for instruction formatting, COMPACT attains 100.2% of the full-dataset performance using only 10% of the data. It substantially outperforms baselines on complex benchmarks such as MM-Vet (+8.6%) and MMStar (+2.9%).\n\nKey contributions:\n1. A complexity-aware VIT data recipe leveraging atomic capability composition.\n2. Empirical evidence that higher-k samples enhance data efficiency.\n3. A scalable synthetic data generation framework that reduces dependence on large-scale datasets.",
"id": "fGxZ6BdeHW",
"rating": 6
},
{
"content": "This paper focuses on the curation of informative training data to enhance MLLMs’ finetuning efficiency. It introduces COMPACT, a novel data synthesis approach that generates rich and informative text questions for each image by integrating multiple atomic visual capabilities into a single training sample. Experimental results across various benchmarks demonstrate that COMPACT significantly reduce the required number of training examples while achieving performance comparable to that of full-scale training data, highlighting its efficiency.",
"id": "sxCltr3Oqs",
"rating": 6
},
{
"content": "The main motivation is to compress multiple capabilities into a smaller number of data samples to increase sample efficiency, doing more with less data sets that compose multiple atomic capabilities into one.",
"id": "UZYbb3OMTj",
"rating": 4
},
{
"content": "This paper presents COMPACT (COMPositional Atomic-to-Complex Visual Capability Tuning), a new data recipe for visual instruction tuning (VIT) in multimodal large language models (MLLMs). COMPACT introduces the idea of compositional complexity, where each training sample is constructed by combining multiple atomic visual capabilities (e.g., object recognition, spatial reasoning, color, shape). By controlling the number of combined capabilities (“k-value”), the method generates more information-dense and complex questions using Gemini-2.0-Flash, leading to significant data efficiency gains. With only 10% of LLaVA-665K data, COMPACT achieves 100.2% of the full-scale performance across benchmarks such as MM-Vet and MMStar, highlighting the benefit of complexity-aware data curation for MLLMs.",
"id": "uH5FjlFHQb",
"rating": 6
},
{
"content": "COMPACT (COMPositional Atomic-to-Complex Visual Capability Tuning) introduces a method for generating complex, information-dense Visual Instruction Tuning (VIT) datasets by combining multiple atomic visual capabilities (e.g., color, spatial reasoning, object recognition) into single training examples. This complexity-aware curation improves data efficiency -- achieving 100.2% of full LLaVA-665K performance using only 10% of the data, with notable gains on complex multimodal benchmarks like MM-Vet and MMStar.",
"id": "W05YoSCMau",
"rating": 8
},
{
"content": "This paper proposes COMPACT, where images are from LLaVA-665K, complex instructions are generated by Gemini, to improve data efficiency in multimodal instruction tuning by generating questions that require combinations of atomic visual capabilities. The task complexity is operationalized by the number of atomic capabilities involved (k-value). \nThe paper demonstrates that increasing task complexity leads to better use of visual information and yields impressive performance. Experiment results shows that with only 10% of the LLaVA-665K data, COMPACT matches or exceeds the full dataset’s performance across a variety of multimodal benchmarks. \n\nThis paper presents a practically useful and empirically strong method. The idea of compositional capability tuning is promising and clearly validated by experiments. \nHowever, conceptual and theoretical foundations remain unclear. Several core concepts, such as task complexity, informativeness, information density, and k-value, are used interchangeably without rigorous justification. The mapping from \"number of atomic capabilities\" to \"actual complexity\" is assumed rather than demonstrated. In addition, the capability definitions are hand-picked without theoretical or empirical grounding. The analysis section provides statistics but not mechanism-level explanations. As a result, important questions remain unanswered: Why does COMPACT help? Which capabilities benefit? Why does improvement transfer to tasks outside the covered perceptual abilities? \n\nIf supplemented with theoretical evidence, more in-depth analysis and argumentation, this work has the potential to become a very influential contribution, and I will raise your rating.",
"id": "PmMUJS2uNP",
"rating": 4
}
] |
{
"cdate": 1757812256580,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025compact,\ntitle={{COMPACT}: {COMP}ositional Atomic-to-Complex Visual Capability Tuning},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=073WQjmWKU},\nnote={under review}\n}"
},
"abstract": {
"value": "Visual instruction tuning (VIT) datasets consist of randomly sampled image-question pairs without regard to the informativeness of each pair. Recent dataset selection methods have shown that a small fraction of such datasets enriched with informative samples can lead to efficient finetuning of Multimodal Large Language Models. In this work, we explore the impact of task complexity on informative data curation and introduce COMPACT (COMPositional Atomic-to-complex Visual Capability Tuning), a VIT data recipe that scales training sample complexity by combining multiple atomic visual capabilities in a single training example. Concretely, we synthesize rich and informative text questions for each image, allowing us to significantly reduce the number of training examples required for effective visual instruction tuning. COMPACT demonstrates superior data efficiency compared to existing data reduction methods. When applied to the LLaVA-665K VIT dataset, COMPACT reduces the data budget by 90% while still achieving 100.2% of the full VIT performance (compared to only 97.5% by the state-of-the-art method) across eight multimodal benchmarks. Further, training on the same COMPACT data even improves performance compared to training with full-scale data on particularly complex benchmarks such as MM-Vet (+8.6%) and MMStar (+2.9%). COMPACT offers a scalable and efficient synthetic data generation recipe to improve on visual language tasks."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Complexity",
"Compositionality",
"Visual instruction tuning"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/12d65dd15508e5a6a60d5da82927f4e8a13e8f75.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "COMPACT: COMPositional Atomic-to-Complex Visual Capability Tuning"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "073WQjmWKU",
"id": "073WQjmWKU",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission4933/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898004339,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission4933/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission4933/Authors"
]
}
|
|
2,026
|
075TvkpZEK
|
[
2,
4,
8,
2
] |
[
{
"content": "This paper proposes an optimization algorithm SMARAN for deep learning. SMARAN has two main characteristics, the first is that it normalizes the gradient before updating the first-order momentum, and the second is that it adopts the objective function value to update the second-order momentum. Then this work provides the analysis of the regret bound for SMARAN based on common assumptions. Finally, SMARAN is compared with several classical or adaptive optimizers in the experiments of CV tasks. SMARAN achieves great test accuracies on CIFAR datasets, and obviously a low generalization gap on Tiny-Imagenet.\n\nThe main contribution of this work is that it adopts the function value in the adaptive learning rate, which could reduce the memory cost of optimizer states.",
"id": "Z6KDCNGnvE",
"rating": 2
},
{
"content": "The paper proposes SMARAN, a novel optimization method for deep learning that adjusts the learning rate based on the model's performance (i.e., the objective function value) rather than the gradient's curvature, aiming to close the generalization gap often seen in adaptive optimizers. Unlike Adam, which uses exponential moving averages (EMAs) of gradients, SMARAN uses the EMA of past loss values to scale the learning rate and incorporates a form of adaptive weight decay to prevent overfitting. Experiments on vision benchmarks (CIFAR, Tiny ImageNet) show that SMARAN achieves better generalization, with lower test loss and smaller generalization gaps, compared to state-of-the-art optimizers like Adam, AdamW, and SGD with momentum.",
"id": "XAtfGekxmD",
"rating": 4
},
{
"content": "This paper introduces a novel optimization method which the authors term as SMARAN, that aims to bridge the generalization gap often seen with adaptive optimizers and improve memory efficiency. SMARAN uniquely adjusts its learning rate based on the model's performance (loss values), utilizing exponential moving average (EMA) of normalized gradients to determine the update direction and an EMA of squared loss values to dynamically scale the learning rate.\n\nThe authors argue that such a strategy based on loss performance allows for cautious learning with high losses and accelerated convergence in the regime with low and decreasing losses, thus preventing stagnation in flat regions. \n\nSMARAN also integrates an adaptive weight decay regularization, whose strength is tied to the performance-based learning rate, to address overfitting. Theoretical analysis provided along with extensive experiments on image classification problems using multiple model architecture types.",
"id": "VD5MgrCKC7",
"rating": 8
},
{
"content": "The paper proposes an optimizer that adaptively changes its effective learning rate based on the an EMA of losses, in addition to using a EMA on the gradient history. Modulation of the learning rate as a function of the loss, is their main contribution. They provide a regret analysis in the online convex optimization setting. Empirically, across standard vision benchmarks they report competitive convergence with stronger generalization than Adam-style methods and SGD, while avoiding per-parameter second-moment storage.",
"id": "pICgwEUkNO",
"rating": 2
}
] |
{
"cdate": 1758257347389,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025smaran,\ntitle={{SMARAN}: Closing the Generalization Gap with Performance Driven Optimization Method},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=075TvkpZEK},\nnote={under review}\n}"
},
"abstract": {
"value": "Optimization methods have evolved significantly by introducing various learning rate scheduling techniques and adaptive learning strategies. Although these methods have achieved faster convergence, they often struggle to generalize well to unseen data compared to traditional approaches such as Stochastic Gradient Descent (SGD) with momentum. Adaptive methods such as Adam store each parameter's first and second moments of gradients, which can be memory-intensive. To address these challenges, we propose a novel SMARAN optimization method that adjusts the learning rate based on the model's performance rather than the objective function's curvature. This approach is particularly effective for minimizing stochastic loss functions, standard in deep learning models. Traditional gradient-based methods may get stuck in regions where the gradient vanishes, such as plateaus or local minima. Therefore, instead of only depending on the gradient, we use the model's performance to estimate the appropriate step size. We performed extensive experiments on standard vision benchmarks, and the generalization trends observed with SMARAN demonstrate compelling distinctions relative to adaptive and non-adaptive optimizers."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Optimization",
"Gradient decent",
"Learning rate scheduler",
"Regularization"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/51bea55276c3d93f4d63af8d07abe5cf281cc86d.pdf"
},
"primary_area": {
"value": "optimization"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "SMARAN: Closing the Generalization Gap with Performance Driven Optimization Method"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "075TvkpZEK",
"id": "075TvkpZEK",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission15935/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897272046,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission15935/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission15935/Authors"
]
}
|
|
2,026
|
07R3pHnBqc
|
[
2,
0,
4,
2
] |
[
{
"content": "This paper proposes Instruction Agent, a training-free GUI automation framework that uses a single expert demonstration, aiming to execute long-horizon and complex tasks. The instructor module ensures the agent follows the instruction, while the verifier and backtracker aim to ensure robustness. The approach is evaluated on OSWorld tasks that existing state-of-the-art GUI agents fail to complete.",
"id": "0ACuwsN9MM",
"rating": 2
},
{
"content": "The paper introduces \"Instruction Agent,\" a training-free framework designed to improve the robustness of GUI agents on complex, long-horizon, or personalized tasks. The core idea is to leverage a single, test-time expert demonstration. This demonstration (a sequence of actions and screenshots) is fed into an \"Instructor\" module, which uses an LLM (GPT-4O) to generate step-by-step natural language instructions. An \"Actor\" module then attempts to execute this plan. The Actor is composed of a grounding model (UI-Tars 1.5), an executor (GPT-4O), a \"Verifier\" (GPT-4O) to confirm if a step was successful by comparing screenshots, and a \"Backtracker\" (GPT-4O) to attempt recovery from failed steps. The authors evaluate this system on a curated set of 20 tasks from the OSWorld benchmark on which three top-ranked open-source agents fail, reporting a 60% success rate for their method.",
"id": "vAJkC35jao",
"rating": 0
},
{
"content": "The paper introduces the **Instruction Agent**, a GUI agent designed to automate complex tasks by leveraging expert demonstrations. Unlike current agents that struggle with novel UI elements, long-horizon actions, and interruptions, the Instruction Agent extracts step-by-step instructions from a single demonstration and strictly follows the user's intended trajectory. It uses **verifier** and **backtracker** modules to handle unexpected interruptions and assess outcomes during execution. Experimental results in the OSWorld environment show that the Instruction Agent achieves a 60% success rate on tasks that other top-ranked agents fail to complete, providing a reliable and practical solution for real-world GUI task automation.",
"id": "QOLNzivG3m",
"rating": 4
},
{
"content": "The paper introduces a GUI agent framework. It consists of an Instructor that extracts stepwise instructions from human demonstrations, and an Actor that executes the task strictly following those instructions. To enhance robustness, the system further integrates a Verifier (which checks success) and a Backtracker (which reverts and retries upon failure). On a set of 20 OSWorld tasks where all sota open-source agents fail, the proposed approach achieves a 60% success rate, with ablation studies demonstrating the contributions of the Verifier and Backtracker.",
"id": "xyFT4ZLJ65",
"rating": 2
},
{
"content": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.",
"id": "mha7sXQUur",
"rating": null
}
] |
{
"cdate": 1757981401623,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@misc{\nli2025instruction,\ntitle={Instruction Agent: Enhancing Agent with Expert Demonstration},\nauthor={Yinheng Li and Hailey Hultquist and Justin Wagle and Kazuhito Koishida},\nyear={2025},\nurl={https://openreview.net/forum?id=07R3pHnBqc}\n}"
},
"abstract": {
"value": "Graphical user interface (GUI) agents have advanced rapidly but still struggle with complex tasks involving novel UI elements, long-horizon actions, and personalized trajectories. In this work, we introduce Instruction Agent, a GUI agent that leverages expert demonstrations to solve such tasks, enabling completion of otherwise difficult workflows. Given a single demonstration, the agent extracts step-by-step instructions and executes them by strictly following the trajectory intended by the user, which avoids making mistakes during execution. The agent leverages the verifier and backtracker modules further to improve robustness. Both modules are critical to understand the current outcome from each action and handle unexpected interruptions(such as pop-up windows) during execution. Our experiments show that Instruction Agent achieves a 60% success rate on a set of tasks in OSWorld that all top-ranked agents failed to complete. The Instruction Agent offers a practical and extensible framework, bridging the gap between current GUI agents and reliable real-world GUI task automation."
},
"anonymous_url": null,
"authorids": {
"value": [
"~Yinheng_Li2",
"~Hailey_Hultquist1",
"~Justin_Wagle1",
"~Kazuhito_Koishida1"
]
},
"authors": {
"value": [
"Yinheng Li",
"Hailey Hultquist",
"Justin Wagle",
"Kazuhito Koishida"
]
},
"code_of_ethics": null,
"keywords": {
"value": [
"GUI Agents",
"Expert Demonstrations",
"Human in the Loop",
"Test-Time Automation",
"Backtracking"
]
},
"no_acknowledgement_section": null,
"paperhash": {
"value": "li|instruction_agent_enhancing_agent_with_expert_demonstration"
},
"pdf": {
"value": "/pdf/fdf65d865fac30aa8de77cb5dab6e301148a2621.pdf"
},
"primary_area": {
"value": "applications to robotics, autonomy, planning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Instruction Agent: Enhancing Agent with Expert Demonstration"
},
"venue": {
"value": "ICLR 2026 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Withdrawn_Submission"
}
},
"forum": "07R3pHnBqc",
"id": "07R3pHnBqc",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission6408/-/Full_Submission",
"ICLR.cc/2026/Conference/-/Withdrawn_Submission"
],
"license": "CC BY 4.0",
"mdate": 1762926589336,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission6408/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission6408/Authors"
]
}
|
|
2,026
|
07S1CPoQYP
|
[
6,
2,
2,
2
] |
[
{
"content": "This paper investigates how fMRI recordings can be used to fine-tune large language models (LLMs) toward human brain activity. The authors propose a dual-objective framework combining standard language modeling with brain alignment, leveraging over 50 hours of naturalistic movie-watching fMRI data. Experiments on GPT-2 and LLaMA-2 show consistent improvements in voxel-wise encoding, cross-subject generalization, and downstream visually grounded commonsense tasks. The study suggests that neural supervision can inject multimodal, human-like structure into text-only LMs.",
"id": "SSGuXRW8lJ",
"rating": 6
},
{
"content": "The paper explores whether functional MRI (fMRI) recordings of human brain activity can serve as a supervisory signal to train large language models (LLMs) toward more human-like, multimodal representations. Building on over 50 hours of fMRI data from participants watching Friends and 10 hours of additional movie data, the authors fine-tune GPT-2 (124M) and LLaMA-2 (7B) using Low-Rank Adaptation (LoRA) within a dual-objective framework that balances standard language modeling loss with a brain alignment loss. Through systematic experiments, the authors show that brain-informed fine-tuning improves voxel-level encoding accuracy across auditory, temporal, and frontal cortical regions, scaling with both model size and training duration, and generalizes across participants and unseen movie stimuli.",
"id": "i9j9yvi4Or",
"rating": 2
},
{
"content": "This paper applies LLMs to predict fMRI voxel responses from text stimuli. The authors compare three training approaches: frozen LM, LoRA finetuning, and full finetuning across different LLMs (GPT-2, Llama, Mistral). They test cross-subject and cross-movie generalization on a movie-watching dataset.\n\nWhile the results demonstrate that full finetuning with more data yields the best performance, the paper's contribution remains unclear. The architectural variants (LoRA/HRF) lack novelty, and the reported correlations are substantially lower than other LLM-based voxel prediction methods. The work would benefit from clearer positioning and substantial reframing to establish its distinct contribution to the field.",
"id": "WiyhUdoqij",
"rating": 2
},
{
"content": "This paper explores how fMRI signals can be used not only as evaluation data / predictive objective to measure alignment with LLMs but also as supervisory signals to fine-tune them. The authors explore the potential for guiding LLM training with brain data. Authors test several strategies: 1. LoRA-based fine-tuning of pre-trained LLMs, 2. training LLMs from scratch using brain data, and (3) joint optimisation combining language and brain-alignment losses.\n\nThey report improvements in voxel-level encoding performance for brain-informed fine-tuning over brain-only models. They also highlight potential knowledge enhancement on visually grounded language benchmarks, suggesting that fine-tuning on fMRI data injects perceptual and associative priors that text-only training lacks, against text-only models.",
"id": "ICOJ7ZEkbe",
"rating": 2
}
] |
{
"cdate": 1758149730341,
"content": {
"TLDR": {
"value": "We show that brain-informed training of language models, using dual objectives and scaling across data, models, and subjects, yields robust and generalizable alignment with human brain activity beyond baselines."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025braininformed,\ntitle={Brain-Informed Language Model Training Enables Scalable and Generalizable Alignment with Human Brain Activity},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=07S1CPoQYP},\nnote={under review}\n}"
},
"abstract": {
"value": "Language models (LMs) provide rich representational spaces that partially align with neural activity during naturalistic experiences such as movie watching. Yet leveraging brain recordings to actively guide LM training remains underexplored. Here, we address this question by investigating whether functional MRI recordings can guide LLM training by aligning language representations with brain dynamics. Using over 50 hours of fMRI data from six participants watching Friends, plus 10 hours of held-out movies, we augmented pre-trained and randomly initialized LMs with a brain alignment module and compared multiple training strategies. Our results show three main findings. First, brain-informed fine-tuning consistently outperforms text-only baselines and brain-from-scratch models, with voxel-level gains that scale with both model size (GPT-2 124M, LLaMA-2 7B) and training duration (1–40 hours). These improvements generalize across participants and out-of-sample movies, yielding robust cross-subject and cross-stimulus encoding. Second, a dual-objective loss that balances language modeling with brain alignment surpasses brain-only optimization, producing more stable and generalizable encoders. Finally, brain supervision enriches LM representations with multisensory inductive biases: brain-fine-tuned models outperform unimodal baselines on VL-Commonsense, better capturing perceptual and associative properties (e.g., color, shape, co-occurrence) that text-only training underrepresents. Together, these results establish cortical dynamics as an effective supervisory signal, enabling scalable, generalizable, and brain-aligned LMs that internalize aspects of human-like multimodal representation."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Language models",
"fMRI",
"Encoding models",
"naturalistic stimulus",
"Representation Learning",
"Multimodal Learning",
"Low-Rank Adaptation (LoRA)",
"Transfer Learning",
"Neuroscience-Informed AI",
"Scalability",
"Generalizability"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/88e12edfd2a5e85bc41b8134b798939c093b5c2d.pdf"
},
"primary_area": {
"value": "applications to neuroscience & cognitive science"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Brain-Informed Language Model Training Enables Scalable and Generalizable Alignment with Human Brain Activity"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "07S1CPoQYP",
"id": "07S1CPoQYP",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission9930/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897684766,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission9930/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission9930/Authors"
]
}
|
|
2,026
|
07o2iouN1Y
|
[
2,
4,
6,
2
] |
[
{
"content": "The paper solves Nash equilibrium in two-player zero-sum extensive-form games by adding additional regularization. By switching the reference strategy periodically, the algorithm converges to the NE of the original game rather than the regularized game. The paper proves convergence theoretically and empirically evaluates its performance.",
"id": "vaEoZmq6PL",
"rating": 2
},
{
"content": "This paper introduces Nash Policy Gradient (NashPG), a novel algorithm for finding Nash equilibria in two-player zero-sum games. The authors fix regularization at a large value for stability and achieve convergence through iterative refinement of a reference policy. Theoretically, the authors showed that NashPG demonstrates monotonic improvement and convergence to the Nash equilibrium. Experiments on benchmark and imperfect-information games show that NashPG not only exhibit faster convergence, but also achieves lower exploitability and higher Elo ratings than population-based and CFR-inspired baselines.",
"id": "MzkKg7PzXH",
"rating": 4
},
{
"content": "This paper examines regularization-based equilibrium finding in two-player zero-sum games. The paper proves that iteratively replacing the reference policy in Magnetic Mirror Descent (MMD) with the fixed-point policy of the last learning round guarantees monotonic improvement and last-iterate convergence towards Nash equilibrium (NE). Based on this finding, the paper proposes a reinforcement learning algorithm called Nash Policy Gradient (NashPG) that iteratively reduces the exploitability of the learned policy for practically solving large-scale games. Experiments in benchmark or real-world games demonstrate the superiority of NashPG over MMD and average-iterate-based algorithms like Neural Fictitious Self-Play (NFSP) and Policy-Space Response Oracles (PSRO).",
"id": "dLyE9Z2lZq",
"rating": 6
},
{
"content": "Regularized Nash equilibriums are easier to compute compared their non-regularized counterpart but are exploitable in the original game. There exists a body of works that try to shrink this gap. Commonly, this is done via temperature decay. Unfortunately, lower temperatures are often unstable and converge slowly. R-NaD and FoReL proposed to iteratively regularize to the previous Nash equilibrium in an iterative process. This work proposes a policy space version of the same result.",
"id": "EfVwoXDMud",
"rating": 2
}
] |
{
"cdate": 1757304899303,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025nash,\ntitle={Nash Policy Gradient: A Policy Gradient Method with Iteratively Refined Regularization for Finding Nash Equilibria},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=07o2iouN1Y},\nnote={under review}\n}"
},
"abstract": {
"value": "Finding Nash equilibria in imperfect-information games remains a central challenge in multi-agent reinforcement learning. While regularization-based methods have recently achieved last-iteration convergence to a regularized equilibrium, they require the regularization strength to shrink toward zero to approximate a Nash equilibrium, often leading to unstable learning in practice. Instead, we fix the regularization strength at a large value for robustness and achieve convergence by iteratively refining the reference policy. Our main theoretical result shows that this procedure guarantees strictly monotonic improvement and convergence to an exact Nash equilibrium in two-player zero-sum games, without requiring a uniqueness assumption. Building on this framework, we develop a practical algorithm, *Nash Policy Gradient* (NashPG), which preserves the generalizability of policy gradient methods while relying solely on the current and reference policies. Empirically, NashPG achieves comparable or lower exploitability than prior model-free methods on classic benchmark games and scales to large domains such as *Battleship* and *No-Limit Texas Hold'em*, where NashPG consistently attains higher Elo ratings."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Multi-agent reinforcement learning",
"policy gradient",
"game theory",
"Nash equilibria"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8c6be1ccaa4cfd6f0abd440e6f11f0b609242681.pdf"
},
"primary_area": {
"value": "reinforcement learning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Nash Policy Gradient: A Policy Gradient Method with Iteratively Refined Regularization for Finding Nash Equilibria"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "07o2iouN1Y",
"id": "07o2iouN1Y",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission2942/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898118185,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission2942/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission2942/Authors"
]
}
|
|
2,026
|
084SvT55yk
|
[
10,
4,
6
] |
[
{
"content": "Existing neural CO solvers either ensure local feasibility but lack global awareness (LC) or produce global predictions with constraint violations (GP). Current adaptive expansion is only an external wrapper with limited effectiveness.\nNEXCO makes adaptive expansion native through CO-specific masked diffusion where intermediate states are meaningful partial solutions, combined with time-agnostic training and confidence-based progressive unmasking with feasibility projection.\nThe framework achieves about 50% quality improvement and 2~4 times speedup over state-of-the-art, successfully embedding adaptive expansion as an intrinsic generative principle rather than external wrapper.\nNEXCO successfully realizes adaptive solution expansion as a native generative principle within masked diffusion, achieving superior performance across multiple CO problems. The framework opens new opportunities for integrating constructive expansion mechanisms into diffusion-based generative modeling, providing a step toward scalable and general-purpose neural solvers for combinatorial optimization.",
"id": "LaxIm3xFvT",
"rating": 10
},
{
"content": "The paper introduces NEXCO, a masked-diffusion framework for neural CO that \n(i) replaces uniform bit-flip noise with a CO-specific, one-way masking that only turns selected 1’s to 0 (never adding false positives), \n(ii) trains a time-agnostic denoiser with time-agnostic optimization consistency (TOC), and \n(iii) decodes via Native Adaptive Expansion (NAE)—progressively unmasking variables while a problem-specific projector enforces feasibility. \nClaimed benefits: feasible partial states along the forward trajectory, schedule-free training, and constructive, efficient decoding; experiments are shown for TSP/MIS/CVRP with strong results.",
"id": "YMGLEnBoQz",
"rating": 4
},
{
"content": "This paper introduces NEXCO, a diffusion-based framework for neural combinatorial optimization that makes adaptive expansion native to the generative model. The core idea is CO-specific masked diffusion that only drops active variables (1→0), a time‑agnostic GNN denoiser trained with optimization consistency across corruption levels, and an inference routine that expands solutions via confidence-ranked candidate sets with feasibility projection. Across TSP, MIS, and CVRP, NEXCO reports stronger solution quality and faster inference than prior LC/GP/AE baselines.",
"id": "6ycfxhIfSV",
"rating": 6
}
] |
{
"cdate": 1756822893695,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025native,\ntitle={Native Adaptive Solution Expansion for Diffusion-based Combinatorial Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=084SvT55yk},\nnote={under review}\n}"
},
"abstract": {
"value": "One central challenge in Neural Combinatorial Optimization (NCO) is handling hard constraints efficiently. Beyond the two classic paradigms, i.e., Local Construction (LC), which sequentially builds feasible solutions but scales poorly, and Global Prediction (GP), which produces one-shot heatmaps yet struggles with constraint conflicts, the recently proposed Adaptive Expansion (AE) shares the advantages of both by progressively growing partial solutions with instance-wise global awareness.\nHowever, existing realizations bolt AE onto external GP predictors, so their solution quality is bounded by the backbone and their inference cost scales with repeated global calls.\nIn this paper, we fundamentally rethink adaptive expansion and make it native to a generative model, acting as its intrinsic decoding principle rather than an external wrapper.\nWe propose NEXCO, a CO-specific masked diffusion framework that turns adaptive expansion into the model’s own iterative unmasking process.\nSpecifically, it involves a solution-expansion training procedure with a time-agnostic GNN denoiser, which learns diffusion trajectories between fully masked solutions and ground-truth solutions.\nWith the trained time-agnostic denoiser, we introduce a novel solution expansion scheme at the solving stage, enabling adaptive control over the intermediate solution states. \nIt is achieved by constructing candidate sets according to confidence scores and applying feasibility projection to expand the solution while respecting constraints. \nIn this way, ``adaptive\" is not an afterthought but the decoding itself: intermediate diffusion states are meaningful partial solutions and progress is instance-adaptive rather than schedule-bound.\nExtensive experiments on representative CO problems show that NEXCO achieves approximately 50\\% improvement in solution quality and up to $4\\times$ faster inference compared to prior state-of-the-art solvers."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"mask diffusion model",
"neural combinatorial optimization"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ba6d77226d561157026830477edbe78117c8becb.pdf"
},
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Native Adaptive Solution Expansion for Diffusion-based Combinatorial Optimization"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "084SvT55yk",
"id": "084SvT55yk",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission903/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898236270,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission903/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission903/Authors"
]
}
|
|
2,026
|
08EyZzhgl1
|
[
2,
4,
2,
4
] |
[
{
"content": "This paper presents TextME, a text‑only training framework for modality expansion that eliminates the need for paired multimodal data by leveraging the “consistent modality gap” property of pretrained encoders. TextME first pre‑computes a constant offset between text and non‑text embeddings for each modality, then trains lightweight projection networks solely on text embeddings anchored in either a large LLM space or a multimodal text encoder space. In inference, non‑text embeddings are centered by subtracting the pre‑computed offset and projected through the text‑trained network to enable zero‑shot cross‑modal retrieval and classification.",
"id": "N4OtC6UolF",
"rating": 2
},
{
"content": "In the TEXTME, a projection network is trained for each CLIP-type model of different modalities, which maps text embeddings into a unified anchor space. During the prediction, an overall offset is applied to allow other modalities to directly use the projection network to map their modal data into the anchor space. Through this approach, modal expansion can be achieved with only a small number of unpaired samples, which are used to calculate the offset.",
"id": "QZSQVFDrfA",
"rating": 4
},
{
"content": "The paper introduces TextME, estimating an offset between text and another modality, then learns lightweight projection on text embedding space. Evaluations on audio, 3D, X ray, and molecular modalities achieve on average 88.2% of the performance of fully supervised methods, while also exhibiting emergent transfer capabilities between unseen modality pairs.",
"id": "4u6tDZLLrB",
"rating": 2
},
{
"content": "The paper proposes TextME, a framework for text-only multimodal expansion. It claims that pre-trained contrastive encoders (e.g., CLIP, LanguageBind) exhibit a consistent modality gap (fixed, content-independent offset between text and non-text embeddings). By pre-computing this offset and training lightweight projection networks solely on text embeddings, the authors argue one can align new modalities to a common space without any paired data. Empirically, TextME achieves roughly \"88% of paired-data performance\" across diverse tasks (audio, 3D, medical imaging, molecule retrieval).",
"id": "8dIC80hnW5",
"rating": 4
}
] |
{
"cdate": 1758266488620,
"content": {
"TLDR": {
"value": "TextMEunifies specialized modalities without paired supervision by training text-only projectors and applying centering offsets to bridge the modality gap at inference."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025textme,\ntitle={Text{ME}: Text-only Training for Modality Expansion via {LLM} Space Pivoting},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=08EyZzhgl1},\nnote={under review}\n}"
},
"abstract": {
"value": "Expanding multimodal representations to novel modalities is constrained by reliance on large-scale paired datasets (e.g., text–image, text–audio, text–3D, text–molecule), which are costly and often infeasible in domains requiring expert annotation such as medical imaging, 3D modeling, and molecular analysis. We introduce TextME, the first framework for text-only modality expansion that removes paired data requirements. Our method leverages the universal geometric properties of pre-trained encoders—consistent modality gaps—which enable zero-shot cross-modal transfer once embedding spaces satisfy these properties. We empirically verify that these hold across audio, 3D, X-ray, and molecular domains, enabling effective cross-modal tasks without paired supervision. Furthermore, we evaluated LLM and multimodal text encoders to determine which is more effective as a unified anchor space. Experiments show that TextME achieves 88.2% of paired-data performance in zero-shot classification and cross-modal retrieval, while also supporting emergent capabilities between unseen modality pairs (e.g., audio-to-3D, molecule-to-image). These results highlight text-only modality expansion as a practical and scalable path toward foundation models spanning arbitrary modalities."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"multimodal learning",
"modality expansion",
"text-only training",
"modality gap",
"cross-modal retrieval",
"representation alignment"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d88a5b6ee725f802837c6cc50d23eae58e76be85.pdf"
},
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "TextME: Text-only Training for Modality Expansion via LLM Space Pivoting"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "08EyZzhgl1",
"id": "08EyZzhgl1",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission16593/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897230825,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission16593/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission16593/Authors"
]
}
|
|
2,026
|
08FTG45E9m
|
[
4,
6,
2,
2
] |
[
{
"content": "The paper introduces Hermes, a multi-scale spatial-temporal hypergraph network for stock time series forecasting. The model aims to jointly model inter-industry lead-lag structures and multi-scale temporal dependencies. It incorporates a hyperedge-based moving aggregation module and a multi-scale fusion mechanism to capture both spatial and temporal dynamics. Experiments on NASDAQ, NYSE, and S&P500 datasets demonstrate significant improvements over several baselines (RNN-, GNN-, HGNN-, and MLP-based models).",
"id": "pcB4wVrzTh",
"rating": 4
},
{
"content": "This paper addresses the limitations of existing hypergraph-based models for Stock Time Series Forecasting (STSF), which tend to overlook two critical aspects of financial markets: **inter-industry lead-lag interactions** and **multi-scale information**. The authors propose Hermes, a novel spatial-temporal hypergraph network, designed to capture these correlations more comprehensively. The core of the framework integrates two primary modules into the hypergraph structure: **(1) Hyperedge-based Moving Aggregation**: This module utilizes a sliding window and dynamic temporal aggregation to model the non-simultaneous, causal (lead-lag) relationships that exist between different financial industries. **(2) Hyperedge-based Multi-scale Fusion**: To capture patterns at different time granularities (e.g., short-term cycles versus long-term trends) , the input time series is first decomposed into multiple scales. The fusion module then integrates this multi-scale information using a novel cross-scale, edge-to-edge message passing mechanism, which preserves the consistency of information at each scale while facilitating interaction between them. The architecture successfully unifies these spatio-temporal dynamics and demonstrates superior performance, in both accuracy and efficiency, across multiple real-world stock datasets.",
"id": "ptqB8YwaLY",
"rating": 6
},
{
"content": "This paper proposes Hermes, a multi-scale spatial-temporal hypergraph network that aims to capture inter-industry lead-lag correlations and multi-scale dependencies for stock time series forecasting. The overall presentation is clear and the paper is technically sound. However, the topic of stock prediction has been extensively studied over decades, and the contribution here seems rather incremental—mainly achieving better results on well-worn benchmark datasets without offering truly novel insights into financial modeling. Moreover, several core conceptual points remain insufficiently discussed.",
"id": "5003uRiAMX",
"rating": 2
},
{
"content": "This paper proposes a spatial-temporal hypergraph network framework Hermes with a hyperedge-based moving aggregation module and hyperedge-based multi-scale fusion module for stock time series forecasting, which can consider lead-lag dependencies and multi-scale information. Experiments demonstrate the effectiveness of Hermes.",
"id": "N08sBLsgxq",
"rating": 2
},
{
"content": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.",
"id": "ps7Snddn38",
"rating": null
}
] |
{
"cdate": 1756734867685,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@misc{\nqiu2025multiscale,\ntitle={Multi-Scale Spatial-Temporal Hypergraph Network with Lead-Lag Structures for Stock Time Series Forecasting},\nauthor={Xiangfei Qiu and Liu Yang and Hanyin Cheng and Xingjian Wu and Rongjia Wu and Zhang Zhigang and Tu ding and Chenjuan Guo and Bin Yang and Christian S. Jensen and Jilin Hu},\nyear={2025},\nurl={https://openreview.net/forum?id=08FTG45E9m}\n}"
},
"abstract": {
"value": "Time series forecasting occurs in a range of financial applications providing essential decision-making support to investors, regulatory institutions, and analysts. Unlike multivariate time series from other domains, stock time series exhibit industry correlation. Exploiting this kind of correlation can improve forecasting accuracy. However, existing methods based on hypergraphs can only capture industry correlation relatively superficially. These methods face two key limitations: they do not fully consider inter-industry lead-lag interactions, and they do not model multi-scale information within and among industries. This study proposes the Hermes framework for stock time series forecasting that aims to improve the exploitation of industry correlation by eliminating these limitations. The framework integrates moving aggregation and multi-scale fusion modules in a hypergraph network. Specifically, to more flexibly capture the lead-lag relationships among industries, Hermes proposes a hyperedge-based moving aggregation module. This module incorporates a sliding window and utilizes dynamic temporal aggregation operations to consider lead-lag dependencies among industries. Additionally, to effectively model multi-scale information, Hermes employs cross-scale, edge-to-edge message passing to integrate information from different scales while maintaining the consistency of each scale. Experimental results on multiple real-world stock datasets show that Hermes outperforms existing state-of-the-art methods in both efficiency and accuracy."
},
"anonymous_url": null,
"authorids": {
"value": [
"~Xiangfei_Qiu1",
"~Liu_Yang22",
"~Hanyin_Cheng1",
"~Xingjian_Wu1",
"~Rongjia_Wu2",
"~Zhang_Zhigang1",
"~Tu_ding1",
"~Chenjuan_Guo1",
"~Bin_Yang4",
"~Christian_S._Jensen1",
"~Jilin_Hu1"
]
},
"authors": {
"value": [
"Xiangfei Qiu",
"Liu Yang",
"Hanyin Cheng",
"Xingjian Wu",
"Rongjia Wu",
"Zhang Zhigang",
"Tu ding",
"Chenjuan Guo",
"Bin Yang",
"Christian S. Jensen",
"Jilin Hu"
]
},
"code_of_ethics": null,
"keywords": {
"value": [
"time series",
"time series forecasting"
]
},
"no_acknowledgement_section": null,
"paperhash": {
"value": "qiu|multiscale_spatialtemporal_hypergraph_network_with_leadlag_structures_for_stock_time_series_forecasting"
},
"pdf": {
"value": "/pdf/bc21800b8aa59aafd1af1d5aa573e3f8aab3d55b.pdf"
},
"primary_area": {
"value": "learning on time series and dynamical systems"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Multi-Scale Spatial-Temporal Hypergraph Network with Lead-Lag Structures for Stock Time Series Forecasting"
},
"venue": {
"value": "ICLR 2026 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Withdrawn_Submission"
}
},
"forum": "08FTG45E9m",
"id": "08FTG45E9m",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission308/-/Full_Submission",
"ICLR.cc/2026/Conference/-/Withdrawn_Submission"
],
"license": "CC BY 4.0",
"mdate": 1763294467721,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission308/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission308/Authors"
]
}
|
|
2,026
|
08KOxSjRyj
|
[
4,
2,
4,
2
] |
[
{
"content": "The paper introduces LongEmotion, a long-context benchmark for evaluating LLMs Emotional Intelligence (EI) across six task: Emotion Classification, Emotion Detection, Emotion QA, Emotion Conversation, Emotion Summary, and Emotion Expression. Moreover, this paper propose RAG and CoEM frameworks to enhance performance by retrieving and enriching contextually relevant information. This paper conducts exhaustive experiments on the LongEmotion dataset under Base, RAG, and CoEM settings, analyzing models’ Emotional Intelligence from perspectives such as emotion enhancement, long-text performance, and expressive capability.",
"id": "gdxOD7kRrI",
"rating": 4
},
{
"content": "This paper introduces a benchmark called LONGEMOTION, designed to evaluate the Emotional Intelligence (EI) of Large Language Models (LLMs) in long-context interactions. The authors argue that existing EI benchmarks often focus on short texts with limited contextual information. LONGEMOTION aims to fill this gap by introducing longer (average input 8,777 tokens), noisier, and more realistic interactions. The benchmark includes six tasks: Emotion Classification, Emotion Detection, Emotion Question Answering, Emotion Summarization, Emotion Conversation, and Emotion Expression.\n\nTo improve model performance on these tasks, the authors also propose two methods: a novel RAG (Retrieval-Augmented Generation) approach that does not rely on external knowledge bases but instead uses the dialogue context itself as the retrieval source.\n\nThe other method is a multi-agent framework called COEM (Collaborative Emotional Modeling), which breaks down the task into five stages (Chunking, Initial Ranking, Multi-Agent Enrichment, Reranking, Emotional Integration Generation) to integrate retrieval and limited knowledge injection. Experimental results show that these two methods achieve significant improvements on most long-text EI tasks. The paper also provides a detailed analysis of the performance of the GPT series models.",
"id": "CJoGS3mOh6",
"rating": 2
},
{
"content": "This paper addresses the gap in evaluating LLMs’ Emotional Intelligence (EI) in long-context scenarios by proposing LONGEMOTION, a benchmark covering six tasks with an average input length of 8,777 tokens, constructed via reorganizing existing datasets and human annotation. It also introduces two enhanced methods: RAG and CoEM integrating retrieval augmentation and knowledge injection. Experiments on closed-source and open-source models, show both methods consistently improve EI performance across most tasks. The paper concludes by noting LONGEMOTION advances practical EI evaluation for LLMs, with all code and datasets to be open-sourced.",
"id": "7S31mnUqLY",
"rating": 4
},
{
"content": "This paper introduces LONGEMOTION, a new benchmark designed to evaluate the Emotional Intelligence (EI) of Large Language Models (LLMs) in long-context scenarios, which existing benchmarks often overlook(). The benchmark includes six diverse tasks: Emotion Classification, Emotion Detection, Emotion QA, Emotion Conversation, Emotion Summary, and Emotion Expression, with an average input length of 8,777 tokens.\nTo improve performance, the authors propose two methods: a Retrieval-Augmented Generation (RAG) approach that uses conversation history as a retrieval source, and a novel Collaborative Emotional Modeling (COEM) framework(). COEM is a multi-stage pipeline that involves chunking the context, ranking, multi-agent enrichment with external knowledge, re-ranking, and final response generation.\nExperiments were conducted on various closed-source models like the GPT series and open-source models such as DeepSeek-V3, Llama3.1-8B-Instruct, and Qwen3-8B. The results indicate that the RAG and COEM frameworks consistently improve EI-related performance across most tasks. The paper also provides detailed case studies, including a comparison of GPT series models, an analysis of the COEM framework's impact, and the advantages of the LONGEMOTION dataset.",
"id": "Y0xwVwNpmZ",
"rating": 2
}
] |
{
"cdate": 1758170614961,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025longemotion,\ntitle={LongEmotion: Measuring Emotional Intelligence of Large Language Models in Long-Context Interaction},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=08KOxSjRyj},\nnote={under review}\n}"
},
"abstract": {
"value": "Large language models (LLMs) make significant progress in Emotional Intelligence (EI) and long-context understanding. However, existing benchmarks tend to overlook certain aspects of EI in long-context scenarios, especially under $\\textit{realistic, practical settings}$ where interactions are lengthy, diverse, and often noisy. To move towards such realistic settings, we present $\\textit{LongEmotion}$, a benchmark specifically designed for long-context EI tasks. It covers a diverse set of tasks, including $\\textbf{Emotion Classification}$, $\\textbf{Emotion Detection}$, $\\textbf{Emotion QA}$, $\\textbf{Emotion Conversation}$, $\\textbf{Emotion Summary}$, and $\\textbf{Emotion Expression}$. On average, the input length for these tasks reaches 8${,}$777 tokens, with long-form generation required for $\\textit{Emotion Expression}$. To enhance performance under realistic constraints, we incorporate Retrieval-Augmented Generation ($\\textit{RAG}$) and Collaborative Emotional Modeling ($\\textit{CoEM}$), and compare them with standard prompt-based methods. Unlike conventional approaches, our $\\textit{RAG}$ method leverages both the conversation context and the large language model itself as retrieval sources, avoiding reliance on external knowledge bases. The $\\textit{CoEM}$ method further improves performance by decomposing the task into five stages, integrating both retrieval augmentation and limited knowledge injection. Experimental results show that both $\\textit{RAG}$ and $\\textit{CoEM}$ consistently enhance EI-related performance across most long-context tasks, advancing LLMs toward more $\\textit{practical and real-world EI applications}$. Furthermore, we conduct a detailed case study on the performance comparison among GPT series models, the application of CoEM in each stage and its impact on task scores, and the advantages of the LongEmotion dataset in advancing EI. 
All of our code and datasets will be open-sourced, which can be viewed at the anonymous repository link https://anonymous.4open.science/r/anonymous-578B."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Emotional Intelligence",
"Long-Context"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5c3e0c117d155281a7979617728393553fab82e5.pdf"
},
"primary_area": {
"value": "datasets and benchmarks"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/bcac58c19d6cdce0babe4a787d2efe6dff891815.zip"
},
"title": {
"value": "LongEmotion: Measuring Emotional Intelligence of Large Language Models in Long-Context Interaction"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "08KOxSjRyj",
"id": "08KOxSjRyj",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission10412/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897652232,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission10412/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission10412/Authors"
]
}
|
|
2,026
|
08pxmTLKTT
|
[
2,
4,
4,
6
] |
[
{
"content": "The paper proposes to better address the problem of object ambiguity in interactive segmentation (IS) models with SmartSAM method. To achieve it, an agent generates a few branches with interactions (positive/negative click or bbox), considering the first user interaction, to produce candidate masks, then it compare every candidate with a reference object using cosine similarity and choose the most similar one. Using fuzzy statistics the paper demonstrates that its method achieved lower fuzzy entropy than traditional SAM. Finally, it demonstrates improved performance on DAVIS, PartImageNet and Amb-Occ (firstly presented in the paper) datasets of IS models with SmartSAM usage.",
"id": "Mq2T1GTssR",
"rating": 2
},
{
"content": "The Segment Anything Model (SAM) often encounters ambiguity in interactive segmentation, especially during the initial interaction. To reduce the need for extensive human input, the authors propose SmartSAM, a method aimed at improving segmentation accuracy. The key idea is to generate a diverse set of prompts around the initial user prompt, producing multiple candidate masks. The most appropriate mask is then selected by measuring feature similarity to the reference, computed using DINOv2 embeddings. Experiments on three benchmark datasets demonstrate the effectiveness of the proposed method.",
"id": "DzHzMrJ7MY",
"rating": 4
},
{
"content": "This paper considers a task of interactive object segmentation, using user prompts and reference (text- or visual-based). An agent-based method is proposed, which is training-free, and can be used on top of any interactive segmentation methods. The method has been evaluated using several variants of SAM-based methods, and demonstrated significant improvement in accuracy over baselines.",
"id": "OqG9xEQxPu",
"rating": 4
},
{
"content": "This paper proposes a novel interactive segmentation framework that utilizes the model's intrinsic knowledge to segment ambiguous objects effectively.",
"id": "yS8iEDnD22",
"rating": 6
}
] |
{
"cdate": 1758013764632,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025smartsam,\ntitle={Smart{SAM}: Segment Ambiguous Objects like Smart Annotators},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=08pxmTLKTT},\nnote={under review}\n}"
},
"abstract": {
"value": "Segment Anything Model (SAM) often encounters ambiguity in interactive segmentation, where insufficient user interaction leads to inaccurate segmentation of the target object. Existing approaches primarily address ambiguity through repeated human-model interactions, which are time-consuming due to the inherent latency of human responses. To reduce human efforts, we propose a novel interactive segmentation framework that leverages the model’s inherent capabilities to effectively segment ambiguous objects.\nOur key idea is to create an annotator-like agent to interact with the model. The resulting SmartSAM method mimics intelligent human annotators, resolving ambiguity with a single click and one reference instance. The agent generates multiple prompts around the initial click to simulate diverse annotator behaviors and refines the output masks by iteratively adding click chains in uncertain regions, thereby producing a set of candidate masks. Finally, the agent selects the mask that most closely aligns with the user’s intent, as indicated by the reference instance. Furthermore, we formalize the agent’s behavior as a fuzzy regression problem by quantifying ambiguity using fuzzy entropy. We demonstrate that our agent yields lower entropy than traditional methods, and we establish robustness and sufficiency theorems to ensure effective, human-like decision-making within a bounded range of actions. We evaluate our approach on multiple segmentation benchmarks and demonstrate its superiority over state-of-the-art methods."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Ambiguity",
"Segment Anything Model",
"Interactive Segmentation"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/820ea9ba35674f9e0d8e8f0e96b98688d2ecfb4e.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "SmartSAM: Segment Ambiguous Objects like Smart Annotators"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "08pxmTLKTT",
"id": "08pxmTLKTT",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission7273/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897862687,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission7273/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission7273/Authors"
]
}
|
|
2,026
|
08tuDzMDEn
|
[
4,
4,
6,
4
] |
[
{
"content": "This paper studies the task of counterargument generation and introduces a persona-based approach with Tree-of-Thought (ToT) content planning. Specifically, given an original post (OP), the system first constructs three distinct personas, each representing a unique perspective. These personas then perform content planning via a Tree-of-Thought process before generating the final counterargument. All components are implemented through prompting a large language model (LLM). By decomposing the end-to-end generation and integrating persona creation, the proposed method aims to produce more diverse and audience-tailored arguments. Experiments conducted on the Reddit CMV dataset demonstrate the effectiveness of the approach with both automatic metrics and human evaluations.",
"id": "QGIgLj8FJN",
"rating": 4
},
{
"content": "The paper proposes a generation of counterarguments with the Tree-of-Thought-inspired method called Persona-guided Tree-based Counterargument Generation (PTCG). The authors cluster different persona types and first, define the personality cluster of the claim author, then find personas from the nearest and the farthest cluster, and then generate counterarguments based on the perspectives of the selected personas. They use 847 threads from ChangeMyView dataset. The evaluation stage includes automatic metrics, classifier-based and LLM-as-a-Judge, and human evaluation. The method is compared with other baselines and shows that PTCG improves diversity and persuasiveness.",
"id": "sFRIvPAQhX",
"rating": 4
},
{
"content": "This paper proposes PTCG (Persona-Guided Tree-Based Counterargument Generation) — a framework that integrates persona-based conditioning with Tree-of-Thoughts (ToT)-inspired step-wise generation and pruning to produce multiple, diverse, and persuasive counterarguments.\nThe key idea is to estimate the persona of the original argument’s author (the “OP persona”), select three personas from distinct clusters (same, nearest, furthest), and use a structured reasoning tree to plan and generate counterarguments from those perspectives. The method is evaluated on 847 posts from the ChangeMyView (CMV) subreddit, comparing against multiple baselines using both LLM-as-a-Judge and human evaluations.\nResults indicate improvements in persuasiveness, diversity, and stance quality compared to strong LLM baselines (e.g., Llama-3.1-8B, DeepSeek-R1).",
"id": "p4JT2YqbAA",
"rating": 6
},
{
"content": "This paper proposes PTCG (Persona-Guided Tree-based Counterargument Generation) , a framework that integrates persona grounding and Tree-of-Thoughts (ToT) style reasoning to generate diverse and persuasive counterarguments. The system estimates the persona of the original author (OP) from an input argument, selects three contrasting speaker personas (same, nearest, and furthest cluster), and then employs a tree-based generation process to plan, prune, and produce multiple counterarguments from distinct perspectives.",
"id": "dC7819oe3Y",
"rating": 4
}
] |
{
"cdate": 1758269143514,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025ptcg,\ntitle={{PTCG}: Persona-guided Tree-based Counterargument Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=08tuDzMDEn},\nnote={under review}\n}"
},
"abstract": {
"value": "The ability to generate counterarguments is important for fostering critical thinking, balanced discourse, and informed decision-making.\nHowever, existing approaches typically produce only a single counterargument, thereby overlooking the diversity and persuasiveness required in real-world debates.\nThis limitation is critical, as the same topic may persuade different individuals only when framed from distinct perspectives.\nTo address this limitation, we propose Persona-guided Tree-based Counterargument Generation (PTCG), a framework that combines Tree-of-Thoughts–inspired step-wise generation and pruning with speaker persona selection.\nBy estimating the author’s persona from the original argument and incorporating speaker personas representing distinct perspectives, the framework operationalizes perspective-taking, enabling reasoning from multiple standpoints and supporting the generation of diverse counterarguments.\nWe propose a tree-based procedure that generates plans, selects the best, and produces multiple speaker persona-specific counterarguments, from which the most effective are chosen.\nWe evaluate PTCG through a comprehensive multi-faceted setup, combining Large Language Model (LLM)-as-a-Judge, classifier-based assessment, and human evaluations.\nOur experimental results show that PTCG substantially improves both the diversity and persuasiveness of counterarguments compared to baselines.\nThese findings highlight the effectiveness of adaptive persona integration in boosting diversity and strengthening persuasiveness."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Counterargument",
"Generation",
"Persona"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/91dbff0a08990793b704ae81302b0af1ddc72c2c.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/3269496ebac5f0c8275f3b4891cbd3a2309344fb.zip"
},
"title": {
"value": "PTCG: Persona-guided Tree-based Counterargument Generation"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "08tuDzMDEn",
"id": "08tuDzMDEn",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission16826/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897216977,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission16826/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission16826/Authors"
]
}
|
|
2,026
|
09FE8nv4sV
|
[
6,
4,
4,
4
] |
[
{
"content": "This paper introduces \"MILP-Retrieval,\" a novel framework to address the critical problem of data scarcity for training data-driven Mixed-Integer Linear Programming (MILP) solvers. The authors argue that existing generative methods (e.g., VAEs, diffusion models) are highly inefficient, as they require training a separate, complex model for each distinct problem class and offer poor control over the generated instance's properties.\n\nMILP-Retrieval proposes a paradigm shift, changing the problem from \"generation\" to \"retrieval and tuning.\" The framework consists of several key components:\n\nMILP Library, MILP Embedding Model, Embedding Metric, Retrieval and Tuning.",
"id": "ceqvjhqNjr",
"rating": 6
},
{
"content": "This paper proposes MILP-Retrieval, a retrieval-and-tune framework for targeted MILP instance generation. Instead of reconstructing instance structures with a class-specific generative model, the method builds a multi-modal MILP library (instances, formulation code, bipartite graphs, textual descriptions), trains a graph–text contrastive embedding model, uses an embedding-based similarity metric to retrieve the closest formulation code, and then tunes code parameters (randomized or Bayesian/SMAC) to control scale and difficulty before executing the code to synthesize instances. Experiments show higher semantic similarity under the proposed embedding metric, controllable hardness, and downstream gains for Neural Diving across four classes.",
"id": "xU9FN4jMns",
"rating": 4
},
{
"content": "This paper introduces a MILP instance generation framework centered on formulation code retrieval. The core workflow involves constructing a multi-modal MILP library encompassing diverse problem instances, their corresponding formulation codes, bipartite graph representations, and textual descriptions. For a given set of target instances, the framework first computes their embeddings using a pre-trained model, then retrieves the most semantically similar formulation codes from the library. Extensive experiments validate the framework’s effectiveness across multiple tasks and benchmark datasets, demonstrating strong performance in generating high-quality, target-aligned MILP instances.",
"id": "Hquwcpsxtc",
"rating": 4
},
{
"content": "The paper proposes MILP-Retrieval, a framework for targeted MILP instance generation via formulation-code retrieval. Built on top of a multi-modal MILP library, the approach first trains a MILP embedding model by contrastively aligning graph and text representations. Given a target instance, the model embeds it, retrieves the most relevant formulation code from the library, and then adjusts the code’s exposed parameters to synthesize new instances with controllable size and difficulty. Experiments demonstrate that MILP-Retrieval can generate coherent instance families across various difficulty levels, and that the synthesized data further enhances Neural Diving when used for downstream training.",
"id": "EzqNzLSDQj",
"rating": 4
}
] |
{
"cdate": 1758211996865,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025targeted,\ntitle={Targeted {MILP} Instance Generation via Formulation Code Retrieval},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=09FE8nv4sV},\nnote={under review}\n}"
},
"abstract": {
"value": "Efficient and controllable data generation is critical for improving the performance of data-driven Mixed-Integer Linear Programming (MILP) solvers, especially in applications facing data scarcity. However, existing MILP instance generation methods typically require training a separate model for each problem class, which can be computationally intensive and does not allow for the generation of instances with varying sizes and solution difficulties. To address these challenges, we introduce MILP-Retrieval, a framework for targeted MILP instance generation via formulation code retrieval. We first build a diverse MILP library that includes multiple modalities and use it to pretrain an MILP embedding model. Based on the output of this embedding model, we propose a novel similarity metric that accurately measures the similarity between instances of different sizes within the same problem class. MILP-Retrieval leverages this new metric to retrieve the formulation code of a target instance and further tune it. Experimental results demonstrate the effectiveness of generating MILP instances through formulation code retrieval, with the ability to control both the scale and difficulty of the generated instances. This approach provides a novel perspective on MILP instance generation and opens up new possibilities for learning-based solvers."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Mixed-Integer Linear Programming",
"Combinatorial Optimization",
"MILP Instance Generation"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/68b05fde92bc24ee1cdc82d5c27e9096a8eaed7c.pdf"
},
"primary_area": {
"value": "optimization"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Targeted MILP Instance Generation via Formulation Code Retrieval"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "09FE8nv4sV",
"id": "09FE8nv4sV",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission12955/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897474393,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission12955/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission12955/Authors"
]
}
|
|
2,026
|
09Nj40ScvC
|
[
2,
6,
4
] |
[
{
"content": "The paper proposes a heuristic to select preference pairs to train PRM. Meanwhile, it also modifies the advantage function of original GRPO to adapt to process reward settings.",
"id": "rgdCO8yLHy",
"rating": 2
},
{
"content": "The paper presents a novel reinforcement learning framework guided by a Preference-Based Process Reward Model (PPRM). The method first uses MCTS to select chosen and rejected rollouts. Then, Bradley-Terry loss function is used to mitigate bias in MC-value estimation by leveraging pairwise comparisons of reasoning trajectories. The method is trained using GRPO with an optimized advantage estimator to better captures the structure of preference-based process reward model. Experimental results show that the proposed PPRM improves performance on intermediate step accuracy and enhances the final policy model's performance compared to existing works, demonstrating the method's effectiveness.",
"id": "imas59seJq",
"rating": 6
},
{
"content": "This paper introduced preference-based process reward model (PPRM). It leverages Bradley-Terry pairwise comparison to reduce bias in process reward modeling. PPRM combines this preference-based formulation with a modified GRPO, to use a preference-aware advantage estimator to stablize training and reduce variance. The experiments are conducted on ProcessBench and RL finetuning tasks and the results show 2-3% accuracy improvement over strong baselines.",
"id": "0jys5qVn5S",
"rating": 4
}
] |
{
"cdate": 1758348612106,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025preferencebased,\ntitle={Preference-Based Process Reward Model for Robust Mathematical Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=09Nj40ScvC},\nnote={under review}\n}"
},
"abstract": {
"value": "Process reward models (PRMs) have emerged as a promising approach to guide LLMs by providing step-wise supervision, but traditional methods often rely on heuristic search strategies like Monte Carlo Tree Search (MCTS), which introduce bias and limit generalization. In this work, we propose a reinforcement learning framework guided by a Preference-Based Process Reward Model (PPRM) , which provides step-wise supervision to refine reasoning trajectories. We first employ MCTS to estimate and select chosen and rejected rollouts, thereby constructing a high-quality step-level dataset. Our PPRM is trained on Bradley-Terry loss function, which mitigates the bias introduced by the heuristic search strategies of MCTS by leveraging preference-based learning. To enable effective RL training with PPRM, we enhance Group Relative Policy Optimization (GRPO) by introducing a robust advantage estimator that better captures the structure of preference-based process reward model enabling stable and efficient policy optimization. Experimental results on ProcessBench and best-of-n strategy demonstrate that our approach achieves $2$-$3\\%$ improvement in intermediate step accuracy compared to existing methods for complex reasoning processes, thereby improving the reasoning accuracy of the policy model across several key reasoning benchmarks."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Process Reward Model",
"Reinforcement Learning",
"Monte Carlo Tree Search"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1a96cc56ba0b8dae8c095aa61356d3af1319282c.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Preference-Based Process Reward Model for Robust Mathematical Reasoning"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "09Nj40ScvC",
"id": "09Nj40ScvC",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission23801/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896796319,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission23801/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission23801/Authors"
]
}
|
|
2,026
|
09YSBymX6O
|
[
6,
4,
8
] |
[
{
"content": "The authors propose using spatial point processes as a self-supervision prior that explicitly models spatial distributions of objects to address the gap in previous un- and self-supervised methods that miss the spatial correlations. Thus, the paper proposes spatially informed variational autoencoders (SI-VAE) to predict spatial organization patterns from images. The authors apply SI-VAE to a real world microscopy dataset, OpenCell, and correctly identify the protein localization classes.",
"id": "Kx7KXvnROr",
"rating": 6
},
{
"content": "This paper proposes a spatially informed variational autoencoder - SI-VAE, which consists of a VAE augmented with a spatial point process. The latent representation z from the VAE is used as input to a neural network that then predicts the Gibbs potentials that define the point process. Experiments are performed with synthetic data, showing a comparison to standard VAE, generalization to unseen processes, and zero-shot conditional simulation. Lastly, an application to protein localization patterns is given, where it is demonstrated that the learned potentials agree with domain knowledge (eg proteins in vesicles being homogeneously distributed versus nucleus proteins being inhomogeneously distributed within the nuclei).",
"id": "iaaD2UHH3v",
"rating": 4
},
{
"content": "The paper developed a self-supervised deep-learning model that use stochastic point processes to predict spatial organization patterns from images, coined as Spatially Informed Variational Autoencoders (SI-VAE). The self-supervision mechanism is modeled by the Papangelou conditional intensity. Extensive experiments were presented to illustrate the effectiveness of the SI-VAE model.",
"id": "aah6oDvMks",
"rating": 8
}
] |
{
"cdate": 1758200193649,
"content": {
"TLDR": {
"value": "We present spatially informed variational autoencoders that use stochastic point processes to learn interpretable spatial patterns from images."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025spatially,\ntitle={Spatially Informed Autoencoders for Interpretable Visual Representation Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=09YSBymX6O},\nnote={under review}\n}"
},
"abstract": {
"value": "We introduce spatially informed variational autoencoders (SI-VAE) as self-supervised deep-learning models that use stochastic point processes to predict spatial organization patterns from images. Existing approaches to learning visual representations based on variational autoencoders (VAE) struggle to capture spatial correlations between objects or events, focusing instead on pixel intensities. We address this limitation by incorporating a point-process likelihood, derived from the Papangelou conditional intensity, as a self-supervision target. This results in a hybrid model that learns statistically interpretable representations of spatial localization patterns and enables zero-shot conditional simulation directly from images. Experiments with synthetic images show that SI-VAE improve the classification accuracy of attractive, repulsive, and uncorrelated point patterns from 48% (VAE) to over 80% in the worst case and 90% in the best case, while generalizing to unseen data. We apply SI-VAE to a real-world microscopy data set, demonstrating its use for studying the spatial organization of proteins in human cells and for using the representations in downstream statistical analysis."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"autoencoder",
"visual representation",
"point process",
"conditional simulation",
"interpretable machine learning",
"self supervision",
"spatial statistics"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7748035b13c9cd12469e6ea6b4f2b5d09a0bd09d.pdf"
},
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Spatially Informed Autoencoders for Interpretable Visual Representation Learning"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "09YSBymX6O",
"id": "09YSBymX6O",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission11483/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897572558,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission11483/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission11483/Authors"
]
}
|
|
2,026
|
09lmwhDqZ3
|
[
6,
4,
6,
6
] |
[
{
"content": "This paper focuses on the task of automatic formalization in theorem proving, which currently faces two major challenges: model hallucination and the semantic gap caused by ambiguous or missing premises in natural language descriptions. To address these issues, the authors propose a framework called CRAMF (Concept-driven Retrieval-Augmented Mathematical Formalization). The framework consists of three main components: (1) construction of a concept definition knowledge base, (2) mathematical concept extraction, and (3) definition retrieval. Experimental results demonstrate that CRAMF can effectively extract information useful for formalization, thereby improving overall accuracy.",
"id": "gh94iSSoEf",
"rating": 6
},
{
"content": "This paper proposes CRAMF, a concept-driven retrieval-augmented framework for automated statement formalization in Lean 4. It builds a concept–definition knowledge base from Mathlib4 and uses hybrid retrieval with reranking to provide context for LLMs. Experiments on miniF2F, ProofNet, and a new AdvancedMath dataset show improved compilation and formalization accuracy across models.",
"id": "aip7qZZBax",
"rating": 4
},
{
"content": "This paper presents CRAMF, a retrieval-augmented framework that enhances LLM-based automated formalization for theorem proving in Lean 4. CRAMF builds a structured concept-definition knowledge base from Mathlib4 (over 26,000 definitions and 1,000 core concepts) and retrieves relevant definitions to reduce hallucinations and semantic gaps during autoformalization. Experiments on miniF2F, ProofNet, and a new dataset show consistent improvements, with up to 62.1% accuracy gain.",
"id": "XR63kK1JIs",
"rating": 6
},
{
"content": "This paper proposes CRAMF, a retrieval-augmented framework that enhances LLM based automated formalization in theorem provers such as Lean 4. \nThe method builds a structured concept-definition knowledge base from Mathlib4 by aligning formal definitions with natural language expressions through reverse translation, concept extraction, and embedding via MathBERT. CRAMF retrieves relevant formal definitions for a given theorem through a dual pathway hybrid retrieval.",
"id": "MGtzcVKKP4",
"rating": 6
}
] |
{
"cdate": 1758191085758,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025automated,\ntitle={Automated Formalization via Conceptual Retrieval-Augmented {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=09lmwhDqZ3},\nnote={under review}\n}"
},
"abstract": {
"value": "Interactive theorem provers (ITPs) require manual formalization, which is labor-intensive and demands expert knowledge. While automated formalization offers a potential solution, it faces two major challenges: model hallucination (e.g., undefined predicates, symbol misuse, and version incompatibility) and the semantic gap caused by ambiguous or missing premises in natural language descriptions. To address these issues, we propose CRAMF, a Concept-driven Retrieval-Augmented Mathematical Formalization framework. CRAMF enhances LLM-based autoformalization by retrieving formal definitions of core mathematical concepts, providing contextual grounding during code generation. However, applying retrieval-augmented generation (RAG) in this setting is non-trivial due to the lack of structured knowledge bases, the polymorphic nature of mathematical concepts, and the high precision required in formal retrieval. We introduce a framework for automatically constructing a concept-definition knowledge base from Mathlib4, the standard mathematical library for the Lean 4 theorem prover, indexing over 26,000 formal definitions and 1,000+ core mathematical concepts. To address conceptual polymorphism, we propose contextual query augmentation with domain- and application-level signals. In addition, we design a dual-channel hybrid retrieval strategy with reranking to ensure accurate and relevant definition retrieval. Experiments on miniF2F, ProofNet, and our newly proposed AdvancedMath benchmark show that CRAMF can be seamlessly integrated into LLM-based autoformalizers, yielding consistent improvements in translation accuracy—achieving up to 62.1% and an average of 29.9% relative improvement."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Autoformalization",
"Retrieval-augmented Generation"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/4089192f22e6b5c7e3ec876501c9474f4e95d04d.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/a8c5ee057e703486a6d99918ad99eceeec8faff6.zip"
},
"title": {
"value": "Automated Formalization via Conceptual Retrieval-Augmented LLMs"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "09lmwhDqZ3",
"id": "09lmwhDqZ3",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission11148/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897604259,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission11148/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission11148/Authors"
]
}
|
|
2,026
|
0A2rXt5SAy
|
[
2,
6,
4,
4
] |
[
{
"content": "This paper proposes a method that solves the pipeline scheduling problem using mixed-integer linear programming (MILP), treating activation offloading as a decision variable. It models whether activations are offloaded or retained in GPU memory and enforces constraints on data dependencies, resource exclusivity, memory capacity, synchronization, and GPU–CPU topology. Because solving the MILP can be computationally expensive, the paper introduces several accelerations: variable fixing, cut generation, redundancy elimination, a cached scheduling strategy, and warm starts with initial solutions. The method is evaluated on up to 16 NVIDIA H100 GPUs with GPT-3–like architectures, demonstrating schedules that achieve speedups and avoid out-of-memory errors compared to baselines. The baselines include five pipeline-parallelism methods: 1F1B, 1F1B-Interleaved, ZeroBubble, ZeroBubble-V, and PipeOffload. Results show >30% faster performance than PipeOffload in memory-rich settings and >20% faster in memory-limited settings. Since the MILP solver can sometimes run for a long time, a time limit is imposed and the best solution found within that limit is used. The proposed method consumes more memory than PipeOffload to realize its speedups, illustrating a clear time–memory trade-off.",
"id": "QNhDezqQrF",
"rating": 2
},
{
"content": "This work introduces a pipeline-parallel training scheduler that jointly optimizes computation order, memory usage, and GPU–CPU activation transfers. The authors formulate scheduling as a Mixed-Integer Linear Program (MILP) involving: binary variables for offloading decisions and precedence constraints, and continuous variables for operation timing and memory dynamics.\nPipeline execution is optimized to minimize makespan under GPU memory limits including the GPU-CPU interconnect topology constraints. The system incorporates solver-side improvements (redundancy elimination, triangle inequality cuts, warm-start from AdaOffload, and cached solution reuse) and supports online schedule refinement. Experiments on up to 16 H100 GPUs and models up to 14.2B parameters show significant throughput improvements, especially in memory-limited settings.",
"id": "QecX0zE5aU",
"rating": 6
},
{
"content": "This paper presents OptPipe, a mixed-integer linear programming (MILP)–based scheduler for pipeline parallelism (PP), designed to maximize throughput under memory constraints. The main contributions include a MILP formulation that jointly models memory usage and end-to-end makespan, as well as AdaOffload, an initialization strategy that improves the efficiency of the MILP solver. Empirical results demonstrate that OptPipe achieves over 30% higher throughput compared to existing heuristic-based methods. Further analysis indicates that the performance improvements primarily arise from more effective memory utilization, enabled by the flexibility of the MILP framework.",
"id": "6j2kKifghr",
"rating": 4
},
{
"content": "This paper presents OptPipe, a new framework for optimizing pipeline parallelism (PP) in LLM training, specifically addressing the trade-off between pipeline bubble latency and activation memory consumption. Unlike prior methods such as PipeOffload that rely on coarse-grained heuristics to manage activation offloading, OptPipe takes a principled optimization approach.\n\nThe core contribution is the formulation of the end-to-end pipeline scheduling problem—including all computation (Forward, Backward, Weight) and data transfer (Offload, Reload) operations —as a Mixed-Integer Linear Programming (MILP) model. The objective is to find a schedule that minimizes the total training makespan while strictly adhering to per-device memory constraints.",
"id": "Wc0GF1KxKK",
"rating": 4
}
] |
{
"cdate": 1758208530100,
"content": {
"TLDR": {
"value": "Use Mathematical Programming to model Pipeline Parallelism with Offloading to balance efficiency and memory requirement."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025optpipe,\ntitle={OptPipe: Memory- and Scheduling-Optimized Pipeline Parallelism for {LLM} Training},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0A2rXt5SAy},\nnote={under review}\n}"
},
"abstract": {
"value": "Pipeline parallelism (PP) has become a standard technique for scaling large language model (LLM) training across multiple devices. However, despite recent progress in reducing memory consumption through activation offloading, existing approaches remain largely heuristic and coarse-grained, often overlooking the fine-grained trade-offs between memory, computation, and scheduling latency. In this work, we revisit the pipeline scheduling problem from a principled optimization perspective.\nWe observe that prevailing strategies either rely on static rules or aggressively offload activations without fully leveraging the interaction between memory constraints and scheduling efficiency. To address this, we formulate scheduling as a constrained optimization problem that jointly accounts for memory capacity, activation reuse, and pipeline bubble minimization.\nSolving this model yields fine-grained schedules that reduce pipeline bubbles while adhering to strict memory budgets. Our approach complements existing offloading techniques: whereas prior approaches trade memory for time in a fixed pattern, we dynamically optimize the tradeoff with respect to model structure and hardware configuration.\nExperimental results demonstrate that our method consistently improves both throughput and memory utilization. In particular, we reduce idle pipeline time by up to 50% under the same per-device memory limit, and in some cases, enable the training of larger models within limited memory budgets."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Pipeline Parallelism",
"Scheduling",
"Offloading",
"LLM Training"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/eb86a344ab8ad7fa5d18a9b85971432726eb2c2d.pdf"
},
"primary_area": {
"value": "infrastructure, software libraries, hardware, systems, etc."
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "OptPipe: Memory- and Scheduling-Optimized Pipeline Parallelism for LLM Training"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0A2rXt5SAy",
"id": "0A2rXt5SAy",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission12551/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897502272,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission12551/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission12551/Authors"
]
}
|
|
2,026
|
0A3qzLmRHd
|
[
4,
2,
4
] |
[
{
"content": "The paper introduces SECVULEVAL, a new vulnerability detection dataset focusing on LLM-based solutions and C/C++ projects. The authors collected the vulnerability data from the national vulnerability database (NVD), and extracted line-level vulnerability labels. Furthermore, they used an LLM to extract useful contextual information which is added to the dataset. The paper finally benchmarks five state-of-the-art LLMs on two tasks: vulnerability detection and context identification, and shows that those models struggle to achieve acceptable performance, with F1-scores below 25%.",
"id": "vIjXB140a5",
"rating": 4
},
{
"content": "The paper introduces SECVULEVAL, a C/C++ vulnerability benchmark designed for function and statement-level detection with rich contextual information. It supplies metadata (CVE/CWE, commit IDs/messages, pre- and post-patch code), and five classes of “context” (arguments, external functions, type definitions, globals, environment). Duplicate functions are removed via MD5 hashing over normalized functions. The authors also propose a multi-agent LLM pipeline (planning → context extraction → detection → validation), and evaluate 5 models. The best model (Claude-3.7-Sonnet) achieves F1 scores of 53.89% and 23.83% at the function level and statement level vulnerability detection, respectively, with GPT-4.1 being close behind. Ablations without the agent pipeline show very low F1 (<4%) for function-level vulnerability detection.",
"id": "az0XoYwq4v",
"rating": 2
},
{
"content": "This paper introduces SECVULEVAL, a C/C++ vulnerability detection benchmark with 5,867 CVEs covering 25,440 functions with statement-level annotations and contextual information. The authors evaluate five LLMs using a multi-agent pipeline, finding that even the best model (Claude-3.7-Sonnet) achieves only 23.83% F1-score for statement-level detection with correct reasoning. The dataset addresses limitations in existing benchmarks by providing finer granularity, rigorous deduplication, and five categories of contextual information.",
"id": "2jeQWM45Hi",
"rating": 4
}
] |
{
"cdate": 1758230106795,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025secvuleval,\ntitle={{SECVULEVAL}: Benchmarking {LLM}s for Real-World C/C++ Vulnerability Detection},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0A3qzLmRHd},\nnote={under review}\n}"
},
"abstract": {
"value": "Large Language Models (LLMs) have shown promise in various software engineering tasks, but evaluating their effectiveness in vulnerability detection remains challenging due to the lack of high-quality benchmark datasets. Most existing datasets are limited to function-level labels, ignoring finer-grained vulnerability patterns and crucial contextual information. They also often suffer from poor data quality, such as mislabeling, inconsistent annotations, and duplicates, which can lead to inflated performance and weak generalization. Moreover, by including only the vulnerable functions, these datasets miss broader program context, like data/control dependencies and interprocedural interactions, that are essential for accurately detecting and understanding real-world security flaws. Without this context, detection models are evaluated under unrealistic assumptions, limiting their practical impact. To address these limitations, this paper introduces SECVULEVAL, a comprehensive benchmark designed to support fine-grained evaluation of LLMs and other detection methods with rich contextual information. SECVULEVAL focuses on real-world C/C++ vulnerabilities at the statement level. This granularity enables more precise evaluation of a model’s ability to localize and understand vulnerabilities, beyond simple binary classification at the function level. By incorporating rich contextual information, SECVULEVAL sets a new standard for benchmarking vulnerability detection in realistic software development scenarios. This benchmark includes 25,440 function samples covering 5,867 unique CVEs in C/C++ projects from 1999 to 2024. We evaluated the SOTA LLMs with a multi-agent-based approach. The evaluation on our dataset shows that the models are still far from accurately predicting vulnerable statements in a given function. 
The best-performing Claude-3.7-Sonnet model achieves a 23.83% F1-score for detecting vulnerable statements with correct reasoning, with GPT-4.1 closely behind. We also evaluate the effect of using contextual information for the vulnerability detection task. Finally, we analyze the LLM outputs and provide insights into their behavior in vulnerability detection for C/C++."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Benchmark",
"Security Vunerability",
"Large Language Model"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9568b0a1094b7803769e4cfc3d795afafd2e0f92.pdf"
},
"primary_area": {
"value": "datasets and benchmarks"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/89003334187309cbc3769306208e64115cd2fa03.zip"
},
"title": {
"value": "SECVULEVAL: Benchmarking LLMs for Real-World C/C++ Vulnerability Detection"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0A3qzLmRHd",
"id": "0A3qzLmRHd",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission14192/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897384880,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission14192/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission14192/Authors"
]
}
|
|
2,026
|
0A4Uf88pog
|
[
4,
4,
6
] |
[
{
"content": "This paper introduces VERINA (Verifiable Code Generation Arena), a benchmark comprising 189 manually curated programming tasks in Lean for evaluating end-to-end verifiable code generation. The benchmark assesses three foundational tasks—code generation (CodeGen), specification generation (SpecGen), and proof generation (ProofGen)—along with their flexible combinations. The best model (o4-mini) achieves 61.4% code correctness, 51.0% specification soundness/completeness, and only 3.6% proof success.",
"id": "Un0Re2smcs",
"rating": 4
},
{
"content": "This paper introduces VERINA (Verifiable Code Generation Arena), a new benchmark designed to evaluate the ability of Large Language Models (LLMs) to perform verifiable code generation. The authors define this as the joint task of generating code, formal specifications, and proofs that the code aligns with the specifications. The benchmark consists of 189 manually curated programming tasks in the Lean language, each with detailed descriptions, reference implementations, formal specifications, and comprehensive test suites. The authors evaluated state-of-the-art LLMs on VERINA and found that these models face significant challenges, especially in proof generation. The paper concludes that VERINA provides a rigorous framework for measuring and advancing LLM capabilities in producing formally verified software.",
"id": "KFocKYbhDX",
"rating": 4
},
{
"content": "The paper introduces VERINA, a benchmark for verifiable code generation built on the Lean 4 proof assistant. VERINA provides a framework for evaluating three core tasks: code generation (CodeGen), formal specification generation (SpecGen), and proof generation (ProofGen), as well as their modular compositions. The benchmark consists of 189 manually curated problems, each with a natural-language description, reference implementation, formal specification, and a test suite. The authors propose a hybrid evaluation pipeline for SpecGen that combines automated proof attempts with property-based testing. An extensive evaluation of general-purpose LLMs and specialized theorem provers reveals that ProofGen is the primary bottleneck in the end-to-end pipeline, with even iterative refinement showing limited success.",
"id": "RoUiS6l85X",
"rating": 6
}
] |
{
"cdate": 1758225274549,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025verina,\ntitle={{VERINA}: Benchmarking Verifiable Code Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0A4Uf88pog},\nnote={under review}\n}"
},
"abstract": {
"value": "Large language models (LLMs) are increasingly integrated in software development, but ensuring correctness in LLM-generated code remains challenging and often requires costly manual review. Verifiable code generation---jointly generating code, specifications, and proofs of code-specification alignment---offers a promising path to address this limitation and further unleash LLMs' benefits in coding. Yet, there exists a significant gap in evaluation: current benchmarks often focus on only individual components rather than providing a holistic evaluation framework of all tasks. In this paper, we introduce VERINA (Verifiable Code Generation Arena), a high-quality benchmark enabling a comprehensive and modular evaluation of code, specification, and proof generation as well as their compositions. VERINA consists of 189 manually curated coding tasks in Lean, with detailed problem descriptions, reference implementations, formal specifications, and extensive test suites. Our extensive evaluation of state-of-the-art LLMs reveals significant challenges in verifiable code generation, especially in proof generation, underscoring the need for improving LLM-based theorem provers in verification domains.\nThe best model, OpenAI o4-mini, achieves a 61.4% code correctness rate, 51.0% for specification soundness and completeness, and a mere 3.6% proof success rate (based on one trial per task).\nWe hope VERINA will catalyze progress in verifiable code generation by providing a rigorous and comprehensive benchmark."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"code generation",
"formal verification",
"verifiable code generation",
"AI for math",
"theorem proving",
"AI for code"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/4006dc82db471198ed3d9f554b01e5c1b118c85d.pdf"
},
"primary_area": {
"value": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/8a4f0137ad2c8b021a2908eea1a659d6e34d6d0f.zip"
},
"title": {
"value": "VERINA: Benchmarking Verifiable Code Generation"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0A4Uf88pog",
"id": "0A4Uf88pog",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission13925/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897403022,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission13925/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission13925/Authors"
]
}
|
|
2,026
|
0A4iQqwwLG
|
[
2,
4,
4,
4
] |
[
{
"content": "The paper presents a new framework to improve long-form video understanding, by incorporating a SLM-based answer clue generation that complements query-based retrieval. The system also incorporates a compressor to summarize frame features into compact tokens, reducing token load while maintaining semantic fidelity. Experiments across multiple benchmarks show consistent but modest performance improvements over query-based retrieval.",
"id": "07mAusg29J",
"rating": 2
},
{
"content": "This paper presents ClueVQA, a retrieval-enhancement framework for long-form VideoQA with Video-LLMs. The key idea is to augment query-based frame retrieval with generated answer clues—latent, answer-oriented representations derived from the query and a global scan of the video. A compact module, ClueSLM, is trained in two modes (compression and clue generation) and produces clue-based frame relevance scores, which are fused with query-based scores via a generalized noisy-OR mechanism.",
"id": "iGbUEfjlMR",
"rating": 4
},
{
"content": "Brief Summary: The paper tackles the task of video frame retrieval for video qa where given a question and a video the task is to find relevant frames from a long-video which are then passed to a separate VLM for answering the question. Standard approach is some variation of RAG, but here authors suggest a nice idea of using small-language model (ClueSLM) which acts to generate both a clue-generator and frame-summary tokens. Finally, the similarity between the answer scores and the frame-summary tokens provide the top-K retrieved frames. The small-language model is based on Rene which is mamba-based arch 1.3B model, training is done in three stages including alignment, image and then video instruction tuning. Experiments are conducted on standard long-video understanding benchmarks such as Video-MME, MLVU, LongVideoBench, where authors show proposed method consistently improves performance.",
"id": "d7t9Wj1rsh",
"rating": 4
},
{
"content": "This paper introduces ClueVQA, a retrieval-enhancement framework for long-form VideoQA that improves frame selection by generating and integrating supplementary answer clues. The core idea is to move beyond simple query-frame similarity by training a small language model (ClueSLM) to operate in dual modes: as a frame compressor and as a clue generator that produces latent answer clues from a global video scan. These clues create a secondary relevance distribution that is fused with the standard query-based distribution via a generalized noisy-OR mechanism. The method demonstrates consistent improvements over query-only retrieval across multiple Video-LLMs and long-form VideoQA benchmarks (VideoMME, LongVideoBench, MLVU), while maintaining compatibility with existing models.",
"id": "tPpdCZUZFa",
"rating": 4
},
{
"content": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.",
"id": "3BdiiKqJAn",
"rating": null
}
] |
{
"cdate": 1758336799281,
"content": {
"TLDR": {
"value": "We propose ClueVQA, a novel retrieval framework enhances query-based frame retrieval for VideoQA by generating and integrating supplementary answer clues, leading to improved performance across long-form video benchmarks and various VideoLLMs."
},
"_bibtex": {
"value": "@misc{\nfozilov2025cluevqa,\ntitle={Clue{VQA}: Enhancing Query Based Retrieval in Video-{LLM}s with Answer Clues},\nauthor={Eldor Fozilov and Donggyu Lee and Seokhoon Jeong and Taehwan Kim},\nyear={2025},\nurl={https://openreview.net/forum?id=0A4iQqwwLG}\n}"
},
"abstract": {
"value": "Video-language models have achieved notable success in understanding complex visual narratives and answering fine-grained questions about video content. However, the computational burden of processing long videos - coupled with the growing size of modern models - restricts most approaches to processing only a limited number of frames. A widely adopted strategy to address this limitation is query-based frame retrieval, where frames are selected based on their semantic similarity to the given query. While effective in many cases, such methods are primarily limited to surface-level relevance matching and can fail when faced with implicit, ambiguous, or reasoning-intensive queries, potentially overlooking critical evidence in the video. In this work, we introduce ClueVQA, a novel retrieval framework that improves upon a standard query-based approach by generating and integrating supplementary answer clues and effectively utilizing them for frame selection. The answer clues are derived from the input query and a global scan of the video, which are then used to produce a secondary scoring distribution over frames. This clue-based distribution is then fused with the original query-based frame score distribution to yield a more informed frame selection. The final selected frames are passed to an off-the-shelf Video-LLM for answer generation. Extensive experiments on long-form VideoQA benchmarks, including MLVU, LongVideoBench, and VideoMME, show that our method considerably improves performance over a standard query-based retrieval method across different Video-LLMs."
},
"anonymous_url": null,
"authorids": {
"value": [
"~Eldor_Fozilov1",
"~Donggyu_Lee4",
"~Seokhoon_Jeong2",
"~Taehwan_Kim1"
]
},
"authors": {
"value": [
"Eldor Fozilov",
"Donggyu Lee",
"Seokhoon Jeong",
"Taehwan Kim"
]
},
"code_of_ethics": null,
"keywords": {
"value": [
"video-language models",
"frame selection",
"long video question answering"
]
},
"no_acknowledgement_section": null,
"paperhash": {
"value": "fozilov|cluevqa_enhancing_query_based_retrieval_in_videollms_with_answer_clues"
},
"pdf": {
"value": "/pdf/42125810676c44f8f09af5cce1b0770930fd2869.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "ClueVQA: Enhancing Query Based Retrieval in Video-LLMs with Answer Clues"
},
"venue": {
"value": "ICLR 2026 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Withdrawn_Submission"
}
},
"forum": "0A4iQqwwLG",
"id": "0A4iQqwwLG",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission22893/-/Full_Submission",
"ICLR.cc/2026/Conference/-/Withdrawn_Submission"
],
"license": "CC BY 4.0",
"mdate": 1763019938697,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission22893/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission22893/Authors"
]
}
|
|
2,026
|
0ACUx9pMWJ
|
[
6,
4,
4,
6
] |
[
{
"content": "The authors propose to study the ability of an execution-guided program synthesis approach and transduction approaches with test-time training to generalize to new ARC-AGI-like tasks at test time. Train and test tasks are designed by hand to involve different compositions of the same set of predefined primitives. \n\nOverall, the paper asks an interesting and timely question about compositional generalization in ARC-like domains and provides a well-controlled experimental setup. The execution-guided synthesis results are solid and the TTFT analysis is carefully done. However, the scope of the comparison is narrow: both methods rely on hand-crafted DSLs and synthetic data, the OOD tasks are simple, and the advantage of EG-NPS may mostly reflect its explicit search and verification loop rather than deeper generalization. The study is informative within its controlled sandbox but limited in how much it says about solving the broader ARC-AGI challenge or about generalization in pretrained LLMs.",
"id": "0ka5y62Il9",
"rating": 6
},
{
"content": "This paper investigates out-of-distribution (OOD) generalization on the ARC-AGI benchmark by comparing execution-guided neural program synthesis (EG-NPS) with test-time fine-tuning (TTFT). The authors introduce a novel EG-NPS algorithm alongside a custom domain-specific language (DSL). Through controlled experiments, EG-NPS significantly outperforms non-execution-guided methods and TTFT on a series of experimental setups. The results demonstrate that execution-guided synthesis enables stronger compositional reasoning and more robust OOD generalization than fine-tuning-based approaches in the ARC-AGI domain.",
"id": "7qUgrLkTWn",
"rating": 4
},
{
"content": "The paper targets the problem of solving out-of-distribution (OOD) tasks in the ARC-AGI-2 visual reasoning benchmark. The approach, which the paper calls EG-NPS, extends the line of work on execution-guided program synthesis. It uses a simple DSL which integrates with a standard encoder-decoder transformer as the program generator. The idea is to generate instructions and to partially evaluate them, exposing both of these as an execution trace to the program generator. This informs the search procedure, which is a tree search with tunable randomness, to complete the program that reaches the end goal. The system is trained on ground truth programs that successfully solve some task. The evaluation is on different tasks in the same visual domain. \n\nThe experiments compare with other competing approaches which do not carry all the features of EG-NPS. The results show significant gains on 7 tasks that were evaluated during testing.",
"id": "cLTuQKHk1M",
"rating": 4
},
{
"content": "The paper considers compositional generalization within the the ARC-AGI domain. The authors demonstrate that execution-guided program synthesis performs best, beating out other neural program synthesis approaches as well as test-time fine-tuning of LLMs. To achieve this, the authors developed a DSL for the ARC-AGI domain.",
"id": "XXoFlgDP50",
"rating": 6
}
] |
{
"cdate": 1758130722525,
"content": {
"TLDR": {
"value": "Comparing the OOD generalization performance of execution-guided neural program synthesis with test-time fine-tuning on the ARC-AGI domain"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025outofdistribution,\ntitle={Out-of-Distribution Generalization in the {ARC}-{AGI} Domain: Comparing Execution-Guided Neural Program Synthesis and Test-Time Fine-Tuning},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0ACUx9pMWJ},\nnote={under review}\n}"
},
"abstract": {
"value": "We run a controlled compositional generalization experiment in the ARC-AGI domain: an open-world problem domain in which the ability to generalize out-of-distribution is, by design, an essential characteristic for success. We compare neural program synthesis and test-time fine-tuning approaches on this experiment. We find that execution-guided neural program synthesis outperforms all reference algorithms in its ability to compose novel solutions. Our empirical findings also suggest that the success of TTFT on ARC-AGI lies mainly in eliciting in-distribution knowledge that the LLM otherwise fails to rely on directly."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"ARC-AGI",
"Neural program synthesis",
"test-time fine-tuning"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/4edd49acd88859d69d21513a344c02a728179996.pdf"
},
"primary_area": {
"value": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/066fe09216496bc466dccf7dd77ae7519d72640a.zip"
},
"title": {
"value": "Out-of-Distribution Generalization in the ARC-AGI Domain: Comparing Execution-Guided Neural Program Synthesis and Test-Time Fine-Tuning"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0ACUx9pMWJ",
"id": "0ACUx9pMWJ",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission9620/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897708604,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission9620/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission9620/Authors"
]
}
|
|
2,026
|
0Af7UiJISU
|
[
6,
6,
4
] |
[
{
"content": "THOR is a technically competent and empirically thorough paper that explores hierarchical optimization for tool-integrated reasoning—a topic of rising importance in post-RLHF LLM training. The paper’s main contributions, especially the combination of TIR data generation (TIRGen) and dual-level GRPO optimization, are sound and moderately novel. The experiments are comprehensive, covering both reasoning and code tasks, and the observed gains are consistent.\n\nHowever, while the framework is methodically interesting, it does not radically depart from prior lines such as ToRL (Li et al., 2025b) or (Lin et al., 2025). The hierarchical decomposition is intuitively motivated but somewhat incremental relative to the existing literature, e.g., GiGPO (Feng et al., 2025), and several details are under-specified.",
"id": "xQuXLSQ6vg",
"rating": 6
},
{
"content": "This paper tackles the poor computational and symbolic reasoning of LLMs in math problems. It introduces THOR, a framework built on three key parts. 1) TIRGen: A data pipeline described as a multi-agent \"actor-critic\" system, for generating high-quality examples of tool-integrated reasoning. 2) Hierarchical RL: A two-level optimization strategy. It combines a sparse, \"episode-level\" reward for the final answer's correctness with a denser, \"step-level\" reward for the success of intermediate code generation. 3) Self-Correction: An inference-time mechanism that lets the model backtrack and revise its reasoning path when a tool call fails.",
"id": "YMT6biYpyO",
"rating": 6
},
{
"content": "This paper proposes a tool-integrated reasoning framework with three components: (i) a multi-agent pipeline that builds tool-integrated reasoning (TIR) trajectories for cold-start SFT; (ii) hierarchical RL that jointly optimizes episode-level answer accuracy and step-level code-pass rewards; and (iii) an inference-time self-correction mechanism that backtracks to the last failed tool-use step.",
"id": "CqQpvW8Dol",
"rating": 4
}
] |
{
"cdate": 1757839164924,
"content": {
"TLDR": {
"value": "We introduce THOR, a tool-integrated framework that combines hierarchical reinforcement learning with self-correcting inference to achieve SOTA mathematical reasoning."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025thor,\ntitle={{THOR}: Tool-Integrated Hierarchical Optimization via {RL} for Mathematical Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0Af7UiJISU},\nnote={under review}\n}"
},
"abstract": {
"value": "Large Language Models (LLMs) have made remarkable progress in mathematical reasoning, but still continue to struggle with high-precision tasks like numerical computation and formal symbolic manipulation. Integrating external tools has emerged as a promising approach to bridge this gap. Despite recent advances, existing methods struggle with three key challenges: constructing tool-integrated reasoning data, performing fine-grained optimization, and enhancing inference. To overcome these limitations, we propose THOR (Tool-Integrated Hierarchical Optimization via RL). First, we introduce TIRGen, a multi-agent actor-critic-based pipeline for constructing high-quality datasets of tool-integrated reasoning paths, aligning with the policy and generalizing well across diverse models. Second, to perform fine-grained hierarchical optimization, we introduce an RL strategy that jointly optimizes for both trajectory-level problem solving and step-level code generation. This is motivated by our key insight that the success of an intermediate tool call is a strong predictor of the final answer's correctness. Finally, THOR incorporates a self-correction mechanism that leverages immediate tool feedback to dynamically revise erroneous reasoning paths during inference. Our approach demonstrates strong generalization across diverse models, performing effectively in both reasoning and non-reasoning models. It further achieves state-of-the-art performance for models of a similar scale on multiple mathematical benchmarks, while also delivering consistent improvements on code benchmarks."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Large Language Models",
"Mathematical Problem Solving",
"Tool-Integrated Reasoning",
"Reinforcement Learning"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1d81b28f8346083153cf34d0bc2540521a6e87bc.pdf"
},
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/c35d11b552e4f7a5d42cf66249ee8cc24c0071c6.zip"
},
"title": {
"value": "THOR: Tool-Integrated Hierarchical Optimization via RL for Mathematical Reasoning"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0Af7UiJISU",
"id": "0Af7UiJISU",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission5050/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897997957,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission5050/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission5050/Authors"
]
}
|
|
2,026
|
0B5K9pIdSK
|
[
4,
6,
2,
6
] |
[
{
"content": "This paper introduces TITOK, a framework for transferring LoRA adapters between large language models through token-level contrastive knowledge transfer.\nUnlike existing methods such as TransLoRA, which rely on synthetic data filtered by an external discriminator, TITOK uses a self-contained contrastive excess mechanism that compares token-level likelihoods between a source model with and without LoRA. Tokens where the LoRA contributes the most (“excessive” tokens) are identified as carrying rich, task-specific knowledge.",
"id": "UQSkLVq5Pd",
"rating": 4
},
{
"content": "This paper proposes TITOK, a lightweight framework for transferring LoRA adapters across LLMs without original training data. It uses token-level contrastive excess scores from the difference between a base model and its LoRA version to filter synthetic data and focus training on informative tokens. TITOK avoids extra discriminators or full datasets, achieving consistent improvements over baselines like KD and TransLoRA across tasks and model types.",
"id": "lynnvkiRbS",
"rating": 6
},
{
"content": "This paper presents a framework called TITOK for transferring LoRA-based knowledge between large language models (LLMs). The authors note that existing parameter-efficient fine-tuning (PEFT) methods such as LoRA cannot be directly transplanted across different backbones, while knowledge distillation (KD) approaches depend heavily on access to original task data. TITOK introduces the concept of contrastive excess, computed between a source model with LoRA and its corresponding base model without LoRA, to identify task-informative tokens. Using these token-level signals, the method filters synthetic data and trains the target model’s new LoRA adapter only on the most informative tokens, avoiding the need for an additional discriminator or real training data. Experiments on BBH, MMLU, and LaMP benchmarks show that TITOK consistently outperforms KD and TransLoRA across multiple transfer settings, achieving average performance gains of 4–8%. Overall, the paper proposes a lightweight token-level transfer framework and demonstrates its effectiveness through empirical evaluation.",
"id": "hG139fdx1l",
"rating": 2
},
{
"content": "This paper introduces TITOK (Transfer Token-level Knowledge), a framework for transplanting LoRA adapters across different LLM backbones by transferring token-level knowledge instead of sequence-level knowledge.\nUnlike previous approaches such as TransLoRA, which require a discriminator model to filter synthetic data, TITOK leverages a contrastive excess signal to identify informative tokens that encode task-specific knowledge. TITOK requires neither extra models nor additional training overhead. It comes with an effective mechanism to resolve tokenizer mismatches between source and target models, which enhances robustness and applicability.",
"id": "NQV3STp173",
"rating": 6
}
] |
{
"cdate": 1758360984163,
"content": {
"TLDR": {
"value": "We propose a new framework TiTok, which enables effective LoRA transplantation through token-level knowledge transfer"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025titok,\ntitle={TiTok: Transfer Token-level Knowledge via Contrastive Excess to Transplant Lo{RA}},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0B5K9pIdSK},\nnote={under review}\n}"
},
"abstract": {
"value": "Large Language Models (LLMs) are widely applied in real world scenarios, but fine-tuning them comes with significant computational and storage costs. Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA mitigate these costs, but the adapted parameters are dependent on the base model and cannot be transferred across different backbones. One way to address this issue is through knowledge distillation, but its effectiveness inherently depends on training data. Recent work such as TransLoRA avoids this by generating synthetic data, but this adds complexity because it requires training an additional discriminator model. In this paper, we propose TiTok, a new framework that enables effective LoRA Transplantation through Token-level knowledge transfer. Specifically, TiTok captures task-relevant information through a contrastive excess between a source model with and without LoRA. This excess highlights informative tokens and enables selective filtering of synthetic data, all without additional models or overhead. Through experiments on three benchmarks across multiple transfer settings, our experiments show that the proposed method is consistently effective, achieving average performance gains of +4–8% compared to baselines overall."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Large Language Models",
"Knowledge Transfer",
"PEFT"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/36dec6a7418f68bdb0d5b16bf31ccf444eadf348.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "TiTok: Transfer Token-level Knowledge via Contrastive Excess to Transplant LoRA"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0B5K9pIdSK",
"id": "0B5K9pIdSK",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission24843/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896745854,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission24843/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission24843/Authors"
]
}
|
|
2,026
|
0BD2dCM4Ig
|
[
2,
2,
6,
2
] |
[
{
"content": "This paper studies graph foundation models, and proposes a MoE module together with a graph self-supervised learning-based regularization term to enhance the performance.",
"id": "OopTbQalPe",
"rating": 2
},
{
"content": "The paper studies graph foundation models (GFM) through the lens of graph oriented GFMs. The authors first identifies current bottlenecks in training gVQ-MAEs and propose several tricks to improve the optimization and regularization of the training process. Empirical results are competitive.",
"id": "v9eTWyGtes",
"rating": 2
},
{
"content": "This paper investigates critical optimization challenges in graph foundation models (GFMs), particularly focusing on graph vector-quantized masked autoencoders. It identifies two interrelated pitfalls that arise during multi-domain pre-training: failure to capture input diversity and loss of semantic separability. They propose Mixture-of-Tinkers (MoT), which integrates an Information Tinker (edge-wise semantic fusion and mixture-of-codebooks) and a Regularization Tinker (contrastive alignment and load-balancing constraint). In addition, the study provides theoretical analyses that connect MoT with the Information Bottleneck principle. The extensive experiments on 22 datasets across 6 domains demonstrate state-of-the-art results under supervised, few-shot and zero-shot settings.",
"id": "ooLFrjUZy7",
"rating": 6
},
{
"content": "I am unable to recommend acceptance for this paper. My primary reason is that the paper, in its current form, is not reviewable due to severe issues with clarity, structure, and presentation. While the proposed method may have technical merit, the current manuscript is unintelligible, making it impossible to assess the validity of the approach or the significance of the results.\n\nMy core concerns are centered on the quality of the writing and presentation:\n- The paper fails to provide sufficient context for the problem being addressed. Section 3, in particular, introduces experiments without a clear explanation of their purpose, the hypotheses being tested, or how they connect to the paper's broader claims. The reader is left to guess the motivation behind the experimental design.\n- The figures and diagrams provided are overly dense and they do not serve as an effective aid for understanding. The captions are minimal and fail to explain the figures or what the reader should conclude from them. A good figure should be largely understandable on its own, and these are not.\n- The paper's structure does not follow a logical narrative. Most paragraphs, instead of building an argument, are a dense collection of bullet points. This style of writing makes it incredibly difficult to follow the authors' reasoning or understand the flow of the proposed method.\n\nIt is entirely possible that the research described in this paper is solid and contains valuable contributions. However, in the absence of an intelligible presentation, I cannot provide a technical assessment.",
"id": "WyHQT1OPHJ",
"rating": 2
},
{
"content": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.",
"id": "83kGCkURLI",
"rating": null
}
] |
{
"cdate": 1757136379041,
"content": {
"TLDR": {
"value": "We propose MoT to address optimization pitfalls in GFMs and achieve SOTA cross-domain generalization."
},
"_bibtex": {
"value": "@misc{\nli2025two,\ntitle={Two Sides of the Same Optimization Coin: Model Degradation and Representation Collapse in Graph Foundation Models},\nauthor={Xunkai Li and Daohan Su and Sicheng Liu and Ru Zhang and Zhenjun Li and Bing Zhou and Rong-Hua Li and Guoren Wang},\nyear={2025},\nurl={https://openreview.net/forum?id=0BD2dCM4Ig}\n}"
},
"abstract": {
"value": "Graph foundation models (GFMs), inspired by the success of LLMs, are designed to learn the optimal embedding function from multi-domain text-attributed graphs (pre-training) for the downstream cross-task generalization capability (fine-tuning). During our investigation, graph vector quantized-masked autoencoder (gVQ-MAE) stands out among the increasingly diverse landscape of GFM architectures. This is attributed to its ability to jointly encode topology and textual attributes from multiple domains into discrete embedding spaces with clear semantic boundaries. Despite its potential, domain generalization conflicts cause imperceptible pitfalls. In this paper, we instantiate two of them, and they are just like two sides of the same GFM optimization coin - Side 1 Model Degradation: The encoder and codebook fail to capture the diversity of inputs (e.g., social networks and molecular graphs); Side 2 Representation Collapse: The hidden embedding and codebook vector fail to preserve semantic separability due to constraints from narrow representation subspaces. These two pitfalls (sides) collectively impair the decoder and generate the low-quality reconstructed supervision, causing the GFM optimization dilemma during pre-training (coin). Through empirical investigation, we attribute the above challenges to Information Bottleneck and Regularization Deficit. To address them, we propose MoT (Mixture-of-Tinkers) - (1) Information Tinker for Two Pitfalls, which utilizes an edge-wise semantic fusion strategy and a mixture-of-codebooks with domain-aware routing to improve information capacity. (2) Regularization Tinker for Optimization Coin, which utilizes two additional regularizations to further improve gradient supervision in our proposed Information Tinker. Notably, as a flexible architecture, MoT adheres to the scaling laws of GFM, offering a controllable model scale. Compared to SOTA baselines, experiments on 22 datasets across 6 domains demonstrate that MoT achieves significant improvements in supervised (1.4%), few-shot (3.1%), and zero-shot (3.3%) scenarios."
},
"anonymous_url": null,
"authorids": {
"value": [
"~Xunkai_Li1",
"~Daohan_Su1",
"~Sicheng_Liu2",
"~Ru_Zhang8",
"~Zhenjun_Li1",
"~Bing_Zhou3",
"~Rong-Hua_Li2",
"~Guoren_Wang1"
]
},
"authors": {
"value": [
"Xunkai Li",
"Daohan Su",
"Sicheng Liu",
"Ru Zhang",
"Zhenjun Li",
"Bing Zhou",
"Rong-Hua Li",
"Guoren Wang"
]
},
"code_of_ethics": null,
"keywords": {
"value": [
"Foundation Model; Graph Pre-training; Vector Quantization"
]
},
"no_acknowledgement_section": null,
"paperhash": {
"value": "li|two_sides_of_the_same_optimization_coin_model_degradation_and_representation_collapse_in_graph_foundation_models"
},
"pdf": {
"value": "/pdf/2abdb6e4ce624e14ff91cd45148798851f75c236.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/fedf3aa3843901c06791f68b4f82d7bafef5c686.zip"
},
"title": {
"value": "Two Sides of the Same Optimization Coin: Model Degradation and Representation Collapse in Graph Foundation Models"
},
"venue": {
"value": "ICLR 2026 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Withdrawn_Submission"
}
},
"forum": "0BD2dCM4Ig",
"id": "0BD2dCM4Ig",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/-/Withdrawn_Submission"
],
"license": "CC BY 4.0",
"mdate": 1763275020400,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission2528/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission2528/Authors"
]
}
|
|
2,026
|
0BWu7DLuIU
|
[
2,
2,
4,
2
] |
[
{
"content": "This paper considers the ethical implications of using Homomorphic Encryption (HE). Specifically, they investigate the desirable outcomes (The Good), trade-offs related to accountability, interpretability and responsibility (The Bad) and ways in which HE can be used to mask unethical practices (The Ugly). They highlight the practical limitations of using HE that limit visibility into model training and make it harder to explain predictions. Additionally, they argue that HE does not eliminate side-channels which can be used to compromise user privacy.",
"id": "4uRaYXqTeP",
"rating": 2
},
{
"content": "This paper explores the ethical dimensions of using Homomorphic Encryption (HE) in privacy-preserving machine and deep learning (PP-MDL). It organizes the discussion into three perspectives: The Good, The Bad, and The Ugly, highlighting privacy benefits, ethical trade-offs, and potential misuse scenarios.",
"id": "7d9ItZgvs5",
"rating": 2
},
{
"content": "This paper discusses the positive and negative ethical implications of homomorphic encryption (HE) for privacy-preserving ML and deep learning (PP-MDL); particularly, the goal is to explore the ethics of HE beyond plain privacy. The main message is the HE does not automatically solve all ethical issues of PP-MDL, and it incurs inherent tradeoffs between cryptographical privacy and practical ethical concerns.\n\n**Setting**: The paper considers two settings: i) \"Plain\" training and encrypted inference, where a model is trained on public data, and a user encrypts their inputs and obtains an encrypted inference result that only they can decrypt. ii) Encrypted training+inference, where the entire training data is encrypted with a single key, and the training procedure uses homomorphic encryption, yielding encrypted weights. For the sake of discussion, the paper assumes that both settings are practically feasible.\n\n**Discussion**: The paper groups its discussion into three categories: What are ethically desirable outcomes of HE (\"good\"), what are direct technical tradeoffs from using HE (\"bad\"), and what are indirect, higher-order ethical issues that result from deploying PP-DML in practice (\"ugly\").\n\n**Good (ethically desirable outcomes of HE in PP-MDL)**:\n- NB: In this context, the paper argues that preserving privacy implies being ethical.\n- HE can, under the assumptions of this paper, be applied to existing AI services to make them private. This enables inference for data which is protected (e.g., medical data) or could more generally violate contextual integrity (e.g., satellite images in a military context).\n- HE is also a building block to improve other privacy-enhancing technologies (e.g., federated learning or private information retrieval).\n\n**Bad (direct technical tradeoffs from using HE)**:\n- For HE training, a model provider cannot observe and act on the training process (e.g., does the loss converge) or evaluate the trained model. This shifts the burden to the end-user, who holds the only key.\n- HE inference and training introduces challenges to model explainability due to numerical issues or lack of (unencrypted) training statistics.\n- HE hinders misuse detection. For HE training, a model provider cannot reject unethical datasets. For HE inference, a model provider might be able to prevent unethical inference, but cannot learn that an inference request was refused.\n\n**Ugly (indirect ethical issues from deploying PP-MDL in practice)**:\n- HE does not preserve predictive privacy. That is, it still allows inferring personal information about an individual (e.g., serving targeted ads) and does hence not prevent unethical practices.\n- A model trained with HE might still contain unethical biases (e.g., being unfair). This is obfuscated via encryption, and regulatory auditing is impossible on encrypted weights.\n- Even if training and inference are fully encrypted, metadata (e.g., access patterns) might act as a side-channel that enables privacy inference.",
"id": "bbbrTNRN6L",
"rating": 4
},
{
"content": "This paper explores the ethical implications of using Homomorphic Encryption (HE) for Privacy-Preserving Machine and Deep Learning (PP-MDL). The authors structure their analysis around three perspectives: \"The Good\" (privacy benefits), \"The Bad\" (direct ethical trade-offs around accountability and transparency), and \"The Ugly\" (second-order societal implications). The paper formalizes PP-MDL services under HE in two modalities: Plain Training-Encrypted Inference (PT-EI) and Encrypted Training-Encrypted Inference (ET-EI), then discusses how HE affects various ethical dimensions including quality assurance, explainability, misuse detection, and broader privacy considerations.",
"id": "GxBSXQ4btj",
"rating": 2
},
{
"content": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.",
"id": "p1PN2STfjU",
"rating": null
}
] |
{
"cdate": 1757316151995,
"content": {
"TLDR": {
"value": "Privacy is not everything in privacy-preserving Artificial Intelligence with Homomorphic Encryption."
},
"_bibtex": {
"value": "@misc{\nfalcetta2025the,\ntitle={The Ethics of Privacy-Preserving Deep Learning: the Good, the Bad, and the Ugly},\nauthor={Alessandro Falcetta and Stefano Canali and Viola Schiaffonati and Manuel Roveri},\nyear={2025},\nurl={https://openreview.net/forum?id=0BWu7DLuIU}\n}"
},
"abstract": {
"value": "Homomorphic Encryption (HE) is gaining traction in Artificial Intelligence (AI) as a solution to privacy concerns, particularly in sensitive areas such as health-care. HE enables computation directly on encrypted data without ever decrypting it, ensuring that private information remains protected throughout the process, effectively enabling Privacy-Preserving Machine and Deep Learning (PP-MDL). While much of the discussion focuses on its privacy benefits, little attention has been paid to the ethical implications of using and training AI models on encrypted data, especially when the resulting models themselves remain encrypted and opaque to both developers and users. In this paper, we explore three ethical perspectives on the use of HE for PP-MDL: the Good, i.e., the clear advantages in terms of privacy and data protection; the Bad, i.e., the practical and conceptual ethical challenges it introduces; and the Ugly, i.e., the subtle, unexpected ethical issues that may arise when HE-powered PP-MDL is deployed in the real-world. Our aim is to show that while HE can strengthen privacy, it is not a silver bullet for ethical AI. It can complicate accountability, transparency, and trust, raising important ethical and societal questions that should not be overlooked."
},
"anonymous_url": null,
"authorids": {
"value": [
"~Alessandro_Falcetta1",
"~Stefano_Canali1",
"~Viola_Schiaffonati1",
"~Manuel_Roveri2"
]
},
"authors": {
"value": [
"Alessandro Falcetta",
"Stefano Canali",
"Viola Schiaffonati",
"Manuel Roveri"
]
},
"code_of_ethics": null,
"keywords": {
"value": [
"artificial intelligence",
"homomorphic encryption",
"ethics",
"privacy"
]
},
"no_acknowledgement_section": null,
"paperhash": {
"value": "falcetta|the_ethics_of_privacypreserving_deep_learning_the_good_the_bad_and_the_ugly"
},
"pdf": {
"value": "/pdf/205efb398f7dea966a0d6bd7b731332c2cbb6d1d.pdf"
},
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "The Ethics of Privacy-Preserving Deep Learning: the Good, the Bad, and the Ugly"
},
"venue": {
"value": "ICLR 2026 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Withdrawn_Submission"
}
},
"forum": "0BWu7DLuIU",
"id": "0BWu7DLuIU",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission3011/-/Full_Submission",
"ICLR.cc/2026/Conference/-/Withdrawn_Submission"
],
"license": "CC BY 4.0",
"mdate": 1762966891512,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission3011/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission3011/Authors"
]
}
|
|
2,026
|
0BhjNjxpaC
|
[
2,
8,
8,
2
] |
[
{
"content": "This work studies the question: \"how many 'reasoning steps' can an $L$-layer Transformer carry out in a single forward pass?\". To answer this question, the paper posits a formulation of \"reasoning chains\" as sequences of pairs of integers $a_i^1 \\to a_i^2$, which can be permuted. It then formalizes an abstract model intended to represent multi-layer Transformers by defining rules of \"information propagation\", where the information at each token in the transformer is a set $V_i^l$ that expands at each layer (i.e., $V_i^{l+1} = V_{\\mathcal{I}}^{l} \\cup V_i^{l}$). The question of how many reasoning steps a transformer can carry out is mapped to the question of how big $C_i^l = | V_i^l|$ can be. The main result of the paper is to show that this is between $2^{l-1}$ and $3^{l-1}$.",
"id": "SKjx4rjlyz",
"rating": 2
},
{
"content": "This paper studies the intrinsic single-pass reasoning depth of transformer decoders with L attention layers. Within a symbolic multi-step reasoning setup, the authors formalize “information propagation rules” (adjacent-position matching, same-token matching, residuals) and prove bounds on the maximum number of effective reasoning steps in one forward pass lies between. They give matching constructions showing a binary-tree lower bound and a ternary-tree upper bound, and provide experiments (e.g., a 3-layer model achieving perfect 3-step reasoning with sufficiently large hidden dimension) consistent with the theory.",
"id": "BgMAHkg4yh",
"rating": 8
},
{
"content": "This paper provides a theoretical analysis of the symbolic multi-step reasoning limits of a Transformer in a single forward pass. By modeling information propagation through adjacent position and same-token matching, the authors show that an $L$-layer Transformer can support between $O(2^{L-1})$ and $O(3^{L-1})$ reasoning steps. Experiments on symbolic reasoning tasks confirm these bounds, showing that a 3-layer model succeeds on up to 4-step reasoning but fails beyond that. The work offers a formal and tight characterization of single-pass reasoning capacity, extending beyond prior empirical analyses.",
"id": "j2qYC8hyRk",
"rating": 8
},
{
"content": "The paper studies the intrinsic, single-pass reasoning capacity of decoder-only transformers by introducing an idealized information-propagation model that abstracts attention and residual connections into set-based rules over “reasoning pairs.” The core task is symbolic: inputs are made of shuffled pairs (a_i^1, a_i^2) that form a reasoning chain, and the model must recover the target reached after multiple reasoning steps. Within this abstract model, the authors prove that for an L-layer system following their rules, the number of reasoning steps whose information can reach the final position is between a lower bound of 2^{L-1}-1 and an upper bound of \\frac{3^{L-1}-1}{2}, i.e. exponential in depth with bases 2 and 3 respectively. These bounds are made concrete by constructing special input permutations that attain the two extremes. They further provide a constructive mapping from the abstract rules to a hand-designed single-head masked transformer that realizes adjacent-position matching and same-token matching with specific Q/K/V choices, arguing this shows the rules are implementable in principle. Experiments on synthetic data with a 3-layer transformer show: (i) near-perfect 3-step reasoning when width is large, (ii) partial success on 4-step when the ordering is favorable, and (iii) clear degradation on 5-step, which the authors present as consistent with their theoretical limit. However, the experiments do not test the theory’s central scaling claim in L, and the symbolic abstraction is stronger than what real, trained models are guaranteed to implement.",
"id": "rOv2oMhYbU",
"rating": 2
}
] |
{
"cdate": 1757819222345,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025limit,\ntitle={Limit Analysis for Symbolic Multi-step Reasoning Tasks with Information Propagation Rules Based on Transformers},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0BhjNjxpaC},\nnote={under review}\n}"
},
"abstract": {
"value": "Transformers have ability to perform reasoning tasks, however the intrinsic mechanism remains widely open. In this paper we propose a set of information propagation rules based on Transformers and utilize symbolic reasoning tasks to theoretically analyze the limit reasoning steps. \nWe show that the number of reasoning steps has an upper bound of $O(3^{L-1})$ and a lower bound of $O(2^{L-1})$ for a model with $L$ attention layers."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"multi-step reasoning",
"parallel reasoning",
"large language models",
"interpretability",
"buffer mechanism",
"model capacity"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f02231666f3fcc43712aa327cc31312d8550481e.pdf"
},
"primary_area": {
"value": "learning theory"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Limit Analysis for Symbolic Multi-step Reasoning Tasks with Information Propagation Rules Based on Transformers"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0BhjNjxpaC",
"id": "0BhjNjxpaC",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission4954/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898003047,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission4954/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission4954/Authors"
]
}
|
|
2,026
|
0BkvUY61MX
|
[
6,
2,
8
] |
[
{
"content": "1. The paper presents scaling laws in the multilingual setup across different axis:\n\n1.1 For repeated epochs in the monolingual setup\n\n1.2 To account for cross lingual transfer in a language mixture setup\n\n1.3. Account for the curse of multilinguality by providing a closed form approximation to account for how much more data to train / how many more parameters to account for to keep the loss on a target language consistent, while adding new languages.\n\n1.4. Address the issue when is it better to pretrain a language model for a target language from scratch vs finetune a multilingual language model.\n\n2. The authors demonstrate that their scaling laws fit better for monolingual and multilingual scenarios compared to other monolingual, data constrained and multilingual scaling laws respectively. In order to account for scaling across different languages with different vocabularies, they fit scaling laws to the vocabulary insensitive loss.\n\n3. The authors also present a very large scale cross lingual transfer study. They propose a cross lingual transfer metric: the number of target language tokens taken by a bilingual model to reach the same loss as that taken by the target language monolingual model. By analysing the transfer matrix, the authors highlight the key factors associated with positive transfer: language script as well as language family. They also demonstrate that these factors additionaly seem predictive of whether a symmetric transfer might occur between the languages. \n\n4. The authors present their results over an impressive number of experiments. If the results (esp. details on number of parameters, tokens, loss convergence values, mixture weights, data subset etc.) are released, it can enable additional analysis especially w.r.t cross-lingual transfer.",
"id": "w7Hmgb6E2t",
"rating": 6
},
{
"content": "This paper studies the scaling law of multi-lingual models w.r.t. model size, data size and computation budgets.",
"id": "1ArIbUDFMQ",
"rating": 2
},
{
"content": "The paper proposes a new multilingual scaling framework that models how performance in multilingual language models scales with model size, data size, and the number of languages during pretraining and finetuning. To understand cross-lingual transfer, the work provides a large-scale empirical study quantifying the pairwise language transfer in a model-based manner. In addition, it models and quantifies the curse of multilinguality, providing scaling rules for maintaining performance as language coverage expands. Finally, the work analyzes when it is more efficient to pretrain from scratch compared with finetuning from a multilingual checkpoint.",
"id": "xlZ5IXbVL4",
"rating": 8
}
] |
{
"cdate": 1758341769702,
"content": {
"TLDR": {
"value": "Scaling laws for multilingual pretraining, finetuning, and language transfer."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025atlas,\ntitle={{ATLAS}: Adaptive Transfer Scaling Laws for Multilingual Pretraining, Finetuning, and Decoding the Curse of Multilinguality},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0BkvUY61MX},\nnote={under review}\n}"
},
"abstract": {
"value": "Scaling laws research has focused overwhelmingly on English—yet the most prominent AI models explicitly serve billions of international users. In this work, we undertake the largest multilingual scaling laws study to date, totaling 774 multilingual training experiments, spanning 10M-8B model parameters, 400+ training languages and 48 evaluation languages. We introduce the Adaptive Transfer Scaling Law (ATLAS) for both monolingual and multilingual pretraining, which outperforms existing scaling laws' out-of-sample generalization often by more than 0.3 R². Our analyses of the experiments shed light on multilingual learning dynamics, transfer properties between languages, and the curse of multilinguality. First, we derive a cross-lingual transfer matrix, empirically measuring mutual benefit scores between 38 × 38 = 1444 language pairs. Second, we derive a language-agnostic scaling law that reveals how to optimally scale model size and data when adding languages without sacrificing performance. Third, we identify the computational crossover points for when to pretrain from scratch versus finetune from multilingual checkpoints. We hope these findings provide the scientific foundation for democratizing scaling laws across languages, and enable practitioners to efficiently scale models—beyond English-first AI."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"scaling laws",
"multilinguality"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8058aba8e6566d3587e70df9f688d9784b59a1bf.pdf"
},
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "ATLAS: Adaptive Transfer Scaling Laws for Multilingual Pretraining, Finetuning, and Decoding the Curse of Multilinguality"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0BkvUY61MX",
"id": "0BkvUY61MX",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission23289/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896822547,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission23289/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission23289/Authors"
]
}
|
|
2,026
|
0CQnhxpE7w
|
[
4,
8,
4,
6
] |
[
{
"content": "The authors propose EVCtrl, a training free method for speeding up inference of ControlNet based models. The method is based on the observation that when using sparse conditioning modalities mostly consisting of black pixels, only tokens corresponding to non-black regions are critical for updating. In addition, authors identify precise steps along the denoising trajectory where large changes happen inside the model, requiring full recomputation of features to preserve performance. Evaluation on CogVideo, Flux and Wan show positive performance and better or comparable speedups with respect to the baselines on the chosen metrics.",
"id": "qtMhn9MRxT",
"rating": 4
},
{
"content": "This paper introduces EVCtrl, an innovative, training-free, plug-and-play acceleration framework that tackles the low efficiency and high computational cost of ControlNet-based controllable image/video generation. By exploiting spatial redundancy via Local-Focused Caching (LFoC) and temporal redundancy via Denoising-Step Skipping (DSS), EVCtrl achieves 2.16 times speed-up on CogVideo-ControlNet without sacrificing quality. Extensive experiments on Flux, CogVideo, and Wan2.1 under four control conditions show superior SSIM, PSNR, LPIPS, and lower latency compared with existing training-free acceleration baselines. However, performance under high-resolution, long-video, or complex control scenarios has not been examined, and comparisons with non-ControlNet controllers are missing; consequently, the generality of the approach remains insufficiently validated. Although the novelty is somewhat conservative and lacks theoretical depth, the paper is of good overall quality. Strengthening theoretical analysis and presentation in the final version is expected to further increase its impact.",
"id": "HXBU2QkAt5",
"rating": 8
},
{
"content": "This work proposes EVCtrl, an efficient control adapter mainly addressing the spatial redundancy and temporal redundancy issues prevalent in current image and video generation methods that leverage ControlNet. This method has been experimented on the ControlNet branches of multiple models, achieving impressive acceleration effects without significant visual degradation in generation quality.",
"id": "oUhADN6RlD",
"rating": 4
},
{
"content": "**Summary:** \nThis paper proposes **EVCtrl**, a lightweight, training-free control adapter designed to improve the efficiency of controllable image and video generation based on diffusion transformers (DiTs). The method addresses the significant spatial and temporal redundancies in ControlNet by introducing two components: **Local Focused Caching (LFoC)** for spatial redundancy reduction and **Denoising Step Skipping (DSS)** for temporal redundancy pruning. EVCtrl selectively recomputes tokens containing fine-grained control cues while caching redundant ones, and dynamically skips diffusion steps with high similarity between adjacent states. Experiments on multiple image and video diffusion backbones (Flux, CogVideo, Wan2.1) show 2.0×‒2.4× acceleration with minimal degradation in FID, SSIM, and LPIPS scores.",
"id": "eP7IhEuMbL",
"rating": 6
},
{
"content": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.",
"id": "50ATPmwujZ",
"rating": null
}
] |
{
"cdate": 1758260698841,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@misc{\nyang2025evctrl,\ntitle={{EVC}trl: Efficient Control Adapter for Visual Generation},\nauthor={Zixiang Yang and Yue Ma and Yinhan Zhang and Shanhui Mo and Dongrui Liu and Linfeng Zhang},\nyear={2025},\nurl={https://openreview.net/forum?id=0CQnhxpE7w}\n}"
},
"abstract": {
"value": "Visual generation includes both image and video generation, training probabilistic models to create coherent, diverse, and semantically faithful content from scratch. While early research focused on unconditional sampling, practitioners now demand controllable generation that allows precise specification of layout, pose, motion, or style. While ControlNet grants precise spatial-temporal control, its auxiliary branch markedly increases latency and introduces redundant computation in both uncontrolled regions and denoising steps, especially for video. To address this problem, we introduce EVCtrl, a lightweight, plug-and-play control adapter that slashes overhead without retraining the model. Specifically, we propose a spatio-temporal dual caching strategy for sparse control information. For spatial redundancy, we first profile how each layer of DiT-ControlNet responds to fine-grained control, then partition the network into global and local functional zones. A locality-aware cache focuses computation on the local zones that truly need the control signal, skipping the bulk of redundant computation in global regions. For temporal redundancy, we selectively omit unnecessary denoising steps to improve efficiency. Extensive experiments on CogVideo-Controlnet, Wan2.1-Controlnet, and Flux demonstrate that our method is effective in image and video control generation without the need for training. For example, it achieves 2.16 and 2.05 times speedups on CogVideo-Controlnet and Wan2.1-Controlnet, respectively, with almost no degradation in generation quality.Codes are available in the supplementary materials."
},
"anonymous_url": null,
"authorids": {
"value": [
"~Zixiang_Yang1",
"~Yue_Ma2",
"~Yinhan_Zhang1",
"~Shanhui_Mo2",
"~Dongrui_Liu1",
"~Linfeng_Zhang2"
]
},
"authors": {
"value": [
"Zixiang Yang",
"Yue Ma",
"Yinhan Zhang",
"Shanhui Mo",
"Dongrui Liu",
"Linfeng Zhang"
]
},
"code_of_ethics": null,
"keywords": {
"value": [
"Visual Generation",
"Diffusion Models",
"Control Adapter"
]
},
"no_acknowledgement_section": null,
"paperhash": {
"value": "yang|evctrl_efficient_control_adapter_for_visual_generation"
},
"pdf": {
"value": "/pdf/4e4a0c89b75c377756da0a0ce2fd4fac6083971b.pdf"
},
"primary_area": {
"value": "generative models"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/6310dd2067b10a2bd41d3015e0c97ea35f348fad.zip"
},
"title": {
"value": "EVCtrl: Efficient Control Adapter for Visual Generation"
},
"venue": {
"value": "ICLR 2026 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Withdrawn_Submission"
}
},
"forum": "0CQnhxpE7w",
"id": "0CQnhxpE7w",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission16151/-/Full_Submission",
"ICLR.cc/2026/Conference/-/Withdrawn_Submission"
],
"license": "CC BY 4.0",
"mdate": 1762943364931,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission16151/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission16151/Authors"
]
}
|
|
2,026
|
0CXjpAxHUE
|
[
8,
6,
4,
8
] |
[
{
"content": "This paper presents a theoretical framework for understanding multi-epoch data reuse in the context of linear regression and its implications for data-scaling laws in large model training. It shows that larger datasets can be repeated more times effectively. Simulation and LLM pretraining experiments confirm the theory’s predictions.",
"id": "DwPuDWrQK4",
"rating": 8
},
{
"content": "This paper presents a theoretical analysis of multi-epoch training for large-scale models, a common practice when high-quality training data is limited. The authors introduce a metric called the \"effective reuse rate,\" $E(K, N)$, to quantify how many additional \"fresh\" data samples one-pass training would need to match the performance of training for K epochs on a dataset of size N. Through a detailed analysis of Stochastic Gradient Descent (SGD) in linear regression, they demonstrate two key regimes: 1) for a small number of epochs (K), the benefit is nearly linear (E(K, N) ≈ K), meaning each pass is almost as good as seeing new data; 2) as K increases, the benefit saturates. Crucially, they prove that the saturation point itself grows with the dataset size N (e.g., logarithmically or as a power of N). This central finding—\"larger datasets can be repeated more\"—challenges previous empirical work that suggested the reuse rate was independent of N. The authors validate this theoretical insight with both synthetic data simulations and pre-training experiments on a large language model.",
"id": "xIVqKegPGT",
"rating": 6
},
{
"content": "This paper challenges prior work that assumed the effective reuse rate of data is independent of dataset size. Through rigorous theoretical analysis in linear regression, the authors demonstrate that larger datasets can be trained for more epochs before experiencing diminishing returns. Specifically, they show that the effective reuse rate E(K,N) depends not only on the number of epochs K, but critically on the dataset size N, which is a factor overlooked in previous empirical scaling laws.",
"id": "SzJw0akHp1",
"rating": 4
},
{
"content": "This paper studies the question: how large of a dataset is required for one-pass training to match the loss of a dataset of size N trained for K epochs?\n\nThey theoretically characterize the scaling behavior for SGD in linear regression in two settings: strong convexity and Zipf-distributed data. In each settings, there are two phases, one phase where K is small and data can be repeated without harm to the performance, and one where K is large and reused data plateaus in usefulness. The point where this phase transition occurs depends on the setting (strongly convex vs Zipf-distributed data) and the data distribution.\n\nIn contrast to recent empirical work, their analysis supports a functional form where the number of times you can repeat the dataset grows with the size of the dataset. In other words, the practical takeaway is that larger datasets can be repeated more.\n\nThey perform LLM pretraining experiments where they take different size datasets, train them for 100 epochs, extract the loss after varying numbers of epochs, and compare to a 200B dataset trained for one epoch. The experiments validate the small K regime where data reuse doesn't hurt performance significantly, and that the larger datasets can be repeated more.",
"id": "O7JXCqHYeE",
"rating": 8
}
] |
{
"cdate": 1758246906466,
"content": {
"TLDR": {
"value": "Theoretical analysis of multi-epoch scaling in linear regression"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025larger,\ntitle={Larger Datasets Can Be Repeated More: A Theoretical Analysis of Multi-Epoch Scaling in Linear Regression},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0CXjpAxHUE},\nnote={under review}\n}"
},
"abstract": {
"value": "Large Language Model (LLM) training often processes vast text corpora in a single pass, leaving much available data underutilized. This paper presents a theoretical analysis of how a common workaround, training for multiple epochs on the same dataset, reshapes the data scaling laws. Concretely, given a $K$-epoch training on $N$ samples, how many fresh samples would one-pass training require to match the same performance? We quantify this using the \\textit{effective reuse rate} of the data, $E(K, N)$, which we define as the factor by which the dataset must grow under one-pass training to match the test loss of multi-epoch training. Our analysis precisely characterizes the scaling behavior of $E(K, N)$ for SGD in linear regression under either strong convexity or Zipf-distributed data: (1) When $K$ is small, we prove that $E(K, N) \\approx K$, indicating that every new epoch yields a linear gain; (2) As $K$ increases, $E(K, N)$ plateaus at a problem-dependent value that grows with $N$ ($\\Theta(\\log N)$ for the strongly-convex case), implying that larger datasets can be repeated more times before the marginal benefit vanishes. These theoretical findings complement a recent empirical study by [Muennighoff et al. (2023)](https://arxiv.org/abs/2305.16264), which found that training LLMs for up to $4$ epochs results in negligible loss differences compared to using fresh data at each step, \\textit{i.e.}, $E(K, N) \\approx K$ for $K \\le 4$ in our notation. \n Supported by further empirical validation with LLMs, our results reveal how this behavior depends on the underlying data size and distribution, and underscore the need to explicitly model both factors in future studies of scaling laws with data reuse."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Deep learning theory",
"Multi-epoch training",
"Data-reuse",
"Optimization",
"Scaling law",
"Large language model"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/51ee840632db57dc0e2199f27424cf25620fc996.pdf"
},
"primary_area": {
"value": "learning theory"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Larger Datasets Can Be Repeated More: A Theoretical Analysis of Multi-Epoch Scaling in Linear Regression"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0CXjpAxHUE",
"id": "0CXjpAxHUE",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission15017/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897335276,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission15017/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission15017/Authors"
]
}
|
|
2,026
|
0CZAimzcVr
|
[
6,
6,
6,
6
] |
[
{
"content": "The paper studies a quite general optimization problem, namely maximizing a function F on [0,1]^n under some contraints defining a feasible set $\\cal C \\in [0,1]^n$. The function is DR-submodular. The feasible set is convex. Such a problem is typically solved using gradient ascend, and there is a huge literature on these techniques.\n\nThere are 3 contributions.\n\n1. Extending the Lyapunov framework, to allow gradients to be imprecise, with a bias and a noise.\n2. Proposition an 1/e approximation for non-monotone DR-submodular maximization over a convex set. The novelty is the assumption on a largest feasible point. This overcomes a 1/4 upper bound on the general optimization problem.\n3. Providing a quantum algorithm, based on the improved quantum Jordan algorithm from 2023, which has the same performance guarantees as its classical counterpart but improves in cubic iteration time.\n\nSome performance guarantees are proven, and experiments are conducted with standard benchmarks.\n\nMy background is too weak in this area to judge the results.",
"id": "1wohvWDEQJ",
"rating": 6
},
{
"content": "This paper studies continuous DR-submodular maximization under stochastic biased gradient oracles. It extends Du's Lyapunov framework to handle bias and variance in gradient estimators. Based on this, it provides approximation algorithms under three constraint classes: general convex, down-closed convex, and convex sets with a largest element. The paper develops zeroth-order algorithms, where the classical version achieves $O(\\epsilon^{-3})$ while the quantum version achieves $O(\\epsilon^{-1})$ iteration complexity, matching the performance of classical first-order methods.",
"id": "NcKE4SRqr7",
"rating": 6
},
{
"content": "This paper investigates the problem of continuous DR-submodular maximization under the practically relevant yet theoretically challenging setting of stochastic biased gradients. The authors extend the Lyapunov framework, traditionally developed for exact or unbiased stochastic gradients, to handle gradient estimators that contain both bias and noise, thereby characterizing their effects on convergence and approximation guarantees. They further introduce a new class of constraints, namely convex sets with a largest element, that naturally arise in resource allocation and similar applications. Under this setting, the paper proposes a $1/e$-approximation algorithm for non-monotone DR-submodular maximization, which surpasses the known $1/4$ hardness bound for general convex sets. Building upon this framework, the authors design both classical and quantum zero-order algorithms, showing that the quantum version achieves the same approximation ratio with only $O(\\varepsilon^{-1})$ iteration complexity, demonstrating a quantum acceleration compared with classical zero-order methods that require $O(\\varepsilon^{-3})$. Numerical experiments on quadratic and coverage-type DR-submodular functions validate the theoretical results, showing that quantum algorithms converge faster and achieve comparable solution quality to classical first-order methods. Overall, the paper provides a unified theoretical and algorithmic treatment of DR-submodular maximization in the presence of biased gradients and connects classical optimization with emerging quantum techniques.",
"id": "t5aP3DN3Kh",
"rating": 6
},
{
"content": "The authors consider the problem of DR-submodular maximization. They consider a few combinations of monotone/non-monotone functions over the hypercube and classes of convex constraints (general, down-closed, largest element). The authors first propose an extension of a Lyapunov framework from exact to stochastic and biased gradients. The authors consider a new constraint setting (largest element) for the non-monotone setting and obtain an improved approximation ratio (over using a general convex region based method). The authors also show significant improvements in complexity for the value oracle setting using a quantum algorithm for gradient estimation. Lastly, the authors run several experiments to demonstrate the improvements of the quantum based method.",
"id": "8QnjozI3NS",
"rating": 6
}
] |
{
"cdate": 1758106456114,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025drsubmodular,\ntitle={{DR}-Submodular Maximization with Stochastic Biased Gradients: Classical and Quantum Gradient Algorithms},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0CZAimzcVr},\nnote={under review}\n}"
},
"abstract": {
"value": "In this work, we investigate DR-submodular maximization using stochastic biased gradients, which is a more realistic but challenging setting than stochastic unbiased gradients. We first generalize the Lyapunov framework to incorporate biased stochastic gradients, characterizing the adverse impacts of bias and noise. Leveraging this framework, we consider not only conventional constraints but also a novel constraint class: convex sets with a largest element, which naturally arises in applications such as resource allocations. For this constraint, we propose an $1/e$ approximation algorithm for non-monotone DR-submodular maximization, surpassing the hardness result $1/4$ for general convex constraints. As a direct application of stochastic biased gradients, we consider zero-order DR-submodular maximization and introduce both classical and quantum gradient estimation algorithms. In each constraint we consider, while retaining the same approximation ratio, the iteration complexity of our classical zero-order algorithms is $O(\\epsilon^{-3})$, matching that of stochastic unbiased gradients; our quantum zero-order algorithms reach $O(\\epsilon^{-1})$ iteration complexity, on par with classical first-order algorithms, demonstrating quantum acceleration and validated in numerical experiments."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"DR-submodular Maximization",
"Stochastic Biased Gradients",
"Zero-Order Optimization",
"Quantum Gradient Estimation",
"Approximation Algorithms"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a13ab47d1b8bef974ca2dc3482b61e402ac773a8.pdf"
},
"primary_area": {
"value": "optimization"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/7bc45e059cf0db278334218ed862e2f62c1f0345.pdf"
},
"title": {
"value": "DR-Submodular Maximization with Stochastic Biased Gradients: Classical and Quantum Gradient Algorithms"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0CZAimzcVr",
"id": "0CZAimzcVr",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission8996/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897749083,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission8996/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission8996/Authors"
]
}
|
|
2,026
|
0CajQNVKyB
|
[
8,
6,
6,
4
] |
[
{
"content": "This paper introduces HERO (Hybrid Ensemble Reward Optimization), a framework that integrates dense signals from reward models with binary-valued feedback from rule-based verifiers. The paper systematically reports the merits of each individual approach while highlighting its limitations; the authors further caution against naively combining the two approaches. The proposed solution uses a 'stratified' strategy to leverage the expressiveness of reward models within structures imposed by the verifier. Experimental evaluations across a wide range of regimes in math reasoning tasks demonstrate that HERO performs significantly better than reward-model-only or verifier-only baselines.",
"id": "GGioH3qrWw",
"rating": 8
},
{
"content": "This paper proposes HERO. The core idea is to combine sparse but reliable verifier-based binary rewards with dense but noisy reward-model scores. HERO introduces stratified normalization, which anchors RM scores within verifier-defined correctness groups to preserve semantic correctness, and variance-aware reweighting, which emphasizes prompts with higher score variance. Experiments on multiple math reasoning benchmarks (e.g., MATH500, AMC, Olympiad, HardVerify-Math, and TextBookReasoning) show that HERO outperforms both verifier-only and RM-only baselines across verifiable, hard-to-verify, and mixed settings.",
"id": "2Maq7Aylqs",
"rating": 6
},
{
"content": "This paper proposes a hybrid reward to integrate reward model and binary verifier scores. The hybrid reward is shown to be effective on multiple models across multiple tasks including verifiable and hard-to-verify ones.",
"id": "qfnF8MIDBt",
"rating": 6
},
{
"content": "The paper proposes HERO, a reinforcement learning framework designed to enhance reasoning capabilities in large language models (LLMs) by integrating sparse, binary verifiable rewards with dense, continuous scores from reward models. The core innovation lies in addressing the brittleness of binary verifiers and the misalignment of reward models through two mechanisms: stratified normalization, which bounds reward model scores within verifier-defined correctness groups, and variance-aware weighting, which emphasizes challenging prompts with high score variance. Built on GRPO, HERO aims to provide stable, informative supervision for mathematical reasoning tasks. Empirical results on verifiable benchmarks and hard-to-verify ones demonstrate that HERO outperforms baselines like RM-only RL and verifier-only methods.",
"id": "zYNaYaWIZg",
"rating": 4
}
] |
{
"cdate": 1758300838198,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025hybrid,\ntitle={Hybrid Reinforcement: when reward is sparse, better to be dense},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0CajQNVKyB},\nnote={under review}\n}"
},
"abstract": {
"value": "Post-training for reasoning of Large language models (LLMs) increasingly rely on verifiable rewards: deterministic checkers that provide 0–1 correctness signals. While reliable, such binary feedback is brittle—many tasks admit partially correct or alternative answers that verifiers under-credit, and the resulting all-or-nothing supervision limits learning. Reward models offer richer, continuous feedback, which can serve as a complementary supervisory signal to verifiers. We introduce HERO (Hybrid Ensemble Reward Optimization), a reinforcement learning framework that integrates verifier signals with reward-model scores in a structured way. HERO employs stratified normalization to bound reward-model scores within verifier-defined groups, preserving correctness while refining quality distinctions, and variance-aware weighting to emphasize challenging prompts where dense signals matter most. Across diverse mathematical reasoning benchmarks, HERO consistently outperforms RM-only and verifier-only baselines, with strong gains on both verifiable and hard-to-verify tasks. Our results show that hybrid reward design retains the stability of verifiers while leveraging the nuance of reward models to advance reasoning."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Hybrid rewards for reinforcement learning"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1b9434a429cc2bf09028a5a7aefb7e84e5534b50.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Hybrid Reinforcement: when reward is sparse, better to be dense"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0CajQNVKyB",
"id": "0CajQNVKyB",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission19944/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897011255,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission19944/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission19944/Authors"
]
}
|
|
2,026
|
0Cv0whP7l8
|
[
6,
4,
4,
6
] |
[
{
"content": "This paper introduces a framework to diagnose and mitigate modality interference in multimodal large language models (MLLMs)—a phenomenon where irrelevant or misleading modality signals degrade model performance. The authors define the broader cross-modality competency problem, identifying modality interference as a concrete instance. They propose (1) a perturbation-based causal evaluation that quantifies interference by injecting irrelevant signals into one modality, and (2) a fine-tuning strategy combining heuristic and adversarial perturbations with a consistency regularization objective. Experiments on image-heavy, text-heavy, and multimodal tasks (Mini-ImageNet, Caltech-101, OpenBookQA, MMLU, ScienceQA, MM-Bench, Seed-Bench) demonstrate significant robustness gains and improved unimodal reasoning without harming multimodal performance. The paper is technically solid and the framing is clear, though the conceptual advance is moderate.",
"id": "cGiQOwhszv",
"rating": 6
},
{
"content": "The paper proposes the cross-modality competency problem where a multimodal large language model (MLLM) fails to evaluate all modalities. A concrete example of this problem is modality interference, where MLLMs often use spurious information from irrelevant modalities when they are expected to rely solely on one-modality data. As a result, MLLMs often underperform on pure visual recognition and textual reasoning. The paper designs a perturbation-based experiment to verify and quantify this problem. Then, it proposes to fine-tune MLLMs with perturbation-based data augmentations. Experiments on image-heavy, text-heavy and multimodal tasks and multiple model families verify the effectiveness of the proposed approach in boosting unimodal reasoning ability while enhancing performance on multimodal tasks.",
"id": "lXgoikdmFC",
"rating": 4
},
{
"content": "The paper first defines the Cross-Modality Competency Problem, where existing Multimodal Large Language Models (MLLMs) are susceptible to misleading inputs, especially in modality-specific tasks, such as image classification or pure text question answering, where models are expected to rely solely on one modality. The paper first benchmarks this across a range of models using a perturbation-based causal diagnostic setup. Perturbations include, for image-heavy tasks, unrelated scientific facts and misleading descriptions, and, for text-heavy tasks, a real but irrelevant image or a full black/white canvas image. Next, to improve upon this shortcoming, a novel framework to finetune MLLMs is presented, which includes adversarial losses and a consistency regularizer strategy at the input and representation level. Experiments on multiple datasets and models demonstrate significant improvements in robustness and cross-modality competency, indicating the method’s effectiveness in boosting unimodal reasoning ability while enhancing performance on multimodal tasks.",
"id": "7G719w3zFI",
"rating": 4
},
{
"content": "The paper investigates modality interference in MLLMs, particularly in tasks like Visual Question Answering. It defines modality interference as the negative impact of irrelevant modalities on unimodal tasks and quantifies this issue through perturbation-based causal diagnostics.\n\nTo mitigate this interference, the authors introduce a new fine-tuning framework that incorporates data augmentation and consistency regularization strategies to improve model stability across different inputs. Experimental results demonstrate significant enhancements in robustness and overall performance.",
"id": "ZKXAuzxiMM",
"rating": 6
}
] |
{
"cdate": 1758184011136,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025diagnosing,\ntitle={Diagnosing and Mitigating Modality Interference in Multimodal Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0Cv0whP7l8},\nnote={under review}\n}"
},
"abstract": {
"value": "Multimodal Large Language Models have demonstrated impressive capabilities across tasks, yet they often exhibit difficulty in distinguishing task-relevant from irrelevant signals—particularly in tasks like Visual Question Answering—which can lead to susceptibility to misleading or spurious inputs. We refer to this broader limitation as the Cross-Modality Competency Problem—the model’s inability to fairly evaluate all modalities. This vulnerability becomes more evident in modality-specific tasks—such as image classification or pure text question answering—where models are expected to rely solely on one modality. In such tasks, spurious information from irrelevant modalities often leads to significant performance degradation. We refer to this failure as Modality Interference, which serves as a concrete and measurable instance of the cross-modality competency problem, and we further design a perturbation-based causal diagnostic experiment to verify and quantify this problem. To mitigate modality interference, we propose a novel framework to finetune MLLMs, including perturbation-based data augmentations with both heuristic perturbations and adversarial perturbations, and a consistency regularization strategy applying on model outputs with original and perturbed inputs. Experiments on multiple benchmark datasets (image-heavy, text-heavy and multimodal tasks) and multiple model families with different scales demonstrate significant improvements in robustness and cross-modality competency, indicating our method's effectiveness in boosting unimodal reasoning ability while enhancing performance on multimodal tasks."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Multimodal Large Language Models",
"Modality Interference",
"Causal Intervention"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7a652a4137746f54b609bc9a256c5f5c918f2c97.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/0c5cfb17d948f2eba0a86516a122f3174a871ac7.zip"
},
"title": {
"value": "Diagnosing and Mitigating Modality Interference in Multimodal Large Language Models"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0Cv0whP7l8",
"id": "0Cv0whP7l8",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission10880/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897622846,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission10880/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission10880/Authors"
]
}
|
|
2,026
|
0Cv9PwL7cI
|
[
8,
4,
4,
6
] |
[
{
"content": "This paper investigates the limitations of fixed block-size semi-autoregressive decoding in diffusion-based large language models (dLLMs). The authors identify two inefficiencies — Late Decoding Overhead and Premature Decoding Error — that arise when fixed-size blocks fail to align with semantic structure during decoding. To address this, they propose AdaBlock-dLLM, a training-free, plug-and-play adaptive scheduler that dynamically adjusts block size according to semantic cues and confidence scores during inference. The method leverages a novel concept called the Volatility Band (VB) — regions of fluctuating confidence that correspond to evolving semantic steps — to determine when to expand or contract blocks.",
"id": "ihVGh2vEjb",
"rating": 8
},
{
"content": "This paper introduces a method to adaptively select the left and right boundaries of the block in the semi-AR setting for dLLMs. The main idea comes from an analysis of the confidence dynamics: inside a volatility band region, the confidence fluctuates dynamically, and the VB regions exhibit semantic structure. The authors then propose to collect indices whose predicted tokens fall in the delimiter set to determine the block size. Experiments show that the performance exceeds that of DualCache.",
"id": "7v1A9hwxs3",
"rating": 4
},
{
"content": "This paper identifies that fixed block sizes in semi-autoregressive (semi-AR) dLLM decoding cause \"late decoding overhead\" and \"premature decoding error\". It proposes AdaBlock-dLLM, a training-free scheduler that adaptively aligns block boundaries with semantic steps based on delimiter token confidence. This method improves accuracy by up to 5.3% without sacrificing throughput.",
"id": "sQUlurkjnd",
"rating": 4
},
{
"content": "This paper presents AdaBlock-dLLM, a semantic-aware adaptive block-size decoding method for diffusion-based large language models (dLLMs). By analyzing confidence dynamics during denoising, the authors propose AdaBlock-dLLM, which is a training-free, plug-and-play scheduler that dynamically adjusts block boundaries according to semantic step length. Extensive experiments on benchmarks show that the proposed approach maintains compatibility with caching mechanisms and improves both efficiency and semantic consistency in diffusion LLM inference.",
"id": "MIp4G97tqX",
"rating": 6
}
] |
{
"cdate": 1756733104796,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025semanticaware,\ntitle={Semantic-Aware Diffusion {LLM} Inference With Adaptive Block Size},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0Cv9PwL7cI},\nnote={under review}\n}"
},
"abstract": {
"value": "Diffusion-based large language models (dLLMs) are gaining attention for their inherent capacity for parallel decoding, offering a compelling alternative to autoregressive LLMs. Among various decoding strategies, blockwise semi-autoregressive (semi-AR) approaches are widely adopted due to their natural support for KV caching and their favorable accuracy–speed trade-off. However, this paper identifies two fundamental limitations in the conventional semi-AR decoding approach that applies a fixed block size: i) late decoding overhead, where the unmasking of high-confidence tokens outside the current block is unnecessarily delayed, and ii) premature decoding error, where low-confidence tokens inside the current block are committed too early, leading to incorrect tokens. This paper presents the first systematic investigation challenging the fixed block size assumption in semi-AR decoding. Through a statistical analysis of confidence dynamics during the denoising process, we identify a volatility band (VB) region during dLLM decoding, which encodes local semantic structure and can be used to guide adaptive block sizing. Leveraging these insights, we introduce AdaBlock-dLLM, a training-free, plug-and-play scheduler that adaptively aligns block boundaries with semantic steps by adjusting block size during runtime. Extensive experiments across diverse benchmarks show that AdaBlock-dLLM achieves up to 5.3% accuracy improvement under the same throughput budget. Beyond inference-time optimization, we hope our semantics-aware adaptive scheduling approach and confidence-based analysis will inspire future training strategies for dLLMs."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Diffusion Large Language Models",
"Non-Autoregressive Decoding"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3d28ee2d83cf87267c69760ca94c762c93cbe5fa.pdf"
},
"primary_area": {
"value": "generative models"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/9f1d83373188a93be3c31dd3a48db5b6b39307e5.zip"
},
"title": {
"value": "Semantic-Aware Diffusion LLM Inference With Adaptive Block Size"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0Cv9PwL7cI",
"id": "0Cv9PwL7cI",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission272/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898269585,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission272/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission272/Authors"
]
}
|
|
2,026
|
0DaB4jeGaf
|
[
4,
6,
4
] |
[
{
"content": "This paper proposes a quantile regression framework, where a ReLU neural network is used to approximate the quantile function, and a convolution-type smooth quantile loss is used to train the network. Experimental results on synthetic data show that the proposed framework outperforms ReLU networks trained with the standard quantile loss in terms of both MSE evaluated at different quantile levels and runtime.",
"id": "iD5O1BaAuX",
"rating": 4
},
{
"content": "**Summary**\n\nThis paper extends the convolution-smoothed quantile regression (Conquer) framework to deep neural networks. \nIt replaces the non-differentiable pinball loss with a kernel-convolved, smooth surrogate, allowing gradient-based optimization while preserving quantile consistency.\n\nMain contributions:\n- Defines the **ConquerNN estimator**, minimizing a smoothed quantile loss over ReLU networks.\n- Provides **non-asymptotic risk bounds** and proves **minimax-rate optimality** over Besov spaces (up to logarithmic factors).\n- Derives a **generalization bound** depending on architecture parameters (depth, width, sparsity).\n- Presents **synthetic experiments** under heavy-tailed noise showing improved MSE and training stability vs. standard pinball loss networks.",
"id": "PYrmLbYe8T",
"rating": 6
},
{
"content": "This paper analyzes the integration of the conquer estimator in quantile regression with neural networks. The theoretical results mainly show that a ReLU-activated neural network that minimizes the conquer estimator performs optimally in $L^2$ norm, up to a logarithmic factor. An error bound is also established for a general neural network. Empirical studies demonstrate the benefits of applying the conquer estimator in neural networks.",
"id": "KeErNi9G94",
"rating": 4
}
] |
{
"cdate": 1758039348094,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025conquer,\ntitle={Conquer the Quantile: Convolution-Smoothed Quantile Regression with Neural Networks and Minimax Guarantees},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0DaB4jeGaf},\nnote={under review}\n}"
},
"abstract": {
"value": "Quantile regression provides a flexible approach to modeling heterogeneous effects and tail behaviors. This paper introduces the first quantile neural network estimator built upon the \\textbf{con}volution-type smoothing \\textbf{qu}antil\\textbf{e} \\textbf{r}egression (known as \\textit{conquer}) framework, which preserves both convexity and differentiability while retaining the robustness of the quantile loss. Extending the conquer estimator beyond linear models, we develop a nonparametric deep learning framework and establish sharp statistical guarantees. Specifically, we show that our estimator attains the minimax convergence rate over Besov spaces up to a logarithmic factor, matching the fundamental limits of nonparametric quantile estimation, and further derive general upper bounds for the estimation error in more general function classes. Empirical studies demonstrate that our method consistently outperforms existing quantile networks in both estimating accuracy and computational efficiency, underscoring the benefits of incorporating conquer into deep quantile learning."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"quantile regression",
"minimax rate",
"convolution",
"deep learning theory",
"Besov space"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/6472a64f207cbb489dffe7fe14f19ce1bb6da913.pdf"
},
"primary_area": {
"value": "learning theory"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Conquer the Quantile: Convolution-Smoothed Quantile Regression with Neural Networks and Minimax Guarantees"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0DaB4jeGaf",
"id": "0DaB4jeGaf",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission7858/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897826972,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission7858/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission7858/Authors"
]
}
|
|
2,026
|
0DekoBl3te
|
[
8,
2,
4,
2
] |
[
{
"content": "This paper proposes Dual-MoE, a dual-path mixture-of-experts framework for multivariate time series forecasting that jointly models temporal distribution shifts and noisy channel dependencies. The model consists of two complementary components: the Temporal Fusion MoE and the Channel Fusion MoE. Additionally, a Mask Loss mechanism is introduced to enhance robustness. Extensive experiments conducted on ten real-world datasets demonstrate that the proposed method outperforms existing approaches.",
"id": "RsDwEJdjBM",
"rating": 8
},
{
"content": "This paper proposes Dual-MoE, a dual mixture-of-experts framework for multivariate time series forecasting. It integrates a Temporal-fusion MoE for adaptive long–short term pattern modeling and a Channel-fusion MoE for inter-variable dependency learning. A quantile-based Mask Loss enhances robustness to noise. The framework is modular, flexible, and demonstrates consistent improvements across real-world datasets.",
"id": "80ACZmjGE9",
"rating": 2
},
{
"content": "This paper introduces Dual-MoE, a novel and highly complex framework for multivariate time series forecasting. The work is motivated by two challenges: 1) Temporal Distribution Shift, where a fixed-length lookback window struggles to balance long-term trends and short-term shocks, and 2) Noisy Channel Dependencies, where models either over-rely on all channel correlations (like Transformers) or ignore them completely (like channel-independent models). The authors address these challenges with Dual-MoE, which combines two different mixture-of-experts modules and a quantile-based loss function.",
"id": "91XPN8sUjb",
"rating": 4
},
{
"content": "The manuscript introduces Dual-MoE, a dual mixture-of-experts (MoE) framework for multivariate time series forecasting (MTSF). The model integrates two complementary modules: (1) a Temporal Fusion MoE that dynamically balances long- and short-term temporal dependencies via exponential moving average (EMA) and multi-scale lookback windows, and (2) a Channel Fusion MoE that models inter-variable relationships using a learnable probability matrix based on frequency-domain similarities. A quantile-based Mask Loss Function further enhances robustness to noisy or unpredictable channels.",
"id": "LJZU9Yb0RV",
"rating": 2
}
] |
{
"cdate": 1758115541458,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025dualmoe,\ntitle={Dual-MoE: Learning Time and Channel Dependencies via Dual Mixture-of-Experts for Time Series Forecasting},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0DekoBl3te},\nnote={under review}\n}"
},
"abstract": {
"value": "Multivariate time series forecasting holds significant value in finance, energy, and transportation systems, yet faces critical challenges in jointly modeling temporal heterogeneity and dynamic channel dependencies. Existing approaches exhibit limitations in balancing long-term trends with short-term fluctuations, while struggling to capture time-varying inter-variable relationships. This paper proposes Dual-MoE, a dual mixture-of-experts framework that synergistically integrates temporal and channel modeling. The temporal expert dynamically combines multi-scale historical features (e.g., hourly details and weekly patterns) through adaptive gating mechanisms, whereas the channel expert learns dependency weights between variables via frequency-aware interaction modeling. Extensive experiments on real-world datasets demonstrate Dual-MoE's superior forecasting accuracy and robustness compared to state-of-the-art baselines. Its modular architecture provides a flexible and scalable paradigm for complex temporal dependency modeling, paving the way for further advancements in time series analysis. Code is available in Appendix."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Time Series Forecasting"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5d91d2446dc98942c3a8851d632fb64f3af14172.pdf"
},
"primary_area": {
"value": "learning on time series and dynamical systems"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/5f14436e934acc36691aa0e8867c141ea4f6105d.zip"
},
"title": {
"value": "Dual-MoE: Learning Time and Channel Dependencies via Dual Mixture-of-Experts for Time Series Forecasting"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0DekoBl3te",
"id": "0DekoBl3te",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission9221/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897737118,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission9221/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission9221/Authors"
]
}
|
|
2,026
|
0Dhpt9aY3n
|
[
6,
8,
2,
6
] |
[
{
"content": "This paper introduces DeepSynth, a very challenging benchmark for evaluating LLM agents. DeepSynth consists of 120 diverse tasks created by 16 experts, where each task requires an agent to navigate through about 4 web pages and read up to 15 documents and tables. The tasks are designed to reflect real-world analysis demand, and cover knowledge across 42 countries. The authors evaluated 5 state-of-the-art models (o4-mini, GPT-4.1, GPT-5, Gemini-2.5-Pro and DeepSeek-R1), as well as 3 state-of-the-art agentic frameworks (o3-deep-research, smolagents and OWL) on DeepSynth. It is observed that this benchmark is challenging for all these models and frameworks, with the best F1 score being only 8.97 points. The authors also conducted a bunch of ablation studies to understand the challenges in DeepSynth. They found models perform worse as the number of intermediate steps increases, and most failures are caused by either navigation error or synthesis error, which suggests future directions for developing LLM agents.",
"id": "oaj7MjU1m5",
"rating": 6
},
{
"content": "This paper proposes DEEPSYNTH, a high-quality, well-designed benchmark comprising 120 challenging and diverse information synthesis tasks across 42 countries and 7 domains. The authors clearly articulate the motivation, methodology, data collection and curation process, followed by a detailed analysis of the dataset characteristics. The paper presents comprehensive experiments evaluating popular SOTA models and specialized agent frameworks, with detailed performance analysis revealing significant limitations in current systems.\n\nOverall, this is a solid paper with rigorous benchmark design and high-quality implementation. The presentation effectively explains the design choices and underlying rationale. The work addresses an important gap in agent evaluation by focusing on information synthesis rather than simple fact retrieval. Recommend accept.",
"id": "jXNSIDbyYm",
"rating": 8
},
{
"content": "The paper presents DEEPSYNTH, a benchmark of 120 tasks for evaluating LLM agents on multi-source information synthesis across 7 domains and 42 countries. It uses a multi-stage expert-driven pipeline for task creation and shows SOTA agents (e.g., GPT-5, o3-deep-research) achieve low F1 scores (max 8.97), exposing gaps in reasoning and tool use.",
"id": "T2V1eup4Jm",
"rating": 2
},
{
"content": "This paper introduces DeepSynth, a benchmark that evaluates agents on realistic and time-consuming problems. DeepSynth covers 120 tasks across 7 domains, with data sources covering 42 countries. Each task requires agents to navigate ~4.2 web pages and read between 1 and 15 context documents. The benchmark proves challenging even for state-of-the-art models like GPT-5 and Gemini-2.5-Pro and agentic systems like o3-deep-research and smolagents, showing DeepSynth can serve as an arena for evaluating the capabilities of agentic systems.",
"id": "xCk0gwFcS8",
"rating": 6
}
] |
{
"cdate": 1758304923126,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025a,\ntitle={A Benchmark for Deep Information Synthesis},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0Dhpt9aY3n},\nnote={under review}\n}"
},
"abstract": {
"value": "Large language model (LLM)-based agents are increasingly used to solve complex tasks involving tool use, such as web browsing, code execution, and data analysis. However, current evaluation benchmarks do not adequately assess their ability to solve real-world tasks that require synthesizing information from multiple sources and inferring insights beyond simple fact retrieval. To address this, we introduce DEEPSYNTH, a novel benchmark designed to evaluate agents on realistic, time-consuming problems that combine information gathering, synthesis, and structured reasoning to produce insights. DEEPSYNTH contains 120 tasks collected across 7 domains and data sources covering 42 countries. DEEPSYNTH is constructed using a multi-stage data collection pipeline that requires annotators to collect official data sources, create hypotheses, perform manual analysis and design tasks with verifiable answers. When evaluated on DEEPSYNTH, 9 state-of-the-art LLMs and deep research agents achieve a maximum F1 score of 8.97. Our analysis reveals that current agents struggle with hallucinations and reasoning over large information spaces, highlighting DEEPSYNTH as a crucial benchmark for guiding future research."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Benchmark",
"Deep Information Synthesis",
"LLM agents",
"Deep Research",
"AI agents"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/6b0392289d2433b7468aed713a9794645dc21cad.pdf"
},
"primary_area": {
"value": "datasets and benchmarks"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "A Benchmark for Deep Information Synthesis"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0Dhpt9aY3n",
"id": "0Dhpt9aY3n",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission20343/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896982619,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission20343/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission20343/Authors"
]
}
|
|
2,026
|
0DqB1vxGTn
|
[
2,
4,
2,
6
] |
[
{
"content": "The paper proposes a method to train a model that predicts a depth map from a single image at metric scale. Real-world camera heights are assumed to be known during training and are used to recover metric depths, which are then used as pseudo ground-truth depth labels to supervise another student network. As a result, the method predicts depth at metric scale and achieves performance comparable to approaches that use LiDAR depth to recover the true depth scale.",
"id": "WPBpTvLiJF",
"rating": 2
},
{
"content": "This paper addresses scale ambiguity and under-utilized camera physical information in self-supervised monocular depth estimation. It proposes a framework that uses physics-derived depth priors—computed from camera intrinsics/extrinsics and semantic cues—to provide metric-scale supervision. A partial \"physics depth\" map (e.g., ground, aligned structures) is calculated via camera projection and filled to form a dense prior, offering absolute scale (unlike conventional relative scale). This prior is integrated into a two-network scheme: an Anchor (teacher) network trained with physics depth as pseudo-labels, and a Target (student) network updated via exponential moving average, learning from photometric reconstruction and a novel contrastive loss. Pixels are classified by prediction entropy (low-entropy pixels are reliable, high-entropy pixels are unreliable): reliable pixels supervise the Target, while unreliable ones act as negative samples for the contrastive loss. The method outperforms state-of-the-art models on KITTI (no external LiDAR), generalizes to Cityscapes (finetuning/training from scratch), and transfers zero-shot to Make3D. Key contributions include camera physics-based supervision, scale ambiguity resolution, entropy-driven contrastive learning, and calibration correction for reliable physics depth.",
"id": "Lt344yzt2i",
"rating": 4
},
{
"content": "This paper is about utilizing camera physical model parameters to calculate scene depth, which the authors call \"physics depth,\" to provide supervisory signals for depth estimation. They introduce a contrastive learning self-supervised framework designed to integrate this physics depth supervision, aiming to provide an absolute scale and enhance accuracy even when the generated priors are noisy.",
"id": "PtvVUCgBPM",
"rating": 2
},
{
"content": "This paper proposes to enhance self-supervised monocular depth estimation by introducing a physics-based depth prior computed from known camera parameters (intrinsics and extrinsics). The authors claim this prior provides absolute scale and can be integrated into a contrastive self-supervised training framework without relying on LiDAR. Experiments on KITTI, Cityscapes, and Make3D show improved metrics compared to existing self-supervised baselines.",
"id": "OdJjtSEKL6",
"rating": 6
},
{
"content": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.",
"id": "IumAdKCoLa",
"rating": null
}
] |
{
"cdate": 1757487648325,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@misc{\nzhang2025enhancing,\ntitle={Enhancing Self-Supervised Depth Estimation Through Camera Parameter Priors},\nauthor={Jinchang Zhang and Xue Iuan Wong and Guoyu Lu},\nyear={2025},\nurl={https://openreview.net/forum?id=0DqB1vxGTn}\n}"
},
"abstract": {
"value": "Depth estimation is a key topic in the field of computer vision. Self-supervised monocular depth estimation offers a powerful method to extract 3D scene information from a single camera image, allowing training on arbitrary image sequences without the need for depth labels. However, monocular unsupervised depth estimation still cannot address the issue of scale and often requires ground-truth depth data for calibration.\nIn the deep learning era, existing methods primarily rely on relationships between images to train unsupervised neural networks, often overlooking the foundational information provided by the camera itself. In fact, based on physical principles, the camera’s intrinsic and extrinsic parameters can be used to calculate depth information for the ground and related areas and extend it from planar regions to full scene depth. To make full use of scene depth, even in the presence of errors, we introduce a contrastive learning self-supervised framework. This framework consists of two networks with the same structure: the Anchor network and the Target network. The predictions from the Anchor network are used as pseudo-labels for training the Target network. Depth reliability is determined by entropy, dividing the predicted depth into positive and negative samples to maximize the use of physical depth information, and effectively enhance the depth estimation accuracy."
},
"anonymous_url": null,
"authorids": {
"value": [
"~Jinchang_Zhang2",
"~Xue_Iuan_Wong1",
"~Guoyu_Lu4"
]
},
"authors": {
"value": [
"Jinchang Zhang",
"Xue Iuan Wong",
"Guoyu Lu"
]
},
"code_of_ethics": null,
"keywords": {
"value": [
"Depth Estimation",
"Camera"
]
},
"no_acknowledgement_section": null,
"paperhash": {
"value": "zhang|enhancing_selfsupervised_depth_estimation_through_camera_parameter_priors"
},
"pdf": {
"value": "/pdf/9f05a7e06ca3f5d4f64cb8fa0677eca60d89ae49.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/13a7cb39cd35a6f472af021230d2c0ea29633c59.pdf"
},
"title": {
"value": "Enhancing Self-Supervised Depth Estimation Through Camera Parameter Priors"
},
"venue": {
"value": "ICLR 2026 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Withdrawn_Submission"
}
},
"forum": "0DqB1vxGTn",
"id": "0DqB1vxGTn",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission3621/-/Full_Submission",
"ICLR.cc/2026/Conference/-/Withdrawn_Submission"
],
"license": "CC BY 4.0",
"mdate": 1763330917332,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission3621/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission3621/Authors"
]
}
|
|
2,026
|
0EV92jgJaZ
|
[
2,
2,
6
] |
[
{
"content": "Knowledge compilation approaches in probabilistic answer set programming (PASP) can be categorised into top-down or bottom-up approaches.\nTop-down approaches typically require a CNF as input, which generally means additional auxiliary variables are introduced to first transform the PASP program into a CNF.\nBottom-up approaches do not have this requirement as they incrementally build up the compiled representation. During this process, intermediate representations may grow exponentially large. This work \n1) develops a non-incremental bottom-up knowledge compilation strategy to reduce the size of the intermediate representations, and\n2) explores a vtree initialization heuristic with dynamic variable ordering.",
"id": "H7Z4ziUNzg",
"rating": 2
},
{
"content": "This paper introduces a non-incremental bottom-up knowledge compilation (KC) strategy for probabilistic answer set programming (PASP), targeting neuro-symbolic reasoning systems. Traditional incremental bottom-up compilation suffers from large intermediate circuits even when final circuits are compact. The authors propose to partition PASP programs into variable-disjoint subcomponents, compile each separately, and conjoin the results, theoretically bounding intermediate circuit size. They also present a heuristic for V-tree initialization based on dependency graph structure, enabling dynamic variable ordering. Experiments on four PASP benchmarks (Coloring, Smokers, IRL, IRN) show improvements in memory and compilation time compared to incremental and top-down compilers (C2D, D4, SHARPSAT-TD).",
"id": "s3pHoGL7To",
"rating": 2
},
{
"content": "The paper describes an approach to perform efficient knowledge compilation for probabilistic answer set programs (PASPs). The traditional approaches require conversion to CNFs, which introduces many auxiliary variables and increases complexity. Further, in existing approaches, the intermediate circuits generated during incremental compilation may be too large to be handled. The proposed approach presents a heuristic for compilation as well as compiling in a non-incremental manner to avoid exponential blow-up of circuit size. Experiments are performed on 3 benchmark problems and compared with the existing state of the art, showing superior performance in terms of computational efficiency and resulting circuit size.",
"id": "7hAdQZupYC",
"rating": 6
}
] |
{
"cdate": 1758322135358,
"content": {
"TLDR": {
"value": "We propose a non-incremental approach for Bottom-Up Knowledge Compilation of Probabilistic Answer Set programs."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025nonincremental,\ntitle={Non-Incremental Bottom-Up Knowledge Compilation of Neuro-Answer Set Programs},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0EV92jgJaZ},\nnote={under review}\n}"
},
"abstract": {
"value": "Neuro-Probabilistic Answer Set Programming offers an intuitive and expressive framework for representing knowledge involving relations, non-determinism, logical constraints, and uncertainty-aware perception. Such a high expressivity comes at a significant computational cost. To mitigate that, Knowledge Compilation (KC) approaches translate the logic program into a logic circuit for which inference and learning can be performed efficiently. Top-down KC approaches employ an intermediary step of translating the logic program into a CNF propositional formula, before the actual KC step. This has the drawback of requiring the use of auxiliary variables and a fixed variable ordering. Bottom-up KC approaches instead construct a circuit representation compositionally, by employing circuit operations that represent the subparts of the logic program, without the need of auxiliary variables and allowing dynamic variable ordering. However, intermediary circuits can grow quite large even when the end circuit is succinct. In this work, we develop a non-incremental bottom-up KC strategy that provably and empirically reduces the size of the intermediary representations compared to its incremental counterpart. We explore heuristics for v-tree initialization and dynamic variable reordering. Experimental results show that our method achieves state-of-the-art performance for a large class of programs."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Probabilistic Logic Programming",
"Knowledge Compilation"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8c15560d7d1761b796b890a6ed9a9cbd6b6b139a.pdf"
},
"primary_area": {
"value": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/a76f1ad33861acd68d7a78ec7105bdd343f6ef04.zip"
},
"title": {
"value": "Non-Incremental Bottom-Up Knowledge Compilation of Neuro-Answer Set Programs"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0EV92jgJaZ",
"id": "0EV92jgJaZ",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission21809/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896902012,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission21809/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission21809/Authors"
]
}
|
|
2,026
|
0EXuliYnfW
|
[
4,
2,
4,
6
] |
[
{
"content": "This paper proposes PPBoost, a method to tackle zero-shot medical image segmentation by bridging the gap between text prompts and visual prompts. PPBoost progressively transforms a natural language description of the target anatomy into a high-quality spatial bounding box, which then guides a segmentation model. PPBoost achieves significantly better segmentation accuracy (measured by mean Dice Similarity and Normalized Surface Distance) than both text-prompted and visual-prompted baselines.",
"id": "J6S6FeUUxC",
"rating": 4
},
{
"content": "The paper operates under a strict zero-shot setting, using a two-stage “text → pseudo-box → segmentation” pipeline to transform weak textual cues into strong spatial prompts. During training, a Vision-Language Model (VLM) based on *BiomedParse* generates confidence maps from textual descriptions, followed by temperature perturbation and KL-divergence filtering to remove unstable samples. The remaining image–box pairs are used to train a semi-supervised detector (Teacher–Student/EMA). At inference, the detector produces boxes which are then selectively expanded based on confidence, and the refined boxes are used as visual prompts for segmentation models such as MedSAM, SAM, or SAM-Med2D, yielding the final masks. Across BraTS21, LiTS17, and KidneySeg, PPBoost consistently improves Dice and NSD over text- and visual-prompt baselines, and even surpasses several few-shot segmentation models without using labeled data. The code repository is publicly released.",
"id": "1GJzefphXy",
"rating": 2
},
{
"content": "This paper introduces PPBoost, a bridge between VLMs and visual prompt-based segmentors, to address the challenge that obtaining good visual prompts is costly, especially when dealing with medical images. The detector, the main modification of this framework, is evaluated on different datasets using different backbones, demonstrating better performance. However, the description of the method is unclear, making it difficult to evaluate its contribution.",
"id": "VwLsmmgOEH",
"rating": 4
},
{
"content": "The proposed PPBOOST framework is a multi-stage progressive pseudo-label denoising pipeline. First, a pre-trained vision–language model is used to generate an initial pseudo bounding box for the target object from the text prompt. To improve quality, an uncertainty-based filtering is applied: only high-confidence predictions are kept. Using these reliable pseudo-labeled cases, the authors train a teacher–student object detector (semi-supervised) to better localize the target across all images. At inference, this trained detector produces a bounding box for a new image given the text query. The box is then selectively refined to ensure it fully covers the target. Finally, the refined box serves as a visual prompt to a segmentation model.",
"id": "FckQ2BrEvB",
"rating": 6
}
] |
{
"cdate": 1758295172014,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025ppboost,\ntitle={{PPBOOST}: {PROGRESSIVE} {PROMPT} {BOOSTING} {FOR} {TEXT}-{DRIVEN} {MEDICAL} {IMAGE} {SEGMENTATION}},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0EXuliYnfW},\nnote={under review}\n}"
},
"abstract": {
"value": "Text-prompted foundation models for medical image segmentation offer an intuitive way to delineate anatomical structures from natural language queries, but their predictions often lack spatial precision and degrade under domain shift. In contrast, visual-prompted models achieve strong segmentation performance across diverse modalities by leveraging the spatial cues of precise bounding-box (bbox) prompts to guide the segmentation of target lesions. However, it is costly and challenging to obtain precise visual prompts in clinical practice. We propose PPBoost (Progressive Prompt-Boosting), a framework that bridges these limitations by transforming weak text-derived signals into strong, spatially grounded visual prompts, operating under a strict zero-shot regime with no image- or pixel-level segmentation labels. PPBoost first uses a vision-language model to produce initial pseudo-bboxes conditioned on the textual object names and applies an uncertainty-aware criterion to filter unreliable predictions. The retained image-bbox pairs are then leveraged to train a pseudo-labeled detector, producing high-quality bboxes for the query images. At inference, PPBoost further refines the generated bboxes by appropriately expanding them to tightly cover the target anatomical structures. The enhanced spatially grounded bbox prompts guide existing segmentation models to generate final dense masks, effectively amplifying weak text cues into strong spatial guidance. Across three datasets spanning diverse modalities and anatomies, PPBoost consistently improves Dice and Normalized Surface Distance over text- and visual-prompted baselines and, notably, surpasses few-shot segmentation models without using labeled data. PPBoost generalizes to multiple typical visual segmentation model backbones. The anonymized code implementation is at: https://anonymous.4open.science/r/submission-code-E2BB/."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Medical Image Segmentation",
"Foundation Model",
"VLM",
"SAM"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/59d2cf254c56cdd90c0fb47f901564179fe3c906.pdf"
},
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "PPBOOST: PROGRESSIVE PROMPT BOOSTING FOR TEXT-DRIVEN MEDICAL IMAGE SEGMENTATION"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0EXuliYnfW",
"id": "0EXuliYnfW",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission19298/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897047068,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission19298/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission19298/Authors"
]
}
|
|
2,026
|
0FH7ceYzCq
|
[
8,
4,
4,
4
] |
[
{
"content": "This paper proposes a sequence-agnostic method for continuous multi-modal clustering, Sequence-Agnostic Continual Multi-Modal Clustering (SCMC). It aims to address two core issues in existing continuous multi-modal clustering: the unreliable fusion of historical and new modal information, and the strong dependence of clustering performance on the order of modal input. The paper first analyzes the shortcomings of existing methods in fusing noise and forgetting high-quality modalities, and constructs a reliable continuous information propagation framework. Through a residual fusion network and a cross-temporal knowledge collaboration mechanism, it achieves bidirectional information filtering between new and old modalities, thereby enhancing complementarity and suppressing redundancy. Subsequently, the authors designed a sequence-agnostic anti-forgetting strategy, including cross-temporal consistency transfer and quality-aware history integration, enabling the model to retain the discriminative knowledge of early high-value modalities when new modalities arrive.",
"id": "BVR6XWMEs2",
"rating": 8
},
{
"content": "This submission introduces a framework for Continual Multi-Modal Clustering that explicitly addresses the challenge of sequence sensitivity in modality arrival. The method combines a Residual Fusion Network for reliable continual information propagation, a Cross-Temporal Knowledge Collaboration mechanism to filter redundant information between historical and new modalities, and a Sequence-Agnostic Anti-Forgetting Strategy that uses cross-temporal consistency transfer and quality-aware consolidation.",
"id": "HP2QUgzoWi",
"rating": 4
},
{
"content": "The paper introduces SCMC (Sequence-agnostic Continual Multi-modal Clustering), a framework for continual clustering over streaming modalities that aims to be insensitive to modality arrival order. The method (i) fuses historical and new features via a Residual Fusion Network (RFN), (ii) performs Cross-Temporal Knowledge Collaboration (CTKC) using a matrix cross-entropy regularizer to maximize cross-temporal mutual information, and (iii) applies a Sequence-Agnostic Anti-Forgetting Strategy (SAAS) composed of consistency transfer (CCT) and quality-aware consolidation (QHC) with MMD-based modality weights.",
"id": "ykYFxK0W4k",
"rating": 4
},
{
"content": "This paper tackles Continual Multi-modal Clustering and argues that existing methods are both (i) unreliable when fusing historical and newly arriving modalities and (ii) sequence-sensitive—early, high-quality modalities get forgotten as more (potentially noisy) modalities arrive. The proposed method, SCMC, has three components: (1) a Residual Fusion Network (RFN) that keeps a stable high-rank historical basis while adding a new modality; (2) Cross-Temporal Knowledge Collaboration, which links the fused representation with both historical and current modality features using a matrix cross-entropy objective; and (3) a Sequence-Agnostic Anti-Forgetting Strategy combining Cross-Temporal Consistency Transfer and Quality-aware Historical Consolidation with MMD-based modality importance weights. Experiments on five datasets show strong improvements over baselines and reduced sensitivity to modality order, supported by ablations and sequence-stability plots.",
"id": "q0ibukrVIS",
"rating": 4
}
] |
{
"cdate": 1757658463010,
"content": {
"TLDR": {
"value": "We propose a novel sequence-agnostic continual multi-modal clustering method."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025sequenceagnostic,\ntitle={Sequence-agnostic Continual Multi-modal Clustering},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0FH7ceYzCq},\nnote={under review}\n}"
},
"abstract": {
"value": "Continual multi-modal clustering (CMC) aims to address the challenges posed by the continuous arrival of multi-modal data streams, enabling models to progressively update cluster assignments while avoiding catastrophic forgetting.\nCMC closely aligns with the requirements of real-world scenarios and has attracted significant attention from researchers.\nHowever, existing CMC methods face two limitations.\n(1) They fail to reliably model the relationship between historical and new information, leading to redundancy in the shared representation and weakened discriminative power of clustering.\n(2) They are highly sensitive to modality sequence, as early high-quality modalities are gradually forgotten, making the results dependent on the input order.\nTo address these limitations, we propose a novel Sequence-agnostic Continual Multi-modal Clustering (SCMC) method that achieves reliable continual learning and is insensitive to the modality arrival sequence. Specifically, SCMC employs a residual fusion network to suppress the update bias introduced by the newly arrived modalities. It then leverages a cross-temporal knowledge collaboration mechanism to bidirectionally filter information between the historical information and the new modalities, thereby maximizing the preservation of task-relevant information and ensuring reliable continual learning.\nTo eliminate the high sequence sensitivity, we design a sequence-agnostic anti-forgetting strategy, which aligns the current features and cluster distribution with the previous step through cross-temporal consistency transfer, and then prioritizes retaining high-value modality information based on modality importance scores.\nExtensive experiments demonstrate that SCMC outperforms existing SOTA methods, exhibiting sequence insensitivity and strong anti-forgetting capabilities. To the best of our knowledge, SCMC is the first approach to explicitly address the sequence sensitivity problem in CMC."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Multi-modal Clustering",
"Continual Learning"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/6528d0f2a8073d07dfede23027512d9eef7d506e.pdf"
},
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Sequence-agnostic Continual Multi-modal Clustering"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0FH7ceYzCq",
"id": "0FH7ceYzCq",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission4298/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898040863,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission4298/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission4298/Authors"
]
}
|
|
2,026
|
0FJYicpOj0
|
[
6,
6,
4,
8
] |
[
{
"content": "The paper introduces ε-Gaussian certifiability (GPAR)—a new theoretical notion for analyzing machine unlearning in high-dimensional regimes (p ~ n). It reformulates unlearning guarantees via hypothesis testing and Gaussian trade-off functions, showing that a single Newton step with Gaussian noise suffices to ensure both privacy and accuracy. This contrasts with prior work (notably Zou et al., 2025), which required at least two steps under ε-certifiability. The authors also provide proofs of convergence, theoretical bounds for Generalization Error Divergence (GED), and simulation results validating the high-dimensional behavior.",
"id": "E6PSSs8Dys",
"rating": 6
},
{
"content": "This paper introduces ε-Gaussian certifiability, a novel and robust framework tailored for high-dimensional machine unlearning. The authors demonstrate that a single noisy Newton step suffices to achieve both privacy and statistical accuracy under this new notion, contrasting with prior work that required at least two steps. Theoretical results are supported by convincing simulations.",
"id": "ixlMa7rDu2",
"rating": 6
},
{
"content": "This paper tackles the problem of certified machine unlearning in high-dimensional settings where the number of parameters p is comparable to the sample size n. It introduces a new privacy notion called (phi, epsilon)-Gaussian certifiability, which is argued to be the canonical and optimal framework for high dimensions. The main theoretical result shows that a single step of Newton's method, followed by calibrated Gaussian noise, is sufficient to achieve both this strong privacy guarantee and maintain model accuracy. This finding directly contrasts with the prior state-of-the-art analysis by Zou et al. (2025), which concluded that at least two Newton steps were necessary.",
"id": "9EXvZ0MfIy",
"rating": 4
},
{
"content": "This paper introduces **$(\\phi, \\epsilon)$-Gaussian certifiability**, a new privacy-certification framework for high-dimensional settings ($p \\approx n$). It shows that a single Newton step with Gaussian noise can achieve certified unlearning while maintaining model accuracy. The work improves upon prior high-dimensional unlearning methods by reducing the number of required Newton steps and provides empirical validation on synthetic linear models, demonstrating the advantages of Gaussian noise for generalization and unlearning metrics.",
"id": "5BoXXnyC4r",
"rating": 8
}
] |
{
"cdate": 1758328790343,
"content": {
"TLDR": {
"value": "We introduce the canonical dimension-free notion of certifiability suited to high dimensions and show its utility via a Newton-based unlearning algorithm"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025gaussian,\ntitle={Gaussian certified unlearning in high dimensions: A hypothesis testing approach},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0FJYicpOj0},\nnote={under review}\n}"
},
"abstract": {
"value": "Machine unlearning seeks to efficiently remove the influence of selected data while preserving generalization. Significant progress has been made in low dimensions $(p \\ll n)$, but high dimensions pose serious theoretical challenges as standard optimization assumptions of $\\Omega(1)$ strong convexity and $O(1)$ smoothness of the per-example loss $f$ rarely hold simultaneously in proportional regimes $(p\\sim n)$.\nIn this work, we introduce $\\varepsilon$-Gaussian certifiability, a canonical and robust notion well-suited to high-dimensional regimes, that optimally captures a broad class of noise-adding mechanisms. Then we theoretically analyze the performance of a widely used unlearning algorithm based on one step of the Newton method in the high-dimensional setting described above. Our analysis shows that a single Newton step, followed by well-calibrated Gaussian noise, is sufficient to achieve both privacy and accuracy in this setting. This result stands in sharp contrast to the only prior work that analyzes machine unlearning in high dimensions, Zou et al. (2025), which relaxes some of the standard optimization assumptions for high-dimensional applicability, but operates under the notion of $\\varepsilon$-certifiability. That work concludes that at least two steps are required to ensure both privacy and accuracy. Our result leads us to conclude that the discrepancy in the number of steps arises because of the suboptimality of the notion of $\\varepsilon$-certifiability and its incompatibility with noise-adding mechanisms, which $\\varepsilon$-Gaussian certifiability is able to overcome optimally."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Machine unlearning in high dimensions",
"Proportional asymptotics",
"High dimensional statistical theory",
"Privacy–accuracy tradeoff",
"Hypothesis testing",
"Gaussian noise calibration",
"Newton method"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2aa1ed1036190b875838c897c261f703465c440f.pdf"
},
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Gaussian certified unlearning in high dimensions: A hypothesis testing approach"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0FJYicpOj0",
"id": "0FJYicpOj0",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission22272/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896875746,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission22272/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission22272/Authors"
]
}
|
|
2,026
|
0FN0u6qTAi
|
[
4,
4,
2,
6
] |
[
{
"content": "This paper introduces the \"Protein-as-Second-Language\" framework, which aims to enable large language models (LLMs) to interpret protein (amino acid) sequences as if they were acquiring a second symbolic language. By curating a bilingual dataset of almost 80k protein-question-answer triples and implementing an adaptive context construction mechanism, the approach supplies protein sequence–question–answer exemplars as in-context learning cues for frozen LLMs. Extensive experiments on multiple protein-text QA benchmarks demonstrate that LLMs guided by this framework substantially outperform their zero-shot counterparts, and in some cases, even domain-specialized, fine-tuned protein LLMs.",
"id": "CkCbUup7ou",
"rating": 4
},
{
"content": "This paper proposes Protein-as-Second-Language (PSL), a training-free framework that enables large language models to interpret protein sequences as a “second language.” Instead of fine-tuning, PSL performs retrieval-based in-context learning by constructing bilingual contexts that pair amino-acid sequences with natural-language descriptions. The authors build a 79K protein–QA corpus via Gene Ontology–based functional grouping, MMseqs2 clustering with semantic deduplication, and automatic QA generation using DeepSeek-R1 across four question types. During inference, PSL selects relevant examples based on sequence homology and semantic similarity, forming adaptive prompts for frozen LLMs (GPT-4o, Qwen, Mistral). Across three benchmarks (ProtDescribe, Protein2Text-QA, Mol-Instructions), PSL achieves up to 17.2% ROUGE-L improvement, outperforming domain-specific models like ProLLaMA-7B and BioT5+, and reframes protein understanding as retrieval-driven bilingual reasoning rather than supervised fine-tuning.",
"id": "oaHpKil1AE",
"rating": 4
},
{
"content": "This work introduces a novel question-answering (QA) dataset focused on protein expression, localization, mechanism, and interaction. The authors also propose a retrieval-based framework that enables pretrained, generic large language models (LLMs) to analyze unknown protein sequences using an in-context learning approach. By including similar proteins and their corresponding descriptions in the prompt, the paper reports an average 7% improvement in the ROUGE-L score on QA tasks for the target unknown protein.",
"id": "VFgQqijUHf",
"rating": 2
},
{
"content": "Protein-as-Second-Language (PSL) framework is a method for protein function understanding using large language models (LLMs) without fine-tuning. The approach reformulates amino acid sequences as symbolic language and uses adaptive, bilingual context construction (sequence + natural language) to enable zero-shot reasoning. A curated dataset of ~80k protein–QA pairs spanning functional, descriptive, and reasoning tasks supports the method.",
"id": "3weR2cHmVb",
"rating": 6
}
] |
{
"cdate": 1757862318304,
"content": {
"TLDR": {
"value": "We propose a protein–language framework and bilingual dataset that enable LLMs to reason about protein function via context-driven learning without fine-tuning."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025protein,\ntitle={Protein as a Second Language for {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0FN0u6qTAi},\nnote={under review}\n}"
},
"abstract": {
"value": "Deciphering the function of unseen protein sequences is a fundamental challenge with broad scientific impact, yet most existing methods depend on task-specific adapters or large-scale supervised fine-tuning. We introduce the \"Protein-as-Second-Language\" framework, which reformulates amino-acid sequences as sentences in a novel symbolic language that large language models can interpret through contextual exemplars. Our approach adaptively constructs sequence–question–answer triples that reveal functional cues in a zero-shot setting, without any further training. To support this process we curate a bilingual corpus of 79,926 protein–QA instances spanning attribute prediction, descriptive understanding, and extended reasoning. Empirically, our method delivers consistent gains across diverse open-source LLMs and GPT-4o, achieving up to 17.2% ROUGE-L improvement (average +7%) and even surpassing fine-tuned protein-specific language models. These results highlight that generic LLMs, when guided with protein-as-language cues, can outperform domain-specialized models, offering a scalable pathway for protein understanding in foundation models."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Large language models; Protein–QA dataset; Context-Driven Learning; Zero-shot learning"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/68dc93b54dfdeefd77080a635770b44d29c1e969.pdf"
},
"primary_area": {
"value": "datasets and benchmarks"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Protein as a Second Language for LLMs"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0FN0u6qTAi",
"id": "0FN0u6qTAi",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission5183/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897990156,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission5183/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission5183/Authors"
]
}
|
|
2,026
|
0Fc9yLlIYX
|
[
4,
2,
2,
4
] |
[
{
"content": "This paper proposes a systematic and comprehensive analysis of temporal bias in Large Audio Language Models (LALMs). Through experiments, the paper reveals that LALMs consistently predict the temporal occurrence of acoustic events earlier. Through detailed analysis, the paper shows that in LALMs: 1) temporal bias increases with audio length, 2) prediction precision varies across events, and 3) prediction error is lower when an event is in the middle of the audio and higher at both ends. The paper also shows that the temporal bias results from the interaction between attending to the end in earlier layers and attending to the correct position in later layers.",
"id": "KuvH2YPy8T",
"rating": 4
},
{
"content": "The paper systematically explores the capability of Large audio language models to predict the temporal location of sound events. The analysis is done across datasets, varying audio lengths, and across event types and position in audio.",
"id": "st0nIjmbYg",
"rating": 2
},
{
"content": "The paper presents the first systematic study of temporal bias in Large Audio Language Models (LALMs), exposing a fundamental limitation in their ability to accurately determine when events occur within an audio stream. Temporal awareness—knowing *when* an event happens—is essential in real-world tasks such as lecture indexing or analyzing political debates. The authors identify temporal bias as a consistent tendency for LALMs to misplace audio events along the time axis. While these models demonstrate strong semantic understanding by summarizing key events accurately, they often misreport timestamps by several seconds. For example, when locating a formula introduction in a lecture, models tend to predict systematically earlier or later timings than the ground truth—sometimes by more than seven seconds—even though their attention mechanisms correctly focus on the relevant segments.\n\nTo explore this issue, the researchers conducted controlled experiments manipulating three key factors: sequence length, event duration, and event position, using the STARSS22 dataset. They introduced a new metric, the Temporal Bias Index (TBI), defined as the average signed difference between predicted and actual onset times.\n\nA negative TBI indicates systematic early bias, while a positive value indicates late bias. The Mean Absolute Error (MAE) complements this by measuring the magnitude of deviation regardless of direction. Four state-of-the-art LALMs—Voxtral-Mini-3B-2507, Qwen2-Audio-7B-Instruct, Kimi-Audio-7B-Instruct, and Aero-1-Audio—were evaluated against a non-LALM baseline, PretrainedSED (Sound Event Detection).\n\nThe findings revealed that temporal bias is systematic yet varies across models, pointing to deeper architectural issues. 
First, the bias is pervasive: all LALMs tested displayed consistent patterns across different domains (acoustic events, conversations, music), with TBI values between -4.7 and -6.8 seconds, whereas the supervised baseline maintained near-zero bias. Second, audio length significantly amplifies the bias—errors grow non-linearly with longer inputs. In 120-second contexts, MAE values rose to over 20 seconds for some models, reflecting both increased bias and variance. Notably, models exhibited a stronger tendency toward early predictions as context length increased. Third, event duration and nature influenced performance: longer events worsened localization accuracy, and transient sounds (e.g., knocks, bells) caused greater degradation than sustained speech. The deviation increase for transient events, such as “Door Closing,” was especially pronounced. Fourth, event position introduced distinct patterns—Voxtral displayed a symmetric, U-shaped error curve with higher errors at clip boundaries, while Qwen2-Audio showed asymmetric bias with larger errors toward the end of clips.\n\nTo explain why this bias occurs, the paper connects it to the models’ internal attention mechanisms. Temporal predictions arise from the interplay between two signals developing across decoder layers. Early layers exhibit a structural prior or “attention front-loading,” where attention is disproportionately concentrated near the beginning of the sequence (around 0 seconds). This default bias acts as a temporal anchor, accounting for boundary-related errors. Later layers, however, produce content-aware semantic grounding, focusing accurately on the true event. The observed temporal bias emerges when these two signals interfere—accurate semantic cues are distorted by the ingrained structural bias from earlier layers. 
In other words, while the models understand *what* happens, their internal timing mechanisms misrepresent *when* it happens.\n\nThe authors liken this to a baker who knows every ingredient in a cake but whose timer is unreliable—starting too early or finishing too late depending on the oven’s size, not the recipe itself. Similarly, while LALMs excel at semantic comprehension, their temporal foundations remain fragile, revealing a critical gap in the alignment between audio understanding and temporal precision.",
"id": "idKEdf6dXC",
"rating": 2
},
{
"content": "This paper studies temporal localization ability (i.e., temporal bias) of large audio–language models (LALMs) when answering timestamp-based audio queries. The authors conduct experiments over multiple scenarios (audio length, event duration, and event position) to primarily use STARSS22 to evaluate the SED performance via Temporal Bias Index (TBI) and Mean Absolute Error (MAE) metrics on recent LALMs (e.g., Voxtral, Qwen2-Audio, Kimi-Audio, Aero). And the results show that these LALMs are strong in audio-conditioned semantic reasoning, but their ability on sound event detection has been largely unexplored.",
"id": "T8fQHvimfg",
"rating": 4
}
] |
{
"cdate": 1758297559979,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025lost,\ntitle={Lost in Time: Systematic Temporal Bias in Large Audio Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0Fc9yLlIYX},\nnote={under review}\n}"
},
"abstract": {
"value": "Large Audio Language Models (LALMs) are increasingly applied to audio understanding and multimodal reasoning, yet their ability to locate when events occur remains underexplored. We present the first systematic study of temporal bias in LALMs, revealing a key limitation in their timestamp prediction. For example, when asked “At which second does the lecturer introduce the key formula?”, models often predict timestamps that are consistently earlier or later than the ground truth. Through controlled experiments on timestamped datasets, we find that temporal bias (i) is prevalent across datasets and models, (ii) increases with audio length—even accumulating to tens of seconds in extended recordings, and (iii) varies across event types and positions. We quantify this effect with the Temporal Bias Index (TBI), measuring systematic misalignment in predicted event timings, and complement it with a visualization framework. Our findings highlight a fundamental limitation in current LALMs and call for the development of temporally robust architectures."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Large Audio Language Models",
"Temporal Bias",
"Audio Understanding"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e383f2c4c0f2a00cde1c8b0496044e619c38268b.pdf"
},
"primary_area": {
"value": "interpretability and explainable AI"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Lost in Time: Systematic Temporal Bias in Large Audio Language Models"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0Fc9yLlIYX",
"id": "0Fc9yLlIYX",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission19599/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897031002,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission19599/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission19599/Authors"
]
}
|
|
2,026
|
0FhrtdKLtD
|
[
6,
6,
6,
2
] |
[
{
"content": "This paper introduces MindCube, a new benchmark for evaluating the ability of VLMs to form \"spatial mental models\" from limited views. The authors show that existing models perform poorly, then propose a \"map-then-reason\" approach, which is to train a model to first generate a structured cognitive map and then reason upon it. This method leads to performance improvement through a combination of supervised fine-tuning and reinforcement learning.",
"id": "oLxTbfkOSn",
"rating": 6
},
{
"content": "This paper addresses the critical failure of VLMs in forming spatial mental models: the ability to reason about unseen spaces from limited views. To systematically evaluate this gap, the authors introduce the MINDCUBE benchmark, demonstrating that existing models perform near-randomly on such tasks. The paper's key contribution is a synergistic \"map-then-reason\" approach, which trains a model to first generate an internal cognitive map of an environment and then reason upon it. This method actively constructs and uses an internal spatial representation, especially when refined with reinforcement learning, can significantly boost task accuracy from a baseline of 37.8% to 70.7%.",
"id": "runJ1cTwFg",
"rating": 6
},
{
"content": "The paper introduces MindCube, a multi-view reasoning benchmark tailored for evaluating whether VLMs can build \"spatial mental models\" of scenes from partial observations. Using MindCube, authors show that state-of-the-art VLMs perform near chance and struggle to maintain cross-view consistency or reason about occluded objects. The paper also explored three structural scaffolds (view interpolation, free-form reasoning, and cognitive maps) and find a synergistic \"map-then-reason\" approach yields the largest gains. Finally, they train models with SFT and reinforce with RL, and find that the joint map-then-reason setup with RL boosts accuracy, indicating that constructing and using internal structured maps substantially improves multi-view spatial reasoning.",
"id": "gHclhk0LLg",
"rating": 6
},
{
"content": "This paper focuses on the VLMs’ capabilities in \"spatial mental modeling\", the ability to imagine environments from a few visual observations. The paper first proposes a benchmark to measure current VLMs’ capabilities, finding that most existing models do not perform well on these tasks. Then, the paper explores methods to improve such capabilities through two approaches: (1) Scaffolds: carefully designed data structures to encourage spatial mental modeling, and (2) Training (SFT and RL). The paper identifies several scaffolds that could benefit spatial mental modeling and observes that combining SFT and RL leads to the best spatial reasoning performance.",
"id": "JCOXPVDJrF",
"rating": 2
}
] |
{
"cdate": 1757013520622,
"content": {
"TLDR": {
"value": "We propose MindCube and find existing VLMs perform poorly on it. Supervising models to first generate cognitive maps and then reason upon them proves to be a quite effective approximation for spatial mental modeling from limited views."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025understanding,\ntitle={Understanding {VLM}s Spatial Mental Modeling Capability from Limited Views},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0FhrtdKLtD},\nnote={under review}\n}"
},
"abstract": {
"value": "Can Vision Language Models (VLMs) imagine the full scene from just a few views, like humans do? Humans form spatial mental models, internal representations of unseen space, to reason about layout, perspective, and motion. Our new MindCube benchmark with 21,154 questions across 3,268 images exposes this critical gap, where existing VLMs exhibit near-random performance. Using MindCube, we systematically evaluate how well VLMs build robust spatial mental models through representing positions (cognitive mapping), orientations (perspective-taking), and dynamics (mental simulation for \"what-if\" movements). We then explore three approaches to help VLMs approximate spatial mental models, including unseen intermediate views, natural language reasoning chains, and cognitive maps. The significant improvement comes from a synergistic approach, \"map-then-reason\", that jointly trains the model to first generate a cognitive map and then reason upon it. By training models to reason over these internal maps, we boosted accuracy from 37.8% to 60.8% (+23.0%). Adding reinforcement learning pushed performance even further to 70.7% (+32.9%). Our key insight is that such scaffolding of spatial mental models, actively constructing and utilizing internal structured spatial representations with flexible reasoning processes, significantly improves understanding of unobservable space."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Vision Language Models",
"VLMs",
"Multi Modal Language Models",
"Spatial Intelligence",
"Spatial Reasoning"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b3ae14a291ec47f47838c66b9d91f330cab8c231.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/5082d91d065fb9b9d69c480d6f04be96d47a8858.zip"
},
"title": {
"value": "Understanding VLMs Spatial Mental Modeling Capability from Limited Views"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0FhrtdKLtD",
"id": "0FhrtdKLtD",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission2183/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898164442,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission2183/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission2183/Authors"
]
}
|
|
2,026
|
0G8Cq9z2Hp
|
[
4,
4,
4,
6,
6
] |
[
{
"content": "This paper addresses the computational complexity issue in AlphaFold by presenting a hierarchical pipeline, refered to as HieraFold, which decomposes the end-to-end structure prediction task in a coarse-to-fine manner.\nHieraFold first performs a coarse global prediction using a \"lightweight\" version of AlphaFold with a smaller diffusion module, and then locally refines critical subunits identified via the pAE matrix.\nExperimental results on protein-protein and protein-ligand benchmarks demonstrate the effectiveness of the proposed method.",
"id": "NunVZw0836",
"rating": 4
},
{
"content": "This paper presents HIERAFOLD, which exploits the modularity of large complexes via PAE-guided (Predicted Aligned Error) subunit decomposition, targeted interface-aware refinement, and confidence-weighted assembly. By leveraging coarse predictions and PAE-guided modular decomposition, the method automatically identifies subunits and interfaces, and refines each chain in the context of its key interacting partners. Experiments from various benchmarks clearly show that HIERAFOLD matches AlphaFold3 on standard benchmarks and, critically, extends tractable prediction to complexes exceeding 5,000 tokens, achieving substantial gains over prior divide-and-conquer approaches.",
"id": "jVACbfYMDm",
"rating": 4
},
{
"content": "HIERAFOLD: Hierarchical pipeline for efficient large complex prediction using PAE decomposition & refinement.",
"id": "MvwGgBSTYF",
"rating": 4
},
{
"content": "The paper introduces a new approach for predicting the structure of large protein complexes by decomposing them into modular subunits using Predicted Aligned Error (PAE) scores. The proposed approach uses a 3-stage process from coarse to split the segments, then performing fine prediction using existing models and the final alignment. It achieves similar accuracy to AlphaFold3 with lower GPU memory requirements for large protein complex.",
"id": "Qx33eEQwDX",
"rating": 6
},
{
"content": "The method HIERAFOLD provides a memory-efficient and accurate solution to the prediction of large macromolecular complexes. It uses an optimised version of Protenix, an open-source reproduction of AlphaFold3, to generates 3D models of subparts, a subunit segmentation strategy based on PAE, and a combinatorial algorithm for selecting and assembling subparts. It performs favourably compared to AlphaFold3 and CombFold.",
"id": "xkkneQDAbS",
"rating": 6
}
] |
{
"cdate": 1758359066119,
"content": {
"TLDR": {
"value": "We introduce HierAFold, a hierarchical pipeline that exploits the modularity of large complexes via PAE-guided (Predicted Aligned Error) subunit decomposition, targeted interface-aware refinement, and confidence-weighted assembly."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025efficient,\ntitle={Efficient Prediction of Large Protein Complexes via Subunit-Guided Hierarchical Refinement},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0G8Cq9z2Hp},\nnote={under review}\n}"
},
"abstract": {
"value": "State-of-the-art protein structure predictors have revolutionized structural biology, yet quadratic memory growth with token length makes end-to-end inference impractical for large complexes beyond a few thousand tokens. We introduce \\textsc{HierAFold}, a hierarchical pipeline that exploits the modularity of large complexes via PAE-guided (Predicted Aligned Error) subunit decomposition, targeted interface-aware refinement, and confidence-weighted assembly. PAE maps localize rigid intra-chain segments and sparse inter-chain interfaces, enabling joint refinement of likely interacting subunits to capture multi-body cooperativity without increasing memory. \\textsc{HierAFold} matches AlphaFold3 accuracy, raises success rates from 49.9\\% (CombFold) to 73.1\\% on recent PDB set. While for large complexes, it cuts peak memory by $\\sim$25\\,GB on a 4{,}000-token target ($\\sim$40\\%), successfully models complexes with over $5{,}000$ tokens that are out-of-memory for AlphaFold3, and raises success rates by two-fold compared with CombFold."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Protein complex structure prediction",
"AlphaFold3",
"complex modularity"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/cb9c051ccacad225ae82bee4a700f0e28025501a.pdf"
},
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Efficient Prediction of Large Protein Complexes via Subunit-Guided Hierarchical Refinement"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0G8Cq9z2Hp",
"id": "0G8Cq9z2Hp",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission24657/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896756603,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission24657/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission24657/Authors"
]
}
|
|
2,026
|
0GMt2OWeCb
|
[
4,
4,
2,
2
] |
[
{
"content": "This paper addresses two critical limitations of existing memory-augmented Large Language Model (LLM)-based agents: low data efficiency (relying on extensive task-specific interaction data for early training) and poor adaptability (using static memory retrieval strategies that fail to balance cross-task knowledge and current task needs) . To resolve these, it proposes a memory-augmented LLM agent with cross-task experience learning, centered on a dual-memory architecture and a dynamic retrieval mechanism, validated on the WebShop benchmark (a multi-turn online shopping simulation with 1M+ product descriptions and 12k human instructions)",
"id": "TrQAKNiKtd",
"rating": 4
},
{
"content": "The paper introduces a memory-augmented LLM-based agent that incorporates cross-task experience learning to improve data efficiency and adaptability. It features a dual-memory architecture with Target memory for task-specific experiences,Source memory for transferable knowledge from related tasks. A dynamic retrieval mechanism adaptively balances these two memories based on interaction context and task progression. Evaluated on the WebShop benchmark, the agent outperforms strong baselines in task success rate and cross-domain generalization.",
"id": "zw0TmiisLT",
"rating": 4
},
{
"content": "This paper proposes a memory-augmented LLM-based agent with cross-task experience learning to address two limitations of existing memory-augmented agents: (1) low data efficiency due to reliance on extensive task-specific interaction data, and (2) static memory retrieval strategies that hinder adaptability. The method introduces a dual-memory architecture:\n\nSource Experience Memory (Ms): Stores transferable knowledge from related tasks.\nTarget Experience Memory (Mt): Accumulates task-specific experiences during interactions.\n\nA Dynamic Memory Retrieval Mechanism (DMRM) adaptively balances retrieval from Ms and Mt based on task progression, mitigating negative transfer. Evaluated on the WebShop benchmark demonstrates improved data efficiency and generalization.",
"id": "gd9bNSBK5Q",
"rating": 2
},
{
"content": "The paper proposes a memory-augmented LLM agent that introduces a cross-task experience learning mechanism, allowing the agent to reuse knowledge from previously completed tasks while dynamically adapting to new ones. It also adds a Dynamic Memory Retrieval Mechanism (DMRM) to balance between task-specific and cross-task experiences.",
"id": "IqIZOwZrNh",
"rating": 2
}
] |
{
"cdate": 1758343379022,
"content": {
"TLDR": {
"value": "We propose a memory-augmented LLM agent with cross-task learning and dynamic memory retrieval to improve adaptability and efficiency in multi-turn instruction-following tasks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025memoryaugmented,\ntitle={Memory-Augmented Large Language Model-Based Agent with Cross-Task Experience Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0GMt2OWeCb},\nnote={under review}\n}"
},
"abstract": {
"value": "Large Language Model (LLM)-based agents have demonstrated impressive capabilities in complex decision-making and multi-turn instruction-following tasks. To enhance knowledge retention and contextual adaptability, recent work has equipped these agents with memory modules that store and reuse historical interaction experiences. However, existing memory-augmented approaches face two key limitations: they often require large amounts of interaction data during early training to reach competitive performance, resulting in low data efficiency; and they rely on static, self-derived experience reuse strategies, limiting their ability to adapt when prior learning is insufficient and preventing the use of transferable knowledge from related tasks. Building on these observations, in this paper, we propose a memory-augmented LLM agent with cross-task experience learning, designed to improve data efficiency and adaptability. Our method augments the conventional task-specific memory with an additional source experience memory that retains transferable knowledge from related but distinct tasks. We further introduce a dynamic memory retrieval mechanism that adaptively draws from both task and source memories, allowing the agent to balance prior task-specific experiences with cross-task knowledge according to the current context and progression. We validate the proposed method on the WebShop benchmark, which comprises diverse, multi-turn instruction-following tasks across product domains with varying semantic complexity. Experimental results show that our approach consistently outperforms state-of-the-art memory-augmented LLM agents in task success rate and generalization, demonstrating the effectiveness of the proposed memory architecture and retrieval mechanism."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Large Language Models",
"LLM-based Agents",
"Experience Transfer",
"Long-Term Memory Mechanisms"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a643457b43dfbecfbe3e4273007e64a36d4b82c4.pdf"
},
"primary_area": {
"value": "transfer learning, meta learning, and lifelong learning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Memory-Augmented Large Language Model-Based Agent with Cross-Task Experience Learning"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0GMt2OWeCb",
"id": "0GMt2OWeCb",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission23410/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896816282,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission23410/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission23410/Authors"
]
}
|
|
2,026
|
0GNBqoYcAP
|
[
4,
4,
6
] |
[
{
"content": "his paper examines how world models can learn and adapt through context, focusing on in-context learning (ICL) within both MDP and POMDP settings. The authors distinguish between two key processes, namely In-Context Environment Learning (ICEL) and In-Context Environment Recognition (ICER). Furthermore, analyze their theoretical properties by deriving error upper bounds that explain when and how each process is effective.\n\nTo validate these insights, the paper introduces L2World, a framework designed to study in-context world modeling, and evaluates it on cart-pole and navigation tasks. The experiments show that world models can adapt to new environments through context alone and that both environmental diversity and longer context windows are crucial for enabling ICEL.",
"id": "rwLbimnkUF",
"rating": 4
},
{
"content": "This paper investigates in-context learning (ICL) in world model training, focusing on how models adapt to novel environments through ICL rather than parameter updates. The authors distinguish two ICL mechanisms: Environment Recognition (ER) and Environment Learning (EL) and derive error bounds characterizing when each emerges. They introduce L2World, a linear-attention architecture for long-context world modeling, and validate their theoretical predictions on cart-pole control and vision-based navigation tasks. The key findings emphasize that environment diversity and long context windows are critical for enabling EL over ER.",
"id": "rqipGLKiA7",
"rating": 4
},
{
"content": "The authors consider in-context learning of a world model, in particular focusing on the dichotomy between environment-learning and environment-recognition observed in other in-context learning setups. Consistent with theoretical error bounds that they derive, their main contribution is to show that long contexts and environment diversity favor environment-learning in cart-pole and procedural maze navigation tasks.",
"id": "VpjvIa0kF2",
"rating": 6
}
] |
{
"cdate": 1758188335809,
"content": {
"TLDR": {
"value": "We formalize, bound, and validate in-context environment learning, showing that long-context, diverse-input world models can self-adapt by recognizing or learning new dynamics without parameter updates."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025context,\ntitle={Context and Diversity Matter: The Emergence of In-Context Learning in World Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0GNBqoYcAP},\nnote={under review}\n}"
},
"abstract": {
"value": "The capability of predicting environmental dynamics underpins both biological neural systems and general embodied AI in adapting to their surroundings. Yet prevailing approaches rest on static world models that falter when confronted with novel or rare configurations. We investigate in-context environment learning (ICEL), shifting attention from zero-shot performance to the growth and asymptotic limits of the world model. Our contributions are three-fold: (1) we formalize in-context learning of a world model and identify two core mechanisms: environment recognition and environment learning; (2) we derive error upper-bounds for both mechanisms that expose how the mechanisms emerge; and (3) we empirically confirm that distinct ICL mechanisms exist in the world model, and we further investigate how data distribution and model architecture affect ICL in a manner consistent with theory. These findings demonstrate the potential of self-adapting world models and highlight the key factors behind the emergence of ICEL, most notably the necessity of long context and diverse environments."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"In-Context Learning; World Models"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f6b6f2acb5612c1bcb18dd2a927c42e5e641931b.pdf"
},
"primary_area": {
"value": "transfer learning, meta learning, and lifelong learning"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/5003af42abf06668a2f7d72aec3b310657d4c3cc.zip"
},
"title": {
"value": "Context and Diversity Matter: The Emergence of In-Context Learning in World Models"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0GNBqoYcAP",
"id": "0GNBqoYcAP",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission11055/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897611882,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission11055/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission11055/Authors"
]
}
|
|
2,026
|
0GaCfBRFnf
|
[
6,
4,
6,
8
] |
[
{
"content": "This paper introduces ProActive Self-Refinement (PASR) as a novel method for enabling Large Language Models (LLMs) to refine their outputs during the generation process, rather than as a post-hoc step. \n\nThe authors formalize this as a MDP and use RL to train models to decide whether, when, and how to refine their reasoning trace. \n\nKey contributions include:\n* A formulation of proactive self-refinement and trained the model with on-policy RL with GRPO.\n* The design of a fine-grained, comparison-based reward function that encourages meaningful refinements while penalizing unnecessary or harmful ones.\n* Extensive experiments on 10 diverse tasks using Qwen models, demonstrating that PASR improves accuracy and problem-solving performance.",
"id": "4hn8Hu1hzV",
"rating": 6
},
{
"content": "This paper introduces ProActive Self-Refinement (PASR), a reinforcement learning framework that enables large language models to refine their outputs during generation instead of after completion. PASR formulates the process as a Markov Decision Process, allowing the model to decide whether, when, and how to refine based on context. Using a GRPO method and a multi-part reward (format, accuracy, and refinement), PASR improves both efficiency and reasoning quality.",
"id": "EzW14sbScN",
"rating": 4
},
{
"content": "Current language models often benefit from self-refinement, where they are reflect upon their reasoning traces to come up with a more accurate final answer. The authors argue that this refinement should not come after each explicit reasoning chain, but instead be dynamic and inserted into the reasoning steps, as humans do. This paper proposes PASR, a method that enables LLMs to refine their reasoning during their generation process. Specifically, PASR requires employing reinforcement learning (via GRPO) with rollouts which are encouraged, via a reward function, to include specific tags which correspond with thinking and refinement. Assessed on Qwen models across a variety of datasets, PASR improves over base models and often is the strongest method, even compared to a large number of baselines.",
"id": "fJvPRFbEEv",
"rating": 6
},
{
"content": "The paper introduces ProActive Self-Refinement (PASR), a method that allows large language models to refine their outputs dynamically during generation rather than through fixed, reactive iterations. PASR enables the model to decide whether, when, and how to refine based on its internal state and evolving context, instead of regenerating entire responses",
"id": "ykGVimnysm",
"rating": 8
}
] |
{
"cdate": 1758282395015,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025a,\ntitle={A Stitch in Time Saves Nine: Proactive Self-Refinement for Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0GaCfBRFnf},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent advances in self-refinement have demonstrated significant potential for improving the outputs of large language models (LLMs) through iterative refinement. However, most existing self-refinement methods rely on a reactive process with a fixed number of iterations, making it difficult to determine the optimal timing and content of refinement based on the evolving generation context. Inspired by the way humans dynamically refine their thoughts during execution, we propose ProActive Self-Refinement (PASR), a novel method that enables LLMs to refine their outputs during the generation process. Unlike methods that regenerate entire responses, PASR proactively decides whether, when, and how to refine based on the model’s internal state and evolving context. We conduct extensive experiments on a diverse set of 10 tasks to evaluate the effectiveness of PASR. Experimental results show that PASR significantly enhances problem-solving performance. In particular, on Qwen3-8B, PASR reduces average token consumption by 41.6% compared to standard generation, while also achieving an 8.2% improvement in accuracy. Our code and all baselines used in the paper are available in the GitHub."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Large language models",
"Self-refine"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/bc704b7bedab5c7f854cc8447788460efbeba2e4.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "A Stitch in Time Saves Nine: Proactive Self-Refinement for Language Models"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0GaCfBRFnf",
"id": "0GaCfBRFnf",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission17956/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897142928,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission17956/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission17956/Authors"
]
}
|
|
2,026
|
0GdjEJCHOE
|
[
6,
2,
2,
2
] |
[
{
"content": "This paper presents DRMLP, a Dynamic Regularized Multi-Layer Perceptron framework for discovering Granger causal structure in multivariate time series. DRMLP introduces a dual-branch neural architecture, combining a linear (MLP-based) causal discovery path with a recurrent (LSTM-based) sampling strategy, and applies an adaptive, hierarchical sparse penalty on input convolutional weights to improve temporal lag selection. The paper claims improved robustness to long-range dependencies and enhanced accuracy in causal discovery, demonstrated on both simulated (VAR, Lorenz-96) and real-world-inspired (DREAM-3) datasets, with empirical comparisons to state-of-the-art baselines.",
"id": "MuDuqC1c9A",
"rating": 6
},
{
"content": "This paper proposes a two level hierarchy for learning Granger Causal Networks from observational data as a dynamically regulated multi-layer perceptron using: (i) a linear causal discovery network is utilized to extract causal relations from sampled weight data; (ii) hierarchical regularization strategy is introduced to optimize the weights of the network. They have used synthetic datasets and some real world datasets to showcase how their approach can learn rich granger causal networks in different contexts.",
"id": "vaxY4WLCix",
"rating": 2
},
{
"content": "The paper proposes a dual-branch framework for nonlinear Granger causality (GC) discovery in multivariate time series. One branch is a per-variable MLP with hierarchical, lag-aware sparsity applied to the input layer, while the other branch is an LSTM trained on inputs masked by a Gumbel–Softmax–sampled causal graph inferred from the MLP. The two branches are trained alternately. The core assumption is that as the sampled causal graph becomes closer to the true underlying graph, the selected inputs will better approximate the true causal variables of each target, thereby improving the LSTM’s predictive performance.",
"id": "mL3hXH308L",
"rating": 2
},
{
"content": "### The review\n\nThis paper proposes DRMLP, a novel dual-branch neural network for discovering Granger causal relationships in multivariate time series. The model aims to address key limitations of existing neural Granger causality methods, namely the difficulty in modeling long-range or periodic dependencies and the use of static regularization penalties that treat all time lags equally.\n\nThe core technical novelty is the dynamic regularized penalty, a hierarchical group Lasso applied to the input weights of the linear MLP. This penalty is updated during training based on the learned dependencies at different lags, allowing the model to encourage near lag first, far lag if necessary. The prediction losses from both branches are combined, allowing the LSTM to supervise the causal structure learned by the MLP.",
"id": "ZKrimerZ12",
"rating": 2
},
{
"content": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.",
"id": "ocv81Bd2J0",
"rating": null
}
] |
{
"cdate": 1757163600341,
"content": {
"TLDR": {
"value": "A dynamic regularization approach for Granger-based causal discovery achieves superior performance on simulated and real-world time series data."
},
"_bibtex": {
"value": "@misc{\nliu2025drmlp,\ntitle={{DRMLP}: Dynamic Regularized Multi-Layer Perceptron for Neural Granger Causality Discovery with Adaptive Temporal Penalties},\nauthor={Haiyang Liu and Wenrui Jiang and Xiaokang Wang and Muyun Yao},\nyear={2025},\nurl={https://openreview.net/forum?id=0GdjEJCHOE}\n}"
},
"abstract": {
"value": "With the rapid development of IoT devices, collecting multivariate time series data has become increasingly convenient. Understanding the causal relationships among different time series variables is critical for validating causal discovery methods and benchmarking their ability to recover ground-truth interactions in controlled synthetic environments. However, existing Granger causality approaches based on neural networks typically require modeling each time series variable separately and assume that the influence of historical values decays over time. These limitations result in complex models and poor performance in discovering causality in time series with long-range dependencies. To address these drawbacks, this paper proposes a model called DRMLP: Dynamic Regularized Multi-Layer Perceptron, a Granger causality discovery method capturing periodic temporal dependencies from the input weights of a convolutional network. The proposed approach employs a dual-branch neural network architecture: a linear causal discovery network is utilized to extract causal relations from sampled weight data, while a hierarchical regularization strategy is introduced to optimize the weights of the convolutional network. This design enhances the accuracy of causal relation discovery and reduces noise interference, thereby ensuring the temporal consistency of the identified causal structures. Experiments conducted on simulated datasets and real-world system-generated datasets show that DRMLP outperforms state-of-the-art baseline methods."
},
"anonymous_url": null,
"authorids": {
"value": [
"~Haiyang_Liu3",
"~Wenrui_Jiang1",
"~Xiaokang_Wang6",
"~Muyun_Yao1"
]
},
"authors": {
"value": [
"Haiyang Liu",
"Wenrui Jiang",
"Xiaokang Wang",
"Muyun Yao"
]
},
"code_of_ethics": null,
"keywords": {
"value": [
"time series",
"causal discovery",
"deep learning",
"regularization",
"mlp"
]
},
"no_acknowledgement_section": null,
"paperhash": {
"value": "liu|drmlp_dynamic_regularized_multilayer_perceptron_for_neural_granger_causality_discovery_with_adaptive_temporal_penalties"
},
"pdf": {
"value": "/pdf/8da9719f16b180456bce3c0635cf484d9aaaadd6.pdf"
},
"primary_area": {
"value": "causal reasoning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "DRMLP: Dynamic Regularized Multi-Layer Perceptron for Neural Granger Causality Discovery with Adaptive Temporal Penalties"
},
"venue": {
"value": "ICLR 2026 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Withdrawn_Submission"
}
},
"forum": "0GdjEJCHOE",
"id": "0GdjEJCHOE",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/-/Withdrawn_Submission"
],
"license": "CC BY 4.0",
"mdate": 1763026722172,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission2610/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission2610/Authors"
]
}
|
|
2,026
|
0GjORP5Duq
|
[
4,
6,
2,
6
] |
[
{
"content": "The paper addresses the persistent challenge of compositional reasoning in vision–language models such as CLIP. \nIt proposes RACA-CLIP, a structured contrastive learning framework that integrates scene-graph supervision to align visual and textual representations at the object, attribute, and relation levels.\nThe method achieves consistent gains on compositional reasoning benchmarks and maintains competitive zero-shot performance on ImageNet and retrieval tasks.",
"id": "Lq50BSn8wy",
"rating": 4
},
{
"content": "This paper proposes RACA-CLIP, a structured contrastive learning framework designed to improve compositional reasoning in vision-language models (VLMs), specifically focusing on relation-aware and attribute-grounded alignment. The method introduces region-level contrastive learning with IoU-weighted alignment between detected objects and caption spans, as well as a triplet supervision mechanism over structured ⟨subject, relation, object⟩ units, leveraging scene-graph annotations to provide fine-grained grounding signals. The approach preserves the dual-encoder architecture of CLIP while injecting relational inductive bias into the learned embeddings. Experiments across five compositional benchmarks show consistent improvements over CLIP and other enhanced baselines, including large gains in SugarCrepe’s Add and Swap settings (+16.24 and +24.86), while retaining — and occasionally improving — zero-shot recognition and retrieval performance. Ablation studies and controlled analyses suggest that the performance gains stem from improved binding between objects, attributes, and relations, rather than from memorization or dataset artifacts. Overall, the paper contributes an impactful improvement to a key weakness of modern contrastive VLMs.",
"id": "QkqygkmdWh",
"rating": 6
},
{
"content": "This paper proposes to use scene-graph as a way to augment training in CLIP for enahnced compositional understanding capacity. The method uses existing scene-graph dataset to supervise region-aware contrastive learning, with improvement shown in the downstream benchmarks.",
"id": "Lr6LD66aqI",
"rating": 2
},
{
"content": "This paper proposes RACA-CLIP, a structured contrastive framework that enhances CLIP’s compositional reasoning by integrating scene-graph-based supervision. This paper introduces region-level IoU-weighted alignment and relation-aware triplet losses to better capture object–attribute bindings and inter-object relations. Trained on the graph-based captioning dataset, RACA-CLIP achieves large improvements on compositional benchmarks",
"id": "YNlayCypFs",
"rating": 6
}
] |
{
"cdate": 1758352745950,
"content": {
"TLDR": {
"value": "Building compositionality robust CLIP model by region aware training objectives, pushing them towards better reasoning."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025racaclip,\ntitle={{RACA}-{CLIP}: Relation-Aware Compositional Alignment for {CLIP}},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0GjORP5Duq},\nnote={under review}\n}"
},
"abstract": {
"value": "Vision-Language Models (VLMs) such as CLIP excel at broad multimodal tasks, yet struggle with compositional reasoning. Despite capturing coarse correlations, they often act like “bags-of-words” missing fine-grained structures such as object–attribute bindings and inter-object relations. We attribute this to: (i) limited compositional diversity in large-scale image–text data, and (ii) contrastive objectives that emphasize global alignment over grounded structure. To address this, we propose a hierarchical fine-grained alignment framework that explicitly bridges visual and textual components at the object, attribute, and relation levels. Unlike prior work relying on parsers, we leverage scene graph annotated datasets for structured supervision, requiring no extra labeling. We introduce a hierarchical fine-grained loss to complement standard contrastive learning by grounding entities and relations across modalities. Experiments on compositional benchmarks SugarCrepe, What’sUp, and Cola show large gains in capturing nuanced structure, while preserving performance on standard vision-language tasks. RACA CLIP method improves compositional reasoning accuracy by +24.86% on SugarCrepe, +5.7% on What’sUp, and +4.76 on Cola, offering a simple yet effective path toward stronger, human-like compositional understanding."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Explainablity",
"Vision-Language Models",
"Compositionality"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9bd2b282be01ace8b9e99b998f90c76c516b8eed.pdf"
},
"primary_area": {
"value": "interpretability and explainable AI"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "RACA-CLIP: Relation-Aware Compositional Alignment for CLIP"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0GjORP5Duq",
"id": "0GjORP5Duq",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission24104/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896781548,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission24104/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission24104/Authors"
]
}
|
|
2,026
|
0GlStRq4Xw
|
[
0,
8,
2,
6
] |
[
{
"content": "This paper proposes a machine learning architecture for constrained optimization learning that approximates an iterative descent algorithm. The proposed approach integrates an active set strategy, an approximate descent direction computation, and a projection operator to ensure equality constraint feasibility. The approach is evaluated on small nonlinear problems (synthetic convex QPs, a mildly nonlinear variant of these, and small AC Optimal Power Flow instances).\n\nOverall, I have several reservations regarding the validity of the proposed methodology, stemming from limited assumptions (e.g. linear equality constraints) and unpractical existential results (universal approximation theorem). In particular, as far as I can tell, the proposed scheme, in general, is not guaranteed to converge to a feasible solution. Furthermore, numerical experiments are conducted on small instances that are either synthetic or orders of magnitude smaller than real-life instances.",
"id": "cMy9b9joTN",
"rating": 0
},
{
"content": "This paper proposes DescentNet, an unrolled optimization module for neural networks based on the method of feasible directions framework, which takes a feasible solution to an optimization problem and aims to iteratively refine it into one that is still feasible but has improved optimality. The paper:\n* Designs a Descent-Module that unrolls projected (sub)gradient descent on a penalty version of the uniformly feasible direction (UFD) formulation. This module includes the application of a learnable module to the subgradient term, and a step size-sizing strategy with a learnable parameter to ensure feasibility retention and optimality improvement.\n* Provides theoretical guarantees about the existence of a Descent-Net instantiation that provides an optimal solution (assuming linear equality constraints).\n* Provides experiments on convex QPs, a simple non-convex problem, and ACOPF, where Descent-Modules are appended to DC3. For the convex QP and simple non-convex settings, solutions obtain close-to-optimal solutions (in contrast to DC3) while solving 1-2 orders of magnitude faster than standard optimization solvers. For ACOPF (which has nonlinear equality constraints), \"the relative error of the solution obtained by Descent-Net decreases only marginally compared to the initial point,\" and the solution time is about 50% faster than a traditional optimization solver.",
"id": "OCvgleCKrj",
"rating": 8
},
{
"content": "This paper presents Descent-Net, a learn-to-optimize framework for solving constrained optimization problems. The authors prove that Descent-Net achieves global convergence to a KKT point when both the inequality and equality constraints are linear. The proposed Descent-Net mimics the projected gradient descent algorithm but incorporates a nonlinear preconditioner. Specifically, each layer of Descent-Net applies a nonlinear transformation (implemented as a trainable two-layer ReLU network) to a descent direction computed from the penalized Topkis-Veinott uniformly feasible direction. The layer input is then updated along this descent direction using a trainable step size, followed by a projection onto the tangent space of the equality constraints to ensure the output remains feasible.\n\nThe authors provide simulation results for convex QPs, nonconvex problem (by replacing $y$ with $\\mathrm{sin}(y)$ in the objective of convex QPs), and ACOPF problems, demonstrating that the proposed Descent-Net efficiently achieves an approximate KKT solution.",
"id": "CJ2wcbeoqp",
"rating": 2
},
{
"content": "This paper proposes Descent-Net, a neural network-based framework for constrained optimization, which refines projected gradient descent (PGD) directions using learned operators. The method aims to handle general equality and inequality constraints by integrating projection steps and neural descent modules.",
"id": "P7xxW2Cy8o",
"rating": 6
}
] |
{
"cdate": 1757218565386,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025descentnet,\ntitle={Descent-Net: Learning Descent Directions for Constrained Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0GlStRq4Xw},\nnote={under review}\n}"
},
"abstract": {
"value": "Deep learning approaches, known for their ability to model complex relationships and fast execution, are increasingly being applied to solve large optimization problems. However, existing methods often face challenges in simultaneously ensuring feasibility and achieving an optimal objective value. To address this issue, we propose Descent-Net, a neural network designed to learn an effective descent direction from a feasible solution. By updating the solution along this learned direction, Descent-Net improves the objective value while preserving feasibility. Our method demonstrates strong performance on both synthetic optimization tasks and the real-world AC optimal power flow problem."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"descent direction",
"unrolling",
"L2O",
"constrained optimization"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9f80b2167f277c1913f7e3983be25ea3c1ecb2e5.pdf"
},
"primary_area": {
"value": "optimization"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/6fdadc0816e37f5892c278ac3d9df6c5837c9786.zip"
},
"title": {
"value": "Descent-Net: Learning Descent Directions for Constrained Optimization"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0GlStRq4Xw",
"id": "0GlStRq4Xw",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission2713/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898132028,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission2713/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission2713/Authors"
]
}
|
|
2,026
|
0GpolO2auw
|
[
8,
6,
4
] |
[
{
"content": "This paper considers the task of community detection in well-clusterable graphs with sublinear space. The goal is to design a data structure D that fits in sublinear memory, and that enables one to query the cluster assignment for each node in sublinear time. Previous approaches all require $\\Omega(\\sqrt{n})$ space for such a datastructure D, but this paper overcomes that. In particular, it is able to design a data structure with a much smaller memory requirement that still allow for sublinear time which-cluster queries. The paper also provides new insights into the time-space tradeoff for this problem, by designing oracles there memory usage S and query time $T$ satisfy $S\\cdot T \\approx \\tilde{O}(n)$. Again, this holds for a class of graphs with good clustering structure. The paper also proves that this is optimal up to logarithmic factors for a certain class of techniques.\n\nThe notion of well-clusterable graphs corresponds (roughly) to graphs that have a k partition where clusters are roughly balanced in size and have small conductance (and large inner conductance, which measures internal connectivity of clusters).\n\nThe paper also proves new results (sublinear algorithms and lower bounds) for the 1-cluster/2-cluster problem, which seeks to tell the difference between graphs that are expenders on n nodes or that are disjoint unions of two identical expanders on n/2 nodes. \n\nFor the clustering results, the key technical advance is to provide a new way to estimate the dot product between the spectral embedding of two nodes in sublinear space and time (the spectral embedding for a node comes from the node's entries in the first few eigenvectors of the normalized Laplacian). This primitive is combined with a previously observation that if two nodes are from the same cluster (under the well-clusterability assumptions), then the dot product of their embeddings with be large, and otherwise the dot product will be zero.",
"id": "X7AyAq78Em",
"rating": 8
},
{
"content": "The paper studied the construction of spectral clustering oracles on well-clustered graphs with limited memory. The problem has recently attracted a flurry of work due to its applications in sublinear clustering algorithms. Here, we are given a graph that could be partitioned into $k$ clusters where the conductance between the clusters are high and the conductance inside the clusters are low. As such, we could label the vertices to generate a ‘ground truth’ clustering. The goal for the algorithm is to compute a data structure such that upon querying a vertex $x$, the algorithm can answer the cluster label of $x$ with high efficiency. The metrics for good algorithms in this application include:\n- Pre-processing time: the time to construct the data structure\n- Querying time: the time complexity needed to return the answer for each cluster\n- Accuracy: The answer for most of the vertex queries should be correct\n- Memory efficiency: the memory used by the data structure should be small\n\nThe last aspect was the main contribution of this paper. The paper discussed that all previous algorithms require $\\Omega(\\sqrt{n})$ space for such applications; in contrast, this algorithm is able to design an algorithm with only $n^{O(\\varepsilon/\\phi^2)}$ space, where $\\varepsilon$ and $\\phi$ are parameters that characterize the clusterability of the graph. The query time will be affected, which is now $n^{1+O(\\varepsilon/\\phi^2)}$ time. In fact, the trade-off could be made general with $n^{O(\\varepsilon/\\phi^2)}M$ space and $n^{1+O(\\varepsilon/\\phi^2)}/M$ time.\n\n**Main techniques.** The main techniques of the paper follow from the construction in Shen and Peng [NeurIPS’23]. In a nutshell, this line of techniques reduces the algorithm for the spectral clustering oracle to the approximation of the dot products of vertex embeddings. 
The previous $\\Omega(\\sqrt{n})$ space requirement is due to the computation of the approximate dot product using random walks, and this paper adopts the simple idea of conducting the walks in batches to trade time efficiency for space efficiency.",
"id": "ppVUwwgPZk",
"rating": 6
},
{
"content": "This paper considered the problem of designing sublinear spectral clustering oracles for well-clustered graphs. The authors assume query access to the adjacency list of the graph. They have given a space-time tradeoff for this problem, and also showed this tradeoff is tight for approaches using only random walk oracles. One of the interesting feature of this work is that their algorithm has space complexity of $o(\\sqrt{n})$, in contrast to previous algorithms which require $\\Omega(\\sqrt{n})$ space.",
"id": "7xEUqS5ynn",
"rating": 4
}
] |
{
"cdate": 1758270235850,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025sublinear,\ntitle={Sublinear Spectral Clustering Oracle with Little Memory},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0GpolO2auw},\nnote={under review}\n}"
},
"abstract": {
"value": "We study the problem of designing *sublinear spectral clustering oracles* for well-clusterable graphs. Such an oracle is an algorithm that, given query access to the adjacency list of a graph $G$, first constructs a compact data structure $\\mathcal{D}$ that captures the clustering structure of $G$. Once built, $\\mathcal{D}$ enables sublinear time responses to \\textsc{WhichCluster}$(G,x)$ queries for any vertex $x$. A major limitation of existing oracles is that constructing $\\mathcal{D}$ requires $\\Omega(\\sqrt{n})$ memory, which becomes a bottleneck for massive graphs and memory-limited settings. In this paper, we break this barrier and establish a memory-time trade-off for sublinear spectral clustering oracles. Specifically, for well-clusterable graphs, we present oracles that construct $\\mathcal{D}$ using much smaller than $O(\\sqrt{n})$ memory (e.g., $O(n^{0.01})$) while still answering membership queries in sublinear time. We also characterize the trade-off frontier between memory usage $S$ and query time $T$, showing, for example, that $S\\cdot T=\\widetilde{O}(n)$ for clusterable graphs with a logarithmic conductance gap, and we show that this trade-off is nearly optimal (up to logarithmic factors) for a natural class of approaches. Finally, to complement our theory, we validate the performance of our oracles through experiments on synthetic networks."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Graph Clustering",
"Spectral Clustering",
"Memory-Efficient Algorithms",
"Sublinear Algorithms",
"Space-Time Trade-offs"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/16c54ec8b3c8fd02e67e16d05874f80566e17f23.pdf"
},
"primary_area": {
"value": "learning theory"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Sublinear Spectral Clustering Oracle with Little Memory"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0GpolO2auw",
"id": "0GpolO2auw",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission16919/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897210152,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission16919/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission16919/Authors"
]
}
|
|
2,026
|
0H5iD4he7R
|
[
2,
6,
6,
2
] |
[
{
"content": "The paper proposes f-DMU, a unified framework for diffusion model unlearning based on f-divergence. It generalizes existing MSE-based and KL-based unlearning approaches by allowing any f-divergence. The method provides two formulations—closed-form and variational—to balance simplicity and generality. The authors offer theoretical analyses of gradient behavior and convergence properties, showing that different divergences lead to distinct unlearning dynamics. Empirical results on Stable Diffusion v1.4 demonstrate that the Hellinger-based loss achieves better trade-offs between erasure effectiveness and image quality preservation compared to the standard MSE loss.",
"id": "E1UyvbBM9t",
"rating": 2
},
{
"content": "This paper proposes a unified framework for diffusion model (DM) unlearning based on f-divergences. The work introduces a theoretical generalization of concept erasure via any f-divergence in DM. The contributions are: closed-form derivations for special cases (Hellinger, χ², Jeffreys), a variational adversarial formulation for general divergences and gradient behavior analysis & local convergence guarantees. Authors show empirical evaluation on Stable Diffusion 1.4 with multiple concepts and anchors. The paper argues that Hellinger divergence yields better stability and prior preservation while χ² gives more aggressive unlearning.",
"id": "jPKOFZQz27",
"rating": 6
},
{
"content": "The paper presents a generalization of the KL divergence-based unlearning algorithm to a broader class of f-divergence and performs a detailed analysis of the behavior of different divergence measures. Specifically, the paper compares the closed-form solution of Squared Hellinger ($H^2$) distance, the Pearson $\\chi^2$ divergence, and the KL-divergence measures. The gradient analysis shows that $H^2$ distance has a much lower gradient norm, which leads to a more stable unlearning process with less disruption on the nearby unrelated concepts. The paper also proposes a variational framework for a general f-divergence-based training, but this is shown to be much more disruptive, although fast.",
"id": "x4uX4ylIiK",
"rating": 6
},
{
"content": "The authors generalize existing concept unlearning methods in diffusion models by replacing KL divergence with more general f-divergence. The authors then theoretically analyze the impact of different choices of f on convergence behaviors near equilibria and gradient size. Empirically, it was found that the squared Hellinger distance has more stable gradient updates and better prior preservation.",
"id": "qHr39n2Dwi",
"rating": 2
}
] |
{
"cdate": 1758357192051,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025a,\ntitle={A Unified Framework for Diffusion Model Unlearning with f-Divergence},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0H5iD4he7R},\nnote={under review}\n}"
},
"abstract": {
"value": "Machine unlearning aims to remove specific knowledge from a trained model. While diffusion models (DMs) have shown remarkable generative capabilities, existing unlearning methods for text-to-image (T2I) models often rely on minimizing the mean squared error (MSE) between the output distribution of a target and an anchor concept. \nWe show that this MSE-based approach is a special case of a unified $f$-divergence-based framework, in which any $f$-divergence can be utilized.\nWe analyze the benefits of using different $f$-divergences, that mainly impact the convergence properties of the algorithm and the quality of unlearning. \nThe proposed unified framework offers a flexible paradigm that allows to select the optimal divergence for a specific application, balancing different trade-offs between aggressive unlearning and concept preservation."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"machine unlearning",
"diffusion models",
"f-divergence"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/fdc476126ec69532b1dc4e0377d2bc0e0aa18520.pdf"
},
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/d71c45a22db08d53a7c3b022703ec331a121c4c8.zip"
},
"title": {
"value": "A Unified Framework for Diffusion Model Unlearning with f-Divergence"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0H5iD4he7R",
"id": "0H5iD4he7R",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission24471/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896764314,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission24471/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission24471/Authors"
]
}
|
|
2,026
|
0HcqZkv1zs
|
[
4,
4,
4,
8
] |
[
{
"content": "The authors propose a method to incorporate semantic context into theorem proving models. They propose a clarity score to evaluate the understanding of this context. They demonstrate that this clarity helps downstream performance in proving theorems.",
"id": "XnDxg2qrMS",
"rating": 4
},
{
"content": "This paper investigated the relationship between a model's conceptual understanding and its reasoning performance in mathematical theorem proving. It proposes a new metric, the clarity score, that evaluates how well a model understands a task-related concept. Building on this, it further proposes a planner-executer pipeline that uses a general-purpose LLM to first identify relevant concepts and strategies, then translates strategies into Coq tactics. Empirically, the paper presents an analysis of clarity under different prompt configurations, demonstrates correlation between clarity and proof success, and shows that incorporating structured semantic context improves theorem-proving performance over prior methods.",
"id": "pnVFbZk1ZD",
"rating": 4
},
{
"content": "This paper proposes enhancing theorem proving in Coq by introducing structured semantic context extracted from Coq’s internal type system, combined with a Planner–Executor architecture. A new metric, Clarity Score, is introduced to quantify how well a model “understands” a task. The authors claim that increasing clarity leads to a proportional improvement in theorem-proving.",
"id": "PpBXu23uqd",
"rating": 4
},
{
"content": "This paper introduces a novel approach to LLM-based theorem proving in Coq, built on the hypothesis that enhancing task clarity is a distinct and important step for improving reasoning. The authors present: 1) a \"Clarity Score\" metric to quantify a model's understanding of formal concepts; 2) a data pipeline that extracts \"structured semantic context\" from the Coq compiler's internal representations; and 3) a Planner-Executor architecture that leverages this structured data.\n\nUsing this method, the authors claim to more than double the proof success rate of a general-purpose model (DeepSeek-V3), outperforming the previous state-of-the-art, Graph2Tac. While the core hypothesis is strong and the technical execution is impressive, the evaluation remains fairly narrow to support the SOTA claims. The comparison hinging on a single baselines, and only Coq programs is the most notable limitation.",
"id": "U8DPSHg43Z",
"rating": 8
}
] |
{
"cdate": 1758161650677,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025clarifying,\ntitle={Clarifying Before Reasoning: A Coq Prover with Structural Context},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0HcqZkv1zs},\nnote={under review}\n}"
},
"abstract": {
"value": "In this work, we investigate whether improving task clarity can enhance reasoning ability of large language models, focusing on theorem proving in Coq. We introduce a concept-level metric to evaluate task clarity and show that adding structured semantic context to the standard input used by modern LLMs, leads to a 1.85$\\times$ improvement in clarity score (44.5\\%~$\\rightarrow$~82.3\\%). Using the general-purpose model DeepSeek-V3, our approach leads to a 2.1$\\times$ improvement in proof success (21.8\\%~$\\rightarrow$~45.8\\%) and outperforms the previous state-of-the-art Graph2Tac (33.2\\%). We evaluate this on 1,386 theorems randomly sampled from 15 standard Coq packages, following the same evaluation protocol as Graph2Tac.\nFurthermore, fine-tuning smaller models on our structured data can achieve even higher performance (48.6\\%).\nOur method uses selective concept unfolding to enrich task descriptions, and employs a Planner-Executor architecture. These findings highlight the value of structured task representations in bridging the gap between understanding and reasoning."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"theorem proving",
"Coq",
"structured reasoning",
"formal verification"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ead97ffe8496c0912157e9ef7e4234c8823ec935.pdf"
},
"primary_area": {
"value": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Clarifying Before Reasoning: A Coq Prover with Structural Context"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0HcqZkv1zs",
"id": "0HcqZkv1zs",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission10135/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897671780,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission10135/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission10135/Authors"
]
}
|
|
2,026
|
0I2N8KxOAo
|
[
2,
2,
2,
6
] |
[
{
"content": "This paper proposes DeFa, a framework for non-stationary multivariate time series forecasting. DeFa consists of two main components: (1) NAILong, a decomposition strategy that separates a time series into a time-varying Amplifier, normalized Seasonality, and sparse Residual via a multiplicative formulation ($X = Amp · NS + R$), and (2) FaTA, a factorized tensor autoregression module designed to forecast the complex dynamics of the Amplifier component. The authors evaluate DeFa on several real-world benchmarks, claiming state-of-the-art performance, and demonstrate its utility as a plug-in module for existing models.",
"id": "b92yiFslTa",
"rating": 2
},
{
"content": "The paper presents **DeFa**, a decomposition-then-forecast framework for multivariate non-stationary time series. It introduces **NAILong**, a multiplicative factorization $X=\\text{Amp} \\cdot \\text{NS} + R$ that isolates a time-varying Amplifier, normalized Seasonality, and sparse Residuals. To forecast the challenging amplifier dynamics, it further proposes **FaTA**, a factorized tensor autoregression that extends Tucker with specialized temporal (sparse circular convolutions), cross-variate (permutation-invariant), and per-variate (identity-anchored) factors, plus a plug-in option.",
"id": "NISyxZN6Fm",
"rating": 2
},
{
"content": "The paper proposes DeFa, a unified decomposition-and-forecasting framework designed for non-stationary multivariate time-series forecasting. outperforming the state-of-the-art methods in terms of both interpretable forecasting accuracy and scalability.",
"id": "GHeBbT1AOv",
"rating": 2
},
{
"content": "This paper proposes DeFa, a “decompose-then-forecast” framework for long-horizon multivariate time series. First, NAILong splits the series into three components: an amplifier (Amp) that captures time-varying and cross-channel interactions, a relatively stationary seasonal component (NS), and a sparse residual (R) that absorbs anomalies, using a multiplicative coupling to match non-stationary scaling effects. Then, FaTA applies low-rank factorization to the autoregressive coefficient tensor of Amp, selects sparse key lags in the time dimension, and imposes permutation-invariance and physical interpretability constraints in the variable dimensions; NS and R are extrapolated with lightweight linear heads. Training jointly optimizes historical reconstruction + future forecasting. Experiments across strong baselines and standard datasets show stable advantages for long-horizon prediction, and DeFa can also serve as a plug-in to boost other models with noticeable gains.",
"id": "SeL6RQC7Sd",
"rating": 6
}
] |
{
"cdate": 1756731475441,
"content": {
"TLDR": {
"value": "DeFA introduces a decomposition-based framework with tensor autoregressive forecasting that effectively captures non-stationary dynamics and long-term dependencies in multivariate time series."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025defa,\ntitle={DeFa: Non-Stationary Decomposition and Factorized Forecasting for Multivariate Time Series},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0I2N8KxOAo},\nnote={under review}\n}"
},
"abstract": {
"value": "Multivariate time series forecasting is essential in fields like energy systems, weather prediction, and traffic monitoring. While recent deep learning models, including Transformer-based architectures, show potential, they often struggle to capture the complex dynamics and non-stationary patterns inherent in real-world data. This limitation arises from over-parametrization and the difficulty in modelling shifting patterns in simple short- and long-term terms. In this paper, we propose a unified framework, DeFa, that addresses these challenges by combining decomposition-based modelling with tensor autoregressive forecasting. To capture long-term dynamics, stationary seasonality, and sparse residuals unique to non-stationary time series, DeFa decomposes the input series into three components using the Non-stationary AdaptiveInteractive Long-term strategy (NAILong). Furthermore, to improve the prediction of the Amplifier, which encodes time-varying dynamics, DeFa is enhanced with the Factorized Tensor Autoregression framework (FaTA). Unlike existing methods that disentangle or represent input series directly, FaTA explicitly models the autoregressive coefficient tensor across variates and temporal dimensions. This fusion enables a more flexible and interpretable representation of multi-variable interactions, improving forecasting accuracy while maintaining computational efficiency. Extensive experiments on real-world datasets show that DeFa outperforms state-of-the-art methods in terms of both interpretable forecasting accuracy and scalability. Additionally, DeFa handles long-term dynamics and drifting seasonalities efficiently through a plug-in option, extending its adaptability."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"time series forecasting",
"deep learning"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b94cded0e13a77d167ddff7dbbf4055ceb64596c.pdf"
},
"primary_area": {
"value": "learning on time series and dynamical systems"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "DeFa: Non-Stationary Decomposition and Factorized Forecasting for Multivariate Time Series"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0I2N8KxOAo",
"id": "0I2N8KxOAo",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission217/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759898271392,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission217/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission217/Authors"
]
}
|
|
2,026
|
0IFqBfX7Ak
|
[
4,
2,
6
] |
[
{
"content": "This paper introduces Integrated Policy Gradient (IPG), a method intended to attribute and modulate reasoning components in large language models by applying a policy‐gradient–like formulation on hidden activations, followed by scaling of the identified components. The aim is to locate “reasoning circuits” and then control them via a scalar $\\gamma$. Experiments on math‐reasoning datasets (GSM8K, MATH500, AIME2024, GPQA‐Diamond) show modest gains.\n\nWhile the aim is timely, the paper falls short of clearly distinguishing itself from recent work in RL‐based reasoning and lacks methodological and empirical depth to justify its claims of mechanistic interpretability.",
"id": "LdLNBdLfcA",
"rating": 4
},
{
"content": "This paper provides a framework for attributing behaviors to internal components and then steering those components to control behavior. The paper is pretty well written and the methods are clear. However, the main problem is that this paper is not properly situated in the existing literature and fails to compare many baseline methods for model control. It also makes some very wrong claims about the state of existing literature. \n\nThe framing that mechanistic interpretability lacks causality is simply wrong. In fact, you cite Stolfo et al. (2023), a causal mediation analysis paper, in the sentence that makes that claim!\n\nThere is loads of work in the frameworks of causal mediation and abstraction analysis of LLMs. For example, https://arxiv.org/abs/2004.12265 and https://arxiv.org/abs/2004.12265; see https://arxiv.org/abs/2301.04709 or https://arxiv.org/pdf/2408.01416 for surveys with citations. \n\nSteering literature. There really is a huge amount of steering literature and this is all about causality, i.e., causing the behavior to change. For example, https://arxiv.org/abs/2205.05124, or https://arxiv.org/pdf/2310.06824 or https://arxiv.org/abs/2306.0334 or https://arxiv.org/abs/2308.10248\n\nYou also should be aware of representation fine-tuning: https://arxiv.org/abs/2404.03592. This is should be included as a baseline in your paper. Also, axbench https://arxiv.org/abs/2501.17148 evaluates a bunch of methods for control, and you could evaluate your method on this benchmark to get standardized comparisons against a lot of different methods.\n\n\nWithout further experiments that demonstrate the performance of the method in context of existing literature, its hard to know whether it marks a novel improvement. \n\nI think the focus on long term dependencies is a good part of the paper, but you would need to compare against something like ReFT with a language modeling loss to show you are better at handling long term dependencies. 
Overall, this method might be an improvement, but we can't know with the given set of experiments.",
"id": "W6DCiAbxsC",
"rating": 2
},
{
"content": "This paper introduces integrated policy gradient (IPG), a gradient-based method that attributes intermediate activations to some final outcome. This is done as follows:\n1. Take the gradient $\\nabla_h E_\\pi[J(h)]$ where $J$ is the outcome (reward). Do so using the score function / policy gradient trick to get unbiased estimates.\n2. Estimate this individually for each component of some hidden state, and average over all vectors from the latent vector $h_i$ to some baseline $h_i’$ (path gradient to reduce noise)\n3. Pick the top p components by the average magnitude of their IPG over some dataset \n\nAfter identifying the top components, the behaviour of the LLM can be controlled via scaling each component by a scalar to be larger / smaller. The authors suggest it is better to do this in k-sparse autoencoder latent space, rather than over raw neurons. The authors find that IPG can steer vectors to drive LM behaviour in the form of improved output accuracy, and do so better than other baseline interpretability methods.",
"id": "P1UhPxUr9m",
"rating": 6
}
] |
{
"cdate": 1758256779840,
"content": {
"TLDR": {
"value": "A method to causally control and interpret LLM reasoning behaviors by identifying and intevening internal reasoning-critical components."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025interpreting,\ntitle={Interpreting and Controlling {LLM} Reasoning through Integrated Policy Gradient},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0IFqBfX7Ak},\nnote={under review}\n}"
},
"abstract": {
"value": "Large language models (LLMs) demonstrate strong reasoning abilities in solving complex real-world problems. Yet, the internal mechanisms that support these behaviors remain opaque, raising concerns regarding truthfulness, safety, and controllability in practical applications.\nExisting interpretability approaches either rely on human-annotated contrastive pairs to derive control vectors, which limits reliability and generalization, or identify neurons correlated with superficial textual concepts, failing to capture the complexity of reasoning processes. \nConsequently, current methods struggle to precisely localize complex reasoning mechanisms or capture causal effects from model internal workings to the reasoning outputs.\nIn this paper, we build on causality-aware and outcome-oriented principles that focus on identifying components that have causal contributions to reasoning behavior where outcomes are cumulated by long-range effects.\nWe propose Integrated Policy Gradient (IPG), a novel framework that attributes reasoning behaviors to model inner workings like neurons, by propagating compound outcome-based signals (e.g., post reasoning accuracy) backward through model inference trajectories.\nIPG is efficient requiring only a few calls to the standard gradient operator, which uncovers causal structures governing complex reasoning and avoids large manual supervision.\nEmpirical evaluations demonstrate that our approach achieves more precise mechanistic interpretability and enables reliable modulation of reasoning behaviors across diverse reasoning models."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Large Language Models",
"Reasoning",
"Mechanistic Interpretability",
"Policy Gradient",
"Sparse Autoencoder"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/37cc955be40917d01dca561f285b20133300783f.pdf"
},
"primary_area": {
"value": "interpretability and explainable AI"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Interpreting and Controlling LLM Reasoning through Integrated Policy Gradient"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0IFqBfX7Ak",
"id": "0IFqBfX7Ak",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission15897/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897274656,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission15897/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission15897/Authors"
]
}
|
|
2,026
|
0IN8RiFbmg
|
[
4,
4,
2,
2
] |
[
{
"content": "This paper investigates the performance of Parameter-Efficient Fine-Tuning (PEFT) methods under increasing distribution shifts across tasks. We introduce a novel PEFT technique, AUG, which augments matrix-vector products with learnable parameters conditioned on both the input data and pretrained weights.\n\nThe efficacy of AUG is evaluated across a diverse set of tasks: English language understanding (SuperGLUE), multilingual classification (XNLI), multimodal processing (MS-COCO), and multitemporal radar interferometry. AUG is shown to consistently match or outperform existing PEFT methods, demonstrating particular strengths in low-resource and highly distribution-shifted settings.\n\nThis paper proposes a new Synthetic Aperture Radar (SAR) imaging task for detecting charcoal kilns, demonstrating that AUG is scalable, efficient, and highly adaptable across different modalities.",
"id": "6rZUBYsjXR",
"rating": 4
},
{
"content": "The paper investigates a Parameter-Efficient Fine-Tuning (PEFT) approach for domain adaptation, with a focus on handling distribution shifts. While the proposed method demonstrates competitive performance on certain tasks, there are significant concerns regarding experimental setup, the clarity of citation formatting, and the claimed advantages of the proposed approach over existing methods such as LoRA.",
"id": "yYUzvJQ1RX",
"rating": 4
},
{
"content": "This paper aims to propose a new approach for parameter-efficient fine-tuning for a foundation model. They propose to augment the pre-trained model's weights by training a newly introduced lightweight matrix. Additionally, they propose to apply their approach to SAR satellite imagery to predict the presence of charcoal production kilns. Their empirical results indicate that in some cases, their proposed approaches can outperform several parameter-efficient tuning techniques.",
"id": "6xOQvTbWXO",
"rating": 2
},
{
"content": "This paper investigates how Parameter-Efficient Fine-Tuning (PEFT) methods perform under varying degrees of distribution shifts, proposing a novel PEFT approach called matrix vector product augmentation for domain adaptation. The method augments pretrained weights with learnable parameters, conditioning updates on both inputs and frozen knowledge, and is evaluated across English, multilingual, multimodal, and remote sensing tasks. Results show it outperforms state-of-the-art PEFT methods in low-resource and multimodal settings while remaining competitive on in-domain tasks, though it requires more memory than some alternatives.",
"id": "uCeoNLOk1I",
"rating": 2
}
] |
{
"cdate": 1758158798611,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025scaling,\ntitle={Scaling Parameter-Efficiency with Distribution Shifts for Domain Adaptation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0IN8RiFbmg},\nnote={under review}\n}"
},
"abstract": {
"value": "Distribution shifts between source and target domains pose significant challenges to the generalization capabilities of machine learning models. While foundation models are often fine-tuned to adapt to new domains, their increasing size has led to a rise in the computational resources required for domain adaptation. This has driven interest in Parameter-Efficient Fine-Tuning (PEFT) methods, which have shown strong performance on in-domain tasks. In this work, we investigate how PEFT methods scale with varying degrees of distribution shifts and propose a novel PEFT method designed for domain adaptation. We select an English pre-trained Large Language Model (LLM) as the foundation model and apply PEFT techniques across tasks that progressively introduce larger distribution shifts. Specifically, we begin with the SuperGLUE English benchmark, followed by a multilingual inference task for high-resource and low-resource languages, then a multimodal image captioning task. Finally, we introduce a novel multimodal and multitemporal radar interferometry task for detecting charcoal production sites in remote areas. Separately, we propose a PEFT method that augments matrix vector products with learnable parameters, inducing a learning paradigm that conditions on both training data and encoded information. Our method is competitive against SOTA PEFT methods for English tasks and outperforms SOTA methods for larger distribution shifts, i.e., low-resource multilingual, image captioning, and radar interferometry tasks."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"peft",
"remote-sensing"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/26c0d3120b9a28d108df43635c02ee2e81499105.pdf"
},
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Scaling Parameter-Efficiency with Distribution Shifts for Domain Adaptation"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0IN8RiFbmg",
"id": "0IN8RiFbmg",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission10053/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897678056,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission10053/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission10053/Authors"
]
}
|
|
2,026
|
0IWZjbMmry
|
[
4,
4,
2,
2
] |
[
{
"content": "This paper introduces LayerDecompose, a compression framework for large language models that combines weight sharing with low-rank adapters. The key idea is to represent groups of consecutive layers with a single shared weight matrix W, augmented with layer-specific low-rank residuals and per-channel scaling vectors. The authors exploit permutation invariances in transformer modules to better align weights before decomposition. Experiments on LLaMA-7B and other models show 30% compression while retaining 89% of original performance.",
"id": "bDmUvdhqX0",
"rating": 4
},
{
"content": "This paper introduces LAYERDECOMPOSE, an LLM compression approach leveraging weight sharing. The method compresses models by sharing a single \"base\" weight matrix across a group of transformer layers while augmenting each layer with lightweight, low-rank residual adapters and scaling vectors. Notably, it permutes the weights before decomposition, minimizing reconstruction error. The authors demonstrate that this approach can achieve a 30% reduction in model size on LLaMA-7B while retaining 88.9% of its original performance on seven benchmarks, outperforming SVD- and pruning-based baselines.",
"id": "pX1HPG1QJy",
"rating": 4
},
{
"content": "This paper proposes a method for compressing LLMs by sharing a core weight matrix across layers, while employing low-rank adapters for each type of weight matrix within a layer. To further reduce approximation error, the method groups layers with similar outputs and permutes the weight matrices within each group.",
"id": "oundaTNzbd",
"rating": 2
},
{
"content": "This paper proposes a novel method for model compression, exploring weight sharing between layers and using low-rank factors to compensate for errors. Its accuracy is validated on several 7B models.",
"id": "m514VQsYk3",
"rating": 2
}
] |
{
"cdate": 1758267368176,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025layerdecompose,\ntitle={LayerDecompose: Exploring weight sharing for Large Language Model Compression},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0IWZjbMmry},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent advances in large language model (LLM) compression have predominantly focused on pruning and low-rank factorization, leaving weight sharing—despite its success in classical neural network compression—largely unexplored. We introduce LayerDecompose, a novel framework that reduces parameter redundancy by sharing a core weight matrix across transformer layers and augmenting each layer with lightweight, low-rank adapters. Unlike prior SVD- and pruning-based methods, our joint optimization of shared weights and residual adapters achieves a 30% model size reduction while retaining 89% of the original performance on seven standard benchmarks. Experiments on LLaMA and other models demonstrate that LayerDecompose consistently outperforms state-of-the-art baselines. These results highlight the promise of combining weight sharing with low-rank adaptation for efficient, scalable LLM deployment."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Large Language Models (LLMs)",
"Model Compression",
"Weight Sharing"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8cec12cc44b2a6394692a3239cbd59726b460b02.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "LayerDecompose: Exploring weight sharing for Large Language Model Compression"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0IWZjbMmry",
"id": "0IWZjbMmry",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission16659/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897226556,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission16659/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission16659/Authors"
]
}
|
|
2,026
|
0IceiDrfxI
|
[
4,
2,
4,
4
] |
[
{
"content": "This paper introduces NATLM, a framework that combines static analysis (AST/CFG) and LLM reasoning (Gemini Pro 1.5) to detect four NFT smart-contract defect types: ERC-721 Reentrancy, Public Burn, Risky Mutable Proxy, and Unlimited Minting. AST features are derived via CodeBERT; CFG features via TextCNN + GCN; features are fused and compared against a vector database of known defects using cosine similarity and Euclidean distance, after which an LLM performs “reasoning” to produce a detection report. Experiments on 8,672 contracts report per-category metrics and claim better performance than classical tools and direct LLM baselines.",
"id": "9vnQmFWKpY",
"rating": 4
},
{
"content": "This paper proposes a smart contract vulnerability detection approach that combines structural similarity computation with large language model reasoning. By computing similarity between AST and CFG feature vectors using Cosine Similarity and Euclidean Distance, the method leverages a retrieval-augmented generation (RAG) framework with the Gemini model to determine the presence of vulnerabilities. The core strength of this work lies in its integration of semantic retrieval with natural language explanations, enhancing the interpretability and usability of detection results. It also covers a diverse set of high-risk vulnerability types observed in real-world contracts, indicating strong practical relevance. However, the paper suffers from several critical weaknesses in both methodological clarity and experimental design. First, it does not explicitly analyze the fundamental structural differences between vulnerable and non-vulnerable contracts, which undermines the interpretability of the feature representations. Second, the evaluation dataset appears to overlap with the retrieval corpus, raising concerns about potential data leakage and inflated performance metrics. Third, while the authors introduce a confidence score \\hat{p}_i and threshold \\tau, they fail to clarify how the confidence is derived, and do not provide sensitivity analysis on varying \\tau values. Finally, structural similarity to a vulnerable contract does not necessarily imply the presence of a vulnerability; thus, a quantitative analysis of false positives and the actual likelihood of vulnerability given structural similarity is warranted. Overall, while the proposed approach shows promise, its current presentation lacks sufficient empirical and theoretical support, and requires further refinement.",
"id": "CPOf5iREP2",
"rating": 2
},
{
"content": "This paper proposes a novel framework called NATLM, designed to address the challenge of detecting specific vulnerabilities in NFT smart contracts. The authors suggest that traditional static analysis tools can only detect a limited range of vulnerabilities, while using large language models alone can detect most vulnerabilities but with very low precision.\n\nIn the NATLM framework, the authors combine static analysis with LLM. Experimental results show that NATLM significantly outperforms all baselines in terms of F1-Score, successfully increasing precision from 30–40% to 85–90% while maintaining a high recall rate.",
"id": "Bqtv4CfjtB",
"rating": 4
},
{
"content": "This paper presents NATLM, a neural framework designed to detect defects or inconsistencies in natural language textual models (e.g., software requirements, specifications, or documentation). The approach leverages large language model (LLM) embeddings combined with attention-based classification layers to identify semantic conflicts, logical contradictions, or incomplete statements in structured textual artifacts. The authors claim that NATLM can generalize across domains by training on a mix of annotated datasets representing different types of textual defects. Experimental results suggest that NATLM outperforms baseline models such as BERT, RoBERTa, and traditional sequence classifiers in precision and F1-score, while maintaining reasonable efficiency.",
"id": "dpZ0eciDaU",
"rating": 4
}
] |
{
"cdate": 1757838049602,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025natlm,\ntitle={{NATLM}: Detecting Defects in {NFT} Smart Contracts Leveraging {LLM}},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0IceiDrfxI},\nnote={under review}\n}"
},
"abstract": {
"value": "Security issues are becoming increasingly significant with the rapid evolution of Non-fungible Tokens (NFTs). The potential defects in NFT smart contracts could lead to substantial financial losses if exploited. To tackle this issue, this paper presents a framework called NATLM (NFT Assistant LLM), to detect potential defects in NFT smart contracts. NATLM effectively identifies 4 common types of vulnerabilities in NFT smart contracts, including ERC-721 Reentrancy, Public Burn, Risky Mutable Proxy, and Unlimited Minting. Relying exclusively on large language models (LLMs) for defect detection can lead to a high false-positive rate. To improve it, NATLM integrates static analysis with LLMs, specifically Gemini Pro 1.5. Initially, NATLM employs static analysis to extract structural, syntactic, and execution flow information from the code, represented through Abstract Syntax Trees (AST) and Control Flow Graphs (CFG). These extracted features are then combined with vectors of known defect examples to create a matrix for input into the knowledge base. Subsequently, the feature vectors and code vectors of the analyzed contract are compared with the contents in the knowledge base. Finally, the deep semantic analysis capabilities of LLM are used to identify defects in NFTs. Experimental results indicate that NATLM analyzed 8,672 collected NFT smart contracts, achieving an F1 score of 88.94\\%, outperforming other baselines."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"NFT",
"LLM",
"Smart Contract",
"Semantic Analysis"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/bdddf86568d8764dd37257e29334ad87194789b7.pdf"
},
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"submission_guidelines": null,
"supplementary_material": {
"value": "/attachment/33d9a3380a902ed5ddb639a6f6db176ac71a716a.zip"
},
"title": {
"value": "NATLM: Detecting Defects in NFT Smart Contracts Leveraging LLM"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0IceiDrfxI",
"id": "0IceiDrfxI",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission5038/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897999057,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission5038/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission5038/Authors"
]
}
|
|
2,026
|
0Iw52EDu82
|
[
2,
4,
6,
6
] |
[
{
"content": "This paper investigates the scaling law of fully sparsely-activated language models. They first conduct experiments to compare different activation functions, sparsification functions, and gradient estimation methods. Then, they use a scaling law (the relationship between cross-entropy loss, training tokens, and model size) to arrive at the optimal sparsity ratio in terms of inference costs. Finally, they also conduct experiments with 1.58-bit models, and find that the optimal sparsity ratio is greater.",
"id": "u7hfkPevi1",
"rating": 2
},
{
"content": "This paper introduces scaling laws for fully sparsely-activated Large Language Models (LLMs), where activation sparsity is applied to every linear transformation. Through extensive experiments, the authors derive a novel scaling law that incorporates the sparsity ratio S as a variable, demonstrating that model performance scales favorably as model size increases, narrowing the gap with dense counterparts. A key contribution is the \"inference-optimal\" scaling law, which predicts an optimal sparsity ratio (around 45.58% for full-precision models) that maximizes performance for a fixed inference compute budget. The findings are further shown to be compatible with 1-bit quantization, suggesting a promising path toward more efficient future models.",
"id": "iw1BUiIY7w",
"rating": 4
},
{
"content": "The paper investigates scaling laws for fully sparsely-activated LLMs. It shows that loss scales as a power-law in parameters (N) and data (D), and exponentially with sparsity (S). They recast the law in terms of activated parameters to derive an inference-optimal sparsity (≈45.58% FP, ≈61.25% at 1.58-bit).",
"id": "BjmVufuVVO",
"rating": 6
},
{
"content": "This work aims to investigate the scaling laws for fully sparse-activated MoEs, unlike previous works that study scaling laws for dense models or models with sparse MoEs.",
"id": "E6fTknsWm7",
"rating": 6
}
] |
{
"cdate": 1758204654491,
"content": {
"TLDR": {
"value": "In this work, we investigate the architecture and scaling laws for fully sparsely-activated models, where every activation in linear transformations is sparse."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025scaling,\ntitle={Scaling Laws for Fully Sparsely-Activated Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0Iw52EDu82},\nnote={under review}\n}"
},
"abstract": {
"value": "Scaling laws play a crucial role in understanding and optimizing Large Language Models (LLMs). While previous work on scaling laws has primarily focused on either fully dense models or models with sparse Mixture of Experts (MoE), our work investigates fully sparsely-activated models, where every activation in linear transformations is sparse. We derive scaling laws for these models through extensive experiments with varying model sizes, training token counts, and activation sparsity ratios. Our findings demonstrate that fully sparsely-activated LLMs exhibit favorable scaling properties: as the total model size increases, LLMs can maintain higher activation sparsity while the performance gap between sparsely-activated and dense models narrows. Notably, our scaling laws indicate that a sparsely-activated full-precision model with a 45.58% sparsity ratio achieves optimal performance while maintaining the same number of active parameters. Furthermore, our scaling laws remain applicable to 1-bit pre-training of LLMs, suggesting promising directions for improving the efficiency of future models."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Activation sparsity",
"scaling law",
"large language models"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/68d8801b255d9afe13fd2fa81741bcc86a829bc4.pdf"
},
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Scaling Laws for Fully Sparsely-Activated Large Language Models"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0Iw52EDu82",
"id": "0Iw52EDu82",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897545966,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission11921/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission11921/Authors"
]
}
|
|
2,026
|
0IwSQsqMU9
|
[
8,
4,
4
] |
[
{
"content": "Quite interesting work; a novel Darwinian perspective on optimization dynamics in neural networks. The paper presents a novel bio-inspired optimization method called Natural Selection (NS) that introduces explicit competition among training samples. By computing competitive scores through image stitching and dynamically adjusting sample loss weights, the method, as empirically shown, achieves consistent improvements across diverse computer vision tasks.",
"id": "0iOvDVQUCC",
"rating": 8
},
{
"content": "This paper introduces Darwinian Optimization, a bio-inspired training paradigm for deep neural networks based on the principle of natural selection. By explicitly modeling competition among samples through a Natural Selection (NS) score, the method dynamically adjusts per-sample loss weights to mimic ecological adaptation, enabling more balanced, efficient, and generalizable optimization of deep networks.",
"id": "tWjJdYHMcQ",
"rating": 4
},
{
"content": "The paper proposes Natural Selection (NS) as a novel optimization method, drawing inspiration from species competition and adaptation in natural ecosystems. NS introduces a dynamic competition mechanism among training samples. This bio-inspired approach is designed to mitigate classic deep learning training challenges, including class imbalance bias, insufficient learning of hard samples, and instability due to noisy data by applying non-uniform selective pressure.",
"id": "N5GMXWnKJu",
"rating": 4
}
] |
{
"cdate": 1757926747886,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025darwinian,\ntitle={Darwinian Optimization: Training Deep Networks with Natural Selection},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0IwSQsqMU9},\nnote={under review}\n}"
},
"abstract": {
"value": "In conventional deep learning training paradigms, all samples are usually subjected to uniform selective pressure, which fails to adequately account for variations in competitive intensity and diversity among them. This often leads to challenges such as class imbalance bias, insufficient learning of hard samples, and improper handling of noisy samples. Drawing inspiration from the principles of species competition and adaptation in natural ecosystems, we propose a bio-inspired optimization method for deep networks, termed Natural Selection (NS). NS introduces a competition mechanism by stitching and scaling a group of samples before forward prediction. Each sample is then assigned a natural selection score based on its prediction, reflecting its competitive status within the group. This score is further used to dynamically adjust the loss weight of each sample, thereby forming an optimization process that more closely mimics a Darwinian ecological equilibrium. Experimental results on 12 public datasets consistently demonstrate that NS improves performance without being tied to specific network architectures or task assumptions. This study offers a novel perspective on deep network optimization and holds instructive significance for broader applications. The code will be made publicly accessible."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Deep network optimization",
"natural selection",
"sample weighting",
"image classification",
"emotion recognition"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/bcff301a2f37f95db217c28b899c4e2e5e423b9d.pdf"
},
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Darwinian Optimization: Training Deep Networks with Natural Selection"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0IwSQsqMU9",
"id": "0IwSQsqMU9",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission5674/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897961688,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission5674/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission5674/Authors"
]
}
|
|
2,026
|
0JLUFJMo5p
|
[
0,
2,
0,
2
] |
[
{
"content": "The manuscript strongly resembles AI-generated content and may have been produced as an internal test for prospective AI researchers. If so, it suggests that the current state of such roles remains immature and requires further development.",
"id": "nKMdFIiiCf",
"rating": 0
},
{
"content": "This paper introduces Dynamic Task-Embedded Reward Machines (DTERM), a novel framework for reinforcement learning (RL) in code generation and manipulation tasks. Unlike traditional reward models that rely on fixed or manually tuned weights, DTERM employs a hypernetwork-driven architecture to dynamically adjust the contributions of various reward components - such as syntactic correctness, semantic correctness, and computational efficiency - based on task embeddings. The framework integrates a transformer-based task embedding generator, a modular reward decomposer, and a hypernetwork to produce context-aware reward weightings. Experiments across multiple benchmarks (e.g., CodeXGLUE, APPS, DeepFix, HumanEval) demonstrate consistent improvements over static reward baselines and strong generalization to unseen tasks.",
"id": "xxVpR7yuLY",
"rating": 2
},
{
"content": "The paper presents DTERM, a framework for RL in code generation and manipulation tasks. DTERM combines transformer-based task embeddings, modular decomposition of reward components, and a hypernetwork that produces context-dependent weights over these components. Experiments across four prominent code-generation benchmarks show that DTERM outperforms static and manually tuned reward baselines, particularly in cross-task generalization and adaptability.",
"id": "7TmB1fIdjw",
"rating": 0
},
{
"content": "This paper proposes Dynamic Task-Embedded Reward Machines (DTERM), a hypernetwork-based framework for dynamically weighting reward components in reinforcement learning for code generation and manipulation tasks. Instead of using static weights for sub-rewards (e.g., syntax, functionality, style), DTERM employs task embeddings derived from transformer encoders (e.g., CodeBERT) to generate adaptive weighting through a hypernetwork. The method is tested on several code generation benchmarks such as CodeXGLUE, HumanEval, APPS, and DeepFix, reporting modest improvements over static baselines.",
"id": "hDyMiHcIpR",
"rating": 2
}
] |
{
"cdate": 1758368192077,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025dynamic,\ntitle={Dynamic Task-Embedded Reward Machines for {\\textbackslash}{\\textbackslash} Adaptive Code Generation and Manipulation {\\textbackslash}{\\textbackslash} in Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0JLUFJMo5p},\nnote={under review}\n}"
},
"abstract": {
"value": "We introduce Dynamic Task-Embedded Reward Machine (DTERM), a new machine learning approach for reinforcement learning on tasks of code generation and code manipulation. Conventional reward models tend to be based on fixed weightings or manual tuning, which is not flexible enough for many different coding tasks, such as translation, completion and repair. To overcome that, DTERM dynamically modulates reward components using a hypernetwork-driven architecture, which can balance the task-aware configuration of syntactic correctness, semantic correctness, and computational efficiency. The framework combines three key modules, including a transformer-based task embedding generator, a modular reward decomposer, and a hypernetwork to generate context-dependent weights of sub-rewards."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Reinforcement Learning"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/fa6de8f172967f9988c29abcc16091879272bcd0.pdf"
},
"primary_area": {
"value": "transfer learning, meta learning, and lifelong learning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Dynamic Task-Embedded Reward Machines for \\\\ Adaptive Code Generation and Manipulation \\\\ in Reinforcement Learning"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0JLUFJMo5p",
"id": "0JLUFJMo5p",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission25449/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759896720791,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission25449/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission25449/Authors"
]
}
|
|
2,026
|
0JWhSwwXak
|
[
4,
4,
6,
6
] |
[
{
"content": "This paper proposes SYMMATIKA, a symbolic regression framework that combines multi-island genetic programming and a reusable symbol library to accelerate search, supporting both explicit (y=f(x)) and implicit (F(x,y)=0) regression tasks. Experimental results demonstrate its superiority over existing methods on benchmarks, particularly in recovery rate and computational efficiency.",
"id": "z0xjEicZ00",
"rating": 4
},
{
"content": "This paper proposes SYMMATIKA, a structure-aware symbolic regression (SR) framework that can discover both explicit relations and implicit relations. SYMMATIKA integrates two main innovations: 1) Feedback-driven multi-island genetic programming, which adaptively tunes mutation, crossover, and selection rates based on evolutionary progress. 2) A reusable motif library, inspired by biological sequence motifs, to identify and reuse high-impact symbolic subexpressions for faster convergence. The method achieves state-of-the-art recovery rates on Nguyen, Feynman, SRBench, and Eureqa benchmarks. Notably, it recovers 96.5% of Nguyen tasks (including 61% success on Nguyen-12, compared to 2% for prior methods), and converges 10–100× faster than Eureqa on implicit physical systems. The system is fully implemented in optimized C++ and requires only multi-core CPUs.",
"id": "2EkGP3fe3Z",
"rating": 4
},
{
"content": "The work introduces SymMatika, a symbolic regression algorithm that integrates multi-island genetic programming with a motif library for structural reuse and feedback-driven operator scheduling.",
"id": "RftPqtUVQA",
"rating": 6
},
{
"content": "This work proposed a novel symbolic regression framework-SYMMATIKA that combines feedback-driven genetic programming with a reusable structural motif library to discover both explicit and implicit mathematical expressions from data. By leveraging adaptive operator scheduling, motif-based recombination, and implicit-derivative fitness evaluation, SYMMATIKA achieves state-of-the-art recovery rates on standard benchmarks like Nguyen and Feynman equations, significantly outperforming existing methods in accuracy, convergence speed, and model complexity.",
"id": "3dsN2ewnxF",
"rating": 6
}
] |
{
"cdate": 1758226660875,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025symmatika,\ntitle={SymMatika: Structure-Aware Symbolic Discovery},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0JWhSwwXak},\nnote={under review}\n}"
},
"abstract": {
"value": "Symbolic regression (SR) seeks to recover closed-form mathematical expressions that describe observed data. While existing methods have advanced the discovery of either explicit mappings (i.e., $y = f(\\mathbf{x})$) or discovering implicit relations (i.e., $F(\\mathbf{x}, y)=0$), few modern and accessible frameworks support both. Moreover, most approaches treat each expression candidate in isolation, without reusing recurring structural patterns that could accelerate search. We introduce SymMatika, a hybrid SR algorithm that combines multi-island genetic programming (GP) with a reusable motif library inspired by biological sequence analysis. SymMatika identifies high-impact substructures in top-performing candidates and reintroduces them to guide future generations. Additionally, it incorporates a feedback-driven evolutionary engine and supports both explicit and implicit relation discovery using implicit-derivative metrics. Across benchmarks, SymMatika achieves state-of-the-art recovery rates on the Nguyen and Feynman benchmark suites, an impressive recovery rate of 61\\% on Nguyen-12 compared to the next best 2\\%, and strong placement on the error-complexity Pareto fronts on the Feynman equations and on a subset of 57 SRBench Black-box problems. Our results demonstrate the power of structure-aware evolutionary search for scientific discovery. To support broader research in interpretable modeling and symbolic discovery, we have open-sourced the full SymMatika framework."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"AI for science",
"symbolic regression",
"genetic programming"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5d2a444c0a7b4c93be4abcd3f68c1c25286e4c1c.pdf"
},
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "SymMatika: Structure-Aware Symbolic Discovery"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0JWhSwwXak",
"id": "0JWhSwwXak",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission13998/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897397261,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission13998/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission13998/Authors"
]
}
|
|
2,026
|
0JYtXNl7ns
|
[
2,
2,
4,
4
] |
[
{
"content": "The paper introduces an inference-time scaling framework (SHARS), aiming to allocate additional computational resources to detect and mitigate hallucinations during decoding.\nAs a main component of the framework, the uncertainty-based hallucination detection method HalluSE which aims to improve semantic entropy is introduced.\nHalluSE is evaluated on long-form hallucination benchmarks, showing improved performance over some baseline methods.\nThe overall SHARS framework is evaluated by the FactScore benchmark, indicating less unsupported answers and high factual precision, at the cost of decreased response rate, showcasing scaling behavior with increased compute.",
"id": "1IXmJQk7je",
"rating": 2
},
{
"content": "The submission introduces SHARS/HalluSE - Step-wise HAllucination Rejection Sampling, a method that is intended to address some classes of the hallucinations by resampling the outputs that have high semantic entropy.\nThe method uses a multistage pipeline that breaks down the model outputs into atomic facts, formulates them as subquestions, samples and evaluates semantic entropy (SE) on each of them individually. \nThis improves the hallucination detection on test data compared to regular SE.",
"id": "q2It44urJB",
"rating": 2
},
{
"content": "This paper addresses the \"hallucination snowballing\" effect in long-form generation, where early factual errors propagate and degrade overall reliability. The authors propose a novel inference-time framework, Step-wise HAllucination Rejection Sampling (SHARS), which operates incrementally at the sentence level. Instead of post-hoc verification, SHARS assesses each new sentence for factuality as it is generated. Hallucinated sentences are either discarded or rewritten, ensuring that subsequent generation is conditioned only on verified content. To enable this, the authors introduce HalluSE, an improved uncertainty-based hallucination detector that refines prior semantic entropy methods. A key feature is that the system is self-contained, not requiring external knowledge sources, though it remains compatible with them. Extensive experiments on benchmarks like FactScore and LongFact demonstrate that SHARS significantly reduces hallucinations and improves factual precision, often while increasing the total amount of supported factual information.",
"id": "G8Ta2HLfsb",
"rating": 4
},
{
"content": "This paper introduces Step-wise Hallucination Rejection Sampling (SHARS), an inference-time framework aimed at improving the factual reliability of large language models (LLMs) in long-form generation. The key idea is to allocate additional compute during decoding by detecting and rejecting hallucinated sentences as they are produced, preventing “hallucination snowballing.”\n\nEmpirical results on FactualBio, FactScore, and LongFact benchmarks (using Llama3.1-8B-Instruct and Qwen3-32B) demonstrate that SHARS with HalluSE reduces hallucination rates by 20–26% and improves factual precision, with a consistent positive scaling trend with increased inference-time computation.",
"id": "rKIEQQLhG6",
"rating": 4
}
] |
{
"cdate": 1758288192041,
"content": {
"TLDR": {
"value": "an inference-time scaling framework for hallucination mitigation in open-ended generation."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025building,\ntitle={Building Reliable Long-Form Generation via Step-Wise Hallucination Rejection Sampling},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0JYtXNl7ns},\nnote={under review}\n}"
},
"abstract": {
"value": "Large language models (LLMs) have achieved remarkable progress in open-ended text generation, yet they remain prone to hallucinating incorrect or unsupported content, which undermines their reliability. This issue is exacerbated in long-form generation due to hallucination snowballing, a phenomenon where early errors propagate and compound into subsequent outputs. To address this challenge, we propose a novel inference-time scaling framework, named Step-wise HAllucination Rejection Sampling (SHARS), that allocates additional computation during decoding to detect and reject hallucinated content as it is produced. By retaining only confident information and building subsequent generations upon it, the framework mitigates hallucination accumulation and enhances factual consistency. To instantiate this framework, we further introduce a new uncertainty-based hallucination detection method, named HalluSE, for long-form generation, improving upon the prior semantic entropy approach. The combined system enables models to self-correct hallucinations without requiring external resources such as web search or knowledge bases, while remaining compatible with them for future extensions. Empirical evaluations on standardized hallucination benchmarks demonstrate that our method substantially reduces hallucinations in long-form generation while preserving or even improving the informativeness of generation."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"hallucination",
"inference-time scaling",
"large language models",
"semantic entropy"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/4c0a73711949f5a4c0e597230eb920ffb943bbf3.pdf"
},
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "Building Reliable Long-Form Generation via Step-Wise Hallucination Rejection Sampling"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0JYtXNl7ns",
"id": "0JYtXNl7ns",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission18485/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897100328,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission18485/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission18485/Authors"
]
}
|
|
2,026
|
0JayjvOKxt
|
[
2,
4,
6,
4
] |
[
{
"content": "This paper proposes the Adaptive and Selective Reset (ASR) scheme to address the problem of model collapse in long-term Test-Time Adaptation (TTA). The main contributions are: 1) The ASR mechanism dynamically determines when and which parts of the model to reset; 2) An importance-aware knowledge recovery regularizer based on Fisher information; 3) Dynamic adjustment of hyperparameters according to domain differences to enhance adaptability. Experiments show that ASR performs well in multiple benchmark tests and significantly improves the stability and adaptability of the model.",
"id": "BK9O8JKWTr",
"rating": 2
},
{
"content": "This paper addresses the problem of model collapse in long-term continual test-time adaptation (TTA) due to error accumulation. The authors propose an Adaptive and Selective Reset (ASR) scheme that dynamically determines when to reset by monitoring prediction concentration and where to reset by selectively resetting layers based on the estimated collapse risk. The method is supplemented by an importance-aware regularizer to recover lost knowledge and an on-the-fly mechanism to adjust adaptation based on domain discrepancy.",
"id": "WxBWCfdJ4p",
"rating": 4
},
{
"content": "This paper tackles long-term continual TTA, where models suffer from error accumulation and eventual collapse (predicting only a few classes). The authors propose: 1. Adaptive and Selective Reset (ASR) — dynamically determines when and which layers to reset based on prediction concentration, mitigating both over- and under-resetting. 2. Importance-aware knowledge recovery — recovers lost information post-reset using Fisher-based regularization with a hybrid (CMA + EMA) accumulation scheme. 3. On-the-fly adaptation adjustment — adaptively adjusts according to prediction inconsistency between the source and the current model. Extensive experiments on CCC, CIN-C, IN-C, and IN-D109 show large gains over prior TTA methods (e.g., +44.12% on CCC-Hard).",
"id": "e2pQ0EyV9j",
"rating": 6
},
{
"content": "The authors investigate the problem of long-term continual test time adaptation. Previous algorithms were shown to collapse at some point during longer term adaptation, and simple resetting methods have been proposed as baselines in this problem setting. Full model reset like in RDumb naturally yields a substantial drop in downstream performance in the first steps after the reset, hence the authors propose an adaptive resetting scheme where the timing of these resets is adaptively computed, and instead of resetting the full model, only parts of the model parameters are reset to baseline. The authors show in multiple experiments vs. main baselines ROID and RDumb that their adaptive strategy yields performance improvements in the CCC and other continual adaptation benchmarks.",
"id": "zNnuh2kKTv",
"rating": 4
}
] |
{
"cdate": 1758270901913,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025when,\ntitle={When and Where to Reset Matters for Long-Term Test-Time Adaptation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={https://openreview.net/forum?id=0JayjvOKxt},\nnote={under review}\n}"
},
"abstract": {
"value": "When continual test-time adaptation (TTA) persists over the long term, errors accumulate in a model and further lead it to predict only a few classes regardless of the input, known as model collapse. Recent studies have explored reset strategies that erase these accumulated errors completely. However, their periodic resets lead to suboptimal adaptation, as they occur independently of collapse. Also, their full resets cause the catastrophic loss of knowledge acquired over time, even though it could be beneficial in future. To this end, we propose 1) an Adaptive and Selective Reset (ASR) scheme that dynamically determines when and where to reset, 2) an importance-aware regularizer to recover essential knowledge lost from reset, and 3) an on-the-fly adaptation adjustment scheme to enhance adaptability under challenging domain shifts. Extensive experiments across long-term TTA benchmarks demonstrate the effectiveness of our approach, particularly under challenging conditions. Our code will be released."
},
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_ethics": null,
"keywords": {
"value": [
"Test-Time Adaptation",
"Continual Test-Time Adaptation"
]
},
"no_acknowledgement_section": null,
"paperhash": null,
"pdf": {
"value": "/pdf/504c73f24c6ccca26313babd1bdf7dfd05964f6b.pdf"
},
"primary_area": {
"value": "transfer learning, meta learning, and lifelong learning"
},
"submission_guidelines": null,
"supplementary_material": null,
"title": {
"value": "When and Where to Reset Matters for Long-Term Test-Time Adaptation"
},
"venue": {
"value": "ICLR 2026 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2026/Conference/Submission"
}
},
"forum": "0JayjvOKxt",
"id": "0JayjvOKxt",
"invitations": [
"ICLR.cc/2026/Conference/-/Submission",
"ICLR.cc/2026/Conference/-/Post_Submission",
"ICLR.cc/2026/Conference/Submission16981/-/Full_Submission"
],
"license": "CC BY 4.0",
"mdate": 1759897206393,
"odate": 1759896705795,
"readers": [
"everyone"
],
"signatures": [
"ICLR.cc/2026/Conference/Submission16981/Authors"
],
"writers": [
"ICLR.cc/2026/Conference",
"ICLR.cc/2026/Conference/Submission16981/Authors"
]
}
|