year int64 2.03k 2.03k | id stringlengths 10 10 | rating listlengths 0 9 | decision stringclasses 1 value | reviewer_comments listlengths 0 9 | _raw_metadata dict
|---|---|---|---|---|---|
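The header above describes each row's schema. As a quick illustration, here is a minimal sketch that reads one row transcribed from the table and computes its mean reviewer rating. The plain-dict representation and the `average_rating` helper are assumptions for illustration; the dump does not show how rows are actually materialized.

```python
from statistics import mean

# One row transcribed from the table below (submission 00F7BfXLYJ).
# Field names follow the schema header; reviewer_comments is truncated
# in the dump, so it is left empty here.
row = {
    "year": 2026,
    "id": "00F7BfXLYJ",
    "rating": [4, 4, 4, 4],
    "decision": None,          # the decision cell is empty in every row shown
    "reviewer_comments": [],
}

def average_rating(r):
    """Mean of the per-reviewer ratings, or None when no ratings exist."""
    return mean(r["rating"]) if r["rating"] else None

print(average_rating(row))  # -> 4
```

The same helper works for rows with any number of reviews, since `rating` is a variable-length list (0 to 9 entries per the header).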
2026 | 00F7BfXLYJ | [4, 4, 4, 4] | [
{
"content": "This paper addresses the limitations of current Multimodal Large Language Models (MLLMs) in deep logical reasoning for video understanding—such as feed-forward processing constraints (lack of self-correction), poor test-time scaling, and hallucinations. Inspired by cybernetic principles (control, ... | {
"cdate": 1757998013559,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025cyberv,\ntitle={CyberV: A Cybernetic Framework for Enhancing Logical Reasoning in Video Understanding},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Lea... | |
2026 | 00HNN8O7Ni | [4, 2, 2, 4] | [
{
"content": "This paper proposed a new reinforcement learning framework for synthesizing hardware circuits based on the feedback from model checking results.\nThe experiments are based on open datasets and the results outperform supervised learning baselines.\n\nPros:\n1. The integration of model checking r... | {
"cdate": 1758322705432,
"content": {
"TLDR": {
"value": "We propose a deep learning approach for reactive synthesis that first initializes a model with imitation learning and then continues training by reinforcing formally verified solutions."
},
"_bibtex": {
"value": "@inproceedings{\nano... | |
2026 | 00UQtHqB2k | [2, 6, 2, 4] | [
{
"content": "The paper proposes a unified way to evaluate group fairness through sparsity. It studies links among Maximum Pairwise Difference, the Gini Index, and a PQ Index and argues that higher sparsity means lower fairness. Based on this view, it replaces the pairwise step in common criteria with a sparsit... | {
"cdate": 1758232139112,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025toward,\ntitle={Toward Unifying Group Fairness Evaluation from a Sparsity Perspective},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representa... | |
2026 | 017F77AYeQ | [2, 2, 4, 0] | [
{
"content": "The paper proposes SMART-3D, a mask token modeling approach for 3D generation.",
"id": "gZowcvNNqh",
"rating": 2
},
{
"content": "The paper proposes a framework that merges masked autoregressive generation with diffusion modeling and linear attention, addressing key efficiency bot... | {
"cdate": 1758113495159,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025smartd,\ntitle={{SMART}-3D: Scaling Masked AutoRegressive Transformer for Efficient 3D Shape Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on L... | |
2026 | 023yMrtHQP | [4, 4, 4] | [
{
"content": "This paper introduces a prompting framework, named Expectation–Evidence Prompting (EEP), for large language models to enhance factual verification. Drawing from the Strategic Use of Evidence technique in cognitive psychology, EEP involves generating two sets of expectations, supportive and refutat... | {
"cdate": 1758292986416,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025expectationevidence,\ntitle={Expectation{\\textendash}Evidence Prompting: Structuring Verification by Comparing Expected and Observed Evidence},\nauthor={Anonymous},\nbooktitle={Submitted to The F... | |
2026 | 02NbD16OnA | [4, 4, 4, 6] | [
{
"content": "This paper introduces DECEPTIONDECODED, a multimodal news benchmark with explicitly defined creator intent to support misleading intent detection, source attribution, and desire inference. It reveals that current VLMs fail to reason about intent beyond surface alignment and stylistic cues.",
"... | {
"cdate": 1756910313383,
"content": {
"TLDR": {
"value": "We reveal that state-of-the-art VLMs remain blind to misleading creator intent, establishing the need for intent-aware benchmarks and models as the next frontier in multimodal misinformation detection."
},
"_bibtex": {
"value": "@inp... | |
2026 | 02cEkpURXH | [2, 2, 6, 4] | [
{
"content": "This paper proposes a KD–based training strategy for OOD generalization. The authors first argue that training compact student models via simple KD from a teacher with strong OOD performance can often surpass standalone algorithmic DG methods. They further note that prior OOD-oriented KD approache... | {
"cdate": 1758311939461,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025early,\ntitle={Early Layer Readouts for Robust Knowledge Distillation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2... | |
2026 | 02mBAZjFzp | [4, 4, 4, 6] | [
{
"content": "This paper introduces VRPAGENT, a framework for discovering heuristic operators for Vehicle Routing Problems (VRPs) using large language models (LLMs). The method combines LLM-generated “destroy” and “order” operators with a Large Neighborhood Search (LNS) metaheuristic, leveraging genetic algorit... | {
"cdate": 1758296070926,
"content": {
"TLDR": {
"value": "We introduce VRPAgent, a framework that leverages LLMs and evolutionary search to discover novel heuristic operators for vehicle routing problems, achieving state-of-the-art performance across multiple VRP variants."
},
"_bibtex": {
... | |
2026 | 02mgFnnfqG | [4, 8, 6, 6] | [
{
"content": "The paper presents LiveMoments, a method for selecting and restoring a new low-quality (LQ) key photo from a short clip surrounding some key high-quality (HQ) photo. To this end, the authors build a model based on latent flow models and learnable networks for the HQ key image, the LQ candidate, an... | {
"cdate": 1757934812324,
"content": {
"TLDR": {
"value": "We are the first to restore reselected key photos in Live Photos, achieving perceptual fidelity beyond existing solutions in real-world scenes."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025livemoments,\ntitle={LiveMoments... | |
2026 | 032sg6mGp9 | [4, 4, 6, 6] | [
{
"content": "This paper introduces a multinomial mixture modelling approach to address the identifiability problem in learning from noisy labels (LNL). The authors theoretically prove that LNL becomes identifiable when each sample has at least 2C−1 independent noisy labels, enabling the unique recovery of clea... | {
"cdate": 1758285923748,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025identifiability,\ntitle={Identifiability in Noisy Label Learning: A Multinomial Mixture Modelling Approach},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference o... | |
2026 | 03Ek1qDZmI | [4, 4, 4, 2] | [
{
"content": "This paper introduces SSTP, a sample selection framework for trajectory prediction. The primary motivation is to address two challenges in existing large-scale datasets: the high computational cost of training and the imbalance where common, low-density scenarios dominate over rare, safety-critica... | {
"cdate": 1757189578927,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@misc{\nyang2025sstp,\ntitle={{SSTP}: Efficient Sample Selection for Trajectory Prediction},\nauthor={Ruining Yang and Yi Xu and Yun Fu and Lili Su},\nyear={2025},\nurl={https://openreview.net/forum?id=03Ek1qDZmI}\n}"
},
... | |
2026 | 03MfCNn3pF | [2, 4, 2, 6] | [
{
"content": "This paper presents PersonalQ, a two-stage system for personalized diffusion model serving. Check-in selects the intended personalized checkpoint via metadata reasoning and LLM-based prompt clarification, while Trigger-Aware Quantization (TAQ) preserves trigger-token features during quantization t... | {
"cdate": 1757994763056,
"content": {
"TLDR": {
"value": "PersonalQ enables efficient serving of personalized diffusion models at scale through intelligent checkpoint selection and trigger-token-aware quantization that preserves personalization quality while reducing memory footprint."
},
"_bibte... | |
2026 | 03QzvMzxVM | [2, 4, 4, 4] | [
{
"content": "This work presents Robust-NLL, which serves as a plug-and-play loss replacing vanilla NLL loss for robust uncertainty-aware training against label-space outliers. The proposed loss function uses softmax reweighting over sample losses to filter out outliers. The author also provides theoretical ana... | {
"cdate": 1758019401870,
"content": {
"TLDR": {
"value": "We introduce Robust-NLL for modeling uncertainty under the presence of outliers."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025robust,\ntitle={Robust Uncertainty-Aware Learning via Boltzmann-weighted {NLL}},\nauthor={Anony... | |
2026 | 03ccrSpjOx | [4, 4, 4, 6] | [
{
"content": "The paper studies how deliberation format shapes value expression and consensus in LLM-LLM debates over everyday moral dilemmas. Using 1,000 AITA cases, the authors run pairwise and three-way debates among GPT-4.1, Claude 3.7 Sonnet, and Gemini 2.0 Flash in two settings: synchronous (parallel) and... | {
"cdate": 1758148909076,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025deliberative,\ntitle={Deliberative Dynamics and Value Alignment in {LLM} Debates},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations... | |
2026 | 03fFxN6Orj | [4, 2, 4] | [
{
"content": "This paper proposed the Adviser-Actor-Critic (AAC) framework, targeting steady-state error reduction for high-precision robotic control tasks in reinforcement learning. AAC augments standard actor-critic architectures with an additional “adviser” module, implemented as a PI controller, that genera... | {
"cdate": 1758271601146,
"content": {
"TLDR": {
"value": "Adviser-Actor-Critic (AAC) combines reinforcement learning with a novel adviser to generate virtual goals, effectively reducing steady-state errors by over 80% in high-precision robotic control tasks."
},
"_bibtex": {
"value": "@misc... | |
2026 | 03jzVlLxEe | [6, 6, 4, 4] | [
{
"content": "The authors propose **NERVE**, a noise- and variability-robust EEG foundation model designed to address key challenges in EEG analysis, including low signal-to-noise ratios (SNR), high inter-sample variability, and spatial dependencies arising from electrode placement in acquisition systems. The p... | {
"cdate": 1758337883115,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025nerve,\ntitle={{NERVE}: Noise-Variability-Robust {EEG} Foundation Model with Electrode-Brain Interactions},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on... | |
2026 | 03qTI3NKqi | [4, 4, 4, 4] | [
{
"content": "This work found that previous soft prompts often disrupted information flow and reduced reasoning. They argue that soft prompts should not be limited to the activation and guidance stages but should be inserted into appropriate stages to ensure smooth information flow between layers. Therefore, th... | {
"cdate": 1758191821554,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025unlocking,\ntitle={Unlocking Coherent Reasoning in {LLM}s with Hierarchical Soft Prompts},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Represe... | |
2026 | 03u504EDJp | [2, 4, 6, 2, 2] | [
{
"content": "This paper introduces APO, a new framework for distilling reasoning capabilities from multiple MLLMs that exhibit conceptual drift, defined as variability in their reasoning behaviors or conclusions. The core idea is that APO aggregates all available reasoning trajectories and learns to prefer the... | {
"cdate": 1756744193214,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025learning,\ntitle={Learning from All: Concept Alignment for Autonomous Distillation from Multiple Drifting {MLLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Confe... | |
2026 | 040ClRXMf3 | [6, 8, 2, 8] | [
{
"content": "This paper proposes a new algorithm to extract cardinal-minimal sufficient explanations for Neural Additive Models (NAMs).\nIt does so by exploiting key design choices of NAMs, showing how this family of models supports explanations with guarantees.\n\nThis is achieved as follows. First, the paper... | {
"cdate": 1758298867680,
"content": {
"TLDR": {
"value": "Our approach constructs provably sufficient and (globally) cardinal-minimal explanations for neural additive models with improved runtime complexity."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025provably,\ntitle={Provably... | |
2026 | 04HwYGgp2w | [6, 8, 6, 6] | [
{
"content": "In this paper, the authors introduce ImageDoctor, a unified, multi-aspect evaluation framework for Text-to-Image (T2I) models. Unlike previous methods that provide a single scalar, ImageDoctor assesses image quality across four dimensions: plausibility, semantic alignment, aesthetics, and overall qu... | {
"cdate": 1757544654492,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025imagedoctor,\ntitle={ImageDoctor: Diagnosing Text-to-Image Generation via Grounded Image Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learni... | |
2026 | 04JkPDiCnp | [2, 6, 4, 2] | [
{
"content": "This paper introduces InternAgent-DR, a multi-agent deep-research framework that models scientific reasoning as a dynamic structured knowledge flow. Instead of relying on a linear task sequence, InternAgent-DR represents research workflows as directed acyclic graphs whose nodes correspond to subta... | {
"cdate": 1756820032542,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025internagentdr,\ntitle={InternAgent-{DR}: Advancing deep research with dynamic structured knowledge flow},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on L... | |
2026 | 04Tfwy3LLC | [2, 6, 4, 8] | [
{
"content": "The paper relates to the pruning of LLM layers. The paper consists of three main parts:\n1. Discussion of criteria for identifying prunable layers\n2. Comparison between LoRA and partial fine-tuning methods for recovering accuracy after pruning\n3. Theoretical analysis of gradient flow in the pres... | {
"cdate": 1757254648198,
"content": {
"TLDR": {
"value": "This paper presents a theoretical and empirical analysis of layer pruning in Large Language Models, aiming to improve and refine pruning strategies."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025reassessing,\ntitle={Reasse... | |
2026 | 04h40hEgTj | [6, 6, 2, 4] | [
{
"content": "In this paper, the authors aimed at creating a family of toy models for exploring the known challenge of long-context learning for LLMs. The proposed toy models have different time series data interleaved with distinct labels. The authors found that LLMs developed two distinct learning mechanisms in ... | {
"cdate": 1758340263445,
"content": {
"TLDR": {
"value": "We introduce a new family of toy problems that combine features of linear-regression-style continuous in-context learning (ICL) with discrete associative recall and find distinct learning dynamics for different prediction mechanisms."
},
"... | |
2026 | 053vZMxDB5 | [2, 8, 4] | [
{
"content": "This paper presents a reinforcement learning (RL) approach for learning from signal temporal logic (STL) to make learning more feasible for long-horizon tasks. The novel model-free approach divides and flattens complex STL formulas and searches for time-variable actualizations via Metropolis-Hasti... | {
"cdate": 1756884774931,
"content": {
"TLDR": {
"value": "We design a Reinforcement Learning framework based on time variables and task decomposition to solve Signal Temporal Logic tasks"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025tgpo,\ntitle={{TGPO}: Temporal Grounded Policy ... | |
2026 | 05NHmcEpNk | [8, 4, 8] | [
{
"content": "This paper introduces CT-MLE, a model-based algorithm for continuous-time reinforcement learning (CTRL) that uses maximum likelihood estimation (MLE) of the state marginal density instead of directly modeling system dynamics.\nThe key idea is to achieve instance-dependent adaptivity, where the alg... | {
"cdate": 1758213925539,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025instancedependent,\ntitle={Instance-Dependent Continuous-Time Reinforcement Learning via Maximum Likelihood Estimation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International ... | |
2026 | 05PqjBzN6S | [4, 2, 6] | [
{
"content": "This paper addresses the problem of determining when sufficient data is available to safely retrain a model after a sudden concept drift. The authors propose CALIPER, a model-agnostic and data-only test to estimate this required post-drift data size. The core idea is grounded in the concept of \"s... | {
"cdate": 1758350444098,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025when,\ntitle={When to Retrain after Drift: A Data-Only Test of Post-Drift Data Size Sufficiency},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning ... | |
2026 | 05SHW9ai9e | [4, 2, 4, 4] | [
{
"content": "To address DocQA limitations (single-modality bias, isolated RAG, long-document overload), this paper proposes MDocAgent—a framework integrating dual RAG (text via ColBERTv2, image via ColPali) and 5 collaborative agents (General, Critical, Text, Image, Summarizing). Evaluated on 5 benchmarks (MML... | {
"cdate": 1758214136657,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025mdocagent,\ntitle={{MD}ocAgent: A Multi-Modal Multi-Agent Framework for Document Question Answering},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learn... | |
2026 | 05THHF0w3y | [0, 2, 4, 4] | [
{
"content": "The paper proposes a new method for LLM reasoning, R-Capsule, where LLMs first output high-level plans in a latent space, then detailed textual steps, and finally the answer. The authors choose several benchmarks on math reasoning (such as GSM-8k) and commonsense reasoning (such as str... | {
"cdate": 1757406324840,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025rcapsule,\ntitle={R-Capsule: Compressing High-Level Plans for Efficient Large Language Model Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Le... | |
2026 | 05hNleYOcG | [2, 4, 2, 2] | [
{
"content": "The paper introduces PLAGUE, a plug-and-play framework for designing multi-turn jailbreak attacks on large language models (LLMs). Inspired by lifelong-learning and agentic architectures, PLAGUE divides the attack process into three stages — Planner, Primer, and Finisher — enabling adaptable and m... | {
"cdate": 1758135059535,
"content": {
"TLDR": {
"value": "Agentic framework for discovering novel potent multi-turn jailbreak attacks that achieve an attack success rate of 67.3% on Claude Opus 4.1"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025plague,\ntitle={{PLAGUE}: Plug-and-p... | |
2026 | 05pfP2khzx | [2, 2, 4] | [
{
"content": "This paper introduces VIDEOREPAIR, a video refinement framework to correct text-video misalignments. It has three steps: 1. detect misalignment. Finding the issue and region with MLLM. 2. Plan the refinement including preserve the correct parts and construct prompts that could be used to re-genera... | {
"cdate": 1758222291968,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@misc{\nlee2025selfcorrecting,\ntitle={Self-Correcting Text-to-Video Generation with Misalignment Detection and Localized Refinement},\nauthor={Daeun Lee and Jaehong Yoon and Jaemin Cho and Mohit Bansal},\nyear={2025},\nurl={h... | |
2026 | 05uq3XUJaT | [2, 2, 4] | [
{
"content": "This paper introduces a listwise fine-tuning method for LLM-based text reranking. The method improves three limitations of existing LLM rankers (single-token compression, shallow scoring heads, and pairwise objectives).",
"id": "DvaKUEhgPp",
"rating": 2
},
{
"content": "This paper ... | {
"cdate": 1757411444566,
"content": {
"TLDR": {
"value": "We propose a method to improve the fine-tuning performance of text ranking models by leveraging feature fusion, incorporating customized MLP modules, and optimizing with a listwise loss."
},
"_bibtex": {
"value": "@misc{\nsong2025fin... | |
2026 | 0694m9ixnv | [4, 6, 2] | [
{
"content": "This paper introduces Instruction Distillation, a new paradigm for improving the quality of low-quality instruction-following data. The authors propose a dataset called MIXTURE that maps multiple low-quality or redundant text inputs to a distilled high-quality target. Building on this dataset, the... | {
"cdate": 1758008662115,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025lmmixup,\ntitle={{LM}-mixup: Text Data Augmentation via Language Model based Mixup},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representatio... | |
2026 | 06I7jcrkW2 | [6, 6, 4, 8] | [
{
"content": "This paper tackles the important and challenging problem of accelerating Real-Time TDDFT (RT-TDDFT) computations using deep learning. \nSpecifically, it adopts an autoregressive framework to accelerate the propagations of RT-TDDFT, where the wavefunctions of previous steps are input into the netw... | {
"cdate": 1758291547393,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025orbital,\ntitle={Orbital Transformers for Predicting Wavefunctions in Time-Dependent Density Functional Theory},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conferen... | |
2026 | 06bDxmgdE0 | [4, 2, 4] | [
{
"content": "This paper introduces a novel and highly significant large-scale, multitask benchmark for evaluating speech understanding capabilities across 11 Southeast Asian (SEA) languages. This work directly addresses the critical lack of non-English evaluation frameworks, as current benchmarks are heavily E... | {
"cdate": 1758092746264,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025seaspeechbench,\ntitle={{SEA}-SpeechBench: A Large-Scale Multitask Benchmark for Speech Understanding Across Southeast Asia},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth Internati... | |
2026 | 06upBSlAUy | [4, 6, 4] | [
{
"content": "The paper proposes Stabilized and Improved Preference Optimization (SIPO), a framework designed to address two fundamental challenges in applying Direct Preference Optimization (DPO) to diffusion models: training instability and off-policy bias. The authors first conduct a systematic analysis of t... | {
"cdate": 1758341139376,
"content": {
"TLDR": {
"value": "We propose a stabilized and improved preference optimization framework for aligning diffusion generative models with human preferences."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025sipo,\ntitle={{SIPO}: Stabilized and Imp... | |
2026 | 072P11r1wu | [0, 10, 2, 2] | [
{
"content": "Benign and harmful overfitting have been extensively studied in the past few years in many settings and models. More recently, there has been interest in analyzing benign overfitting in simple transformers. This work aims to extend the previous works on benign/harmful overfitting in transformers b... | {
"cdate": 1758292502092,
"content": {
"TLDR": {
"value": "We present generalization bounds for a two-layer Transformer under benign overfitting and harmful overfitting."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025understanding,\ntitle={Understanding Generalization in Transforme... | |
2026 | 073WQjmWKU | [6, 6, 4, 6, 8, 4] | [
{
"content": "This paper presents COMPACT, a data-efficient visual instruction tuning (VIT) framework that synthesizes training samples with controlled compositional complexity. The authors introduce the k-value, representing the number of atomic visual capabilities (e.g., object recognition, spatial reasoning)... | {
"cdate": 1757812256580,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025compact,\ntitle={{COMPACT}: {COMP}ositional Atomic-to-Complex Visual Capability Tuning},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Represent... | |
2026 | 075TvkpZEK | [2, 4, 8, 2] | [
{
"content": "This paper proposes an optimization algorithm SMARAN for deep learning. SMARAN has two main characteristics, the first is that it normalizes the gradient before updating the first-order momentum, and the second is that it adopts the objective function value to update the second-order momentum. The... | {
"cdate": 1758257347389,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025smaran,\ntitle={{SMARAN}: Closing the Generalization Gap with Performance Driven Optimization Method},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Lear... | |
2026 | 07R3pHnBqc | [2, 0, 4, 2] | [
{
"content": "This paper proposes Instruction Agent, a training-free GUI automation framework that uses a single expert demonstration, aiming to execute long-horizon and complex tasks. The instructor module ensures the agent follows the instruction, while the verifier and backtracker aim to ensure robustness. T... | {
"cdate": 1757981401623,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@misc{\nli2025instruction,\ntitle={Instruction Agent: Enhancing Agent with Expert Demonstration},\nauthor={Yinheng Li and Hailey Hultquist and Justin Wagle and Kazuhito Koishida},\nyear={2025},\nurl={https://openreview.net/for... | |
2026 | 07S1CPoQYP | [6, 2, 2, 2] | [
{
"content": "This paper investigates how fMRI recordings can be used to fine-tune large language models (LLMs) toward human brain activity. The authors propose a dual-objective framework combining standard language modeling with brain alignment, leveraging over 50 hours of naturalistic movie-watching fMRI data... | {
"cdate": 1758149730341,
"content": {
"TLDR": {
"value": "We show that brain-informed training of language models, using dual objectives and scaling across data, models, and subjects, yields robust and generalizable alignment with human brain activity beyond baselines."
},
"_bibtex": {
"val... | |
2026 | 07o2iouN1Y | [2, 4, 6, 2] | [
{
"content": "The paper solves Nash equilibrium in two-player zero-sum extensive-form games by adding additional regularization. By switching the reference strategy periodically, the algorithm converges to the NE of the original game rather than the regularized game. The paper proves convergence theoretically a... | {
"cdate": 1757304899303,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025nash,\ntitle={Nash Policy Gradient: A Policy Gradient Method with Iteratively Refined Regularization for Finding Nash Equilibria},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth Inte... | |
2026 | 084SvT55yk | [10, 4, 6] | [
{
"content": "Existing neural CO solvers either ensure local feasibility but lack global awareness (LC) or produce global predictions with constraint violations (GP). Current adaptive expansion is only an external wrapper with limited effectiveness.\nNEXCO makes adaptive expansion native through CO-specific mas... | {
"cdate": 1756822893695,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025native,\ntitle={Native Adaptive Solution Expansion for Diffusion-based Combinatorial Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learnin... | |
2026 | 08EyZzhgl1 | [2, 4, 2, 4] | [
{
"content": "This paper presents TextME, a text‑only training framework for modality expansion that eliminates the need for paired multimodal data by leveraging the “consistent modality gap” property of pretrained encoders. TextME first pre‑computes a constant offset between text and non‑text embeddings for ea... | {
"cdate": 1758266488620,
"content": {
"TLDR": {
"value": "TextME unifies specialized modalities without paired supervision by training text-only projectors and applying centering offsets to bridge the modality gap at inference."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025textme,... | |
2026 | 08FTG45E9m | [4, 6, 2, 2] | [
{
"content": "The paper introduces Hermes, a multi-scale spatial-temporal hypergraph network for stock time series forecasting. The model aims to jointly model inter-industry lead-lag structures and multi-scale temporal dependencies. It incorporates a hyperedge-based moving aggregation module and a multi-scale ... | {
"cdate": 1756734867685,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@misc{\nqiu2025multiscale,\ntitle={Multi-Scale Spatial-Temporal Hypergraph Network with Lead-Lag Structures for Stock Time Series Forecasting},\nauthor={Xiangfei Qiu and Liu Yang and Hanyin Cheng and Xingjian Wu and Rongjia Wu... | |
2026 | 08KOxSjRyj | [4, 2, 4, 2] | [
{
"content": "The paper introduces LongEmotion, a long-context benchmark for evaluating LLMs Emotional Intelligence (EI) across six task: Emotion Classification, Emotion Detection, Emotion QA, Emotion Conversation, Emotion Summary, and Emotion Expression. Moreover, this paper propose RAG and CoEM frameworks to ... | {
"cdate": 1758170614961,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025longemotion,\ntitle={LongEmotion: Measuring Emotional Intelligence of Large Language Models in Long-Context Interaction},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International... | |
2026 | 08pxmTLKTT | [2, 4, 4, 6] | [
{
"content": "The paper proposes to better address the problem of object ambiguity in interactive segmentation (IS) models with SmartSAM method. To achieve it, an agent generates a few branches with interactions (positive/negative click or bbox), considering the first user interaction, to produce candidate mask... | {
"cdate": 1758013764632,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025smartsam,\ntitle={Smart{SAM}: Segment Ambiguous Objects like Smart Annotators},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\... | |
2026 | 08tuDzMDEn | [4, 4, 6, 4] | [
{
"content": "This paper studies the task of counterargument generation and introduces a persona-based approach with Tree-of-Thought (ToT) content planning. Specifically, given an original post (OP), the system first constructs three distinct personas, each representing a unique perspective. These personas then... | {
"cdate": 1758269143514,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025ptcg,\ntitle={{PTCG}: Persona-guided Tree-based Counterargument Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nye... | |
2026 | 09FE8nv4sV | [6, 4, 4, 4] | [
{
"content": "This paper introduces \"MILP-Retrieval,\" a novel framework to address the critical problem of data scarcity for training data-driven Mixed-Integer Linear Programming (MILP) solvers. The authors argue that existing generative methods (e.g., VAEs, diffusion models) are highly inefficient, as they r... | {
"cdate": 1758211996865,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025targeted,\ntitle={Targeted {MILP} Instance Generation via Formulation Code Retrieval},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representat... | |
2,026 | 09Nj40ScvC | [
2,
6,
4
] | [
{
"content": "The paper proposes a heuristic to select preference pairs to train PRM. Meanwhile, it also modifies the advantage function of original GRPO to adapt to process reward settings.",
"id": "rgdCO8yLHy",
"rating": 2
},
{
"content": "The paper presents a novel reinforcement learning fram... | {
"cdate": 1758348612106,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025preferencebased,\ntitle={Preference-Based Process Reward Model for Robust Mathematical Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning... | |
2,026 | 09YSBymX6O | [
6,
4,
8
] | [
{
"content": "The authors propose using spatial point processes as a self-supervision prior that explicitly models spatial distributions of objects to address the gap in previous un- and self-supervised methods that miss the spatial correlations. Thus, the paper proposes spatially informed variational autoenco... | {
"cdate": 1758200193649,
"content": {
"TLDR": {
"value": "We present spatially informed variational autoencoders that use stochastic point processes to learn interpretable spatial patterns from images."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025spatially,\ntitle={Spatially Inf... | |
2,026 | 09lmwhDqZ3 | [
6,
4,
6,
6
] | [
{
"content": "This paper focuses on the task of automatic formalization in theorem proving, which currently faces two major challenges: model hallucination and the semantic gap caused by ambiguous or missing premises in natural language descriptions. To address these issues, the authors propose a framework call... | {
"cdate": 1758191085758,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025automated,\ntitle={Automated Formalization via Conceptual Retrieval-Augmented {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representat... | |
2,026 | 0A2rXt5SAy | [
2,
6,
4,
4
] | [
{
"content": "This paper proposes a method that solves the pipeline scheduling problem using mixed-integer linear programming (MILP), treating activation offloading as a decision variable. It models whether activations are offloaded or retained in GPU memory and enforces constraints on data dependencies, resour... | {
"cdate": 1758208530100,
"content": {
"TLDR": {
"value": "Use Mathematical Programming to model Pipeline Parallelism with Offloading to balance efficiency and memory requirement."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025optpipe,\ntitle={OptPipe: Memory- and Scheduling-Optimi... | |
2,026 | 0A3qzLmRHd | [
4,
2,
4
] | [
{
"content": "The paper introduces SECVULEVAL, a new vulnerability detection dataset focusing on LLM-based solutions and C/C++ projects. The authors collected the vulnerability data from the national vulnerability database (NVD), and extracted line-level vulnerability labels. Furthermore, they used an LLM to ex... | {
"cdate": 1758230106795,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025secvuleval,\ntitle={{SECVULEVAL}: Benchmarking {LLM}s for Real-World C/C++ Vulnerability Detection},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learni... | |
2,026 | 0A4Uf88pog | [
4,
4,
6
] | [
{
"content": "This paper introduces VERINA (Verifiable Code Generation Arena), a benchmark comprising 189 manually curated programming tasks in Lean for evaluating end-to-end verifiable code generation. The benchmark assesses three foundational tasks—code generation (CodeGen), specification generation (SpecGen)... | {
"cdate": 1758225274549,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025verina,\ntitle={{VERINA}: Benchmarking Verifiable Code Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025}... | |
2,026 | 0A4iQqwwLG | [
2,
4,
4,
4
] | [
{
"content": "The paper presents a new framework to improve long-form video understanding, by incorporating a SLM-based answer clue generation that complements query-based retrieval. The system also incorporates a compressor to summarize frame features into compact tokens, reducing token load while maintaining ... | {
"cdate": 1758336799281,
"content": {
"TLDR": {
"value": "We propose ClueVQA, a novel retrieval framework enhances query-based frame retrieval for VideoQA by generating and integrating supplementary answer clues, leading to improved performance across long-form video benchmarks and various VideoLLMs."
... | |
2,026 | 0ACUx9pMWJ | [
6,
4,
4,
6
] | [
{
"content": "The authors propose to study the ability of execution-guided program synthesis approach and transduction approaches with test-time training to generalize to new ARC-AGI-like tasks at test time. Train and test tasks are designed by hand to involve different compositions of the same set of predefine... | {
"cdate": 1758130722525,
"content": {
"TLDR": {
"value": "Comparing the OOD generalization performance of execution-guided neural program synthesis with test-time fine-tuning on the ARC-AGI domain"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025outofdistribution,\ntitle={Out-of-Dis... | |
2,026 | 0Af7UiJISU | [
6,
6,
4
] | [
{
"content": "THOR is a technically competent and empirically thorough paper that explores hierarchical optimization for tool-integrated reasoning—a topic of rising importance in post-RLHF LLM training. The paper’s main contributions, especially the combination of TIR data generation (TIRGen) and dual-level GRP... | {
"cdate": 1757839164924,
"content": {
"TLDR": {
"value": "We introduce THOR, a tool-integrated framework that combines hierarchical reinforcement learning with self-correcting inference to achieve SOTA mathematical reasoning."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025thor,\nt... | |
2,026 | 0B5K9pIdSK | [
4,
6,
2,
6
] | [
{
"content": "This paper introduces TITOK, a framework for transferring LoRA adapters between large language models through token-level contrastive knowledge transfer.\nUnlike existing methods such as TransLoRA, which rely on synthetic data filtered by an external discriminator, TITOK uses a self-contained cont... | {
"cdate": 1758360984163,
"content": {
"TLDR": {
"value": "We propose a new framework TiTok, which enables effective LoRA transplantation through token-level knowledge transfer"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025titok,\ntitle={TiTok: Transfer Token-level Knowledge via C... | |
2,026 | 0BD2dCM4Ig | [
2,
2,
6,
2
] | [
{
"content": "This paper studies graph foundation model, and proposes a MoE module together with a graph self-supervised learning-based regularization term to enhance the performance.",
"id": "OopTbQalPe",
"rating": 2
},
{
"content": "The paper studies graph foundation models (GFM) through the l... | {
"cdate": 1757136379041,
"content": {
"TLDR": {
"value": "We propose MoT to address optimization pitfalls in GFMs and achieve SOTA cross-domain generalization."
},
"_bibtex": {
"value": "@misc{\nli2025two,\ntitle={Two Sides of the Same Optimization Coin: Model Degradation and Representation... | |
2,026 | 0BWu7DLuIU | [
2,
2,
4,
2
] | [
{
"content": "This paper considers the ethical implications of using Homomorphic Encryption (HE). Specifically, they investigate the desirable outcomes (The Good), trade-offs related to accountability, interpretability and responsibility (The Bad) and ways in which HE can be used to mask unethical practices (Th... | {
"cdate": 1757316151995,
"content": {
"TLDR": {
"value": "Privacy is not everything in privacy-preserving Artificial Intelligence with Homomorphic Encryption."
},
"_bibtex": {
"value": "@misc{\nfalcetta2025the,\ntitle={The Ethics of Privacy-Preserving Deep Learning: the Good, the Bad, and t... | |
2,026 | 0BhjNjxpaC | [
2,
8,
8,
2
] | [
{
"content": "This work studies the question: \"how many 'reasoning steps' can an $L$-layer Transformer carry out in a single forward pass?\". To answer this question, the paper posits a formulation of \"reasoning chains\" as sequences of pairs of integers $a_i^1 \\to a_i^2$, which can be permuted. It then form... | {
"cdate": 1757819222345,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025limit,\ntitle={Limit Analysis for Symbolic Multi-step Reasoning Tasks with Information Propagation Rules Based on Transformers},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth Intern... | |
2,026 | 0BkvUY61MX | [
6,
2,
8
] | [
{
"content": "1. The paper presents scaling laws in the multilingual setup across different axis:\n\n1.1 For repeated epochs in the monolingual setup\n\n1.2 To account for cross lingual transfer in a language mixture setup\n\n1.3. Account for the curse of multilinguality by providing a closed form approximation... | {
"cdate": 1758341769702,
"content": {
"TLDR": {
"value": "Scaling laws for multilingual pretraining, finetuning, and language transfer."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025atlas,\ntitle={{ATLAS}: Adaptive Transfer Scaling Laws for Multilingual Pretraining, Finetuning, a... | |
2,026 | 0CQnhxpE7w | [
4,
8,
4,
6
] | [
{
"content": "The authors propose EVCtrl, a training free method for speeding up inference of ControlNet based models. The method is based on the observation that when using sparse conditioning modalities mostly consisting of black pixels, only tokens corresponding to non-black regions are critical for updating... | {
"cdate": 1758260698841,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@misc{\nyang2025evctrl,\ntitle={{EVC}trl: Efficient Control Adapter for Visual Generation},\nauthor={Zixiang Yang and Yue Ma and Yinhan Zhang and Shanhui Mo and Dongrui Liu and Linfeng Zhang},\nyear={2025},\nurl={https://openr... | |
2,026 | 0CXjpAxHUE | [
8,
6,
4,
8
] | [
{
"content": "This paper presents a theoretical framework for understanding multi-epoch data reuse in the context of linear regression and its implications for data-scaling laws in large model training. It shows that larger datasets can be repeated more times effectively. Simulation and LLM pretraining experime... | {
"cdate": 1758246906466,
"content": {
"TLDR": {
"value": "Theoretical analysis of multi-epoch scaling in linear regression"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025larger,\ntitle={Larger Datasets Can Be Repeated More: A Theoretical Analysis of Multi-Epoch Scaling in Linear R... | |
2,026 | 0CZAimzcVr | [
6,
6,
6,
6
] | [
{
"content": "The paper studies a quite general optimization problem, namely maximizing a function F on [0,1]^n under some contraints defining a feasible set $\\cal C \\in [0,1]^n$. The function is DR-submodular. The feasible set is convex. Such a problem is typically solved using gradient ascend, and there is ... | {
"cdate": 1758106456114,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025drsubmodular,\ntitle={{DR}-Submodular Maximization with Stochastic Biased Gradients: Classical and Quantum Gradient Algorithms},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth Intern... | |
2,026 | 0CajQNVKyB | [
8,
6,
6,
4
] | [
{
"content": "This paper introduces HERO (Hybrid Ensemble Reward Optimization)- a framework that integrates dense signals from reward models with binary-valued feedback from rule-based verifiers. The paper systematically reports the merits of each individual approach while highlighting its limitations; they fur... | {
"cdate": 1758300838198,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025hybrid,\ntitle={Hybrid Reinforcement: when reward is sparse, better to be dense},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations}... | |
2,026 | 0Cv0whP7l8 | [
6,
4,
4,
6
] | [
{
"content": "This paper introduces a framework to diagnose and mitigate modality interference in multimodal large language models (MLLMs)—a phenomenon where irrelevant or misleading modality signals degrade model performance. The authors define the broader cross-modality competency problem, identifying modalit... | {
"cdate": 1758184011136,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025diagnosing,\ntitle={Diagnosing and Mitigating Modality Interference in Multimodal Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on L... | |
2,026 | 0Cv9PwL7cI | [
8,
4,
4,
6
] | [
{
"content": "This paper investigates the limitations of fixed block-size semi-autoregressive decoding in diffusion-based large language models (dLLMs). The authors identify two inefficiencies — Late Decoding Overhead and Premature Decoding Error — that arise when fixed-size blocks fail to align with semantic s... | {
"cdate": 1756733104796,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025semanticaware,\ntitle={Semantic-Aware Diffusion {LLM} Inference With Adaptive Block Size},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Represe... | |
2,026 | 0DaB4jeGaf | [
4,
6,
4
] | [
{
"content": "This paper proposes a quantile regression framework, where an ReLU neural network is used to approximate the quantile function, and a convolution-type smooth quantile loss is used to train the network. Experimental results on synthetic data show that the proposed framework outperforms ReLU network... | {
"cdate": 1758039348094,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025conquer,\ntitle={Conquer the Quantile: Convolution-Smoothed Quantile Regression with Neural Networks and Minimax Guarantees},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth Internati... | |
2,026 | 0DekoBl3te | [
8,
2,
4,
2
] | [
{
"content": "This paper proposes Dual-MoE, a dual-path mixture-of-experts framework for multivariate time series forecasting that jointly models temporal distribution shifts and noisy channel dependencies. The model consists of two complementary components: the Temporal Fusion MoE and the Channel Fusion MoE. A... | {
"cdate": 1758115541458,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025dualmoe,\ntitle={Dual-MoE: Learning Time and Channel Dependencies via Dual Mixture-of-Experts for Time Series Forecasting},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth Internation... | |
2,026 | 0Dhpt9aY3n | [
6,
8,
2,
6
] | [
{
"content": "This paper introduces DeepSynth, a very challenging benchmark for evaluating LLM agents. DeepSynth consists of 120 diverse tasks created by 16 experts, where each task requires an agent to navigate through about 4 web pages and read up to 15 documents and tables. The tasks are designed to reflect ... | {
"cdate": 1758304923126,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025a,\ntitle={A Benchmark for Deep Information Synthesis},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},\nurl={http... | |
2,026 | 0DqB1vxGTn | [
2,
4,
2,
6
] | [
{
"content": "The paper propose a method to train a model that predicts depth map from a single image at metric scale. Real-world camera heights are assumed to be known during training and is used to recover metric depths, which are then used as pseudo label ground-truth depth to supervise another student netwo... | {
"cdate": 1757487648325,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@misc{\nzhang2025enhancing,\ntitle={Enhancing Self-Supervised Depth Estimation Through Camera Parameter Priors},\nauthor={Jinchang Zhang and Xue Iuan Wong and Guoyu Lu},\nyear={2025},\nurl={https://openreview.net/forum?id=0DqB... | |
2,026 | 0EV92jgJaZ | [
2,
2,
6
] | [
{
"content": "Knowledge compilation approaches in probabilistic answer set programming (PASP) can be categorised into top-down or bottom-up approaches.\nTop-down typically require a CNF as input, which generally means additional auxiliary variables are introduced to first transform the PASP program into a CNF.\... | {
"cdate": 1758322135358,
"content": {
"TLDR": {
"value": "We propose a non-incremental approach for Bottom-Up Knowledge Compilation of Probabilistic Answer Set programs."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025nonincremental,\ntitle={Non-Incremental Bottom-Up Knowledge Comp... | |
2,026 | 0EXuliYnfW | [
4,
2,
4,
6
] | [
{
"content": "This paper propose PPBoost , a method proposed to tackle zero-shot medical image segmentation by bridging the gap between text prompts and visual prompts. PPBoost progressively transform a natural language description of the target anatomy into a high-quality spatial bounding box, which then guide... | {
"cdate": 1758295172014,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025ppboost,\ntitle={{PPBOOST}: {PROGRESSIVE} {PROMPT} {BOOSTING} {FOR} {TEXT}-{DRIVEN} {MEDICAL} {IMAGE} {SEGMENTATION}},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Co... | |
2,026 | 0FH7ceYzCq | [
8,
4,
4,
4
] | [
{
"content": "This paper proposes a sequence-agnostic method for continuous multi-modal clustering, Sequence-Agnostic Continual Multi-Modal Clustering (SCMC). It aims to address two core issues in existing continuous multi-modal clustering: the unreliable fusion of historical and new modal information, and the ... | {
"cdate": 1757658463010,
"content": {
"TLDR": {
"value": "We propose a novel sequence-agnostic continual multi-modal clustering method."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025sequenceagnostic,\ntitle={Sequence-agnostic Continual Multi-modal Clustering},\nauthor={Anonymous}... | |
2,026 | 0FJYicpOj0 | [
6,
6,
4,
8
] | [
{
"content": "The paper introduces ε-Gaussian certifiability (GPAR)—a new theoretical notion for analyzing machine unlearning in high-dimensional regimes (p ~ n). It reformulates unlearning guarantees via hypothesis testing and Gaussian trade-off functions, showing that a single Newton step with Gaussian noise ... | {
"cdate": 1758328790343,
"content": {
"TLDR": {
"value": "We introduce the canonical dimension free notion of certifiability suitable to high dimensions and show its utility via a Newton based unlearning algorithm"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025gaussian,\ntitle={Ga... | |
2,026 | 0FN0u6qTAi | [
4,
4,
2,
6
] | [
{
"content": "This paper introduces the \"Protein-as-Second-Language\" framework, which aims to enable large language models (LLMs) to interpret protein (amino acid) sequences as if they were acquiring a second symbolic language. By curating a bilingual dataset of almost 80k protein-question-answer triples and ... | {
"cdate": 1757862318304,
"content": {
"TLDR": {
"value": "We propose a protein–language framework and bilingual dataset that enable LLMs to reason about protein function via context-driven learning without fine-tuning."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025protein,\ntitle... | |
2,026 | 0Fc9yLlIYX | [
4,
2,
2,
4
] | [
{
"content": "This paper proposes a systematic and comprehensive analysis of temporal bias in Large Audio Language Models (LALMs). Through experiments, the paper reveals that LALMs consistently predict the temporal occurrence of acoustic events earlier. Through detailed analysis, the paper shows that in LALMs: ... | {
"cdate": 1758297559979,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025lost,\ntitle={Lost in Time: Systematic Temporal Bias in Large Audio Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representati... | |
2,026 | 0FhrtdKLtD | [
6,
6,
6,
2
] | [
{
"content": "This paper introduces MindCube, a new benchmark for evaluating the ability of VLMs to form \"spatial mental models\" from limited views. The authors show that existing models perform poorly, then propose a \"map-then-reason\" approach, which is to train a model to first generate a structured cogni... | {
"cdate": 1757013520622,
"content": {
"TLDR": {
"value": "We propose MindCube and find existing VLMs perform poorly on it. Supervising models to first generate cognitive maps and then reason upon them proves to be a quite effective approximation for spatial mental modeling from limited views."
},
... | |
2,026 | 0G8Cq9z2Hp | [
4,
4,
4,
6,
6
] | [
{
"content": "This paper addresses the computational complexity issue in AlphaFold by presenting a hierarchical pipeline, refered to as HieraFold, which decomposes the end-to-end structure prediction task in a coarse-to-fine manner.\nHieraFold first performs a coarse global prediction using a \"lightweight\" ve... | {
"cdate": 1758359066119,
"content": {
"TLDR": {
"value": "We introduce HierAFold, a hierarchical pipeline that exploits the modularity of large complexes via PAE-guided (Predicted Aligned Error) subunit decomposition, targeted interface-aware refinement, and confidence-weighted assembly."
},
"_bi... | |
2,026 | 0GMt2OWeCb | [
4,
4,
2,
2
] | [
{
"content": "This paper addresses two critical limitations of existing memory-augmented Large Language Model (LLM)-based agents: low data efficiency (relying on extensive task-specific interaction data for early training) and poor adaptability (using static memory retrieval strategies that fail to balance cros... | {
"cdate": 1758343379022,
"content": {
"TLDR": {
"value": "We propose a memory-augmented LLM agent with cross-task learning and dynamic memory retrieval to improve adaptability and efficiency in multi-turn instruction-following tasks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025... | |
2,026 | 0GNBqoYcAP | [
4,
4,
6
] | [
{
"content": "his paper examines how world models can learn and adapt through context, focusing on in-context learning (ICL) within both MDP and POMDP settings. The authors distinguish between two key processes, namely In-Context Environment Learning (ICEL) and In-Context Environment Recognition (ICER). Further... | {
"cdate": 1758188335809,
"content": {
"TLDR": {
"value": "We formalize, bound, and validate in-context environment learning, showing that long-context, diverse-input world models can self-adapt by recognizing or learning new dynamics without parameter updates."
},
"_bibtex": {
"value": "@in... | |
2,026 | 0GaCfBRFnf | [
6,
4,
6,
8
] | [
{
"content": "This paper introduces ProActive Self-Refinement (PASR) as a novel method for enabling Large Language Models (LLMs) to refine their outputs during the generation process, rather than as a post-hoc step. \n\nThe authors formalize this as a MDP and use RL to train models to decide whether, when, and ... | {
"cdate": 1758282395015,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025a,\ntitle={A Stitch in Time Saves Nine: Proactive Self-Refinement for Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representa... | |
2,026 | 0GdjEJCHOE | [
6,
2,
2,
2
] | [
{
"content": "This paper presents DRMLP, a Dynamic Regularized Multi-Layer Perceptron framework for discovering Granger causal structure in multivariate time series. DRMLP introduces a dual-branch neural architecture, combining a linear (MLP-based) causal discovery path with a recurrent (LSTM-based) sampling st... | {
"cdate": 1757163600341,
"content": {
"TLDR": {
"value": "A dynamic regularization approach for Granger-based causal discovery achieves superior performance on simulated and real-world time series data."
},
"_bibtex": {
"value": "@misc{\nliu2025drmlp,\ntitle={{DRMLP}: Dynamic Regularized Mu... | |
2,026 | 0GjORP5Duq | [
4,
6,
2,
6
] | [
{
"content": "The paper addresses the persistent challenge of compositional reasoning in vision–language models such as CLIP. \nIt proposes RACA-CLIP, a structured contrastive learning framework that integrates scene-graph supervision to align visual and textual representations at the object, attribute, and rel... | {
"cdate": 1758352745950,
"content": {
"TLDR": {
"value": "Building compositionality robust CLIP model by region aware training objectives, pushing them towards better reasoning."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025racaclip,\ntitle={{RACA}-{CLIP}: Relation-Aware Composit... | |
2,026 | 0GlStRq4Xw | [
0,
8,
2,
6
] | [
{
"content": "This paper proposes a machine learning architecture for constrained optimization learning that approximates an iterative descent algorithm. The proposed approach integrates an active set strategy, an approximate descent direction computation, and a projection operator to ensure equality constraint... | {
"cdate": 1757218565386,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025descentnet,\ntitle={Descent-Net: Learning Descent Directions for Constrained Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Repres... | |
2,026 | 0GpolO2auw | [
8,
6,
4
] | [
{
"content": "This paper considers the task of community detection in well-clusterable graphs with sublinear space. The goal is to design a data structure D that fits in sublinear memory, and that enables one to query the cluster assignment for each node in sublinear time. Previous approaches all require $\\Ome... | {
"cdate": 1758270235850,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025sublinear,\ntitle={Sublinear Spectral Clustering Oracle with Little Memory},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nye... | |
2,026 | 0H5iD4he7R | [
2,
6,
6,
2
] | [
{
"content": "The paper proposes f-DMU, a unified framework for diffusion model unlearning based on f-divergence. It generalizes existing MSE-based and KL-based unlearning approaches by allowing any f-divergence. The method provides two formulations—closed-form and variational—to balance simplicity and generali... | {
"cdate": 1758357192051,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025a,\ntitle={A Unified Framework for Diffusion Model Unlearning with f-Divergence},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations}... | |
2,026 | 0HcqZkv1zs | [
4,
4,
4,
8
] | [
{
"content": "The authors propose a method to incorporate semantic context into theorem proving models. They propose a clarity score to evaluate the understanding of this context. They demonstrate that this clarity helps downstream performance in proving theorems.",
"id": "XnDxg2qrMS",
"rating": 4
},
... | {
"cdate": 1758161650677,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025clarifying,\ntitle={Clarifying Before Reasoning: A Coq Prover with Structural Context},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representa... | |
2,026 | 0I2N8KxOAo | [
2,
2,
2,
6
] | [
{
"content": "This paper proposes DeFa, a framework for non-stationary multivariate time series forecasting. DeFa consists of two main components: (1) NAILong, a decomposition strategy that separates a time series into a time-varying Amplifier, normalized Seasonality, and sparse Residual via a multiplicative fo... | {
"cdate": 1756731475441,
"content": {
"TLDR": {
"value": "DeFA introduces a decomposition-based framework with tensor autoregressive forecasting that effectively captures non-stationary dynamics and long-term dependencies in multivariate time series."
},
"_bibtex": {
"value": "@inproceeding... | |
2,026 | 0IFqBfX7Ak | [
4,
2,
6
] | [
{
"content": "This paper introduces Integrated Policy Gradient (IPG), a method intended to attribute and modulate reasoning components in large language models by applying a policy‐gradient–like formulation on hidden activations, followed by scaling of the identified components. The aim is to locate “reasoning ... | {
"cdate": 1758256779840,
"content": {
"TLDR": {
"value": "A method to causally control and interpret LLM reasoning behaviors by identifying and intevening internal reasoning-critical components."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025interpreting,\ntitle={Interpreting and ... | |
2,026 | 0IN8RiFbmg | [
4,
4,
2,
2
] | [
{
"content": "This paper investigates the performance of Parameter-Efficient Fine-Tuning (PEFT) methods under increasing distribution shifts across tasks. We introduce a novel PEFT technique, AUG, which augments matrix-vector products with learnable parameters conditioned on both the input data and pretrained w... | {
"cdate": 1758158798611,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025scaling,\ntitle={Scaling Parameter-Efficiency with Distribution Shifts for Domain Adaptation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Rep... | |
2,026 | 0IWZjbMmry | [
4,
4,
2,
2
] | [
{
"content": "This paper introduces LayerDecompose, a compression framework for large language models that combines weight sharing with low-rank adapters. The key idea is to represent groups of consecutive layers with a single shared weight matrix W, augmented with layer-specific low-rank residuals and per-chan... | {
"cdate": 1758267368176,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025layerdecompose,\ntitle={LayerDecompose: Exploring weight sharing for Large Language Model Compression},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Lea... | |
2,026 | 0IceiDrfxI | [
4,
2,
4,
4
] | [
{
"content": "This paper introduces NATLM, a framework that combines static analysis (AST/CFG) and LLM reasoning (Gemini Pro 1.5) to detect four NFT smart-contract defect types: ERC-721 Reentrancy, Public Burn, Risky Mutable Proxy, and Unlimited Minting. AST features are derived via CodeBERT; CFG features via T... | {
"cdate": 1757838049602,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025natlm,\ntitle={{NATLM}: Detecting Defects in {NFT} Smart Contracts Leveraging {LLM}},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representati... | |
2,026 | 0Iw52EDu82 | [
2,
4,
6,
6
] | [
{
"content": "This paper investigates the scaling law of fully sparsely-activated language models. They first conduct experiments to compare different activation functions, sparsification functions, and gradient estimation methods. Then, they use scaling law (the relationship between cross-entropy loss and trai... | {
"cdate": 1758204654491,
"content": {
"TLDR": {
"value": "In this work, we investigate the architecture and scaling laws for fully sparsely-activated models, where every activation in linear transformations is sparse."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025scaling,\ntitle=... | |
2,026 | 0IwSQsqMU9 | [
8,
4,
4
] | [
{
"content": "Quite interesting work; A novel Darwinian perspective to optimization dynamics in NN. The paper presents a novel bio-inspired optimization method called Natural Selection (NS) that introduces explicit competition among training samples. By computing competitive scores through image stitching and d... | {
"cdate": 1757926747886,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025darwinian,\ntitle={Darwinian Optimization: Training Deep Networks with Natural Selection},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Represe... | |
2,026 | 0JLUFJMo5p | [
0,
2,
0,
2
] | [
{
"content": "The manuscript strongly resembles AI-generated content and may have been produced as an internal test for prospective AI researchers. If so, it suggests that the current state of such roles remains immature and requires further development.",
"id": "nKMdFIiiCf",
"rating": 0
},
{
"c... | {
"cdate": 1758368192077,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025dynamic,\ntitle={Dynamic Task-Embedded Reward Machines for {\\textbackslash}{\\textbackslash} Adaptive Code Generation and Manipulation {\\textbackslash}{\\textbackslash} in Reinforcement Learning... | |
2,026 | 0JWhSwwXak | [4, 4, 6, 6] | [
{
"content": "This paper proposes SYMMATIKA, a symbolic regression framework that combines multi-island genetic programming and a reusable symbol library to accelerate search, supporting both explicit (y=f(x)) and implicit (F(x,y)=0) regression tasks. Experimental results demonstrate its superiority over existi... | {
"cdate": 1758226660875,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025symmatika,\ntitle={SymMatika: Structure-Aware Symbolic Discovery},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations},\nyear={2025},... | |
2,026 | 0JYtXNl7ns | [2, 2, 4, 4] | [
{
"content": "The paper introduces an inference-time scaling framework (SHARS), aiming to allocate additional computational resources to detect and mitigate hallucinations during decoding.\nAs a main component of the framework, the uncertainty-based hallucination detection method HalluSE which aims to improve s... | {
"cdate": 1758288192041,
"content": {
"TLDR": {
"value": "an inference-time scaling framework for hallucination mitigation in open-ended generation."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2025building,\ntitle={Building Reliable Long-Form Generation via Step-Wise Hallucination ... | |
2,026 | 0JayjvOKxt | [2, 4, 6, 4] | [
{
"content": "This paper proposes the Adaptive and Selective Reset (ASR) scheme to address the problem of model collapse in long-term Test-Time Adaptation (TTA). The main contributions are: 1) The ASR mechanism dynamically determines when and which parts of the model to reset; 2) An importance-aware knowledge r... | {
"cdate": 1758270901913,
"content": {
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2025when,\ntitle={When and Where to Reset Matters for Long-Term Test-Time Adaptation},\nauthor={Anonymous},\nbooktitle={Submitted to The Fourteenth International Conference on Learning Representations... |