| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2505.09265 | MetaUAS: Universal Anomaly Segmentation with One-Prompt Meta-Learning | ['Bin-Bin Gao'] | ['cs.CV', 'cs.AI'] | Zero- and few-shot visual anomaly segmentation relies on powerful vision-language models that detect unseen anomalies using manually designed textual prompts. However, visual representations are inherently independent of language. In this paper, we explore the potential of a pure visual foundation model as an alternati... | 2025-05-14T10:25:26Z | Accepted by NeurIPS 2024 | null | null | null | null | null | null | null | null | null |
| 2505.09358 | Marigold: Affordable Adaptation of Diffusion-Based Image Generators for Image Analysis | ['Bingxin Ke', 'Kevin Qu', 'Tianfu Wang', 'Nando Metzger', 'Shengyu Huang', 'Bo Li', 'Anton Obukhov', 'Konrad Schindler'] | ['cs.CV', 'cs.LG'] | The success of deep learning in computer vision over the past decade has hinged on large labeled datasets and strong pretrained models. In data-scarce settings, the quality of these pretrained models becomes crucial for effective transfer learning. Image classification and self-supervised learning have traditionally be... | 2025-05-14T13:07:03Z | Journal extension of our CVPR 2024 paper, featuring new tasks, improved efficiency, high-resolution capabilities, and enhanced accessibility | null | null | Marigold: Affordable Adaptation of Diffusion-Based Image Generators for Image Analysis | ['Bingxin Ke', 'Kevin Qu', 'Tianfu Wang', 'Nando Metzger', 'Shengyu Huang', 'Bo Li', 'Anton Obukhov', 'Konrad Schindler'] | 2025 | arXiv.org | 1 | 134 | ['Computer Science'] |
| 2505.09372 | MAKE: Multi-Aspect Knowledge-Enhanced Vision-Language Pretraining for Zero-shot Dermatological Assessment | ['Siyuan Yan', 'Xieji Li', 'Ming Hu', 'Yiwen Jiang', 'Zhen Yu', 'Zongyuan Ge'] | ['cs.CV'] | Dermatological diagnosis represents a complex multimodal challenge that requires integrating visual features with specialized clinical knowledge. While vision-language pretraining (VLP) has advanced medical AI, its effectiveness in dermatology is limited by text length constraints and the lack of structured texts. In t... | 2025-05-14T13:24:08Z | MICCAI2025 early acceptance; First two authors contribute equally | null | null | null | null | null | null | null | null | null |
| 2505.09388 | Qwen3 Technical Report | ['An Yang', 'Anfeng Li', 'Baosong Yang', 'Beichen Zhang', 'Binyuan Hui', 'Bo Zheng', 'Bowen Yu', 'Chang Gao', 'Chengen Huang', 'Chenxu Lv', 'Chujie Zheng', 'Dayiheng Liu', 'Fan Zhou', 'Fei Huang', 'Feng Hu', 'Hao Ge', 'Haoran Wei', 'Huan Lin', 'Jialong Tang', 'Jian Yang', 'Jianhong Tu', 'Jianwei Zhang', 'Jianxin Yang',... | ['cs.CL'] | In this work, we present Qwen3, the latest version of the Qwen model family. Qwen3 comprises a series of large language models (LLMs) designed to advance performance, efficiency, and multilingual capabilities. The Qwen3 series includes models of both dense and Mixture-of-Expert (MoE) architectures, with parameter scale... | 2025-05-14T13:41:34Z | null | null | null | null | null | null | null | null | null | null |
| 2505.09498 | Flash-VL 2B: Optimizing Vision-Language Model Performance for Ultra-Low Latency and High Throughput | ['Bo Zhang', 'Shuo Li', 'Runhe Tian', 'Yang Yang', 'Jixin Tang', 'Jinhao Zhou', 'Lin Ma'] | ['cs.CV', 'cs.AI'] | In this paper, we introduce Flash-VL 2B, a novel approach to optimizing Vision-Language Models (VLMs) for real-time applications, targeting ultra-low latency and high throughput without sacrificing accuracy. Leveraging advanced architectural enhancements and efficient computational strategies, Flash-VL 2B is designed t... | 2025-05-14T15:45:17Z | 18 pages, 7 figures | null | null | Flash-VL 2B: Optimizing Vision-Language Model Performance for Ultra-Low Latency and High Throughput | ['Bo Zhang', 'Shuo Li', 'Runhe Tian', 'Yang Yang', 'Jixin Tang', 'Jinhao Zhou', 'Lin Ma'] | 2025 | arXiv.org | 0 | 80 | ['Computer Science'] |
| 2505.09655 | DRA-GRPO: Exploring Diversity-Aware Reward Adjustment for R1-Zero-Like Training of Large Language Models | ['Xiwen Chen', 'Wenhui Zhu', 'Peijie Qiu', 'Xuanzhao Dong', 'Hao Wang', 'Haiyu Wu', 'Huayu Li', 'Aristeidis Sotiras', 'Yalin Wang', 'Abolfazl Razi'] | ['cs.CL', 'cs.LG'] | Recent advances in reinforcement learning for language model post-training, such as Group Relative Policy Optimization (GRPO), have shown promise in low-resource settings. However, GRPO typically relies on solution-level and scalar reward signals that fail to capture the semantic diversity among sampled completions. Th... | 2025-05-14T02:02:32Z | null | null | null | null | null | null | null | null | null | null |
| 2505.09694 | EWMBench: Evaluating Scene, Motion, and Semantic Quality in Embodied World Models | ['Hu Yue', 'Siyuan Huang', 'Yue Liao', 'Shengcong Chen', 'Pengfei Zhou', 'Liliang Chen', 'Maoqing Yao', 'Guanghui Ren'] | ['cs.RO'] | Recent advances in creative AI have enabled the synthesis of high-fidelity images and videos conditioned on language instructions. Building on these developments, text-to-video diffusion models have evolved into embodied world models (EWMs) capable of generating physically plausible scenes from language commands, effec... | 2025-05-14T18:00:19Z | Website: https://github.com/AgibotTech/EWMBench | null | null | null | null | null | null | null | null | null |
| 2505.09723 | EnerVerse-AC: Envisioning Embodied Environments with Action Condition | ['Yuxin Jiang', 'Shengcong Chen', 'Siyuan Huang', 'Liliang Chen', 'Pengfei Zhou', 'Yue Liao', 'Xindong He', 'Chiming Liu', 'Hongsheng Li', 'Maoqing Yao', 'Guanghui Ren'] | ['cs.RO', 'cs.CV'] | Robotic imitation learning has advanced from solving static tasks to addressing dynamic interaction scenarios, but testing and evaluation remain costly and challenging due to the need for real-time interaction with dynamic environments. We propose EnerVerse-AC (EVAC), an action-conditional world model that generates fu... | 2025-05-14T18:30:53Z | Website: https://annaj2178.github.io/EnerverseAC.github.io | null | null | null | null | null | null | null | null | null |
| 2505.09930 | Rethinking Prompt Optimizers: From Prompt Merits to Optimization | ['Zixiao Zhu', 'Hanzhang Zhou', 'Zijian Feng', 'Tianjiao Li', 'Chua Jia Jim Deryl', 'Mak Lee Onn', 'Gee Wah Ng', 'Kezhi Mao'] | ['cs.CL'] | Prompt optimization (PO) provides a practical way to improve response quality when users lack the time or expertise to manually craft effective prompts. Existing methods typically rely on advanced, large-scale LLMs like GPT-4 to generate optimized prompts. However, due to limited downward compatibility, verbose, instru... | 2025-05-15T03:31:37Z | 21 pages, 14 figures | null | null | null | null | null | null | null | null | null |
| 2505.10046 | Exploring the Deep Fusion of Large Language Models and Diffusion Transformers for Text-to-Image Synthesis | ['Bingda Tang', 'Boyang Zheng', 'Xichen Pan', 'Sayak Paul', 'Saining Xie'] | ['cs.CV'] | This paper does not describe a new method; instead, it provides a thorough exploration of an important yet understudied design space related to recent advances in text-to-image synthesis -- specifically, the deep fusion of large language models (LLMs) and diffusion transformers (DiTs) for multi-modal generation. Previo... | 2025-05-15T07:43:23Z | null | null | null | null | null | null | null | null | null | null |
| 2505.10238 | MTVCrafter: 4D Motion Tokenization for Open-World Human Image Animation | ['Yanbo Ding', 'Xirui Hu', 'Zhizhi Guo', 'Chi Zhang', 'Yali Wang'] | ['cs.CV'] | Human image animation has gained increasing attention and developed rapidly due to its broad applications in digital humans. However, existing methods rely largely on 2D-rendered pose images for motion guidance, which limits generalization and discards essential 3D information for open-world animation. To tackle this p... | 2025-05-15T12:50:29Z | null | null | null | null | null | null | null | null | null | null |
| 2505.10292 | StoryReasoning Dataset: Using Chain-of-Thought for Scene Understanding and Grounded Story Generation | ['Daniel A. P. Oliveira', 'David Martins de Matos'] | ['cs.CV', 'cs.CL', 'I.2.10; I.2.7'] | Visual storytelling systems struggle to maintain character identity across frames and link actions to appropriate subjects, frequently leading to referential hallucinations. These issues can be addressed through grounding of characters, objects, and other entities on the visual elements. We propose StoryReasoning, a da... | 2025-05-15T13:42:14Z | 31 pages, 14 figures | null | null | StoryReasoning Dataset: Using Chain-of-Thought for Scene Understanding and Grounded Story Generation | ['Daniel A. P. Oliveira', 'David Martins de Matos'] | 2025 | arXiv.org | 0 | 42 | ['Computer Science'] |
| 2505.10294 | MIPHEI-ViT: Multiplex Immunofluorescence Prediction from H&E Images using ViT Foundation Models | ['Guillaume Balezo', 'Roger Trullo', 'Albert Pla Planas', 'Etienne Decenciere', 'Thomas Walter'] | ['cs.CV', 'q-bio.TO', '68T07 (Primary), 92C55 (Secondary)', 'I.4.9; I.2.10; I.5.4; J.3'] | Histopathological analysis is a cornerstone of cancer diagnosis, with Hematoxylin and Eosin (H&E) staining routinely acquired for every patient to visualize cell morphology and tissue architecture. On the other hand, multiplex immunofluorescence (mIF) enables more precise cell type identification via proteomic markers,... | 2025-05-15T13:42:48Z | null | null | null | MIPHEI-ViT: Multiplex Immunofluorescence Prediction from H&E Images using ViT Foundation Models | ['Guillaume Balezo', 'R. Trullo', 'Albert Pla Planas', 'Etienne Decencière', 'Thomas Walter'] | 2025 | arXiv.org | 0 | 47 | ['Computer Science', 'Biology'] |
| 2505.10446 | Reinforcing the Diffusion Chain of Lateral Thought with Diffusion Language Models | ['Zemin Huang', 'Zhiyang Chen', 'Zijun Wang', 'Tiancheng Li', 'Guo-Jun Qi'] | ['cs.CL'] | We introduce the Diffusion Chain of Lateral Thought (DCoLT), a reasoning framework for diffusion language models. DCoLT treats each intermediate step in the reverse diffusion process as a latent "thinking" action and optimizes the entire reasoning trajectory to maximize the reward on the correctness of the final answer... | 2025-05-15T16:06:32Z | null | null | null | null | null | null | null | null | null | null |
| 2505.10475 | Parallel Scaling Law for Language Models | ['Mouxiang Chen', 'Binyuan Hui', 'Zeyu Cui', 'Jiaxi Yang', 'Dayiheng Liu', 'Jianling Sun', 'Junyang Lin', 'Zhongxin Liu'] | ['cs.LG', 'cs.CL'] | It is commonly believed that scaling language models should commit a significant space or time cost, by increasing the parameters (parameter scaling) or output tokens (inference-time scaling). We introduce the third and more inference-efficient scaling paradigm: increasing the model's parallel computation during both t... | 2025-05-15T16:24:45Z | null | null | null | Parallel Scaling Law for Language Models | ['Mouxiang Chen', 'Binyuan Hui', 'Zeyu Cui', 'Jiaxin Yang', 'Dayiheng Liu', 'Jianling Sun', 'Junyang Lin', 'Zhongxin Liu'] | 2025 | arXiv.org | 2 | 100 | ['Computer Science'] |
| 2505.10518 | Multi-Token Prediction Needs Registers | ['Anastasios Gerontopoulos', 'Spyros Gidaris', 'Nikos Komodakis'] | ['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG'] | Multi-token prediction has emerged as a promising objective for improving language model pretraining, but its benefits have not consistently generalized to other settings such as fine-tuning. In this paper, we propose MuToR, a simple and effective approach to multi-token prediction that interleaves learnable register t... | 2025-05-15T17:25:03Z | null | null | null | null | null | null | null | null | null | null |
| 2505.10527 | WorldPM: Scaling Human Preference Modeling | ['Binghai Wang', 'Runji Lin', 'Keming Lu', 'Le Yu', 'Zhenru Zhang', 'Fei Huang', 'Chujie Zheng', 'Kai Dang', 'Yang Fan', 'Xingzhang Ren', 'An Yang', 'Binyuan Hui', 'Dayiheng Liu', 'Tao Gui', 'Qi Zhang', 'Xuanjing Huang', 'Yu-Gang Jiang', 'Bowen Yu', 'Jingren Zhou', 'Junyang Lin'] | ['cs.CL'] | Motivated by scaling laws in language modeling that demonstrate how test loss scales as a power law with model and dataset sizes, we find that similar laws exist in preference modeling. We propose World Preference Modeling (WorldPM) to emphasize this scaling potential, where World Preference embodies a unified represe... | 2025-05-15T17:38:37Z | null | null | null | WorldPM: Scaling Human Preference Modeling | ['Bing Wang', 'Runji Lin', 'Keming Lu', 'Le Yu', 'Zhenru Zhang', 'Fei Huang', 'Chujie Zheng', 'Kai Dang', 'Yang Fan', 'Xingzhang Ren', 'An Yang', 'Binyuan Hui', 'Dayiheng Liu', 'Tao Gui', 'Qi Zhang', 'Xuanjing Huang', 'Yu-Gang Jiang', 'Bowen Yu', 'Jingren Zhou', 'Junyang Lin'] | 2025 | arXiv.org | 1 | 55 | ['Computer Science'] |
| 2505.10554 | Beyond 'Aha!': Toward Systematic Meta-Abilities Alignment in Large Reasoning Models | ['Zhiyuan Hu', 'Yibo Wang', 'Hanze Dong', 'Yuhui Xu', 'Amrita Saha', 'Caiming Xiong', 'Bryan Hooi', 'Junnan Li'] | ['cs.CL'] | Large reasoning models (LRMs) already possess a latent capacity for long chain-of-thought reasoning. Prior work has shown that outcome-based reinforcement learning (RL) can incidentally elicit advanced reasoning behaviors such as self-correction, backtracking, and verification, phenomena often referred to as the model's... | 2025-05-15T17:58:33Z | In Progress | null | null | null | null | null | null | null | null | null |
| 2505.10557 | MathCoder-VL: Bridging Vision and Code for Enhanced Multimodal Mathematical Reasoning | ['Ke Wang', 'Junting Pan', 'Linda Wei', 'Aojun Zhou', 'Weikang Shi', 'Zimu Lu', 'Han Xiao', 'Yunqiao Yang', 'Houxing Ren', 'Mingjie Zhan', 'Hongsheng Li'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Natural language image-caption datasets, widely used for training Large Multimodal Models, mainly focus on natural scenarios and overlook the intricate details of mathematical figures that are critical for problem-solving, hindering the advancement of current LMMs in multimodal mathematical reasoning. To this end, we p... | 2025-05-15T17:59:21Z | Accepted to ACL 2025 Findings | null | null | null | null | null | null | null | null | null |
| 2505.10717 | A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment | ['Jean-Philippe Corbeil', 'Amin Dada', 'Jean-Michel Attendu', 'Asma Ben Abacha', 'Alessandro Sordoni', 'Lucas Caccia', 'François Beaulieu', 'Thomas Lin', 'Jens Kleesiek', 'Paul Vozila'] | ['cs.CL', 'cs.AI'] | High computation costs and latency of large language models such as GPT-4 have limited their deployment in clinical settings. Small language models (SLMs) offer a cost-effective alternative, but their limited capacity requires biomedical domain adaptation, which remains challenging. An additional bottleneck is the unav... | 2025-05-15T21:40:21Z | null | null | null | null | null | null | null | null | null | null |
| 2505.10792 | Finetune-RAG: Fine-Tuning Language Models to Resist Hallucination in Retrieval-Augmented Generation | ['Zhan Peng Lee', 'Andre Lin', 'Calvin Tan'] | ['cs.CL'] | Retrieval-Augmented Generation (RAG) has emerged as a powerful framework to improve factuality in large language models (LLMs) by grounding their outputs in retrieved documents. However, ensuring perfect retrieval of relevant information remains challenging, and when irrelevant content is passed downstream to an LLM, i... | 2025-05-16T02:06:06Z | null | null | null | Finetune-RAG: Fine-Tuning Language Models to Resist Hallucination in Retrieval-Augmented Generation | ['Zhan Peng Lee', 'Andre Lin', 'Calvin Tan'] | 2025 | arXiv.org | 0 | 31 | ['Computer Science'] |
| 2505.10937 | Reasoning with OmniThought: A Large CoT Dataset with Verbosity and Cognitive Difficulty Annotations | ['Wenrui Cai', 'Chengyu Wang', 'Junbing Yan', 'Jun Huang', 'Xiangzhong Fang'] | ['cs.CL', 'cs.AI'] | The emergence of large reasoning models (LRMs) has transformed Natural Language Processing by excelling in complex tasks such as mathematical problem-solving and code generation. These models leverage chain-of-thought (CoT) processes, enabling them to emulate human-like reasoning strategies. However, the advancement of... | 2025-05-16T07:15:30Z | null | null | null | Reasoning with OmniThought: A Large CoT Dataset with Verbosity and Cognitive Difficulty Annotations | ['Wenrui Cai', 'Chengyu Wang', 'Junbing Yan', 'Jun Huang', 'Xiangzhong Fang'] | 2025 | arXiv.org | 1 | 39 | ['Computer Science'] |
| 2505.10978 | Group-in-Group Policy Optimization for LLM Agent Training | ['Lang Feng', 'Zhenghai Xue', 'Tingcong Liu', 'Bo An'] | ['cs.LG', 'cs.AI'] | Recent advances in group-based reinforcement learning (RL) have driven frontier large language models (LLMs) in single-turn tasks like mathematical reasoning. However, their scalability to long-horizon LLM agent training remains limited. Unlike static tasks, agent-environment interactions unfold over many steps and oft... | 2025-05-16T08:26:59Z | Preprint | null | null | null | null | null | null | null | null | null |
| 2505.11049 | GuardReasoner-VL: Safeguarding VLMs via Reinforced Reasoning | ['Yue Liu', 'Shengfang Zhai', 'Mingzhe Du', 'Yulin Chen', 'Tri Cao', 'Hongcheng Gao', 'Cheng Wang', 'Xinfeng Li', 'Kun Wang', 'Junfeng Fang', 'Jiaheng Zhang', 'Bryan Hooi'] | ['cs.AI', 'cs.CR'] | To enhance the safety of VLMs, this paper introduces a novel reasoning-based VLM guard model dubbed GuardReasoner-VL. The core idea is to incentivize the guard model to deliberatively reason before making moderation decisions via online RL. First, we construct GuardReasoner-VLTrain, a reasoning corpus with 123K samples... | 2025-05-16T09:46:10Z | null | null | null | null | null | null | null | null | null | null |
| 2505.11080 | BLEUBERI: BLEU is a surprisingly effective reward for instruction following | ['Yapei Chang', 'Yekyung Kim', 'Michael Krumdick', 'Amir Zadeh', 'Chuan Li', 'Chris Tanner', 'Mohit Iyyer'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Reward models are central to aligning LLMs with human preferences, but they are costly to train, requiring large-scale human-labeled preference data and powerful pretrained LLM backbones. Meanwhile, the increasing availability of high-quality synthetic instruction-following datasets raises the question: can simpler, re... | 2025-05-16T10:11:43Z | 28 pages, 11 figures, 15 tables; updated table 1 with random reward results, fixed broken references in appendix | null | null | BLEUBERI: BLEU is a surprisingly effective reward for instruction following | ['Yapei Chang', 'Yekyung Kim', 'Michael Krumdick', 'Amir Zadeh', 'Chuan Li', 'Chris Tanner', 'Mohit Iyyer'] | 2025 | arXiv.org | 0 | 73 | ['Computer Science'] |
| 2505.11095 | Towards Better Evaluation for Generated Patent Claims | ['Lekang Jiang', 'Pascal A Scherz', 'Stephan Goetz'] | ['cs.CL'] | Patent claims define the scope of protection and establish the legal boundaries of an invention. Drafting these claims is a complex and time-consuming process that usually requires the expertise of skilled patent attorneys, which can form a large access barrier for many small enterprises. To solve these challenges, res... | 2025-05-16T10:27:16Z | Accepted to ACL 2025. 14 pages, 8 tables | null | null | Towards Better Evaluation for Generated Patent Claims | ['Lekang Jiang', 'Pascal A Scherz', 'Stephan Goetz'] | 2025 | arXiv.org | 2 | 44 | ['Computer Science'] |
| 2505.11140 | Scaling Reasoning can Improve Factuality in Large Language Models | ['Mike Zhang', 'Johannes Bjerva', 'Russa Biswas'] | ['cs.CL', 'cs.AI'] | Recent studies on large language model (LLM) reasoning capabilities have demonstrated promising improvements in model performance by leveraging a lengthy thinking process and additional computational resources during inference, primarily in tasks involving mathematical reasoning (Muennighoff et al., 2025). However, it ... | 2025-05-16T11:39:33Z | null | null | null | Scaling Reasoning can Improve Factuality in Large Language Models | ['Mike Zhang', 'Johannes Bjerva', 'Russa Biswas'] | 2025 | arXiv.org | 0 | 7 | ['Computer Science'] |
| 2505.11151 | STEP: A Unified Spiking Transformer Evaluation Platform for Fair and Reproducible Benchmarking | ['Sicheng Shen', 'Dongcheng Zhao', 'Linghao Feng', 'Zeyang Yue', 'Jindong Li', 'Tenglong Li', 'Guobin Shen', 'Yi Zeng'] | ['cs.NE'] | Spiking Transformers have recently emerged as promising architectures for combining the efficiency of spiking neural networks with the representational power of self-attention. However, the lack of standardized implementations, evaluation pipelines, and consistent design choices has hindered fair comparison and princip... | 2025-05-16T11:50:14Z | 21 pages, 8 figures | null | null | null | null | null | null | null | null | null |
| 2505.11196 | DiCo: Revitalizing ConvNets for Scalable and Efficient Diffusion Modeling | ['Yuang Ai', 'Qihang Fan', 'Xuefeng Hu', 'Zhenheng Yang', 'Ran He', 'Huaibo Huang'] | ['cs.CV'] | Diffusion Transformer (DiT), a promising diffusion model for visual generation, demonstrates impressive performance but incurs significant computational overhead. Intriguingly, analysis of pre-trained DiT models reveals that global self-attention is often redundant, predominantly capturing local patterns-highlighting t... | 2025-05-16T12:54:04Z | 27 pages, 29 figures, 9 tables | null | null | null | null | null | null | null | null | null |
| 2505.11293 | Breaking the Batch Barrier (B3) of Contrastive Learning via Smart Batch Mining | ['Raghuveer Thirukovalluru', 'Rui Meng', 'Ye Liu', 'Karthikeyan K', 'Mingyi Su', 'Ping Nie', 'Semih Yavuz', 'Yingbo Zhou', 'Wenhu Chen', 'Bhuwan Dhingra'] | ['cs.CV'] | Contrastive learning (CL) is a prevalent technique for training embedding models, which pulls semantically similar examples (positives) closer in the representation space while pushing dissimilar ones (negatives) further apart. A key source of negatives are 'in-batch' examples, i.e., positives from other examples in th... | 2025-05-16T14:25:43Z | 14 pages, 4 figures | null | null | null | null | null | null | null | null | null |
| 2505.11336 | XtraGPT: LLMs for Human-AI Collaboration on Controllable Academic Paper Revision | ['Nuo Chen', 'Andre Lin HuiKai', 'Jiaying Wu', 'Junyi Hou', 'Zining Zhang', 'Qian Wang', 'Xidong Wang', 'Bingsheng He'] | ['cs.CL'] | Despite the growing adoption of large language models (LLMs) in academic workflows, their capabilities remain limited when it comes to supporting high-quality scientific writing. Most existing systems are designed for general-purpose scientific text generation and fail to meet the sophisticated demands of research comm... | 2025-05-16T15:02:19Z | preprint | null | null | XtraGPT: LLMs for Human-AI Collaboration on Controllable Academic Paper Revision | ['Nuo Chen', 'Andre Lin HuiKai', 'Jiaying Wu', 'Junyi Hou', 'Zining Zhang', 'Qian Wang', 'Xidong Wang', 'Bingsheng He'] | 2025 | arXiv.org | 1 | 80 | ['Computer Science'] |
| 2505.11350 | Search-TTA: A Multimodal Test-Time Adaptation Framework for Visual Search in the Wild | ['Derek Ming Siang Tan', 'Shailesh', 'Boyang Liu', 'Alok Raj', 'Qi Xuan Ang', 'Weiheng Dai', 'Tanishq Duhan', 'Jimmy Chiun', 'Yuhong Cao', 'Florian Shkurti', 'Guillaume Sartoretti'] | ['cs.RO'] | To perform autonomous visual search for environmental monitoring, a robot may leverage satellite imagery as a prior map. This can help inform coarse, high-level search and exploration strategies, even when such images lack sufficient resolution to allow fine-grained, explicit visual recognition of targets. However, the... | 2025-05-16T15:15:00Z | null | null | null | null | null | null | null | null | null | null |
| 2505.11404 | Patho-R1: A Multimodal Reinforcement Learning-Based Pathology Expert Reasoner | ['Wenchuan Zhang', 'Penghao Zhang', 'Jingru Guo', 'Tao Cheng', 'Jie Chen', 'Shuwan Zhang', 'Zhang Zhang', 'Yuhao Yi', 'Hong Bu'] | ['cs.CV', 'cs.AI'] | Recent advances in vision language models (VLMs) have enabled broad progress in the general medical field. However, pathology still remains a more challenging subdomain, with current pathology specific VLMs exhibiting limitations in both diagnostic accuracy and reasoning plausibility. Such shortcomings are largely attr... | 2025-05-16T16:12:50Z | null | null | null | Patho-R1: A Multimodal Reinforcement Learning-Based Pathology Expert Reasoner | ['Wenchuan Zhang', 'Penghao Zhang', 'Jingru Guo', 'Tao Cheng', 'Jie Chen', 'Shuwan Zhang', 'Zhang Zhang', 'Yuhao Yi', 'Hong Bu'] | 2025 | arXiv.org | 0 | 62 | ['Computer Science'] |
| 2505.11462 | Disentangling Reasoning and Knowledge in Medical Large Language Models | ['Rahul Thapa', 'Qingyang Wu', 'Kevin Wu', 'Harrison Zhang', 'Angela Zhang', 'Eric Wu', 'Haotian Ye', 'Suhana Bedi', 'Nevin Aresh', 'Joseph Boen', 'Shriya Reddy', 'Ben Athiwaratkun', 'Shuaiwen Leon Song', 'James Zou'] | ['cs.CL', 'cs.AI'] | Medical reasoning in large language models (LLMs) aims to emulate clinicians' diagnostic thinking, but current benchmarks such as MedQA-USMLE, MedMCQA, and PubMedQA often mix reasoning with factual recall. We address this by separating 11 biomedical QA benchmarks into reasoning- and knowledge-focused subsets using a Pu... | 2025-05-16T17:16:27Z | null | null | null | Disentangling Reasoning and Knowledge in Medical Large Language Models | ['Rahul Thapa', 'Qingyang Wu', 'Kevin Wu', 'Harrison Zhang', 'Angela Zhang', 'Eric Wu', 'Haotian Ye', 'Suhana Bedi', 'Nevin Aresh', 'Joseph Boen', 'Shriya Reddy', 'Ben Athiwaratkun', 'S. Song', 'James Zou'] | 2025 | arXiv.org | 2 | 43 | ['Computer Science'] |
| 2505.11475 | HelpSteer3-Preference: Open Human-Annotated Preference Data across Diverse Tasks and Languages | ['Zhilin Wang', 'Jiaqi Zeng', 'Olivier Delalleau', 'Hoo-Chang Shin', 'Felipe Soares', 'Alexander Bukharin', 'Ellie Evans', 'Yi Dong', 'Oleksii Kuchaiev'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Preference datasets are essential for training general-domain, instruction-following language models with Reinforcement Learning from Human Feedback (RLHF). Each subsequent data release raises expectations for future data collection, meaning there is a constant need to advance the quality and diversity of openly availa... | 2025-05-16T17:31:19Z | 38 pages, 2 figures | null | null | null | null | null | null | null | null | null |
| 2505.11594 | SageAttention3: Microscaling FP4 Attention for Inference and An Exploration of 8-Bit Training | ['Jintao Zhang', 'Jia Wei', 'Pengle Zhang', 'Xiaoming Xu', 'Haofeng Huang', 'Haoxu Wang', 'Kai Jiang', 'Jun Zhu', 'Jianfei Chen'] | ['cs.LG', 'cs.AI', 'cs.AR', 'cs.CV', 'cs.PF'] | The efficiency of attention is important due to its quadratic time complexity. We enhance the efficiency of attention through two key contributions: First, we leverage the new FP4 Tensor Cores in Blackwell GPUs to accelerate attention computation. Our implementation achieves 1038 TOPS on RTX5090, which is a 5x speedup ... | 2025-05-16T18:01:54Z | null | null | null | null | null | null | null | null | null | null |
| 2505.11764 | Towards Universal Semantics With Large Language Models | ['Raymond Baartmans', 'Matthew Raffel', 'Rahul Vikram', 'Aiden Deringer', 'Lizhong Chen'] | ['cs.CL', 'cs.AI'] | The Natural Semantic Metalanguage (NSM) is a linguistic theory based on a universal set of semantic primes: simple, primitive word-meanings that have been shown to exist in most, if not all, languages of the world. According to this framework, any word, regardless of complexity, can be paraphrased using these primes, r... | 2025-05-17T00:11:58Z | null | null | null | null | null | null | null | null | null | null |
| 2505.11792 | Solver-Informed RL: Grounding Large Language Models for Authentic Optimization Modeling | ['Yitian Chen', 'Jingfan Xia', 'Siyu Shao', 'Dongdong Ge', 'Yinyu Ye'] | ['cs.AI'] | Optimization modeling is fundamental to decision-making across diverse domains. Despite progress in automating optimization formulation from natural language descriptions, Large Language Models (LLMs) often struggle to generate formally correct and usable models against hallucinations, posing a challenge for reliable a... | 2025-05-17T02:32:03Z | null | null | null | null | null | null | null | null | null | null |
| 2505.11849 | VeriReason: Reinforcement Learning with Testbench Feedback for Reasoning-Enhanced Verilog Generation | ['Yiting Wang', 'Guoheng Sun', 'Wanghao Ye', 'Gang Qu', 'Ang Li'] | ['cs.AI', 'cs.AR', 'cs.LG', 'cs.PL'] | Automating Register Transfer Level (RTL) code generation using Large Language Models (LLMs) offers substantial promise for streamlining digital circuit design and reducing human effort. However, current LLM-based approaches face significant challenges with training data scarcity, poor specification-code alignment, lack... | 2025-05-17T05:25:01Z | 11 pages, 2 figures | null | null | VeriReason: Reinforcement Learning with Testbench Feedback for Reasoning-Enhanced Verilog Generation | ['Yiting Wang', 'Guoheng Sun', 'Wanghao Ye', 'Gang Qu', 'Ang Li'] | 2025 | arXiv.org | 0 | 23 | ['Computer Science'] |
| 2505.11881 | Revisiting Residual Connections: Orthogonal Updates for Stable and Efficient Deep Networks | ['Giyeong Oh', 'Woohyun Cho', 'Siyeol Kim', 'Suhwan Choi', 'Younjae Yu'] | ['cs.CV', 'cs.AI'] | Residual connections are pivotal for deep neural networks, enabling greater depth by mitigating vanishing gradients. However, in standard residual updates, the module's output is directly added to the input stream. This can lead to updates that predominantly reinforce or modulate the existing stream direction, potentia... | 2025-05-17T07:16:11Z | 27 pages, WIP | null | null | Revisiting Residual Connections: Orthogonal Updates for Stable and Efficient Deep Networks | ['Giyeong Oh', 'Woohyun Cho', 'Siyeol Kim', 'Suhwan Choi', 'Younjae Yu'] | 2025 | arXiv.org | 0 | 40 | ['Computer Science'] |
| 2505.11932 | Neuro-Symbolic Query Compiler | ['Yuyao Zhang', 'Zhicheng Dou', 'Xiaoxi Li', 'Jiajie Jin', 'Yongkang Wu', 'Zhonghua Li', 'Qi Ye', 'Ji-Rong Wen'] | ['cs.CL', 'cs.IR'] | Precise recognition of search intent in Retrieval-Augmented Generation (RAG) systems remains a challenging goal, especially under resource constraints and for complex queries with nested structures and dependencies. This paper presents QCompiler, a neuro-symbolic framework inspired by linguistic grammar rules and compi... | 2025-05-17T09:36:03Z | Findings of ACL2025, codes are available at this url: https://github.com/YuyaoZhangQAQ/Query_Compiler | null | null | null | null | null | null | null | null | null |
| 2505.11988 | TechniqueRAG: Retrieval Augmented Generation for Adversarial Technique Annotation in Cyber Threat Intelligence Text | ['Ahmed Lekssays', 'Utsav Shukla', 'Husrev Taha Sencar', 'Md Rizwan Parvez'] | ['cs.CR'] | Accurately identifying adversarial techniques in security texts is critical for effective cyber defense. However, existing methods face a fundamental trade-off: they either rely on generic models with limited domain precision or require resource-intensive pipelines that depend on large labeled datasets and task-specifi... | 2025-05-17T12:46:10Z | Accepted at ACL (Findings) 2025 | null | null | null | null | null | null | null | null | null |
| 2505.12081 | VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning | ['Yuqi Liu', 'Tianyuan Qu', 'Zhisheng Zhong', 'Bohao Peng', 'Shu Liu', 'Bei Yu', 'Jiaya Jia'] | ['cs.CV'] | Large vision-language models exhibit inherent capabilities to handle diverse visual perception tasks. In this paper, we introduce VisionReasoner, a unified framework capable of reasoning and solving multiple visual perception tasks within a shared model. Specifically, by designing novel multi-object cognitive learning ... | 2025-05-17T16:51:47Z | null | null | null | null | null | null | null | null | null | null |
2,505.12116 | A Multi-Task Benchmark for Abusive Language Detection in Low-Resource
Settings | ['Fitsum Gaim', 'Hoyun Song', 'Huije Lee', 'Changgeon Ko', 'Eui Jun Hwang', 'Jong C. Park'] | ['cs.CL', 'I.2.7'] | Content moderation research has recently made significant advances, but still
fails to serve the majority of the world's languages due to the lack of
resources, leaving millions of vulnerable users to online hostility. This work
presents a large-scale human-annotated multi-task benchmark dataset for abusive
language de... | 2025-05-17T18:52:47Z | null | null | null | A Multi-Task Benchmark for Abusive Language Detection in Low-Resource Settings | ['Fitsum Gaim', 'Hoyun Song', 'Huije Lee', 'Changgeon Ko', 'Eui Jun Hwang', 'Jong C. Park'] | 2,025 | arXiv.org | 0 | 65 | ['Computer Science'] |
2,505.12224 | RoboFAC: A Comprehensive Framework for Robotic Failure Analysis and
Correction | ['Weifeng Lu', 'Minghao Ye', 'Zewei Ye', 'Ruihan Tao', 'Shuo Yang', 'Bo Zhao'] | ['cs.RO', 'cs.AI'] | Vision-Language-Action (VLA) models have recently advanced robotic
manipulation by translating natural-language instructions and image information
into sequential control actions. However, these models often underperform in
open-world scenarios, as they are predominantly trained on successful expert
demonstrations and ... | 2025-05-18T03:57:08Z | null | null | null | null | null | null | null | null | null | null |
2,505.12345 | UniEdit: A Unified Knowledge Editing Benchmark for Large Language Models | ['Qizhou Chen', 'Dakan Wang', 'Taolin Zhang', 'Zaoming Yan', 'Chengsong You', 'Chengyu Wang', 'Xiaofeng He'] | ['cs.CL'] | Model editing aims to enhance the accuracy and reliability of large language
models (LLMs) by efficiently adjusting their internal parameters. Currently,
most LLM editing datasets are confined to narrow knowledge domains and cover a
limited range of editing evaluation. They often overlook the broad scope of
editing dem... | 2025-05-18T10:19:01Z | UniEdit Dataset: https://huggingface.co/datasets/qizhou/UniEdit Code:
https://github.com/qizhou000/UniEdit | null | null | UniEdit: A Unified Knowledge Editing Benchmark for Large Language Models | ['Qizhou Chen', 'Dakan Wang', 'Taolin Zhang', 'Zaoming Yan', 'Chengsong You', 'Chengyu Wang', 'Xiaofeng He'] | 2,025 | arXiv.org | 0 | 0 | ['Computer Science'] |
2,505.12366 | DisCO: Reinforcing Large Reasoning Models with Discriminative
Constrained Optimization | ['Gang Li', 'Ming Lin', 'Tomer Galanti', 'Zhengzhong Tu', 'Tianbao Yang'] | ['cs.LG', 'cs.AI'] | The recent success and openness of DeepSeek-R1 have brought widespread
attention to Group Relative Policy Optimization (GRPO) as a reinforcement
learning method for large reasoning models (LRMs). In this work, we analyze the
GRPO objective under a binary reward setting and reveal an inherent limitation
of question-leve... | 2025-05-18T11:08:32Z | 20 pages, 4 figures | null | null | DisCO: Reinforcing Large Reasoning Models with Discriminative Constrained Optimization | ['Gang Li', 'Ming Lin', 'Tomer Galanti', 'Zhengzhong Tu', 'Tianbao Yang'] | 2,025 | arXiv.org | 1 | 78 | ['Computer Science'] |
2,505.12448 | SSR: Enhancing Depth Perception in Vision-Language Models via
Rationale-Guided Spatial Reasoning | ['Yang Liu', 'Ming Ma', 'Xiaomin Yu', 'Pengxiang Ding', 'Han Zhao', 'Mingyang Sun', 'Siteng Huang', 'Donglin Wang'] | ['cs.CV'] | Despite impressive advancements in Visual-Language Models (VLMs) for
multi-modal tasks, their reliance on RGB inputs limits precise spatial
understanding. Existing methods for integrating spatial cues, such as point
clouds or depth, either require specialized sensors or fail to effectively
exploit depth information for... | 2025-05-18T14:40:16Z | null | null | null | SSR: Enhancing Depth Perception in Vision-Language Models via Rationale-Guided Spatial Reasoning | ['Yang Liu', 'Ming Ma', 'Xiaomin Yu', 'Pengxiang Ding', 'Han Zhao', 'Mingyang Sun', 'Siteng Huang', 'Donglin Wang'] | 2,025 | arXiv.org | 0 | 100 | ['Computer Science'] |
2,505.12489 | Video-GPT via Next Clip Diffusion | ['Shaobin Zhuang', 'Zhipeng Huang', 'Ying Zhang', 'Fangyikang Wang', 'Canmiao Fu', 'Binxin Yang', 'Chong Sun', 'Chen Li', 'Yali Wang'] | ['cs.CV', 'cs.AI'] | GPT has shown its remarkable success in natural language processing. However,
the language sequence is not sufficient to describe spatial-temporal details in
the visual world. Alternatively, the video sequence is good at capturing such
details. Motivated by this fact, we propose a concise Video-GPT in this paper
by tre... | 2025-05-18T16:22:58Z | 22 pages, 12 figures, 18 tables | null | null | Video-GPT via Next Clip Diffusion | ['Shaobin Zhuang', 'Zhipeng Huang', 'Ying Zhang', 'Fangyikang Wang', 'Canmiao Fu', 'Binxin Yang', 'Chong Sun', 'Chen Li', 'Yali Wang'] | 2,025 | arXiv.org | 0 | 88 | ['Computer Science'] |
2505.12500 | MARGE: Improving Math Reasoning for LLMs with Guided Exploration | ['Jingyue Gao', 'Runji Lin', 'Keming Lu', 'Bowen Yu', 'Junyang Lin', 'Jianyu Chen'] | ['cs.AI'] | Large Language Models (LLMs) exhibit strong potential in mathematical
reasoning, yet their effectiveness is often limited by a shortage of
high-quality queries. This limitation necessitates scaling up computational
responses through self-generated data, yet current methods struggle due to
spurious correlated data cause... | 2025-05-18T17:24:16Z | To appear at ICML 2025 | null | null | null | null | null | null | null | null | null |
2,505.12504 | CPGD: Toward Stable Rule-based Reinforcement Learning for Language
Models | ['Zongkai Liu', 'Fanqing Meng', 'Lingxiao Du', 'Zhixiang Zhou', 'Chao Yu', 'Wenqi Shao', 'Qiaosheng Zhang'] | ['cs.LG', 'cs.AI'] | Recent advances in rule-based reinforcement learning (RL) have significantly
improved the reasoning capability of language models (LMs) with rule-based
rewards. However, existing RL methods -- such as GRPO, REINFORCE++, and RLOO --
often suffer from training instability, where large policy updates and improper
clipping... | 2025-05-18T17:44:53Z | null | null | null | null | null | null | null | null | null | null |
2,505.12514 | Reasoning by Superposition: A Theoretical Perspective on Chain of
Continuous Thought | ['Hanlin Zhu', 'Shibo Hao', 'Zhiting Hu', 'Jiantao Jiao', 'Stuart Russell', 'Yuandong Tian'] | ['cs.LG'] | Large Language Models (LLMs) have demonstrated remarkable performance in many
applications, including challenging reasoning problems via chain-of-thoughts
(CoTs) techniques that generate ``thinking tokens'' before answering the
questions. While existing theoretical works demonstrate that CoTs with discrete
tokens boost... | 2025-05-18T18:36:53Z | 26 pages, 7 figures | null | null | null | null | null | null | null | null | null |
2,505.12697 | Towards A Generalist Code Embedding Model Based On Massive Data
Synthesis | ['Chaofan Li', 'Jianlyu Chen', 'Yingxia Shao', 'Defu Lian', 'Zheng Liu'] | ['cs.IR'] | Code embedding models attract increasing attention due to the widespread
popularity of retrieval-augmented generation (RAG) in software development.
These models are expected to capture the rich semantic relationships inherent
to code, which differ significantly from those found in text. However, existing
models remain... | 2025-05-19T04:37:53Z | null | null | null | null | null | null | null | null | null | null |
2,505.12716 | Shadow-FT: Tuning Instruct via Base | ['Taiqiang Wu', 'Runming Yang', 'Jiayi Li', 'Pengfei Hu', 'Ngai Wong', 'Yujiu Yang'] | ['cs.CL', 'cs.AI'] | Large language models (LLMs) consistently benefit from further fine-tuning on
various tasks. However, we observe that directly tuning the INSTRUCT (i.e.,
instruction tuned) models often leads to marginal improvements and even
performance degeneration. Notably, paired BASE models, the foundation for these
INSTRUCT varia... | 2025-05-19T05:16:21Z | 19 pages, 10 tables, 6 figures | null | null | null | null | null | null | null | null | null |
2,505.12795 | FRABench and GenEval: Scaling Fine-Grained Aspect Evaluation across
Tasks, Modalities | ['Shibo Hong', 'Jiahao Ying', 'Haiyuan Liang', 'Mengdi Zhang', 'Jun Kuang', 'Jiazheng Zhang', 'Yixin Cao'] | ['cs.AI', 'cs.LG'] | Evaluating the open-ended outputs of large language models (LLMs) has become
a bottleneck as model capabilities, task diversity, and modality coverage
rapidly expand. Existing "LLM-as-a-Judge" evaluators are typically narrow in a
few tasks, aspects, or modalities, and easily suffer from low consistency. In
this paper, ... | 2025-05-19T07:29:26Z | null | null | null | null | null | null | null | null | null | null |
2,505.12849 | Accelerate TarFlow Sampling with GS-Jacobi Iteration | ['Ben Liu', 'Zhen Qin'] | ['cs.CV'] | Image generation models have achieved widespread applications. As an
instance, the TarFlow model combines the transformer architecture with
Normalizing Flow models, achieving state-of-the-art results on multiple
benchmarks. However, due to the causal form of attention requiring sequential
computation, TarFlow's samplin... | 2025-05-19T08:35:44Z | 17 pages, 7 figures, 5 tables | null | null | null | null | null | null | null | null | null |
2,505.12973 | Fast, Not Fancy: Rethinking G2P with Rich Data and Rule-Based Models | ['Mahta Fetrat Qharabagh', 'Zahra Dehghanian', 'Hamid R. Rabiee'] | ['cs.CL'] | Homograph disambiguation remains a significant challenge in
grapheme-to-phoneme (G2P) conversion, especially for low-resource languages.
This challenge is twofold: (1) creating balanced and comprehensive homograph
datasets is labor-intensive and costly, and (2) specific disambiguation
strategies introduce additional la... | 2025-05-19T11:11:12Z | 8 main body pages, total 25 pages, 15 figures | null | null | null | null | null | null | null | null | null |
2505.13000 | DualCodec: A Low-Frame-Rate, Semantically-Enhanced Neural Audio Codec
for Speech Generation | ['Jiaqi Li', 'Xiaolong Lin', 'Zhekai Li', 'Shixi Huang', 'Yuancheng Wang', 'Chaoren Wang', 'Zhenpeng Zhan', 'Zhizheng Wu'] | ['cs.SD', 'eess.AS'] | Neural audio codecs form the foundational building blocks for language model
(LM)-based speech generation. Typically, there is a trade-off between frame
rate and audio quality. This study introduces a low-frame-rate, semantically
enhanced codec model. Existing approaches distill semantically rich
self-supervised (SSL) ... | 2025-05-19T11:41:08Z | Accepted to Interspeech 2025. Github:
https://github.com/jiaqili3/dualcodec | null | null | null | null | null | null | null | null | null |
2505.13010 | To Bias or Not to Bias: Detecting bias in News with bias-detector | ['Himel Ghosh', 'Ahmed Mosharafa', 'Georg Groh'] | ['cs.CL', 'cs.AI', 'cs.HC'] | Media bias detection is a critical task in ensuring fair and balanced
information dissemination, yet it remains challenging due to the subjectivity
of bias and the scarcity of high-quality annotated data. In this work, we
perform sentence-level bias classification by fine-tuning a RoBERTa-based model
on the expert-anno... | 2025-05-19T11:54:39Z | 7 pages, 5 figures, 2 tables | null | null | null | null | null | null | null | null | null |
2,505.13031 | MindOmni: Unleashing Reasoning Generation in Vision Language Models with
RGPO | ['Yicheng Xiao', 'Lin Song', 'Yukang Chen', 'Yingmin Luo', 'Yuxin Chen', 'Yukang Gan', 'Wei Huang', 'Xiu Li', 'Xiaojuan Qi', 'Ying Shan'] | ['cs.AI'] | Recent text-to-image systems face limitations in handling multimodal inputs
and complex reasoning tasks. We introduce MindOmni, a unified multimodal large
language model that addresses these challenges by incorporating reasoning
generation through reinforcement learning. MindOmni leverages a three-phase
training strate... | 2025-05-19T12:17:04Z | Code: https://github.com/TencentARC/MindOmni | null | null | MindOmni: Unleashing Reasoning Generation in Vision Language Models with RGPO | ['Yicheng Xiao', 'Lin Song', 'Yukang Chen', 'Yingmin Luo', 'Yuxin Chen', 'Yukang Gan', 'Wei Huang', 'Xiu Li', 'Xiaojuan Qi', 'Ying Shan'] | 2,025 | arXiv.org | 5 | 70 | ['Computer Science'] |
2,505.13032 | MMAR: A Challenging Benchmark for Deep Reasoning in Speech, Audio,
Music, and Their Mix | ['Ziyang Ma', 'Yinghao Ma', 'Yanqiao Zhu', 'Chen Yang', 'Yi-Wen Chao', 'Ruiyang Xu', 'Wenxi Chen', 'Yuanzhe Chen', 'Zhuo Chen', 'Jian Cong', 'Kai Li', 'Keliang Li', 'Siyou Li', 'Xinfeng Li', 'Xiquan Li', 'Zheng Lian', 'Yuzhe Liang', 'Minghao Liu', 'Zhikang Niu', 'Tianrui Wang', 'Yuping Wang', 'Yuxuan Wang', 'Yihao Wu',... | ['cs.SD', 'cs.CL', 'cs.MM', 'eess.AS'] | We introduce MMAR, a new benchmark designed to evaluate the deep reasoning
capabilities of Audio-Language Models (ALMs) across massive multi-disciplinary
tasks. MMAR comprises 1,000 meticulously curated audio-question-answer
triplets, collected from real-world internet videos and refined through
iterative error correct... | 2025-05-19T12:18:42Z | Open-source at https://github.com/ddlBoJack/MMAR | null | null | null | null | null | null | null | null | null |
2,505.13033 | TSPulse: Dual Space Tiny Pre-Trained Models for Rapid Time-Series
Analysis | ['Vijay Ekambaram', 'Subodh Kumar', 'Arindam Jati', 'Sumanta Mukherjee', 'Tomoya Sakai', 'Pankaj Dayama', 'Wesley M. Gifford', 'Jayant Kalagnanam'] | ['cs.LG', 'cs.AI'] | The rise of time-series pre-trained models has advanced temporal
representation learning, but current state-of-the-art models are often
large-scale, requiring substantial compute. We introduce TSPulse, ultra-compact
time-series pre-trained models with only 1M parameters, specialized to perform
strongly across classific... | 2025-05-19T12:18:53Z | null | null | null | null | null | null | null | null | null | null |
2,505.13036 | KIT's Offline Speech Translation and Instruction Following Submission
for IWSLT 2025 | ['Sai Koneru', 'Maike Züfle', 'Thai-Binh Nguyen', 'Seymanur Akti', 'Jan Niehues', 'Alexander Waibel'] | ['cs.CL', 'cs.AI'] | The scope of the International Workshop on Spoken Language Translation
(IWSLT) has recently broadened beyond traditional Speech Translation (ST) to
encompass a wider array of tasks, including Speech Question Answering and
Summarization. This shift is partly driven by the growing capabilities of
modern systems, particul... | 2025-05-19T12:21:29Z | null | null | null | KIT's Offline Speech Translation and Instruction Following Submission for IWSLT 2025 | ['Sai Koneru', 'Maike Zufle', 'Thai-Binh Nguyen', 'Seymanur Akti', 'Jan Niehues', 'Alexander H. Waibel'] | 2,025 | arXiv.org | 0 | 43 | ['Computer Science'] |
2,505.13088 | Cross-modal feature fusion for robust point cloud registration with
ambiguous geometry | ['Zhaoyi Wang', 'Shengyu Huang', 'Jemil Avers Butt', 'Yuanzhou Cai', 'Matej Varga', 'Andreas Wieser'] | ['cs.CV', 'cs.LG'] | Point cloud registration has seen significant advancements with the
application of deep learning techniques. However, existing approaches often
overlook the potential of integrating radiometric information from RGB images.
This limitation reduces their effectiveness in aligning point clouds pairs,
especially in regions... | 2025-05-19T13:22:46Z | To appear in the ISPRS Journal of Photogrammetry and Remote Sensing.
19 pages, 14 figures | ISPRS J. Photogramm. Remote Sens. 227 (2025) 31-47 | 10.1016/j.isprsjprs.2025.05.012 | Cross-modal feature fusion for robust point cloud registration with ambiguous geometry | ['Zhaoyi Wang', 'Shengyu Huang', 'J. Butt', 'Yuanzhou Cai', 'Matej Varga', 'A. Wieser'] | 2,025 | Isprs Journal of Photogrammetry and Remote Sensing | 1 | 78 | ['Computer Science'] |
2,505.13136 | ModernGBERT: German-only 1B Encoder Model Trained from Scratch | ['Anton Ehrmanntraut', 'Julia Wunderle', 'Jan Pfister', 'Fotis Jannidis', 'Andreas Hotho'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Despite the prominence of decoder-only language models, encoders remain
crucial for resource-constrained applications. We introduce ModernGBERT (134M,
1B), a fully transparent family of German encoder models trained from scratch,
incorporating architectural innovations from ModernBERT. To evaluate the
practical trade-o... | 2025-05-19T14:07:20Z | under review @ARR | null | null | ModernGBERT: German-only 1B Encoder Model Trained from Scratch | ['Anton Ehrmanntraut', 'Julia Wunderle', 'Jan Pfister', 'Fotis Jannidis', 'Andreas Hotho'] | 2,025 | arXiv.org | 0 | 50 | ['Computer Science'] |
2,505.13181 | Efficient Speech Language Modeling via Energy Distance in Continuous
Latent Space | ['Zhengrui Ma', 'Yang Feng', 'Chenze Shao', 'Fandong Meng', 'Jie Zhou', 'Min Zhang'] | ['cs.CL', 'cs.SD', 'eess.AS'] | We introduce SLED, an alternative approach to speech language modeling by
encoding speech waveforms into sequences of continuous latent representations
and modeling them autoregressively using an energy distance objective. The
energy distance offers an analytical measure of the distributional gap by
contrasting simulat... | 2025-05-19T14:38:59Z | Demos and code are available at https://github.com/ictnlp/SLED-TTS | null | null | null | null | null | null | null | null | null |
2,505.13211 | MAGI-1: Autoregressive Video Generation at Scale | ['Sand. ai', 'Hansi Teng', 'Hongyu Jia', 'Lei Sun', 'Lingzhi Li', 'Maolin Li', 'Mingqiu Tang', 'Shuai Han', 'Tianning Zhang', 'W. Q. Zhang', 'Weifeng Luo', 'Xiaoyang Kang', 'Yuchen Sun', 'Yue Cao', 'Yunpeng Huang', 'Yutong Lin', 'Yuxin Fang', 'Zewei Tao', 'Zheng Zhang', 'Zhongshu Wang', 'Zixun Liu', 'Dai Shi', 'Guoli S... | ['cs.CV', 'cs.AI'] | We present MAGI-1, a world model that generates videos by autoregressively
predicting a sequence of video chunks, defined as fixed-length segments of
consecutive frames. Trained to denoise per-chunk noise that increases
monotonically over time, MAGI-1 enables causal temporal modeling and naturally
supports streaming ge... | 2025-05-19T14:58:50Z | null | null | null | MAGI-1: Autoregressive Video Generation at Scale | ['Sand. ai', 'Hansi Teng', 'Hongyu Jia', 'Lei Sun', 'Lingzhi Li', 'Maolin Li', 'Mingqiu Tang', 'Shuai Han', 'Tianning Zhang', 'W. Q. Zhang', 'Weifeng Luo', 'Xiaoyang Kang', 'Yuchen Sun', 'Yue Cao', 'Yunpeng Huang', 'Yutong Lin', 'Yuxin Fang', 'Zewei Tao', 'Zheng Zhang', 'Zhongshu Wang', 'Zixun Liu', 'Dai Shi', 'Guoli S... | 2,025 | arXiv.org | 8 | 0 | ['Computer Science'] |
2,505.13227 | Scaling Computer-Use Grounding via User Interface Decomposition and
Synthesis | ['Tianbao Xie', 'Jiaqi Deng', 'Xiaochuan Li', 'Junlin Yang', 'Haoyuan Wu', 'Jixuan Chen', 'Wenjing Hu', 'Xinyuan Wang', 'Yuhui Xu', 'Zekun Wang', 'Yiheng Xu', 'Junli Wang', 'Doyen Sahoo', 'Tao Yu', 'Caiming Xiong'] | ['cs.AI', 'cs.CL', 'cs.CV', 'cs.HC'] | Graphical user interface (GUI) grounding, the ability to map natural language
instructions to specific actions on graphical user interfaces, remains a
critical bottleneck in computer use agent development. Current benchmarks
oversimplify grounding tasks as short referring expressions, failing to capture
the complexity ... | 2025-05-19T15:09:23Z | 49 pages, 13 figures | null | null | Scaling Computer-Use Grounding via User Interface Decomposition and Synthesis | ['Tianbao Xie', 'Jiaqi Deng', 'Xiaochuan Li', 'Junlin Yang', 'Haoyuan Wu', 'Jixuan Chen', 'Wenjing Hu', 'Xinyuan Wang', 'Yuhui Xu', 'Zekun Wang', 'Yiheng Xu', 'Junli Wang', 'Doyen Sahoo', 'Tao Yu', 'Caiming Xiong'] | 2,025 | arXiv.org | 1 | 50 | ['Computer Science'] |
2,505.13258 | Effective and Transparent RAG: Adaptive-Reward Reinforcement Learning
for Decision Traceability | ['Jingyi Ren', 'Yekun Xu', 'Xiaolong Wang', 'Weitao Li', 'Weizhi Ma', 'Yang Liu'] | ['cs.CL'] | Retrieval-Augmented Generation (RAG) has significantly improved the
performance of large language models (LLMs) on knowledge-intensive domains.
However, although RAG achieved successes across distinct domains, there are
still some unsolved challenges: 1) Effectiveness. Existing research mainly
focuses on developing mor... | 2025-05-19T15:40:29Z | null | null | null | Effective and Transparent RAG: Adaptive-Reward Reinforcement Learning for Decision Traceability | ['Jingyi Ren', 'Yekun Xu', 'Xiaolong Wang', 'Weitao Li', 'Weizhi Ma', 'Yang Liu'] | 2,025 | arXiv.org | 0 | 50 | ['Computer Science'] |
2,505.13271 | CSC-SQL: Corrective Self-Consistency in Text-to-SQL via Reinforcement
Learning | ['Lei Sheng', 'Shuai-Shuai Xu'] | ['cs.CL'] | Large language models (LLMs) have demonstrated strong capabilities in
translating natural language questions about relational databases into SQL
queries. In particular, test-time scaling techniques such as Self-Consistency
and Self-Correction can enhance SQL generation accuracy by increasing
computational effort during... | 2025-05-19T15:52:19Z | 25 pages, 5 figures | null | null | null | null | null | null | null | null | null |
2,505.13344 | RoPECraft: Training-Free Motion Transfer with Trajectory-Guided RoPE
Optimization on Diffusion Transformers | ['Ahmet Berke Gokmen', 'Yigit Ekin', 'Bahri Batuhan Bilecen', 'Aysegul Dundar'] | ['cs.CV', 'cs.AI', 'cs.LG'] | We propose RoPECraft, a training-free video motion transfer method for
diffusion transformers that operates solely by modifying their rotary
positional embeddings (RoPE). We first extract dense optical flow from a
reference video, and utilize the resulting motion offsets to warp the
complex-exponential tensors of RoPE,... | 2025-05-19T16:50:26Z | https://berkegokmen1.github.io/RoPECraft/ | null | null | null | null | null | null | null | null | null |
2,505.13379 | Thinkless: LLM Learns When to Think | ['Gongfan Fang', 'Xinyin Ma', 'Xinchao Wang'] | ['cs.CL', 'cs.AI'] | Reasoning Language Models, capable of extended chain-of-thought reasoning,
have demonstrated remarkable performance on tasks requiring complex logical
inference. However, applying elaborate reasoning for all queries often results
in substantial computational inefficiencies, particularly when many problems
admit straigh... | 2025-05-19T17:24:16Z | null | null | null | null | null | null | null | null | null | null |
2505.13380 | CompeteSMoE -- Statistically Guaranteed Mixture of Experts Training via
Competition | ['Nam V. Nguyen', 'Huy Nguyen', 'Quang Pham', 'Van Nguyen', 'Savitha Ramasamy', 'Nhat Ho'] | ['cs.AI', 'cs.CL'] | Sparse mixture of experts (SMoE) offers an appealing solution to scale up the
model complexity beyond the means of increasing the network's depth or width.
However, we argue that effective SMoE training remains challenging because of
the suboptimal routing process where experts that perform computation do not
directly c... | 2025-05-19T17:24:26Z | 52 pages. This work is an improved version of the previous study at
arXiv:2402.02526 | null | null | CompeteSMoE - Statistically Guaranteed Mixture of Experts Training via Competition | ['Nam V. Nguyen', 'Huy Nguyen', 'Quang Pham', 'Van Nguyen', 'Savitha Ramasamy', 'Nhat Ho'] | 2,025 | arXiv.org | 0 | 0 | ['Computer Science'] |
2,505.13388 | R3: Robust Rubric-Agnostic Reward Models | ['David Anugraha', 'Zilu Tang', 'Lester James V. Miranda', 'Hanyang Zhao', 'Mohammad Rifqi Farhansyah', 'Garry Kuwanto', 'Derry Wijaya', 'Genta Indra Winata'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Reward models are essential for aligning language model outputs with human
preferences, yet existing approaches often lack both controllability and
interpretability. These models are typically optimized for narrow objectives,
limiting their generalizability to broader downstream tasks. Moreover, their
scalar outputs ar... | 2025-05-19T17:29:03Z | Preprint | null | null | null | null | null | null | null | null | null |
2,505.13404 | Granary: Speech Recognition and Translation Dataset in 25 European
Languages | ['Nithin Rao Koluguri', 'Monica Sekoyan', 'George Zelenfroynd', 'Sasha Meister', 'Shuoyang Ding', 'Sofia Kostandian', 'He Huang', 'Nikolay Karpov', 'Jagadeesh Balam', 'Vitaly Lavrukhin', 'Yifan Peng', 'Sara Papi', 'Marco Gaido', 'Alessio Brutti', 'Boris Ginsburg'] | ['cs.CL', 'eess.AS'] | Multi-task and multilingual approaches benefit large models, yet speech
processing for low-resource languages remains underexplored due to data
scarcity. To address this, we present Granary, a large-scale collection of
speech datasets for recognition and translation across 25 European languages.
This is the first open-... | 2025-05-19T17:40:58Z | Accepted at Interspeech 2025 v2: Added links | null | null | Granary: Speech Recognition and Translation Dataset in 25 European Languages | ['N. Koluguri', 'Monica Sekoyan', 'George Zelenfroynd', 'Sasha Meister', 'Shuoyang Ding', 'Sofia Kostandian', 'He Huang', 'Nikolay Karpov', 'Jagadeesh Balam', 'Vitaly Lavrukhin', 'Yifan Peng', 'Sara Papi', 'Marco Gaido', 'A. Brutti', 'Boris Ginsburg'] | 2,025 | arXiv.org | 0 | 31 | ['Computer Science', 'Engineering'] |
2,505.13417 | AdaptThink: Reasoning Models Can Learn When to Think | ['Jiajie Zhang', 'Nianyi Lin', 'Lei Hou', 'Ling Feng', 'Juanzi Li'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Recently, large reasoning models have achieved impressive performance on
various tasks by employing human-like deep thinking. However, the lengthy
thinking process substantially increases inference overhead, making efficiency
a critical bottleneck. In this work, we first demonstrate that NoThinking,
which prompts the r... | 2025-05-19T17:50:52Z | null | null | null | null | null | null | null | null | null | null |
2,505.13427 | MM-PRM: Enhancing Multimodal Mathematical Reasoning with Scalable
Step-Level Supervision | ['Lingxiao Du', 'Fanqing Meng', 'Zongkai Liu', 'Zhixiang Zhou', 'Ping Luo', 'Qiaosheng Zhang', 'Wenqi Shao'] | ['cs.AI', 'cs.CV'] | While Multimodal Large Language Models (MLLMs) have achieved impressive
progress in vision-language understanding, they still struggle with complex
multi-step reasoning, often producing logically inconsistent or partially
correct solutions. A key limitation lies in the lack of fine-grained
supervision over intermediate... | 2025-05-19T17:55:08Z | null | null | null | null | null | null | null | null | null | null |
2,505.13441 | GraspMolmo: Generalizable Task-Oriented Grasping via Large-Scale
Synthetic Data Generation | ['Abhay Deshpande', 'Yuquan Deng', 'Arijit Ray', 'Jordi Salvador', 'Winson Han', 'Jiafei Duan', 'Kuo-Hao Zeng', 'Yuke Zhu', 'Ranjay Krishna', 'Rose Hendrix'] | ['cs.RO'] | We present GraspMolmo, a generalizable open-vocabulary task-oriented grasping
(TOG) model. GraspMolmo predicts semantically appropriate, stable grasps
conditioned on a natural language instruction and a single RGB-D frame. For
instance, given "pour me some tea", GraspMolmo selects a grasp on a teapot
handle rather than ... | 2025-05-19T17:59:06Z | null | null | null | null | null | null | null | null | null | null |
2,505.13447 | Mean Flows for One-step Generative Modeling | ['Zhengyang Geng', 'Mingyang Deng', 'Xingjian Bai', 'J. Zico Kolter', 'Kaiming He'] | ['cs.LG', 'cs.CV'] | We propose a principled and effective framework for one-step generative
modeling. We introduce the notion of average velocity to characterize flow
fields, in contrast to instantaneous velocity modeled by Flow Matching methods.
A well-defined identity between average and instantaneous velocities is derived
and used to g... | 2025-05-19T17:59:42Z | Tech report | null | null | null | null | null | null | null | null | null |
2,505.13508 | Time-R1: Towards Comprehensive Temporal Reasoning in LLMs | ['Zijia Liu', 'Peixuan Han', 'Haofei Yu', 'Haoru Li', 'Jiaxuan You'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Large Language Models (LLMs) demonstrate impressive capabilities but lack
robust temporal intelligence, struggling to integrate reasoning about the past
with predictions and plausible generations of the future. Meanwhile, existing
methods typically target isolated temporal skills, such as question answering
about past ... | 2025-05-16T13:46:28Z | null | null | null | Time-R1: Towards Comprehensive Temporal Reasoning in LLMs | ['Zijia Liu', 'Peixuan Han', 'Haofei Yu', 'Haoru Li', 'Jiaxuan You'] | 2,025 | arXiv.org | 1 | 46 | ['Computer Science'] |
2,505.13718 | Warm Up Before You Train: Unlocking General Reasoning in
Resource-Constrained Settings | ['Safal Shrestha', 'Minwu Kim', 'Aadim Nepal', 'Anubhav Shrestha', 'Keith Ross'] | ['cs.AI', 'cs.CL'] | Designing effective reasoning-capable LLMs typically requires training using
Reinforcement Learning with Verifiable Rewards (RLVR) or distillation with
carefully curated Long Chain of Thoughts (CoT), both of which depend heavily on
extensive training data. This creates a major challenge when the amount of
quality train... | 2025-05-19T20:29:15Z | null | null | null | Warm Up Before You Train: Unlocking General Reasoning in Resource-Constrained Settings | ['Safal Shrestha', 'Minwu Kim', 'Aadim Nepal', 'Anubhav Shrestha', 'Keith Ross'] | 2,025 | arXiv.org | 0 | 30 | ['Computer Science'] |
2,505.13755 | Panda: A pretrained forecast model for universal representation of
chaotic dynamics | ['Jeffrey Lai', 'Anthony Bao', 'William Gilpin'] | ['cs.LG', 'cs.NE', 'nlin.CD', 'stat.ML'] | Chaotic systems are intrinsically sensitive to small errors, challenging
efforts to construct predictive data-driven models of real-world dynamical
systems such as fluid flows or neuronal activity. Prior efforts comprise either
specialized models trained separately on individual time series, or foundation
models traine... | 2025-05-19T21:59:19Z | null | null | null | null | null | null | null | null | null | null |
2,505.13772 | Krikri: Advancing Open Large Language Models for Greek | ['Dimitris Roussis', 'Leon Voukoutis', 'Georgios Paraskevopoulos', 'Sokratis Sofianopoulos', 'Prokopis Prokopidis', 'Vassilis Papavasileiou', 'Athanasios Katsamanis', 'Stelios Piperidis', 'Vassilis Katsouros'] | ['cs.CL'] | We introduce Llama-Krikri-8B, a cutting-edge Large Language Model tailored
for the Greek language, built on Meta's Llama 3.1-8B. Llama-Krikri-8B has been
extensively trained on high-quality Greek data to ensure superior adaptation to
linguistic nuances. With 8 billion parameters, it offers advanced capabilities
while m... | 2025-05-19T23:18:27Z | null | null | null | null | null | null | null | null | null | null |
2,505.13886 | Code2Logic: Game-Code-Driven Data Synthesis for Enhancing VLMs General
Reasoning | ['Jingqi Tong', 'Jixin Tang', 'Hangcheng Li', 'Yurong Mou', 'Ming Zhang', 'Jun Zhao', 'Yanbo Wen', 'Fan Song', 'Jiahao Zhan', 'Yuyang Lu', 'Chaoran Tao', 'Zhiyuan Guo', 'Jizhou Yu', 'Tianhao Cheng', 'Changhao Jiang', 'Zhen Wang', 'Tao Liang', 'Zhihui Fei', 'Mingyang Wan', 'Guojun Ma', 'Weifeng Ge', 'Guanhua Chen', 'Tao... | ['cs.CL', 'I.2.7; I.2.10'] | Visual-language Chain-of-Thought (CoT) data resources are relatively scarce
compared to text-only counterparts, limiting the improvement of reasoning
capabilities in Vision Language Models (VLMs). However, high-quality
vision-language reasoning data is expensive and labor-intensive to annotate. To
address this issue, w... | 2025-05-20T03:47:44Z | 63 pages, 23 figures, submitted to NeurIPS 2025 | null | null | Code2Logic: Game-Code-Driven Data Synthesis for Enhancing VLMs General Reasoning | ['Jingqi Tong', 'Jixin Tang', 'Hangcheng Li', 'Yurong Mou', 'Ming Zhang', 'Jun Zhao', 'Yanbo Wen', 'Fan Song', 'Jiahao Zhan', 'Yuyang Lu', 'Chaoran Tao', 'Zhiyuan Guo', 'Jizhou Yu', 'Tianhao Cheng', 'Changhao Jiang', 'Zhen Wang', 'Tao Liang', 'Zhihui Fei', 'Ming-Xi Wan', 'Guojun Ma', 'Weifeng Ge', 'Guanhua Chen', 'Tao ... | 2,025 | arXiv.org | 0 | 0 | ['Computer Science'] |
2,505.13893 | InfiGFusion: Graph-on-Logits Distillation via Efficient
Gromov-Wasserstein for Model Fusion | ['Yuanyi Wang', 'Zhaoyi Yan', 'Yiming Zhang', 'Qi Zhou', 'Yanggan Gu', 'Fei Wu', 'Hongxia Yang'] | ['cs.CL'] | Recent advances in large language models (LLMs) have intensified efforts to
fuse heterogeneous open-source models into a unified system that inherits their
complementary strengths. Existing logit-based fusion methods maintain inference
efficiency but treat vocabulary dimensions independently, overlooking semantic
depen... | 2025-05-20T03:55:35Z | null | null | null | null | null | null | null | null | null | null |
2,505.13909 | Efficient Agent Training for Computer Use | ['Yanheng He', 'Jiahe Jin', 'Pengfei Liu'] | ['cs.AI', 'cs.CL', 'cs.LG'] | Scaling up high-quality trajectory data has long been a critical bottleneck
for developing human-like computer use agents. We introduce PC Agent-E, an
efficient agent training framework that significantly reduces reliance on
large-scale human demonstrations. Starting with just 312 human-annotated
computer use trajector... | 2025-05-20T04:20:18Z | We open-source our entire suite of code, data, and models to
facilitate future research at https://github.com/GAIR-NLP/PC-Agent-E | null | null | Efficient Agent Training for Computer Use | ['Yanheng He', 'Jiahe Jin', 'Pengfei Liu'] | 2,025 | arXiv.org | 0 | 47 | ['Computer Science'] |
2,505.13934 | RLVR-World: Training World Models with Reinforcement Learning | ['Jialong Wu', 'Shaofeng Yin', 'Ningya Feng', 'Mingsheng Long'] | ['cs.LG', 'cs.AI'] | World models predict state transitions in response to actions and are
increasingly developed across diverse modalities. However, standard training
objectives such as maximum likelihood estimation (MLE) often misalign with
task-specific goals of world models, i.e., transition prediction metrics like
accuracy or perceptu... | 2025-05-20T05:02:53Z | Code is available at project website:
https://thuml.github.io/RLVR-World/ | null | null | RLVR-World: Training World Models with Reinforcement Learning | ['Jialong Wu', 'Shaofeng Yin', 'Ningya Feng', 'Mingsheng Long'] | 2,025 | arXiv.org | 2 | 69 | ['Computer Science'] |
2,505.14142 | AudSemThinker: Enhancing Audio-Language Models through Reasoning over
Semantics of Sound | ['Gijs Wijngaard', 'Elia Formisano', 'Michele Esposito', 'Michel Dumontier'] | ['cs.SD', 'eess.AS'] | Audio-language models have shown promising results in various sound
understanding tasks, yet they remain limited in their ability to reason over
the fine-grained semantics of sound. In this paper, we present AudSemThinker, a
model whose reasoning is structured around a framework of auditory semantics
inspired by human ... | 2025-05-20T09:46:29Z | null | null | null | null | null | null | null | null | null | null |
2,505.14231 | UniVG-R1: Reasoning Guided Universal Visual Grounding with Reinforcement
Learning | ['Sule Bai', 'Mingxing Li', 'Yong Liu', 'Jing Tang', 'Haoji Zhang', 'Lei Sun', 'Xiangxiang Chu', 'Yansong Tang'] | ['cs.CV'] | Traditional visual grounding methods primarily focus on single-image
scenarios with simple textual references. However, extending these methods to
real-world scenarios that involve implicit and complex instructions,
particularly in conjunction with multiple images, poses significant challenges,
which is mainly due to t... | 2025-05-20T11:40:43Z | null | null | null | UniVG-R1: Reasoning Guided Universal Visual Grounding with Reinforcement Learning | ['Sule Bai', 'Mingxing Li', 'Yong Liu', 'Jing Tang', 'Haoji Zhang', 'Lei Sun', 'Xiangxiang Chu', 'Yansong Tang'] | 2,025 | arXiv.org | 3 | 69 | ['Computer Science'] |
2,505.14279 | YESciEval: Robust LLM-as-a-Judge for Scientific Question Answering | ["Jennifer D'Souza", 'Hamed Babaei Giglou', 'Quentin Münch'] | ['cs.CL', 'cs.AI'] | Large Language Models (LLMs) drive scientific question-answering on modern
search engines, yet their evaluation robustness remains underexplored. We
introduce YESciEval, an open-source framework that combines fine-grained
rubric-based assessment with reinforcement learning to mitigate optimism bias
in LLM evaluators. W... | 2025-05-20T12:30:46Z | 9 pages, 4 figures, Accepted as a Long Paper at the 63rd Annual
Meeting of the Association for Computational Linguistics (ACL 2025) | null | null | null | null | null | null | null | null | null |
2,505.14352 | Towards eliciting latent knowledge from LLMs with mechanistic
interpretability | ['Bartosz Cywiński', 'Emil Ryd', 'Senthooran Rajamanoharan', 'Neel Nanda'] | ['cs.LG'] | As language models become more powerful and sophisticated, it is crucial that
they remain trustworthy and reliable. There is concerning preliminary evidence
that models may attempt to deceive or keep secrets from their operators. To
explore the ability of current techniques to elicit such hidden knowledge, we
train a T... | 2025-05-20T13:36:37Z | null | null | null | null | null | null | null | null | null | null |
2,505.14362 | DeepEyes: Incentivizing "Thinking with Images" via Reinforcement
Learning | ['Ziwei Zheng', 'Michael Yang', 'Jack Hong', 'Chenxiao Zhao', 'Guohai Xu', 'Le Yang', 'Chao Shen', 'Xing Yu'] | ['cs.CV'] | Large Vision-Language Models (VLMs) have shown strong capabilities in
multimodal understanding and reasoning, yet they are primarily constrained by
text-based reasoning processes. However, achieving seamless integration of
visual and textual reasoning which mirrors human cognitive processes remains a
significant challe... | 2025-05-20T13:48:11Z | Ziwei, Michael, Jack, and Chenxiao are equal-contribution. The list
order is random | null | null | null | null | null | null | null | null | null |
2,505.14432 | Rank-K: Test-Time Reasoning for Listwise Reranking | ['Eugene Yang', 'Andrew Yates', 'Kathryn Ricci', 'Orion Weller', 'Vivek Chari', 'Benjamin Van Durme', 'Dawn Lawrie'] | ['cs.IR', 'cs.CL'] | Retrieve-and-rerank is a popular retrieval pipeline because of its ability to
make slow but effective rerankers efficient enough at query time by reducing
the number of comparisons. Recent works in neural rerankers take advantage of
large language models for their capability in reasoning between queries and
passages an... | 2025-05-20T14:39:34Z | 15 pages, 4 figures | null | null | Rank-K: Test-Time Reasoning for Listwise Reranking | ['Eugene Yang', 'Andrew Yates', 'Kathryn Ricci', 'Orion Weller', 'Vivek Chari', 'Benjamin Van Durme', 'Dawn J. Lawrie'] | 2,025 | arXiv.org | 2 | 64 | ['Computer Science'] |
2,505.1446 | VisualQuality-R1: Reasoning-Induced Image Quality Assessment via
Reinforcement Learning to Rank | ['Tianhe Wu', 'Jian Zou', 'Jie Liang', 'Lei Zhang', 'Kede Ma'] | ['cs.CV'] | DeepSeek-R1 has demonstrated remarkable effectiveness in incentivizing
reasoning and generalization capabilities of large language models (LLMs)
through reinforcement learning. Nevertheless, the potential of
reasoning-induced computational modeling has not been thoroughly explored in
the context of image quality assess... | 2025-05-20T14:56:50Z | null | null | null | VisualQuality-R1: Reasoning-Induced Image Quality Assessment via Reinforcement Learning to Rank | ['Tianhe Wu', 'Jian Zou', 'Jie-Kai Liang', 'Lei Zhang', 'Kede Ma'] | 2,025 | arXiv.org | 0 | 57 | ['Computer Science'] |
2,505.1447 | PAST: Phonetic-Acoustic Speech Tokenizer | ['Nadav Har-Tuv', 'Or Tal', 'Yossi Adi'] | ['cs.SD', 'cs.CL', 'cs.LG', 'eess.AS'] | We present PAST, a novel end-to-end framework that jointly models phonetic
information alongside signal reconstruction, eliminating the need for external
pretrained models. Unlike previous approaches that rely on pretrained
self-supervised models, PAST employs supervised phonetic data, directly
integrating domain knowl... | 2025-05-20T15:05:14Z | null | null | null | PAST: Phonetic-Acoustic Speech Tokenizer | ['Nadav Har-Tuv', 'Or Tal', 'Yossi Adi'] | 2,025 | arXiv.org | 0 | 37 | ['Computer Science', 'Engineering'] |
2,505.14625 | TinyV: Reducing False Negatives in Verification Improves RL for LLM
Reasoning | ['Zhangchen Xu', 'Yuetai Li', 'Fengqing Jiang', 'Bhaskar Ramasubramanian', 'Luyao Niu', 'Bill Yuchen Lin', 'Radha Poovendran'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Reinforcement Learning (RL) has become a powerful tool for enhancing the
reasoning abilities of large language models (LLMs) by optimizing their
policies with reward signals. Yet, RL's success relies on the reliability of
rewards, which are provided by verifiers. In this paper, we expose and analyze
a widespread proble... | 2025-05-20T17:16:44Z | null | null | null | null | null | null | null | null | null | null |
2,505.14648 | Vox-Profile: A Speech Foundation Model Benchmark for Characterizing
Diverse Speaker and Speech Traits | ['Tiantian Feng', 'Jihwan Lee', 'Anfeng Xu', 'Yoonjeong Lee', 'Thanathai Lertpetchpun', 'Xuan Shi', 'Helin Wang', 'Thomas Thebaud', 'Laureano Moro-Velazquez', 'Dani Byrd', 'Najim Dehak', 'Shrikanth Narayanan'] | ['cs.SD', 'eess.AS'] | We introduce Vox-Profile, a comprehensive benchmark to characterize rich
speaker and speech traits using speech foundation models. Unlike existing works
that focus on a single dimension of speaker traits, Vox-Profile provides
holistic and multi-dimensional profiles that reflect both static speaker traits
(e.g., age, se... | 2025-05-20T17:36:41Z | null | null | null | Vox-Profile: A Speech Foundation Model Benchmark for Characterizing Diverse Speaker and Speech Traits | ['Tiantian Feng', 'Jihwan Lee', 'Anfeng Xu', 'Yoonjeong Lee', 'Thanathai Lertpetchpun', 'Xuan Shi', 'Helin Wang', 'Thomas Thebaud', 'L. Moro-Velázquez', 'Dani Byrd', 'N. Dehak', 'Shrikanth S. Narayanan'] | 2,025 | arXiv.org | 1 | 62 | ['Computer Science', 'Engineering'] |
2,505.14652 | General-Reasoner: Advancing LLM Reasoning Across All Domains | ['Xueguang Ma', 'Qian Liu', 'Dongfu Jiang', 'Ge Zhang', 'Zejun Ma', 'Wenhu Chen'] | ['cs.CL'] | Reinforcement learning (RL) has recently demonstrated strong potential in
enhancing the reasoning capabilities of large language models (LLMs).
Particularly, the "Zero" reinforcement learning introduced by Deepseek-R1-Zero,
enables direct RL training of base LLMs without relying on an intermediate
supervised fine-tunin... | 2025-05-20T17:41:33Z | null | null | null | null | null | null | null | null | null | null |
2,505.14667 | SAFEPATH: Preventing Harmful Reasoning in Chain-of-Thought via Early
Alignment | ['Wonje Jeung', 'Sangyeon Yoon', 'Minsuk Kahng', 'Albert No'] | ['cs.AI', 'cs.CL'] | Large Reasoning Models (LRMs) have become powerful tools for complex problem
solving, but their structured reasoning pathways can lead to unsafe outputs
when exposed to harmful prompts. Existing safety alignment methods reduce
harmful outputs but can degrade reasoning depth, leading to significant
trade-offs in complex... | 2025-05-20T17:54:54Z | Code and models are available at https://ai-isl.github.io/safepath | null | null | SAFEPATH: Preventing Harmful Reasoning in Chain-of-Thought via Early Alignment | ['Wonje Jeung', 'Sangyeon Yoon', 'Minsuk Kahng', 'Albert No'] | 2,025 | arXiv.org | 1 | 53 | ['Computer Science'] |
2,505.14673 | Training-Free Watermarking for Autoregressive Image Generation | ['Yu Tong', 'Zihao Pan', 'Shuai Yang', 'Kaiyang Zhou'] | ['cs.CV', 'cs.AI', 'cs.CR'] | Invisible image watermarking can protect image ownership and prevent
malicious misuse of visual generative models. However, existing generative
watermarking methods are mainly designed for diffusion models while
watermarking for autoregressive image generation models remains largely
underexplored. We propose IndexMark,... | 2025-05-20T17:58:02Z | null | null | null | null | null | null | null | null | null | null |