arxiv_id float64 1.5k 2.51k | title stringlengths 9 178 ⌀ | authors stringlengths 2 22.8k | categories stringlengths 4 146 | summary stringlengths 103 1.92k ⌀ | published stringdate 2015-02-06 10:44:00 2025-07-10 17:59:58 ⌀ | comments stringlengths 2 417 ⌀ | journal_ref stringclasses 321
values | doi stringclasses 398
values | ss_title stringlengths 8 159 ⌀ | ss_authors stringlengths 11 8.38k ⌀ | ss_year float64 2.02k 2.03k ⌀ | ss_venue stringclasses 281
values | ss_citationCount float64 0 134k ⌀ | ss_referenceCount float64 0 429 ⌀ | ss_fieldsOfStudy stringclasses 47
values |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,505.20161 | Prismatic Synthesis: Gradient-based Data Diversification Boosts
Generalization in LLM Reasoning | ['Jaehun Jung', 'Seungju Han', 'Ximing Lu', 'Skyler Hallinan', 'David Acuna', 'Shrimai Prabhumoye', 'Mostafa Patwary', 'Mohammad Shoeybi', 'Bryan Catanzaro', 'Yejin Choi'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Effective generalization in language models depends critically on the
diversity of their training data. Yet existing diversity metrics often fall
short of this goal, relying on surface-level heuristics that are decoupled from
model behavior. This motivates us to ask: What kind of diversity in training
data actually drives generalization in language models -- and how can we
measure and amplify it? Through large-scale empirical analyses spanning over
300 training runs, carefully controlled for data scale and quality, we show
that data diversity can be a strong predictor of generalization in LLM
reasoning -- as measured by average model performance on unseen
out-of-distribution benchmarks. We introduce G-Vendi, a metric that quantifies
diversity via the entropy of model-induced gradients. Despite using a small
off-the-shelf proxy model for gradients, G-Vendi consistently outperforms
alternative measures, achieving strong correlation (Spearman's $\rho \approx
0.9$) with out-of-distribution (OOD) performance on both natural language
inference (NLI) and math reasoning tasks. Building on this insight, we present
Prismatic Synthesis, a framework for generating diverse synthetic data by
targeting underrepresented regions in gradient space. Experimental results show
that Prismatic Synthesis consistently improves model performance as we scale
synthetic data -- not just on in-distribution test but across unseen,
out-of-distribution benchmarks -- significantly outperforming state-of-the-art
models that rely on 20 times larger data generator than ours. For example,
PrismMath-7B, our model distilled from a 32B LLM, outperforms
R1-Distill-Qwen-7B -- the same base model trained on proprietary data generated
by 671B R1 -- on 6 out of 7 challenging benchmarks. | 2025-05-26T16:05:10Z | null | null | null | Prismatic Synthesis: Gradient-based Data Diversification Boosts Generalization in LLM Reasoning | ['Jaehun Jung', 'Seungju Han', 'Ximing Lu', 'Skyler Hallinan', 'David Acuna', 'Shrimai Prabhumoye', 'Mostafa Patwary', 'M. Shoeybi', 'Bryan Catanzaro', 'Yejin Choi'] | 2,025 | arXiv.org | 1 | 54 | ['Computer Science'] |
2,505.20192 | FunReason: Enhancing Large Language Models' Function Calling via
Self-Refinement Multiscale Loss and Automated Data Refinement | ['Bingguang Hao', 'Maolin Wang', 'Zengzhuang Xu', 'Cunyin Peng', 'Yicheng Chen', 'Xiangyu Zhao', 'Jinjie Gu', 'Chenyi Zhuang'] | ['cs.LG', 'cs.IR'] | The integration of large language models (LLMs) with function calling has
emerged as a crucial capability for enhancing their practical utility in
real-world applications. However, effectively combining reasoning processes
with accurate function execution remains a significant challenge. Traditional
training approaches often struggle to balance the detailed reasoning steps with
the precision of function calls, leading to suboptimal performance. To address
these limitations, we introduce FunReason, a novel framework that enhances
LLMs' function calling capabilities through an automated data refinement
strategy and a Self-Refinement Multiscale Loss (SRML) approach. FunReason
leverages LLMs' natural reasoning abilities to generate high-quality training
examples, focusing on query parseability, reasoning coherence, and function
call precision. The SRML approach dynamically balances the contribution of
reasoning processes and function call accuracy during training, addressing the
inherent trade-off between these two critical aspects. FunReason achieves
performance comparable to GPT-4o while effectively mitigating catastrophic
forgetting during fine-tuning. FunReason provides a comprehensive solution for
enhancing LLMs' function calling capabilities by introducing a balanced
training methodology and a data refinement pipeline. For code and dataset,
please refer to our repository at GitHub
https://github.com/BingguangHao/FunReason | 2025-05-26T16:38:06Z | null | null | null | FunReason: Enhancing Large Language Models' Function Calling via Self-Refinement Multiscale Loss and Automated Data Refinement | ['Bingguang Hao', 'Maolin Wang', 'Zengzhuang Xu', 'Cunyin Peng', 'Yicheng Chen', 'Xiangyu Zhao', 'Jinjie Gu', 'Chenyi Zhuang'] | 2,025 | arXiv.org | 0 | 44 | ['Computer Science'] |
2,505.20225 | FLAME-MoE: A Transparent End-to-End Research Platform for
Mixture-of-Experts Language Models | ['Hao Kang', 'Zichun Yu', 'Chenyan Xiong'] | ['cs.CL', 'cs.LG'] | Recent large language models such as Gemini-1.5, DeepSeek-V3, and Llama-4
increasingly adopt Mixture-of-Experts (MoE) architectures, which offer strong
efficiency-performance trade-offs by activating only a fraction of the model
per token. Yet academic researchers still lack a fully open, end-to-end MoE
platform for investigating scaling, routing, and expert behavior. We release
FLAME-MoE, a completely open-source research suite composed of seven
decoder-only models, ranging from 38M to 1.7B active parameters, whose
architecture--64 experts with top-8 gating and 2 shared experts--closely
reflects modern production LLMs. All training data pipelines, scripts, logs,
and checkpoints are publicly available to enable reproducible experimentation.
Across six evaluation tasks, FLAME-MoE improves average accuracy by up to 3.4
points over dense baselines trained with identical FLOPs. Leveraging full
training trace transparency, we present initial analyses showing that (i)
experts increasingly specialize on distinct token subsets, (ii) co-activation
matrices remain sparse, reflecting diverse expert usage, and (iii) routing
behavior stabilizes early in training. All code, training logs, and model
checkpoints are available at https://github.com/cmu-flame/FLAME-MoE. | 2025-05-26T17:06:25Z | All code, training logs, and model checkpoints are available at
https://github.com/cmu-flame/FLAME-MoE | null | null | FLAME-MoE: A Transparent End-to-End Research Platform for Mixture-of-Experts Language Models | ['Hao Kang', 'Zichun Yu', 'Chenyan Xiong'] | 2,025 | arXiv.org | 0 | 39 | ['Computer Science'] |
2,505.20255 | AniCrafter: Customizing Realistic Human-Centric Animation via
Avatar-Background Conditioning in Video Diffusion Models | ['Muyao Niu', 'Mingdeng Cao', 'Yifan Zhan', 'Qingtian Zhu', 'Mingze Ma', 'Jiancheng Zhao', 'Yanhong Zeng', 'Zhihang Zhong', 'Xiao Sun', 'Yinqiang Zheng'] | ['cs.CV'] | Recent advances in video diffusion models have significantly improved
character animation techniques. However, current approaches rely on basic
structural conditions such as DWPose or SMPL-X to animate character images,
limiting their effectiveness in open-domain scenarios with dynamic backgrounds
or challenging human poses. In this paper, we introduce \textbf{AniCrafter}, a
diffusion-based human-centric animation model that can seamlessly integrate and
animate a given character into open-domain dynamic backgrounds while following
given human motion sequences. Built on cutting-edge Image-to-Video (I2V)
diffusion architectures, our model incorporates an innovative
''avatar-background'' conditioning mechanism that reframes open-domain
human-centric animation as a restoration task, enabling more stable and
versatile animation outputs. Experimental results demonstrate the superior
performance of our method. Codes are available at
https://github.com/MyNiuuu/AniCrafter. | 2025-05-26T17:32:10Z | Homepage: https://myniuuu.github.io/AniCrafter ; Codes:
https://github.com/MyNiuuu/AniCrafter | null | null | null | null | null | null | null | null | null |
2,505.20256 | Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System
Collaboration | ['Hao Zhong', 'Muzhi Zhu', 'Zongze Du', 'Zheng Huang', 'Canyu Zhao', 'Mingyu Liu', 'Wen Wang', 'Hao Chen', 'Chunhua Shen'] | ['cs.CV'] | Long-horizon video-audio reasoning and fine-grained pixel understanding
impose conflicting requirements on omnimodal models: dense temporal coverage
demands many low-resolution frames, whereas precise grounding calls for
high-resolution inputs. We tackle this trade-off with a two-system
architecture: a Global Reasoning System selects informative keyframes and
rewrites the task at low spatial cost, while a Detail Understanding System
performs pixel-level grounding on the selected high-resolution snippets.
Because ``optimal'' keyframe selection and reformulation are ambiguous and hard
to supervise, we formulate them as a reinforcement learning (RL) problem and
present Omni-R1, an end-to-end RL framework built on Group Relative Policy
Optimization. Omni-R1 trains the Global Reasoning System through hierarchical
rewards obtained via online collaboration with the Detail Understanding System,
requiring only one epoch of RL on small task splits.
Experiments on two challenging benchmarks, namely Referring Audio-Visual
Segmentation (RefAVS) and Reasoning Video Object Segmentation (REVOS), show
that Omni-R1 not only surpasses strong supervised baselines but also
outperforms specialized state-of-the-art models, while substantially improving
out-of-domain generalization and mitigating multimodal hallucination. Our
results demonstrate the first successful application of RL to large-scale
omnimodal reasoning and highlight a scalable path toward universally foundation
models. | 2025-05-26T17:34:06Z | Project page: https://aim-uofa.github.io/OmniR1 | null | null | null | null | null | null | null | null | null |
2,505.20282 | One-shot Entropy Minimization | ['Zitian Gao', 'Lynx Chen', 'Joey Zhou', 'Bryan Dai'] | ['cs.CL'] | We trained 13,440 large language models and found that entropy minimization
requires only a single unlabeled data and 10 steps optimization to achieve
performance improvements comparable to or even greater than those obtained
using thousands of data and carefully designed rewards in rule-based
reinforcement learning. This striking result may prompt a rethinking of
post-training paradigms for large language models. Our code is avaliable at
https://github.com/zitian-gao/one-shot-em. | 2025-05-26T17:58:30Z | Work in progress | null | null | null | null | null | null | null | null | null |
2,505.20287 | MotionPro: A Precise Motion Controller for Image-to-Video Generation | ['Zhongwei Zhang', 'Fuchen Long', 'Zhaofan Qiu', 'Yingwei Pan', 'Wu Liu', 'Ting Yao', 'Tao Mei'] | ['cs.CV', 'cs.MM'] | Animating images with interactive motion control has garnered popularity for
image-to-video (I2V) generation. Modern approaches typically rely on large
Gaussian kernels to extend motion trajectories as condition without explicitly
defining movement region, leading to coarse motion control and failing to
disentangle object and camera moving. To alleviate these, we present MotionPro,
a precise motion controller that novelly leverages region-wise trajectory and
motion mask to regulate fine-grained motion synthesis and identify target
motion category (i.e., object or camera moving), respectively. Technically,
MotionPro first estimates the flow maps on each training video via a tracking
model, and then samples the region-wise trajectories to simulate inference
scenario. Instead of extending flow through large Gaussian kernels, our
region-wise trajectory approach enables more precise control by directly
utilizing trajectories within local regions, thereby effectively characterizing
fine-grained movements. A motion mask is simultaneously derived from the
predicted flow maps to capture the holistic motion dynamics of the movement
regions. To pursue natural motion control, MotionPro further strengthens video
denoising by incorporating both region-wise trajectories and motion mask
through feature modulation. More remarkably, we meticulously construct a
benchmark, i.e., MC-Bench, with 1.1K user-annotated image-trajectory pairs, for
the evaluation of both fine-grained and object-level I2V motion control.
Extensive experiments conducted on WebVid-10M and MC-Bench demonstrate the
effectiveness of MotionPro. Please refer to our project page for more results:
https://zhw-zhang.github.io/MotionPro-page/. | 2025-05-26T17:59:03Z | CVPR 2025. Project page: https://zhw-zhang.github.io/MotionPro-page/ | null | null | null | null | null | null | null | null | null |
2,505.20292 | OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for
Subject-to-Video Generation | ['Shenghai Yuan', 'Xianyi He', 'Yufan Deng', 'Yang Ye', 'Jinfa Huang', 'Bin Lin', 'Jiebo Luo', 'Li Yuan'] | ['cs.CV', 'cs.AI'] | Subject-to-Video (S2V) generation aims to create videos that faithfully
incorporate reference content, providing enhanced flexibility in the production
of videos. To establish the infrastructure for S2V generation, we propose
OpenS2V-Nexus, consisting of (i) OpenS2V-Eval, a fine-grained benchmark, and
(ii) OpenS2V-5M, a million-scale dataset. In contrast to existing S2V
benchmarks inherited from VBench that focus on global and coarse-grained
assessment of generated videos, OpenS2V-Eval focuses on the model's ability to
generate subject-consistent videos with natural subject appearance and identity
fidelity. For these purposes, OpenS2V-Eval introduces 180 prompts from seven
major categories of S2V, which incorporate both real and synthetic test data.
Furthermore, to accurately align human preferences with S2V benchmarks, we
propose three automatic metrics, NexusScore, NaturalScore and GmeScore, to
separately quantify subject consistency, naturalness, and text relevance in
generated videos. Building on this, we conduct a comprehensive evaluation of 18
representative S2V models, highlighting their strengths and weaknesses across
different content. Moreover, we create the first open-source large-scale S2V
generation dataset OpenS2V-5M, which consists of five million high-quality 720P
subject-text-video triples. Specifically, we ensure subject-information
diversity in our dataset by (1) segmenting subjects and building pairing
information via cross-video associations and (2) prompting GPT-Image-1 on raw
frames to synthesize multi-view representations. Through OpenS2V-Nexus, we
deliver a robust infrastructure to accelerate future S2V generation research. | 2025-05-26T17:59:46Z | Code and Dataset: https://github.com/PKU-YuanGroup/OpenS2V-Nexus | null | null | null | null | null | null | null | null | null |
2,505.20298 | MangaVQA and MangaLMM: A Benchmark and Specialized Model for Multimodal
Manga Understanding | ['Jeonghun Baek', 'Kazuki Egashira', 'Shota Onohara', 'Atsuyuki Miyai', 'Yuki Imajuku', 'Hikaru Ikuta', 'Kiyoharu Aizawa'] | ['cs.CL', 'cs.AI', 'cs.CV'] | Manga, or Japanese comics, is a richly multimodal narrative form that blends
images and text in complex ways. Teaching large multimodal models (LMMs) to
understand such narratives at a human-like level could help manga creators
reflect on and refine their stories. To this end, we introduce two benchmarks
for multimodal manga understanding: MangaOCR, which targets in-page text
recognition, and MangaVQA, a novel benchmark designed to evaluate contextual
understanding through visual question answering. MangaVQA consists of 526
high-quality, manually constructed question-answer pairs, enabling reliable
evaluation across diverse narrative and visual scenarios. Building on these
benchmarks, we develop MangaLMM, a manga-specialized model finetuned from the
open-source LMM Qwen2.5-VL to jointly handle both tasks. Through extensive
experiments, including comparisons with proprietary models such as GPT-4o and
Gemini 2.5, we assess how well LMMs understand manga. Our benchmark and model
provide a comprehensive foundation for evaluating and advancing LMMs in the
richly narrative domain of manga. | 2025-05-26T17:59:59Z | 20 pages, 11 figures | null | null | null | null | null | null | null | null | null |
2,505.20302 | VeriThoughts: Enabling Automated Verilog Code Generation using Reasoning
and Formal Verification | ['Patrick Yubeaton', 'Andre Nakkab', 'Weihua Xiao', 'Luca Collini', 'Ramesh Karri', 'Chinmay Hegde', 'Siddharth Garg'] | ['cs.PL', 'cs.AI', 'cs.LO'] | This paper introduces VeriThoughts, a novel dataset designed for
reasoning-based Verilog code generation. We establish a new benchmark framework
grounded in formal verification methods to evaluate the quality and correctness
of generated hardware descriptions. Additionally, we present a suite of
specialized small-scale models optimized specifically for Verilog generation.
Our work addresses the growing need for automated hardware design tools that
can produce verifiably correct implementations from high-level specifications,
potentially accelerating the hardware development process while maintaining
rigorous correctness guarantees. Our code and data are available at
\href{https://github.com/wilyub/VeriThoughts}{this URL}. | 2025-05-16T21:33:14Z | null | null | null | null | null | null | null | null | null | null |
2,505.20315 | Arctic-Text2SQL-R1: Simple Rewards, Strong Reasoning in Text-to-SQL | ['Zhewei Yao', 'Guoheng Sun', 'Lukasz Borchmann', 'Zheyu Shen', 'Minghang Deng', 'Bohan Zhai', 'Hao Zhang', 'Ang Li', 'Yuxiong He'] | ['cs.CL', 'cs.AI'] | Translating natural language into SQL (Test2SQL) is a longstanding challenge
at the intersection of natural language understanding and structured data
access. While large language models (LLMs) have significantly improved fluency
in SQL generation, producing correct and executable SQL--particularly for
complex queries--remains a bottleneck. We present Arctic-Text2SQL-R1, a
reinforcement learning (RL) framework and model family designed to generate
accurate, executable SQL using a lightweight reward signal based solely on
execution correctness. Our approach avoids brittle intermediate supervision and
complex reward shaping, promoting stable training and alignment with the end
task. Combined with carefully curated data, strong supervised initialization,
and effective training practices, Arctic-Text2SQL-R1 achieves state-of-the-art
execution accuracy across six diverse Test2SQL benchmarks, including the top
position on the BIRD leaderboard. Notably, our 7B model outperforms prior
70B-class systems, highlighting the framework's scalability and efficiency. We
further demonstrate inference-time robustness through simple extensions like
value retrieval and majority voting. Extensive experiments and ablation studies
offer both positive and negative insights, providing practical guidance for
future Test2SQL research. | 2025-05-22T23:33:47Z | 22 pages, 2 figures | null | null | null | null | null | null | null | null | null |
2,505.20325 | Guided by Gut: Efficient Test-Time Scaling with Reinforced Intrinsic
Confidence | ['Amirhosein Ghasemabadi', 'Keith G. Mills', 'Baochun Li', 'Di Niu'] | ['cs.CL', 'cs.AI'] | Test-Time Scaling (TTS) methods for enhancing Large Language Model (LLM)
reasoning often incur substantial computational costs, primarily due to
extensive reliance on external Process Reward Models (PRMs) or sampling methods
like Best-of-N (BoN). This paper introduces Guided by Gut (GG), an efficient
self-guided TTS framework that achieves PRM-level performance without costly
external verifier models. Our method employs a lightweight tree search guided
solely by intrinsic LLM signals, token-level confidence and step novelty. One
critical innovation is improving the reliability of internal confidence
estimates via a targeted reinforcement learning fine-tuning phase. Empirical
evaluations on challenging mathematical reasoning benchmarks demonstrate that
GG enables smaller models (e.g., 1.5B parameters) to achieve accuracy matching
or surpassing significantly larger models (e.g., 32B-70B parameters), while
reducing GPU memory usage by up to 10x. Compared to PRM-based methods, GG
achieves comparable accuracy with 8x faster inference speeds and 4-5x lower
memory usage. Additionally, GG reduces KV cache memory usage by approximately
50% compared to the BoN strategy, facilitating more efficient and practical
deployment of TTS techniques. | 2025-05-23T18:19:09Z | null | null | null | null | null | null | null | null | null | null |
2,505.20715 | MUSEG: Reinforcing Video Temporal Understanding via Timestamp-Aware
Multi-Segment Grounding | ['Fuwen Luo', 'Shengfeng Lou', 'Chi Chen', 'Ziyue Wang', 'Chenliang Li', 'Weizhou Shen', 'Jiyue Guo', 'Peng Li', 'Ming Yan', 'Ji Zhang', 'Fei Huang', 'Yang Liu'] | ['cs.CV', 'cs.CL'] | Video temporal understanding is crucial for multimodal large language models
(MLLMs) to reason over events in videos. Despite recent advances in general
video understanding, current MLLMs still struggle with fine-grained temporal
reasoning. While reinforcement learning (RL) has been explored to address this
issue recently, existing RL approaches remain limited in effectiveness. In this
work, we propose MUSEG, a novel RL-based method that enhances temporal
understanding by introducing timestamp-aware multi-segment grounding. MUSEG
enables MLLMs to align queries with multiple relevant video segments, promoting
more comprehensive temporal reasoning. To facilitate effective learning, we
design a customized RL training recipe with phased rewards that progressively
guides the model toward temporally grounded reasoning. Extensive experiments on
temporal grounding and time-sensitive video QA tasks demonstrate that MUSEG
significantly outperforms existing methods and generalizes well across diverse
temporal understanding scenarios. View our project at
https://github.com/THUNLP-MT/MUSEG. | 2025-05-27T04:50:07Z | null | null | null | null | null | null | null | null | null | null |
2,505.20767 | CogniBench: A Legal-inspired Framework and Dataset for Assessing
Cognitive Faithfulness of Large Language Models | ['Xiaqiang Tang', 'Jian Li', 'Keyu Hu', 'Du Nan', 'Xiaolong Li', 'Xi Zhang', 'Weigao Sun', 'Sihong Xie'] | ['cs.CL', 'cs.AI'] | Faithfulness hallucinations are claims generated by a Large Language Model
(LLM) not supported by contexts provided to the LLM. Lacking assessment
standards, existing benchmarks focus on "factual statements" that rephrase
source materials while overlooking "cognitive statements" that involve making
inferences from the given context. Consequently, evaluating and detecting the
hallucination of cognitive statements remains challenging. Inspired by how
evidence is assessed in the legal domain, we design a rigorous framework to
assess different levels of faithfulness of cognitive statements and introduce
the CogniBench dataset where we reveal insightful statistics. To keep pace with
rapidly evolving LLMs, we further develop an automatic annotation pipeline that
scales easily across different models. This results in a large-scale
CogniBench-L dataset, which facilitates training accurate detectors for both
factual and cognitive hallucinations. We release our model and datasets at:
https://github.com/FUTUREEEEEE/CogniBench | 2025-05-27T06:16:27Z | ACL 2025 | null | null | CogniBench: A Legal-inspired Framework and Dataset for Assessing Cognitive Faithfulness of Large Language Models | ['Xiaqiang Tang', 'Jian Li', 'Ke-Bang Hu', 'Du Nan', 'Xiaolong Li', 'Xi Zhang', 'Weigao Sun', 'Sihong Xie'] | 2,025 | arXiv.org | 0 | 40 | ['Computer Science'] |
2,505.20779 | CHIMERA: A Knowledge Base of Idea Recombination in Scientific Literature | ['Noy Sternlicht', 'Tom Hope'] | ['cs.CL'] | A hallmark of human innovation is the process of recombination -- creating
original ideas by integrating elements of existing mechanisms and concepts. In
this work, we automatically mine the scientific literature and build CHIMERA: a
large-scale knowledge base (KB) of recombination examples. CHIMERA can be used
to empirically explore at scale how scientists recombine concepts and take
inspiration from different areas, or to train supervised machine learning
models that learn to predict new creative cross-domain directions. To build
this KB, we present a novel information extraction task of extracting
recombination from scientific paper abstracts, collect a high-quality corpus of
hundreds of manually annotated abstracts, and use it to train an LLM-based
extraction model. The model is applied to a large corpus of papers in the AI
domain, yielding a KB of over 28K recombination examples. We analyze CHIMERA to
explore the properties of recombination in different subareas of AI. Finally,
we train a scientific hypothesis generation model using the KB, which predicts
new recombination directions that real-world researchers find inspiring. Our
data and code are available at https://github.com/noy-sternlicht/CHIMERA-KB | 2025-05-27T06:36:04Z | Project page: https://noy-sternlicht.github.io/CHIMERA-Web | null | null | null | null | null | null | null | null | null |
2,505.20793 | Rendering-Aware Reinforcement Learning for Vector Graphics Generation | ['Juan A. Rodriguez', 'Haotian Zhang', 'Abhay Puri', 'Aarash Feizi', 'Rishav Pramanik', 'Pascal Wichmann', 'Arnab Mondal', 'Mohammad Reza Samsami', 'Rabiul Awal', 'Perouz Taslakian', 'Spandana Gella', 'Sai Rajeswar', 'David Vazquez', 'Christopher Pal', 'Marco Pedersoli'] | ['cs.CV', 'cs.AI'] | Scalable Vector Graphics (SVG) offer a powerful format for representing
visual designs as interpretable code. Recent advances in vision-language models
(VLMs) have enabled high-quality SVG generation by framing the problem as a
code generation task and leveraging large-scale pretraining. VLMs are
particularly suitable for this task as they capture both global semantics and
fine-grained visual patterns, while transferring knowledge across vision,
natural language, and code domains. However, existing VLM approaches often
struggle to produce faithful and efficient SVGs because they never observe the
rendered images during training. Although differentiable rendering for
autoregressive SVG code generation remains unavailable, rendered outputs can
still be compared to original inputs, enabling evaluative feedback suitable for
reinforcement learning (RL). We introduce RLRF(Reinforcement Learning from
Rendering Feedback), an RL method that enhances SVG generation in
autoregressive VLMs by leveraging feedback from rendered SVG outputs. Given an
input image, the model generates SVG roll-outs that are rendered and compared
to the original image to compute a reward. This visual fidelity feedback guides
the model toward producing more accurate, efficient, and semantically coherent
SVGs. RLRF significantly outperforms supervised fine-tuning, addressing common
failure modes and enabling precise, high-quality SVG generation with strong
structural understanding and generalization. | 2025-05-27T06:56:00Z | null | null | null | null | null | null | null | null | null | null |
2,505.20979 | MelodySim: Measuring Melody-aware Music Similarity for Plagiarism
Detection | ['Tongyu Lu', 'Charlotta-Marlena Geist', 'Jan Melechovsky', 'Abhinaba Roy', 'Dorien Herremans'] | ['cs.SD', 'cs.AI', 'eess.AS'] | We propose MelodySim, a melody-aware music similarity model and dataset for
plagiarism detection. First, we introduce a novel method to construct a dataset
with focus on melodic similarity. By augmenting Slakh2100; an existing MIDI
dataset, we generate variations of each piece while preserving the melody
through modifications such as note splitting, arpeggiation, minor track dropout
(excluding bass), and re-instrumentation. A user study confirms that positive
pairs indeed contain similar melodies, with other musical tracks significantly
changed. Second, we develop a segment-wise melodic-similarity detection model
that uses a MERT encoder and applies a triplet neural network to capture
melodic similarity. The resultant decision matrix highlights where plagiarism
might occur. Our model achieves high accuracy on the MelodySim test set. | 2025-05-27T10:14:03Z | null | null | null | null | null | null | null | null | null | null |
2,505.20993 | Who Reasons in the Large Language Models? | ['Jie Shao', 'Jianxin Wu'] | ['cs.CL', 'cs.AI'] | Despite the impressive performance of large language models (LLMs), the
process of endowing them with new capabilities--such as mathematical
reasoning--remains largely empirical and opaque. A critical open question is
whether reasoning abilities stem from the entire model, specific modules, or
are merely artifacts of overfitting. In this work, we hypothesize that the
reasoning capabilities in well-trained LLMs are primarily attributed to the
output projection module (oproj) in the Transformer's multi-head self-attention
(MHSA) mechanism. To support this hypothesis, we introduce Stethoscope for
Networks (SfN), a suite of diagnostic tools designed to probe and analyze the
internal behaviors of LLMs. Using SfN, we provide both circumstantial and
empirical evidence suggesting that oproj plays a central role in enabling
reasoning, whereas other modules contribute more to fluent dialogue. These
findings offer a new perspective on LLM interpretability and open avenues for
more targeted training strategies, potentially enabling more efficient and
specialized LLMs. | 2025-05-27T10:26:47Z | null | null | null | Who Reasons in the Large Language Models? | ['Jie Shao', 'Jianxin Wu'] | 2,025 | arXiv.org | 0 | 51 | ['Computer Science'] |
2,505.2102 | NeuralOM: Neural Ocean Model for Subseasonal-to-Seasonal Simulation | ['Yuan Gao', 'Ruiqi Shu', 'Hao Wu', 'Fan Xu', 'Yanfei Xiang', 'Ruijian Gou', 'Qingsong Wen', 'Xian Wu', 'Xiaomeng Huang'] | ['cs.LG', 'physics.ao-ph'] | Accurate Subseasonal-to-Seasonal (S2S) ocean simulation is critically
important for marine research, yet remains challenging due to its substantial
thermal inertia and extended time delay. Machine learning (ML)-based models
have demonstrated significant advancements in simulation accuracy and
computational efficiency compared to traditional numerical methods.
Nevertheless, a significant limitation of current ML models for S2S ocean
simulation is their inadequate incorporation of physical consistency and the
slow-changing properties of the ocean system. In this work, we propose a neural
ocean model (NeuralOM) for S2S ocean simulation with a multi-scale interactive
graph neural network to emulate diverse physical phenomena associated with
ocean systems effectively. Specifically, we propose a multi-stage framework
tailored to model the ocean's slowly changing nature. Additionally, we
introduce a multi-scale interactive messaging module to capture complex
dynamical behaviors, such as gradient changes and multiplicative coupling
relationships inherent in ocean dynamics. Extensive experimental evaluations
confirm that our proposed NeuralOM outperforms state-of-the-art models in S2S
and extreme event simulation. The codes are available at
https://github.com/YuanGao-YG/NeuralOM. | 2025-05-27T10:54:40Z | null | null | null | null | null | null | null | null | null | null |
2,505.21062 | Inverse Virtual Try-On: Generating Multi-Category Product-Style Images
from Clothed Individuals | ['Davide Lobba', 'Fulvio Sanguigni', 'Bin Ren', 'Marcella Cornia', 'Rita Cucchiara', 'Nicu Sebe'] | ['cs.CV'] | While virtual try-on (VTON) systems aim to render a garment onto a target
person image, this paper tackles the novel task of virtual try-off (VTOFF),
which addresses the inverse problem: generating standardized product images of
garments from real-world photos of clothed individuals. Unlike VTON, which must
resolve diverse pose and style variations, VTOFF benefits from a consistent and
well-defined output format -- typically a flat, lay-down-style representation
of the garment -- making it a promising tool for data generation and dataset
enhancement. However, existing VTOFF approaches face two major limitations: (i)
difficulty in disentangling garment features from occlusions and complex poses,
often leading to visual artifacts, and (ii) restricted applicability to
single-category garments (e.g., upper-body clothes only), limiting
generalization. To address these challenges, we present Text-Enhanced
MUlti-category Virtual Try-Off (TEMU-VTOFF), a novel architecture featuring a
dual DiT-based backbone with a modified multimodal attention mechanism for
robust garment feature extraction. Our architecture is designed to receive
garment information from multiple modalities like images, text, and masks to
work in a multi-category setting. Finally, we propose an additional alignment
module to further refine the generated visual details. Experiments on VITON-HD
and Dress Code datasets show that TEMU-VTOFF sets a new state-of-the-art on the
VTOFF task, significantly improving both visual quality and fidelity to the
target garments. | 2025-05-27T11:47:51Z | null | null | null | Inverse Virtual Try-On: Generating Multi-Category Product-Style Images from Clothed Individuals | ['Davide Lobba', 'Fulvio Sanguigni', 'Bin Ren', 'Marcella Cornia', 'Rita Cucchiara', 'N. Sebe'] | 2,025 | arXiv.org | 0 | 59 | ['Computer Science'] |
2,505.21115 | Will It Still Be True Tomorrow? Multilingual Evergreen Question
Classification to Improve Trustworthy QA | ['Sergey Pletenev', 'Maria Marina', 'Nikolay Ivanov', 'Daria Galimzianova', 'Nikita Krayko', 'Mikhail Salnikov', 'Vasily Konovalov', 'Alexander Panchenko', 'Viktor Moskvoretskii'] | ['cs.CL'] | Large Language Models (LLMs) often hallucinate in question answering (QA)
tasks. A key yet underexplored factor contributing to this is the temporality
of questions -- whether they are evergreen (answers remain stable over time) or
mutable (answers change). In this work, we introduce EverGreenQA, the first
multilingual QA dataset with evergreen labels, supporting both evaluation and
training. Using EverGreenQA, we benchmark 12 modern LLMs to assess whether they
encode question temporality explicitly (via verbalized judgments) or implicitly
(via uncertainty signals). We also train EG-E5, a lightweight multilingual
classifier that achieves SoTA performance on this task. Finally, we demonstrate
the practical utility of evergreen classification across three applications:
improving self-knowledge estimation, filtering QA datasets, and explaining
GPT-4o retrieval behavior. | 2025-05-27T12:35:13Z | null | null | null | null | null | null | null | null | null | null |
2,505.21136 | SageAttention2++: A More Efficient Implementation of SageAttention2 | ['Jintao Zhang', 'Xiaoming Xu', 'Jia Wei', 'Haofeng Huang', 'Pengle Zhang', 'Chendong Xiang', 'Jun Zhu', 'Jianfei Chen'] | ['cs.LG', 'cs.AI', 'cs.AR', 'cs.CV'] | The efficiency of attention is critical because its time complexity grows
quadratically with sequence length. SageAttention2 addresses this by utilizing
quantization to accelerate matrix multiplications (Matmul) in attention. To
further accelerate SageAttention2, we propose to utilize the faster instruction
of FP8 Matmul accumulated in FP16. The instruction is 2x faster than the FP8
Matmul used in SageAttention2. Our experiments show that SageAttention2++
achieves a 3.9x speedup over FlashAttention while maintaining the same
attention accuracy as SageAttention2. This means SageAttention2++ effectively
accelerates various models, including those for language, image, and video
generation, with negligible end-to-end metrics loss. The code will be available
at https://github.com/thu-ml/SageAttention. | 2025-05-27T12:50:36Z | null | null | null | null | null | null | null | null | null | null |
2,505.21172 | TAT-R1: Terminology-Aware Translation with Reinforcement Learning and
Word Alignment | ['Zheng Li', 'Mao Zheng', 'Mingyang Song', 'Wenjie Yang'] | ['cs.CL'] | Recently, deep reasoning large language models(LLMs) like DeepSeek-R1 have
made significant progress in tasks such as mathematics and coding. Inspired by
this, several studies have employed reinforcement learning(RL) to enhance
models' deep reasoning capabilities and improve machine translation(MT)
quality. However, the terminology translation, an essential task in MT, remains
unexplored in deep reasoning LLMs. In this paper, we propose \textbf{TAT-R1}, a
terminology-aware translation model trained with reinforcement learning and
word alignment. Specifically, we first extract the keyword translation pairs
using a word alignment model. Then we carefully design three types of
rule-based alignment rewards with the extracted alignment relationships. With
those alignment rewards, the RL-trained translation model can learn to focus on
the accurate translation of key information, including terminology in the
source text. Experimental results show the effectiveness of TAT-R1. Our model
significantly improves terminology translation accuracy compared to the
baseline models while maintaining comparable performance on general translation
tasks. In addition, we conduct detailed ablation studies of the
DeepSeek-R1-like training paradigm for machine translation and reveal several
key findings. | 2025-05-27T13:26:02Z | null | null | null | null | null | null | null | null | null | null |
2,505.21178 | Walk Before You Run! Concise LLM Reasoning via Reinforcement Learning | ['Mingyang Song', 'Mao Zheng'] | ['cs.CL'] | As test-time scaling becomes a pivotal research frontier in Large Language
Models (LLMs) development, contemporary and advanced post-training
methodologies increasingly focus on extending the generation length of long
Chain-of-Thought (CoT) responses to enhance reasoning capabilities toward
DeepSeek R1-like performance. However, recent studies reveal a persistent
overthinking phenomenon in state-of-the-art reasoning models, manifesting as
excessive redundancy or repetitive thinking patterns in long CoT responses. To
address this issue, in this paper, we propose a simple yet effective two-stage
reinforcement learning framework for achieving concise reasoning in LLMs, named
ConciseR. Specifically, the first stage, using more training steps, aims to
incentivize the model's reasoning capabilities via Group Relative Policy
Optimization with clip-higher and dynamic sampling components (GRPO++), and the
second stage, using fewer training steps, explicitly enforces conciseness and
improves efficiency via Length-aware Group Relative Policy Optimization
(L-GRPO). Significantly, ConciseR only optimizes response length once all
rollouts of a sample are correct, following the "walk before you run"
principle. Extensive experimental results demonstrate that our ConciseR model,
which generates more concise CoT reasoning responses, outperforms recent
state-of-the-art reasoning models with zero RL paradigm across AIME 2024,
MATH-500, AMC 2023, Minerva, and Olympiad benchmarks. | 2025-05-27T13:29:51Z | Ongoing Work | null | null | null | null | null | null | null | null | null |
2,505.21325 | MagicTryOn: Harnessing Diffusion Transformer for Garment-Preserving
Video Virtual Try-on | ['Guangyuan Li', 'Siming Zheng', 'Hao Zhang', 'Jinwei Chen', 'Junsheng Luan', 'Binkai Ou', 'Lei Zhao', 'Bo Li', 'Peng-Tao Jiang'] | ['cs.CV'] | Video Virtual Try-On (VVT) aims to simulate the natural appearance of
garments across consecutive video frames, capturing their dynamic variations
and interactions with human body motion. However, current VVT methods still
face challenges in terms of spatiotemporal consistency and garment content
preservation. First, they use diffusion models based on the U-Net, which are
limited in their expressive capability and struggle to reconstruct complex
details. Second, they adopt a separative modeling approach for spatial and
temporal attention, which hinders the effective capture of structural
relationships and dynamic consistency across frames. Third, their expression of
garment details remains insufficient, affecting the realism and stability of
the overall synthesized results, especially during human motion. To address the
above challenges, we propose MagicTryOn, a video virtual try-on framework built
upon the large-scale video diffusion Transformer. We replace the U-Net
architecture with a diffusion Transformer and combine full self-attention to
jointly model the spatiotemporal consistency of videos. We design a
coarse-to-fine garment preservation strategy. The coarse strategy integrates
garment tokens during the embedding stage, while the fine strategy incorporates
multiple garment-based conditions, such as semantics, textures, and contour
lines during the denoising stage. Moreover, we introduce a mask-aware loss to
further optimize garment region fidelity. Extensive experiments on both image
and video try-on datasets demonstrate that our method outperforms existing SOTA
methods in comprehensive evaluations and generalizes to in-the-wild scenarios. | 2025-05-27T15:22:02Z | null | null | null | MagicTryOn: Harnessing Diffusion Transformer for Garment-Preserving Video Virtual Try-on | ['Guangyuan Li', 'Siming Zheng', 'Hao Zhang', 'Jinwei Chen', 'Junsheng Luan', 'Binkai Ou', 'Lei Zhao', 'Bo Li', 'Peng-Tao Jiang'] | 2,025 | arXiv.org | 0 | 58 | ['Computer Science'] |
2,505.21411 | Pangu Pro MoE: Mixture of Grouped Experts for Efficient Sparsity | ['Yehui Tang', 'Xiaosong Li', 'Fangcheng Liu', 'Wei Guo', 'Hang Zhou', 'Yaoyuan Wang', 'Kai Han', 'Xianzhi Yu', 'Jinpeng Li', 'Hui Zang', 'Fei Mi', 'Xiaojun Meng', 'Zhicheng Liu', 'Hanting Chen', 'Binfan Zheng', 'Can Chen', 'Youliang Yan', 'Ruiming Tang', 'Peifeng Qin', 'Xinghao Chen', 'Dacheng Tao', 'Yunhe Wang'] | ['cs.CL'] | The surgence of Mixture of Experts (MoE) in Large Language Models promises a
small price of execution cost for a much larger model parameter count and
learning capacity, because only a small fraction of parameters are activated
for each input token. However, it is commonly observed that some experts are
activated far more often than others, leading to system inefficiency when
running the experts on different devices in parallel. Therefore, we introduce
Mixture of Grouped Experts (MoGE), which groups the experts during selection
and balances the expert workload better than MoE in nature. It constrains
tokens to activate an equal number of experts within each predefined expert
group. When a model execution is distributed on multiple devices, this
architectural design ensures a balanced computational load across devices,
significantly enhancing throughput, particularly for the inference phase.
Further, we build Pangu Pro MoE on Ascend NPUs, a sparse model based on MoGE
with 72 billion total parameters, 16 billion of which are activated for each
token. The configuration of Pangu Pro MoE is optimized for Ascend 300I Duo and
800I A2 through extensive system simulation studies. Our experiments indicate
that MoGE indeed leads to better expert load balancing and more efficient
execution for both model training and inference on Ascend NPUs. The inference
performance of Pangu Pro MoE achieves 1148 tokens/s per card and can be further
improved to 1528 tokens/s per card by speculative acceleration, outperforming
comparable 32B and 72B Dense models. Furthermore, we achieve an excellent
cost-to-performance ratio for model inference on Ascend 300I Duo. Our studies
show that Ascend NPUs are capable of training Pangu Pro MoE with massive
parallelization to make it a leading model within the sub-100B total parameter
class, outperforming prominent open-source models like GLM-Z1-32B and
Qwen3-32B. | 2025-05-27T16:40:21Z | null | null | null | Pangu Pro MoE: Mixture of Grouped Experts for Efficient Sparsity | ['Yehui Tang', 'Xiaosong Li', 'Fangcheng Liu', 'Wei Guo', 'Hang Zhou', 'Yaoyuan Wang', 'Kai Han', 'Xian Yu', 'Jinpeng Li', 'Hui Zang', 'Fei Mi', 'Xiaojun Meng', 'Zhicheng Liu', 'Hanting Chen', 'Binfan Zheng', 'Can Chen', 'Youliang Yan', 'Ruiming Tang', 'Peifeng Qin', 'Xinghao Chen', 'Dacheng Tao', 'Yunhe Wang'] | 2,025 | arXiv.org | 0 | 49 | ['Computer Science'] |
2,505.21432 | Hume: Introducing System-2 Thinking in Visual-Language-Action Model | ['Haoming Song', 'Delin Qu', 'Yuanqi Yao', 'Qizhi Chen', 'Qi Lv', 'Yiwen Tang', 'Modi Shi', 'Guanghui Ren', 'Maoqing Yao', 'Bin Zhao', 'Dong Wang', 'Xuelong Li'] | ['cs.RO', 'cs.AI'] | Humans practice slow thinking before performing actual actions when handling
complex tasks in the physical world. This thinking paradigm, recently, has
achieved remarkable advancement in boosting Large Language Models (LLMs) to
solve complex tasks in digital domains. However, the potential of slow thinking
remains largely unexplored for robotic foundation models interacting with the
physical world. In this work, we propose Hume: a dual-system
Vision-Language-Action (VLA) model with value-guided System-2 thinking and
cascaded action denoising, exploring human-like thinking capabilities of
Vision-Language-Action models for dexterous robot control. System 2 of Hume
implements value-Guided thinking by extending a Vision-Language-Action Model
backbone with a novel value-query head to estimate the state-action value of
predicted actions. The value-guided thinking is conducted by repeat sampling
multiple action candidates and selecting one according to state-action value.
System 1 of Hume is a lightweight reactive visuomotor policy that takes System
2 selected action and performs cascaded action denoising for dexterous robot
control. At deployment time, System 2 performs value-guided thinking at a low
frequency while System 1 asynchronously receives the System 2 selected action
candidate and predicts fluid actions in real time. We show that Hume
outperforms the existing state-of-the-art Vision-Language-Action models across
multiple simulation benchmark and real-robot deployments. | 2025-05-27T17:04:21Z | null | null | null | null | null | null | null | null | null | null |
2,505.21496 | UI-Genie: A Self-Improving Approach for Iteratively Boosting MLLM-based
Mobile GUI Agents | ['Han Xiao', 'Guozhi Wang', 'Yuxiang Chai', 'Zimu Lu', 'Weifeng Lin', 'Hao He', 'Lue Fan', 'Liuyang Bian', 'Rui Hu', 'Liang Liu', 'Shuai Ren', 'Yafei Wen', 'Xiaoxin Chen', 'Aojun Zhou', 'Hongsheng Li'] | ['cs.CL', 'cs.CV', 'cs.LG'] | In this paper, we introduce UI-Genie, a self-improving framework addressing
two key challenges in GUI agents: verification of trajectory outcome is
challenging and high-quality training data are not scalable. These challenges
are addressed by a reward model and a self-improving pipeline, respectively.
The reward model, UI-Genie-RM, features an image-text interleaved architecture
that efficiently pro- cesses historical context and unifies action-level and
task-level rewards. To sup- port the training of UI-Genie-RM, we develop
deliberately-designed data genera- tion strategies including rule-based
verification, controlled trajectory corruption, and hard negative mining. To
address the second challenge, a self-improvement pipeline progressively expands
solvable complex GUI tasks by enhancing both the agent and reward models
through reward-guided exploration and outcome verification in dynamic
environments. For training the model, we generate UI- Genie-RM-517k and
UI-Genie-Agent-16k, establishing the first reward-specific dataset for GUI
agents while demonstrating high-quality synthetic trajectory gen- eration
without manual annotation. Experimental results show that UI-Genie achieves
state-of-the-art performance across multiple GUI agent benchmarks with three
generations of data-model self-improvement. We open-source our complete
framework implementation and generated datasets to facilitate further research
in https://github.com/Euphoria16/UI-Genie. | 2025-05-27T17:58:06Z | https://github.com/Euphoria16/UI-Genie | null | null | null | null | null | null | null | null | null |
2,505.216 | R2R: Efficiently Navigating Divergent Reasoning Paths with Small-Large
Model Token Routing | ['Tianyu Fu', 'Yi Ge', 'Yichen You', 'Enshu Liu', 'Zhihang Yuan', 'Guohao Dai', 'Shengen Yan', 'Huazhong Yang', 'Yu Wang'] | ['cs.CL', 'cs.AI', 'cs.LG', 'cs.PF', 'I.2.7'] | Large Language Models (LLMs) achieve impressive reasoning capabilities at the
cost of substantial inference overhead, posing substantial deployment
challenges. Although distilled Small Language Models (SLMs) significantly
enhance efficiency, their performance suffers as they fail to follow LLMs'
reasoning paths. Luckily, we reveal that only a small fraction of tokens
genuinely diverge reasoning paths between LLMs and SLMs. Most generated tokens
are either identical or exhibit neutral differences, such as minor variations
in abbreviations or expressions. Leveraging this insight, we introduce **Roads
to Rome (R2R)**, a neural token routing method that selectively utilizes LLMs
only for these critical, path-divergent tokens, while leaving the majority of
token generation to the SLM. We also develop an automatic data generation
pipeline that identifies divergent tokens and generates token-level routing
labels to train the lightweight router. We apply R2R to combine R1-1.5B and
R1-32B models from the DeepSeek family, and evaluate on challenging math,
coding, and QA benchmarks. With an average activated parameter size of 5.6B,
R2R surpasses the average accuracy of R1-7B by 1.6x, outperforming even the
R1-14B model. Compared to R1-32B, it delivers a 2.8x wall-clock speedup with
comparable performance, advancing the Pareto frontier of test-time scaling
efficiency. Our code is available at https://github.com/thu-nics/R2R. | 2025-05-27T16:57:20Z | null | null | null | null | null | null | null | null | null | null |
2,505.21668 | R1-Code-Interpreter: Training LLMs to Reason with Code via Supervised
and Reinforcement Learning | ['Yongchao Chen', 'Yueying Liu', 'Junwei Zhou', 'Yilun Hao', 'Jingquan Wang', 'Yang Zhang', 'Chuchu Fan'] | ['cs.AI', 'cs.CL', 'cs.SC'] | Despite advances in reasoning and planning of R1-like models, Large Language
Models (LLMs) still struggle with tasks requiring precise computation, symbolic
manipulation, optimization, and algorithmic reasoning, in which textual
reasoning lacks the rigor of code execution. A key challenge is enabling LLMs
to decide when to use textual reasoning versus code generation. While OpenAI
trains models to invoke a Code Interpreter as needed, public research lacks
guidance on aligning pre-trained LLMs to effectively leverage code and
generalize across diverse tasks. We present R1-Code-Interpreter, an extension
of a text-only LLM trained via multi-turn supervised fine-tuning (SFT) and
reinforcement learning (RL) to autonomously generate multiple code queries
during step-by-step reasoning. We curate 144 reasoning and planning tasks (107
for training, 37 for testing), each with over 200 diverse questions. We
fine-tune Qwen-2.5 models (3B/7B/14B) using various SFT and RL strategies,
investigating different answer formats, reasoning vs. non-reasoning models,
cold vs. warm starts, GRPO vs. PPO, and masked vs. unmasked code outputs.
Unlike prior RL work on narrow domains, we find that Code Interpreter training
is significantly harder due to high task diversity and expensive code
execution, highlighting the critical role of the SFT stage. Our final model,
R1-CI-14B, improves average accuracy on the 37 test tasks from 44.0\% to
64.1\%, outperforming GPT-4o (text-only: 58.6\%) and approaching GPT-4o with
Code Interpreter (70.9\%), with the emergent self-checking behavior via code
generation. Datasets, Codes, and Models are available at
https://github.com/yongchao98/R1-Code-Interpreter and
https://huggingface.co/yongchao98. | 2025-05-27T18:47:33Z | 33 pages, 8 figures | null | null | R1-Code-Interpreter: Training LLMs to Reason with Code via Supervised and Reinforcement Learning | ['Yongchao Chen', 'Yueying Liu', 'Junwei Zhou', 'Yilun Hao', 'Jingquan Wang', 'Yang Zhang', 'Chuchu Fan'] | 2,025 | arXiv.org | 0 | 49 | ['Computer Science'] |
2,505.21847 | RePaViT: Scalable Vision Transformer Acceleration via Structural
Reparameterization on Feedforward Network Layers | ['Xuwei Xu', 'Yang Li', 'Yudong Chen', 'Jiajun Liu', 'Sen Wang'] | ['cs.CV', 'cs.AI'] | We reveal that feedforward network (FFN) layers, rather than attention
layers, are the primary contributors to Vision Transformer (ViT) inference
latency, with their impact signifying as model size increases. This finding
highlights a critical opportunity for optimizing the efficiency of large-scale
ViTs by focusing on FFN layers. In this work, we propose a novel channel idle
mechanism that facilitates post-training structural reparameterization for
efficient FFN layers during testing. Specifically, a set of feature channels
remains idle and bypasses the nonlinear activation function in each FFN layer,
thereby forming a linear pathway that enables structural reparameterization
during inference. This mechanism results in a family of ReParameterizable
Vision Transformers (RePaViTs), which achieve remarkable latency reductions
with acceptable sacrifices (sometimes gains) in accuracy across various ViTs.
The benefits of our method scale consistently with model sizes, demonstrating
greater speed improvements and progressively narrowing accuracy gaps or even
higher accuracies on larger models. In particular, RePa-ViT-Large and
RePa-ViT-Huge enjoy 66.8% and 68.7% speed-ups with +1.7% and +1.1% higher top-1
accuracies under the same training strategy, respectively. RePaViT is the first
to employ structural reparameterization on FFN layers to expedite ViTs to our
best knowledge, and we believe that it represents an auspicious direction for
efficient ViTs. Source code is available at
https://github.com/Ackesnal/RePaViT. | 2025-05-28T00:27:18Z | Accepted to ICML2025 | null | null | RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers | ['Xuwei Xu', 'Yang Li', 'Yudong Chen', 'Jiajun Liu', 'Sen Wang'] | 2,025 | arXiv.org | 0 | 73 | ['Computer Science'] |
2,505.21925 | RenderFormer: Transformer-based Neural Rendering of Triangle Meshes with
Global Illumination | ['Chong Zeng', 'Yue Dong', 'Pieter Peers', 'Hongzhi Wu', 'Xin Tong'] | ['cs.GR', 'cs.CV', 'cs.LG'] | We present RenderFormer, a neural rendering pipeline that directly renders an
image from a triangle-based representation of a scene with full global
illumination effects and that does not require per-scene training or
fine-tuning. Instead of taking a physics-centric approach to rendering, we
formulate rendering as a sequence-to-sequence transformation where a sequence
of tokens representing triangles with reflectance properties is converted to a
sequence of output tokens representing small patches of pixels. RenderFormer
follows a two stage pipeline: a view-independent stage that models
triangle-to-triangle light transport, and a view-dependent stage that
transforms a token representing a bundle of rays to the corresponding pixel
values guided by the triangle-sequence from the view-independent stage. Both
stages are based on the transformer architecture and are learned with minimal
prior constraints. We demonstrate and evaluate RenderFormer on scenes with
varying complexity in shape and light transport. | 2025-05-28T03:20:46Z | Accepted to SIGGRAPH 2025. Project page:
https://microsoft.github.io/renderformer | ACM SIGGRAPH 2025 Conference Papers | 10.1145/3721238.3730595 | null | null | null | null | null | null | null |
2,505.2196 | One-Way Ticket:Time-Independent Unified Encoder for Distilling
Text-to-Image Diffusion Models | ['Senmao Li', 'Lei Wang', 'Kai Wang', 'Tao Liu', 'Jiehang Xie', 'Joost van de Weijer', 'Fahad Shahbaz Khan', 'Shiqi Yang', 'Yaxing Wang', 'Jian Yang'] | ['cs.CV'] | Text-to-Image (T2I) diffusion models have made remarkable advancements in
generative modeling; however, they face a trade-off between inference speed and
image quality, posing challenges for efficient deployment. Existing distilled
T2I models can generate high-fidelity images with fewer sampling steps, but
often struggle with diversity and quality, especially in one-step models. From
our analysis, we observe redundant computations in the UNet encoders. Our
findings suggest that, for T2I diffusion models, decoders are more adept at
capturing richer and more explicit semantic information, while encoders can be
effectively shared across decoders from diverse time steps. Based on these
observations, we introduce the first Time-independent Unified Encoder TiUE for
the student model UNet architecture, which is a loop-free image generation
approach for distilling T2I diffusion models. Using a one-pass scheme, TiUE
shares encoder features across multiple decoder time steps, enabling parallel
sampling and significantly reducing inference time complexity. In addition, we
incorporate a KL divergence term to regularize noise prediction, which enhances
the perceptual realism and diversity of the generated images. Experimental
results demonstrate that TiUE outperforms state-of-the-art methods, including
LCM, SD-Turbo, and SwiftBrushv2, producing more diverse and realistic results
while maintaining the computational efficiency. | 2025-05-28T04:23:22Z | Accepted at CVPR2025, Code: https://github.com/sen-mao/Loopfree | null | null | null | null | null | null | null | null | null |
2,505.22019 | VRAG-RL: Empower Vision-Perception-Based RAG for Visually Rich
Information Understanding via Iterative Reasoning with Reinforcement Learning | ['Qiuchen Wang', 'Ruixue Ding', 'Yu Zeng', 'Zehui Chen', 'Lin Chen', 'Shihang Wang', 'Pengjun Xie', 'Fei Huang', 'Feng Zhao'] | ['cs.CL', 'cs.AI', 'cs.CV'] | Effectively retrieving, reasoning and understanding visually rich information
remains a challenge for RAG methods. Traditional text-based methods cannot
handle visual-related information. On the other hand, current vision-based RAG
approaches are often limited by fixed pipelines and frequently struggle to
reason effectively due to the insufficient activation of the fundamental
capabilities of models. As RL has been proven to be beneficial for model
reasoning, we introduce VRAG-RL, a novel RL framework tailored for complex
reasoning across visually rich information. With this framework, VLMs interact
with search engines, autonomously sampling single-turn or multi-turn reasoning
trajectories with the help of visual perception tokens and undergoing continual
optimization based on these samples. Our approach highlights key limitations of
RL in RAG domains: (i) Prior Multi-modal RAG approaches tend to merely
incorporate images into the context, leading to insufficient reasoning token
allocation and neglecting visual-specific perception; and (ii) When models
interact with search engines, their queries often fail to retrieve relevant
information due to the inability to articulate requirements, thereby leading to
suboptimal performance. To address these challenges, we define an action space
tailored for visually rich inputs, with actions including cropping and scaling,
allowing the model to gather information from a coarse-to-fine perspective.
Furthermore, to bridge the gap between users' original inquiries and the
retriever, we employ a simple yet effective reward that integrates query
rewriting and retrieval performance with a model-based reward. Our VRAG-RL
optimizes VLMs for RAG tasks using specially designed RL strategies, aligning
the model with real-world applications. The code is available at
https://github.com/Alibaba-NLP/VRAG. | 2025-05-28T06:30:51Z | null | null | null | null | null | null | null | null | null | null |
2,505.22232 | Judging Quality Across Languages: A Multilingual Approach to Pretraining
Data Filtering with Language Models | ['Mehdi Ali', 'Manuel Brack', 'Max Lübbering', 'Elias Wendt', 'Abbas Goher Khan', 'Richard Rutmann', 'Alex Jude', 'Maurice Kraus', 'Alexander Arno Weber', 'David Kaczér', 'Florian Mai', 'Lucie Flek', 'Rafet Sifa', 'Nicolas Flores-Herr', 'Joachim Köhler', 'Patrick Schramowski', 'Michael Fromm', 'Kristian Kersting'] | ['cs.CL', 'cs.AI', 'cs.LG'] | High-quality multilingual training data is essential for effectively
pretraining large language models (LLMs). Yet, the availability of suitable
open-source multilingual datasets remains limited. Existing state-of-the-art
datasets mostly rely on heuristic filtering methods, restricting both their
cross-lingual transferability and scalability. Here, we introduce JQL, a
systematic approach that efficiently curates diverse and high-quality
multilingual data at scale while significantly reducing computational demands.
JQL distills LLMs' annotation capabilities into lightweight annotators based on
pretrained multilingual embeddings. These models exhibit robust multilingual
and cross-lingual performance, even for languages and scripts unseen during
training. Evaluated empirically across 35 languages, the resulting annotation
pipeline substantially outperforms current heuristic filtering methods like
Fineweb2. JQL notably enhances downstream model training quality and increases
data retention rates. Our research provides practical insights and valuable
resources for multilingual data curation, raising the standards of multilingual
dataset development. | 2025-05-28T11:06:54Z | Project page available at https://huggingface.co/spaces/Jackal-AI/JQL | null | null | Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models | ['Mehdi Ali', 'Manuel Brack', 'Max Lubbering', 'Elias Wendt', 'Abbas Goher Khan', 'Richard Rutmann', 'Alex Jude', 'Maurice Kraus', 'Alexander Arno Weber', 'Felix Stollenwerk', "David Kacz'er", 'Florian Mai', 'Lucie Flek', 'R. Sifa', 'Nicolas Flores-Herr', 'Joachim Kohler', 'P. Schramowski', 'Michael Fromm', 'K. Kersting'] | 2,025 | arXiv.org | 0 | 0 | ['Computer Science'] |
2,505.22312 | Skywork Open Reasoner 1 Technical Report | ['Jujie He', 'Jiacai Liu', 'Chris Yuhao Liu', 'Rui Yan', 'Chaojie Wang', 'Peng Cheng', 'Xiaoyu Zhang', 'Fuxiang Zhang', 'Jiacheng Xu', 'Wei Shen', 'Siyuan Li', 'Liang Zeng', 'Tianwen Wei', 'Cheng Cheng', 'Bo An', 'Yang Liu', 'Yahui Zhou'] | ['cs.LG', 'cs.AI', 'cs.CL'] | The success of DeepSeek-R1 underscores the significant role of reinforcement
learning (RL) in enhancing the reasoning capabilities of large language models
(LLMs). In this work, we present Skywork-OR1, an effective and scalable RL
implementation for long Chain-of-Thought (CoT) models. Building on the
DeepSeek-R1-Distill model series, our RL approach achieves notable performance
gains, increasing average accuracy across AIME24, AIME25, and LiveCodeBench
from 57.8% to 72.8% (+15.0%) for the 32B model and from 43.6% to 57.5% (+13.9%)
for the 7B model. Our Skywork-OR1-32B model surpasses both DeepSeek-R1 and
Qwen3-32B on the AIME24 and AIME25 benchmarks, while achieving comparable
results on LiveCodeBench. The Skywork-OR1-7B and Skywork-OR1-Math-7B models
demonstrate competitive reasoning capabilities among models of similar size. We
perform comprehensive ablation studies on the core components of our training
pipeline to validate their effectiveness. Additionally, we thoroughly
investigate the phenomenon of entropy collapse, identify key factors affecting
entropy dynamics, and demonstrate that mitigating premature entropy collapse is
critical for improved test performance. To support community research, we fully
open-source our model weights, training code, and training datasets. | 2025-05-28T12:56:04Z | null | null | null | Skywork Open Reasoner 1 Technical Report | ['Jujie He', 'Jiacai Liu', 'Chris Liu', 'Rui Yan', 'Chaojie Wang', 'Peng Cheng', 'Xiaoyu Zhang', 'Fuxiang Zhang', 'Jiacheng Xu', 'Wei Shen', 'Siyuan Li', 'Liang Zeng', 'Tianwen Wei', 'Cheng Cheng', 'Bo An', 'Yang Liu', 'Yahui Zhou'] | 2,025 | arXiv.org | 7 | 0 | ['Computer Science'] |
2,505.22334 | Advancing Multimodal Reasoning via Reinforcement Learning with Cold
Start | ['Lai Wei', 'Yuting Li', 'Kaipeng Zheng', 'Chen Wang', 'Yue Wang', 'Linghe Kong', 'Lichao Sun', 'Weiran Huang'] | ['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG'] | Recent advancements in large language models (LLMs) have demonstrated
impressive chain-of-thought reasoning capabilities, with reinforcement learning
(RL) playing a crucial role in this progress. While "aha moment"
patterns--where models exhibit self-correction through reflection--are often
attributed to emergent properties from RL, we first demonstrate that these
patterns exist in multimodal LLMs (MLLMs) prior to RL training but may not
necessarily correlate with improved reasoning performance. Building on these
insights, we present a comprehensive study on enhancing multimodal reasoning
through a two-stage approach: (1) supervised fine-tuning (SFT) as a cold start
with structured chain-of-thought reasoning patterns, followed by (2)
reinforcement learning via GRPO to further refine these capabilities. Our
extensive experiments show that this combined approach consistently outperforms
both SFT-only and RL-only methods across challenging multimodal reasoning
benchmarks. The resulting models achieve state-of-the-art performance among
open-source MLLMs at both 3B and 7B scales, with our 7B model showing
substantial improvements over base models (e.g., 66.3 %$\rightarrow$73.4 % on
MathVista, 62.9 %$\rightarrow$70.4 % on We-Math) and our 3B model achieving
performance competitive with several 7B models. Overall, this work provides
practical guidance for building advanced multimodal reasoning models. Our code
is available at https://github.com/waltonfuture/RL-with-Cold-Start. | 2025-05-28T13:21:38Z | null | null | null | null | null | null | null | null | null | null |
2,505.22425 | Scaling Reasoning without Attention | ['Xueliang Zhao', 'Wei Wu', 'Lingpeng Kong'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Large language models (LLMs) have made significant advances in complex
reasoning tasks, yet they remain bottlenecked by two core challenges:
architectural inefficiency due to reliance on Transformers, and a lack of
structured fine-tuning for high-difficulty domains. We introduce \ourmodel, an
attention-free language model that addresses both issues through architectural
and data-centric innovations. Built on the state space dual (SSD) layers of
Mamba-2, our model eliminates the need for self-attention and key-value
caching, enabling fixed-memory, constant-time inference. To train it for
complex reasoning, we propose a two-phase curriculum fine-tuning strategy based
on the \textsc{PromptCoT} synthesis paradigm, which generates pedagogically
structured problems via abstract concept selection and rationale-guided
generation. On benchmark evaluations, \ourmodel-7B outperforms strong
Transformer and hybrid models of comparable scale, and even surpasses the much
larger Gemma3-27B by 2.6\% on AIME 24, 0.6\% on AIME 25, and 3.0\% on
Livecodebench. These results highlight the potential of state space models as
efficient and scalable alternatives to attention-based architectures for
high-capacity reasoning. | 2025-05-28T14:52:15Z | preprint | null | null | Scaling Reasoning without Attention | ['Xueliang Zhao', 'Wei Wu', 'Lingpeng Kong'] | 2,025 | arXiv.org | 0 | 41 | ['Computer Science'] |
2,505.22453 | Unsupervised Post-Training for Multi-Modal LLM Reasoning via GRPO | ['Lai Wei', 'Yuting Li', 'Chen Wang', 'Yue Wang', 'Linghe Kong', 'Weiran Huang', 'Lichao Sun'] | ['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG'] | Improving Multi-modal Large Language Models (MLLMs) in the post-training
stage typically relies on supervised fine-tuning (SFT) or reinforcement
learning (RL). However, these supervised methods require expensive and manually
annotated multi-modal data--an ultimately unsustainable resource. While recent
efforts have explored unsupervised post-training, their methods are complex and
difficult to iterate. In this work, we are the first to investigate the use of
GRPO, a stable and scalable online RL algorithm, for enabling continual
self-improvement without any external supervision. We propose MM-UPT, a simple
yet effective framework for unsupervised post-training of MLLMs. MM-UPT builds
upon GRPO, replacing traditional reward signals with a self-rewarding mechanism
based on majority voting over multiple sampled responses. Our experiments
demonstrate that MM-UPT significantly improves the reasoning ability of
Qwen2.5-VL-7B (e.g., 66.3 %$\rightarrow$72.9 % on MathVista, 62.9
%$\rightarrow$68.7 % on We-Math), using standard dataset without ground truth
labels. MM-UPT also outperforms prior unsupervised baselines and even
approaches the results of supervised GRPO. Furthermore, we show that
incorporating synthetic questions, generated solely by MLLM itself, can boost
performance as well, highlighting a promising approach for scalable
self-improvement. Overall, MM-UPT offers a new paradigm for continual,
autonomous enhancement of MLLMs in the absence of external supervision. Our
code is available at https://github.com/waltonfuture/MM-UPT. | 2025-05-28T15:11:16Z | null | null | null | null | null | null | null | null | null | null |
2,505.22569 | ImageReFL: Balancing Quality and Diversity in Human-Aligned Diffusion
Models | ['Dmitrii Sorokin', 'Maksim Nakhodnov', 'Andrey Kuznetsov', 'Aibek Alanov'] | ['cs.CV'] | Recent advances in diffusion models have led to impressive image generation
capabilities, but aligning these models with human preferences remains
challenging. Reward-based fine-tuning using models trained on human feedback
improves alignment but often harms diversity, producing less varied outputs. In
this work, we address this trade-off with two contributions. First, we
introduce \textit{combined generation}, a novel sampling strategy that applies
a reward-tuned diffusion model only in the later stages of the generation
process, while preserving the base model for earlier steps. This approach
mitigates early-stage overfitting and helps retain global structure and
diversity. Second, we propose \textit{ImageReFL}, a fine-tuning method that
improves image diversity with minimal loss in quality by training on real
images and incorporating multiple regularizers, including diffusion and ReFL
losses. Our approach outperforms conventional reward tuning methods on standard
quality and diversity metrics. A user study further confirms that our method
better balances human preference alignment and visual diversity. The source
code can be found at https://github.com/ControlGenAI/ImageReFL . | 2025-05-28T16:45:07Z | The source code can be found at
https://github.com/ControlGenAI/ImageReFL | null | null | null | null | null | null | null | null | null |
2,505.22636 | ObjectClear: Complete Object Removal via Object-Effect Attention | ['Jixin Zhao', 'Shangchen Zhou', 'Zhouxia Wang', 'Peiqing Yang', 'Chen Change Loy'] | ['cs.CV'] | Object removal requires eliminating not only the target object but also its
effects, such as shadows and reflections. However, diffusion-based inpainting
methods often produce artifacts, hallucinate content, alter background, and
struggle to remove object effects accurately. To address this challenge, we
introduce a new dataset for OBject-Effect Removal, named OBER, which provides
paired images with and without object effects, along with precise masks for
both objects and their associated visual artifacts. The dataset comprises
high-quality captured and simulated data, covering diverse object categories
and complex multi-object scenes. Building on OBER, we propose a novel
framework, ObjectClear, which incorporates an object-effect attention mechanism
to guide the model toward the foreground removal regions by learning attention
masks, effectively decoupling foreground removal from background
reconstruction. Furthermore, the predicted attention map enables an
attention-guided fusion strategy during inference, greatly preserving
background details. Extensive experiments demonstrate that ObjectClear
outperforms existing methods, achieving improved object-effect removal quality
and background fidelity, especially in complex scenarios. | 2025-05-28T17:51:17Z | Project page: https://zjx0101.github.io/projects/ObjectClear/ | null | null | null | null | null | null | null | null | null |
2,505.22647 | Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation | ['Zhe Kong', 'Feng Gao', 'Yong Zhang', 'Zhuoliang Kang', 'Xiaoming Wei', 'Xunliang Cai', 'Guanying Chen', 'Wenhan Luo'] | ['cs.CV'] | Audio-driven human animation methods, such as talking head and talking body
generation, have made remarkable progress in generating synchronized facial
movements and appealing visual quality videos. However, existing methods
primarily focus on single human animation and struggle with multi-stream audio
inputs, facing incorrect binding problems between audio and persons.
Additionally, they exhibit limitations in instruction-following capabilities.
To solve this problem, in this paper, we propose a novel task: Multi-Person
Conversational Video Generation, and introduce a new framework, MultiTalk, to
address the challenges during multi-person generation. Specifically, for audio
injection, we investigate several schemes and propose the Label Rotary Position
Embedding (L-RoPE) method to resolve the audio and person binding problem.
Furthermore, during training, we observe that partial parameter training and
multi-task training are crucial for preserving the instruction-following
ability of the base model. MultiTalk achieves superior performance compared to
other methods on several datasets, including talking head, talking body, and
multi-person datasets, demonstrating the powerful generation capabilities of
our approach. | 2025-05-28T17:57:06Z | Homepage: https://meigen-ai.github.io/multi-talk Github:
https://github.com/MeiGen-AI/MultiTalk | null | null | null | null | null | null | null | null | null |
2,505.22648 | WebDancer: Towards Autonomous Information Seeking Agency | ['Jialong Wu', 'Baixuan Li', 'Runnan Fang', 'Wenbiao Yin', 'Liwen Zhang', 'Zhengwei Tao', 'Dingchu Zhang', 'Zekun Xi', 'Gang Fu', 'Yong Jiang', 'Pengjun Xie', 'Fei Huang', 'Jingren Zhou'] | ['cs.CL'] | Addressing intricate real-world problems necessitates in-depth information
seeking and multi-step reasoning. Recent progress in agentic systems,
exemplified by Deep Research, underscores the potential for autonomous
multi-step research. In this work, we present a cohesive paradigm for building
end-to-end agentic information seeking agents from a data-centric and
training-stage perspective. Our approach consists of four key stages: (1)
browsing data construction, (2) trajectories sampling, (3) supervised
fine-tuning for effective cold start, and (4) reinforcement learning for
enhanced generalisation. We instantiate this framework in a web agent based on
the ReAct, WebDancer. Empirical evaluations on the challenging information
seeking benchmarks, GAIA and WebWalkerQA, demonstrate the strong performance of
WebDancer, achieving considerable results and highlighting the efficacy of our
training paradigm. Further analysis of agent training provides valuable
insights and actionable, systematic pathways for developing more capable
agentic models. The codes and demo will be released in
https://github.com/Alibaba-NLP/WebAgent. | 2025-05-28T17:57:07Z | null | null | null | null | null | null | null | null | null | null |
2,505.22651 | Sherlock: Self-Correcting Reasoning in Vision-Language Models | ['Yi Ding', 'Ruqi Zhang'] | ['cs.CV', 'cs.CL', 'cs.LG'] | Reasoning Vision-Language Models (VLMs) have shown promising performance on
complex multimodal tasks. However, they still face significant challenges: they
are highly sensitive to reasoning errors, require large volumes of annotated
data or accurate verifiers, and struggle to generalize beyond specific domains.
To address these limitations, we explore self-correction as a strategy to
enhance reasoning VLMs. We first conduct an in-depth analysis of reasoning
VLMs' self-correction abilities and identify key gaps. Based on our findings,
we introduce Sherlock, a self-correction and self-improvement training
framework. Sherlock introduces a trajectory-level self-correction objective, a
preference data construction method based on visual perturbation, and a dynamic
$\beta$ for preference tuning. Once the model acquires self-correction
capabilities using only 20k randomly sampled annotated data, it continues to
self-improve without external supervision. Built on the Llama3.2-Vision-11B
model, Sherlock achieves remarkable results across eight benchmarks, reaching
an average accuracy of 64.1 with direct generation and 65.4 after
self-correction. It outperforms LLaVA-CoT (63.2), Mulberry (63.9), and
LlamaV-o1 (63.4) while using less than 20% of the annotated data. | 2025-05-28T17:58:03Z | 27 pages | null | null | null | null | null | null | null | null | null |
2,505.22653 | The Climb Carves Wisdom Deeper Than the Summit: On the Noisy Rewards in
Learning to Reason | ['Ang Lv', 'Ruobing Xie', 'Xingwu Sun', 'Zhanhui Kang', 'Rui Yan'] | ['cs.CL'] | Recent studies on post-training large language models (LLMs) for reasoning
through reinforcement learning (RL) typically focus on tasks that can be
accurately verified and rewarded, such as solving math problems. In contrast,
our research investigates the impact of reward noise, a more practical
consideration for real-world scenarios involving the post-training of LLMs
using reward models. We found that LLMs demonstrate strong robustness to
substantial reward noise. For example, manually flipping 40% of the reward
function's outputs in math tasks still allows a Qwen-2.5-7B model to achieve
rapid convergence, improving its performance on math tasks from 5% to 72%,
compared to the 75% accuracy achieved by a model trained with noiseless
rewards. Surprisingly, by only rewarding the appearance of key reasoning
phrases (namely reasoning pattern reward, RPR), such as ``first, I need
to''-without verifying the correctness of answers, the model achieved peak
downstream performance (over 70% accuracy for Qwen-2.5-7B) comparable to models
trained with strict correctness verification and accurate rewards. Recognizing
the importance of the reasoning process over the final results, we combined RPR
with noisy reward models. RPR helped calibrate the noisy reward models,
mitigating potential false negatives and enhancing the LLM's performance on
open-ended tasks. These findings suggest the importance of improving models'
foundational abilities during the pre-training phase while providing insights
for advancing post-training techniques. Our code and scripts are available at
https://github.com/trestad/Noisy-Rewards-in-Learning-to-Reason. | 2025-05-28T17:59:03Z | Preprint | null | null | The Climb Carves Wisdom Deeper Than the Summit: On the Noisy Rewards in Learning to Reason | ['Ang Lv', 'Ruobing Xie', 'Xingwu Sun', 'Zhanhui Kang', 'Rui Yan'] | 2,025 | arXiv.org | 0 | 42 | ['Computer Science'] |
2,505.22662 | AutoL2S: Auto Long-Short Reasoning for Efficient Large Language Models | ['Feng Luo', 'Yu-Neng Chuang', 'Guanchu Wang', 'Hoang Anh Duy Le', 'Shaochen Zhong', 'Hongyi Liu', 'Jiayi Yuan', 'Yang Sui', 'Vladimir Braverman', 'Vipin Chaudhary', 'Xia Hu'] | ['cs.CL', 'cs.LG'] | The reasoning-capable large language models (LLMs) demonstrate strong
performance on complex reasoning tasks but often suffer from overthinking,
generating unnecessarily long chain-of-thought (CoT) reasoning paths for easy
reasoning questions, thereby increasing inference cost and latency. Recent
approaches attempt to address this challenge by manually deciding when to apply
long or short reasoning. However, they lack the flexibility to adapt CoT length
dynamically based on question complexity. In this paper, we propose Auto
Long-Short Reasoning (AutoL2S), a dynamic and model-agnostic framework that
enables LLMs to dynamically compress their generated reasoning path based on
the complexity of the reasoning question. AutoL2S enables a learned paradigm,
in which LLMs themselves can decide when longer reasoning is necessary and when
shorter reasoning suffices, by training on data annotated with our proposed
method, which includes both long and short CoT paths and a special <EASY>
token. We then use <EASY> token to indicate when the model can skip generating
lengthy CoT reasoning. This proposed annotation strategy can enhance the LLMs'
ability to generate shorter CoT reasoning paths with improved quality after
training. Extensive evaluation results show that AutoL2S reduces the length of
reasoning generation by up to 57% without compromising performance,
demonstrating the effectiveness of AutoL2S for scalable and efficient LLM
reasoning. | 2025-05-28T17:59:53Z | null | null | null | null | null | null | null | null | null | null |
2,505.22664 | Zero-Shot Vision Encoder Grafting via LLM Surrogates | ['Kaiyu Yue', 'Vasu Singla', 'Menglin Jia', 'John Kirchenbauer', 'Rifaa Qadri', 'Zikui Cai', 'Abhinav Bhatele', 'Furong Huang', 'Tom Goldstein'] | ['cs.CV'] | Vision language models (VLMs) typically pair a modestly sized vision encoder
with a large language model (LLM), e.g., Llama-70B, making the decoder the
primary computational burden during training. To reduce costs, a potential
promising strategy is to first train the vision encoder using a small language
model before transferring it to the large one. We construct small "surrogate
models" that share the same embedding space and representation language as the
large target LLM by directly inheriting its shallow layers. Vision encoders
trained on the surrogate can then be directly transferred to the larger model,
a process we call zero-shot grafting -- when plugged directly into the
full-size target LLM, the grafted pair surpasses the encoder-surrogate pair
and, on some benchmarks, even performs on par with full decoder training with
the target LLM. Furthermore, our surrogate training approach reduces overall
VLM training costs by ~45% when using Llama-70B as the decoder. | 2025-05-28T17:59:59Z | 15 pages | null | null | Zero-Shot Vision Encoder Grafting via LLM Surrogates | ['Kaiyu Yue', 'Vasu Singla', 'Menglin Jia', 'John Kirchenbauer', 'Rifaa Qadri', 'Zikui Cai', 'A. Bhatele', 'Furong Huang', 'Tom Goldstein'] | 2,025 | arXiv.org | 0 | 50 | ['Computer Science'] |
2,505.22705 | HiDream-I1: A High-Efficient Image Generative Foundation Model with
Sparse Diffusion Transformer | ['Qi Cai', 'Jingwen Chen', 'Yang Chen', 'Yehao Li', 'Fuchen Long', 'Yingwei Pan', 'Zhaofan Qiu', 'Yiheng Zhang', 'Fengbin Gao', 'Peihan Xu', 'Yimeng Wang', 'Kai Yu', 'Wenxuan Chen', 'Ziwei Feng', 'Zijian Gong', 'Jianzhuang Pan', 'Yi Peng', 'Rui Tian', 'Siyu Wang', 'Bo Zhao', 'Ting Yao', 'Tao Mei'] | ['cs.CV', 'cs.MM'] | Recent advancements in image generative foundation models have prioritized
quality improvements but often at the cost of increased computational
complexity and inference latency. To address this critical trade-off, we
introduce HiDream-I1, a new open-source image generative foundation model with
17B parameters that achieves state-of-the-art image generation quality within
seconds. HiDream-I1 is constructed with a new sparse Diffusion Transformer
(DiT) structure. Specifically, it starts with a dual-stream decoupled design of
sparse DiT with dynamic Mixture-of-Experts (MoE) architecture, in which two
separate encoders are first involved to independently process image and text
tokens. Then, a single-stream sparse DiT structure with dynamic MoE
architecture is adopted to trigger multi-model interaction for image generation
in a cost-efficient manner. To support flexiable accessibility with varied
model capabilities, we provide HiDream-I1 in three variants: HiDream-I1-Full,
HiDream-I1-Dev, and HiDream-I1-Fast.
Furthermore, we go beyond the typical text-to-image generation and remould
HiDream-I1 with additional image conditions to perform precise,
instruction-based editing on given images, yielding a new instruction-based
image editing model namely HiDream-E1. Ultimately, by integrating text-to-image
generation and instruction-based image editing, HiDream-I1 evolves to form a
comprehensive image agent (HiDream-A1) capable of fully interactive image
creation and refinement. To accelerate multi-modal AIGC research, we have
open-sourced all the codes and model weights of HiDream-I1-Full,
HiDream-I1-Dev, HiDream-I1-Fast, HiDream-E1 through our project websites:
https://github.com/HiDream-ai/HiDream-I1 and
https://github.com/HiDream-ai/HiDream-E1. All features can be directly
experienced via https://vivago.ai/studio. | 2025-05-28T17:59:15Z | Source codes and models are available at
https://github.com/HiDream-ai/HiDream-I1 and
https://github.com/HiDream-ai/HiDream-E1 | null | null | null | null | null | null | null | null | null |
2,505.22759 | FAMA: The First Large-Scale Open-Science Speech Foundation Model for
English and Italian | ['Sara Papi', 'Marco Gaido', 'Luisa Bentivogli', 'Alessio Brutti', 'Mauro Cettolo', 'Roberto Gretter', 'Marco Matassoni', 'Mohamed Nabih', 'Matteo Negri'] | ['cs.CL', 'cs.AI', 'cs.SD'] | The development of speech foundation models (SFMs) like Whisper and
SeamlessM4T has significantly advanced the field of speech processing. However,
their closed nature--with inaccessible training data and code--poses major
reproducibility and fair evaluation challenges. While other domains have made
substantial progress toward open science by developing fully transparent models
trained on open-source (OS) code and data, similar efforts in speech remain
limited. To fill this gap, we introduce FAMA, the first family of open science
SFMs for English and Italian, trained on 150k+ hours of OS speech data.
Moreover, we present a new dataset containing 16k hours of cleaned and
pseudo-labeled speech for both languages. Results show that FAMA achieves
competitive performance compared to existing SFMs while being up to 8 times
faster. All artifacts, including code, datasets, and models, are released under
OS-compliant licenses, promoting openness in speech technology research. | 2025-05-28T18:19:34Z | null | null | null | FAMA: The First Large-Scale Open-Science Speech Foundation Model for English and Italian | ['Sara Papi', 'Marco Gaido', 'L. Bentivogli', 'A. Brutti', 'Mauro Cettolo', 'Roberto Gretter', 'M. Matassoni', 'Mohamed Nabih', 'Matteo Negri'] | 2,025 | arXiv.org | 0 | 42 | ['Computer Science'] |
2,505.22765 | StressTest: Can YOUR Speech LM Handle the Stress? | ['Iddo Yosha', 'Gallil Maimon', 'Yossi Adi'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Sentence stress refers to emphasis, placed on specific words within a spoken
utterance to highlight or contrast an idea, or to introduce new information. It
is often used to imply an underlying intention that is not explicitly stated.
Recent advances in speech-aware language models (SLMs) have enabled direct
processing of audio, allowing models to bypass transcription and access the
full richness of the speech signal and perform audio reasoning tasks such as
spoken question answering. Despite the crucial role of sentence stress in
shaping meaning and speaker intent, it remains largely overlooked in evaluation
and development of such models. In this work, we address this gap by
introducing StressTest, a benchmark specifically designed to evaluate a model's
ability to distinguish between interpretations of spoken sentences based on the
stress pattern. We assess the performance of several leading SLMs and find
that, despite their overall capabilities, they perform poorly on such tasks. To
overcome this limitation, we propose a novel synthetic data generation
pipeline, and create Stress17k, a training set that simulates change of meaning
implied by stress variation. Then, we empirically show that optimizing models
with this synthetic dataset aligns well with real-world recordings and enables
effective finetuning of SLMs. Results suggest, that our finetuned model,
StresSLM, significantly outperforms existing models on both sentence stress
reasoning and detection tasks. Code, models, data, and audio samples -
pages.cs.huji.ac.il/adiyoss-lab/stresstest. | 2025-05-28T18:32:56Z | null | null | null | null | null | null | null | null | null | null |
2,505.22914 | cadrille: Multi-modal CAD Reconstruction with Online Reinforcement
Learning | ['Maksim Kolodiazhnyi', 'Denis Tarasov', 'Dmitrii Zhemchuzhnikov', 'Alexander Nikulin', 'Ilya Zisman', 'Anna Vorontsova', 'Anton Konushin', 'Vladislav Kurenkov', 'Danila Rukhovich'] | ['cs.CV', 'cs.LG'] | Computer-Aided Design (CAD) plays a central role in engineering and
manufacturing, making it possible to create precise and editable 3D models.
Using a variety of sensor or user-provided data as inputs for CAD
reconstruction can democratize access to design applications. However, existing
methods typically focus on a single input modality, such as point clouds,
images, or text, which limits their generalizability and robustness. Leveraging
recent advances in vision-language models (VLM), we propose a multi-modal CAD
reconstruction model that simultaneously processes all three input modalities.
Inspired by large language model (LLM) training paradigms, we adopt a two-stage
pipeline: supervised fine-tuning (SFT) on large-scale procedurally generated
data, followed by reinforcement learning (RL) fine-tuning using online
feedback, obtained programatically. Furthermore, we are the first to explore RL
fine-tuning of LLMs for CAD tasks demonstrating that online RL algorithms such
as Group Relative Preference Optimization (GRPO) outperform offline
alternatives. In the DeepCAD benchmark, our SFT model outperforms existing
single-modal approaches in all three input modalities simultaneously. More
importantly, after RL fine-tuning, cadrille sets new state-of-the-art on three
challenging datasets, including a real-world one. | 2025-05-28T22:32:31Z | null | null | null | null | null | null | null | null | null | null |
2,505.22943 | Can LLMs Deceive CLIP? Benchmarking Adversarial Compositionality of
Pre-trained Multimodal Representation via Text Updates | ['Jaewoo Ahn', 'Heeseung Yun', 'Dayoon Ko', 'Gunhee Kim'] | ['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG', 'cs.SD'] | While pre-trained multimodal representations (e.g., CLIP) have shown
impressive capabilities, they exhibit significant compositional vulnerabilities
leading to counterintuitive judgments. We introduce Multimodal Adversarial
Compositionality (MAC), a benchmark that leverages large language models (LLMs)
to generate deceptive text samples to exploit these vulnerabilities across
different modalities and evaluates them through both sample-wise attack success
rate and group-wise entropy-based diversity. To improve zero-shot methods, we
propose a self-training approach that leverages rejection-sampling fine-tuning
with diversity-promoting filtering, which enhances both attack success rate and
sample diversity. Using smaller language models like Llama-3.1-8B, our approach
demonstrates superior performance in revealing compositional vulnerabilities
across various multimodal representations, including images, videos, and
audios. | 2025-05-28T23:45:55Z | ACL 2025 Main. Code is released at
https://vision.snu.ac.kr/projects/mac | null | null | null | null | null | null | null | null | null |
2,505.22944 | ATI: Any Trajectory Instruction for Controllable Video Generation | ['Angtian Wang', 'Haibin Huang', 'Jacob Zhiyuan Fang', 'Yiding Yang', 'Chongyang Ma'] | ['cs.CV', 'cs.AI'] | We propose a unified framework for motion control in video generation that
seamlessly integrates camera movement, object-level translation, and
fine-grained local motion using trajectory-based inputs. In contrast to prior
methods that address these motion types through separate modules or
task-specific designs, our approach offers a cohesive solution by projecting
user-defined trajectories into the latent space of pre-trained image-to-video
generation models via a lightweight motion injector. Users can specify
keypoints and their motion paths to control localized deformations, entire
object motion, virtual camera dynamics, or combinations of these. The injected
trajectory signals guide the generative process to produce temporally
consistent and semantically aligned motion sequences. Our framework
demonstrates superior performance across multiple video motion control tasks,
including stylized motion effects (e.g., motion brushes), dynamic viewpoint
changes, and precise local motion manipulation. Experiments show that our
method provides significantly better controllability and visual quality
compared to prior approaches and commercial solutions, while remaining broadly
compatible with various state-of-the-art video generation backbones. Project
page: https://anytraj.github.io/. | 2025-05-28T23:49:18Z | null | null | null | null | null | null | null | null | null | null |
2,505.22961 | ToMAP: Training Opponent-Aware LLM Persuaders with Theory of Mind | ['Peixuan Han', 'Zijia Liu', 'Jiaxuan You'] | ['cs.CL', 'cs.LG'] | Large language models (LLMs) have shown promising potential in persuasion,
but existing works on training LLM persuaders are still preliminary. Notably,
while humans are skilled in modeling their opponent's thoughts and opinions
proactively and dynamically, current LLMs struggle with such Theory of Mind
(ToM) reasoning, resulting in limited diversity and opponent awareness. To
address this limitation, we introduce Theory of Mind Augmented Persuader
(ToMAP), a novel approach for building more flexible persuader agents by
incorporating two theory of mind modules that enhance the persuader's awareness
and analysis of the opponent's mental state. Specifically, we begin by
prompting the persuader to consider possible objections to the target central
claim, and then use a text encoder paired with a trained MLP classifier to
predict the opponent's current stance on these counterclaims. Our carefully
designed reinforcement learning schema enables the persuader learns how to
analyze opponent-related information and utilize it to generate more effective
arguments. Experiments show that the ToMAP persuader, while containing only 3B
parameters, outperforms much larger baselines, like GPT-4o, with a relative
gain of 39.4% across multiple persuadee models and diverse corpora. Notably,
ToMAP exhibits complex reasoning chains and reduced repetition during training,
which leads to more diverse and effective arguments. The opponent-aware feature
of ToMAP also makes it suitable for long conversations and enables it to employ
more logical and opponent-aware strategies. These results underscore our
method's effectiveness and highlight its potential for developing more
persuasive language agents. Code is available at:
https://github.com/ulab-uiuc/ToMAP. | 2025-05-29T01:03:41Z | null | null | null | null | null | null | null | null | null | null |
2,505.22977 | HyperMotion: DiT-Based Pose-Guided Human Image Animation of Complex
Motions | ['Shuolin Xu', 'Siming Zheng', 'Ziyi Wang', 'HC Yu', 'Jinwei Chen', 'Huaqi Zhang', 'Bo Li', 'Peng-Tao Jiang'] | ['cs.CV'] | Recent advances in diffusion models have significantly improved conditional
video generation, particularly in the pose-guided human image animation task.
Although existing methods are capable of generating high-fidelity and
time-consistent animation sequences in regular motions and static scenes, there
are still obvious limitations when facing complex human body motions
(Hypermotion) that contain highly dynamic, non-standard motions, and the lack
of a high-quality benchmark for evaluation of complex human motion animations.
To address this challenge, we introduce the \textbf{Open-HyperMotionX Dataset}
and \textbf{HyperMotionX Bench}, which provide high-quality human pose
annotations and curated video clips for evaluating and improving pose-guided
human image animation models under complex human motion conditions.
Furthermore, we propose a simple yet powerful DiT-based video generation
baseline and design spatial low-frequency enhanced RoPE, a novel module that
selectively enhances low-frequency spatial feature modeling by introducing
learnable frequency scaling. Our method significantly improves structural
stability and appearance consistency in highly dynamic human motion sequences.
Extensive experiments demonstrate the effectiveness of our dataset and proposed
approach in advancing the generation quality of complex human motion image
animations. Code and dataset will be made publicly available. | 2025-05-29T01:30:46Z | 17 pages, 7 figures | null | null | HyperMotion: DiT-Based Pose-Guided Human Image Animation of Complex Motions | ['Shuolin Xu', 'Siming Zheng', 'Ziyi Wang', 'HC Yu', 'Jinwei Chen', 'Huaqi Zhang', 'Bo Li', 'Peng-Tao Jiang'] | 2,025 | arXiv.org | 0 | 54 | ['Computer Science'] |
2,505.2306 | Self-Correcting Code Generation Using Small Language Models | ['Jeonghun Cho', 'Deokhyung Kang', 'Hyounghun Kim', 'Gary Geunbae Lee'] | ['cs.CL'] | Self-correction has demonstrated potential in code generation by allowing
language models to revise and improve their outputs through successive
refinement. Recent studies have explored prompting-based strategies that
incorporate verification or feedback loops using proprietary models, as well as
training-based methods that leverage their strong reasoning capabilities.
However, whether smaller models possess the capacity to effectively guide their
outputs through self-reflection remains unexplored. Our findings reveal that
smaller models struggle to exhibit reflective revision behavior across both
self-correction paradigms. In response, we introduce CoCoS, an approach
designed to enhance the ability of small language models for multi-turn code
correction. Specifically, we propose an online reinforcement learning objective
that trains the model to confidently maintain correct outputs while
progressively correcting incorrect outputs as turns proceed. Our approach
features an accumulated reward function that aggregates rewards across the
entire trajectory and a fine-grained reward better suited to multi-turn
correction scenarios. This facilitates the model in enhancing initial response
quality while achieving substantial improvements through self-correction. With
1B-scale models, CoCoS achieves improvements of 35.8% on the MBPP and 27.7% on
HumanEval compared to the baselines. | 2025-05-29T04:04:44Z | null | null | null | Self-Correcting Code Generation Using Small Language Models | ['Jeonghun Cho', 'Deokhyung Kang', 'Hyounghun Kim', 'G. Lee'] | 2,025 | arXiv.org | 0 | 32 | ['Computer Science'] |
2,505.23091 | Infi-MMR: Curriculum-based Unlocking Multimodal Reasoning via Phased
Reinforcement Learning in Multimodal Small Language Models | ['Zeyu Liu', 'Yuhang Liu', 'Guanghao Zhu', 'Congkai Xie', 'Zhen Li', 'Jianbo Yuan', 'Xinyao Wang', 'Qing Li', 'Shing-Chi Cheung', 'Shengyu Zhang', 'Fei Wu', 'Hongxia Yang'] | ['cs.AI', 'cs.CL'] | Recent advancements in large language models (LLMs) have demonstrated
substantial progress in reasoning capabilities, such as DeepSeek-R1, which
leverages rule-based reinforcement learning to enhance logical reasoning
significantly. However, extending these achievements to multimodal large
language models (MLLMs) presents critical challenges, which are frequently more
pronounced for Multimodal Small Language Models (MSLMs) given their typically
weaker foundational reasoning abilities: (1) the scarcity of high-quality
multimodal reasoning datasets, (2) the degradation of reasoning capabilities
due to the integration of visual processing, and (3) the risk that direct
application of reinforcement learning may produce complex yet incorrect
reasoning processes. To address these challenges, we design a novel framework
Infi-MMR to systematically unlock the reasoning potential of MSLMs through a
curriculum of three carefully structured phases and propose our multimodal
reasoning model Infi-MMR-3B. The first phase, Foundational Reasoning
Activation, leverages high-quality textual reasoning datasets to activate and
strengthen the model's logical reasoning capabilities. The second phase,
Cross-Modal Reasoning Adaptation, utilizes caption-augmented multimodal data to
facilitate the progressive transfer of reasoning skills to multimodal contexts.
The third phase, Multimodal Reasoning Enhancement, employs curated,
caption-free multimodal data to mitigate linguistic biases and promote robust
cross-modal reasoning. Infi-MMR-3B achieves both state-of-the-art multimodal
math reasoning ability (43.68% on MathVerse testmini, 27.04% on MathVision
test, and 21.33% on OlympiadBench) and general reasoning ability (67.2% on
MathVista testmini). Resources are available at
https://huggingface.co/Reallm-Labs/Infi-MMR-3B. | 2025-05-29T04:51:56Z | null | null | null | Infi-MMR: Curriculum-based Unlocking Multimodal Reasoning via Phased Reinforcement Learning in Multimodal Small Language Models | ['Zeyu Liu', 'Yuhang Liu', 'Guanghao Zhu', 'Congkai Xie', 'Zhen Li', 'Jianbo Yuan', 'Xinyao Wang', 'Qing Li', 'Shing-Chi Cheung', 'Sheng Zhang', 'Fei Wu', 'Hongxia Yang'] | 2,025 | arXiv.org | 0 | 35 | ['Computer Science'] |
2,505.23253 | UniTEX: Universal High Fidelity Generative Texturing for 3D Shapes | ['Yixun Liang', 'Kunming Luo', 'Xiao Chen', 'Rui Chen', 'Hongyu Yan', 'Weiyu Li', 'Jiarui Liu', 'Ping Tan'] | ['cs.CV'] | We present UniTEX, a novel two-stage 3D texture generation framework to
create high-quality, consistent textures for 3D assets. Existing approaches
predominantly rely on UV-based inpainting to refine textures after reprojecting
the generated multi-view images onto the 3D shapes, which introduces challenges
related to topological ambiguity. To address this, we propose to bypass the
limitations of UV mapping by operating directly in a unified 3D functional
space. Specifically, we first propose that lifts texture generation into 3D
space via Texture Functions (TFs)--a continuous, volumetric representation that
maps any 3D point to a texture value based solely on surface proximity,
independent of mesh topology. Then, we propose to predict these TFs directly
from images and geometry inputs using a transformer-based Large Texturing Model
(LTM). To further enhance texture quality and leverage powerful 2D priors, we
develop an advanced LoRA-based strategy for efficiently adapting large-scale
Diffusion Transformers (DiTs) for high-quality multi-view texture synthesis as
our first stage. Extensive experiments demonstrate that UniTEX achieves
superior visual quality and texture integrity compared to existing approaches,
offering a generalizable and scalable solution for automated 3D texture
generation. Code will available in: https://github.com/YixunLiang/UniTEX. | 2025-05-29T08:58:41Z | 10 pages, 9 figures | null | null | null | null | null | null | null | null | null |
2,505.23277 | Sentinel: Attention Probing of Proxy Models for LLM Context Compression
with an Understanding Perspective | ['Yong Zhang', 'Yanwen Huang', 'Ning Cheng', 'Yang Guo', 'Yun Zhu', 'Yanmeng Wang', 'Shaojun Wang', 'Jing Xiao'] | ['cs.CL', 'cs.AI'] | Retrieval-augmented generation (RAG) enhances large language models (LLMs)
with external context, but retrieved passages are often lengthy, noisy, or
exceed input limits. Existing compression methods typically require supervised
training of dedicated compression models, increasing cost and reducing
portability. We propose Sentinel, a lightweight sentence-level compression
framework that reframes context filtering as an attention-based understanding
task. Rather than training a compression model, Sentinel probes decoder
attention from an off-the-shelf 0.5B proxy LLM using a lightweight classifier
to identify sentence relevance. Empirically, we find that query-context
relevance estimation is consistent across model scales, with 0.5B proxies
closely matching the behaviors of larger models. On the LongBench benchmark,
Sentinel achieves up to 5$\times$ compression while matching the QA performance
of 7B-scale compression systems. Our results suggest that probing native
attention signals enables fast, effective, and question-aware context
compression. Code available at: https://github.com/yzhangchuck/Sentinel. | 2025-05-29T09:24:12Z | Preprint. 17 pages including appendix | null | null | null | null | null | null | null | null | null |
2,505.23297 | EmoBench-UA: A Benchmark Dataset for Emotion Detection in Ukrainian | ['Daryna Dementieva', 'Nikolay Babakov', 'Alexander Fraser'] | ['cs.CL'] | While Ukrainian NLP has seen progress in many texts processing tasks, emotion
classification remains an underexplored area with no publicly available
benchmark to date. In this work, we introduce EmoBench-UA, the first annotated
dataset for emotion detection in Ukrainian texts. Our annotation schema is
adapted from the previous English-centric works on emotion detection (Mohammad
et al., 2018; Mohammad, 2022) guidelines. The dataset was created through
crowdsourcing using the Toloka.ai platform ensuring high-quality of the
annotation process. Then, we evaluate a range of approaches on the collected
dataset, starting from linguistic-based baselines, synthetic data translated
from English, to large language models (LLMs). Our findings highlight the
challenges of emotion classification in non-mainstream languages like Ukrainian
and emphasize the need for further development of Ukrainian-specific models and
training resources. | 2025-05-29T09:49:57Z | null | null | null | null | null | null | null | null | null | null |
2,505.23325 | Dimension-Reduction Attack! Video Generative Models are Experts on
Controllable Image Synthesis | ['Hengyuan Cao', 'Yutong Feng', 'Biao Gong', 'Yijing Tian', 'Yunhong Lu', 'Chuang Liu', 'Bin Wang'] | ['cs.CV'] | Video generative models can be regarded as world simulators due to their
ability to capture dynamic, continuous changes inherent in real-world
environments. These models integrate high-dimensional information across
visual, temporal, spatial, and causal dimensions, enabling predictions of
subjects in various status. A natural and valuable research direction is to
explore whether a fully trained video generative model in high-dimensional
space can effectively support lower-dimensional tasks such as controllable
image generation. In this work, we propose a paradigm for video-to-image
knowledge compression and task adaptation, termed \textit{Dimension-Reduction
Attack} (\texttt{DRA-Ctrl}), which utilizes the strengths of video models,
including long-range context modeling and flatten full-attention, to perform
various generation tasks. Specially, to address the challenging gap between
continuous video frames and discrete image generation, we introduce a
mixup-based transition strategy that ensures smooth adaptation. Moreover, we
redesign the attention structure with a tailored masking mechanism to better
align text prompts with image-level control. Experiments across diverse image
generation tasks, such as subject-driven and spatially conditioned generation,
show that repurposed video models outperform those trained directly on images.
These results highlight the untapped potential of large-scale video generators
for broader visual applications. \texttt{DRA-Ctrl} provides new insights into
reusing resource-intensive video models and lays foundation for future unified
generative models across visual modalities. The project page is
https://dra-ctrl-2025.github.io/DRA-Ctrl/. | 2025-05-29T10:34:45Z | null | null | null | null | null | null | null | null | null | null |
2,505.23604 | Satori-SWE: Evolutionary Test-Time Scaling for Sample-Efficient Software
Engineering | ['Guangtao Zeng', 'Maohao Shen', 'Delin Chen', 'Zhenting Qi', 'Subhro Das', 'Dan Gutfreund', 'David Cox', 'Gregory Wornell', 'Wei Lu', 'Zhang-Wei Hong', 'Chuang Gan'] | ['cs.CL', 'cs.AI', 'cs.SE'] | Language models (LMs) perform well on standardized coding benchmarks but
struggle with real-world software engineering tasks such as resolving GitHub
issues in SWE-Bench, especially when model parameters are less than 100B. While
smaller models are preferable in practice due to their lower computational
cost, improving their performance remains challenging. Existing approaches
primarily rely on supervised fine-tuning (SFT) with high-quality data, which is
expensive to curate at scale. An alternative is test-time scaling: generating
multiple outputs, scoring them using a verifier, and selecting the best one.
Although effective, this strategy often requires excessive sampling and costly
scoring, limiting its practical application. We propose Evolutionary Test-Time
Scaling (EvoScale), a sample-efficient method that treats generation as an
evolutionary process. By iteratively refining outputs via selection and
mutation, EvoScale shifts the output distribution toward higher-scoring
regions, reducing the number of samples needed to find correct solutions. To
reduce the overhead from repeatedly sampling and selection, we train the model
to self-evolve using reinforcement learning (RL). Rather than relying on
external verifiers at inference time, the model learns to self-improve the
scores of its own generations across iterations. Evaluated on
SWE-Bench-Verified, EvoScale enables our 32B model, Satori-SWE-32B, to match or
exceed the performance of models with over 100B parameters while using a few
samples. Code, data, and models will be fully open-sourced. | 2025-05-29T16:15:36Z | null | null | null | Satori-SWE: Evolutionary Test-Time Scaling for Sample-Efficient Software Engineering | ['Guangtao Zeng', 'Maohao Shen', 'Delin Chen', 'Zhenting Qi', 'Subhro Das', 'Dan Gutfreund', 'David Cox', 'Greg Wornell', 'Wei Lu', 'Zhang-Wei Hong', 'Chuang Gan'] | 2,025 | arXiv.org | 0 | 35 | ['Computer Science'] |
2,505.23606 | Muddit: Liberating Generation Beyond Text-to-Image with a Unified
Discrete Diffusion Model | ['Qingyu Shi', 'Jinbin Bai', 'Zhuoran Zhao', 'Wenhao Chai', 'Kaidong Yu', 'Jianzong Wu', 'Shuangyong Song', 'Yunhai Tong', 'Xiangtai Li', 'Xuelong Li', 'Shuicheng Yan'] | ['cs.LG', 'cs.CV'] | Unified generation models aim to handle diverse tasks across modalities --
such as text generation, image generation, and vision-language reasoning --
within a single architecture and decoding paradigm. Autoregressive unified
models suffer from slow inference due to sequential decoding, and
non-autoregressive unified models suffer from weak generalization due to
limited pretrained backbones. We introduce Muddit, a unified discrete diffusion
transformer that enables fast and parallel generation across both text and
image modalities. Unlike prior unified diffusion models trained from scratch,
Muddit integrates strong visual priors from a pretrained text-to-image backbone
with a lightweight text decoder, enabling flexible and high-quality multimodal
generation under a unified architecture. Empirical results show that Muddit
achieves competitive or superior performance compared to significantly larger
autoregressive models in both quality and efficiency. The work highlights the
potential of purely discrete diffusion, when equipped with strong visual
priors, as a scalable and effective backbone for unified generation. | 2025-05-29T16:15:48Z | The code and model are available at
https://github.com/M-E-AGI-Lab/Muddit | null | null | null | null | null | null | null | null | null |
2,505.23621 | Table-R1: Inference-Time Scaling for Table Reasoning | ['Zheyuan Yang', 'Lyuhao Chen', 'Arman Cohan', 'Yilun Zhao'] | ['cs.CL'] | In this work, we present the first study to explore inference-time scaling on
table reasoning tasks. We develop and evaluate two post-training strategies to
enable inference-time scaling: distillation from frontier model reasoning
traces and reinforcement learning with verifiable rewards (RLVR). For
distillation, we introduce a large-scale dataset of reasoning traces generated
by DeepSeek-R1, which we use to fine-tune LLMs into the Table-R1-SFT model. For
RLVR, we propose task-specific verifiable reward functions and apply the GRPO
algorithm to obtain the Table-R1-Zero model. We evaluate our Table-R1-series
models across diverse table reasoning tasks, including short-form QA, fact
verification, and free-form QA. Notably, the Table-R1-Zero model matches or
exceeds the performance of GPT-4.1 and DeepSeek-R1, while using only a
7B-parameter LLM. It also demonstrates strong generalization to out-of-domain
datasets. Extensive ablation and qualitative analyses reveal the benefits of
instruction tuning, model architecture choices, and cross-task generalization,
as well as emergence of essential table reasoning skills during RL training. | 2025-05-29T16:28:50Z | null | null | null | null | null | null | null | null | null | null |
2,505.23678 | Grounded Reinforcement Learning for Visual Reasoning | ['Gabriel Sarch', 'Snigdha Saha', 'Naitik Khandelwal', 'Ayush Jain', 'Michael J. Tarr', 'Aviral Kumar', 'Katerina Fragkiadaki'] | ['cs.CV'] | While reinforcement learning (RL) over chains of thought has significantly
advanced language models in tasks such as mathematics and coding, visual
reasoning introduces added complexity by requiring models to direct visual
attention, interpret perceptual inputs, and ground abstract reasoning in
spatial evidence. We introduce ViGoRL (Visually Grounded Reinforcement
Learning), a vision-language model trained with RL to explicitly anchor each
reasoning step to specific visual coordinates. Inspired by human visual
decision-making, ViGoRL learns to produce spatially grounded reasoning traces,
guiding visual attention to task-relevant regions at each step. When
fine-grained exploration is required, our novel multi-turn RL framework enables
the model to dynamically zoom into predicted coordinates as reasoning unfolds.
Across a diverse set of visual reasoning benchmarks--including SAT-2 and BLINK
for spatial reasoning, V*bench for visual search, and ScreenSpot and
VisualWebArena for web-based grounding--ViGoRL consistently outperforms both
supervised fine-tuning and conventional RL baselines that lack explicit
grounding mechanisms. Incorporating multi-turn RL with zoomed-in visual
feedback significantly improves ViGoRL's performance on localizing small GUI
elements and visual search, achieving 86.4% on V*Bench. Additionally, we find
that grounding amplifies other visual behaviors such as region exploration,
grounded subgoal setting, and visual verification. Finally, human evaluations
show that the model's visual references are not only spatially accurate but
also helpful for understanding model reasoning steps. Our results show that
visually grounded RL is a strong paradigm for imbuing models with
general-purpose visual reasoning. | 2025-05-29T17:20:26Z | Project website: https://visually-grounded-rl.github.io/ | null | null | Grounded Reinforcement Learning for Visual Reasoning | ['Gabriel Sarch', 'Snigdha Saha', 'Naitik Khandelwal', 'Ayush Jain', 'Michael J. Tarr', 'Aviral Kumar', 'Katerina Fragkiadaki'] | 2,025 | arXiv.org | 0 | 85 | ['Computer Science'] |
2,505.23716 | AnySplat: Feed-forward 3D Gaussian Splatting from Unconstrained Views | ['Lihan Jiang', 'Yucheng Mao', 'Linning Xu', 'Tao Lu', 'Kerui Ren', 'Yichen Jin', 'Xudong Xu', 'Mulin Yu', 'Jiangmiao Pang', 'Feng Zhao', 'Dahua Lin', 'Bo Dai'] | ['cs.CV'] | We introduce AnySplat, a feed forward network for novel view synthesis from
uncalibrated image collections. In contrast to traditional neural rendering
pipelines that demand known camera poses and per scene optimization, or recent
feed forward methods that buckle under the computational weight of dense views,
our model predicts everything in one shot. A single forward pass yields a set
of 3D Gaussian primitives encoding both scene geometry and appearance, and the
corresponding camera intrinsics and extrinsics for each input image. This
unified design scales effortlessly to casually captured, multi view datasets
without any pose annotations. In extensive zero shot evaluations, AnySplat
matches the quality of pose aware baselines in both sparse and dense view
scenarios while surpassing existing pose free approaches. Moreover, it greatly
reduce rendering latency compared to optimization based neural fields, bringing
real time novel view synthesis within reach for unconstrained capture
settings.Project page: https://city-super.github.io/anysplat/ | 2025-05-29T17:49:56Z | Project page: https://city-super.github.io/anysplat/ | null | null | null | null | null | null | null | null | null |
2,505.23719 | TiRex: Zero-Shot Forecasting Across Long and Short Horizons with
Enhanced In-Context Learning | ['Andreas Auer', 'Patrick Podest', 'Daniel Klotz', 'Sebastian Böck', 'Günter Klambauer', 'Sepp Hochreiter'] | ['cs.LG'] | In-context learning, the ability of large language models to perform tasks
using only examples provided in the prompt, has recently been adapted for time
series forecasting. This paradigm enables zero-shot prediction, where past
values serve as context for forecasting future values, making powerful
forecasting tools accessible to non-experts and increasing the performance when
training data are scarce. Most existing zero-shot forecasting approaches rely
on transformer architectures, which, despite their success in language, often
fall short of expectations in time series forecasting, where recurrent models
like LSTMs frequently have the edge. Conversely, while LSTMs are well-suited
for time series modeling due to their state-tracking capabilities, they lack
strong in-context learning abilities. We introduce TiRex that closes this gap
by leveraging xLSTM, an enhanced LSTM with competitive in-context learning
skills. Unlike transformers, state-space models, or parallelizable RNNs such as
RWKV, TiRex retains state-tracking, a critical property for long-horizon
forecasting. To further facilitate its state-tracking ability, we propose a
training-time masking strategy called CPM. TiRex sets a new state of the art in
zero-shot time series forecasting on the HuggingFace benchmarks GiftEval and
Chronos-ZS, outperforming significantly larger models including TabPFN-TS
(Prior Labs), Chronos Bolt (Amazon), TimesFM (Google), and Moirai (Salesforce)
across both short- and long-term forecasts. | 2025-05-29T17:52:10Z | null | null | null | TiRex: Zero-Shot Forecasting Across Long and Short Horizons with Enhanced In-Context Learning | ['Andreas Auer', 'Patrick Podest', 'Daniel Klotz', 'Sebastian Bock', 'G. Klambauer', 'Sepp Hochreiter'] | 2,025 | arXiv.org | 0 | 40 | ['Computer Science'] |
2,505.23734 | ZPressor: Bottleneck-Aware Compression for Scalable Feed-Forward 3DGS | ['Weijie Wang', 'Donny Y. Chen', 'Zeyu Zhang', 'Duochao Shi', 'Akide Liu', 'Bohan Zhuang'] | ['cs.CV'] | Feed-forward 3D Gaussian Splatting (3DGS) models have recently emerged as a
promising solution for novel view synthesis, enabling one-pass inference
without the need for per-scene 3DGS optimization. However, their scalability is
fundamentally constrained by the limited capacity of their encoders, leading to
degraded performance or excessive memory consumption as the number of input
views increases. In this work, we analyze feed-forward 3DGS frameworks through
the lens of the Information Bottleneck principle and introduce ZPressor, a
lightweight architecture-agnostic module that enables efficient compression of
multi-view inputs into a compact latent state $Z$ that retains essential scene
information while discarding redundancy. Concretely, ZPressor enables existing
feed-forward 3DGS models to scale to over 100 input views at 480P resolution on
an 80GB GPU, by partitioning the views into anchor and support sets and using
cross attention to compress the information from the support views into anchor
views, forming the compressed latent state $Z$. We show that integrating
ZPressor into several state-of-the-art feed-forward 3DGS models consistently
improves performance under moderate input views and enhances robustness under
dense view settings on two large-scale benchmarks DL3DV-10K and RealEstate10K.
The video results, code and trained models are available on our project page:
https://lhmd.top/zpressor. | 2025-05-29T17:57:04Z | Project Page: https://lhmd.top/zpressor, Code:
https://github.com/ziplab/ZPressor | null | null | null | null | null | null | null | null | null |
2,505.23747 | Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial
Intelligence | ['Diankun Wu', 'Fangfu Liu', 'Yi-Hsin Hung', 'Yueqi Duan'] | ['cs.CV', 'cs.AI', 'cs.LG', 'I.2.6; I.2'] | Recent advancements in Multimodal Large Language Models (MLLMs) have
significantly enhanced performance on 2D visual tasks. However, improving their
spatial intelligence remains a challenge. Existing 3D MLLMs always rely on
additional 3D or 2.5D data to incorporate spatial awareness, restricting their
utility in scenarios with only 2D inputs, such as images or videos. In this
paper, we present Spatial-MLLM, a novel framework for visual-based spatial
reasoning from purely 2D observations. Unlike conventional video MLLMs which
rely on CLIP-based visual encoders optimized for semantic understanding, our
key insight is to unleash the strong structure prior from the feed-forward
visual geometry foundation model. Specifically, we propose a dual-encoder
architecture: a pretrained 2D visual encoder to extract semantic features, and
a spatial encoder-initialized from the backbone of the visual geometry model-to
extract 3D structure features. A connector then integrates both features into
unified visual tokens for enhanced spatial understanding. Furthermore, we
propose a space-aware frame sampling strategy at inference time, which selects
the spatially informative frames of a video sequence, ensuring that even under
limited token length, the model focuses on frames critical for spatial
reasoning. Beyond architecture improvements, we construct the Spatial-MLLM-120k
dataset and train the model on it using supervised fine-tuning and GRPO.
Extensive experiments on various real-world datasets demonstrate that our
spatial-MLLM achieves state-of-the-art performance in a wide range of
visual-based spatial understanding and reasoning tasks. Project page:
https://diankun-wu.github.io/Spatial-MLLM/. | 2025-05-29T17:59:04Z | 21 pages | null | null | Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence | ['Diankun Wu', 'Fangfu Liu', 'Yi-Hsin Hung', 'Yueqi Duan'] | 2,025 | arXiv.org | 1 | 67 | ['Computer Science'] |
2,505.23762 | ZeroGUI: Automating Online GUI Learning at Zero Human Cost | ['Chenyu Yang', 'Shiqian Su', 'Shi Liu', 'Xuan Dong', 'Yue Yu', 'Weijie Su', 'Xuehui Wang', 'Zhaoyang Liu', 'Jinguo Zhu', 'Hao Li', 'Wenhai Wang', 'Yu Qiao', 'Xizhou Zhu', 'Jifeng Dai'] | ['cs.AI', 'cs.CL', 'cs.CV'] | The rapid advancement of large Vision-Language Models (VLMs) has propelled
the development of pure-vision-based GUI Agents, capable of perceiving and
operating Graphical User Interfaces (GUI) to autonomously fulfill user
instructions. However, existing approaches usually adopt an offline learning
framework, which faces two core limitations: (1) heavy reliance on high-quality
manual annotations for element grounding and action supervision, and (2)
limited adaptability to dynamic and interactive environments. To address these
limitations, we propose ZeroGUI, a scalable, online learning framework for
automating GUI Agent training at Zero human cost. Specifically, ZeroGUI
integrates (i) VLM-based automatic task generation to produce diverse training
goals from the current environment state, (ii) VLM-based automatic reward
estimation to assess task success without hand-crafted evaluation functions,
and (iii) two-stage online reinforcement learning to continuously interact with
and learn from GUI environments. Experiments on two advanced GUI Agents
(UI-TARS and Aguvis) demonstrate that ZeroGUI significantly boosts performance
across OSWorld and AndroidLab environments. The code is available at
https://github.com/OpenGVLab/ZeroGUI. | 2025-05-29T17:59:51Z | null | null | null | null | null | null | null | null | null | null |
2,505.23883 | BioCLIP 2: Emergent Properties from Scaling Hierarchical Contrastive
Learning | ['Jianyang Gu', 'Samuel Stevens', 'Elizabeth G Campolongo', 'Matthew J Thompson', 'Net Zhang', 'Jiaman Wu', 'Andrei Kopanev', 'Zheda Mai', 'Alexander E. White', 'James Balhoff', 'Wasila Dahdul', 'Daniel Rubenstein', 'Hilmar Lapp', 'Tanya Berger-Wolf', 'Wei-Lun Chao', 'Yu Su'] | ['cs.CV', 'cs.CL', 'cs.LG'] | Foundation models trained at scale exhibit remarkable emergent behaviors,
learning new capabilities beyond their initial training objectives. We find
such emergent behaviors in biological vision models via large-scale contrastive
vision-language training. To achieve this, we first curate TreeOfLife-200M,
comprising 214 million images of living organisms, the largest and most diverse
biological organism image dataset to date. We then train BioCLIP 2 on
TreeOfLife-200M to distinguish different species. Despite the narrow training
objective, BioCLIP 2 yields extraordinary accuracy when applied to various
biological visual tasks such as habitat classification and trait prediction. We
identify emergent properties in the learned embedding space of BioCLIP 2. At
the inter-species level, the embedding distribution of different species aligns
closely with functional and ecological meanings (e.g., beak sizes and
habitats). At the intra-species level, instead of being diminished, the
intra-species variations (e.g., life stages and sexes) are preserved and better
separated in subspaces orthogonal to inter-species distinctions. We provide
formal proof and analyses to explain why hierarchical supervision and
contrastive objectives encourage these emergent properties. Crucially, our
results reveal that these properties become increasingly significant with
larger-scale training data, leading to a biologically meaningful embedding
space. | 2025-05-29T17:48:20Z | Project page: https://imageomics.github.io/bioclip-2/ | null | null | null | null | null | null | null | null | null |
2,505.23977 | VisualSphinx: Large-Scale Synthetic Vision Logic Puzzles for RL | ['Yichen Feng', 'Zhangchen Xu', 'Fengqing Jiang', 'Yuetai Li', 'Bhaskar Ramasubramanian', 'Luyao Niu', 'Bill Yuchen Lin', 'Radha Poovendran'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Vision language models (VLMs) are expected to perform effective multimodal
reasoning and make logically coherent decisions, which is critical to tasks
such as diagram understanding and spatial problem solving. However, current VLM
reasoning lacks large-scale and well-structured training datasets. To bridge
this gap, we propose VisualSphinx, a first-of-its-kind large-scale synthetic
visual logical reasoning training data. To tackle the challenge of image
synthesis with grounding answers, we propose a rule-to-image synthesis
pipeline, which extracts and expands puzzle rules from seed questions and
generates the code of grounding synthesis image synthesis for puzzle sample
assembly. Experiments demonstrate that VLM trained using GRPO on VisualSphinx
benefit from logical coherence and readability of our dataset and exhibit
improved performance on logical reasoning tasks. The enhanced reasoning
capabilities developed from VisualSphinx also benefit other reasoning tasks
such as algebraic reasoning, arithmetic reasoning and geometry reasoning. | 2025-05-29T20:08:36Z | Project page at https://visualsphinx.github.io/ | null | null | VisualSphinx: Large-Scale Synthetic Vision Logic Puzzles for RL | ['Yichen Feng', 'Zhangchen Xu', 'Fengqing Jiang', 'Yuetai Li', 'Bhaskar Ramasubramanian', 'Luyao Niu', 'Bill Yuchen Lin', 'Radha Poovendran'] | 2,025 | arXiv.org | 0 | 46 | ['Computer Science'] |
2,505.23987 | Large Language Models for Controllable Multi-property Multi-objective
Molecule Optimization | ['Vishal Dey', 'Xiao Hu', 'Xia Ning'] | ['cs.LG', 'cs.AI', 'cs.CL', 'q-bio.BM'] | In real-world drug design, molecule optimization requires selectively
improving multiple molecular properties up to pharmaceutically relevant levels,
while maintaining others that already meet such criteria. However, existing
computational approaches and instruction-tuned LLMs fail to capture such
nuanced property-specific objectives, limiting their practical applicability.
To address this, we introduce C-MuMOInstruct, the first instruction-tuning
dataset focused on multi-property optimization with explicit, property-specific
objectives. Leveraging C-MuMOInstruct, we develop GeLLMO-Cs, a series of
instruction-tuned LLMs that can perform targeted property-specific
optimization. Our experiments across 5 in-distribution and 5
out-of-distribution tasks show that GeLLMO-Cs consistently outperform strong
baselines, achieving up to 126% higher success rate. Notably, GeLLMO-Cs exhibit
impressive 0-shot generalization to novel optimization tasks and unseen
instructions. This offers a step toward a foundational LLM to support
realistic, diverse optimizations with property-specific objectives.
C-MuMOInstruct and code are accessible through
https://github.com/ninglab/GeLLMO-C. | 2025-05-29T20:29:14Z | null | null | null | null | null | null | null | null | null | null |
2,505.24111 | Fine-tune Before Structured Pruning: Towards Compact and Accurate
Self-Supervised Models for Speaker Diarization | ['Jiangyu Han', 'Federico Landini', 'Johan Rohdin', 'Anna Silnova', 'Mireia Diez', 'Jan Cernocky', 'Lukas Burget'] | ['eess.AS'] | Self-supervised learning (SSL) models like WavLM can be effectively utilized
when building speaker diarization systems but are often large and slow,
limiting their use in resource constrained scenarios. Previous studies have
explored compression techniques, but usually for the price of degraded
performance at high pruning ratios. In this work, we propose to compress SSL
models through structured pruning by introducing knowledge distillation.
Different from the existing works, we emphasize the importance of fine-tuning
SSL models before pruning. Experiments on far-field single-channel AMI,
AISHELL-4, and AliMeeting datasets show that our method can remove redundant
parameters of WavLM Base+ and WavLM Large by up to 80% without any performance
degradation. After pruning, the inference speeds on a single GPU for the Base+
and Large models are 4.0 and 2.6 times faster, respectively. Our source code is
publicly available. | 2025-05-30T01:19:58Z | Accepted by INTERSPEECH 2025 | null | null | Fine-tune Before Structured Pruning: Towards Compact and Accurate Self-Supervised Models for Speaker Diarization | ['Jiangyu Han', 'Federico Landini', 'Johan Rohdin', 'Anna Silnova', 'Mireia Díez', 'J. Černocký', 'Lukás Burget'] | 2,025 | null | 1 | 28 | ['Engineering'] |
2,505.24183 | CodeV-R1: Reasoning-Enhanced Verilog Generation | ['Yaoyu Zhu', 'Di Huang', 'Hanqi Lyu', 'Xiaoyun Zhang', 'Chongxiao Li', 'Wenxuan Shi', 'Yutong Wu', 'Jianan Mu', 'Jinghua Wang', 'Yang Zhao', 'Pengwei Jin', 'Shuyao Cheng', 'Shengwen Liang', 'Xishan Zhang', 'Rui Zhang', 'Zidong Du', 'Qi Guo', 'Xing Hu', 'Yunji Chen'] | ['cs.LG', 'cs.AR', 'cs.PL'] | Large language models (LLMs) trained via reinforcement learning with
verifiable reward (RLVR) have achieved breakthroughs on tasks with explicit,
automatable verification, such as software programming and mathematical
problems. Extending RLVR to electronic design automation (EDA), especially
automatically generating hardware description languages (HDLs) like Verilog
from natural-language (NL) specifications, however, poses three key challenges:
the lack of automated and accurate verification environments, the scarcity of
high-quality NL-code pairs, and the prohibitive computation cost of RLVR. To
this end, we introduce CodeV-R1, an RLVR framework for training Verilog
generation LLMs. First, we develop a rule-based testbench generator that
performs robust equivalence checking against golden references. Second, we
propose a round-trip data synthesis method that pairs open-source Verilog
snippets with LLM-generated NL descriptions, verifies code-NL-code consistency
via the generated testbench, and filters out inequivalent examples to yield a
high-quality dataset. Third, we employ a two-stage "distill-then-RL" training
pipeline: distillation for the cold start of reasoning abilities, followed by
adaptive DAPO, our novel RLVR algorithm that can reduce training cost by
adaptively adjusting sampling rate. The resulting model, CodeV-R1-7B, achieves
68.6% and 72.9% pass@1 on VerilogEval v2 and RTLLM v1.1, respectively,
surpassing prior state-of-the-art by 12~20%, while matching or even exceeding
the performance of 671B DeepSeek-R1. We will release our model, training
pipeline, and dataset to facilitate research in EDA and LLM communities. | 2025-05-30T03:51:06Z | null | null | null | null | null | null | null | null | null | null |
2,505.24216 | Shuffle PatchMix Augmentation with Confidence-Margin Weighted
Pseudo-Labels for Enhanced Source-Free Domain Adaptation | ['Prasanna Reddy Pulakurthi', 'Majid Rabbani', 'Jamison Heard', 'Sohail Dianat', 'Celso M. de Melo', 'Raghuveer Rao'] | ['cs.CV'] | This work investigates Source-Free Domain Adaptation (SFDA), where a model
adapts to a target domain without access to source data. A new augmentation
technique, Shuffle PatchMix (SPM), and a novel reweighting strategy are
introduced to enhance performance. SPM shuffles and blends image patches to
generate diverse and challenging augmentations, while the reweighting strategy
prioritizes reliable pseudo-labels to mitigate label noise. These techniques
are particularly effective on smaller datasets like PACS, where overfitting and
pseudo-label noise pose greater risks. State-of-the-art results are achieved on
three major benchmarks: PACS, VisDA-C, and DomainNet-126. Notably, on PACS,
improvements of 7.3% (79.4% to 86.7%) and 7.2% are observed in single-target
and multi-target settings, respectively, while gains of 2.8% and 0.7% are
attained on DomainNet-126 and VisDA-C. This combination of advanced
augmentation and robust pseudo-label reweighting establishes a new benchmark
for SFDA. The code is available at: https://github.com/PrasannaPulakurthi/SPM | 2025-05-30T05:02:42Z | 6 pages, 3 figures, 5 tables, Accepted to IEEE ICIP 2025 | null | null | null | null | null | null | null | null | null |
2,505.24219 | ERU-KG: Efficient Reference-aligned Unsupervised Keyphrase Generation | ['Lam Thanh Do', 'Aaditya Bodke', 'Pritom Saha Akash', 'Kevin Chen-Chuan Chang'] | ['cs.CL'] | Unsupervised keyphrase prediction has gained growing interest in recent
years. However, existing methods typically rely on heuristically defined
importance scores, which may lead to inaccurate informativeness estimation. In
addition, they lack consideration for time efficiency. To solve these problems,
we propose ERU-KG, an unsupervised keyphrase generation (UKG) model that
consists of an informativeness and a phraseness module. The former estimates
the relevance of keyphrase candidates, while the latter generate those
candidates. The informativeness module innovates by learning to model
informativeness through references (e.g., queries, citation contexts, and
titles) and at the term-level, thereby 1) capturing how the key concepts of
documents are perceived in different contexts and 2) estimating informativeness
of phrases more efficiently by aggregating term informativeness, removing the
need for explicit modeling of the candidates. ERU-KG demonstrates its
effectiveness on keyphrase generation benchmarks by outperforming unsupervised
baselines and achieving on average 89\% of the performance of a supervised
model for top 10 predictions. Additionally, to highlight its practical utility,
we evaluate the model on text retrieval tasks and show that keyphrases
generated by ERU-KG are effective when employed as query and document
expansions. Furthermore, inference speed tests reveal that ERU-KG is the
fastest among baselines of similar model sizes. Finally, our proposed model can
switch between keyphrase generation and extraction by adjusting
hyperparameters, catering to diverse application requirements. | 2025-05-30T05:09:53Z | Accepted to ACL 2025 | null | null | null | null | null | null | null | null | null |
2,505.24298 | AReaL: A Large-Scale Asynchronous Reinforcement Learning System for
Language Reasoning | ['Wei Fu', 'Jiaxuan Gao', 'Xujie Shen', 'Chen Zhu', 'Zhiyu Mei', 'Chuyi He', 'Shusheng Xu', 'Guo Wei', 'Jun Mei', 'Jiashu Wang', 'Tongkai Yang', 'Binhang Yuan', 'Yi Wu'] | ['cs.LG', 'cs.AI'] | Reinforcement learning (RL) has become a dominant paradigm for training large
language models (LLMs), particularly for reasoning tasks. Effective RL for LLMs
requires massive parallelization and poses an urgent need for efficient
training systems. Most existing large-scale RL systems for LLMs are
synchronous, alternating generation and training in a batch setting where
rollouts in each training batch are generated by the same model. This approach
stabilizes RL training but suffers from severe system-level inefficiency:
generation must wait until the longest output in the batch is completed before
model updates, resulting in GPU underutilization. We present AReaL, a fully
asynchronous RL system that completely decouples generation from training.
Rollout workers in AReaL continuously generate new outputs without waiting,
while training workers update the model whenever a batch of data is collected.
AReaL also incorporates a collection of system-level optimizations, leading to
substantially higher GPU utilization. To stabilize RL training, AReaL balances
the workload of rollout and training workers to control data staleness, and
adopts a staleness-enhanced PPO variant to better handle outdated training
samples. Extensive experiments on math and code reasoning benchmarks show that
AReaL achieves up to 2.77$\times$ training speedup compared to synchronous
systems with the same number of GPUs and matched or improved final performance.
The code of AReaL is available at https://github.com/inclusionAI/AReaL/. | 2025-05-30T07:18:25Z | null | null | null | null | null | null | null | null | null | null |
2,505.24421 | pyMEAL: A Multi-Encoder Augmentation-Aware Learning for Robust and
Generalizable Medical Image Translation | ['Abdul-mojeed Olabisi Ilyas', 'Adeleke Maradesa', 'Jamal Banzi', 'Jianpan Huang', 'Henry K. F. Mak', 'Kannie W. Y. Chan'] | ['eess.IV', 'cs.CV'] | Medical imaging is critical for diagnostics, but clinical adoption of
advanced AI-driven imaging faces challenges due to patient variability, image
artifacts, and limited model generalization. While deep learning has
transformed image analysis, 3D medical imaging still suffers from data scarcity
and inconsistencies due to acquisition protocols, scanner differences, and
patient motion. Traditional augmentation uses a single pipeline for all
transformations, disregarding the unique traits of each augmentation and
struggling with large data volumes.
To address these challenges, we propose a Multi-encoder Augmentation-Aware
Learning (MEAL) framework that leverages four distinct augmentation variants
processed through dedicated encoders. Three fusion strategies such as
concatenation (CC), fusion layer (FL), and adaptive controller block (BD) are
integrated to build multi-encoder models that combine augmentation-specific
features before decoding. MEAL-BD uniquely preserves augmentation-aware
representations, enabling robust, protocol-invariant feature learning.
As demonstrated in a Computed Tomography (CT)-to-T1-weighted Magnetic
Resonance Imaging (MRI) translation study, MEAL-BD consistently achieved the
best performance on both unseen- and predefined-test data. On both geometric
transformations (like rotations and flips) and non-augmented inputs, MEAL-BD
outperformed other competing methods, achieving higher mean peak
signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM)
scores. These results establish MEAL as a reliable framework for preserving
structural fidelity and generalizing across clinically relevant variability. By
reframing augmentation as a source of diverse, generalizable features, MEAL
supports robust, protocol-invariant learning, advancing clinically reliable
medical imaging solutions. | 2025-05-30T10:01:23Z | 36 pages, 9 figures, 2 tables | null | null | pyMEAL: A Multi-Encoder Augmentation-Aware Learning for Robust and Generalizable Medical Image Translation | ['A. Ilyas', 'Adeleke Maradesa', 'Jamal Banzi', 'Jianpan Huang', 'Henry K.F. Mak', 'Kannie W. Y. Chan'] | 2,025 | arXiv.org | 0 | 45 | ['Computer Science', 'Engineering'] |
2,505.24443 | Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised
Learning with Outliers | ['Heejo Kong', 'Sung-Jin Kim', 'Gunho Jung', 'Seong-Whan Lee'] | ['cs.CV', 'cs.LG'] | Conventional semi-supervised learning (SSL) ideally assumes that labeled and
unlabeled data share an identical class distribution, however in practice, this
assumption is easily violated, as unlabeled data often includes unknown class
data, i.e., outliers. The outliers are treated as noise, considerably degrading
the performance of SSL models. To address this drawback, we propose a novel
framework, Diversify and Conquer (DAC), to enhance SSL robustness in the
context of open-set semi-supervised learning. In particular, we note that
existing open-set SSL methods rely on prediction discrepancies between inliers
and outliers from a single model trained on labeled data. This approach can be
easily failed when the labeled data is insufficient, leading to performance
degradation that is worse than naive SSL that do not account for outliers. In
contrast, our approach exploits prediction disagreements among multiple models
that are differently biased towards the unlabeled distribution. By leveraging
the discrepancies arising from training on unlabeled data, our method enables
robust outlier detection even when the labeled data is underspecified. Our key
contribution is constructing a collection of differently biased models through
a single training process. By encouraging divergent heads to be differently
biased towards outliers while making consistent predictions for inliers, we
exploit the disagreement among these heads as a measure to identify unknown
concepts. Our code is available at https://github.com/heejokong/DivCon. | 2025-05-30T10:24:30Z | Accepted by IEEE Transactions on Neural Networks and Learning Systems
(TNNLS) | null | 10.1109/TNNLS.2025.3547801 | Diversify and Conquer: Open-Set Disagreement for Robust Semi-Supervised Learning With Outliers | ['Heejo Kong', 'Sung-Jin Kim', 'Gunho Jung', 'Seong-Whan Lee'] | 2,025 | IEEE Transactions on Neural Networks and Learning Systems | 0 | 56 | ['Computer Science', 'Medicine'] |
2,505.24449 | When Large Multimodal Models Confront Evolving Knowledge:Challenges and
Pathways | ['Kailin Jiang', 'Yuntao Du', 'Yukai Ding', 'Yuchen Ren', 'Ning Jiang', 'Zhi Gao', 'Zilong Zheng', 'Lei Liu', 'Bin Li', 'Qing Li'] | ['cs.CL'] | Large language/multimodal models (LLMs/LMMs) store extensive pre-trained
knowledge but struggle to maintain consistency with real-world updates, making
it difficult to avoid catastrophic forgetting while acquiring evolving
knowledge. Previous work focused on constructing textual knowledge datasets and
exploring knowledge injection in LLMs, lacking exploration of multimodal
evolving knowledge injection in LMMs. To address this, we propose the EVOKE
benchmark to evaluate LMMs' ability to inject multimodal evolving knowledge in
real-world scenarios. Meanwhile, a comprehensive evaluation of multimodal
evolving knowledge injection revealed two challenges: (1) Existing knowledge
injection methods perform terribly on evolving knowledge. (2) Supervised
fine-tuning causes catastrophic forgetting, particularly instruction following
ability is severely compromised. Additionally, we provide pathways and find
that: (1) Text knowledge augmentation during the training phase improves
performance, while image augmentation cannot achieve it. (2) Continual learning
methods, especially Replay and MoELoRA, effectively mitigate forgetting. Our
findings indicate that current knowledge injection methods have many
limitations on evolving knowledge, which motivates further research on more
efficient and stable knowledge injection methods. | 2025-05-30T10:36:19Z | null | null | null | When Large Multimodal Models Confront Evolving Knowledge:Challenges and Pathways | ['Kailin Jiang', 'Yuntao Du', 'Yukai Ding', 'Yuchen Ren', 'Ning Jiang', 'Zhi Gao', 'Zilong Zheng', 'Lei Liu', 'Bin Li', 'Qing Li'] | 2,025 | arXiv.org | 0 | 78 | ['Computer Science'] |
2,505.24461 | Logits-Based Finetuning | ['Jingyao Li', 'Senqiao Yang', 'Sitong Wu', 'Han Shi', 'Chuanyang Zheng', 'Hong Xu', 'Jiaya Jia'] | ['cs.LG'] | In recent years, developing compact and efficient large language models
(LLMs) has emerged as a thriving area of research. Traditional Supervised
Fine-Tuning (SFT), which relies on singular ground truth labels, often fails to
capture token-level dependencies and linguistic diversity. To address these
limitations, we propose a logits-based fine-tuning framework that integrates
the strengths of supervised learning and knowledge distillation. Our approach
constructs enriched training targets by combining teacher logits with ground
truth labels, preserving both correctness and linguistic diversity. This
ensures more reliable and effective training. We constructed a large-scale 1.2M
logits dataset and trained a series of science-focused models. Experimental
results demonstrate that our method achieves significant improvements, with
accuracy gains of 18% on Mawps and 22.7% on TabMWP. Across nine widely used
mathematical benchmarks, our method consistently outperforms prior SFT models,
achieving an average improvement of 7.28%. Codes are available at
https://github.com/dvlab-research/Logits-Based-Finetuning. | 2025-05-30T10:57:09Z | null | null | null | null | null | null | null | null | null | null |
2,505.24517 | un$^2$CLIP: Improving CLIP's Visual Detail Capturing Ability via
Inverting unCLIP | ['Yinqi Li', 'Jiahe Zhao', 'Hong Chang', 'Ruibing Hou', 'Shiguang Shan', 'Xilin Chen'] | ['cs.CV'] | Contrastive Language-Image Pre-training (CLIP) has become a foundation model
and has been applied to various vision and multimodal tasks. However, recent
works indicate that CLIP falls short in distinguishing detailed differences in
images and shows suboptimal performance on dense-prediction and vision-centric
multimodal tasks. Therefore, this work focuses on improving existing CLIP
models, aiming to capture as many visual details in images as possible. We find
that a specific type of generative models, unCLIP, provides a suitable
framework for achieving our goal. Specifically, unCLIP trains an image
generator conditioned on the CLIP image embedding. In other words, it inverts
the CLIP image encoder. Compared to discriminative models like CLIP, generative
models are better at capturing image details because they are trained to learn
the data distribution of images. Additionally, the conditional input space of
unCLIP aligns with CLIP's original image-text embedding space. Therefore, we
propose to invert unCLIP (dubbed un$^2$CLIP) to improve the CLIP model. In this
way, the improved image encoder can gain unCLIP's visual detail capturing
ability while preserving its alignment with the original text encoder
simultaneously. We evaluate our improved CLIP across various tasks to which
CLIP has been applied, including the challenging MMVP-VLM benchmark, the
dense-prediction open-vocabulary segmentation task, and multimodal large
language model tasks. Experiments show that un$^2$CLIP significantly improves
the original CLIP and previous CLIP improvement methods. Code and models will
be available at https://github.com/LiYinqi/un2CLIP. | 2025-05-30T12:29:38Z | null | null | null | null | null | null | null | null | null | null |
2,505.24523 | Stress-testing Machine Generated Text Detection: Shifting Language
Models Writing Style to Fool Detectors | ['Andrea Pedrotti', 'Michele Papucci', 'Cristiano Ciaccio', 'Alessio Miaschi', 'Giovanni Puccetti', "Felice Dell'Orletta", 'Andrea Esuli'] | ['cs.CL', 'cs.AI'] | Recent advancements in Generative AI and Large Language Models (LLMs) have
enabled the creation of highly realistic synthetic content, raising concerns
about the potential for malicious use, such as misinformation and manipulation.
Moreover, detecting Machine-Generated Text (MGT) remains challenging due to the
lack of robust benchmarks that assess generalization to real-world scenarios.
In this work, we present a pipeline to test the resilience of state-of-the-art
MGT detectors (e.g., Mage, Radar, LLM-DetectAIve) to linguistically informed
adversarial attacks. To challenge the detectors, we fine-tune language models
using Direct Preference Optimization (DPO) to shift the MGT style toward
human-written text (HWT). This exploits the detectors' reliance on stylistic
clues, making new generations more challenging to detect. Additionally, we
analyze the linguistic shifts induced by the alignment and which features are
used by detectors to detect MGT texts. Our results show that detectors can be
easily fooled with relatively few examples, resulting in a significant drop in
detection performance. This highlights the importance of improving detection
methods and making them robust to unseen in-domain texts. | 2025-05-30T12:33:30Z | Accepted at Findings of ACL 2025 | null | null | null | null | null | null | null | null | null |
2,505.24527 | Optimal Density Functions for Weighted Convolution in Learning Models | ['Simone Cammarasana', 'Giuseppe Patanè'] | ['cs.CV', 'cs.LG', '42A85'] | The paper introduces the weighted convolution, a novel approach to the
convolution for signals defined on regular grids (e.g., 2D images) through the
application of an optimal density function to scale the contribution of
neighbouring pixels based on their distance from the central pixel. This choice
differs from the traditional uniform convolution, which treats all neighbouring
pixels equally. Our weighted convolution can be applied to convolutional neural
network problems to improve the approximation accuracy. Given a convolutional
network, we define a framework to compute the optimal density function through
a minimisation model. The framework separates the optimisation of the
convolutional kernel weights (using stochastic gradient descent) from the
optimisation of the density function (using DIRECT-L). Experimental results on
a learning model for an image-to-image task (e.g., image denoising) show that
the weighted convolution significantly reduces the loss (up to 53% improvement)
and increases the test accuracy compared to standard convolution. While this
method increases execution time by 11%, it is robust across several
hyperparameters of the learning model. Future work will apply the weighted
convolution to real-case 2D and 3D image convolutional learning problems. | 2025-05-30T12:36:36Z | 5 figures, 5 tables, 21 pages | null | null | null | null | null | null | null | null | null |
2,505.24558 | Optimal Weighted Convolution for Classification and Denosing | ['Simone Cammarasana', 'Giuseppe Patanè'] | ['cs.CV', '68T05'] | We introduce a novel weighted convolution operator that enhances traditional
convolutional neural networks (CNNs) by integrating a spatial density function
into the convolution operator. This extension enables the network to
differentially weight neighbouring pixels based on their relative position to
the reference pixel, improving spatial characterisation and feature extraction.
The proposed operator maintains the same number of trainable parameters and is
fully compatible with existing CNN architectures. Although developed for 2D
image data, the framework is generalisable to signals on regular grids of
arbitrary dimensions, such as 3D volumetric data or 1D time series. We propose
an efficient implementation of the weighted convolution by pre-computing the
density function and achieving execution times comparable to standard
convolution layers. We evaluate our method on two deep learning tasks: image
classification using the CIFAR-100 dataset [KH+09] and image denoising using
the DIV2K dataset [AT17]. Experimental results with state-of-the-art
classification (e.g., VGG [SZ15], ResNet [HZRS16]) and denoising (e.g., DnCNN
[ZZC+17], NAFNet [CCZS22]) methods show that the weighted convolution improves
performance with respect to standard convolution across different quantitative
metrics. For example, VGG achieves an accuracy of 66.94% with weighted
convolution versus 56.89% with standard convolution on the classification
problem, while DnCNN improves the PSNR value from 20.17 to 22.63 on the
denoising problem. All models were trained on the CINECA Leonardo cluster to
reduce the execution time and improve the tuning of the density function
values. The PyTorch implementation of the weighted convolution is publicly
available at: https://github.com/cammarasana123/weightedConvolution2.0. | 2025-05-30T13:10:46Z | 17 pages, 3 figures, 6 tables | null | null | Optimal Weighted Convolution for Classification and Denosing | ['Simone Cammarasana', 'Giuseppe Patané'] | 2,025 | arXiv.org | 0 | 39 | ['Computer Science'] |
2,505.24581 | GATE: General Arabic Text Embedding for Enhanced Semantic Textual
Similarity with Matryoshka Representation Learning and Hybrid Loss Training | ['Omer Nacar', 'Anis Koubaa', 'Serry Sibaee', 'Yasser Al-Habashi', 'Adel Ammar', 'Wadii Boulila'] | ['cs.CL'] | Semantic textual similarity (STS) is a critical task in natural language
processing (NLP), enabling applications in retrieval, clustering, and
understanding semantic relationships between texts. However, research in this
area for the Arabic language remains limited due to the lack of high-quality
datasets and pre-trained models. This scarcity of resources has restricted the
accurate evaluation and advance of semantic similarity in Arabic text. This
paper introduces General Arabic Text Embedding (GATE) models that achieve
state-of-the-art performance on the Semantic Textual Similarity task within the
MTEB benchmark. GATE leverages Matryoshka Representation Learning and a hybrid
loss training approach with Arabic triplet datasets for Natural Language
Inference, which are essential for enhancing model performance in tasks that
demand fine-grained semantic understanding. GATE outperforms larger models,
including OpenAI, with a 20-25% performance improvement on STS benchmarks,
effectively capturing the unique semantic nuances of Arabic. | 2025-05-30T13:29:03Z | null | null | null | null | null | null | null | null | null | null |
2,505.24616 | Eye of Judgement: Dissecting the Evaluation of Russian-speaking LLMs
with POLLUX | ['Nikita Martynov', 'Anastasia Mordasheva', 'Dmitriy Gorbetskiy', 'Danil Astafurov', 'Ulyana Isaeva', 'Elina Basyrova', 'Sergey Skachkov', 'Victoria Berestova', 'Nikolay Ivanov', 'Valeriia Zanina', 'Alena Fenogenova'] | ['cs.CL', 'cs.AI'] | We introduce POLLUX, a comprehensive open-source benchmark designed to
evaluate the generative capabilities of large language models (LLMs) in
Russian. Our main contribution is a novel evaluation methodology that enhances
the interpretability of LLM assessment. For each task type, we define a set of
detailed criteria and develop a scoring protocol where models evaluate
responses and provide justifications for their ratings. This enables
transparent, criteria-driven evaluation beyond traditional resource-consuming,
side-by-side human comparisons. POLLUX includes a detailed, fine-grained
taxonomy of 35 task types covering diverse generative domains such as code
generation, creative writing, and practical assistant use cases, totaling 2,100
manually crafted and professionally authored prompts. Each task is categorized
by difficulty (easy/medium/hard), with experts constructing the dataset
entirely from scratch. We also release a family of LLM-as-a-Judge (7B and 32B)
evaluators trained for nuanced assessment of generative outputs. This approach
provides scalable, interpretable evaluation and annotation tools for model
development, effectively replacing costly and less precise human judgments. | 2025-05-30T14:08:17Z | 178 pages | null | null | Eye of Judgement: Dissecting the Evaluation of Russian-speaking LLMs with POLLUX | ['Nikita Martynov', 'Anastasia Mordasheva', 'Dmitriy Gorbetskiy', 'Danil Astafurov', 'Ulyana Isaeva', 'Elina Basyrova', 'Sergey Skachkov', 'Victoria Berestova', 'Nikolay Ivanov', 'Valeriia Zanina', 'Alena Fenogenova'] | 2,025 | arXiv.org | 0 | 0 | ['Computer Science'] |
2,505.24713 | Voice Conversion Improves Cross-Domain Robustness for Spoken Arabic
Dialect Identification | ['Badr M. Abdullah', 'Matthew Baas', 'Bernd Möbius', 'Dietrich Klakow'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Arabic dialect identification (ADI) systems are essential for large-scale
data collection pipelines that enable the development of inclusive speech
technologies for Arabic language varieties. However, the reliability of current
ADI systems is limited by poor generalization to out-of-domain speech. In this
paper, we present an effective approach based on voice conversion for training
ADI models that achieves state-of-the-art performance and significantly
improves robustness in cross-domain scenarios. Evaluated on a newly collected
real-world test set spanning four different domains, our approach yields
consistent improvements of up to +34.1% in accuracy across domains.
Furthermore, we present an analysis of our approach and demonstrate that voice
conversion helps mitigate the speaker bias in the ADI dataset. We release our
robust ADI model and cross-domain evaluation dataset to support the development
of inclusive speech technologies for Arabic. | 2025-05-30T15:36:08Z | Accepted in Interspeech 2025 | null | null | null | null | null | null | null | null | null |
2,505.24717 | PDE-Transformer: Efficient and Versatile Transformers for Physics
Simulations | ['Benjamin Holzschuh', 'Qiang Liu', 'Georg Kohl', 'Nils Thuerey'] | ['cs.LG'] | We introduce PDE-Transformer, an improved transformer-based architecture for
surrogate modeling of physics simulations on regular grids. We combine recent
architectural improvements of diffusion transformers with adjustments specific
for large-scale simulations to yield a more scalable and versatile
general-purpose transformer architecture, which can be used as the backbone for
building large-scale foundation models in physical sciences. We demonstrate
that our proposed architecture outperforms state-of-the-art transformer
architectures for computer vision on a large dataset of 16 different types of
PDEs. We propose to embed different physical channels individually as
spatio-temporal tokens, which interact via channel-wise self-attention. This
helps to maintain a consistent information density of tokens when learning
multiple types of PDEs simultaneously. We demonstrate that our pre-trained
models achieve improved performance on several challenging downstream tasks
compared to training from scratch and also beat other foundation model
architectures for physics simulations. | 2025-05-30T15:39:54Z | ICML 2025. Code available at
https://github.com/tum-pbs/pde-transformer | null | null | PDE-Transformer: Efficient and Versatile Transformers for Physics Simulations | ['Benjamin Holzschuh', 'Qiang Liu', 'Georg Kohl', 'Nils Thuerey'] | 2,025 | arXiv.org | 1 | 92 | ['Computer Science'] |
2,505.24718 | Reinforcing Video Reasoning with Focused Thinking | ['Jisheng Dang', 'Jingze Wu', 'Teng Wang', 'Xuanhui Lin', 'Nannan Zhu', 'Hongbo Chen', 'Wei-Shi Zheng', 'Meng Wang', 'Tat-Seng Chua'] | ['cs.CV'] | Recent advancements in reinforcement learning, particularly through Group
Relative Policy Optimization (GRPO), have significantly improved multimodal
large language models for complex reasoning tasks. However, two critical
limitations persist: 1) they often produce unfocused, verbose reasoning chains
that obscure salient spatiotemporal cues and 2) binary rewarding fails to
account for partially correct answers, resulting in high reward variance and
inefficient learning. In this paper, we propose TW-GRPO, a novel framework that
enhances visual reasoning with focused thinking and dense reward granularity.
Specifically, we employs a token weighting mechanism that prioritizes tokens
with high informational density (estimated by intra-group information entropy),
suppressing redundant tokens like generic reasoning prefixes. Furthermore, we
reformulate RL training by shifting from single-choice to multi-choice QA
tasks, where soft rewards enable finer-grained gradient estimation by
distinguishing partial correctness. Additionally, we propose question-answer
inversion, a data augmentation strategy to generate diverse multi-choice
samples from existing benchmarks. Experiments demonstrate state-of-the-art
performance on several video reasoning and general understanding benchmarks.
Notably, TW-GRPO achieves 50.4\% accuracy on CLEVRER (18.8\% improvement over
Video-R1) and 65.8\% on MMVU. Our codes are available at
\href{https://github.com/longmalongma/TW-GRPO}. | 2025-05-30T15:42:19Z | null | null | null | null | null | null | null | null | null | null |
2,505.2476 | REASONING GYM: Reasoning Environments for Reinforcement Learning with
Verifiable Rewards | ['Zafir Stojanovski', 'Oliver Stanley', 'Joe Sharratt', 'Richard Jones', 'Abdulhakeem Adefioye', 'Jean Kaddour', 'Andreas Köpf'] | ['cs.LG', 'cs.AI', 'cs.CL'] | We introduce Reasoning Gym (RG), a library of reasoning environments for
reinforcement learning with verifiable rewards. It provides over 100 data
generators and verifiers spanning multiple domains including algebra,
arithmetic, computation, cognition, geometry, graph theory, logic, and various
common games. Its key innovation is the ability to generate virtually infinite
training data with adjustable complexity, unlike most previous reasoning
datasets, which are typically fixed. This procedural generation approach allows
for continuous evaluation across varying difficulty levels. Our experimental
results demonstrate the efficacy of RG in both evaluating and reinforcement
learning of reasoning models. | 2025-05-30T16:20:18Z | For code, see https://github.com/open-thought/reasoning-gym | null | null | REASONING GYM: Reasoning Environments for Reinforcement Learning with Verifiable Rewards | ['Zafir Stojanovski', 'Oliver Stanley', 'Joe Sharratt', 'Richard Jones', 'A. Adefioye', 'Jean Kaddour', 'Andreas Köpf'] | 2,025 | arXiv.org | 1 | 81 | ['Computer Science'] |
2,505.24782 | Context is Gold to find the Gold Passage: Evaluating and Training
Contextual Document Embeddings | ['Max Conti', 'Manuel Faysse', 'Gautier Viaud', 'Antoine Bosselut', 'Céline Hudelot', 'Pierre Colombo'] | ['cs.IR'] | A limitation of modern document retrieval embedding methods is that they
typically encode passages (chunks) from the same documents independently, often
overlooking crucial contextual information from the rest of the document that
could greatly improve individual chunk representations.
In this work, we introduce ConTEB (Context-aware Text Embedding Benchmark), a
benchmark designed to evaluate retrieval models on their ability to leverage
document-wide context. Our results show that state-of-the-art embedding models
struggle in retrieval scenarios where context is required. To address this
limitation, we propose InSeNT (In-sequence Negative Training), a novel
contrastive post-training approach which combined with late chunking pooling
enhances contextual representation learning while preserving computational
efficiency. Our method significantly improves retrieval quality on ConTEB
without sacrificing base model performance. We further find chunks embedded
with our method are more robust to suboptimal chunking strategies and larger
retrieval corpus sizes. We open-source all artifacts at
https://github.com/illuin-tech/contextual-embeddings. | 2025-05-30T16:43:28Z | Under Review | null | null | null | null | null | null | null | null | null |
2,505.2484 | Vision LLMs Are Bad at Hierarchical Visual Understanding, and LLMs Are
the Bottleneck | ['Yuwen Tan', 'Yuan Qing', 'Boqing Gong'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG'] | This paper reveals that many state-of-the-art large language models (LLMs)
lack hierarchical knowledge about our visual world, unaware of even
well-established biology taxonomies. This shortcoming makes LLMs a bottleneck
for vision LLMs' hierarchical visual understanding (e.g., recognizing Anemone
Fish but not Vertebrate). We arrive at these findings using about one million
four-choice visual question answering (VQA) tasks constructed from six
taxonomies and four image datasets. Interestingly, finetuning a vision LLM
using our VQA tasks reaffirms LLMs' bottleneck effect to some extent because
the VQA tasks improve the LLM's hierarchical consistency more than the vision
LLM's. We conjecture that one cannot make vision LLMs understand visual
concepts fully hierarchical until LLMs possess corresponding taxonomy
knowledge. | 2025-05-30T17:40:46Z | 28 pages, 13 figures | null | null | null | null | null | null | null | null | null |
2,505.24864 | ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in
Large Language Models | ['Mingjie Liu', 'Shizhe Diao', 'Ximing Lu', 'Jian Hu', 'Xin Dong', 'Yejin Choi', 'Jan Kautz', 'Yi Dong'] | ['cs.CL', 'cs.AI'] | Recent advances in reasoning-centric language models have highlighted
reinforcement learning (RL) as a promising method for aligning models with
verifiable rewards. However, it remains contentious whether RL truly expands a
model's reasoning capabilities or merely amplifies high-reward outputs already
latent in the base model's distribution, and whether continually scaling up RL
compute reliably leads to improved reasoning performance. In this work, we
challenge prevailing assumptions by demonstrating that prolonged RL (ProRL)
training can uncover novel reasoning strategies that are inaccessible to base
models, even under extensive sampling. We introduce ProRL, a novel training
methodology that incorporates KL divergence control, reference policy
resetting, and a diverse suite of tasks. Our empirical analysis reveals that
RL-trained models consistently outperform base models across a wide range of
pass@k evaluations, including scenarios where base models fail entirely
regardless of the number of attempts. We further show that reasoning boundary
improvements correlates strongly with task competence of base model and
training duration, suggesting that RL can explore and populate new regions of
solution space over time. These findings offer new insights into the conditions
under which RL meaningfully expands reasoning boundaries in language models and
establish a foundation for future work on long-horizon RL for reasoning. We
release model weights to support further research:
https://huggingface.co/nvidia/Nemotron-Research-Reasoning-Qwen-1.5B | 2025-05-30T17:59:01Z | 26 pages, 17 figures | null | null | null | null | null | null | null | null | null |
2,505.24873 | MiniMax-Remover: Taming Bad Noise Helps Video Object Removal | ['Bojia Zi', 'Weixuan Peng', 'Xianbiao Qi', 'Jianan Wang', 'Shihao Zhao', 'Rong Xiao', 'Kam-Fai Wong'] | ['cs.CV'] | Recent advances in video diffusion models have driven rapid progress in video
editing techniques. However, video object removal, a critical subtask of video
editing, remains challenging due to issues such as hallucinated objects and
visual artifacts. Furthermore, existing methods often rely on computationally
expensive sampling procedures and classifier-free guidance (CFG), resulting in
slow inference. To address these limitations, we propose MiniMax-Remover, a
novel two-stage video object removal approach. Motivated by the observation
that text condition is not best suited for this task, we simplify the
pretrained video generation model by removing textual input and cross-attention
layers, resulting in a more lightweight and efficient model architecture in the
first stage. In the second stage, we distilled our remover on successful videos
produced by the stage-1 model and curated by human annotators, using a minimax
optimization strategy to further improve editing quality and inference speed.
Specifically, the inner maximization identifies adversarial input noise ("bad
noise") that makes failure removals, while the outer minimization step trains
the model to generate high-quality removal results even under such challenging
conditions. As a result, our method achieves a state-of-the-art video object
removal results with as few as 6 sampling steps and doesn't rely on CFG,
significantly improving inference efficiency. Extensive experiments demonstrate
the effectiveness and superiority of MiniMax-Remover compared to existing
methods. Codes and Videos are available at: https://minimax-remover.github.io. | 2025-05-30T17:59:45Z | null | null | null | MiniMax-Remover: Taming Bad Noise Helps Video Object Removal | ['Bojia Zi', 'Weixuan Peng', 'Xianbiao Qi', 'Jianan Wang', 'Shihao Zhao', 'Rong Xiao', 'Kam-Fai Wong'] | 2,025 | arXiv.org | 0 | 53 | ['Computer Science'] |
2,505.24875 | ReasonGen-R1: CoT for Autoregressive Image generation models through SFT
and RL | ['Yu Zhang', 'Yunqi Li', 'Yifan Yang', 'Rui Wang', 'Yuqing Yang', 'Dai Qi', 'Jianmin Bao', 'Dongdong Chen', 'Chong Luo', 'Lili Qiu'] | ['cs.CV', 'cs.CL'] | Although chain-of-thought reasoning and reinforcement learning (RL) have
driven breakthroughs in NLP, their integration into generative vision models
remains underexplored. We introduce ReasonGen-R1, a two-stage framework that
first imbues an autoregressive image generator with explicit text-based
"thinking" skills via supervised fine-tuning on a newly generated reasoning
dataset of written rationales, and then refines its outputs using Group
Relative Policy Optimization. To enable the model to reason through text before
generating images, We automatically generate and release a corpus of model
crafted rationales paired with visual prompts, enabling controlled planning of
object layouts, styles, and scene compositions. Our GRPO algorithm uses reward
signals from a pretrained vision language model to assess overall visual
quality, optimizing the policy in each update. Evaluations on GenEval, DPG, and
the T2I benchmark demonstrate that ReasonGen-R1 consistently outperforms strong
baselines and prior state-of-the-art models. More: aka.ms/reasongen. | 2025-05-30T17:59:48Z | null | null | null | null | null | null | null | null | null | null |
2,506.00019 | Amadeus-Verbo Technical Report: The powerful Qwen2.5 family models
trained in Portuguese | ['William Alberto Cruz-Castañeda', 'Marcellus Amadeus'] | ['cs.CL', 'cs.AI'] | This report introduces the experience of developing Amadeus Verbo, a family
of large language models for Brazilian Portuguese. To handle diverse use cases,
Amadeus Verbo includes base-tuned, merged, and instruction-tuned models in
sizes of 0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B parameters. Thus, the main
objective is to show how easy it is to fine-tune foundation models to
democratize the open-source development of Brazilian Portuguese LLMs when data
and resources are available. Amadeus-Verbo family models are all available at
HuggingFace at
https://huggingface.co/collections/amadeusai/amadeus-verbo-qwen25-67cf2e7aae69ce2b3bcdcfda. | 2025-05-20T22:40:00Z | null | null | null | null | null | null | null | null | null | null |
2,506.00129 | Geo-Sign: Hyperbolic Contrastive Regularisation for Geometrically Aware
Sign Language Translation | ['Edward Fish', 'Richard Bowden'] | ['cs.CV', 'cs.LG'] | Recent progress in Sign Language Translation (SLT) has focussed primarily on
improving the representational capacity of large language models to incorporate
Sign Language features. This work explores an alternative direction: enhancing
the geometric properties of skeletal representations themselves. We propose
Geo-Sign, a method that leverages the properties of hyperbolic geometry to
model the hierarchical structure inherent in sign language kinematics. By
projecting skeletal features derived from Spatio-Temporal Graph Convolutional
Networks (ST-GCNs) into the Poincar\'e ball model, we aim to create more
discriminative embeddings, particularly for fine-grained motions like finger
articulations. We introduce a hyperbolic projection layer, a weighted Fr\'echet
mean aggregation scheme, and a geometric contrastive loss operating directly in
hyperbolic space. These components are integrated into an end-to-end
translation framework as a regularisation function, to enhance the
representations within the language model. This work demonstrates the potential
of hyperbolic geometry to improve skeletal representations for Sign Language
Translation, improving on SOTA RGB methods while preserving privacy and
improving computational efficiency. Code available here:
https://github.com/ed-fish/geo-sign. | 2025-05-30T18:05:33Z | Under Review | null | null | Geo-Sign: Hyperbolic Contrastive Regularisation for Geometrically Aware Sign Language Translation | ['Edward Fish', 'Richard Bowden'] | 2,025 | arXiv.org | 1 | 76 | ['Computer Science'] |
2,506.00152 | Aligning Language Models with Observational Data: Opportunities and
Risks from a Causal Perspective | ['Erfan Loghmani'] | ['cs.LG', 'econ.EM', 'stat.ML', 'I.2.6; I.2.7; H.4.0; J.4'] | Large language models are being widely used across industries to generate
content that contributes directly to key performance metrics, such as
conversion rates. Pretrained models, however, often fall short when it comes to
aligning with human preferences or optimizing for business objectives. As a
result, fine-tuning with good-quality labeled data is essential to guide models
to generate content that achieves better results. Controlled experiments, like
A/B tests, can provide such data, but they are often expensive and come with
significant engineering and logistical challenges. Meanwhile, companies have
access to a vast amount of historical (observational) data that remains
underutilized. In this work, we study the challenges and opportunities of
fine-tuning LLMs using observational data. We show that while observational
outcomes can provide valuable supervision, directly fine-tuning models on such
data can lead them to learn spurious correlations. We present empirical
evidence of this issue using various real-world datasets and propose
DeconfoundLM, a method that explicitly removes the effect of known confounders
from reward signals. Using simulation experiments, we demonstrate that
DeconfoundLM improves the recovery of causal relationships and mitigates
failure modes found in fine-tuning methods that ignore or naively incorporate
confounding variables. Our findings highlight that while observational data
presents risks, with the right causal corrections, it can be a powerful source
of signal for LLM alignment. Please refer to the project page for code and
related resources. | 2025-05-30T18:44:09Z | 10+12 pages, 8 figures | null | null | null | null | null | null | null | null | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.