| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2505.14674 | Reward Reasoning Model | ['Jiaxin Guo', 'Zewen Chi', 'Li Dong', 'Qingxiu Dong', 'Xun Wu', 'Shaohan Huang', 'Furu Wei'] | ['cs.CL'] | Reward models play a critical role in guiding large language models toward
outputs that align with human expectations. However, an open challenge remains
in effectively utilizing test-time compute to enhance reward model performance.
In this work, we introduce Reward Reasoning Models (RRMs), which are
specifically designed to execute a deliberate reasoning process before
generating final rewards. Through chain-of-thought reasoning, RRMs leverage
additional test-time compute for complex queries where appropriate rewards are
not immediately apparent. To develop RRMs, we implement a reinforcement
learning framework that fosters self-evolved reward reasoning capabilities
without requiring explicit reasoning traces as training data. Experimental
results demonstrate that RRMs achieve superior performance on reward modeling
benchmarks across diverse domains. Notably, we show that RRMs can adaptively
exploit test-time compute to further improve reward accuracy. The pretrained
reward reasoning models are available at
https://huggingface.co/Reward-Reasoning. | 2025-05-20T17:58:03Z | null | null | null | Reward Reasoning Model | ['Jiaxin Guo', 'Zewen Chi', 'Li Dong', 'Qingxiu Dong', 'Xun Wu', 'Shaohan Huang', 'Furu Wei'] | 2025 | arXiv.org | 1 | 74 | ['Computer Science'] |
2505.14677 | Visionary-R1: Mitigating Shortcuts in Visual Reasoning with
Reinforcement Learning | ['Jiaer Xia', 'Yuhang Zang', 'Peng Gao', 'Yixuan Li', 'Kaiyang Zhou'] | ['cs.CV'] | Learning general-purpose reasoning capabilities has long been a challenging
problem in AI. Recent research in large language models (LLMs), such as
DeepSeek-R1, has shown that reinforcement learning techniques like GRPO can
enable pre-trained LLMs to develop reasoning capabilities using simple
question-answer pairs. In this paper, we aim to train visual language models
(VLMs) to perform reasoning on image data through reinforcement learning and
visual question-answer pairs, without any explicit chain-of-thought (CoT)
supervision. Our findings indicate that simply applying reinforcement learning
to a VLM -- by prompting the model to produce a reasoning chain before
providing an answer -- can lead the model to develop shortcuts from easy
questions, thereby reducing its ability to generalize across unseen data
distributions. We argue that the key to mitigating shortcut learning is to
encourage the model to interpret images prior to reasoning. Therefore, we train
the model to adhere to a caption-reason-answer output format: initially
generating a detailed caption for an image, followed by constructing an
extensive reasoning chain. When trained on 273K CoT-free visual question-answer
pairs and using only reinforcement learning, our model, named Visionary-R1,
outperforms strong multimodal models, such as GPT-4o, Claude3.5-Sonnet, and
Gemini-1.5-Pro, on multiple visual reasoning benchmarks. | 2025-05-20T17:58:35Z | null | null | null | Visionary-R1: Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning | ['Jiaer Xia', 'Y.-F. Zang', 'Peng Gao', 'Yixuan Li', 'Kaiyang Zhou'] | 2025 | arXiv.org | 0 | 47 | ['Computer Science'] |
2505.14683 | Emerging Properties in Unified Multimodal Pretraining | ['Chaorui Deng', 'Deyao Zhu', 'Kunchang Li', 'Chenhui Gou', 'Feng Li', 'Zeyu Wang', 'Shu Zhong', 'Weihao Yu', 'Xiaonan Nie', 'Ziang Song', 'Guang Shi', 'Haoqi Fan'] | ['cs.CV'] | Unifying multimodal understanding and generation has shown impressive
capabilities in cutting-edge proprietary systems. In this work, we introduce
BAGEL, an open-source foundational model that natively supports multimodal
understanding and generation. BAGEL is a unified, decoder-only model pretrained
on trillions of tokens curated from large-scale interleaved text, image, video,
and web data. When scaled with such diverse multimodal interleaved data, BAGEL
exhibits emerging capabilities in complex multimodal reasoning. As a result, it
significantly outperforms open-source unified models in both multimodal
generation and understanding across standard benchmarks, while exhibiting
advanced multimodal reasoning abilities such as free-form image manipulation,
future frame prediction, 3D manipulation, and world navigation. In the hope of
facilitating further opportunities for multimodal research, we share the key
findings, pretraining details, data creation protocol, and release our code and
checkpoints to the community. The project page is at https://bagel-ai.org/ | 2025-05-20T17:59:30Z | 37 pages, 17 figures | null | null | null | null | null | null | null | null | null |
2505.14684 | Mind the Gap: Bridging Thought Leap for Improved Chain-of-Thought Tuning | ['Haolei Xu', 'Yuchen Yan', 'Yongliang Shen', 'Wenqi Zhang', 'Guiyang Hou', 'Shengpei Jiang', 'Kaitao Song', 'Weiming Lu', 'Jun Xiao', 'Yueting Zhuang'] | ['cs.CL', 'cs.AI'] | Large language models (LLMs) have achieved remarkable progress on
mathematical tasks through Chain-of-Thought (CoT) reasoning. However, existing
mathematical CoT datasets often suffer from Thought Leaps due to experts
omitting intermediate steps, which negatively impacts model learning and
generalization. We propose the CoT Thought Leap Bridge Task, which aims to
automatically detect leaps and generate missing intermediate reasoning steps to
restore the completeness and coherence of CoT. To facilitate this, we
constructed a specialized training dataset called ScaleQM+, based on the
structured ScaleQuestMath dataset, and trained CoT-Bridge to bridge thought
leaps. Through comprehensive experiments on mathematical reasoning benchmarks,
we demonstrate that models fine-tuned on bridged datasets consistently
outperform those trained on original datasets, with improvements of up to
+5.87% on NuminaMath. Our approach effectively enhances distilled data (+3.02%)
and provides better starting points for reinforcement learning (+3.1%),
functioning as a plug-and-play module compatible with existing optimization
techniques. Furthermore, CoT-Bridge demonstrates improved generalization to
out-of-domain logical reasoning tasks, confirming that enhancing reasoning
completeness yields broadly applicable benefits. | 2025-05-20T17:59:31Z | Project: https://zju-real.github.io/CoT-Bridge/ | null | null | null | null | null | null | null | null | null |
2505.14766 | This Time is Different: An Observability Perspective on Time Series
Foundation Models | ['Ben Cohen', 'Emaad Khwaja', 'Youssef Doubli', 'Salahidine Lemaachi', 'Chris Lettieri', 'Charles Masson', 'Hugo Miccinilli', 'Elise Ramé', 'Qiqi Ren', 'Afshin Rostamizadeh', 'Jean Ogier du Terrail', 'Anna-Monica Toon', 'Kan Wang', 'Stephan Xie', 'Zongzhe Xu', 'Viktoriya Zhukova', 'David Asker', 'Ameet Talwalkar', 'Othmane Abou-Amal'] | ['cs.LG', 'cs.AI'] | We introduce Toto, a time series forecasting foundation model with 151
million parameters. Toto uses a modern decoder-only architecture coupled with
architectural innovations designed to account for specific challenges found in
multivariate observability time series data. Toto's pre-training corpus is a
mixture of observability data, open datasets, and synthetic data, and is
4-10$\times$ larger than those of leading time series foundation models.
Additionally, we introduce BOOM, a large-scale benchmark consisting of 350
million observations across 2,807 real-world time series. For both Toto and
BOOM, we source observability data exclusively from Datadog's own telemetry and
internal observability metrics. Extensive evaluations demonstrate that Toto
achieves state-of-the-art performance on both BOOM and on established general
purpose time series forecasting benchmarks. Toto's model weights, inference
code, and evaluation scripts, as well as BOOM's data and evaluation code, are
all available as open source under the Apache 2.0 License available at
https://huggingface.co/Datadog/Toto-Open-Base-1.0 and
https://github.com/DataDog/toto. | 2025-05-20T17:48:13Z | null | null | null | null | null | null | null | null | null | null |
2505.14810 | Scaling Reasoning, Losing Control: Evaluating Instruction Following in
Large Reasoning Models | ['Tingchen Fu', 'Jiawei Gu', 'Yafu Li', 'Xiaoye Qu', 'Yu Cheng'] | ['cs.CL', 'cs.AI'] | Instruction-following is essential for aligning large language models (LLMs)
with user intent. While recent reasoning-oriented models exhibit impressive
performance on complex mathematical problems, their ability to adhere to
natural language instructions remains underexplored. In this work, we introduce
MathIF, a dedicated benchmark for evaluating instruction-following in
mathematical reasoning tasks. Our empirical analysis reveals a consistent
tension between scaling up reasoning capacity and maintaining controllability,
as models that reason more effectively often struggle to comply with user
directives. We find that models tuned on distilled long chains-of-thought or
trained with reasoning-oriented reinforcement learning often degrade in
instruction adherence, especially when generation length increases.
Furthermore, we show that even simple interventions can partially recover
obedience, though at the cost of reasoning performance. These findings
highlight a fundamental tension in current LLM training paradigms and motivate
the need for more instruction-aware reasoning models. We release the code and
data at https://github.com/TingchenFu/MathIF. | 2025-05-20T18:18:01Z | null | null | Scaling Reasoning, Losing Control: Evaluating Instruction Following in Large Reasoning Models | ['Ting Fu', 'Jiawei Gu', 'Yafu Li', 'Xiaoye Qu', 'Yu Cheng'] | 2025 | arXiv.org | 1 | 44 | ['Computer Science'] |
2505.14884 | Polar Sparsity: High Throughput Batched LLM Inferencing with Scalable
Contextual Sparsity | ['Susav Shrestha', 'Brad Settlemyer', 'Nikoli Dryden', 'Narasimha Reddy'] | ['cs.LG', 'cs.AI'] | Accelerating large language model (LLM) inference is critical for real-world
deployments requiring high throughput and low latency. Contextual sparsity,
where each token dynamically activates only a small subset of the model
parameters, shows promise but does not scale to large batch sizes because the
union of active neurons quickly approaches dense computation. We introduce Polar
Sparsity, highlighting a key shift in sparsity importance from MLP to Attention
layers as we scale batch size and sequence length. While MLP layers become more
compute-efficient under batching, their sparsity vanishes. In contrast,
attention becomes increasingly expensive at scale, while its head
sparsity remains stable and batch-invariant. We develop hardware-efficient,
sparsity-aware GPU kernels for selective MLP and Attention computations,
delivering up to \(2.2\times\) end-to-end speedups for models like OPT, LLaMA-2
\& 3, across various batch sizes and sequence lengths without compromising
accuracy. To our knowledge, this is the first work to demonstrate that
contextual sparsity can scale effectively to large batch sizes, delivering
substantial inference acceleration with minimal changes, making Polar Sparsity
practical for large-scale, high-throughput LLM deployment systems. Our code is
available at: https://github.com/susavlsh10/Polar-Sparsity. | 2025-05-20T20:15:42Z | null | null | null | null | null | null | null | null | null | null |
2505.14969 | STree: Speculative Tree Decoding for Hybrid State-Space Models | ['Yangchao Wu', 'Zongyue Qin', 'Alex Wong', 'Stefano Soatto'] | ['cs.LG', 'cs.AI'] | Speculative decoding is a technique to leverage hardware concurrency to
improve the efficiency of large-scale autoregressive (AR) Transformer models by
enabling multiple steps of token generation in a single forward pass.
State-space models (SSMs) are already more efficient than AR Transformers,
since their state summarizes all past data with no need to cache or re-process
tokens in the sliding window context. However, their state can also comprise
thousands of tokens; so, speculative decoding has recently been extended to
SSMs. Existing approaches, however, do not leverage the tree-based verification
methods, since current SSMs lack the means to compute a token tree efficiently.
We propose the first scalable algorithm to perform tree-based speculative
decoding in state-space models (SSMs) and hybrid architectures of SSMs and
Transformer layers. We exploit the structure of accumulated state transition
matrices to facilitate tree-based speculative decoding with minimal overhead to
current SSM state update implementations. With the algorithm, we describe a
hardware-aware implementation that improves upon the naive application of AR Transformer
tree-based speculative decoding methods to SSMs. Furthermore, we outperform
vanilla speculative decoding with SSMs even with a baseline drafting model and
tree structure on three different benchmarks, opening up opportunities for
further speed up with SSM and hybrid model inference. Code will be released
upon paper acceptance. | 2025-05-20T23:12:16Z | null | null | null | null | null | null | null | null | null | null |
2505.15093 | Steering Generative Models with Experimental Data for Protein Fitness
Optimization | ['Jason Yang', 'Wenda Chu', 'Daniel Khalil', 'Raul Astudillo', 'Bruce J. Wittmann', 'Frances H. Arnold', 'Yisong Yue'] | ['q-bio.BM', 'cs.LG'] | Protein fitness optimization involves finding a protein sequence that
maximizes desired quantitative properties in a combinatorially large design
space of possible sequences. Recent developments in steering protein generative
models (e.g., diffusion models, language models) offer a promising approach.
However, by and large, past studies have optimized surrogate rewards and/or
utilized large amounts of labeled data for steering, making it unclear how well
existing methods perform and compare to each other in real-world optimization
campaigns where fitness is measured by low-throughput wet-lab assays. In this
study, we explore fitness optimization using small amounts (hundreds) of
labeled sequence-fitness pairs and comprehensively evaluate strategies such as
classifier guidance and posterior sampling for guiding generation from
different discrete diffusion models of protein sequences. We also demonstrate
how guidance can be integrated into adaptive sequence selection akin to
Thompson sampling in Bayesian optimization, showing that plug-and-play guidance
strategies offer advantages compared to alternatives such as reinforcement
learning with protein language models. | 2025-05-21T04:30:48Z | null | null | null | null | null | null | null | null | null | null |
2505.15263 | gen2seg: Generative Models Enable Generalizable Instance Segmentation | ['Om Khangaonkar', 'Hamed Pirsiavash'] | ['cs.CV', 'cs.LG'] | By pretraining to synthesize coherent images from perturbed inputs,
generative models inherently learn to understand object boundaries and scene
compositions. How can we repurpose these generative representations for
general-purpose perceptual organization? We finetune Stable Diffusion and MAE
(encoder+decoder) for category-agnostic instance segmentation using our
instance coloring loss exclusively on a narrow set of object types (indoor
furnishings and cars). Surprisingly, our models exhibit strong zero-shot
generalization, accurately segmenting objects of types and styles unseen in
finetuning (and in many cases, MAE's ImageNet-1K pretraining too). Our
best-performing models closely approach the heavily supervised SAM when
evaluated on unseen object types and styles, and outperform it when segmenting
fine structures and ambiguous boundaries. In contrast, existing promptable
segmentation architectures or discriminatively pretrained models fail to
generalize. This suggests that generative models learn an inherent grouping
mechanism that transfers across categories and domains, even without
internet-scale pretraining. Code, pretrained models, and demos are available on
our website. | 2025-05-21T08:42:05Z | Website: https://reachomk.github.io/gen2seg/ | null | null | gen2seg: Generative Models Enable Generalizable Instance Segmentation | ['Om Khangaonkar', 'Hamed Pirsiavash'] | 2025 | arXiv.org | 0 | 65 | ['Computer Science'] |
2505.15270 | Scaling Diffusion Transformers Efficiently via $μ$P | ['Chenyu Zheng', 'Xinyu Zhang', 'Rongzhen Wang', 'Wei Huang', 'Zhi Tian', 'Weilin Huang', 'Jun Zhu', 'Chongxuan Li'] | ['cs.LG', 'cs.AI', 'cs.CV'] | Diffusion Transformers have emerged as the foundation for vision generative
models, but their scalability is limited by the high cost of hyperparameter
(HP) tuning at large scales. Recently, Maximal Update Parametrization ($\mu$P)
was proposed for vanilla Transformers, which enables stable HP transfer from
small to large language models, and dramatically reduces tuning costs. However,
it remains unclear whether $\mu$P of vanilla Transformers extends to diffusion
Transformers, which differ architecturally and objectively. In this work, we
generalize standard $\mu$P to diffusion Transformers and validate its
effectiveness through large-scale experiments. First, we rigorously prove that
$\mu$P of mainstream diffusion Transformers, including DiT, U-ViT,
PixArt-$\alpha$, and MMDiT, aligns with that of the vanilla Transformer,
enabling the direct application of existing $\mu$P methodologies. Leveraging
this result, we systematically demonstrate that DiT-$\mu$P enjoys robust HP
transferability. Notably, DiT-XL-2-$\mu$P with transferred learning rate
achieves 2.9 times faster convergence than the original DiT-XL-2. Finally, we
validate the effectiveness of $\mu$P on text-to-image generation by scaling
PixArt-$\alpha$ from 0.04B to 0.61B and MMDiT from 0.18B to 18B. In both cases,
models under $\mu$P outperform their respective baselines while requiring small
tuning cost, only 5.5% of one training run for PixArt-$\alpha$ and 3% of
consumption by human experts for MMDiT-18B. These results establish $\mu$P as a
principled and efficient framework for scaling diffusion Transformers. | 2025-05-21T08:49:03Z | 35 pages, 10 figures, 15 tables | null | null | null | null | null | null | null | null | null |
2505.15277 | Web-Shepherd: Advancing PRMs for Reinforcing Web Agents | ['Hyungjoo Chae', 'Sunghwan Kim', 'Junhee Cho', 'Seungone Kim', 'Seungjun Moon', 'Gyeom Hwangbo', 'Dongha Lim', 'Minjin Kim', 'Yeonjun Hwang', 'Minju Gwak', 'Dongwook Choi', 'Minseok Kang', 'Gwanhoon Im', 'ByeongUng Cho', 'Hyojun Kim', 'Jun Hee Han', 'Taeyoon Kwon', 'Minju Kim', 'Beong-woo Kwak', 'Dongjin Kang', 'Jinyoung Yeo'] | ['cs.CL'] | Web navigation is a unique domain that can automate many repetitive real-life
tasks and is challenging as it requires long-horizon sequential decision making
beyond typical multimodal large language model (MLLM) tasks. Yet, specialized
reward models for web navigation that can be utilized during both training and
test-time have been absent until now. Despite the importance of speed and
cost-effectiveness, prior works have utilized MLLMs as reward models, which
poses significant constraints for real-world deployment. To address this, in
this work, we propose the first process reward model (PRM) called Web-Shepherd
which can assess web navigation trajectories at the step level. To achieve
this, we first construct the WebPRM Collection, a large-scale dataset with 40K
step-level preference pairs and annotated checklists spanning diverse domains
and difficulty levels. Next, we also introduce the WebRewardBench, the first
meta-evaluation benchmark for evaluating PRMs. In our experiments, we observe
that our Web-Shepherd achieves about 30 points better accuracy compared to
using GPT-4o on WebRewardBench. Furthermore, when testing on WebArena-lite by
using GPT-4o-mini as the policy and Web-Shepherd as the verifier, we achieve
10.9 points better performance at 10 times lower cost compared to using GPT-4o-mini
as the verifier. Our model, dataset, and code are publicly available at LINK. | 2025-05-21T08:56:55Z | Work in progress | null | null | null | null | null | null | null | null | null |
2505.15379 | The P$^3$ dataset: Pixels, Points and Polygons for Multimodal Building
Vectorization | ['Raphael Sulzer', 'Liuyun Duan', 'Nicolas Girard', 'Florent Lafarge'] | ['cs.CV'] | We present the P$^3$ dataset, a large-scale multimodal benchmark for building
vectorization, constructed from aerial LiDAR point clouds, high-resolution
aerial imagery, and vectorized 2D building outlines, collected across three
continents. The dataset contains over 10 billion LiDAR points with
decimeter-level accuracy and RGB images at a ground sampling distance of 25
centimeters. While many existing datasets primarily focus on the image modality,
P$^3$ offers a complementary perspective by also incorporating dense 3D
information. We demonstrate that LiDAR point clouds serve as a robust modality
for predicting building polygons, both in hybrid and end-to-end learning
frameworks. Moreover, fusing aerial LiDAR and imagery further improves accuracy
and geometric quality of predicted polygons. The P$^3$ dataset is publicly
available, along with code and pretrained weights of three state-of-the-art
models for building polygon prediction at
https://github.com/raphaelsulzer/PixelsPointsPolygons. | 2025-05-21T11:16:29Z | null | null | null | The P3 dataset: Pixels, Points and Polygons for Multimodal Building Vectorization | ['Raphael Sulzer', 'Liuyun Duan', 'Nicolas Girard', 'Florent Lafarge'] | 2025 | arXiv.org | 0 | 39 | ['Computer Science'] |
2505.15425 | On the Robustness of Medical Vision-Language Models: Are they Truly
Generalizable? | ['Raza Imam', 'Rufael Marew', 'Mohammad Yaqub'] | ['cs.CV'] | Medical Vision-Language Models (MVLMs) have achieved par excellence
generalization in medical image analysis, yet their performance under noisy,
corrupted conditions remains largely untested. Clinical imaging is inherently
susceptible to acquisition artifacts and noise; however, existing evaluations
predominantly assess generally clean datasets, overlooking robustness -- i.e.,
the model's ability to perform under real-world distortions. To address this
gap, we first introduce MediMeta-C, a corruption benchmark that systematically
applies several perturbations across multiple medical imaging datasets.
Combined with MedMNIST-C, this establishes a comprehensive robustness
evaluation framework for MVLMs. We further propose RobustMedCLIP, a visual
encoder adaptation of a pretrained MVLM that incorporates few-shot tuning to
enhance resilience against corruptions. Through extensive experiments, we
benchmark 5 major MVLMs across 5 medical imaging modalities, revealing that
existing models exhibit severe degradation under corruption and struggle with
domain-modality tradeoffs. Our findings highlight the necessity of diverse
training and robust adaptation strategies, demonstrating that efficient
low-rank adaptation, when paired with few-shot tuning, improves robustness while
preserving generalization across modalities. | 2025-05-21T12:08:31Z | Dataset and Code is available at
https://github.com/BioMedIA-MBZUAI/RobustMedCLIP Accepted at: Medical Image
Understanding and Analysis (MIUA) 2025 | null | null | null | null | null | null | null | null | null |
2505.15436 | Chain-of-Focus: Adaptive Visual Search and Zooming for Multimodal
Reasoning via RL | ['Xintong Zhang', 'Zhi Gao', 'Bofei Zhang', 'Pengxiang Li', 'Xiaowen Zhang', 'Yang Liu', 'Tao Yuan', 'Yuwei Wu', 'Yunde Jia', 'Song-Chun Zhu', 'Qing Li'] | ['cs.CV'] | Vision language models (VLMs) have achieved impressive performance across a
variety of computer vision tasks. However, the multimodal reasoning capability
has not been fully explored in existing models. In this paper, we propose a
Chain-of-Focus (CoF) method that allows VLMs to perform adaptive focusing and
zooming in on key image regions based on obtained visual cues and the given
questions, achieving efficient multimodal reasoning. To enable this CoF
capability, we present a two-stage training pipeline, including supervised
fine-tuning (SFT) and reinforcement learning (RL). In the SFT stage, we
construct the MM-CoF dataset, comprising 3K samples derived from a visual agent
designed to adaptively identify key regions to solve visual tasks with
different image resolutions and questions. We use MM-CoF to fine-tune the
Qwen2.5-VL model for cold start. In the RL stage, we leverage the outcome
accuracies and formats as rewards to update the Qwen2.5-VL model, enabling
further refining the search and reasoning strategy of models without human
priors. Our model achieves significant improvements on multiple benchmarks. On
the V* benchmark that requires strong visual reasoning capability, our model
outperforms existing VLMs by 5% across 8 image resolutions ranging from 224 to
4K, demonstrating the effectiveness of the proposed CoF method and facilitating
the more efficient deployment of VLMs in practical applications. | 2025-05-21T12:18:15Z | null | null | null | Chain-of-Focus: Adaptive Visual Search and Zooming for Multimodal Reasoning via RL | ['Xintong Zhang', 'Zhi Gao', 'Bofei Zhang', 'Pengxiang Li', 'Xiaowen Zhang', 'Yang Liu', 'Tao Yuan', 'Yuwei Wu', 'Yunde Jia', 'Song-Chun Zhu', 'Qing Li'] | 2025 | arXiv.org | 0 | 49 | ['Computer Science'] |
2505.15607 | From Problem-Solving to Teaching Problem-Solving: Aligning LLMs with
Pedagogy using Reinforcement Learning | ['David Dinucu-Jianu', 'Jakub Macina', 'Nico Daheim', 'Ido Hakimi', 'Iryna Gurevych', 'Mrinmaya Sachan'] | ['cs.CL', 'cs.AI'] | Large language models (LLMs) can transform education, but their optimization
for direct question-answering often undermines effective pedagogy which
requires strategically withholding answers. To mitigate this, we propose an
online reinforcement learning (RL)-based alignment framework that can quickly
adapt LLMs into effective tutors using simulated student-tutor interactions by
emphasizing pedagogical quality and guided problem-solving over simply giving
away answers. We use our method to train a 7B parameter tutor model without
human annotations which reaches similar performance to larger proprietary
models like LearnLM. We introduce a controllable reward weighting to balance
pedagogical support and student solving accuracy, allowing us to trace the
Pareto frontier between these two objectives. Our models better preserve
reasoning capabilities than single-turn SFT baselines and can optionally
enhance interpretability through thinking tags that expose the model's
instructional planning. | 2025-05-21T15:00:07Z | David Dinucu-Jianu and Jakub Macina contributed equally. Code
available: https://github.com/eth-lre/PedagogicalRL | null | null | null | null | null | null | null | null | null |
2505.15776 | ConvSearch-R1: Enhancing Query Reformulation for Conversational Search
with Reasoning via Reinforcement Learning | ['Changtai Zhu', 'Siyin Wang', 'Ruijun Feng', 'Kai Song', 'Xipeng Qiu'] | ['cs.CL', 'cs.IR'] | Conversational search systems require effective handling of context-dependent
queries that often contain ambiguity, omission, and coreference. Conversational
Query Reformulation (CQR) addresses this challenge by transforming these
queries into self-contained forms suitable for off-the-shelf retrievers.
However, existing CQR approaches suffer from two critical constraints: high
dependency on costly external supervision from human annotations or large
language models, and insufficient alignment between the rewriting model and
downstream retrievers. We present ConvSearch-R1, the first self-driven
framework that completely eliminates dependency on external rewrite supervision
by leveraging reinforcement learning to optimize reformulation directly through
retrieval signals. Our novel two-stage approach combines Self-Driven Policy
Warm-Up to address the cold-start problem through retrieval-guided
self-distillation, followed by Retrieval-Guided Reinforcement Learning with a
specially designed rank-incentive reward shaping mechanism that addresses the
sparsity issue in conventional retrieval metrics. Extensive experiments on
TopiOCQA and QReCC datasets demonstrate that ConvSearch-R1 significantly
outperforms previous state-of-the-art methods, achieving over 10% improvement
on the challenging TopiOCQA dataset while using smaller 3B parameter models
without any external supervision. | 2025-05-21T17:27:42Z | null | null | null | null | null | null | null | null | null | null |
2505.15801 | VerifyBench: Benchmarking Reference-based Reward Systems for Large
Language Models | ['Yuchen Yan', 'Jin Jiang', 'Zhenbang Ren', 'Yijun Li', 'Xudong Cai', 'Yang Liu', 'Xin Xu', 'Mengdi Zhang', 'Jian Shao', 'Yongliang Shen', 'Jun Xiao', 'Yueting Zhuang'] | ['cs.CL', 'cs.AI'] | Large reasoning models such as OpenAI o1 and DeepSeek-R1 have achieved
remarkable performance in the domain of reasoning. A key component of their
training is the incorporation of verifiable rewards within reinforcement
learning (RL). However, existing reward benchmarks do not evaluate
reference-based reward systems, leaving researchers with limited understanding
of the accuracy of verifiers used in RL. In this paper, we introduce two
benchmarks, VerifyBench and VerifyBench-Hard, designed to assess the
performance of reference-based reward systems. These benchmarks are constructed
through meticulous data collection and curation, followed by careful human
annotation to ensure high quality. Current models still show considerable room
for improvement on both VerifyBench and VerifyBench-Hard, especially
smaller-scale models. Furthermore, we conduct a thorough and comprehensive
analysis of evaluation results, offering insights for understanding and
developing reference-based reward systems. Our proposed benchmarks serve as
effective tools for guiding the development of verifier accuracy and the
reasoning capabilities of models trained via RL in reasoning tasks. | 2025-05-21T17:54:43Z | Project Page: https://zju-real.github.io/VerifyBench Dataset:
https://huggingface.co/datasets/ZJU-REAL/VerifyBench Code:
https://github.com/ZJU-REAL/VerifyBench | null | null | VerifyBench: Benchmarking Reference-based Reward Systems for Large Language Models | ['Yuchen Yan', 'Jin Jiang', 'Zhenbang Ren', 'Yijun Li', 'Xudong Cai', 'Yang Liu', 'Xin Xu', 'Mengdi Zhang', 'Jian Shao', 'Yongliang Shen', 'Jun Xiao', 'Yueting Zhuang'] | 2025 | arXiv.org | 0 | 58 | ['Computer Science'] |
2505.15809 | MMaDA: Multimodal Large Diffusion Language Models | ['Ling Yang', 'Ye Tian', 'Bowen Li', 'Xinchen Zhang', 'Ke Shen', 'Yunhai Tong', 'Mengdi Wang'] | ['cs.CV'] | We introduce MMaDA, a novel class of multimodal diffusion foundation models
designed to achieve superior performance across diverse domains such as textual
reasoning, multimodal understanding, and text-to-image generation. The approach
is distinguished by three key innovations: (i) MMaDA adopts a unified diffusion
architecture with a shared probabilistic formulation and a modality-agnostic
design, eliminating the need for modality-specific components. This
architecture ensures seamless integration and processing across different data
types. (ii) We implement a mixed long chain-of-thought (CoT) fine-tuning
strategy that curates a unified CoT format across modalities. By aligning
reasoning processes between textual and visual domains, this strategy
facilitates cold-start training for the final reinforcement learning (RL)
stage, thereby enhancing the model's ability to handle complex tasks from the
outset. (iii) We propose UniGRPO, a unified policy-gradient-based RL algorithm
specifically tailored for diffusion foundation models. Utilizing diversified
reward modeling, UniGRPO unifies post-training across both reasoning and
generation tasks, ensuring consistent performance improvements. Experimental
results demonstrate that MMaDA-8B exhibits strong generalization capabilities
as a unified multimodal foundation model. It surpasses powerful models like
LLaMA-3-7B and Qwen2-7B in textual reasoning, outperforms Show-o and SEED-X in
multimodal understanding, and excels over SDXL and Janus in text-to-image
generation. These achievements highlight MMaDA's effectiveness in bridging the
gap between pretraining and post-training within unified diffusion
architectures, providing a comprehensive framework for future research and
development. We open-source our code and trained models at:
https://github.com/Gen-Verse/MMaDA | 2025-05-21T17:59:05Z | Project: https://github.com/Gen-Verse/MMaDA | null | null | MMaDA: Multimodal Large Diffusion Language Models | ['Ling Yang', 'Ye Tian', 'Bowen Li', 'Xinchen Zhang', 'Ke Shen', 'Yunhai Tong', 'Mengdi Wang'] | 2025 | arXiv.org | 6 | 98 | ['Computer Science'] |
2505.15960 | Training Step-Level Reasoning Verifiers with Formal Verification Tools | ['Ryo Kamoi', 'Yusen Zhang', 'Nan Zhang', 'Sarkar Snigdha Sarathi Das', 'Rui Zhang'] | ['cs.CL'] | Process Reward Models (PRMs), which provide step-by-step feedback on the
reasoning generated by Large Language Models (LLMs), are receiving increasing
attention. However, two key research gaps remain: collecting accurate
step-level error labels for training typically requires costly human
annotation, and existing PRMs are limited to math reasoning problems. In
response to these gaps, this paper aims to address the challenges of automatic
dataset creation and the generalization of PRMs to diverse reasoning tasks. To
achieve this goal, we propose FoVer, an approach for training PRMs on
step-level error labels automatically annotated by formal verification tools,
such as Z3 for formal logic and Isabelle for theorem proving, which provide
automatic and accurate verification for symbolic tasks. Using this approach, we
synthesize a training dataset with error labels on LLM responses for formal
logic and theorem-proving tasks without human annotation. Although this data
synthesis is feasible only for tasks compatible with formal verification, we
observe that LLM-based PRMs trained on our dataset exhibit cross-task
generalization, improving verification across diverse reasoning tasks.
Specifically, PRMs trained with FoVer significantly outperform baseline PRMs
based on the original LLMs and achieve competitive or superior results compared
to state-of-the-art PRMs trained on labels annotated by humans or stronger
models, as measured by step-level verification on ProcessBench and Best-of-K
performance across 12 reasoning benchmarks, including MATH, AIME, ANLI, MMLU,
and BBH. The datasets, models, and code are provided at
https://github.com/psunlpgroup/FoVer. | 2025-05-21T19:23:45Z | Datasets, models, and code are provided at
https://github.com/psunlpgroup/FoVer. Please also refer to our project
website at https://fover-prm.github.io/ | null | null | Training Step-Level Reasoning Verifiers with Formal Verification Tools | ['Ryo Kamoi', 'Yusen Zhang', 'Nan Zhang', 'Sarkar Snigdha Sarathi Das', 'Rui Zhang'] | 2025 | arXiv.org | 0 | 65 | ['Computer Science'] |
2505.15966 | Pixel Reasoner: Incentivizing Pixel-Space Reasoning with
Curiosity-Driven Reinforcement Learning | ['Alex Su', 'Haozhe Wang', 'Weiming Ren', 'Fangzhen Lin', 'Wenhu Chen'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Chain-of-thought reasoning has significantly improved the performance of
Large Language Models (LLMs) across various domains. However, this reasoning
process has been confined exclusively to textual space, limiting its
effectiveness in visually intensive tasks. To address this limitation, we
introduce the concept of reasoning in the pixel-space. Within this novel
framework, Vision-Language Models (VLMs) are equipped with a suite of visual
reasoning operations, such as zoom-in and select-frame. These operations enable
VLMs to directly inspect, interrogate, and infer from visual evidence, thereby
enhancing reasoning fidelity for visual tasks. Cultivating such pixel-space
reasoning capabilities in VLMs presents notable challenges, including the
model's initially imbalanced competence and its reluctance to adopt the newly
introduced pixel-space operations. We address these challenges through a
two-phase training approach. The first phase employs instruction tuning on
synthesized reasoning traces to familiarize the model with the novel visual
operations. Following this, a reinforcement learning (RL) phase leverages a
curiosity-driven reward scheme to balance exploration between pixel-space
reasoning and textual reasoning. With these visual operations, VLMs can
interact with complex visual inputs, such as information-rich images or videos
to proactively gather necessary information. We demonstrate that this approach
significantly improves VLM performance across diverse visual reasoning
benchmarks. Our 7B model, Pixel Reasoner, achieves 84% on V* bench, 74% on
TallyQA-Complex, and 84% on InfographicsVQA, marking the highest accuracy
achieved by any open-source model to date. These results highlight the
importance of pixel-space reasoning and the effectiveness of our framework. | 2025-05-21T19:35:08Z | Project Page: https://tiger-ai-lab.github.io/Pixel-Reasoner/,
Hands-on Demo: https://huggingface.co/spaces/TIGER-Lab/Pixel-Reasoner | null | null | null | null | null | null | null | null | null |
2505.16000 | Leveraging Online Data to Enhance Medical Knowledge in a Small Persian
Language Model | ['Mehrdad Ghassabi', 'Pedram Rostami', 'Hamidreza Baradaran Kashani', 'Amirhossein Poursina', 'Zahra Kazemi', 'Milad Tavakoli'] | ['cs.CL', 'cs.AI'] | The rapid advancement of language models has demonstrated the potential of
artificial intelligence in the healthcare industry. However, small language
models struggle with specialized domains in low-resource languages like
Persian. While numerous medical-domain websites exist in Persian, no curated
dataset or corpus has been available, making ours the first of its kind. This
study explores the enhancement of medical knowledge in a small language model
by leveraging accessible online data, including a crawled corpus from medical
magazines and a dataset of real doctor-patient QA pairs. We fine-tuned a
baseline model using our curated data to improve its medical knowledge.
Benchmark evaluations demonstrate that the fine-tuned model achieves improved
accuracy in medical question answering and provides better responses compared
to its baseline. This work highlights the potential of leveraging open-access
online data to enrich small language models in medical fields, providing a
novel solution for Persian medical AI applications suitable for
resource-constrained environments. | 2025-05-21T20:30:47Z | 6 pages, 4 figures | null | null | null | null | null | null | null | null | null |
2505.16160 | EduBench: A Comprehensive Benchmarking Dataset for Evaluating Large
Language Models in Diverse Educational Scenarios | ['Bin Xu', 'Yu Bai', 'Huashan Sun', 'Yiguan Lin', 'Siming Liu', 'Xinyue Liang', 'Yaolin Li', 'Yang Gao', 'Heyan Huang'] | ['cs.CL'] | As large language models continue to advance, their application in
educational contexts remains underexplored and under-optimized. In this paper,
we address this gap by introducing the first diverse benchmark tailored for
educational scenarios, incorporating synthetic data containing 9 major
scenarios and over 4,000 distinct educational contexts. To enable comprehensive
assessment, we propose a set of multi-dimensional evaluation metrics that cover
12 critical aspects relevant to both teachers and students. We further apply
human annotation to ensure the effectiveness of the model-generated evaluation
responses. Additionally, we successfully train a relatively small-scale model on
our constructed dataset and demonstrate that it can achieve performance
comparable to state-of-the-art large models (e.g., Deepseek V3, Qwen Max) on
the test set. Overall, this work provides a practical foundation for the
development and evaluation of education-oriented language models. Code and data
are released at https://github.com/ybai-nlp/EduBench. | 2025-05-22T03:01:28Z | null | null | null | null | null | null | null | null | null | null |
2505.16186 | SafeKey: Amplifying Aha-Moment Insights for Safety Reasoning | ['Kaiwen Zhou', 'Xuandong Zhao', 'Gaowen Liu', 'Jayanth Srinivasa', 'Aosong Feng', 'Dawn Song', 'Xin Eric Wang'] | ['cs.AI', 'cs.CL', 'cs.CR'] | Large Reasoning Models (LRMs) introduce a new generation paradigm of
explicitly reasoning before answering, leading to remarkable improvements in
complex tasks. However, they pose great safety risks against harmful queries
and adversarial attacks. While recent mainstream safety efforts on LRMs,
such as supervised fine-tuning (SFT), improve safety performance, we find that
SFT-aligned models struggle to generalize to unseen jailbreak prompts. After
thorough investigation of LRMs' generation, we identify a safety aha moment
that can activate safety reasoning and lead to a safe response. This aha moment
typically appears in the 'key sentence', which follows the model's query
understanding process and can indicate whether the model will proceed safely.
Based on these insights, we propose SafeKey, including two complementary
objectives to better activate the safety aha moment in the key sentence: (1) a
Dual-Path Safety Head to enhance the safety signal in the model's internal
representations before the key sentence, and (2) a Query-Mask Modeling
objective to improve the model's attention to its query understanding, which
has important safety hints. Experiments across multiple safety benchmarks
demonstrate that our methods significantly improve safety generalization to a
wide range of jailbreak attacks and out-of-distribution harmful prompts,
lowering the average harmfulness rate by 9.6%, while maintaining general
abilities. Our analysis reveals how SafeKey enhances safety by reshaping
internal attention and improving the quality of hidden representations. | 2025-05-22T03:46:03Z | null | null | null | SafeKey: Amplifying Aha-Moment Insights for Safety Reasoning | ['KAI-QING Zhou', 'Xuandong Zhao', 'Gaowen Liu', 'Jayanth Srinivasa', 'Aosong Feng', 'D. Song', 'Xin Eric Wang'] | 2025 | arXiv.org | 0 | 35 | ['Computer Science'] |
2505.16239 | DOVE: Efficient One-Step Diffusion Model for Real-World Video
Super-Resolution | ['Zheng Chen', 'Zichen Zou', 'Kewei Zhang', 'Xiongfei Su', 'Xin Yuan', 'Yong Guo', 'Yulun Zhang'] | ['cs.CV'] | Diffusion models have demonstrated promising performance in real-world video
super-resolution (VSR). However, the dozens of sampling steps they require
make inference extremely slow. Sampling acceleration techniques, particularly
single-step, provide a potential solution. Nonetheless, achieving one step in
VSR remains challenging, due to the high training overhead on video data and
stringent fidelity demands. To tackle the above issues, we propose DOVE, an
efficient one-step diffusion model for real-world VSR. DOVE is obtained by
fine-tuning a pretrained video diffusion model (i.e., CogVideoX). To
effectively train DOVE, we introduce the latent-pixel training strategy. The
strategy employs a two-stage scheme to gradually adapt the model to the video
super-resolution task. Meanwhile, we design a video processing pipeline to
construct a high-quality dataset tailored for VSR, termed HQ-VSR. Fine-tuning
on this dataset further enhances the restoration capability of DOVE. Extensive
experiments show that DOVE exhibits comparable or superior performance to
multi-step diffusion-based VSR methods. It also offers outstanding inference
efficiency, achieving up to a 28$\times$ speed-up over existing methods
such as MGLD-VSR. Code is available at: https://github.com/zhengchen1999/DOVE. | 2025-05-22T05:16:45Z | Code is available at: https://github.com/zhengchen1999/DOVE | null | null | null | null | null | null | null | null | null |
2505.16368 | SATURN: SAT-based Reinforcement Learning to Unleash Language Model
Reasoning | ['Huanyu Liu', 'Jia Li', 'Hao Zhu', 'Kechi Zhang', 'Yihong Dong', 'Ge Li'] | ['cs.LG', 'cs.AI'] | How to design reinforcement learning (RL) tasks that effectively unleash the
reasoning capability of large language models (LLMs) remains an open question.
Existing RL tasks (e.g., math, programming, and constructing reasoning tasks)
suffer from three key limitations: (1) Scalability. They rely heavily on human
annotation or expensive LLM synthesis to generate sufficient training data. (2)
Verifiability. LLMs' outputs are hard to verify automatically and reliably. (3)
Controllable Difficulty. Most tasks lack fine-grained difficulty control,
making it hard to train LLMs to develop reasoning ability from easy to hard.
To address these limitations, we propose Saturn, a SAT-based RL framework
that uses Boolean Satisfiability (SAT) problems to train and evaluate LLM
reasoning. Saturn enables scalable task construction, rule-based verification,
and precise difficulty control. Saturn designs a curriculum learning pipeline
that continuously improves LLMs' reasoning capability by constructing SAT tasks
of increasing difficulty and training LLMs from easy to hard. To ensure stable
training, we design a principled mechanism to control difficulty transitions.
We introduce Saturn-2.6k, a dataset of 2,660 SAT problems with varying
difficulty. It supports the evaluation of how LLM reasoning changes with
problem difficulty. We apply Saturn to DeepSeek-R1-Distill-Qwen and obtain
Saturn-1.5B and Saturn-7B. We achieve several notable results: (1) On SAT
problems, Saturn-1.5B and Saturn-7B achieve average pass@3 improvements of
+14.0 and +28.1, respectively. (2) On math and programming tasks, Saturn-1.5B
and Saturn-7B improve average scores by +4.9 and +1.8 on benchmarks (e.g.,
AIME, LiveCodeBench). (3) Compared to the state-of-the-art (SOTA) approach in
constructing RL tasks, Saturn achieves further improvements of +8.8%. We
release the source code, data, and models to support future research. | 2025-05-22T08:23:10Z | null | null | null | null | null | null | null | null | null | null |
2505.16400 | AceReason-Nemotron: Advancing Math and Code Reasoning through
Reinforcement Learning | ['Yang Chen', 'Zhuolin Yang', 'Zihan Liu', 'Chankyu Lee', 'Peng Xu', 'Mohammad Shoeybi', 'Bryan Catanzaro', 'Wei Ping'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Despite recent progress in large-scale reinforcement learning (RL) for
reasoning, the training recipe for building high-performing reasoning models
remains elusive. Key implementation details of frontier models, such as
DeepSeek-R1, including data curation strategies and RL training recipe, are
often omitted. Moreover, recent research indicates distillation remains more
effective than RL for smaller models. In this work, we demonstrate that
large-scale RL can significantly enhance the reasoning capabilities of strong,
small- and mid-sized models, achieving results that surpass those of
state-of-the-art distillation-based models. We systematically study the RL
training process through extensive ablations and propose a simple yet effective
approach: first training on math-only prompts, then on code-only prompts.
Notably, we find that math-only RL not only significantly enhances the
performance of strong distilled models on math benchmarks (e.g., +14.6% /
+17.2% on AIME 2025 for the 7B / 14B models), but also code reasoning tasks
(e.g., +6.8% / +5.8% on LiveCodeBench for the 7B / 14B models). In addition,
extended code-only RL iterations further improve performance on code benchmarks
with minimal or no degradation in math results. We develop a robust data
curation pipeline to collect challenging prompts with high-quality, verifiable
answers and test cases to enable verification-based RL across both domains.
Finally, we identify key experimental insights, including curriculum learning
with progressively increasing response lengths and the stabilizing effect of
on-policy parameter updates. We find that RL not only elicits the foundational
reasoning capabilities acquired during pretraining and supervised fine-tuning
(e.g., distillation), but also pushes the limits of the model's reasoning
ability, enabling it to solve problems that were previously unsolvable. | 2025-05-22T08:50:47Z | Add pass@1024 evaluation results for LiveCodeBench v6. We release the
models at:
https://huggingface.co/collections/nvidia/acereason-682f4e1261dc22f697fd1485 | null | null | null | null | null | null | null | null | null |
2505.16410 | Tool-Star: Empowering LLM-Brained Multi-Tool Reasoner via Reinforcement
Learning | ['Guanting Dong', 'Yifei Chen', 'Xiaoxi Li', 'Jiajie Jin', 'Hongjin Qian', 'Yutao Zhu', 'Hangyu Mao', 'Guorui Zhou', 'Zhicheng Dou', 'Ji-Rong Wen'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Recently, large language models (LLMs) have shown remarkable reasoning
capabilities via large-scale reinforcement learning (RL). However, leveraging
the RL algorithm to empower effective multi-tool collaborative reasoning in
LLMs remains an open challenge. In this paper, we introduce Tool-Star, an
RL-based framework designed to empower LLMs to autonomously invoke multiple
external tools during stepwise reasoning. Tool-Star integrates six types of
tools and incorporates systematic designs in both data synthesis and training.
To address the scarcity of tool-use data, we propose a general tool-integrated
reasoning data synthesis pipeline, which combines tool-integrated prompting
with hint-based sampling to automatically and scalably generate tool-use
trajectories. A subsequent quality normalization and difficulty-aware
classification process filters out low-quality samples and organizes the
dataset from easy to hard. Furthermore, we propose a two-stage training
framework to enhance multi-tool collaborative reasoning by: (1) cold-start
fine-tuning, which guides LLMs to explore reasoning patterns via
tool-invocation feedback; and (2) a multi-tool self-critic RL algorithm with
hierarchical reward design, which reinforces reward understanding and promotes
effective tool collaboration. Experimental analyses on over 10 challenging
reasoning benchmarks highlight the effectiveness and efficiency of Tool-Star.
The code is available at https://github.com/dongguanting/Tool-Star. | 2025-05-22T09:00:19Z | Work in progress | null | null | null | null | null | null | null | null | null |
2505.16495 | ALTo: Adaptive-Length Tokenizer for Autoregressive Mask Generation | ['Lingfeng Wang', 'Hualing Lin', 'Senda Chen', 'Tao Wang', 'Changxu Cheng', 'Yangyang Zhong', 'Dong Zheng', 'Wuyue Zhao'] | ['cs.CV'] | While humans effortlessly draw visual objects and shapes by adaptively
allocating attention based on their complexity, existing multimodal large
language models (MLLMs) remain constrained by rigid token representations.
Bridging this gap, we propose ALTo, an adaptive length tokenizer for
autoregressive mask generation. To achieve this, a novel token length predictor
is designed, along with a length regularization term and a differentiable token
chunking strategy. We further build ALToLLM that seamlessly integrates ALTo
into MLLM. Preferences on the trade-offs between mask quality and efficiency are
implemented by group relative policy optimization (GRPO). Experiments
demonstrate that ALToLLM achieves state-of-the-art performance with adaptive
token cost on popular segmentation benchmarks. Code and models are released at
https://github.com/yayafengzi/ALToLLM. | 2025-05-22T10:26:51Z | null | null | null | ALTo: Adaptive-Length Tokenizer for Autoregressive Mask Generation | ['Lingfeng Wang', 'Hualing Lin', 'Senda Chen', 'Tao Wang', 'Changxu Cheng', 'Yangyang Zhong', 'Dong Zheng', 'Wuyue Zhao'] | 2025 | arXiv.org | 0 | 64 | ['Computer Science'] |
2505.16637 | SSR-Zero: Simple Self-Rewarding Reinforcement Learning for Machine
Translation | ['Wenjie Yang', 'Mao Zheng', 'Mingyang Song', 'Zheng Li', 'Sitong Wang'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Large language models (LLMs) have recently demonstrated remarkable
capabilities in machine translation (MT). However, most advanced MT-specific
LLMs heavily rely on external supervision signals during training, such as
human-annotated reference data or trained reward models (RMs), which are often
expensive to obtain and challenging to scale. To overcome this limitation, we
propose a Simple Self-Rewarding (SSR) Reinforcement Learning (RL) framework for
MT that is reference-free, fully online, and relies solely on self-judging
rewards. Training with SSR using 13K monolingual examples and Qwen-2.5-7B as
the backbone, our model SSR-Zero-7B outperforms existing MT-specific LLMs,
e.g., TowerInstruct-13B and GemmaX-28-9B, as well as larger general LLMs like
Qwen2.5-32B-Instruct in English $\leftrightarrow$ Chinese translation tasks
from WMT23, WMT24, and Flores200 benchmarks. Furthermore, by augmenting SSR
with external supervision from COMET, our strongest model, SSR-X-Zero-7B,
achieves state-of-the-art performance in English $\leftrightarrow$ Chinese
translation, surpassing all existing open-source models under 72B parameters
and even outperforming closed-source models, e.g., GPT-4o and Gemini 1.5 Pro.
Our analysis highlights the effectiveness of the self-rewarding mechanism
compared to the external LLM-as-a-judge approach in MT and demonstrates its
complementary benefits when combined with trained RMs. Our findings provide
valuable insight into the potential of self-improving RL methods. We have
publicly released our code, data and models. | 2025-05-22T13:08:25Z | null | null | null | null | null | null | null | null | null | null |
2505.16647 | Point, Detect, Count: Multi-Task Medical Image Understanding with
Instruction-Tuned Vision-Language Models | ['Sushant Gautam', 'Michael A. Riegler', 'Pål Halvorsen'] | ['cs.CV', 'cs.AI', '68T45, 68T07', 'I.2.10; I.4.8'] | We investigate fine-tuning Vision-Language Models (VLMs) for multi-task
medical image understanding, focusing on detection, localization, and counting
of findings in medical images. Our objective is to evaluate whether
instruction-tuned VLMs can simultaneously improve these tasks, with the goal of
enhancing diagnostic accuracy and efficiency. Using MedMultiPoints, a
multimodal dataset with annotations from endoscopy (polyps and instruments) and
microscopy (sperm cells), we reformulate each task into instruction-based
prompts suitable for vision-language reasoning. We fine-tune
Qwen2.5-VL-7B-Instruct using Low-Rank Adaptation (LoRA) across multiple task
combinations. Results show that multi-task training improves robustness and
accuracy. For example, it reduces the Count Mean Absolute Error (MAE) and
increases Matching Accuracy in the Counting + Pointing task. However,
trade-offs emerge, such as more zero-case point predictions, indicating reduced
reliability in edge cases despite overall performance gains. Our study
highlights the potential of adapting general-purpose VLMs to specialized
medical tasks via prompt-driven fine-tuning. This approach mirrors clinical
workflows, where radiologists simultaneously localize, count, and describe
findings - demonstrating how VLMs can learn composite diagnostic reasoning
patterns. The model produces interpretable, structured outputs, offering a
promising step toward explainable and versatile medical AI. Code, model
weights, and scripts will be released for reproducibility at
https://github.com/simula/PointDetectCount. | 2025-05-22T13:18:44Z | Accepted as a full paper at the 38th IEEE International Symposium on
Computer-Based Medical Systems (CBMS) 2025 | null | null | null | null | null | null | null | null | null |
2505.16661 | A Japanese Language Model and Three New Evaluation Benchmarks for
Pharmaceutical NLP | ['Issey Sukeda', 'Takuro Fujii', 'Kosei Buma', 'Shunsuke Sasaki', 'Shinnosuke Ono'] | ['cs.CL'] | We present a Japanese domain-specific language model for the pharmaceutical
field, developed through continual pretraining on 2 billion Japanese
pharmaceutical tokens and 8 billion English biomedical tokens. To enable
rigorous evaluation, we introduce three new benchmarks: YakugakuQA, based on
national pharmacist licensing exams; NayoseQA, which tests cross-lingual
synonym and terminology normalization; and SogoCheck, a novel task designed to
assess consistency reasoning between paired statements. We evaluate our model
against both open-source medical LLMs and commercial models, including GPT-4o.
Results show that our domain-specific model outperforms existing open models
and achieves competitive performance with commercial ones, particularly on
terminology-heavy and knowledge-based tasks. Interestingly, even GPT-4o
performs poorly on SogoCheck, suggesting that cross-sentence consistency
reasoning remains an open challenge. Our benchmark suite offers a broader
diagnostic lens for pharmaceutical NLP, covering factual recall, lexical
variation, and logical consistency. This work demonstrates the feasibility of
building practical, secure, and cost-effective language models for Japanese
domain-specific applications, and provides reusable evaluation resources for
future research in pharmaceutical and healthcare NLP. Our model, codes, and
datasets are released at https://github.com/EQUES-Inc/pharma-LLM-eval. | 2025-05-22T13:27:37Z | 15 pages, 9 tables, 5 figures | null | null | null | null | null | null | null | null | null |
2505.16826 | KTAE: A Model-Free Algorithm to Key-Tokens Advantage Estimation in
Mathematical Reasoning | ['Wei Sun', 'Wen Yang', 'Pu Jian', 'Qianlong Du', 'Fuwei Cui', 'Shuo Ren', 'Jiajun Zhang'] | ['cs.AI', 'cs.CL'] | Recent advances have demonstrated that integrating reinforcement learning
with rule-based rewards can significantly enhance the reasoning capabilities of
large language models, even without supervised fine-tuning. However, prevalent
reinforcement learning algorithms such as GRPO and its variants like DAPO,
suffer from a coarse granularity issue when computing the advantage.
Specifically, they compute rollout-level advantages that assign identical
values to every token within a sequence, failing to capture token-specific
contributions and hindering effective learning. To address this limitation, we
propose Key-token Advantage Estimation (KTAE) - a novel algorithm that
estimates fine-grained, token-level advantages without introducing additional
models. KTAE leverages the correctness of sampled rollouts and applies
statistical analysis to quantify the importance of individual tokens within a
sequence to the final outcome. This quantified token-level importance is then
combined with the rollout-level advantage to obtain a more fine-grained
token-level advantage estimation. Empirical results show that models trained
with GRPO+KTAE and DAPO+KTAE outperform baseline methods across five
mathematical reasoning benchmarks. Notably, they achieve higher accuracy with
shorter responses and even surpass R1-Distill-Qwen-1.5B using the same base
model. | 2025-05-22T16:00:33Z | null | null | null | null | null | null | null | null | null | null |
2505.16839 | LaViDa: A Large Diffusion Language Model for Multimodal Understanding | ['Shufan Li', 'Konstantinos Kallidromitis', 'Hritik Bansal', 'Akash Gokul', 'Yusuke Kato', 'Kazuki Kozuka', 'Jason Kuen', 'Zhe Lin', 'Kai-Wei Chang', 'Aditya Grover'] | ['cs.CV'] | Modern Vision-Language Models (VLMs) can solve a wide range of tasks
requiring visual reasoning. In real-world scenarios, desirable properties for
VLMs include fast inference and controllable generation (e.g., constraining
outputs to adhere to a desired format). However, existing autoregressive (AR)
VLMs like LLaVA struggle in these aspects. Discrete diffusion models (DMs)
offer a promising alternative, enabling parallel decoding for faster inference
and bidirectional context for controllable generation through text-infilling.
While effective in language-only settings, DMs' potential for multimodal tasks
is underexplored. We introduce LaViDa, a family of VLMs built on DMs. We build
LaViDa by equipping DMs with a vision encoder and jointly fine-tune the
combined parts for multimodal instruction following. To address challenges
encountered, LaViDa incorporates novel techniques such as complementary masking
for effective training, prefix KV cache for efficient inference, and timestep
shifting for high-quality sampling. Experiments show that LaViDa achieves
competitive or superior performance to AR VLMs on multi-modal benchmarks such
as MMMU, while offering unique advantages of DMs, including flexible
speed-quality tradeoff, controllability, and bidirectional reasoning. On COCO
captioning, LaViDa surpasses Open-LLaVa-Next-8B by +4.1 CIDEr with 1.92x
speedup. On bidirectional tasks, it achieves +59% improvement on Constrained
Poem Completion. These results demonstrate LaViDa as a strong alternative to AR
VLMs. Code and models will be released in the camera-ready version. | 2025-05-22T16:07:12Z | 26 pages, 8 figures | null | null | null | null | null | null | null | null | null |
2505.16854 | Think or Not? Selective Reasoning via Reinforcement Learning for
Vision-Language Models | ['Jiaqi Wang', 'Kevin Qinghong Lin', 'James Cheng', 'Mike Zheng Shou'] | ['cs.AI', 'cs.CV'] | Reinforcement Learning (RL) has proven to be an effective post-training
strategy for enhancing reasoning in vision-language models (VLMs). Group
Relative Policy Optimization (GRPO) is a recent prominent method that
encourages models to generate complete reasoning traces before answering,
leading to increased token usage and computational cost. Inspired by the
human-like thinking process-where people skip reasoning for easy questions but
think carefully when needed-we explore how to enable VLMs to first decide when
reasoning is necessary. To realize this, we propose TON, a two-stage training
strategy: (i) a supervised fine-tuning (SFT) stage with a simple yet effective
'thought dropout' operation, where reasoning traces are randomly replaced with
empty thoughts. This introduces a think-or-not format that serves as a cold
start for selective reasoning; (ii) a GRPO stage that enables the model to
freely explore when to think or not, while maximizing task-aware outcome
rewards. Experimental results show that TON can reduce the completion length by
up to 90% compared to vanilla GRPO, without sacrificing performance or even
improving it. Further evaluations across diverse vision-language tasks-covering
a range of reasoning difficulties under both 3B and 7B models-consistently
reveal that the model progressively learns to bypass unnecessary reasoning
steps as training advances. These findings shed light on the path toward
human-like reasoning patterns in reinforcement learning approaches. Our code is
available at https://github.com/kokolerk/TON. | 2025-05-22T16:13:29Z | update more examples in appendix | null | null | Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models | ['Jiaqi Wang', 'Kevin Qinghong Lin', 'James Cheng', 'Mike Zheng Shou'] | 2025 | arXiv.org | 0 | 37 | ['Computer Science'] |
2505.16901 | Code Graph Model (CGM): A Graph-Integrated Large Language Model for
Repository-Level Software Engineering Tasks | ['Hongyuan Tao', 'Ying Zhang', 'Zhenhao Tang', 'Hongen Peng', 'Xukun Zhu', 'Bingchang Liu', 'Yingguang Yang', 'Ziyin Zhang', 'Zhaogui Xu', 'Haipeng Zhang', 'Linchao Zhu', 'Rui Wang', 'Hang Yu', 'Jianguo Li', 'Peng Di'] | ['cs.SE', 'cs.LG'] | Recent advances in Large Language Models (LLMs) have shown promise in
function-level code generation, yet repository-level software engineering tasks
remain challenging. Current solutions predominantly rely on proprietary LLM
agents, which introduce unpredictability and limit accessibility, raising
concerns about data privacy and model customization. This paper investigates
whether open-source LLMs can effectively address repository-level tasks without
requiring agent-based approaches. We demonstrate this is possible by enabling
LLMs to comprehend functions and files within codebases through their semantic
information and structural dependencies. To this end, we introduce Code Graph
Models (CGMs), which integrate repository code graph structures into the LLM's
attention mechanism and map node attributes to the LLM's input space using a
specialized adapter. When combined with an agentless graph RAG framework, our
approach achieves a 43.00% resolution rate on the SWE-bench Lite benchmark
using the open-source Qwen2.5-72B model. This performance ranks first among
open weight models, second among methods with open-source systems, and eighth
overall, surpassing the previous best open-source model-based method by 12.33%. | 2025-05-22T17:00:55Z | 35 pages, 10 figures | null | null | null | null | null | null | null | null | null |
2,505.16933 | LLaDA-V: Large Language Diffusion Models with Visual Instruction Tuning | ['Zebin You', 'Shen Nie', 'Xiaolu Zhang', 'Jun Hu', 'Jun Zhou', 'Zhiwu Lu', 'Ji-Rong Wen', 'Chongxuan Li'] | ['cs.LG', 'cs.CL', 'cs.CV'] | In this work, we introduce LLaDA-V, a purely diffusion-based Multimodal Large
Language Model (MLLM) that integrates visual instruction tuning with masked
diffusion models, representing a departure from the autoregressive paradigms
dominant in current multimodal approaches. Built upon LLaDA, a representative
large language diffusion model, LLaDA-V incorporates a vision encoder and MLP
connector that projects visual features into the language embedding space,
enabling effective multimodal alignment. Our empirical investigation reveals
several intriguing results: First, LLaDA-V demonstrates promising multimodal
performance despite its language model being weaker on purely textual tasks
than counterparts like LLaMA3-8B and Qwen2-7B. When trained on the same
instruction data, LLaDA-V is highly competitive to LLaMA3-V across multimodal
tasks with better data scalability. It also narrows the performance gap to
Qwen2-VL, suggesting the effectiveness of its architecture for multimodal
tasks. Second, LLaDA-V achieves state-of-the-art performance in multimodal
understanding compared to existing hybrid autoregressive-diffusion and purely
diffusion-based MLLMs. Our findings suggest that large language diffusion
models show promise in multimodal contexts and warrant further investigation in
future research. Project page and codes:
https://ml-gsai.github.io/LLaDA-V-demo/. | 2025-05-22T17:23:26Z | Project page and codes: \url{https://ml-gsai.github.io/LLaDA-V-demo/} | null | null | LLaDA-V: Large Language Diffusion Models with Visual Instruction Tuning | ['Zebin You', 'Shen Nie', 'Xiaolu Zhang', 'Jun Hu', 'Jun Zhou', 'Zhiwu Lu', 'Ji-Rong Wen', 'Chongxuan Li'] | 2,025 | arXiv.org | 2 | 114 | ['Computer Science'] |
2,505.16938 | NovelSeek: When Agent Becomes the Scientist -- Building Closed-Loop
System from Hypothesis to Verification | ['NovelSeek Team', 'Bo Zhang', 'Shiyang Feng', 'Xiangchao Yan', 'Jiakang Yuan', 'Zhiyin Yu', 'Xiaohan He', 'Songtao Huang', 'Shaowei Hou', 'Zheng Nie', 'Zhilong Wang', 'Jinyao Liu', 'Runmin Ma', 'Tianshuo Peng', 'Peng Ye', 'Dongzhan Zhou', 'Shufei Zhang', 'Xiaosong Wang', 'Yilan Zhang', 'Meng Li', 'Zhongying Tu', 'Xiangyu Yue', 'Wangli Ouyang', 'Bowen Zhou', 'Lei Bai'] | ['cs.AI', 'cs.CL', 'cs.CV'] | Artificial Intelligence (AI) is accelerating the transformation of scientific
research paradigms, not only enhancing research efficiency but also driving
innovation. We introduce NovelSeek, a unified closed-loop multi-agent framework
to conduct Autonomous Scientific Research (ASR) across various scientific
research fields, enabling researchers to tackle complicated problems in these
fields with unprecedented speed and precision. NovelSeek highlights three key
advantages: 1) Scalability: NovelSeek has demonstrated its versatility across
12 scientific research tasks, capable of generating innovative ideas to enhance
the performance of baseline code. 2) Interactivity: NovelSeek provides an
interface for human expert feedback and multi-agent interaction in automated
end-to-end processes, allowing for the seamless integration of domain expert
knowledge. 3) Efficiency: NovelSeek has achieved promising performance gains in
several scientific fields with significantly less time cost compared to human
efforts. For instance, in reaction yield prediction, it increased from 27.6% to
35.4% in just 12 hours; in enhancer activity prediction, accuracy rose from
0.65 to 0.79 with only 4 hours of processing; and in 2D semantic segmentation,
precision advanced from 78.8% to 81.0% in a mere 30 hours. | 2025-05-22T17:27:43Z | HomePage: https://alpha-innovator.github.io/NovelSeek-project-page | null | null | NovelSeek: When Agent Becomes the Scientist - Building Closed-Loop System from Hypothesis to Verification | ['NovelSeek Team Bo Zhang', 'Shi Feng', 'Xiangchao Yan', 'Jiakang Yuan', 'Zhiyin Yu', 'Xiaohan He', 'Songtao Huang', 'Shaowei Hou', 'Zheng Nie', 'Zhilong Wang', 'Jinyao Liu', 'Runmin Ma', 'Tianshuo Peng', 'Peng Ye', 'Dongzhan Zhou', 'Shufei Zhang', 'Xiaosong Wang', 'Yilan Zhang', 'Meng Li', 'Zhongying Tu', 'Xiangyu Yue', 'Wangli Ouyang', 'Bowen Zhou', 'Lei Bai'] | 2,025 | arXiv.org | 2 | 51 | ['Computer Science'] |
2,505.16947 | MixAT: Combining Continuous and Discrete Adversarial Training for LLMs | ['Csaba Dékány', 'Stefan Balauca', 'Robin Staab', 'Dimitar I. Dimitrov', 'Martin Vechev'] | ['cs.LG', 'cs.AI', 'I.2.7; K.4.1'] | Despite recent efforts in Large Language Models (LLMs) safety and alignment,
current adversarial attacks on frontier LLMs are still able to force harmful
generations consistently. Although adversarial training has been widely studied
and shown to significantly improve the robustness of traditional machine
learning models, its strengths and weaknesses in the context of LLMs are less
understood. Specifically, while existing discrete adversarial attacks are
effective at producing harmful content, training LLMs with concrete adversarial
prompts is often computationally expensive, leading to reliance on continuous
relaxations. As these relaxations do not correspond to discrete input tokens,
such latent training methods often leave models vulnerable to a diverse set of
discrete attacks. In this work, we aim to bridge this gap by introducing MixAT,
a novel method that combines stronger discrete and faster continuous attacks
during training. We rigorously evaluate MixAT across a wide spectrum of
state-of-the-art attacks, proposing the At Least One Attack Success Rate
(ALO-ASR) metric to capture the worst-case vulnerability of models. We show
MixAT achieves substantially better robustness (ALO-ASR < 20%) compared to
prior defenses (ALO-ASR > 50%), while maintaining a runtime comparable to
methods based on continuous relaxations. We further analyze MixAT in realistic
deployment settings, exploring how chat templates, quantization, low-rank
adapters, and temperature affect both adversarial training and evaluation,
revealing additional blind spots in current methodologies. Our results
demonstrate that MixAT's discrete-continuous defense offers a principled and
superior robustness-accuracy tradeoff with minimal computational overhead,
highlighting its promise for building safer LLMs. We provide our code and
models at https://github.com/insait-institute/MixAT. | 2025-05-22T17:32:50Z | null | null | null | MixAT: Combining Continuous and Discrete Adversarial Training for LLMs | ["Csaba D'ek'any", 'Stefan Balauca', 'Robin Staab', 'Dimitar I. Dimitrov', 'Martin T. Vechev'] | 2,025 | arXiv.org | 0 | 41 | ['Computer Science'] |
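The At Least One Attack Success Rate (ALO-ASR) metric proposed in the MixAT abstract counts a prompt as broken if any attack in the evaluated suite elicits a harmful generation. A minimal sketch (the attack names and the boolean data layout are assumptions):

```python
def alo_asr(results):
    """Worst-case attack success rate across an attack suite.

    results: dict mapping attack name -> list of booleans, one per
    harmful prompt, True if that attack succeeded on that prompt.
    A prompt counts as broken if at least one attack succeeded.
    """
    per_prompt = zip(*results.values())  # transpose to per-prompt tuples
    broken = [any(flags) for flags in per_prompt]
    return sum(broken) / len(broken)

# Hypothetical per-prompt outcomes for three attacks over four prompts
results = {
    "gcg":     [True,  False, False, False],
    "pair":    [False, True,  False, False],
    "autodan": [False, False, False, False],
}
score = alo_asr(results)  # 2 of 4 prompts broken by at least one attack
```

Note that each individual attack here has only a 25% success rate, yet the worst-case ALO-ASR is 50%, which is exactly the gap between per-attack and union metrics the paper's evaluation is designed to expose.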
2,505.16968 | CASS: Nvidia to AMD Transpilation with Data, Models, and Benchmark | ['Ahmed Heakl', 'Sarim Hashmi', 'Gustavo Bertolo Stahl', 'Seung Hun Eddie Han', 'Salman Khan', 'Abdulrahman Mahmoud'] | ['cs.AR', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.PL'] | We introduce CASS, the first large-scale dataset and model suite for
cross-architecture GPU code transpilation, targeting both source-level (CUDA
<--> HIP) and assembly-level (Nvidia SASS <--> AMD RDNA3) translation. The
dataset comprises 70k verified code pairs across host and device, addressing a
critical gap in low-level GPU code portability. Leveraging this resource, we
train the CASS family of domain-specific language models, achieving 95% source
translation accuracy and 37.5% assembly translation accuracy, substantially
outperforming commercial baselines such as GPT-4o, Claude, and Hipify. Our
generated code matches native performance in over 85% of test cases, preserving
runtime and memory behavior. To support rigorous evaluation, we introduce
CASS-Bench, a curated benchmark spanning 16 GPU domains with ground-truth
execution. All data, models, and evaluation tools are released as open source
to foster progress in GPU compiler tooling, binary compatibility, and
LLM-guided hardware translation. | 2025-05-22T17:48:53Z | 20 pages, 11 figures, 5 tables | null | null | null | null | null | null | null | null | null |
2,505.16973 | VeriFastScore: Speeding up long-form factuality evaluation | ['Rishanth Rajendhran', 'Amir Zadeh', 'Matthew Sarte', 'Chuan Li', 'Mohit Iyyer'] | ['cs.CL'] | Metrics like FactScore and VeriScore that evaluate long-form factuality
operate by decomposing an input response into atomic claims and then
individually verifying each claim. While effective and interpretable, these
methods incur numerous LLM calls and can take upwards of 100 seconds to
evaluate a single response, limiting their practicality in large-scale
evaluation and training scenarios. To address this, we propose VeriFastScore,
which leverages synthetic data to fine-tune Llama3.1 8B for simultaneously
extracting and verifying all verifiable claims within a given text based on
evidence from Google Search. We show that this task cannot be solved via
few-shot prompting with closed LLMs due to its complexity: the model receives
~4K tokens of evidence on average and needs to concurrently decompose claims,
judge their verifiability, and verify them against noisy evidence. However, our
fine-tuned VeriFastScore model demonstrates strong correlation with the
original VeriScore pipeline at both the example level (r=0.80) and system level
(r=0.94) while achieving an overall speedup of 6.6x (9.9x excluding evidence
retrieval) over VeriScore. To facilitate future factuality research, we
publicly release our VeriFastScore model and synthetic datasets. | 2025-05-22T17:51:25Z | null | null | null | VeriFastScore: Speeding up long-form factuality evaluation | ['Rishanth Rajendhran', 'Amir Zadeh', 'Matthew Sarte', 'Chuan Li', 'Mohit Iyyer'] | 2,025 | arXiv.org | 0 | 29 | ['Computer Science'] |
2,505.16983 | LLM as Effective Streaming Processor: Bridging Streaming-Batch
Mismatches with Group Position Encoding | ['Junlong Tong', 'Jinlan Fu', 'Zixuan Lin', 'Yingqi Fan', 'Anhao Zhao', 'Hui Su', 'Xiaoyu Shen'] | ['cs.CL'] | Large Language Models (LLMs) are primarily designed for batch processing.
Existing methods for adapting LLMs to streaming rely either on expensive
re-encoding or specialized architectures with limited scalability. This work
identifies three key mismatches in adapting batch-oriented LLMs to streaming:
(1) input-attention, (2) output-attention, and (3) position-ID mismatches.
While it is commonly assumed that the latter two mismatches require frequent
re-encoding, our analysis reveals that only the input-attention mismatch
significantly impacts performance, indicating re-encoding outputs is largely
unnecessary. To better understand this discrepancy with the common assumption,
we provide the first comprehensive analysis of the impact of position encoding
on LLMs in streaming, showing that preserving relative positions within source
and target contexts is more critical than maintaining absolute order. Motivated
by the above analysis, we introduce a group position encoding paradigm built on
batch architectures to enhance consistency between streaming and batch modes.
Extensive experiments on cross-lingual and cross-modal tasks demonstrate that
our method outperforms existing approaches. Our method requires no
architectural modifications and exhibits strong generalization in both streaming
and batch modes. The code is available at
https://github.com/EIT-NLP/StreamingLLM. | 2025-05-22T17:53:28Z | ACL 2025 Findings | null | null | LLM as Effective Streaming Processor: Bridging Streaming-Batch Mismatches with Group Position Encoding | ['Junlong Tong', 'Jinlan Fu', 'Zixuan Lin', 'Yingqi Fan', 'Anhao Zhao', 'Hui Su', 'Xiaoyu Shen'] | 2,025 | arXiv.org | 0 | 49 | ['Computer Science'] |
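The abstract above argues that preserving relative positions *within* the source and target streams matters more than global absolute order. One way to read the group position encoding paradigm is that each group keeps its own position counter across interleaved chunks; this is a sketch of that idea under stated assumptions, not the paper's exact scheme.

```python
def group_position_ids(segments):
    """Assign position IDs per group so relative order within the
    source stream and within the target stream is preserved even when
    chunks arrive interleaved.

    segments: list of (group, n_tokens) pairs, group in {'src', 'tgt'}.
    Returns a flat list of position IDs, one per token.
    """
    counters = {}
    ids = []
    for group, n in segments:
        start = counters.get(group, 0)  # resume this group's counter
        ids.extend(range(start, start + n))
        counters[group] = start + n
    return ids

# Interleaved stream: src(3 tokens), tgt(2), src(2), tgt(1)
ids = group_position_ids([("src", 3), ("tgt", 2), ("src", 2), ("tgt", 1)])
# src tokens see 0,1,2,3,4 and tgt tokens see 0,1,2 across the stream
```

Under this assignment the source tokens keep the same relative positions they would have in batch mode, which is the consistency property the abstract highlights.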
2,505.16984 | UFT: Unifying Supervised and Reinforcement Fine-Tuning | ['Mingyang Liu', 'Gabriele Farina', 'Asuman Ozdaglar'] | ['cs.LG', 'cs.CL'] | Post-training has demonstrated its importance in enhancing the reasoning
capabilities of large language models (LLMs). The primary post-training methods
can be categorized into supervised fine-tuning (SFT) and reinforcement
fine-tuning (RFT). SFT is efficient and well-suited for small language models,
but it may lead to overfitting and limit the reasoning abilities of larger
models. In contrast, RFT generally yields better generalization but depends
heavily on the strength of the base model. To address the limitations of SFT
and RFT, we propose Unified Fine-Tuning (UFT), a novel post-training paradigm
that unifies SFT and RFT into a single, integrated process. UFT enables the
model to effectively explore solutions while incorporating informative
supervision signals, bridging the gap between memorizing and thinking
underlying existing methods. Notably, UFT outperforms both SFT and RFT in
general, regardless of model sizes. Furthermore, we theoretically prove that
UFT breaks RFT's inherent exponential sample complexity bottleneck, showing for
the first time that unified training can exponentially accelerate convergence
on long-horizon reasoning tasks. | 2025-05-22T17:53:57Z | null | null | null | null | null | null | null | null | null | null |
2,505.1699 | Dimple: Discrete Diffusion Multimodal Large Language Model with Parallel
Decoding | ['Runpeng Yu', 'Xinyin Ma', 'Xinchao Wang'] | ['cs.CV'] | In this work, we propose Dimple, the first Discrete Diffusion Multimodal
Large Language Model (DMLLM). We observe that training with a purely discrete
diffusion approach leads to significant training instability, suboptimal
performance, and severe length bias issues. To address these challenges, we
design a novel training paradigm that combines an initial autoregressive phase
with a subsequent diffusion phase. This approach yields the Dimple-7B model,
trained on the same dataset and using a similar training pipeline as
LLaVA-NEXT. Dimple-7B ultimately surpasses LLaVA-NEXT in performance by 3.9%,
demonstrating that DMLLM can achieve performance comparable to that of
autoregressive models. To improve inference efficiency, we propose a decoding
strategy termed confident decoding, which dynamically adjusts the number of
tokens generated at each step, significantly reducing the number of generation
iterations. In autoregressive models, the number of forward iterations during
generation equals the response length. With confident decoding, however, the
number of iterations needed by Dimple can be as low as $\frac{\text{response
length}}{3}$. We also re-implement the prefilling technique in autoregressive
models and demonstrate that it does not significantly impact performance on
most benchmark evaluations, while offering a speedup of 1.5x to 7x.
Additionally, we explore Dimple's capability to precisely control its response
using structure priors. These priors enable structured responses in a manner
distinct from instruction-based or chain-of-thought prompting, and allow
fine-grained control over response format and length, which is difficult to
achieve in autoregressive models. Overall, this work validates the feasibility
and advantages of DMLLM and enhances its inference efficiency and
controllability. Code and models are available at
https://github.com/yu-rp/Dimple. | 2025-05-22T17:55:04Z | null | null | null | null | null | null | null | null | null | null |
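The confident decoding strategy in the Dimple abstract dynamically adjusts how many tokens are committed per diffusion step. A minimal sketch of one such step, assuming a per-position confidence threshold (the 0.9 value and the fallback rule are illustrative, not from the paper):

```python
def confident_decode_step(probs, threshold=0.9):
    """One 'confident decoding' step over still-masked positions.

    probs: list of (position, max_token_prob, token) triples. Commit
    every position whose confidence clears the threshold; if none does,
    commit the single most confident one so decoding always progresses.
    """
    confident = [(pos, tok) for pos, p, tok in probs if p >= threshold]
    if not confident:
        pos, p, tok = max(probs, key=lambda x: x[1])
        confident = [(pos, tok)]
    return confident

# Two of three positions are confident, so both decode in one step
step = confident_decode_step(
    [(0, 0.97, "the"), (1, 0.42, "cat"), (2, 0.95, "sat")], threshold=0.9
)
```

Because easy positions resolve in parallel, the number of forward iterations can fall well below the response length, which is the source of the up-to-3x iteration reduction the abstract reports.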
2,505.16994 | $\text{R}^2\text{ec}$: Towards Large Recommender Models with Reasoning | ['Runyang You', 'Yongqi Li', 'Xinyu Lin', 'Xin Zhang', 'Wenjie Wang', 'Wenjie Li', 'Liqiang Nie'] | ['cs.IR', 'cs.AI', 'cs.CL'] | Large recommender models have extended LLMs as powerful recommenders via
encoding or item generation, and recent breakthroughs in LLM reasoning
synchronously motivate the exploration of reasoning in recommendation. Current
studies usually position LLMs as external reasoning modules to yield auxiliary
thought for augmenting conventional recommendation pipelines. However, such
decoupled designs are limited by significant resource cost and suboptimal joint
optimization. To address these issues, we propose $\text{R}^2\text{ec}$, a unified large
recommender model with intrinsic reasoning capabilities. Initially, we
reconceptualize the model architecture to facilitate interleaved reasoning and
recommendation in the autoregressive process. Subsequently, we propose RecPO, a
corresponding reinforcement learning framework that simultaneously optimizes both the
reasoning and recommendation capabilities of $\text{R}^2\text{ec}$ in a single policy
update; RecPO introduces a fused reward scheme that solely leverages
recommendation labels to simulate the reasoning capability, eliminating
dependency on specialized reasoning annotations. Experiments on three datasets
with various baselines verify the effectiveness of $\text{R}^2\text{ec}$, showing relative
improvements of 68.67\% in Hit@5 and 45.21\% in NDCG@20. Code available at
https://github.com/YRYangang/RRec. | 2025-05-22T17:55:43Z | null | null | null | null | null | null | null | null | null | null |
2,505.17012 | SpatialScore: Towards Unified Evaluation for Multimodal Spatial
Understanding | ['Haoning Wu', 'Xiao Huang', 'Yaohui Chen', 'Ya Zhang', 'Yanfeng Wang', 'Weidi Xie'] | ['cs.CV', 'cs.AI'] | Multimodal large language models (MLLMs) have achieved impressive success in
question-answering tasks, yet their capabilities for spatial understanding are
less explored. This work investigates a critical question: do existing MLLMs
possess 3D spatial perception and understanding abilities? Concretely, we make
the following contributions in this paper: (i) we introduce VGBench, a
benchmark specifically designed to assess MLLMs for visual geometry perception,
e.g., camera pose and motion estimation; (ii) we propose SpatialScore, the most
comprehensive and diverse multimodal spatial understanding benchmark to date,
integrating VGBench with relevant data from the other 11 existing datasets.
This benchmark comprises 28K samples across various spatial understanding
tasks, modalities, and QA formats, along with a carefully curated challenging
subset, SpatialScore-Hard; (iii) we develop SpatialAgent, a novel multi-agent
system incorporating 9 specialized tools for spatial understanding, supporting
both Plan-Execute and ReAct reasoning paradigms; (iv) we conduct extensive
evaluations to reveal persistent challenges in spatial reasoning while
demonstrating the effectiveness of SpatialAgent. We believe SpatialScore will
offer valuable insights and serve as a rigorous benchmark for the next
evolution of MLLMs. | 2025-05-22T17:59:03Z | Technical Report; Project Page:
https://haoningwu3639.github.io/SpatialScore | null | null | null | null | null | null | null | null | null |
2,505.17016 | Interactive Post-Training for Vision-Language-Action Models | ['Shuhan Tan', 'Kairan Dou', 'Yue Zhao', 'Philipp Krähenbühl'] | ['cs.LG', 'cs.AI', 'cs.CV', 'cs.RO'] | We introduce RIPT-VLA, a simple and scalable reinforcement-learning-based
interactive post-training paradigm that fine-tunes pretrained
Vision-Language-Action (VLA) models using only sparse binary success rewards.
Existing VLA training pipelines rely heavily on offline expert demonstration
data and supervised imitation, limiting their ability to adapt to new tasks and
environments under low-data regimes. RIPT-VLA addresses this by enabling
interactive post-training with a stable policy optimization algorithm based on
dynamic rollout sampling and leave-one-out advantage estimation.
RIPT-VLA has the following characteristics. First, it applies to various VLA
models, resulting in an improvement on the lightweight QueST model by 21.2%,
and the 7B OpenVLA-OFT model to an unprecedented 97.5% success rate. Second, it
is computationally efficient and data-efficient: with only one demonstration,
RIPT-VLA enables an unworkable SFT model (4%) to succeed with a 97% success
rate within 15 iterations. Furthermore, we demonstrate that the policy learned
by RIPT-VLA generalizes across different tasks and scenarios and is robust to
the initial state context. These results highlight RIPT-VLA as a practical and
effective paradigm for post-training VLA models through minimal supervision. | 2025-05-22T17:59:45Z | Project page: https://ariostgx.github.io/ript_vla/ | null | null | Interactive Post-Training for Vision-Language-Action Models | ['Shuhan Tan', 'Kairan Dou', 'Yue Zhao', 'Philipp Krähenbühl'] | 2,025 | arXiv.org | 1 | 39 | ['Computer Science'] |
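The leave-one-out advantage estimation named in the RIPT-VLA abstract baselines each rollout against the mean reward of the other rollouts in its group. A minimal sketch (the group size and the binary rewards are illustrative):

```python
def leave_one_out_advantages(rewards):
    """Leave-one-out advantage over a group of K rollouts:
    A_i = r_i - mean_{j != i} r_j, i.e. each rollout's baseline is the
    average reward of the other K-1 rollouts of the same task."""
    k = len(rewards)
    total = sum(rewards)
    return [r - (total - r) / (k - 1) for r in rewards]

# Sparse binary success rewards from 4 rollouts of one task
adv = leave_one_out_advantages([1.0, 0.0, 0.0, 1.0])
```

With sparse binary rewards, successes get positive advantage and failures negative, and the advantages sum to zero within the group, so the estimator needs no learned value baseline.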
2,505.17018 | SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward | ['Kaixuan Fan', 'Kaituo Feng', 'Haoming Lyu', 'Dongzhan Zhou', 'Xiangyu Yue'] | ['cs.CV'] | Recent advances have shown success in eliciting strong reasoning abilities in
multimodal large language models (MLLMs) through rule-based reinforcement
learning (RL) with outcome rewards. However, this paradigm typically lacks
supervision over the thinking process leading to the final outcome. As a result,
the model may learn sub-optimal reasoning strategies, which can hinder its
generalization ability. In light of this, we propose SophiaVL-R1, as an attempt
to add reward signals for the thinking process in this paradigm. To achieve
this, we first train a thinking reward model that evaluates the quality of the
entire thinking process. Given that the thinking reward may be unreliable for
certain samples due to reward hacking, we propose the Trust-GRPO method, which
assigns a trustworthiness weight to the thinking reward during training. This
weight is computed based on the thinking reward comparison of responses leading
to correct answers versus incorrect answers, helping to mitigate the impact of
potentially unreliable thinking rewards. Moreover, we design an annealing
training strategy that gradually reduces the thinking reward over time,
allowing the model to rely more on the accurate rule-based outcome reward in
later training stages. Experiments show that our SophiaVL-R1 surpasses a series
of reasoning MLLMs on various benchmarks (e.g., MathVista, MMMU),
demonstrating strong reasoning and generalization capabilities. Notably, our
SophiaVL-R1-7B even outperforms LLaVA-OneVision-72B on most benchmarks, despite
the latter having 10 times more parameters. All code, models, and datasets are
made publicly available at https://github.com/kxfan2002/SophiaVL-R1. | 2025-05-22T17:59:53Z | Project page:https://github.com/kxfan2002/SophiaVL-R1 | null | null | null | null | null | null | null | null | null |
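The SophiaVL-R1 abstract combines a rule-based outcome reward with a trust-weighted thinking reward whose influence is annealed away over training. A sketch of one way to compose these signals; the additive form, the initial weight, and the linear schedule are assumptions, not the paper's exact formulation.

```python
def annealed_reward(outcome_r, thinking_r, trust, step, total_steps, w0=0.5):
    """Trust-weighted thinking reward with a linearly annealed coefficient.

    outcome_r:  rule-based outcome reward (reliable, used throughout)
    thinking_r: thinking-reward-model score (possibly hackable)
    trust:      trustworthiness weight in [0, 1] down-weighting
                unreliable thinking rewards
    The thinking term fades to zero by the end of training, so late
    updates rely on the accurate outcome reward alone.
    """
    w = w0 * max(0.0, 1.0 - step / total_steps)  # annealed coefficient
    return outcome_r + w * trust * thinking_r

early = annealed_reward(1.0, 0.8, trust=1.0, step=0,    total_steps=1000)
late  = annealed_reward(1.0, 0.8, trust=1.0, step=1000, total_steps=1000)
```

Early in training the thinking reward shapes the policy; at the end only the outcome reward remains, matching the annealing strategy the abstract describes.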
2,505.17082 | GemMaroc: Unlocking Darija Proficiency in LLMs with Minimal Data | ['Abderrahman Skiredj', 'Ferdaous Azhari', 'Houdaifa Atou', 'Nouamane Tazi', 'Ismail Berrada'] | ['cs.CL', 'cs.AI'] | Open-source large language models (LLMs) still marginalise Moroccan Arabic
(Darija), forcing practitioners either to bolt on heavyweight Arabic adapters
or to sacrifice the very reasoning skills that make LLMs useful. We show that a
rigorously quality-over-quantity alignment strategy can surface fluent Darija
while safeguarding the backbone's cross-lingual reasoning at a sliver of the
usual compute. We translate three compact instruction suites (LIMA-1K, DEITA-6K
and TULU-50K) into Darija, preserve 20% of the English originals, and add
mathematics, coding and scientific prompts. A LoRA-tuned Gemma 3-4B trained on
5K mixed instructions lifts DarijaMMLU from 32.8% to 42.7%; adding the
reasoning-dense TULU portion pushes it to 47.5% with no English regression.
Scaling the identical recipe to Gemma 3-27B produces GemMaroc-27B, which
matches Atlas-Chat on DarijaMMLU (61.6%) and leaps ahead on Darija commonsense,
scoring 60.5% on HellaSwag versus Atlas-Chat's 48.4%. Crucially, GemMaroc
retains Gemma-27B's strong maths and general-reasoning ability, showing only
minimal movement on GSM8K and English benchmarks. The entire model is trained
in just 48 GPU-hours, underscoring a Green AI pathway to inclusive, sustainable
language technology. We release code, data and checkpoints to spur
Darija-centric applications in education, public services and everyday digital
interaction. | 2025-05-20T12:38:42Z | null | null | null | null | null | null | null | null | null | null |
2,505.17102 | BanglaByT5: Byte-Level Modelling for Bangla | ['Pramit Bhattacharyya', 'Arnab Bhattacharya'] | ['cs.CL'] | Large language models (LLMs) have achieved remarkable success across various
natural language processing tasks. However, most LLM models use traditional
tokenizers like BPE and SentencePiece, which fail to capture the finer nuances
of a morphologically rich language like Bangla (Bengali). In this work, we
introduce BanglaByT5, the first byte-level encoder-decoder model explicitly
tailored for Bangla. Built upon a small variant of Google's ByT5 architecture,
BanglaByT5 is pre-trained on a 14GB curated corpus combining high-quality
literary and newspaper articles. Through zero-shot and supervised evaluations
across generative and classification tasks, BanglaByT5 demonstrates competitive
performance, surpassing several multilingual and larger models. Our findings
highlight the efficacy of byte-level modelling for morphologically rich
languages and underscore BanglaByT5's potential as a lightweight yet powerful tool
for Bangla NLP, particularly in both resource-constrained and scalable
environments. | 2025-05-21T07:39:07Z | null | null | null | null | null | null | null | null | null | null |
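Byte-level modelling, as used by BanglaByT5's ByT5 backbone, needs no learned vocabulary: text is encoded as raw UTF-8 bytes, with IDs offset past ByT5's three special tokens (pad/eos/unk), so Bangla script is never out-of-vocabulary. A minimal sketch of that tokenization:

```python
def byte_tokenize(text, offset=3):
    """ByT5-style byte-level tokenization: encode as UTF-8 and offset
    each byte past the special tokens (ByT5 reserves IDs 0-2).
    Every script, including Bangla, maps losslessly to byte IDs."""
    return [b + offset for b in text.encode("utf-8")]

def byte_detokenize(ids, offset=3):
    """Inverse mapping: byte IDs back to text."""
    return bytes(i - offset for i in ids).decode("utf-8")

word = "বাংলা"  # 'Bangla' in Bengali script: 5 characters, 15 UTF-8 bytes
ids = byte_tokenize(word)
```

The cost of this robustness is sequence length: each Bengali character expands to three byte tokens, which is part of why byte-level models trade longer sequences for vocabulary-free coverage.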
2,505.17166 | ViDoRe Benchmark V2: Raising the Bar for Visual Retrieval | ['Quentin Macé', 'António Loison', 'Manuel Faysse'] | ['cs.IR'] | The ViDoRe Benchmark V1 was approaching saturation with top models exceeding
90% nDCG@5, limiting its ability to discern improvements. ViDoRe Benchmark V2
introduces realistic, challenging retrieval scenarios via blind contextual
querying, long and cross-document queries, and a hybrid synthetic and
human-in-the-loop query generation process. It comprises four diverse,
multilingual datasets and provides clear evaluation instructions. Initial
results demonstrate substantial room for advancement and highlight insights on
model generalization and multilingual capability. This benchmark is designed as
a living resource, inviting community contributions to maintain relevance
through future evaluations. | 2025-05-22T16:13:02Z | Published as a HuggingFace Blog | null | null | null | null | null | null | null | null | null |
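The nDCG@5 metric that the ViDoRe abstract reports saturating above 90% discounts relevant documents by the log of their rank and normalizes by the ideal ordering. A minimal sketch (binary relevance here is illustrative; the metric also supports graded relevance):

```python
import math

def ndcg_at_k(relevances, k=5):
    """nDCG@k: DCG over the top-k retrieved documents, normalized by
    the DCG of the ideal (relevance-sorted) ordering.

    relevances: relevance of each retrieved document, in rank order.
    """
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

perfect = ndcg_at_k([1, 1, 0, 0, 0])  # relevant docs ranked first
worse   = ndcg_at_k([0, 0, 0, 1, 1])  # relevant docs ranked last
```

When every retriever places the relevant pages near the top, scores cluster near 1.0, which is exactly the saturation problem V2's harder queries are designed to break.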
2,505.17266 | Select2Reason: Efficient Instruction-Tuning Data Selection for Long-CoT
Reasoning | ['Cehao Yang', 'Xueyuan Lin', 'Chengjin Xu', 'Xuhui Jiang', 'Xiaojun Wu', 'Honghao Liu', 'Hui Xiong', 'Jian Guo'] | ['cs.CL', 'cs.AI'] | A practical approach to activate long chain-of-thoughts reasoning ability in
pre-trained large language models is to perform supervised fine-tuning on
instruction datasets synthesized by strong Large Reasoning Models such as
DeepSeek-R1, offering a cost-effective alternative to reinforcement learning.
However, large-scale instruction sets with more than 100k samples incur
significant training overhead, while effective strategies for automatic
long-CoT instruction selection still remain unexplored. In this work, we
propose Select2Reason, a novel and efficient instruction-tuning data selection
framework for long-CoT reasoning. From the perspective of emergence of
rethinking behaviors like self-correction and backtracking, we investigate
common metrics that may determine the quality of long-CoT reasoning
instructions. Select2Reason leverages a quantifier to estimate difficulty of
question and jointly incorporates a reasoning trace length-based heuristic
through a weighted scheme for ranking to prioritize high-utility examples.
Empirical results on OpenR1-Math-220k demonstrate that fine-tuning LLM on only
10% of the data selected by Select2Reason achieves performance competitive with
or superior to full-data tuning and open-source baseline OpenR1-Qwen-7B across
three competition-level and six comprehensive mathematical benchmarks. Further
experiments highlight the scalability in varying data size, efficiency during
inference, and its adaptability to other instruction pools with minimal cost. | 2025-05-22T20:24:08Z | null | null | null | Select2Reason: Efficient Instruction-Tuning Data Selection for Long-CoT Reasoning | ['Cehao Yang', 'Xueyuan Lin', 'Chengjin Xu', 'Xuhui Jiang', 'Xiaojun Wu', 'Honghao Liu', 'Hui Xiong', 'Jian Guo'] | 2,025 | arXiv.org | 0 | 67 | ['Computer Science'] |
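The Select2Reason abstract ranks instructions by a weighted combination of estimated question difficulty and reasoning-trace length, keeping the top 10%. A sketch of that ranking step; the equal weighting, min-max normalization, and field names are illustrative assumptions, not the paper's exact scheme.

```python
def select2reason_rank(pool, alpha=0.5, top_frac=0.1):
    """Rank long-CoT instructions by a weighted score combining
    estimated difficulty and reasoning-trace length, then keep the
    top fraction of the pool.

    pool: list of dicts with 'difficulty' and 'trace_len' fields.
    """
    def normalize(values):
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

    d = normalize([x["difficulty"] for x in pool])
    t = normalize([x["trace_len"] for x in pool])
    scores = [alpha * a + (1 - alpha) * b for a, b in zip(d, t)]
    ranked = sorted(zip(pool, scores), key=lambda p: p[1], reverse=True)
    keep = max(1, int(len(pool) * top_frac))
    return [x for x, _ in ranked[:keep]]

# Hypothetical pool of 50 instructions with toy difficulty/length stats
pool = [{"id": i, "difficulty": i % 7, "trace_len": (i * 37) % 100}
        for i in range(50)]
selected = select2reason_rank(pool, top_frac=0.1)
```

Training on only this top-ranked slice is what the abstract reports as matching or beating full-data tuning.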
2,505.17373 | Value-Guided Search for Efficient Chain-of-Thought Reasoning | ['Kaiwen Wang', 'Jin Peng Zhou', 'Jonathan Chang', 'Zhaolin Gao', 'Nathan Kallus', 'Kianté Brantley', 'Wen Sun'] | ['cs.LG', 'cs.AI', 'cs.CL'] | In this paper, we propose a simple and efficient method for value model
training on long-context reasoning traces. Compared to existing process reward
models (PRMs), our method does not require a fine-grained notion of "step,"
which is difficult to define for long-context reasoning models. By collecting a
dataset of 2.5 million reasoning traces, we train a 1.5B token-level value
model and apply it to DeepSeek models for improved performance with test-time
compute scaling. We find that block-wise value-guided search (VGS) with a final
weighted majority vote achieves better test-time scaling than standard methods
such as majority voting or best-of-n. With an inference budget of 64
generations, VGS with DeepSeek-R1-Distill-1.5B achieves an average accuracy of
45.7% across four competition math benchmarks (AIME 2024 & 2025, HMMT Feb 2024
& 2025), reaching parity with o3-mini-medium. Moreover, VGS significantly
reduces the inference FLOPs required to achieve the same performance of
majority voting. Our dataset, model and codebase are open-sourced. | 2025-05-23T01:05:07Z | null | null | null | Value-Guided Search for Efficient Chain-of-Thought Reasoning | ['Kaiwen Wang', 'Jin Peng Zhou', 'Jonathan D. Chang', 'Zhaolin Gao', 'Nathan Kallus', 'Kianté Brantley', 'Wen Sun'] | 2,025 | arXiv.org | 1 | 54 | ['Computer Science'] |
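The final weighted majority vote in value-guided search sums the value model's scores over generations that reach the same final answer and picks the heaviest answer. A minimal sketch (the answers and scores are made-up illustrative numbers):

```python
from collections import defaultdict

def weighted_majority_vote(candidates):
    """Select a final answer by summing per-generation value scores
    over identical answers; the answer with the largest total wins.

    candidates: list of (final_answer, value_score) pairs, one per
    sampled generation.
    """
    weights = defaultdict(float)
    for answer, score in candidates:
        weights[answer] += score
    return max(weights, key=weights.get)

best = weighted_majority_vote([
    ("42", 0.9), ("42", 0.8),                 # two high-value traces
    ("41", 0.4), ("41", 0.3), ("41", 0.35),   # three low-value traces
])
```

Plain majority voting would pick "41" (three votes to two), but the value weighting lets two high-quality traces outvote three weak ones, which is how VGS improves on unweighted voting at the same generation budget.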
2,505.17412 | Direct3D-S2: Gigascale 3D Generation Made Easy with Spatial Sparse
Attention | ['Shuang Wu', 'Youtian Lin', 'Feihu Zhang', 'Yifei Zeng', 'Yikang Yang', 'Yajie Bao', 'Jiachen Qian', 'Siyu Zhu', 'Xun Cao', 'Philip Torr', 'Yao Yao'] | ['cs.CV'] | Generating high-resolution 3D shapes using volumetric representations such as
Signed Distance Functions (SDFs) presents substantial computational and memory
challenges. We introduce Direct3D-S2, a scalable 3D generation framework based
on sparse volumes that achieves superior output quality with dramatically
reduced training costs. Our key innovation is the Spatial Sparse Attention
(SSA) mechanism, which greatly enhances the efficiency of Diffusion Transformer
(DiT) computations on sparse volumetric data. SSA allows the model to
effectively process large token sets within sparse volumes, substantially
reducing computational overhead and achieving a 3.9x speedup in the forward
pass and a 9.6x speedup in the backward pass. Our framework also includes a
variational autoencoder (VAE) that maintains a consistent sparse volumetric
format across input, latent, and output stages. Compared to previous methods
with heterogeneous representations in 3D VAE, this unified design significantly
improves training efficiency and stability. Our model is trained on publicly
available datasets, and experiments demonstrate that Direct3D-S2 not only
surpasses state-of-the-art methods in generation quality and efficiency, but
also enables training at 1024 resolution using only 8 GPUs, a task typically
requiring at least 32 GPUs for volumetric representations at 256 resolution,
thus making gigascale 3D generation both practical and accessible. Project
page: https://www.neural4d.com/research/direct3d-s2. | 2025-05-23T02:58:01Z | Project page: https://www.neural4d.com/research/direct3d-s2 | null | null | Direct3D-S2: Gigascale 3D Generation Made Easy with Spatial Sparse Attention | ['Shuang Wu', 'Youtian Lin', 'Feihu Zhang', 'Yifei Zeng', 'Yikang Yang', 'Yajie Bao', 'Jiachen Qian', 'Siyu Zhu', 'Xun Cao', 'Philip Torr', 'Yao Yao'] | 2,025 | arXiv.org | 1 | 49 | ['Computer Science'] |
2,505.17426 | UniTTS: An end-to-end TTS system without decoupling of acoustic and
semantic information | ['Rui Wang', 'Qianguo Sun', 'Tianrong Chen', 'Zhiyun Zeng', 'Junlong Wu', 'Jiaxing Zhang'] | ['cs.SD', 'cs.AI', 'eess.AS'] | The emergence of multi-codebook neural audio codecs such as Residual Vector
Quantization (RVQ) and Group Vector Quantization (GVQ) has significantly
advanced Large-Language-Model (LLM) based Text-to-Speech (TTS) systems. These
codecs are crucial in separating semantic and acoustic information while
efficiently harnessing semantic priors. However, since semantic and acoustic
information cannot be fully aligned, a significant drawback of these methods
when applied to LLM-based TTS is that large language models may have limited
access to comprehensive audio information. To address this limitation, we
propose DistilCodec and UniTTS, which collectively offer the following
advantages: 1) This method can distill a multi-codebook audio codec into a
single-codebook audio codec with 32,768 codes while achieving a near 100\%
utilization. 2) As DistilCodec does not employ a semantic alignment scheme, a
large amount of high-quality unlabeled audio (such as audiobooks with sound
effects, songs, etc.) can be incorporated during training, further expanding
data diversity and broadening its applicability. 3) Leveraging the
comprehensive audio information modeling of DistilCodec, we integrated three
key tasks into UniTTS's pre-training framework: audio modality autoregression,
text modality autoregression, and speech-text cross-modal autoregression. This
allows UniTTS to accept interleaved text and speech/audio prompts while
substantially preserving LLM's text capabilities. 4) UniTTS employs a
three-stage training process: Pre-Training, Supervised Fine-Tuning (SFT), and
Alignment. Source code and model checkpoints are publicly available at
https://github.com/IDEA-Emdoor-Lab/UniTTS and
https://github.com/IDEA-Emdoor-Lab/DistilCodec. | 2025-05-23T03:13:46Z | null | null | null | UniTTS: An end-to-end TTS system without decoupling of acoustic and semantic information | ['Rui Wang', 'Qianguo Sun', 'Tianrong Chen', 'Zhiyun Zeng', 'Junlong Wu', 'Jiaxing Zhang'] | 2,025 | arXiv.org | 0 | 42 | ['Computer Science', 'Engineering'] |
2,505.17496 | Analyzing Mitigation Strategies for Catastrophic Forgetting in
End-to-End Training of Spoken Language Models | ['Chi-Yuan Hsiao', 'Ke-Han Lu', 'Kai-Wei Chang', 'Chih-Kai Yang', 'Wei-Chih Chen', 'Hung-yi Lee'] | ['cs.CL', 'cs.AI', 'cs.LG', 'cs.SD', 'eess.AS'] | End-to-end training of Spoken Language Models (SLMs) commonly involves
adapting pre-trained text-based Large Language Models (LLMs) to the speech
modality through multi-stage training on diverse tasks such as ASR, TTS and
spoken question answering (SQA). Although this multi-stage continual learning
equips LLMs with both speech understanding and generation capabilities, the
substantial differences in task and data distributions across stages can lead
to catastrophic forgetting, where previously acquired knowledge is lost. This
paper investigates catastrophic forgetting and evaluates three mitigation
strategies (model merging, discounting the LoRA scaling factor, and experience
replay) to balance knowledge retention with new learning. Results show that
experience replay is the most effective, with further gains achieved by
combining it with other methods. These findings provide insights for developing
more robust and efficient SLM training pipelines. | 2025-05-23T05:50:14Z | Accepted to Interspeech 2025 | null | null | null | null | null | null | null | null | null |
2,505.17538 | Swedish Whispers; Leveraging a Massive Speech Corpus for Swedish Speech
Recognition | ['Leonora Vesterbacka', 'Faton Rekathati', 'Robin Kurtz', 'Justyna Sikora', 'Agnes Toftgård'] | ['cs.CL', 'cs.SD', 'eess.AS'] | This work presents a suite of fine-tuned Whisper models for Swedish, trained
on a dataset of unprecedented size and variability for this mid-resourced
language. As languages of smaller sizes are often underrepresented in
multilingual training datasets, substantial improvements in performance can be
achieved by fine-tuning existing multilingual models, as shown in this work.
This work reports an overall improvement across model sizes compared to
OpenAI's Whisper evaluated on Swedish. Most notably, we report an average 47%
reduction in WER comparing our best performing model to OpenAI's
whisper-large-v3, in evaluations across FLEURS, Common Voice, and NST. | 2025-05-23T06:42:16Z | Submitted to Interspeech 2025 | null | null | null | null | null | null | null | null | null |
2,505.17592 | AstroMLab 4: Benchmark-Topping Performance in Astronomy Q&A with a
70B-Parameter Domain-Specialized Reasoning Model | ['Tijmen de Haan', 'Yuan-Sen Ting', 'Tirthankar Ghosal', 'Tuan Dung Nguyen', 'Alberto Accomazzi', 'Emily Herron', 'Vanessa Lama', 'Rui Pan', 'Azton Wells', 'Nesar Ramachandra'] | ['astro-ph.IM', 'cs.LG'] | General-purpose large language models, despite their broad capabilities,
often struggle with specialized domain knowledge, a limitation particularly
pronounced in more accessible, lower-parameter versions. This gap hinders their
deployment as effective agents in demanding fields such as astronomy. Building
on our prior work with AstroSage-8B, this study introduces AstroSage-70B, a
significantly larger and more advanced domain-specialized natural-language AI
assistant. It is designed for research and education across astronomy,
astrophysics, space science, astroparticle physics, cosmology, and astronomical
instrumentation. Developed from the Llama-3.1-70B foundation, AstroSage-70B
underwent extensive continued pre-training on a vast corpus of astronomical
literature, followed by supervised fine-tuning and model merging. Beyond its
70-billion parameter scale, this model incorporates refined datasets,
judiciously chosen learning hyperparameters, and improved training procedures,
achieving state-of-the-art performance on complex astronomical tasks. Notably,
we integrated reasoning chains into the SFT dataset, enabling AstroSage-70B to
either answer the user query immediately, or first emit a human-readable
thought process. Evaluated on the AstroMLab-1 benchmark -- comprising 4,425
questions from literature withheld during training -- AstroSage-70B achieves
state-of-the-art performance. It surpasses all other tested open-weight and
proprietary models, including leading systems like o3, Gemini-2.5-Pro,
Claude-3.7-Sonnet, Deepseek-R1, and Qwen-3-235B, even those with API costs two
orders of magnitude higher. This work demonstrates that domain specialization,
when applied to large-scale models, can enable them to outperform generalist
counterparts in specialized knowledge areas like astronomy, thereby advancing
the frontier of AI capabilities in the field. | 2025-05-23T07:58:50Z | null | null | null | AstroMLab 4: Benchmark-Topping Performance in Astronomy Q&A with a 70B-Parameter Domain-Specialized Reasoning Model | ['Tijmen de Haan', 'Y.-S. Ting', 'Tirthankar Ghosal', 'Tuan Dung Nguyen', 'Alberto Accomazzi', 'Emily Herron', 'Vanessa Lama', 'Rui Pan', 'Azton Wells', 'Nesar Ramachandra'] | 2,025 | arXiv.org | 0 | 10 | ['Physics', 'Computer Science'] |
2,505.17612 | Distilling LLM Agent into Small Models with Retrieval and Code Tools | ['Minki Kang', 'Jongwon Jeong', 'Seanie Lee', 'Jaewoong Cho', 'Sung Ju Hwang'] | ['cs.CL', 'cs.AI'] | Large language models (LLMs) excel at complex reasoning tasks but remain
computationally expensive, limiting their practical deployment. To address
this, recent works have focused on distilling reasoning capabilities into
smaller language models (sLMs) using chain-of-thought (CoT) traces from teacher
LLMs. However, this approach struggles in scenarios requiring rare factual
knowledge or precise computation, where sLMs often hallucinate due to limited
capability. In this work, we propose Agent Distillation, a framework for
transferring not only reasoning capability but full task-solving behavior from
LLM-based agents into sLMs with retrieval and code tools. We improve agent
distillation along two complementary axes: (1) we introduce a prompting method
called first-thought prefix to enhance the quality of teacher-generated
trajectories; and (2) we propose a self-consistent action generation for
improving test-time robustness of small agents. We evaluate our method on eight
reasoning tasks across factual and mathematical domains, covering both
in-domain and out-of-domain generalization. Our results show that sLMs as small
as 0.5B, 1.5B, 3B parameters can achieve performance competitive with next-tier
larger 1.5B, 3B, 7B models fine-tuned using CoT distillation, demonstrating the
potential of agent distillation for building practical, tool-using small
agents. Our code is available at https://github.com/Nardien/agent-distillation. | 2025-05-23T08:20:15Z | preprint, v1 | null | null | Distilling LLM Agent into Small Models with Retrieval and Code Tools | ['Minki Kang', 'Jongwon Jeong', 'Seanie Lee', 'Jaewoong Cho', 'Sung Ju Hwang'] | 2,025 | arXiv.org | 2 | 74 | ['Computer Science'] |
2,505.17625 | Enhancing Large Vision-Language Models with Layout Modality for Table
Question Answering on Japanese Annual Securities Reports | ['Hayato Aida', 'Kosuke Takahashi', 'Takahiro Omi'] | ['cs.CL', 'cs.CV', '68T50', 'I.2'] | With recent advancements in Large Language Models (LLMs) and growing interest
in retrieval-augmented generation (RAG), the ability to understand table
structures has become increasingly important. This is especially critical in
financial domains such as securities reports, where highly accurate question
answering (QA) over tables is required. However, tables exist in various
formats, including HTML, images, and plain text, making it difficult to preserve
and extract structural information. Therefore, multimodal LLMs are essential
for robust and general-purpose table understanding. Despite their promise,
current Large Vision-Language Models (LVLMs), which are major representatives
of multimodal LLMs, still face challenges in accurately understanding
characters and their spatial relationships within documents. In this study, we
propose a method to enhance LVLM-based table understanding by incorporating
in-table textual content and layout features. Experimental results demonstrate
that these auxiliary modalities significantly improve performance, enabling
robust interpretation of complex document layouts without relying on explicitly
structured input formats. | 2025-05-23T08:36:22Z | Accepted at IIAI AAI 2025, the 3rd International Conference on
Computational and Data Sciences in Economics and Finance | null | null | null | null | null | null | null | null | null |
2,505.17667 | QwenLong-L1: Towards Long-Context Large Reasoning Models with
Reinforcement Learning | ['Fanqi Wan', 'Weizhou Shen', 'Shengyi Liao', 'Yingcheng Shi', 'Chenliang Li', 'Ziyi Yang', 'Ji Zhang', 'Fei Huang', 'Jingren Zhou', 'Ming Yan'] | ['cs.CL'] | Recent large reasoning models (LRMs) have demonstrated strong reasoning
capabilities through reinforcement learning (RL). These improvements have
primarily been observed within the short-context reasoning tasks. In contrast,
extending LRMs to effectively process and reason on long-context inputs via RL
remains a critical unsolved challenge. To bridge this gap, we first formalize
the paradigm of long-context reasoning RL, and identify key challenges in
suboptimal training efficiency and unstable optimization process. To address
these issues, we propose QwenLong-L1, a framework that adapts short-context
LRMs to long-context scenarios via progressive context scaling. Specifically,
we utilize a warm-up supervised fine-tuning (SFT) stage to establish a robust
initial policy, followed by a curriculum-guided phased RL technique to
stabilize the policy evolution, and enhanced with a difficulty-aware
retrospective sampling strategy to incentivize the policy exploration.
Experiments on seven long-context document question-answering benchmarks
demonstrate that QwenLong-L1-32B outperforms flagship LRMs like OpenAI-o3-mini
and Qwen3-235B-A22B, achieving performance on par with
Claude-3.7-Sonnet-Thinking, demonstrating leading performance among
state-of-the-art LRMs. This work advances the development of practical
long-context LRMs capable of robust reasoning across information-intensive
environments. | 2025-05-23T09:31:55Z | Technical Report | null | null | QwenLong-L1: Towards Long-Context Large Reasoning Models with Reinforcement Learning | ['Fanqi Wan', 'Weizhou Shen', 'Shengyi Liao', 'Yingcheng Shi', 'Chenliang Li', 'Ziyi Yang', 'Ji Zhang', 'Fei Huang', 'Jingren Zhou', 'Ming Yan'] | 2,025 | arXiv.org | 0 | 64 | ['Computer Science'] |
2,505.17778 | TextFlux: An OCR-Free DiT Model for High-Fidelity Multilingual Scene
Text Synthesis | ['Yu Xie', 'Jielei Zhang', 'Pengyu Chen', 'Ziyue Wang', 'Weihang Wang', 'Longwen Gao', 'Peiyi Li', 'Huyang Sun', 'Qiang Zhang', 'Qian Qiao', 'Jiaqing Fan', 'Zhouhui Lian'] | ['cs.CV'] | Diffusion-based scene text synthesis has progressed rapidly, yet existing
methods commonly rely on additional visual conditioning modules and require
large-scale annotated data to support multilingual generation. In this work, we
revisit the necessity of complex auxiliary modules and further explore an
approach that simultaneously ensures glyph accuracy and achieves high-fidelity
scene integration, by leveraging diffusion models' inherent capabilities for
contextual reasoning. To this end, we introduce TextFlux, a DiT-based framework
that enables multilingual scene text synthesis. The advantages of TextFlux can
be summarized as follows: (1) OCR-free model architecture. TextFlux eliminates
the need for OCR encoders (additional visual conditioning modules) that are
specifically used to extract visual text-related features. (2) Strong
multilingual scalability. TextFlux is effective in low-resource multilingual
settings, and achieves strong performance in newly added languages with fewer
than 1,000 samples. (3) Streamlined training setup. TextFlux is trained with
only 1% of the training data required by competing methods. (4) Controllable
multi-line text generation. TextFlux offers flexible multi-line synthesis with
precise line-level control, outperforming methods restricted to single-line or
rigid layouts. Extensive experiments and visualizations demonstrate that
TextFlux outperforms previous methods in both qualitative and quantitative
evaluations. | 2025-05-23T11:46:46Z | null | null | null | TextFlux: An OCR-Free DiT Model for High-Fidelity Multilingual Scene Text Synthesis | ['Yu Xie', 'Jielei Zhang', 'Pengyu Chen', 'Ziyue Wang', 'Weihang Wang', 'Longwen Gao', 'Peiyi Li', 'Huyang Sun', 'Qiang Zhang', 'Qian Qiao', 'Jiaqing Fan', 'Zhouhui Lian'] | 2,025 | arXiv.org | 1 | 55 | ['Computer Science'] |
2,505.17941 | VeriThinker: Learning to Verify Makes Reasoning Model Efficient | ['Zigeng Chen', 'Xinyin Ma', 'Gongfan Fang', 'Ruonan Yu', 'Xinchao Wang'] | ['cs.LG'] | Large Reasoning Models (LRMs) excel at complex tasks using Chain-of-Thought
(CoT) reasoning. However, their tendency to overthink leads to unnecessarily
lengthy reasoning chains, dramatically increasing inference costs. To mitigate
this issue, we introduce VeriThinker, a novel approach for CoT compression.
Unlike conventional methods that fine-tune LRMs directly on the original
reasoning task using synthetic concise CoT data, we innovatively fine-tune the
model solely through an auxiliary verification task. By training LRMs to
accurately verify the correctness of CoT solutions, the LRMs inherently become
more discerning about the necessity of subsequent self-reflection steps,
thereby effectively suppressing overthinking. Extensive experiments validate
that VeriThinker substantially reduces reasoning chain lengths while
maintaining or even slightly improving accuracy. When applied to
DeepSeek-R1-Distill-Qwen-7B, our approach reduces reasoning tokens on MATH500
from 3790 to 2125 while improving accuracy by 0.8% (94.0% to 94.8%), and on
AIME25, tokens decrease from 14321 to 10287 with a 2.1% accuracy gain (38.7% to
40.8%). Additionally, our experiments demonstrate that VeriThinker can also be
zero-shot generalized to speculative reasoning. Code is available at
https://github.com/czg1225/VeriThinker | 2025-05-23T14:17:56Z | Work in progress. Code Repo:
https://github.com/czg1225/VeriThinker | null | null | VeriThinker: Learning to Verify Makes Reasoning Model Efficient | ['Zigeng Chen', 'Xinyin Ma', 'Gongfan Fang', 'Ruonan Yu', 'Xinchao Wang'] | 2,025 | arXiv.org | 1 | 71 | ['Computer Science'] |
2,505.17952 | Beyond Distillation: Pushing the Limits of Medical LLM Reasoning with
Minimalist Rule-Based RL | ['Che Liu', 'Haozhe Wang', 'Jiazhen Pan', 'Zhongwei Wan', 'Yong Dai', 'Fangzhen Lin', 'Wenjia Bai', 'Daniel Rueckert', 'Rossella Arcucci'] | ['cs.CL', 'cs.AI'] | Improving performance on complex tasks and enabling interpretable decision
making in large language models (LLMs), especially for clinical applications,
requires effective reasoning. Yet this remains challenging without supervised
fine-tuning (SFT) on costly chain-of-thought (CoT) data distilled from
closed-source models (e.g., GPT-4o). In this work, we present AlphaMed, the
first medical LLM to show that reasoning capability can emerge purely through
reinforcement learning (RL), using minimalist rule-based rewards on public
multiple-choice QA datasets, without relying on SFT or distilled CoT data.
AlphaMed achieves state-of-the-art results on six medical QA benchmarks,
outperforming models trained with conventional SFT+RL pipelines. On challenging
benchmarks (e.g., MedXpert), AlphaMed even surpasses larger or closed-source
models such as DeepSeek-V3-671B and Claude-3.5-Sonnet. To understand the
factors behind this success, we conduct a comprehensive data-centric analysis
guided by three questions: (i) Can minimalist rule-based RL incentivize
reasoning without distilled CoT supervision? (ii) How do dataset quantity and
diversity impact reasoning? (iii) How does question difficulty shape the
emergence and generalization of reasoning? Our findings show that dataset
informativeness is a key driver of reasoning performance, and that minimalist
RL on informative, multiple-choice QA data is effective at inducing reasoning
without CoT supervision. We also observe divergent trends across benchmarks,
underscoring limitations in current evaluation and the need for more
challenging, reasoning-oriented medical QA benchmarks. | 2025-05-23T14:27:37Z | Under Review | null | null | Beyond Distillation: Pushing the Limits of Medical LLM Reasoning with Minimalist Rule-Based RL | ['Che Liu', 'Haozhe Wang', 'Jiazhen Pan', 'Zhongwei Wan', 'Yong Dai', 'Fangzhen Lin', 'Wenjia Bai', 'D. Rueckert', 'Rossella Arcucci'] | 2,025 | arXiv.org | 1 | 48 | ['Computer Science'] |
2,505.18092 | QwenLong-CPRS: Towards $\infty$-LLMs with Dynamic Context Optimization | ['Weizhou Shen', 'Chenliang Li', 'Fanqi Wan', 'Shengyi Liao', 'Shaopeng Lai', 'Bo Zhang', 'Yingcheng Shi', 'Yuning Wu', 'Gang Fu', 'Zhansheng Li', 'Bin Yang', 'Ji Zhang', 'Fei Huang', 'Jingren Zhou', 'Ming Yan'] | ['cs.CL'] | This technical report presents QwenLong-CPRS, a context compression framework
designed for explicit long-context optimization, addressing prohibitive
computation overhead during the prefill stage and the "lost in the middle"
performance degradation of large language models (LLMs) during long sequence
processing. Implemented through a novel dynamic context optimization mechanism,
QwenLong-CPRS enables multi-granularity context compression guided by natural
language instructions, achieving both efficiency gains and improved
performance.
Evolved from the Qwen architecture series, QwenLong-CPRS introduces four key
innovations: (1) Natural language-guided dynamic optimization, (2)
Bidirectional reasoning layers for enhanced boundary awareness, (3) Token
critic mechanisms with language modeling heads, and (4) Window-parallel
inference.
Comprehensive evaluations across five benchmarks (4K-2M word contexts)
demonstrate QwenLong-CPRS's threefold effectiveness: (1) Consistent superiority
over other context management methods like RAG and sparse attention in both
accuracy and efficiency. (2) Architecture-agnostic integration with all
flagship LLMs, including GPT-4o, Gemini2.0-pro, Claude3.7-sonnet, DeepSeek-v3,
and Qwen2.5-max, achieves 21.59$\times$ context compression alongside
19.15-point average performance gains; (3) Deployed with Qwen2.5-32B-Instruct,
QwenLong-CPRS surpasses leading proprietary LLMs by 4.85 and 10.88 points on
Ruler-128K and InfiniteBench, establishing new SOTA performance. | 2025-05-23T16:47:00Z | null | null | null | QwenLong-CPRS: Towards ∞-LLMs with Dynamic Context Optimization | ['Weizhou Shen', 'Chenliang Li', 'Fanqi Wan', 'Shengyi Liao', 'Shaopeng Lai', 'Bo Zhang', 'Yingcheng Shi', 'Yuning Wu', 'Gang Fu', 'Zhansheng Li', 'Bin Yang', 'Ji Zhang', 'Fei Huang', 'Jingren Zhou', 'Ming Yan'] | 2,025 | arXiv.org | 1 | 40 | ['Computer Science'] |
2,505.18125 | TabSTAR: A Foundation Tabular Model With Semantically Target-Aware
Representations | ['Alan Arazi', 'Eilam Shapira', 'Roi Reichart'] | ['cs.LG', 'cs.CL'] | While deep learning has achieved remarkable success across many domains, it
has historically underperformed on tabular learning tasks, which remain
dominated by gradient boosting decision trees (GBDTs). However, recent
advancements are paving the way for Tabular Foundation Models, which can
leverage real-world knowledge and generalize across diverse datasets,
particularly when the data contains free-text. Although incorporating language
model capabilities into tabular tasks has been explored, most existing methods
utilize static, target-agnostic textual representations, limiting their
effectiveness. We introduce TabSTAR: a Foundation Tabular Model with
Semantically Target-Aware Representations. TabSTAR is designed to enable
transfer learning on tabular data with textual features, with an architecture
free of dataset-specific parameters. It unfreezes a pretrained text encoder and
takes as input target tokens, which provide the model with the context needed
to learn task-specific embeddings. TabSTAR achieves state-of-the-art
performance for both medium- and large-sized datasets across known benchmarks
of classification tasks with text features, and its pretraining phase exhibits
scaling laws in the number of datasets, offering a pathway for further
performance improvements. | 2025-05-23T17:34:28Z | null | null | null | null | null | null | null | null | null | null |
2,505.18129 | One RL to See Them All: Visual Triple Unified Reinforcement Learning | ['Yan Ma', 'Linge Du', 'Xuyang Shen', 'Shaoxiang Chen', 'Pengfei Li', 'Qibing Ren', 'Lizhuang Ma', 'Yuchao Dai', 'Pengfei Liu', 'Junjie Yan'] | ['cs.CV', 'cs.CL'] | Reinforcement learning (RL) has significantly advanced the reasoning
capabilities of vision-language models (VLMs). However, the use of RL beyond
reasoning tasks remains largely unexplored, especially for perception-intensive
tasks like object detection and grounding. We propose V-Triune, a Visual Triple
Unified Reinforcement Learning system that enables VLMs to jointly learn visual
reasoning and perception tasks within a single training pipeline. V-Triune
comprises triple complementary components: Sample-Level Data Formatting (to
unify diverse task inputs), Verifier-Level Reward Computation (to deliver
custom rewards via specialized verifiers), and Source-Level Metric Monitoring
(to diagnose problems at the data-source level). We further introduce a novel
Dynamic IoU reward, which provides adaptive, progressive, and definite feedback
for perception tasks handled by V-Triune. Our approach is instantiated within
off-the-shelf RL training framework using open-source 7B and 32B backbone
models. The resulting model, dubbed Orsta (One RL to See Them All),
demonstrates consistent improvements across both reasoning and perception
tasks. This broad capability is significantly shaped by its training on a
diverse dataset, constructed around four representative visual reasoning tasks
(Math, Puzzle, Chart, and Science) and four visual perception tasks (Grounding,
Detection, Counting, and OCR). Subsequently, Orsta achieves substantial gains
on MEGA-Bench Core, with improvements ranging from +2.1 to an impressive +14.1
across its various 7B and 32B model variants, with performance benefits
extending to a wide range of downstream tasks. These results highlight the
effectiveness and scalability of our unified RL approach for VLMs. The V-Triune
system, along with the Orsta models, is publicly available at
https://github.com/MiniMax-AI. | 2025-05-23T17:41:14Z | Technical Report | null | null | null | null | null | null | null | null | null |
2,505.18179 | GAIA: A Foundation Model for Operational Atmospheric Dynamics | ['Ata Akbari Asanjan', 'Olivia Alexander', 'Tom Berg', 'Clara Zhang', 'Matt Yang', 'Jad Makki', 'Disha Shidham', 'Srija Chakraborty', 'William Bender', 'Stephen Peng', 'Arun Ravindran', 'Olivier Raiman', 'David Potere', 'David Bell'] | ['cs.LG', 'cs.AI'] | We present the GAIA (Geospatial Artificial Intelligence for Atmospheres)
Foundation Model, a novel model that combines masked autoencoders (MAE) and
self-DIstillation with NO labels (DINO) for analyzing global atmospheric
patterns in satellite imagery. By integrating these complementary
self-supervised learning approaches, our model simultaneously captures both
local features and global dependencies. We address two critical challenges in
satellite data analysis: reconstructing missing regions and estimating
precipitation patterns as our first downstream tasks. The model demonstrates
superior temporal pattern capture compared to standard MAE approaches, while
maintaining robust performance in downstream tasks. Our experimental results
show strong gap-filling capabilities across varying mask ratios and accurate
precipitation estimation with limited training data, achieving a false alarm
ratio of 0.088 and structural similarity of 0.881. This work represents an
advancement in self-supervised learning for atmospheric science, providing a
foundation for improved weather monitoring and climate analysis. The trained
model weights and accompanying code are publicly available as open-source on
Hugging Face here: https://huggingface.co/bcg-usra-nasa-gaia/GAIA-v1. | 2025-05-15T05:07:09Z | 14 pages, 7 figures | null | null | null | null | null | null | null | null | null |
2,505.18383 | NileChat: Towards Linguistically Diverse and Culturally Aware LLMs for
Local Communities | ['Abdellah El Mekki', 'Houdaifa Atou', 'Omer Nacar', 'Shady Shehata', 'Muhammad Abdul-Mageed'] | ['cs.CL'] | Enhancing the linguistic capabilities of Large Language Models (LLMs) to
include low-resource languages is a critical research area. Current research
directions predominantly rely on synthetic data generated by translating
English corpora, which, while demonstrating promising linguistic understanding
and translation abilities, often results in models aligned with source language
culture. These models frequently fail to represent the cultural heritage and
values of local communities. This work proposes a methodology to create both
synthetic and retrieval-based pre-training data tailored to a specific
community, considering its (i) language, (ii) cultural heritage, and (iii)
cultural values. We demonstrate our methodology using Egyptian and Moroccan
dialects as testbeds, chosen for their linguistic and cultural richness and
current underrepresentation in LLMs. As a proof-of-concept, we develop
NileChat, a 3B parameter LLM adapted for Egyptian and Moroccan communities,
incorporating their language, cultural heritage, and values. Our results on
various understanding, translation, and cultural and values alignment
benchmarks show that NileChat outperforms existing Arabic-aware LLMs of similar
size and performs on par with larger models. We share our methods, data, and
models with the community to promote the inclusion and coverage of more diverse
communities in LLM development. | 2025-05-23T21:18:40Z | null | null | null | null | null | null | null | null | null | null |
2,505.18405 | RaDeR: Reasoning-aware Dense Retrieval Models | ['Debrup Das', "Sam O' Nuallain", 'Razieh Rahimi'] | ['cs.CL', 'cs.IR'] | We propose RaDeR, a set of reasoning-based dense retrieval models trained
with data derived from mathematical problem solving using large language models
(LLMs). Our method leverages retrieval-augmented reasoning trajectories of an
LLM and self-reflective relevance evaluation, enabling the creation of both
diverse and hard-negative samples for reasoning-intensive relevance. RaDeR
retrievers, trained for mathematical reasoning, effectively generalize to
diverse reasoning tasks in the BRIGHT and RAR-b benchmarks, consistently
outperforming strong baselines in overall performance. Notably, RaDeR achieves
significantly higher performance than baselines on the Math and Coding splits.
In addition, RaDeR presents the first dense retriever that outperforms BM25
when queries are Chain-of-Thought reasoning steps, underscoring the critical
role of reasoning-based retrieval to augment reasoning language models.
Furthermore, RaDeR achieves comparable or superior performance while using only
2.5% of the training data used by the concurrent work REASONIR, highlighting
the quality of our synthesized training data. | 2025-05-23T22:18:32Z | 26 pages | null | null | RaDeR: Reasoning-aware Dense Retrieval Models | ['Debrup Das', "Sam O' Nuallain", 'Razieh Rahimi'] | 2,025 | arXiv.org | 1 | 47 | ['Computer Science'] |
2,505.18445 | OmniConsistency: Learning Style-Agnostic Consistency from Paired
Stylization Data | ['Yiren Song', 'Cheng Liu', 'Mike Zheng Shou'] | ['cs.CV'] | Diffusion models have advanced image stylization significantly, yet two core
challenges persist: (1) maintaining consistent stylization in complex scenes,
particularly identity, composition, and fine details, and (2) preventing style
degradation in image-to-image pipelines with style LoRAs. GPT-4o's exceptional
stylization consistency highlights the performance gap between open-source
methods and proprietary models. To bridge this gap, we propose
\textbf{OmniConsistency}, a universal consistency plugin leveraging large-scale
Diffusion Transformers (DiTs). OmniConsistency contributes: (1) an in-context
consistency learning framework trained on aligned image pairs for robust
generalization; (2) a two-stage progressive learning strategy decoupling style
learning from consistency preservation to mitigate style degradation; and (3) a
fully plug-and-play design compatible with arbitrary style LoRAs under the Flux
framework. Extensive experiments show that OmniConsistency significantly
enhances visual coherence and aesthetic quality, achieving performance
comparable to commercial state-of-the-art model GPT-4o. | 2025-05-24T01:00:20Z | null | null | null | OmniConsistency: Learning Style-Agnostic Consistency from Paired Stylization Data | ['Yiren Song', 'Cheng Liu', 'Mike Zheng Shou'] | 2,025 | arXiv.org | 2 | 46 | ['Computer Science'] |
2,505.18495 | Beyond Masked and Unmasked: Discrete Diffusion Models via Partial
Masking | ['Chen-Hao Chao', 'Wei-Fang Sun', 'Hanwen Liang', 'Chun-Yi Lee', 'Rahul G. Krishnan'] | ['cs.LG'] | Masked diffusion models (MDM) are powerful generative models for discrete
data that generate samples by progressively unmasking tokens in a sequence.
Each token can take one of two states: masked or unmasked. We observe that
token sequences often remain unchanged between consecutive sampling steps;
consequently, the model repeatedly processes identical inputs, leading to
redundant computation. To address this inefficiency, we propose the Partial
masking scheme (Prime), which augments MDM by allowing tokens to take
intermediate states interpolated between the masked and unmasked states. This
design enables the model to make predictions based on partially observed token
information, and facilitates a fine-grained denoising process. We derive a
variational training objective and introduce a simple architectural design to
accommodate intermediate-state inputs. Our method demonstrates superior
performance across a diverse set of generative modeling tasks. On text data, it
achieves a perplexity of 15.36 on OpenWebText, outperforming previous MDM
(21.52), autoregressive models (17.54), and their hybrid variants (17.58),
without relying on an autoregressive formulation. On image data, it attains
competitive FID scores of 3.26 on CIFAR-10 and 6.98 on ImageNet-32, comparable
to leading continuous generative models. | 2025-05-24T04:16:40Z | null | null | null | Beyond Masked and Unmasked: Discrete Diffusion Models via Partial Masking | ['Chen-Hao Chao', 'Wei-Fang Sun', 'Hanwen Liang', 'Chun-Yi Lee', 'Rahul G. Krishnan'] | 2,025 | arXiv.org | 0 | 65 | ['Computer Science'] |
2,505.18499 | G1: Teaching LLMs to Reason on Graphs with Reinforcement Learning | ['Xiaojun Guo', 'Ang Li', 'Yifei Wang', 'Stefanie Jegelka', 'Yisen Wang'] | ['cs.LG', 'cs.AI', 'stat.ML'] | Although Large Language Models (LLMs) have demonstrated remarkable progress,
their proficiency in graph-related tasks remains notably limited, hindering the
development of truly general-purpose models. Previous attempts, including
pretraining graph foundation models or employing supervised fine-tuning, often
face challenges such as the scarcity of large-scale, universally represented
graph data. We introduce G1, a simple yet effective approach demonstrating that
Reinforcement Learning (RL) on synthetic graph-theoretic tasks can
significantly scale LLMs' graph reasoning abilities. To enable RL training, we
curate Erd\~os, the largest graph reasoning dataset to date comprising 50
diverse graph-theoretic tasks of varying difficulty levels, 100k training data
and 5k test data, all derived from real-world graphs. With RL on Erd\~os, G1
obtains substantial improvements in graph reasoning, where our finetuned 3B
model even outperforms Qwen2.5-72B-Instruct (24x size). RL-trained models also
show strong zero-shot generalization to unseen tasks, domains, and graph
encoding schemes, including other graph-theoretic benchmarks as well as
real-world node classification and link prediction tasks, without compromising
general reasoning abilities. Our findings offer an efficient, scalable path for
building strong graph reasoners by finetuning LLMs with RL on graph-theoretic
tasks, which combines the strengths of pretrained LLM capabilities with
abundant, automatically generated synthetic data, suggesting that LLMs possess
graph understanding abilities that RL can elicit successfully. Our
implementation is open-sourced at https://github.com/PKU-ML/G1, with models and
datasets hosted on Hugging Face collections
https://huggingface.co/collections/PKU-ML/g1-683d659e992794fc99618cf2 for
broader accessibility. | 2025-05-24T04:33:41Z | null | null | null | null | null | null | null | null | null | null |
2,505.18601 | Flex-Judge: Think Once, Judge Anywhere | ['Jongwoo Ko', 'Sungnyun Kim', 'Sungwoo Cho', 'Se-Young Yun'] | ['cs.CL', 'cs.AI'] | Human-generated reward signals are critical for aligning generative models
with human preferences, guiding both training and inference-time evaluations.
While large language models (LLMs) employed as proxy evaluators, i.e.,
LLM-as-a-Judge, significantly reduce the costs associated with manual
annotations, they typically require extensive modality-specific training data
and fail to generalize well across diverse multimodal tasks. In this paper, we
propose Flex-Judge, a reasoning-guided multimodal judge model that leverages
minimal textual reasoning data to robustly generalize across multiple
modalities and evaluation formats. Our core intuition is that structured
textual reasoning explanations inherently encode generalizable decision-making
patterns, enabling an effective transfer to multimodal judgments, e.g., with
images or videos. Empirical results demonstrate that Flex-Judge, despite being
trained on significantly less text data, achieves competitive or superior
performance compared to state-of-the-art commercial APIs and extensively
trained multimodal evaluators. Notably, Flex-Judge presents broad impact in
modalities like molecule, where comprehensive evaluation benchmarks are scarce,
underscoring its practical value in resource-constrained domains. Our framework
highlights reasoning-based text supervision as a powerful, cost-effective
alternative to traditional annotation-intensive approaches, substantially
advancing scalable multimodal model-as-a-judge. | 2025-05-24T08:50:53Z | The code is available at https://github.com/jongwooko/flex-judge | null | null | Flex-Judge: Think Once, Judge Anywhere | ['Jongwoo Ko', 'Sungnyun Kim', 'Sungwoo Cho', 'Se-Young Yun'] | 2,025 | arXiv.org | 0 | 87 | ['Computer Science'] |
2,505.18842 | Don't Look Only Once: Towards Multimodal Interactive Reasoning with
Selective Visual Revisitation | ['Jiwan Chung', 'Junhyeok Kim', 'Siyeol Kim', 'Jaeyoung Lee', 'Min Soo Kim', 'Youngjae Yu'] | ['cs.CL', 'cs.CV'] | We present v1, a lightweight extension to Multimodal Large Language Models
(MLLMs) that enables selective visual revisitation during inference. While
current MLLMs typically consume visual input only once and reason purely over
internal memory, v1 introduces a simple point-and-copy mechanism that allows
the model to dynamically retrieve relevant image regions throughout the
reasoning process. This mechanism augments existing architectures with minimal
modifications, enabling contextual access to visual tokens based on the model's
evolving hypotheses. To train this capability, we construct v1g, a dataset of
300K multimodal reasoning traces with interleaved visual grounding annotations.
Experiments on three multimodal mathematical reasoning benchmarks -- MathVista,
MathVision, and MathVerse -- demonstrate that v1 consistently improves
performance over comparable baselines, particularly on tasks requiring
fine-grained visual reference and multi-step reasoning. Our results suggest
that dynamic visual access is a promising direction for enhancing grounded
multimodal reasoning. Code, models, and data will be released to support future
research. | 2025-05-24T19:30:47Z | null | null | null | null | null | null | null | null | null | null |
2,505.19 | VerIPO: Cultivating Long Reasoning in Video-LLMs via Verifier-Guided
Iterative Policy Optimization | ['Yunxin Li', 'Xinyu Chen', 'Zitao Li', 'Zhenyu Liu', 'Longyue Wang', 'Wenhan Luo', 'Baotian Hu', 'Min Zhang'] | ['cs.CL', 'cs.CV'] | Applying Reinforcement Learning (RL) to Video Large Language Models
(Video-LLMs) shows significant promise for complex video reasoning. However,
popular Reinforcement Fine-Tuning (RFT) methods, such as outcome-based Group
Relative Policy Optimization (GRPO), are limited by data preparation
bottlenecks (e.g., noise or high cost) and exhibit unstable improvements in the
quality of long chain-of-thoughts (CoTs) and downstream performance. To address
these limitations, we propose VerIPO, a Verifier-guided Iterative Policy
Optimization method designed to gradually improve video LLMs' capacity for
generating deep, long-term reasoning chains. The core component is
Rollout-Aware Verifier, positioned between the GRPO and Direct Preference
Optimization (DPO) training phases to form the GRPO-Verifier-DPO training loop.
This verifier leverages small LLMs as a judge to assess the reasoning logic of
rollouts, enabling the construction of high-quality contrastive data, including
reflective and contextually consistent CoTs. These curated preference samples
drive the efficient DPO stage (7x faster than GRPO), leading to marked
improvements in reasoning chain quality, especially in terms of length and
contextual consistency. This training loop benefits from GRPO's expansive
search and DPO's targeted optimization. Experimental results demonstrate: 1)
Significantly faster and more effective optimization compared to standard GRPO
variants, yielding superior performance; 2) Our trained models exceed the
direct inference of large-scale instruction-tuned Video-LLMs, producing long
and contextually consistent CoTs on diverse video reasoning tasks; and 3) Our
model with one iteration outperforms powerful LMMs (e.g., Kimi-VL) and long
reasoning models (e.g., Video-R1), highlighting its effectiveness and
stability. | 2025-05-25T06:41:28Z | 19 pages, 9 figures, Project Link:
https://github.com/HITsz-TMG/VerIPO | null | null | null | null | null | null | null | null | null |
2,505.19084 | Jodi: Unification of Visual Generation and Understanding via Joint
Modeling | ['Yifeng Xu', 'Zhenliang He', 'Meina Kan', 'Shiguang Shan', 'Xilin Chen'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Visual generation and understanding are two deeply interconnected aspects of
human intelligence, yet they have been traditionally treated as separate tasks
in machine learning. In this paper, we propose Jodi, a diffusion framework that
unifies visual generation and understanding by jointly modeling the image
domain and multiple label domains. Specifically, Jodi is built upon a linear
diffusion transformer along with a role switch mechanism, which enables it to
perform three particular types of tasks: (1) joint generation, where the model
simultaneously generates images and multiple labels; (2) controllable
generation, where images are generated conditioned on any combination of
labels; and (3) image perception, where multiple labels can be predicted at
once from a given image. Furthermore, we present the Joint-1.6M dataset, which
contains 200,000 high-quality images collected from public sources, automatic
labels for 7 visual domains, and LLM-generated captions. Extensive experiments
demonstrate that Jodi excels in both generation and understanding tasks and
exhibits strong extensibility to a wider range of visual domains. Code is
available at https://github.com/VIPL-GENUN/Jodi. | 2025-05-25T10:40:52Z | Code: https://github.com/VIPL-GENUN/Jodi | null | null | null | null | null | null | null | null | null |
2,505.19094 | SATORI-R1: Incentivizing Multimodal Reasoning with Spatial Grounding and
Verifiable Rewards | ['Chuming Shen', 'Wei Wei', 'Xiaoye Qu', 'Yu Cheng'] | ['cs.CV', 'cs.AI'] | DeepSeek-R1 has demonstrated powerful reasoning capabilities in the text
domain through stable reinforcement learning (RL). Recently, in the multimodal
domain, works have begun to directly apply RL to generate R1-like free-form
reasoning for Visual Question Answering (VQA) tasks. However, multimodal tasks
share an intrinsically different nature from textual tasks, which heavily rely
on the understanding of the input image to solve the problem. Therefore, such
free-form reasoning faces two critical limitations in the VQA task: (1)
Extended reasoning chains diffuse visual focus away from task-critical regions,
degrading answer accuracy. (2) Unverifiable intermediate steps amplify
policy-gradient variance and computational overhead. To address these
issues, in this paper, we introduce SATORI ($\textbf{S}patially$
$\textbf{A}nchored$ $\textbf{T}ask$ $\textbf{O}ptimization$ with
$\textbf{R}e\textbf{I}nforcement$ Learning), which decomposes VQA into three
verifiable stages, including global image captioning, region localization, and
answer prediction, each supplying explicit reward signals. Furthermore, we also
introduce VQA-Verify, a 12k dataset annotated with answer-aligned captions and
bounding-boxes to facilitate training. Experiments demonstrate consistent
performance improvements across seven VQA benchmarks, achieving up to $15.7\%$
improvement in accuracy compared to the R1-like baseline. Our
analysis of the attention map confirms enhanced focus on critical regions,
which brings improvements in accuracy. Our code is available at
https://github.com/justairr/SATORI-R1. | 2025-05-25T11:11:06Z | Under review | null | null | null | null | null | null | null | null | null |
2,505.19095 | ScreenExplorer: Training a Vision-Language Model for Diverse Exploration
in Open GUI World | ['Runliang Niu', 'Jinglong Ji', 'Yi Chang', 'Qi Wang'] | ['cs.AI'] | The rapid progress of large language models (LLMs) has sparked growing
interest in building Artificial General Intelligence (AGI) within Graphical
User Interface (GUI) environments. However, existing GUI agents based on LLMs
or vision-language models (VLMs) often fail to generalize to novel environments
and rely heavily on manually curated, diverse datasets. To overcome these
limitations, we introduce ScreenExplorer, a VLM trained via Group Relative
Policy Optimization (GRPO) in real, dynamic, and open-ended GUI environments.
Innovatively, we introduce a world-model-based curiosity reward function to
help the agent overcome the cold-start phase of exploration. Additionally,
distilling experience streams further enhances the model's exploration
capabilities. Our training framework enhances model exploration in open GUI
environments, with trained models showing better environmental adaptation and
sustained exploration compared to static deployment models. Our findings offer
a scalable pathway toward AGI systems with self-improving capabilities in
complex interactive settings. | 2025-05-25T11:13:03Z | null | null | null | null | null | null | null | null | null | null |
2,505.19103 | WHISTRESS: Enriching Transcriptions with Sentence Stress Detection | ['Iddo Yosha', 'Dorin Shteyman', 'Yossi Adi'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Spoken language conveys meaning not only through words but also through
intonation, emotion, and emphasis. Sentence stress, the emphasis placed on
specific words within a sentence, is crucial for conveying speaker intent and
has been extensively studied in linguistics. In this work, we introduce
WHISTRESS, an alignment-free approach for enhancing transcription systems with
sentence stress detection. To support this task, we propose TINYSTRESS-15K, a
scalable, synthetic training dataset for sentence stress detection, which
resulted from a fully automated dataset creation process. We train
WHISTRESS on TINYSTRESS-15K and evaluate it against several competitive
baselines. Our results show that WHISTRESS outperforms existing methods while
requiring no additional input priors during training or inference. Notably,
despite being trained on synthetic data, WHISTRESS demonstrates strong
zero-shot generalization across diverse benchmarks. Project page:
https://pages.cs.huji.ac.il/adiyoss-lab/whistress. | 2025-05-25T11:45:08Z | Accepted to Interspeech2025 | null | null | null | null | null | null | null | null | null |
2,505.19114 | CreatiDesign: A Unified Multi-Conditional Diffusion Transformer for
Creative Graphic Design | ['Hui Zhang', 'Dexiang Hong', 'Maoke Yang', 'Yutao Cheng', 'Zhao Zhang', 'Jie Shao', 'Xinglong Wu', 'Zuxuan Wu', 'Yu-Gang Jiang'] | ['cs.CV'] | Graphic design plays a vital role in visual communication across advertising,
marketing, and multimedia entertainment. Prior work has explored automated
graphic design generation using diffusion models, aiming to streamline creative
workflows and democratize design capabilities. However, complex graphic design
scenarios require accurately adhering to design intent specified by multiple
heterogeneous user-provided elements (e.g., images, layouts, and texts), which
pose multi-condition control challenges for existing methods. Specifically,
previous single-condition control models demonstrate effectiveness only within
their specialized domains but fail to generalize to other conditions, while
existing multi-condition methods often lack fine-grained control over each
sub-condition and compromise overall compositional harmony. To address these
limitations, we introduce CreatiDesign, a systematic solution for automated
graphic design covering both model architecture and dataset construction.
First, we design a unified multi-condition driven architecture that enables
flexible and precise integration of heterogeneous design elements with minimal
architectural modifications to the base diffusion model. Furthermore, to ensure
that each condition precisely controls its designated image region and to avoid
interference between conditions, we propose a multimodal attention mask
mechanism. Additionally, we develop a fully automated pipeline for constructing
graphic design datasets, and introduce a new dataset with 400K samples
featuring multi-condition annotations, along with a comprehensive benchmark.
Experimental results show that CreatiDesign outperforms existing models by a
clear margin in faithfully adhering to user intent. | 2025-05-25T12:14:23Z | null | null | null | null | null | null | null | null | null | null |
2,505.19201 | DREAM: Drafting with Refined Target Features and Entropy-Adaptive
Cross-Attention Fusion for Multimodal Speculative Decoding | ['Yunhai Hu', 'Tianhua Xia', 'Zining Liu', 'Rahul Raman', 'Xingyu Liu', 'Bo Bao', 'Eric Sather', 'Vithursan Thangarasa', 'Sai Qian Zhang'] | ['cs.CL'] | Speculative decoding (SD) has emerged as a powerful method for accelerating
autoregressive generation in large language models (LLMs), yet its integration
into vision-language models (VLMs) remains underexplored. We introduce DREAM, a
novel speculative decoding framework tailored for VLMs that combines three key
innovations: (1) a cross-attention-based mechanism to inject intermediate
features from the target model into the draft model for improved alignment, (2)
adaptive intermediate feature selection based on attention entropy to guide
efficient draft model training, and (3) visual token compression to reduce
draft model latency. DREAM enables efficient, accurate, and parallel multimodal
decoding with significant throughput improvement. Experiments across a diverse
set of recent popular VLMs, including LLaVA, Pixtral, SmolVLM and Gemma3,
demonstrate up to 3.6x speedup over conventional decoding and significantly
outperform prior SD baselines in both inference throughput and speculative
draft acceptance length across a broad range of multimodal benchmarks. The code
is publicly available at: https://github.com/SAI-Lab-NYU/DREAM.git | 2025-05-25T15:56:50Z | null | null | null | null | null | null | null | null | null | null |
2,505.19225 | MedITok: A Unified Tokenizer for Medical Image Synthesis and
Interpretation | ['Chenglong Ma', 'Yuanfeng Ji', 'Jin Ye', 'Zilong Li', 'Chenhui Wang', 'Junzhi Ning', 'Wei Li', 'Lihao Liu', 'Qiushan Guo', 'Tianbin Li', 'Junjun He', 'Hongming Shan'] | ['eess.IV', 'cs.CV'] | Advanced autoregressive models have reshaped multimodal AI. However, their
transformative potential in medical imaging remains largely untapped due to the
absence of a unified visual tokenizer -- one capable of capturing fine-grained
visual structures for faithful image reconstruction and realistic image
synthesis, as well as rich semantics for accurate diagnosis and image
interpretation. To this end, we present MedITok, the first unified tokenizer
tailored for medical images, encoding both low-level structural details and
high-level clinical semantics within a unified latent space. To balance these
competing objectives, we introduce a novel two-stage training framework: a
visual representation alignment stage that cold-starts the tokenizer
reconstruction learning with a visual semantic constraint, followed by a
textual semantic representation alignment stage that infuses detailed clinical
semantics into the latent space. Trained on the meticulously collected
large-scale dataset with over 30 million medical images and 2 million
image-caption pairs, MedITok achieves state-of-the-art performance on more than
30 datasets across 9 imaging modalities and 4 different tasks. By providing a
unified token space for autoregressive modeling, MedITok supports a wide range
of tasks in clinical diagnostics and generative healthcare applications. Model
and code will be made publicly available at:
https://github.com/Masaaki-75/meditok. | 2025-05-25T16:39:35Z | null | null | null | MedITok: A Unified Tokenizer for Medical Image Synthesis and Interpretation | ['Chenglong Ma', 'Yuanfeng Ji', 'Jin Ye', 'Zilong Li', 'Chenhui Wang', 'Junzhi Ning', 'Wei Li', 'Lihao Liu', 'Qiushan Guo', 'Tian-Xin Li', 'Junjun He', 'Hongming Shan'] | 2,025 | arXiv.org | 0 | 0 | ['Computer Science', 'Engineering'] |
2,505.19274 | Conventional Contrastive Learning Often Falls Short: Improving Dense
Retrieval with Cross-Encoder Listwise Distillation and Synthetic Data | ['Manveer Singh Tamber', 'Suleman Kazi', 'Vivek Sourabh', 'Jimmy Lin'] | ['cs.IR'] | We investigate improving the retrieval effectiveness of embedding models
through the lens of corpus-specific fine-tuning. Prior work has shown that
fine-tuning with queries generated using a dataset's retrieval corpus can boost
retrieval effectiveness for the dataset. However, we find that, surprisingly,
fine-tuning using the conventional InfoNCE contrastive loss often reduces
effectiveness in state-of-the-art models. To overcome this, we revisit
cross-encoder listwise distillation and demonstrate that, unlike using
contrastive learning alone, listwise distillation can help more consistently
improve retrieval effectiveness across multiple datasets. Additionally, we show
that synthesizing more training data using diverse query types (such as claims,
keywords, and questions) yields greater effectiveness than using any single
query type alone, regardless of the query type used in evaluation. Our findings
further indicate that synthetic queries offer comparable utility to
human-written queries for training. We use our approach to train an embedding
model that achieves state-of-the-art effectiveness among BERT embedding models.
We release our model and both query generation and training code to facilitate
further research. | 2025-05-25T19:06:19Z | updated version of arxiv:2502.19712 | null | null | Conventional Contrastive Learning Often Falls Short: Improving Dense Retrieval with Cross-Encoder Listwise Distillation and Synthetic Data | ['M. Tamber', 'Suleman Kazi', 'Vivek Sourabh', 'Jimmy Lin'] | 2,025 | arXiv.org | 0 | 63 | ['Computer Science'] |
2,505.19314 | SoloSpeech: Enhancing Intelligibility and Quality in Target Speech
Extraction through a Cascaded Generative Pipeline | ['Helin Wang', 'Jiarui Hai', 'Dongchao Yang', 'Chen Chen', 'Kai Li', 'Junyi Peng', 'Thomas Thebaud', 'Laureano Moro Velazquez', 'Jesus Villalba', 'Najim Dehak'] | ['eess.AS', 'cs.AI', 'cs.SD'] | Target Speech Extraction (TSE) aims to isolate a target speaker's voice from
a mixture of multiple speakers by leveraging speaker-specific cues, typically
provided as auxiliary audio (a.k.a. cue audio). Although recent advancements in
TSE have primarily employed discriminative models that offer high perceptual
quality, these models often introduce unwanted artifacts, reduce naturalness,
and are sensitive to discrepancies between training and testing environments.
On the other hand, generative models for TSE lag in perceptual quality and
intelligibility. To address these challenges, we present SoloSpeech, a novel
cascaded generative pipeline that integrates compression, extraction,
reconstruction, and correction processes. SoloSpeech features a
speaker-embedding-free target extractor that utilizes conditional information
from the cue audio's latent space, aligning it with the mixture audio's latent
space to prevent mismatches. Evaluated on the widely-used Libri2Mix dataset,
SoloSpeech achieves the new state-of-the-art intelligibility and quality in
target speech extraction and speech separation tasks while demonstrating
exceptional generalization on out-of-domain data and real-world scenarios. | 2025-05-25T21:00:48Z | null | null | null | null | null | null | null | null | null | null |
2,505.19356 | Optimized Text Embedding Models and Benchmarks for Amharic Passage
Retrieval | ['Kidist Amde Mekonnen', 'Yosef Worku Alemneh', 'Maarten de Rijke'] | ['cs.IR', 'cs.AI', 'cs.CL', 'cs.LG', '68T50 (Primary), 68T05 (Secondary)', 'H.3.3; H.3.1; I.2.7'] | Neural retrieval methods using transformer-based pre-trained language models
have advanced multilingual and cross-lingual retrieval. However, their
effectiveness for low-resource, morphologically rich languages such as Amharic
remains underexplored due to data scarcity and suboptimal tokenization. We
address this gap by introducing Amharic-specific dense retrieval models based
on pre-trained Amharic BERT and RoBERTa backbones. Our proposed
RoBERTa-Base-Amharic-Embed model (110M parameters) achieves a 17.6% relative
improvement in MRR@10 and a 9.86% gain in Recall@10 over the strongest
multilingual baseline, Arctic Embed 2.0 (568M parameters). More compact
variants, such as RoBERTa-Medium-Amharic-Embed (42M), remain competitive while
being over 13x smaller. Additionally, we train a ColBERT-based late interaction
retrieval model that achieves the highest MRR@10 score (0.843) among all
evaluated models. We benchmark our proposed models against both sparse and
dense retrieval baselines to systematically assess retrieval effectiveness in
Amharic. Our analysis highlights key challenges in low-resource settings and
underscores the importance of language-specific adaptation. To foster future
research in low-resource IR, we publicly release our dataset, codebase, and
trained models at https://github.com/kidist-amde/amharic-ir-benchmarks. | 2025-05-25T23:06:20Z | 10 pages (excl. refs/appendix), 10 figures. Accepted to ACL 2025
Findings. Kidist and Yosef contributed equally to this work. Public
resources: https://github.com/kidist-amde/amharic-ir-benchmarks | null | null | Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval | ['Kidist Amde Mekonnen', 'Yosef Alemneh', 'M. D. Rijke'] | 2,025 | arXiv.org | 0 | 52 | ['Computer Science'] |
2,505.19536 | FlowCut: Rethinking Redundancy via Information Flow for Efficient
Vision-Language Models | ['Jintao Tong', 'Wenwei Jin', 'Pengda Qin', 'Anqi Li', 'Yixiong Zou', 'Yuhong Li', 'Yuhua Li', 'Ruixuan Li'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Large vision-language models (LVLMs) excel at multimodal understanding but
suffer from high computational costs due to redundant vision tokens. Existing
pruning methods typically rely on single-layer attention scores to rank and
prune redundant visual tokens to solve this inefficiency. However, as the
interaction between tokens and layers is complicated, this raises a basic
question: Is such a simple single-layer criterion sufficient to identify
redundancy? To answer this question, we rethink the emergence of redundant
visual tokens from a fundamental perspective: information flow, which models
the interaction between tokens and layers by capturing how information moves
between tokens across layers. We find (1) the CLS token acts as an information
relay, which can simplify the complicated flow analysis; (2) the redundancy
emerges progressively and dynamically via layer-wise attention concentration;
and (3) relying solely on attention scores from single layers can lead to
contradictory redundancy identification. Based on this, we propose FlowCut, an
information-flow-aware pruning framework, mitigating the insufficiency of the
current criterion for identifying redundant tokens and better aligning with the
model's inherent behaviors. Extensive experiments show that FlowCut achieves
superior results, outperforming SoTA by 1.6% on LLaVA-1.5-7B with 88.9% token
reduction, and by 4.3% on LLaVA-NeXT-7B with 94.4% reduction, delivering 3.2x
speed-up in the prefilling stage. Our code is available at
https://github.com/TungChintao/FlowCut | 2025-05-26T05:54:48Z | 19 pages, 11 figures | null | null | FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models | ['Jintao Tong', 'Wenwei Jin', 'Pengda Qin', 'Anqi Li', 'Yixiong Zou', 'Yuhong Li', 'Yuhua Li', 'Ruixuan Li'] | 2,025 | arXiv.org | 0 | 51 | ['Computer Science'] |
2,505.1959 | Learning to Reason without External Rewards | ['Xuandong Zhao', 'Zhewei Kang', 'Aosong Feng', 'Sergey Levine', 'Dawn Song'] | ['cs.LG', 'cs.CL'] | Training large language models (LLMs) for complex reasoning via Reinforcement
Learning with Verifiable Rewards (RLVR) is effective but limited by reliance on
costly, domain-specific supervision. We explore Reinforcement Learning from
Internal Feedback (RLIF), a framework that enables LLMs to learn from intrinsic
signals without external rewards or labeled data. We propose Intuitor, an RLIF
method that uses a model's own confidence, termed self-certainty, as its sole
reward signal. Intuitor replaces external rewards in Group Relative Policy
Optimization (GRPO) with self-certainty scores, enabling fully unsupervised
learning. Experiments demonstrate that Intuitor matches GRPO's performance on
mathematical benchmarks while achieving superior generalization to
out-of-domain tasks like code generation, without requiring gold solutions or
test cases. Our findings show that intrinsic model signals can drive effective
learning across domains, offering a scalable alternative to RLVR for autonomous
AI systems where verifiable rewards are unavailable. Code is available at
https://github.com/sunblaze-ucb/Intuitor | 2025-05-26T07:01:06Z | null | null | null | null | null | null | null | null | null | null |
2,505.19641 | SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning
Logical Reasoning and Beyond | ['Junteng Liu', 'Yuanxiang Fan', 'Zhuo Jiang', 'Han Ding', 'Yongyi Hu', 'Chi Zhang', 'Yiqi Shi', 'Shitong Weng', 'Aili Chen', 'Shiqi Chen', 'Yunan Huang', 'Mozhi Zhang', 'Pengyu Zhao', 'Junjie Yan', 'Junxian He'] | ['cs.AI', 'cs.CL'] | Recent advances such as OpenAI-o1 and DeepSeek R1 have demonstrated the
potential of Reinforcement Learning (RL) to enhance reasoning abilities in
Large Language Models (LLMs). While open-source replication efforts have
primarily focused on mathematical and coding domains, methods and resources for
developing general reasoning capabilities remain underexplored. This gap is
partly due to the challenge of collecting diverse and verifiable reasoning data
suitable for RL. We hypothesize that logical reasoning is critical for
developing general reasoning capabilities, as logic forms a fundamental
building block of reasoning. In this work, we present SynLogic, a data
synthesis framework and dataset that generates diverse logical reasoning data
at scale, encompassing 35 diverse logical reasoning tasks. The SynLogic
approach enables controlled synthesis of data with adjustable difficulty and
quantity. Importantly, all examples can be verified by simple rules, making
them ideally suited for RL with verifiable rewards. In our experiments, we
validate the effectiveness of RL training on the SynLogic dataset based on 7B
and 32B models. SynLogic leads to state-of-the-art logical reasoning
performance among open-source datasets, surpassing DeepSeek-R1-Distill-Qwen-32B
by 6 points on BBEH. Furthermore, mixing SynLogic data with mathematical and
coding tasks improves the training efficiency of these domains and
significantly enhances reasoning generalization. Notably, our mixed training
model outperforms DeepSeek-R1-Zero-Qwen-32B across multiple benchmarks. These
findings position SynLogic as a valuable resource for advancing the broader
reasoning capabilities of LLMs. We open-source both the data synthesis pipeline
and the SynLogic dataset at https://github.com/MiniMax-AI/SynLogic. | 2025-05-26T07:59:36Z | null | null | null | null | null | null | null | null | null | null |
2505.19650 | Modality Curation: Building Universal Embeddings for Advanced Multimodal
Information Retrieval | ['Fanheng Kong', 'Jingyuan Zhang', 'Yahui Liu', 'Hongzhi Zhang', 'Shi Feng', 'Xiaocui Yang', 'Daling Wang', 'Yu Tian', 'Victoria W.', 'Fuzheng Zhang', 'Guorui Zhou'] | ['cs.CV', 'cs.IR', 'cs.MM'] | Multimodal information retrieval (MIR) faces inherent challenges due to the
heterogeneity of data sources and the complexity of cross-modal alignment.
While previous studies have identified modal gaps in feature spaces, a
systematic approach to address these challenges remains unexplored. In this
work, we introduce UNITE, a universal framework that tackles these challenges
through two critical yet underexplored aspects: data curation and
modality-aware training configurations. Our work provides the first
comprehensive analysis of how modality-specific data properties influence
downstream task performance across diverse scenarios. Moreover, we propose
Modal-Aware Masked Contrastive Learning (MAMCL) to mitigate the competitive
relationships among the instances of different modalities. Our framework
achieves state-of-the-art results on multiple multimodal retrieval benchmarks,
outperforming existing methods by notable margins. Through extensive
experiments, we demonstrate that strategic modality curation and tailored
training protocols are pivotal for robust cross-modal representation learning.
This work not only advances MIR performance but also provides a foundational
blueprint for future research in multimodal systems. Our project is available
at https://friedrichor.github.io/projects/UNITE. | 2025-05-26T08:09:44Z | 26 pages, project page: https://friedrichor.github.io/projects/UNITE | null | null | null | null | null | null | null | null | null |
2505.19706 | Error Typing for Smarter Rewards: Improving Process Reward Models with
Error-Aware Hierarchical Supervision | ['Tej Deep Pala', 'Panshul Sharma', 'Amir Zadeh', 'Chuan Li', 'Soujanya Poria'] | ['cs.CL', 'cs.AI'] | Large Language Models (LLMs) are prone to hallucination, especially during
multi-hop and reasoning-intensive tasks such as mathematical problem solving.
While Outcome Reward Models verify only final answers, Process Reward Models
(PRMs) score each intermediate step to steer generation toward coherent
solutions. We introduce PathFinder-PRM, a novel hierarchical, error-aware
discriminative PRM that first classifies math and consistency errors at each
step, then combines these fine-grained signals to estimate step correctness. To
train PathFinder-PRM, we construct a 400K-sample dataset by enriching the
human-annotated PRM800K corpus and RLHFlow Mistral traces with
three-dimensional step-level labels. On PRMBench, PathFinder-PRM achieves a new
state-of-the-art PRMScore of 67.7, outperforming the prior best (65.5) while
using 3 times less data. When applied to reward guided greedy search, our model
yields prm@8 48.3, a +1.5 point gain over the strongest baseline. These results
demonstrate that decoupled error detection and reward estimation not only boost
fine-grained error detection but also substantially improve end-to-end,
reward-guided mathematical reasoning with greater data efficiency. | 2025-05-26T08:56:36Z | https://github.com/declare-lab/PathFinder-PRM | null | null | Error Typing for Smarter Rewards: Improving Process Reward Models with Error-Aware Hierarchical Supervision | ['Tej Deep Pala', 'Panshul Sharma', 'Amir Zadeh', 'Chuan Li', 'Soujanya Poria'] | 2025 | arXiv.org | 0 | 20 | ['Computer Science'] |
2505.19743 | Token-level Accept or Reject: A Micro Alignment Approach for Large
Language Models | ['Yang Zhang', 'Yu Yu', 'Bo Tang', 'Yu Zhu', 'Chuxiong Sun', 'Wenqiang Wei', 'Jie Hu', 'Zipeng Xie', 'Zhiyu Li', 'Feiyu Xiong', 'Edward Chung'] | ['cs.CL', 'cs.LG'] | With the rapid development of Large Language Models (LLMs), aligning these
models with human preferences and values is critical to ensuring ethical and
safe applications. However, existing alignment techniques such as RLHF or DPO
often require direct fine-tuning on LLMs with billions of parameters, resulting
in substantial computational costs and inefficiencies. To address this, we
propose Micro token-level Accept-Reject Aligning (MARA) approach designed to
operate independently of the language models. MARA simplifies the alignment
process by decomposing sentence-level preference learning into token-level
binary classification, where a compact three-layer fully-connected network
determines whether candidate tokens are "Accepted" or "Rejected" as part of the
response. Extensive experiments across seven different LLMs and three
open-source datasets show that MARA achieves significant improvements in
alignment performance while reducing computational costs. The source code and
implementation details are publicly available at
https://github.com/IAAR-Shanghai/MARA, and the trained models are released at
https://huggingface.co/IAAR-Shanghai/MARA_AGENTS. | 2025-05-26T09:24:36Z | Accepted to 34th International Joint Conference on Artificial
Intelligence (IJCAI 2025) | null | null | null | null | null | null | null | null | null |
2505.19789 | What Can RL Bring to VLA Generalization? An Empirical Study | ['Jijia Liu', 'Feng Gao', 'Bingwen Wei', 'Xinlei Chen', 'Qingmin Liao', 'Yi Wu', 'Chao Yu', 'Yu Wang'] | ['cs.LG'] | Large Vision-Language Action (VLA) models have shown significant potential
for embodied AI. However, their predominant training via supervised fine-tuning
(SFT) limits generalization due to susceptibility to compounding errors under
distribution shifts. Reinforcement learning (RL) offers a path to overcome
these limitations by optimizing for task objectives via trial-and-error, yet a
systematic understanding of its specific generalization benefits for VLAs
compared to SFT is lacking. To address this, our study introduces a
comprehensive benchmark for evaluating VLA generalization and systematically
investigates the impact of RL fine-tuning across diverse visual, semantic, and
execution dimensions. Our extensive experiments reveal that RL fine-tuning,
particularly with PPO, significantly enhances generalization in semantic
understanding and execution robustness over SFT, while maintaining comparable
visual robustness. We identify PPO as a more effective RL algorithm for VLAs
than LLM-derived methods like DPO and GRPO. We also develop a simple recipe for
efficient PPO training on VLAs, and demonstrate its practical utility for
improving VLA generalization. The project page is at https://rlvla.github.io | 2025-05-26T10:19:26Z | null | null | null | What Can RL Bring to VLA Generalization? An Empirical Study | ['Jijia Liu', 'Feng Gao', 'Bingwen Wei', 'Xinlei Chen', 'Qingmin Liao', 'Yi Wu', 'Chaoyang Yu', 'Yu Wang'] | 2025 | arXiv.org | 0 | 75 | ['Computer Science'] |
2505.19819 | FinLoRA: Benchmarking LoRA Methods for Fine-Tuning LLMs on Financial
Datasets | ['Dannong Wang', 'Jaisal Patel', 'Daochen Zha', 'Steve Y. Yang', 'Xiao-Yang Liu'] | ['cs.CE', 'cs.AI'] | Low-rank adaptation (LoRA) methods show great potential for scaling
pre-trained general-purpose Large Language Models (LLMs) to hundreds or
thousands of use scenarios. However, their efficacy in high-stakes domains like
finance is rarely explored, e.g., passing CFA exams and analyzing SEC filings.
In this paper, we present the open-source FinLoRA project that benchmarks LoRA
methods on both general and highly professional financial tasks. First, we
curated 19 datasets covering diverse financial applications; in particular, we
created four novel XBRL analysis datasets based on 150 SEC filings. Second, we
evaluated five LoRA methods and five base LLMs. Finally, we provide extensive
experimental results in terms of accuracy, F1, and BERTScore and report
computational cost in terms of time and GPU memory during fine-tuning and
inference stages. We find that LoRA methods achieved substantial performance
gains of 36% on average over base models. Our FinLoRA project provides an
affordable and scalable approach to democratize financial intelligence to the
general public. Datasets, LoRA adapters, code, and documentation are available
at https://github.com/Open-Finance-Lab/FinLoRA | 2025-05-26T10:58:51Z | null | null | null | null | null | null | null | null | null | null |
2505.19840 | One Surrogate to Fool Them All: Universal, Transferable, and Targeted
Adversarial Attacks with CLIP | ['Binyan Xu', 'Xilin Dai', 'Di Tang', 'Kehuan Zhang'] | ['cs.CR', 'cs.LG', '68T07', 'I.2.6'] | Deep Neural Networks (DNNs) have achieved widespread success yet remain prone
to adversarial attacks. Typically, such attacks either involve frequent queries
to the target model or rely on surrogate models closely mirroring the target
model -- often trained with subsets of the target model's training data -- to
achieve high attack success rates through transferability. However, in
realistic scenarios where training data is inaccessible and excessive queries
can raise alarms, crafting adversarial examples becomes more challenging. In
this paper, we present UnivIntruder, a novel attack framework that relies
solely on a single, publicly available CLIP model and publicly available
datasets. By using textual concepts, UnivIntruder generates universal,
transferable, and targeted adversarial perturbations that mislead DNNs into
misclassifying inputs into adversary-specified classes defined by textual
concepts.
Our extensive experiments show that our approach achieves an Attack Success
Rate (ASR) of up to 85% on ImageNet and over 99% on CIFAR-10, significantly
outperforming existing transfer-based methods. Additionally, we reveal
real-world vulnerabilities, showing that even without querying target models,
UnivIntruder compromises image search engines like Google and Baidu with ASR
rates up to 84%, and vision language models like GPT-4 and Claude-3.5 with ASR
rates up to 80%. These findings underscore the practicality of our attack in
scenarios where traditional avenues are blocked, highlighting the need to
reevaluate security paradigms in AI applications. | 2025-05-26T11:25:00Z | 21 pages, 15 figures, 18 tables. To appear in the Proceedings of The
ACM Conference on Computer and Communications Security (CCS), 2025 | null | null | One Surrogate to Fool Them All: Universal, Transferable, and Targeted Adversarial Attacks with CLIP | ['Binyan Xu', 'Xilin Dai', 'Di Tang', 'Kehuan Zhang'] | 2025 | arXiv.org | 0 | 68 | ['Computer Science'] |
2505.19897 | ScienceBoard: Evaluating Multimodal Autonomous Agents in Realistic
Scientific Workflows | ['Qiushi Sun', 'Zhoumianze Liu', 'Chang Ma', 'Zichen Ding', 'Fangzhi Xu', 'Zhangyue Yin', 'Haiteng Zhao', 'Zhenyu Wu', 'Kanzhi Cheng', 'Zhaoyang Liu', 'Jianing Wang', 'Qintong Li', 'Xiangru Tang', 'Tianbao Xie', 'Xiachong Feng', 'Xiang Li', 'Ben Kao', 'Wenhai Wang', 'Biqing Qi', 'Lingpeng Kong', 'Zhiyong Wu'] | ['cs.AI', 'cs.CL', 'cs.CV', 'cs.HC'] | Large Language Models (LLMs) have extended their impact beyond Natural
Language Processing, substantially fostering the development of
interdisciplinary research. Recently, various LLM-based agents have been
developed to assist scientific discovery progress across multiple aspects and
domains. Among these, computer-using agents, capable of interacting with
operating systems as humans do, are paving the way to automated scientific
problem-solving and addressing routines in researchers' workflows. Recognizing
the transformative potential of these agents, we introduce ScienceBoard, which
encompasses two complementary contributions: (i) a realistic, multi-domain
environment featuring dynamic and visually rich scientific workflows with
integrated professional software, where agents can autonomously interact via
different interfaces to accelerate complex research tasks and experiments; and
(ii) a challenging benchmark of 169 high-quality, rigorously validated
real-world tasks curated by humans, spanning scientific-discovery workflows in
domains such as biochemistry, astronomy, and geoinformatics. Extensive
evaluations of agents with state-of-the-art backbones (e.g., GPT-4o, Claude
3.7, UI-TARS) show that, despite some promising results, they still fall short
of reliably assisting scientists in complex workflows, achieving only a 15%
overall success rate. In-depth analysis further provides valuable insights for
addressing current agent limitations and more effective design principles,
paving the way to build more capable agents for scientific discovery. Our code,
environment, and benchmark are at
https://qiushisun.github.io/ScienceBoard-Home/. | 2025-05-26T12:27:27Z | work in progress | null | null | null | null | null | null | null | null | null |
2505.19954 | An Explainable Diagnostic Framework for Neurodegenerative Dementias via
Reinforcement-Optimized LLM Reasoning | ['Andrew Zamai', 'Nathanael Fijalkow', 'Boris Mansencal', 'Laurent Simon', 'Eloi Navet', 'Pierrick Coupe'] | ['cs.LG', 'cs.CL'] | The differential diagnosis of neurodegenerative dementias is a challenging
clinical task, mainly because of the overlap in symptom presentation and the
similarity of patterns observed in structural neuroimaging. To improve
diagnostic efficiency and accuracy, deep learning-based methods such as
Convolutional Neural Networks and Vision Transformers have been proposed for
the automatic classification of brain MRIs. However, despite their strong
predictive performance, these models find limited clinical utility due to their
opaque decision making. In this work, we propose a framework that integrates
two core components to enhance diagnostic transparency. First, we introduce a
modular pipeline for converting 3D T1-weighted brain MRIs into textual
radiology reports. Second, we explore the potential of modern Large Language
Models (LLMs) to assist clinicians in the differential diagnosis between
Frontotemporal dementia subtypes, Alzheimer's disease, and normal aging based
on the generated reports. To bridge the gap between predictive accuracy and
explainability, we employ reinforcement learning to incentivize diagnostic
reasoning in LLMs. Without requiring supervised reasoning traces or
distillation from larger models, our approach enables the emergence of
structured diagnostic rationales grounded in neuroimaging findings. Unlike
post-hoc explainability methods that retrospectively justify model decisions,
our framework generates diagnostic rationales as part of the inference
process-producing causally grounded explanations that inform and guide the
model's decision-making process. In doing so, our framework matches the
diagnostic performance of existing deep learning methods while offering
rationales that support its diagnostic conclusions. | 2025-05-26T13:18:32Z | null | null | null | null | null | null | null | null | null | null |
2505.20046 | REARANK: Reasoning Re-ranking Agent via Reinforcement Learning | ['Le Zhang', 'Bo Wang', 'Xipeng Qiu', 'Siva Reddy', 'Aishwarya Agrawal'] | ['cs.IR', 'cs.CL'] | We present REARANK, a large language model (LLM)-based listwise reasoning
reranking agent. REARANK explicitly reasons before reranking, significantly
improving both performance and interpretability. Leveraging reinforcement
learning and data augmentation, REARANK achieves substantial improvements over
baseline models across popular information retrieval benchmarks, notably
requiring only 179 annotated samples. Built on top of Qwen2.5-7B, our
REARANK-7B demonstrates performance comparable to GPT-4 on both in-domain and
out-of-domain benchmarks and even surpasses GPT-4 on reasoning-intensive BRIGHT
benchmarks. These results underscore the effectiveness of our approach and
highlight how reinforcement learning can enhance LLM reasoning capabilities in
reranking. | 2025-05-26T14:31:48Z | null | null | REARANK: Reasoning Re-ranking Agent via Reinforcement Learning | ['Le Zhang', 'Bo Wang', 'Xipeng Qiu', 'Siva Reddy', 'Aishwarya Agrawal'] | 2025 | arXiv.org | 0 | 31 | ['Computer Science'] |
2505.20052 | Ankh3: Multi-Task Pretraining with Sequence Denoising and Completion
Enhances Protein Representations | ['Hazem Alsamkary', 'Mohamed Elshaffei', 'Mohamed Elkerdawy', 'Ahmed Elnaggar'] | ['cs.LG', 'q-bio.QM'] | Protein language models (PLMs) have emerged as powerful tools to detect
complex patterns of protein sequences. However, the capability of PLMs to fully
capture information on protein sequences might be limited by focusing on single
pre-training tasks. Although adding data modalities or supervised objectives
can improve the performance of PLMs, pre-training often remains focused on
denoising corrupted sequences. To push the boundaries of PLMs, our research
investigated a multi-task pre-training strategy. We developed Ankh3, a model
jointly optimized on two objectives: masked language modeling with multiple
masking probabilities and protein sequence completion relying only on protein
sequences as input. This multi-task pre-training demonstrated that PLMs can
learn richer and more generalizable representations solely from protein
sequences. The results demonstrated improved performance in downstream tasks,
such as secondary structure prediction, fluorescence, GB1 fitness, and contact
prediction. The integration of multiple tasks gave the model a more
comprehensive understanding of protein properties, leading to more robust and
accurate predictions. | 2025-05-26T14:41:10Z | 8 pages, 0 figures | null | null | Ankh3: Multi-Task Pretraining with Sequence Denoising and Completion Enhances Protein Representations | ['Hazem Alsamkary', 'Mohamed Elshaffei', 'Mohamed Elkerdawy', 'Ahmed Elnaggar'] | 2025 | arXiv.org | 0 | 21 | ['Computer Science', 'Biology'] |
2505.20156 | HunyuanVideo-Avatar: High-Fidelity Audio-Driven Human Animation for
Multiple Characters | ['Yi Chen', 'Sen Liang', 'Zixiang Zhou', 'Ziyao Huang', 'Yifeng Ma', 'Junshu Tang', 'Qin Lin', 'Yuan Zhou', 'Qinglin Lu'] | ['cs.CV'] | Recent years have witnessed significant progress in audio-driven human
animation. However, critical challenges remain in (i) generating highly dynamic
videos while preserving character consistency, (ii) achieving precise emotion
alignment between characters and audio, and (iii) enabling multi-character
audio-driven animation. To address these challenges, we propose
HunyuanVideo-Avatar, a multimodal diffusion transformer (MM-DiT)-based model
capable of simultaneously generating dynamic, emotion-controllable, and
multi-character dialogue videos. Concretely, HunyuanVideo-Avatar introduces
three key innovations: (i) A character image injection module is designed to
replace the conventional addition-based character conditioning scheme,
eliminating the inherent condition mismatch between training and inference.
This ensures the dynamic motion and strong character consistency; (ii) An Audio
Emotion Module (AEM) is introduced to extract and transfer the emotional cues
from an emotion reference image to the target generated video, enabling
fine-grained and accurate emotion style control; (iii) A Face-Aware Audio
Adapter (FAA) is proposed to isolate the audio-driven character with
latent-level face mask, enabling independent audio injection via
cross-attention for multi-character scenarios. These innovations empower
HunyuanVideo-Avatar to surpass state-of-the-art methods on benchmark datasets
and a newly proposed wild dataset, generating realistic avatars in dynamic,
immersive scenarios. | 2025-05-26T15:57:27Z | null | null | null | null | null | null | null | null | null | null |