arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2506.00200 | Structuring Radiology Reports: Challenging LLMs with Lightweight Models | ['Johannes Moll', 'Louisa Fay', 'Asfandyar Azhar', 'Sophie Ostmeier', 'Tim Lueth', 'Sergios Gatidis', 'Curtis Langlotz', 'Jean-Benoit Delbrouck'] | ['cs.CL', 'cs.LG'] | Radiology reports are critical for clinical decision-making but often lack a
standardized format, limiting both human interpretability and machine learning
(ML) applications. While large language models (LLMs) have shown strong
capabilities in reformatting clinical text, their high computational
requirements, lack of transparency, and data privacy concerns hinder practical
deployment. To address these challenges, we explore lightweight encoder-decoder
models (<300M parameters) -- specifically T5 and BERT2BERT -- for structuring
radiology reports from the MIMIC-CXR and CheXpert Plus datasets. We benchmark
these models against eight open-source LLMs (1B-70B), adapted using prefix
prompting, in-context learning (ICL), and low-rank adaptation (LoRA)
finetuning. Our best-performing lightweight model outperforms all LLMs adapted
using prompt-based techniques on a human-annotated test set. While some
LoRA-finetuned LLMs achieve modest gains over the lightweight model on the
Findings section (BLEU 6.4%, ROUGE-L 4.8%, BERTScore 3.6%, F1-RadGraph 1.1%,
GREEN 3.6%, and F1-SRR-BERT 4.3%), these improvements come at the cost of
substantially greater computational resources. For example, LLaMA-3-70B
incurred more than 400 times the inference time, cost, and carbon emissions
compared to the lightweight model. These results underscore the potential of
lightweight, task-specific models as sustainable and privacy-preserving
solutions for structuring clinical text in resource-constrained healthcare
settings. | 2025-05-30T20:12:51Z | null | null | null | null | null | null | null | null | null | null |
2506.00227 | Ctrl-Crash: Controllable Diffusion for Realistic Car Crashes | ['Anthony Gosselin', 'Ge Ya Luo', 'Luis Lara', 'Florian Golemo', 'Derek Nowrouzezahrai', 'Liam Paull', 'Alexia Jolicoeur-Martineau', 'Christopher Pal'] | ['cs.CV', 'cs.AI', 'cs.RO'] | Video diffusion techniques have advanced significantly in recent years;
however, they struggle to generate realistic imagery of car crashes due to the
scarcity of accident events in most driving datasets. Improving traffic safety
requires realistic and controllable accident simulations. To tackle the
problem, we propose Ctrl-Crash, a controllable car crash video generation model
that conditions on signals such as bounding boxes, crash types, and an initial
image frame. Our approach enables counterfactual scenario generation where
minor variations in input can lead to dramatically different crash outcomes. To
support fine-grained control at inference time, we leverage classifier-free
guidance with independently tunable scales for each conditioning signal.
Ctrl-Crash achieves state-of-the-art performance across quantitative video
quality metrics (e.g., FVD and JEDi) and qualitative measurements based on a
human evaluation of physical realism and video quality compared to prior
diffusion-based methods. | 2025-05-30T21:04:38Z | Under review | null | null | null | null | null | null | null | null | null |
2506.00288 | Emergent Abilities of Large Language Models under Continued Pretraining
for Language Adaptation | ['Ahmed Elhady', 'Eneko Agirre', 'Mikel Artetxe'] | ['cs.CL', 'cs.AI'] | Continued pretraining (CPT) is a popular approach to adapt existing large
language models (LLMs) to new languages. When doing so, it is common practice
to include a portion of English data in the mixture, but its role has not been
carefully studied to date. In this work, we show that including English does
not impact validation perplexity, yet it is critical for the emergence of
downstream capabilities in the target language. We introduce a
language-agnostic benchmark for in-context learning (ICL), which reveals
catastrophic forgetting early in CPT when English is not included. This in turn
damages the ability of the model to generalize to downstream prompts in the
target language as measured by perplexity, even if it does not manifest in
terms of accuracy until later in training, and can be tied to a big shift in
the model parameters. Based on these insights, we introduce curriculum learning
and exponential moving average (EMA) of weights as effective alternatives to
mitigate the need for English. All in all, our work sheds light on the
dynamics by which emergent abilities arise when doing CPT for language
adaptation, and can serve as a foundation to design more effective methods in
the future. | 2025-05-30T22:31:59Z | To appear in ACL 2025 Main | null | null | Emergent Abilities of Large Language Models under Continued Pretraining for Language Adaptation | ['Ahmed Elhady', 'Eneko Agirre', 'Mikel Artetxe'] | 2025 | arXiv.org | 0 | 39 | ['Computer Science'] |
2506.00338 | OWSM v4: Improving Open Whisper-Style Speech Models via Data Scaling and
Cleaning | ['Yifan Peng', 'Shakeel Muhammad', 'Yui Sudo', 'William Chen', 'Jinchuan Tian', 'Chyi-Jiunn Lin', 'Shinji Watanabe'] | ['cs.CL', 'cs.SD', 'eess.AS'] | The Open Whisper-style Speech Models (OWSM) project has developed a series of
fully open speech foundation models using academic-scale resources, but their
training data remains insufficient. This work enhances OWSM by integrating
YODAS, a large-scale web-crawled dataset with a Creative Commons license.
However, incorporating YODAS is nontrivial due to its wild nature, which
introduces challenges such as incorrect language labels and audio-text
misalignments. To address this, we develop a scalable data-cleaning pipeline
using public toolkits, yielding a dataset with 166,000 hours of speech across
75 languages. Our new series of OWSM v4 models, trained on this curated dataset
alongside existing OWSM data, significantly outperform previous versions on
multilingual benchmarks. Our models even match or surpass frontier industrial
models like Whisper and MMS in multiple scenarios. We will publicly release the
cleaned YODAS data, pre-trained models, and all associated scripts via the
ESPnet toolkit. | 2025-05-31T01:44:44Z | Accepted at INTERSPEECH 2025 | null | null | null | null | null | null | null | null | null |
2506.00385 | MagiCodec: Simple Masked Gaussian-Injected Codec for High-Fidelity
Reconstruction and Generation | ['Yakun Song', 'Jiawei Chen', 'Xiaobin Zhuang', 'Chenpeng Du', 'Ziyang Ma', 'Jian Wu', 'Jian Cong', 'Dongya Jia', 'Zhuo Chen', 'Yuping Wang', 'Yuxuan Wang', 'Xie Chen'] | ['cs.SD', 'cs.AI', 'cs.LG', 'eess.AS'] | Neural audio codecs have made significant strides in efficiently mapping raw
audio waveforms into discrete token representations, which are foundational for
contemporary audio generative models. However, most existing codecs are
optimized primarily for reconstruction quality, often at the expense of the
downstream modelability of the encoded tokens. Motivated by the need to
overcome this bottleneck, we introduce $\textbf{MagiCodec}$, a novel
single-layer, streaming Transformer-based audio codec. MagiCodec is designed
with a multistage training pipeline that incorporates Gaussian noise injection
and latent regularization, explicitly targeting the enhancement of semantic
expressiveness in the generated codes while preserving high reconstruction
fidelity. We analytically derive the effect of noise injection in the frequency
domain, demonstrating its efficacy in attenuating high-frequency components and
fostering robust tokenization. Extensive experimental evaluations show that
MagiCodec surpasses state-of-the-art codecs in both reconstruction quality and
downstream tasks. Notably, the tokens produced by MagiCodec exhibit Zipf-like
distributions, as observed in natural languages, thereby improving
compatibility with language-model-based generative architectures. The code and
pre-trained models are available at https://github.com/Ereboas/MagiCodec. | 2025-05-31T04:31:02Z | 18 pages, 3 figures. The code and pre-trained models are available at
https://github.com/Ereboas/MagiCodec | null | null | null | null | null | null | null | null | null |
2506.00391 | SHARE: An SLM-based Hierarchical Action CorREction Assistant for
Text-to-SQL | ['Ge Qu', 'Jinyang Li', 'Bowen Qin', 'Xiaolong Li', 'Nan Huo', 'Chenhao Ma', 'Reynold Cheng'] | ['cs.CL'] | Current self-correction approaches in text-to-SQL face two critical
limitations: 1) Conventional self-correction methods rely on recursive
self-calls of LLMs, resulting in multiplicative computational overhead, and 2)
LLMs struggle to implement effective error detection and correction for
declarative SQL queries, as they fail to demonstrate the underlying reasoning
path. In this work, we propose SHARE, an SLM-based Hierarchical Action
corREction assistant that enables LLMs to perform more precise error
localization and efficient correction. SHARE orchestrates three specialized
Small Language Models (SLMs) in a sequential pipeline, where it first
transforms declarative SQL queries into stepwise action trajectories that
reveal underlying reasoning, followed by a two-phase granular refinement. We
further propose a novel hierarchical self-evolution strategy for data-efficient
training. Experimental results demonstrate that SHARE effectively enhances
self-correction capabilities while proving robust across various LLMs.
Furthermore, our comprehensive analysis shows that SHARE maintains strong
performance even in low-resource training settings, which is particularly
valuable for text-to-SQL applications with data privacy constraints. | 2025-05-31T04:51:12Z | Accepted to ACL 2025 Main | null | null | SHARE: An SLM-based Hierarchical Action CorREction Assistant for Text-to-SQL | ['Ge Qu', 'Jinyang Li', 'Bowen Qin', 'Xiaolong Li', 'Nan Huo', 'Chenhao Ma', 'Reynold Cheng'] | 2025 | arXiv.org | 0 | 49 | ['Computer Science'] |
2506.00421 | Enabling Chatbots with Eyes and Ears: An Immersive Multimodal
Conversation System for Dynamic Interactions | ['Jihyoung Jang', 'Minwook Bae', 'Minji Kim', 'Dilek Hakkani-Tur', 'Hyounghun Kim'] | ['cs.CL', 'cs.AI', 'cs.CV'] | As chatbots continue to evolve toward human-like, real-world interactions,
multimodality remains an active area of research and exploration. So far,
efforts to integrate multimodality into chatbots have primarily focused on
image-centric tasks, such as visual dialogue and image-based instructions,
placing emphasis on the "eyes" of human perception while neglecting the "ears",
namely auditory aspects. Moreover, these studies often center around static
interactions that focus on discussing the modality rather than naturally
incorporating it into the conversation, which limits the richness of
simultaneous, dynamic engagement. Furthermore, while multimodality has been
explored in multi-party and multi-session conversations, task-specific
constraints have hindered its seamless integration into dynamic, natural
conversations. To address these challenges, this study aims to equip chatbots
with "eyes and ears" capable of more immersive interactions with humans. As
part of this effort, we introduce a new multimodal conversation dataset,
Multimodal Multi-Session Multi-Party Conversation ($M^3C$), and propose a novel
multimodal conversation model featuring multimodal memory retrieval. Our model,
trained on the $M^3C$, demonstrates the ability to seamlessly engage in
long-term conversations with multiple speakers in complex, real-world-like
settings, effectively processing visual and auditory inputs to understand and
respond appropriately. Human evaluations highlight the model's strong
performance in maintaining coherent and dynamic interactions, demonstrating its
potential for advanced multimodal conversational agents. | 2025-05-31T06:50:51Z | ACL 2025 (32 pages); Project website: https://m3c-dataset.github.io/ | null | null | null | null | null | null | null | null | null |
2506.00469 | Massively Multilingual Adaptation of Large Language Models Using
Bilingual Translation Data | ['Shaoxiong Ji', 'Zihao Li', 'Jaakko Paavola', 'Indraneil Paul', 'Hengyu Luo', 'Jörg Tiedemann'] | ['cs.CL'] | This paper investigates a critical design decision in the practice of
massively multilingual continual pre-training -- the inclusion of parallel
data. Specifically, we study the impact of bilingual translation data for
massively multilingual language adaptation of the Llama3 family of models to
500 languages. To this end, we construct the MaLA bilingual translation corpus,
containing data from more than 2,500 language pairs. Subsequently, we develop
the EMMA-500 Llama 3 suite of four massively multilingual models -- continually
pre-trained from the Llama 3 family of base models extensively on diverse data
mixes up to 671B tokens -- and explore the effect of continual pre-training
with or without bilingual translation data. Comprehensive evaluation across 7
tasks and 12 benchmarks demonstrates that bilingual data tends to enhance
language transfer and performance, particularly for low-resource languages. We
open-source the MaLA corpus, EMMA-500 Llama 3 suite artefacts, code, and model
generations. | 2025-05-31T08:37:17Z | EMMA-500 Gen 2; refer to Gen 1 in arXiv:2409.17892 | null | null | null | null | null | null | null | null | null |
2506.00649 | GuideX: Guided Synthetic Data Generation for Zero-Shot Information
Extraction | ['Neil De La Fuente', 'Oscar Sainz', 'Iker García-Ferrero', 'Eneko Agirre'] | ['cs.CL'] | Information Extraction (IE) systems are traditionally domain-specific,
requiring costly adaptation that involves expert schema design, data
annotation, and model training. While Large Language Models have shown promise
in zero-shot IE, performance degrades significantly in unseen domains where
label definitions differ. This paper introduces GUIDEX, a novel method that
automatically defines domain-specific schemas, infers guidelines, and generates
synthetically labeled instances, allowing for better out-of-domain
generalization. Fine-tuning Llama 3.1 with GUIDEX sets a new state-of-the-art
across seven zero-shot Named Entity Recognition benchmarks. Models trained with
GUIDEX gain up to 7 F1 points over previous methods without human-labeled data,
and nearly 2 F1 points higher when combined with it. Models trained on GUIDEX
demonstrate enhanced comprehension of complex, domain-specific annotation
schemas. Code, models, and synthetic datasets are available at
neilus03.github.io/guidex.com | 2025-05-31T17:36:18Z | ACL Findings 2025 | null | null | GuideX: Guided Synthetic Data Generation for Zero-Shot Information Extraction | ['Neil De La Fuente', 'Oscar Sainz', 'Iker García-Ferrero', 'Eneko Agirre'] | 2025 | arXiv.org | 0 | 52 | ['Computer Science'] |
2506.00679 | CineMA: A Foundation Model for Cine Cardiac MRI | ['Yunguan Fu', 'Weixi Yi', 'Charlotte Manisty', 'Anish N Bhuva', 'Thomas A Treibel', 'James C Moon', 'Matthew J Clarkson', 'Rhodri Huw Davies', 'Yipeng Hu'] | ['eess.IV', 'cs.AI', 'cs.CV'] | Cardiac magnetic resonance (CMR) is a key investigation in clinical
cardiovascular medicine and has been used extensively in population research.
However, extracting clinically important measurements such as ejection fraction
for diagnosing cardiovascular diseases remains time-consuming and subjective.
We developed CineMA, a foundation AI model automating these tasks with limited
labels. CineMA is a self-supervised autoencoder model trained on 74,916 cine
CMR studies to reconstruct images from masked inputs. After fine-tuning, it was
evaluated across eight datasets on 23 tasks from four categories: ventricle and
myocardium segmentation, left and right ventricle ejection fraction
calculation, disease detection and classification, and landmark localisation.
CineMA is the first foundation model for cine CMR to match or outperform
convolutional neural networks (CNNs). CineMA demonstrated greater label
efficiency than CNNs, achieving comparable or better performance with fewer
annotations. This reduces the burden of clinician labelling and supports
replacing task-specific training with fine-tuning foundation models in future
cardiac imaging applications. Models and code for pre-training and fine-tuning
are available at https://github.com/mathpluscode/CineMA, democratising access
to high-performance models that otherwise require substantial computational
resources, promoting reproducibility and accelerating clinical translation. | 2025-05-31T19:12:34Z | null | null | null | CineMA: A Foundation Model for Cine Cardiac MRI | ['Yunguan Fu', 'Weixi Yi', 'Charlotte Manisty', 'A. Bhuva', 'Thomas A. Treibel', 'James C. Moon', 'Matthew J. Clarkson', 'R. Davies', 'Yipeng Hu'] | 2025 | arXiv.org | 0 | 29 | ['Computer Science'] |
2506.00711 | QoQ-Med: Building Multimodal Clinical Foundation Models with
Domain-Aware GRPO Training | ['Wei Dai', 'Peilin Chen', 'Chanakya Ekbote', 'Paul Pu Liang'] | ['cs.LG', 'cs.AI', 'cs.CV'] | Clinical decision-making routinely demands reasoning over heterogeneous data,
yet existing multimodal language models (MLLMs) remain largely vision-centric
and fail to generalize across clinical specialties. To bridge this gap, we
introduce QoQ-Med-7B/32B, the first open generalist clinical foundation model
that jointly reasons across medical images, time-series signals, and text
reports. QoQ-Med is trained with Domain-aware Relative Policy Optimization
(DRPO), a novel reinforcement-learning objective that hierarchically scales
normalized rewards according to domain rarity and modality difficulty,
mitigating performance imbalance caused by skewed clinical data distributions.
Trained on 2.61 million instruction tuning pairs spanning 9 clinical domains,
we show that DRPO training boosts diagnostic performance by 43% in macro-F1 on
average across all visual domains as compared to other critic-free training
methods like GRPO. Furthermore, with QoQ-Med trained on intensive segmentation
data, it is able to highlight salient regions related to the diagnosis, with an
IoU 10x higher than open models while reaching the performance of OpenAI
o4-mini. To foster reproducibility and downstream research, we release (i) the
full model weights, (ii) the modular training pipeline, and (iii) all
intermediate reasoning traces at https://github.com/DDVD233/QoQ_Med. | 2025-05-31T21:02:52Z | null | null | null | null | null | null | null | null | null | null |
2506.00782 | Jailbreak-R1: Exploring the Jailbreak Capabilities of LLMs via
Reinforcement Learning | ['Weiyang Guo', 'Zesheng Shi', 'Zhuo Li', 'Yequan Wang', 'Xuebo Liu', 'Wenya Wang', 'Fangming Liu', 'Min Zhang', 'Jing Li'] | ['cs.AI'] | As large language models (LLMs) grow in power and influence, ensuring their
safety and preventing harmful output becomes critical. Automated red teaming
serves as a tool to detect security vulnerabilities in LLMs without manual
labor. However, most existing methods struggle to balance the effectiveness and
diversity of red-team generated attack prompts. To address this challenge, we
propose Jailbreak-R1, a novel automated red teaming training framework that
utilizes reinforcement learning to explore and generate more effective attack
prompts while balancing their diversity. Specifically, it consists of three
training stages: (1) Cold Start: The red team model is supervised and
fine-tuned on a jailbreak dataset obtained through imitation learning. (2)
Warm-up Exploration: The model is trained in jailbreak instruction following
and exploration, using diversity and consistency as reward signals. (3)
Enhanced Jailbreak: Progressive jailbreak rewards are introduced to gradually
enhance the jailbreak performance of the red-team model. Extensive experiments
on a variety of LLMs show that Jailbreak-R1 effectively balances the diversity
and effectiveness of jailbreak prompts compared to existing methods. Our work
significantly improves the efficiency of red team exploration and provides a
new perspective on automated red teaming. | 2025-06-01T02:19:46Z | 21 pages, 8 figures | null | null | null | null | null | null | null | null | null |
2506.00863 | L3Cube-MahaEmotions: A Marathi Emotion Recognition Dataset with
Synthetic Annotations using CoTR prompting and Large Language Models | ['Nidhi Kowtal', 'Raviraj Joshi'] | ['cs.CL', 'cs.LG'] | Emotion recognition in low-resource languages like Marathi remains
challenging due to limited annotated data. We present L3Cube-MahaEmotions, a
high-quality Marathi emotion recognition dataset with 11 fine-grained emotion
labels. The training data is synthetically annotated using large language
models (LLMs), while the validation and test sets are manually labeled to serve
as a reliable gold-standard benchmark. Building on the MahaSent dataset, we
apply the Chain-of-Translation (CoTR) prompting technique, where Marathi
sentences are translated into English and emotion-labeled via a single prompt.
GPT-4 and Llama3-405B were evaluated, with GPT-4 selected for training data
annotation due to superior label quality. We evaluate model performance using
standard metrics and explore label aggregation strategies (e.g., Union,
Intersection). While GPT-4 predictions outperform fine-tuned BERT models,
BERT-based models trained on synthetic labels fail to surpass GPT-4. This
highlights both the importance of high-quality human-labeled data and the
inherent complexity of emotion recognition. An important finding of this work
is that generic LLMs like GPT-4 and Llama3-405B generalize better than
fine-tuned BERT for complex low-resource emotion recognition tasks. The dataset
and model are shared publicly at https://github.com/l3cube-pune/MarathiNLP | 2025-06-01T07:01:34Z | null | null | null | L3Cube-MahaEmotions: A Marathi Emotion Recognition Dataset with Synthetic Annotations using CoTR prompting and Large Language Models | ['Nidhi Kowtal', 'Raviraj Joshi'] | 2025 | arXiv.org | 0 | 25 | ['Computer Science'] |
2506.00956 | Continual-MEGA: A Large-scale Benchmark for Generalizable Continual
Anomaly Detection | ['Geonu Lee', 'Yujeong Oh', 'Geonhui Jang', 'Soyoung Lee', 'Jeonghyo Song', 'Sungmin Cha', 'YoungJoon Yoo'] | ['cs.CV'] | In this paper, we introduce a new benchmark for continual learning in anomaly
detection, aimed at better reflecting real-world deployment scenarios. Our
benchmark, Continual-MEGA, includes a large and diverse dataset that
significantly expands existing evaluation settings by combining carefully
curated existing datasets with our newly proposed dataset, ContinualAD. In
addition to standard continual learning with expanded quantity, we propose a
novel scenario that measures zero-shot generalization to unseen classes, those
not observed during continual adaptation. This poses a new problem setting in
which continual adaptation should also enhance zero-shot performance. We also
present a unified baseline algorithm that improves robustness in few-shot
detection and maintains strong generalization. Through extensive evaluations,
we report three key findings: (1) existing methods show substantial room for
improvement, particularly in pixel-level defect localization; (2) our proposed
method consistently outperforms prior approaches; and (3) the newly introduced
ContinualAD dataset enhances the performance of strong anomaly detection
models. We release the benchmark and code in
https://github.com/Continual-Mega/Continual-Mega. | 2025-06-01T11:00:24Z | null | null | null | null | null | null | null | null | null | null |
2506.00975 | NTPP: Generative Speech Language Modeling for Dual-Channel Spoken
Dialogue via Next-Token-Pair Prediction | ['Qichao Wang', 'Ziqiao Meng', 'Wenqian Cui', 'Yifei Zhang', 'Pengcheng Wu', 'Bingzhe Wu', 'Irwin King', 'Liang Chen', 'Peilin Zhao'] | ['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS'] | Inspired by the impressive capabilities of GPT-4o, there is growing interest
in enabling speech language models (SLMs) to engage in natural, fluid spoken
interactions with humans. Recent advancements have led to the development of
several SLMs that demonstrate promising results in this area. However, current
approaches have yet to fully exploit dual-channel speech data, which inherently
captures the structure and dynamics of human conversation. In this work, we
systematically explore the use of dual-channel speech data in the context of
modern large language models, and introduce a novel generative modeling
paradigm, Next-Token-Pair Prediction (NTPP), to enable speaker-independent
dual-channel spoken dialogue learning using decoder-only architectures for the
first time. We evaluate our approach on standard benchmarks, and empirical
results show that our proposed method, NTPP, significantly improves the
conversational abilities of SLMs in terms of turn-taking prediction, response
coherence, and naturalness. Moreover, compared to existing methods, NTPP
achieves substantially lower inference latency, highlighting its practical
efficiency for real-time applications. | 2025-06-01T12:01:40Z | Accepted by ICML 2025 | null | null | NTPP: Generative Speech Language Modeling for Dual-Channel Spoken Dialogue via Next-Token-Pair Prediction | ['Qichao Wang', 'Ziqiao Meng', 'Wenqian Cui', 'Yifei Zhang', 'Pengcheng Wu', 'Bingzhe Wu', 'Irwin King', 'Liang Chen', 'Peilin Zhao'] | 2025 | arXiv.org | 0 | 0 | ['Computer Science', 'Engineering'] |
2506.00981 | What do self-supervised speech models know about Dutch? Analyzing
advantages of language-specific pre-training | ['Marianne de Heer Kloots', 'Hosein Mohebbi', 'Charlotte Pouw', 'Gaofei Shen', 'Willem Zuidema', 'Martijn Bentum'] | ['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS'] | How language-specific are speech representations learned by self-supervised
models? Existing work has shown that a range of linguistic features can be
successfully decoded from end-to-end models trained only on speech recordings.
However, it's less clear to what extent pre-training on specific languages
improves language-specific linguistic information. Here we test the encoding of
Dutch phonetic and lexical information in internal representations of
self-supervised Wav2Vec2 models. Pre-training exclusively on Dutch improves the
representation of Dutch linguistic features as compared to pre-training on
similar amounts of English or larger amounts of multilingual data. This
language-specific advantage is well-detected by trained clustering or
classification probes, and partially observable using zero-shot metrics.
Furthermore, the language-specific benefit on linguistic feature encoding
aligns with downstream performance on Automatic Speech Recognition. | 2025-06-01T12:25:13Z | Accepted to Interspeech 2025. For model, code, and materials, see
https://github.com/mdhk/SSL-NL-eval | Proc. INTERSPEECH 2025 | 10.21437/Interspeech.2025-1526 | null | null | null | null | null | null | null |
2506.00993 | FlexSelect: Flexible Token Selection for Efficient Long Video
Understanding | ['Yunzhu Zhang', 'Yu Lu', 'Tianyi Wang', 'Fengyun Rao', 'Yi Yang', 'Linchao Zhu'] | ['cs.CV'] | Long-form video understanding poses a significant challenge for video large
language models (VideoLLMs) due to prohibitively high computational and memory
demands. In this paper, we propose FlexSelect, a flexible and efficient token
selection strategy for processing long videos. FlexSelect identifies and
retains the most semantically relevant content by leveraging cross-modal
attention patterns from a reference transformer layer. It comprises two key
components: (1) a training-free token ranking pipeline that leverages faithful
cross-modal attention weights to estimate each video token's importance, and
(2) a rank-supervised lightweight selector that is trained to replicate these
rankings and filter redundant tokens. This generic approach can be seamlessly
integrated into various VideoLLM architectures, such as LLaVA-Video, InternVL
and Qwen-VL, serving as a plug-and-play module to extend their temporal context
length. Empirically, FlexSelect delivers strong gains across multiple
long-video benchmarks including VideoMME, MLVU, LongVB, and LVBench. Moreover,
it achieves significant speed-ups (for example, up to 9 times on a
LLaVA-Video-7B model), highlighting FlexSelect's promise for efficient
long-form video understanding. Project page available at:
https://yunzhuzhang0918.github.io/flex_select | 2025-06-01T12:49:39Z | null | null | FlexSelect: Flexible Token Selection for Efficient Long Video Understanding | ['Yunzhu Zhang', 'Yu Lu', 'Tianyi Wang', 'Fengyun Rao', 'Yi Yang', 'Linchao Zhu'] | 2025 | arXiv.org | 0 | 38 | ['Computer Science'] |
2506.01078 | GThinker: Towards General Multimodal Reasoning via Cue-Guided Rethinking | ['Yufei Zhan', 'Ziheng Wu', 'Yousong Zhu', 'Rongkun Xue', 'Ruipu Luo', 'Zhenghao Chen', 'Can Zhang', 'Yifan Li', 'Zhentao He', 'Zheming Yang', 'Ming Tang', 'Minghui Qiu', 'Jinqiao Wang'] | ['cs.CV', 'cs.AI'] | Despite notable advancements in multimodal reasoning, leading Multimodal
Large Language Models (MLLMs) still underperform on vision-centric multimodal
reasoning tasks in general scenarios. This shortfall stems from their
predominant reliance on logic- and knowledge-based slow thinking strategies
which, while effective for domains like math and science, fail to integrate visual
information effectively during reasoning. Consequently, these models often fail
to adequately ground visual cues, resulting in suboptimal performance in tasks
that require multiple plausible visual interpretations and inferences. To
address this, we present GThinker (General Thinker), a novel reasoning MLLM
excelling in multimodal reasoning across general scenarios, mathematics, and
science. GThinker introduces Cue-Rethinking, a flexible reasoning pattern that
grounds inferences in visual cues and iteratively reinterprets these cues to
resolve inconsistencies. Building on this pattern, we further propose a
two-stage training pipeline, including pattern-guided cold start and incentive
reinforcement learning, designed to enable multimodal reasoning capabilities
across domains. Furthermore, to support the training, we construct
GThinker-11K, comprising 7K high-quality, iteratively-annotated reasoning paths
and 4K curated reinforcement learning samples, filling the data gap toward
general multimodal reasoning. Extensive experiments demonstrate that GThinker
achieves 81.5% on the challenging comprehensive multimodal reasoning benchmark
M$^3$CoT, surpassing the latest O4-mini model. It also shows an average
improvement of 2.1% on general scenario multimodal reasoning benchmarks, while
maintaining on-par performance in mathematical reasoning compared to
counterpart advanced reasoning models. The code, model, and data will be
released soon at https://github.com/jefferyZhan/GThinker. | 2025-06-01T16:28:26Z | Tech report | null | null | GThinker: Towards General Multimodal Reasoning via Cue-Guided Rethinking | ['Yufei Zhan', 'Ziheng Wu', 'Yousong Zhu', 'Rongkun Xue', 'Ruipu Luo', 'Zhenghao Chen', 'Can Zhang', 'Yifan Li', 'Zhentao He', 'Zheming Yang', 'Ming Tang', 'Minghui Qiu', 'Jinqiao Wang'] | 2025 | arXiv.org | 0 | 66 | ['Computer Science'] |
2506.01084 | zip2zip: Inference-Time Adaptive Vocabularies for Language Models via
Token Compression | ['Saibo Geng', 'Nathan Ranchin', 'Yunzhen Yao', 'Maxime Peyrard', 'Chris Wendler', 'Michael Gastpar', 'Robert West'] | ['cs.CL', 'cs.LG'] | Tokenization efficiency plays a critical role in the performance and cost of
large language models (LLMs), yet most models rely on static tokenizers
optimized for general-purpose corpora. These tokenizers' fixed vocabularies
often fail to adapt to domain- or language-specific inputs, leading to longer
token sequences and higher computational costs. We introduce zip2zip, a
framework that enables LLMs to dynamically adjust token vocabulary at inference
time, allowing for fewer generated tokens and thus faster inference. zip2zip
consists of three key components: (1) a tokenizer based on Lempel-Ziv-Welch
(LZW) compression that incrementally compresses tokens into reusable
"hypertokens" on the fly; (2) an embedding layer that computes embeddings for
newly formed hypertokens at runtime; and (3) a causal language modeling variant
that trains the model to operate on hypertokenized, compressed sequences. We
show that an existing LLM can be zip2zip-fied in 10 GPU-hours via
parameter-efficient finetuning. The resulting zip2zip LLMs effectively learn to
use hypertokens at inference time, reducing input and output sequence length by
20-60\%, with significant improvements in inference latency. | 2025-06-01T17:03:02Z | Code will be released at https://github.com/epfl-dlab/zip2zip | null | null | zip2zip: Inference-Time Adaptive Vocabularies for Language Models via Token Compression | ['Saibo Geng', 'Nathan Ranchin', 'Yunzhen Yao', 'Maxime Peyrard', 'Chris Wendler', 'Michael Gastpar', 'Robert West'] | 2,025 | arXiv.org | 0 | 49 | ['Computer Science'] |
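The LZW-style "hypertoken" construction described in the zip2zip abstract can be sketched as follows. This is a minimal illustration over integer token ids, not the paper's actual tokenizer; the function name and the merge-table layout are assumptions for illustration only.

```python
def lzw_compress(tokens):
    """Greedy LZW over a token-id sequence: repeated adjacent pairs are
    incrementally merged into new 'hypertoken' ids built on the fly."""
    next_id = max(tokens) + 1   # hypertoken ids start above the base vocab
    table = {}                  # (current_id, next_token) -> hypertoken id
    out = []
    cur = tokens[0]
    for tok in tokens[1:]:
        if (cur, tok) in table:
            cur = table[(cur, tok)]       # extend the current hypertoken
        else:
            out.append(cur)               # emit what we have so far
            table[(cur, tok)] = next_id   # register a new merge for reuse
            next_id += 1
            cur = tok
    out.append(cur)
    return out, table
```

On repetitive input the output sequence is shorter than the input (e.g. `[1, 2, 1, 2, 1, 2]` compresses to four ids), which is the mechanism behind the 20-60% sequence-length reduction the abstract reports; sequences with no repeated pairs pass through unchanged.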
2,506.01262 | Exploring the Potential of LLMs as Personalized Assistants: Dataset,
Evaluation, and Analysis | ['Jisoo Mok', 'Ik-hwan Kim', 'Sangkwon Park', 'Sungroh Yoon'] | ['cs.CL'] | Personalized AI assistants, a hallmark of the human-like capabilities of
Large Language Models (LLMs), are a challenging application that intertwines
multiple problems in LLM research. Despite the growing interest in the
development of personalized assistants, the lack of an open-source
conversational dataset tailored for personalization remains a significant
obstacle for researchers in the field. To address this research gap, we
introduce HiCUPID, a new benchmark to probe and unleash the potential of LLMs
to deliver personalized responses. Alongside a conversational dataset, HiCUPID
provides a Llama-3.2-based automated evaluation model whose assessment closely
mirrors human preferences. We release our dataset, evaluation model, and code
at https://github.com/12kimih/HiCUPID. | 2025-06-02T02:25:46Z | ACL 2025 | null | null | Exploring the Potential of LLMs as Personalized Assistants: Dataset, Evaluation, and Analysis | ['J. Mok', 'Ik-hwan Kim', 'Sangkwon Park', 'Sungroh Yoon'] | 2,025 | arXiv.org | 0 | 49 | ['Computer Science'] |
2,506.01357 | KokoroChat: A Japanese Psychological Counseling Dialogue Dataset
Collected via Role-Playing by Trained Counselors | ['Zhiyang Qi', 'Takumasa Kaneko', 'Keiko Takamizo', 'Mariko Ukiyo', 'Michimasa Inaba'] | ['cs.CL', 'cs.AI'] | Generating psychological counseling responses with language models relies
heavily on high-quality datasets. Crowdsourced data collection methods require
strict worker training, and data from real-world counseling environments may
raise privacy and ethical concerns. While recent studies have explored using
large language models (LLMs) to augment psychological counseling dialogue
datasets, the resulting data often suffers from limited diversity and
authenticity. To address these limitations, this study adopts a role-playing
approach where trained counselors simulate counselor-client interactions,
ensuring high-quality dialogues while mitigating privacy risks. Using this
method, we construct KokoroChat, a Japanese psychological counseling dialogue
dataset comprising 6,589 long-form dialogues, each accompanied by comprehensive
client feedback. Experimental results demonstrate that fine-tuning open-source
LLMs with KokoroChat improves both the quality of generated counseling
responses and the automatic evaluation of counseling dialogues. The KokoroChat
dataset is available at https://github.com/UEC-InabaLab/KokoroChat. | 2025-06-02T06:20:53Z | Accepted to ACL 2025 Main Conference | null | null | KokoroChat: A Japanese Psychological Counseling Dialogue Dataset Collected via Role-Playing by Trained Counselors | ['Zhiyang Qi', 'Takumasa Kaneko', 'Keiko Takamizo', 'Mariko Ukiyo', 'Michimasa Inaba'] | 2,025 | arXiv.org | 0 | 34 | ['Computer Science'] |
2,506.01391 | AgentCPM-GUI: Building Mobile-Use Agents with Reinforcement Fine-Tuning | ['Zhong Zhang', 'Yaxi Lu', 'Yikun Fu', 'Yupeng Huo', 'Shenzhi Yang', 'Yesai Wu', 'Han Si', 'Xin Cong', 'Haotian Chen', 'Yankai Lin', 'Jie Xie', 'Wei Zhou', 'Wang Xu', 'Yuanheng Zhang', 'Zhou Su', 'Zhongwu Zhai', 'Xiaoming Liu', 'Yudong Mei', 'Jianming Xu', 'Hongyan Tian', 'Chongyi Wang', 'Chi Chen', 'Yuan Yao', 'Zhiyuan Liu', 'Maosong Sun'] | ['cs.AI', 'cs.CL', 'cs.CV', 'cs.HC', 'I.2.8; I.2.7; I.2.10; H.5.2'] | The recent progress of large language model agents has opened new
possibilities for automating tasks through graphical user interfaces (GUIs),
especially in mobile environments where intelligent interaction can greatly
enhance usability. However, practical deployment of such agents remains
constrained by several key challenges. Existing training data is often noisy
and lacks semantic diversity, which hinders the learning of precise grounding
and planning. Models trained purely by imitation tend to overfit to seen
interface patterns and fail to generalize in unfamiliar scenarios. Moreover,
most prior work focuses on English interfaces while overlooking the growing
diversity of non-English applications such as those in the Chinese mobile
ecosystem. In this work, we present AgentCPM-GUI, an 8B-parameter GUI agent
built for robust and efficient on-device GUI interaction. Our training pipeline
includes grounding-aware pre-training to enhance perception, supervised
fine-tuning on high-quality Chinese and English trajectories to imitate
human-like actions, and reinforcement fine-tuning with GRPO to improve
reasoning capability. We also introduce a compact action space that reduces
output length and supports low-latency execution on mobile devices.
AgentCPM-GUI achieves state-of-the-art performance on five public benchmarks
and a new Chinese GUI benchmark called CAGUI, reaching $96.9\%$ Type-Match and
$91.3\%$ Exact-Match. To facilitate reproducibility and further research, we
publicly release all code, model checkpoint, and evaluation data. | 2025-06-02T07:30:29Z | Updated results in Table 2 and Table 3; The project is available at
https://github.com/OpenBMB/AgentCPM-GUI | null | null | AgentCPM-GUI: Building Mobile-Use Agents with Reinforcement Fine-Tuning | ['Zhong Zhang', 'Ya-Ting Lu', 'Yikun Fu', 'Yupeng Huo', 'Shenzhi Yang', 'Yesai Wu', 'Han Si', 'Xin Cong', 'Haotian Chen', 'Yankai Lin', 'Jie Xie', 'Wei Zhou', 'Wang Xu', 'Yuanheng Zhang', 'Zhou Su', 'Zhongwu Zhai', 'Xiao-Meng Liu', 'Yudong Mei', 'Jianming Xu', 'Hongyan Tian', 'Chongyi Wang', 'Chi Chen', 'Yuan Yao', 'Zhiyuan Liu', 'Mao-Ben Sun'] | 2,025 | arXiv.org | 0 | 58 | ['Computer Science'] |
2,506.01413 | Incentivizing Reasoning for Advanced Instruction-Following of Large
Language Models | ['Yulei Qin', 'Gang Li', 'Zongyi Li', 'Zihan Xu', 'Yuchen Shi', 'Zhekai Lin', 'Xiao Cui', 'Ke Li', 'Xing Sun'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG'] | Existing large language models (LLMs) face challenges of following complex
instructions, especially when multiple constraints are present and organized in
paralleling, chaining, and branching structures. One intuitive solution, namely
chain-of-thought (CoT), is expected to universally improve capabilities of
LLMs. However, we find that the vanilla CoT exerts a negative impact on
performance due to its superficial reasoning pattern of simply paraphrasing the
instructions. It fails to peel back the compositions of constraints for
identifying their relationship across hierarchies of types and dimensions. To
this end, we propose a systematic method to boost LLMs in dealing with complex
instructions via incentivizing reasoning for test-time compute scaling. First,
we start from the decomposition of complex instructions under existing
taxonomies and propose a reproducible data acquisition method. Second, we
exploit reinforcement learning (RL) with verifiable rule-centric reward signals
to cultivate reasoning specifically for instruction following. We address the
shallow, non-essential nature of reasoning under complex instructions via
sample-wise contrast for superior CoT enforcement. We also exploit behavior
cloning of experts to facilitate steady distribution shift from fast-thinking
LLMs to skillful reasoners. Extensive evaluations on seven comprehensive
benchmarks confirm the validity of the proposed method, where a 1.5B LLM
achieves 11.74% gains, with performance comparable to an 8B LLM. Code and data
will be available later (under review).
Keywords: reinforcement learning with verifiable rewards (RLVR), instruction
following, complex instructions | 2025-06-02T08:11:44Z | 13 pages of main body, 3 tables, 5 figures, 45 pages of appendix | null | null | null | null | null | null | null | null | null |
2,506.01666 | Synthesis of discrete-continuous quantum circuits with multimodal
diffusion models | ['Florian Fürrutter', 'Zohim Chandani', 'Ikko Hamamura', 'Hans J. Briegel', 'Gorka Muñoz-Gil'] | ['quant-ph', 'cs.AI', 'cs.LG'] | Efficiently compiling quantum operations remains a major bottleneck in
scaling quantum computing. Today's state-of-the-art methods achieve low
compilation error by combining search algorithms with gradient-based parameter
optimization, but they incur long runtimes and require multiple calls to
quantum hardware or expensive classical simulations, making their scaling
prohibitive. Recently, machine-learning models have emerged as an alternative,
though they are currently restricted to discrete gate sets. Here, we introduce
a multimodal denoising diffusion model that simultaneously generates a
circuit's structure and its continuous parameters for compiling a target
unitary. It leverages two independent diffusion processes, one for discrete
gate selection and one for parameter prediction. We benchmark the model over
different experiments, analyzing the method's accuracy across varying qubit
counts, circuit depths, and proportions of parameterized gates. Finally, by
exploiting its rapid circuit generation, we create large datasets of circuits
for particular operations and use these to extract valuable heuristics that can
help us discover new insights into quantum circuit synthesis. | 2025-06-02T13:35:33Z | Main Text: 10 pages and 5 figures; Appendix: 17 pages, 7 figures and
1 table. Code available at: https://github.com/FlorianFuerrutter/genQC | null | null | null | null | null | null | null | null | null |
2,506.01801 | OmniV2V: Versatile Video Generation and Editing via Dynamic Content
Manipulation | ['Sen Liang', 'Zhentao Yu', 'Zhengguang Zhou', 'Teng Hu', 'Hongmei Wang', 'Yi Chen', 'Qin Lin', 'Yuan Zhou', 'Xin Li', 'Qinglin Lu', 'Zhibo Chen'] | ['cs.CV'] | The emergence of Diffusion Transformers (DiT) has brought significant
advancements to video generation, especially in text-to-video and
image-to-video tasks. Although video generation is widely applied in various
fields, most existing models are limited to single scenarios and cannot perform
diverse video generation and editing through dynamic content manipulation. We
propose OmniV2V, a video model capable of generating and editing videos across
different scenarios based on various operations, including: object movement,
object addition, mask-guided video edit, try-on, inpainting, outpainting, human
animation, and controllable character video synthesis. We explore a unified
dynamic content manipulation injection module, which effectively integrates the
requirements of the above tasks. In addition, we design a visual-text
instruction module based on LLaVA, enabling the model to effectively understand
the correspondence between visual content and instructions. Furthermore, we
build a comprehensive multi-task data processing system. Since there is data
overlap among various tasks, this system can efficiently provide data
augmentation. Using this system, we construct a multi-type, multi-scenario
OmniV2V dataset and its corresponding OmniV2V-Test benchmark. Extensive
experiments show that OmniV2V works as well as, and sometimes better than, the
best existing open-source and commercial models for many video generation and
editing tasks. | 2025-06-02T15:42:06Z | null | null | null | OmniV2V: Versatile Video Generation and Editing via Dynamic Content Manipulation | ['Sen Liang', 'Zhentao Yu', 'Zhengguang Zhou', 'Teng Hu', 'Hongmei Wang', 'Yi Chen', 'Qin Lin', 'Yuan Zhou', 'Xin Li', 'Qinglin Lu', 'Zhibo Chen'] | 2,025 | arXiv.org | 0 | 61 | ['Computer Science'] |
2,506.01806 | Ridgeformer: Mutli-Stage Contrastive Training For Fine-grained
Cross-Domain Fingerprint Recognition | ['Shubham Pandey', 'Bhavin Jawade', 'Srirangaraj Setlur'] | ['cs.CV', 'cs.AI'] | The increasing demand for hygienic and portable biometric systems has
underscored the critical need for advancements in contactless fingerprint
recognition. Despite its potential, this technology faces notable challenges,
including out-of-focus image acquisition, reduced contrast between fingerprint
ridges and valleys, variations in finger positioning, and perspective
distortion. These factors significantly hinder the accuracy and reliability of
contactless fingerprint matching. To address these issues, we propose a novel
multi-stage transformer-based contactless fingerprint matching approach that
first captures global spatial features and subsequently refines localized
feature alignment across fingerprint samples. By employing a hierarchical
feature extraction and matching pipeline, our method ensures fine-grained,
cross-sample alignment while maintaining the robustness of global feature
representation. We perform extensive evaluations on publicly available datasets
such as HKPolyU and RidgeBase under different evaluation protocols, such as
contactless-to-contact matching and contactless-to-contactless matching and
demonstrate that our proposed approach outperforms existing methods, including
COTS solutions. | 2025-06-02T15:51:45Z | Accepted to IEEE International Conference on Image Processing 2025 | null | null | Ridgeformer: Mutli-Stage Contrastive Training For Fine-grained Cross-Domain Fingerprint Recognition | ['Shubham Pandey', 'Bhavin Jawade', 'Srirangaraj Setlur'] | 2,025 | arXiv.org | 0 | 22 | ['Computer Science'] |
2,506.01833 | SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model | ['Zhao Yang', 'Jiwei Zhu', 'Bing Su'] | ['cs.LG', 'q-bio.GN'] | Inspired by the success of unsupervised pre-training paradigms, researchers
have applied these approaches to DNA pre-training. However, we argue that these
approaches alone yield suboptimal results because pure DNA sequences lack
sufficient information, since their functions are regulated by genomic profiles
like chromatin accessibility. Here, we demonstrate that supervised training for
genomic profile prediction serves as a more effective alternative to pure
sequence pre-training. Furthermore, considering the multi-species and
multi-profile nature of genomic profile prediction, we introduce our
$\textbf{S}$pecies-$\textbf{P}$rofile $\textbf{A}$daptive
$\textbf{C}$ollaborative $\textbf{E}$xperts (SPACE) that leverages Mixture of
Experts (MoE) to better capture the relationships between DNA sequences across
different species and genomic profiles, thereby learning more effective DNA
representations. Through extensive experiments across various tasks, our model
achieves state-of-the-art performance, establishing that DNA models trained
with supervised genomic profiles serve as powerful DNA representation learners.
The code is available at https://github.com/ZhuJiwei111/SPACE. | 2025-06-02T16:23:05Z | Accepted to ICML 2025 | null | null | null | null | null | null | null | null | null |
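The SPACE abstract leverages a Mixture of Experts (MoE) to share capacity across species and genomic profiles. As a toy sketch only, a softmax-gated mixture over scalar inputs looks like this; the real model's routing and expert architecture are not specified here and this names nothing from the paper.

```python
import math

def moe_forward(x, experts, gate_weights):
    """Softmax-gated mixture of experts (toy scalar sketch): each expert's
    output is weighted by a gate computed from the input."""
    logits = [w * x for w in gate_weights]
    m = max(logits)                                  # stabilize the softmax
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    gates = [e / z for e in exps]
    return sum(g * expert(x) for g, expert in zip(gates, experts))
```

With uniform gate weights the experts are averaged; learned gates let different inputs (here, different species or profiles) route to different experts.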
2,506.01844 | SmolVLA: A Vision-Language-Action Model for Affordable and Efficient
Robotics | ['Mustafa Shukor', 'Dana Aubakirova', 'Francesco Capuano', 'Pepijn Kooijmans', 'Steven Palma', 'Adil Zouitine', 'Michel Aractingi', 'Caroline Pascal', 'Martino Russi', 'Andres Marafioti', 'Simon Alibert', 'Matthieu Cord', 'Thomas Wolf', 'Remi Cadene'] | ['cs.LG', 'cs.RO'] | Vision-language models (VLMs) pretrained on large-scale multimodal datasets
encode rich visual and linguistic knowledge, making them a strong foundation
for robotics. Rather than training robotic policies from scratch, recent
approaches adapt VLMs into vision-language-action (VLA) models that enable
natural language-driven perception and control. However, existing VLAs are
typically massive--often with billions of parameters--leading to high training
costs and limited real-world deployability. Moreover, they rely on academic and
industrial datasets, overlooking the growing availability of
community-collected data from affordable robotic platforms. In this work, we
present SmolVLA, a small, efficient, and community-driven VLA that drastically
reduces both training and inference costs, while retaining competitive
performance. SmolVLA is designed to be trained on a single GPU and deployed on
consumer-grade GPUs or even CPUs. To further improve responsiveness, we
introduce an asynchronous inference stack decoupling perception and action
prediction from action execution, allowing higher control rates with chunked
action generation. Despite its compact size, SmolVLA achieves performance
comparable to VLAs that are 10x larger. We evaluate SmolVLA on a range of both
simulated as well as real-world robotic benchmarks and release all code,
pretrained models, and training data. | 2025-06-02T16:30:19Z | 24 pages. Code and assets: https://github.com/huggingface/lerobot | null | null | SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics | ['Mustafa Shukor', 'Dana Aubakirova', 'Francesco Capuano', 'Pepijn Kooijmans', 'Steven Palma', 'Adil Zouitine', 'Michel Aractingi', 'Caroline Pascal', 'Martino Russi', 'Andrés Marafioti', 'Simon Alibert', 'Matthieu Cord', 'Thomas Wolf', 'Rémi Cadène'] | 2,025 | arXiv.org | 0 | 85 | ['Computer Science'] |
2,506.01853 | ShapeLLM-Omni: A Native Multimodal LLM for 3D Generation and
Understanding | ['Junliang Ye', 'Zhengyi Wang', 'Ruowen Zhao', 'Shenghao Xie', 'Jun Zhu'] | ['cs.CV'] | Recently, the powerful text-to-image capabilities of ChatGPT-4o have led to
growing appreciation for native multimodal large language models. However, its
multimodal capabilities remain confined to images and text. Yet beyond images,
the ability to understand and generate 3D content is equally crucial. To
address this gap, we propose ShapeLLM-Omni, a native 3D large language model
capable of understanding and generating 3D assets and text in any sequence.
First, we train a 3D vector-quantized variational autoencoder (VQVAE), which
maps 3D objects into a discrete latent space to achieve efficient and accurate
shape representation and reconstruction. Building upon the 3D-aware discrete
tokens, we innovatively construct a large-scale continuous training dataset
named 3D-Alpaca, encompassing generation, comprehension, and editing, thus
providing rich resources for future research and training. Finally, we perform
instruction-based training of the Qwen-2.5-vl-7B-Instruct model on the
3D-Alpaca dataset. Our work provides an effective attempt at extending
multimodal models with basic 3D capabilities, which contributes to future
research in 3D-native AI. Project page:
https://github.com/JAMESYJL/ShapeLLM-Omni | 2025-06-02T16:40:50Z | Project page: https://github.com/JAMESYJL/ShapeLLM-Omni | null | null | null | null | null | null | null | null | null |
2,506.01937 | RewardBench 2: Advancing Reward Model Evaluation | ['Saumya Malik', 'Valentina Pyatkin', 'Sander Land', 'Jacob Morrison', 'Noah A. Smith', 'Hannaneh Hajishirzi', 'Nathan Lambert'] | ['cs.CL'] | Reward models are used throughout the post-training of language models to
capture nuanced signals from preference data and provide a training target for
optimization across instruction following, reasoning, safety, and more domains.
The community has begun establishing best practices for evaluating reward
models, from the development of benchmarks that test capabilities in specific
skill areas to others that test agreement with human preferences. At the same
time, progress in evaluation has not been mirrored by the effectiveness of
reward models in downstream tasks -- simpler direct alignment algorithms are
reported to work better in many cases. This paper introduces RewardBench 2, a
new multi-skill reward modeling benchmark designed to bring new, challenging
data for accuracy-based reward model evaluation -- models score about 20 points
on average lower on RewardBench 2 compared to the first RewardBench -- while
being highly correlated with downstream performance. Compared to most other
benchmarks, RewardBench 2 sources new human prompts instead of existing prompts
from downstream evaluations, facilitating more rigorous evaluation practices.
In this paper, we describe our benchmark construction process and report how
existing models perform on it, while quantifying how performance on the
benchmark correlates with downstream use of the models in both inference-time
scaling algorithms, like best-of-N sampling, and RLHF training algorithms like
proximal policy optimization. | 2025-06-02T17:54:04Z | Data, models, and leaderboard available at
https://huggingface.co/collections/allenai/reward-bench-2-683d2612a4b3e38a3e53bb51 | null | null | null | null | null | null | null | null | null |
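The best-of-N sampling that RewardBench 2 uses as a downstream check can be sketched in a few lines; `generate` and `reward_model` are hypothetical callables standing in for a sampled LLM and a trained reward model.

```python
def best_of_n(prompt, generate, reward_model, n=4):
    """Inference-time scaling: sample n candidate responses and keep the
    one the reward model scores highest for this prompt."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: reward_model(prompt, c))
```

A reward model whose benchmark accuracy correlates with downstream quality will, under this selection rule, raise the expected quality of the returned response as n grows.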
2,506.01949 | IMAGHarmony: Controllable Image Editing with Consistent Object Quantity
and Layout | ['Fei Shen', 'Xiaoyu Du', 'Yutong Gao', 'Jian Yu', 'Yushe Cao', 'Xing Lei', 'Jinhui Tang'] | ['cs.CV'] | Recent diffusion models have advanced image editing by enhancing visual
quality and control, supporting broad applications across creative and
personalized domains. However, current image editing largely overlooks
multi-object scenarios, where precise control over object categories, counts,
and spatial layouts remains a significant challenge. To address this, we
introduce a new task, quantity-and-layout consistent image editing (QL-Edit),
which aims to enable fine-grained control of object quantity and spatial
structure in complex scenes. We further propose IMAGHarmony, a structure-aware
framework that incorporates harmony-aware attention (HA) to integrate
multimodal semantics, explicitly modeling object counts and layouts to enhance
editing accuracy and structural consistency. In addition, we observe that
diffusion models are susceptible to initial noise and exhibit strong
preferences for specific noise patterns. Motivated by this, we present a
preference-guided noise selection (PNS) strategy that chooses semantically
aligned initial noise samples based on vision-language matching, thereby
improving generation stability and layout consistency in multi-object editing.
To support evaluation, we construct HarmonyBench, a comprehensive benchmark
covering diverse quantity and layout control scenarios. Extensive experiments
demonstrate that IMAGHarmony consistently outperforms state-of-the-art methods
in structural alignment and semantic accuracy. The code and model are available
at https://github.com/muzishen/IMAGHarmony. | 2025-06-02T17:59:09Z | null | null | null | IMAGHarmony: Controllable Image Editing with Consistent Object Quantity and Layout | ['Fei Shen', 'Xiaoyu Du', 'Yutong Gao', 'Jian Yu', 'Yushe Cao', 'Xing Lei', 'Jinhui Tang'] | 2,025 | arXiv.org | 0 | 69 | ['Computer Science'] |
2,506.02018 | Enhancing Paraphrase Type Generation: The Impact of DPO and RLHF
Evaluated with Human-Ranked Data | ['Christopher Lee Lübbers'] | ['cs.CL', 'I.2.7'] | Paraphrasing re-expresses meaning to enhance applications like text
simplification, machine translation, and question-answering. Specific
paraphrase types facilitate accurate semantic analysis and robust language
models. However, existing paraphrase-type generation methods often misalign
with human preferences due to reliance on automated metrics and limited
human-annotated training data, obscuring crucial aspects of semantic fidelity
and linguistic transformations.
This study addresses this gap by leveraging a human-ranked paraphrase-type
dataset and integrating Direct Preference Optimization (DPO) to align model
outputs directly with human judgments. DPO-based training increases
paraphrase-type generation accuracy by 3 percentage points over a supervised
baseline and raises human preference ratings by 7 percentage points. A newly
created human-annotated dataset supports more rigorous future evaluations.
Additionally, a paraphrase-type detection model achieves F1 scores of 0.91 for
addition/deletion, 0.78 for same polarity substitution, and 0.70 for
punctuation changes.
These findings demonstrate that preference data and DPO training produce more
reliable, semantically accurate paraphrases, enabling downstream applications
such as improved summarization and more robust question-answering. The PTD
model surpasses automated metrics and provides a more reliable framework for
evaluating paraphrase quality, advancing paraphrase-type research toward
richer, user-aligned language generation and establishing a stronger foundation
for future evaluations grounded in human-centric criteria. | 2025-05-28T07:52:18Z | 21 pages, 11 figures. Master's thesis, University of Goettingen,
December 2025. Code: https://github.com/cluebbers/dpo-rlhf-paraphrase-types.
Models:
https://huggingface.co/collections/cluebbers/enhancing-paraphrase-type-generation-673ca8d75dfe2ce962a48ac0 | null | null | null | null | null | null | null | null | null |
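The DPO objective used in the paraphrase-type study above has a compact closed form for a single preference pair; this sketch assumes log-probabilities are already computed for the chosen and rejected responses under the trained policy and a frozen reference model.

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair:
    -log sigmoid(beta * (policy/reference log-ratio margin))."""
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

At zero margin the loss is log 2; as the policy assigns relatively more probability to the human-preferred paraphrase than the reference does, the loss falls toward zero.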
2,506.02095 | Cycle Consistency as Reward: Learning Image-Text Alignment without Human
Preferences | ['Hyojin Bahng', 'Caroline Chan', 'Fredo Durand', 'Phillip Isola'] | ['cs.CV', 'cs.LG'] | Learning alignment between language and vision is a fundamental challenge,
especially as multimodal data becomes increasingly detailed and complex.
Existing methods often rely on collecting human or AI preferences, which can be
costly and time-intensive. We propose an alternative approach that leverages
cycle consistency as a supervisory signal. Given an image and generated text,
we map the text back to image space using a text-to-image model and compute the
similarity between the original image and its reconstruction. Analogously, for
text-to-image generation, we measure the textual similarity between an input
caption and its reconstruction through the cycle. We use the cycle consistency
score to rank candidates and construct a preference dataset of 866K comparison
pairs. The reward model trained on our dataset outperforms state-of-the-art
alignment metrics on detailed captioning, with superior inference-time
scalability when used as a verifier for Best-of-N sampling. Furthermore,
performing DPO and Diffusion DPO using our dataset enhances performance across
a wide range of vision-language tasks and text-to-image generation. Our
dataset, model, and code are at https://cyclereward.github.io | 2025-06-02T17:42:58Z | null | null | null | null | null | null | null | null | null | null |
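The cycle-consistency ranking described in this abstract reduces to a simple loop once the text-to-image model and similarity metric are abstracted away; both callables here are hypothetical stand-ins (the test uses toy set-overlap similarity, not the paper's actual models).

```python
def cycle_consistency_rank(image, captions, text_to_image, similarity):
    """Rank candidate captions by how faithfully each caption reconstructs
    the original image when mapped back through a text-to-image model."""
    scored = [(similarity(image, text_to_image(c)), c) for c in captions]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored]
```

Ranking candidates this way yields ordered pairs (higher-scored vs. lower-scored captions) without any human labels, which is how the 866K-pair preference dataset is constructed.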
2,506.02096 | SynthRL: Scaling Visual Reasoning with Verifiable Data Synthesis | ['Zijian Wu', 'Jinjie Ni', 'Xiangyan Liu', 'Zichen Liu', 'Hang Yan', 'Michael Qizhe Shieh'] | ['cs.LG', 'cs.CL', 'cs.CV'] | Vision-language models (VLMs) trained via reinforcement learning with
verifiable reward (RLVR) have shown notable progress in scaling test-time
compute effectively. In this work, we investigate how synthesized RL data can
further improve RLVR. To this end, we propose \textbf{SynthRL}, a scalable and
guaranteed pipeline for automatic data scaling in reasoning-oriented RL
training. SynthRL comprises three key stages: (1) selecting seed questions with
appropriate distribution, (2) augmenting them into more challenging variants
while preserving the original answers, and (3) a guaranteed verification stage
that ensures near-perfect correctness and difficulty enhancement. Our empirical
experiments demonstrate SynthRL's scalability and effectiveness. When applied
to the MMK12 dataset, SynthRL synthesizes over 3.3K additional verifiable,
challenging questions from approximately 8K seed samples. Models trained with
our synthesized data achieve consistent gains across five out-of-domain visual
math reasoning benchmarks, with a significant improvement over baseline models
trained on seed data alone. Notably, detailed analysis reveals that the gains
are more pronounced on the most challenging evaluation samples, highlighting
SynthRL's effectiveness in eliciting deeper and more complex reasoning
patterns. | 2025-06-02T17:45:16Z | null | null | null | null | null | null | null | null | null | null |
2,506.02178 | Cocktail-Party Audio-Visual Speech Recognition | ['Thai-Binh Nguyen', 'Ngoc-Quan Pham', 'Alexander Waibel'] | ['cs.SD', 'cs.CL'] | Audio-Visual Speech Recognition (AVSR) offers a robust solution for speech
recognition in challenging environments, such as cocktail-party scenarios,
where relying solely on audio proves insufficient. However, current AVSR models
are often optimized for idealized scenarios with consistently active speakers,
overlooking the complexities of real-world settings that include both speaking
and silent facial segments. This study addresses this gap by introducing a
novel audio-visual cocktail-party dataset designed to benchmark current AVSR
systems and highlight the limitations of prior approaches in realistic noisy
conditions. Additionally, we contribute a 1526-hour AVSR dataset comprising
both talking-face and silent-face segments, enabling significant performance
gains in cocktail-party environments. Our approach reduces WER by 67% relative
to the state of the art, from 119% to 39.2% in extreme noise,
without relying on explicit segmentation cues. | 2025-06-02T19:07:51Z | Accepted at Interspeech 2025 | null | null | null | null | null | null | null | null | null |
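The word error rate (WER) figures quoted in this row, including a baseline above 100%, follow from the standard definition: word-level edit distance divided by reference length, so heavy insertions can push WER past 1.0. A minimal implementation:

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + insertions + deletions) / len(ref),
    via a one-row dynamic-programming Levenshtein over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = list(range(len(hyp) + 1))        # d[j] = distance(ref[:i], hyp[:j])
    for i in range(1, len(ref) + 1):
        prev_diag, d[0] = d[0], i
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            prev_diag, d[j] = d[j], min(d[j] + 1,          # deletion
                                        d[j - 1] + 1,      # insertion
                                        prev_diag + cost)  # substitution
    return d[-1] / len(ref)
```

For example, a one-word reference transcribed with two spurious words gives a WER of 200%, which is why a noisy baseline can sit at 119% before dropping to 39.2%.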
2,506.02295 | QARI-OCR: High-Fidelity Arabic Text Recognition through Multimodal Large
Language Model Adaptation | ['Ahmed Wasfy', 'Omer Nacar', 'Abdelakreem Elkhateb', 'Mahmoud Reda', 'Omar Elshehy', 'Adel Ammar', 'Wadii Boulila'] | ['cs.CV', 'cs.AI'] | The inherent complexities of Arabic script; its cursive nature, diacritical
marks (tashkeel), and varied typography, pose persistent challenges for Optical
Character Recognition (OCR). We present Qari-OCR, a series of vision-language
models derived from Qwen2-VL-2B-Instruct, progressively optimized for Arabic
through iterative fine-tuning on specialized synthetic datasets. Our leading
model, QARI v0.2, establishes a new open-source state-of-the-art with a Word
Error Rate (WER) of 0.160, Character Error Rate (CER) of 0.061, and BLEU score
of 0.737 on diacritically-rich texts. Qari-OCR demonstrates superior handling
of tashkeel, diverse fonts, and document layouts, alongside impressive
performance on low-resolution images. Further explorations (QARI v0.3) showcase
strong potential for structural document understanding and handwritten text.
This work delivers a marked improvement in Arabic OCR accuracy and efficiency,
with all models and datasets released to foster further research. | 2025-06-02T22:21:06Z | null | null | null | null | null | null | null | null | null | null |
2,506.02459 | ReSpace: Text-Driven 3D Scene Synthesis and Editing with Preference
Alignment | ['Martin JJ. Bucher', 'Iro Armeni'] | ['cs.CV', 'I.2.10; I.2.7'] | Scene synthesis and editing has emerged as a promising direction in computer
graphics. Current trained approaches for 3D indoor scenes either oversimplify
object semantics through one-hot class encodings (e.g., 'chair' or 'table'),
require masked diffusion for editing, ignore room boundaries, or rely on floor
plan renderings that fail to capture complex layouts. In contrast, LLM-based
methods enable richer semantics via natural language (e.g., 'modern studio with
light wood furniture') but do not support editing, remain limited to
rectangular layouts, or rely on weak spatial reasoning from implicit world
models. We introduce ReSpace, a generative framework for text-driven 3D indoor
scene synthesis and editing using autoregressive language models. Our approach
features a compact structured scene representation with explicit room
boundaries that frames scene editing as a next-token prediction task. We
leverage a dual-stage training approach combining supervised fine-tuning and
preference alignment, enabling a specially trained language model for object
addition that accounts for user instructions, spatial geometry, object
semantics, and scene-level composition. For scene editing, we employ a
zero-shot LLM to handle object removal and prompts for addition. We further
introduce a novel voxelization-based evaluation that captures fine-grained
geometry beyond 3D bounding boxes. Experimental results surpass
state-of-the-art on object addition while maintaining competitive results on
full scene synthesis. | 2025-06-03T05:22:04Z | 20 pages, 17 figures (incl. appendix) | null | null | null | null | null | null | null | null | null |
2,506.02587 | BEVCALIB: LiDAR-Camera Calibration via Geometry-Guided Bird's-Eye View
Representations | ['Weiduo Yuan', 'Jerry Li', 'Justin Yue', 'Divyank Shah', 'Konstantinos Karydis', 'Hang Qiu'] | ['cs.CV', 'cs.RO'] | Accurate LiDAR-camera calibration is fundamental to fusing multi-modal
perception in autonomous driving and robotic systems. Traditional calibration
methods require extensive data collection in controlled environments and cannot
compensate for transformation changes during vehicle/robot movement. In
this paper, we propose the first model that uses bird's-eye view (BEV) features
to perform LiDAR-camera calibration from raw data, termed BEVCALIB. To achieve
this, we extract camera BEV features and LiDAR BEV features separately and fuse
them into a shared BEV feature space. To fully utilize the geometric
information from the BEV feature, we introduce a novel feature selector to
filter the most important features in the transformation decoder, which reduces
memory consumption and enables efficient training. Extensive evaluations on
KITTI, NuScenes, and our own dataset demonstrate that BEVCALIB establishes a
new state of the art. Under various noise conditions, BEVCALIB outperforms the
best baseline in the literature by an average of (47.08%, 82.32%) on the KITTI
dataset and (78.17%, 68.29%) on the NuScenes dataset, in terms of (translation,
rotation), respectively. In the open-source domain, it improves the best
reproducible baseline by one order of magnitude. Our code and demo results are
available at https://cisl.ucr.edu/BEVCalib. | 2025-06-03T08:07:18Z | null | null | null | null | null | null | null | null | null | null |
2,506.02751 | RobustSplat: Decoupling Densification and Dynamics for Transient-Free
3DGS | ['Chuanyu Fu', 'Yuqi Zhang', 'Kunbin Yao', 'Guanying Chen', 'Yuan Xiong', 'Chuan Huang', 'Shuguang Cui', 'Xiaochun Cao'] | ['cs.CV'] | 3D Gaussian Splatting (3DGS) has gained significant attention for its
real-time, photo-realistic rendering in novel-view synthesis and 3D modeling.
However, existing methods struggle with accurately modeling scenes affected by
transient objects, leading to artifacts in the rendered images. We identify
that the Gaussian densification process, while enhancing scene detail capture,
unintentionally contributes to these artifacts by growing additional Gaussians
that model transient disturbances. To address this, we propose RobustSplat, a
robust solution based on two critical designs. First, we introduce a delayed
Gaussian growth strategy that prioritizes optimizing static scene structure
before allowing Gaussian splitting/cloning, mitigating overfitting to transient
objects in early optimization. Second, we design a scale-cascaded mask
bootstrapping approach that first leverages lower-resolution feature similarity
supervision for reliable initial transient mask estimation, taking advantage of
its stronger semantic consistency and robustness to noise, and then progresses
to high-resolution supervision to achieve more precise mask prediction.
Extensive experiments on multiple challenging datasets show that our method
outperforms existing methods, clearly demonstrating the robustness and
effectiveness of our method. Our project page is
https://fcyycf.github.io/RobustSplat/. | 2025-06-03T11:13:48Z | ICCV 2025. Project page: https://fcyycf.github.io/RobustSplat/ | null | null | RobustSplat: Decoupling Densification and Dynamics for Transient-Free 3DGS | ['Chuanyu Fu', 'Yuqi Zhang', 'Kunbin Yao', 'Guanying Chen', 'Yuan Xiong', 'Chuan Huang', 'Shuguang Cui', 'Xiaochun Cao'] | 2,025 | arXiv.org | 0 | 63 | ['Computer Science'] |
2,506.02845 | Go Beyond Earth: Understanding Human Actions and Scenes in Microgravity
Environments | ['Di Wen', 'Lei Qi', 'Kunyu Peng', 'Kailun Yang', 'Fei Teng', 'Ao Luo', 'Jia Fu', 'Yufan Chen', 'Ruiping Liu', 'Yitian Shi', 'M. Saquib Sarfraz', 'Rainer Stiefelhagen'] | ['cs.CV'] | Despite substantial progress in video understanding, most existing datasets
are limited to Earth's gravitational conditions. However, microgravity alters
human motion, interactions, and visual semantics, revealing a critical gap for
real-world vision systems. This presents a challenge for domain-robust video
understanding in safety-critical space applications. To address this, we
introduce MicroG-4M, the first benchmark for spatio-temporal and semantic
understanding of human activities in microgravity. Constructed from real-world
space missions and cinematic simulations, the dataset includes 4,759 clips
covering 50 actions, 1,238 context-rich captions, and over 7,000
question-answer pairs on astronaut activities and scene understanding.
MicroG-4M supports three core tasks: fine-grained multi-label action
recognition, temporal video captioning, and visual question answering, enabling
a comprehensive evaluation of both spatial localization and semantic reasoning
in microgravity contexts. We establish baselines using state-of-the-art models.
All data, annotations, and code are available at
https://github.com/LEI-QI-233/HAR-in-Space. | 2025-06-03T13:15:19Z | 15 pages, 3 figures, code is available at
https://github.com/LEI-QI-233/HAR-in-Space | null | null | null | null | null | null | null | null | null |
2,506.02863 | CapSpeech: Enabling Downstream Applications in Style-Captioned
Text-to-Speech | ['Helin Wang', 'Jiarui Hai', 'Dading Chong', 'Karan Thakkar', 'Tiantian Feng', 'Dongchao Yang', 'Junhyeok Lee', 'Laureano Moro Velazquez', 'Jesus Villalba', 'Zengyi Qin', 'Shrikanth Narayanan', 'Mounya Elhiali', 'Najim Dehak'] | ['eess.AS', 'cs.AI', 'cs.SD'] | Recent advancements in generative artificial intelligence have significantly
transformed the field of style-captioned text-to-speech synthesis (CapTTS).
However, adapting CapTTS to real-world applications remains challenging due to
the lack of standardized, comprehensive datasets and limited research on
downstream tasks built upon CapTTS. To address these gaps, we introduce
CapSpeech, a new benchmark designed for a series of CapTTS-related tasks,
including style-captioned text-to-speech synthesis with sound events
(CapTTS-SE), accent-captioned TTS (AccCapTTS), emotion-captioned TTS
(EmoCapTTS), and text-to-speech synthesis for chat agent (AgentTTS). CapSpeech
comprises over 10 million machine-annotated audio-caption pairs and nearly 0.36
million human-annotated audio-caption pairs. In addition, we introduce two new
datasets collected and recorded by a professional voice actor and experienced
audio engineers, specifically for the AgentTTS and CapTTS-SE tasks. Alongside
the datasets, we conduct comprehensive experiments using both autoregressive
and non-autoregressive models on CapSpeech. Our results demonstrate
high-fidelity and highly intelligible speech synthesis across a diverse range
of speaking styles. To the best of our knowledge, CapSpeech is the largest
available dataset offering comprehensive annotations for CapTTS-related tasks.
The experiments and findings further provide valuable insights into the
challenges of developing CapTTS systems. | 2025-06-03T13:28:55Z | null | null | null | null | null | null | null | null | null | null |
2,506.02865 | Surfer-H Meets Holo1: Cost-Efficient Web Agent Powered by Open Weights | ['Mathieu Andreux', 'Breno Baldas Skuk', 'Hamza Benchekroun', 'Emilien Biré', 'Antoine Bonnet', 'Riaz Bordie', 'Nathan Bout', 'Matthias Brunel', 'Pierre-Louis Cedoz', 'Antoine Chassang', 'Mickaël Chen', 'Alexandra D. Constantinou', "Antoine d'Andigné", 'Hubert de La Jonquière', 'Aurélien Delfosse', 'Ludovic Denoyer', 'Alexis Deprez', 'Augustin Derupti', 'Michael Eickenberg', 'Mathïs Federico', 'Charles Kantor', 'Xavier Koegler', 'Yann Labbé', 'Matthew C. H. Lee', 'Erwan Le Jumeau de Kergaradec', 'Amir Mahla', 'Avshalom Manevich', 'Adrien Maret', 'Charles Masson', 'Rafaël Maurin', 'Arturo Mena', 'Philippe Modard', 'Axel Moyal', 'Axel Nguyen Kerbel', 'Julien Revelle', 'Mats L. Richter', 'María Santos', 'Laurent Sifre', 'Maxime Theillard', 'Marc Thibault', 'Louis Thiry', 'Léo Tronchon', 'Nicolas Usunier', 'Tony Wu'] | ['cs.AI'] | We present Surfer-H, a cost-efficient web agent that integrates
Vision-Language Models (VLM) to perform user-defined tasks on the web. We pair
it with Holo1, a new open-weight collection of VLMs specialized in web
navigation and information extraction. Holo1 was trained on carefully curated
data sources, including open-access web content, synthetic examples, and
self-produced agentic data. Holo1 tops generalist User Interface (UI)
benchmarks as well as our new web UI localization benchmark, WebClick. When
powered by Holo1, Surfer-H achieves a 92.2% state-of-the-art performance on
WebVoyager, striking a Pareto-optimal balance between accuracy and
cost-efficiency. To accelerate research advancement in agentic systems, we are
open-sourcing both our WebClick evaluation dataset and the Holo1 model weights. | 2025-06-03T13:29:03Z | Alphabetical order | null | null | null | null | null | null | null | null | null |
2,506.02911 | Cell-o1: Training LLMs to Solve Single-Cell Reasoning Puzzles with
Reinforcement Learning | ['Yin Fang', 'Qiao Jin', 'Guangzhi Xiong', 'Bowen Jin', 'Xianrui Zhong', 'Siru Ouyang', 'Aidong Zhang', 'Jiawei Han', 'Zhiyong Lu'] | ['cs.CL', 'cs.AI', 'cs.CE', 'cs.HC', 'cs.LG'] | Cell type annotation is a key task in analyzing the heterogeneity of
single-cell RNA sequencing data. Although recent foundation models automate
this process, they typically annotate cells independently, without considering
batch-level cellular context or providing explanatory reasoning. In contrast,
human experts often annotate distinct cell types for different cell clusters
based on their domain knowledge. To mimic this workflow, we introduce the
CellPuzzles task, where the objective is to assign unique cell types to a batch
of cells. This benchmark spans diverse tissues, diseases, and donor conditions,
and requires reasoning across the batch-level cellular context to ensure label
uniqueness. We find that off-the-shelf large language models (LLMs) struggle on
CellPuzzles, with the best baseline (OpenAI's o1) achieving only 19.0%
batch-level accuracy. To fill this gap, we propose Cell-o1, a 7B LLM trained
via supervised fine-tuning on distilled reasoning traces, followed by
reinforcement learning with batch-level rewards. Cell-o1 achieves
state-of-the-art performance, outperforming o1 by over 73% and generalizing
well across contexts. Further analysis of training dynamics and reasoning
behaviors provides insights into batch-level annotation performance and
emergent expert-like reasoning. Code and data are available at
https://github.com/ncbi-nlp/cell-o1. | 2025-06-03T14:16:53Z | 28 pages; 16 tables; 7 figures; Code:
https://github.com/ncbi-nlp/cell-o1 | null | null | null | null | null | null | null | null | null |
2,506.02979 | Towards a Japanese Full-duplex Spoken Dialogue System | ['Atsumoto Ohashi', 'Shinya Iizuka', 'Jingjing Jiang', 'Ryuichiro Higashinaka'] | ['cs.CL', 'eess.AS'] | Full-duplex spoken dialogue systems, which can model simultaneous
bidirectional features of human conversations such as speech overlaps and
backchannels, have attracted significant attention recently. However, research
on full-duplex spoken dialogue systems for the Japanese language remains
scarce. In
this paper, we present the first publicly available full-duplex spoken dialogue
model in Japanese, which is built upon Moshi, a full-duplex dialogue model in
English. Our model is trained through a two-stage process: pre-training on
large-scale spoken dialogue data in Japanese, followed by fine-tuning on
high-quality stereo spoken dialogue data. We further enhance the model's
performance by incorporating synthetic dialogue data generated by a
multi-stream text-to-speech system. Evaluation experiments demonstrate that the
trained model outperforms Japanese baseline models in both naturalness and
meaningfulness. | 2025-06-03T15:16:50Z | Accepted to Interspeech 2025 | null | null | null | null | null | null | null | null | null |
2,506.03096 | FuseLIP: Multimodal Embeddings via Early Fusion of Discrete Tokens | ['Christian Schlarmann', 'Francesco Croce', 'Nicolas Flammarion', 'Matthias Hein'] | ['cs.CV', 'cs.LG'] | Contrastive language-image pre-training aligns the features of text-image
pairs in a common latent space via distinct encoders for each modality. While
this approach achieves impressive performance in several zero-shot tasks, it
cannot natively handle multimodal inputs, i.e., encoding image and text into a
single feature vector. As a remedy, it is common practice to use additional
modules to merge the features extracted by the unimodal encoders. In this work,
we present FuseLIP, an alternative architecture for multimodal embedding.
Leveraging recent progress in discrete image tokenizers, we propose to use a
single transformer model which operates on an extended vocabulary of text and
image tokens. This early fusion approach allows the different modalities to
interact at each depth of encoding and obtain richer representations compared
to common late fusion. We collect new datasets for multimodal pre-training and
evaluation, designing challenging tasks for multimodal encoder models. We show
that FuseLIP outperforms other approaches in multimodal embedding tasks such as
VQA and text-guided image transformation retrieval, while being comparable to
baselines on unimodal tasks. | 2025-06-03T17:27:12Z | Code and models available at https://github.com/chs20/fuselip | null | null | null | null | null | null | null | null | null |
2,506.03107 | ByteMorph: Benchmarking Instruction-Guided Image Editing with Non-Rigid
Motions | ['Di Chang', 'Mingdeng Cao', 'Yichun Shi', 'Bo Liu', 'Shengqu Cai', 'Shijie Zhou', 'Weilin Huang', 'Gordon Wetzstein', 'Mohammad Soleymani', 'Peng Wang'] | ['cs.CV'] | Editing images with instructions to reflect non-rigid motions, camera
viewpoint shifts, object deformations, human articulations, and complex
interactions poses a challenging yet underexplored problem in computer vision.
Existing approaches and datasets predominantly focus on static scenes or rigid
transformations, limiting their capacity to handle expressive edits involving
dynamic motion. To address this gap, we introduce ByteMorph, a comprehensive
framework for instruction-based image editing with an emphasis on non-rigid
motions. ByteMorph comprises a large-scale dataset, ByteMorph-6M, and a strong
baseline model built upon the Diffusion Transformer (DiT), named ByteMorpher.
ByteMorph-6M includes over 6 million high-resolution image editing pairs for
training, along with a carefully curated evaluation benchmark ByteMorph-Bench.
Both capture a wide variety of non-rigid motion types across diverse
environments, human figures, and object categories. The dataset is constructed
using motion-guided data generation, layered compositing techniques, and
automated captioning to ensure diversity, realism, and semantic coherence. We
further conduct a comprehensive evaluation of recent instruction-based image
editing methods from both academic and commercial domains. | 2025-06-03T17:39:47Z | Website: https://boese0601.github.io/bytemorph Dataset:
https://huggingface.co/datasets/ByteDance-Seed/BM-6M Benchmark:
https://huggingface.co/datasets/ByteDance-Seed/BM-Bench Code:
https://github.com/ByteDance-Seed/BM-code Demo:
https://huggingface.co/spaces/Boese0601/ByteMorph-Demo | null | null | ByteMorph: Benchmarking Instruction-Guided Image Editing with Non-Rigid Motions | ['Di Chang', 'Mingdeng Cao', 'Yichun Shi', 'Bo Liu', 'Shengqu Cai', 'Shijie Zhou', 'Weilin Huang', 'Gordon Wetzstein', 'Mohammad Soleymani', 'Peng Wang'] | 2,025 | arXiv.org | 0 | 65 | ['Computer Science'] |
2,506.03123 | DCM: Dual-Expert Consistency Model for Efficient and High-Quality Video
Generation | ['Zhengyao Lv', 'Chenyang Si', 'Tianlin Pan', 'Zhaoxi Chen', 'Kwan-Yee K. Wong', 'Yu Qiao', 'Ziwei Liu'] | ['cs.CV'] | Diffusion Models have achieved remarkable results in video synthesis but
require iterative denoising steps, leading to substantial computational
overhead. Consistency Models have made significant progress in accelerating
diffusion models. However, directly applying them to video diffusion models
often results in severe degradation of temporal consistency and appearance
details. In this paper, by analyzing the training dynamics of Consistency
Models, we identify a key conflicting learning dynamics during the distillation
process: there is a significant discrepancy in the optimization gradients and
loss contributions across different timesteps. This discrepancy prevents the
distilled student model from achieving an optimal state, leading to compromised
temporal consistency and degraded appearance details. To address this issue, we
propose a parameter-efficient Dual-Expert Consistency Model (DCM),
where a semantic expert focuses on learning semantic layout and motion, while a
detail expert specializes in fine detail refinement. Furthermore, we introduce
Temporal Coherence Loss to improve motion consistency for the semantic expert
and apply GAN and Feature Matching Loss to enhance the synthesis quality of the
detail expert. Our approach achieves state-of-the-art visual quality with
significantly reduced sampling steps, demonstrating the effectiveness of expert
specialization in video diffusion model distillation. Our code and models are
available at
https://github.com/Vchitect/DCM. | 2025-06-03T17:55:04Z | null | null | null | null | null | null | null | null | null | null
2,506.03126 | AnimeShooter: A Multi-Shot Animation Dataset for Reference-Guided Video
Generation | ['Lu Qiu', 'Yizhuo Li', 'Yuying Ge', 'Yixiao Ge', 'Ying Shan', 'Xihui Liu'] | ['cs.CV'] | Recent advances in AI-generated content (AIGC) have significantly accelerated
animation production. To produce engaging animations, it is essential to
generate coherent multi-shot video clips with narrative scripts and character
references. However, existing public datasets primarily focus on real-world
scenarios with global descriptions, and lack reference images for consistent
character guidance. To bridge this gap, we present AnimeShooter, a
reference-guided multi-shot animation dataset. AnimeShooter features
comprehensive hierarchical annotations and strong visual consistency across
shots through an automated pipeline. Story-level annotations provide an
overview of the narrative, including the storyline, key scenes, and main
character profiles with reference images, while shot-level annotations
decompose the story into consecutive shots, each annotated with scene,
characters, and both narrative and descriptive visual captions. Additionally, a
dedicated subset, AnimeShooter-audio, offers synchronized audio tracks for each
shot, along with audio descriptions and sound sources. To demonstrate the
effectiveness of AnimeShooter and establish a baseline for the reference-guided
multi-shot video generation task, we introduce AnimeShooterGen, which leverages
Multimodal Large Language Models (MLLMs) and video diffusion models. The
reference image and previously generated shots are first processed by MLLM to
produce representations aware of both reference and context, which are then
used as the condition for the diffusion model to decode the subsequent shot.
Experimental results show that the model trained on AnimeShooter achieves
superior cross-shot visual consistency and adherence to reference visual
guidance, which highlight the value of our dataset for coherent animated video
generation. | 2025-06-03T17:55:18Z | Project released at: https://qiulu66.github.io/animeshooter/ | null | null | null | null | null | null | null | null | null |
2,506.03131 | Native-Resolution Image Synthesis | ['Zidong Wang', 'Lei Bai', 'Xiangyu Yue', 'Wanli Ouyang', 'Yiyuan Zhang'] | ['cs.CV', 'cs.LG'] | We introduce native-resolution image synthesis, a novel generative modeling
paradigm that enables the synthesis of images at arbitrary resolutions and
aspect ratios. This approach overcomes the limitations of conventional
fixed-resolution, square-image methods by natively handling variable-length
visual tokens, a core challenge for traditional techniques. To this end, we
introduce the Native-resolution diffusion Transformer (NiT), an architecture
designed to explicitly model varying resolutions and aspect ratios within its
denoising process. Free from the constraints of fixed formats, NiT learns
intrinsic visual distributions from images spanning a broad range of
resolutions and aspect ratios. Notably, a single NiT model simultaneously
achieves the state-of-the-art performance on both ImageNet-256x256 and 512x512
benchmarks. Surprisingly, akin to the robust zero-shot capabilities seen in
advanced large language models, NiT, trained solely on ImageNet, demonstrates
excellent zero-shot generalization performance. It successfully generates
high-fidelity images at previously unseen high resolutions (e.g., 1536 x 1536)
and diverse aspect ratios (e.g., 16:9, 3:1, 4:3), as shown in Figure 1. These
findings indicate the significant potential of native-resolution modeling as a
bridge between visual generative modeling and advanced LLM methodologies. | 2025-06-03T17:57:33Z | Project Page: https://wzdthu.github.io/NiT/ | null | null | Native-Resolution Image Synthesis | ['Zidong Wang', 'Lei Bai', 'Xiangyu Yue', 'Wanli Ouyang', 'Yiyuan Zhang'] | 2,025 | arXiv.org | 0 | 84 | ['Computer Science'] |
2,506.03135 | OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for
Vision Language Models | ['Mengdi Jia', 'Zekun Qi', 'Shaochen Zhang', 'Wenyao Zhang', 'Xinqiang Yu', 'Jiawei He', 'He Wang', 'Li Yi'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Spatial reasoning is a key aspect of cognitive psychology and remains a major
bottleneck for current vision-language models (VLMs). While extensive research
has aimed to evaluate or improve VLMs' understanding of basic spatial
relations, such as distinguishing left from right, near from far, and object
counting, these tasks represent only the most fundamental level of spatial
reasoning. In this work, we introduce OmniSpatial, a comprehensive and
challenging benchmark for spatial reasoning, grounded in cognitive psychology.
OmniSpatial covers four major categories: dynamic reasoning, complex spatial
logic, spatial interaction, and perspective-taking, with 50 fine-grained
subcategories. Through Internet data crawling and careful manual annotation, we
construct over 1.5K question-answer pairs. Extensive experiments show that both
open- and closed-source VLMs, as well as existing reasoning and spatial
understanding models, exhibit significant limitations in comprehensive spatial
understanding. We further analyze failure cases and propose potential
directions for future research. | 2025-06-03T17:58:29Z | Project Page: https://qizekun.github.io/omnispatial/ | null | null | OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models | ['Mengdi Jia', 'Zekun Qi', 'Shaochen Zhang', 'Wenyao Zhang', 'Xinqiang Yu', 'Jiawei He', 'He Wang', 'Li Yi'] | 2,025 | arXiv.org | 0 | 122 | ['Computer Science'] |
2,506.03136 | Co-Evolving LLM Coder and Unit Tester via Reinforcement Learning | ['Yinjie Wang', 'Ling Yang', 'Ye Tian', 'Ke Shen', 'Mengdi Wang'] | ['cs.CL'] | We propose CURE, a novel reinforcement learning framework with a dedicated
reward design that co-evolves coding and unit test generation capabilities
based on their interaction outcomes, without any ground-truth code as
supervision. This approach enables flexible and scalable training and allows
the unit tester to learn directly from the coder's mistakes. Our derived
ReasonFlux-Coder-7B and 14B models improve code generation accuracy by 5.3% and
Best-of-N accuracy by 9.0% after optimization on Qwen2.5-Instruct models,
outperforming similarly sized Qwen-Coder, DeepSeek-Coder, and Seed-Coder. They
naturally extend to downstream tasks such as test-time scaling and agentic
coding, achieving an 8.1% improvement over the base model. For the long-CoT
model, our ReasonFlux-Coder-4B consistently outperforms Qwen3-4B while
achieving 64.8% inference efficiency in unit test generation. Notably, we also
find that our model can serve as an effective reward model for reinforcement
learning on base models. Project: https://github.com/Gen-Verse/CURE | 2025-06-03T17:58:42Z | Project: https://github.com/Gen-Verse/CURE | null | null | null | null | null | null | null | null | null |
2,506.03143 | GUI-Actor: Coordinate-Free Visual Grounding for GUI Agents | ['Qianhui Wu', 'Kanzhi Cheng', 'Rui Yang', 'Chaoyun Zhang', 'Jianwei Yang', 'Huiqiang Jiang', 'Jian Mu', 'Baolin Peng', 'Bo Qiao', 'Reuben Tan', 'Si Qin', 'Lars Liden', 'Qingwei Lin', 'Huan Zhang', 'Tong Zhang', 'Jianbing Zhang', 'Dongmei Zhang', 'Jianfeng Gao'] | ['cs.CL', 'cs.AI', 'cs.CV'] | One of the principal challenges in building VLM-powered GUI agents is visual
grounding, i.e., localizing the appropriate screen region for action execution
based on both the visual content and the textual plans. Most existing work
formulates this as a text-based coordinate generation task. However, these
approaches suffer from several limitations: weak spatial-semantic alignment,
inability to handle ambiguous supervision targets, and a mismatch between the
dense nature of screen coordinates and the coarse, patch-level granularity of
visual features extracted by models like Vision Transformers. In this paper, we
propose GUI-Actor, a VLM-based method for coordinate-free GUI grounding. At its
core, GUI-Actor introduces an attention-based action head that learns to align
a dedicated <ACTOR> token with all relevant visual patch tokens, enabling the
model to propose one or more action regions in a single forward pass. In line
with this, we further design a grounding verifier to evaluate and select the
most plausible action region from the candidates proposed for action execution.
Extensive experiments show that GUI-Actor outperforms prior state-of-the-art
methods on multiple GUI action grounding benchmarks, with improved
generalization to unseen screen resolutions and layouts. Notably, GUI-Actor-7B
even surpasses UI-TARS-72B (38.1) on ScreenSpot-Pro, achieving scores of 40.7
with Qwen2-VL and 44.6 with Qwen2.5-VL as backbones. Furthermore, by
incorporating the verifier, we find that fine-tuning only the newly introduced
action head (~100M parameters for 7B model) while keeping the VLM backbone
frozen is sufficient to achieve performance comparable to previous
state-of-the-art models, highlighting that GUI-Actor can endow the underlying
VLM with effective grounding capabilities without compromising its
general-purpose strengths. | 2025-06-03T17:59:08Z | null | null | null | null | null | null | null | null | null | null |
2,506.03147 | UniWorld-V1: High-Resolution Semantic Encoders for Unified Visual
Understanding and Generation | ['Bin Lin', 'Zongjian Li', 'Xinhua Cheng', 'Yuwei Niu', 'Yang Ye', 'Xianyi He', 'Shenghai Yuan', 'Wangbo Yu', 'Shaodong Wang', 'Yunyang Ge', 'Yatian Pang', 'Li Yuan'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Although existing unified models achieve strong performance in
vision-language understanding and text-to-image generation, they remain limited
in addressing image perception and manipulation -- capabilities increasingly
demanded in practical applications. Recently, OpenAI introduced the powerful
GPT-4o-Image model, which showcases advanced capabilities in comprehensive
image perception and manipulation, sparking widespread interest. Through
carefully designed experiments, we observe that GPT-4o-Image likely relies on
semantic encoders rather than VAEs for feature extraction, despite VAEs being
commonly regarded as crucial for image manipulation tasks. Inspired by this
insight, we propose UniWorld-V1, a unified generative framework built upon
semantic features extracted from powerful multimodal large language models and
contrastive semantic encoders. Using only 2.7M training data, UniWorld-V1
achieves impressive performance across diverse tasks, including image
understanding, generation, manipulation, and perception. We fully open-source
the UniWorld-V1 framework, including model weights, training and evaluation
scripts, and datasets to promote reproducibility and further research. | 2025-06-03T17:59:33Z | null | null | null | null | null | null | null | null | null | null |
2,506.03238 | Rethinking Whole-Body CT Image Interpretation: An Abnormality-Centric
Approach | ['Ziheng Zhao', 'Lisong Dai', 'Ya Zhang', 'Yanfeng Wang', 'Weidi Xie'] | ['eess.IV', 'cs.AI', 'cs.CV'] | Automated interpretation of CT images-particularly localizing and describing
abnormal findings across multi-plane and whole-body scans-remains a significant
challenge in clinical radiology. This work aims to address this challenge
through four key contributions: (i) On taxonomy, we collaborate with senior
radiologists to propose a comprehensive hierarchical classification system,
with 404 representative abnormal findings across all body regions; (ii) On
data, we contribute a dataset containing over 14.5K CT images from multiple
planes and all human body regions, and meticulously provide grounding
annotations for over 19K abnormalities, each linked to the detailed description
and cast into the taxonomy; (iii) On model development, we propose
OminiAbnorm-CT, which can automatically ground and describe abnormal findings
on multi-plane and whole-body CT images based on text queries, while also
allowing flexible interaction through visual prompts; (iv) On benchmarks, we
establish three representative evaluation tasks based on real clinical
scenarios. Through extensive experiments, we show that OminiAbnorm-CT can
significantly outperform existing methods on all the tasks and metrics. | 2025-06-03T17:57:34Z | null | null | null | null | null | null | null | null | null | null |
2,506.03295 | Unleashing the Reasoning Potential of Pre-trained LLMs by Critique
Fine-Tuning on One Problem | ['Yubo Wang', 'Ping Nie', 'Kai Zou', 'Lijun Wu', 'Wenhu Chen'] | ['cs.CL', 'cs.LG'] | We have witnessed that strong LLMs like Qwen-Math, MiMo, and Phi-4 possess
immense reasoning potential inherited from the pre-training stage. With
reinforcement learning (RL), these models can improve dramatically on reasoning
tasks. Recent studies have shown that even RL on a single problem can unleash
these models' reasoning capabilities. However, RL is not only expensive but
also unstable. Even one-shot RL requires hundreds of GPU hours. This raises a
critical question: Is there a more efficient way to unleash the reasoning
potential of these powerful base LLMs? In this work, we demonstrate that
Critique Fine-Tuning (CFT) on only one problem can effectively unleash the
reasoning potential of LLMs. Our method constructs critique data by collecting
diverse model-generated solutions to a single problem and using teacher LLMs to
provide detailed critiques. We fine-tune Qwen and Llama family models, ranging
from 1.5B to 14B parameters, on the CFT data and observe significant
performance gains across diverse reasoning tasks. For example, with just 5 GPU
hours of training, Qwen-Math-7B-CFT shows an average improvement of 15% on six
math benchmarks and 16% on three logic reasoning benchmarks. These results are
comparable to or even surpass the results from RL with 20x less compute.
Ablation studies reveal the robustness of one-shot CFT across different prompt
problems. These results highlight one-shot CFT as a simple, general, and
compute-efficient approach to unleashing the reasoning capabilities of modern
LLMs. | 2025-06-03T18:35:52Z | null | null | Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem | ['Yubo Wang', 'Ping Nie', 'Kai Zou', 'Lijun Wu', 'Wenhu Chen'] | 2025 | arXiv.org | 0 | 21 | ['Computer Science']
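The abstract above describes building critique data from one problem: collect diverse model-generated solutions, have a teacher LLM critique each, and fine-tune on the critiques. A minimal sketch of that data construction, assuming a hypothetical prompt template and helper name (`build_cft_examples` is illustrative, not the authors' code):

```python
# Sketch of one-shot Critique Fine-Tuning (CFT) data construction: one problem,
# many candidate solutions, one teacher critique per solution. The fine-tuning
# target is the critique, not the solution. Prompt wording is an assumption.

def build_cft_examples(problem, solutions, critiques):
    """Pair each model-generated solution with its teacher critique."""
    assert len(solutions) == len(critiques)
    examples = []
    for sol, crit in zip(solutions, critiques):
        prompt = (
            f"Problem:\n{problem}\n\n"
            f"Candidate solution:\n{sol}\n\n"
            "Critique this solution step by step."
        )
        examples.append({"input": prompt, "target": crit})
    return examples

examples = build_cft_examples(
    "Compute 3 + 4 * 2.",
    ["3 + 4 * 2 = 14", "3 + 4 * 2 = 11"],
    ["Incorrect: multiplication binds tighter, so the sum is 3 + 8 = 11.",
     "Correct: 4 * 2 = 8, then 3 + 8 = 11."],
)
```

Diversity comes from sampling many solutions (including wrong ones) to the same problem, so the critiques cover varied failure modes.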
2506.03355 | Robustness in Both Domains: CLIP Needs a Robust Text Encoder | ['Elias Abad Rocamora', 'Christian Schlarmann', 'Naman Deep Singh', 'Yongtao Wu', 'Matthias Hein', 'Volkan Cevher'] | ['cs.LG', 'cs.AI', 'cs.CV'] | Adversarial input attacks can cause a significant shift of CLIP embeddings.
This can affect the downstream robustness of models incorporating CLIP in the
pipeline, such as text-to-image generative models or large vision language
models. While some efforts have been made to make CLIP image
encoders robust, the robustness of text encoders remains unexplored. In this
work, we cover this gap in the literature. We propose LEAF: an efficient
adversarial finetuning method for the text domain, with the ability to scale to
large CLIP models. Our models significantly improve the zero-shot adversarial
accuracy in the text domain, while maintaining the vision performance provided
by robust image encoders. When combined with text-to-image diffusion models, we
can improve the generation quality under adversarial noise. When employing our
robust CLIP encoders in multimodal retrieval tasks, we improve the recall under
adversarial noise over standard CLIP models. Finally, we show that robust text
encoders facilitate better reconstruction of input text from its embedding via
direct optimization. | 2025-06-03T19:57:09Z | null | null | null | null | null | null | null | null | null | null |
2506.03487 | ProRank: Prompt Warmup via Reinforcement Learning for Small Language
Models Reranking | ['Xianming Li', 'Aamir Shakir', 'Rui Huang', 'Julius Lipp', 'Jing Li'] | ['cs.IR', 'cs.CL'] | Reranking is fundamental to information retrieval and retrieval-augmented
generation. While recent advances with Large Language Models (LLMs) have
significantly improved document reranking quality, current approaches primarily rely on large-scale
LLMs (>7B parameters) through zero-shot prompting, presenting high
computational costs. Small Language Models (SLMs) offer a promising alternative
because of their efficiency, but our preliminary quantitative analysis reveals
they struggle with understanding task prompts without fine-tuning. This limits
their effectiveness for document reranking tasks. To address this issue, we
introduce a novel two-stage training approach, ProRank, for SLM-based document
reranking. First, we propose a prompt warmup stage using the GRPO reinforcement
learning algorithm to steer SLMs to understand task prompts and generate more accurate
coarse-grained binary relevance scores for document reranking. Then, we
continuously fine-tune the SLMs with a fine-grained score learning stage
without introducing additional layers to further improve the reranking quality.
Comprehensive experimental results demonstrate that the proposed ProRank
consistently outperforms both the most advanced open-source and proprietary
reranking models. Notably, our lightweight ProRank-0.5B model even surpasses
the powerful 32B LLM reranking model on the BEIR benchmark, establishing that
properly trained SLMs can achieve superior document reranking performance while
maintaining computational efficiency. | 2025-06-04T02:00:44Z | null | null | null | null | null | null | null | null | null | null |
2506.03524 | Seed-Coder: Let the Code Model Curate Data for Itself | ['ByteDance Seed', 'Yuyu Zhang', 'Jing Su', 'Yifan Sun', 'Chenguang Xi', 'Xia Xiao', 'Shen Zheng', 'Anxiang Zhang', 'Kaibo Liu', 'Daoguang Zan', 'Tao Sun', 'Jinhua Zhu', 'Shulin Xin', 'Dong Huang', 'Yetao Bai', 'Lixin Dong', 'Chao Li', 'Jianchong Chen', 'Hanzhi Zhou', 'Yifan Huang', 'Guanghan Ning', 'Xierui Song', 'Jiaze Chen', 'Siyao Liu', 'Kai Shen', 'Liang Xiang', 'Yonghui Wu'] | ['cs.CL', 'cs.SE'] | Code data in large language model (LLM) pretraining is recognized as crucial not
only for code-related tasks but also for enhancing general intelligence of
LLMs. Current open-source LLMs often heavily rely on human effort to produce
their code pretraining data, such as employing hand-crafted filtering rules
tailored to individual programming languages, or using human-annotated data to
train quality filters. However, these approaches are inherently limited in
scalability, prone to subjective biases, and costly to extend and maintain
across diverse programming languages. To address these challenges, we introduce
Seed-Coder, a series of open-source LLMs comprising base, instruct and
reasoning models of 8B size, minimizing human involvement in data construction.
Our code pretraining data is produced by a model-centric data pipeline, which
predominantly leverages LLMs for scoring and filtering code data. The instruct
model is further trained via supervised fine-tuning and preference
optimization, and the reasoning model leverages Long-Chain-of-Thought (LongCoT)
reinforcement learning to improve multi-step code reasoning. Seed-Coder
achieves state-of-the-art results among open-source models of similar size and
even surpasses some much larger models, demonstrating superior performance in
code generation, code completion, code editing, code reasoning, and software
engineering tasks. | 2025-06-04T03:17:19Z | null | null | Seed-Coder: Let the Code Model Curate Data for Itself | ['ByteDance Seed', 'Yuyu Zhang', 'Jing Su', 'Yifan Sun', 'Chenguang Xi', 'Xia Xiao', 'Shen Zheng', 'Anxiang Zhang', 'Kaibo Liu', 'Daoguang Zan', 'Tao Sun', 'Jinhua Zhu', 'Shulin Xin', 'Dong Huang', 'Yetao Bai', 'Lixin Dong', 'Chao Li', 'Jianchong Chen', 'Hanzhi Zhou', 'Yifan Huang', 'Guanghan Ning', 'Xierui Song', 'Jiaze Chen', 'Siyao Liu', 'Kai Shen', 'Liang Xiang', 'Yonghui Wu'] | 2025 | arXiv.org | 2 | 57 | ['Computer Science']
2506.03533 | Go-Browse: Training Web Agents with Structured Exploration | ['Apurva Gandhi', 'Graham Neubig'] | ['cs.CL'] | One of the fundamental problems in digital agents is their lack of
understanding of their environment. For instance, a web browsing agent may get
lost in unfamiliar websites, uncertain what pages must be visited to achieve
its goals. To address this, we propose Go-Browse, a method for automatically
collecting diverse and realistic web agent data at scale through structured
exploration of web environments. Go-Browse achieves efficient exploration by
framing data collection as a graph search, enabling reuse of information across
exploration episodes. We instantiate our method on the WebArena benchmark,
collecting a dataset of 10K successful task-solving trajectories and 40K
interaction steps across 100 URLs. Fine-tuning a 7B parameter language model on
this dataset achieves a success rate of 21.7% on the WebArena benchmark,
beating GPT-4o mini by 2.4% and exceeding current state-of-the-art results for
sub-10B parameter models by 2.9%. | 2025-06-04T03:27:56Z | null | null | null | null | null | null | null | null | null | null |
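Go-Browse's abstract frames data collection as a graph search over web pages, reusing information across exploration episodes. A minimal sketch of that framing, with a hypothetical `propose_tasks` hook standing in for the real browser environment:

```python
from collections import deque

# Sketch of structured exploration as breadth-first graph search, per the
# Go-Browse abstract. The page graph and `propose_tasks` callable are
# hypothetical stand-ins for a live website and task-proposal model.

def explore(start_url, neighbors, propose_tasks, max_pages=100):
    """BFS over pages; visiting each page once lets later episodes
    reuse what earlier episodes discovered instead of re-exploring."""
    visited, frontier, dataset = set(), deque([start_url]), []
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        dataset.extend(propose_tasks(url))  # tasks grounded in this page
        frontier.extend(neighbors(url))     # enqueue outgoing links
    return dataset

graph = {"/": ["/a", "/b"], "/a": ["/b"], "/b": []}
data = explore("/", graph.__getitem__, lambda u: [f"task@{u}"])
```

The visited set is what makes exploration efficient: each page is processed once, no matter how many episodes pass through it.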
2506.03569 | MiMo-VL Technical Report | ['Xiaomi LLM-Core Team', ':', 'Zihao Yue', 'Zhenru Lin', 'Yifan Song', 'Weikun Wang', 'Shuhuai Ren', 'Shuhao Gu', 'Shicheng Li', 'Peidian Li', 'Liang Zhao', 'Lei Li', 'Kainan Bao', 'Hao Tian', 'Hailin Zhang', 'Gang Wang', 'Dawei Zhu', 'Cici', 'Chenhong He', 'Bowen Ye', 'Bowen Shen', 'Zihan Zhang', 'Zihan Jiang', 'Zhixian Zheng', 'Zhichao Song', 'Zhenbo Luo', 'Yue Yu', 'Yudong Wang', 'Yuanyuan Tian', 'Yu Tu', 'Yihan Yan', 'Yi Huang', 'Xu Wang', 'Xinzhe Xu', 'Xingchen Song', 'Xing Zhang', 'Xing Yong', 'Xin Zhang', 'Xiangwei Deng', 'Wenyu Yang', 'Wenhan Ma', 'Weiwei Lv', 'Weiji Zhuang', 'Wei Liu', 'Sirui Deng', 'Shuo Liu', 'Shimao Chen', 'Shihua Yu', 'Shaohui Liu', 'Shande Wang', 'Rui Ma', 'Qiantong Wang', 'Peng Wang', 'Nuo Chen', 'Menghang Zhu', 'Kangyang Zhou', 'Kang Zhou', 'Kai Fang', 'Jun Shi', 'Jinhao Dong', 'Jiebao Xiao', 'Jiaming Xu', 'Huaqiu Liu', 'Hongshen Xu', 'Heng Qu', 'Haochen Zhao', 'Hanglong Lv', 'Guoan Wang', 'Duo Zhang', 'Dong Zhang', 'Di Zhang', 'Chong Ma', 'Chang Liu', 'Can Cai', 'Bingquan Xia'] | ['cs.CL'] | We open-source MiMo-VL-7B-SFT and MiMo-VL-7B-RL, two powerful vision-language
models delivering state-of-the-art performance in both general visual
understanding and multimodal reasoning. MiMo-VL-7B-RL outperforms Qwen2.5-VL-7B
on 35 out of 40 evaluated tasks, and scores 59.4 on OlympiadBench, surpassing
models with up to 78B parameters. For GUI grounding applications, it sets a new
standard with 56.1 on OSWorld-G, even outperforming specialized models such as
UI-TARS. Our training combines four-stage pre-training (2.4 trillion tokens)
with Mixed On-policy Reinforcement Learning (MORL) integrating diverse reward
signals. We identify the importance of incorporating high-quality reasoning
data with long Chain-of-Thought into pre-training stages, and the benefits of
mixed RL despite challenges in simultaneous multi-domain optimization. We also
contribute a comprehensive evaluation suite covering 50+ tasks to promote
reproducibility and advance the field. The model checkpoints and full
evaluation suite are available at https://github.com/XiaomiMiMo/MiMo-VL. | 2025-06-04T04:32:54Z | 32 pages | null | null | MiMo-VL Technical Report | ['Xiaomi LLM-Core Team Zihao Yue', 'Zhenrui Lin', 'Yi-Hao Song', 'Weikun Wang', 'Shu-Qin Ren', 'Shuhao Gu', 'Shi-Guang Li', 'Peidian Li', 'Liang Zhao', 'Lei Li', 'Kainan Bao', 'Hao Tian', 'Hailin Zhang', 'Gang Wang', 'Dawei Zhu', 'Cici', 'Chenhong He', 'Bowen Ye', 'Bowen Shen', 'Zihan Zhang', 'Zi-Ang Jiang', 'Zhixian Zheng', 'Zhichao Song', 'Zhen Luo', 'Yue Yu', 'Yudong Wang', 'Yu Tian', 'Yu Tu', 'Yihan Yan', 'Yi Huang', 'Xu Wang', 'Xin-dan Xu', 'X. Song', 'Xing Zhang', 'Xing Yong', 'Xin Zhang', 'Xia Deng', 'Wenyu Yang', 'Wenhan Ma', 'Weiwei Lv', 'Weiji Zhuang', 'Wei Liu', 'Sirui Deng', 'Shuo Liu', 'Shimao Chen', 'Shi-liang Yu', 'Shao-yang Liu', 'Shan-yong Wang', 'Rui Ma', 'Qiantong Wang', 'Peng Wang', 'Nuo Chen', 'Menghang Zhu', 'Kang Zhou', 'Kang Zhou', 'Kai Fang', 'Jun-Miao Shi', 'Jinhao Dong', 'Jiebao Xiao', 'Jiaming Xu', 'Huaqiu Liu', 'Hongsheng Xu', 'Hengxu Qu', 'Hao-Song Zhao', 'Hanglong Lv', 'Guoan Wang', 'Duo Zhang', 'Dong Zhang', 'Di Zhang', 'Chong-Yi Ma', 'Chang Liu', 'Can Cai', 'Bing Xia'] | 2,025 | arXiv.org | 0 | 74 | ['Computer Science'] |
2506.03637 | RewardAnything: Generalizable Principle-Following Reward Models | ['Zhuohao Yu', 'Jiali Zeng', 'Weizheng Gu', 'Yidong Wang', 'Jindong Wang', 'Fandong Meng', 'Jie Zhou', 'Yue Zhang', 'Shikun Zhang', 'Wei Ye'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Reward Models, essential for guiding Large Language Model optimization, are
typically trained on fixed preference datasets, resulting in rigid alignment to
single, implicit preference distributions. This prevents adaptation to diverse
real-world needs-from conciseness in one task to detailed explanations in
another. The standard practice of collecting task-specific preference data and
retraining reward models is resource-intensive, often producing biased rewards,
and limits practical application. We introduce generalizable,
principle-following reward models. We propose that RMs should understand and
adhere to dynamically provided natural language specifications of reward
principles, similar to instruction-following in LLMs. To measure this
capability, we develop RABench, a comprehensive benchmark for RMs focusing on
generalization across diverse principles. Evaluations on RABench reveal poor
generalization of current RMs. As a solution, we present RewardAnything, a
novel RM designed and trained to explicitly follow natural language principles.
We achieve SotA performance with RewardAnything in traditional RM benchmarks
simply by specifying a well-defined principle, and results on RABench show we
excel in adapting to novel principles without retraining. Furthermore,
RewardAnything integrates seamlessly with existing RLHF methods, and we show
through a case study how to automatically and efficiently align LLMs with only
natural language principles. | 2025-06-04T07:30:16Z | 25 pages, 9 figures, Code & model weights available at:
https://zhuohaoyu.github.io/RewardAnything | null | null | RewardAnything: Generalizable Principle-Following Reward Models | ['Zhuohao Yu', 'Jiali Zeng', 'Weizheng Gu', 'Yidong Wang', 'Jindong Wang', 'Fandong Meng', 'Jie Zhou', 'Yue Zhang', 'Shikun Zhang', 'Wei Ye'] | 2025 | arXiv.org | 1 | 97 | ['Computer Science']
2506.03690 | Robust Preference Optimization via Dynamic Target Margins | ['Jie Sun', 'Junkang Wu', 'Jiancan Wu', 'Zhibo Zhu', 'Xingyu Lu', 'Jun Zhou', 'Lintao Ma', 'Xiang Wang'] | ['cs.CL'] | The alignment of Large Language Models (LLMs) is crucial for ensuring their
safety and reliability in practical applications. Direct Preference
Optimization (DPO) has emerged as an efficient method that directly optimizes
models using preference pairs, significantly reducing resource demands.
However, the effectiveness of DPO heavily depends on the data quality, which is
frequently compromised by noise. In this work, we propose $\gamma$-PO, a
dynamic target margin preference optimization algorithm that adjusts reward
margins at the pairwise level. By introducing instance-specific margin
calibration, $\gamma$-PO strategically prioritizes high-confidence pairs (those
demonstrating higher reward margins) while suppressing potential noise from
ambiguous pairs. Moreover, $\gamma$-PO is a plug-and-play method, compatible
with variants of DPO that rely on reward margin between preference pairs.
Across benchmarks such as AlpacaEval2 and Arena-Hard, $\gamma$-PO achieves an
average 4.4\% improvement over other baselines, setting new benchmarks for
state-of-the-art performance. Additionally, $\gamma$-PO requires minimal code
changes and has a negligible impact on training efficiency, making it a robust
solution for enhancing LLMs alignment. Our codes are available at
\href{https://github.com/sunjie279/gammaPO}{https://github.com/sunjie279/gammaPO}. | 2025-06-04T08:19:37Z | 18 pages, 6 figures, accepted to The 63rd Annual Meeting of the
Association for Computational Linguistics (ACL2025) | null | null | Robust Preference Optimization via Dynamic Target Margins | ['Jie Sun', 'Junkang Wu', 'Jiancan Wu', 'Zhibo Zhu', 'Xingyu Lu', 'Jun Zhou', 'Lintao Ma', 'Xiang Wang'] | 2025 | arXiv.org | 0 | 51 | ['Computer Science']
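The $\gamma$-PO abstract builds on DPO-style losses that demand a target margin between chosen and rejected responses. A sketch of that general loss shape follows; the paper's actual per-pair margin calibration is not reproduced here, and `gamma_i` below is a hypothetical stand-in for an instance-specific target margin:

```python
import math

# Sketch of a DPO-style objective with a per-pair target margin, the general
# form that a dynamic-margin method like gamma-PO operates on. The per-pair
# calibration of gamma_i is the paper's contribution and is NOT shown here.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dpo_margin_loss(reward_margin, gamma_i, beta=0.1):
    """-log sigmoid(beta * reward_margin - gamma_i).

    reward_margin: implicit reward of the chosen response minus that of the
    rejected one. A larger gamma_i demands a wider separation; choosing
    gamma_i per pair lets high-confidence pairs dominate the gradient while
    down-weighting ambiguous, possibly noisy pairs.
    """
    return -math.log(sigmoid(beta * reward_margin - gamma_i))

# A confident pair (large reward margin) incurs lower loss at the same gamma:
loss_confident = dpo_margin_loss(reward_margin=20.0, gamma_i=0.5)
loss_ambiguous = dpo_margin_loss(reward_margin=1.0, gamma_i=0.5)
```

With a fixed gamma this reduces to the familiar static-margin variant; the plug-and-play claim in the abstract follows from gamma entering the loss only through this scalar offset.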
2506.03793 | Mark My Words: A Robust Multilingual Model for Punctuation in Text and
Speech Transcripts | ['Sidharth Pulipaka', 'Sparsh Jain', 'Ashwin Sankar', 'Raj Dabre'] | ['cs.CL'] | Punctuation plays a vital role in structuring meaning, yet current models
often struggle to restore it accurately in transcripts of spontaneous speech,
especially in the presence of disfluencies such as false starts and
backtracking. These limitations hinder the performance of downstream tasks such as
translation, text-to-speech, and summarization, where sentence boundaries are
critical for preserving quality. In this work, we introduce Cadence, a
generalist punctuation restoration model adapted from a pretrained large
language model. Cadence is designed to handle both clean written text and
highly spontaneous spoken transcripts. It surpasses the previous state of the
art in performance while expanding support from 14 to all 22 Indian languages
and English. We conduct a comprehensive analysis of model behavior across
punctuation types and language families, identifying persistent challenges
under domain shift and with rare punctuation marks. Our findings demonstrate
the efficacy of utilizing pretrained language models for multilingual
punctuation restoration and highlight Cadence's practical value for low-resource
NLP pipelines at scale. | 2025-06-04T09:54:38Z | Work in Progress | null | null | null | null | null | null | null | null | null |
2506.03930 | VisCoder: Fine-Tuning LLMs for Executable Python Visualization Code
Generation | ['Yuansheng Ni', 'Ping Nie', 'Kai Zou', 'Xiang Yue', 'Wenhu Chen'] | ['cs.SE', 'cs.AI', 'cs.CL'] | Large language models (LLMs) often struggle with visualization tasks such as
plotting diagrams and charts, where success depends on both code correctness and
visual semantics. Existing instruction-tuning datasets lack execution-grounded
supervision and offer limited support for iterative code correction, resulting
in fragile and unreliable plot generation. We present VisCode-200K, a
large-scale instruction tuning dataset for Python-based visualization and
self-correction. It contains over 200K examples from two sources: (1) validated
plotting code from open-source repositories, paired with natural language
instructions and rendered plots; and (2) 45K multi-turn correction dialogues
from Code-Feedback, enabling models to revise faulty code using runtime
feedback. We fine-tune Qwen2.5-Coder-Instruct on VisCode-200K to create
VisCoder, and evaluate it on PandasPlotBench. VisCoder significantly
outperforms strong open-source baselines and approaches the performance of
proprietary models like GPT-4o-mini. We further adopt a self-debug evaluation
protocol to assess iterative repair, demonstrating the benefits of
feedback-driven learning for executable, visually accurate code generation. | 2025-06-04T13:24:44Z | null | null | null | null | null | null | null | null | null | null |
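The self-debug protocol described for VisCoder (execute generated code, feed the runtime traceback back for revision) can be sketched as a small loop. The `model` callable and the repair-prompt wording are illustrative assumptions, not the paper's interface:

```python
import traceback

# Sketch of a self-debug evaluation loop like the one described above:
# run generated code, and on failure feed the traceback back to the model
# for a revised attempt. `model` is a hypothetical prompt -> code callable.

def self_debug(model, prompt, max_rounds=3):
    code = model(prompt)
    for _ in range(max_rounds):
        try:
            exec(code, {})  # execution-grounded check
            return code, True
        except Exception:
            err = traceback.format_exc()
            code = model(f"{prompt}\n\nThe code failed:\n{err}\nFix it.")
    return code, False

# Toy model: emits broken code first, then a fix once it sees a traceback.
def toy_model(p):
    return "x = 1 +" if "failed" not in p else "x = 1 + 1"

code, ok = self_debug(toy_model, "add two numbers")
```

For plotting code the success check would additionally compare the rendered figure against the instruction, but the feedback loop itself has this shape.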
2506.03968 | From Real to Synthetic: Synthesizing Millions of Diversified and
Complicated User Instructions with Attributed Grounding | ['Chiwei Zhu', 'Benfeng Xu', 'Xiaorui Wang', 'Zhendong Mao'] | ['cs.CL'] | The pursuit of diverse, complex, and large-scale instruction data is crucial
for automatically aligning large language models (LLMs). While there are
methods capable of generating synthetic instructions at scale, they either
suffer from limited grounding sources, leading to a narrow distribution, or
rely on trivial extensions that fail to produce meaningful trajectories in
terms of complexity. In contrast, instructions that benefit efficient alignment
are typically crafted with cognitive insights and grounded in real-world use
cases. In this paper, we synthesize such instructions using attributed
grounding, which involves 1) a top-down attribution process that grounds a
selective set of real instructions to situated users, and 2) a bottom-up
synthesis process that leverages web documents to first generate a situation,
then a meaningful instruction. This framework allows us to harvest diverse and
complex instructions at scale, utilizing the vast range of web documents.
Specifically, we construct a dataset of 1 million instructions, called
SynthQuestions, and demonstrate that models trained on it achieve leading
performance on several common benchmarks, with improvements that continually
scale with more web corpora. Data, models and codes will be available at
https://github.com/Ignoramus0817/SynthQuestions. | 2025-06-04T14:00:47Z | To be published at ACL 2025 | null | null | From Real to Synthetic: Synthesizing Millions of Diversified and Complicated User Instructions with Attributed Grounding | ['Chiwei Zhu', 'Benfeng Xu', 'Xiaorui Wang', 'Zhendong Mao'] | 2025 | arXiv.org | 0 | 30 | ['Computer Science']
2506.04034 | Rex-Thinker: Grounded Object Referring via Chain-of-Thought Reasoning | ['Qing Jiang', 'Xingyu Chen', 'Zhaoyang Zeng', 'Junzhi Yu', 'Lei Zhang'] | ['cs.CV'] | Object referring aims to detect all objects in an image that match a given
natural language description. We argue that a robust object referring model
should be grounded, meaning its predictions should be both explainable and
faithful to the visual content. Specifically, it should satisfy two key
properties: 1) Verifiable, by producing interpretable reasoning that justifies
its predictions and clearly links them to visual evidence; and 2) Trustworthy,
by learning to abstain when no object in the image satisfies the given
expression. However, most methods treat referring as a direct bounding box
prediction task, offering limited interpretability and struggling to reject
expressions with no matching object. In this work, we propose Rex-Thinker, a
model that formulates object referring as an explicit CoT reasoning task. Given
a referring expression, we first identify all candidate object instances
corresponding to the referred object category. Rex-Thinker then performs
step-by-step reasoning over each candidate to assess whether it matches the
given expression, before making a final prediction. To support this paradigm,
we construct a large-scale CoT-style referring dataset named HumanRef-CoT by
prompting GPT-4o on the HumanRef dataset. Each reasoning trace follows a
structured planning, action, and summarization format, enabling the model to
learn decomposed, interpretable reasoning over object candidates. We then train
Rex-Thinker in two stages: a cold-start supervised fine-tuning phase to teach
the model how to perform structured reasoning, followed by GRPO-based RL
learning to improve accuracy and generalization. Experiments show that our
approach outperforms standard baselines in both precision and interpretability
on in-domain evaluation, while also demonstrating improved ability to reject
hallucinated outputs and strong generalization in out-of-domain settings. | 2025-06-04T14:56:57Z | homepage: https://rexthinker.github.io/ | null | null | null | null | null | null | null | null | null |
2506.04158 | Image Editing As Programs with Diffusion Models | ['Yujia Hu', 'Songhua Liu', 'Zhenxiong Tan', 'Xingyi Yang', 'Xinchao Wang'] | ['cs.CV'] | While diffusion models have achieved remarkable success in text-to-image
generation, they encounter significant challenges with instruction-driven image
editing. Our research highlights a key challenge: these models particularly
struggle with structurally inconsistent edits that involve substantial layout
changes. To mitigate this gap, we introduce Image Editing As Programs (IEAP), a
unified image editing framework built upon the Diffusion Transformer (DiT)
architecture. At its core, IEAP approaches instructional editing through a
reductionist lens, decomposing complex editing instructions into sequences of
atomic operations. Each operation is implemented via a lightweight adapter
sharing the same DiT backbone and is specialized for a specific type of edit.
Programmed by a vision-language model (VLM)-based agent, these operations
collaboratively support arbitrary and structurally inconsistent
transformations. By modularizing and sequencing edits in this way, IEAP
generalizes robustly across a wide range of editing tasks, from simple
adjustments to substantial structural changes. Extensive experiments
demonstrate that IEAP significantly outperforms state-of-the-art methods on
standard benchmarks across various editing scenarios. In these evaluations, our
framework delivers superior accuracy and semantic fidelity, particularly for
complex, multi-step instructions. Codes are available at
https://github.com/YujiaHu1109/IEAP. | 2025-06-04T16:57:24Z | null | null | Image Editing As Programs with Diffusion Models | ['Yujia Hu', 'Songhua Liu', 'Zhenxiong Tan', 'Xingyi Yang', 'Xinchao Wang'] | 2025 | arXiv.org | 0 | 75 | ['Computer Science']
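IEAP's reductionist framing (a VLM agent decomposes a complex instruction into a sequence of atomic editing operations) can be sketched as a tiny interpreter. The operations below act on a plain dict standing in for an image state; in the actual system each op would be a lightweight DiT adapter, and these names are illustrative only:

```python
# Sketch of "editing as programs": an agent-produced program is a sequence of
# (operation, arguments) pairs applied in order. `crop`/`recolor` are toy
# stand-ins for the paper's adapter-backed atomic operations.

def crop(img, box):
    return {**img, "box": box}

def recolor(img, color):
    return {**img, "color": color}

def run_program(img, program):
    """Execute atomic editing operations sequentially on the image state."""
    for op, kwargs in program:
        img = op(img, **kwargs)
    return img

edited = run_program({"id": 1}, [(crop, {"box": (0, 0, 64, 64)}),
                                 (recolor, {"color": "red"})])
```

Sequencing edits this way is what lets one complex, structurally inconsistent instruction decompose into steps that each specialized adapter can handle.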
2506.04178 | OpenThoughts: Data Recipes for Reasoning Models | ['Etash Guha', 'Ryan Marten', 'Sedrick Keh', 'Negin Raoof', 'Georgios Smyrnis', 'Hritik Bansal', 'Marianna Nezhurina', 'Jean Mercat', 'Trung Vu', 'Zayne Sprague', 'Ashima Suvarna', 'Benjamin Feuer', 'Liangyu Chen', 'Zaid Khan', 'Eric Frankel', 'Sachin Grover', 'Caroline Choi', 'Niklas Muennighoff', 'Shiye Su', 'Wanjia Zhao', 'John Yang', 'Shreyas Pimpalgaonkar', 'Kartik Sharma', 'Charlie Cheng-Jie Ji', 'Yichuan Deng', 'Sarah Pratt', 'Vivek Ramanujan', 'Jon Saad-Falcon', 'Jeffrey Li', 'Achal Dave', 'Alon Albalak', 'Kushal Arora', 'Blake Wulfe', 'Chinmay Hegde', 'Greg Durrett', 'Sewoong Oh', 'Mohit Bansal', 'Saadia Gabriel', 'Aditya Grover', 'Kai-Wei Chang', 'Vaishaal Shankar', 'Aaron Gokaslan', 'Mike A. Merrill', 'Tatsunori Hashimoto', 'Yejin Choi', 'Jenia Jitsev', 'Reinhard Heckel', 'Maheswaran Sathiamoorthy', 'Alexandros G. Dimakis', 'Ludwig Schmidt'] | ['cs.LG'] | Reasoning models have made rapid progress on many benchmarks involving math,
code, and science. Yet, there are still many open questions about the best
training recipes for reasoning since state-of-the-art models often rely on
proprietary datasets with little to no public information available. To address
this, the goal of the OpenThoughts project is to create open-source datasets
for training reasoning models. After initial explorations, our OpenThoughts2-1M
dataset led to OpenThinker2-32B, the first model trained on public reasoning
data to match DeepSeek-R1-Distill-32B on standard reasoning benchmarks such as
AIME and LiveCodeBench. We then improve our dataset further by systematically
investigating each step of our data generation pipeline with 1,000+ controlled
experiments, which led to OpenThoughts3. Scaling the pipeline to 1.2M examples
and using QwQ-32B as teacher yields our OpenThoughts3-7B model, which achieves
state-of-the-art results: 53% on AIME 2025, 51% on LiveCodeBench 06/24-01/25,
and 54% on GPQA Diamond - improvements of 15.3, 17.2, and 20.5 percentage
points compared to the DeepSeek-R1-Distill-Qwen-7B. All of our datasets and
models are available on https://openthoughts.ai. | 2025-06-04T17:25:39Z | https://www.openthoughts.ai/blog/ot3. arXiv admin note: text overlap
with arXiv:2505.23754 by other authors | null | null | null | null | null | null | null | null | null |
2506.04207 | Advancing Multimodal Reasoning: From Optimized Cold Start to Staged
Reinforcement Learning | ['Shuang Chen', 'Yue Guo', 'Zhaochen Su', 'Yafu Li', 'Yulun Wu', 'Jiacheng Chen', 'Jiayu Chen', 'Weijie Wang', 'Xiaoye Qu', 'Yu Cheng'] | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV'] | Inspired by the remarkable reasoning capabilities of Deepseek-R1 in complex
textual tasks, many works attempt to incentivize similar capabilities in
Multimodal Large Language Models (MLLMs) by directly applying reinforcement
learning (RL). However, they still struggle to activate complex reasoning. In
this paper, rather than examining multimodal RL in isolation, we delve into
current training pipelines and identify three crucial phenomena: 1) Effective
cold start initialization is critical for enhancing MLLM reasoning.
Intriguingly, we find that initializing with carefully selected text data alone
can lead to performance surpassing many recent multimodal reasoning models,
even before multimodal RL. 2) Standard GRPO applied to multimodal RL suffers
from gradient stagnation, which degrades training stability and performance. 3)
Subsequent text-only RL training, following the multimodal RL phase, further
enhances multimodal reasoning. This staged training approach effectively
balances perceptual grounding and cognitive reasoning development. By
incorporating the above insights and addressing multimodal RL issues, we
introduce ReVisual-R1, achieving a new state-of-the-art among open-source 7B
MLLMs on challenging benchmarks including MathVerse, MathVision, WeMath,
LogicVista, DynaMath, and challenging AIME2024 and AIME2025. | 2025-06-04T17:51:08Z | 19 pages, 6 figures | null | null | Advancing Multimodal Reasoning: From Optimized Cold Start to Staged Reinforcement Learning | ['Shuang Chen', 'Yue Guo', 'Zhao-yu Su', 'Yafu Li', 'Yulun Wu', 'Jiacheng Chen', 'Jiayu Chen', 'Weijie Wang', 'Xiaoye Qu', 'Yu Cheng'] | 2025 | arXiv.org | 0 | 78 | ['Computer Science']
2506.04217 | OWMM-Agent: Open World Mobile Manipulation With Multi-modal Agentic Data
Synthesis | ['Junting Chen', 'Haotian Liang', 'Lingxiao Du', 'Weiyun Wang', 'Mengkang Hu', 'Yao Mu', 'Wenhai Wang', 'Jifeng Dai', 'Ping Luo', 'Wenqi Shao', 'Lin Shao'] | ['cs.RO', 'cs.AI', 'I.2.4; I.2.9; I.2.10'] | The rapid progress of navigation, manipulation, and vision models has made
mobile manipulators capable of many specialized tasks. However, the open-world
mobile manipulation (OWMM) task remains a challenge due to the need for
generalization to open-ended instructions and environments, as well as the
systematic complexity of integrating high-level decision making with low-level
robot control based on both global scene understanding and current agent state.
To address this complexity, we propose a novel multi-modal agent architecture
that maintains multi-view scene frames and agent states for decision-making and
controls the robot by function calling. A second challenge is the hallucination
from domain shift. To enhance the agent performance, we further introduce an
agentic data synthesis pipeline for the OWMM task to adapt the VLM model to our
task domain with instruction fine-tuning. We highlight our fine-tuned OWMM-VLM
as the first dedicated foundation model for mobile manipulators with global
scene understanding, robot state tracking, and multi-modal action generation in
a unified model. Through experiments, we demonstrate that our model achieves
SOTA performance compared to other foundation models including GPT-4o and
strong zero-shot generalization in real world. The project page is at
https://github.com/HHYHRHY/OWMM-Agent | 2025-06-04T17:57:44Z | 9 pages of main content, 19 pages in total | null | null | null | null | null | null | null | null | null |
2506.04308 | RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language
Models for Robotics | ['Enshen Zhou', 'Jingkun An', 'Cheng Chi', 'Yi Han', 'Shanyu Rong', 'Chi Zhang', 'Pengwei Wang', 'Zhongyuan Wang', 'Tiejun Huang', 'Lu Sheng', 'Shanghang Zhang'] | ['cs.RO', 'cs.AI', 'cs.CV'] | Spatial referring is a fundamental capability of embodied robots to interact
with the 3D physical world. However, even with the powerful pretrained vision
language models (VLMs), recent approaches still struggle to accurately
understand complex 3D scenes and dynamically reason about the
instruction-indicated locations for interaction. To this end, we propose
RoboRefer, a 3D-aware VLM that can first achieve precise spatial understanding
by integrating a disentangled but dedicated depth encoder via supervised
fine-tuning (SFT). Moreover, RoboRefer advances generalized multi-step spatial
reasoning via reinforcement fine-tuning (RFT), with metric-sensitive process
reward functions tailored for spatial referring tasks. To support SFT and RFT
training, we introduce RefSpatial, a large-scale dataset of 20M QA pairs (2x
prior), covering 31 spatial relations (vs. 15 prior) and supporting complex
reasoning processes (up to 5 steps). In addition, we introduce
RefSpatial-Bench, a challenging benchmark filling the gap in evaluating spatial
referring with multi-step reasoning. Experiments show that SFT-trained
RoboRefer achieves state-of-the-art spatial understanding, with an average
success rate of 89.6%. RFT-trained RoboRefer further outperforms all other
baselines by a large margin, even surpassing Gemini-2.5-Pro by 17.4% in average
accuracy on RefSpatial-Bench. Notably, RoboRefer can be integrated with various
control policies to execute long-horizon, dynamic tasks across diverse robots
(e.g., UR5, G1 humanoid) in cluttered real-world scenes. | 2025-06-04T17:59:27Z | Project page: https://zhoues.github.io/RoboRefer/ | null | null | null | null | null | null | null | null | null |
2506.04421 | HMAR: Efficient Hierarchical Masked Auto-Regressive Image Generation | ['Hermann Kumbong', 'Xian Liu', 'Tsung-Yi Lin', 'Ming-Yu Liu', 'Xihui Liu', 'Ziwei Liu', 'Daniel Y. Fu', 'Christopher Ré', 'David W. Romero'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Visual Auto-Regressive modeling (VAR) has shown promise in bridging the speed
and quality gap between autoregressive image models and diffusion models. VAR
reformulates autoregressive modeling by decomposing an image into successive
resolution scales. During inference, an image is generated by predicting all
the tokens in the next (higher-resolution) scale, conditioned on all tokens in
all previous (lower-resolution) scales. However, this formulation suffers from
reduced image quality due to the parallel generation of all tokens in a
resolution scale; has sequence lengths scaling superlinearly in image
resolution; and requires retraining to change the sampling schedule.
We introduce Hierarchical Masked Auto-Regressive modeling (HMAR), a new image
generation algorithm that alleviates these issues using next-scale prediction
and masked prediction to generate high-quality images with fast sampling. HMAR
reformulates next-scale prediction as a Markovian process, wherein the
prediction of each resolution scale is conditioned only on tokens in its
immediate predecessor instead of the tokens in all predecessor resolutions.
When predicting a resolution scale, HMAR uses a controllable multi-step masked
generation procedure to generate a subset of the tokens in each step. On
ImageNet 256x256 and 512x512 benchmarks, HMAR models match or outperform
parameter-matched VAR, diffusion, and autoregressive baselines. We develop
efficient IO-aware block-sparse attention kernels that allow HMAR to achieve
faster training and inference times over VAR by over 2.5x and 1.75x
respectively, as well as over 3x lower inference memory footprint. Finally,
HMAR yields additional flexibility over VAR; its sampling schedule can be
changed without further training, and it can be applied to image editing tasks
in a zero-shot manner. | 2025-06-04T20:08:07Z | Accepted to CVPR 2025. Project Page:
https://research.nvidia.com/labs/dir/hmar/ | null | null | null | null | null | null | null | null | null |
2506.04559 | Perceptual Decoupling for Scalable Multi-modal Reasoning via
Reward-Optimized Captioning | ['Yunhao Gou', 'Kai Chen', 'Zhili Liu', 'Lanqing Hong', 'Xin Jin', 'Zhenguo Li', 'James T. Kwok', 'Yu Zhang'] | ['cs.CV'] | Recent advances in slow-thinking language models (e.g., OpenAI-o1 and
DeepSeek-R1) have demonstrated remarkable abilities in complex reasoning tasks
by emulating human-like reflective cognition. However, extending such
capabilities to multi-modal large language models (MLLMs) remains challenging
due to the high cost of retraining vision-language alignments when upgrading
the underlying reasoner LLMs. A straightforward solution is to decouple
perception from reasoning, i.e., converting visual inputs into language
representations (e.g., captions) that are then passed to a powerful text-only
reasoner. However, this decoupling introduces a critical challenge: the visual
extractor must generate descriptions that are both faithful to the image and
informative enough to support accurate downstream reasoning. To address this,
we propose Reasoning-Aligned Perceptual Decoupling via Caption Reward
Optimization (RACRO) - a reasoning-guided reinforcement learning strategy that
aligns the extractor's captioning behavior with the reasoning objective. By
closing the perception-reasoning loop via reward-based optimization, RACRO
significantly enhances visual grounding and extracts reasoning-optimized
representations. Experiments on multi-modal math and science benchmarks show
that the proposed RACRO method achieves state-of-the-art average performance
while enabling superior scalability and plug-and-play adaptation to more
advanced reasoning LLMs without the necessity for costly multi-modal
re-alignment. | 2025-06-05T02:28:07Z | null | null | null | Perceptual Decoupling for Scalable Multi-modal Reasoning via Reward-Optimized Captioning | ['Yunhao Gou', 'Kai Chen', 'Zhili Liu', 'Lanqing Hong', 'Xin Jin', 'Zhenguo Li', 'James T. Kwok', 'Yu Zhang'] | 2025 | arXiv.org | 0 | 57 | ['Computer Science'] |
2506.04598 | Scaling Laws for Robust Comparison of Open Foundation Language-Vision
Models and Datasets | ['Marianna Nezhurina', 'Tomer Porian', 'Giovanni Pucceti', 'Tommie Kerssies', 'Romain Beaumont', 'Mehdi Cherti', 'Jenia Jitsev'] | ['cs.LG', 'cs.AI', 'cs.CV'] | In studies of transferable learning, scaling laws are obtained for various
important foundation models to predict their properties and performance at
larger scales. We show here how scaling law derivation can also be used for
model and dataset comparison, allowing one to decide which procedure is to be
preferred for pre-training. For the first time, full scaling laws based on
dense measurements across a wide span of model and samples seen scales are
derived for two important language-vision learning procedures, CLIP and MaMMUT,
that use either contrastive only or contrastive and captioning text generative
loss. Ensuring sufficient prediction accuracy for held out points, we use
derived scaling laws to compare both models, obtaining evidence for MaMMUT's
stronger improvement with scale and better sample efficiency than standard
CLIP. To strengthen validity of the comparison, we show scaling laws for
various downstream tasks, classification, retrieval, and segmentation, and for
different open datasets, DataComp, DFN and Re-LAION, observing consistently the
same trends. We show that comparison can also be performed when deriving
scaling laws with a constant learning rate schedule, reducing compute cost.
Accurate derivation of scaling laws thus provides the means to perform model and
dataset comparison across scale spans, avoiding misleading conclusions based on
measurements from single reference scales only, paving the road for systematic
comparison and improvement of open foundation models and datasets for their
creation. We release all the pre-trained models with their intermediate
checkpoints, including openMaMMUT-L/14, which achieves $80.3\%$ zero-shot
ImageNet-1k accuracy, trained on 12.8B samples from DataComp-1.4B. Code for
reproducing experiments in the paper and raw experiments data can be found at
https://github.com/LAION-AI/scaling-laws-for-comparison. | 2025-06-05T03:35:59Z | Preprint. In Review | null | null | null | null | null | null | null | null | null |
2506.04879 | Invisible Backdoor Triggers in Image Editing Model via Deep Watermarking | ['Yu-Feng Chen', 'Tzuhsuan Huang', 'Pin-Yen Chiu', 'Jun-Cheng Chen'] | ['cs.CV'] | Diffusion models have achieved remarkable progress in both image generation
and editing. However, recent studies have revealed their vulnerability to
backdoor attacks, in which specific patterns embedded in the input can
manipulate the model's behavior. Most existing research in this area has
proposed attack frameworks focused on the image generation pipeline, leaving
backdoor attacks in image editing relatively unexplored. Among the few studies
targeting image editing, most utilize visible triggers, which are impractical
because they introduce noticeable alterations to the input image before
editing. In this paper, we propose a novel attack framework that embeds
invisible triggers into the image editing process via poisoned training data.
We leverage off-the-shelf deep watermarking models to encode imperceptible
watermarks as backdoor triggers. Our goal is to make the model produce the
predefined backdoor target when it receives watermarked inputs, while editing
clean images normally according to the given prompt. With extensive experiments
across different watermarking models, the proposed method achieves promising
attack success rates. In addition, the analysis results of the watermark
characteristics in terms of backdoor attacks further support the effectiveness of
our approach. The code is available
at: https://github.com/aiiu-lab/BackdoorImageEditing | 2025-06-05T10:51:58Z | null | null | null | null | null | null | null | null | null | null |
2506.04956 | FEAT: Full-Dimensional Efficient Attention Transformer for Medical Video
Generation | ['Huihan Wang', 'Zhiwen Yang', 'Hui Zhang', 'Dan Zhao', 'Bingzheng Wei', 'Yan Xu'] | ['cs.CV'] | Synthesizing high-quality dynamic medical videos remains a significant
challenge due to the need for modeling both spatial consistency and temporal
dynamics. Existing Transformer-based approaches face critical limitations,
including insufficient channel interactions, high computational complexity from
self-attention, and coarse denoising guidance from timestep embeddings when
handling varying noise levels. In this work, we propose FEAT, a
full-dimensional efficient attention Transformer, which addresses these issues
through three key innovations: (1) a unified paradigm with sequential
spatial-temporal-channel attention mechanisms to capture global dependencies
across all dimensions, (2) a linear-complexity design for attention mechanisms
in each dimension, utilizing weighted key-value attention and global channel
attention, and (3) a residual value guidance module that provides fine-grained
pixel-level guidance to adapt to different noise levels. We evaluate FEAT on
standard benchmarks and downstream tasks, demonstrating that FEAT-S, with only
23\% of the parameters of the state-of-the-art model Endora, achieves
comparable or even superior performance. Furthermore, FEAT-L surpasses all
comparison methods across multiple datasets, showcasing both superior
effectiveness and scalability. Code is available at
https://github.com/Yaziwel/FEAT. | 2025-06-05T12:31:02Z | This paper has been early accepted by MICCAI 2025 | null | null | null | null | null | null | null | null | null |
2506.05074 | EMBER2024 -- A Benchmark Dataset for Holistic Evaluation of Malware
Classifiers | ['Robert J. Joyce', 'Gideon Miller', 'Phil Roth', 'Richard Zak', 'Elliott Zaresky-Williams', 'Hyrum Anderson', 'Edward Raff', 'James Holt'] | ['cs.CR', 'cs.LG'] | A lack of accessible data has historically restricted malware analysis
research, and practitioners have relied heavily on datasets provided by
industry sources to advance. Existing public datasets are limited by narrow
scope - most include files targeting a single platform, have labels supporting
just one type of malware classification task, and make no effort to capture the
evasive files that make malware detection difficult in practice. We present
EMBER2024, a new dataset that enables holistic evaluation of malware
classifiers. Created in collaboration with the authors of EMBER2017 and
EMBER2018, the EMBER2024 dataset includes hashes, metadata, feature vectors,
and labels for more than 3.2 million files from six file formats. Our dataset
supports the training and evaluation of machine learning models on seven
malware classification tasks, including malware detection, malware family
classification, and malware behavior identification. EMBER2024 is the first to
include a collection of malicious files that initially went undetected by a set
of antivirus products, creating a "challenge" set to assess classifier
performance against evasive malware. This work also introduces EMBER feature
version 3, with added support for several new feature types. We are releasing
the EMBER2024 dataset to promote reproducibility and empower researchers in the
pursuit of new malware research topics. | 2025-06-05T14:20:36Z | null | null | 10.1145/3711896.3737431 | null | null | null | null | null | null | null |
2506.05127 | PixCell: A generative foundation model for digital histopathology images | ['Srikar Yellapragada', 'Alexandros Graikos', 'Zilinghan Li', 'Kostas Triaridis', 'Varun Belagali', 'Saarthak Kapse', 'Tarak Nath Nandi', 'Ravi K Madduri', 'Prateek Prasanna', 'Tahsin Kurc', 'Rajarsi R. Gupta', 'Joel Saltz', 'Dimitris Samaras'] | ['eess.IV', 'cs.CV', 'q-bio.QM'] | The digitization of histology slides has revolutionized pathology, providing
massive datasets for cancer diagnosis and research. Contrastive self-supervised
and vision-language models have been shown to effectively mine large pathology
datasets to learn discriminative representations. On the other hand, generative
models, capable of synthesizing realistic and diverse images, present a
compelling solution to address unique problems in pathology that involve
synthesizing images; overcoming annotated data scarcity, enabling
privacy-preserving data sharing, and performing inherently generative tasks,
such as virtual staining. We introduce PixCell, the first diffusion-based
generative foundation model for histopathology. We train PixCell on PanCan-30M,
a vast, diverse dataset derived from 69,184 H\&E-stained whole slide images
covering various cancer types. We employ a progressive training strategy and a
self-supervision-based conditioning that allows us to scale up training without
any annotated data. PixCell generates diverse and high-quality images across
multiple cancer types, which we find can be used in place of real data to train
a self-supervised discriminative model. Synthetic images shared between
institutions are subject to fewer regulatory barriers than would be the case
with real clinical images. Furthermore, we showcase the ability to precisely
control image generation using a small set of annotated images, which can be
used for both data augmentation and educational purposes. Testing on a cell
segmentation task, a mask-guided PixCell enables targeted data augmentation,
improving downstream performance. Finally, we demonstrate PixCell's ability to
use H\&E structural staining to infer results from molecular marker studies; we
use this capability to infer IHC staining from H\&E images. Our trained models
are publicly released to accelerate research in computational pathology. | 2025-06-05T15:14:32Z | null | null | null | null | null | null | null | null | null | null |
2506.05176 | Qwen3 Embedding: Advancing Text Embedding and Reranking Through
Foundation Models | ['Yanzhao Zhang', 'Mingxin Li', 'Dingkun Long', 'Xin Zhang', 'Huan Lin', 'Baosong Yang', 'Pengjun Xie', 'An Yang', 'Dayiheng Liu', 'Junyang Lin', 'Fei Huang', 'Jingren Zhou'] | ['cs.CL'] | In this work, we introduce the Qwen3 Embedding series, a significant
advancement over its predecessor, the GTE-Qwen series, in text embedding and
reranking capabilities, built upon the Qwen3 foundation models. Leveraging the
Qwen3 LLMs' robust capabilities in multilingual text understanding and
generation, our innovative multi-stage training pipeline combines large-scale
unsupervised pre-training with supervised fine-tuning on high-quality datasets.
Effective model merging strategies further ensure the robustness and
adaptability of the Qwen3 Embedding series. During the training process, the
Qwen3 LLMs serve not only as backbone models but also play a crucial role in
synthesizing high-quality, rich, and diverse training data across multiple
domains and languages, thus enhancing the training pipeline. The Qwen3
Embedding series offers a spectrum of model sizes (0.6B, 4B, 8B) for both
embedding and reranking tasks, addressing diverse deployment scenarios where
users can optimize for either efficiency or effectiveness. Empirical
evaluations demonstrate that the Qwen3 Embedding series achieves
state-of-the-art results across diverse benchmarks. Notably, it excels on the
multilingual evaluation benchmark MTEB for text embedding, as well as in
various retrieval tasks, including code retrieval, cross-lingual retrieval and
multilingual retrieval. To facilitate reproducibility and promote
community-driven research and development, the Qwen3 Embedding models are
publicly available under the Apache 2.0 license. | 2025-06-05T15:49:48Z | null | null | null | null | null | null | null | null | null | null |
2506.05209 | The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly
Licensed Text | ['Nikhil Kandpal', 'Brian Lester', 'Colin Raffel', 'Sebastian Majstorovic', 'Stella Biderman', 'Baber Abbasi', 'Luca Soldaini', 'Enrico Shippole', 'A. Feder Cooper', 'Aviya Skowron', 'John Kirchenbauer', 'Shayne Longpre', 'Lintang Sutawika', 'Alon Albalak', 'Zhenlin Xu', 'Guilherme Penedo', 'Loubna Ben Allal', 'Elie Bakouch', 'John David Pressman', 'Honglu Fan', 'Dashiell Stander', 'Guangyu Song', 'Aaron Gokaslan', 'Tom Goldstein', 'Brian R. Bartoldson', 'Bhavya Kailkhura', 'Tyler Murray'] | ['cs.CL', 'cs.LG'] | Large language models (LLMs) are typically trained on enormous quantities of
unlicensed text, a practice that has led to scrutiny due to possible
intellectual property infringement and ethical concerns. Training LLMs on
openly licensed text presents a first step towards addressing these issues, but
prior data collection efforts have yielded datasets too small or low-quality to
produce performant LLMs. To address this gap, we collect, curate, and release
the Common Pile v0.1, an eight terabyte collection of openly licensed text
designed for LLM pretraining. The Common Pile comprises content from 30 sources
that span diverse domains including research papers, code, books,
encyclopedias, educational materials, audio transcripts, and more. Crucially,
we validate our efforts by training two 7 billion parameter LLMs on text from
the Common Pile: Comma v0.1-1T and Comma v0.1-2T, trained on 1 and 2 trillion
tokens respectively. Both models attain competitive performance to LLMs trained
on unlicensed text with similar computational budgets, such as Llama 1 and 2
7B. In addition to releasing the Common Pile v0.1 itself, we also release the
code used in its creation as well as the training mixture and checkpoints for
the Comma v0.1 models. | 2025-06-05T16:21:30Z | null | null | null | null | null | null | null | null | null | null |
2506.05218 | MonkeyOCR: Document Parsing with a Structure-Recognition-Relation
Triplet Paradigm | ['Zhang Li', 'Yuliang Liu', 'Qiang Liu', 'Zhiyin Ma', 'Ziyang Zhang', 'Shuo Zhang', 'Zidun Guo', 'Jiarui Zhang', 'Xinyu Wang', 'Xiang Bai'] | ['cs.CV'] | We introduce MonkeyOCR, a vision-language model for document parsing that
advances the state of the art by leveraging a Structure-Recognition-Relation
(SRR) triplet paradigm. This design simplifies what would otherwise be a
complex multi-tool pipeline (as in MinerU's modular approach) and avoids the
inefficiencies of processing full pages with giant end-to-end models (e.g.,
large multimodal LLMs like Qwen-VL). In SRR, document parsing is abstracted
into three fundamental questions - "Where is it?" (structure), "What is it?"
(recognition), and "How is it organized?" (relation) - corresponding to layout
analysis, content identification, and logical ordering. This focused
decomposition balances accuracy and speed: it enables efficient, scalable
processing without sacrificing precision. To train and evaluate this approach,
we introduce MonkeyDoc (the most comprehensive document parsing dataset to
date), with 3.9 million instances spanning over ten document types in both
Chinese and English. Experiments show that MonkeyOCR outperforms MinerU by an
average of 5.1%, with particularly notable improvements on challenging content
such as formulas (+15.0%) and tables (+8.6%). Remarkably, our 3B-parameter
model surpasses much larger and top-performing models, including Qwen2.5-VL
(72B) and Gemini 2.5 Pro, achieving state-of-the-art average performance on
English document parsing tasks. In addition, MonkeyOCR processes multi-page
documents significantly faster (0.84 pages per second compared to 0.65 for
MinerU and 0.12 for Qwen2.5-VL-7B). The 3B model can be efficiently deployed
for inference on a single NVIDIA 3090 GPU. Code and models will be released at
https://github.com/Yuliang-Liu/MonkeyOCR. | 2025-06-05T16:34:57Z | null | null | null | MonkeyOCR: Document Parsing with a Structure-Recognition-Relation Triplet Paradigm | ['Zhang Li', 'Yuliang Liu', 'Qiang Liu', 'Zhiyin Ma', 'Ziyang Zhang', 'Shuo Zhang', 'Zidun Guo', 'Jiarui Zhang', 'Xinyu Wang', 'Xiang Bai'] | 2025 | arXiv.org | 0 | 50 | ['Computer Science'] |
2506.05282 | Rectified Point Flow: Generic Point Cloud Pose Estimation | ['Tao Sun', 'Liyuan Zhu', 'Shengyu Huang', 'Shuran Song', 'Iro Armeni'] | ['cs.CV', 'cs.AI', 'cs.RO'] | We introduce Rectified Point Flow, a unified parameterization that formulates
pairwise point cloud registration and multi-part shape assembly as a single
conditional generative problem. Given unposed point clouds, our method learns a
continuous point-wise velocity field that transports noisy points toward their
target positions, from which part poses are recovered. In contrast to prior
work that regresses part-wise poses with ad-hoc symmetry handling, our method
intrinsically learns assembly symmetries without symmetry labels. Together with
a self-supervised encoder focused on overlapping points, our method achieves a
new state-of-the-art performance on six benchmarks spanning pairwise
registration and shape assembly. Notably, our unified formulation enables
effective joint training on diverse datasets, facilitating the learning of
shared geometric priors and consequently boosting accuracy. Project page:
https://rectified-pointflow.github.io/. | 2025-06-05T17:36:03Z | Project page: https://rectified-pointflow.github.io/ | null | null | Rectified Point Flow: Generic Point Cloud Pose Estimation | ['Tao Sun', 'Liyuan Zhu', 'Shengyu Huang', 'Shuran Song', 'Iro Armeni'] | 2025 | arXiv.org | 0 | 67 | ['Computer Science'] |
2506.05301 | SeedVR2: One-Step Video Restoration via Diffusion Adversarial
Post-Training | ['Jianyi Wang', 'Shanchuan Lin', 'Zhijie Lin', 'Yuxi Ren', 'Meng Wei', 'Zongsheng Yue', 'Shangchen Zhou', 'Hao Chen', 'Yang Zhao', 'Ceyuan Yang', 'Xuefeng Xiao', 'Chen Change Loy', 'Lu Jiang'] | ['cs.CV'] | Recent advances in diffusion-based video restoration (VR) demonstrate
significant improvement in visual quality, yet yield a prohibitive
computational cost during inference. While several distillation-based
approaches have exhibited the potential of one-step image restoration,
extending existing approaches to VR remains challenging and underexplored,
particularly when dealing with high-resolution video in real-world settings. In
this work, we propose a one-step diffusion-based VR model, termed as SeedVR2,
which performs adversarial VR training against real data. To handle the
challenging high-resolution VR within a single step, we introduce several
enhancements to both model architecture and training procedures. Specifically,
an adaptive window attention mechanism is proposed, where the window size is
dynamically adjusted to fit the output resolutions, avoiding window
inconsistency observed under high-resolution VR using window attention with a
predefined window size. To stabilize and improve the adversarial post-training
towards VR, we further verify the effectiveness of a series of losses,
including a proposed feature matching loss without significantly sacrificing
training efficiency. Extensive experiments show that SeedVR2 can achieve
comparable or even better performance compared with existing VR approaches in a
single step. | 2025-06-05T17:51:05Z | Draft Ver. Project page: https://iceclear.github.io/projects/seedvr2/ | null | null | SeedVR2: One-Step Video Restoration via Diffusion Adversarial Post-Training | ['Jianyi Wang', 'Shanchuan Lin', 'Zhijie Lin', 'Yuxi Ren', 'Meng Wei', 'Zongsheng Yue', 'Shangchen Zhou', 'Hao Chen', 'Yang Zhao', 'Ceyuan Yang', 'Xuefeng Xiao', 'Chen Change Loy', 'Lu Jiang'] | 2025 | arXiv.org | 1 | 94 | ['Computer Science'] |
2506.05302 | Perceive Anything: Recognize, Explain, Caption, and Segment Anything in
Images and Videos | ['Weifeng Lin', 'Xinyu Wei', 'Ruichuan An', 'Tianhe Ren', 'Tingwei Chen', 'Renrui Zhang', 'Ziyu Guo', 'Wentao Zhang', 'Lei Zhang', 'Hongsheng Li'] | ['cs.CV'] | We present Perceive Anything Model (PAM), a conceptually straightforward and
efficient framework for comprehensive region-level visual understanding in
images and videos. Our approach extends the powerful segmentation model SAM 2
by integrating Large Language Models (LLMs), enabling simultaneous object
segmentation with the generation of diverse, region-specific semantic outputs,
including categories, label definition, functional explanations, and detailed
captions. A key component, Semantic Perceiver, is introduced to efficiently
transform SAM 2's rich visual features, which inherently carry general vision,
localization, and semantic priors into multi-modal tokens for LLM
comprehension. To support robust multi-granularity understanding, we also
develop a dedicated data refinement and augmentation pipeline, yielding a
high-quality dataset of 1.5M image and 0.6M video region-semantic annotations,
including novel region-level streaming video caption data. PAM is designed for
lightweightness and efficiency, while also demonstrates strong performance
across a diverse range of region understanding tasks. It runs 1.2-2.4x faster
and consumes less GPU memory than prior approaches, offering a practical
solution for real-world applications. We believe that our effective approach
will serve as a strong baseline for future research in region-level visual
understanding. | 2025-06-05T17:51:39Z | 19 pages, 13 figures, Website: https://Perceive-Anything.github.io | null | null | Perceive Anything: Recognize, Explain, Caption, and Segment Anything in Images and Videos | ['Weifeng Lin', 'Xinyu Wei', 'Ruichuan An', 'Tianhe Ren', 'Tingwei Chen', 'Renrui Zhang', 'Ziyu Guo', 'Wentao Zhang', 'Lei Zhang', 'Hongsheng Li'] | 2025 | arXiv.org | 0 | 78 | ['Computer Science'] |
2506.05328 | AV-Reasoner: Improving and Benchmarking Clue-Grounded Audio-Visual
Counting for MLLMs | ['Lidong Lu', 'Guo Chen', 'Zhiqi Li', 'Yicheng Liu', 'Tong Lu'] | ['cs.CV'] | Despite progress in video understanding, current MLLMs struggle with counting
tasks. Existing benchmarks are limited by short videos, close-set queries, lack
of clue annotations, and weak multimodal coverage. In this paper, we introduce
CG-AV-Counting, a manually-annotated clue-grounded counting benchmark with
1,027 multimodal questions and 5,845 annotated clues over 497 long videos. It
supports both black-box and white-box evaluation, serving as a comprehensive
testbed for both end-to-end and reasoning-based counting. To explore ways to
improve model's counting capability, we propose AV-Reasoner, a model trained
with GRPO and curriculum learning to generalize counting ability from related
tasks. AV-Reasoner achieves state-of-the-art results across multiple
benchmarks, demonstrating the effectiveness of reinforcement learning. However,
experiments show that on out-of-domain benchmarks, reasoning in the language
space fails to bring performance gains. The code and benchmark have been
released at https://av-reasoner.github.io. | 2025-06-05T17:58:33Z | 21 pages, 11 figures | null | null | null | null | null | null | null | null | null |
2506.05336 | VideoMolmo: Spatio-Temporal Grounding Meets Pointing | ['Ghazi Shazan Ahmad', 'Ahmed Heakl', 'Hanan Gani', 'Abdelrahman Shaker', 'Zhiqiang Shen', 'Fahad Shahbaz Khan', 'Salman Khan'] | ['cs.CV'] | Spatio-temporal localization is vital for precise interactions across diverse
domains, from biological research to autonomous navigation and interactive
interfaces. Current video-based approaches, while proficient in tracking, lack
the sophisticated reasoning capabilities of large language models, limiting
their contextual understanding and generalization. We introduce VideoMolmo, a
large multimodal model tailored for fine-grained spatio-temporal pointing
conditioned on textual descriptions. Building upon the Molmo architecture,
VideoMolmo incorporates a temporal module utilizing an attention mechanism to
condition each frame on preceding frames, ensuring temporal consistency.
Additionally, our novel temporal mask fusion pipeline employs SAM2 for
bidirectional point propagation, significantly enhancing coherence across video
sequences. This two-step decomposition, i.e., first using the LLM to generate
precise pointing coordinates, then relying on a sequential mask-fusion module
to produce coherent segmentation, not only simplifies the task for the language
model but also enhances interpretability. Due to the lack of suitable datasets,
we curate a comprehensive dataset comprising 72k video-caption pairs annotated
with 100k object points. To evaluate the generalization of VideoMolmo, we
introduce VPoS-Bench, a challenging out-of-distribution benchmark spanning five
real-world scenarios: Cell Tracking, Egocentric Vision, Autonomous Driving,
Video-GUI Interaction, and Robotics. We also evaluate our model on Referring
Video Object Segmentation (Refer-VOS) and Reasoning VOS tasks. In comparison to
existing models, VideoMolmo substantially improves spatio-temporal pointing
accuracy and reasoning capability. Our code and models are publicly available
at https://github.com/mbzuai-oryx/VideoMolmo. | 2025-06-05T17:59:29Z | 20 pages, 13 figures | null | null | null | null | null | null | null | null | null |
2506.05343 | ContentV: Efficient Training of Video Generation Models with Limited
Compute | ['Wenfeng Lin', 'Renjie Chen', 'Boyuan Liu', 'Shiyue Yan', 'Ruoyu Feng', 'Jiangchuan Wei', 'Yichen Zhang', 'Yimeng Zhou', 'Chao Feng', 'Jiao Ran', 'Qi Wu', 'Zuotao Liu', 'Mingyu Guo'] | ['cs.CV'] | Recent advances in video generation demand increasingly efficient training
recipes to mitigate escalating computational costs. In this report, we present
ContentV, an 8B-parameter text-to-video model that achieves state-of-the-art
performance (85.14 on VBench) after training on 256 x 64GB Neural Processing
Units (NPUs) for merely four weeks. ContentV generates diverse, high-quality
videos across multiple resolutions and durations from text prompts, enabled by
three key innovations: (1) A minimalist architecture that maximizes reuse of
pre-trained image generation models for video generation; (2) A systematic
multi-stage training strategy leveraging flow matching for enhanced efficiency;
and (3) A cost-effective reinforcement learning with human feedback framework
that improves generation quality without requiring additional human
annotations. All the code and models are available at:
https://contentv.github.io. | 2025-06-05T17:59:54Z | Project Page: https://contentv.github.io | null | null | ContentV: Efficient Training of Video Generation Models with Limited Compute | ['Wenfeng Lin', 'Renjie Chen', 'Boyuan Liu', 'Shiyue Yan', 'Ruoyu Feng', 'Jiangchuan Wei', 'Yichen Zhang', 'Yimeng Zhou', 'Chao Feng', 'Jiao Ran', 'Qi Wu', 'Zuotao Liu', 'Mingyu Guo'] | 2025 | arXiv.org | 0 | 51 | ['Computer Science'] |
2506.05426 | Mixture-of-Experts Meets In-Context Reinforcement Learning | ['Wenhao Wu', 'Fuhong Liu', 'Haoru Li', 'Zican Hu', 'Daoyi Dong', 'Chunlin Chen', 'Zhi Wang'] | ['cs.LG', 'cs.AI'] | In-context reinforcement learning (ICRL) has emerged as a promising paradigm
for adapting RL agents to downstream tasks through prompt conditioning.
However, two notable challenges remain in fully harnessing in-context learning
within RL domains: the intrinsic multi-modality of the state-action-reward data
and the diverse, heterogeneous nature of decision tasks. To tackle these
challenges, we propose \textbf{T2MIR} (\textbf{T}oken- and \textbf{T}ask-wise
\textbf{M}oE for \textbf{I}n-context \textbf{R}L), an innovative framework that
introduces architectural advances of mixture-of-experts (MoE) into
transformer-based decision models. T2MIR substitutes the feedforward layer with
two parallel layers: a token-wise MoE that captures distinct semantics of input
tokens across multiple modalities, and a task-wise MoE that routes diverse
tasks to specialized experts for managing a broad task distribution with
alleviated gradient conflicts. To enhance task-wise routing, we introduce a
contrastive learning method that maximizes the mutual information between the
task and its router representation, enabling more precise capture of
task-relevant information. The outputs of two MoE components are concatenated
and fed into the next layer. Comprehensive experiments show that T2MIR
significantly facilitates in-context learning capacity and outperforms various
types of baselines. We bring the potential and promise of MoE to ICRL, offering
a simple and scalable architectural enhancement that moves ICRL one step
closer to the advances seen in the language and vision communities. Our code is available
at https://github.com/NJU-RL/T2MIR. | 2025-06-05T06:29:14Z | 26 pages, 13 figures | null | null | Mixture-of-Experts Meets In-Context Reinforcement Learning | ['Wenhao Wu', 'Fuhong Liu', 'Haoru Li', 'Zican Hu', 'Daoyi Dong', 'Chunlin Chen', 'Zhi Wang'] | 2025 | arXiv.org | 0 | 66 | ['Computer Science'] |
2506.05446 | Sentinel: SOTA model to protect against prompt injections | ['Dror Ivry', 'Oran Nahum'] | ['cs.CR', 'cs.AI'] | Large Language Models (LLMs) are increasingly powerful but remain vulnerable
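The token-wise MoE described in the T2MIR abstract above replaces the feedforward layer with router-gated parallel experts. A toy dense-gated, token-wise layer in plain Python (illustrative only; the function names, linear dot-product router, and list-based tensors are assumptions, not the T2MIR implementation):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def token_wise_moe(tokens, experts, router_weights):
    """Route each token through all experts, fusing outputs by router gates.

    tokens: list of feature vectors; experts: list of callables taking a
    vector to a vector; router_weights: one weight vector per expert.
    """
    outputs = []
    for t in tokens:
        # Linear router: one logit per expert via a dot product with the token.
        logits = [sum(wi * ti for wi, ti in zip(w, t)) for w in router_weights]
        gates = softmax(logits)
        expert_outs = [expert(t) for expert in experts]
        dim = len(expert_outs[0])
        # Convex combination of expert outputs (dense/soft routing).
        combined = [sum(g * out[i] for g, out in zip(gates, expert_outs))
                    for i in range(dim)]
        outputs.append(combined)
    return outputs
```

With a sharply peaked router, the gate concentrates almost all weight on one expert, which is the intuition behind routing distinct token modalities to specialized experts.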
to prompt injection attacks, where malicious inputs cause the model to deviate
from its intended instructions. This paper introduces Sentinel, a novel
detection model, qualifire/prompt-injection-sentinel, based on the
answerdotai/ModernBERT-large architecture. By leveraging ModernBERT's advanced
features and fine-tuning on an extensive and diverse dataset comprising several
open-source and private collections, Sentinel achieves state-of-the-art
performance. This dataset amalgamates varied attack types, from role-playing
and instruction hijacking to attempts to generate biased content, alongside a
broad spectrum of benign instructions, with private datasets specifically
targeting nuanced error correction and real-world misclassifications. On a
comprehensive, unseen internal test set, Sentinel demonstrates an average
accuracy of 0.987 and an F1-score of 0.980. Furthermore, when evaluated on
public benchmarks, it consistently outperforms strong baselines like
protectai/deberta-v3-base-prompt-injection-v2. This work details Sentinel's
architecture, its meticulous dataset curation, its training methodology, and a
thorough evaluation, highlighting its superior detection capabilities. | 2025-06-05T14:07:15Z | 6 pages, 2 tables | null | null | Sentinel: SOTA model to protect against prompt injections | ['Dror Ivry', 'Oran Nahum'] | 2025 | arXiv.org | 0 | 22 | ['Computer Science'] |
2506.05501 | FocusDiff: Advancing Fine-Grained Text-Image Alignment for
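The accuracy (0.987) and F1 (0.980) quoted in the Sentinel abstract above are the standard binary-classification metrics; a minimal self-contained sketch of how such figures are computed (not the paper's evaluation code):

```python
def binary_f1(y_true, y_pred):
    """F1 for the positive (injection = 1) class from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the gold labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```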
Autoregressive Visual Generation through RL | ['Kaihang Pan', 'Wendong Bu', 'Yuruo Wu', 'Yang Wu', 'Kai Shen', 'Yunfei Li', 'Hang Zhao', 'Juncheng Li', 'Siliang Tang', 'Yueting Zhuang'] | ['cs.CV'] | Recent studies extend the autoregression paradigm to text-to-image
generation, achieving performance comparable to diffusion models. However, our
new PairComp benchmark -- featuring test cases of paired prompts with similar
syntax but different fine-grained semantics -- reveals that existing models
struggle with fine-grained text-image alignment thus failing to realize precise
control over visual tokens. To address this, we propose FocusDiff, which
enhances fine-grained text-image semantic alignment by focusing on subtle
differences between similar text-image pairs. We construct a new dataset of
paired texts and images with similar overall expressions but distinct local
semantics, further introducing a novel reinforcement learning algorithm to
emphasize such fine-grained semantic differences for desired image generation.
Our approach achieves state-of-the-art performance on existing text-to-image
benchmarks and significantly outperforms prior methods on PairComp. | 2025-06-05T18:36:33Z | 15 pages, 8 figures. Project Page: https://focusdiff.github.io/ | null | null | FocusDiff: Advancing Fine-Grained Text-Image Alignment for Autoregressive Visual Generation through RL | ['Kaihang Pan', 'Wendong Bu', 'Yuruo Wu', 'Yang Wu', 'Kai Shen', 'Yunfei Li', 'Hang Zhao', 'Juncheng Li', 'Siliang Tang', 'Yueting Zhuang'] | 2025 | arXiv.org | 0 | 38 | ['Computer Science'] |
2506.05573 | PartCrafter: Structured 3D Mesh Generation via Compositional Latent
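The FocusDiff abstract above leaves the reinforcement learning algorithm unspecified; one common way to emphasize differences between paired samples is a group-relative advantage, where each sample's reward is centered on its pair's mean. This sketch is an assumption for illustration, not the paper's method:

```python
def pairwise_advantages(scores):
    """Group-relative advantages: each sample's alignment score minus the
    mean of its group, so only within-pair differences drive the update."""
    mean = sum(scores) / len(scores)
    return [s - mean for s in scores]

def grouped_advantages(groups):
    """Apply pairwise_advantages within each prompt-pair group."""
    return [pairwise_advantages(g) for g in groups]
```

Centering per pair means identical scores yield zero advantage everywhere, so the policy gradient responds only to the fine-grained semantic differences between the paired generations.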
Diffusion Transformers | ['Yuchen Lin', 'Chenguo Lin', 'Panwang Pan', 'Honglei Yan', 'Yiqiang Feng', 'Yadong Mu', 'Katerina Fragkiadaki'] | ['cs.CV'] | We introduce PartCrafter, the first structured 3D generative model that
jointly synthesizes multiple semantically meaningful and geometrically distinct
3D meshes from a single RGB image. Unlike existing methods that either produce
monolithic 3D shapes or follow two-stage pipelines, i.e., first segmenting an
image and then reconstructing each segment, PartCrafter adopts a unified,
compositional generation architecture that does not rely on pre-segmented
inputs. Conditioned on a single image, it simultaneously denoises multiple 3D
parts, enabling end-to-end part-aware generation of both individual objects and
complex multi-object scenes. PartCrafter builds upon a pretrained 3D mesh
diffusion transformer (DiT) trained on whole objects, inheriting the pretrained
weights, encoder, and decoder, and introduces two key innovations: (1) A
compositional latent space, where each 3D part is represented by a set of
disentangled latent tokens; (2) A hierarchical attention mechanism that enables
structured information flow both within individual parts and across all parts,
ensuring global coherence while preserving part-level detail during generation.
To support part-level supervision, we curate a new dataset by mining part-level
annotations from large-scale 3D object datasets. Experiments show that
PartCrafter outperforms existing approaches in generating decomposable 3D
meshes, including parts that are not directly visible in input images,
demonstrating the strength of part-aware generative priors for 3D understanding
and synthesis. Code and training data will be released. | 2025-06-05T20:30:28Z | Project Page: https://wgsxm.github.io/projects/partcrafter/ | null | null | null | null | null | null | null | null | null |
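The hierarchical attention described in the PartCrafter abstract above, with information flow both within individual parts and across all parts, can be expressed as two attention masks; a toy sketch (the paper's actual masking scheme may differ):

```python
def block_diagonal_mask(part_ids):
    """Within-part attention: token i may attend token j iff they share a
    part id (a block-diagonal boolean mask)."""
    n = len(part_ids)
    return [[part_ids[i] == part_ids[j] for j in range(n)] for i in range(n)]

def global_mask(n):
    """Across-part attention: every token may attend every token."""
    return [[True] * n for _ in range(n)]
```

Alternating (or combining) the two masks gives local layers that preserve part-level detail and global layers that enforce coherence across the whole object or scene.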
2506.05587 | MMTU: A Massive Multi-Task Table Understanding and Reasoning Benchmark | ['Junjie Xing', 'Yeye He', 'Mengyu Zhou', 'Haoyu Dong', 'Shi Han', 'Lingjiao Chen', 'Dongmei Zhang', 'Surajit Chaudhuri', 'H. V. Jagadish'] | ['cs.AI', 'cs.CL', 'cs.DB', 'cs.LG'] | Tables and table-based use cases play a crucial role in many important
real-world applications, such as spreadsheets, databases, and computational
notebooks, which traditionally require expert-level users like data engineers,
data analysts, and database administrators to operate. Although LLMs have shown
remarkable progress in working with tables (e.g., in spreadsheet and database
copilot scenarios), comprehensive benchmarking of such capabilities remains
limited. In contrast to an extensive and growing list of NLP benchmarks,
evaluations of table-related tasks are scarce, and narrowly focus on tasks like
NL-to-SQL and Table-QA, overlooking the broader spectrum of real-world tasks
that professional users face. This gap limits our understanding and model
progress in this important area.
In this work, we introduce MMTU, a large-scale benchmark with over 30K
questions across 25 real-world table tasks, designed to comprehensively
evaluate models' ability to understand, reason, and manipulate real tables at
the expert-level. These tasks are drawn from decades' worth of computer science
research on tabular data, with a focus on complex table tasks faced by
professional users. We show that MMTU requires a combination of skills --
including table understanding, reasoning, and coding -- that remain challenging
for today's frontier models, where even frontier reasoning models like OpenAI
o4-mini and DeepSeek R1 score only around 60%, suggesting significant room for
improvement. We highlight key findings in our evaluation using MMTU and hope
that this benchmark drives further advances in understanding and developing
foundation models for structured data processing and analysis. Our code and
data are available at https://github.com/MMTU-Benchmark/MMTU and
https://huggingface.co/datasets/MMTU-benchmark/MMTU. | 2025-06-05T21:05:03Z | null | null | null | MMTU: A Massive Multi-Task Table Understanding and Reasoning Benchmark | ['Junjie Xing', 'Yeye He', 'Mengyu Zhou', 'Haoyu Dong', 'Shi Han', 'Lingjiao Chen', 'Dongmei Zhang', 'Surajit Chaudhuri', 'H. V. Jagadish'] | 2025 | arXiv.org | 0 | 129 | ['Computer Science'] |
2506.05673 | Peer-Ranked Precision: Creating a Foundational Dataset for Fine-Tuning
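Benchmark scores like the roughly 60% reported in the MMTU abstract above reduce to per-task accuracies and their macro average over the 25 tasks; a minimal scoring sketch (illustrative; the `records` shape and task names are hypothetical):

```python
from collections import defaultdict

def per_task_accuracy(records):
    """records: (task, gold, prediction) triples.
    Returns per-task accuracy and the macro average over tasks."""
    hits, totals = defaultdict(int), defaultdict(int)
    for task, gold, pred in records:
        totals[task] += 1
        hits[task] += int(gold == pred)
    per_task = {t: hits[t] / totals[t] for t in totals}
    macro = sum(per_task.values()) / len(per_task)
    return per_task, macro
```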
Vision Models from DataSeeds' Annotated Imagery | ['Sajjad Abdoli', 'Freeman Lewin', 'Gediminas Vasiliauskas', 'Fabian Schonholz'] | ['cs.LG', 'cs.AI', 'cs.CV'] | The development of modern Artificial Intelligence (AI) models, particularly
diffusion-based models employed in computer vision and image generation tasks,
is undergoing a paradigmatic shift in development methodologies. Traditionally
dominated by a "Model-Centric" approach, in which performance gains were
primarily pursued through increasingly complex model architectures and
hyperparameter optimization, the field is now recognizing a more nuanced
"Data-Centric" approach. This emergent framework foregrounds the quality,
structure, and relevance of training data as the principal driver of model
performance. To operationalize this paradigm shift, we introduce the
DataSeeds.AI sample dataset (the "DSD"), initially comprising approximately
10,610 high-quality human peer-ranked photography images accompanied by
extensive multi-tier annotations. The DSD is a foundational computer vision
dataset designed to usher in a new standard for commercial image datasets.
Representing a small fraction of DataSeeds.AI's 100 million-plus image catalog,
the DSD provides a scalable foundation necessary for robust commercial and
multimodal AI development. Through this in-depth exploratory analysis, we
document the quantitative improvements generated by the DSD on specific models
against known benchmarks and make the code and the trained models used in our
evaluation publicly available. | 2025-06-06T01:50:28Z | 28 pages, 12 figures | null | null | Peer-Ranked Precision: Creating a Foundational Dataset for Fine-Tuning Vision Models from DataSeeds' Annotated Imagery | ['Sajjad Abdoli', 'Freeman Lewin', 'Gediminas Vasiliauskas', 'Fabian Schonholz'] | 2025 | arXiv.org | 0 | 14 | ['Computer Science'] |
2506.05700 | RKEFino1: A Regulation Knowledge-Enhanced Large Language Model | ['Yan Wang', 'Yueru He', 'Ruoyu Xiang', 'Jeff Zhao'] | ['cs.CL', 'cs.AI'] | Recent advances in large language models (LLMs) hold great promise for
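The DataSeeds abstract above does not say how the human peer rankings are aggregated into per-image scores; one standard choice for pairwise preferences is an Elo-style update, sketched here purely as an assumption about the ranking mechanism:

```python
def elo_update(r_winner, r_loser, k=32.0):
    """One Elo update from a single pairwise preference."""
    expected_w = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    r_w = r_winner + k * (1.0 - expected_w)
    r_l = r_loser - k * (1.0 - expected_w)
    return r_w, r_l

def rank_images(comparisons, start=1500.0):
    """Fold a stream of (winner, loser) peer judgments into ratings."""
    ratings = {}
    for winner, loser in comparisons:
        rw = ratings.get(winner, start)
        rl = ratings.get(loser, start)
        ratings[winner], ratings[loser] = elo_update(rw, rl)
    return ratings
```

Bradley-Terry fitting is the other common aggregation for such data; either way, images that win more peer comparisons float to the top of the quality ranking.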
financial applications but introduce critical accuracy and compliance
challenges in Digital Regulatory Reporting (DRR). To address these issues, we
propose RKEFino1, a regulation knowledge-enhanced financial reasoning model
built upon Fino1, fine-tuned with domain knowledge from XBRL, CDM, and MOF. We
formulate two QA tasks, knowledge-based and mathematical reasoning, and introduce
a novel Numerical NER task covering financial entities in both sentences and
tables. Experimental results demonstrate the effectiveness and generalization
capacity of RKEFino1 in compliance-critical financial tasks. We have released
our model on Hugging Face. | 2025-06-06T03:02:52Z | null | null | null | null | null | null | null | null | null | null |
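Numerical NER, as described in the RKEFino1 abstract above, starts with locating numeric mentions in sentences and tables; a toy regex-based extractor (an illustration of the task, not the model):

```python
import re

# Matches thousands-separated numbers like "1,250.75" (first alternative)
# or plain integers/decimals like "3.2", each with an optional % sign.
NUMERIC = re.compile(r'\d{1,3}(?:,\d{3})+(?:\.\d+)?%?|\d+(?:\.\d+)?%?')

def numeric_mentions(text):
    """Return (span_text, start, end) for each numeric mention found."""
    return [(m.group(), m.start(), m.end()) for m in NUMERIC.finditer(text)]
```

A trained NER model would additionally type each span (e.g. monetary amount vs. percentage vs. date component), which plain regex matching cannot do reliably.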
2506.05767 | dots.llm1 Technical Report | ['Bi Huo', 'Bin Tu', 'Cheng Qin', 'Da Zheng', 'Debing Zhang', 'Dongjie Zhang', 'En Li', 'Fu Guo', 'Jian Yao', 'Jie Lou', 'Junfeng Tian', 'Li Hu', 'Ran Zhu', 'Shengdong Chen', 'Shuo Liu', 'Su Guang', 'Te Wo', 'Weijun Zhang', 'Xiaoming Shi', 'Xinxin Peng', 'Xing Wu', 'Yawen Liu', 'Yuqiu Ji', 'Ze Wen', 'Zhenhai Liu', 'Zichao Li', 'Zilong Liao'] | ['cs.CL', 'cs.AI'] | Mixture of Experts (MoE) models have emerged as a promising paradigm for
scaling language models efficiently by activating only a subset of parameters
for each input token. In this report, we present dots.llm1, a large-scale MoE
model that activates 14B parameters out of a total of 142B parameters,
delivering performance on par with state-of-the-art models while reducing
training and inference costs. Leveraging our meticulously crafted and efficient
data processing pipeline, dots.llm1 achieves performance comparable to
Qwen2.5-72B after pretraining on 11.2T high-quality tokens and post-training to
fully unlock its capabilities. Notably, no synthetic data is used during
pretraining. To foster further research, we open-source intermediate training
checkpoints at every one trillion tokens, providing valuable insights into the
learning dynamics of large language models. | 2025-06-06T05:51:29Z | null | null | null | dots.llm1 Technical Report | ['Bi Huo', 'Bin Tu', 'Cheng Qin', 'Da Zheng', 'Debing Zhang', 'Dongjie Zhang', 'En Li', 'Fu Guo', 'Jian Yao', 'Jie Lou', 'Junfeng Tian', 'Li Hu', 'Ran Zhu', 'Shengdong Chen', 'Shuo Liu', 'Su Guang', 'Te Wo', 'Weijun Zhang', 'Xiaoming Shi', 'Xinxin Peng', 'Xing Wu', 'Yawen Liu', 'Yuqiu Ji', 'Ze Wen', 'Zhenhai Liu', 'Zichao Li', 'Zilong Liao'] | 2025 | arXiv.org | 0 | 78 | ['Computer Science'] |
2506.05928 | MoA: Heterogeneous Mixture of Adapters for Parameter-Efficient
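The 14B-active out of 142B-total split in the dots.llm1 abstract above follows from running only a top-k subset of experts per token while shared parameters (attention, embeddings) always run; a back-of-envelope helper with purely hypothetical numbers, not the report's actual configuration:

```python
def active_params(total_params, shared_params, n_experts, top_k):
    """Rough active-parameter count for an MoE model: shared parameters
    always execute; only top_k of n_experts expert blocks run per token,
    assuming equally sized experts."""
    expert_params = total_params - shared_params
    return shared_params + expert_params * top_k / n_experts
```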
Fine-Tuning of Large Language Models | ['Jie Cao', 'Tianwei Lin', 'Hongyang He', 'Rolan Yan', 'Wenqiao Zhang', 'Juncheng Li', 'Dongping Zhang', 'Siliang Tang', 'Yueting Zhuang'] | ['cs.CL', 'cs.AI'] | Recent studies integrate Low-Rank Adaptation (LoRA) and Mixture-of-Experts
(MoE) to further enhance the performance of parameter-efficient fine-tuning
(PEFT) methods in Large Language Model (LLM) applications. Existing methods
employ \emph{homogeneous} MoE-LoRA architectures composed of LoRA experts with
either similar or identical structures and capacities. However, these
approaches often suffer from representation collapse and expert load imbalance,
which negatively impact the potential of LLMs. To address these challenges, we
propose a \emph{heterogeneous} \textbf{Mixture-of-Adapters (MoA)} approach.
This method dynamically integrates PEFT adapter experts with diverse
structures, leveraging their complementary representational capabilities to
foster expert specialization, thereby enhancing the effective transfer of
pre-trained knowledge to downstream tasks. MoA supports two variants:
\textbf{(i)} \textit{Soft MoA} achieves fine-grained integration by performing
a weighted fusion of all expert outputs; \textbf{(ii)} \textit{Sparse MoA}
activates adapter experts sparsely based on their contribution, achieving this
with negligible performance degradation. Experimental results demonstrate that
heterogeneous MoA outperforms homogeneous MoE-LoRA methods in both performance
and parameter efficiency. Our project is available at
https://github.com/DCDmllm/MoA. | 2025-06-06T09:54:19Z | null | null | null | MoA: Heterogeneous Mixture of Adapters for Parameter-Efficient Fine-Tuning of Large Language Models | ['Jie Cao', 'Tianwei Lin', 'Hongyang He', 'Rolan Yan', 'Wenqiao Zhang', 'Juncheng Li', 'Dongping Zhang', 'Siliang Tang', 'Yueting Zhuang'] | 2025 | arXiv.org | 0 | 39 | ['Computer Science'] |
2506.06006 | Bootstrapping World Models from Dynamics Models in Multimodal Foundation
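Soft vs. Sparse MoA, as described in the abstract above, differ only in whether all adapter-expert outputs are fused or only the top-k by gate weight; a minimal sketch with list-based tensors (names and shapes are assumptions, not the MoA code):

```python
def soft_moa(expert_outputs, gates):
    """Soft MoA: weighted fusion of all adapter-expert outputs."""
    dim = len(expert_outputs[0])
    return [sum(g * out[i] for g, out in zip(gates, expert_outputs))
            for i in range(dim)]

def sparse_moa(expert_outputs, gates, k=1):
    """Sparse MoA: keep only the top-k gates, renormalize them, then fuse,
    so most experts are skipped entirely at inference."""
    top = sorted(range(len(gates)), key=lambda i: gates[i], reverse=True)[:k]
    total = sum(gates[i] for i in top)
    masked = [gates[i] / total if i in top else 0.0 for i in range(len(gates))]
    return soft_moa(expert_outputs, masked)
```

Heterogeneity enters through the experts themselves (different adapter structures and capacities); the fusion logic above is agnostic to what each expert computes.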
Models | ['Yifu Qiu', 'Yftah Ziser', 'Anna Korhonen', 'Shay B. Cohen', 'Edoardo M. Ponti'] | ['cs.CV', 'cs.AI', 'cs.CL'] | To what extent do vision-and-language foundation models possess a realistic
world model (observation $\times$ action $\rightarrow$ observation) and a
dynamics model (observation $\times$ observation $\rightarrow$ action), when
actions are expressed through language? While open-source foundation models
struggle with both, we find that fine-tuning them to acquire a dynamics model
through supervision is significantly easier than acquiring a world model. In
turn, dynamics models can be used to bootstrap world models through two main
strategies: 1) weakly supervised learning from synthetic data and 2) inference
time verification. Firstly, the dynamics model can annotate actions for
unlabelled pairs of video frame observations to expand the training data. We
further propose a new objective, where image tokens in observation pairs are
weighted by their importance, as predicted by a recognition model. Secondly,
the dynamics models can assign rewards to multiple samples of the world model
to score them, effectively guiding search at inference time. We evaluate the
world models resulting from both strategies through the task of action-centric
image editing on Aurora-Bench. Our best model achieves a performance
competitive with state-of-the-art image editing models, improving on them by a
margin of $15\%$ on real-world subsets according to GPT4o-as-judge, and
achieving the best average human evaluation across all subsets of Aurora-Bench. | 2025-06-06T11:50:18Z | null | null | null | null | null | null | null | null | null | null |
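Inference-time verification (the second bootstrapping strategy in the abstract above) amounts to best-of-n sampling from the world model, scored by the dynamics model; a minimal sketch where both models are hypothetical callables:

```python
def verified_prediction(world_model, verifier, observation, action, n=4):
    """Sample n next-observation candidates from the world model and keep
    the one the verifier (dynamics model) scores highest.

    world_model(obs, action, i) -> i-th candidate observation;
    verifier(candidate) -> scalar score (higher is better).
    """
    candidates = [world_model(observation, action, i) for i in range(n)]
    return max(candidates, key=verifier)
```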
2506.06144 | CLaMR: Contextualized Late-Interaction for Multimodal Content Retrieval | ['David Wan', 'Han Wang', 'Elias Stengel-Eskin', 'Jaemin Cho', 'Mohit Bansal'] | ['cs.CV', 'cs.CL', 'cs.IR'] | Online video web content is richly multimodal: a single video blends vision,
speech, ambient audio, and on-screen text. Retrieval systems typically treat
these modalities as independent retrieval sources, which can lead to noisy and
subpar retrieval. We explore multimodal video content retrieval, where
relevance can be scored from one particular modality or jointly across multiple
modalities simultaneously. Consequently, an effective retriever must
dynamically choose which modality (or set of modalities) best addresses the
query. We introduce CLaMR, a multimodal, late-interaction retriever that
jointly indexes 4 modalities: video frames, transcribed speech, on-screen text,
and metadata. CLaMR jointly encodes all modalities with a unified multimodal
backbone for improved contextualization and is trained to enhance dynamic
modality selection via two key innovations. First, given the lack of training
data for multimodal retrieval, we introduce MultiVENT 2.0++, a large-scale
synthetic training dataset built on MultiVENT 2.0 (event-centric videos in
various languages paired with queries) with modality-targeted queries. Next, we
propose a modality-aware loss that jointly trains according to a standard
contrastive objective alongside an objective for learning correct modality
usage. On the test sets of MultiVENT 2.0++ and MSRVTT, conventional aggregation
strategies, such as averaging similarities for baseline retrievers, degrade
performance by introducing noise from irrelevant modalities. In contrast, CLaMR
consistently outperforms existing retrievers: on MultiVENT 2.0++, CLaMR
improves nDCG@10 by 25.6 over the best single-modality retriever and by 35.4
over the best multi-modality retriever. We illustrate CLaMR's downstream
utility on long-video QA, retrieving relevant frames and obtaining a 3.50%
boost over LanguageBind on Video-MME and 1.42% over dense sampling on
LongVideoBench. | 2025-06-06T15:02:30Z | 18 pages. Code and data: https://github.com/meetdavidwan/clamr | null | null | null | null | null | null | null | null | null |
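Late interaction, as in the CLaMR abstract above, scores a query against token-level embeddings; when the four modality streams are pooled into one token set, the per-query-token max can select whichever modality answers that token best. A toy ColBERT-style MaxSim sketch (illustrative, not CLaMR's implementation):

```python
def maxsim_score(query_vecs, doc_vecs):
    """Late interaction: sum over query tokens of the max dot product
    against any document-side token."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return sum(max(dot(q, d) for d in doc_vecs) for q in query_vecs)

def multimodal_maxsim(query_vecs, modality_streams):
    """Score against the union of tokens from all modality streams (e.g.
    frames, speech, on-screen text, metadata), so each query token's max
    is free to come from a different modality."""
    all_tokens = [v for stream in modality_streams for v in stream]
    return maxsim_score(query_vecs, all_tokens)
```

This pooling is why naive aggregation (averaging per-modality similarities) can be degraded by irrelevant modalities, while a per-token max simply ignores them.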
2506.06270 | RecGPT: A Foundation Model for Sequential Recommendation | ['Yangqin Jiang', 'Xubin Ren', 'Lianghao Xia', 'Da Luo', 'Kangyi Lin', 'Chao Huang'] | ['cs.IR'] | This work addresses a fundamental barrier in recommender systems: the
inability to generalize across domains without extensive retraining.
Traditional ID-based approaches fail entirely in cold-start and cross-domain
scenarios where new users or items lack sufficient interaction history.
Inspired by foundation models' cross-domain success, we develop a foundation
model for sequential recommendation that achieves genuine zero-shot
generalization capabilities. Our approach fundamentally departs from existing
ID-based methods by deriving item representations exclusively from textual
features. This enables immediate embedding of any new item without model
retraining. We introduce unified item tokenization with Finite Scalar
Quantization that transforms heterogeneous textual descriptions into
standardized discrete tokens. This eliminates domain barriers that plague
existing systems. Additionally, the framework features hybrid
bidirectional-causal attention that captures both intra-item token coherence
and inter-item sequential dependencies. An efficient catalog-aware beam search
decoder enables real-time token-to-item mapping. Unlike conventional approaches
confined to their training domains, RecGPT naturally bridges diverse
recommendation contexts through its domain-invariant tokenization mechanism.
Comprehensive evaluations across six datasets and industrial scenarios
demonstrate consistent performance advantages. | 2025-06-06T17:53:02Z | null | null | null | null | null | null | null | null | null | null |
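Finite Scalar Quantization, used for item tokenization in the RecGPT abstract above, maps a bounded embedding to a discrete token by snapping each dimension to a small number of evenly spaced levels; a minimal sketch (the level counts here are hypothetical, not RecGPT's configuration):

```python
def fsq_quantize(z, levels):
    """FSQ: clamp each dimension to [-1, 1], snap it to one of levels[d]
    evenly spaced values, and return the per-dimension discrete code."""
    code = []
    for x, L in zip(z, levels):
        x = max(-1.0, min(1.0, x))
        idx = round((x + 1.0) / 2.0 * (L - 1))
        code.append(int(idx))
    return code

def fsq_token_id(code, levels):
    """Mixed-radix pack of a per-dimension code into a single token id,
    giving a vocabulary of prod(levels) item tokens."""
    token, base = 0, 1
    for idx, L in zip(code, levels):
        token += idx * base
        base *= L
    return token
```

Because the codebook is implicit (a fixed grid rather than learned vectors), any new item's textual embedding maps to a token immediately, which is what enables embedding new items without retraining.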
2506.06279 | CoMemo: LVLMs Need Image Context with Image Memory | ['Shi Liu', 'Weijie Su', 'Xizhou Zhu', 'Wenhai Wang', 'Jifeng Dai'] | ['cs.CV'] | Recent advancements in Large Vision-Language Models built upon Large Language
Models have established aligning visual features with LLM representations as
the dominant paradigm. However, inherited LLM architectural designs introduce
suboptimal characteristics for multimodal processing. First, LVLMs exhibit a
bimodal distribution in attention allocation, leading to the progressive
neglect of middle visual content as context expands. Second, conventional
positional encoding schemes fail to preserve vital 2D structural relationships
when processing dynamic high-resolution images. To address these limitations,
we propose CoMemo - a dual-path architecture that combines a Context image path
with an image Memory path for visual processing, effectively alleviating visual
information neglect. Additionally, we introduce RoPE-DHR, a novel positional
encoding mechanism that employs thumbnail-based positional aggregation to
maintain 2D spatial awareness while mitigating remote decay in extended
sequences. Evaluations across seven benchmarks, including long-context
comprehension, multi-image reasoning, and visual question answering,
demonstrate CoMemo's superior performance compared to conventional LVLM
architectures. Project page is available at
https://lalbj.github.io/projects/CoMemo/. | 2025-06-06T17:59:06Z | ICML 2025 | null | null | null | null | null | null | null | null | null |
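RoPE-DHR, from the CoMemo abstract above, builds on standard rotary position embedding (RoPE), which rotates consecutive feature pairs by position-dependent angles; the sketch below shows only plain 1D RoPE, since the thumbnail-based positional aggregation is not detailed in the abstract:

```python
import math

def rope_rotate(vec, position, base=10000.0):
    """Standard RoPE: rotate consecutive (even, odd) feature pairs by
    angles that shrink geometrically with pair index, so inner products
    between rotated vectors depend on relative position only."""
    out = list(vec)
    d = len(vec)
    for i in range(0, d, 2):
        theta = position / (base ** (i / d))
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out[i] = x * c - y * s
        out[i + 1] = x * s + y * c
    return out
```

Each pair rotation is norm-preserving, and position 0 is the identity; schemes like RoPE-DHR change which position index each visual token receives, not the rotation itself.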