| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2506.13796 | ClimateChat: Designing Data and Methods for Instruction Tuning LLMs to
Answer Climate Change Queries | ['Zhou Chen', 'Xiao Wang', 'Yuanhong Liao', 'Ming Lin', 'Yuqi Bai'] | ['cs.CL', 'cs.AI'] | As the issue of global climate change becomes increasingly severe, the demand
for research in climate science continues to grow. Natural language processing
technologies, represented by Large Language Models (LLMs), have been widely
applied to climate change-specific research, providing essential information
support for decision-makers and the public. Some studies have improved model
performance on relevant tasks by constructing climate change-related
instruction data and instruction-tuning LLMs. However, current research remains
inadequate in efficiently producing large volumes of high-precision instruction
data for climate change, which limits further development of climate change
LLMs. This study introduces an automated method for constructing instruction
data. The method generates instructions using facts and background knowledge
from documents and enhances the diversity of the instruction data through web
scraping and the collection of seed instructions. Using this method, we
constructed a climate change instruction dataset, named ClimateChat-Corpus,
which was used to fine-tune open-source LLMs, resulting in an LLM named
ClimateChat. Evaluation results show that ClimateChat significantly improves
performance on climate change question-and-answer tasks. Additionally, we
evaluated the impact of different base models and instruction data on LLM
performance and demonstrated ClimateChat's capability to adapt to a wide range of climate
change scientific discovery tasks, emphasizing the importance of selecting an
appropriate base model for instruction tuning. This research provides valuable
references and empirical support for constructing climate change instruction
data and training climate change-specific LLMs. | 2025-06-12T08:43:38Z | ICLR 2025 camera ready, 13 pages, 4 figures, 4 tables | null | null | ClimateChat: Designing Data and Methods for Instruction Tuning LLMs to Answer Climate Change Queries | ['Zhou Chen', 'Xiao Wang', 'Yuanhong Liao', 'Ming Lin', 'Yuqi Bai'] | 2025 | arXiv.org | 1 | 25 | ['Computer Science'] |
2506.14111 | Essential-Web v1.0: 24T tokens of organized web data | ['Essential AI', ':', 'Andrew Hojel', 'Michael Pust', 'Tim Romanski', 'Yash Vanjani', 'Ritvik Kapila', 'Mohit Parmar', 'Adarsh Chaluvaraju', 'Alok Tripathy', 'Anil Thomas', 'Ashish Tanwer', 'Darsh J Shah', 'Ishaan Shah', 'Karl Stratos', 'Khoi Nguyen', 'Kurt Smith', 'Michael Callahan', 'Peter Rushton', 'Philip Monk', 'Platon Mazarakis', 'Saad Jamal', 'Saurabh Srivastava', 'Somanshu Singla', 'Ashish Vaswani'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Data plays the most prominent role in how language models acquire skills and
knowledge. The lack of massive, well-organized pre-training datasets results in
costly and inaccessible data pipelines. We present Essential-Web v1.0, a
24-trillion-token dataset in which every document is annotated with a
twelve-category taxonomy covering topic, format, content complexity, and
quality. Taxonomy labels are produced by EAI-Distill-0.5b, a fine-tuned
0.5b-parameter model that achieves an annotator agreement within 3% of
Qwen2.5-32B-Instruct. With nothing more than SQL-style filters, we obtain
competitive web-curated datasets in math (-8.0% relative to SOTA), web code
(+14.3%), STEM (+24.5%) and medical (+8.6%). Essential-Web v1.0 is available on
HuggingFace: https://huggingface.co/datasets/EssentialAI/essential-web-v1.0 | 2025-06-17T02:03:36Z | include MegaMath-Web-Pro | null | null | null | null | null | null | null | null | null |
2506.14175 | GRAM: A Generative Foundation Reward Model for Reward Generalization | ['Chenglong Wang', 'Yang Gan', 'Yifu Huo', 'Yongyu Mu', 'Qiaozhi He', 'Murun Yang', 'Bei Li', 'Tong Xiao', 'Chunliang Zhang', 'Tongran Liu', 'Jingbo Zhu'] | ['cs.CL', 'cs.AI'] | In aligning large language models (LLMs), reward models have played an
important role, but are typically trained as discriminative models and rely
only on labeled human preference data. In this paper, we explore methods that
train reward models using both unlabeled and labeled data. Building on the
generative models in LLMs, we develop a generative reward model that is first
trained via large-scale unsupervised learning and then fine-tuned via
supervised learning. We also show that by using label smoothing, we are in fact
optimizing a regularized pairwise ranking loss. This result, in turn, provides
a new view of training reward models, which links generative models and
discriminative models under the same class of training objectives. The outcome
of these techniques is a foundation reward model, which can be applied to a
wide range of tasks with little or no further fine-tuning effort. Extensive
experiments show that this model generalizes well across several tasks,
including response ranking, reinforcement learning from human feedback, and
task adaptation with fine-tuning, achieving significant performance
improvements over several strong baseline models. | 2025-06-17T04:34:27Z | Accepted by ICML 2025 | null | null | GRAM: A Generative Foundation Reward Model for Reward Generalization | ['Chenglong Wang', 'Yang Gan', 'Yifu Huo', 'Yongyu Mu', 'Qiaozhi He', 'Murun Yang', 'Bei Li', 'Tong Xiao', 'Chunliang Zhang', 'Tongran Liu', 'Jingbo Zhu'] | 2025 | arXiv.org | 0 | 53 | ['Computer Science'] |
2506.14512 | SIRI-Bench: Challenging VLMs' Spatial Intelligence through Complex
Reasoning Tasks | ['Zijian Song', 'Xiaoxin Lin', 'Qiuming Huang', 'Guangrun Wang', 'Liang Lin'] | ['cs.CV'] | Large Language Models (LLMs) are experiencing rapid advancements in complex
reasoning, exhibiting remarkable generalization in mathematics and programming.
In contrast, while spatial intelligence is fundamental for Vision-Language
Models (VLMs) in real-world interaction, the systematic evaluation of their
complex reasoning ability within spatial contexts remains underexplored. To
bridge this gap, we introduce SIRI-Bench, a benchmark designed to evaluate
VLMs' spatial intelligence through video-based reasoning tasks. SIRI-Bench
comprises nearly 1K video-question-answer triplets, where each problem is
embedded in a realistic 3D scene and captured by video. By carefully designing
questions and corresponding 3D scenes, our benchmark ensures that solving the
questions requires both spatial comprehension for extracting information and
high-level reasoning for deriving solutions, making it a challenging benchmark
for evaluating VLMs. To facilitate large-scale data synthesis, we develop an
Automatic Scene Creation Engine. This engine, leveraging multiple specialized
LLM agents, can generate realistic 3D scenes from abstract math problems,
ensuring faithfulness to the original descriptions. Experimental results reveal
that state-of-the-art VLMs struggle significantly on SIRI-Bench, underscoring
the challenge of spatial reasoning. We hope that our study will bring
researchers' attention to spatially grounded reasoning and advance VLMs in
visual problem-solving. | 2025-06-17T13:40:00Z | 16 pages, 9 figures | null | null | null | null | null | null | null | null | null |
2506.14606 | Guaranteed Guess: A Language Modeling Approach for CISC-to-RISC
Transpilation with Testing Guarantees | ['Ahmed Heakl', 'Sarim Hashmi', 'Chaimaa Abi', 'Celine Lee', 'Abdulrahman Mahmoud'] | ['cs.CL', 'cs.AR', 'cs.LG', 'cs.PL', 'cs.SE'] | The hardware ecosystem is rapidly evolving, with increasing interest in
translating low-level programs across different instruction set architectures
(ISAs) in a quick, flexible, and correct way to enhance the portability and
longevity of existing code. A particularly challenging class of this
transpilation problem is translating between complex- (CISC) and reduced-
(RISC) hardware architectures, due to fundamental differences in instruction
complexity, memory models, and execution paradigms. In this work, we introduce
GG (Guaranteed Guess), an ISA-centric transpilation pipeline that combines the
translation power of pre-trained large language models (LLMs) with the rigor of
established software testing constructs. Our method generates candidate
translations using an LLM from one ISA to another, and embeds such translations
within a software-testing framework to build quantifiable confidence in the
translation. We evaluate our GG approach over two diverse datasets, enforce
high code coverage (>98%) across unit tests, and achieve functional/semantic
correctness of 99% on HumanEval programs and 49% on BringupBench programs. Further, we compare our approach to the state-of-the-art Rosetta
2 framework on Apple Silicon, showcasing 1.73x faster runtime performance,
1.47x better energy efficiency, and 2.41x better memory usage for our
transpiled code, demonstrating the effectiveness of GG for real-world
CISC-to-RISC translation tasks. We will open-source our code, data, models,
and benchmarks to establish a common foundation for ISA-level code translation
research. | 2025-06-17T15:06:54Z | Project page: https://ahmedheakl.github.io/Guaranteed-Guess/ | null | null | Guaranteed Guess: A Language Modeling Approach for CISC-to-RISC Transpilation with Testing Guarantees | ['Ahmed Heakl', 'Sarim Hashmi', 'Chaimaa Abi', 'Celine Lee', 'Abdulrahman Mahmoud'] | 2025 | arXiv.org | 0 | 56 | ['Computer Science'] |
2506.14731 | Ring-lite: Scalable Reasoning via C3PO-Stabilized Reinforcement Learning
for LLMs | ['Ling Team', 'Bin Hu', 'Cai Chen', 'Deng Zhao', 'Ding Liu', 'Dingnan Jin', 'Feng Zhu', 'Hao Dai', 'Hongzhi Luan', 'Jia Guo', 'Jiaming Liu', 'Jiewei Wu', 'Jun Mei', 'Jun Zhou', 'Junbo Zhao', 'Junwu Xiong', 'Kaihong Zhang', 'Kuan Xu', 'Lei Liang', 'Liang Jiang', 'Liangcheng Fu', 'Longfei Zheng', 'Qiang Gao', 'Qing Cui', 'Quan Wan', 'Shaomian Zheng', 'Shuaicheng Li', 'Tongkai Yang', 'Wang Ren', 'Xiaodong Yan', 'Xiaopei Wan', 'Xiaoyun Feng', 'Xin Zhao', 'Xinxing Yang', 'Xinyu Kong', 'Xuemin Yang', 'Yang Li', 'Yingting Wu', 'Yongkang Liu', 'Zhankai Xu', 'Zhenduo Zhang', 'Zhenglei Zhou', 'Zhenyu Huang', 'Zhiqiang Zhang', 'Zihao Wang', 'Zujie Wen'] | ['cs.CL', 'cs.AI'] | We present Ring-lite, a Mixture-of-Experts (MoE)-based large language model
optimized via reinforcement learning (RL) to achieve efficient and robust
reasoning capabilities. Built upon the publicly available Ling-lite model, a
16.8 billion parameter model with 2.75 billion activated parameters, our
approach matches the performance of state-of-the-art (SOTA) small-scale
reasoning models on challenging benchmarks (e.g., AIME, LiveCodeBench,
GPQA-Diamond) while activating only one-third of the parameters required by
comparable models. To accomplish this, we introduce a joint training pipeline
integrating distillation with RL, revealing undocumented challenges in MoE RL
training. First, we identify optimization instability during RL training, and
we propose Constrained Contextual Computation Policy Optimization (C3PO), a
novel approach that enhances training stability and improves computational
throughput via algorithm-system co-design methodology. Second, we empirically
demonstrate that selecting distillation checkpoints based on entropy loss for
RL training, rather than validation metrics, yields superior
performance-efficiency trade-offs in subsequent RL training. Finally, we
develop a two-stage training paradigm to harmonize multi-domain data
integration, addressing domain conflicts that arise in training with mixed
datasets. We will release the model, dataset, and code. | 2025-06-17T17:12:34Z | Technical Report | null | null | null | null | null | null | null | null | null |
2506.14794 | Assembly of Experts: Linear-time construction of the Chimera LLM
variants with emergent and adaptable behaviors | ['Henrik Klagges', 'Robert Dahlke', 'Fabian Klemm', 'Benjamin Merkel', 'Daniel Klingmann', 'David A. Reiss', 'Dan Zecha'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Requiring $10^{13}$-$10^{15}$ FLOPs to calculate one 8-bit weight in an LLM
during pretraining is extremely expensive and seems inefficient. To better
leverage the huge investments made into pretrained models, we develop the new
"Assembly-of-Experts" (AoE) construction method to create capable child
variants of existing Mixture-of-Experts parent models in linear time. Model
weight tensors get interpolated individually, allowing us to enhance or suppress
semantic features of the parents.
Varying the proportion of weights taken from the parent models, we observe
some properties of the AoE child model changing gradually, while other
behavioral traits emerge with a sharp transition. Surprisingly, nearly every
generated model is functional and capable, which makes searching the model
space straightforward.
We construct the DeepSeek R1T "Chimera", a 671B open-weights hybrid model
combining DeepSeek's V3-0324 and R1 model variants. The child inherits only the
routed expert tensors of R1, but still achieves about R1-level intelligence. At
the same time, it uses about 40\% fewer output tokens, close to V3 speed.
Constructed without any fine-tuning or distillation, the Chimera exhibits
surprisingly compact, orderly reasoning compared to its parent models. | 2025-05-31T18:23:19Z | null | null | null | Assembly of Experts: Linear-time construction of the Chimera LLM variants with emergent and adaptable behaviors | ['Henrik Klagges', 'Robert Dahlke', 'Fabian Klemm', 'Benjamin Merkel', 'Daniel Klingmann', 'David A. Reiss', 'Dan Zecha'] | 2025 | arXiv.org | 0 | 41 | ['Computer Science'] |
2506.14842 | PictSure: Pretraining Embeddings Matters for In-Context Learning Image
Classifiers | ['Lukas Schiesser', 'Cornelius Wolff', 'Sophie Haas', 'Simon Pukrop'] | ['cs.CV', 'cs.AI'] | Building image classification models remains cumbersome in data-scarce
domains, where collecting large labeled datasets is impractical. In-context
learning (ICL) has emerged as a promising paradigm for few-shot image
classification (FSIC), enabling models to generalize across domains without
gradient-based adaptation. However, prior work has largely overlooked a
critical component of ICL-based FSIC pipelines: the role of image embeddings.
In this work, we present PictSure, an ICL framework that places the embedding
model -- its architecture, pretraining, and training dynamics -- at the center
of analysis. We systematically examine the effects of different visual encoder
types, pretraining objectives, and fine-tuning strategies on downstream FSIC
performance. Our experiments show that the training success and the
out-of-domain performance are highly dependent on how the embedding models are
pretrained. Consequently, PictSure manages to outperform existing ICL-based
FSIC models on out-of-domain benchmarks that differ significantly from the
training distribution, while maintaining comparable results on in-domain tasks.
Code can be found at https://github.com/PictSure/pictsure-library. | 2025-06-16T08:57:03Z | 15 pages, 10 figures | null | null | null | null | null | null | null | null | null |
2506.14965 | Revisiting Reinforcement Learning for LLM Reasoning from A Cross-Domain
Perspective | ['Zhoujun Cheng', 'Shibo Hao', 'Tianyang Liu', 'Fan Zhou', 'Yutao Xie', 'Feng Yao', 'Yuexin Bian', 'Yonghao Zhuang', 'Nilabjo Dey', 'Yuheng Zha', 'Yi Gu', 'Kun Zhou', 'Yuqi Wang', 'Yuan Li', 'Richard Fan', 'Jianshu She', 'Chengqian Gao', 'Abulhair Saparov', 'Haonan Li', 'Taylor W. Killian', 'Mikhail Yurochkin', 'Zhengzhong Liu', 'Eric P. Xing', 'Zhiting Hu'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Reinforcement learning (RL) has emerged as a promising approach to improve
large language model (LLM) reasoning, yet most open efforts focus narrowly on
math and code, limiting our understanding of its broader applicability to
general reasoning. A key challenge lies in the lack of reliable, scalable RL
reward signals across diverse reasoning domains. We introduce Guru, a curated
RL reasoning corpus of 92K verifiable examples spanning six reasoning
domains--Math, Code, Science, Logic, Simulation, and Tabular--each built
through domain-specific reward design, deduplication, and filtering to ensure
reliability and effectiveness for RL training. Based on Guru, we systematically
revisit established findings in RL for LLM reasoning and observe significant
variation across domains. For example, while prior work suggests that RL
primarily elicits existing knowledge from pretrained models, our results reveal
a more nuanced pattern: domains frequently seen during pretraining (Math, Code,
Science) easily benefit from cross-domain RL training, while domains with
limited pretraining exposure (Logic, Simulation, and Tabular) require in-domain
training to achieve meaningful performance gains, suggesting that RL is likely
to facilitate genuine skill acquisition. Finally, we present Guru-7B and
Guru-32B, two models that achieve state-of-the-art performance among open
models RL-trained with publicly available data, outperforming best baselines by
7.9% and 6.7% on our 17-task evaluation suite across six reasoning domains. We
also show that our models effectively improve the Pass@k performance of their
base models, particularly on complex tasks less likely to appear in pretraining
data. We release data, models, training and evaluation code to facilitate
general-purpose reasoning at: https://github.com/LLM360/Reasoning360 | 2025-06-17T20:24:00Z | 38 pages, 9 figures. Under review | null | null | null | null | null | null | null | null | null |
2506.15068 | Semantically-Aware Rewards for Open-Ended R1 Training in Free-Form
Generation | ['Zongxia Li', 'Yapei Chang', 'Yuhang Zhou', 'Xiyang Wu', 'Zichao Liang', 'Yoo Yeon Sung', 'Jordan Lee Boyd-Graber'] | ['cs.CL', 'cs.LG'] | Evaluating open-ended long-form generation is challenging because it is hard
to define what clearly separates good from bad outputs. Existing methods often
miss key aspects like coherence, style, or relevance, or are biased by
pretraining data, making open-ended long-form evaluation an underexplored
problem. To address this gap, we propose PrefBERT, a scoring model for
evaluating open-ended long-form generation in GRPO and guiding its training
with distinct rewards for good and bad outputs. Trained on two response
evaluation datasets with diverse long-form styles and Likert-rated quality,
PrefBERT effectively supports GRPO by offering better semantic reward feedback
than traditional metrics ROUGE-L and BERTScore do. Through comprehensive
evaluations, including LLM-as-a-judge, human ratings, and qualitative analysis,
we show that PrefBERT, trained on multi-sentence and paragraph-length
responses, remains reliable across varied long passages and aligns well with
the verifiable rewards GRPO needs. Human evaluations confirm that using
PrefBERT as the reward signal to train policy models yields responses better
aligned with human preferences than those trained with traditional metrics. Our
code is available at https://github.com/zli12321/long_form_rl. | 2025-06-18T02:16:53Z | null | null | null | null | null | null | null | null | null | null |
2506.15154 | SonicVerse: Multi-Task Learning for Music Feature-Informed Captioning | ['Anuradha Chopra', 'Abhinaba Roy', 'Dorien Herremans'] | ['cs.SD', 'cs.AI', 'cs.CL', 'cs.MM', 'eess.AS', '68T10 (Primary), 68T50 (Secondary)', 'H.5.5; H.5.1; I.2.7'] | Detailed captions that accurately reflect the characteristics of a music
piece can enrich music databases and drive forward research in music AI. This
paper introduces a multi-task music captioning model, SonicVerse, that
integrates caption generation with auxiliary music feature detection tasks such
as key detection, vocals detection, and more, so as to directly capture both
low-level acoustic details as well as high-level musical attributes. The key
contribution is a projection-based architecture that transforms audio input
into language tokens, while simultaneously detecting music features through
dedicated auxiliary heads. The outputs of these heads are also projected into
language tokens, to enhance the captioning input. This framework not only
produces rich, descriptive captions for short music fragments but also directly
enables the generation of detailed time-informed descriptions for longer music
pieces, by chaining the outputs using a large-language model. To train the
model, we extended the MusicBench dataset by annotating it with music features
using MIRFLEX, a modular music feature extractor, resulting in paired audio,
captions and music feature data. Experimental results show that incorporating
features in this way improves the quality and detail of the generated captions. | 2025-06-18T05:51:36Z | 14 pages, 2 figures, Accepted to AIMC 2025 | Proceedings of the 6th Conference on AI Music Creativity (AIMC
2025), Brussels, Belgium, September 10th - 12th, 2025 | null | SonicVerse: Multi-Task Learning for Music Feature-Informed Captioning | ['Anuradha Chopra', 'Abhinaba Roy', 'Dorien Herremans'] | 2025 | arXiv.org | 0 | 30 | ['Computer Science', 'Engineering'] |
2506.15266 | Thunder-DeID: Accurate and Efficient De-identification Framework for
Korean Court Judgments | ['Sungeun Hahm', 'Heejin Kim', 'Gyuseong Lee', 'Hyunji Park', 'Jaejin Lee'] | ['cs.CL'] | To ensure a balance between open access to justice and personal data
protection, the South Korean judiciary mandates the de-identification of court
judgments before they can be publicly disclosed. However, the current
de-identification process is inadequate for handling court judgments at scale
while adhering to strict legal requirements. Additionally, the legal
definitions and categorizations of personal identifiers are vague and not
well-suited for technical solutions. To tackle these challenges, we propose a
de-identification framework called Thunder-DeID, which aligns with relevant
laws and practices. Specifically, we (i) construct and release the first Korean
legal dataset containing annotated judgments along with corresponding lists of
entity mentions, (ii) introduce a systematic categorization of Personally
Identifiable Information (PII), and (iii) develop an end-to-end deep neural
network (DNN)-based de-identification pipeline. Our experimental results
demonstrate that our model achieves state-of-the-art performance in the
de-identification of court judgments. | 2025-06-18T08:41:28Z | null | null | null | null | null | null | null | null | null | null |
2506.15442 | Hunyuan3D 2.1: From Images to High-Fidelity 3D Assets with
Production-Ready PBR Material | ['Team Hunyuan3D', 'Shuhui Yang', 'Mingxin Yang', 'Yifei Feng', 'Xin Huang', 'Sheng Zhang', 'Zebin He', 'Di Luo', 'Haolin Liu', 'Yunfei Zhao', 'Qingxiang Lin', 'Zeqiang Lai', 'Xianghui Yang', 'Huiwen Shi', 'Zibo Zhao', 'Bowen Zhang', 'Hongyu Yan', 'Lifu Wang', 'Sicong Liu', 'Jihong Zhang', 'Meng Chen', 'Liang Dong', 'Yiwen Jia', 'Yulin Cai', 'Jiaao Yu', 'Yixuan Tang', 'Dongyuan Guo', 'Junlin Yu', 'Hao Zhang', 'Zheng Ye', 'Peng He', 'Runzhou Wu', 'Shida Wei', 'Chao Zhang', 'Yonghao Tan', 'Yifu Sun', 'Lin Niu', 'Shirui Huang', 'Bojian Zheng', 'Shu Liu', 'Shilin Chen', 'Xiang Yuan', 'Xiaofeng Yang', 'Kai Liu', 'Jianchen Zhu', 'Peng Chen', 'Tian Liu', 'Di Wang', 'Yuhong Liu', 'Linus', 'Jie Jiang', 'Jingwei Huang', 'Chunchao Guo'] | ['cs.CV', 'cs.AI'] | 3D AI-generated content (AIGC) is a passionate field that has significantly
accelerated the creation of 3D models in gaming, film, and design. Despite the
development of several groundbreaking models that have revolutionized 3D
generation, the field remains largely accessible only to researchers,
developers, and designers due to the complexities involved in collecting,
processing, and training 3D models. To address these challenges, we introduce
Hunyuan3D 2.1 as a case study in this tutorial. This tutorial offers a
comprehensive, step-by-step guide on processing 3D data, training a 3D
generative model, and evaluating its performance using Hunyuan3D 2.1, an
advanced system for producing high-resolution, textured 3D assets. The system
comprises two core components: the Hunyuan3D-DiT for shape generation and the
Hunyuan3D-Paint for texture synthesis. We will explore the entire workflow,
including data preparation, model architecture, training strategies, evaluation
metrics, and deployment. By the conclusion of this tutorial, you will have the
knowledge to finetune or develop a robust 3D generative model suitable for
applications in gaming, virtual reality, and industrial design. | 2025-06-18T13:14:46Z | Github link: https://github.com/Tencent-Hunyuan/Hunyuan3D-2.1 | null | null | null | null | null | null | null | null | null |
2506.15498 | SPARE: Single-Pass Annotation with Reference-Guided Evaluation for
Automatic Process Supervision and Reward Modelling | ['Md Imbesat Hassan Rizvi', 'Xiaodan Zhu', 'Iryna Gurevych'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Process or step-wise supervision has played a crucial role in advancing
complex multi-step reasoning capabilities of Large Language Models (LLMs).
However, efficient, high-quality automated process annotation remains a
significant challenge. To address this, we introduce Single-Pass Annotation
with Reference-Guided Evaluation (SPARE), a novel structured framework that
enables single-pass, per-step annotation by aligning each solution step to one
or multiple steps in a reference solution, accompanied by explicit reasoning
for evaluation. We show that reference-guided step-level evaluation effectively
facilitates process supervision on four datasets spanning three domains:
mathematical reasoning, multi-hop compositional question answering, and spatial
reasoning. We demonstrate that SPARE, when compared to baselines, improves
reasoning performance when used for: (1) fine-tuning models in an offline RL
setup for inference-time greedy-decoding, and (2) training reward models for
ranking/aggregating multiple LLM-generated outputs. Additionally, SPARE
achieves competitive performance on challenging mathematical datasets while
offering 2.6 times greater efficiency, requiring only 38% of the runtime,
compared to tree search-based automatic annotation. The codebase, along with a
trained SPARE-PRM model, is publicly released to facilitate further research
and reproducibility. | 2025-06-18T14:37:59Z | 8 pages main content, 4 figures, 4 tables | null | null | null | null | null | null | null | null | null |
2506.15564 | Show-o2: Improved Native Unified Multimodal Models | ['Jinheng Xie', 'Zhenheng Yang', 'Mike Zheng Shou'] | ['cs.CV'] | This paper presents improved native unified multimodal models, i.e.,
Show-o2, that leverage autoregressive modeling and flow matching. Built upon a
3D causal variational autoencoder space, unified visual representations are
constructed through a dual-path of spatial (-temporal) fusion, enabling
scalability across image and video modalities while ensuring effective
multimodal understanding and generation. Based on a language model,
autoregressive modeling and flow matching are natively applied to the language
head and flow head, respectively, to facilitate text token prediction and
image/video generation. A two-stage training recipe is designed to effectively
learn and scale to larger models. The resulting Show-o2 models demonstrate
versatility in handling a wide range of multimodal understanding and generation
tasks across diverse modalities, including text, images, and videos. Code and
models are released at https://github.com/showlab/Show-o. | 2025-06-18T15:39:15Z | Technical report. (v2: update references and tables) | null | null | Show-o2: Improved Native Unified Multimodal Models | ['Jinheng Xie', 'Zhenheng Yang', 'Mike Zheng Shou'] | 2025 | arXiv.org | 0 | 120 | ['Computer Science'] |
2506.15635 | FindingDory: A Benchmark to Evaluate Memory in Embodied Agents | ['Karmesh Yadav', 'Yusuf Ali', 'Gunshi Gupta', 'Yarin Gal', 'Zsolt Kira'] | ['cs.CV', 'cs.RO'] | Large vision-language models have recently demonstrated impressive
performance in planning and control tasks, driving interest in their
application to real-world robotics. However, deploying these models for
reasoning in embodied contexts is limited by their ability to incorporate
long-term experience collected across multiple days and represented by vast
collections of images. Current VLMs typically struggle to process more than a
few hundred images concurrently, highlighting the need for more efficient
mechanisms to handle long-term memory in embodied settings. To effectively
evaluate these models for long-horizon control, a benchmark must specifically
target scenarios where memory is crucial for success. Existing long-video QA
benchmarks overlook embodied challenges like object manipulation and
navigation, which demand low-level skills and fine-grained reasoning over past
interactions. Moreover, effective memory integration in embodied agents
involves both recalling relevant historical information and executing actions
based on that information, making it essential to study these aspects together
rather than in isolation. In this work, we introduce a new benchmark for
long-range embodied tasks in the Habitat simulator. This benchmark evaluates
memory-based capabilities across 60 tasks requiring sustained engagement and
contextual awareness in an environment. The tasks can also be procedurally
extended to longer and more challenging versions, enabling scalable evaluation
of memory and reasoning. We also present baselines that integrate
state-of-the-art VLMs with low-level navigation policies, assessing their
performance on these memory-intensive tasks and highlight areas for
improvement. | 2025-06-18T17:06:28Z | Our dataset and code will be made available at:
https://findingdory-benchmark.github.io/ | null | null | null | null | null | null | null | null | null |
2506.15721 | Bohdi: Heterogeneous LLM Fusion with Automatic Data Exploration | ['Junqi Gao', 'Zhichang Guo', 'Dazhi Zhang', 'Dong Li', 'Runze Liu', 'Pengfei Li', 'Kai Tian', 'Biqing Qi'] | ['cs.LG'] | Heterogeneous Large Language Model (LLM) fusion integrates the strengths of
multiple source LLMs with different architectures into a target LLM with low
computational overhead. While promising, existing methods suffer from two major
limitations: 1) reliance on real data from limited domains for knowledge fusion,
preventing the target LLM from fully acquiring knowledge across diverse
domains, and 2) fixed data allocation proportions across domains, failing to
dynamically adjust according to the target LLM's varying capabilities across
domains, leading to a capability imbalance. To overcome these limitations, we
propose Bohdi, a synthetic-data-only heterogeneous LLM fusion framework.
Through the organization of knowledge domains into a hierarchical tree
structure, Bohdi enables automatic domain exploration and multi-domain data
generation through multi-model collaboration, thereby comprehensively
extracting knowledge from source LLMs. By formalizing domain expansion and data
sampling proportion allocation on the knowledge tree as a Hierarchical
Multi-Armed Bandit problem, Bohdi leverages the designed DynaBranches mechanism
to adaptively adjust sampling proportions based on the target LLM's performance
feedback across domains. Integrated with our proposed Introspection-Rebirth
(IR) mechanism, DynaBranches dynamically tracks capability shifts during target
LLM's updates via Sliding Window Binomial Likelihood Ratio Testing (SWBLRT),
further enhancing its online adaptation capability. Comparative experimental
results on a comprehensive suite of benchmarks demonstrate that Bohdi
significantly outperforms existing baselines on multiple target LLMs, exhibits
higher data efficiency, and virtually eliminates the imbalance in the target
LLM's capabilities. Our code is available at
https://github.com/gjq100/Bohdi.git. | 2025-06-04T17:01:38Z | null | null | null | Bohdi: Heterogeneous LLM Fusion with Automatic Data Exploration | ['Junqi Gao', 'Zhichang Guo', 'Dazhi Zhang', 'Dong Li', 'Runze Liu', 'Pengfei Li', 'Kai Tian', 'Biqing Qi'] | 2025 | arXiv.org | 0 | 39 | ['Computer Science'] |
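The SWBLRT mechanism in the Bohdi abstract above is described only at a high level. As one hedged reading, a sliding window of pass/fail outcomes on a domain can be split in half and tested for a shift in success rate with a binomial likelihood ratio; the half-window split, the probability clipping, and the 5% chi-square threshold are illustrative assumptions, not details from the paper:

```python
import math

def binom_loglik(successes, trials, p):
    """Log-likelihood of `successes` out of `trials` under success rate p."""
    if trials == 0:
        return 0.0
    p = min(max(p, 1e-9), 1 - 1e-9)  # clip to keep the logs finite
    return successes * math.log(p) + (trials - successes) * math.log(1 - p)

def swblrt(outcomes, threshold=3.84):
    """Split a sliding window of 0/1 outcomes in half and run a binomial
    likelihood ratio test for a change in success rate. Returns
    (shifted, statistic); 3.84 is the chi-square(1) 5% critical value."""
    n = len(outcomes)
    old, new = outcomes[: n // 2], outcomes[n // 2 :]
    s_old, s_new, s_all = sum(old), sum(new), sum(outcomes)
    # Null hypothesis: one shared rate; alternative: a rate per half-window.
    ll0 = binom_loglik(s_all, n, s_all / n)
    ll1 = (binom_loglik(s_old, len(old), s_old / len(old))
           + binom_loglik(s_new, len(new), s_new / len(new)))
    stat = 2 * (ll1 - ll0)
    return stat > threshold, stat
```

A window that flips from all failures to all successes trips the test, while an alternating stream does not; in Bohdi such a detection would trigger the Introspection-Rebirth update of the bandit state.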
2506.15742 | FLUX.1 Kontext: Flow Matching for In-Context Image Generation and
Editing in Latent Space | ['Black Forest Labs', 'Stephen Batifol', 'Andreas Blattmann', 'Frederic Boesel', 'Saksham Consul', 'Cyril Diagne', 'Tim Dockhorn', 'Jack English', 'Zion English', 'Patrick Esser', 'Sumith Kulal', 'Kyle Lacey', 'Yam Levi', 'Cheng Li', 'Dominik Lorenz', 'Jonas Müller', 'Dustin Podell', 'Robin Rombach', 'Harry Saini', 'Axel Sauer', 'Luke Smith'] | ['cs.GR'] | We present evaluation results for FLUX.1 Kontext, a generative flow matching
model that unifies image generation and editing. The model generates novel
output views by incorporating semantic context from text and image inputs.
Using a simple sequence concatenation approach, FLUX.1 Kontext handles both
local editing and generative in-context tasks within a single unified
architecture. Compared to current editing models that exhibit degradation in
character consistency and stability across multiple turns, we observe that
FLUX.1 Kontext shows improved preservation of objects and characters, leading to
greater robustness in iterative workflows. The model achieves competitive
performance with current state-of-the-art systems while delivering
significantly faster generation times, enabling interactive applications and
rapid prototyping workflows. To validate these improvements, we introduce
KontextBench, a comprehensive benchmark with 1026 image-prompt pairs covering
five task categories: local editing, global editing, character reference, style
reference and text editing. Detailed evaluations show the superior performance
of FLUX.1 Kontext in terms of both single-turn quality and multi-turn
consistency, setting new standards for unified image processing models. | 2025-06-17T20:18:23Z | null | null | null | null | null | null | null | null | null | null |
2506.16073 | TD3Net: A Temporal Densely Connected Multi-Dilated Convolutional Network
for Lipreading | ['Byung Hoon Lee', 'Wooseok Shin', 'Sung Won Han'] | ['cs.CV', 'I.4.8; I.5.4; I.2.10'] | The word-level lipreading approach typically employs a two-stage framework
with separate frontend and backend architectures to model dynamic lip
movements. Each component has been extensively studied, and in the backend
architecture, temporal convolutional networks (TCNs) have been widely adopted
in state-of-the-art methods. Recently, dense skip connections have been
introduced in TCNs to mitigate the limited density of the receptive field,
thereby improving the modeling of complex temporal representations. However,
their performance remains constrained owing to potential information loss
regarding the continuous nature of lip movements, caused by blind spots in the
receptive field. To address this limitation, we propose TD3Net, a temporal
densely connected multi-dilated convolutional network that combines dense skip
connections and multi-dilated temporal convolutions as the backend
architecture. TD3Net covers a wide and dense receptive field without blind
spots by applying different dilation factors to skip-connected features.
Experimental results on a word-level lipreading task using two large publicly
available datasets, Lip Reading in the Wild (LRW) and LRW-1000, indicate that
the proposed method achieves performance comparable to state-of-the-art
methods. It achieved higher accuracy with fewer parameters and lower
floating-point operations compared to existing TCN-based backend architectures.
Moreover, visualization results suggest that our approach effectively utilizes
diverse temporal features while preserving temporal continuity, presenting
notable advantages in lipreading systems. The code is available at our GitHub
repository:
https://github.com/Leebh-kor/TD3Net-A-Temporal-Densely-Connected-Multi-dilated-Convolutional-Network-for-Lipreading | 2025-06-19T06:55:03Z | 15 pages, 6 figures | null | null | TD3Net: A Temporal Densely Connected Multi-Dilated Convolutional Network for Lipreading | ['B. Lee', 'Wooseok Shin', 'Sung Won Han'] | 2025 | arXiv.org | 0 | 54 | ['Computer Science'] |
2506.16141 | GRPO-CARE: Consistency-Aware Reinforcement Learning for Multimodal
Reasoning | ['Yi Chen', 'Yuying Ge', 'Rui Wang', 'Yixiao Ge', 'Junhao Cheng', 'Ying Shan', 'Xihui Liu'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG'] | Recent reinforcement learning approaches, such as outcome-supervised GRPO,
have advanced Chain-of-Thought reasoning in large language models (LLMs), yet
their adaptation to multimodal LLMs (MLLMs) is unexplored. To address the lack
of rigorous evaluation for MLLM post-training methods, we introduce
SEED-Bench-R1, a benchmark with complex real-world videos requiring balanced
perception and reasoning. It offers a large training set and evaluates
generalization across three escalating challenges: in-distribution,
cross-environment, and cross-environment-task scenarios. Using SEED-Bench-R1,
we find that standard GRPO, while improving answer accuracy, often reduces
logical coherence between reasoning steps and answers, with only a 57.9%
consistency rate. This stems from reward signals focusing solely on final
answers, encouraging shortcuts, and strict KL penalties limiting exploration. To
address this, we propose GRPO-CARE, a consistency-aware RL framework optimizing
both answer correctness and reasoning coherence without explicit supervision.
GRPO-CARE introduces a two-tiered reward: (1) a base reward for answer
correctness, and (2) an adaptive consistency bonus, computed by comparing the
model's reasoning-to-answer likelihood (via a slowly-evolving reference model)
against group peers. This dual mechanism amplifies rewards for reasoning paths
that are both correct and logically consistent. Replacing KL penalties with
this adaptive bonus, GRPO-CARE outperforms standard GRPO on SEED-Bench-R1,
achieving a 6.7% performance gain on the hardest evaluation level and a 24.5%
improvement in consistency. It also shows strong transferability, improving
model performance across diverse video understanding benchmarks. Our work
contributes a systematically designed benchmark and a generalizable
post-training framework, advancing the development of more interpretable and
robust MLLMs. | 2025-06-19T08:49:13Z | Code released at: https://github.com/TencentARC/GRPO-CARE | null | null | GRPO-CARE: Consistency-Aware Reinforcement Learning for Multimodal Reasoning | ['Yi Chen', 'Yuying Ge', 'Rui Wang', 'Yixiao Ge', 'Jun Cheng', 'Ying Shan', 'Xihui Liu'] | 2025 | arXiv.org | 0 | 45 | ['Computer Science'] |
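The two-tiered reward in the GRPO-CARE abstract can be sketched as follows; the bonus magnitude, the comparison against the group mean, and the function name are illustrative assumptions, since the abstract does not give the exact formula:

```python
def care_rewards(correct, ref_loglik, bonus=0.5):
    """Two-tiered reward sketch: a base reward for answer correctness plus an
    adaptive consistency bonus when a sample's reasoning-to-answer
    log-likelihood (under a slowly-evolving reference model) beats the group
    average of its peers. `bonus=0.5` is an assumed, not published, value."""
    group_mean = sum(ref_loglik) / len(ref_loglik)
    rewards = []
    for ok, ll in zip(correct, ref_loglik):
        r = 1.0 if ok else 0.0              # tier 1: correctness
        if ok and ll > group_mean:          # tier 2: correct AND consistent
            r += bonus
        rewards.append(r)
    return rewards
```

Note how only paths that are both correct and more consistent than their group peers receive the extra reward, which is the amplification the abstract describes in place of a KL penalty.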
2506.16233 | Can AI Dream of Unseen Galaxies? Conditional Diffusion Model for Galaxy
Morphology Augmentation | ['Chenrui Ma', 'Zechang Sun', 'Tao Jing', 'Zheng Cai', 'Yuan-Sen Ting', 'Song Huang', 'Mingyu Li'] | ['astro-ph.GA', 'cs.LG'] | Observational astronomy relies on visual feature identification to detect
critical astrophysical phenomena. While machine learning (ML) increasingly
automates this process, models often struggle with generalization in
large-scale surveys due to the limited representativeness of labeled datasets
-- whether from simulations or human annotation -- a challenge pronounced for
rare yet scientifically valuable objects. To address this, we propose a
conditional diffusion model to synthesize realistic galaxy images for
augmenting ML training data. Leveraging the Galaxy Zoo 2 dataset, which
contains visual feature -- galaxy image pairs from volunteer annotation, we
demonstrate that our model generates diverse, high-fidelity galaxy images that
closely adhere to the specified morphological feature conditions. Moreover,
this model enables generative extrapolation to project well-annotated data
into unseen domains, advancing rare object detection. Integrating synthesized
images into ML
pipelines improves performance in standard morphology classification, boosting
completeness and purity by up to 30\% across key metrics. For rare object
detection, using early-type galaxies with prominent dust lane features (
$\sim$0.1\% in the GZ2 dataset) as a test case, our approach doubled the number of
detected instances from 352 to 872, compared to previous studies based on
visual inspection. This study highlights the power of generative models to
bridge gaps between scarce labeled data and the vast, uncharted parameter space
of observational astronomy and offers insight for future astrophysical
foundation model developments. Our project homepage is available at
https://galaxysd-webpage.streamlit.app/. | 2025-06-19T11:44:09Z | We have submitted to AAS journals. See another independent work for
further reference -- Category-based Galaxy Image Generation via Diffusion
Models (Fan, Tang et al.). Comments are welcome | null | null | Can AI Dream of Unseen Galaxies? Conditional Diffusion Model for Galaxy Morphology Augmentation | ['Chenrui Ma', 'Zechang Sun', 'Tao Jing', 'Zheng Cai', 'Yuan-Sen Ting', 'Song Huang', 'Mingyu Li'] | 2025 | arXiv.org | 0 | 7 | ['Physics', 'Computer Science'] |
2506.16310 | Optimizing Multilingual Text-To-Speech with Accents & Emotions | ['Pranav Pawar', 'Akshansh Dwivedi', 'Jenish Boricha', 'Himanshu Gohil', 'Aditya Dubey'] | ['cs.LG', 'cs.HC', 'cs.SD', 'eess.AS'] | State-of-the-art text-to-speech (TTS) systems realize high naturalness in
monolingual environments, but synthesizing speech with correct multilingual
accents (especially for Indic languages) and context-relevant emotions still
poses difficulty owing to cultural nuance discrepancies in current frameworks.
This paper introduces a new TTS architecture that integrates accent
preservation and transliteration with multi-scale emotion modelling, tuned in
particular for Hindi and Indian English accents. Our approach extends the
Parler-TTS model by integrating a language-specific phoneme-alignment hybrid
encoder-decoder architecture and culture-sensitive emotion embedding layers
trained on native-speaker corpora, as well as dynamic accent
code-switching with residual vector quantization. Quantitative tests
demonstrate 23.7% improvement in accent accuracy (Word Error Rate reduction
from 15.4% to 11.8%) and 85.3% emotion recognition accuracy from native
listeners, surpassing METTS and VECL-TTS baselines. The novelty of the system
is that it can mix code in real time - generating statements such as "Namaste,
let's talk about <Hindi phrase>" with uninterrupted accent shifts while
preserving emotional consistency. Subjective evaluation with 200 users reported
a mean opinion score (MOS) of 4.2/5 for cultural correctness, much better than
existing multilingual systems (p<0.01). This research makes cross-lingual
synthesis more feasible by showcasing scalable accent-emotion disentanglement,
with direct application in South Asian EdTech and accessibility software. | 2025-06-19T13:35:05Z | 12 pages, 8 figures | null | null | null | null | null | null | null | null | null |
2506.16322 | PL-Guard: Benchmarking Language Model Safety for Polish | ['Aleksandra Krasnodębska', 'Karolina Seweryn', 'Szymon Łukasik', 'Wojciech Kusa'] | ['cs.CL', 'I.2.7'] | Despite increasing efforts to ensure the safety of large language models
(LLMs), most existing safety assessments and moderation tools remain heavily
biased toward English and other high-resource languages, leaving the majority of
global languages underexamined. To address this gap, we introduce a manually
annotated benchmark dataset for language model safety classification in Polish.
We also create adversarially perturbed variants of these samples designed to
challenge model robustness. We conduct a series of experiments to evaluate
LLM-based and classifier-based models of varying sizes and architectures.
Specifically, we fine-tune three models: Llama-Guard-3-8B, a HerBERT-based
classifier (a Polish BERT derivative), and PLLuM, a Polish-adapted Llama-8B
model. We train these models using different combinations of annotated data and
evaluate their performance, comparing it against publicly available guard
models. Results demonstrate that the HerBERT-based classifier achieves the
highest overall performance, particularly under adversarial conditions. | 2025-06-19T13:56:41Z | Accepted to the 10th Workshop on Slavic Natural Language Processing | null | null | null | null | null | null | null | null | null |
2506.16500 | SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity | ['Samir Khaki', 'Xiuyu Li', 'Junxian Guo', 'Ligeng Zhu', 'Chenfeng Xu', 'Konstantinos N. Plataniotis', 'Amir Yazdanbakhsh', 'Kurt Keutzer', 'Song Han', 'Zhijian Liu'] | ['cs.LG'] | Fine-tuning LLMs is both computationally and memory-intensive. While
parameter-efficient fine-tuning methods, such as QLoRA and DoRA, reduce the
number of trainable parameters and lower memory usage, they do not decrease
computational cost. In some cases, they may even slow down fine-tuning. In this
paper, we introduce SparseLoRA, a method that accelerates LLM fine-tuning
through contextual sparsity. We propose a lightweight, training-free SVD
sparsity estimator that dynamically selects a sparse subset of weights for loss
and gradient computation. Also, we systematically analyze and address
sensitivity across layers, tokens, and training steps. Our experimental results
show that SparseLoRA reduces computational cost by up to 2.2 times and delivers
a measured speedup of up to 1.6 times while maintaining accuracy across various
downstream tasks, including commonsense and arithmetic reasoning, code
generation, and instruction following. | 2025-06-19T17:53:34Z | ICML 2025. The first three authors contributed equally to this work.
Project page: https://z-lab.ai/projects/sparselora | null | null | null | null | null | null | null | null | null |
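The contextual-sparsity idea behind SparseLoRA (compute only a dynamically selected subset of weight channels for each input) can be illustrated as below; the simple |w·x| scoring rule stands in for the paper's SVD-based estimator and is an assumption made for brevity:

```python
def topk_channels(weight, x, k):
    """Training-free contextual sparsity sketch: score each output channel by
    the magnitude of its response to the current input and keep the top-k.
    SparseLoRA itself uses an SVD-based estimator; |w.x| is a stand-in."""
    scores = [abs(sum(wi * xi for wi, xi in zip(row, x))) for row in weight]
    keep = sorted(range(len(weight)), key=lambda i: scores[i], reverse=True)[:k]
    return sorted(keep)

def sparse_matvec(weight, x, keep):
    """Dense-shaped output with non-selected channels zeroed; at runtime the
    skipped rows are simply never computed, which is where the savings come from."""
    out = [0.0] * len(weight)
    for i in keep:
        out[i] = sum(wi * xi for wi, xi in zip(weight[i], x))
    return out
```

In the actual method this selection would gate the loss and gradient computation of the LoRA fine-tuning step, not just a forward matvec.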
2506.16655 | Arch-Router: Aligning LLM Routing with Human Preferences | ['Co Tran', 'Salman Paracha', 'Adil Hafeez', 'Shuguang Chen'] | ['cs.CL'] | With the rapid proliferation of large language models (LLMs) -- each
optimized for different strengths, style, or latency/cost profile -- routing
has become an essential technique to operationalize the use of different
models. However, existing LLM routing approaches are limited in two key ways:
they evaluate performance using benchmarks that often fail to capture human
preferences driven by subjective evaluation criteria, and they typically select
from a limited pool of models. In this work, we propose a preference-aligned
routing framework that guides model selection by matching queries to
user-defined domains (e.g., travel) or action types (e.g., image editing) --
offering a practical mechanism to encode preferences in routing decisions.
Specifically, we introduce \textbf{Arch-Router}, a compact 1.5B model that
learns to map queries to domain-action preferences for model routing decisions.
Our approach also supports seamlessly adding new models for routing without
requiring retraining or architectural modifications. Experiments on
conversational datasets demonstrate that our approach achieves state-of-the-art
(SOTA) results in matching queries with human preferences, outperforming top
proprietary models. Our approach captures subjective evaluation criteria and
makes routing decisions more transparent and flexible. Our model is available
at: \texttt{https://huggingface.co/katanemo/Arch-Router-1.5B}. | 2025-06-19T23:57:41Z | null | null | null | null | null | null | null | null | null | null |
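Arch-Router's preference-aligned routing reduces to mapping a query to a (domain, action) label and looking that label up in a user-defined preference table, which is also why new models can be added without retraining. A toy sketch, with a keyword classifier standing in for the 1.5B router model (all route names and keywords are hypothetical):

```python
# Hypothetical user-defined preference table: (domain, action) -> model.
ROUTES = {
    ("travel", "planning"): "model-a",
    ("coding", "debugging"): "model-b",
}
DEFAULT_MODEL = "model-c"

def classify(query):
    """Stand-in for Arch-Router: maps a query to a domain-action label.
    The real system uses a compact 1.5B LLM, not keyword matching."""
    if "trip" in query or "flight" in query:
        return ("travel", "planning")
    if "bug" in query or "traceback" in query:
        return ("coding", "debugging")
    return (None, None)

def route(query):
    """Pick a model from the preference table; adding a new model is just a
    new ROUTES entry, with no retraining or architectural change."""
    return ROUTES.get(classify(query), DEFAULT_MODEL)
```

Separating classification from the table is what makes the routing policy itself editable by the user rather than baked into the model.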
2506.16962 | Enhancing Step-by-Step and Verifiable Medical Reasoning in MLLMs | ['Haoran Sun', 'Yankai Jiang', 'Wenjie Lou', 'Yujie Zhang', 'Wenjie Li', 'Lilong Wang', 'Mianxin Liu', 'Lei Liu', 'Xiaosong Wang'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Multimodal large language models (MLLMs) have begun to demonstrate robust
reasoning capabilities on general tasks, yet their application in the medical
domain remains in its early stages. Constructing chain-of-thought (CoT)
training data is essential for bolstering the reasoning abilities of medical
MLLMs. However, existing approaches exhibit a deficiency in offering a
comprehensive framework for searching and evaluating effective reasoning paths
towards critical diagnosis. To address this challenge, we propose Mentor-Intern
Collaborative Search (MICS), a novel reasoning-path searching scheme to
generate rigorous and effective medical CoT data. MICS first leverages mentor
models to initialize the reasoning, one step at a time, then prompts each
intern model to continue the thinking along those initiated paths, and finally
selects the optimal reasoning path according to the overall reasoning
performance of multiple intern models. The reasoning performance is determined
by an MICS-Score, which assesses the quality of generated reasoning paths.
Eventually, we construct MMRP, a multi-task medical reasoning dataset with
ranked difficulty, and Chiron-o1, a new medical MLLM devised via a curriculum
learning strategy, with robust visual question-answering and generalizable
reasoning capabilities. Extensive experiments demonstrate that Chiron-o1,
trained on our CoT dataset constructed using MICS, achieves state-of-the-art
performance across a list of medical visual question answering and reasoning
benchmarks. Code is available at https://github.com/manglu097/Chiron-o1. | 2025-06-20T12:51:19Z | null | null | null | null | null | null | null | null | null | null |
2506.17080 | Tower+: Bridging Generality and Translation Specialization in
Multilingual LLMs | ['Ricardo Rei', 'Nuno M. Guerreiro', 'José Pombal', 'João Alves', 'Pedro Teixeirinha', 'Amin Farajian', 'André F. T. Martins'] | ['cs.CL', 'cs.AI'] | Fine-tuning pretrained LLMs has been shown to be an effective strategy for
reaching state-of-the-art performance on specific tasks like machine
translation. However, this process of adaptation often implies sacrificing
general-purpose capabilities, such as conversational reasoning and
instruction-following, hampering the utility of the system in real-world
applications that require a mixture of skills. In this paper, we introduce
Tower+, a suite of models designed to deliver strong performance across both
translation and multilingual general-purpose text capabilities. We achieve a
Pareto frontier between translation specialization and multilingual
general-purpose capabilities by introducing a novel training recipe that builds
on Tower (Alves et al., 2024), comprising continued pretraining, supervised
fine-tuning, preference optimization, and reinforcement learning with
verifiable rewards. At each stage of training, we carefully generate and curate
data to strengthen performance on translation as well as general-purpose tasks
involving code generation, mathematics problem solving, and general
instruction-following. We develop models at multiple scales: 2B, 9B, and 72B.
Our smaller models often outperform larger general-purpose open-weight and
proprietary LLMs (e.g., Llama 3.3 70B, GPT-4o). Our largest model delivers
best-in-class translation performance for high-resource languages and top
results in multilingual Arena Hard evaluations and in IF-MT, a benchmark we
introduce for evaluating both translation and instruction-following. Our
findings highlight that it is possible to rival frontier models in general
capabilities, while optimizing for specific business domains, such as
translation and localization. | 2025-06-20T15:30:06Z | null | null | null | Tower+: Bridging Generality and Translation Specialization in Multilingual LLMs | ['Ricardo Rei', 'Nuno M. Guerreiro', 'José P. Pombal', 'João Alves', 'Pedro Teixeirinha', 'Amin Farajian', 'André F. T. Martins'] | 2025 | arXiv.org | 0 | 37 | ['Computer Science'] |
2506.17090 | Better Language Model Inversion by Compactly Representing Next-Token
Distributions | ['Murtaza Nazir', 'Matthew Finlayson', 'John X. Morris', 'Xiang Ren', 'Swabha Swayamdipta'] | ['cs.CL'] | Language model inversion seeks to recover hidden prompts using only language
model outputs. This capability has implications for security and accountability
in language model deployments, such as leaking private information from an
API-protected language model's system message. We propose a new method --
prompt inversion from logprob sequences (PILS) -- that recovers hidden prompts
by gleaning clues from the model's next-token probabilities over the course of
multiple generation steps. Our method is enabled by a key insight: The
vector-valued outputs of a language model occupy a low-dimensional subspace.
This enables us to losslessly compress the full next-token probability
distribution over multiple generation steps using a linear map, allowing more
output information to be used for inversion. Our approach yields massive gains
over previous state-of-the-art methods for recovering hidden prompts, achieving
2--3.5 times higher exact recovery rates across test sets, in one case
increasing the recovery rate from 17% to 60%. Our method also exhibits
surprisingly good generalization behavior; for instance, an inverter trained on
16 generation steps gets 5--27 points higher prompt recovery when we increase
the number of steps to 32 at test time. Furthermore, we demonstrate strong
performance of our method on the more challenging task of recovering hidden
system messages. We also analyze the role of verbatim repetition in prompt
recovery and propose a new method for cross-family model transfer for
logit-based inverters. Our findings show that next-token probabilities are a
considerably more vulnerable attack surface for inversion attacks than
previously known. | 2025-06-20T15:53:51Z | null | null | null | null | null | null | null | null | null | null |
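The low-dimensional-subspace insight behind PILS can be illustrated with a toy projection: if next-token log-prob vectors lie in a subspace spanned by a known basis, each V-dimensional vector is represented losslessly by d coefficients. The orthonormal-basis assumption below is a simplification; the paper describes a general linear map learned from the model:

```python
def compress(vec, basis):
    """Project a V-dimensional next-token log-prob vector onto an orthonormal
    basis of the model's output subspace: V numbers become d coefficients."""
    return [sum(b * v for b, v in zip(col, vec)) for col in basis]

def decompress(coeffs, basis):
    """Reconstruct the full vector from its d subspace coefficients."""
    dim = len(basis[0])
    out = [0.0] * dim
    for c, col in zip(coeffs, basis):
        for j in range(dim):
            out[j] += c * col[j]
    return out
```

Because the round trip is lossless for vectors in the subspace, many generation steps of full next-token distributions can be packed into the inverter's input, which is the extra signal PILS exploits.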
2506.17206 | DreamCube: 3D Panorama Generation via Multi-plane Synchronization | ['Yukun Huang', 'Yanning Zhou', 'Jianan Wang', 'Kaiyi Huang', 'Xihui Liu'] | ['cs.GR', 'cs.CV', 'cs.LG'] | 3D panorama synthesis is a promising yet challenging task that demands
high-quality and diverse visual appearance and geometry of the generated
omnidirectional content. Existing methods leverage rich image priors from
pre-trained 2D foundation models to circumvent the scarcity of 3D panoramic
data, but the incompatibility between 3D panoramas and 2D single views limits
their effectiveness. In this work, we demonstrate that by applying multi-plane
synchronization to the operators from 2D foundation models, their capabilities
can be seamlessly extended to the omnidirectional domain. Based on this design,
we further introduce DreamCube, a multi-plane RGB-D diffusion model for 3D
panorama generation, which maximizes the reuse of 2D foundation model priors to
achieve diverse appearances and accurate geometry while maintaining multi-view
consistency. Extensive experiments demonstrate the effectiveness of our
approach in panoramic image generation, panoramic depth estimation, and 3D
scene generation. | 2025-06-20T17:55:06Z | Project page: https://yukun-huang.github.io/DreamCube/ | null | null | null | null | null | null | null | null | null |
2506.17238 | Training a Scientific Reasoning Model for Chemistry | ['Siddharth M. Narayanan', 'James D. Braza', 'Ryan-Rhys Griffiths', 'Albert Bou', 'Geemi Wellawatte', 'Mayk Caldas Ramos', 'Ludovico Mitchener', 'Samuel G. Rodriques', 'Andrew D. White'] | ['cs.LG'] | Reasoning models are large language models that emit a long chain-of-thought
before answering, providing both higher accuracy and explicit reasoning for
their response. A major question has been whether language model reasoning
generalizes beyond mathematics, programming, and logic, where most previous
work has focused. We demonstrate that reasoning models can be post-trained for
chemistry without additional domain pretraining, and require substantially less
data compared to contemporary domain-specific models. We report ether0, a 24B
parameter LLM (based on Mistral-Small-24B) that can reason in natural language
and respond with chemical structures. This reasoning model was trained with
reinforcement learning on 640,730 experimentally-grounded chemistry problems
across 375 tasks ranging from synthesizability, to blood-brain barrier
permeability, to human receptor activity, to scent. Our model exceeds
general-purpose chemistry models, frontier models, and human experts on
molecular design tasks. It is also more data efficient relative to specialized
models. We anticipate that this method can be applied to train data-efficient
language models specialized for tasks across a wide variety of scientific
domains. | 2025-06-04T17:57:18Z | null | null | null | null | null | null | null | null | null | null |
2506.17497 | From Generality to Mastery: Composer-Style Symbolic Music Generation via
Large-Scale Pre-training | ['Mingyang Yao', 'Ke Chen'] | ['cs.SD', 'cs.AI', 'cs.LG', 'eess.AS'] | Despite progress in controllable symbolic music generation, data scarcity
remains a challenge for certain control modalities. Composer-style music
generation is a prime example, as only a few pieces per composer are available,
limiting the modeling of both styles and fundamental music elements (e.g.,
melody, chord, rhythm). In this paper, we investigate how general music
knowledge learned from a broad corpus can enhance the mastery of specific
composer styles, with a focus on piano piece generation. Our approach follows a
two-stage training paradigm. First, we pre-train a REMI-based music generation
model on a large corpus of pop, folk, and classical music. Then, we fine-tune
it on a small, human-verified dataset from four renowned composers, namely
Bach, Mozart, Beethoven, and Chopin, using a lightweight adapter module to
condition the model on style indicators. To evaluate the effectiveness of our
approach, we conduct both objective and subjective evaluations on style
accuracy and musicality. Experimental results demonstrate that our method
outperforms ablations and baselines, achieving more precise composer-style
modeling and better musical aesthetics. Additionally, we provide observations
on how the model builds music concepts from the generality pre-training and
refines its stylistic understanding through the mastery fine-tuning. | 2025-06-20T22:20:59Z | Proceedings of the 6th Conference on AI Music Creativity, AIMC 2025 | null | null | null | null | null | null | null | null | null |
2506.17561 | VLA-OS: Structuring and Dissecting Planning Representations and
Paradigms in Vision-Language-Action Models | ['Chongkai Gao', 'Zixuan Liu', 'Zhenghao Chi', 'Junshan Huang', 'Xin Fei', 'Yiwen Hou', 'Yuxuan Zhang', 'Yudi Lin', 'Zhirui Fang', 'Zeyu Jiang', 'Lin Shao'] | ['cs.CV', 'cs.AI', 'cs.RO'] | Recent studies on Vision-Language-Action (VLA) models have shifted from the
end-to-end action-generation paradigm toward a pipeline involving task planning
followed by action generation, demonstrating improved performance on various
complex, long-horizon manipulation tasks. However, existing approaches vary
significantly in terms of network architectures, planning paradigms,
representations, and training data sources, making it challenging for
researchers to identify the precise sources of performance gains and components
to be further improved. To systematically investigate the impacts of different
planning paradigms and representations in isolation from network architectures and
training data, in this paper, we introduce VLA-OS, a unified VLA architecture
series capable of various task planning paradigms, and design a comprehensive
suite of controlled experiments across diverse object categories (rigid and
deformable), visual modalities (2D and 3D), environments (simulation and
real-world), and end-effectors (grippers and dexterous hands). Our results
demonstrate that: 1) visually grounded planning representations are generally
better than language planning representations; 2) the Hierarchical-VLA paradigm
generally achieves performance superior or comparable to other paradigms on
task performance, pretraining, generalization ability, scalability, and
continual learning ability, albeit at the cost of slower training and inference
speeds. | 2025-06-21T03:07:48Z | null | null | null | null | null | null | null | null | null | null |
2506.17612 | JarvisArt: Liberating Human Artistic Creativity via an Intelligent Photo
Retouching Agent | ['Yunlong Lin', 'Zixu Lin', 'Kunjie Lin', 'Jinbin Bai', 'Panwang Pan', 'Chenxin Li', 'Haoyu Chen', 'Zhongdao Wang', 'Xinghao Ding', 'Wenbo Li', 'Shuicheng Yan'] | ['cs.CV'] | Photo retouching has become integral to contemporary visual storytelling,
enabling users to capture aesthetics and express creativity. While professional
tools such as Adobe Lightroom offer powerful capabilities, they demand
substantial expertise and manual effort. In contrast, existing AI-based
solutions provide automation but often suffer from limited adjustability and
poor generalization, failing to meet diverse and personalized editing needs. To
bridge this gap, we introduce JarvisArt, a multi-modal large language model
(MLLM)-driven agent that understands user intent, mimics the reasoning process
of professional artists, and intelligently coordinates over 200 retouching
tools within Lightroom. JarvisArt undergoes a two-stage training process: an
initial Chain-of-Thought supervised fine-tuning to establish basic reasoning
and tool-use skills, followed by Group Relative Policy Optimization for
Retouching (GRPO-R) to further enhance its decision-making and tool
proficiency. We also propose the Agent-to-Lightroom Protocol to facilitate
seamless integration with Lightroom. To evaluate performance, we develop
MMArt-Bench, a novel benchmark constructed from real-world user edits.
JarvisArt demonstrates user-friendly interaction, superior generalization, and
fine-grained control over both global and local adjustments, paving a new
avenue for intelligent photo retouching. Notably, it outperforms GPT-4o with a
60% improvement in average pixel-level metrics on MMArt-Bench for content
fidelity, while maintaining comparable instruction-following capabilities.
Project Page: https://jarvisart.vercel.app/. | 2025-06-21T06:36:00Z | 40 pages, 26 figures | null | null | null | null | null | null | null | null | null |
2506.17671 | TPTT: Transforming Pretrained Transformer into Titans | ['Fabien Furfaro'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Recent advances in large language models (LLMs) have led to remarkable
progress in natural language processing, but their computational and memory
demands remain a significant challenge, particularly for long-context
inference. We introduce TPTT (Transforming Pretrained Transformer into Titans),
a novel framework for enhancing pretrained Transformer models with efficient
linearized attention mechanisms and advanced memory management. TPTT employs
techniques such as Memory as Gate (MaG) and mixed linearized attention (LiZA).
It is fully compatible with the Hugging Face Transformers library, enabling
seamless adaptation of any causal LLM through parameter-efficient fine-tuning
(LoRA) without full retraining. We show the effectiveness of TPTT on the MMLU
benchmark with models of approximately 1 billion parameters, observing
substantial improvements in both efficiency and accuracy. For instance,
Titans-Llama-3.2-1B achieves a 20% increase in Exact Match (EM) over its
baseline. Statistical analyses and comparisons with recent state-of-the-art
methods confirm the practical scalability and robustness of TPTT. Code is
available at https://github.com/fabienfrfr/tptt . Python package at
https://pypi.org/project/tptt/ . | 2025-06-21T10:06:07Z | 6 pages, 1 figure | null | null | null | null | null | null | null | null | null |
2506.17818 | CultureMERT: Continual Pre-Training for Cross-Cultural Music
Representation Learning | ['Angelos-Nikolaos Kanatas', 'Charilaos Papaioannou', 'Alexandros Potamianos'] | ['cs.SD', 'cs.AI', 'cs.LG', 'eess.AS'] | Recent advances in music foundation models have improved audio representation
learning, yet their effectiveness across diverse musical traditions remains
limited. We introduce CultureMERT-95M, a multi-culturally adapted foundation
model developed to enhance cross-cultural music representation learning and
understanding. To achieve this, we propose a two-stage continual pre-training
strategy that integrates learning rate re-warming and re-decaying, enabling
stable adaptation even with limited computational resources. Training on a
650-hour multi-cultural data mix, comprising Greek, Turkish, and Indian music
traditions, results in an average improvement of 4.9% in ROC-AUC and AP across
diverse non-Western music auto-tagging tasks, surpassing prior
state-of-the-art, with minimal forgetting on Western-centric benchmarks. We
further investigate task arithmetic, an alternative approach to multi-cultural
adaptation that merges single-culture adapted models in the weight space. Task
arithmetic performs on par with our multi-culturally trained model on
non-Western auto-tagging tasks and shows no regression on Western datasets.
Cross-cultural evaluation reveals that single-culture models transfer with
varying effectiveness across musical traditions, whereas the multi-culturally
adapted model achieves the best overall performance. To support research on
world music representation learning, we publicly release CultureMERT-95M and
CultureMERT-TA-95M, fostering the development of more culturally aware music
foundation models. | 2025-06-21T21:16:39Z | 10 pages, 4 figures, accepted to the 26th International Society for
Music Information Retrieval conference (ISMIR 2025), to be held in Daejeon,
South Korea | null | null | null | null | null | null | null | null | null |
2506.18035 | Splitformer: An improved early-exit architecture for automatic speech
recognition on edge devices | ['Maxence Lasbordes', 'Daniele Falavigna', 'Alessio Brutti'] | ['cs.CL', 'cs.SD', 'eess.AS', '68T50 (Primary)', 'I.2.7; I.5.4'] | The ability to dynamically adjust the computational load of neural models
during inference in a resource aware manner is crucial for on-device processing
scenarios, characterised by limited and time-varying computational resources.
Early-exit architectures represent an elegant and effective solution, since
they can process the input with a subset of their layers, exiting at
intermediate branches (the topmost layers are thus removed from the model).
From a different perspective, for automatic speech recognition applications
there are memory-efficient neural architectures that apply variable frame rate
analysis, through downsampling/upsampling operations in the middle layers,
reducing the overall number of operations and improving significantly the
performance on well established benchmarks. One example is the Zipformer.
However, these architectures lack the modularity necessary to inject early-exit
branches.
With the aim of improving the performance in early-exit models, we propose
introducing parallel layers in the architecture that process downsampled
versions of their inputs. We
show that in this way the speech recognition performance on standard benchmarks
significantly improves, at the cost of a small increase in the overall number of
model parameters but without affecting the inference time. | 2025-06-22T13:34:18Z | 5 pages, 3 Postscript figures | null | null | null | null | null | null | null | null | null |
2506.18088 | RoboTwin 2.0: A Scalable Data Generator and Benchmark with Strong Domain
Randomization for Robust Bimanual Robotic Manipulation | ['Tianxing Chen', 'Zanxin Chen', 'Baijun Chen', 'Zijian Cai', 'Yibin Liu', 'Qiwei Liang', 'Zixuan Li', 'Xianliang Lin', 'Yiheng Ge', 'Zhenyu Gu', 'Weiliang Deng', 'Yubin Guo', 'Tian Nian', 'Xuanbing Xie', 'Qiangyu Chen', 'Kailun Su', 'Tianling Xu', 'Guodong Liu', 'Mengkang Hu', 'Huan-ang Gao', 'Kaixuan Wang', 'Zhixuan Liang', 'Yusen Qin', 'Xiaokang Yang', 'Ping Luo', 'Yao Mu'] | ['cs.RO', 'cs.AI', 'cs.CL', 'cs.CV', 'cs.MA'] | Simulation-based data synthesis has emerged as a powerful paradigm for
enhancing real-world robotic manipulation. However, existing synthetic datasets
remain insufficient for robust bimanual manipulation due to two challenges: (1)
the lack of an efficient, scalable data generation method for novel tasks, and
(2) oversimplified simulation environments that fail to capture real-world
complexity. We present RoboTwin 2.0, a scalable simulation framework that
enables automated, large-scale generation of diverse and realistic data, along
with unified evaluation protocols for dual-arm manipulation. We first construct
RoboTwin-OD, a large-scale object library comprising 731 instances across 147
categories, each annotated with semantic and manipulation-relevant labels.
Building on this foundation, we develop an expert data synthesis pipeline that
combines multimodal large language models (MLLMs) with simulation-in-the-loop
refinement to generate task-level execution code automatically. To improve
sim-to-real transfer, RoboTwin 2.0 incorporates structured domain randomization
along five axes: clutter, lighting, background, tabletop height and language
instructions, thereby enhancing data diversity and policy robustness. We
instantiate this framework across 50 dual-arm tasks spanning five robot
embodiments, and pre-collect over 100,000 domain-randomized expert
trajectories. Empirical results show a 10.9% gain in code generation success
and improved generalization to novel real-world scenarios. A VLA model
fine-tuned on our dataset achieves a 367% relative improvement (42.0% vs. 9.0%)
on unseen scene real-world tasks, while zero-shot models trained solely on our
synthetic data achieve a 228% relative gain, highlighting strong generalization
without real-world supervision. We release the data generator, benchmark,
dataset, and code to support scalable research in robust bimanual manipulation. | 2025-06-22T16:26:53Z | Project Page: https://robotwin-platform.github.io/ | null | null | null | null | null | null | null | null | null |
2506.18095 | ShareGPT-4o-Image: Aligning Multimodal Models with GPT-4o-Level Image
Generation | ['Junying Chen', 'Zhenyang Cai', 'Pengcheng Chen', 'Shunian Chen', 'Ke Ji', 'Xidong Wang', 'Yunjin Yang', 'Benyou Wang'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Recent advances in multimodal generative models have unlocked photorealistic,
instruction-aligned image generation, yet leading systems like GPT-4o-Image
remain proprietary and inaccessible. To democratize these capabilities, we
present ShareGPT-4o-Image, the first dataset comprising 45K text-to-image and
46K text-and-image-to-image instances, all synthesized using GPT-4o's image
generation capabilities to distill its advanced image generation abilities.
Leveraging this dataset, we develop Janus-4o, a multimodal large language model
capable of both text-to-image and text-and-image-to-image generation. Janus-4o
not only significantly improves text-to-image generation over its predecessor,
Janus-Pro, but also newly supports text-and-image-to-image generation. Notably,
it achieves impressive performance in text-and-image-to-image generation from
scratch, using only 91K synthetic samples and 6 hours of training on an 8
A800-GPU machine. We hope the release of ShareGPT-4o-Image and Janus-4o will
foster open research in photorealistic, instruction-aligned image generation. | 2025-06-22T16:51:09Z | null | null | null | null | null | null | null | null | null | null |
2506.18203 | Shrinking the Generation-Verification Gap with Weak Verifiers | ['Jon Saad-Falcon', 'E. Kelly Buchanan', 'Mayee F. Chen', 'Tzu-Heng Huang', 'Brendan McLaughlin', 'Tanvir Bhathal', 'Shang Zhu', 'Ben Athiwaratkun', 'Frederic Sala', 'Scott Linderman', 'Azalia Mirhoseini', 'Christopher Ré'] | ['cs.CR', 'cs.CL'] | Verifiers can improve language model capabilities by scoring and ranking
responses from generated candidates. Currently, high-quality verifiers are
either unscalable (e.g., humans) or limited in utility (e.g., tools like Lean).
While LM judges and reward models have become broadly useful as general-purpose
verifiers, a significant performance gap remains between them and oracle
verifiers (verifiers with perfect accuracy). To help close this gap, we
introduce Weaver, a framework for designing a strong verifier by combining
multiple weak, imperfect verifiers. We find weighted ensembles of verifiers,
which typically require learning from labeled data, significantly outperform
unweighted combinations due to differences in verifier accuracies. To reduce
dependency on labeled data, Weaver leverages weak supervision to estimate each
verifier's accuracy and combines outputs into a unified score that better
reflects true response quality. However, directly applying weak supervision
algorithms poses challenges, including inconsistent verifier output formats and
handling low-quality verifiers. Weaver addresses these using dataset statistics
to normalize outputs and filter specific verifiers. We study Weaver's
effectiveness in test-time repeated sampling, where a model generates multiple
candidate responses and selects one. Our evaluations show Weaver significantly
improves over Pass@1 (performance when selecting the first candidate) across
reasoning and math tasks, achieving o3-mini-level accuracy with Llama 3.3 70B
Instruct as generator, and an ensemble of 70B or smaller judge and reward
models as verifiers (87.7% average). This gain mirrors the jump between GPT-4o
and o3-mini (69.0% vs. 86.7%), which required extensive finetuning and
post-training. To reduce computational costs of verifier ensembles, we train a
400M cross-encoder using Weaver's combined output scores. | 2025-06-22T23:38:15Z | null | null | null | null | null | null | null | null | null | null |
2506.18245 | Smart-LLaMA-DPO: Reinforced Large Language Model for Explainable Smart
Contract Vulnerability Detection | ['Lei Yu', 'Zhirong Huang', 'Hang Yuan', 'Shiqi Cheng', 'Li Yang', 'Fengjun Zhang', 'Chenjie Shen', 'Jiajia Ma', 'Jingyuan Zhang', 'Junyi Lu', 'Chun Zuo'] | ['cs.CR', 'cs.AI', 'cs.SE'] | Smart contract vulnerability detection remains a major challenge in
blockchain security. Existing vulnerability detection methods face two main
issues: (1) Existing datasets lack comprehensive coverage and high-quality
explanations for preference learning. (2) Large language models (LLMs) often
struggle with accurately interpreting specific concepts in smart contract
security. Empirical analysis shows that even after continual pre-training (CPT)
and supervised fine-tuning (SFT), LLMs may misinterpret the execution order of
state changes, resulting in incorrect explanations despite making correct
detection decisions. To address these challenges, we propose Smart-LLaMA-DPO
based on LLaMA-3.1-8B. First, we construct a comprehensive dataset covering four major
vulnerability types and machine-unauditable vulnerabilities, including precise
labels, explanations, and locations for SFT, as well as high-quality and
low-quality output pairs for Direct Preference Optimization (DPO). Second, we
perform CPT using large-scale smart contract data to enhance the LLM's understanding
of specific security practices in smart contracts. Furthermore, we conduct SFT
with our comprehensive dataset. Finally, we apply DPO, leveraging human
feedback and a specially designed loss function that increases the probability
of preferred explanations while reducing the likelihood of non-preferred
outputs. We evaluate Smart-LLaMA-DPO on four major vulnerability types:
reentrancy, timestamp dependence, integer overflow/underflow, and delegatecall,
as well as machine-unauditable vulnerabilities. Our method significantly
outperforms state-of-the-art baselines, with average improvements of 10.43% in
F1 score and 7.87% in accuracy. Moreover, both LLM evaluation and human
evaluation confirm that our method generates more correct, thorough, and clear
explanations. | 2025-06-23T02:24:07Z | Accepted to ISSTA 2025 | null | null | null | null | null | null | null | null | null |
2506.18254 | RLPR: Extrapolating RLVR to General Domains without Verifiers | ['Tianyu Yu', 'Bo Ji', 'Shouli Wang', 'Shu Yao', 'Zefan Wang', 'Ganqu Cui', 'Lifan Yuan', 'Ning Ding', 'Yuan Yao', 'Zhiyuan Liu', 'Maosong Sun', 'Tat-Seng Chua'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Reinforcement Learning with Verifiable Rewards (RLVR) demonstrates promising
potential in advancing the reasoning capabilities of LLMs. However, its success
remains largely confined to mathematical and code domains. This primary
limitation stems from the heavy reliance on domain-specific verifiers, which
results in prohibitive complexity and limited scalability. To address the
challenge, our key observation is that LLM's intrinsic probability of
generating a correct free-form answer directly indicates its own evaluation of
the reasoning reward (i.e., how well the reasoning process leads to the correct
answer). Building on this insight, we propose RLPR, a simple verifier-free
framework that extrapolates RLVR to broader general domains. RLPR uses the
LLM's own token probability scores for reference answers as the reward signal
and maximizes the expected reward during training. We find that addressing the
high variance of this noisy probability reward is crucial to make it work, and
propose prob-to-reward and stabilizing methods to ensure a precise and stable
reward from LLM intrinsic probabilities. Comprehensive experiments in four
general-domain benchmarks and three mathematical benchmarks show that RLPR
consistently improves reasoning capabilities in both areas for Gemma, Llama,
and Qwen based models. Notably, RLPR outperforms concurrent VeriFree by 7.6
points on TheoremQA and 7.5 points on Minerva, and even surpasses strong
verifier-model-dependent approaches such as General-Reasoner by 1.6 average points
across seven benchmarks. | 2025-06-23T02:56:36Z | Project Website: https://github.com/openbmb/RLPR | null | null | null | null | null | null | null | null | null |
2506.18330 | Confucius3-Math: A Lightweight High-Performance Reasoning LLM for
Chinese K-12 Mathematics Learning | ['Lixin Wu', 'Na Cai', 'Qiao Cheng', 'Jiachen Wang', 'Yitao Duan'] | ['cs.LG', 'cs.AI', 'cs.CL'] | We introduce Confucius3-Math, an open-source large language model with 14B
parameters that (1) runs efficiently on a single consumer-grade GPU; (2)
achieves SOTA performances on a range of mathematical reasoning tasks,
outperforming many models with significantly larger sizes. In particular, as
part of our mission to enhance education and knowledge dissemination with AI,
Confucius3-Math is specifically committed to mathematics learning for Chinese
K-12 students and educators. Built via post-training with large-scale
reinforcement learning (RL), Confucius3-Math aligns with the national curriculum
and excels at solving mainstream Chinese K-12 mathematical problems at low
cost. In this report we share our development recipe, the challenges we
encounter and the techniques we develop to overcome them. In particular, we
introduce three technical innovations: Targeted Entropy Regularization, Recent
Sample Recovery and Policy-Specific Hardness Weighting. These innovations
encompass a new entropy regularization, a novel data scheduling policy, and an
improved group-relative advantage estimator. Collectively, they significantly
stabilize the RL training, improve data efficiency, and boost performance. Our
work demonstrates the feasibility of building strong reasoning models in a
particular domain at low cost. We open-source our model and code at
https://github.com/netease-youdao/Confucius3-Math. | 2025-06-23T06:23:53Z | null | null | null | null | null | null | null | null | null | null |
2506.18349 | SlimMoE: Structured Compression of Large MoE Models via Expert Slimming
and Distillation | ['Zichong Li', 'Chen Liang', 'Zixuan Zhang', 'Ilgee Hong', 'Young Jin Kim', 'Weizhu Chen', 'Tuo Zhao'] | ['cs.LG', 'cs.CL'] | The Mixture of Experts (MoE) architecture has emerged as a powerful paradigm
for scaling large language models (LLMs) while maintaining inference
efficiency. However, their enormous memory requirements make them prohibitively
expensive to fine-tune or deploy in resource-constrained environments. To
address this challenge, we introduce SlimMoE, a multi-stage compression
framework for transforming large MoE models into much smaller, efficient
variants without incurring the prohibitive costs of training from scratch. Our
method systematically reduces parameter counts by slimming experts and
transferring knowledge through intermediate stages, effectively mitigating the
performance degradation common in one-shot pruning approaches. Using this
framework, we compress Phi 3.5-MoE (41.9B total/6.6B activated parameters) to
create Phi-mini-MoE (7.6B total/2.4B activated parameters) and Phi-tiny-MoE
(3.8B total/1.1B activated parameters) using only 400B tokens--less than 10% of
the original model's training data. These compressed models can be fine-tuned
on a single GPU (A100 for Phi-mini-MoE, A6000 for Phi-tiny-MoE), making them
highly suitable for academic and resource-limited settings. Our experiments
demonstrate that these compressed models outperform others of similar size and
remain competitive with larger models. For instance, Phi-mini-MoE achieves
similar or better performance to Phi-3-mini using only 2/3 of the activated
parameters and yields comparable MMLU scores to Llama 3.1 8B despite having
significantly lower latency. Our findings demonstrate that structured pruning
combined with staged distillation offers an effective path to creating
high-quality, compact MoE models, paving the way for broader adoption of MoE
architectures. We make our models publicly available at
https://huggingface.co/microsoft/Phi-mini-MoE-instruct and
https://huggingface.co/microsoft/Phi-tiny-MoE-instruct . | 2025-06-23T07:15:59Z | null | null | null | null | null | null | null | null | null | null |
2506.18582 | Parallel Continuous Chain-of-Thought with Jacobi Iteration | ['Haoyi Wu', 'Zhihao Teng', 'Kewei Tu'] | ['cs.CL'] | Continuous chain-of-thought has been shown to be effective in saving
reasoning tokens for large language models. By reasoning with continuous latent
thought tokens, continuous CoT is able to perform implicit reasoning in a
compact manner. However, the sequential dependencies between latent thought
tokens spoil parallel training, leading to long training time. In this paper,
we propose Parallel Continuous Chain-of-Thought (PCCoT), which performs Jacobi
iteration on the latent thought tokens, updating them iteratively in parallel
instead of sequentially and thus improving both training and inference
efficiency of continuous CoT. Experiments demonstrate that by choosing the
proper number of iterations, we are able to achieve comparable or even better
performance while saving nearly 50% of the training and inference time.
Moreover, PCCoT shows better stability and robustness in the training process.
Our code is available at https://github.com/whyNLP/PCCoT. | 2025-06-23T12:35:41Z | under review | null | null | null | null | null | null | null | null | null |
2506.18623 | Efficient and Generalizable Speaker Diarization via Structured Pruning
of Self-Supervised Models | ['Jiangyu Han', 'Petr Pálka', 'Marc Delcroix', 'Federico Landini', 'Johan Rohdin', 'Jan Cernocký', 'Lukáš Burget'] | ['eess.AS'] | Self-supervised learning (SSL) models such as WavLM have brought substantial
improvements to speaker diarization by providing rich contextual
representations. However, the high computational and memory costs of these
models hinder their deployment in real-time and resource-constrained scenarios.
In this work, we present a comprehensive study on compressing SSL-based
diarization models through structured pruning guided by knowledge distillation.
Building upon our previous work, we extend the analysis to include pruning
objectives based on multiply-accumulate operations (MACs), investigate
module-wise and progressive pruning strategies, and examine the impact of
training data quantity. Experimental results show that our method reduces model
size by up to 80% without degrading performance, achieving up to 4x faster
inference on a single GPU. We further perform large-scale evaluations on a
diverse compound dataset comprising eight public diarization corpora, where our
best pruned model achieves state-of-the-art performance across most conditions.
Additionally, we show strong generalization to the CHiME-6 dataset, attaining
performance comparable to the third-place system in the CHiME-7 challenge
without any domain adaptation. All models and code are publicly released to
support reproducibility and future research. | 2025-06-23T13:29:51Z | 11 pages, 6 figures | null | null | null | null | null | null | null | null | null |
2506.18701 | Matrix-Game: Interactive World Foundation Model | ['Yifan Zhang', 'Chunli Peng', 'Boyang Wang', 'Puyi Wang', 'Qingcheng Zhu', 'Fei Kang', 'Biao Jiang', 'Zedong Gao', 'Eric Li', 'Yang Liu', 'Yahui Zhou'] | ['cs.CV', 'cs.AI'] | We introduce Matrix-Game, an interactive world foundation model for
controllable game world generation. Matrix-Game is trained using a two-stage
pipeline that first performs large-scale unlabeled pretraining for environment
understanding, followed by action-labeled training for interactive video
generation. To support this, we curate Matrix-Game-MC, a comprehensive
Minecraft dataset comprising over 2,700 hours of unlabeled gameplay video clips
and over 1,000 hours of high-quality labeled clips with fine-grained keyboard
and mouse action annotations. Our model adopts a controllable image-to-world
generation paradigm, conditioned on a reference image, motion context, and user
actions. With over 17 billion parameters, Matrix-Game enables precise control
over character actions and camera movements, while maintaining high visual
quality and temporal coherence. To evaluate performance, we develop GameWorld
Score, a unified benchmark measuring visual quality, temporal quality, action
controllability, and physical rule understanding for Minecraft world
generation. Extensive experiments show that Matrix-Game consistently
outperforms prior open-source Minecraft world models (including Oasis and
MineWorld) across all metrics, with particularly strong gains in
controllability and physical consistency. Double-blind human evaluations
further confirm the superiority of Matrix-Game, highlighting its ability to
generate perceptually realistic and precisely controllable videos across
diverse game scenarios. To facilitate future research on interactive
image-to-world generation, we will open-source the Matrix-Game model weights
and the GameWorld Score benchmark at https://github.com/SkyworkAI/Matrix-Game. | 2025-06-23T14:40:49Z | Technical Report | null | null | null | null | null | null | null | null | null |
2506.18841 | LongWriter-Zero: Mastering Ultra-Long Text Generation via Reinforcement
Learning | ['Yuhao Wu', 'Yushi Bai', 'Zhiqiang Hu', 'Roy Ka-Wei Lee', 'Juanzi Li'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Ultra-long generation by large language models (LLMs) is a widely demanded
scenario, yet it remains a significant challenge due to their maximum
generation length limit and overall quality degradation as sequence length
increases. Previous approaches, exemplified by LongWriter, typically rely on
"teaching", which involves supervised fine-tuning (SFT) on synthetic
long-form outputs. However, this strategy heavily depends on synthetic SFT
data, which is difficult and costly to construct, often lacks coherence and
consistency, and tends to be overly artificial and structurally monotonous. In
this work, we propose an incentivization-based approach that, starting entirely
from scratch and without relying on any annotated or synthetic data, leverages
reinforcement learning (RL) to foster the emergence of ultra-long, high-quality
text generation capabilities in LLMs. We perform RL training starting from a
base model, similar to R1-Zero, guiding it to engage in reasoning that
facilitates planning and refinement during the writing process. To support
this, we employ specialized reward models that steer the LLM towards improved
length control, writing quality, and structural formatting. Experimental
evaluations show that our LongWriter-Zero model, trained from Qwen2.5-32B,
consistently outperforms traditional SFT methods on long-form writing tasks,
achieving state-of-the-art results across all metrics on WritingBench and
Arena-Write, and even surpassing 100B+ models such as DeepSeek R1 and
Qwen3-235B. We open-source our data and model checkpoints under
https://huggingface.co/THU-KEG/LongWriter-Zero-32B | 2025-06-23T16:59:02Z | null | null | null | null | null | null | null | null | null | null |
2506.18843 | USAD: Universal Speech and Audio Representation via Distillation | ['Heng-Jui Chang', 'Saurabhchand Bhati', 'James Glass', 'Alexander H. Liu'] | ['cs.SD', 'cs.CL', 'eess.AS'] | Self-supervised learning (SSL) has revolutionized audio representations, yet
models often remain domain-specific, focusing on either speech or non-speech
tasks. In this work, we present Universal Speech and Audio Distillation (USAD),
a unified approach to audio representation learning that integrates diverse
audio types - speech, sound, and music - into a single model. USAD employs
efficient layer-to-layer distillation from domain-specific SSL models to train
a student on a comprehensive audio dataset. USAD offers competitive performance
across various benchmarks and datasets, including frame and instance-level
speech processing tasks, audio tagging, and sound classification, achieving
near state-of-the-art results with a single encoder on SUPERB and HEAR
benchmarks. | 2025-06-23T17:02:00Z | Preprint | null | null | null | null | null | null | null | null | null |
2506.18866 | OmniAvatar: Efficient Audio-Driven Avatar Video Generation with Adaptive
Body Animation | ['Qijun Gan', 'Ruizi Yang', 'Jianke Zhu', 'Shaofei Xue', 'Steven Hoi'] | ['cs.CV', 'cs.AI', 'cs.MM'] | Significant progress has been made in audio-driven human animation, while
most existing methods focus mainly on facial movements, limiting their ability
to create full-body animations with natural synchronization and fluidity. They
also struggle with precise prompt control for fine-grained generation. To
tackle these challenges, we introduce OmniAvatar, an innovative audio-driven
full-body video generation model that enhances human animation with improved
lip-sync accuracy and natural movements. OmniAvatar introduces a pixel-wise
multi-hierarchical audio embedding strategy to better capture audio features in
the latent space, enhancing lip-syncing across diverse scenes. To preserve the
capability for prompt-driven control of foundation models while effectively
incorporating audio features, we employ a LoRA-based training approach.
Extensive experiments show that OmniAvatar surpasses existing models in both
facial and semi-body video generation, offering precise text-based control for
creating videos in various domains, such as podcasts, human interactions,
dynamic scenes, and singing. Our project page is
https://omni-avatar.github.io/. | 2025-06-23T17:33:03Z | Project page: https://omni-avatar.github.io/ | null | null | null | null | null | null | null | null | null |
2506.18871 | OmniGen2: Exploration to Advanced Multimodal Generation | ['Chenyuan Wu', 'Pengfei Zheng', 'Ruiran Yan', 'Shitao Xiao', 'Xin Luo', 'Yueze Wang', 'Wanli Li', 'Xiyan Jiang', 'Yexin Liu', 'Junjie Zhou', 'Ze Liu', 'Ziyi Xia', 'Chaofan Li', 'Haoge Deng', 'Jiahao Wang', 'Kun Luo', 'Bo Zhang', 'Defu Lian', 'Xinlong Wang', 'Zhongyuan Wang', 'Tiejun Huang', 'Zheng Liu'] | ['cs.CV', 'cs.AI', 'cs.CL'] | In this work, we introduce OmniGen2, a versatile and open-source generative
model designed to provide a unified solution for diverse generation tasks,
including text-to-image, image editing, and in-context generation. Unlike
OmniGen v1, OmniGen2 features two distinct decoding pathways for text and image
modalities, utilizing unshared parameters and a decoupled image tokenizer. This
design enables OmniGen2 to build upon existing multimodal understanding models
without the need to re-adapt VAE inputs, thereby preserving the original text
generation capabilities. To facilitate the training of OmniGen2, we developed
comprehensive data construction pipelines, encompassing image editing and
in-context generation data. Additionally, we introduce a reflection mechanism
tailored for image generation tasks and curate a dedicated reflection dataset
based on OmniGen2. Despite its relatively modest parameter size, OmniGen2
achieves competitive results on multiple task benchmarks, including
text-to-image and image editing. To further evaluate in-context generation,
also referred to as subject-driven tasks, we introduce a new benchmark named
OmniContext. OmniGen2 achieves state-of-the-art performance among open-source
models in terms of consistency. We will release our models, training code,
datasets, and data construction pipeline to support future research in this
field. Project Page: https://vectorspacelab.github.io/OmniGen2; GitHub Link:
https://github.com/VectorSpaceLab/OmniGen2 | 2025-06-23T17:38:54Z | null | null | null | null | null | null | null | null | null | null |
2506.18896 | ReasonFlux-PRM: Trajectory-Aware PRMs for Long Chain-of-Thought
Reasoning in LLMs | ['Jiaru Zou', 'Ling Yang', 'Jingwen Gu', 'Jiahao Qiu', 'Ke Shen', 'Jingrui He', 'Mengdi Wang'] | ['cs.CL'] | Process Reward Models (PRMs) have recently emerged as a powerful framework
for supervising intermediate reasoning steps in large language models (LLMs).
Previous PRMs are primarily trained on model final output responses and
struggle to evaluate intermediate thinking trajectories robustly, especially in
the emerging setting of trajectory-response outputs generated by frontier
reasoning models like Deepseek-R1. In this work, we introduce ReasonFlux-PRM, a
novel trajectory-aware PRM explicitly designed to evaluate the
trajectory-response type of reasoning traces. ReasonFlux-PRM incorporates both
step-level and trajectory-level supervision, enabling fine-grained reward
assignment aligned with structured chain-of-thought data. We adapt
ReasonFlux-PRM to support reward supervision under both offline and online
settings, including (i) selecting high-quality model distillation data for
downstream supervised fine-tuning of smaller models, (ii) providing dense
process-level rewards for policy optimization during reinforcement learning,
and (iii) enabling reward-guided Best-of-N test-time scaling. Empirical results
on challenging downstream benchmarks such as AIME, MATH500, and GPQA-Diamond
demonstrate that ReasonFlux-PRM-7B selects higher quality data than strong PRMs
(e.g., Qwen2.5-Math-PRM-72B) and human-curated baselines. Furthermore, our
derived ReasonFlux-PRM-7B yields consistent performance improvements, achieving
average gains of 12.1% in supervised fine-tuning, 4.5% in reinforcement
learning, and 6.3% in test-time scaling. We also release our efficient
ReasonFlux-PRM-1.5B for resource-constrained applications and edge deployment.
Projects: https://github.com/Gen-Verse/ReasonFlux | 2025-06-23T17:59:02Z | Codes and Models: https://github.com/Gen-Verse/ReasonFlux | null | null | null | null | null | null | null | null | null |
2506.18898 | Vision as a Dialect: Unifying Visual Understanding and Generation via
Text-Aligned Representations | ['Jiaming Han', 'Hao Chen', 'Yang Zhao', 'Hanyu Wang', 'Qi Zhao', 'Ziyan Yang', 'Hao He', 'Xiangyu Yue', 'Lu Jiang'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.MM'] | This paper presents a multimodal framework that attempts to unify visual
understanding and generation within a shared discrete semantic representation.
At its core is the Text-Aligned Tokenizer (TA-Tok), which converts images into
discrete tokens using a text-aligned codebook projected from a large language
model's (LLM) vocabulary. By integrating vision and text into a unified space
with an expanded vocabulary, our multimodal LLM, Tar, enables cross-modal input
and output through a shared interface, without the need for modality-specific
designs. Additionally, we propose scale-adaptive encoding and decoding to
balance efficiency and visual detail, along with a generative de-tokenizer to
produce high-fidelity visual outputs. To address diverse decoding needs, we
utilize two complementary de-tokenizers: a fast autoregressive model and a
diffusion-based model. To enhance modality fusion, we investigate advanced
pre-training tasks, demonstrating improvements in both visual understanding and
generation. Experiments across benchmarks show that Tar matches or surpasses
existing multimodal LLM methods, achieving faster convergence and greater
training efficiency. Code, models, and data are available at
https://tar.csuhan.com | 2025-06-23T17:59:14Z | Project page: https://tar.csuhan.com | null | null | null | null | null | null | null | null | null |
2506.18902 | jina-embeddings-v4: Universal Embeddings for Multimodal Multilingual
Retrieval | ['Michael Günther', 'Saba Sturua', 'Mohammad Kalim Akram', 'Isabelle Mohr', 'Andrei Ungureanu', 'Bo Wang', 'Sedigheh Eslami', 'Scott Martens', 'Maximilian Werk', 'Nan Wang', 'Han Xiao'] | ['cs.AI', 'cs.CL', 'cs.IR', '68T50', 'I.2.7'] | We introduce jina-embeddings-v4, a 3.8 billion parameter multimodal embedding
model that unifies text and image representations through a novel architecture
supporting both single-vector and multi-vector embeddings in the late
interaction style. The model incorporates task-specific Low-Rank Adaptation
(LoRA) adapters to optimize performance across diverse retrieval scenarios,
including query-document retrieval, semantic text similarity, and code search.
Comprehensive evaluations demonstrate that jina-embeddings-v4 achieves
state-of-the-art performance on both single-modal and cross-modal retrieval
tasks, with particular strength in processing visually rich content such as
tables, charts, diagrams, and mixed-media formats. To facilitate evaluation of
this capability, we also introduce Jina-VDR, a novel benchmark specifically
designed for visually rich image retrieval. | 2025-06-23T17:59:55Z | 22 pages, 1-10 main, 14-22 experimental results, benchmark tables | null | null | null | null | null | null | null | null | null |
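Multi-vector embeddings "in the late interaction style" are typically scored ColBERT-style: each query token vector is matched to its best document token vector and the maxima are summed. A minimal sketch under that assumption (the abstract does not spell out the exact scoring function; the toy 2-D vectors are illustrative):

```python
def maxsim_score(query_vecs, doc_vecs):
    """Late-interaction score: for each query token vector, take its best
    dot-product match among document token vectors, then sum."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return sum(max(dot(q, d) for d in doc_vecs) for q in query_vecs)

q = [[1.0, 0.0], [0.0, 1.0]]          # two query-token embeddings
doc_a = [[1.0, 0.0], [0.5, 0.5]]      # matches both query tokens well
doc_b = [[0.0, 0.2], [0.1, 0.1]]      # weak matches
print(maxsim_score(q, doc_a) > maxsim_score(q, doc_b))  # True
```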
2506.18903 | VMem: Consistent Interactive Video Scene Generation with Surfel-Indexed
View Memory | ['Runjia Li', 'Philip Torr', 'Andrea Vedaldi', 'Tomas Jakab'] | ['cs.CV'] | We propose a novel memory mechanism to build video generators that can
explore environments interactively. Similar results have previously been
achieved by out-painting 2D views of the scene while incrementally
reconstructing its 3D geometry, which quickly accumulates errors, or by video
generators with a short context window, which struggle to maintain scene
coherence over the long term. To address these limitations, we introduce
Surfel-Indexed View Memory (VMem), a mechanism that remembers past views by
indexing them geometrically based on the 3D surface elements (surfels) they
have observed. VMem enables the efficient retrieval of the most relevant past
views when generating new ones. By focusing only on these relevant views, our
method produces consistent explorations of imagined environments at a fraction
of the computational cost of using all past views as context. We evaluate our
approach on challenging long-term scene synthesis benchmarks and demonstrate
superior performance compared to existing methods in maintaining scene
coherence and camera control. | 2025-06-23T17:59:56Z | Project page: https://v-mem.github.io | null | null | null | null | null | null | null | null | null |
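The core retrieval idea -- index past views by the surfels they observed, then fetch the views sharing the most surfels with the region being generated -- can be sketched with plain sets (a toy illustration; the surfel IDs and view names are hypothetical, not VMem's actual data structures):

```python
from collections import Counter

def retrieve_views(view_to_surfels, query_surfels, k=2):
    """Rank stored views by how many of the queried surfels they observed,
    returning the top-k most relevant view ids."""
    counts = Counter()
    for view_id, surfels in view_to_surfels.items():
        counts[view_id] = len(surfels & query_surfels)
    return [v for v, _ in counts.most_common(k)]

memory = {
    "view0": {1, 2, 3},
    "view1": {3, 4, 5},
    "view2": {7, 8},
}
print(retrieve_views(memory, {3, 4}))  # view1 shares 2 surfels, view0 shares 1
```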
2506.18904 | TC-Light: Temporally Coherent Generative Rendering for Realistic World
Transfer | ['Yang Liu', 'Chuanchen Luo', 'Zimo Tang', 'Yingyan Li', 'Yuran Yang', 'Yuanyong Ning', 'Lue Fan', 'Zhaoxiang Zhang', 'Junran Peng'] | ['cs.CV'] | Illumination and texture editing are critical dimensions for world-to-world
transfer, which is valuable for applications including sim2real and real2real
visual data scaling up for embodied AI. Existing techniques generatively
re-render the input video to realize the transfer, such as video relighting
models and conditioned world generation models. Nevertheless, these models are
predominantly limited to the domain of training data (e.g., portrait) or fall
into the bottleneck of temporal consistency and computation efficiency,
especially when the input video involves complex dynamics and long durations.
In this paper, we propose TC-Light, a novel generative renderer to overcome
these problems. Starting from the video preliminarily relighted by an inflated
video relighting model, it optimizes appearance embedding in the first stage to
align global illumination. Then it optimizes the proposed canonical video
representation, i.e., Unique Video Tensor (UVT), to align fine-grained texture
and lighting in the second stage. To comprehensively evaluate performance, we
also establish a long and highly dynamic video benchmark. Extensive experiments
show that our method enables physically plausible re-rendering results with
superior temporal coherence and low computation cost. The code and video demos
are available at https://dekuliutesla.github.io/tclight/. | 2025-06-23T17:59:58Z | Project Page: https://dekuliutesla.github.io/tclight/ Code:
https://github.com/Linketic/TC-Light | null | null | null | null | null | null | null | null | null |
2506.19103 | Inverse-and-Edit: Effective and Fast Image Editing by Cycle Consistency
Models | ['Ilia Beletskii', 'Andrey Kuznetsov', 'Aibek Alanov'] | ['cs.CV'] | Recent advances in image editing with diffusion models have achieved
impressive results, offering fine-grained control over the generation process.
However, these methods are computationally intensive because of their iterative
nature. While distilled diffusion models enable faster inference, their editing
capabilities remain limited, primarily because of poor inversion quality.
High-fidelity inversion and reconstruction are essential for precise image
editing, as they preserve the structural and semantic integrity of the source
image. In this work, we propose a novel framework that enhances image inversion
using consistency models, enabling high-quality editing in just four steps. Our
method introduces a cycle-consistency optimization strategy that significantly
improves reconstruction accuracy and enables a controllable trade-off between
editability and content preservation. We achieve state-of-the-art performance
across various image editing tasks and datasets, demonstrating that our method
matches or surpasses full-step diffusion models while being substantially more
efficient. The code of our method is available on GitHub at
https://github.com/ControlGenAI/Inverse-and-Edit. | 2025-06-23T20:34:43Z | The code of our method is available on GitHub at
https://github.com/ControlGenAI/Inverse-and-Edit | null | null | null | null | null | null | null | null | null |
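The cycle-consistency objective amounts to penalizing round-trip error: invert an image to a latent, regenerate it, and compare the result to the original. A toy scalar sketch of that quantity (the real method operates on images through a consistency model, not the stand-in `encode`/`decode` callables used here):

```python
def cycle_consistency_loss(encode, decode, images):
    """Round-trip reconstruction error: invert each input to a latent, map it
    back, and average the squared difference -- the quantity a
    cycle-consistency objective drives toward zero (scalars stand in for images)."""
    total = 0.0
    for x in images:
        x_rec = decode(encode(x))
        total += (x - x_rec) ** 2
    return total / len(images)

# A perfectly invertible encode/decode pair gives zero loss; a lossy one does not.
print(cycle_consistency_loss(lambda x: 2 * x, lambda z: z / 2, [1.0, 2.0]))  # 0.0
print(cycle_consistency_loss(lambda x: round(x), float, [1.3, 2.0]) > 0)     # True
```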
2506.19290 | Skywork-SWE: Unveiling Data Scaling Laws for Software Engineering in
LLMs | ['Liang Zeng', 'Yongcong Li', 'Yuzhen Xiao', 'Changshi Li', 'Chris Yuhao Liu', 'Rui Yan', 'Tianwen Wei', 'Jujie He', 'Xuchen Song', 'Yang Liu', 'Yahui Zhou'] | ['cs.AI', 'cs.CL'] | Software engineering (SWE) has recently emerged as a crucial testbed for
next-generation LLM agents, demanding inherent capabilities in two critical
dimensions: sustained iterative problem-solving (e.g., >50 interaction rounds)
and long-context dependency resolution (e.g., >32k tokens). However, the data
curation process in SWE remains notoriously time-consuming, as it heavily
relies on manual annotation for code file filtering and the setup of dedicated
runtime environments to execute and validate unit tests. Consequently, most
existing datasets are limited to only a few thousand GitHub-sourced instances.
To this end, we propose an incremental, automated data-curation pipeline that
systematically scales both the volume and diversity of SWE datasets. Our
dataset comprises 10,169 real-world Python task instances from 2,531 distinct
GitHub repositories, each accompanied by a task specified in natural language
and a dedicated runtime-environment image for automated unit-test validation.
We have carefully curated over 8,000 successfully runtime-validated training
trajectories from our proposed SWE dataset. When fine-tuning the Skywork-SWE
model on these trajectories, we uncover a striking data scaling phenomenon: the
trained model's performance for software engineering capabilities in LLMs
continues to improve as the data size increases, showing no signs of
saturation. Notably, our Skywork-SWE model achieves 38.0% pass@1 accuracy on
the SWE-bench Verified benchmark without using verifiers or multiple rollouts,
establishing a new state-of-the-art (SOTA) among the Qwen2.5-Coder-32B-based
LLMs built on the OpenHands agent framework. Furthermore, with the
incorporation of test-time scaling techniques, the performance further improves
to 47.0% accuracy, surpassing the previous SOTA results for sub-32B parameter
models. We release the Skywork-SWE-32B model checkpoint to accelerate future
research. | 2025-06-24T03:53:36Z | null | null | null | null | null | null | null | null | null | null |
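The pass@1 metric quoted above is the standard pass@k estimator with k=1, i.e. the raw fraction of tasks solved on a single attempt; the general unbiased form matters once multiple rollouts are drawn. A sketch:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: probability that at least one of k samples,
    drawn without replacement from n total (c of them correct), solves the task."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With one rollout per task (k=1), this reduces to the raw success rate c/n.
print(pass_at_k(10, 3, 1))  # ~0.3
```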
2506.19585 | SMARTIES: Spectrum-Aware Multi-Sensor Auto-Encoder for Remote Sensing
Images | ['Gencer Sumbul', 'Chang Xu', 'Emanuele Dalsasso', 'Devis Tuia'] | ['cs.CV'] | From optical sensors to microwave radars, leveraging the complementary
strengths of remote sensing (RS) sensors is crucial for achieving dense
spatio-temporal monitoring of our planet. In contrast, recent deep learning
models, whether task-specific or foundational, are often specific to single
sensors or to fixed combinations: adapting such models to different sensory
inputs requires both architectural changes and re-training, limiting
scalability and generalization across multiple RS sensors. On the contrary, a
single model able to modulate its feature representations to accept diverse
sensors as input would pave the way to agile and flexible multi-sensor RS data
processing. To address this, we introduce SMARTIES, a generic and versatile
foundation model that removes sensor-specific engineering effort, enabling
scalability and generalization to diverse RS sensors: SMARTIES projects data
from heterogeneous sensors into a shared spectrum-aware space, enabling the use
of arbitrary combinations of bands both for training and inference. To obtain
sensor-agnostic representations, we train a single, unified transformer model
reconstructing masked multi-sensor data with cross-sensor token mixup. On both
single- and multi-modal tasks across diverse sensors, SMARTIES outperforms
previous models that rely on sensor-specific pretraining. Our code and
pretrained models are available at https://gsumbul.github.io/SMARTIES. | 2025-06-24T12:51:39Z | null | null | null | null | null | null | null | null | null | null |
2506.19697 | Outlier-Safe Pre-Training for Robust 4-Bit Quantization of Large
Language Models | ['Jungwoo Park', 'Taewhoo Lee', 'Chanwoong Yoon', 'Hyeon Hwang', 'Jaewoo Kang'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Extreme activation outliers in Large Language Models (LLMs) critically
degrade quantization performance, hindering efficient on-device deployment.
While channel-wise operations and adaptive gradient scaling are recognized
causes, practical mitigation remains challenging. We introduce Outlier-Safe
Pre-Training (OSP), a practical guideline that proactively prevents outlier
formation rather than relying on post-hoc mitigation. OSP combines three key
innovations: (1) the Muon optimizer, eliminating privileged bases while
maintaining training efficiency; (2) Single-Scale RMSNorm, preventing
channel-wise amplification; and (3) a learnable embedding projection,
redistributing activation magnitudes originating from embedding matrices. We
validate OSP by training a 1.4B-parameter model on 1 trillion tokens, which is
the first production-scale LLM trained without such outliers. Under aggressive
4-bit quantization, our OSP model achieves a 35.7 average score across 10
benchmarks (compared to 26.5 for an Adam-trained model), with only a 2%
training overhead. Remarkably, OSP models exhibit near-zero excess kurtosis
(0.04) compared to extreme values (1818.56) in standard models, fundamentally
altering LLM quantization behavior. Our work demonstrates that outliers are not
inherent to LLMs but are consequences of training strategies, paving the way
for more efficient LLM deployment. The source code and pretrained checkpoints
are available at https://github.com/dmis-lab/Outlier-Safe-Pre-Training. | 2025-06-24T15:03:57Z | null | null | null | null | null | null | null | null | null | null |
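Excess kurtosis, the outlier statistic the abstract reports (0.04 vs. 1818.56), is the fourth standardized moment minus 3: near zero for Gaussian-like activations and very large when a few values spike. A minimal sample-moment sketch (the toy lists stand in for activation tensors):

```python
def excess_kurtosis(xs):
    """Excess kurtosis of a sample: fourth standardized moment minus 3.
    Near 0 for Gaussian-like data; large values signal heavy outliers."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (var ** 2) - 3.0

smooth = [-1.0, -0.5, 0.0, 0.5, 1.0]
spiky = [0.0] * 99 + [100.0]            # a single extreme outlier
print(excess_kurtosis(spiky) > excess_kurtosis(smooth))  # True
```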
2506.19708 | Uncovering Conceptual Blindspots in Generative Image Models Using Sparse
Autoencoders | ['Matyas Bohacek', 'Thomas Fel', 'Maneesh Agrawala', 'Ekdeep Singh Lubana'] | ['cs.GR', 'cs.AI', 'cs.CV'] | Despite their impressive performance, generative image models trained on
large-scale datasets frequently fail to produce images with seemingly simple
concepts -- e.g., human hands or objects appearing in groups of four -- that
are reasonably expected to appear in the training data. These failure modes
have largely been documented anecdotally, leaving open the question of whether
they reflect idiosyncratic anomalies or more structural limitations of these
models. To address this, we introduce a systematic approach for identifying and
characterizing "conceptual blindspots" -- concepts present in the training data
but absent or misrepresented in a model's generations. Our method leverages
sparse autoencoders (SAEs) to extract interpretable concept embeddings,
enabling a quantitative comparison of concept prevalence between real and
generated images. We train an archetypal SAE (RA-SAE) on DINOv2 features with
32,000 concepts -- the largest such SAE to date -- enabling fine-grained
analysis of conceptual disparities. Applied to four popular generative models
(Stable Diffusion 1.5/2.1, PixArt, and Kandinsky), our approach reveals
specific suppressed blindspots (e.g., bird feeders, DVD discs, and whitespaces
on documents) and exaggerated blindspots (e.g., wood background texture and
palm trees). At the individual datapoint level, we further isolate memorization
artifacts -- instances where models reproduce highly specific visual templates
seen during training. Overall, we propose a theoretically grounded framework
for systematically identifying conceptual blindspots in generative models by
assessing their conceptual fidelity with respect to the underlying
data-generating process. | 2025-06-24T15:15:15Z | null | null | null | null | null | null | null | null | null | null |
2506.19753 | Arabic Dialect Classification using RNNs, Transformers, and Large
Language Models: A Comparative Analysis | ['Omar A. Essameldin', 'Ali O. Elbeih', 'Wael H. Gomaa', 'Wael F. Elsersy'] | ['cs.CL', 'cs.AI'] | The Arabic language is among the most popular languages in the world with a
huge variety of dialects spoken in 22 countries. In this study, we address the
problem of classifying 18 Arabic dialects of the QADI dataset of Arabic tweets.
We build and evaluate RNN models, Transformer models, and large language models
(LLMs) accessed via prompt engineering. Among these, MARBERTv2 performed best with
65% accuracy and 64% F1-score. Through the use of state-of-the-art
preprocessing techniques and the latest NLP models, this paper identifies the
most significant linguistic issues in Arabic dialect identification. The
results support applications such as personalized chatbots that respond in
users' dialects, social media monitoring, and greater accessibility for Arabic
communities. | 2025-06-24T16:06:58Z | Email Typo Update | null | null | null | null | null | null | null | null | null |
2506.19767 | SRFT: A Single-Stage Method with Supervised and Reinforcement
Fine-Tuning for Reasoning | ['Yuqian Fu', 'Tinghong Chen', 'Jiajun Chai', 'Xihuai Wang', 'Songjun Tu', 'Guojun Yin', 'Wei Lin', 'Qichao Zhang', 'Yuanheng Zhu', 'Dongbin Zhao'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Large language models (LLMs) have achieved remarkable progress in reasoning
tasks, yet the optimal integration of Supervised Fine-Tuning (SFT) and
Reinforcement Learning (RL) remains a fundamental challenge. Through
comprehensive analysis of token distributions, learning dynamics, and
integration mechanisms from entropy-based perspectives, we reveal key
differences between these paradigms: SFT induces coarse-grained global changes
to LLM policy distributions, while RL performs fine-grained selective
optimizations, with entropy serving as a critical indicator of training
effectiveness. Building on these observations, we propose Supervised
Reinforcement Fine-Tuning (SRFT), a single-stage method that unifies both
fine-tuning paradigms through entropy-aware weighting mechanisms. Our approach
simultaneously applies SFT and RL to directly optimize the LLM using
demonstrations and self-exploration rollouts rather than through two-stage
sequential methods. Extensive experiments show that SRFT achieves 59.1% average
accuracy, outperforming zero-RL methods by 9.0% on five mathematical reasoning
benchmarks and 10.9% on three out-of-distribution benchmarks. | 2025-06-24T16:31:37Z | null | null | null | null | null | null | null | null | null | null |
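The abstract does not specify SRFT's entropy-aware weighting, but the idea of using policy entropy to balance the SFT and RL terms can be illustrated with a simple, assumed scheme (not the paper's actual formula): lean on demonstrations when the policy is uncertain and on self-exploration rollouts when it is confident:

```python
import math

def entropy(probs):
    """Shannon entropy of a probability distribution (nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_weighted_loss(sft_loss, rl_loss, token_probs, max_entropy):
    """Illustrative blend of SFT and RL objectives: high policy entropy
    weights the demonstration (SFT) term, low entropy weights the RL term."""
    w = entropy(token_probs) / max_entropy   # in [0, 1]
    return w * sft_loss + (1.0 - w) * rl_loss

uniform = [0.25] * 4                     # maximally uncertain policy
peaked = [0.97, 0.01, 0.01, 0.01]        # confident policy
h_max = math.log(4)
print(entropy_weighted_loss(1.0, 0.0, uniform, h_max))        # ~1.0: pure SFT weight
print(entropy_weighted_loss(1.0, 0.0, peaked, h_max) < 0.5)   # True: mostly RL weight
```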
2506.19807 | KnowRL: Exploring Knowledgeable Reinforcement Learning for Factuality | ['Baochang Ren', 'Shuofei Qiao', 'Wenhao Yu', 'Huajun Chen', 'Ningyu Zhang'] | ['cs.AI', 'cs.CL', 'cs.CV', 'cs.LG', 'cs.MA'] | Large Language Models (LLMs), particularly slow-thinking models, often
exhibit severe hallucination, outputting incorrect content due to an inability
to accurately recognize knowledge boundaries during reasoning. While
Reinforcement Learning (RL) can enhance complex reasoning abilities, its
outcome-oriented reward mechanism often lacks factual supervision over the
thinking process, further exacerbating the hallucination problem. To address
the high hallucination in slow-thinking models, we propose Knowledge-enhanced
RL, KnowRL. KnowRL guides models to perform fact-based slow thinking by
integrating a factuality reward, based on knowledge verification, into the RL
training process, helping them recognize their knowledge boundaries. This targeted factual input during
RL training enables the model to learn and internalize fact-based reasoning
strategies. By directly rewarding adherence to facts within the reasoning
steps, KnowRL fosters a more reliable thinking process. Experimental results on
three hallucination evaluation datasets and two reasoning evaluation datasets
demonstrate that KnowRL effectively mitigates hallucinations in slow-thinking
models while maintaining their original strong reasoning capabilities. Our code
is available at https://github.com/zjunlp/KnowRL. | 2025-06-24T17:17:17Z | Work in progress | null | null | null | null | null | null | null | null | null |
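A factuality reward "based on knowledge verification" can be pictured as checking extracted claims against a trusted knowledge base and rewarding the verified fraction. A toy sketch (exact string-match verification is a deliberate simplification of whatever verifier KnowRL actually uses):

```python
def factuality_reward(claims, knowledge_base):
    """Toy knowledge-verification reward: fraction of extracted claims found
    in a trusted knowledge base."""
    if not claims:
        return 0.0
    return sum(c in knowledge_base for c in claims) / len(claims)

kb = {"water boils at 100C", "paris is in france"}
print(factuality_reward(["paris is in france", "the moon is cheese"], kb))  # 0.5
```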
2506.19850 | Unified Vision-Language-Action Model | ['Yuqi Wang', 'Xinghang Li', 'Wenxuan Wang', 'Junbo Zhang', 'Yingyan Li', 'Yuntao Chen', 'Xinlong Wang', 'Zhaoxiang Zhang'] | ['cs.CV', 'cs.RO'] | Vision-language-action models (VLAs) have garnered significant attention for
their potential in advancing robotic manipulation. However, previous approaches
predominantly rely on the general comprehension capabilities of vision-language
models (VLMs) to generate action signals, often overlooking the rich temporal
and causal structure embedded in visual observations. In this paper, we present
UniVLA, a unified and native multimodal VLA model that autoregressively models
vision, language, and action signals as discrete token sequences. This
formulation enables flexible multimodal task learning, particularly from
large-scale video data. By incorporating world modeling during post-training,
UniVLA captures causal dynamics from videos, facilitating effective transfer to
downstream policy learning--especially for long-horizon tasks. Our approach
sets new state-of-the-art results across several widely used simulation
benchmarks, including CALVIN, LIBERO, and SimplerEnv-Bridge, significantly
surpassing previous methods. For example, UniVLA achieves a 95.5% average success
rate on the LIBERO benchmark, surpassing pi0-FAST's 85.5%. We further demonstrate
its broad applicability on real-world ALOHA manipulation and autonomous
driving. | 2025-06-24T17:59:57Z | technical report | null | null | null | null | null | null | null | null | null |
2506.20151 | EAR: Erasing Concepts from Unified Autoregressive Models | ['Haipeng Fan', 'Shiyuan Zhang', 'Baohunesitu', 'Zihang Guo', 'Huaiwen Zhang'] | ['cs.CV', 'cs.AI'] | Autoregressive (AR) models have achieved unified and strong performance
across both visual understanding and image generation tasks. However, removing
undesired concepts from AR models while maintaining overall generation quality
remains an open challenge. In this paper, we propose Erasure Autoregressive
Model (EAR), a fine-tuning method for effective and utility-preserving concept
erasure in AR models. Specifically, we introduce Windowed Gradient Accumulation
(WGA) strategy to align patch-level decoding with erasure objectives, and
Thresholded Loss Masking (TLM) strategy to protect content unrelated to the
target concept during fine-tuning. Furthermore, we propose a novel benchmark,
Erase Concept Generator and Visual Filter (ECGVF), aimed at providing a more
rigorous and comprehensive foundation for evaluating concept erasure in AR
models. To build it, we first employ structured templates across diverse large
language models (LLMs) to pre-generate a large-scale corpus of
target-replacement concept prompt pairs. Subsequently, we generate images from
these prompts and subject them to rigorous filtering via a visual classifier to
ensure concept fidelity and alignment. Extensive experimental results conducted
on the ECGVF benchmark with the AR model Janus-Pro demonstrate that EAR
achieves marked improvements in both erasure effectiveness and model utility
preservation. Code is available at: https://github.com/immc-lab/ear/ | 2025-06-25T06:15:07Z | 11 pages, 7 figures, 1 tables | null | null | null | null | null | null | null | null | null |
2506.20279 | From Ideal to Real: Unified and Data-Efficient Dense Prediction for
Real-World Scenarios | ['Changliang Xia', 'Chengyou Jia', 'Zhuohang Dang', 'Minnan Luo'] | ['cs.CV'] | Dense prediction tasks are of significant importance in computer vision, aiming
to learn a pixel-wise annotated label for an input image. Despite advances in
this field, existing methods primarily focus on idealized conditions, with
limited generalization to real-world scenarios and facing the challenging
scarcity of real-world data. To systematically study this problem, we first
introduce DenseWorld, a benchmark spanning a broad set of 25 dense prediction
tasks that correspond to urgent real-world applications, featuring unified
evaluation across tasks. Then, we propose DenseDiT, which maximally exploits
generative models' visual priors to perform diverse real-world dense prediction
tasks through a unified strategy. DenseDiT combines a parameter-reuse mechanism
and two lightweight branches that adaptively integrate multi-scale context,
working with less than 0.1% additional parameters. Evaluations on DenseWorld
reveal significant performance drops in existing general and specialized
baselines, highlighting their limited real-world generalization. In contrast,
DenseDiT achieves superior results using less than 0.01% training data of
baselines, underscoring its practical value for real-world deployment. Our
data, checkpoints, and code are available at
https://xcltql666.github.io/DenseDiTProj | 2025-06-25T09:40:50Z | null | null | null | null | null | null | null | null | null | null |
2506.20326 | From Codicology to Code: A Comparative Study of Transformer and
YOLO-based Detectors for Layout Analysis in Historical Documents | ['Sergio Torres Aguilar'] | ['cs.CV', 'cs.CL', 'cs.DB'] | Robust Document Layout Analysis (DLA) is critical for the automated
processing and understanding of historical documents with complex page
organizations. This paper benchmarks five state-of-the-art object detection
architectures on three annotated datasets representing a spectrum of
codicological complexity: The e-NDP, a corpus of Parisian medieval registers
(1326-1504); CATMuS, a diverse multiclass dataset derived from various medieval
and modern sources (ca.12th-17th centuries) and HORAE, a corpus of decorated
books of hours (ca.13th-16th centuries). We evaluate two Transformer-based
models (Co-DETR, Grounding DINO) against three YOLO variants (AABB, OBB, and
YOLO-World). Our findings reveal significant performance variations dependent
on model architecture, dataset characteristics, and bounding box
representation. In the e-NDP dataset, Co-DETR achieves state-of-the-art results
(0.752 mAP@.50:.95), closely followed by YOLOv11X-OBB (0.721). Conversely, on
the more complex CATMuS and HORAE datasets, the CNN-based YOLOv11x-OBB
significantly outperforms all other models (0.564 and 0.568, respectively).
This study unequivocally demonstrates that using Oriented Bounding Boxes (OBB)
is not a minor refinement but a fundamental requirement for accurately modeling
the non-Cartesian nature of historical manuscripts. We conclude that a key
trade-off exists between the global context awareness of Transformers, ideal
for structured layouts, and the superior generalization of CNN-OBB models for
visually diverse and complex documents. | 2025-06-25T11:14:04Z | null | null | null | null | null | null | null | null | null | null |
2506.20480 | GPTailor: Large Language Model Pruning Through Layer Cutting and
Stitching | ['Guinan Su', 'Li Shen', 'Lu Yin', 'Shiwei Liu', 'Yanwu Yang', 'Jonas Geiping'] | ['cs.CL'] | Large language models (LLMs) have shown remarkable capabilities in language
understanding and generation. However, such impressive capability typically
comes with a substantial model size, which presents significant challenges in
deployment and inference. While structured pruning of model parameters offers a
promising way to reduce computational costs at deployment time, current methods
primarily focus on single model pruning. In this work, we develop a novel
strategy to compress models by strategically combining or merging layers from
finetuned model variants, which preserves the original model's abilities by
aggregating capabilities accentuated in different finetunes. We pose the
optimal tailoring of these LLMs as a zero-order optimization problem, adopting
a search space that supports three different operations: (1) Layer removal, (2)
Layer selection from different candidate models, and (3) Layer merging. Our
experiments demonstrate that this approach leads to competitive model pruning,
for example, for the Llama2-13B model families, our compressed models maintain
approximately 97.3\% of the original performance while removing $\sim25\%$ of
parameters, significantly outperforming previous state-of-the-art methods. The
code is available at https://github.com/Guinan-Su/auto-merge-llm. | 2025-06-25T14:24:59Z | null | null | null | null | null | null | null | null | null | null |
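The three search-space operations -- layer removal, layer selection from a candidate model, and layer merging -- describe how one candidate compressed model is assembled from finetuned variants. A toy sketch with scalars standing in for layer weight tensors (the op encoding here is invented for illustration, not GPTailor's actual representation):

```python
def build_candidate(variants, plan):
    """Assemble a compressed layer stack from finetuned variants.
    plan holds one op per layer position:
      ("remove",)            -> drop this layer
      ("select", model_idx)  -> take the layer from one variant
      ("merge", i, j)        -> average the layer weights of two variants
    Layers are toy scalars standing in for weight tensors."""
    stack = []
    for pos, op in enumerate(plan):
        if op[0] == "remove":
            continue
        if op[0] == "select":
            stack.append(variants[op[1]][pos])
        elif op[0] == "merge":
            stack.append((variants[op[1]][pos] + variants[op[2]][pos]) / 2)
    return stack

variants = [[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]]  # two finetunes, 3 layers each
plan = [("select", 0), ("remove",), ("merge", 0, 1)]
print(build_candidate(variants, plan))  # [1.0, 16.5] -- 3 layers pruned to 2
```

A zero-order optimizer would score many such plans on a validation set and keep the best-performing candidate.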
2506.20512 | OctoThinker: Mid-training Incentivizes Reinforcement Learning Scaling | ['Zengzhi Wang', 'Fan Zhou', 'Xuefeng Li', 'Pengfei Liu'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Different base language model families, such as Llama and Qwen, exhibit
divergent behaviors during post-training with reinforcement learning (RL),
especially on reasoning-intensive tasks. What makes a base language model
suitable for reinforcement learning? Gaining deeper insight into this question
is essential for developing RL-scalable foundation models of the next
generation. In this work, we investigate how mid-training strategies shape RL
dynamics, focusing on two representative model families: Qwen and Llama. Our
study reveals that (1) high-quality mathematical corpora, such as
MegaMath-Web-Pro, significantly improve both base model and RL performance,
while existing alternatives (e.g., FineMath-4plus) fail to do so; (2) further
adding QA-style data, particularly long chain-of-thought (CoT) reasoning
examples, enhances RL outcomes, and instruction data further unlocks this
effect; (3) while long-CoT improves reasoning depth, it can also induce
verbosity in model responses and instability in RL training, underscoring the
importance of data formatting; (4) scaling mid-training consistently leads to
stronger downstream RL performance. Building on these insights, we introduce a
two-stage mid-training strategy, Stable-then-Decay, in which base models are
first trained on 200B tokens with a constant learning rate, followed by 20B
tokens across three CoT-focused branches with learning rate decay. This yields
OctoThinker, a family of models demonstrating strong RL compatibility and
closing the performance gap with more RL-friendly model families, i.e., Qwen.
We hope our work will help shape pre-training strategies for foundation models
in the RL era. To support further research, we release our open-source models
along with a curated math reasoning-intensive corpus of over 70 billion tokens
(i.e., MegaMath-Web-Pro-Max). | 2025-06-25T14:58:13Z | 26 pages; The first three authors contribute to this work equally | null | null | null | null | null | null | null | null | null |
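The Stable-then-Decay schedule is a constant learning rate through the long stable phase followed by decay over the short final phase. A sketch using linear decay and illustrative numbers (the abstract states the 200B/20B token split but not these LR values, and the paper's decay shape may differ):

```python
def stable_then_decay_lr(tokens_seen, peak_lr=3e-4, stable_tokens=200e9,
                         decay_tokens=20e9, min_lr=3e-5):
    """Two-stage mid-training schedule: constant LR for the stable phase,
    then linear decay to min_lr over the decay phase. LR values are
    illustrative, not the paper's hyperparameters."""
    if tokens_seen <= stable_tokens:
        return peak_lr
    frac = min(1.0, (tokens_seen - stable_tokens) / decay_tokens)
    return peak_lr + frac * (min_lr - peak_lr)

print(stable_then_decay_lr(100e9))   # constant phase: 3e-4
print(stable_then_decay_lr(210e9))   # halfway through decay: ~1.65e-4
```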
2506.20639 | DiffuCoder: Understanding and Improving Masked Diffusion Models for Code
Generation | ['Shansan Gong', 'Ruixiang Zhang', 'Huangjie Zheng', 'Jiatao Gu', 'Navdeep Jaitly', 'Lingpeng Kong', 'Yizhe Zhang'] | ['cs.CL'] | Diffusion large language models (dLLMs) are compelling alternatives to
autoregressive (AR) models because their denoising models operate over the
entire sequence. The global planning and iterative refinement features of dLLMs
are particularly useful for code generation. However, current training and
inference mechanisms for dLLMs in coding are still under-explored. To demystify
the decoding behavior of dLLMs and unlock their potential for coding, we
systematically investigate their denoising processes and reinforcement learning
(RL) methods. We train a 7B dLLM, \textbf{DiffuCoder}, on 130B tokens of code.
Using this model as a testbed, we analyze its decoding behavior, revealing how
it differs from that of AR models: (1) dLLMs can decide how causal their
generation should be without relying on semi-AR decoding, and (2) increasing
the sampling temperature diversifies not only token choices but also their
generation order. This diversity creates a rich search space for RL rollouts.
For RL training, to reduce the variance of token log-likelihood estimates and
maintain training efficiency, we propose \textbf{coupled-GRPO}, a novel
sampling scheme that constructs complementary mask noise for completions used
in training. In our experiments, coupled-GRPO significantly improves
DiffuCoder's performance on code generation benchmarks (+4.4\% on EvalPlus) and
reduces reliance on AR bias during decoding. Our work provides deeper insight
into the machinery of dLLM generation and offers an effective, diffusion-native
RL training framework. https://github.com/apple/ml-diffucoder. | 2025-06-25T17:35:47Z | minor update | null | null | null | null | null | null | null | null | null |
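The coupled-GRPO entry above hinges on sampling complementary mask noise so that every completion token is noised exactly once across a pair of forward passes, which lowers the variance of the token log-likelihood estimates. A minimal sketch of that pairing, assuming list-based masks and a simple top-fraction sampling rule (the released implementation may differ):

```python
import random

def coupled_masks(seq_len, t, rng=None):
    """Sample a complementary mask pair for one completion: the first mask
    noises a fraction t of positions and the second noises exactly the rest,
    so every token is covered once across the pair. Sketch of the
    'complementary mask noise' idea; names and the pairing rule are
    assumptions, not the paper's code."""
    rng = rng or random.Random(0)
    k = max(1, round(t * seq_len))
    masked = set(rng.sample(range(seq_len), k))
    m1 = [i in masked for i in range(seq_len)]
    m2 = [not b for b in m1]  # exact complement: disjoint and covering
    return m1, m2
```

The complement acts like antithetic sampling: the two estimates cover disjoint token sets, so averaging them never double-counts or drops a position.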
2506.20741 | OTSurv: A Novel Multiple Instance Learning Framework for Survival
Prediction with Heterogeneity-aware Optimal Transport | ['Qin Ren', 'Yifan Wang', 'Ruogu Fang', 'Haibin Ling', 'Chenyu You'] | ['cs.CV'] | Survival prediction using whole slide images (WSIs) can be formulated as a
multiple instance learning (MIL) problem. However, existing MIL methods often
fail to explicitly capture pathological heterogeneity within WSIs, both
globally -- through long-tailed morphological distributions, and locally --
through tile-level prediction uncertainty. Optimal transport (OT) provides a
principled way of modeling such heterogeneity by incorporating marginal
distribution constraints. Building on this insight, we propose OTSurv, a novel
MIL framework from an optimal transport perspective. Specifically, OTSurv
formulates survival predictions as a heterogeneity-aware OT problem with two
constraints: (1) global long-tail constraint that models prior morphological
distributions to avert both mode collapse and excessive uniformity by
regulating transport mass allocation, and (2) local uncertainty-aware
constraint that prioritizes high-confidence patches while suppressing noise by
progressively raising the total transport mass. We then recast the initial OT
problem, augmented by these constraints, into an unbalanced OT formulation that
can be solved with an efficient, hardware-friendly matrix scaling algorithm.
Empirically, OTSurv sets new state-of-the-art results across six popular
benchmarks, achieving an absolute 3.6% improvement in average C-index. In
addition, OTSurv achieves statistical significance in log-rank tests and offers
high interpretability, making it a powerful tool for survival prediction in
digital pathology. Our codes are available at
https://github.com/Y-Research-SBU/OTSurv. | 2025-06-25T18:09:42Z | Accepted by International Conference on Medical Image Computing and
Computer-Assisted Intervention (MICCAI 2025) | null | null | null | null | null | null | null | null | null |
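The OTSurv entry above solves its constrained OT problem with an "efficient, hardware-friendly matrix scaling algorithm." A generic sketch of that family of solvers (balanced entropic OT via Sinkhorn iterations; the paper's unbalanced variant relaxes the hard marginal constraints shown here):

```python
import math

def sinkhorn_plan(cost, r, c, eps=0.05, iters=500):
    """Entropic optimal transport via alternating matrix scaling (Sinkhorn).
    cost: n x m nested lists; r, c: source/target marginals (each sums to 1).
    Illustrative only -- not OTSurv's unbalanced formulation."""
    n, m = len(r), len(c)
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        # alternately rescale rows and columns to match the marginals
        u = [r[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [c[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
```

Each iteration is just two matrix-vector rescalings, which is why the approach maps well onto GPU hardware.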
2506.20923 | KaLM-Embedding-V2: Superior Training Techniques and Data Inspire A
Versatile Embedding Model | ['Xinping Zhao', 'Xinshuo Hu', 'Zifei Shan', 'Shouzheng Huang', 'Yao Zhou', 'Zetian Sun', 'Zhenyu Liu', 'Dongfang Li', 'Xinyuan Wei', 'Qian Chen', 'Youcheng Pan', 'Yang Xiang', 'Meishan Zhang', 'Haofen Wang', 'Jun Yu', 'Baotian Hu', 'Min Zhang'] | ['cs.CL'] | In this paper, we propose KaLM-Embedding-V2, a versatile and compact
embedding model, which achieves impressive performance in general-purpose text
embedding tasks by leveraging superior training techniques and data. Our key
innovations include: (1) To better align the architecture with representation
learning, we remove the causal attention mask and adopt a fully bidirectional
transformer with simple yet effective mean-pooling to produce fixed-length
embeddings; (2) We employ a multi-stage training pipeline: (i) pre-training on
large-scale weakly supervised open-source corpora; (ii) fine-tuning on
high-quality retrieval and non-retrieval datasets; and (iii) model-soup
parameter averaging for robust generalization. Besides, we introduce a
focal-style reweighting mechanism that concentrates learning on difficult
samples and an online hard-negative mixing strategy to continuously enrich hard
negatives without expensive offline mining; (3) We collect over 20 categories
of data for pre-training and 100 categories of data for fine-tuning, to boost
both the performance and generalization of the embedding model. Extensive
evaluations on the Massive Text Embedding Benchmark (MTEB) Chinese and English
show that our model significantly outperforms others of comparable size, and
competes with 3x, 14x, 18x, and 26x larger embedding models, setting a new
standard for a versatile and compact embedding model with less than 1B
parameters. | 2025-06-26T01:09:44Z | Technical Report; 26 pages, 12 tables, 1 figure. arXiv admin note:
substantial text overlap with arXiv:2501.01028 | null | null | null | null | null | null | null | null | null |
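The KaLM-Embedding-V2 entry above pairs a fully bidirectional encoder with "simple yet effective mean-pooling" to produce fixed-length embeddings. A sketch of that pooling step, with plain lists standing in for tensors (variable names are mine):

```python
def mean_pool(hidden_states, attention_mask):
    """Mask-aware mean pooling: average the hidden states of non-padding
    tokens into one fixed-length embedding vector."""
    dim = len(hidden_states[0])
    total, count = [0.0] * dim, 0
    for state, keep in zip(hidden_states, attention_mask):
        if keep:  # skip padding positions
            count += 1
            for d in range(dim):
                total[d] += state[d]
    return [x / max(count, 1) for x in total]
```

Masking before averaging matters: padding states would otherwise drag every embedding toward the padding representation.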
2506.21090 | Post-training for Deepfake Speech Detection | ['Wanying Ge', 'Xin Wang', 'Xuechen Liu', 'Junichi Yamagishi'] | ['eess.AS'] | We introduce a post-training approach that adapts self-supervised learning
(SSL) models for deepfake speech detection by bridging the gap between general
pre-training and domain-specific fine-tuning. We present AntiDeepfake models, a
series of post-trained models developed using a large-scale multilingual speech
dataset containing over 56,000 hours of genuine speech and 18,000 hours of
speech with various artifacts in over one hundred languages. Experimental
results show that the post-trained models already exhibit strong robustness and
generalization to unseen deepfake speech. When they are further fine-tuned on
the Deepfake-Eval-2024 dataset, these models consistently surpass existing
state-of-the-art detectors that do not leverage post-training. Model
checkpoints and source code are available online. | 2025-06-26T08:34:19Z | null | null | null | null | null | null | null | null | null | null |
2506.21103 | Learning to Skip the Middle Layers of Transformers | ['Tim Lawson', 'Laurence Aitchison'] | ['cs.LG', 'cs.CL'] | Conditional computation is a popular strategy to make Transformers more
efficient. Existing methods often target individual modules (e.g.,
mixture-of-experts layers) or skip layers independently of one another.
However, interpretability research has demonstrated that the middle layers of
Transformers exhibit greater redundancy, and that early layers aggregate
information into token positions. Guided by these insights, we propose a novel
architecture that dynamically skips a variable number of layers from the middle
outward. In particular, a learned gating mechanism determines whether to bypass
a symmetric span of central blocks based on the input, and a gated attention
mechanism prevents subsequent tokens from attending to skipped token positions.
Residual norms are controlled with a 'sandwich' or 'perilayernorm' scheme and
gate sparsity with an adaptive regularization loss. We had aimed to reduce
compute requirements for 'simpler' tokens and potentially foster an emergent
multi-level representational hierarchy but, at the scales investigated, our
approach does not achieve improvements in the trade-off between validation
cross-entropy and estimated FLOPs compared to dense baselines with fewer
layers. We release our code at https://github.com/tim-lawson/skip-middle. | 2025-06-26T09:01:19Z | 11 pages, 2 figures | null | null | null | null | null | null | null | null | null |
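The skip-middle entry above gates a symmetric span of central blocks from the middle outward. A sketch of the control flow with a hard 0/1 gate (the paper learns a soft, input-dependent gate and also masks attention over skipped positions, which this omits):

```python
def forward_with_middle_skip(x, blocks, gate, span):
    """Run a block stack, bypassing a symmetric span of central blocks
    whenever the gate fires. `gate(x) -> bool` stands in for the learned
    gating mechanism; this sketches the routing, not the training."""
    n = len(blocks)
    lo = (n - span) // 2  # first central block eligible for skipping
    hi = lo + span        # one past the last central block
    for i, block in enumerate(blocks):
        if lo <= i < hi and gate(x):
            continue      # bypass this central block for 'simpler' inputs
        x = block(x)
    return x
```

Centering the span reflects the interpretability finding cited above: middle layers are the most redundant, while early layers do the aggregation.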
2506.21277 | HumanOmniV2: From Understanding to Omni-Modal Reasoning with Context | ['Qize Yang', 'Shimin Yao', 'Weixuan Chen', 'Shenghao Fu', 'Detao Bai', 'Jiaxing Zhao', 'Boyuan Sun', 'Bowen Yin', 'Xihan Wei', 'Jingren Zhou'] | ['cs.CV', 'cs.CL'] | With the rapid evolution of multimodal large language models, the capacity to
deeply understand and interpret human intentions has emerged as a critical
capability, which demands detailed and thoughtful reasoning. In recent studies,
Reinforcement Learning (RL) has demonstrated potential in enhancing the
reasoning capabilities of Large Language Models (LLMs). Nonetheless, the
challenges associated with adapting RL to multimodal data and formats remain
largely unaddressed. In this paper, we identify two issues in existing
multimodal reasoning models: insufficient global context understanding and
shortcut problems. Insufficient context understanding can happen when a model
misinterprets multimodal context, resulting in incorrect answers. The shortcut
problem occurs when the model overlooks crucial clues in multimodal inputs,
directly addressing the query without considering the multimodal information.
To tackle these issues, we emphasize the necessity for the model to reason with
a clear understanding of the global context within multimodal inputs. This
global context understanding can effectively prevent the model from overlooking
key multimodal cues and ensure a thorough reasoning process. To ensure the
accurate interpretation of multimodal context information, we implement a
context reward judged by a large language model, alongside format and accuracy
rewards. Additionally, to improve complex reasoning capability, we employ the
LLM to assess the logical reward, determining whether the reasoning process
successfully integrates multimodal information with logical methods. We also
introduce a reasoning omni-modal benchmark, IntentBench, aimed at evaluating
models in understanding complex human intentions and emotions. Our proposed
method demonstrates advanced performance across multiple omni-modal benchmarks
compared to other open-source omni-modal models. | 2025-06-26T14:01:03Z | null | null | null | null | null | null | null | null | null | null |
2506.21356 | ShotBench: Expert-Level Cinematic Understanding in Vision-Language
Models | ['Hongbo Liu', 'Jingwen He', 'Yi Jin', 'Dian Zheng', 'Yuhao Dong', 'Fan Zhang', 'Ziqi Huang', 'Yinan He', 'Yangguang Li', 'Weichao Chen', 'Yu Qiao', 'Wanli Ouyang', 'Shengjie Zhao', 'Ziwei Liu'] | ['cs.CV'] | Cinematography, the fundamental visual language of film, is essential for
conveying narrative, emotion, and aesthetic quality. While recent
Vision-Language Models (VLMs) demonstrate strong general visual understanding,
their proficiency in comprehending the nuanced cinematic grammar embedded
within individual shots remains largely unexplored and lacks robust evaluation.
This critical gap limits both fine-grained visual comprehension and the
precision of AI-assisted video generation. To address this, we introduce
ShotBench, a comprehensive benchmark specifically designed for cinematic
language understanding. It features over 3.5k expert-annotated QA pairs from
images and video clips, meticulously curated from over 200 acclaimed
(predominantly Oscar-nominated) films and spanning eight key cinematography
dimensions. Our evaluation of 24 leading VLMs on ShotBench reveals their
substantial limitations: even the top-performing model achieves less than 60%
average accuracy, particularly struggling with fine-grained visual cues and
complex spatial reasoning. To catalyze advancement in this domain, we construct
ShotQA, a large-scale multimodal dataset comprising approximately 70k cinematic
QA pairs. Leveraging ShotQA, we develop ShotVL through supervised fine-tuning
and Group Relative Policy Optimization. ShotVL significantly outperforms all
existing open-source and proprietary models on ShotBench, establishing new
state-of-the-art performance. We open-source our models, data, and code to
foster rapid progress in this crucial area of AI-driven cinematic understanding
and generation. | 2025-06-26T15:09:21Z | null | null | null | null | null | null | null | null | null | null |
2506.21416 | XVerse: Consistent Multi-Subject Control of Identity and Semantic
Attributes via DiT Modulation | ['Bowen Chen', 'Mengyi Zhao', 'Haomiao Sun', 'Li Chen', 'Xu Wang', 'Kang Du', 'Xinglong Wu'] | ['cs.CV'] | Achieving fine-grained control over subject identity and semantic attributes
(pose, style, lighting) in text-to-image generation, particularly for multiple
subjects, often undermines the editability and coherence of Diffusion
Transformers (DiTs). Many approaches introduce artifacts or suffer from
attribute entanglement. To overcome these challenges, we propose a novel
multi-subject controlled generation model XVerse. By transforming reference
images into offsets for token-specific text-stream modulation, XVerse allows
for precise and independent control for specific subject without disrupting
image latents or features. Consequently, XVerse offers high-fidelity, editable
multi-subject image synthesis with robust control over individual subject
characteristics and semantic attributes. This advancement significantly
improves personalized and complex scene generation capabilities. | 2025-06-26T16:04:16Z | Project Page: https://bytedance.github.io/XVerse Github Link:
https://github.com/bytedance/XVerse | null | null | null | null | null | null | null | null | null |
2506.21448 | ThinkSound: Chain-of-Thought Reasoning in Multimodal Large Language
Models for Audio Generation and Editing | ['Huadai Liu', 'Jialei Wang', 'Kaicheng Luo', 'Wen Wang', 'Qian Chen', 'Zhou Zhao', 'Wei Xue'] | ['eess.AS', 'cs.CV', 'cs.SD'] | While end-to-end video-to-audio generation has greatly improved, producing
high-fidelity audio that authentically captures the nuances of visual content
remains challenging. As with professionals in the creative industries, such
generation requires sophisticated reasoning about elements such as visual
dynamics, acoustic environments, and temporal relationships. We present
ThinkSound, a novel framework that leverages Chain-of-Thought (CoT) reasoning
to enable stepwise, interactive audio generation and editing for videos. Our
approach decomposes the process into three complementary stages: foundational
foley generation that creates semantically coherent soundscapes, interactive
object-centric refinement through precise user interactions, and targeted
editing guided by natural language instructions. At each stage, a multimodal
large language model generates contextually aligned CoT reasoning that guides a
unified audio foundation model. Furthermore, we introduce AudioCoT, a
comprehensive dataset with structured reasoning annotations that establishes
connections between visual content, textual descriptions, and sound synthesis.
Experiments demonstrate that ThinkSound achieves state-of-the-art performance
in video-to-audio generation across both audio metrics and CoT metrics and
excels on the out-of-distribution Movie Gen Audio benchmark. The demo page is
available at https://ThinkSound-Project.github.io. | 2025-06-26T16:32:06Z | null | null | null | null | null | null | null | null | null | null |
2506.21458 | Spatial Mental Modeling from Limited Views | ['Baiqiao Yin', 'Qineng Wang', 'Pingyue Zhang', 'Jianshu Zhang', 'Kangrui Wang', 'Zihan Wang', 'Jieyu Zhang', 'Keshigeyan Chandrasegaran', 'Han Liu', 'Ranjay Krishna', 'Saining Xie', 'Manling Li', 'Jiajun Wu', 'Li Fei-Fei'] | ['cs.AI', 'cs.CL', 'cs.CV'] | Can Vision Language Models (VLMs) imagine the full scene from just a few
views, like humans do? Humans form spatial mental models, internal
representations of unseen space, to reason about layout, perspective, and
motion. Our new MindCube benchmark with 21,154 questions across 3,268 images
exposes this critical gap, where existing VLMs exhibit near-random performance.
Using MindCube, we systematically evaluate how well VLMs build robust spatial
mental models through representing positions (cognitive mapping), orientations
(perspective-taking), and dynamics (mental simulation for "what-if" movements).
We then explore three approaches to help VLMs approximate spatial mental
models, including unseen intermediate views, natural language reasoning chains,
and cognitive maps. The largest improvement comes from a synergistic
approach, "map-then-reason", that jointly trains the model to first generate a
cognitive map and then reason upon it. By training models to reason over these
internal maps, we boosted accuracy from 37.8% to 60.8% (+23.0%). Adding
reinforcement learning pushed performance even further to 70.7% (+32.9%). Our
key insight is that such scaffolding of spatial mental models, actively
constructing and utilizing internal structured spatial representations with
flexible reasoning processes, significantly improves understanding of
unobservable space. | 2025-06-26T16:38:19Z | Preprint version | null | null | null | null | null | null | null | null | null |
2506.21476 | Global and Local Entailment Learning for Natural World Imagery | ['Srikumar Sastry', 'Aayush Dhakal', 'Eric Xing', 'Subash Khanal', 'Nathan Jacobs'] | ['cs.CV'] | Learning the hierarchical structure of data in vision-language models is a
significant challenge. Previous works have attempted to address this challenge
by employing entailment learning. However, these approaches fail to model the
transitive nature of entailment explicitly, which establishes the relationship
between order and semantics within a representation space. In this work, we
introduce Radial Cross-Modal Embeddings (RCME), a framework that enables the
explicit modeling of transitivity-enforced entailment. Our proposed framework
optimizes for the partial order of concepts within vision-language models. By
leveraging our framework, we develop a hierarchical vision-language foundation
model capable of representing the hierarchy in the Tree of Life. Our
experiments on hierarchical species classification and hierarchical retrieval
tasks demonstrate the enhanced performance of our models compared to the
existing state-of-the-art models. Our code and models are open-sourced at
https://vishu26.github.io/RCME/index.html. | 2025-06-26T17:05:06Z | Accepted at ICCV 2025 | null | null | null | null | null | null | null | null | null |
2506.21539 | WorldVLA: Towards Autoregressive Action World Model | ['Jun Cen', 'Chaohui Yu', 'Hangjie Yuan', 'Yuming Jiang', 'Siteng Huang', 'Jiayan Guo', 'Xin Li', 'Yibing Song', 'Hao Luo', 'Fan Wang', 'Deli Zhao', 'Hao Chen'] | ['cs.RO', 'cs.AI'] | We present WorldVLA, an autoregressive action world model that unifies action
and image understanding and generation. Our WorldVLA integrates a
Vision-Language-Action (VLA) model and a world model in a single framework. The
world model predicts future images by leveraging both action and image
understanding, with the purpose of learning the underlying physics of the
environment to improve action generation. Meanwhile, the action model generates
the subsequent actions based on image observations, aiding visual
understanding and, in turn, the world model's visual generation. We
demonstrate that WorldVLA outperforms standalone action and world models,
highlighting the mutual enhancement between the world model and the action
model. In addition, we find that the performance of the action model
deteriorates when generating sequences of actions in an autoregressive manner.
This phenomenon can be attributed to the model's limited generalization
capability for action prediction, leading to the propagation of errors from
earlier actions to subsequent ones. To address this issue, we propose an
attention mask strategy that selectively masks prior actions during the
generation of the current action, which shows significant performance
improvement in the action chunk generation task. | 2025-06-26T17:55:40Z | Code: https://github.com/alibaba-damo-academy/WorldVLA | null | null | null | null | null | null | null | null | null |
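The WorldVLA entry above fixes error propagation in action chunks by masking prior actions while generating the current one. A sketch of such an attend/ignore matrix, with one token per action for brevity (that simplification, and the exact mask layout, are my assumptions):

```python
def action_chunk_mask(num_obs, num_actions):
    """Boolean attention mask for generating an action chunk: causal
    attention over all tokens, except that each action query is blocked
    from attending to earlier action tokens, so errors in early actions
    cannot propagate forward."""
    total = num_obs + num_actions
    allow = [[k <= q for k in range(total)] for q in range(total)]  # causal base
    for q in range(num_obs, total):      # queries that are actions
        for k in range(num_obs, q):      # keys that are earlier actions
            allow[q][k] = False
    return allow
```

Every action still conditions on the full observation prefix, so the policy loses context only about its own previous (possibly wrong) predictions.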
2506.21594 | Gazal-R1: Achieving State-of-the-Art Medical Reasoning with
Parameter-Efficient Two-Stage Training | ['Ahmed M. Adly', 'Mostafa Samy', 'Amr Fawzy'] | ['cs.CL'] | We present Gazal-R1, a 32-billion-parameter language model that achieves
state-of-the-art performance in medical reasoning while providing transparent,
step-by-step explanations for clinical decision-making. Built upon Qwen3 32B,
our model demonstrates that strategic training can enable mid-sized models to
outperform significantly larger counterparts in specialized domains. We
developed a novel two-stage training pipeline: first, supervised fine-tuning on
a carefully curated dataset of 107,033 synthetic medical reasoning examples
that teaches structured clinical thinking, enhanced by advanced
parameter-efficient techniques including Weight-Decomposed Low-Rank Adaptation
(DoRA) and Rank-Stabilized LoRA (rsLoRA); second, reinforcement learning using
Group Relative Policy Optimization (GRPO) with a sophisticated multi-component
reward system that refines accuracy, format adherence, and reasoning quality.
Gazal-R1 achieves exceptional performance across medical benchmarks, scoring
87.1% on MedQA, 81.6% on MMLU Pro (Medical), and 79.6% on PubMedQA, surpassing
models up to 12x larger. Beyond its strong empirical results, this work
provides detailed insights into the challenges of training reasoning-capable
models in specialized domains, including issues with reward hacking, training
instability, and the fundamental tension between factual recall and detailed
reasoning. Our methodology offers a reproducible framework for developing
high-capability, domain-specific language models that balance performance,
efficiency, and explainability. | 2025-06-18T09:44:21Z | null | null | null | null | null | null | null | null | null | null |
2506.21862 | LLaVA-Scissor: Token Compression with Semantic Connected Components for
Video LLMs | ['Boyuan Sun', 'Jiaxing Zhao', 'Xihan Wei', 'Qibin Hou'] | ['cs.CV', 'cs.AI', 'cs.HC', 'cs.MM'] | In this paper, we present LLaVA-Scissor, a training-free token compression
strategy designed for video multimodal large language models. Previous methods
mostly attempt to compress tokens based on attention scores, but fail to
effectively capture all semantic regions and often lead to token redundancy.
Differently, we propose to leverage the Semantic Connected Components (SCC)
approach that assigns tokens to distinct semantic regions within the token set,
ensuring comprehensive semantic coverage. The outcome is a two-step
spatio-temporal token compression strategy that utilizes SCC in both spatial
and temporal domains. This strategy can effectively compress tokens by
representing the entire video with a set of non-overlapping semantic tokens. We
conduct extensive evaluations of the token compression capabilities of
LLaVA-Scissor across diverse video understanding benchmarks, including video
question answering, long video understanding, and comprehensive multi-choices
benchmarks. Experimental results show that the proposed LLaVA-Scissor
outperforms other token compression methods, achieving superior performance in
various video understanding benchmarks, particularly at low token retention
ratios. Project page: https://github.com/HumanMLLM/LLaVA-Scissor. | 2025-06-27T02:29:58Z | 21 pages, 4 figures, 7 tables | null | null | null | null | null | null | null | null | null |
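The LLaVA-Scissor entry above groups tokens into semantic regions via Semantic Connected Components. A sketch of the component-finding step using union-find over a thresholded similarity graph (linking tokens when their similarity exceeds a threshold is my assumption about the graph construction):

```python
def semantic_components(sim, thr):
    """Union-find over a token-similarity graph: tokens i, j are linked when
    sim[i][j] > thr, and each connected component becomes one semantic
    region that a single representative token can stand in for."""
    n = len(sim)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if sim[i][j] > thr:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

Unlike attention-score ranking, components guarantee coverage: every token belongs to exactly one region, so no semantic area is dropped entirely.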
2506.22760 | Jan-nano Technical Report | ['Alan Dao', 'Dinh Bach Vu'] | ['cs.CL'] | Most language models face a fundamental tradeoff where powerful capabilities
require substantial computational resources. We shatter this constraint with
Jan-nano, a 4B parameter language model that redefines efficiency through
radical specialization: instead of trying to know everything, it masters the
art of finding anything instantly. Fine-tuned from Qwen3-4B using our novel
multi-stage Reinforcement Learning with Verifiable Rewards (RLVR) system that
completely eliminates reliance on supervised fine-tuning (SFT) via next-token prediction,
Jan-nano achieves 83.2% on SimpleQA benchmark with MCP integration while
running on consumer hardware. With 128K context length, Jan-nano proves that
intelligence isn't about scale; it's about strategy. | 2025-06-28T05:44:57Z | null | null | null | null | null | null | null | null | null | null |
2506.22832 | Listener-Rewarded Thinking in VLMs for Image Preferences | ['Alexander Gambashidze', 'Li Pengyi', 'Matvey Skripkin', 'Andrey Galichin', 'Anton Gusarov', 'Konstantin Sobolev', 'Andrey Kuznetsov', 'Ivan Oseledets'] | ['cs.CV', 'cs.AI'] | Training robust and generalizable reward models for human visual preferences
is essential for aligning text-to-image and text-to-video generative models
with human intent. However, current reward models often fail to generalize, and
supervised fine-tuning leads to memorization, demanding complex annotation
pipelines. While reinforcement learning (RL), specifically Group Relative
Policy Optimization (GRPO), improves generalization, we uncover a key failure
mode: a significant drop in reasoning accuracy occurs when a model's reasoning
trace contradicts that of an independent, frozen vision-language model
("listener") evaluating the same output. To address this, we introduce a
listener-augmented GRPO framework. Here, the listener re-evaluates the
reasoner's chain-of-thought to provide a dense, calibrated confidence score,
shaping the RL reward signal. This encourages the reasoner not only to answer
correctly, but to produce explanations that are persuasive to an independent
model. Our listener-shaped reward scheme achieves best accuracy on the
ImageReward benchmark (67.4%), significantly improves out-of-distribution (OOD)
performance on a large-scale human preference dataset (1.2M votes, up to +6%
over naive reasoner), and reduces reasoning contradictions compared to strong
GRPO and SFT baselines. These results demonstrate that listener-based rewards
provide a scalable, data-efficient path to aligning vision-language models with
nuanced human preferences. We will release our reasoning model here:
https://huggingface.co/alexgambashidze/qwen2.5vl_image_preference_reasoner. | 2025-06-28T09:53:17Z | null | null | null | null | null | null | null | null | null | null |
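The listener-rewarded entry above shapes the GRPO reward with a frozen listener's calibrated confidence, then normalizes rewards within each rollout group. A sketch of both pieces; the additive shaping form and the `lam` weight are illustrative assumptions, while the group z-scoring is standard GRPO rather than anything paper-specific:

```python
from statistics import mean, pstdev

def listener_shaped_reward(correct, listener_confidence, lam=0.5):
    """Correctness reward plus a bonus scaled by a frozen listener's
    confidence that the reasoner's chain-of-thought supports its answer."""
    return float(correct) + lam * listener_confidence

def group_relative_advantages(rewards):
    """GRPO-style advantages: z-score each rollout's reward within its
    sampling group (population std; guard against a zero-variance group)."""
    mu = mean(rewards)
    sd = pstdev(rewards) or 1.0
    return [(r - mu) / sd for r in rewards]
```

The shaping term rewards answers whose explanations persuade an independent model, not just answers that happen to be right.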
2506.22919 | Hecto: Modular Sparse Experts for Adaptive and Interpretable Reasoning | ['Sanskar Pandey', 'Ruhaan Chopra', 'Saad Murtaza Bhat', 'Ark Abhyudaya'] | ['cs.AI'] | Mixture-of-Experts (MoE) models enable conditional computation by routing
inputs to specialized experts, but these experts rely on identical inductive
biases, thus limiting representational diversity. This static computation
pathway is inefficient for inputs that require different types of reasoning and
limits specialization and interpretability. We propose Hecto, a lightweight MoE
architecture that leverages architectural heterogeneity by combining a GRU
expert for temporal reasoning and an FFNN expert for static abstraction under a
sparse Top-1 gating mechanism. Evaluated on three reasoning benchmarks (AG
News, SST-2, HotpotQA) and a regression task (STS-B), Hecto matches or closely
trails homogeneous baselines in performance despite receiving isolated input
representations, while achieving clear expert specialization, with each expert
aligning to distinct reasoning types (temporal vs static). At larger batch
sizes, Hecto exhibits improved performance, benefiting from relaxed
computational constraints that allow its heterogeneous architecture to optimize
more effectively. Ablation results isolate architectural diversity as the
source of Hecto's stability and interpretability across diverse reasoning
tasks. Overall, Hecto establishes itself as a new benchmark for conditional
computation, offering a principled framework for specialized reasoning in
low-resource regimes, with its strength derived from principled
specialization. | 2025-06-28T15:03:43Z | null | null | null | null | null | null | null | null | null | null |
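The Hecto entry above routes each input to exactly one of two heterogeneous experts via sparse Top-1 gating. A sketch of the dispatch mechanism, with plain callables standing in for the GRU and FFNN experts (this shows the routing only, not the learned gate):

```python
def top1_route(x, experts, gate_scores):
    """Sparse Top-1 gating: dispatch the input to the single expert with
    the highest gate score, so only one expert's compute is spent per
    example. Returns (expert_index, expert_output)."""
    i = max(range(len(experts)), key=lambda j: gate_scores[j])
    return i, experts[i](x)
```

With heterogeneous experts, the chosen index itself is interpretable: it reveals whether the gate judged the input to need temporal or static reasoning.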
2506.22973 | Confident Splatting: Confidence-Based Compression of 3D Gaussian
Splatting via Learnable Beta Distributions | ['AmirHossein Naghi Razlighi', 'Elaheh Badali Golezani', 'Shohreh Kasaei'] | ['cs.GR', 'cs.CV'] | 3D Gaussian Splatting enables high-quality real-time rendering but often
produces millions of splats, resulting in excessive storage and computational
overhead. We propose a novel lossy compression method based on learnable
confidence scores modeled as Beta distributions. Each splat's confidence is
optimized through reconstruction-aware losses, enabling pruning of
low-confidence splats while preserving visual fidelity. The proposed approach
is architecture-agnostic and can be applied to any Gaussian Splatting variant.
In addition, the average confidence values serve as a new metric to assess the
quality of the scene. Extensive experiments demonstrate favorable trade-offs
between compression and fidelity compared to prior work. Our code and data are
publicly available at
https://github.com/amirhossein-razlighi/Confident-Splatting | 2025-06-28T18:11:30Z | null | null | null | null | null | null | null | null | null | null |
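The Confident Splatting entry above scores each splat with a learnable Beta-distributed confidence, prunes low-confidence splats, and uses the average confidence as a scene-quality metric. A sketch of the pruning step, where the Beta mean alpha / (alpha + beta) serves as the score; top-k selection by a keep ratio is my stand-in for the paper's reconstruction-aware rule:

```python
def prune_by_confidence(alphas, betas, keep_ratio):
    """Rank splats by the mean of their learned Beta(alpha, beta)
    confidence and keep the top fraction. Returns the kept indices
    (sorted) and the scene-level average confidence."""
    conf = [a / (a + b) for a, b in zip(alphas, betas)]
    order = sorted(range(len(conf)), key=conf.__getitem__, reverse=True)
    k = max(1, int(keep_ratio * len(conf)))
    return sorted(order[:k]), sum(conf) / len(conf)
```

Because the Beta mean lives in (0, 1), the same score works both as a pruning criterion and as a comparable per-scene quality number.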
2506.23009 | MusiXQA: Advancing Visual Music Understanding in Multimodal Large
Language Models | ['Jian Chen', 'Wenye Ma', 'Penghang Liu', 'Wei Wang', 'Tengwei Song', 'Ming Li', 'Chenguang Wang', 'Ruiyi Zhang', 'Changyou Chen'] | ['cs.CV'] | Multimodal Large Language Models (MLLMs) have achieved remarkable visual
reasoning abilities in natural images, text-rich documents, and graphic
designs. However, their ability to interpret music sheets remains
underexplored. To bridge this gap, we introduce MusiXQA, the first
comprehensive dataset for evaluating and advancing MLLMs in music sheet
understanding. MusiXQA features high-quality synthetic music sheets generated
via MusiXTeX, with structured annotations covering note pitch and duration,
chords, clefs, key/time signatures, and text, enabling diverse visual QA tasks.
Through extensive evaluations, we reveal significant limitations of current
state-of-the-art MLLMs in this domain. Beyond benchmarking, we developed
Phi-3-MusiX, an MLLM fine-tuned on our dataset, achieving significant
performance gains over GPT-based methods. The proposed dataset and model
establish a foundation for future advances in MLLMs for music sheet
understanding. Code, data, and model will be released upon acceptance. | 2025-06-28T20:46:47Z | null | null | null | null | null | null | null | null | null | null |
2506.23044 | Ovis-U1 Technical Report | ['Guo-Hua Wang', 'Shanshan Zhao', 'Xinjie Zhang', 'Liangfu Cao', 'Pengxin Zhan', 'Lunhao Duan', 'Shiyin Lu', 'Minghao Fu', 'Xiaohao Chen', 'Jianshan Zhao', 'Yang Li', 'Qing-Guo Chen'] | ['cs.CV', 'cs.AI'] | In this report, we introduce Ovis-U1, a 3-billion-parameter unified model
that integrates multimodal understanding, text-to-image generation, and image
editing capabilities. Building on the foundation of the Ovis series, Ovis-U1
incorporates a diffusion-based visual decoder paired with a bidirectional token
refiner, enabling image generation tasks comparable to leading models like
GPT-4o. Unlike some previous models that use a frozen MLLM for generation
tasks, Ovis-U1 utilizes a new unified training approach starting from a
language model. Compared to training solely on understanding or generation
tasks, unified training yields better performance, demonstrating the
enhancement achieved by integrating these two tasks. Ovis-U1 achieves a score
of 69.6 on the OpenCompass Multi-modal Academic Benchmark, surpassing recent
state-of-the-art models such as Ristretto-3B and SAIL-VL-1.5-2B. In
text-to-image generation, it excels with scores of 83.72 and 0.89 on the
DPG-Bench and GenEval benchmarks, respectively. For image editing, it achieves
4.00 and 6.42 on the ImgEdit-Bench and GEdit-Bench-EN, respectively. As the
initial version of the Ovis unified model series, Ovis-U1 pushes the boundaries
of multimodal understanding, generation, and editing. | 2025-06-29T00:40:17Z | A unified model for multimodal understanding, text-to-image
generation, and image editing. GitHub: https://github.com/AIDC-AI/Ovis-U1 | null | null | null | null | null | null | null | null | null |
2506.23077 | Dynamic Contrastive Learning for Hierarchical Retrieval: A Case Study of
Distance-Aware Cross-View Geo-Localization | ['Suofei Zhang', 'Xinxin Wang', 'Xiaofu Wu', 'Quan Zhou', 'Haifeng Hu'] | ['cs.CV'] | Existing deep learning-based cross-view geo-localization methods primarily
focus on improving the accuracy of cross-domain image matching, rather than
enabling models to comprehensively capture contextual information around the
target and minimize the cost of localization errors. To support systematic
research into this Distance-Aware Cross-View Geo-Localization (DACVGL) problem,
we construct Distance-Aware Campus (DA-Campus), the first benchmark that pairs
multi-view imagery with precise distance annotations across three spatial
resolutions. Based on DA-Campus, we formulate DACVGL as a hierarchical
retrieval problem across different domains. Our study further reveals that, due
to the inherent complexity of spatial relationships among buildings, this
problem can only be addressed via a contrastive learning paradigm, rather than
conventional metric learning. To tackle this challenge, we propose Dynamic
Contrastive Learning (DyCL), a novel framework that progressively aligns
feature representations according to hierarchical spatial margins. Extensive
experiments demonstrate that DyCL is highly complementary to existing
multi-scale metric learning methods and yields substantial improvements in both
hierarchical retrieval performance and overall cross-view geo-localization
accuracy. Our code and benchmark are publicly available at
https://github.com/anocodetest1/DyCL. | 2025-06-29T03:57:01Z | null | null | null | null | null | null | null | null | null | null |
2506.23115 | MoCa: Modality-aware Continual Pre-training Makes Better Bidirectional
Multimodal Embeddings | ['Haonan Chen', 'Hong Liu', 'Yuping Luo', 'Liang Wang', 'Nan Yang', 'Furu Wei', 'Zhicheng Dou'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Multimodal embedding models, built upon causal Vision Language Models (VLMs),
have shown promise in various tasks. However, current approaches face three key
limitations: the use of causal attention in VLM backbones is suboptimal for
embedding tasks; scalability issues due to reliance on high-quality labeled
paired data for contrastive learning; and limited diversity in training
objectives and data. To address these issues, we propose MoCa, a two-stage
framework for transforming pre-trained VLMs into effective bidirectional
multimodal embedding models. The first stage, Modality-aware Continual
Pre-training, introduces a joint reconstruction objective that simultaneously
denoises interleaved text and image inputs, enhancing bidirectional
context-aware reasoning. The second stage, Heterogeneous Contrastive
Fine-tuning, leverages diverse, semantically rich multimodal data beyond simple
image-caption pairs to enhance generalization and alignment. Our method
addresses the stated limitations by introducing bidirectional attention through
continual pre-training, scaling effectively with massive unlabeled datasets via
joint reconstruction objectives, and utilizing diverse multimodal data for
enhanced representation robustness. Experiments demonstrate that MoCa
consistently improves performance across MMEB and ViDoRe-v2 benchmarks,
achieving new state-of-the-art results, and exhibits strong scalability with
both model size and training data on MMEB. | 2025-06-29T06:41:00Z | Homepage: https://haon-chen.github.io/MoCa/ | null | null | null | null | null | null | null | null | null |
2506.23151 | MEMFOF: High-Resolution Training for Memory-Efficient Multi-Frame
Optical Flow Estimation | ['Vladislav Bargatin', 'Egor Chistov', 'Alexander Yakovenko', 'Dmitriy Vatolin'] | ['cs.CV', 'cs.AI', 'cs.MM'] | Recent advances in optical flow estimation have prioritized accuracy at the
cost of growing GPU memory consumption, particularly for high-resolution
(FullHD) inputs. We introduce MEMFOF, a memory-efficient multi-frame optical
flow method that identifies a favorable trade-off between multi-frame
estimation and GPU memory usage. Notably, MEMFOF requires only 2.09 GB of GPU
memory at runtime for 1080p inputs, and 28.5 GB during training, which uniquely
positions our method to be trained at native 1080p without the need for
cropping or downsampling. We systematically revisit design choices from
RAFT-like architectures, integrating reduced correlation volumes and
high-resolution training protocols alongside multi-frame estimation, to achieve
state-of-the-art performance across multiple benchmarks while substantially
reducing memory overhead. Our method outperforms more resource-intensive
alternatives in both accuracy and runtime efficiency, validating its robustness
for flow estimation at high resolutions. At the time of submission, our method
ranks first on the Spring benchmark with a 1-pixel (1px) outlier rate of 3.289,
leads Sintel (clean) with an endpoint error (EPE) of 0.963, and achieves the
best Fl-all error on KITTI-2015 at 2.94%. The code is available at
https://github.com/msu-video-group/memfof. | 2025-06-29T09:01:42Z | Accepted at ICCV 2025 | null | null | null | null | null | null | null | null | null |
2506.23325 | XY-Tokenizer: Mitigating the Semantic-Acoustic Conflict in Low-Bitrate
Speech Codecs | ['Yitian Gong', 'Luozhijie Jin', 'Ruifan Deng', 'Dong Zhang', 'Xin Zhang', 'Qinyuan Cheng', 'Zhaoye Fei', 'Shimin Li', 'Xipeng Qiu'] | ['cs.SD', 'cs.AI', 'eess.AS'] | Speech codecs serve as bridges between speech signals and large language
models. An ideal codec for speech language models should not only preserve
acoustic information but also capture rich semantic information. However,
existing speech codecs struggle to balance high-quality audio reconstruction
with ease of modeling by language models. In this study, we analyze the
limitations of previous codecs in balancing semantic richness and acoustic
fidelity. We propose XY-Tokenizer, a novel codec that mitigates the conflict
between semantic and acoustic capabilities through multi-stage, multi-task
learning. Experimental results demonstrate that XY-Tokenizer achieves
performance in both semantic and acoustic tasks comparable to that of
state-of-the-art codecs operating at similar bitrates, even though those
existing codecs typically excel in only one aspect. Specifically, XY-Tokenizer
achieves strong text alignment, surpassing distillation-based semantic modeling
methods such as SpeechTokenizer and Mimi, while maintaining a speaker
similarity score of 0.83 between reconstructed and original audio. The
reconstruction performance of XY-Tokenizer is comparable to that of BigCodec,
the current state-of-the-art among acoustic-only codecs, which achieves a
speaker similarity score of 0.84 at a similar bitrate. Code and models are
available at https://github.com/gyt1145028706/XY-Tokenizer. | 2025-06-29T16:51:50Z | null | null | null | null | null | null | null | null | null | null |
2506.23394 | Teaching a Language Model to Speak the Language of Tools | ['Simeon Emanuilov'] | ['cs.IR', 'cs.AI', 'cs.CL', 'I.2.7; I.2.1'] | External tool integration through function-calling is essential for practical
language model applications, yet most multilingual models lack reliable
tool-use capabilities in non-English languages. Even state-of-the-art
multilingual models struggle with determining when to use tools and generating
the structured outputs required for function calls, often exhibiting language
confusion when prompted in lower-resource languages. This work presents a
methodology for adapting existing language models to enable robust tool use in
any target language, using Bulgarian as a case study. The approach involves
continued training of the BgGPT model series (2.6B, 9B, 27B parameters) on a
novel bilingual dataset of 10,035 function-calling examples designed to support
standardized protocols like MCP (Model Context Protocol). The research
introduces TUCAN (Tool-Using Capable Assistant Navigator), which achieves up to
28.75% improvement in function-calling accuracy over base models while
preserving core language understanding, as verified on established Bulgarian
benchmarks. Beyond accuracy gains, TUCAN models demonstrate production-ready
response formatting with clean, parsable function calls, contrasting with the
verbose and inconsistent outputs of base models. The models, evaluation
framework, and dataset are released to enable replication for other languages.
This work demonstrates a practical approach for extending tool-augmented
capabilities beyond English-centric systems. | 2025-06-29T20:47:27Z | null | null | null | null | null | null | null | null | null | null |
2506.23491 | ZonUI-3B: A Lightweight Vision-Language Model for Cross-Resolution GUI
Grounding | ['ZongHan Hsieh', 'Tzer-Jen Wei', 'ShengJing Yang'] | ['cs.CV', 'cs.AI'] | This paper introduces ZonUI-3B, a lightweight Vision-Language Model (VLM)
specifically designed for Graphical User Interface grounding tasks, achieving
performance competitive with significantly larger models. Unlike large-scale
VLMs (>7B parameters) that are computationally intensive and impractical for
consumer-grade hardware, ZonUI-3B delivers strong grounding accuracy while
being fully trainable on a single GPU (RTX 4090). The model incorporates
several key innovations: (i) a combined cross-platform, multi-resolution dataset
of 24K examples from diverse sources including mobile, desktop, and web GUI
screenshots to effectively address data scarcity in high-resolution desktop
environments; (ii) a two-stage fine-tuning strategy, where initial
cross-platform training establishes robust GUI understanding, followed by
specialized fine-tuning on high-resolution data to significantly enhance model
adaptability; and (iii) data curation and redundancy reduction strategies,
demonstrating that randomly sampling a smaller subset with reduced redundancy
achieves performance comparable to larger datasets, emphasizing data diversity
over sheer volume. Empirical evaluation on standard GUI grounding
benchmarks, including ScreenSpot, ScreenSpot-v2, and the challenging
ScreenSpot-Pro, highlights ZonUI-3B's exceptional accuracy, achieving 84.9% on
ScreenSpot and 86.4% on ScreenSpot-v2, surpassing prior models under 4B
parameters. Ablation studies validate the critical role of balanced sampling
and two-stage fine-tuning in enhancing robustness, particularly in
high-resolution desktop scenarios. ZonUI-3B is available at:
https://github.com/Han1018/ZonUI-3B | 2025-06-30T03:33:02Z | null | null | null | null | null | null | null | null | null | null |
2506.23670 | Efficient Interleaved Speech Modeling through Knowledge Distillation | ['Mohammadmahdi Nouriborji', 'Morteza Rohanian'] | ['cs.SD', 'cs.CL', 'eess.AS'] | Current speech language models exceed the size and latency constraints of
many deployment environments. We build compact, expressive speech generation
models through layer-aligned distillation, matching hidden states, attention
maps, and softened logits to compress large multimodal transformers by 3x with
minimal loss in performance. We introduce TinyWave, a family of 2B-parameter
models for speech-to-speech and interleaved speech-text generation, trained on
50,000 hours of public audio. TinyWave supports (i) speech-only generation
using phonetic or expressive tokens and (ii) mixed speech-text continuations.
Evaluation on Libri-Light shows TinyWave within 1.4 normalized perplexity
points of its teacher. Accuracy on spoken StoryCloze and SALMon reaches 93-97%
of the teacher's performance, outperforming size-matched baselines. These
models are optimized for deployment on commodity hardware, enabling
applications in real-time conversational agents, assistive technologies, and
low-resource environments. We release models, training code, and evaluation
scripts to support reproducible research on compact, expressive speech
generation. | 2025-06-30T09:47:37Z | null | null | null | null | null | null | null | null | null | null |
2506.23822 | Interpretable Zero-Shot Learning with Locally-Aligned Vision-Language
Model | ['Shiming Chen', 'Bowen Duan', 'Salman Khan', 'Fahad Shahbaz Khan'] | ['cs.CV'] | Large-scale vision-language models (VLMs), such as CLIP, have achieved
remarkable success in zero-shot learning (ZSL) by leveraging large-scale
visual-text pair datasets. However, these methods often lack interpretability,
as they compute the similarity between an entire query image and the embedded
category words, making it difficult to explain their predictions. One approach
to address this issue is to develop interpretable models by integrating
language, where classifiers are built using discrete attributes, similar to
human perception. This introduces a new challenge: how to effectively align
local visual features with corresponding attributes based on pre-trained VLMs.
To tackle this, we propose LaZSL, a locally-aligned vision-language model for
interpretable ZSL. LaZSL employs local visual-semantic alignment via optimal
transport to perform interaction between visual regions and their associated
attributes, facilitating effective alignment and providing interpretable
similarity without the need for additional training. Extensive experiments
demonstrate that our method offers several advantages, including enhanced
interpretability, improved accuracy, and strong domain generalization. Code is
available at: https://github.com/shiming-chen/LaZSL. | 2025-06-30T13:14:46Z | Accepted to ICCV'25 | null | null | null | null | null | null | null | null | null |
2506.23869 | Scaling Self-Supervised Representation Learning for Symbolic Piano
Performance | ['Louis Bradshaw', 'Honglu Fan', 'Alexander Spangher', 'Stella Biderman', 'Simon Colton'] | ['cs.SD', 'cs.AI', 'cs.LG', 'eess.AS'] | We study the capabilities of generative autoregressive transformer models
trained on large amounts of symbolic solo-piano transcriptions. After first
pretraining on approximately 60,000 hours of music, we use a comparatively
smaller, high-quality subset, to finetune models to produce musical
continuations, perform symbolic classification tasks, and produce
general-purpose contrastive MIDI embeddings by adapting the SimCLR framework to
symbolic music. When evaluating piano continuation coherence, our generative
model outperforms leading symbolic generation techniques and remains
competitive with proprietary audio generation models. On MIR classification
benchmarks, frozen representations from our contrastive model achieve
state-of-the-art results in linear probe experiments, while direct finetuning
demonstrates the generalizability of pretrained representations, often
requiring only a few hundred labeled examples to specialize to downstream
tasks. | 2025-06-30T14:00:14Z | ISMIR (2025) | null | null | null | null | null | null | null | null | null |
2506.23971 | UMA: A Family of Universal Models for Atoms | ['Brandon M. Wood', 'Misko Dzamba', 'Xiang Fu', 'Meng Gao', 'Muhammed Shuaibi', 'Luis Barroso-Luque', 'Kareem Abdelmaqsoud', 'Vahe Gharakhanyan', 'John R. Kitchin', 'Daniel S. Levine', 'Kyle Michel', 'Anuroop Sriram', 'Taco Cohen', 'Abhishek Das', 'Ammar Rizvi', 'Sushree Jagriti Sahoo', 'Zachary W. Ulissi', 'C. Lawrence Zitnick'] | ['cs.LG'] | The ability to quickly and accurately compute properties from atomic
simulations is critical for advancing a large number of applications in
chemistry and materials science including drug discovery, energy storage, and
semiconductor manufacturing. To address this need, Meta FAIR presents a family
of Universal Models for Atoms (UMA), designed to push the frontier of speed,
accuracy, and generalization. UMA models are trained on half a billion unique
3D atomic structures (the largest training runs to date) by compiling data
across multiple chemical domains, e.g. molecules, materials, and catalysts. We
develop empirical scaling laws to help understand how to increase model
capacity alongside dataset size to achieve the best accuracy. The UMA small and
medium models utilize a novel architectural design we refer to as mixture of
linear experts that enables increasing model capacity without sacrificing
speed. For example, UMA-medium has 1.4B parameters but only ~50M active
parameters per atomic structure. We evaluate UMA models on a diverse set of
applications across multiple domains and find that, remarkably, a single model
without any fine-tuning can perform similarly or better than specialized
models. We are releasing the UMA code, weights, and associated data to
accelerate computational workflows and enable the community to continue to
build increasingly capable AI models. | 2025-06-30T15:38:13Z | 29 pages, 5 figures | null | null | null | null | null | null | null | null | null |
2506.24085 | Imagine for Me: Creative Conceptual Blending of Real Images and Text via
Blended Attention | ['Wonwoong Cho', 'Yanxia Zhang', 'Yan-Ying Chen', 'David I. Inouye'] | ['cs.CV', 'cs.AI'] | Blending visual and textual concepts into a new visual concept is a unique
and powerful trait of human beings that can fuel creativity. However, in
practice, cross-modal conceptual blending for humans is prone to cognitive
biases, like design fixation, which leads to local minima in the design space.
In this paper, we propose a T2I diffusion adapter "IT-Blender" that can
automate the blending process to enhance human creativity. Prior works related
to cross-modal conceptual blending are limited in encoding a real image without
loss of details or in disentangling the image and text inputs. To address these
gaps, IT-Blender leverages pretrained diffusion models (SD and FLUX) to blend
the latent representations of a clean reference image with those of the noisy
generated image. Combined with our novel blended attention, IT-Blender encodes
the real reference image without loss of details and blends the visual concept
with the object specified by the text in a disentangled way. Our experiment
results show that IT-Blender outperforms the baselines by a large margin in
blending visual and textual concepts, shedding light on the new application of
image generative models to augment human creativity. | 2025-06-30T17:41:25Z | Project website is available at https://imagineforme.github.io/ | null | null | null | null | null | null | null | null | null |
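Several of the entries above lean on contrastive objectives (DyCL's dynamic contrastive learning, MoCa's contrastive fine-tuning, and the symbolic-piano work's SimCLR adaptation). As background, the standard NT-Xent (SimCLR) loss these methods build on can be sketched in a few lines of NumPy. This is an illustrative sketch of the generic loss only, not code from any of the listed papers; the function name and default temperature are our own choices:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (SimCLR) contrastive loss for two batches of paired embeddings.

    z1, z2: (N, D) arrays; row i of z1 and row i of z2 are two views of the
    same item. Returns the mean loss over all 2N anchors.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D) stacked views
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize -> cosine sims
    sim = z @ z.T / temperature                        # (2N, 2N) scaled similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # The positive for anchor i is its other view: i+n (or i-n for the second half).
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos_idx].mean()
```

Pulling the positive pair's log-softmax probability down the diagonal of the similarity matrix is the core move; the papers above differ mainly in how positives and margins are defined (e.g. DyCL's hierarchical spatial margins), not in this basic form.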