Dataset schema (field, dtype, observed range or value count):

  arxiv_id            float64              1.5k to 2.51k
  title               string               lengths 9 to 178
  authors             string               lengths 2 to 22.8k
  categories          string               lengths 4 to 146
  summary             string               lengths 103 to 1.92k
  published           string (date)        2015-02-06 10:44:00 to 2025-07-10 17:59:58
  comments            string               lengths 2 to 417
  journal_ref         string (categorical) 321 values
  doi                 string (categorical) 398 values
  ss_title            string               lengths 8 to 159
  ss_authors          string               lengths 11 to 8.38k
  ss_year             float64              2.02k to 2.03k
  ss_venue            string (categorical) 281 values
  ss_citationCount    float64              0 to 134k
  ss_referenceCount   float64              0 to 429
  ss_fieldsOfStudy    string (categorical) 47 values

Each record below lists these 16 fields in this order, one value per line; missing values appear as null.
2506.06281
TerraFM: A Scalable Foundation Model for Unified Multisensor Earth Observation
['Muhammad Sohail Danish', 'Muhammad Akhtar Munir', 'Syed Roshaan Ali Shah', 'Muhammad Haris Khan', 'Rao Muhammad Anwer', 'Jorma Laaksonen', 'Fahad Shahbaz Khan', 'Salman Khan']
['cs.CV']
Modern Earth observation (EO) increasingly leverages deep learning to harness the scale and diversity of satellite imagery across sensors and regions. While recent foundation models have demonstrated promising generalization across EO tasks, many remain limited by the scale, geographical coverage, and spectral diversity of their training data, factors critical for learning globally transferable representations. In this work, we introduce TerraFM, a scalable self-supervised learning model that leverages globally distributed Sentinel-1 and Sentinel-2 imagery, combined with large spatial tiles and land-cover aware sampling to enrich spatial and semantic coverage. By treating sensing modalities as natural augmentations in our self-supervised approach, we unify radar and optical inputs via modality-specific patch embeddings and adaptive cross-attention fusion. Our training strategy integrates local-global contrastive learning and introduces a dual-centering mechanism that incorporates class-frequency-aware regularization to address long-tailed distributions in land cover. TerraFM achieves strong generalization on both classification and segmentation tasks, outperforming prior models on GEO-Bench and Copernicus-Bench. Our code and pretrained models are publicly available at https://github.com/mbzuai-oryx/TerraFM.
2025-06-06T17:59:50Z
null
null
null
null
null
null
null
null
null
null
2506.06962
AR-RAG: Autoregressive Retrieval Augmentation for Image Generation
['Jingyuan Qi', 'Zhiyang Xu', 'Qifan Wang', 'Lifu Huang']
['cs.CV']
We introduce Autoregressive Retrieval Augmentation (AR-RAG), a novel paradigm that enhances image generation by autoregressively incorporating k-nearest neighbor retrievals at the patch level. Unlike prior methods that perform a single, static retrieval before generation and condition the entire generation on fixed reference images, AR-RAG performs context-aware retrievals at each generation step, using prior-generated patches as queries to retrieve and incorporate the most relevant patch-level visual references, enabling the model to respond to evolving generation needs while avoiding limitations (e.g., over-copying, stylistic bias, etc.) prevalent in existing methods. To realize AR-RAG, we propose two parallel frameworks: (1) Distribution-Augmentation in Decoding (DAiD), a training-free plug-and-play decoding strategy that directly merges the distribution of model-predicted patches with the distribution of retrieved patches, and (2) Feature-Augmentation in Decoding (FAiD), a parameter-efficient fine-tuning method that progressively smooths the features of retrieved patches via multi-scale convolution operations and leverages them to augment the image generation process. We validate the effectiveness of AR-RAG on widely adopted benchmarks, including Midjourney-30K, GenEval and DPG-Bench, demonstrating significant performance gains over state-of-the-art image generation models.
2025-06-08T01:33:05Z
Image Generation, Retrieval Augmented Generation
null
null
null
null
null
null
null
null
null
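The DAiD variant described in the AR-RAG abstract above merges the model's predicted patch distribution with a distribution over retrieved patches at each decoding step. A minimal sketch of that merge, assuming a convex combination with weight `lam` and an empirical distribution built from retrieved patch ids (both assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def daid_merge(logits: torch.Tensor, retrieved_ids: torch.Tensor,
               vocab_size: int, lam: float = 0.3) -> torch.Tensor:
    """Blend model logits with an empirical distribution over retrieved patch ids."""
    p_model = F.softmax(logits, dim=-1)                       # model distribution
    counts = torch.bincount(retrieved_ids, minlength=vocab_size).float()
    p_retr = counts / counts.sum()                            # retrieval distribution
    return (1.0 - lam) * p_model + lam * p_retr               # merged distribution

# toy usage: 1024-token patch vocabulary, 5 retrieved neighbor patches
logits = torch.randn(1024)
neighbors = torch.tensor([3, 3, 17, 256, 900])
p = daid_merge(logits, neighbors, vocab_size=1024)
next_patch = torch.multinomial(p, 1)  # sample the next patch token
```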
2506.07032
A Culturally-diverse Multilingual Multimodal Video Benchmark & Model
['Bhuiyan Sanjid Shafique', 'Ashmal Vayani', 'Muhammad Maaz', 'Hanoona Abdul Rasheed', 'Dinura Dissanayake', 'Mohammed Irfan Kurpath', 'Yahya Hmaiti', 'Go Inoue', 'Jean Lahoud', 'Md. Safirur Rashid', 'Shadid Intisar Quasem', 'Maheen Fatima', 'Franco Vidal', 'Mykola Maslych', 'Ketan Pravin More', 'Sanoojan Baliah', 'Hasindri Watawana', 'Yuhao Li', 'Fabian Farestam', 'Leon Schaller', 'Roman Tymtsiv', 'Simon Weber', 'Hisham Cholakkal', 'Ivan Laptev', "Shin'ichi Satoh", 'Michael Felsberg', 'Mubarak Shah', 'Salman Khan', 'Fahad Shahbaz Khan']
['cs.CL', 'cs.CV']
Large multimodal models (LMMs) have recently gained attention due to their effectiveness in understanding and generating descriptions of visual content. Most existing LMMs are limited to English. While a few recent works explore multilingual image LMMs, to the best of our knowledge, moving beyond English for cultural and linguistic inclusivity has yet to be investigated in the context of video LMMs. In pursuit of more inclusive video LMMs, we introduce a multilingual video LMM benchmark, named ViMUL-Bench, to evaluate video LMMs across 14 languages, including both low- and high-resource languages: English, Chinese, Spanish, French, German, Hindi, Arabic, Russian, Bengali, Urdu, Sinhala, Tamil, Swedish, and Japanese. ViMUL-Bench is designed to rigorously test video LMMs across 15 categories, including eight culturally diverse categories ranging from lifestyles and festivals to foods and rituals, and from local landmarks to prominent cultural personalities. ViMUL-Bench comprises both open-ended (short and long-form) and multiple-choice questions spanning various video durations (short, medium, and long), with 8k samples manually verified by native language speakers. In addition, we introduce a machine-translated multilingual video training set comprising 1.2 million samples and develop a simple multilingual video LMM, named ViMUL, which is shown to provide a better tradeoff between high- and low-resource languages for video understanding. We hope our ViMUL-Bench and multilingual video LMM, along with the large-scale multilingual video training set, will ease future research on developing culturally and linguistically inclusive multilingual video LMMs. Our proposed benchmark, video LMM, and training data will be publicly released at https://mbzuai-oryx.github.io/ViMUL/.
2025-06-08T07:52:20Z
null
null
null
null
null
null
null
null
null
null
2506.07044
Lingshu: A Generalist Foundation Model for Unified Multimodal Medical Understanding and Reasoning
['LASA Team', 'Weiwen Xu', 'Hou Pong Chan', 'Long Li', 'Mahani Aljunied', 'Ruifeng Yuan', 'Jianyu Wang', 'Chenghao Xiao', 'Guizhen Chen', 'Chaoqun Liu', 'Zhaodonghui Li', 'Yu Sun', 'Junao Shen', 'Chaojun Wang', 'Jie Tan', 'Deli Zhao', 'Tingyang Xu', 'Hao Zhang', 'Yu Rong']
['cs.CL', 'cs.AI', 'cs.CV']
Multimodal Large Language Models (MLLMs) have demonstrated impressive capabilities in understanding common visual elements, largely due to their large-scale datasets and advanced training strategies. However, their effectiveness in medical applications remains limited due to the inherent discrepancies between data and tasks in medical scenarios and those in the general domain. Concretely, existing medical MLLMs face the following critical limitations: (1) limited coverage of medical knowledge beyond imaging, (2) heightened susceptibility to hallucinations due to suboptimal data curation processes, and (3) a lack of reasoning capabilities tailored for complex medical scenarios. To address these challenges, we first propose a comprehensive data curation procedure that (1) efficiently acquires rich medical knowledge data not only from medical imaging but also from extensive medical texts and general-domain data; and (2) synthesizes accurate medical captions, visual question answering (VQA), and reasoning samples. As a result, we build a multimodal dataset enriched with extensive medical knowledge. Building on the curated data, we introduce our medical-specialized MLLM: Lingshu. Lingshu undergoes multi-stage training to embed medical expertise and enhance its task-solving capabilities progressively. We also preliminarily explore the potential of applying the reinforcement-learning-with-verifiable-rewards paradigm to enhance Lingshu's medical reasoning ability. Additionally, we develop MedEvalKit, a unified evaluation framework that consolidates leading multimodal and textual medical benchmarks for standardized, fair, and efficient model assessment. We evaluate the performance of Lingshu on three fundamental medical tasks: multimodal QA, text-based QA, and medical report generation. The results show that Lingshu consistently outperforms the existing open-source multimodal models on most tasks ...
2025-06-08T08:47:30Z
Technical Report, 53 pages, 25 tables, and 16 figures. Our webpage is https://alibaba-damo-academy.github.io/lingshu/
null
null
null
null
null
null
null
null
null
2506.07080
FLAIR-HUB: Large-scale Multimodal Dataset for Land Cover and Crop Mapping
['Anatol Garioud', 'Sébastien Giordano', 'Nicolas David', 'Nicolas Gonthier']
['cs.CV']
The growing availability of high-quality Earth Observation (EO) data enables accurate global land cover and crop type monitoring. However, the volume and heterogeneity of these datasets pose major processing and annotation challenges. To address this, the French National Institute of Geographical and Forest Information (IGN) is actively exploring innovative strategies to exploit diverse EO data, which require large annotated datasets. IGN introduces FLAIR-HUB, the largest multi-sensor land cover dataset with very-high-resolution (20 cm) annotations, covering 2,528 km² of France. It combines six aligned modalities: aerial imagery, Sentinel-1/2 time series, SPOT imagery, topographic data, and historical aerial images. Extensive benchmarks evaluate multimodal fusion and deep learning models (CNNs, transformers) for land cover or crop mapping and also explore multi-task learning. Results underscore the complexity of multimodal fusion and fine-grained classification, with the best land cover performance (78.2% accuracy, 65.8% mIoU) achieved using nearly all modalities. FLAIR-HUB supports supervised and multimodal pretraining, with data and code available at https://ignf.github.io/FLAIR/flairhub.
2025-06-08T10:48:51Z
null
null
null
null
null
null
null
null
null
null
2506.07310
AllTracker: Efficient Dense Point Tracking at High Resolution
['Adam W. Harley', 'Yang You', 'Xinglong Sun', 'Yang Zheng', 'Nikhil Raghuraman', 'Yunqi Gu', 'Sheldon Liang', 'Wen-Hsuan Chu', 'Achal Dave', 'Pavel Tokmakov', 'Suya You', 'Rares Ambrus', 'Katerina Fragkiadaki', 'Leonidas J. Guibas']
['cs.CV']
We introduce AllTracker: a model that estimates long-range point tracks by way of estimating the flow field between a query frame and every other frame of a video. Unlike existing point tracking methods, our approach delivers high-resolution and dense (all-pixel) correspondence fields, which can be visualized as flow maps. Unlike existing optical flow methods, our approach corresponds one frame to hundreds of subsequent frames, rather than just the next frame. We develop a new architecture for this task, blending techniques from existing work in optical flow and point tracking: the model performs iterative inference on low-resolution grids of correspondence estimates, propagating information spatially via 2D convolution layers, and propagating information temporally via pixel-aligned attention layers. The model is fast and parameter-efficient (16 million parameters), and delivers state-of-the-art point tracking accuracy at high resolution (i.e., tracking 768x1024 pixels, on a 40G GPU). A benefit of our design is that we can train on a wider set of datasets, and we find that doing so is crucial for top performance. We provide an extensive ablation study on our architecture details and training recipe, making it clear which details matter most. Our code and model weights are available at https://alltracker.github.io .
2025-06-08T22:55:06Z
null
null
null
null
null
null
null
null
null
null
2506.07434
Well Begun is Half Done: Low-resource Preference Alignment by Weak-to-Strong Decoding
['Feifan Song', 'Shaohang Wei', 'Wen Luo', 'Yuxuan Fan', 'Tianyu Liu', 'Guoyin Wang', 'Houfeng Wang']
['cs.CL', 'cs.AI']
Large Language Models (LLMs) require alignment with human preferences to avoid generating offensive, false, or meaningless content. Recently, low-resource methods for LLM alignment have become popular, but they still face challenges in producing content that is both high-quality and aligned. Motivated by the observation that the difficulty of generating aligned responses is concentrated at the beginning of decoding, we propose a novel framework, Weak-to-Strong Decoding (WSD), to enhance the alignment ability of base models under the guidance of a small aligned model. The small model first drafts a well-aligned beginning, after which the large base model continues the rest, controlled by a well-designed auto-switch mechanism. We also collect a new dataset, GenerAlign, to fine-tune a small-sized Pilot-3B as the draft model, which effectively enhances different base models under the WSD framework to outperform all baseline methods, while avoiding the degradation on downstream tasks known as the alignment tax. We further conduct extensive experiments to examine the impact of different settings and time efficiency, along with in-depth analyses of WSD's intrinsic mechanisms.
2025-06-09T05:21:22Z
Accepted by ACL 2025 Findings
null
null
null
null
null
null
null
null
null
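The WSD framework above is procedurally simple: a small aligned model drafts the opening of the response, then the large base model continues. A minimal sketch using Hugging Face `transformers`, with a fixed-length prefix standing in for the paper's auto-switch mechanism and placeholder model names (the sketch also assumes both models share a tokenizer):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def wsd_generate(prompt: str, draft_name: str, base_name: str,
                 switch_len: int = 64, max_new: int = 256) -> str:
    tok = AutoTokenizer.from_pretrained(draft_name)  # shared-tokenizer assumption
    draft = AutoModelForCausalLM.from_pretrained(draft_name)
    base = AutoModelForCausalLM.from_pretrained(base_name)

    ids = tok(prompt, return_tensors="pt").input_ids
    # 1) the small aligned model drafts a well-aligned beginning
    ids = draft.generate(ids, max_new_tokens=switch_len, do_sample=True)
    # 2) the large base model continues, conditioned on the drafted prefix
    ids = base.generate(ids, max_new_tokens=max_new, do_sample=True)
    return tok.decode(ids[0], skip_special_tokens=True)

# hypothetical model paths; the paper's draft model is a fine-tuned Pilot-3B
text = wsd_generate("How should I respond to a rude email?",
                    draft_name="path/to/aligned-draft-model",
                    base_name="path/to/large-base-model")
```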
2506.07438
LGAI-EMBEDDING-Preview Technical Report
['Jooyoung Choi', 'Hyun Kim', 'Hansol Jang', 'Changwook Jun', 'Kyunghoon Bae', 'Hyewon Choi', 'Stanley Jungkyu Choi', 'Honglak Lee', 'Chulmin Yun']
['cs.CL']
This report presents a unified instruction-based framework for learning generalized text embeddings optimized for both information retrieval (IR) and non-IR tasks. Built upon a decoder-only large language model (Mistral-7B), our approach combines in-context learning, soft supervision, and adaptive hard-negative mining to generate context-aware embeddings without task-specific fine-tuning. Structured instructions and few-shot examples are used to guide the model across diverse tasks, enabling strong performance on classification, semantic similarity, clustering, and reranking benchmarks. To improve semantic discrimination, we employ a soft labeling framework where continuous relevance scores, distilled from a high-performance dense retriever and reranker, serve as fine-grained supervision signals. In addition, we introduce adaptive margin-based hard-negative mining, which filters out semantically ambiguous negatives based on their similarity to positive examples, thereby enhancing training stability and retrieval robustness. Our model is evaluated on the newly introduced MTEB (English, v2) benchmark, covering 41 tasks across seven categories. Results show that our method achieves strong generalization and ranks among the top-performing models by Borda score, outperforming several larger or fully fine-tuned baselines. These findings highlight the effectiveness of combining in-context prompting, soft supervision, and adaptive sampling for scalable, high-quality embedding generation.
2025-06-09T05:30:35Z
10 pages
null
null
LG-ANNA-Embedding technical report
['Jooyoung Choi', 'Hyun Kim', 'Hansol Jang', 'Changwook Jun', 'Kyunghoon Bae', 'Hyewon Choi', 'Stanley Jungkyu Choi', 'Honglak Lee', 'Chulmin Yun']
2025
null
0
35
['Computer Science']
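The adaptive margin-based hard-negative mining described in the abstract above filters out negatives whose similarity to the query comes too close to the positive's, treating them as potential false negatives. A small sketch under assumed choices (cosine similarity and a fixed margin; the paper's margin may be adaptive per example):

```python
import torch
import torch.nn.functional as F

def filter_hard_negatives(q: torch.Tensor, pos: torch.Tensor,
                          negs: torch.Tensor, margin: float = 0.05) -> torch.Tensor:
    """Keep negatives with sim(q, neg) < sim(q, pos) - margin."""
    q = F.normalize(q, dim=-1)
    pos = F.normalize(pos, dim=-1)
    negs = F.normalize(negs, dim=-1)
    sim_pos = q @ pos                       # scalar similarity to the positive
    sim_neg = negs @ q                      # (num_negs,) similarities to the query
    keep = sim_neg < (sim_pos - margin)     # semantically ambiguous negatives dropped
    return negs[keep]

q, pos = torch.randn(768), torch.randn(768)
negs = torch.randn(32, 768)
clean_negs = filter_hard_negatives(q, pos, negs)
```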
2506.07491
SpatialLM: Training Large Language Models for Structured Indoor Modeling
['Yongsen Mao', 'Junhao Zhong', 'Chuan Fang', 'Jia Zheng', 'Rui Tang', 'Hao Zhu', 'Ping Tan', 'Zihan Zhou']
['cs.CV']
SpatialLM is a large language model designed to process 3D point cloud data and generate structured 3D scene understanding outputs. These outputs include architectural elements like walls, doors, windows, and oriented object boxes with their semantic categories. Unlike previous methods which exploit task-specific network designs, our model adheres to the standard multimodal LLM architecture and is fine-tuned directly from open-source LLMs. To train SpatialLM, we collect a large-scale, high-quality synthetic dataset consisting of the point clouds of 12,328 indoor scenes (54,778 rooms) with ground-truth 3D annotations, and conduct a careful study on various modeling and training decisions. On public benchmarks, our model gives state-of-the-art performance in layout estimation and competitive results in 3D object detection. With that, we show a feasible path for enhancing the spatial understanding capabilities of modern LLMs for applications in augmented reality, embodied robotics, and more.
2025-06-09T07:10:58Z
null
null
null
SpatialLM: Training Large Language Models for Structured Indoor Modeling
['Yongsen Mao', 'Junhao Zhong', 'Chuan Fang', 'Jia Zheng', 'Rui Tang', 'Hao Zhu', 'Ping Tan', 'Zihan Zhou']
2025
arXiv.org
1
68
['Computer Science']
2506.07520
LeVo: High-Quality Song Generation with Multi-Preference Alignment
['Shun Lei', 'Yaoxun Xu', 'Zhiwei Lin', 'Huaicheng Zhang', 'Wei Tan', 'Hangting Chen', 'Jianwei Yu', 'Yixuan Zhang', 'Chenyu Yang', 'Haina Zhu', 'Shuai Wang', 'Zhiyong Wu', 'Dong Yu']
['cs.SD', 'cs.AI', 'eess.AS']
Recent advances in large language models (LLMs) and audio language models have significantly improved music generation, particularly in lyrics-to-song generation. However, existing approaches still struggle with the complex composition of songs and the scarcity of high-quality data, leading to limitations in sound quality, musicality, instruction following, and vocal-instrument harmony. To address these challenges, we introduce LeVo, an LM-based framework consisting of LeLM and a music codec. LeLM is capable of modeling two types of tokens in parallel: mixed tokens, which represent the combined audio of vocals and accompaniment to achieve vocal-instrument harmony, and dual-track tokens, which separately encode vocals and accompaniment for high-quality song generation. It employs two decoder-only transformers and a modular extension training strategy to prevent interference between different token types. To further enhance musicality and instruction following, we introduce a multi-preference alignment method based on Direct Preference Optimization (DPO). This method handles diverse human preferences through a semi-automatic data construction process and DPO post-training. Experimental results demonstrate that LeVo consistently outperforms existing methods on both objective and subjective metrics. Ablation studies further justify the effectiveness of our designs. Audio examples are available at https://levo-demo.github.io/. Code is released at https://github.com/tencent-ailab/songgeneration.
2025-06-09T07:57:24Z
null
null
null
null
null
null
null
null
null
null
2506.07527
Learning What Reinforcement Learning Can't: Interleaved Online Fine-Tuning for Hardest Questions
['Lu Ma', 'Hao Liang', 'Meiyi Qiang', 'Lexiang Tang', 'Xiaochen Ma', 'Zhen Hao Wong', 'Junbo Niu', 'Chengyu Shen', 'Runming He', 'Bin Cui', 'Wentao Zhang']
['cs.AI', 'cs.LG']
Recent advances in large language model (LLM) reasoning have shown that sophisticated behaviors such as planning and self-reflection can emerge through reinforcement learning (RL). However, despite these successes, RL in its current form remains insufficient to induce capabilities that exceed the limitations of the base model, as it is primarily optimized based on the model's existing knowledge rather than facilitating the acquisition of new information. To address this limitation, we employ supervised fine-tuning (SFT) to learn what RL cannot, which enables the incorporation of new knowledge and reasoning patterns by leveraging high-quality demonstration data. We analyze the training dynamics of RL and SFT for LLM reasoning and find that RL excels at maintaining and improving performance on questions within the model's original capabilities, while SFT is more effective at enabling progress on questions beyond the current scope of the model. Motivated by the complementary strengths of RL and SFT, we introduce a novel training approach, ReLIFT (Reinforcement Learning Interleaved with Online Fine-Tuning). In ReLIFT, the model is primarily trained using RL, but when it encounters challenging questions, high-quality solutions are collected for fine-tuning, and the training process alternates between RL and fine-tuning to enhance the model's reasoning abilities. ReLIFT achieves an average improvement of over +5.2 points across five competition-level benchmarks and one out-of-distribution benchmark compared to other zero-RL models. Furthermore, we demonstrate that ReLIFT outperforms both RL and SFT while using only 13% of the detailed demonstration data, highlighting its scalability. These results provide compelling evidence that ReLIFT overcomes the fundamental limitations of RL and underscores its significant potential.
2025-06-09T08:11:20Z
12 pages, 5 figures
null
null
Learning What Reinforcement Learning Can't: Interleaved Online Fine-Tuning for Hardest Questions
['Lu Ma', 'Hao Liang', 'Meiyi Qiang', 'Lexiang Tang', 'Xiaochen Ma', 'Zhen Hao Wong', 'Junbo Niu', 'Chengyu Shen', 'Runming He', 'Bin Cui', 'Wentao Zhang']
2025
arXiv.org
0
31
['Computer Science']
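The ReLIFT recipe above alternates RL with fine-tuning on questions the current policy cannot yet solve. A control-flow-only schematic with stub functions; the update rules, demonstration collection, and the hardness criterion are all placeholders, not the paper's implementation:

```python
def rl_step(model, batch):
    """Stub for one RL update (e.g., a policy-gradient step on verifiable rewards)."""
    return model

def sft_step(model, demos):
    """Stub for one supervised fine-tuning update on collected solutions."""
    return model

def collect_solutions(questions):
    """Stub: gather high-quality worked solutions for the hard questions."""
    return [(q["text"], "worked solution") for q in questions]

def relift(model, batches, hard_threshold=0.2):
    for batch in batches:
        model = rl_step(model, batch)
        # route questions the current policy rarely solves to the SFT phase
        hard = [q for q in batch if q["success_rate"] < hard_threshold]
        if hard:
            model = sft_step(model, collect_solutions(hard))
    return model

demo_batches = [[{"text": "hard problem", "success_rate": 0.1},
                 {"text": "easy problem", "success_rate": 0.9}]]
model = relift(model=None, batches=demo_batches)
```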
2506.07530
BitVLA: 1-bit Vision-Language-Action Models for Robotics Manipulation
['Hongyu Wang', 'Chuyan Xiong', 'Ruiping Wang', 'Xilin Chen']
['cs.RO', 'cs.CV']
Vision-Language-Action (VLA) models have shown impressive capabilities across a wide range of robotics manipulation tasks. However, their growing model size poses significant challenges for deployment on resource-constrained robotic systems. While 1-bit pretraining has proven effective for enhancing the inference efficiency of large language models with minimal performance loss, its application to VLA models remains underexplored. In this work, we present BitVLA, the first 1-bit VLA model for robotics manipulation, in which every parameter is ternary, i.e., {-1, 0, 1}. To further reduce the memory footprint of the vision encoder, we propose a distillation-aware training strategy that compresses the full-precision encoder to 1.58-bit weights. During this process, a full-precision encoder serves as a teacher model to better align latent representations. Despite the lack of large-scale robotics pretraining, BitVLA achieves performance comparable to the state-of-the-art model OpenVLA-OFT with 4-bit post-training quantization on the LIBERO benchmark, while consuming only 29.8% of the memory. These results highlight BitVLA's promise for deployment on memory-constrained edge devices. We release the code and model weights at https://github.com/ustcwhy/BitVLA.
2025-06-09T08:15:11Z
Work in progress
null
null
BitVLA: 1-bit Vision-Language-Action Models for Robotics Manipulation
['Hongyu Wang', 'Chuyan Xiong', 'Ruiping Wang', 'Xilin Chen']
2025
arXiv.org
0
46
['Computer Science']
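Ternary ("1.58-bit") weights like BitVLA's can be illustrated with BitNet-b1.58-style absmean quantization. The abstract does not spell out BitVLA's exact scheme, so treat the following as a generic sketch of the idea rather than the paper's recipe:

```python
import torch

def ternarize(w: torch.Tensor, eps: float = 1e-8):
    """Quantize a weight tensor to {-1, 0, +1} with an absmean scale (BitNet b1.58 style)."""
    scale = w.abs().mean().clamp(min=eps)      # per-tensor absmean scale
    w_q = (w / scale).round().clamp(-1, 1)     # every weight becomes -1, 0, or +1
    return w_q, scale                          # approximate reconstruction: w_q * scale

w = torch.randn(4096, 4096)
w_q, scale = ternarize(w)
assert set(w_q.unique().tolist()) <= {-1.0, 0.0, 1.0}
```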
2506.07597
Instructing Large Language Models for Low-Resource Languages: A Systematic Study for Basque
['Oscar Sainz', 'Naiara Perez', 'Julen Etxaniz', 'Joseba Fernandez de Landa', 'Itziar Aldabe', 'Iker García-Ferrero', 'Aimar Zabala', 'Ekhi Azurmendi', 'German Rigau', 'Eneko Agirre', 'Mikel Artetxe', 'Aitor Soroa']
['cs.CL']
Instructing language models with user intent requires large instruction datasets, which are only available for a limited set of languages. In this paper, we explore alternatives to conventional instruction adaptation pipelines in low-resource scenarios. We assume a realistic scenario for low-resource languages, where only the following are available: corpora in the target language, existing open-weight multilingual base and instructed backbone LLMs, and synthetically generated instructions sampled from the instructed backbone. We present a comprehensive set of experiments for Basque that systematically study different combinations of these components, evaluated on benchmarks and human preferences from 1,680 participants. Our conclusions show that target language corpora are essential, that synthetic instructions yield robust models, and, most importantly, that using an instruction-tuned model as the backbone outperforms using a base non-instructed model, with results improving further at scale. Using Llama 3.1 Instruct 70B as the backbone, our model comes near frontier models of much larger size for Basque, without using any Basque data apart from the 1.2B-word corpus. We release code, models, instruction datasets, and human preferences to support full reproducibility in future research on low-resource language adaptation.
2025-06-09T09:54:47Z
Under review
null
null
null
null
null
null
null
null
null
2506.07621
LoRMA: Low-Rank Multiplicative Adaptation for LLMs
['Harsh Bihany', 'Shubham Patel', 'Ashutosh Modi']
['cs.CL', 'cs.AI', 'cs.LG']
Large Language Models have shown remarkable capabilities in the NLP domain. Their effectiveness can mainly be attributed to their ability to adapt to an array of downstream tasks. However, full fine-tuning is generally computationally expensive. To mitigate this, many techniques have been developed that prioritize efficiency, a prominent one being Low-Rank Adaptation (LoRA). However, LoRA and its variants employ re-parametrized additive updates. In this paper, we propose Low-Rank Multiplicative Adaptation (LoRMA), which shifts the paradigm of additive updates to a richer space of matrix multiplicative transformations. We tackle challenges such as the computational complexity and rank bottleneck of matrix multiplication by effectively re-ordering operations and introducing rank inflation strategies. We conduct extensive experiments to demonstrate the effectiveness of our approach in terms of various evaluation metrics.
2025-06-09T10:36:46Z
Accepted at ACL Findings 2025; 21 pages (9 main paper + 5 pages references + 7 pages appendix)
null
null
null
null
null
null
null
null
null
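One plausible reading of LoRMA's multiplicative update is applying a low-rank transform (I + BA) on top of a frozen linear layer, initialized at the identity so training starts from the pre-trained behavior. The sketch below encodes that assumption; it is not the paper's exact parameterization or its re-ordering/rank-inflation tricks:

```python
import torch
import torch.nn as nn

class LoRMALinear(nn.Module):
    """Frozen base linear layer with a trainable low-rank *multiplicative* update."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # keep pre-trained weights frozen
        d_out = base.out_features
        self.A = nn.Parameter(torch.zeros(rank, d_out))       # zero-init => identity start
        self.B = nn.Parameter(torch.randn(d_out, rank) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.base(x)                             # base output
        # apply (I + B A) to the output space: h + h A^T B^T
        return h + h @ self.A.T @ self.B.T

layer = LoRMALinear(nn.Linear(512, 512), rank=8)
y = layer(torch.randn(2, 512))
```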
2506.07634
SongBloom: Coherent Song Generation via Interleaved Autoregressive Sketching and Diffusion Refinement
['Chenyu Yang', 'Shuai Wang', 'Hangting Chen', 'Wei Tan', 'Jianwei Yu', 'Haizhou Li']
['eess.AS', 'cs.MM']
Generating music with coherent structure and harmonious instrumental and vocal elements remains a significant challenge in song generation. Existing language models and diffusion-based methods often struggle to balance global coherence with local fidelity, resulting in outputs that lack musicality or suffer from incoherent progression and mismatched lyrics. This paper introduces SongBloom, a novel framework for full-length song generation that leverages an interleaved paradigm of autoregressive sketching and diffusion-based refinement. SongBloom employs an autoregressive diffusion model that combines the high fidelity of diffusion models with the scalability of language models. Specifically, it gradually extends a musical sketch from short to long and refines the details from coarse to fine-grained. The interleaved generation paradigm effectively integrates prior semantic and acoustic context to guide the generation process. Experimental results demonstrate that SongBloom outperforms existing methods across both subjective and objective metrics and achieves performance comparable to state-of-the-art commercial music generation platforms. Audio samples are available on our demo page: https://cypress-yang.github.io/SongBloom_demo. The code and model weights have been released at https://github.com/Cypress-Yang/SongBloom.
2025-06-09T11:01:01Z
Submitted to NeurIPS2025
null
null
null
null
null
null
null
null
null
2506.07636
SWE-Dev: Building Software Engineering Agents with Training and Inference Scaling
['Haoran Wang', 'Zhenyu Hou', 'Yao Wei', 'Jie Tang', 'Yuxiao Dong']
['cs.AI']
Large language models (LLMs) have advanced rapidly from conversational problem solving to addressing real-world tasks involving tool use, such as software engineering (SWE). Recent LLM-powered toolkits, such as OpenAI Codex and Cursor, have offered end-to-end automation of the software development process. However, building effective SWE agents remains challenging due to the lack of high-quality training data and effective test cases. To address this issue, we present SWE-Dev, an SWE agent built upon open-source LLMs. First, we develop a robust pipeline to synthesize test cases for patch evaluation. Second, we scale up agent trajectories to construct the training data for building SWE-Dev. Experiments on the SWE-bench-Verified benchmark show that the SWE-Dev models can achieve top performance among all open SWE agents. Specifically, the success rates of the SWE-Dev 7B and 32B parameter models reach 23.4% and 36.6%, respectively, outperforming state-of-the-art open-source models. All code, models, and datasets are publicly available at https://github.com/THUDM/SWE-Dev.
2025-06-09T11:03:16Z
Accepted to Findings of ACL'25
null
null
null
null
null
null
null
null
null
2506.07643
Synthetic Visual Genome
['Jae Sung Park', 'Zixian Ma', 'Linjie Li', 'Chenhao Zheng', 'Cheng-Yu Hsieh', 'Ximing Lu', 'Khyathi Chandu', 'Quan Kong', 'Norimasa Kobori', 'Ali Farhadi', 'Yejin Choi', 'Ranjay Krishna']
['cs.CV']
Reasoning over visual relationships (spatial, functional, interactional, social, etc.) is considered to be a fundamental component of human cognition. Yet, despite the major advances in visual comprehension in multimodal language models (MLMs), precise reasoning over relationships and their generation remains a challenge. We introduce ROBIN: an MLM instruction-tuned with densely annotated relationships, capable of constructing high-quality dense scene graphs at scale. To train ROBIN, we curate SVG, a synthetic scene graph dataset, by completing the missing relations of selected objects in existing scene graphs using a teacher MLM and a carefully designed filtering process to ensure high quality. To generate more accurate and rich scene graphs at scale for any image, we introduce SG-EDIT: a self-distillation framework where GPT-4o further refines ROBIN's predicted scene graphs by removing unlikely relations and/or suggesting relevant ones. In total, our dataset contains 146K images and 5.6M relationships for 2.6M objects. Results show that our ROBIN-3B model, despite being trained on fewer than 3 million instances, outperforms similar-size models trained on over 300 million instances on relationship understanding benchmarks, and even surpasses larger models of up to 13B parameters. Notably, it achieves state-of-the-art performance in referring expression comprehension with a score of 88.9, surpassing the previous best of 87.4. Our results suggest that training on refined scene graph data is crucial to maintaining high performance across diverse visual reasoning tasks.
2025-06-09T11:09:10Z
CVPR 2025
null
null
null
null
null
null
null
null
null
2506.07833
Improving Large Language Models with Concept-Aware Fine-Tuning
['Michael K. Chen', 'Xikun Zhang', 'Jiaxing Huang', 'Dacheng Tao']
['cs.LG', 'cs.AI', 'cs.CL']
Large language models (LLMs) have become the cornerstone of modern AI. However, the existing paradigm of next-token prediction fundamentally limits their ability to form coherent, high-level concepts, posing a critical barrier to human-like understanding and reasoning. Take the phrase "ribonucleic acid" as an example: an LLM will first decompose it into tokens, i.e., artificial text fragments ("rib", "on", ...), then learn each token sequentially, rather than grasping the phrase as a unified, coherent semantic entity. This fragmented representation hinders deeper conceptual understanding and, ultimately, the development of truly intelligent systems. In response, we introduce Concept-Aware Fine-Tuning (CAFT), a novel multi-token training method that redefines how LLMs are fine-tuned. By enabling the learning of sequences that span multiple tokens, this method fosters stronger concept-aware learning. Our experiments demonstrate significant improvements compared to conventional next-token fine-tuning methods across diverse tasks, including traditional applications like text summarization and domain-specific ones like de novo protein design. Multi-token prediction was previously only possible in the prohibitively expensive pretraining phase; CAFT, to our knowledge, is the first to bring the multi-token setting to the post-training phase, thus effectively democratizing its benefits for the broader community of practitioners and researchers. Finally, the unexpected effectiveness of our proposed method suggests wider implications for the machine learning research community. All code and data are available at https://github.com/michaelchen-lab/caft-llm.
2025-06-09T14:55:00Z
null
null
null
Improving Large Language Models with Concept-Aware Fine-Tuning
['Michael Chen', 'Xikun Zhang', 'Jiaxing Huang', 'Dacheng Tao']
2025
arXiv.org
0
63
['Computer Science']
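A toy version of a multi-token objective in the spirit of CAFT: alongside the usual next-token loss, auxiliary heads predict tokens several positions ahead so multi-token spans are learned jointly. The head count and loss weighting here are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def multi_token_loss(hidden, heads, labels, horizon=3, aux_weight=0.5):
    """hidden: (B, T, D); heads[k] predicts the token at offset k+1; labels: (B, T)."""
    total = 0.0
    for k, head in enumerate(heads[:horizon]):
        offset = k + 1
        logits = head(hidden[:, :-offset])           # (B, T-offset, V)
        target = labels[:, offset:]                  # labels shifted by k+1
        w = 1.0 if k == 0 else aux_weight            # full weight on the next-token head
        total = total + w * F.cross_entropy(
            logits.reshape(-1, logits.size(-1)), target.reshape(-1))
    return total

B, T, D, V = 2, 16, 64, 100
heads = nn.ModuleList([nn.Linear(D, V) for _ in range(3)])
loss = multi_token_loss(torch.randn(B, T, D), heads, torch.randint(0, V, (B, T)))
```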
2506.07837
HAIBU-ReMUD: Reasoning Multimodal Ultrasound Dataset and Model Bridging to General Specific Domains
['Shijie Wang', 'Yilun Zhang', 'Zeyu Lai', 'Dexing Kong']
['cs.AI']
Multimodal large language models (MLLMs) have shown great potential in general domains but perform poorly in some specific domains due to a lack of domain-specific data, such as image-text or video-text data. In some specific domains, abundant image and text data are scattered around but lack standardized organization. In the field of medical ultrasound, there are ultrasonic diagnostic books, ultrasonic clinical guidelines, ultrasonic diagnostic reports, and so on. However, these ultrasonic materials are often saved as PDFs, images, etc., and cannot be directly used for the training of MLLMs. This paper proposes a novel image-text reasoning supervised fine-tuning data generation pipeline to create specific-domain quadruplets (image, question, thinking trace, and answer) from domain-specific materials. A medical ultrasound domain dataset, ReMUD, is established, containing over 45,000 reasoning and non-reasoning supervised fine-tuning Question Answering (QA) and Visual Question Answering (VQA) samples. The ReMUD-7B model, fine-tuned on Qwen2.5-VL-7B-Instruct, outperforms general-domain MLLMs in the medical ultrasound field. To facilitate research, the ReMUD dataset, data generation codebase, and ReMUD-7B parameters will be released at https://github.com/ShiDaizi/ReMUD, addressing the data shortage issue in specific-domain MLLMs.
2025-06-09T15:01:38Z
null
null
null
null
null
null
null
null
null
null
2506.07900
MiniCPM4: Ultra-Efficient LLMs on End Devices
['MiniCPM Team', 'Chaojun Xiao', 'Yuxuan Li', 'Xu Han', 'Yuzhuo Bai', 'Jie Cai', 'Haotian Chen', 'Wentong Chen', 'Xin Cong', 'Ganqu Cui', 'Ning Ding', 'Shengdan Fan', 'Yewei Fang', 'Zixuan Fu', 'Wenyu Guan', 'Yitong Guan', 'Junshao Guo', 'Yufeng Han', 'Bingxiang He', 'Yuxiang Huang', 'Cunliang Kong', 'Qiuzuo Li', 'Siyuan Li', 'Wenhao Li', 'Yanghao Li', 'Yishan Li', 'Zhen Li', 'Dan Liu', 'Biyuan Lin', 'Yankai Lin', 'Xiang Long', 'Quanyu Lu', 'Yaxi Lu', 'Peiyan Luo', 'Hongya Lyu', 'Litu Ou', 'Yinxu Pan', 'Zekai Qu', 'Qundong Shi', 'Zijun Song', 'Jiayuan Su', 'Zhou Su', 'Ao Sun', 'Xianghui Sun', 'Peijun Tang', 'Fangzheng Wang', 'Feng Wang', 'Shuo Wang', 'Yudong Wang', 'Yesai Wu', 'Zhenyu Xiao', 'Jie Xie', 'Zihao Xie', 'Yukun Yan', 'Jiarui Yuan', 'Kaihuo Zhang', 'Lei Zhang', 'Linyue Zhang', 'Xueren Zhang', 'Yudi Zhang', 'Hengyu Zhao', 'Weilin Zhao', 'Weilun Zhao', 'Yuanqian Zhao', 'Zhi Zheng', 'Ge Zhou', 'Jie Zhou', 'Wei Zhou', 'Zihan Zhou', 'Zixuan Zhou', 'Zhiyuan Liu', 'Guoyang Zeng', 'Chao Jia', 'Dahai Li', 'Maosong Sun']
['cs.CL', 'cs.AI']
This paper introduces MiniCPM4, a highly efficient large language model (LLM) designed explicitly for end-side devices. We achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems. Specifically, in terms of model architecture, we propose InfLLM v2, a trainable sparse attention mechanism that accelerates both prefilling and decoding phases for long-context processing. Regarding training data, we propose UltraClean, an efficient and accurate pre-training data filtering and generation strategy, and UltraChat v2, a comprehensive supervised fine-tuning dataset. These datasets enable satisfactory model performance to be achieved using just 8 trillion training tokens. Regarding training algorithms, we propose ModelTunnel v2 for efficient pre-training strategy search, and improve existing post-training methods by introducing chunk-wise rollout for load-balanced reinforcement learning and a data-efficient ternary LLM, BitCPM. Regarding inference systems, we propose CPM.cu, which integrates sparse attention, model quantization, and speculative sampling to achieve efficient prefilling and decoding. To meet diverse on-device requirements, MiniCPM4 is available in two versions, with 0.5B and 8B parameters, respectively. Extensive evaluation results show that MiniCPM4 outperforms open-source models of similar size across multiple benchmarks, highlighting both its efficiency and effectiveness. Notably, MiniCPM4-8B demonstrates significant speed improvements over Qwen3-8B when processing long sequences. Through further adaptation, MiniCPM4 successfully powers diverse applications, including trustworthy survey generation and tool use with model context protocol, clearly showcasing its broad usability.
2025-06-09T16:16:50Z
MiniCPM4 Technical Report
null
null
null
null
null
null
null
null
null
2506.07905
WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning
['Jie Yang', 'Feipeng Ma', 'Zitian Wang', 'Dacheng Yin', 'Kang Rong', 'Fengyun Rao', 'Ruimao Zhang']
['cs.CV']
Building on the success of text-based reasoning models like DeepSeek-R1, extending these capabilities to multimodal reasoning holds great promise. While recent works have attempted to adapt DeepSeek-R1-style reinforcement learning (RL) training paradigms to multimodal large language models (MLLMs), focusing on domain-specific tasks like math and visual perception, a critical question remains: How can we achieve general-purpose vision-language reasoning through RL? To address this challenge, we make three key efforts: (1) A novel Scalable Multimodal QA Synthesis pipeline that autonomously generates context-aware, reasoning-centric question-answer (QA) pairs directly from the given images. (2) The open-source WeThink dataset containing over 120K multimodal QA pairs with annotated reasoning paths, curated from 18 diverse dataset sources and covering various question domains. (3) A comprehensive exploration of RL on our dataset, incorporating a hybrid reward mechanism that combines rule-based verification with model-based assessment to optimize RL training efficiency across various task domains. Across 14 diverse MLLM benchmarks, we demonstrate that our WeThink dataset significantly enhances performance, from mathematical reasoning to diverse general multimodal tasks. Moreover, we show that our automated data pipeline can continuously increase data diversity to further improve model performance.
2025-06-09T16:20:54Z
null
null
null
WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning
['Jie Yang', 'Feipeng Ma', 'Zitian Wang', 'Dacheng Yin', 'Kang Rong', 'Fengyun Rao', 'Ruimao Zhang']
2025
arXiv.org
0
96
['Computer Science']
2506.07918
CausalPFN: Amortized Causal Effect Estimation via In-Context Learning
['Vahid Balazadeh', 'Hamidreza Kamkari', 'Valentin Thomas', 'Benson Li', 'Junwei Ma', 'Jesse C. Cresswell', 'Rahul G. Krishnan']
['cs.LG', 'stat.ML']
Causal effect estimation from observational data is fundamental across various applications. However, selecting an appropriate estimator from dozens of specialized methods demands substantial manual effort and domain expertise. We present CausalPFN, a single transformer that amortizes this workflow: trained once on a large library of simulated data-generating processes that satisfy ignorability, it infers causal effects for new observational datasets out-of-the-box. CausalPFN combines ideas from Bayesian causal inference with the large-scale training protocol of prior-fitted networks (PFNs), learning to map raw observations directly to causal effects without any task-specific adjustment. Our approach achieves superior average performance on heterogeneous and average treatment effect estimation benchmarks (IHDP, Lalonde, ACIC). Moreover, it shows competitive performance for real-world policy making on uplift modeling tasks. CausalPFN provides calibrated uncertainty estimates to support reliable decision-making based on Bayesian principles. This ready-to-use model does not require any further training or tuning and takes a step toward automated causal inference (https://github.com/vdblm/CausalPFN).
2025-06-09T16:31:06Z
null
null
null
CausalPFN: Amortized Causal Effect Estimation via In-Context Learning
['Vahid Balazadeh', 'Hamidreza Kamkari', 'Valentin Thomas', 'Benson Li', 'Junwei Ma', 'Jesse C. Cresswell', 'Rahul G. Krishnan']
2025
arXiv.org
0
107
['Computer Science', 'Mathematics']
2506.07932
Squeeze3D: Your 3D Generation Model is Secretly an Extreme Neural Compressor
['Rishit Dagli', 'Yushi Guan', 'Sankeerth Durvasula', 'Mohammadreza Mofayezi', 'Nandita Vijaykumar']
['cs.GR', 'cs.CV', 'cs.LG']
We propose Squeeze3D, a novel framework that leverages implicit prior knowledge learnt by existing pre-trained 3D generative models to compress 3D data at extremely high compression ratios. Our approach bridges the latent spaces between a pre-trained encoder and a pre-trained generation model through trainable mapping networks. Any 3D model represented as a mesh, point cloud, or a radiance field is first encoded by the pre-trained encoder and then transformed (i.e. compressed) into a highly compact latent code. This latent code can effectively be used as an extremely compressed representation of the mesh or point cloud. A mapping network transforms the compressed latent code into the latent space of a powerful generative model, which is then conditioned to recreate the original 3D model (i.e. decompression). Squeeze3D is trained entirely on generated synthetic data and does not require any 3D datasets. The Squeeze3D architecture can be flexibly used with existing pre-trained 3D encoders and existing generative models. It can flexibly support different formats, including meshes, point clouds, and radiance fields. Our experiments demonstrate that Squeeze3D achieves compression ratios of up to 2187x for textured meshes, 55x for point clouds, and 619x for radiance fields while maintaining visual quality comparable to many existing methods. Squeeze3D only incurs a small compression and decompression latency since it does not involve training object-specific networks to compress an object.
2025-06-09T16:52:10Z
null
null
null
null
null
null
null
null
null
null
2506.07966
SpaCE-10: A Comprehensive Benchmark for Multimodal Large Language Models in Compositional Spatial Intelligence
['Ziyang Gong', 'Wenhao Li', 'Oliver Ma', 'Songyuan Li', 'Jiayi Ji', 'Xue Yang', 'Gen Luo', 'Junchi Yan', 'Rongrong Ji']
['cs.CV']
Multimodal Large Language Models (MLLMs) have achieved remarkable progress in various multimodal tasks. To pursue higher spatial intelligence, MLLMs must integrate multiple atomic spatial capabilities to handle complex and dynamic tasks. However, existing benchmarks struggle to comprehensively evaluate the spatial intelligence of common MLLMs from the atomic level to the compositional level. To fill this gap, we present SpaCE-10, a comprehensive benchmark for compositional spatial evaluation. In SpaCE-10, we define 10 atomic spatial capabilities, which are combined to form 8 compositional capabilities. Based on these definitions, we propose a novel hierarchical annotation pipeline to generate high-quality and diverse question-answer (QA) pairs. With over 150 hours of human expert effort, we obtain over 5k QA pairs for 811 real indoor scenes in SpaCE-10, covering various evaluation settings such as point cloud input and multi-choice QA. We conduct an extensive evaluation of common MLLMs on SpaCE-10 and find that even the most advanced MLLM still lags behind humans by large margins. Through our careful study, we also draw several significant findings that benefit the MLLM community. For example, we reveal that a shortcoming in counting capability greatly limits the compositional spatial capabilities of existing MLLMs. The evaluation code and benchmark datasets are available at https://github.com/Cuzyoung/SpaCE-10.
2025-06-09T17:41:36Z
null
null
null
null
null
null
null
null
null
null
2506.07986
Rethinking Cross-Modal Interaction in Multimodal Diffusion Transformers
['Zhengyao Lv', 'Tianlin Pan', 'Chenyang Si', 'Zhaoxi Chen', 'Wangmeng Zuo', 'Ziwei Liu', 'Kwan-Yee K. Wong']
['cs.CV']
Multimodal Diffusion Transformers (MM-DiTs) have achieved remarkable progress in text-driven visual generation. However, even state-of-the-art MM-DiT models like FLUX struggle to achieve precise alignment between text prompts and generated content. We identify two key issues in the attention mechanism of MM-DiT, namely 1) the suppression of cross-modal attention due to token imbalance between visual and textual modalities and 2) the lack of timestep-aware attention weighting, both of which hinder alignment. To address these issues, we propose Temperature-Adjusted Cross-modal Attention (TACA), a parameter-efficient method that dynamically rebalances multimodal interactions through temperature scaling and timestep-dependent adjustment. When combined with LoRA fine-tuning, TACA significantly enhances text-image alignment on the T2I-CompBench benchmark with minimal computational overhead. We tested TACA on state-of-the-art models like FLUX and SD3.5, demonstrating its ability to improve image-text alignment in terms of object appearance, attribute binding, and spatial relationships. Our findings highlight the importance of balancing cross-modal attention for improving semantic fidelity in text-to-image diffusion models. Our code is publicly available at https://github.com/Vchitect/TACA.
2025-06-09T17:54:04Z
Project Page: https://vchitect.github.io/TACA/
null
null
null
null
null
null
null
null
null
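TACA's core move, per the abstract above, is rescaling attention logits toward text keys with a timestep-dependent temperature. The sketch below assumes single-head scaled-dot-product attention and a simple linear temperature schedule; both are stand-ins for the paper's actual design:

```python
import torch
import torch.nn.functional as F

def taca_attention(q, k, v, is_text_key, t, T, alpha=0.5):
    """q: (B, Lq, D); k, v: (B, Lk, D); is_text_key: (Lk,) bool mask of text tokens."""
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d**0.5        # (B, Lq, Lk) attention logits
    temp = 1.0 + alpha * (t / T)                     # assumed timestep-dependent schedule
    # boost logits toward text keys to counteract visual-token dominance
    logits[..., is_text_key] = logits[..., is_text_key] * temp
    return F.softmax(logits, dim=-1) @ v

B, Lq, Lk, D = 1, 64, 80, 32
is_text = torch.zeros(Lk, dtype=torch.bool)
is_text[-16:] = True                                 # last 16 keys are text tokens
out = taca_attention(torch.randn(B, Lq, D), torch.randn(B, Lk, D),
                     torch.randn(B, Lk, D), is_text, t=800, T=1000)
```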
2506.07999
MADFormer: Mixed Autoregressive and Diffusion Transformers for Continuous Image Generation
['Junhao Chen', 'Yulia Tsvetkov', 'Xiaochuang Han']
['cs.CV', 'cs.LG']
Recent progress in multimodal generation has increasingly combined autoregressive (AR) and diffusion-based approaches, leveraging their complementary strengths: AR models capture long-range dependencies and produce fluent, context-aware outputs, while diffusion models operate in continuous latent spaces to refine high-fidelity visual details. However, existing hybrids often lack systematic guidance on how and why to allocate model capacity between these paradigms. In this work, we introduce MADFormer, a Mixed Autoregressive and Diffusion Transformer that serves as a testbed for analyzing AR-diffusion trade-offs. MADFormer partitions image generation into spatial blocks, using AR layers for one-pass global conditioning across blocks and diffusion layers for iterative local refinement within each block. Through controlled experiments on FFHQ-1024 and ImageNet, we identify two key insights: (1) block-wise partitioning significantly improves performance on high-resolution images, and (2) vertically mixing AR and diffusion layers yields better quality-efficiency balances, improving FID by up to 75% under constrained inference compute. Our findings offer practical design principles for future hybrid generative models.
2025-06-09T17:59:01Z
null
null
null
null
null
null
null
null
null
null
2506.08003
Audio-Sync Video Generation with Multi-Stream Temporal Control
['Shuchen Weng', 'Haojie Zheng', 'Zheng Chang', 'Si Li', 'Boxin Shi', 'Xinlong Wang']
['cs.CV', 'cs.AI']
Audio is inherently temporal and closely synchronized with the visual world, making it a naturally aligned and expressive control signal for controllable video generation (e.g., movies). Beyond control, directly translating audio into video is essential for understanding and visualizing rich audio narratives (e.g., podcasts or historical recordings). However, existing approaches fall short in generating high-quality videos with precise audio-visual synchronization, especially across diverse and complex audio types. In this work, we introduce MTV, a versatile framework for audio-sync video generation. MTV explicitly separates audio into speech, effects, and music tracks, enabling disentangled control over lip motion, event timing, and visual mood, respectively, resulting in fine-grained and semantically aligned video generation. To support the framework, we additionally present DEMIX, a dataset comprising high-quality cinematic videos and demixed audio tracks. DEMIX is structured into five overlapping subsets, enabling scalable multi-stage training for diverse generation scenarios. Extensive experiments demonstrate that MTV achieves state-of-the-art performance across six standard metrics spanning video quality, text-video consistency, and audio-video alignment. Project page: https://hjzheng.net/projects/MTV/.
2025-06-09T17:59:42Z
null
null
null
null
null
null
null
null
null
null
2506.08007
Reinforcement Pre-Training
['Qingxiu Dong', 'Li Dong', 'Yao Tang', 'Tianzhu Ye', 'Yutao Sun', 'Zhifang Sui', 'Furu Wei']
['cs.CL']
In this work, we introduce Reinforcement Pre-Training (RPT) as a new scaling paradigm for large language models and reinforcement learning (RL). Specifically, we reframe next-token prediction as a reasoning task trained using RL, where the model receives verifiable rewards for correctly predicting the next token for a given context. RPT offers a scalable method to leverage vast amounts of text data for general-purpose RL, rather than relying on domain-specific annotated answers. By incentivizing the capability of next-token reasoning, RPT significantly improves the language modeling accuracy of predicting the next tokens. Moreover, RPT provides a strong pre-trained foundation for further reinforcement fine-tuning. The scaling curves show that increased training compute consistently improves next-token prediction accuracy. The results position RPT as an effective and promising scaling paradigm for advancing language model pre-training.
2025-06-09T17:59:53Z
null
null
null
null
null
null
null
null
null
null
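RPT's verifiable reward, as described above, reduces to an exact-match check between the token the model commits to after reasoning and the ground-truth corpus token. A minimal illustration, with an assumed "Answer:" parsing convention standing in for whatever output format the paper actually uses:

```python
def rpt_reward(model_output: str, ground_truth_token: str) -> float:
    """Return 1.0 iff the token the model commits to matches the corpus token."""
    # assume the reasoning rollout ends with e.g. "... Answer: <token>"
    prediction = model_output.rsplit("Answer:", 1)[-1].strip()
    return 1.0 if prediction == ground_truth_token else 0.0

assert rpt_reward("The prefix implies a verb. Answer: runs", "runs") == 1.0
assert rpt_reward("Maybe a noun. Answer: dog", "runs") == 0.0
```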
2506.08009
Self Forcing: Bridging the Train-Test Gap in Autoregressive Video Diffusion
['Xun Huang', 'Zhengqi Li', 'Guande He', 'Mingyuan Zhou', 'Eli Shechtman']
['cs.CV', 'cs.AI', 'cs.LG']
We introduce Self Forcing, a novel training paradigm for autoregressive video diffusion models. It addresses the longstanding issue of exposure bias, where models trained on ground-truth context must generate sequences conditioned on their own imperfect outputs during inference. Unlike prior methods that denoise future frames based on ground-truth context frames, Self Forcing conditions each frame's generation on previously self-generated outputs by performing autoregressive rollout with key-value (KV) caching during training. This strategy enables supervision through a holistic loss at the video level that directly evaluates the quality of the entire generated sequence, rather than relying solely on traditional frame-wise objectives. To ensure training efficiency, we employ a few-step diffusion model along with a stochastic gradient truncation strategy, effectively balancing computational cost and performance. We further introduce a rolling KV cache mechanism that enables efficient autoregressive video extrapolation. Extensive experiments demonstrate that our approach achieves real-time streaming video generation with sub-second latency on a single GPU, while matching or even surpassing the generation quality of significantly slower and non-causal diffusion models. Project website: http://self-forcing.github.io/
2025-06-09T17:59:55Z
Project website: http://self-forcing.github.io/
null
null
null
null
null
null
null
null
null
2506.08010
Vision Transformers Don't Need Trained Registers
['Nick Jiang', 'Amil Dravid', 'Alexei Efros', 'Yossi Gandelsman']
['cs.CV', 'cs.AI']
We investigate the mechanism underlying a previously identified phenomenon in Vision Transformers -- the emergence of high-norm tokens that lead to noisy attention maps. We observe that in multiple models (e.g., CLIP, DINOv2), a sparse set of neurons is responsible for concentrating high-norm activations on outlier tokens, leading to irregular attention patterns and degrading downstream visual processing. While the existing solution for removing these outliers involves retraining models from scratch with additional learned register tokens, we use our findings to create a training-free approach to mitigate these artifacts. By shifting the high-norm activations from our discovered register neurons into an additional untrained token, we can mimic the effect of register tokens on a model already trained without registers. We demonstrate that our method produces cleaner attention and feature maps, enhances performance over base models across multiple downstream visual tasks, and achieves results comparable to models explicitly trained with register tokens. We then extend test-time registers to off-the-shelf vision-language models to improve their interpretability. Our results suggest that test-time registers effectively take on the role of register tokens at test-time, offering a training-free solution for any pre-trained model released without them.
2025-06-09T17:59:57Z
Project page and code: https://avdravid.github.io/test-time-registers
null
null
null
null
null
null
null
null
null
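The test-time register idea above can be schematized as appending one untrained token and shifting the identified "register neurons'" activations into it, so patch tokens keep clean attention. The neuron indices below are placeholders, and the shift rule is a guess at the mechanism; identifying the actual neurons is the paper's analysis step:

```python
import torch

def apply_test_time_register(acts: torch.Tensor, register_neurons: list) -> torch.Tensor:
    """acts: (num_tokens, d). Returns (num_tokens + 1, d) with a synthetic register token."""
    reg = torch.zeros(1, acts.size(1))
    # move the high-norm register-neuron activations into the new token
    reg[0, register_neurons] = acts[:, register_neurons].max(dim=0).values
    out = acts.clone()
    out[:, register_neurons] = 0.0          # zero them out in the patch tokens
    return torch.cat([out, reg], dim=0)     # append the untrained register token

acts = torch.randn(197, 768)                # e.g., ViT CLS+patch activations
patched = apply_test_time_register(acts, register_neurons=[12, 511])
```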
2506.08011
Play to Generalize: Learning to Reason Through Game Play
['Yunfei Xie', 'Yinsong Ma', 'Shiyi Lan', 'Alan Yuille', 'Junfei Xiao', 'Chen Wei']
['cs.CV', 'cs.CL']
Developing generalizable reasoning capabilities in multimodal large language models (MLLMs) remains challenging. Motivated by cognitive science literature suggesting that gameplay promotes transferable cognitive skills, we propose a novel post-training paradigm, Visual Game Learning, or ViGaL, where MLLMs develop out-of-domain generalization of multimodal reasoning through playing arcade-like games. Specifically, we show that post-training a 7B-parameter MLLM via reinforcement learning (RL) on simple arcade-like games, e.g. Snake, significantly enhances its downstream performance on multimodal math benchmarks like MathVista, and on multi-discipline questions like MMMU, without seeing any worked solutions, equations, or diagrams during RL, suggesting the capture of transferable reasoning skills. Remarkably, our model outperforms specialist models tuned on multimodal reasoning data on multimodal reasoning benchmarks, while preserving the base model's performance on general visual benchmarks, a challenge where specialist models often fall short. Our findings suggest a new post-training paradigm: synthetic, rule-based games can serve as controllable and scalable pretext tasks that unlock generalizable multimodal reasoning abilities in MLLMs.
2025-06-09T17:59:57Z
Project Page: https://yunfeixie233.github.io/ViGaL/
null
null
Play to Generalize: Learning to Reason Through Game Play
['Yunfei Xie', 'Yinsong Ma', 'Shiyi Lan', 'Alan L. Yuille', 'Junfei Xiao', 'Chen Wei']
2025
arXiv.org
0
74
['Computer Science']
2506.08293
Diffusion Sequence Models for Enhanced Protein Representation and Generation
['Logan Hallee', 'Nikolaos Rafailidis', 'David B. Bichara', 'Jason P. Gleghorn']
['q-bio.BM']
Proteins are fundamental to biology, executing diverse functions through complex physicochemical interactions, and they hold transformative potential across medicine, materials science, and environmental applications. Protein Language Models (pLMs) aim to unlock insights from the vast space of unlabeled protein sequences by learning rich, semantic representations from primary sequences via masked language modeling. However, these models typically exhibit limited generative capacity. In this work, we introduce the Diffusion Sequence Model (DSM), a novel pLM trained with masked diffusion to enable both high-quality representation learning and generative protein design. DSM builds upon the ESM2 architecture by incorporating a masked forward diffusion process inspired by the LLaDA framework. After training, DSM is capable of generating diverse, biomimetic sequences that align with expected amino acid compositions, secondary structures, and predicted functions, even with 90% token corruption. Furthermore, DSM's learned representations match or exceed those of similarly sized pLMs on downstream tasks. We also introduce DSM(ppi), a variant fine-tuned to generate protein binders by attending to target sequences. We demonstrate DSM(ppi)'s effectiveness on the challenging Bench-tested Binder Benchmark (BenchBB), where both DSM and DSM(ppi) produce candidates with superior predicted binding affinity compared to known binders. Our results establish masked diffusion as a powerful paradigm for unifying protein representation and generation in a single framework.
2025-06-09T23:50:11Z
20 pages, 15 figures
null
null
null
null
null
null
null
null
null
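For the DSM record above: the LLaDA-style masked forward diffusion it trains with can be summarized in a few lines. A minimal sketch, assuming a dedicated mask-token id and a uniform corruption-level schedule (both assumptions for illustration):

```python
import torch

MASK_ID = 32  # hypothetical mask-token id for illustration

def mask_forward_diffusion(tokens: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Corrupt a (batch, length) tensor of token ids: each token is masked
    independently with probability t, where t ~ U(0, 1) per sequence."""
    b, l = tokens.shape
    t = torch.rand(b, 1)                         # corruption level per sequence
    is_masked = torch.rand(b, l) < t             # independent Bernoulli masking
    corrupted = torch.where(is_masked, torch.full_like(tokens, MASK_ID), tokens)
    return corrupted, is_masked                  # train the model to recover masked positions
```

Sampling t near 1 yields the heavily corrupted regime (e.g., the 90% token corruption the abstract mentions DSM can still generate from).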
2506.08300
Institutional Books 1.0: A 242B token dataset from Harvard Library's collections, refined for accuracy and usability
['Matteo Cargnelutti', 'Catherine Brobston', 'John Hess', 'Jack Cushman', 'Kristi Mukk', 'Aristana Scourtas', 'Kyle Courtney', 'Greg Leppert', 'Amanda Watson', 'Martha Whitehead', 'Jonathan Zittrain']
['cs.CL', 'cs.DL']
Large language models (LLMs) use data to learn about the world in order to produce meaningful correlations and predictions. As such, the nature, scale, quality, and diversity of the datasets used to train these models, or to support their work at inference time, have a direct impact on their quality. The rapid development and adoption of LLMs of varying quality has brought into focus the scarcity of publicly available, high-quality training data and revealed an urgent need to ground the stewardship of these datasets in sustainable practices with clear provenance chains. To that end, this technical report introduces Institutional Books 1.0, a large collection of public domain books originally digitized through Harvard Library's participation in the Google Books project, beginning in 2006. Working with Harvard Library, we extracted, analyzed, and processed these volumes into an extensively-documented dataset of historic texts. This analysis covers the entirety of Harvard Library's collection scanned as part of that project, originally spanning 1,075,899 volumes written in over 250 different languages for a total of approximately 250 billion tokens. As part of this initial release, the OCR-extracted text (original and post-processed) as well as the metadata (bibliographic, source, and generated) of the 983,004 volumes, or 242B tokens, identified as being in the public domain have been made available. This report describes this project's goals and methods as well as the results of the analyses we performed, all in service of making this historical collection more accessible and easier for humans and machines alike to filter, read and use.
2025-06-10T00:11:30Z
null
null
null
null
null
null
null
null
null
null
2506.08388
Reinforcement Learning Teachers of Test Time Scaling
['Edoardo Cetin', 'Tianyu Zhao', 'Yujin Tang']
['cs.LG', 'cs.AI', 'cs.CL']
Training reasoning language models (LMs) with reinforcement learning (RL) for one-hot correctness inherently relies on the LM being able to explore and solve its task with some chance at initialization. Furthermore, a key use case of reasoning LMs is to act as teachers for distilling new students and cold-starting future RL iterations rather than being deployed themselves. From these considerations, we introduce a new framework that avoids RL's exploration challenge by training a new class of Reinforcement-Learned Teachers (RLTs) focused on yielding the most effective downstream distillation. RLTs are prompted with both the question and solution to each problem, and tasked to simply "connect-the-dots" with detailed explanations tailored for their students. We train RLTs with dense rewards obtained by feeding each explanation to the student and testing its understanding of the problem's solution. In practice, the raw outputs of a 7B RLT provide higher final performance on competition and graduate-level tasks than existing distillation and cold-starting pipelines that collect and postprocess the reasoning traces of orders of magnitude larger LMs. Furthermore, RLTs maintain their effectiveness when training larger students and when applied zero-shot to out-of-distribution tasks, unlocking new levels of efficiency and re-usability for the RL reasoning framework.
2025-06-10T02:53:24Z
Code available at: https://github.com/SakanaAI/RLT
null
null
Reinforcement Learning Teachers of Test Time Scaling
['Edoardo Cetin', 'Tianyu Zhao', 'Yujin Tang']
2025
arXiv.org
0
45
['Computer Science']
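For the RLT record above: the dense reward is obtained by feeding the teacher's explanation to the student and testing the student's understanding of the solution. A minimal sketch of one such reward, scoring the student's per-token log-likelihood of the reference solution; the prompt template and model name are assumptions, and the paper's actual reward may differ:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")            # hypothetical student
student = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B")

@torch.no_grad()
def explanation_reward(question: str, explanation: str, solution: str) -> float:
    context = f"Question: {question}\nExplanation: {explanation}\nSolution: "
    ctx_ids = tok(context, return_tensors="pt").input_ids
    sol_ids = tok(solution, return_tensors="pt", add_special_tokens=False).input_ids
    ids = torch.cat([ctx_ids, sol_ids], dim=1)
    logits = student(ids).logits
    # logits at position i predict token i+1; slice the predictors of solution tokens
    sol_logits = logits[0, ctx_ids.size(1) - 1 : ids.size(1) - 1]
    logp = torch.log_softmax(sol_logits, dim=-1)
    token_logp = logp.gather(1, sol_ids[0].unsqueeze(1)).squeeze(1)
    return token_logp.mean().item()  # dense reward: higher = explanation helped more
```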
2506.08640
Orientation Matters: Making 3D Generative Models Orientation-Aligned
['Yichong Lu', 'Yuzhuo Tian', 'Zijin Jiang', 'Yikun Zhao', 'Yuanbo Yang', 'Hao Ouyang', 'Haoji Hu', 'Huimin Yu', 'Yujun Shen', 'Yiyi Liao']
['cs.CV']
Humans intuitively perceive object shape and orientation from a single image, guided by strong priors about canonical poses. However, existing 3D generative models often produce misaligned results due to inconsistent training data, limiting their usability in downstream tasks. To address this gap, we introduce the task of orientation-aligned 3D object generation: producing 3D objects from single images with consistent orientations across categories. To facilitate this, we construct Objaverse-OA, a dataset of 14,832 orientation-aligned 3D models spanning 1,008 categories. Leveraging Objaverse-OA, we fine-tune two representative 3D generative models based on multi-view diffusion and 3D variational autoencoder frameworks to produce aligned objects that generalize well to unseen objects across various categories. Experimental results demonstrate the superiority of our method over post-hoc alignment approaches. Furthermore, we showcase downstream applications enabled by our aligned object generation, including zero-shot object orientation estimation via analysis-by-synthesis and efficient arrow-based object rotation manipulation.
2025-06-10T09:54:37Z
Project Page: https://xdimlab.github.io/Orientation_Matters
null
null
null
null
null
null
null
null
null
2506.08672
RuleReasoner: Reinforced Rule-based Reasoning via Domain-aware Dynamic Sampling
['Yang Liu', 'Jiaqi Li', 'Zilong Zheng']
['cs.CL', 'cs.AI', 'cs.LG']
Rule-based reasoning has been acknowledged as one of the fundamental problems in reasoning, while deviations in rule formats, types, and complexity in real-world applications pose severe challenges. Recent studies have shown that large reasoning models (LRMs) have remarkable reasoning capabilities, and their performance is substantially enhanced by reinforcement learning (RL). However, it remains an open question whether small reasoning models (SRMs) can learn rule-based reasoning effectively with robust generalization across diverse tasks and domains. To address this, we introduce Reinforced Rule-based Reasoning, a.k.a. RuleReasoner, a simple yet effective method to conduct rule-based reasoning via a wide collection of curated tasks and a novel domain-aware dynamic sampling approach. Specifically, RuleReasoner resamples each training batch by updating the sampling weights of different domains based on historical rewards. This facilitates domain augmentation and flexible online learning schedules for RL, obviating the need for pre-hoc human-engineered mix-training recipes used in existing methods. Empirical evaluations on in-distribution (ID) and out-of-distribution (OOD) benchmarks reveal that RuleReasoner outperforms frontier LRMs by a significant margin ($\Delta$4.1% average points on eight ID tasks and $\Delta$10.4% average points on three OOD tasks over OpenAI-o1). Notably, our approach also exhibits higher computational efficiency compared to prior dynamic sampling methods for RL.
2025-06-10T10:31:21Z
22 pages, 10 figures, 8 tables
null
null
null
null
null
null
null
null
null
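For the RuleReasoner record above, a minimal sketch of domain-aware dynamic sampling: keep a running reward history per domain and upweight domains where the policy still struggles. The inverse-reward weighting and temperature are assumptions; the abstract states only that sampling weights are updated from historical rewards:

```python
import numpy as np

def domain_sampling_weights(history: dict[str, list[float]],
                            temperature: float = 1.0) -> dict[str, float]:
    """Map per-domain reward histories to batch-sampling probabilities."""
    domains = list(history)
    mean_reward = np.array([np.mean(history[d]) if history[d] else 0.0 for d in domains])
    difficulty = 1.0 - mean_reward            # low historical reward -> sample more
    w = np.exp(difficulty / temperature)      # softmax over difficulty
    return dict(zip(domains, w / w.sum()))

# Usage: draw the next batch's task domains from these probabilities, then
# append fresh rollout rewards to `history` and recompute.
print(domain_sampling_weights({"logic": [0.9, 0.8], "tabular": [0.3, 0.2]}))
```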
2506.08897
PlantDeBERTa: An Open Source Language Model for Plant Science
['Hiba Khey', 'Amine Lakhder', 'Salma Rouichi', 'Imane El Ghabi', 'Kamal Hejjaoui', 'Younes En-nahli', 'Fahd Kalloubi', 'Moez Amri']
['cs.CL', 'cs.AI']
The rapid advancement of transformer-based language models has catalyzed breakthroughs in biomedical and clinical natural language processing; however, plant science remains markedly underserved by such domain-adapted tools. In this work, we present PlantDeBERTa, a high-performance, open-source language model specifically tailored for extracting structured knowledge from plant stress-response literature. Built upon the DeBERTa architecture, known for its disentangled attention and robust contextual encoding, PlantDeBERTa is fine-tuned on a meticulously curated corpus of expert-annotated abstracts, with a primary focus on lentil (Lens culinaris) responses to diverse abiotic and biotic stressors. Our methodology combines transformer-based modeling with rule-enhanced linguistic post-processing and ontology-grounded entity normalization, enabling PlantDeBERTa to capture biologically meaningful relationships with precision and semantic fidelity. The underlying corpus is annotated using a hierarchical schema aligned with the Crop Ontology, encompassing molecular, physiological, biochemical, and agronomic dimensions of plant adaptation. PlantDeBERTa exhibits strong generalization capabilities across entity types and demonstrates the feasibility of robust domain adaptation in low-resource scientific fields. By providing a scalable and reproducible framework for high-resolution entity recognition, PlantDeBERTa bridges a critical gap in agricultural NLP and paves the way for intelligent, data-driven systems in plant genomics, phenomics, and agronomic knowledge discovery. Our model is publicly released to promote transparency and accelerate cross-disciplinary innovation in computational plant science.
2025-06-10T15:24:03Z
null
null
null
null
null
null
null
null
null
null
2506.08900
MIRAGE: Multimodal foundation model and benchmark for comprehensive retinal OCT image analysis
['José Morano', 'Botond Fazekas', 'Emese Sükei', 'Ronald Fecso', 'Taha Emre', 'Markus Gumpinger', 'Georg Faustmann', 'Marzieh Oghbaie', 'Ursula Schmidt-Erfurth', 'Hrvoje Bogunović']
['cs.CV']
Artificial intelligence (AI) has become a fundamental tool for assisting clinicians in analyzing ophthalmic images, such as optical coherence tomography (OCT). However, developing AI models often requires extensive annotation, and existing models tend to underperform on independent, unseen data. Foundation models (FMs), large AI models trained on vast unlabeled datasets, have shown promise in overcoming these challenges. Nonetheless, available FMs for ophthalmology lack extensive validation, especially for segmentation tasks, and focus on a single imaging modality. In this context, we propose MIRAGE, a novel multimodal FM for the analysis of OCT and scanning laser ophthalmoscopy (SLO) images. Additionally, we propose a new evaluation benchmark with OCT/SLO classification and segmentation tasks. The comparison with general and specialized FMs and segmentation methods shows the superiority of MIRAGE in both types of tasks, highlighting its suitability as a basis for the development of robust AI systems for retinal OCT image analysis. Both MIRAGE and the evaluation benchmark are publicly available: https://github.com/j-morano/MIRAGE.
2025-06-10T15:25:55Z
null
null
null
MIRAGE: Multimodal foundation model and benchmark for comprehensive retinal OCT image analysis
['José Morano', 'Botond Fazekas', 'Emese Sükei', 'Ronald Fecso', 'T. Emre', 'Markus Gumpinger', 'Georg Faustmann', 'Marzieh Oghbaie', 'U. Schmidt-Erfurth', 'Hrvoje Bogunović']
2025
arXiv.org
0
0
['Computer Science']
2506.08967
Step-Audio-AQAA: a Fully End-to-End Expressive Large Audio Language Model
['Ailin Huang', 'Bingxin Li', 'Bruce Wang', 'Boyong Wu', 'Chao Yan', 'Chengli Feng', 'Heng Wang', 'Hongyu Zhou', 'Hongyuan Wang', 'Jingbei Li', 'Jianjian Sun', 'Joanna Wang', 'Mingrui Chen', 'Peng Liu', 'Ruihang Miao', 'Shilei Jiang', 'Tian Fei', 'Wang You', 'Xi Chen', 'Xuerui Yang', 'Yechang Huang', 'Yuxiang Zhang', 'Zheng Ge', 'Zheng Gong', 'Zhewei Huang', 'Zixin Zhang', 'Bin Wang', 'Bo Li', 'Buyun Ma', 'Changxin Miao', 'Changyi Wan', 'Chen Xu', 'Dapeng Shi', 'Dingyuan Hu', 'Enle Liu', 'Guanzhe Huang', 'Gulin Yan', 'Hanpeng Hu', 'Haonan Jia', 'Jiahao Gong', 'Jiaoren Wu', 'Jie Wu', 'Jie Yang', 'Junzhe Lin', 'Kaixiang Li', 'Lei Xia', 'Longlong Gu', 'Ming Li', 'Nie Hao', 'Ranchen Ming', 'Shaoliang Pang', 'Siqi Liu', 'Song Yuan', 'Tiancheng Cao', 'Wen Li', 'Wenqing He', 'Xu Zhao', 'Xuelin Zhang', 'Yanbo Yu', 'Yinmin Zhong', 'Yu Zhou', 'Yuanwei Liang', 'Yuanwei Lu', 'Yuxiang Yang', 'Zidong Yang', 'Zili Zhang', 'Binxing Jiao', 'Heung-Yeung Shum', 'Jiansheng Chen', 'Jing Li', 'Xiangyu Zhang', 'Xinhao Zhang', 'Yibo Zhu', 'Daxin Jiang', 'Shuchang Zhou', 'Chen Hu']
['cs.SD', 'cs.CL', 'eess.AS']
Large Audio-Language Models (LALMs) have significantly advanced intelligent human-computer interaction, yet their reliance on text-based outputs limits their ability to generate natural speech responses directly, hindering seamless audio interactions. To address this, we introduce Step-Audio-AQAA, a fully end-to-end LALM designed for Audio Query-Audio Answer (AQAA) tasks. The model integrates a dual-codebook audio tokenizer for linguistic and semantic feature extraction, a 130-billion-parameter backbone LLM, and a neural vocoder for high-fidelity speech synthesis. Our post-training approach employs interleaved token-output of text and audio to enhance semantic coherence and combines Direct Preference Optimization (DPO) with model merging to improve performance. Evaluations on the StepEval-Audio-360 benchmark demonstrate that Step-Audio-AQAA excels especially in speech control, outperforming state-of-the-art LALMs in key areas. This work contributes a promising solution for end-to-end LALMs and highlights the critical role of the token-based vocoder in enhancing overall performance for AQAA tasks.
2025-06-10T16:37:39Z
12 pages, 3 figures
null
null
null
null
null
null
null
null
null
2506.09007
Branched Schrödinger Bridge Matching
['Sophia Tang', 'Yinuo Zhang', 'Alexander Tong', 'Pranam Chatterjee']
['cs.LG', 'q-bio.QM']
Predicting the intermediate trajectories between an initial and target distribution is a central problem in generative modeling. Existing approaches, such as flow matching and Schr\"odinger Bridge Matching, effectively learn mappings between two distributions by modeling a single stochastic path. However, these methods are inherently limited to unimodal transitions and cannot capture branched or divergent evolution from a common origin to multiple distinct outcomes. To address this, we introduce Branched Schr\"odinger Bridge Matching (BranchSBM), a novel framework that learns branched Schr\"odinger bridges. BranchSBM parameterizes multiple time-dependent velocity fields and growth processes, enabling the representation of population-level divergence into multiple terminal distributions. We show that BranchSBM is not only more expressive but also essential for tasks involving multi-path surface navigation, modeling cell fate bifurcations from homogeneous progenitor states, and simulating diverging cellular responses to perturbations.
2025-06-10T17:29:48Z
null
null
null
null
null
null
null
null
null
null
2506.09278
UFM: A Simple Path towards Unified Dense Correspondence with Flow
['Yuchen Zhang', 'Nikhil Keetha', 'Chenwei Lyu', 'Bhuvan Jhamb', 'Yutian Chen', 'Yuheng Qiu', 'Jay Karhade', 'Shreyas Jha', 'Yaoyu Hu', 'Deva Ramanan', 'Sebastian Scherer', 'Wenshan Wang']
['cs.CV', 'cs.LG', 'cs.RO']
Dense image correspondence is central to many applications, such as visual odometry, 3D reconstruction, object association, and re-identification. Historically, dense correspondence has been tackled separately for wide-baseline scenarios and optical flow estimation, despite the common goal of matching content between two images. In this paper, we develop a Unified Flow & Matching model (UFM), which is trained on unified data for pixels that are co-visible in both source and target images. UFM uses a simple, generic transformer architecture that directly regresses the (u,v) flow. It is easier to train and more accurate for large flows compared to the typical coarse-to-fine cost volumes in prior work. UFM is 28% more accurate than state-of-the-art flow methods (Unimatch), while also achieving 62% lower error and 6.7x faster inference than dense wide-baseline matchers (RoMa). UFM is the first to demonstrate that unified training can outperform specialized approaches across both domains. This result enables fast, general-purpose correspondence and opens new directions for multi-modal, long-range, and real-time correspondence tasks.
2025-06-10T22:32:13Z
Project Page: https://uniflowmatch.github.io/
null
null
null
null
null
null
null
null
null
2506.09344
Ming-Omni: A Unified Multimodal Model for Perception and Generation
['Inclusion AI', 'Biao Gong', 'Cheng Zou', 'Chuanyang Zheng', 'Chunluan Zhou', 'Canxiang Yan', 'Chunxiang Jin', 'Chunjie Shen', 'Dandan Zheng', 'Fudong Wang', 'Furong Xu', 'GuangMing Yao', 'Jun Zhou', 'Jingdong Chen', 'Jianxin Sun', 'Jiajia Liu', 'Jianjiang Zhu', 'Jun Peng', 'Kaixiang Ji', 'Kaiyou Song', 'Kaimeng Ren', 'Libin Wang', 'Lixiang Ru', 'Lele Xie', 'Longhua Tan', 'Lyuxin Xue', 'Lan Wang', 'Mochen Bai', 'Ning Gao', 'Pei Chen', 'Qingpei Guo', 'Qinglong Zhang', 'Qiang Xu', 'Rui Liu', 'Ruijie Xiong', 'Sirui Gao', 'Tinghao Liu', 'Taisong Li', 'Weilong Chai', 'Xinyu Xiao', 'Xiaomei Wang', 'Xiaoxue Chen', 'Xiao Lu', 'Xiaoyu Li', 'Xingning Dong', 'Xuzheng Yu', 'Yi Yuan', 'Yuting Gao', 'Yunxiao Sun', 'Yipeng Chen', 'Yifei Wu', 'Yongjie Lyu', 'Ziping Ma', 'Zipeng Feng', 'Zhijiang Fang', 'Zhihao Qiu', 'Ziyuan Huang', 'Zhengyu He']
['cs.AI', 'cs.CL', 'cs.CV', 'cs.LG', 'cs.SD', 'eess.AS']
We propose Ming-Omni, a unified multimodal model capable of processing images, text, audio, and video, while demonstrating strong proficiency in both speech and image generation. Ming-Omni employs dedicated encoders to extract tokens from different modalities, which are then processed by Ling, an MoE architecture equipped with newly proposed modality-specific routers. This design enables a single model to efficiently process and fuse multimodal inputs within a unified framework, thereby facilitating diverse tasks without requiring separate models, task-specific fine-tuning, or structural redesign. Importantly, Ming-Omni extends beyond conventional multimodal models by supporting audio and image generation. This is achieved through the integration of an advanced audio decoder for natural-sounding speech and Ming-Lite-Uni for high-quality image generation, which also allow the model to engage in context-aware chatting, perform text-to-speech conversion, and conduct versatile image editing. Our experimental results showcase that Ming-Omni offers a powerful solution for unified perception and generation across all modalities. Notably, our proposed Ming-Omni is the first open-source model we are aware of to match GPT-4o in modality support, and we release all code and model weights to encourage further research and development in the community.
2025-06-11T02:50:49Z
18 pages,8 figures
null
null
null
null
null
null
null
null
null
2506.09366
SkillBlender: Towards Versatile Humanoid Whole-Body Loco-Manipulation via Skill Blending
['Yuxuan Kuang', 'Haoran Geng', 'Amine Elhafsi', 'Tan-Dzung Do', 'Pieter Abbeel', 'Jitendra Malik', 'Marco Pavone', 'Yue Wang']
['cs.RO', 'cs.LG']
Humanoid robots hold significant potential in accomplishing daily tasks across diverse environments thanks to their flexibility and human-like morphology. Recent works have made significant progress in humanoid whole-body control and loco-manipulation leveraging optimal control or reinforcement learning. However, these methods require tedious task-specific tuning for each task to achieve satisfactory behaviors, limiting their versatility and scalability to diverse tasks in daily scenarios. To that end, we introduce SkillBlender, a novel hierarchical reinforcement learning framework for versatile humanoid loco-manipulation. SkillBlender first pretrains goal-conditioned task-agnostic primitive skills, and then dynamically blends these skills to accomplish complex loco-manipulation tasks with minimal task-specific reward engineering. We also introduce SkillBench, a parallel, cross-embodiment, and diverse simulated benchmark containing three embodiments, four primitive skills, and eight challenging loco-manipulation tasks, accompanied by a set of scientific evaluation metrics balancing accuracy and feasibility. Extensive simulated experiments show that our method significantly outperforms all baselines, while naturally regularizing behaviors to avoid reward hacking, resulting in more accurate and feasible movements for diverse loco-manipulation tasks in our daily scenarios. Our code and benchmark will be open-sourced to the community to facilitate future research. Project page: https://usc-gvl.github.io/SkillBlender-web/.
2025-06-11T03:24:26Z
null
null
null
SkillBlender: Towards Versatile Humanoid Whole-Body Loco-Manipulation via Skill Blending
['Yuxuan Kuang', 'Haoran Geng', 'Amine Elhafsi', 'Tan-Dzung Do', 'Pieter Abbeel', 'Jitendra Malik', 'Marco Pavone', 'Yue Wang']
2025
arXiv.org
1
54
['Computer Science']
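For the SkillBlender record above, one plausible reading of "blending" sketched in code: a high-level policy emits mixing weights over frozen goal-conditioned primitive skills, and the executed action is their weighted combination. The architecture and dimensions are assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class SkillBlendPolicy(nn.Module):
    """Hypothetical high-level policy that blends frozen primitive skills."""
    def __init__(self, obs_dim: int, skills: list[nn.Module]):
        super().__init__()
        self.skills = nn.ModuleList(skills)       # pretrained primitives, kept frozen
        for p in self.skills.parameters():
            p.requires_grad_(False)
        self.gate = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                  nn.Linear(64, len(skills)))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(obs), dim=-1)           # (B, K) blend weights
        actions = torch.stack([s(obs) for s in self.skills], 1)   # (B, K, act_dim)
        return (weights.unsqueeze(-1) * actions).sum(dim=1)       # blended action
```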
2506.09369
ScaleLSD: Scalable Deep Line Segment Detection Streamlined
['Zeran Ke', 'Bin Tan', 'Xianwei Zheng', 'Yujun Shen', 'Tianfu Wu', 'Nan Xue']
['cs.CV']
This paper studies the problem of Line Segment Detection (LSD) for the characterization of line geometry in images, with the aim of learning a domain-agnostic robust LSD model that works well for any natural images. With a focus on scalable self-supervised learning of LSD, we revisit and streamline the fundamental designs of (deep and non-deep) LSD approaches to obtain a high-performing and efficient LSD learner, dubbed ScaleLSD, for the curation of line geometry at scale from over 10M unlabeled real-world images. Our ScaleLSD detects substantially more line segments from natural images than even the pioneering non-deep LSD approach, yielding a more complete and accurate geometric characterization of images using line segments. Experimentally, our proposed ScaleLSD is comprehensively evaluated under zero-shot protocols in detection performance, single-view 3D geometry estimation, two-view line segment matching, and multiview 3D line mapping, all with excellent performance. Based on this thorough evaluation, ScaleLSD is the first deep approach to outperform the pioneering non-deep LSD in all aspects we tested, significantly expanding and reinforcing the versatility of the line geometry of images. Code and Models are available at https://github.com/ant-research/scalelsd
2025-06-11T03:34:21Z
accepted to CVPR 2025; 17 pages, appendices included
null
null
null
null
null
null
null
null
null
2506.09440
GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture
['GigaChat team', 'Mamedov Valentin', 'Evgenii Kosarev', 'Gregory Leleytner', 'Ilya Shchuckin', 'Valeriy Berezovskiy', 'Daniil Smirnov', 'Dmitry Kozlov', 'Sergei Averkiev', 'Lukyanenko Ivan', 'Aleksandr Proshunin', 'Ainur Israfilova', 'Ivan Baskov', 'Artem Chervyakov', 'Emil Shakirov', 'Mikhail Kolesov', 'Daria Khomich', 'Darya Latortseva', 'Sergei Porkhun', 'Yury Fedorov', 'Oleg Kutuzov', 'Polina Kudriavtseva', 'Sofiia Soldatova', 'Kolodin Egor', 'Stanislav Pyatkin', 'Dzmitry Menshykh', 'Grafov Sergei', 'Eldar Damirov', 'Karlov Vladimir', 'Ruslan Gaitukiev', 'Arkadiy Shatenov', 'Alena Fenogenova', 'Nikita Savushkin', 'Fedor Minkin']
['cs.CL', 'cs.AI']
Generative large language models (LLMs) have become crucial for modern NLP research and applications across various languages. However, the development of foundational models specifically tailored to the Russian language has been limited, primarily due to the significant computational resources required. This paper introduces the GigaChat family of Russian LLMs, available in various sizes, including base models and instruction-tuned versions. We provide a detailed report on the model architecture, pre-training process, and experiments to guide design choices. In addition, we evaluate their performance on Russian and English benchmarks and compare GigaChat with multilingual analogs. The paper presents a system demonstration of the top-performing models accessible via an API, a Telegram bot, and a Web interface. Furthermore, we have released three open GigaChat models in open-source (https://huggingface.co/ai-sage), aiming to expand NLP research opportunities and support the development of industrial solutions for the Russian language.
2025-06-11T06:46:49Z
ACL-2025 System Demo
null
null
null
null
null
null
null
null
null
2506.09482
Marrying Autoregressive Transformer and Diffusion with Multi-Reference Autoregression
['Dingcheng Zhen', 'Qian Qiao', 'Tan Yu', 'Kangxi Wu', 'Ziwei Zhang', 'Siyuan Liu', 'Shunshun Yin', 'Ming Tao']
['cs.CV']
We introduce TransDiff, the first image generation model that marries Autoregressive (AR) Transformer with diffusion models. In this joint modeling framework, TransDiff encodes labels and images into high-level semantic features and employs a diffusion model to estimate the distribution of image samples. On the ImageNet 256x256 benchmark, TransDiff significantly outperforms other image generation models based on standalone AR Transformer or diffusion models. Specifically, TransDiff achieves a Frechet Inception Distance (FID) of 1.61 and an Inception Score (IS) of 293.4, and further provides 2x faster inference compared to state-of-the-art methods based on AR Transformer and 112x faster inference compared to diffusion-only models. Furthermore, building on the TransDiff model, we introduce a novel image generation paradigm called Multi-Reference Autoregression (MRAR), which performs autoregressive generation by predicting the next image. MRAR enables the model to reference multiple previously generated images, thereby facilitating the learning of more diverse representations and improving the quality of generated images in subsequent iterations. By applying MRAR, the performance of TransDiff is improved, with the FID reduced from 1.61 to 1.42. We expect TransDiff to open up a new frontier in the field of image generation.
2025-06-11T07:50:31Z
null
null
null
null
null
null
null
null
null
null
2506.09513
ReasonMed: A 370K Multi-Agent Generated Dataset for Advancing Medical Reasoning
['Yu Sun', 'Xingyu Qian', 'Weiwen Xu', 'Hao Zhang', 'Chenghao Xiao', 'Long Li', 'Yu Rong', 'Wenbing Huang', 'Qifeng Bai', 'Tingyang Xu']
['cs.CL', 'cs.AI', 'cs.MA']
Though reasoning-based large language models (LLMs) have excelled in mathematics and programming, their capabilities in knowledge-intensive medical question answering remain underexplored. To address this, we introduce ReasonMed, the largest medical reasoning dataset, comprising 370k high-quality examples distilled from 1.7 million initial reasoning paths generated by various LLMs. ReasonMed is constructed through a \textit{multi-agent verification and refinement process}, where we design an \textit{Error Refiner} to enhance the reasoning paths by identifying and correcting error-prone steps flagged by a verifier. Leveraging ReasonMed, we systematically investigate best practices for training medical reasoning models and find that combining detailed Chain-of-Thought (CoT) reasoning with concise answer summaries yields the most effective fine-tuning strategy. Based on this strategy, we train ReasonMed-7B, which sets a new benchmark for sub-10B models, outperforming the prior best by 4.17\% and even exceeding LLaMA3.1-70B on PubMedQA by 4.60\%.
2025-06-11T08:36:55Z
24 pages, 6 figures, 7 tables
null
null
null
null
null
null
null
null
null
2506.09560
Towards Open Foundation Language Model and Corpus for Macedonian: A Low-Resource Language
['Stefan Krsteski', 'Matea Tashkovska', 'Borjan Sazdov', 'Hristijan Gjoreski', 'Branislav Gerazov']
['cs.CL']
The increase in technological adoption worldwide comes with demands for novel tools to be used by the general population. Large Language Models (LLMs) provide a great opportunity in this respect, but their capabilities remain limited for low-resource languages, restricting applications in countries where such languages are spoken. We create several resources to facilitate the adoption of LLMs and to support research advancements for Macedonian. We collect the largest Macedonian corpus to date, consisting of 40GB of textual data and totaling 3.5B words. To support conversational applications, we collect a 106k-instance instruction dataset, carefully built to be culturally grounded. For evaluation, we construct a Macedonian evaluation suite covering seven benchmarks. Finally, we train domestic-yak, a state-of-the-art 8B-parameter model, on our curated datasets and evaluate it against eight baseline models using the newly constructed benchmark suite. Our model outperforms all existing models in the 8B parameter range across all benchmarks, and achieves performance comparable to models up to 10x larger. Furthermore, a qualitative analysis with native speakers reveals that our model is preferred over larger counterparts, receiving higher ratings for grammatical correctness and cultural appropriateness. All datasets, code, and model weights are openly released, setting a foundation for advancing LLMs in similarly underrepresented languages. These resources are publicly available at github.com/LVSTCK for source code, and at huggingface.co/LVSTCK for pretrained model weights and data.
2025-06-11T09:46:58Z
Camera-ready version accepted at SlavNLP-2025@ACL
null
null
null
null
null
null
null
null
null
2506.09645
Learning Efficient and Generalizable Graph Retriever for Knowledge-Graph Question Answering
['Tianjun Yao', 'Haoxuan Li', 'Zhiqiang Shen', 'Pan Li', 'Tongliang Liu', 'Kun Zhang']
['cs.CL', 'cs.IR', 'cs.LG', 'I.2.6']
Large Language Models (LLMs) have shown strong inductive reasoning ability across various domains, but their reliability is hindered by the outdated knowledge and hallucinations. Retrieval-Augmented Generation mitigates these issues by grounding LLMs with external knowledge; however, most existing RAG pipelines rely on unstructured text, limiting interpretability and structured reasoning. Knowledge graphs, which represent facts as relational triples, offer a more structured and compact alternative. Recent studies have explored integrating knowledge graphs with LLMs for knowledge graph question answering (KGQA), with a significant proportion adopting the retrieve-then-reasoning paradigm. In this framework, graph-based retrievers have demonstrated strong empirical performance, yet they still face challenges in generalization ability. In this work, we propose RAPL, a novel framework for efficient and effective graph retrieval in KGQA. RAPL addresses these limitations through three aspects: (1) a two-stage labeling strategy that combines heuristic signals with parametric models to provide causally grounded supervision; (2) a model-agnostic graph transformation approach to capture both intra- and inter-triple interactions, thereby enhancing representational capacity; and (3) a path-based reasoning strategy that facilitates learning from the injected rational knowledge, and supports downstream reasoner through structured inputs. Empirically, RAPL outperforms state-of-the-art methods by $2.66\%-20.34\%$, and significantly reduces the performance gap between smaller and more powerful LLM-based reasoners, as well as the gap under cross-dataset settings, highlighting its superior retrieval capability and generalizability. Codes are available at: https://github.com/tianyao-aka/RAPL.
2025-06-11T12:03:52Z
32 pages, 28 figures
null
null
Learning Efficient and Generalizable Graph Retriever for Knowledge-Graph Question Answering
['Tianjun Yao', 'Haoxuan Li', 'Zhiqiang Shen', 'Pan Li', 'Tongliang Liu', 'Kun Zhang']
2025
arXiv.org
0
66
['Computer Science']
2506.09736
Vision Matters: Simple Visual Perturbations Can Boost Multimodal Math Reasoning
['Yuting Li', 'Lai Wei', 'Kaipeng Zheng', 'Jingyuan Huang', 'Linghe Kong', 'Lichao Sun', 'Weiran Huang']
['cs.CV', 'cs.AI']
Despite the rapid progress of multimodal large language models (MLLMs), they have largely overlooked the importance of visual processing. In a simple yet revealing experiment, we find that language-only models, when provided with image captions, can achieve comparable or even better performance than MLLMs that consume raw visual inputs. This suggests that current MLLMs may generate accurate visual descriptions but fail to effectively integrate them during reasoning. Motivated by this, we propose a simple visual perturbation framework that enhances perceptual robustness without requiring algorithmic modifications or additional training data. Our approach introduces three targeted perturbations: distractor concatenation, dominance-preserving mixup, and random rotation, that can be easily integrated into existing post-training pipelines including SFT, DPO, and GRPO. Through extensive experiments across multiple datasets, we demonstrate consistent improvements in mathematical reasoning performance, with gains comparable to those achieved through algorithmic changes. Additionally, we achieve competitive performance among open-source 7B RL-tuned models by training Qwen2.5-VL-7B with visual perturbation. Through comprehensive ablation studies, we analyze the effectiveness of different perturbation strategies, revealing that each perturbation type contributes uniquely to different aspects of visual reasoning. Our findings highlight the critical role of visual perturbation in multimodal mathematical reasoning: better reasoning begins with better seeing. Our code is available at https://github.com/YutingLi0606/Vision-Matters.
2025-06-11T13:39:46Z
Technical Report
null
null
null
null
null
null
null
null
null
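For the Vision Matters record above, minimal sketches of the three named perturbations; sizes, angles, and mixing ratios are illustrative assumptions:

```python
import random
from PIL import Image

def random_rotation(img: Image.Image, max_deg: float = 15.0) -> Image.Image:
    """Rotate by a small random angle; assumed angle range."""
    return img.rotate(random.uniform(-max_deg, max_deg), expand=True)

def distractor_concat(img: Image.Image, distractor: Image.Image) -> Image.Image:
    """Place a distractor image beside the original; the model must still
    locate the relevant content."""
    d = distractor.convert("RGB").resize(img.size)
    canvas = Image.new("RGB", (img.width + d.width, img.height))
    canvas.paste(img.convert("RGB"), (0, 0))
    canvas.paste(d, (img.width, 0))
    return canvas

def dominance_preserving_mixup(img: Image.Image, other: Image.Image,
                               alpha: float = 0.8) -> Image.Image:
    """Blend in a second image while keeping the original dominant, so the
    answer stays recoverable (alpha is the original image's weight)."""
    return Image.blend(other.convert("RGB").resize(img.size), img.convert("RGB"), alpha)
```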
2506.09820
CoRT: Code-integrated Reasoning within Thinking
['Chengpeng Li', 'Zhengyang Tang', 'Ziniu Li', 'Mingfeng Xue', 'Keqin Bao', 'Tian Ding', 'Ruoyu Sun', 'Benyou Wang', 'Xiang Wang', 'Junyang Lin', 'Dayiheng Liu']
['cs.CL', 'cs.AI', 'cs.LG']
Large Reasoning Models (LRMs) like o1 and DeepSeek-R1 have shown remarkable progress in natural language reasoning with long chain-of-thought (CoT), yet they remain inefficient or inaccurate when handling complex mathematical operations. Addressing these limitations through computational tools (e.g., computation libraries and symbolic solvers) is promising, but it introduces a technical challenge: Code Interpreter (CI) brings external knowledge beyond the model's internal text representations, thus the direct combination is not efficient. This paper introduces CoRT, a post-training framework for teaching LRMs to leverage CI effectively and efficiently. As a first step, we address the data scarcity issue by synthesizing code-integrated reasoning data through Hint-Engineering, which strategically inserts different hints at appropriate positions to optimize LRM-CI interaction. We manually create 30 high-quality samples, upon which we post-train models ranging from 1.5B to 32B parameters, with supervised fine-tuning, rejection fine-tuning and reinforcement learning. Our experimental results demonstrate that Hint-Engineering models achieve 4\% and 8\% absolute improvements on DeepSeek-R1-Distill-Qwen-32B and DeepSeek-R1-Distill-Qwen-1.5B respectively, across five challenging mathematical reasoning datasets. Furthermore, Hint-Engineering models use about 30\% fewer tokens for the 32B model and 50\% fewer tokens for the 1.5B model compared with the natural language models. The models and code are available at https://github.com/ChengpengLi1003/CoRT.
2025-06-11T14:59:02Z
work in progress
null
null
null
null
null
null
null
null
null
2506.09930
From Intention to Execution: Probing the Generalization Boundaries of Vision-Language-Action Models
['Irving Fang', 'Juexiao Zhang', 'Shengbang Tong', 'Chen Feng']
['cs.RO', 'cs.CV']
One promise that Vision-Language-Action (VLA) models hold over traditional imitation learning for robotics is to leverage the broad generalization capabilities of large Vision-Language Models (VLMs) to produce versatile, "generalist" robot policies. However, current evaluations of VLAs remain insufficient. Traditional imitation learning benchmarks are unsuitable due to the lack of language instructions. Emerging benchmarks for VLAs that incorporate language often come with limited evaluation tasks and do not intend to investigate how much VLM pretraining truly contributes to the generalization capabilities of the downstream robotic policy. Meanwhile, much research relies on real-world robot setups designed in isolation by different institutions, which creates a barrier for reproducibility and accessibility. To address this gap, we introduce a unified probing suite of 50 simulation-based tasks across 10 subcategories spanning language instruction, vision, and objects. We systematically evaluate several state-of-the-art VLA architectures on this suite to understand their generalization capability. Our results show that while VLM backbones endow VLAs with robust perceptual understanding and high level planning, which we refer to as good intentions, this does not reliably translate into precise motor execution: when faced with out-of-distribution observations, policies often exhibit coherent intentions, but falter in action execution. Moreover, finetuning on action data can erode the original VLM's generalist reasoning abilities. We release our task suite and evaluation code to serve as a standardized benchmark for future VLAs and to drive research on closing the perception-to-action gap. More information, including the source code, can be found at https://ai4ce.github.io/INT-ACT/
2025-06-11T16:52:18Z
Under review
null
null
From Intention to Execution: Probing the Generalization Boundaries of Vision-Language-Action Models
['Irving Fang', 'Juexiao Zhang', 'Shengbang Tong', 'Chen Feng']
2025
arXiv.org
1
38
['Computer Science']
2506.09942
VerIF: Verification Engineering for Reinforcement Learning in Instruction Following
['Hao Peng', 'Yunjia Qi', 'Xiaozhi Wang', 'Bin Xu', 'Lei Hou', 'Juanzi Li']
['cs.CL', 'cs.AI']
Reinforcement learning with verifiable rewards (RLVR) has become a key technique for enhancing large language models (LLMs), with verification engineering playing a central role. However, best practices for RL in instruction following remain underexplored. In this work, we explore the verification challenge in RL for instruction following and propose VerIF, a verification method that combines rule-based code verification with LLM-based verification from a large reasoning model (e.g., QwQ-32B). To support this approach, we construct a high-quality instruction-following dataset, VerInstruct, containing approximately 22,000 instances with associated verification signals. We apply RL training with VerIF to two models, achieving significant improvements across several representative instruction-following benchmarks. The trained models reach state-of-the-art performance among models of comparable size and generalize well to unseen constraints. We further observe that their general capabilities remain unaffected, suggesting that RL with VerIF can be integrated into existing RL recipes to enhance overall model performance. We have released our datasets, codes, and models to facilitate future research at https://github.com/THU-KEG/VerIF.
2025-06-11T17:10:36Z
16 pages, 8 figures
null
null
null
null
null
null
null
null
null
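For the VerIF record above, a minimal sketch of the hybrid verification idea: hard constraints are checked by code, soft constraints by an LLM judge. The constraint schema is an assumption, and `llm_judge` is a hypothetical callable returning a boolean:

```python
import re

def verify(response: str, constraints: dict, llm_judge) -> float:
    """Return a reward in [0, 1]: the fraction of satisfied constraints."""
    checks = []
    if "max_words" in constraints:                      # rule-based, exact
        checks.append(len(response.split()) <= constraints["max_words"])
    for kw in constraints.get("must_include", []):      # rule-based, exact
        checks.append(re.search(re.escape(kw), response, re.I) is not None)
    for soft in constraints.get("soft", []):            # e.g. "uses a formal tone"
        checks.append(bool(llm_judge(response, soft)))  # LLM-based, fuzzy
    return sum(checks) / max(len(checks), 1)
```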
2506.09965
Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing
['Junfei Wu', 'Jian Guan', 'Kaituo Feng', 'Qiang Liu', 'Shu Wu', 'Liang Wang', 'Wei Wu', 'Tieniu Tan']
['cs.CV', 'cs.AI']
As textual reasoning with large language models (LLMs) has advanced significantly, there has been growing interest in enhancing the multimodal reasoning capabilities of large vision-language models (LVLMs). However, existing methods primarily approach multimodal reasoning in a straightforward, text-centric manner, where both reasoning and answer derivation are conducted purely through text, with the only difference being the presence of multimodal input. As a result, these methods often encounter fundamental limitations in spatial reasoning tasks that demand precise geometric understanding and continuous spatial tracking, capabilities that humans achieve through mental visualization and manipulation. To address these limitations, we propose drawing to reason in space, a novel paradigm that enables LVLMs to reason through elementary drawing operations in the visual space. By equipping models with basic drawing operations, including annotating bounding boxes and drawing auxiliary lines, we empower them to express and analyze spatial relationships through direct visual manipulation, meanwhile avoiding the performance ceiling imposed by specialized perception tools in previous tool-integrated reasoning approaches. To cultivate this capability, we develop a three-stage training framework: cold-start training with synthetic data to establish basic drawing abilities, reflective rejection sampling to enhance self-reflection behaviors, and reinforcement learning to directly optimize for target rewards. Extensive experiments demonstrate that our model, named VILASR, consistently outperforms existing methods across diverse spatial reasoning benchmarks, involving maze navigation, static spatial reasoning, video-based reasoning, and multi-view-based reasoning tasks, with an average improvement of 18.4%.
2025-06-11T17:41:50Z
null
null
null
null
null
null
null
null
null
null
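For the VILASR record above, minimal sketches of the two elementary drawing operations it equips models with; colors and stroke widths are assumptions:

```python
from PIL import Image, ImageDraw

def annotate_bbox(img: Image.Image, box: tuple[int, int, int, int],
                  label: str = "") -> Image.Image:
    """Draw a labeled bounding box on a copy of the image."""
    out = img.copy()
    draw = ImageDraw.Draw(out)
    draw.rectangle(box, outline="red", width=3)
    if label:
        draw.text((box[0], max(box[1] - 12, 0)), label, fill="red")
    return out

def draw_auxiliary_line(img: Image.Image, p1: tuple[int, int],
                        p2: tuple[int, int]) -> Image.Image:
    """Draw an auxiliary line; the edited image feeds the next reasoning step."""
    out = img.copy()
    ImageDraw.Draw(out).line([p1, p2], fill="blue", width=3)
    return out
```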
2506.09980
Efficient Part-level 3D Object Generation via Dual Volume Packing
['Jiaxiang Tang', 'Ruijie Lu', 'Zhaoshuo Li', 'Zekun Hao', 'Xuan Li', 'Fangyin Wei', 'Shuran Song', 'Gang Zeng', 'Ming-Yu Liu', 'Tsung-Yi Lin']
['cs.CV']
Recent progress in 3D object generation has greatly improved both the quality and efficiency. However, most existing methods generate a single mesh with all parts fused together, which limits the ability to edit or manipulate individual parts. A key challenge is that different objects may have a varying number of parts. To address this, we propose a new end-to-end framework for part-level 3D object generation. Given a single input image, our method generates high-quality 3D objects with an arbitrary number of complete and semantically meaningful parts. We introduce a dual volume packing strategy that organizes all parts into two complementary volumes, allowing for the creation of complete and interleaved parts that assemble into the final object. Experiments show that our model achieves better quality, diversity, and generalization than previous image-based part-level generation methods.
2025-06-11T17:55:03Z
Code: https://github.com/NVlabs/PartPacker Project Page: https://research.nvidia.com/labs/dir/partpacker/
null
null
null
null
null
null
null
null
null
2506.09991
Multiverse: Your Language Models Secretly Decide How to Parallelize and Merge Generation
['Xinyu Yang', 'Yuwei An', 'Hongyi Liu', 'Tianqi Chen', 'Beidi Chen']
['cs.LG']
Autoregressive Large Language Models (AR-LLMs) frequently exhibit implicit parallelism in sequential generation. Inspired by this, we introduce Multiverse, a new generative model that enables natively parallel generation. Multiverse internalizes a MapReduce paradigm, generating automatically through three stages: (i) a Map stage for adaptive task decomposition, (ii) a Process stage for parallel subtask execution, and (iii) a Reduce stage for lossless result synthesis. Next, we build a real-world Multiverse reasoning model with co-design of data, algorithm, and system, enabling rapid and seamless transfer from frontier AR-LLMs. For data creation, we develop Multiverse Curator, an automated LLM-assisted pipeline that transforms sequential reasoning chains into structured training data, avoiding costly human annotations. Algorithmically, we design Multiverse Attention to separate parallel reasoning steps while keeping compatibility with causal attention for efficient training. On the systems side, we implement Multiverse Engine to support parallel inference. It features a dedicated interpreter that dynamically switches between sequential and parallel generation, triggered directly by the model. After a 3-hour fine-tuning with 1K examples, our Multiverse-32B stands as the only open-sourced non-AR model achieving performance on par with leading AR-LLMs of the same scale, evidenced by AIME24 & 25 scores of 54% and 46%, respectively. Moreover, our budget control experiments show that Multiverse-32B exhibits superior scaling, outperforming AR-LLMs by 1.87% on average using the same context length. Such scaling further leads to practical efficiency gains, achieving up to 2x speedup across varying batch sizes. We have open-sourced the entire Multiverse ecosystem, including data, model weights, engine, as well as complete data curation prompts and detailed training and evaluation recipes.
2025-06-11T17:59:23Z
null
null
null
null
null
null
null
null
null
null
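For the Multiverse record above, a minimal sketch of the Map-Process-Reduce generation flow; `generate` stands in for any LLM call (a hypothetical callable), and the prompts are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def multiverse_answer(question: str, generate) -> str:
    # Map: adaptively decompose the problem into independent subtasks
    plan = generate(f"Decompose into independent subtasks, one per line:\n{question}")
    subtasks = [s.strip() for s in plan.splitlines() if s.strip()]
    # Process: execute subtasks in parallel instead of one long sequential chain
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(lambda s: generate(f"Solve: {s}"), subtasks))
    # Reduce: losslessly merge partial results into the final answer
    joined = "\n".join(partials)
    return generate(f"Combine these partial results into one answer:\n{joined}")
```

In the paper's actual system the model itself triggers the switch between sequential and parallel generation; this sketch externalizes that control flow for clarity.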
2506.10357
Optimus-3: Towards Generalist Multimodal Minecraft Agents with Scalable Task Experts
['Zaijing Li', 'Yuquan Xie', 'Rui Shao', 'Gongwei Chen', 'Weili Guan', 'Dongmei Jiang', 'Liqiang Nie']
['cs.AI']
Recently, agents based on multimodal large language models (MLLMs) have achieved remarkable progress across various domains. However, building a generalist agent with capabilities such as perception, planning, action, grounding, and reflection in open-world environments like Minecraft remains challenging due to insufficient domain-specific data, interference among heterogeneous tasks, and visual diversity in open-world settings. In this paper, we address these challenges through three key contributions. 1) We propose a knowledge-enhanced data generation pipeline to provide scalable and high-quality training data for agent development. 2) To mitigate interference among heterogeneous tasks, we introduce a Mixture-of-Experts (MoE) architecture with task-level routing. 3) We develop a Multimodal Reasoning-Augmented Reinforcement Learning approach to enhance the agent's reasoning ability for visual diversity in Minecraft. Built upon these innovations, we present Optimus-3, a general-purpose agent for Minecraft. Extensive experimental results demonstrate that Optimus-3 surpasses both generalist multimodal large language models and existing state-of-the-art agents across a wide range of tasks in the Minecraft environment. Project page: https://cybertronagent.github.io/Optimus-3.github.io/
2025-06-12T05:29:40Z
24 pages, 10 figures
null
null
null
null
null
null
null
null
null
2506.10452
Towards Robust Multimodal Emotion Recognition under Missing Modalities and Distribution Shifts
['Guowei Zhong', 'Ruohong Huan', 'Mingzhen Wu', 'Ronghua Liang', 'Peng Chen']
['cs.CV', 'cs.CL', 'cs.LG', 'cs.MM']
Recent advancements in Multimodal Emotion Recognition (MER) face challenges in addressing both modality missing and Out-Of-Distribution (OOD) data simultaneously. Existing methods often rely on specific models or introduce excessive parameters, which limits their practicality. To address these issues, we propose a novel robust MER framework, Causal Inference Distiller (CIDer), and introduce a new task, Random Modality Feature Missing (RMFM), to generalize the definition of modality missing. CIDer integrates two key components: a Model-Specific Self-Distillation (MSSD) module and a Model-Agnostic Causal Inference (MACI) module. MSSD enhances robustness under the RMFM task through a weight-sharing self-distillation approach applied across low-level features, attention maps, and high-level representations. Additionally, a Word-level Self-aligned Attention Module (WSAM) reduces computational complexity, while a Multimodal Composite Transformer (MCT) facilitates efficient multimodal fusion. To tackle OOD challenges, MACI employs a tailored causal graph to mitigate label and language biases using a Multimodal Causal Module (MCM) and fine-grained counterfactual texts. Notably, MACI can independently enhance OOD generalization with minimal additional parameters. Furthermore, we also introduce the new repartitioned MER OOD datasets. Experimental results demonstrate that CIDer achieves robust performance in both RMFM and OOD scenarios, with fewer parameters and faster training compared to state-of-the-art methods. The implementation of this work is publicly accessible at https://github.com/gw-zhong/CIDer.
2025-06-12T07:58:17Z
Submitted to TAC. The code is available at https://github.com/gw-zhong/CIDer
null
null
Towards Robust Multimodal Emotion Recognition under Missing Modalities and Distribution Shifts
['Guowei Zhong', 'Ruohong Huan', 'Mingzhen Wu', 'Ronghua Liang', 'Peng Chen']
2025
arXiv.org
0
41
['Computer Science']
2506.10601
Semantic-decoupled Spatial Partition Guided Point-supervised Oriented Object Detection
['Xinyuan Liu', 'Hang Xu', 'Yike Ma', 'Yucheng Zhang', 'Feng Dai']
['cs.CV']
Recent advances in remote sensing technology have driven rapid growth in imagery and, with it, rapid development of oriented object detection, which nevertheless remains hindered by labor-intensive annotation for high-density scenes. Oriented object detection with point supervision offers a cost-effective solution for densely packed scenes in remote sensing, yet existing methods suffer from inadequate sample assignment and instance confusion due to rigid rule-based designs. To address this, we propose SSP (Semantic-decoupled Spatial Partition), a unified framework that synergizes rule-driven prior injection and data-driven label purification. Specifically, SSP introduces two core innovations: 1) Pixel-level Spatial Partition-based Sample Assignment, which compactly estimates the upper and lower bounds of object scales and mines high-quality positive samples and hard negative samples through spatial partitioning of pixel maps. 2) Semantic Spatial Partition-based Box Extraction, which derives instances from spatial partitions modulated by semantic maps and reliably converts them into bounding boxes to form pseudo-labels for supervising the learning of downstream detectors. Experiments on DOTA-v1.0 and others demonstrate SSP's superiority: it achieves 45.78% mAP under point supervision, outperforming the SOTA method PointOBB-v2 by 4.10%. Furthermore, when integrated with ORCNN and ReDet architectures, the SSP framework achieves mAP values of 47.86% and 48.50%, respectively. The code is available at https://github.com/antxinyuan/ssp.
2025-06-12T11:44:34Z
null
null
null
null
null
null
null
null
null
null
2506.10707
ConTextTab: A Semantics-Aware Tabular In-Context Learner
['Marco Spinaci', 'Marek Polewczyk', 'Maximilian Schambach', 'Sam Thelin']
['cs.LG', 'cs.AI']
Tabular in-context learning (ICL) has recently achieved state-of-the-art (SOTA) performance on several tabular prediction tasks. Previously restricted to classification problems on small tables, recent advances such as TabPFN and TabICL have extended its use to larger datasets. While being architecturally efficient and well-adapted to tabular data structures, current table-native ICL architectures, being trained exclusively on synthetic data, do not fully leverage the rich semantics and world knowledge contained in real-world tabular data. At the other end of this spectrum, tabular ICL models based on pretrained large language models such as TabuLa-8B integrate deep semantic understanding and world knowledge but are only able to make use of a small amount of context due to inherent architectural limitations. With the aim of combining the best of both these worlds, we introduce ConTextTab, integrating semantic understanding and alignment into a table-native ICL framework. By employing specialized embeddings for different data modalities and by training on large-scale real-world tabular data, our model is competitive with SOTA across a broad set of benchmarks while setting a new standard on the semantically rich CARTE benchmark. Code and checkpoints are available at https://github.com/SAP-samples/contexttab
2025-06-12T13:57:29Z
null
null
null
ConTextTab: A Semantics-Aware Tabular In-Context Learner
['Marco Spinaci', 'Marek Polewczyk', 'Maximilian Schambach', 'Sam Thelin']
2025
arXiv.org
0
38
['Computer Science']
2506.10741
PosterCraft: Rethinking High-Quality Aesthetic Poster Generation in a Unified Framework
['SiXiang Chen', 'Jianyu Lai', 'Jialin Gao', 'Tian Ye', 'Haoyu Chen', 'Hengyu Shi', 'Shitong Shao', 'Yunlong Lin', 'Song Fei', 'Zhaohu Xing', 'Yeying Jin', 'Junfeng Luo', 'Xiaoming Wei', 'Lei Zhu']
['cs.CV']
Generating aesthetic posters is more challenging than simple design images: it requires not only precise text rendering but also the seamless integration of abstract artistic content, striking layouts, and overall stylistic harmony. To address this, we propose PosterCraft, a unified framework that abandons prior modular pipelines and rigid, predefined layouts, allowing the model to freely explore coherent, visually compelling compositions. PosterCraft employs a carefully designed, cascaded workflow to optimize the generation of high-aesthetic posters: (i) large-scale text-rendering optimization on our newly introduced Text-Render-2M dataset; (ii) region-aware supervised fine-tuning on HQ-Poster100K; (iii) aesthetic-text-reinforcement learning via best-of-n preference optimization; and (iv) joint vision-language feedback refinement. Each stage is supported by a fully automated data-construction pipeline tailored to its specific needs, enabling robust training without complex architectural modifications. Across multiple experiments, PosterCraft significantly outperforms open-source baselines in rendering accuracy, layout coherence, and overall visual appeal, approaching the quality of SOTA commercial systems. Our code, models, and datasets can be found in the Project page: https://ephemeral182.github.io/PosterCraft
2025-06-12T14:28:12Z
null
null
null
null
null
null
null
null
null
null
2506.10892
The Diffusion Duality
['Subham Sekhar Sahoo', 'Justin Deschenaux', 'Aaron Gokaslan', 'Guanghan Wang', 'Justin Chiu', 'Volodymyr Kuleshov']
['cs.LG', 'cs.AI', 'cs.CL']
Uniform-state discrete diffusion models hold the promise of fast text generation due to their inherent ability to self-correct. However, they are typically outperformed by autoregressive models and masked diffusion models. In this work, we narrow this performance gap by leveraging a key insight: Uniform-state diffusion processes naturally emerge from an underlying Gaussian diffusion. Our method, Duo, transfers powerful techniques from Gaussian diffusion to improve both training and sampling. First, we introduce a curriculum learning strategy guided by the Gaussian process, doubling training speed by reducing variance. Models trained with curriculum learning surpass autoregressive models in zero-shot perplexity on 3 of 7 benchmarks. Second, we present Discrete Consistency Distillation, which adapts consistency distillation from the continuous to the discrete setting. This algorithm unlocks few-step generation in diffusion language models by accelerating sampling by two orders of magnitude. We provide the code and model checkpoints on the project page: http://s-sahoo.github.io/duo
2025-06-12T16:55:35Z
ICML 2025. We provide the code at: https://github.com/s-sahoo/duo
null
null
null
null
null
null
null
null
null
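For the Duo record above, a small numerical illustration of the stated insight that uniform-state discrete diffusion emerges from an underlying Gaussian diffusion: perturb one-hot token embeddings with variance-preserving Gaussian noise and take an argmax. The schedule parameterization below is an assumption for illustration:

```python
import torch
import torch.nn.functional as F

def gaussian_to_uniform_state(tokens: torch.Tensor, vocab: int, alpha: float) -> torch.Tensor:
    """Variance-preserving Gaussian corruption of one-hots, then argmax.
    alpha=1 returns the clean tokens; as alpha -> 0 the result tends to a
    uniformly random token, i.e., a uniform-state discrete corruption."""
    onehot = F.one_hot(tokens, vocab).float()
    z = alpha * onehot + (1 - alpha ** 2) ** 0.5 * torch.randn_like(onehot)
    return z.argmax(dim=-1)
```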
2506.10896
BioClinical ModernBERT: A State-of-the-Art Long-Context Encoder for Biomedical and Clinical NLP
['Thomas Sounack', 'Joshua Davis', 'Brigitte Durieux', 'Antoine Chaffin', 'Tom J. Pollard', 'Eric Lehman', 'Alistair E. W. Johnson', 'Matthew McDermott', 'Tristan Naumann', 'Charlotta Lindvall']
['cs.CL', 'cs.AI']
Encoder-based transformer models are central to biomedical and clinical Natural Language Processing (NLP), as their bidirectional self-attention makes them well-suited for efficiently extracting structured information from unstructured text through discriminative tasks. However, encoders have seen slower development compared to decoder models, leading to limited domain adaptation in biomedical and clinical settings. We introduce BioClinical ModernBERT, a domain-adapted encoder that builds on the recent ModernBERT release, incorporating long-context processing and substantial improvements in speed and performance for biomedical and clinical NLP. BioClinical ModernBERT is developed through continued pretraining on the largest biomedical and clinical corpus to date, with over 53.5 billion tokens, and addresses a key limitation of prior clinical encoders by leveraging 20 datasets from diverse institutions, domains, and geographic regions, rather than relying on data from a single source. It outperforms existing biomedical and clinical encoders on four downstream tasks spanning a broad range of use cases. We release both base (150M parameters) and large (396M parameters) versions of BioClinical ModernBERT, along with training checkpoints to support further research.
2025-06-12T17:01:11Z
null
null
null
null
null
null
null
null
null
null
2506.10910
Magistral
['Mistral-AI', ':', 'Abhinav Rastogi', 'Albert Q. Jiang', 'Andy Lo', 'Gabrielle Berrada', 'Guillaume Lample', 'Jason Rute', 'Joep Barmentlo', 'Karmesh Yadav', 'Kartik Khandelwal', 'Khyathi Raghavi Chandu', 'Léonard Blier', 'Lucile Saulnier', 'Matthieu Dinot', 'Maxime Darrin', 'Neha Gupta', 'Roman Soletskyi', 'Sagar Vaze', 'Teven Le Scao', 'Yihan Wang', 'Adam Yang', 'Alexander H. Liu', 'Alexandre Sablayrolles', 'Amélie Héliou', 'Amélie Martin', 'Andy Ehrenberg', 'Anmol Agarwal', 'Antoine Roux', 'Arthur Darcet', 'Arthur Mensch', 'Baptiste Bout', 'Baptiste Rozière', 'Baudouin De Monicault', 'Chris Bamford', 'Christian Wallenwein', 'Christophe Renaudin', 'Clémence Lanfranchi', 'Darius Dabert', 'Devon Mizelle', 'Diego de las Casas', 'Elliot Chane-Sane', 'Emilien Fugier', 'Emma Bou Hanna', 'Gauthier Delerce', 'Gauthier Guinet', 'Georgii Novikov', 'Guillaume Martin', 'Himanshu Jaju', 'Jan Ludziejewski', 'Jean-Hadrien Chabran', 'Jean-Malo Delignon', 'Joachim Studnia', 'Jonas Amar', 'Josselin Somerville Roberts', 'Julien Denize', 'Karan Saxena', 'Kush Jain', 'Lingxiao Zhao', 'Louis Martin', 'Luyu Gao', 'Lélio Renard Lavaud', 'Marie Pellat', 'Mathilde Guillaumin', 'Mathis Felardos', 'Maximilian Augustin', 'Mickaël Seznec', 'Nikhil Raghuraman', 'Olivier Duchenne', 'Patricia Wang', 'Patrick von Platen', 'Patryk Saffer', 'Paul Jacob', 'Paul Wambergue', 'Paula Kurylowicz', 'Pavankumar Reddy Muddireddy', 'Philomène Chagniot', 'Pierre Stock', 'Pravesh Agrawal', 'Romain Sauvestre', 'Rémi Delacourt', 'Sanchit Gandhi', 'Sandeep Subramanian', 'Shashwat Dalal', 'Siddharth Gandhi', 'Soham Ghosh', 'Srijan Mishra', 'Sumukh Aithal', 'Szymon Antoniak', 'Thibault Schueller', 'Thibaut Lavril', 'Thomas Robert', 'Thomas Wang', 'Timothée Lacroix', 'Valeriia Nemychnikova', 'Victor Paltz', 'Virgile Richard', 'Wen-Ding Li', 'William Marshall', 'Xuanyu Zhang', 'Yunhao Tang']
['cs.CL']
We introduce Magistral, Mistral's first reasoning model and our own scalable reinforcement learning (RL) pipeline. Instead of relying on existing implementations and RL traces distilled from prior models, we follow a ground-up approach, relying solely on our own models and infrastructure. Notably, we demonstrate a stack that enabled us to explore the limits of pure RL training of LLMs, present a simple method to force the reasoning language of the model, and show that RL on text data alone maintains most of the initial checkpoint's capabilities. We find that RL on text maintains or improves multimodal understanding, instruction following and function calling. We present Magistral Medium, trained for reasoning on top of Mistral Medium 3 with RL alone, and we open-source Magistral Small (Apache 2.0) which further includes cold-start data from Magistral Medium.
2025-06-12T17:22:37Z
null
null
null
null
null
null
null
null
null
null
2,506.10941
VINCIE: Unlocking In-context Image Editing from Video
['Leigang Qu', 'Feng Cheng', 'Ziyan Yang', 'Qi Zhao', 'Shanchuan Lin', 'Yichun Shi', 'Yicong Li', 'Wenjie Wang', 'Tat-Seng Chua', 'Lu Jiang']
['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.MM']
In-context image editing aims to modify images based on a contextual sequence comprising text and previously generated images. Existing methods typically depend on task-specific pipelines and expert models (e.g., segmentation and inpainting) to curate training data. In this work, we explore whether an in-context image editing model can be learned directly from videos. We introduce a scalable approach to annotate videos as interleaved multimodal sequences. To effectively learn from this data, we design a block-causal diffusion transformer trained on three proxy tasks: next-image prediction, current segmentation prediction, and next-segmentation prediction. Additionally, we propose a novel multi-turn image editing benchmark to advance research in this area. Extensive experiments demonstrate that our model exhibits strong in-context image editing capabilities and achieves state-of-the-art results on two multi-turn image editing benchmarks. Despite being trained exclusively on videos, our model also shows promising abilities in multi-concept composition, story generation, and chain-of-editing applications.
2025-06-12T17:46:54Z
Project page: https://vincie2025.github.io/
null
null
null
null
null
null
null
null
null
2,506.1096
ChineseHarm-Bench: A Chinese Harmful Content Detection Benchmark
['Kangwei Liu', 'Siyuan Cheng', 'Bozhong Tian', 'Xiaozhuan Liang', 'Yuyang Yin', 'Meng Han', 'Ningyu Zhang', 'Bryan Hooi', 'Xi Chen', 'Shumin Deng']
['cs.CL', 'cs.AI', 'cs.CR', 'cs.IR', 'cs.LG']
Large language models (LLMs) have been increasingly applied to automated harmful content detection tasks, assisting moderators in identifying policy violations and improving the overall efficiency and accuracy of content review. However, existing resources for harmful content detection are predominantly focused on English, with Chinese datasets remaining scarce and often limited in scope. We present a comprehensive, professionally annotated benchmark for Chinese content harm detection, which covers six representative categories and is constructed entirely from real-world data. Our annotation process further yields a knowledge rule base that provides explicit expert knowledge to assist LLMs in Chinese harmful content detection. In addition, we propose a knowledge-augmented baseline that integrates both human-annotated knowledge rules and implicit knowledge from large language models, enabling smaller models to achieve performance comparable to state-of-the-art LLMs. Code and data are available at https://github.com/zjunlp/ChineseHarm-bench.
2025-06-12T17:57:05Z
Work in progress
null
null
null
null
null
null
null
null
null
2,506.11029
Output Scaling: YingLong-Delayed Chain of Thought in a Large Pretrained Time Series Forecasting Model
['Xue Wang', 'Tian Zhou', 'Jinyang Gao', 'Bolin Ding', 'Jingren Zhou']
['cs.LG', 'cs.AI']
We present a joint forecasting framework for time series prediction that contrasts with traditional direct or recursive methods. This framework achieves state-of-the-art performance for our designed foundation model, YingLong, and reveals a novel scaling effect: longer outputs significantly enhance model accuracy due to delayed chain-of-thought reasoning in our non-causal approach. YingLong is a non-causal, bidirectional attention encoder-only transformer trained through masked token recovery, aligning more effectively with language understanding tasks than with generation tasks. Additionally, we boost performance by tackling output variance with a multi-input ensemble. We release four foundation models ranging from 6M to 300M parameters, demonstrating superior results in zero-shot tasks on the ETT and Weather datasets, where YingLong achieves the best performance in more than 60% of cases. To ensure generalizability, we assessed the models using the GIFT-Eval benchmark, which comprises 23 time series datasets across 7 domains. YingLong significantly outperformed the best time-series foundation models and end-to-end trained models by 14% and 44% in rank, respectively. The pretrained 300M model is available at https://huggingface.co/qcw1314/YingLong_300m
2025-05-20T14:31:06Z
null
null
null
Output Scaling: YingLong-Delayed Chain of Thought in a Large Pretrained Time Series Forecasting Model
['Xue Wang', 'Tian Zhou', 'Jinyang Gao', 'Bolin Ding', 'Jingren Zhou']
2,025
arXiv.org
0
58
['Computer Science']
2,506.11115
Incorporating Domain Knowledge into Materials Tokenization
['Yerim Oh', 'Jun-Hyung Park', 'Junho Kim', 'SungHo Kim', 'SangKeun Lee']
['cs.CL', 'cs.AI']
While language models are increasingly utilized in materials science, typical models rely on frequency-centric tokenization methods originally developed for natural language processing. However, these methods frequently produce excessive fragmentation and semantic loss, failing to maintain the structural and semantic integrity of material concepts. To address this issue, we propose MATTER, a novel tokenization approach that integrates material knowledge into tokenization. Based on MatDetector trained on our materials knowledge base and a re-ranking method prioritizing material concepts in token merging, MATTER maintains the structural integrity of identified material concepts and prevents fragmentation during tokenization, ensuring their semantic meaning remains intact. The experimental results demonstrate that MATTER outperforms existing tokenization methods, achieving an average performance gain of $4\%$ and $2\%$ in the generation and classification tasks, respectively. These results underscore the importance of domain knowledge for tokenization strategies in scientific text processing. Our code is available at https://github.com/yerimoh/MATTER
2025-06-09T04:59:13Z
null
null
null
null
null
null
null
null
null
null
2,506.1113
A Self-Refining Framework for Enhancing ASR Using TTS-Synthesized Data
['Cheng-Kang Chou', 'Chan-Jan Hsu', 'Ho-Lam Chung', 'Liang-Hsuan Tseng', 'Hsi-Chun Cheng', 'Yu-Kuan Fu', 'Kuan Po Huang', 'Hung-Yi Lee']
['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS']
We propose a self-refining framework that enhances ASR performance with only unlabeled datasets. The process starts with an existing ASR model generating pseudo-labels on unannotated speech, which are then used to train a high-fidelity text-to-speech (TTS) system. Then, synthesized speech-text pairs are bootstrapped into the original ASR system, completing the closed-loop self-improvement cycle. We demonstrate the effectiveness of the framework on Taiwanese Mandarin speech. Leveraging 6,000 hours of unlabeled speech, a moderate amount of text data, and synthetic content from the AI models, we adapt Whisper-large-v2 into a specialized model, Twister. Twister reduces error rates by up to 20% on Mandarin and 50% on Mandarin-English code-switching benchmarks compared to Whisper. These results highlight the framework as a compelling alternative to pseudo-labeling self-distillation approaches and a practical pathway for improving ASR performance in low-resource or domain-specific settings.
2025-06-10T17:30:32Z
null
null
null
null
null
null
null
null
null
null
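The closed-loop cycle this abstract describes is easy to picture as pseudocode. A minimal sketch follows; every helper (`transcribe`, `train_tts`, `synthesize`, `finetune_asr`) is a hypothetical stub standing in for the real ASR, TTS, and fine-tuning components, not the authors' API.

```python
# Minimal sketch of the closed-loop self-refinement cycle. All helpers are
# hypothetical stubs standing in for a real ASR model, TTS trainer,
# synthesizer, and fine-tuning routine.
from typing import List, Tuple

def transcribe(asr: str, wav: str) -> str:
    return "pseudo transcript"                     # stub: real ASR decoding

def train_tts(pairs: List[Tuple[str, str]]) -> str:
    return "tts-model"                             # stub: real TTS training

def synthesize(tts: str, text: str) -> str:
    return f"synthetic-wav({text})"                # stub: real TTS inference

def finetune_asr(asr: str, pairs: List[Tuple[str, str]]) -> str:
    return asr + "+refined"                        # stub: real fine-tuning

def self_refine(asr: str, unlabeled: List[str], texts: List[str], rounds: int = 1) -> str:
    for _ in range(rounds):
        pseudo = [(w, transcribe(asr, w)) for w in unlabeled]   # 1) pseudo-label speech
        tts = train_tts(pseudo)                                 # 2) train TTS on the pairs
        synth = [(synthesize(tts, t), t) for t in texts]        # 3) synthetic speech-text pairs
        asr = finetune_asr(asr, synth)                          # 4) close the loop
    return asr

print(self_refine("whisper-large-v2", ["wav0", "wav1"], ["text a", "text b"]))
```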
2,506.11305
Don't Pay Attention
['Mohammad Hammoud', 'Devang Acharya']
['cs.CL', 'cs.AI']
The Transformer has become the de facto standard for large language models and a wide range of downstream tasks across various domains. Despite its numerous advantages like inherent training parallelism, the Transformer still faces key challenges due to its inability to effectively process sequences beyond a fixed context window and the quadratic complexity of its attention mechanism. These challenges have renewed interest in RNN-like architectures, which offer linear scaling with sequence length and improved handling of long-range dependencies, albeit with limited parallelism due to their inherently recurrent nature. In this paper, we propose Avey, a new neural foundational architecture that breaks away from both attention and recurrence. Avey comprises a ranker and an autoregressive neural processor, which collaboratively identify and contextualize only the most relevant tokens for any given token, regardless of their positions in the sequence. Specifically, Avey decouples sequence length from context width, thus enabling effective processing of arbitrarily long sequences. Experimental results show that Avey compares favorably to the Transformer across a variety of standard short-range NLP benchmarks, while notably excelling at capturing long-range dependencies.
2025-06-12T21:11:06Z
null
null
null
null
null
null
null
null
null
null
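Avey's central idea, as described above, is that a ranker selects only the most relevant tokens for any given token, regardless of position, so context width is decoupled from sequence length. The following is an illustrative top-k selection sketch of that idea only; the paper's actual ranker and autoregressive neural processor are more involved.

```python
import torch

def rank_topk(tokens: torch.Tensor, query_idx: int, k: int = 4) -> torch.Tensor:
    """For a given token, pick the k most relevant tokens anywhere in the
    sequence by dot-product similarity (generic sketch, not Avey's ranker)."""
    q = tokens[query_idx]                     # (d,) representation of the query token
    scores = tokens @ q                       # (seq_len,) relevance scores
    topk = torch.topk(scores, k).indices      # indices of the k most relevant tokens
    return tokens[topk]                       # (k, d) tokens handed to the processor

seq = torch.randn(1000, 32)                   # arbitrarily long toy sequence
selected = rank_topk(seq, query_idx=999, k=4)
print(selected.shape)                         # torch.Size([4, 32])
```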
2,506.1135
GLAP: General contrastive audio-text pretraining across domains and languages
['Heinrich Dinkel', 'Zhiyong Yan', 'Tianzi Wang', 'Yongqing Wang', 'Xingwei Sun', 'Yadong Niu', 'Jizhong Liu', 'Gang Li', 'Junbo Zhang', 'Jian Luan']
['cs.SD', 'cs.CL', 'eess.AS']
Contrastive Language Audio Pretraining (CLAP) is a widely-used method to bridge the gap between audio and text domains. Current CLAP methods enable sound and music retrieval in English, ignoring multilingual spoken content. To address this, we introduce general language audio pretraining (GLAP), which expands CLAP with multilingual and multi-domain abilities. GLAP demonstrates its versatility by achieving competitive performance on standard audio-text retrieval benchmarks like Clotho and AudioCaps, while significantly surpassing existing methods in speech retrieval and classification tasks. Additionally, GLAP achieves strong results on widely used sound-event zero-shot benchmarks, while simultaneously outperforming previous methods on speech content benchmarks. Further keyword spotting evaluations across 50 languages emphasize GLAP's advanced multilingual capabilities. Finally, multilingual sound and music understanding is evaluated across four languages. Checkpoints and Source: https://github.com/xiaomi-research/dasheng-glap.
2025-06-12T22:54:31Z
null
null
null
GLAP: General contrastive audio-text pretraining across domains and languages
['Heinrich Dinkel', 'Zhiyong Yan', 'Tianzi Wang', 'Yongqing Wang', 'Xingwei Sun', 'Yadong Niu', 'Jizhong Liu', 'Gang Li', 'Junbo Zhang', 'Jian Luan']
2,025
arXiv.org
0
33
['Computer Science', 'Engineering']
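GLAP extends the CLAP recipe, whose core is a symmetric contrastive objective over paired audio and text embeddings. A minimal sketch of that objective family follows; the temperature and embedding sizes are placeholder values, not GLAP's configuration.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(audio_emb: torch.Tensor,
                    text_emb: torch.Tensor,
                    temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired audio/text embeddings,
    the objective family behind CLAP/GLAP (placeholder hyperparameters)."""
    a = F.normalize(audio_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = a @ t.T / temperature           # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0))        # matched pairs sit on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

loss = clip_style_loss(torch.randn(8, 128), torch.randn(8, 128))
print(float(loss))
```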
2,506.11474
Med-PRM: Medical Reasoning Models with Stepwise, Guideline-verified Process Rewards
['Jaehoon Yun', 'Jiwoong Sohn', 'Jungwoo Park', 'Hyunjae Kim', 'Xiangru Tang', 'Yanjun Shao', 'Yonghoe Koo', 'Minhyeok Ko', 'Qingyu Chen', 'Mark Gerstein', 'Michael Moor', 'Jaewoo Kang']
['cs.CL']
Large language models have shown promise in clinical decision making, but current approaches struggle to localize and correct errors at specific steps of the reasoning process. This limitation is critical in medicine, where identifying and addressing reasoning errors is essential for accurate diagnosis and effective patient care. We introduce Med-PRM, a process reward modeling framework that leverages retrieval-augmented generation to verify each reasoning step against established medical knowledge bases. By verifying intermediate reasoning steps with evidence retrieved from clinical guidelines and literature, our model can precisely assess the reasoning quality in a fine-grained manner. Evaluations on five medical QA benchmarks and two open-ended diagnostic tasks demonstrate that Med-PRM achieves state-of-the-art performance, improving the performance of base models by up to 13.50%. Moreover, we demonstrate the generality of Med-PRM by integrating it in a plug-and-play fashion with strong policy models such as Meerkat, achieving over 80% accuracy on MedQA for the first time using small-scale models of 8 billion parameters. Our code and data are available at: https://med-prm.github.io/
2025-06-13T05:36:30Z
null
null
null
null
null
null
null
null
null
null
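The step-wise, retrieval-verified scoring that the Med-PRM abstract describes can be sketched as follows; `retrieve` and `score_step` are hypothetical stubs standing in for a real guideline retriever and reward model, not the authors' implementation.

```python
# Sketch of step-wise, retrieval-verified process rewards in the spirit of
# Med-PRM: each intermediate step is scored against retrieved evidence,
# rather than scoring only the final answer.
from typing import List

def retrieve(step: str, k: int = 3) -> List[str]:
    # Placeholder: a real system queries clinical guidelines and literature.
    return [f"guideline snippet {i} for: {step[:30]}" for i in range(k)]

def score_step(step: str, evidence: List[str]) -> float:
    # Placeholder: a real PRM scores the step conditioned on the evidence.
    return 1.0 if evidence else 0.0

def process_reward(reasoning_steps: List[str]) -> List[float]:
    return [score_step(s, retrieve(s)) for s in reasoning_steps]

print(process_reward(["Patient presents with chest pain ...",
                      "ECG shows ST elevation, so ...",
                      "Therefore the diagnosis is ..."]))
```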
2,506.11515
Manager: Aggregating Insights from Unimodal Experts in Two-Tower VLMs and MLLMs
['Xiao Xu', 'Libo Qin', 'Wanxiang Che', 'Min-Yen Kan']
['cs.CV', 'cs.CL', 'cs.LG']
Two-Tower Vision--Language Models (VLMs) have demonstrated strong performance across various downstream VL tasks. While BridgeTower further enhances performance by building bridges between encoders, it \textit{(i)} suffers from ineffective layer-by-layer utilization of unimodal representations, \textit{(ii)} restricts the flexible exploitation of different levels of unimodal semantic knowledge, and \textit{(iii)} is limited to the evaluation on traditional low-resolution datasets only with the Two-Tower VLM architecture. In this work, we propose Manager, a lightweight, efficient and effective plugin that adaptively aggregates insights from different levels of pre-trained unimodal experts to facilitate more comprehensive VL alignment and fusion. First, under the Two-Tower VLM architecture, we introduce ManagerTower, a novel VLM that introduces the manager in each cross-modal layer. Whether with or without VL pre-training, ManagerTower outperforms previous strong baselines and achieves superior performance on 4 downstream VL tasks. Moreover, we extend our exploration to the latest Multimodal Large Language Model (MLLM) architecture. We demonstrate that LLaVA-OV-Manager significantly boosts the zero-shot performance of LLaVA-OV across different categories of capabilities, images, and resolutions on 20 downstream datasets, whether the multi-grid algorithm is enabled or not. In-depth analysis reveals that both our manager and the multi-grid algorithm can be viewed as a plugin that improves the visual representation by capturing more diverse visual details from two orthogonal perspectives (depth and width). Their synergy can mitigate the semantic ambiguity caused by the multi-grid algorithm and further improve performance. Code and models are available at https://github.com/LooperXX/ManagerTower.
2025-06-13T07:16:41Z
Accepted by IEEE Transactions on Circuits and Systems for Video Technology (TCSVT). June 2025. DOI: https://doi.org/10.1109/TCSVT.2025.3578266
null
10.1109/TCSVT.2025.3578266
Manager: Aggregating Insights from Unimodal Experts in Two-Tower VLMs and MLLMs
['Xiao Xu', 'Libo Qin', 'Wanxiang Che', 'Min-Yen Kan']
2,025
IEEE transactions on circuits and systems for video technology (Print)
0
143
['Computer Science']
2,506.11543
FIMA-Q: Post-Training Quantization for Vision Transformers by Fisher Information Matrix Approximation
['Zhuguanyu Wu', 'Shihe Wang', 'Jiayi Zhang', 'Jiaxin Chen', 'Yunhong Wang']
['cs.CV', 'cs.AI', 'cs.LG']
Post-training quantization (PTQ) has stood out as a cost-effective and promising model compression paradigm in recent years, as it avoids computationally intensive model retraining. Nevertheless, current PTQ methods for Vision Transformers (ViTs) still suffer from significant accuracy degradation, especially under low-bit quantization. To address these shortcomings, we analyze the prevailing Hessian-guided quantization loss, and uncover certain limitations of conventional Hessian approximations. Following the block-wise reconstruction framework, we propose a novel PTQ method for ViTs, dubbed FIMA-Q. Specifically, we first establish the connection between KL divergence and FIM, which enables fast computation of the quantization loss during reconstruction. We further propose an efficient FIM approximation method, namely DPLR-FIM, by employing the diagonal plus low-rank principle, and formulate the ultimate quantization loss. Our extensive experiments, conducted across various vision tasks with representative ViT-based architectures on public datasets, demonstrate that our method substantially improves accuracy compared to state-of-the-art approaches, especially in the case of low-bit quantization. The source code is available at https://github.com/ShiheWang/FIMA-Q.
2025-06-13T07:57:38Z
CVPR 2025 Highlight
null
null
null
null
null
null
null
null
null
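The diagonal-plus-low-rank idea behind DPLR-FIM can be illustrated through the quadratic loss it enables: with F ≈ diag(d) + U Uᵀ, the quantization loss δᵀFδ is computable without materializing the full d x d matrix. A small numerical sketch, with placeholder shapes and values:

```python
import numpy as np

def dplr_quadratic_loss(delta: np.ndarray, d: np.ndarray, U: np.ndarray) -> float:
    """delta^T (diag(d) + U U^T) delta, evaluated in O(dim * rank) without
    forming the full matrix (illustrative of the DPLR principle only)."""
    diag_term = float(np.sum(d * delta ** 2))        # contribution of diag(d)
    lowrank_term = float(np.sum((U.T @ delta) ** 2)) # ||U^T delta||^2
    return diag_term + lowrank_term

rng = np.random.default_rng(0)
dim, rank = 1024, 8
delta = rng.normal(size=dim)       # weight perturbation induced by quantization
d = np.abs(rng.normal(size=dim))   # diagonal part of the FIM approximation
U = rng.normal(size=(dim, rank))   # low-rank factor
print(dplr_quadratic_loss(delta, d, U))
```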
2,506.11702
Configurable Preference Tuning with Rubric-Guided Synthetic Data
['Víctor Gallego']
['cs.CL', 'cs.AI']
Models of human feedback for AI alignment, such as those underpinning Direct Preference Optimization (DPO), often bake in a singular, static set of preferences, limiting adaptability. This paper challenges the assumption of monolithic preferences by introducing Configurable Preference Tuning (CPT), a novel framework for endowing language models with the ability to dynamically adjust their behavior based on explicit, human-interpretable directives. CPT leverages synthetically generated preference data, conditioned on system prompts derived from structured, fine-grained rubrics that define desired attributes like writing style. By fine-tuning with these rubric-guided preferences, the LLM learns to modulate its outputs at inference time in response to the system prompt, without retraining. This approach not only offers fine-grained control but also provides a mechanism for modeling more nuanced and context-dependent human feedback. Several experimental artifacts, such as training code, generated datasets and fine-tuned models are released at https://github.com/vicgalle/configurable-preference-tuning
2025-06-13T12:17:38Z
Accepted to ICML 2025 Workshop on Models of Human Feedback for AI Alignment
null
null
null
null
null
null
null
null
null
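CPT's data recipe, as described, conditions DPO-style preference pairs on system prompts derived from fine-grained rubrics. A minimal sketch of such pair construction follows; the rubric text and field names are illustrative, not the released dataset's schema.

```python
# Sketch of rubric-conditioned preference-pair construction in the spirit of
# CPT: the system prompt encodes a fine-grained style rubric, and the chosen
# response follows it while the rejected one does not.
from typing import Dict

def make_pair(rubric: str, prompt: str, on_rubric: str, off_rubric: str) -> Dict[str, str]:
    system = f"Follow this style rubric when answering: {rubric}"
    return {"system": system, "prompt": prompt,
            "chosen": on_rubric, "rejected": off_rubric}

pair = make_pair(
    rubric="terse, formal register; no figurative language",
    prompt="Explain overfitting.",
    on_rubric="Overfitting: a model fits training noise and generalizes poorly.",
    off_rubric="Overfitting is like memorizing answers before the big exam!",
)
print(pair["system"])
```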
2,506.11903
GeistBERT: Breathing Life into German NLP
['Raphael Scheible-Schmitt', 'Johann Frei']
['cs.CL']
Advances in transformer-based language models have highlighted the benefits of language-specific pre-training on high-quality corpora. In this context, German NLP stands to gain from updated architectures and modern datasets tailored to the linguistic characteristics of the German language. GeistBERT seeks to improve German language processing by incrementally training on a diverse corpus and optimizing model performance across various NLP tasks. We pre-trained GeistBERT using fairseq, following the RoBERTa base configuration with Whole Word Masking (WWM), and initialized from GottBERT weights. The model was trained on a 1.3 TB German corpus with dynamic masking and a fixed sequence length of 512 tokens. For evaluation, we fine-tuned the model on standard downstream tasks, including NER (CoNLL 2003, GermEval 2014), text classification (GermEval 2018 coarse/fine, 10kGNAD), and NLI (German XNLI), using $F_1$ score and accuracy as evaluation metrics. GeistBERT achieved strong results across all tasks, leading among base models and setting a new state-of-the-art (SOTA) in GermEval 2018 fine text classification. It also outperformed several larger models, particularly in classification benchmarks. To support research in German NLP, we release GeistBERT under the MIT license.
2025-06-13T15:53:17Z
null
null
null
null
null
null
null
null
null
null
2,506.11991
VGR: Visual Grounded Reasoning
['Jiacong Wang', 'Zijian Kang', 'Haochen Wang', 'Haiyong Jiang', 'Jiawen Li', 'Bohong Wu', 'Ya Wang', 'Jiao Ran', 'Xiao Liang', 'Chao Feng', 'Jun Xiao']
['cs.CV', 'cs.AI', 'cs.CL']
In the field of multimodal chain-of-thought (CoT) reasoning, existing approaches predominantly rely on reasoning in pure language space, which inherently suffers from language bias and is largely confined to math or science domains. This narrow focus limits their ability to handle complex visual reasoning tasks that demand comprehensive understanding of image details. To address these limitations, this paper introduces VGR, a novel reasoning multimodal large language model (MLLM) with enhanced fine-grained visual perception capabilities. Unlike traditional MLLMs that answer the question or reason solely in the language space, our VGR first detects relevant regions that may help to solve problems, and then provides precise answers based on replayed image regions. To achieve this, we construct a large-scale SFT dataset called VGR-SFT that contains reasoning data with mixed vision grounding and language deduction. The inference pipeline of VGR allows the model to choose bounding boxes for visual reference, and a replay stage is introduced to integrate the corresponding regions into the reasoning process, enhancing multimodal comprehension. Experiments on the LLaVA-NeXT-7B baseline show that VGR achieves superior performance on multi-modal benchmarks requiring comprehensive image detail understanding. Compared to the baseline, VGR uses only 30% of the image token count while delivering scores of +4.1 on MMStar, +7.1 on AI2D, and a +12.9 improvement on ChartQA.
2025-06-13T17:47:43Z
9 pages, 4 figures
null
null
null
null
null
null
null
null
null
2,506.12119
Can Mixture-of-Experts Surpass Dense LLMs Under Strictly Equal Resources?
['Houyi Li', 'Ka Man Lo', 'Ziqi Wang', 'Zili Wang', 'Wenzhen Zheng', 'Shuigeng Zhou', 'Xiangyu Zhang', 'Daxin Jiang']
['cs.CL', 'cs.AI']
Mixture-of-Experts (MoE) language models dramatically expand model capacity and achieve remarkable performance without increasing per-token compute. However, can MoEs surpass dense architectures under strictly equal resource constraints - that is, when the total parameter count, training compute, and data budget are identical? This question remains under-explored despite its significant practical value and potential. In this paper, we propose a novel perspective and methodological framework to study this question thoroughly. First, we comprehensively investigate the architecture of MoEs and achieve an optimal model design that maximizes the performance. Based on this, we subsequently find that an MoE model with an activation rate in an optimal region is able to outperform its dense counterpart under the same total parameter, training compute and data resource. More importantly, this optimal region remains consistent across different model sizes. Although an additional amount of data turns out to be a trade-off for the enhanced performance, we show that this can be resolved via reusing data. We validate our findings through extensive experiments, training nearly 200 language models at 2B scale and over 50 at 7B scale, cumulatively processing 50 trillion tokens. All models will be released publicly.
2025-06-13T17:59:05Z
null
null
null
Can Mixture-of-Experts Surpass Dense LLMs Under Strictly Equal Resources?
['Houyi Li', 'Ka Man Lo', 'Ziqi Wang', 'Zili Wang', 'Wenzheng Zheng', 'Shuigeng Zhou', 'Xiangyu Zhang', 'Daxin Jiang']
2,025
arXiv.org
0
63
['Computer Science']
2,506.12242
Large Language Models for History, Philosophy, and Sociology of Science: Interpretive Uses, Methodological Challenges, and Critical Perspectives
['Arno Simons', 'Michael Zichert', 'Adrian Wüthrich']
['cs.CL', 'cs.AI', 'cs.CY', 'A.1; I.2.1; I.2.7; J.4; J.5']
This paper explores the use of large language models (LLMs) as research tools in the history, philosophy, and sociology of science (HPSS). LLMs are remarkably effective at processing unstructured text and inferring meaning from context, offering new affordances that challenge long-standing divides between computational and interpretive methods. This raises both opportunities and challenges for HPSS, which emphasizes interpretive methodologies and understands meaning as context-dependent, ambiguous, and historically situated. We argue that HPSS is uniquely positioned not only to benefit from LLMs' capabilities but also to interrogate their epistemic assumptions and infrastructural implications. To this end, we first offer a concise primer on LLM architectures and training paradigms tailored to non-technical readers. We frame LLMs not as neutral tools but as epistemic infrastructures that encode assumptions about meaning, context, and similarity, conditioned by their training data, architecture, and patterns of use. We then examine how computational techniques enhanced by LLMs, such as structuring data, detecting patterns, and modeling dynamic processes, can be applied to support interpretive research in HPSS. Our analysis compares full-context and generative models, outlines strategies for domain and task adaptation (e.g., continued pretraining, fine-tuning, and retrieval-augmented generation), and evaluates their respective strengths and limitations for interpretive inquiry in HPSS. We conclude with four lessons for integrating LLMs into HPSS: (1) model selection involves interpretive trade-offs; (2) LLM literacy is foundational; (3) HPSS must define its own benchmarks and corpora; and (4) LLMs should enhance, not replace, interpretive methods.
2025-06-13T21:44:13Z
27 pages, 2 tables
null
null
Large Language Models for History, Philosophy, and Sociology of Science: Interpretive Uses, Methodological Challenges, and Critical Perspectives
['Arno Simons', 'Michael Zichert', 'Adrian Wüthrich']
2,025
arXiv.org
0
79
['Computer Science']
2,506.12364
MM-R5: MultiModal Reasoning-Enhanced ReRanker via Reinforcement Learning for Document Retrieval
['Mingjun Xu', 'Jinhan Dong', 'Jue Hou', 'Zehui Wang', 'Sihang Li', 'Zhifeng Gao', 'Renxin Zhong', 'Hengxing Cai']
['cs.AI', 'cs.CL', 'cs.CV']
Multimodal document retrieval systems enable information access across text, images, and layouts, benefiting various domains like document-based question answering, report analysis, and interactive content summarization. Rerankers improve retrieval precision by reordering retrieved candidates. However, current multimodal reranking methods remain underexplored, with significant room for improvement in both training strategies and overall effectiveness. Moreover, the lack of explicit reasoning makes it difficult to analyze and optimize these methods further. In this paper, we propose MM-R5, a MultiModal Reasoning-Enhanced ReRanker via Reinforcement Learning for Document Retrieval, aiming to provide a more effective and reliable solution for multimodal reranking tasks. MM-R5 is trained in two stages: supervised fine-tuning (SFT) and reinforcement learning (RL). In the SFT stage, we focus on improving instruction-following and guiding the model to generate complete and high-quality reasoning chains. To support this, we introduce a novel data construction strategy that produces rich, high-quality reasoning data. In the RL stage, we design a task-specific reward framework, including a reranking reward tailored for multimodal candidates and a composite template-based reward to further refine reasoning quality. We conduct extensive experiments on MMDocIR, a challenging public benchmark spanning multiple domains. MM-R5 achieves state-of-the-art performance on most metrics and delivers comparable results to much larger models on the remaining ones. Moreover, compared to the best retrieval-only method, MM-R5 improves recall@1 by over 4%. These results validate the effectiveness of our reasoning-enhanced training pipeline. Our code is available at https://github.com/i2vec/MM-R5 .
2025-06-14T05:55:00Z
null
null
null
MM-R5: MultiModal Reasoning-Enhanced ReRanker via Reinforcement Learning for Document Retrieval
['Mingjun Xu', 'Jinhan Dong', 'Jue Hou', 'Zehui Wang', 'Sihang Li', 'Zhifeng Gao', 'Renxin Zhong', 'Hengxing Cai']
2,025
arXiv.org
0
47
['Computer Science']
2,506.12473
TagRouter: Learning Route to LLMs through Tags for Open-Domain Text Generation Tasks
['Zhou Chen', 'Zhiqiang Wei', 'Yuqi Bai', 'Xue Xiong', 'Jianmin Wu']
['cs.CL']
Model routing allocates queries to the suitable model, improving system performance while reducing costs. However, existing routing methods face practical limitations that hinder scalability in large-scale applications and struggle to keep up with the rapid growth of the large language model (LLM) ecosystem. To tackle these challenges, we propose TagRouter, a training-free model routing method designed to optimize the synergy among multiple LLMs for open-domain text generation tasks. Experimental results demonstrate that TagRouter outperforms 13 baseline methods, increasing the acceptance rate of the system by 6.15% and reducing costs by 17.20%, achieving optimal cost-efficiency. Our findings provide the LLM community with an efficient and scalable solution for model ensembling, offering users an evolvable "super model."
2025-06-14T12:17:47Z
ACL 2025, 26 pages, 13 figures, 14 tables
null
null
null
null
null
null
null
null
null
2,506.12479
AI Flow: Perspectives, Scenarios, and Approaches
['Hongjun An', 'Wenhan Hu', 'Sida Huang', 'Siqi Huang', 'Ruanjun Li', 'Yuanzhi Liang', 'Jiawei Shao', 'Yiliang Song', 'Zihan Wang', 'Cheng Yuan', 'Chi Zhang', 'Hongyuan Zhang', 'Wenhao Zhuang', 'Xuelong Li']
['cs.AI', 'cs.CL', 'cs.CV', 'cs.DC', 'eess.SP']
Pioneered by the foundational information theory by Claude Shannon and the visionary framework of machine intelligence by Alan Turing, the convergent evolution of information and communication technologies (IT/CT) has created an unbroken wave of connectivity and computation. This synergy has sparked a technological revolution, now reaching its peak with large artificial intelligence (AI) models that are reshaping industries and redefining human-machine collaboration. However, the realization of ubiquitous intelligence faces considerable challenges due to substantial resource consumption in large models and high communication bandwidth demands. To address these challenges, AI Flow has been introduced as a multidisciplinary framework that integrates cutting-edge IT and CT advancements, with a particular emphasis on the following three key points. First, device-edge-cloud framework serves as the foundation, which integrates end devices, edge servers, and cloud clusters to optimize scalability and efficiency for low-latency model inference. Second, we introduce the concept of familial models, which refers to a series of different-sized models with aligned hidden features, enabling effective collaboration and the flexibility to adapt to varying resource constraints and dynamic scenarios. Third, connectivity- and interaction-based intelligence emergence is a novel paradigm of AI Flow. By leveraging communication networks to enhance connectivity, the collaboration among AI models across heterogeneous nodes achieves emergent intelligence that surpasses the capability of any single model. The innovations of AI Flow provide enhanced intelligence, timely responsiveness, and ubiquitous accessibility to AI services, paving the way for the tighter fusion of AI techniques and communication systems.
2025-06-14T12:43:07Z
Authors are with Institute of Artificial Intelligence (TeleAI), China Telecom, China. Author names are listed alphabetically by surname. This work was conducted at TeleAI, facilitated by Dr. Jiawei Shao (e-mail: shaojw2@chinatelecom.cn) under the leadership of Prof. Xuelong Li. The corresponding author is Prof. Xuelong Li (e-mail: xuelong li@ieee.org), the CTO and Chief Scientist of China Telecom
null
null
null
null
null
null
null
null
null
2,506.12704
Flexible Realignment of Language Models
['Wenhong Zhu', 'Ruobing Xie', 'Weinan Zhang', 'Rui Wang']
['cs.CL', 'cs.AI']
Realignment becomes necessary when a language model (LM) fails to meet expected performance. We propose a flexible realignment framework that supports quantitative control of alignment degree during training and inference. This framework incorporates Training-time Realignment (TrRa), which efficiently realigns the reference model by leveraging the controllable fusion of logits from both the reference and already aligned models. For example, TrRa reduces token usage by 54.63% on DeepSeek-R1-Distill-Qwen-1.5B without any performance degradation, outperforming DeepScaleR-1.5B's 33.86%. To complement TrRa during inference, we introduce a layer adapter that enables smooth Inference-time Realignment (InRa). This adapter is initialized to perform an identity transformation at the bottom layer and is inserted preceding the original layers. During inference, input embeddings are simultaneously processed by the adapter and the original layer, followed by the remaining layers, and then controllably interpolated at the logit level. We upgraded DeepSeek-R1-Distill-Qwen-7B from a slow-thinking model to one that supports both fast and slow thinking, allowing flexible alignment control even during inference. By encouraging deeper reasoning, it even surpassed its original performance.
2025-06-15T03:26:59Z
null
null
null
null
null
null
null
null
null
null
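TrRa's training-time realignment rests on controllable fusion of logits from the reference and already aligned models, and InRa interpolates controllably at the logit level during inference. A minimal sketch of logit-level interpolation, assuming a simple linear rule (the paper's exact fusion may differ):

```python
import torch

def fuse_logits(ref_logits: torch.Tensor,
                aligned_logits: torch.Tensor,
                alpha: float) -> torch.Tensor:
    """Controllably interpolate between two models at the logit level:
    alpha=0 recovers the reference model, alpha=1 the aligned model.
    (Illustrative linear fusion, not the paper's exact rule.)"""
    return (1.0 - alpha) * ref_logits + alpha * aligned_logits

# Toy usage: batch of 2 positions over a 5-token vocabulary.
ref = torch.randn(2, 5)
aligned = torch.randn(2, 5)
probs = torch.softmax(fuse_logits(ref, aligned, alpha=0.5), dim=-1)
print(probs.shape)  # torch.Size([2, 5])
```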
2,506.12776
Native Visual Understanding: Resolving Resolution Dilemmas in Vision-Language Models
['Junbo Niu', 'Yuanhong Zheng', 'Ziyang Miao', 'Hejun Dong', 'Chunjiang Ge', 'Hao Liang', 'Ma Lu', 'Bohan Zeng', 'Qiahao Zheng', 'Conghui He', 'Wentao Zhang']
['cs.CV']
Vision-Language Models (VLMs) face significant challenges when dealing with the diverse resolutions and aspect ratios of real-world images, as most existing models rely on fixed, low-resolution inputs. While recent studies have explored integrating native resolution visual encoding to improve model performance, such efforts remain fragmented and lack a systematic framework within the open-source community. Moreover, existing benchmarks fall short in evaluating VLMs under varied visual conditions, often neglecting resolution as a critical factor. To address the "Resolution Dilemma" stemming from both model design and benchmark limitations, we introduce RC-Bench, a novel benchmark specifically designed to systematically evaluate VLM capabilities under extreme visual conditions, with an emphasis on resolution and aspect ratio variations. In conjunction, we propose NativeRes-LLaVA, an open-source training framework that empowers VLMs to effectively process images at their native resolutions and aspect ratios. Based on RC-Bench and NativeRes-LLaVA, we conduct comprehensive experiments on existing visual encoding strategies. The results show that Native Resolution Visual Encoding significantly improves the performance of VLMs on RC-Bench as well as other resolution-centric benchmarks. Code is available at https://github.com/Niujunbo2002/NativeRes-LLaVA.
2025-06-15T08:58:09Z
null
null
null
Native Visual Understanding: Resolving Resolution Dilemmas in Vision-Language Models
['Junbo Niu', 'Yuanhong Zheng', 'Ziyang Miao', 'Hejun Dong', 'Chunjiang Ge', 'Hao Liang', 'Ma Lu', 'Bohan Zeng', 'Qiahao Zheng', 'Conghui He', 'Wentao Zhang']
2,025
arXiv.org
0
62
['Computer Science']
2,506.1286
QFFT, Question-Free Fine-Tuning for Adaptive Reasoning
['Wanlong Liu', 'Junxiao Xu', 'Fei Yu', 'Yukang Lin', 'Ke Ji', 'Wenyu Chen', 'Yan Xu', 'Yasheng Wang', 'Lifeng Shang', 'Benyou Wang']
['cs.CL']
Recent advancements in Long Chain-of-Thought (CoT) reasoning models have improved performance on complex tasks, but they suffer from overthinking, which generates redundant reasoning steps, especially for simple questions. This paper revisits the reasoning patterns of Long and Short CoT models, observing that the Short CoT patterns offer concise reasoning efficiently, while the Long CoT patterns excel in challenging scenarios where the Short CoT patterns struggle. To enable models to leverage both patterns, we propose Question-Free Fine-Tuning (QFFT), a fine-tuning approach that removes the input question during training and learns exclusively from Long CoT responses. This approach enables the model to adaptively employ both reasoning patterns: it prioritizes the Short CoT patterns and activates the Long CoT patterns only when necessary. Experiments on various mathematical datasets demonstrate that QFFT reduces average response length by more than 50\%, while achieving performance comparable to Supervised Fine-Tuning (SFT). Additionally, QFFT exhibits superior performance compared to SFT in noisy, out-of-domain, and low-resource scenarios.
2025-06-15T14:21:28Z
23 pages
null
null
QFFT, Question-Free Fine-Tuning for Adaptive Reasoning
['Wanlong Liu', 'Junxiao Xu', 'Fei Yu', 'Yukang Lin', 'Ke Ji', 'Wenyu Chen', 'Yan Xu', 'Yasheng Wang', 'Lifeng Shang', 'Benyou Wang']
2,025
arXiv.org
0
48
['Computer Science']
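QFFT's core move, as the abstract describes it, is to drop the input question and fine-tune on the Long CoT response alone. A toy contrast with standard SFT example construction, using made-up token ids:

```python
from typing import Dict, List

IGNORE = -100  # label value ignored by the cross-entropy loss

def sft_example(q_ids: List[int], a_ids: List[int]) -> Dict[str, List[int]]:
    """Standard SFT: condition on the question, learn only the response."""
    return {"input_ids": q_ids + a_ids,
            "labels": [IGNORE] * len(q_ids) + a_ids}

def qfft_example(q_ids: List[int], a_ids: List[int]) -> Dict[str, List[int]]:
    """QFFT: drop the question entirely and learn from the Long CoT
    response alone (illustrative reading of the abstract)."""
    return {"input_ids": list(a_ids), "labels": list(a_ids)}

q = [101, 7, 8, 9]       # toy token ids for a question
a = [11, 12, 13, 102]    # toy token ids for a Long CoT response
print(sft_example(q, a))
print(qfft_example(q, a))
```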
2,506.13006
Antibody Foundational Model : Ab-RoBERTa
['Eunna Huh', 'Hyeonsu Lee', 'Hyunjin Shin']
['cs.LG', '68T50 (Primary) 68U15 (Secondary)']
With the growing prominence of antibody-based therapeutics, antibody engineering has gained increasing attention as a critical area of research and development. Recent progress in transformer-based protein large language models (LLMs) has demonstrated promising applications in protein sequence design and structural prediction. Moreover, the availability of large-scale antibody datasets such as the Observed Antibody Space (OAS) database has opened new avenues for the development of LLMs specialized for processing antibody sequences. Among these, RoBERTa has demonstrated improved performance relative to BERT, while maintaining a smaller parameter count (125M) compared to the BERT-based protein model, ProtBERT (420M). This reduced model size enables more efficient deployment in antibody-related applications. However, despite the numerous advantages of the RoBERTa architecture, antibody-specific foundational models built upon it have remained inaccessible to the research community. In this study, we introduce Ab-RoBERTa, a RoBERTa-based antibody-specific LLM, which is publicly available at https://huggingface.co/mogam-ai/Ab-RoBERTa. This resource is intended to support a wide range of antibody-related research applications including paratope prediction or humanness assessment.
2025-06-16T00:22:07Z
14 page, 3 figures, 5 tables
null
null
null
null
null
null
null
null
null
2,506.13044
Just Go Parallel: Improving the Multilingual Capabilities of Large Language Models
['Muhammad Reza Qorib', 'Junyi Li', 'Hwee Tou Ng']
['cs.CL', 'cs.AI']
Large language models (LLMs) have demonstrated impressive translation capabilities even without being explicitly trained on parallel data. This remarkable property has led some to believe that parallel data is no longer necessary for building multilingual language models. While some attribute this to the emergent abilities of LLMs due to scale, recent work suggests that it is actually caused by incidental bilingual signals present in the training data. Various methods have been proposed to maximize the utility of parallel data to enhance the multilingual capabilities of multilingual encoder-based and encoder-decoder language models. However, some decoder-based LLMs opt to ignore parallel data instead. In this work, we conduct a systematic study on the impact of adding parallel data on LLMs' multilingual capabilities, focusing specifically on translation and multilingual common-sense reasoning. Through controlled experiments, we demonstrate that parallel data can significantly improve LLMs' multilingual capabilities.
2025-06-16T02:21:15Z
ACL 2025
null
null
null
null
null
null
null
null
null
2,506.13053
ZipVoice: Fast and High-Quality Zero-Shot Text-to-Speech with Flow Matching
['Han Zhu', 'Wei Kang', 'Zengwei Yao', 'Liyong Guo', 'Fangjun Kuang', 'Zhaoqing Li', 'Weiji Zhuang', 'Long Lin', 'Daniel Povey']
['eess.AS', 'cs.SD']
Existing large-scale zero-shot text-to-speech (TTS) models deliver high speech quality but suffer from slow inference speeds due to massive parameters. To address this issue, this paper introduces ZipVoice, a high-quality flow-matching-based zero-shot TTS model with a compact model size and fast inference speed. Key designs include: 1) a Zipformer-based flow-matching decoder to maintain adequate modeling capabilities under constrained size; 2) Average upsampling-based initial speech-text alignment and Zipformer-based text encoder to improve speech intelligibility; 3) A flow distillation method to reduce sampling steps and eliminate the inference overhead associated with classifier-free guidance. Experiments on 100k hours of multilingual data show that ZipVoice matches state-of-the-art models in speech quality, while being 3 times smaller and up to 30 times faster than a DiT-based flow-matching baseline. Code, model checkpoints, and demo samples are publicly available.
2025-06-16T02:48:17Z
null
null
null
null
null
null
null
null
null
null
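ZipVoice's decoder is trained with flow matching. A minimal, unconditional flow-matching training step of the standard form follows; the toy MLP and dimensions are placeholders, since the real model is a text-conditioned Zipformer.

```python
import torch
import torch.nn as nn

# Generic flow-matching training step: regress a velocity field toward the
# constant velocity of a linear noise-to-data path (sketch only).
dim = 16
v_theta = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))
opt = torch.optim.Adam(v_theta.parameters(), lr=1e-3)

x1 = torch.randn(32, dim)           # "data" (e.g., speech features)
x0 = torch.randn(32, dim)           # noise sample
t = torch.rand(32, 1)               # random time in [0, 1]
x_t = (1 - t) * x0 + t * x1         # point on the linear interpolation path
target = x1 - x0                    # constant velocity along the path

pred = v_theta(torch.cat([x_t, t], dim=-1))
loss = ((pred - target) ** 2).mean()
loss.backward()
opt.step()
print(float(loss))
```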
2,506.13056
Metis-RISE: RL Incentivizes and SFT Enhances Multimodal Reasoning Model Learning
['Haibo Qiu', 'Xiaohan Lan', 'Fanfan Liu', 'Xiaohu Sun', 'Delian Ruan', 'Peng Shi', 'Lin Ma']
['cs.AI', 'cs.CV', 'cs.LG']
Recent advancements in large language models (LLMs) have witnessed a surge in the development of advanced reasoning paradigms, which are now being integrated into multimodal large language models (MLLMs). However, existing approaches often fall short: methods solely employing reinforcement learning (RL) can struggle with sample inefficiency and activating entirely absent reasoning capabilities, while conventional pipelines that initiate with a cold-start supervised fine-tuning (SFT) phase before RL may restrict the model's exploratory capacity and face suboptimal convergence. In this work, we introduce \textbf{Metis-RISE} (\textbf{R}L \textbf{I}ncentivizes and \textbf{S}FT \textbf{E}nhances) for multimodal reasoning model learning. Unlike conventional approaches, Metis-RISE distinctively omits an initial SFT stage, beginning instead with an RL phase (e.g., using a Group Relative Policy Optimization variant) to incentivize and activate the model's latent reasoning capacity. Subsequently, the targeted SFT stage addresses two key challenges identified during RL: (1) \textit{inefficient trajectory sampling} for tasks where the model possesses but inconsistently applies correct reasoning, which we tackle using self-distilled reasoning trajectories from the RL model itself; and (2) \textit{fundamental capability absence}, which we address by injecting expert-augmented knowledge for prompts where the model entirely fails. This strategic application of RL for incentivization followed by SFT for enhancement forms the core of Metis-RISE, leading to two versions of our MLLMs (7B and 72B parameters). Evaluations on the OpenCompass Multimodal Reasoning Leaderboard demonstrate that both models achieve state-of-the-art performance among similar-sized models, with the 72B version ranking fourth overall. Please refer to our project page for open-source information.
2025-06-16T02:56:13Z
Project Page: https://github.com/MM-Thinking/Metis-RISE
null
null
null
null
null
null
null
null
null
2,506.13277
SeqPE: Transformer with Sequential Position Encoding
['Huayang Li', 'Yahui Liu', 'Hongyu Sun', 'Deng Cai', 'Leyang Cui', 'Wei Bi', 'Peilin Zhao', 'Taro Watanabe']
['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV']
Since self-attention layers in Transformers are permutation invariant by design, positional encodings must be explicitly incorporated to enable spatial understanding. However, fixed-size lookup tables used in traditional learnable position embeddings (PEs) limit extrapolation capabilities beyond pre-trained sequence lengths. Expert-designed methods such as ALiBi and RoPE mitigate this limitation but demand extensive modifications for adapting to new modalities, underscoring fundamental challenges in adaptability and scalability. In this work, we present SeqPE, a unified and fully learnable position encoding framework that represents each $n$-dimensional position index as a symbolic sequence and employs a lightweight sequential position encoder to learn their embeddings in an end-to-end manner. To regularize SeqPE's embedding space, we introduce two complementary objectives: a contrastive objective that aligns embedding distances with a predefined position-distance function, and a knowledge distillation loss that anchors out-of-distribution position embeddings to in-distribution teacher representations, further enhancing extrapolation performance. Experiments across language modeling, long-context question answering, and 2D image classification demonstrate that SeqPE not only surpasses strong baselines in perplexity, exact match (EM), and accuracy--particularly under context length extrapolation--but also enables seamless generalization to multi-dimensional inputs without requiring manual architectural redesign. We release our code, data, and checkpoints at https://github.com/ghrua/seqpe.
2025-06-16T09:16:40Z
null
null
null
SeqPE: Transformer with Sequential Position Encoding
['Huyang Li', 'Yahui Liu', 'Hongyu Sun', 'Deng Cai', 'Leyang Cui', 'Wei Bi', 'Peilin Zhao', 'Taro Watanabe']
2,025
arXiv.org
0
54
['Computer Science']
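SeqPE renders each position index as a symbolic sequence and learns its embedding with a lightweight sequential encoder. A minimal sketch with digit tokens and a GRU encoder follows; the symbol vocabulary, padding scheme, and encoder choice here are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SeqPESketch(nn.Module):
    """Encode a position index by embedding its digit sequence and running a
    small recurrent encoder (illustrative stand-in for the paper's
    lightweight sequential position encoder)."""

    def __init__(self, d_model: int = 64, n_symbols: int = 10, max_digits: int = 6):
        super().__init__()
        self.max_digits = max_digits
        self.digit_emb = nn.Embedding(n_symbols, d_model)
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, positions: torch.Tensor) -> torch.Tensor:
        # Render each index as a zero-padded digit sequence (toy padding).
        digits = [[int(c) for c in str(p).zfill(self.max_digits)]
                  for p in positions.tolist()]
        x = self.digit_emb(torch.tensor(digits))   # (batch, digits, d_model)
        _, h = self.encoder(x)                     # h: (1, batch, d_model)
        return h.squeeze(0)                        # (batch, d_model)

pe = SeqPESketch()
emb = pe(torch.tensor([0, 7, 123, 4096]))
print(emb.shape)  # torch.Size([4, 64])
```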
2,506.13284
AceReason-Nemotron 1.1: Advancing Math and Code Reasoning through SFT and RL Synergy
['Zihan Liu', 'Zhuolin Yang', 'Yang Chen', 'Chankyu Lee', 'Mohammad Shoeybi', 'Bryan Catanzaro', 'Wei Ping']
['cs.CL', 'cs.AI', 'cs.LG']
In this work, we investigate the synergy between supervised fine-tuning (SFT) and reinforcement learning (RL) in developing strong reasoning models. We begin by curating the SFT training data through two scaling strategies: increasing the number of collected prompts and the number of generated responses per prompt. Both approaches yield notable improvements in reasoning performance, with scaling the number of prompts resulting in more substantial gains. We then explore the following questions regarding the synergy between SFT and RL: (i) Does a stronger SFT model consistently lead to better final performance after large-scale RL training? (ii) How can we determine an appropriate sampling temperature during RL training to effectively balance exploration and exploitation for a given SFT initialization? Our findings suggest that (i) holds true, provided effective RL training is conducted, particularly when the sampling temperature is carefully chosen to maintain the temperature-adjusted entropy around 0.3, a setting that strikes a good balance between exploration and exploitation. Notably, the performance gap between initial SFT models narrows significantly throughout the RL process. Leveraging a strong SFT foundation and insights into the synergistic interplay between SFT and RL, our AceReason-Nemotron-1.1 7B model significantly outperforms AceReason-Nemotron-1.0 and achieves new state-of-the-art performance among Qwen2.5-7B-based reasoning models on challenging math and code benchmarks, thereby demonstrating the effectiveness of our post-training recipe. We release the model and data at: https://huggingface.co/nvidia/AceReason-Nemotron-1.1-7B
2025-06-16T09:27:48Z
The AceReason-Nemotron collection: https://huggingface.co/collections/nvidia/acereason-682f4e1261dc22f697fd1485
null
null
AceReason-Nemotron 1.1: Advancing Math and Code Reasoning through SFT and RL Synergy
['Zihan Liu', 'Zhuoling Yang', 'Yang Chen', 'Chankyu Lee', 'M. Shoeybi', 'Bryan Catanzaro', 'Wei Ping']
2,025
arXiv.org
0
42
['Computer Science']
2,506.13342
Verifying the Verifiers: Unveiling Pitfalls and Potentials in Fact Verifiers
['Wooseok Seo', 'Seungju Han', 'Jaehun Jung', 'Benjamin Newman', 'Seungwon Lim', 'Seungbeen Lee', 'Ximing Lu', 'Yejin Choi', 'Youngjae Yu']
['cs.AI', 'cs.CL', 'cs.LG']
Fact verification is essential for ensuring the reliability of LLM applications. In this study, we evaluate 12 pre-trained LLMs and one specialized fact-verifier, including frontier LLMs and open-weight reasoning LLMs, using a collection of examples from 14 fact-checking benchmarks. We share three findings intended to guide future development of more robust fact verifiers. First, we highlight the importance of addressing annotation errors and ambiguity in datasets, demonstrating that approximately 16\% of ambiguous or incorrectly labeled data substantially influences model rankings. Neglecting this issue may result in misleading conclusions during comparative evaluations, and we suggest using a systematic pipeline utilizing LLM-as-a-judge to help identify these issues at scale. Second, we discover that frontier LLMs with few-shot in-context examples, often overlooked in previous works, achieve top-tier performance. We therefore recommend future studies include comparisons with these simple yet highly effective baselines. Lastly, despite their effectiveness, frontier LLMs incur substantial costs, motivating the development of small, fine-tuned fact verifiers. We show that these small models still have room for improvement, particularly on instances that require complex reasoning. Encouragingly, we demonstrate that augmenting training with synthetic multi-hop reasoning data significantly enhances their capabilities in such instances. We release our code, model, and dataset at https://github.com/just1nseo/verifying-the-verifiers
2025-06-16T10:32:10Z
null
null
null
null
null
null
null
null
null
null
2,506.13355
DicFace: Dirichlet-Constrained Variational Codebook Learning for Temporally Coherent Video Face Restoration
['Yan Chen', 'Hanlin Shang', 'Ce Liu', 'Yuxuan Chen', 'Hui Li', 'Weihao Yuan', 'Hao Zhu', 'Zilong Dong', 'Siyu Zhu']
['cs.CV']
Video face restoration faces a critical challenge in maintaining temporal consistency while recovering fine facial details from degraded inputs. This paper presents a novel approach that extends Vector-Quantized Variational Autoencoders (VQ-VAEs), pretrained on static high-quality portraits, into a video restoration framework through variational latent space modeling. Our key innovation lies in reformulating discrete codebook representations as Dirichlet-distributed continuous variables, enabling probabilistic transitions between facial features across frames. A spatio-temporal Transformer architecture jointly models inter-frame dependencies and predicts latent distributions, while a Laplacian-constrained reconstruction loss combined with perceptual (LPIPS) regularization enhances both pixel accuracy and visual quality. Comprehensive evaluations on blind face restoration, video inpainting, and facial colorization tasks demonstrate state-of-the-art performance. This work establishes an effective paradigm for adapting intensive image priors, pretrained on high-quality images, to video restoration while addressing the critical challenge of flicker artifacts. The source code has been open-sourced and is available at https://github.com/fudan-generative-vision/DicFace.
2025-06-16T10:54:28Z
null
null
null
null
null
null
null
null
null
null
2,506.13414
BUT System for the MLC-SLM Challenge
['Alexander Polok', 'Jiangyu Han', 'Dominik Klement', 'Samuele Cornell', 'Jan Černocký', 'Lukáš Burget']
['eess.AS']
We present a two-speaker automatic speech recognition (ASR) system that combines DiCoW -- a diarization-conditioned variant of Whisper -- with DiariZen, a diarization pipeline built on top of Pyannote. We first evaluate both systems in out-of-domain (OOD) multilingual scenarios without any fine-tuning. In this scenario, DiariZen consistently outperforms the baseline Pyannote diarization model, demonstrating strong generalization. Despite being fine-tuned on English-only data for target-speaker ASR, DiCoW retains solid multilingual performance, indicating that encoder modifications preserve Whisper's multilingual capabilities. We then fine-tune both DiCoW and DiariZen on the MLC-SLM challenge data. The fine-tuned DiariZen continues to outperform the fine-tuned Pyannote baseline, while DiCoW sees further gains from domain adaptation. Our final system achieves a micro-average tcpWER/CER of 16.75% and ranks second in Task 2 of the MLC-SLM challenge. Lastly, we identify several labeling inconsistencies in the training data -- such as missing speech segments and incorrect silence annotations -- which can hinder diarization fine-tuning. We propose simple mitigation strategies to address these issues and improve system robustness.
2025-06-16T12:28:35Z
null
null
null
null
null
null
null
null
null
null
2,506.13585
MiniMax-M1: Scaling Test-Time Compute Efficiently with Lightning Attention
['MiniMax', ':', 'Aili Chen', 'Aonian Li', 'Bangwei Gong', 'Binyang Jiang', 'Bo Fei', 'Bo Yang', 'Boji Shan', 'Changqing Yu', 'Chao Wang', 'Cheng Zhu', 'Chengjun Xiao', 'Chengyu Du', 'Chi Zhang', 'Chu Qiao', 'Chunhao Zhang', 'Chunhui Du', 'Congchao Guo', 'Da Chen', 'Deming Ding', 'Dianjun Sun', 'Dong Li', 'Enwei Jiao', 'Haigang Zhou', 'Haimo Zhang', 'Han Ding', 'Haohai Sun', 'Haoyu Feng', 'Huaiguang Cai', 'Haichao Zhu', 'Jian Sun', 'Jiaqi Zhuang', 'Jiaren Cai', 'Jiayuan Song', 'Jin Zhu', 'Jingyang Li', 'Jinhao Tian', 'Jinli Liu', 'Junhao Xu', 'Junjie Yan', 'Junteng Liu', 'Junxian He', 'Kaiyi Feng', 'Ke Yang', 'Kecheng Xiao', 'Le Han', 'Leyang Wang', 'Lianfei Yu', 'Liheng Feng', 'Lin Li', 'Lin Zheng', 'Linge Du', 'Lingyu Yang', 'Lunbin Zeng', 'Minghui Yu', 'Mingliang Tao', 'Mingyuan Chi', 'Mozhi Zhang', 'Mujie Lin', 'Nan Hu', 'Nongyu Di', 'Peng Gao', 'Pengfei Li', 'Pengyu Zhao', 'Qibing Ren', 'Qidi Xu', 'Qile Li', 'Qin Wang', 'Rong Tian', 'Ruitao Leng', 'Shaoxiang Chen', 'Shaoyu Chen', 'Shengmin Shi', 'Shitong Weng', 'Shuchang Guan', 'Shuqi Yu', 'Sichen Li', 'Songquan Zhu', 'Tengfei Li', 'Tianchi Cai', 'Tianrun Liang', 'Weiyu Cheng', 'Weize Kong', 'Wenkai Li', 'Xiancai Chen', 'Xiangjun Song', 'Xiao Luo', 'Xiao Su', 'Xiaobo Li', 'Xiaodong Han', 'Xinzhu Hou', 'Xuan Lu', 'Xun Zou', 'Xuyang Shen', 'Yan Gong', 'Yan Ma', 'Yang Wang', 'Yiqi Shi', 'Yiran Zhong', 'Yonghong Duan', 'Yongxiang Fu', 'Yongyi Hu', 'Yu Gao', 'Yuanxiang Fan', 'Yufeng Yang', 'Yuhao Li', 'Yulin Hu', 'Yunan Huang', 'Yunji Li', 'Yunzhi Xu', 'Yuxin Mao', 'Yuxuan Shi', 'Yuze Wenren', 'Zehan Li', 'Zelin Li', 'Zhanxu Tian', 'Zhengmao Zhu', 'Zhenhua Fan', 'Zhenzhen Wu', 'Zhichao Xu', 'Zhihang Yu', 'Zhiheng Lyu', 'Zhuo Jiang', 'Zibo Gao', 'Zijia Wu', 'Zijian Song', 'Zijun Sun']
['cs.CL', 'cs.LG']
We introduce MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model. MiniMax-M1 is powered by a hybrid Mixture-of-Experts (MoE) architecture combined with a lightning attention mechanism. The model is developed based on our previous MiniMax-Text-01 model, which contains a total of 456 billion parameters with 45.9 billion parameters activated per token. The M1 model natively supports a context length of 1 million tokens, 8x the context size of DeepSeek R1. Furthermore, the lightning attention mechanism in MiniMax-M1 enables efficient scaling of test-time compute. These properties make M1 particularly suitable for complex tasks that require processing long inputs and thinking extensively. MiniMax-M1 is trained using large-scale reinforcement learning (RL) on diverse problems including sandbox-based, real-world software engineering environments. In addition to M1's inherent efficiency advantage for RL training, we propose CISPO, a novel RL algorithm to further enhance RL efficiency. CISPO clips importance sampling weights rather than token updates, outperforming other competitive RL variants. Combining hybrid-attention and CISPO enables MiniMax-M1's full RL training on 512 H800 GPUs to complete in only three weeks, with a rental cost of just $534,700. We release two versions of MiniMax-M1 models with 40K and 80K thinking budgets respectively, where the 40K model represents an intermediate phase of the 80K training. Experiments on standard benchmarks show that our models are comparable or superior to strong open-weight models such as the original DeepSeek-R1 and Qwen3-235B, with particular strengths in complex software engineering, tool utilization, and long-context tasks. We publicly release MiniMax-M1 at https://github.com/MiniMax-AI/MiniMax-M1.
2025-06-16T15:08:02Z
A technical report from MiniMax. The authors are listed in alphabetical order. We open-source our MiniMax-M1 at https://github.com/MiniMax-AI/MiniMax-M1
null
null
null
null
null
null
null
null
null
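CISPO, as described above, clips the importance-sampling weight rather than the token update, so clipped tokens still contribute gradients. A minimal per-token sketch, with placeholder clip bounds:

```python
import torch

def cispo_loss(logp_new: torch.Tensor,
               logp_old: torch.Tensor,
               advantages: torch.Tensor,
               eps_low: float = 0.2, eps_high: float = 0.2) -> torch.Tensor:
    """Per-token policy-gradient loss with a clipped, stop-gradient
    importance-sampling weight (illustrative reading of CISPO; the clip
    bounds are placeholder hyperparameters)."""
    ratio = torch.exp(logp_new - logp_old)
    # Clip the IS weight itself and detach it, so every token still passes a
    # gradient through logp_new (unlike PPO-style clipping, which zeroes
    # gradients for clipped tokens).
    w = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high).detach()
    return -(w * advantages * logp_new).mean()

# Toy usage over 4 tokens.
logp_new = torch.log(torch.tensor([0.2, 0.5, 0.1, 0.7])).requires_grad_(True)
logp_old = torch.log(torch.tensor([0.25, 0.3, 0.1, 0.9]))
adv = torch.tensor([1.0, 1.0, -0.5, 0.5])
loss = cispo_loss(logp_new, logp_old, adv)
loss.backward()
print(float(loss))
```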
2,506.13642
Stream-Omni: Simultaneous Multimodal Interactions with Large Language-Vision-Speech Model
['Shaolei Zhang', 'Shoutao Guo', 'Qingkai Fang', 'Yan Zhou', 'Yang Feng']
['cs.AI', 'cs.CL', 'cs.CV', 'cs.SD', 'eess.AS']
The emergence of GPT-4o-like large multimodal models (LMMs) has spurred the exploration of integrating text, vision, and speech modalities to support more flexible multimodal interaction. Existing LMMs typically concatenate representations of modalities along the sequence dimension and feed them into a large language model (LLM) backbone. While sequence-dimension concatenation is straightforward for modality integration, it often relies heavily on large-scale data to learn modality alignments. In this paper, we aim to model the relationships between modalities more purposefully, thereby achieving more efficient and flexible modality alignments. To this end, we propose Stream-Omni, a large language-vision-speech model with efficient modality alignments, which can simultaneously support interactions under various modality combinations. Stream-Omni employs LLM as the backbone and aligns the vision and speech to the text based on their relationships. For vision that is semantically complementary to text, Stream-Omni uses sequence-dimension concatenation to achieve vision-text alignment. For speech that is semantically consistent with text, Stream-Omni introduces a CTC-based layer-dimension mapping to achieve speech-text alignment. In this way, Stream-Omni can achieve modality alignments with less data (especially speech), enabling the transfer of text capabilities to other modalities. Experiments on various benchmarks demonstrate that Stream-Omni achieves strong performance on visual understanding, speech interaction, and vision-grounded speech interaction tasks. Owing to the layer-dimensional mapping, Stream-Omni can simultaneously provide intermediate text outputs (such as ASR transcriptions and model responses) during speech interaction, offering users a comprehensive multimodal experience.
2025-06-16T16:06:45Z
Code: https://github.com/ictnlp/Stream-Omni , Model: https://huggingface.co/ICTNLP/stream-omni-8b
null
null
Stream-Omni: Simultaneous Multimodal Interactions with Large Language-Vision-Speech Model
['Shaolei Zhang', 'Shoutao Guo', 'Qingkai Fang', 'Yan Zhou', 'Yang Feng']
2,025
arXiv.org
0
55
['Computer Science', 'Engineering']
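The CTC-based speech-text alignment the Stream-Omni abstract describes can be sketched as a CTC head attached to an intermediate layer's per-frame speech states. The module name, shapes, and blank id below are assumptions, not Stream-Omni's actual architecture; only the use of a CTC objective to map speech frames to text tokens is taken from the abstract.

```python
import torch
import torch.nn as nn

class CTCAlignHead(nn.Module):
    # Hypothetical CTC head: projects per-frame speech hidden states from an
    # intermediate LLM layer onto the text vocabulary so a CTC loss can
    # align the frame sequence with the text token sequence.
    def __init__(self, hidden_dim: int, vocab_size: int, blank_id: int = 0):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, vocab_size)
        self.ctc = nn.CTCLoss(blank=blank_id, zero_infinity=True)

    def forward(self, speech_hidden, speech_lens, text_ids, text_lens):
        # speech_hidden: (batch, frames, hidden); nn.CTCLoss expects
        # log-probs of shape (frames, batch, vocab).
        log_probs = self.proj(speech_hidden).log_softmax(dim=-1).transpose(0, 1)
        return self.ctc(log_probs, text_ids, speech_lens, text_lens)
```

Because CTC marginalizes over monotonic alignments, greedy decoding of the same head can also emit intermediate ASR-style text during speech interaction, consistent with the behavior the abstract reports.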
2,506.13691
UltraVideo: High-Quality UHD Video Dataset with Comprehensive Captions
['Zhucun Xue', 'Jiangning Zhang', 'Teng Hu', 'Haoyang He', 'Yinan Chen', 'Yuxuan Cai', 'Yabiao Wang', 'Chengjie Wang', 'Yong Liu', 'Xiangtai Li', 'Dacheng Tao']
['cs.CV']
The quality of a video dataset (image quality, resolution, and fine-grained captions) greatly influences the performance of video generation models. The growing demand for video applications, such as the generation of movie-level Ultra-High Definition (UHD) videos and the creation of 4K short-video content, places higher requirements on high-quality video generation models, yet existing public datasets cannot support the related research and applications. In this paper, we propose a high-quality, open-source UHD-4K (22.4% of which is 8K) text-to-video dataset named UltraVideo, which covers a wide range of topics (more than 100 kinds); each video has 9 structured captions plus one summarized caption (824 words on average). Specifically, we carefully design a highly automated four-stage curation process to obtain the final high-quality dataset: i) collection of diverse and high-quality video clips; ii) statistical data filtering; iii) model-based data purification; and iv) generation of comprehensive, structured captions. In addition, we extend Wan to UltraWan-1K/-4K, which can natively generate high-quality 1K/4K videos with more consistent text controllability, demonstrating the effectiveness of our data curation. We believe this work can make a significant contribution to future research on UHD video generation. The UltraVideo dataset and UltraWan models are available at https://xzc-zju.github.io/projects/UltraVideo.
2025-06-16T16:52:52Z
null
null
null
null
null
null
null
null
null
null
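Stage ii of the UltraVideo pipeline (statistical data filtering) amounts to a predicate over per-clip statistics. The sketch below is illustrative only: the thresholds are guesses, not the paper's published criteria, and the field names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ClipStats:
    width: int
    height: int
    fps: float
    duration_s: float

def passes_statistical_filter(c: ClipStats) -> bool:
    # Illustrative thresholds (not UltraVideo's actual criteria): keep clips
    # that are at least 4K on the short side, at a cinematic frame rate, and
    # of a usable length for text-to-video training.
    return (min(c.width, c.height) >= 2160
            and c.fps >= 24.0
            and 2.0 <= c.duration_s <= 30.0)
```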
2,506.13705
TimeMaster: Training Time-Series Multimodal LLMs to Reason via Reinforcement Learning
['Junru Zhang', 'Lang Feng', 'Xu Guo', 'Yuhan Wu', 'Yabo Dong', 'Duanqing Xu']
['cs.LG', 'cs.AI']
Time-series reasoning remains a significant challenge in multimodal large language models (MLLMs) due to dynamic temporal patterns, ambiguous semantics, and the lack of temporal priors. In this work, we introduce TimeMaster, a reinforcement learning (RL)-based method that enables time-series MLLMs to perform structured, interpretable reasoning directly over visualized time-series inputs and task prompts. TimeMaster adopts a three-part structured output format (reasoning, classification, and domain-specific extension) and is optimized via a composite reward function that aligns format adherence, prediction accuracy, and open-ended insight quality. The model is trained with a two-stage pipeline: we first apply supervised fine-tuning (SFT) to establish a good initialization, followed by token-level Group Relative Policy Optimization (GRPO) to enable stable and targeted reward-driven improvement in time-series reasoning. We evaluate TimeMaster, built on Qwen2.5-VL-3B-Instruct, on the TimerBed benchmark across six real-world classification tasks. TimeMaster achieves state-of-the-art performance, outperforming classical time-series models and few-shot GPT-4o by over 14.6% and 7.3%, respectively. Notably, TimeMaster goes beyond time-series classification: it also exhibits expert-like reasoning behavior, generates context-aware explanations, and delivers domain-aligned insights. Our results highlight reward-driven RL as a scalable and promising path toward integrating temporal understanding into time-series MLLMs.
2025-06-16T17:12:26Z
Preprint
null
null
TimeMaster: Training Time-Series Multimodal LLMs to Reason via Reinforcement Learning
['Junru Zhang', 'Lang Feng', 'Xu Guo', 'Yuhan Wu', 'Yabo Dong', 'Duanqing Xu']
2,025
arXiv.org
0
59
['Computer Science']
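The composite reward the TimeMaster abstract describes (format adherence + prediction accuracy + insight quality) can be sketched as a weighted sum. The weights, output keys, and the placeholder insight judge below are assumptions for illustration; the paper presumably uses its own scorer and weighting.

```python
def score_insight(text: str) -> float:
    # Placeholder judge: a real system would use a learned or LLM-based
    # scorer; here longer non-empty extensions earn partial credit in [0, 1].
    return min(len(text.split()) / 50.0, 1.0)

def composite_reward(output: dict, label: str,
                     w_fmt: float = 0.2, w_acc: float = 0.6,
                     w_ins: float = 0.2) -> float:
    # Format adherence: all three structured sections must be present.
    fmt_ok = all(k in output for k in ("reasoning", "classification", "extension"))
    # Prediction accuracy: exact-match classification reward.
    acc = 1.0 if output.get("classification") == label else 0.0
    # Open-ended insight quality via the placeholder judge above.
    ins = score_insight(output.get("extension", ""))
    return w_fmt * float(fmt_ok) + w_acc * acc + w_ins * ins
```

A scalar reward of this shape is what GRPO would then normalize within each sampled group to compute relative advantages.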
2,506.13725
CEED-VLA: Consistency Vision-Language-Action Model with Early-Exit Decoding
['Wenxuan Song', 'Jiayi Chen', 'Pengxiang Ding', 'Yuxin Huang', 'Han Zhao', 'Donglin Wang', 'Haoang Li']
['cs.RO']
In recent years, Vision-Language-Action (VLA) models have become a vital research direction in robotics due to their impressive multimodal understanding and generalization capabilities. Despite this progress, their practical deployment is severely constrained by inference-speed bottlenecks, particularly in high-frequency and dexterous manipulation tasks. While recent studies have explored Jacobi decoding as a more efficient alternative to traditional autoregressive decoding, its practical benefits are marginal because of its lengthy iterations. To address this, we introduce consistency distillation training to predict multiple correct action tokens in each iteration, thereby achieving acceleration. We further design mixed-label supervision to mitigate error accumulation during distillation. Although distillation brings an acceptable speedup, we identify certain inefficient iterations that remain a critical bottleneck. To tackle this, we propose an early-exit decoding strategy that moderately relaxes the convergence conditions, further improving average inference efficiency. Experimental results show that the proposed method achieves a more than 4x inference speedup across different baselines while maintaining high task success rates in both simulated and real-world robot tasks. These experiments validate that our approach provides an efficient and general paradigm for accelerating multimodal decision-making in robotics. Our project page is available at https://irpn-eai.github.io/CEED-VLA/.
2025-06-16T17:31:16Z
16 pages
null
null
null
null
null
null
null
null
null
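The early-exit relaxation of Jacobi decoding that the CEED-VLA abstract describes can be sketched as follows. `step_fn`, the iteration cap, and the stability-fraction stopping rule are assumptions standing in for the model's parallel refinement step and the paper's actual exit criterion.

```python
import torch

@torch.no_grad()
def jacobi_decode_early_exit(step_fn, tokens, max_iters=16, stable_frac=0.9):
    # Jacobi decoding: step_fn refines *all* action-token positions in
    # parallel; a full fixed point would mean strict convergence. The
    # early-exit rule below relaxes this: stop once `stable_frac` of
    # positions is unchanged between consecutive iterations.
    for _ in range(max_iters):
        new_tokens = step_fn(tokens)
        stable = (new_tokens == tokens).float().mean().item()
        tokens = new_tokens
        if stable >= stable_frac:
            break
    return tokens
```

The trade-off is explicit in `stable_frac`: lowering it exits earlier (faster control loops) at the cost of accepting a few not-yet-converged action tokens.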
2,506.13793
Med-REFL: Medical Reasoning Enhancement via Self-Corrected Fine-grained Reflection
['Zongxian Yang', 'Jiayu Qian', 'Zegao Peng', 'Haoyu Zhang', 'Zhi-An Huang']
['cs.AI']
Large reasoning models have recently made significant strides in mathematical and code reasoning, yet their success has not transferred smoothly to the medical domain. While multiple factors contribute to this disparity, a critical issue is the inadequate focus on the quality of intermediate reflection steps, which is particularly crucial in high-stakes medical scenarios. To address this challenge, we propose Med-REFL (Medical Reasoning Enhancement via self-corrected Fine-grained refLection). Our method leverages a tree-of-thought approach to decompose medical questions into fine-grained reasoning paths, quantitatively evaluating each step and its subsequent reflections. These assessments enable the automatic construction of direct preference optimization data, reducing reliance on expensive expert annotations while guiding models to identify and correct reasoning errors. Experimental results on the MedQA-USMLE benchmark demonstrate that Med-REFL achieves consistent improvements, with average gains of up to 4.11%. Notably, it further boosts the state-of-the-art performance of 7B/8B models by an additional 4.13%. Furthermore, Med-REFL exhibits strong generalization capabilities and robustness across several challenging medical question-answering datasets. Our work illustrates that prioritizing reflection quality leads to more accurate and trustworthy reasoning in medical AI applications. Checkpoints, code, and data can be found at https://github.com/TianYin123/Med-REFL.
2025-06-11T14:58:38Z
null
null
null
null
null
null
null
null
null
null
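The automatic construction of direct preference optimization (DPO) data from step-scored reasoning paths, as the Med-REFL abstract describes at a high level, can be sketched as pairing higher-scored paths against sufficiently worse ones. The data layout, `margin` threshold, and pairing heuristic are assumptions, not the paper's procedure.

```python
def build_dpo_pairs(question: str, scored_paths: list, margin: float = 0.2) -> list:
    # scored_paths: [{"text": reasoning_path, "score": step-level quality score}].
    # Pair each higher-scored path with the lowest-scored path that is at
    # least `margin` worse, yielding (chosen, rejected) preference records.
    ranked = sorted(scored_paths, key=lambda p: p["score"], reverse=True)
    pairs = []
    for good in ranked:
        for bad in reversed(ranked):  # lowest-scored candidates first
            if good["score"] - bad["score"] >= margin:
                pairs.append({"prompt": question,
                              "chosen": good["text"],
                              "rejected": bad["text"]})
                break  # one rejected path per chosen path
    return pairs
```

A margin requirement of this kind keeps near-ties out of the preference data, so DPO training only sees pairs where the quality gap between reflections is unambiguous.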