| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2506.24119 | SPIRAL: Self-Play on Zero-Sum Games Incentivizes Reasoning via
Multi-Agent Multi-Turn Reinforcement Learning | ['Bo Liu', 'Leon Guertler', 'Simon Yu', 'Zichen Liu', 'Penghui Qi', 'Daniel Balcells', 'Mickel Liu', 'Cheston Tan', 'Weiyan Shi', 'Min Lin', 'Wee Sun Lee', 'Natasha Jaques'] | ['cs.AI', 'cs.CL', 'cs.LG'] | Recent advances in reinforcement learning have shown that language models can
develop sophisticated reasoning through training on tasks with verifiable
rewards, but these approaches depend on human-curated problem-answer pairs and
domain-specific reward engineering. We introduce SPIRAL, a self-play framework
where models learn by playing multi-turn, zero-sum games against continuously
improving versions of themselves, eliminating the need for human supervision.
Through self-play, SPIRAL generates an infinite curriculum of progressively
challenging problems as models must constantly adapt to stronger opponents. To
enable this self-play training at scale, we implement a fully online,
multi-turn, multi-agent reinforcement learning system for LLMs and propose
role-conditioned advantage estimation (RAE) to stabilize multi-agent training.
Using SPIRAL, self-play on zero-sum games produces reasoning capabilities that
transfer broadly. Training Qwen3-4B-Base on Kuhn Poker alone achieves 8.6%
improvement on math and 8.4% on general reasoning, outperforming SFT on 25,000
expert game trajectories. Analysis reveals that this transfer occurs through
three cognitive patterns: systematic decomposition, expected value calculation,
and case-by-case analysis. Multi-game training (TicTacToe, Kuhn Poker, Simple
Negotiation) further enhances performance as each game develops distinct
reasoning strengths. Applying SPIRAL to a strong reasoning model
(DeepSeek-R1-Distill-Qwen-7B) can still lead to 2.0% average improvement. These
results demonstrate that zero-sum games naturally develop transferable
reasoning capabilities, highlighting a promising direction for autonomous
reasoning development. | 2025-06-30T17:58:13Z | Work in Progress | null | null | null | null | null | null | null | null | null |
2507.00432 | Does Math Reasoning Improve General LLM Capabilities? Understanding
Transferability of LLM Reasoning | ['Maggie Huan', 'Yuetai Li', 'Tuney Zheng', 'Xiaoyu Xu', 'Seungone Kim', 'Minxin Du', 'Radha Poovendran', 'Graham Neubig', 'Xiang Yue'] | ['cs.AI', 'cs.CL'] | Math reasoning has become the poster child of progress in large language
models (LLMs), with new models rapidly surpassing human-level performance on
benchmarks like MATH and AIME. But as math leaderboards improve week by week,
it is worth asking: do these gains reflect broader problem-solving ability or
just narrow overfitting? To answer this question, we evaluate over 20
open-weight reasoning-tuned models across a broad suite of tasks, including
math, scientific QA, agent planning, coding, and standard
instruction-following. We surprisingly find that most models that succeed in
math fail to transfer their gains to other domains. To rigorously study this
phenomenon, we conduct controlled experiments on Qwen3-14B models using
math-only data but different tuning methods. We find that reinforcement
learning (RL)-tuned models generalize well across domains, while supervised
fine-tuning (SFT)-tuned models often forget general capabilities. Latent-space
representation and token-space distribution shift analyses reveal that SFT
induces substantial representation and output drift, while RL preserves
general-domain structure. Our results suggest a need to rethink standard
post-training recipes, particularly the reliance on SFT-distilled data for
advancing reasoning models. | 2025-07-01T05:23:05Z | null | null | null | null | null | null | null | null | null | null |
2507.00505 | LLaVA-SP: Enhancing Visual Representation with Visual Spatial Tokens for
MLLMs | ['Haoran Lou', 'Chunxiao Fan', 'Ziyan Liu', 'Yuexin Wu', 'Xinliang Wang'] | ['cs.CV'] | The architecture of multimodal large language models (MLLMs) commonly
connects a vision encoder, often based on CLIP-ViT, to a large language model.
While CLIP-ViT works well for capturing global image features, it struggles to
model local relationships between adjacent patches, leading to weaker visual
representation, which in turn affects the detailed understanding ability of
MLLMs. To solve this, we propose LLaVA-SP, which only adds six spatial visual
tokens to the original visual tokens to enhance the visual representation. Our
approach offers three key advantages: 1) We propose a novel Projector, which
uses convolutional kernels to derive visual spatial tokens from ViT patch
features, simulating two visual spatial ordering approaches: "from central
region to global" and "from abstract to specific". Then, a cross-attention
mechanism is applied to fuse fine-grained visual information, enriching the
overall visual representation. 2) We present two model variants:
LLaVA-SP-Cropping, which focuses on detail features through progressive
cropping, and LLaVA-SP-Pooling, which captures global semantics through
adaptive pooling, enabling the model to handle diverse visual understanding
tasks. 3) Extensive experiments show that LLaVA-SP, fine-tuned with LoRA,
achieves significant performance improvements across various multimodal
benchmarks, outperforming the state-of-the-art LLaVA-1.5 model in multiple
tasks with nearly identical inference latency. The code and models are
available at https://github.com/CnFaker/LLaVA-SP. | 2025-07-01T07:20:11Z | Accepted to ICCV 2025 | null | null | null | null | null | null | null | null | null |
2507.00833 | HumanoidGen: Data Generation for Bimanual Dexterous Manipulation via LLM
Reasoning | ['Zhi Jing', 'Siyuan Yang', 'Jicong Ao', 'Ting Xiao', 'Yugang Jiang', 'Chenjia Bai'] | ['cs.RO', 'cs.AI'] | For robotic manipulation, existing robotics datasets and simulation
benchmarks predominantly cater to robot-arm platforms. However, for humanoid
robots equipped with dual arms and dexterous hands, simulation tasks and
high-quality demonstrations are notably lacking. Bimanual dexterous
manipulation is inherently more complex, as it requires coordinated arm
movements and hand operations, making autonomous data collection challenging.
This paper presents HumanoidGen, an automated task creation and demonstration
collection framework that leverages atomic dexterous operations and LLM
reasoning to generate relational constraints. Specifically, we provide spatial
annotations for both assets and dexterous hands based on the atomic operations,
and use an LLM planner to generate a chain of actionable spatial
constraints for arm movements based on object affordances and scenes. To
further improve planning ability, we employ a variant of Monte Carlo tree
search to enhance LLM reasoning for long-horizon tasks and insufficient
annotation. In experiments, we create a novel benchmark with augmented
scenarios to evaluate the quality of the collected data. The results show that
the performance of the 2D and 3D diffusion policies can scale with the
generated dataset. Project page is https://openhumanoidgen.github.io. | 2025-07-01T15:04:38Z | Project Page: https://openhumanoidgen.github.io | null | null | null | null | null | null | null | null | null |
2507.00971 | Reasoning as an Adaptive Defense for Safety | ['Taeyoun Kim', 'Fahim Tajwar', 'Aditi Raghunathan', 'Aviral Kumar'] | ['cs.LG', 'cs.AI'] | Reasoning methods that adaptively allocate test-time compute have advanced
LLM performance on easy to verify domains such as math and code. In this work,
we study how to utilize this approach to train models that exhibit a degree of
robustness to safety vulnerabilities, and show that doing so can provide
benefits. We build a recipe called $\textit{TARS}$ (Training Adaptive Reasoners
for Safety), a reinforcement learning (RL) approach that trains models to
reason about safety using chain-of-thought traces and a reward signal that
balances safety with task completion. To build TARS, we identify three critical
design choices: (1) a "lightweight" warmstart SFT stage, (2) a mix of harmful,
harmless, and ambiguous prompts to prevent shortcut behaviors such as excessive
refusals, and (3) a reward function to prevent degeneration of reasoning
capabilities during training. Models trained with TARS exhibit adaptive
behaviors by spending more compute on ambiguous queries, leading to better
safety-refusal trade-offs. They also internally learn to better distinguish
between safe and unsafe prompts and attain greater robustness to both white-box
(e.g., GCG) and black-box attacks (e.g., PAIR). Overall, our work provides an
effective, open recipe for training LLMs against jailbreaks and harmful
requests by reasoning per prompt. | 2025-07-01T17:20:04Z | 42 pages, 11 Figures, 7 Tables | null | null | null | null | null | null | null | null | null |
2507.00994 | Should We Still Pretrain Encoders with Masked Language Modeling? | ['Hippolyte Gisserot-Boukhlef', 'Nicolas Boizard', 'Manuel Faysse', 'Duarte M. Alves', 'Emmanuel Malherbe', 'André F. T. Martins', 'Céline Hudelot', 'Pierre Colombo'] | ['cs.CL'] | Learning high-quality text representations is fundamental to a wide range of
NLP tasks. While encoder pretraining has traditionally relied on Masked
Language Modeling (MLM), recent evidence suggests that decoder models
pretrained with Causal Language Modeling (CLM) can be effectively repurposed as
encoders, often surpassing traditional encoders on text representation
benchmarks. However, it remains unclear whether these gains reflect an inherent
advantage of the CLM objective or arise from confounding factors such as model
and data scale. In this paper, we address this question through a series of
large-scale, carefully controlled pretraining ablations, training a total of 38
models ranging from 210 million to 1 billion parameters, and conducting over
15,000 fine-tuning and evaluation runs. We find that while training with MLM
generally yields better performance across text representation tasks,
CLM-trained models are more data-efficient and demonstrate improved fine-tuning
stability. Building on these findings, we experimentally show that a biphasic
training strategy that sequentially applies CLM and then MLM achieves optimal
performance under a fixed computational training budget. Moreover, we
demonstrate that this strategy becomes more appealing when initializing from
readily available pretrained CLM models, reducing the computational burden
needed to train best-in-class encoder models. We release all project artifacts
at https://hf.co/MLMvsCLM to foster further research. | 2025-07-01T17:45:48Z | 23 pages, 10 figures, 17 tables | null | null | null | null | null | null | null | null | null |
2507.01006 | GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable
Reinforcement Learning | ['GLM-V Team', ':', 'Wenyi Hong', 'Wenmeng Yu', 'Xiaotao Gu', 'Guo Wang', 'Guobing Gan', 'Haomiao Tang', 'Jiale Cheng', 'Ji Qi', 'Junhui Ji', 'Lihang Pan', 'Shuaiqi Duan', 'Weihan Wang', 'Yan Wang', 'Yean Cheng', 'Zehai He', 'Zhe Su', 'Zhen Yang', 'Ziyang Pan', 'Aohan Zeng', 'Baoxu Wang', 'Boyan Shi', 'Changyu Pang', 'Chenhui Zhang', 'Da Yin', 'Fan Yang', 'Guoqing Chen', 'Jiazheng Xu', 'Jiali Chen', 'Jing Chen', 'Jinhao Chen', 'Jinghao Lin', 'Jinjiang Wang', 'Junjie Chen', 'Leqi Lei', 'Letian Gong', 'Leyi Pan', 'Mingzhi Zhang', 'Qinkai Zheng', 'Sheng Yang', 'Shi Zhong', 'Shiyu Huang', 'Shuyuan Zhao', 'Siyan Xue', 'Shangqin Tu', 'Shengbiao Meng', 'Tianshu Zhang', 'Tianwei Luo', 'Tianxiang Hao', 'Wenkai Li', 'Wei Jia', 'Xin Lyu', 'Xuancheng Huang', 'Yanling Wang', 'Yadong Xue', 'Yanfeng Wang', 'Yifan An', 'Yifan Du', 'Yiming Shi', 'Yiheng Huang', 'Yilin Niu', 'Yuan Wang', 'Yuanchang Yue', 'Yuchen Li', 'Yutao Zhang', 'Yuxuan Zhang', 'Zhanxiao Du', 'Zhenyu Hou', 'Zhao Xue', 'Zhengxiao Du', 'Zihan Wang', 'Peng Zhang', 'Debing Liu', 'Bin Xu', 'Juanzi Li', 'Minlie Huang', 'Yuxiao Dong', 'Jie Tang'] | ['cs.CV', 'cs.AI', 'cs.LG'] | We present GLM-4.1V-Thinking, a vision-language model (VLM) designed to
advance general-purpose multimodal understanding and reasoning. In this report,
we share our key findings in the development of the reasoning-centric training
framework. We first develop a capable vision foundation model with significant
potential through large-scale pre-training, which arguably sets the upper bound
for the final performance. We then propose Reinforcement Learning with
Curriculum Sampling (RLCS) to unlock the full potential of the model, leading
to comprehensive capability enhancement across a diverse range of tasks,
including STEM problem solving, video understanding, content recognition,
coding, grounding, GUI-based agents, and long document understanding. We
open-source GLM-4.1V-9B-Thinking, which achieves state-of-the-art performance
among models of comparable size. In a comprehensive evaluation across 28 public
benchmarks, our model outperforms Qwen2.5-VL-7B on nearly all tasks and
achieves comparable or even superior performance on 18 benchmarks relative to
the significantly larger Qwen2.5-VL-72B. Notably, GLM-4.1V-9B-Thinking also
demonstrates competitive or superior performance compared to closed-source
models such as GPT-4o on challenging tasks including long document
understanding and STEM reasoning, further underscoring its strong capabilities.
Code, models and more information are released at
https://github.com/THUDM/GLM-4.1V-Thinking. | 2025-07-01T17:55:04Z | null | null | null | null | null | null | null | null | null | null |
2507.01255 | AIGVE-MACS: Unified Multi-Aspect Commenting and Scoring Model for
AI-Generated Video Evaluation | ['Xiao Liu', 'Jiawei Zhang'] | ['cs.CV'] | The rapid advancement of AI-generated video models has created a pressing
need for robust and interpretable evaluation frameworks. Existing metrics are
limited to producing numerical scores without explanatory comments, resulting
in low interpretability and human evaluation alignment. To address those
challenges, we introduce AIGVE-MACS, a unified model for AI-Generated Video
Evaluation (AIGVE), which can provide not only numerical scores but also
multi-aspect language comment feedback in evaluating these generated videos.
Central to our approach is AIGVE-BENCH 2, a large-scale benchmark comprising
2,500 AI-generated videos and 22,500 human-annotated detailed comments and
numerical scores across nine critical evaluation aspects. Leveraging
AIGVE-BENCH 2, AIGVE-MACS incorporates recent Vision-Language Models with a
novel token-wise weighted loss and a dynamic frame sampling strategy to better
align with human evaluators. Comprehensive experiments across supervised and
zero-shot benchmarks demonstrate that AIGVE-MACS achieves state-of-the-art
performance in both scoring correlation and comment quality, significantly
outperforming prior baselines including GPT-4o and VideoScore. In addition, we
further showcase a multi-agent refinement framework where feedback from
AIGVE-MACS drives iterative improvements in video generation, leading to 53.5%
quality enhancement. This work establishes a new paradigm for comprehensive,
human-aligned evaluation of AI-generated videos. We release the AIGVE-BENCH 2
and AIGVE-MACS at https://huggingface.co/xiaoliux/AIGVE-MACS. | 2025-07-02T00:20:06Z | Work in Progress | null | null | null | null | null | null | null | null | null |
2507.01352 | Skywork-Reward-V2: Scaling Preference Data Curation via Human-AI Synergy | ['Chris Yuhao Liu', 'Liang Zeng', 'Yuzhen Xiao', 'Jujie He', 'Jiacai Liu', 'Chaojie Wang', 'Rui Yan', 'Wei Shen', 'Fuxiang Zhang', 'Jiacheng Xu', 'Yang Liu', 'Yahui Zhou'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Despite the critical role of reward models (RMs) in reinforcement learning
from human feedback (RLHF), current state-of-the-art open RMs perform poorly on
most existing evaluation benchmarks, failing to capture the spectrum of nuanced
and sophisticated human preferences. Even approaches that incorporate advanced
training techniques have not yielded meaningful performance improvements. We
hypothesize that this brittleness stems primarily from limitations in
preference datasets, which are often narrowly scoped, synthetically labeled, or
lack rigorous quality control. To address these challenges, we present a
large-scale preference dataset comprising 40 million preference pairs, named
SynPref-40M. To enable data curation at scale, we design a human-AI synergistic
two-stage pipeline that leverages the complementary strengths of human
annotation quality and AI scalability. In this pipeline, humans provide
verified annotations, while large language models perform automatic curation
based on human guidance. Training on this preference mixture, we introduce
Skywork-Reward-V2, a suite of eight reward models ranging from 0.6B to 8B
parameters, trained on a carefully curated subset of 26 million preference
pairs from SynPref-40M. We demonstrate that Skywork-Reward-V2 is versatile
across a wide range of capabilities, including alignment with human
preferences, objective correctness, safety, resistance to stylistic biases, and
best-of-N scaling, achieving state-of-the-art performance across seven major
reward model benchmarks. Ablation studies confirm that the effectiveness of our
approach stems not only from data scale but also from high-quality curation.
The Skywork-Reward-V2 series represents substantial progress in open reward
models, highlighting the untapped potential of existing preference datasets and
demonstrating how human-AI curation synergy can unlock significantly higher
data quality. | 2025-07-02T04:40:29Z | null | null | null | null | null | null | null | null | null | null |
2507.01472 | Optimizing Methane Detection On Board Satellites: Speed, Accuracy, and
Low-Power Solutions for Resource-Constrained Hardware | ['Jonáš Herec', 'Vít Růžička', 'Rado Pitoňák'] | ['cs.CV', 'cs.LG', 'cs.PF'] | Methane is a potent greenhouse gas, and detecting its leaks early via
hyperspectral satellite imagery can help mitigate climate change. Meanwhile,
many existing missions operate in manual tasking regimes only, thus missing
potential events of interest. To overcome slow downlink rates cost-effectively,
onboard detection is a viable solution. However, traditional methane
enhancement methods are too computationally demanding for resource-limited
onboard hardware. This work accelerates methane detection by focusing on
efficient, low-power algorithms. We test fast target detection methods (ACE,
CEM) that have not been previously used for methane detection and propose
Mag1c-SAS, a significantly faster variant of the current state-of-the-art
algorithm for methane detection: Mag1c. To explore their true detection
potential, we integrate them with a machine learning model (U-Net, LinkNet).
Our results identify two promising candidates (Mag1c-SAS and CEM), both
acceptably accurate for the detection of strong plumes and computationally
efficient enough for onboard deployment: one optimized more for accuracy, the
other more for speed, achieving up to ~100x and ~230x faster computation than
the original Mag1c on resource-limited hardware. Additionally, we propose and
evaluate three band selection strategies. One of them can outperform the method
traditionally used in the field while using fewer channels, leading to even
faster processing without compromising accuracy. This research lays the
foundation for future advancements in onboard methane detection with minimal
hardware requirements, improving timely data delivery. The produced code, data,
and models are open-sourced and can be accessed from
https://github.com/zaitra/methane-filters-benchmark. | 2025-07-02T08:34:34Z | This is a preprint of a paper accepted for the EDHPC 2025 Conference | null | null | null | null | null | null | null | null | null |
2507.01634 | Depth Anything at Any Condition | ['Boyuan Sun', 'Modi Jin', 'Bowen Yin', 'Qibin Hou'] | ['cs.CV', 'cs.AI'] | We present Depth Anything at Any Condition (DepthAnything-AC), a foundation
monocular depth estimation (MDE) model capable of handling diverse
environmental conditions. Previous foundation MDE models achieve impressive
performance across general scenes but do not perform well in complex open-world
environments that involve challenging conditions, such as illumination
variations, adverse weather, and sensor-induced distortions. To overcome the
challenges of data scarcity and the inability to generate high-quality
pseudo-labels from corrupted images, we propose an unsupervised consistency
regularization finetuning paradigm that requires only a relatively small amount
of unlabeled data. Furthermore, we propose the Spatial Distance Constraint to
explicitly force the model to learn patch-level relative relationships,
resulting in clearer semantic boundaries and more accurate details.
Experimental results demonstrate the zero-shot capabilities of DepthAnything-AC
across diverse benchmarks, including real-world adverse weather benchmarks,
synthetic corruption benchmarks, and general benchmarks.
Project Page: https://ghost233lism.github.io/depthanything-AC-page
Code: https://github.com/HVision-NKU/DepthAnythingAC | 2025-07-02T12:05:57Z | null | null | null | null | null | null | null | null | null | null |
2507.01643 | SAILViT: Towards Robust and Generalizable Visual Backbones for MLLMs via
Gradual Feature Refinement | ['Weijie Yin', 'Dingkang Yang', 'Hongyuan Dong', 'Zijian Kang', 'Jiacong Wang', 'Xiao Liang', 'Chao Feng', 'Jiao Ran'] | ['cs.CV'] | Vision Transformers (ViTs) are essential as foundation backbones in
establishing the visual comprehension capabilities of Multimodal Large Language
Models (MLLMs). Although most ViTs achieve impressive performance through
image-text pair-based contrastive learning or self-supervised mechanisms, they
struggle to engage in connector-based co-training directly with LLMs due to
potential parameter initialization conflicts and modality semantic gaps. To
address the above challenges, this paper proposes SAILViT, a gradual feature
learning-enhanced ViT for facilitating MLLMs to break through performance
bottlenecks in complex multimodal interactions. SAILViT achieves
coarse-to-fine-grained feature alignment and world knowledge infusion with
gradual feature refinement, which better serves target training demands. We
perform thorough empirical analyses to confirm the powerful robustness and
generalizability of SAILViT across different dimensions, including parameter
sizes, model architectures, training strategies, and data scales. Equipped with
SAILViT, existing MLLMs show significant and consistent performance
improvements on the OpenCompass benchmark across extensive downstream tasks.
SAILViT series models are released at
https://huggingface.co/BytedanceDouyinContent. | 2025-07-02T12:17:23Z | We release SAILViT, a series of versatile vision foundation models | null | null | null | null | null | null | null | null | null |
2507.01738 | DeRIS: Decoupling Perception and Cognition for Enhanced Referring Image
Segmentation through Loopback Synergy | ['Ming Dai', 'Wenxuan Cheng', 'Jiang-jiang Liu', 'Sen Yang', 'Wenxiao Cai', 'Yanpeng Sun', 'Wankou Yang'] | ['cs.CV'] | Referring Image Segmentation (RIS) is a challenging task that aims to segment
objects in an image based on natural language expressions. While prior studies
have predominantly concentrated on improving vision-language interactions and
achieving fine-grained localization, a systematic analysis of the fundamental
bottlenecks in existing RIS frameworks remains underexplored. To bridge this
gap, we propose DeRIS, a novel framework that decomposes RIS into two key
components: perception and cognition. This modular decomposition facilitates a
systematic analysis of the primary bottlenecks impeding RIS performance. Our
findings reveal that the predominant limitation lies not in perceptual
deficiencies, but in the insufficient multi-modal cognitive capacity of current
models. To mitigate this, we propose a Loopback Synergy mechanism, which
enhances the synergy between the perception and cognition modules, thereby
enabling precise segmentation while simultaneously improving robust image-text
comprehension. Additionally, we analyze and introduce a simple non-referent
sample conversion data augmentation to address the long-tail distribution issue
related to target existence judgement in general scenarios. Notably, DeRIS
demonstrates inherent adaptability to both non-referent and multi-referent scenarios
without requiring specialized architectural modifications, enhancing its
general applicability. The codes and models are available at
https://github.com/Dmmm1997/DeRIS. | 2025-07-02T14:14:35Z | ICCV 2025 | null | null | null | null | null | null | null | null | null |
2507.01931 | Adaptability of ASR Models on Low-Resource Language: A Comparative Study
of Whisper and Wav2Vec-BERT on Bangla | ['Md Sazzadul Islam Ridoy', 'Sumi Akter', 'Md. Aminur Rahman'] | ['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS'] | In recent years, neural models trained on large multilingual text and speech
datasets have shown great potential for supporting low-resource languages. This
study investigates the performances of two state-of-the-art Automatic Speech
Recognition (ASR) models, OpenAI's Whisper (Small & Large-V2) and Facebook's
Wav2Vec-BERT on Bangla, a low-resource language. We have conducted experiments
using two publicly available datasets: Mozilla Common Voice-17 and OpenSLR to
evaluate model performances. Through systematic fine-tuning and hyperparameter
optimization, including learning rate, epochs, and model checkpoint selection,
we have compared the models based on Word Error Rate (WER), Character Error
Rate (CER), Training Time, and Computational Efficiency. The Wav2Vec-BERT model
outperformed Whisper across all key evaluation metrics, delivering superior
performance while requiring fewer computational resources, and offering valuable
insights for developing robust speech recognition systems in low-resource
linguistic settings. | 2025-07-02T17:44:54Z | null | null | null | null | null | null | null | null | null | null |
2507.01949 | Kwai Keye-VL Technical Report | ['Kwai Keye Team', 'Biao Yang', 'Bin Wen', 'Changyi Liu', 'Chenglong Chu', 'Chengru Song', 'Chongling Rao', 'Chuan Yi', 'Da Li', 'Dunju Zang', 'Fan Yang', 'Guorui Zhou', 'Hao Peng', 'Haojie Ding', 'Jiaming Huang', 'Jiangxia Cao', 'Jiankang Chen', 'Jingyun Hua', 'Jin Ouyang', 'Kaibing Chen', 'Kaiyu Jiang', 'Kaiyu Tang', 'Kun Gai', 'Shengnan Zhang', 'Siyang Mao', 'Sui Huang', 'Tianke Zhang', 'Tingting Gao', 'Wei Chen', 'Wei Yuan', 'Xiangyu Wu', 'Xiao Hu', 'Xingyu Lu', 'Yang Zhou', 'Yi-Fan Zhang', 'Yiping Yang', 'Yulong Chen', 'Zhenhua Wu', 'Zhenyu Li', 'Zhixin Ling', 'Ziming Li', 'Dehua Ma', 'Di Xu', 'Haixuan Gao', 'Hang Li', 'Jiawei Guo', 'Jing Wang', 'Lejian Ren', 'Muhao Wei', 'Qianqian Wang', 'Qigen Hu', 'Shiyao Wang', 'Tao Yu', 'Xinchen Luo', 'Yan Li', 'Yiming Liang', 'Yuhang Hu', 'Zeyi Lu', 'Zhuoran Yang', 'Zixing Zhang'] | ['cs.CV'] | While Multimodal Large Language Models (MLLMs) demonstrate remarkable
capabilities on static images, they often fall short in comprehending dynamic,
information-dense short-form videos, a dominant medium in today's digital
landscape. To bridge this gap, we introduce \textbf{Kwai Keye-VL}, an
8-billion-parameter multimodal foundation model engineered for leading-edge
performance in short-video understanding while maintaining robust
general-purpose vision-language abilities. The development of Keye-VL rests on
two core pillars: a massive, high-quality dataset exceeding 600 billion tokens
with a strong emphasis on video, and an innovative training recipe. This recipe
features a four-stage pre-training process for solid vision-language alignment,
followed by a meticulous two-phase post-training process. The first
post-training stage enhances foundational capabilities like instruction
following, while the second phase focuses on stimulating advanced reasoning. In
this second phase, a key innovation is our five-mode ``cold-start'' data
mixture, which includes ``thinking'', ``non-thinking'', ``auto-think'', ``think
with image'', and high-quality video data. This mixture teaches the model to
decide when and how to reason. Subsequent reinforcement learning (RL) and
alignment steps further enhance these reasoning capabilities and correct
abnormal model behaviors, such as repetitive outputs. To validate our approach,
we conduct extensive evaluations, showing that Keye-VL achieves
state-of-the-art results on public video benchmarks and remains highly
competitive on general image-based tasks (Figure 1). Furthermore, we develop
and release the \textbf{KC-MMBench}, a new benchmark tailored for real-world
short-video scenarios, where Keye-VL shows a significant advantage. | 2025-07-02T17:57:28Z | Technical Report: https://github.com/Kwai-Keye/Keye | null | null | null | null | null | null | null | null | null |
2507.01951 | Test-Time Scaling with Reflective Generative Model | ['Zixiao Wang', 'Yuxin Wang', 'Xiaorui Wang', 'Mengting Xing', 'Jie Gao', 'Jianjun Xu', 'Guangcan Liu', 'Chenhui Jin', 'Zhuo Wang', 'Shengzhuo Zhang', 'Hongtao Xie'] | ['cs.LG', 'cs.CL'] | We introduce our first reflective generative model MetaStone-S1, which
matches OpenAI o3-mini's performance via the new Reflective Generative Form.
The new form focuses on high-quality reasoning trajectory selection and
contains two novelties: 1) A unified interface for policy and process reward
model: we share the backbone network and use task-specific heads for reasoning
trajectory predicting and scoring respectively, introducing only 53M extra
parameters for trajectory scoring. 2) Eliminating the reliance on process-level
annotation: we provide a self-supervised process reward model, which can
directly learn the high-quality reasoning trajectory selection from the outcome
reward. Equipped with the reflective generative form, MetaStone-S1 is naturally
suitable for test-time scaling, and we provide three reasoning effort modes
(low, medium, and high) based on the controllable thinking length. Experiments
demonstrate that our MetaStone-S1 achieves performance comparable to the OpenAI
o3-mini series with only a 32B parameter size. To support the research
community, we have open-sourced MetaStone-S1 at
https://github.com/MetaStone-AI/MetaStone-S1. | 2025-07-02T17:58:01Z | null | null | null | null | null | null | null | null | null | null |
2507.01991 | FinAI-BERT: A Transformer-Based Model for Sentence-Level Detection of AI
Disclosures in Financial Reports | ['Muhammad Bilal Zafar'] | ['q-fin.CP', 'cs.CL', 'econ.GN', 'q-fin.EC', 'q-fin.GN'] | The proliferation of artificial intelligence (AI) in financial services has
prompted growing demand for tools that can systematically detect AI-related
disclosures in corporate filings. While prior approaches often rely on keyword
expansion or document-level classification, they fall short in granularity,
interpretability, and robustness. This study introduces FinAI-BERT, a
domain-adapted transformer-based language model designed to classify AI-related
content at the sentence level within financial texts. The model was fine-tuned
on a manually curated and balanced dataset of 1,586 sentences drawn from 669
annual reports of U.S. banks (2015 to 2023). FinAI-BERT achieved near-perfect
classification performance (accuracy of 99.37 percent, F1 score of 0.993),
outperforming traditional baselines such as Logistic Regression, Naive Bayes,
Random Forest, and XGBoost. Interpretability was ensured through SHAP-based
token attribution, while bias analysis and robustness checks confirmed the
model's stability across sentence lengths, adversarial inputs, and temporal
samples. Theoretically, the study advances financial NLP by operationalizing
fine-grained, theme-specific classification using transformer architectures.
Practically, it offers a scalable, transparent solution for analysts,
regulators, and scholars seeking to monitor the diffusion and framing of AI
across financial institutions. | 2025-06-29T09:33:29Z | The FinAI-BERT model can be directly loaded via Hugging Face
Transformers (https://huggingface.co/bilalzafar/FinAI-BERT) for
sentence-level AI disclosure classification | null | null | null | null | null | null | null | null | null |
2507.02025 | IntFold: A Controllable Foundation Model for General and Specialized
Biomolecular Structure Prediction | ['The IntFold Team', 'Leon Qiao', 'Wayne Bai', 'He Yan', 'Gary Liu', 'Nova Xi', 'Xiang Zhang', 'Siqi Sun'] | ['q-bio.BM'] | We introduce IntFold, a controllable foundation model for general and
specialized biomolecular structure prediction. Utilizing a high-performance
custom attention kernel, IntFold achieves accuracy comparable to the
state-of-the-art AlphaFold 3 on a comprehensive benchmark of diverse
biomolecular structures, while also significantly outperforming other leading
all-atom prediction approaches. The model's key innovation is its
controllability, enabling downstream applications critical for drug screening
and design. Through specialized adapters, it can be precisely guided to predict
complex allosteric states, apply user-defined structural constraints, and
estimate binding affinity. Furthermore, we present a training-free,
similarity-based method for ranking predictions that improves success rates in
a model-agnostic manner. This report details these advancements and shares
insights from the training and development of this large-scale model. | 2025-07-02T16:09:47Z | null | null | null | null | null | null | null | null | null | null |
2507.02029 | RoboBrain 2.0 Technical Report | ['BAAI RoboBrain Team', 'Mingyu Cao', 'Huajie Tan', 'Yuheng Ji', 'Minglan Lin', 'Zhiyu Li', 'Zhou Cao', 'Pengwei Wang', 'Enshen Zhou', 'Yi Han', 'Yingbo Tang', 'Xiangqi Xu', 'Wei Guo', 'Yaoxu Lyu', 'Yijie Xu', 'Jiayu Shi', 'Mengfei Du', 'Cheng Chi', 'Mengdi Zhao', 'Xiaoshuai Hao', 'Junkai Zhao', 'Xiaojie Zhang', 'Shanyu Rong', 'Huaihai Lyu', 'Zhengliang Cai', 'Yankai Fu', 'Ning Chen', 'Bolun Zhang', 'Lingfeng Zhang', 'Shuyi Zhang', 'Dong Liu', 'Xi Feng', 'Songjing Wang', 'Xiaodan Liu', 'Yance Jiao', 'Mengsi Lyu', 'Zhuo Chen', 'Chenrui He', 'Yulong Ao', 'Xue Sun', 'Zheqi He', 'Jingshu Zheng', 'Xi Yang', 'Donghai Shi', 'Kunchang Xie', 'Bochao Zhang', 'Shaokai Nie', 'Chunlei Men', 'Yonghua Lin', 'Zhongyuan Wang', 'Tiejun Huang', 'Shanghang Zhang'] | ['cs.RO'] | We introduce RoboBrain 2.0, our latest generation of embodied vision-language
foundation models, designed to unify perception, reasoning, and planning for
complex embodied tasks in physical environments. It comes in two variants: a
lightweight 7B model and a full-scale 32B model, featuring a heterogeneous
architecture with a vision encoder and a language model. Despite its compact
size, RoboBrain 2.0 achieves strong performance across a wide spectrum of
embodied reasoning tasks. On both spatial and temporal benchmarks, the 32B
variant achieves leading results, surpassing prior open-source and proprietary
models. In particular, it supports key real-world embodied AI capabilities,
including spatial understanding (e.g., affordance prediction, spatial
referring, trajectory forecasting) and temporal decision-making (e.g.,
closed-loop interaction, multi-agent long-horizon planning, and scene graph
updating). This report details the model architecture, data construction,
multi-stage training strategies, infrastructure and practical applications. We
hope RoboBrain 2.0 advances embodied AI research and serves as a practical step
toward building generalist embodied agents. The code, checkpoint and benchmark
are available at https://superrobobrain.github.io. | 2025-07-02T17:05:33Z | null | null | null | null | null | null | null | null | null | null |
2507.02259 | MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory
Agent | ['Hongli Yu', 'Tinghong Chen', 'Jiangtao Feng', 'Jiangjie Chen', 'Weinan Dai', 'Qiying Yu', 'Ya-Qin Zhang', 'Wei-Ying Ma', 'Jingjing Liu', 'Mingxuan Wang', 'Hao Zhou'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Despite improvements by length extrapolation, efficient attention and memory
modules, handling infinitely long documents with linear complexity without
performance degradation during extrapolation remains the ultimate challenge in
long-text processing. We directly optimize for long-text tasks in an end-to-end
fashion and introduce a novel agent workflow, MemAgent, which reads text in
segments and updates the memory using an overwrite strategy. We extend the DAPO
algorithm to facilitate training via independent-context multi-conversation
generation. MemAgent has demonstrated superb long-context capabilities, being
able to extrapolate from an 8K context trained on 32K text to a 3.5M QA task
with performance loss < 5% and achieves 95%+ in the 512K RULER test. | 2025-07-03T03:11:50Z | Project Page: https://memagent-sialab.github.io/ | null | null | null | null | null | null | null | null | null |
2507.02735 | Meta SecAlign: A Secure Foundation LLM Against Prompt Injection Attacks | ['Sizhe Chen', 'Arman Zharmagambetov', 'David Wagner', 'Chuan Guo'] | ['cs.CR', 'cs.AI'] | Prompt injection attacks pose a significant security threat to LLM-integrated
applications. Model-level defenses have shown strong effectiveness, but are
currently deployed into commercial-grade models in a closed-source manner. We
believe open-source models are needed by the AI security community, where
co-development of attacks and defenses through open research drives scientific
progress in mitigation against prompt injection attacks. To this end, we
develop Meta SecAlign, the first open-source and open-weight LLM with built-in
model-level defense that achieves commercial-grade model performance. We
provide complete details of our training recipe, which utilizes an improved
version of the SOTA SecAlign defense. Evaluations on 9 utility benchmarks and 7
security benchmarks show that Meta SecAlign, despite being trained on a generic
instruction-tuning dataset, confers security in unseen downstream tasks,
including tool-calling and agentic web navigation, in addition to general
instruction-following. Our best model -- Meta-SecAlign-70B -- achieves
state-of-the-art robustness against prompt injection attacks and comparable
utility to closed-source commercial LLMs with model-level defense. | 2025-07-03T15:47:13Z | null | null | null | null | null | null | null | null | null | null |
2507.02768 | DeSTA2.5-Audio: Toward General-Purpose Large Audio Language Model with
Self-Generated Cross-Modal Alignment | ['Ke-Han Lu', 'Zhehuai Chen', 'Szu-Wei Fu', 'Chao-Han Huck Yang', 'Sung-Feng Huang', 'Chih-Kai Yang', 'Chee-En Yu', 'Chun-Wei Chen', 'Wei-Chih Chen', 'Chien-yu Huang', 'Yi-Cheng Lin', 'Yu-Xiang Lin', 'Chi-An Fu', 'Chun-Yi Kuan', 'Wenze Ren', 'Xuanjun Chen', 'Wei-Ping Huang', 'En-Pei Hu', 'Tzu-Quan Lin', 'Yuan-Kuei Wu', 'Kuan-Po Huang', 'Hsiao-Ying Huang', 'Huang-Cheng Chou', 'Kai-Wei Chang', 'Cheng-Han Chiang', 'Boris Ginsburg', 'Yu-Chiang Frank Wang', 'Hung-yi Lee'] | ['eess.AS', 'cs.CL', 'cs.SD'] | We introduce DeSTA2.5-Audio, a general-purpose Large Audio Language Model
(LALM) designed for robust auditory perception and instruction-following,
without requiring task-specific audio instruction-tuning. Recent LALMs
typically augment Large Language Models (LLMs) with auditory capabilities by
training on large-scale, manually curated or LLM-synthesized audio-instruction
datasets. However, these approaches have often suffered from the catastrophic
forgetting of the LLM's original language abilities. To address this, we
revisit the data construction pipeline and propose DeSTA, a self-generated
cross-modal alignment strategy in which the backbone LLM generates its own
training targets. This approach preserves the LLM's native language proficiency
while establishing effective audio-text alignment, thereby enabling zero-shot
generalization without task-specific tuning. Using DeSTA, we construct
DeSTA-AQA5M, a large-scale, task-agnostic dataset containing 5 million training
samples derived from 7,000 hours of audio spanning 50 diverse datasets,
including speech, environmental sounds, and music. DeSTA2.5-Audio achieves
state-of-the-art or competitive performance across a wide range of
audio-language benchmarks, including Dynamic-SUPERB, MMAU, SAKURA,
Speech-IFEval, and VoiceBench. Comprehensive comparative studies demonstrate
that our self-generated strategy outperforms widely adopted data construction
and training strategies in both auditory perception and instruction-following
capabilities. Our findings underscore the importance of carefully designed data
construction in LALM development and offer practical insights for building
robust, general-purpose LALMs. | 2025-07-03T16:28:25Z | Model and code available at:
https://github.com/kehanlu/DeSTA2.5-Audio | null | null | null | null | null | null | null | null | null |
2507.02813 | LangScene-X: Reconstruct Generalizable 3D Language-Embedded Scenes with
TriMap Video Diffusion | ['Fangfu Liu', 'Hao Li', 'Jiawei Chi', 'Hanyang Wang', 'Minghui Yang', 'Fudong Wang', 'Yueqi Duan'] | ['cs.CV'] | Recovering 3D structures with open-vocabulary scene understanding from 2D
images is a fundamental but daunting task. Recent developments have achieved
this by performing per-scene optimization with embedded language information.
However, they heavily rely on the calibrated dense-view reconstruction
paradigm, thereby suffering from severe rendering artifacts and implausible
semantic synthesis when limited views are available. In this paper, we
introduce a novel generative framework, coined LangScene-X, to unify and
generate 3D consistent multi-modality information for reconstruction and
understanding. Powered by the generative capability of creating more consistent
novel observations, we can build generalizable 3D language-embedded scenes from
only sparse views. Specifically, we first train a TriMap video diffusion model
that can generate appearance (RGBs), geometry (normals), and semantics
(segmentation maps) from sparse inputs through progressive knowledge
integration. Furthermore, we propose a Language Quantized Compressor (LQC),
trained on large-scale image datasets, to efficiently encode language
embeddings, enabling cross-scene generalization without per-scene retraining.
Finally, we reconstruct the language surface fields by aligning language
information onto the surface of 3D scenes, enabling open-ended language
queries. Extensive experiments on real-world data demonstrate the superiority
of our LangScene-X over state-of-the-art methods in terms of quality and
generalizability. Project Page: https://liuff19.github.io/LangScene-X. | 2025-07-03T17:21:23Z | Project page: https://liuff19.github.io/LangScene-X | null | null | null | null | null | null | null | null | null |
2507.02851 | MOTIF: Modular Thinking via Reinforcement Fine-tuning in LLMs | ['Purbesh Mitra', 'Sennur Ulukus'] | ['cs.CL', 'cs.AI', 'cs.IT', 'cs.LG', 'cs.SY', 'eess.SY', 'math.IT'] | Recent advancements in the reasoning capabilities of large language models
(LLMs) show that employing group relative policy optimization (GRPO) algorithm
for reinforcement learning (RL) training allows the models to use more
thinking/reasoning tokens for generating better responses. However, LLMs can
generate only a finite number of tokens while maintaining attention to the
previously generated tokens. This limit, also known as the context size of an
LLM, is a bottleneck in LLM reasoning with an arbitrarily large number of tokens.
To think beyond the limit of context size, an LLM must employ a modular
thinking strategy to reason over multiple rounds. In this work, we propose
$\textbf{MOTIF: Modular Thinking via Reinforcement Finetuning}$ -- an RL
training method for generating thinking tokens in multiple rounds, effectively
allowing the model to think with additional context size. We trained the
open-source model Qwen2.5-3B-Instruct on GSM8K dataset via parameter efficient
fine-tuning and tested its accuracy on MATH500 and AIME2024 benchmarks. Our
experiments show 3.8\% and 3.3\% improvements over vanilla GRPO based training
in the respective benchmarks. Furthermore, this improvement was achieved with
only 15\% of samples, thus demonstrating sample efficiency of MOTIF. Our code
and models are available at https://github.com/purbeshmitra/MOTIF and
https://huggingface.co/purbeshmitra/MOTIF, respectively. | 2025-07-03T17:55:43Z | null | null | null | null | null | null | null | null | null | null |
2507.03033 | Preserving Privacy, Increasing Accessibility, and Reducing Cost: An
On-Device Artificial Intelligence Model for Medical Transcription and Note
Generation | ['Johnson Thomas', 'Ayush Mudgal', 'Wendao Liu', 'Nisten Tahiraj', 'Zeeshaan Mohammed', 'Dhruv Diddi'] | ['cs.CL', 'cs.AI'] | Background: Clinical documentation represents a significant burden for
healthcare providers, with physicians spending up to 2 hours daily on
administrative tasks. Recent advances in large language models (LLMs) offer
promising solutions, but privacy concerns and computational requirements limit
their adoption in healthcare settings. Objective: To develop and evaluate a
privacy-preserving, on-device medical transcription system using a fine-tuned
Llama 3.2 1B model capable of generating structured medical notes from medical
transcriptions while maintaining complete data sovereignty entirely in the
browser. Methods: We fine-tuned a Llama 3.2 1B model using Parameter-Efficient
Fine-Tuning (PEFT) with LoRA on 1,500 synthetic medical
transcription-to-structured note pairs. The model was evaluated against the
base Llama 3.2 1B on two datasets: 100 endocrinology transcripts and 140
modified ACI benchmark cases. Evaluation employed both statistical metrics
(ROUGE, BERTScore, BLEURT) and LLM-as-judge assessments across multiple
clinical quality dimensions. Results: The fine-tuned OnDevice model
demonstrated substantial improvements over the base model. On the ACI
benchmark, ROUGE-1 scores increased from 0.346 to 0.496, while BERTScore F1
improved from 0.832 to 0.866. Clinical quality assessments showed marked
reduction in major hallucinations (from 85 to 35 cases) and enhanced factual
correctness (2.81 to 3.54 on a 5-point scale). Similar improvements were observed
on the internal evaluation dataset, with composite scores increasing from 3.13
to 4.43 (+41.5%). Conclusions: Fine-tuning compact LLMs for medical
transcription yields clinically meaningful improvements while enabling complete
on-device browser deployment. This approach addresses key barriers to AI
adoption in healthcare: privacy preservation, cost reduction, and accessibility
for resource-constrained environments. | 2025-07-03T01:51:49Z | null | null | null | null | null | null | null | null | null | null |
2507.03112 | RLVER: Reinforcement Learning with Verifiable Emotion Rewards for
Empathetic Agents | ['Peisong Wang', 'Ruotian Ma', 'Bang Zhang', 'Xingyu Chen', 'Zhiwei He', 'Kang Luo', 'Qingsong Lv', 'Qingxuan Jiang', 'Zheng Xie', 'Shanyi Wang', 'Yuan Li', 'Fanghua Ye', 'Jian Li', 'Yifan Yang', 'Zhaopeng Tu', 'Xiaolong Li'] | ['cs.CL', 'cs.AI', 'cs.CY'] | Large language models (LLMs) excel at logical and algorithmic reasoning, yet
their emotional intelligence (EQ) still lags far behind their cognitive
prowess. While reinforcement learning from verifiable rewards (RLVR) has
advanced in other domains, its application to dialogue-especially for emotional
intelligence-remains underexplored. In this work, we introduce RLVER, the first
end-to-end reinforcement learning framework that leverages verifiable emotion
rewards from simulated users to cultivate higher-order empathetic abilities in
LLMs. Within this framework, self-consistent affective simulated users engage
in dialogue rollouts and produce deterministic emotion scores during
conversations, serving as reward signals to guide the LLM's learning.
Fine-tuning the publicly available Qwen2.5-7B-Instruct model with PPO boosts its
Sentient-Benchmark score from 13.3 to 79.2 while largely preserving
mathematical and coding competence. Extensive experiments reveal that: (i)
RLVER consistently improves multiple dialogue capabilities; (ii) Thinking and
non-thinking models show distinct trends--thinking models excel in empathy and
insight, while non-thinking models favor action; (iii) GRPO often yields stable
gains, while PPO can push certain capabilities to a higher ceiling; (iv) More
challenging environments are not always better: moderate ones can yield stronger
outcomes. Our results show that RLVER is a practical route toward emotionally
intelligent and broadly capable language agents. | 2025-07-03T18:33:18Z | Code: https://github.com/Tencent/DigitalHuman/tree/main/RLVER | null | null | null | null | null | null | null | null | null |
2507.03152 | Expert-level validation of AI-generated medical text with scalable
language models | ['Asad Aali', 'Vasiliki Bikia', 'Maya Varma', 'Nicole Chiou', 'Sophie Ostmeier', 'Arnav Singhvi', 'Magdalini Paschali', 'Ashwin Kumar', 'Andrew Johnston', 'Karimar Amador-Martinez', 'Eduardo Juan Perez Guerrero', 'Paola Naovi Cruz Rivera', 'Sergios Gatidis', 'Christian Bluethgen', 'Eduardo Pontes Reis', 'Eddy D. Zandee van Rilland', 'Poonam Laxmappa Hosamani', 'Kevin R Keet', 'Minjoung Go', 'Evelyn Ling', 'David B. Larson', 'Curtis Langlotz', 'Roxana Daneshjou', 'Jason Hom', 'Sanmi Koyejo', 'Emily Alsentzer', 'Akshay S. Chaudhari'] | ['cs.CL', 'cs.AI', 'cs.LG'] | With the growing use of language models (LMs) in clinical environments, there
is an immediate need to evaluate the accuracy and safety of LM-generated
medical text. Currently, such evaluation relies solely on manual physician
review. However, detecting errors in LM-generated text is challenging because
1) manual review is costly and 2) expert-composed reference outputs are often
unavailable in real-world settings. While the "LM-as-judge" paradigm (an LM
evaluating another LM) offers scalable evaluation, even frontier LMs can miss
subtle but clinically significant errors. To address these challenges, we
propose MedVAL, a self-supervised framework that leverages synthetic data to
train evaluator LMs to assess whether LM-generated medical outputs are
factually consistent with inputs, without requiring physician labels or
reference outputs. To evaluate LM performance, we introduce MedVAL-Bench, a
dataset containing 840 outputs annotated by physicians, following a
physician-defined taxonomy of risk levels and error categories. Across 6
diverse medical tasks and 10 state-of-the-art LMs spanning open-source,
proprietary, and medically adapted models, MedVAL fine-tuning significantly
improves (p < 0.001) alignment with physicians on both seen and unseen tasks,
increasing average F1 scores from 66% to 83%, with per-sample safety
classification scores up to 86%. MedVAL improves the performance of even the
best-performing proprietary LM (GPT-4o) by 8%. To support a scalable,
risk-aware pathway towards clinical integration, we open-source the 1) codebase
(https://github.com/StanfordMIMI/MedVAL), 2) MedVAL-Bench
(https://huggingface.co/datasets/stanfordmimi/MedVAL-Bench), and 3) MedVAL-4B
(https://huggingface.co/stanfordmimi/MedVAL-4B), the best-performing
open-source LM. Our research provides the first evidence of LMs approaching
expert-level validation ability for medical text. | 2025-07-03T20:19:18Z | null | null | null | null | null | null | null | null | null | null |
2507.03482 | OMAR-RQ: Open Music Audio Representation Model Trained with
Multi-Feature Masked Token Prediction | ['Pablo Alonso-Jiménez', 'Pedro Ramoneda', 'R. Oguz Araz', 'Andrea Poltronieri', 'Dmitry Bogdanov'] | ['cs.SD', 'eess.AS'] | Developing open-source foundation models is essential for advancing research
in music audio understanding and ensuring access to powerful, multipurpose
representations for music information retrieval. We present OMAR-RQ, a model
trained with self-supervision via masked token classification methodologies
using a large-scale dataset with over 330,000 hours of music audio. We
experiment with different input features and quantization options, and achieve
state-of-the-art performance in music tagging, pitch estimation, chord
recognition, beat tracking, segmentation, and difficulty estimation among open
self-supervised models. We open-source our training and evaluation pipelines
and model weights, available at https://github.com/mtg/omar-rq. | 2025-07-04T11:19:47Z | null | null | null | null | null | null | null | null | null | null |
2507.03607 | VLAI: A RoBERTa-Based Model for Automated Vulnerability Severity
Classification | ['Cédric Bonhomme', 'Alexandre Dulaunoy'] | ['cs.CR'] | This paper presents VLAI, a transformer-based model that predicts software
vulnerability severity levels directly from text descriptions. Built on
RoBERTa, VLAI is fine-tuned on over 600,000 real-world vulnerabilities and
achieves over 82% accuracy in predicting severity categories, enabling faster
and more consistent triage ahead of manual CVSS scoring. The model and dataset
are open-source and integrated into the Vulnerability-Lookup service. | 2025-07-04T14:28:14Z | This paper is a preprint for the 25V4C-TC: 2025 Vulnerability
Forecasting Technical Colloquia. Darwin College Cambridge, UK, September
25-26, 2025 | null | null | null | null | null | null | null | null | null |
2507.03738 | Flow-Anchored Consistency Models | ['Yansong Peng', 'Kai Zhu', 'Yu Liu', 'Pingyu Wu', 'Hebei Li', 'Xiaoyan Sun', 'Feng Wu'] | ['cs.CV'] | Continuous-time Consistency Models (CMs) promise efficient few-step
generation but face significant challenges with training instability. We argue
this instability stems from a fundamental conflict: by training a network to
learn only a shortcut across a probability flow, the model loses its grasp on
the instantaneous velocity field that defines the flow. Our solution is to
explicitly anchor the model in the underlying flow during training. We
introduce the Flow-Anchored Consistency Model (FACM), a simple but effective
training strategy that uses a Flow Matching (FM) task as an anchor for the
primary CM shortcut objective. This Flow-Anchoring approach requires no
architectural modifications and is broadly compatible with standard model
architectures. By distilling a pre-trained LightningDiT model, our method
achieves a state-of-the-art FID of 1.32 with two steps (NFE=2) and 1.76 with
just one step (NFE=1) on ImageNet 256x256, significantly outperforming previous
methods. This provides a general and effective recipe for building
high-performance, few-step generative models. Our code and pretrained models:
https://github.com/ali-vilab/FACM. | 2025-07-04T17:56:51Z | null | null | null | null | null | null | null | null | null | null |
2507.04569 | Nile-Chat: Egyptian Language Models for Arabic and Latin Scripts | ['Guokan Shang', 'Hadi Abdine', 'Ahmad Chamma', 'Amr Mohamed', 'Mohamed Anwar', 'Abdelaziz Bounhar', 'Omar El Herraoui', 'Preslav Nakov', 'Michalis Vazirgiannis', 'Eric Xing'] | ['cs.CL', 'cs.AI', 'cs.LG'] | We introduce Nile-Chat-4B, 3x4B-A6B, and 12B, a collection of LLMs for
Egyptian dialect, uniquely designed to understand and generate texts written in
both Arabic and Latin scripts. Specifically, with Nile-Chat-3x4B-A6B, we
introduce a novel language adaptation approach by leveraging the
Branch-Train-MiX strategy to merge script-specialized experts into a single
MoE model. Our Nile-Chat models significantly outperform leading multilingual
and Arabic LLMs, such as LLaMa, Jais, and ALLaM, on our newly introduced
Egyptian evaluation benchmarks, which span both understanding and generative
tasks. Notably, our 12B model yields a 14.4% performance gain over
Qwen2.5-14B-Instruct on Latin-script benchmarks. All our resources are publicly
available. We believe this work presents a comprehensive methodology for
adapting LLMs to dual-script languages, addressing an often overlooked aspect
in modern LLM development. | 2025-07-06T22:53:41Z | null | null | null | null | null | null | null | null | null | null |
2507.04590 | VLM2Vec-V2: Advancing Multimodal Embedding for Videos, Images, and
Visual Documents | ['Rui Meng', 'Ziyan Jiang', 'Ye Liu', 'Mingyi Su', 'Xinyi Yang', 'Yuepeng Fu', 'Can Qin', 'Zeyuan Chen', 'Ran Xu', 'Caiming Xiong', 'Yingbo Zhou', 'Wenhu Chen', 'Semih Yavuz'] | ['cs.CV', 'cs.CL'] | Multimodal embedding models have been crucial in enabling various downstream
tasks such as semantic similarity, information retrieval, and clustering over
different modalities. However, existing multimodal embeddings like VLM2Vec,
E5-V, and GME are predominantly focused on natural images, with limited support for
other visual forms such as videos and visual documents. This restricts their
applicability in real-world scenarios, including AI agents, multi-modal search
and recommendation, and retrieval-augmented generation (RAG). To close this
gap, we propose VLM2Vec-V2, a unified framework for learning embeddings across
diverse visual forms. First, we introduce MMEB-V2, a comprehensive benchmark
that extends MMEB with five new task types: visual document retrieval, video
retrieval, temporal grounding, video classification and video question
answering - spanning text, image, video, and visual document inputs. Next, we
train VLM2Vec-V2, a general-purpose embedding model that supports text, image,
video, and visual document inputs. Extensive experiments show that VLM2Vec-V2
achieves strong performance not only on the newly introduced video and document
retrieval tasks, but also improves over prior baselines on the original image
benchmarks. Through extensive evaluation, our study offers insights into the
generalizability of various multimodal embedding models and highlights
effective strategies for unified embedding learning, laying the groundwork for
more scalable and adaptable representation learning in both research and
real-world settings. | 2025-07-07T00:51:57Z | Technical Report | null | null | null | null | null | null | null | null | null |
2507.04612 | Retain or Reframe? A Computational Framework for the Analysis of Framing
in News Articles and Reader Comments | ['Matteo Guida', 'Yulia Otmakhova', 'Eduard Hovy', 'Lea Frermann'] | ['cs.CL'] | When a news article describes immigration as an "economic burden" or a
"humanitarian crisis," it selectively emphasizes certain aspects of the issue.
Although \textit{framing} shapes how the public interprets such issues,
audiences do not absorb frames passively but actively reorganize the presented
information. While this relationship between source content and audience
response is well-documented in the social sciences, NLP approaches often ignore
it, detecting frames in articles and responses in isolation. We present the
first computational framework for large-scale analysis of framing across source
content (news articles) and audience responses (reader comments).
Methodologically, we refine frame labels and develop a framework that
reconstructs dominant frames in articles and comments from sentence-level
predictions, and aligns articles with topically relevant comments. Applying our
framework across eleven topics and two news outlets, we find that frame reuse
in comments correlates highly across outlets, while topic-specific patterns
vary. We release a frame classifier that performs well on both articles and
comments, a dataset of article and comment sentences manually labeled for
frames, and a large-scale dataset of articles and comments with predicted frame
labels. | 2025-07-07T02:05:56Z | null | null | null | null | null | null | null | null | null | null |
2507.04635 | MODA: MOdular Duplex Attention for Multimodal Perception, Cognition, and
Emotion Understanding | ['Zhicheng Zhang', 'Wuyou Xia', 'Chenxi Zhao', 'Zhou Yan', 'Xiaoqiang Liu', 'Yongjie Zhu', 'Wenyu Qin', 'Pengfei Wan', 'Di Zhang', 'Jufeng Yang'] | ['cs.CV'] | Multimodal large language models (MLLMs) have recently shown strong capability in
integrating data across multiple modalities, empowered by a generalizable
attention architecture. Advanced methods predominantly focus on
language-centric tuning, leaving multimodal tokens mixed through attention
less explored, posing challenges in high-level tasks that require fine-grained
cognition and emotion understanding. In this work, we identify the attention
deficit disorder problem in multimodal learning, caused by inconsistent
cross-modal attention and layer-by-layer decayed attention activation. To
address this, we propose a novel attention mechanism, termed MOdular Duplex
Attention (MODA), simultaneously conducting the inner-modal refinement and
inter-modal interaction. MODA employs a correct-after-align strategy to
effectively decouple modality alignment from cross-layer token mixing. In the
alignment phase, tokens are mapped to duplex modality spaces based on the basis
vectors, enabling the interaction between visual and language modality.
Further, the correctness of attention scores is ensured through adaptive masked
attention, which enhances the model's flexibility by allowing customizable
masking patterns for different modalities. Extensive experiments on 21
benchmark datasets verify the effectiveness of MODA in perception, cognition,
and emotion tasks. Source code and demo are available in
https://zzcheng.top/MODA. | 2025-07-07T03:37:42Z | ICML 2025 (Spotlight, Top 2.6%) | null | null | null | null | null | null | null | null | null |
2507.04886 | Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen
Visual Unicode Representations | ['A. Bochkov'] | ['cs.CL', 'cs.AI'] | Understanding the locus of semantic representation in large language models
(LLMs) is crucial for interpretability and architectural innovation. The
dominant paradigm posits that trainable input embeddings serve as foundational
"meaning vectors." This paper challenges that view. We construct Transformer
models where the embedding layer is entirely frozen, with vectors derived not
from data, but from the visual structure of Unicode glyphs. These non-semantic,
precomputed visual embeddings are fixed throughout training. Our method is
compatible with any tokenizer, including a novel Unicode-centric tokenizer we
introduce to ensure universal text coverage. Despite the absence of trainable,
semantically initialized embeddings, our models converge, generate coherent
text, and, critically, outperform architecturally identical models with
trainable embeddings on the MMLU reasoning benchmark. We attribute this to
"representational interference" in conventional models, where the embedding
layer is burdened with learning both structural and semantic features. Our
results indicate that high-level semantics are not inherent to input embeddings
but are an emergent property of the Transformer's compositional architecture
and data scale. This reframes the role of embeddings from meaning containers to
structural primitives. We release all code and models to foster further
research. | 2025-07-07T11:17:32Z | null | null | null | null | null | null | null | null | null | null |
2507.05197 | Pre-Trained Policy Discriminators are General Reward Models | ['Shihan Dou', 'Shichun Liu', 'Yuming Yang', 'Yicheng Zou', 'Yunhua Zhou', 'Shuhao Xing', 'Chenhao Huang', 'Qiming Ge', 'Demin Song', 'Haijun Lv', 'Songyang Gao', 'Chengqi Lv', 'Enyu Zhou', 'Honglin Guo', 'Zhiheng Xi', 'Wenwei Zhang', 'Qipeng Guo', 'Qi Zhang', 'Xipeng Qiu', 'Xuanjing Huang', 'Tao Gui', 'Kai Chen'] | ['cs.CL', 'cs.LG'] | We offer a novel perspective on reward modeling by formulating it as a policy
discriminator, which quantifies the difference between two policies to generate
a reward signal, guiding the training policy towards a target policy with
desired behaviors. Based on this conceptual insight, we propose a scalable
pre-training method named Policy Discriminative Learning (POLAR), which trains
a reward model (RM) to discern identical policies and discriminate different
ones. Unlike traditional reward modeling methods relying on absolute
preferences, POLAR captures the relative difference between one policy and an
arbitrary target policy, which is a scalable, high-level optimization objective
suitable for modeling generic ranking relationships. Leveraging the POLAR
pre-training paradigm, we present a series of RMs with parameter scales from
1.8B to 7B. Empirical results show that POLAR substantially outperforms
traditional non-pre-trained methods, significantly enhancing RM performance.
For instance, POLAR-7B could improve preference accuracy from 54.8% to 81.0% on
STEM tasks and from 57.9% to 85.5% on creative writing tasks compared to SOTA
baselines. POLAR also shows robust generalization capabilities in RLHF using
Reinforcement Fine-tuning (RFT), providing reliable reward signals and markedly
enhancing policy performance--improving LLaMa3.1-8B from an average of 47.36%
to 56.33% and Qwen2.5-32B from 64.49% to 70.47% on 20 benchmarks. Moreover,
scaling experiments reveal a clear power-law relationship between computation
and performance, supported by linear correlation coefficients approaching 0.99.
The impressive performance, strong generalization, and scaling properties
suggest that POLAR is a promising direction for developing general and strong
reward models. | 2025-07-07T16:56:31Z | null | null | null | null | null | null | null | null | null | null |
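The abstract does not spell out POLAR's training loss. One plausible contrastive form of "discern identical policies, discriminate different ones" is an InfoNCE-style objective over trajectory representations; the dot-product score and this loss shape are assumptions for illustration only:

```python
import math

def score(ref, cand):
    """Hypothetical reward-model score for a (reference, candidate)
    trajectory pair; here just a dot product of trajectory embeddings."""
    return sum(a * b for a, b in zip(ref, cand))

def policy_discrimination_loss(ref, same_policy, other_policies):
    """InfoNCE-style sketch: pull the trajectory from the same policy
    toward the reference, push trajectories from other policies away."""
    scores = [score(ref, same_policy)] + [score(ref, o) for o in other_policies]
    exps = [math.exp(s) for s in scores]
    return -math.log(exps[0] / sum(exps))

ref = [1.0, 0.0]
# A well-separated negative gives a smaller loss than a near-duplicate one.
loss_easy = policy_discrimination_loss(ref, [1.0, 0.0], [[-1.0, 0.0]])
loss_hard = policy_discrimination_loss(ref, [1.0, 0.0], [[0.9, 0.0]])
```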
2507.05201 | MedGemma Technical Report | ['Andrew Sellergren', 'Sahar Kazemzadeh', 'Tiam Jaroensri', 'Atilla Kiraly', 'Madeleine Traverse', 'Timo Kohlberger', 'Shawn Xu', 'Fayaz Jamil', 'Cían Hughes', 'Charles Lau', 'Justin Chen', 'Fereshteh Mahvar', 'Liron Yatziv', 'Tiffany Chen', 'Bram Sterling', 'Stefanie Anna Baby', 'Susanna Maria Baby', 'Jeremy Lai', 'Samuel Schmidgall', 'Lu Yang', 'Kejia Chen', 'Per Bjornsson', 'Shashir Reddy', 'Ryan Brush', 'Kenneth Philbrick', 'Mercy Asiedu', 'Ines Mezerreg', 'Howard Hu', 'Howard Yang', 'Richa Tiwari', 'Sunny Jansen', 'Preeti Singh', 'Yun Liu', 'Shekoofeh Azizi', 'Aishwarya Kamath', 'Johan Ferret', 'Shreya Pathak', 'Nino Vieillard', 'Ramona Merhej', 'Sarah Perrin', 'Tatiana Matejovicova', 'Alexandre Ramé', 'Morgane Riviere', 'Louis Rouillard', 'Thomas Mesnard', 'Geoffrey Cideron', 'Jean-bastien Grill', 'Sabela Ramos', 'Edouard Yvinec', 'Michelle Casbon', 'Elena Buchatskaya', 'Jean-Baptiste Alayrac', 'Dmitry Lepikhin', 'Vlad Feinberg', 'Sebastian Borgeaud', 'Alek Andreev', 'Cassidy Hardin', 'Robert Dadashi', 'Léonard Hussenot', 'Armand Joulin', 'Olivier Bachem', 'Yossi Matias', 'Katherine Chou', 'Avinatan Hassidim', 'Kavi Goel', 'Clement Farabet', 'Joelle Barral', 'Tris Warkentin', 'Jonathon Shlens', 'David Fleet', 'Victor Cotruta', 'Omar Sanseviero', 'Gus Martins', 'Phoebe Kirk', 'Anand Rao', 'Shravya Shetty', 'David F. Steiner', 'Can Kirmizibayrak', 'Rory Pilgrim', 'Daniel Golden', 'Lin Yang'] | ['cs.AI', 'cs.CL', 'cs.CV'] | Artificial intelligence (AI) has significant potential in healthcare
applications, but its training and deployment faces challenges due to
healthcare's diverse data, complex tasks, and the need to preserve privacy.
Foundation models that perform well on medical tasks and require less
task-specific tuning data are critical to accelerate the development of
healthcare AI applications. We introduce MedGemma, a collection of medical
vision-language foundation models based on Gemma 3 4B and 27B. MedGemma
demonstrates advanced medical understanding and reasoning on images and text,
significantly exceeding the performance of similar-sized generative models and
approaching the performance of task-specific models, while maintaining the
general capabilities of the Gemma 3 base models. For out-of-distribution tasks,
MedGemma achieves 2.6-10% improvement on medical multimodal question answering,
15.5-18.1% improvement on chest X-ray finding classification, and 10.8%
improvement on agentic evaluations compared to the base models. Fine-tuning
MedGemma further improves performance in subdomains, reducing errors in
electronic health record information retrieval by 50% and reaching comparable
performance to existing specialized state-of-the-art methods for pneumothorax
classification and histopathology patch classification. We additionally
introduce MedSigLIP, a medically-tuned vision encoder derived from SigLIP.
MedSigLIP powers the visual understanding capabilities of MedGemma and as an
encoder achieves comparable or better performance than specialized medical
image encoders. Taken together, the MedGemma collection provides a strong
foundation of medical image and text capabilities, with potential to
significantly accelerate medical research and development of downstream
applications. The MedGemma collection, including tutorials and model weights,
can be found at https://goo.gle/medgemma. | 2025-07-07T17:01:44Z | null | null | null | null | null | null | null | null | null | null |
2507.05240 | StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context
Modeling | ['Meng Wei', 'Chenyang Wan', 'Xiqian Yu', 'Tai Wang', 'Yuqiang Yang', 'Xiaohan Mao', 'Chenming Zhu', 'Wenzhe Cai', 'Hanqing Wang', 'Yilun Chen', 'Xihui Liu', 'Jiangmiao Pang'] | ['cs.RO', 'cs.CV'] | Vision-and-Language Navigation (VLN) in real-world settings requires agents
to process continuous visual streams and generate actions with low latency
grounded in language instructions. While Video-based Large Language Models
(Video-LLMs) have driven recent progress, current VLN methods based on
Video-LLM often face trade-offs among fine-grained visual understanding,
long-term context modeling and computational efficiency. We introduce
StreamVLN, a streaming VLN framework that employs a hybrid slow-fast context
modeling strategy to support multi-modal reasoning over interleaved vision,
language and action inputs. The fast-streaming dialogue context facilitates
responsive action generation through a sliding-window of active dialogues,
while the slow-updating memory context compresses historical visual states
using a 3D-aware token pruning strategy. With this slow-fast design, StreamVLN
achieves coherent multi-turn dialogue through efficient KV cache reuse,
supporting long video streams with bounded context size and inference cost.
Experiments on VLN-CE benchmarks demonstrate state-of-the-art performance with
stable low latency, ensuring robustness and efficiency in real-world
deployment. The project page is:
\href{https://streamvln.github.io/}{https://streamvln.github.io/}. | 2025-07-07T17:49:41Z | null | null | null | null | null | null | null | null | null | null |
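The slow-fast split described above can be sketched as two buffers: a bounded sliding window of recent dialogue turns (fast) and a subsampled history (slow). This is a structural sketch only; the window size, the keep-every-k subsampling standing in for 3D-aware token pruning, and the absence of real KV-cache handling are all assumptions:

```python
from collections import deque

class SlowFastContext:
    """Sketch of a slow-fast context buffer. The fast window holds the
    most recent turns; evicted turns are 'compressed' into slow memory
    by keeping only every k-th one (a crude stand-in for the paper's
    3D-aware token pruning)."""

    def __init__(self, fast_size=4, keep_every=3):
        self.fast = deque(maxlen=fast_size)  # sliding window of active turns
        self.slow = []                       # compressed history
        self.keep_every = keep_every
        self._evictions = 0

    def observe(self, turn):
        if len(self.fast) == self.fast.maxlen:
            evicted = self.fast[0]
            if self._evictions % self.keep_every == 0:
                self.slow.append(evicted)    # subsample evicted history
            self._evictions += 1
        self.fast.append(turn)

    def context(self):
        """Context size stays bounded no matter how long the stream is."""
        return self.slow + list(self.fast)
```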
2507.05513 | Llama Nemoretriever Colembed: Top-Performing Text-Image Retrieval Model | ['Mengyao Xu', 'Gabriel Moreira', 'Ronay Ak', 'Radek Osmulski', 'Yauhen Babakhin', 'Zhiding Yu', 'Benedikt Schifferer', 'Even Oldridge'] | ['cs.CV', 'cs.AI'] | Motivated by the growing demand for retrieval systems that operate across
modalities, we introduce llama-nemoretriever-colembed, a unified text-image
retrieval model that delivers state-of-the-art performance across multiple
benchmarks. We release two model variants, 1B and 3B. The 3B model achieves
state-of-the-art performance, scoring NDCG@5 of 91.0 on ViDoRe V1 and 63.5 on
ViDoRe V2, placing first on both leaderboards as of June 27, 2025.
Our approach leverages the NVIDIA Eagle2 Vision-Language model (VLM),
modifies its architecture by replacing causal attention with bidirectional
attention, and integrates a ColBERT-style late interaction mechanism to enable
fine-grained multimodal retrieval in a shared embedding space. While this
mechanism delivers superior retrieval accuracy, it introduces trade-offs in
storage and efficiency. We provide a comprehensive analysis of these
trade-offs. Additionally, we adopt a two-stage training strategy to enhance the
model's retrieval capabilities. | 2025-07-07T22:20:04Z | null | null | null | null | null | null | null | null | null | null |
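The ColBERT-style late interaction mentioned above has a standard form: for each query token embedding, take the maximum similarity over all document token embeddings, then sum over query tokens. The tiny 2-d embeddings below are made up for illustration:

```python
def maxsim_score(query_vecs, doc_vecs):
    """ColBERT-style late interaction (MaxSim): for each query token
    embedding, take its maximum dot product over all document token
    embeddings, then sum across query tokens."""
    return sum(
        max(sum(q * d for q, d in zip(qv, dv)) for dv in doc_vecs)
        for qv in query_vecs
    )

# Toy 2-d token embeddings: doc_match covers both query tokens,
# doc_off only weakly matches either.
query     = [[1.0, 0.0], [0.0, 1.0]]
doc_match = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
doc_off   = [[0.2, 0.0], [0.0, 0.3]]
```

This per-token scoring is why the mechanism is fine-grained but storage-heavy: every document token embedding must be kept, which is the storage/efficiency trade-off the abstract analyzes.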
2507.05517 | Empowering Healthcare Practitioners with Language Models: Structuring
Speech Transcripts in Two Real-World Clinical Applications | ['Jean-Philippe Corbeil', 'Asma Ben Abacha', 'George Michalopoulos', 'Phillip Swazinna', 'Miguel Del-Agua', 'Jerome Tremblay', 'Akila Jeeson Daniel', 'Cari Bader', 'Yu-Cheng Cho', 'Pooja Krishnan', 'Nathan Bodenstab', 'Thomas Lin', 'Wenxuan Teng', 'Francois Beaulieu', 'Paul Vozila'] | ['cs.CL', 'cs.AI'] | Large language models (LLMs) such as GPT-4o and o1 have demonstrated strong
performance on clinical natural language processing (NLP) tasks across multiple
medical benchmarks. Nonetheless, two high-impact NLP tasks - structured tabular
reporting from nurse dictations and medical order extraction from
doctor-patient consultations - remain underexplored due to data scarcity and
sensitivity, despite active industry efforts. Practical solutions to these
real-world clinical tasks can significantly reduce the documentation burden on
healthcare providers, allowing greater focus on patient care. In this paper, we
investigate these two challenging tasks using private and open-source clinical
datasets, evaluating the performance of both open- and closed-weight LLMs, and
analyzing their respective strengths and limitations. Furthermore, we propose
an agentic pipeline for generating realistic, non-sensitive nurse dictations,
enabling structured extraction of clinical observations. To support further
research in both areas, we release SYNUR and SIMORD, the first open-source
datasets for nurse observation extraction and medical order extraction. | 2025-07-07T22:29:29Z | null | null | null | null | null | null | null | null | null | null |
2507.06167 | Skywork-R1V3 Technical Report | ['Wei Shen', 'Jiangbo Pei', 'Yi Peng', 'Xuchen Song', 'Yang Liu', 'Jian Peng', 'Haofeng Sun', 'Yunzhuo Hao', 'Peiyu Wang', 'Jianhao Zhang', 'Yahui Zhou'] | ['cs.CL', 'cs.CV'] | We introduce Skywork-R1V3, an advanced, open-source vision-language model
(VLM) that pioneers a new approach to visual reasoning. Its key innovation lies
in effectively transferring reasoning skills from text-only Large Language
Models (LLMs) to visual tasks. The strong performance of Skywork-R1V3 primarily
stems from our elaborate post-training RL framework, which effectively
activates and enhances the model's reasoning ability, without the need for
additional continued pre-training. Through this framework, we further uncover
the fundamental role of the connector module in achieving robust cross-modal
alignment for multimodal reasoning models. In addition, we introduce a unique
indicator of reasoning capability, the entropy of critical reasoning tokens,
which has proven highly effective for checkpoint selection during RL training.
Skywork-R1V3 achieves state-of-the-art results on MMMU, significantly improving
from 64.3% to 76.0%. This performance matches entry-level human capabilities.
Remarkably, our RL-powered post-training approach enables even the 38B
parameter model to rival top closed-source VLMs. The implementation
successfully transfers mathematical reasoning to other subject-related
reasoning tasks. We also include an analysis of curriculum learning and
reinforcement finetuning strategies, along with a broader discussion on
multimodal reasoning. Skywork-R1V3 represents a significant leap in multimodal
reasoning, showcasing RL as a powerful engine for advancing open-source VLM
capabilities. | 2025-07-08T16:47:16Z | null | null | null | null | null | null | null | null | null | null |
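The checkpoint-selection indicator above is a per-token Shannon entropy averaged over reasoning-critical positions. How the paper flags a token as "critical" is not stated in the abstract, so the mask below is assumed to be given externally:

```python
import math

def token_entropy(probs):
    """Shannon entropy of one next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def critical_token_entropy(step_probs, critical_mask):
    """Average entropy over positions flagged as critical reasoning
    tokens (the mask itself is an assumed input here)."""
    vals = [token_entropy(p) for p, m in zip(step_probs, critical_mask) if m]
    return sum(vals) / len(vals)

uniform = [0.25] * 4                 # maximally uncertain step, H = ln 4
peaked  = [0.97, 0.01, 0.01, 0.01]   # confident step, much lower H
```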
2507.06181 | CriticLean: Critic-Guided Reinforcement Learning for Mathematical
Formalization | ['Zhongyuan Peng', 'Yifan Yao', 'Kaijing Ma', 'Shuyue Guo', 'Yizhe Li', 'Yichi Zhang', 'Chenchen Zhang', 'Yifan Zhang', 'Zhouliang Yu', 'Luming Li', 'Minghao Liu', 'Yihang Xia', 'Jiawei Shen', 'Yuchen Wu', 'Yixin Cao', 'Zhaoxiang Zhang', 'Wenhao Huang', 'Jiaheng Liu', 'Ge Zhang'] | ['cs.CL'] | Translating natural language mathematical statements into formal, executable
code is a fundamental challenge in automated theorem proving. While prior work
has focused on generation and compilation success, little attention has been
paid to the critic phase - the evaluation of whether generated formalizations
truly capture the semantic intent of the original problem. In this paper, we
introduce CriticLean, a novel critic-guided reinforcement learning framework
that elevates the role of the critic from a passive validator to an active
learning component. Specifically, first, we propose the CriticLeanGPT, trained
via supervised fine-tuning and reinforcement learning, to rigorously assess the
semantic fidelity of Lean 4 formalizations. Then, we introduce CriticLeanBench,
a benchmark designed to measure models' ability to distinguish semantically
correct from incorrect formalizations, and demonstrate that our trained
CriticLeanGPT models can significantly outperform strong open- and
closed-source baselines. Building on the CriticLean framework, we construct
FineLeanCorpus, a dataset comprising over 285K problems that exhibits rich
domain diversity, broad difficulty coverage, and high correctness based on
human evaluation. Overall, our findings highlight that optimizing the critic
phase is essential for producing reliable formalizations, and we hope our
CriticLean will provide valuable insights for future advances in formal
mathematical reasoning. | 2025-07-08T17:03:39Z | null | null | null | null | null | null | null | null | null | null |
2507.06230 | Feed-Forward SceneDINO for Unsupervised Semantic Scene Completion | ['Aleksandar Jevtić', 'Christoph Reich', 'Felix Wimbauer', 'Oliver Hahn', 'Christian Rupprecht', 'Stefan Roth', 'Daniel Cremers'] | ['cs.CV'] | Semantic scene completion (SSC) aims to infer both the 3D geometry and
semantics of a scene from single images. In contrast to prior work on SSC that
heavily relies on expensive ground-truth annotations, we approach SSC in an
unsupervised setting. Our novel method, SceneDINO, adapts techniques from
self-supervised representation learning and 2D unsupervised scene understanding
to SSC. Our training exclusively utilizes multi-view consistency
self-supervision without any form of semantic or geometric ground truth. Given
a single input image, SceneDINO infers the 3D geometry and expressive 3D DINO
features in a feed-forward manner. Through a novel 3D feature distillation
approach, we obtain unsupervised 3D semantics. In both 3D and 2D unsupervised
scene understanding, SceneDINO reaches state-of-the-art segmentation accuracy.
Linear probing our 3D features matches the segmentation accuracy of a current
supervised SSC approach. Additionally, we showcase the domain generalization
and multi-view consistency of SceneDINO, taking the first steps towards a
strong foundation for single image 3D scene understanding. | 2025-07-08T17:59:50Z | To appear at ICCV 2025. Christoph Reich and Aleksandar Jevtić -
both authors contributed equally. Code:
https://github.com/tum-vision/scenedino Project page:
https://visinf.github.io/scenedino | null | null | null | null | null | null | null | null | null |
2507.06448 | Perception-Aware Policy Optimization for Multimodal Reasoning | ['Zhenhailong Wang', 'Xuehang Guo', 'Sofia Stoica', 'Haiyang Xu', 'Hongru Wang', 'Hyeonjeong Ha', 'Xiusi Chen', 'Yangyi Chen', 'Ming Yan', 'Fei Huang', 'Heng Ji'] | ['cs.CL'] | Reinforcement Learning with Verifiable Rewards (RLVR) has proven to be a
highly effective strategy for endowing Large Language Models (LLMs) with robust
multi-step reasoning abilities. However, its design and optimizations remain
tailored to purely textual domains, resulting in suboptimal performance when
applied to multimodal reasoning tasks. In particular, we observe that a major
source of error in current multimodal reasoning lies in the perception of
visual inputs. To address this bottleneck, we propose Perception-Aware Policy
Optimization (PAPO), a simple yet effective extension of GRPO that encourages
the model to learn to perceive while learning to reason, entirely from internal
supervision signals. Notably, PAPO does not rely on additional data curation,
external reward models, or proprietary models. Specifically, we introduce the
Implicit Perception Loss in the form of a KL divergence term to the GRPO
objective, which, despite its simplicity, yields significant overall
improvements (4.4%) on diverse multimodal benchmarks. The improvements are more
pronounced, approaching 8.0%, on tasks with high vision dependency. We also
observe a substantial reduction (30.5%) in perception errors, indicating
improved perceptual capabilities with PAPO. We conduct comprehensive analysis
of PAPO and identify a unique loss hacking issue, which we rigorously analyze
and mitigate through a Double Entropy Loss. Overall, our work introduces a
deeper integration of perception-aware supervision into RLVR learning
objectives and lays the groundwork for a new RL framework that encourages
visually grounded reasoning. Project page: https://mikewangwzhl.github.io/PAPO. | 2025-07-08T23:22:34Z | null | null | null | null | null | null | null | null | null | null |
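The abstract says only that the Implicit Perception Loss is a KL term added to the GRPO objective. One plausible reading - stated here as an assumption, not the paper's specification - is to reward divergence between the policy's token distribution given the full image and given a degraded image, so the model is pushed to actually use the visual input:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) between two next-token distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def papo_style_objective(grpo_loss, p_full, p_degraded, coef=0.1):
    """Hedged sketch: subtract a perception term from a GRPO loss so
    that minimizing the total encourages the distribution with the
    full image (p_full) to differ from the one with a degraded image
    (p_degraded). The pairing of full vs. degraded inputs and the
    coefficient are illustrative assumptions."""
    return grpo_loss - coef * kl_divergence(p_full, p_degraded)
```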
2507.06607 | Decoder-Hybrid-Decoder Architecture for Efficient Reasoning with Long
Generation | ['Liliang Ren', 'Congcong Chen', 'Haoran Xu', 'Young Jin Kim', 'Adam Atkinson', 'Zheng Zhan', 'Jiankai Sun', 'Baolin Peng', 'Liyuan Liu', 'Shuohang Wang', 'Hao Cheng', 'Jianfeng Gao', 'Weizhu Chen', 'Yelong Shen'] | ['cs.CL', 'cs.LG'] | Recent advances in language modeling have demonstrated the effectiveness of
State Space Models (SSMs) for efficient sequence modeling. While hybrid
architectures such as Samba and the decoder-decoder architecture, YOCO, have
shown promising performance gains over Transformers, prior works have not
investigated the efficiency potential of representation sharing between SSM
layers. In this paper, we introduce the Gated Memory Unit (GMU), a simple yet
effective mechanism for efficient memory sharing across layers. We apply it to
create SambaY, a decoder-hybrid-decoder architecture that incorporates GMUs in
the cross-decoder to share memory readout states from a Samba-based
self-decoder. SambaY significantly enhances decoding efficiency, preserves
linear pre-filling time complexity, and boosts long-context performance, all
while eliminating the need for explicit positional encoding. Through extensive
scaling experiments, we demonstrate that our model exhibits a significantly
lower irreducible loss compared to a strong YOCO baseline, indicating superior
performance scalability under large-scale compute regimes. Our largest model
enhanced with Differential Attention, Phi4-mini-Flash-Reasoning, achieves
significantly better performance than Phi4-mini-Reasoning on reasoning tasks
such as Math500, AIME24/25, and GPQA Diamond without any reinforcement
learning, while delivering up to 10x higher decoding throughput on 2K-length
prompts with 32K generation length under the vLLM inference framework. We
release our training codebase on open-source data at
https://github.com/microsoft/ArchScale. | 2025-07-09T07:27:00Z | null | null | null | null | null | null | null | null | null | null |
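The Gated Memory Unit's exact parameterization is not given in the abstract; the shape of the idea - reuse a memory readout from an earlier layer by elementwise-gating the current hidden state - can be sketched as below, where the sigmoid gate and single weight vector are illustrative assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_memory_unit(x, memory, w_gate):
    """Hedged sketch of a GMU: the cross-decoder layer's hidden state x
    is modulated elementwise by a gate computed from a shared memory
    readout state produced earlier in the network, instead of
    recomputing that memory at every layer."""
    gate = [sigmoid(w * m) for w, m in zip(w_gate, memory)]
    return [xi * g for xi, g in zip(x, gate)]
```

The efficiency claim follows from the sharing: the memory state is computed once in the self-decoder and then cheaply gated into later layers.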
2507.07104 | Vision-Language-Vision Auto-Encoder: Scalable Knowledge Distillation
from Diffusion Models | ['Tiezheng Zhang', 'Yitong Li', 'Yu-cheng Chou', 'Jieneng Chen', 'Alan Yuille', 'Chen Wei', 'Junfei Xiao'] | ['cs.CV'] | Building state-of-the-art Vision-Language Models (VLMs) with strong
captioning capabilities typically necessitates training on billions of
high-quality image-text pairs, requiring millions of GPU hours. This paper
introduces the Vision-Language-Vision (VLV) auto-encoder framework, which
strategically leverages key pretrained components: a vision encoder, the
decoder of a Text-to-Image (T2I) diffusion model, and subsequently, a Large
Language Model (LLM). Specifically, we establish an information bottleneck by
regularizing the language representation space, achieved through freezing the
pretrained T2I diffusion decoder. Our VLV pipeline effectively distills
knowledge from the text-conditioned diffusion model using continuous
embeddings, demonstrating comprehensive semantic understanding via high-quality
reconstructions. Furthermore, by fine-tuning a pretrained LLM to decode the
intermediate language representations into detailed descriptions, we construct
a state-of-the-art (SoTA) captioner comparable to leading models like GPT-4o
and Gemini 2.0 Flash. Our method demonstrates exceptional cost-efficiency and
significantly reduces data requirements; by primarily utilizing single-modal
images for training and maximizing the utility of existing pretrained models
(image encoder, T2I diffusion model, and LLM), it circumvents the need for
massive paired image-text datasets, keeping the total training expenditure
under $1,000 USD. | 2025-07-09T17:59:04Z | Project Page: https://lambert-x.github.io/Vision-Language-Vision/ | null | null | null | null | null | null | null | null | null |
2507.07129 | Growing Transformers: Modular Composition and Layer-wise Expansion on a
Frozen Substrate | ['A. Bochkov'] | ['cs.LG', 'cs.CL'] | The prevailing paradigm for scaling large language models (LLMs) involves
monolithic, end-to-end training, a resource-intensive process that lacks
flexibility. This paper explores an alternative, constructive approach to model
development, built upon the foundation of non-trainable, deterministic input
embeddings. In prior work [1], we established that high-level semantic reasoning can
emerge in Transformers using frozen embeddings derived from the visual
structure of Unicode glyphs. Here, we demonstrate that this fixed
representational substrate acts as a universal "docking port," enabling two
powerful and efficient scaling paradigms: seamless modular composition and
progressive layer-wise growth.
First, we show that specialist models trained on disparate datasets (e.g.,
Russian and Chinese text) can be merged into a single, more capable
Mixture-of-Experts (MoE) model, post-training, with zero architectural
modification. This is achieved by simply averaging their output logits. The
resulting MoE model exhibits immediate performance improvements on reasoning
benchmarks like MMLU, surpassing its constituent experts without catastrophic
forgetting. Second, we introduce a layer-wise constructive training
methodology, where a deep Transformer is "grown" by progressively stacking and
training one layer at a time. This method demonstrates stable convergence and a
clear correlation between model depth and the emergence of complex reasoning
abilities, such as those required for SQuAD.
Our findings suggest a paradigm shift from monolithic optimization towards a
more biological or constructive model of AI development, where complexity is
built incrementally and modules can be composed freely. This opens new avenues
for resource-efficient scaling, continual learning, and a more democratized
ecosystem for building powerful AI systems. We release all code and models to
facilitate further research. | 2025-07-08T20:01:15Z | null | null | null | null | null | null | null | null | null | null |
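The merging step described above - combining specialists post-training with zero architectural modification by averaging their output logits - is simple enough to state directly. The toy three-token vocabulary and the two "expert" logit vectors are of course made up:

```python
def merge_logits(*expert_logits):
    """Merge specialist models post-hoc by averaging their output
    logits over a shared vocabulary (possible here because all
    experts share the same frozen embedding substrate)."""
    n = len(expert_logits)
    return [sum(col) / n for col in zip(*expert_logits)]

# Two hypothetical specialists scoring the same 3-token vocabulary.
russian_expert = [2.0, 0.0, -1.0]
chinese_expert = [0.0, 2.0, -1.0]
merged = merge_logits(russian_expert, chinese_expert)
```

Because nothing is retrained, neither expert's weights change, which is why the abstract can claim the merge avoids catastrophic forgetting.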
2507.07186 | Planted in Pretraining, Swayed by Finetuning: A Case Study on the
Origins of Cognitive Biases in LLMs | ['Itay Itzhak', 'Yonatan Belinkov', 'Gabriel Stanovsky'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Large language models (LLMs) exhibit cognitive biases -- systematic
tendencies of irrational decision-making, similar to those seen in humans.
Prior work has found that these biases vary across models and can be amplified
by instruction tuning. However, it remains unclear if these differences in
biases stem from pretraining, finetuning, or even random noise due to training
stochasticity. We propose a two-step causal experimental approach to
disentangle these factors. First, we finetune models multiple times using
different random seeds to study how training randomness affects over $30$
cognitive biases. Second, we introduce \emph{cross-tuning} -- swapping
instruction datasets between models to isolate bias sources. This swap uses
datasets that led to different bias patterns, directly testing whether biases
are dataset-dependent. Our findings reveal that while training randomness
introduces some variability, biases are mainly shaped by pretraining: models
with the same pretrained backbone exhibit more similar bias patterns than those
sharing only finetuning data. These insights suggest that understanding biases
in finetuned models requires considering their pretraining origins beyond
finetuning effects. This perspective can guide future efforts to develop
principled strategies for evaluating and mitigating bias in LLMs. | 2025-07-09T18:01:14Z | CoLM 2025 | null | null | null | null | null | null | null | null | null |
2507.07230 | Colors See Colors Ignore: Clothes Changing ReID with Color
Disentanglement | ['Priyank Pathak', 'Yogesh S. Rawat'] | ['cs.CV'] | Clothes-Changing Re-Identification (CC-ReID) aims to recognize individuals
across different locations and times, irrespective of clothing. Existing
methods often rely on additional models or annotations to learn robust,
clothing-invariant features, making them resource-intensive. In contrast, we
explore the use of color - specifically foreground and background colors - as a
lightweight, annotation-free proxy for mitigating appearance bias in ReID
models. We propose Colors See, Colors Ignore (CSCI), an RGB-only method that
leverages color information directly from raw images or video frames. CSCI
efficiently captures color-related appearance bias ('Color See') while
disentangling it from identity-relevant ReID features ('Color Ignore'). To
achieve this, we introduce S2A self-attention, a novel self-attention to
prevent information leak between color and identity cues within the feature
space. Our analysis shows a strong correspondence between learned color
embeddings and clothing attributes, validating color as an effective proxy when
explicit clothing labels are unavailable. We demonstrate the effectiveness of
CSCI on both image and video ReID with extensive experiments on four CC-ReID
datasets. We improve the baseline's Top-1 accuracy by 2.9% on LTCC and 5.0% on PRCC
for image-based ReID, and by 1.0% on CCVID and 2.5% on MeVID for video-based ReID
without relying on additional supervision. Our results highlight the potential
of color as a cost-effective solution for addressing appearance bias in
CC-ReID. Github: https://github.com/ppriyank/ICCV-CSCI-Person-ReID. | 2025-07-09T19:05:46Z | ICCV'25 paper | null | null | null | null | null | null | null | null | null |
2507.07248 | Medical Red Teaming Protocol of Language Models: On the Importance of
User Perspectives in Healthcare Settings | ['Jean-Philippe Corbeil', 'Minseon Kim', 'Alessandro Sordoni', 'Francois Beaulieu', 'Paul Vozila'] | ['cs.CL'] | As the performance of large language models (LLMs) continues to advance,
their adoption is expanding across a wide range of domains, including the
medical field. The integration of LLMs into medical applications raises
critical safety concerns, particularly due to their use by users with diverse
roles, e.g. patients and clinicians, and the potential for models' outputs to
directly affect human health. Despite the domain-specific capabilities of
medical LLMs, prior safety evaluations have largely focused only on general
safety benchmarks. In this paper, we introduce a safety evaluation protocol
tailored to the medical domain from both patient and clinician user
perspectives, alongside general safety assessments, and quantitatively analyze
the safety of medical LLMs. We bridge a gap in the literature by building the
PatientSafetyBench containing 466 samples over 5 critical categories to measure
safety from the perspective of the patient. We apply our red-teaming protocols
on the MediPhi model collection as a case study. To our knowledge, this is the
first work to define safety evaluation criteria for medical LLMs through
targeted red-teaming taking three different points of view - patient,
clinician, and general user - establishing a foundation for safer deployment in
medical domains. | 2025-07-09T19:38:58Z | null | null | null | null | null | null | null | null | null | null |
2507.07439 | Towards Interpretable Time Series Foundation Models | ['Matthieu Boileau', 'Philippe Helluy', 'Jeremy Pawlus', 'Svitlana Vyetrenko'] | ['cs.CL', 'cs.AI'] | In this paper, we investigate the distillation of time series reasoning
capabilities into small, instruction-tuned language models as a step toward
building interpretable time series foundation models. Leveraging a synthetic
dataset of mean-reverting time series with systematically varied trends and
noise levels, we generate natural language annotations using a large multimodal
model and use these to supervise the fine-tuning of compact Qwen models. We
introduce evaluation metrics that assess the quality of the distilled reasoning
- focusing on trend direction, noise intensity, and extremum localization - and
show that the post-trained models acquire meaningful interpretive capabilities.
Our results highlight the feasibility of compressing time series understanding
into lightweight, language-capable models suitable for on-device or
privacy-sensitive deployment. This work contributes a concrete foundation
toward developing small, interpretable models that explain temporal patterns in
natural language. | 2025-07-10T05:29:34Z | International Conference on Machine Learning (ICML) 2025 Workshop on
Foundation Models for Structured Data | null | null | null | null | null | null | null | null | null |
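A synthetic mean-reverting series with a controllable trend and noise level, as used for the dataset above, can be generated with a discrete Ornstein-Uhlenbeck-style update. The exact generator and parameter ranges used by the authors are not given in the abstract, so this recipe is an assumption:

```python
import random

def mean_reverting_series(n, mu=0.0, kappa=0.3, trend=0.0,
                          noise=0.1, x0=5.0, seed=0):
    """Discrete mean-reverting process: each step pulls x toward a
    (possibly drifting) long-run level and adds Gaussian noise.
    kappa = reversion speed, trend = linear drift of the level,
    noise = noise intensity (all illustrative knobs)."""
    rng = random.Random(seed)
    x, out = x0, []
    for t in range(n):
        target = mu + trend * t
        x = x + kappa * (target - x) + noise * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

# Noise-free case: the series decays geometrically toward mu.
series = mean_reverting_series(200, mu=0.0, noise=0.0)
```

Varying `trend` and `noise` systematically would yield the kind of labeled grid of series the abstract describes annotating with a large multimodal model.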
2507.07562 | The Synergy Dilemma of Long-CoT SFT and RL: Investigating Post-Training
Techniques for Reasoning VLMs | ['Jierun Chen', 'Tiezheng Yu', 'Haoli Bai', 'Lewei Yao', 'Jiannan Wu', 'Kaican Li', 'Fei Mi', 'Chaofan Tao', 'Lei Zhu', 'Manyi Zhang', 'Xiaohui Li', 'Lu Hou', 'Lifeng Shang', 'Qun Liu'] | ['cs.CL'] | Large vision-language models (VLMs) increasingly adopt post-training
techniques such as long chain-of-thought (CoT) supervised fine-tuning (SFT) and
reinforcement learning (RL) to elicit sophisticated reasoning. While these
methods exhibit synergy in language-only models, their joint effectiveness in
VLMs remains uncertain. We present a systematic investigation into the distinct
roles and interplay of long-CoT SFT and RL across multiple multimodal reasoning
benchmarks. We find that SFT improves performance on difficult questions by
in-depth, structured reasoning, but introduces verbosity and degrades
performance on simpler ones. In contrast, RL promotes generalization and
brevity, yielding consistent improvements across all difficulty levels, though
the improvements on the hardest questions are less prominent compared to SFT.
Surprisingly, combining them through two-stage, interleaved, or progressive
training strategies, as well as data mixing and model merging, all fails to
produce additive benefits, instead leading to trade-offs in accuracy, reasoning
style, and response length. This ``synergy dilemma'' highlights the need for
more seamless and adaptive approaches to unlock the full potential of combined
post-training techniques for reasoning VLMs. | 2025-07-10T09:05:49Z | null | null | null | null | null | null | null | null | null | null |
2507.07831 | Rethinking Query-based Transformer for Continual Image Segmentation | ['Yuchen Zhu', 'Cheng Shi', 'Dingyou Wang', 'Jiajin Tang', 'Zhengxuan Wei', 'Yu Wu', 'Guanbin Li', 'Sibei Yang'] | ['cs.CV'] | Class-incremental/Continual image segmentation (CIS) aims to train an image
segmenter in stages, where the set of available categories differs at each
stage. To leverage the built-in objectness of query-based transformers, which
mitigates catastrophic forgetting of mask proposals, current methods often
decouple mask generation from the continual learning process. This study,
however, identifies two key issues with decoupled frameworks: loss of
plasticity and heavy reliance on input data order. To address these, we conduct
an in-depth investigation of the built-in objectness and find that highly
aggregated image features provide a shortcut for queries to generate masks
through simple feature alignment. Based on this, we propose SimCIS, a simple
yet powerful baseline for CIS. Its core idea is to directly select image
features for query assignment, ensuring "perfect alignment" to preserve
objectness, while simultaneously allowing queries to select new classes to
promote plasticity. To further combat catastrophic forgetting of categories, we
introduce cross-stage consistency in selection and an innovative "visual
query"-based replay mechanism. Experiments demonstrate that SimCIS consistently
outperforms state-of-the-art methods across various segmentation tasks,
settings, splits, and input data orders. All models and codes will be made
publicly available at https://github.com/SooLab/SimCIS. | 2025-07-10T15:03:10Z | This work is accepted by CVPR 2025 | null | null | null | null | null | null | null | null | null |
2507.07999 | Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and
Methodology | ['Haochen Wang', 'Xiangtai Li', 'Zilong Huang', 'Anran Wang', 'Jiacong Wang', 'Tao Zhang', 'Jiani Zheng', 'Sule Bai', 'Zijian Kang', 'Jiashi Feng', 'Zhuochen Wang', 'Zhaoxiang Zhang'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Models like OpenAI-o3 pioneer visual grounded reasoning by dynamically
referencing visual regions, just like human "thinking with images". However, no
benchmark exists to evaluate these capabilities holistically. To bridge this
gap, we propose TreeBench (Traceable Evidence Evaluation Benchmark), a
diagnostic benchmark built on three principles: (1) focused visual perception
of subtle targets in complex scenes, (2) traceable evidence via bounding box
evaluation, and (3) second-order reasoning to test object interactions and
spatial hierarchies beyond simple object localization. Prioritizing images with
dense objects, we initially sample 1K high-quality images from SA-1B, and
incorporate eight LMM experts to manually annotate questions, candidate
options, and answers for each image. After three stages of quality control,
TreeBench consists of 405 challenging visual question-answering pairs; even the
most advanced models struggle with this benchmark, where none of them reaches 60%
accuracy, e.g., OpenAI-o3 scores only 54.87. Furthermore, we introduce TreeVGR
(Traceable Evidence Enhanced Visual Grounded Reasoning), a training paradigm to
supervise localization and reasoning jointly with reinforcement learning,
enabling accurate localizations and explainable reasoning pathways. Initialized
from Qwen2.5-VL-7B, it improves V* Bench (+16.8), MME-RealWorld (+12.6), and
TreeBench (+13.4), proving traceability is key to advancing vision-grounded
reasoning. The code is available at https://github.com/Haochen-Wang409/TreeVGR. | 2025-07-10T17:59:58Z | null | null | null | null | null | null | null | null | null | null |