Dataset columns (dtype and value/length ranges as reported by the viewer):

arxiv_id            float64          values 1.5k – 2.51k
title               string           lengths 9 – 178
authors             string           lengths 2 – 22.8k
categories          string           lengths 4 – 146
summary             string           lengths 103 – 1.92k
published           string (date)    2015-02-06 10:44:00 – 2025-07-10 17:59:58
comments            string           lengths 2 – 417
journal_ref         string           321 distinct values
doi                 string           398 distinct values
ss_title            string           lengths 8 – 159
ss_authors          string           lengths 11 – 8.38k
ss_year             float64          values 2.02k – 2.03k
ss_venue            string           281 distinct values
ss_citationCount    float64          values 0 – 134k
ss_referenceCount   float64          values 0 – 429
ss_fieldsOfStudy    string           47 distinct values
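Because arxiv_id and ss_year are stored as float64, the viewer renders them with thousands separators and drops trailing zeros (e.g. 2,501.1514 for arXiv ID 2501.15140, or 2,025 for the year 2025). A minimal sketch of undoing both display artifacts; the column semantics follow the schema above, but the helper names are our own:

```python
def arxiv_id_from_float(value: float) -> str:
    """Render a float64-stored arXiv ID back to its canonical string form.

    New-style arXiv IDs (2015 onwards) always have five digits after the
    dot, so fixed-point formatting with five decimals restores any
    trailing zeros the float representation dropped.
    """
    return f"{value:.5f}"


def year_from_float(value: float) -> int:
    # ss_year is a plain calendar year stored as float64 (e.g. 2025.0).
    return int(value)


# The two artifacts being undone:
print(arxiv_id_from_float(2501.1514))  # trailing zero restored
print(year_from_float(2025.0))
```

The same formatting trick applies to any float column whose values are really fixed-width identifiers, which is why the records below show IDs in their canonical five-decimal form.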
2501.14677
MatAnyone: Stable Video Matting with Consistent Memory Propagation
['Peiqing Yang', 'Shangchen Zhou', 'Jixin Zhao', 'Qingyi Tao', 'Chen Change Loy']
['cs.CV']
Auxiliary-free human video matting methods, which rely solely on input frames, often struggle with complex or ambiguous backgrounds. To address this, we propose MatAnyone, a robust framework tailored for target-assigned video matting. Specifically, building on a memory-based paradigm, we introduce a consistent memory p...
2025-01-24T17:56:24Z
Project page: https://pq-yang.github.io/projects/MatAnyone
null
null
null
null
null
null
null
null
null
2501.14693
Rethinking Table Instruction Tuning
['Naihao Deng', 'Rada Mihalcea']
['cs.CL', 'cs.AI']
Recent advances in table understanding have focused on instruction-tuning large language models (LLMs) for table-related tasks. However, existing research has overlooked the impact of hyperparameter choices, and also lacks a comprehensive evaluation of the out-of-domain table understanding ability and the general capab...
2025-01-24T18:06:07Z
Accepted to ACL 2025 Findings. Updates: 07/2025: We release the TAMA-QWen2.5 and TAMA-QWen3 models. 06/2025: We release our project page: https://lit.eecs.umich.edu/TAMA/, code: https://github.com/MichiganNLP/TAMA, huggingface models: https://huggingface.co/collections/MichiganNLP/tama-684eeb3e7f262362856eccd1,...
null
null
null
null
null
null
null
null
null
2501.14818
Eagle 2: Building Post-Training Data Strategies from Scratch for Frontier Vision-Language Models
['Zhiqi Li', 'Guo Chen', 'Shilong Liu', 'Shihao Wang', 'Vibashan VS', 'Yishen Ji', 'Shiyi Lan', 'Hao Zhang', 'Yilin Zhao', 'Subhashree Radhakrishnan', 'Nadine Chang', 'Karan Sapra', 'Amala Sanjay Deshmukh', 'Tuomas Rintamaki', 'Matthieu Le', 'Ilia Karmanov', 'Lukas Voegtle', 'Philipp Fischer', 'De-An Huang', 'Timo Roma...
['cs.CV', 'cs.AI', 'cs.LG']
Recently, promising progress has been made by open-source vision-language models (VLMs) in bringing their capabilities closer to those of proprietary frontier models. However, most open-source models only publish their final model weights, leaving the critical details of data strategies and implementation largely opaqu...
2025-01-20T18:40:47Z
null
null
null
null
null
null
null
null
null
null
2501.15140
Analyzing and Boosting the Power of Fine-Grained Visual Recognition for Multi-modal Large Language Models
['Hulingxiao He', 'Geng Li', 'Zijun Geng', 'Jinglin Xu', 'Yuxin Peng']
['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG']
Multi-modal large language models (MLLMs) have shown remarkable abilities in various visual understanding tasks. However, MLLMs still struggle with fine-grained visual recognition (FGVR), which aims to identify subordinate-level categories from images. This can negatively impact more advanced capabilities of MLLMs, suc...
2025-01-25T08:52:43Z
Published as a conference paper at ICLR 2025. The model is available at https://huggingface.co/StevenHH2000/Finedefics
null
null
Analyzing and Boosting the Power of Fine-Grained Visual Recognition for Multi-modal Large Language Models
['Hulingxiao He', 'Geng Li', 'Zijun Geng', 'Jinglin Xu', 'Yuxin Peng']
2025
International Conference on Learning Representations
7
53
['Computer Science']
2501.15187
Uni-Sign: Toward Unified Sign Language Understanding at Scale
['Zecheng Li', 'Wengang Zhou', 'Weichao Zhao', 'Kepeng Wu', 'Hezhen Hu', 'Houqiang Li']
['cs.CV']
Sign language pre-training has gained increasing attention for its ability to enhance performance across various sign language understanding (SLU) tasks. However, existing methods often suffer from a gap between pre-training and fine-tuning, leading to suboptimal results. To address this, we propose Uni-Sign, a unified...
2025-01-25T11:51:23Z
Accepted by ICLR 2025
null
null
Uni-Sign: Toward Unified Sign Language Understanding at Scale
['Zecheng Li', 'Wen-gang Zhou', 'Weichao Zhao', 'Kepeng Wu', 'Hezhen Hu', 'Houqiang Li']
2025
International Conference on Learning Representations
6
72
['Computer Science']
2501.15368
Baichuan-Omni-1.5 Technical Report
['Yadong Li', 'Jun Liu', 'Tao Zhang', 'Tao Zhang', 'Song Chen', 'Tianpeng Li', 'Zehuan Li', 'Lijun Liu', 'Lingfeng Ming', 'Guosheng Dong', 'Da Pan', 'Chong Li', 'Yuanbo Fang', 'Dongdong Kuang', 'Mingrui Wang', 'Chenglin Zhu', 'Youwei Zhang', 'Hongyu Guo', 'Fengyu Zhang', 'Yuran Wang', 'Bowen Ding', 'Wei Song', 'Xu Li',...
['cs.CL', 'cs.SD', 'eess.AS']
We introduce Baichuan-Omni-1.5, an omni-modal model that not only has omni-modal understanding capabilities but also provides end-to-end audio generation capabilities. To achieve fluent and high-quality interaction across modalities without compromising the capabilities of any modality, we prioritized optimizing three ...
2025-01-26T02:19:03Z
null
null
null
Baichuan-Omni-1.5 Technical Report
['Yadong Li', 'Jun Liu', 'Tao Zhang', 'Song Chen', 'Tianpeng Li', 'Zehuan Li', 'Lijun Liu', 'Lingfeng Ming', 'Guosheng Dong', 'Dawei Pan', 'Chong Li', 'Yuanbo Fang', 'Dong-Ling Kuang', 'Mingrui Wang', 'Chenglin Zhu', 'Youwei Zhang', 'Hongyu Guo', 'Fengyu Zhang', 'Yuran Wang', 'Bowen Ding', 'Wei Song', 'Xu Li', 'Yuqiu H...
2025
arXiv.org
23
184
['Computer Science', 'Engineering']
2501.15369
iFormer: Integrating ConvNet and Transformer for Mobile Application
['Chuanyang Zheng']
['cs.CV', 'cs.AI', 'cs.LG']
We present a new family of mobile hybrid vision networks, called iFormer, with a focus on optimizing latency and accuracy on mobile applications. iFormer effectively integrates the fast local representation capacity of convolution with the efficient global modeling ability of self-attention. The local interactions are ...
2025-01-26T02:34:58Z
Accepted to ICLR 2025. Code: https://github.com/ChuanyangZheng/iFormer
null
null
iFormer: Integrating ConvNet and Transformer for Mobile Application
['Chuanyang Zheng']
2025
International Conference on Learning Representations
0
82
['Computer Science']
2501.15383
Qwen2.5-1M Technical Report
['An Yang', 'Bowen Yu', 'Chengyuan Li', 'Dayiheng Liu', 'Fei Huang', 'Haoyan Huang', 'Jiandong Jiang', 'Jianhong Tu', 'Jianwei Zhang', 'Jingren Zhou', 'Junyang Lin', 'Kai Dang', 'Kexin Yang', 'Le Yu', 'Mei Li', 'Minmin Sun', 'Qin Zhu', 'Rui Men', 'Tao He', 'Weijia Xu', 'Wenbiao Yin', 'Wenyuan Yu', 'Xiafei Qiu', 'Xingzh...
['cs.CL']
We introduce Qwen2.5-1M, a series of models that extend the context length to 1 million tokens. Compared to the previous 128K version, the Qwen2.5-1M series have significantly enhanced long-context capabilities through long-context pre-training and post-training. Key techniques such as long data synthesis, progressive ...
2025-01-26T03:47:25Z
null
null
null
Qwen2.5-1M Technical Report
['An Yang', 'Bowen Yu', 'Chengyuan Li', 'Dayiheng Liu', 'Fei Huang', 'Haoyan Huang', 'Jiandong Jiang', 'Jianhong Tu', 'Jianwei Zhang', 'Jingren Zhou', 'Junyang Lin', 'Kai Dang', 'Kexin Yang', 'Le Yu', 'Mei Li', 'Minmin Sun', 'Qin Zhu', 'Rui Men', 'Tao He', 'Weijia Xu', 'Wenbiao Yin', 'Wenyuan Yu', 'Xiafei Qiu', 'Xingzh...
2025
arXiv.org
29
46
['Computer Science']
2501.15415
OCSU: Optical Chemical Structure Understanding for Molecule-centric Scientific Discovery
['Siqi Fan', 'Yuguang Xie', 'Bowen Cai', 'Ailin Xie', 'Gaochao Liu', 'Mu Qiao', 'Jie Xing', 'Zaiqing Nie']
['cs.CV']
Understanding the chemical structure from a graphical representation of a molecule is a challenging image caption task that would greatly benefit molecule-centric scientific discovery. Variations in molecular images and caption subtasks pose a significant challenge in both image representation learning and task modelin...
2025-01-26T06:14:29Z
null
null
null
null
null
null
null
null
null
null
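The `published` values throughout these records are ISO-8601 timestamps with a trailing `Z` (UTC). A minimal standard-library parsing sketch; the `Z` suffix is rewritten to an explicit offset because `datetime.fromisoformat` only accepts `Z` directly from Python 3.11 on:

```python
from datetime import datetime, timezone


def parse_published(stamp: str) -> datetime:
    # Rewrite the "Z" suffix to "+00:00" so fromisoformat() accepts it
    # on Python versions older than 3.11 as well.
    return datetime.fromisoformat(stamp.replace("Z", "+00:00"))


dt = parse_published("2025-01-26T06:14:29Z")
print(dt.year, dt.tzinfo == timezone.utc)
```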
2501.15442
Overview of the Amphion Toolkit (v0.2)
['Jiaqi Li', 'Xueyao Zhang', 'Yuancheng Wang', 'Haorui He', 'Chaoren Wang', 'Li Wang', 'Huan Liao', 'Junyi Ao', 'Zeyu Xie', 'Yiqiao Huang', 'Junan Zhang', 'Zhizheng Wu']
['cs.SD', 'cs.AI', 'eess.AS']
Amphion is an open-source toolkit for Audio, Music, and Speech Generation, designed to lower the entry barrier for junior researchers and engineers in these fields. It provides a versatile framework that supports a variety of generation tasks and models. In this report, we introduce Amphion v0.2, the second major relea...
2025-01-26T08:10:13Z
Github: https://github.com/open-mmlab/Amphion
null
null
null
null
null
null
null
null
null
2501.15513
TinyLLaVA-Video: Towards Smaller LMMs for Video Understanding with Group Resampler
['Xingjian Zhang', 'Xi Weng', 'Yihao Yue', 'Zhaoxin Fan', 'Wenjun Wu', 'Lei Huang']
['cs.CV']
Video behavior recognition and scene understanding are fundamental tasks in multimodal intelligence, serving as critical building blocks for numerous real-world applications. Though large multimodal models (LMMs) have achieved remarkable progress in video understanding, most existing open-source models rely on over 7B...
2025-01-26T13:10:12Z
code and training recipes are available at https://github.com/ZhangXJ199/TinyLLaVA-Video
null
null
TinyLLaVA-Video: Towards Smaller LMMs for Video Understanding with Group Resampler
['Xingjian Zhang', 'Xi Weng', 'Yihao Yue', 'Zhaoxin Fan', 'Wenjun Wu', 'Lei Huang']
2025
null
0
0
['Computer Science']
2501.15570
ARWKV: Pretrain is not what we need, an RNN-Attention-Based Language Model Born from Transformer
['Lin Yueyu', 'Li Zhiyuan', 'Peter Yue', 'Liu Xiao']
['cs.CL']
As is known, hybrid quadratic and subquadratic attention models in multi-head architectures have surpassed both Transformer and Linear RNN models, with these works primarily focusing on reducing KV complexity and improving efficiency. For further research on expressiveness, we introduce our series of models distilled ...
2025-01-26T15:56:56Z
null
null
null
null
null
null
null
null
null
null
2501.15579
An Explainable Biomedical Foundation Model via Large-Scale Concept-Enhanced Vision-Language Pre-training
['Yuxiang Nie', 'Sunan He', 'Yequan Bie', 'Yihui Wang', 'Zhixuan Chen', 'Shu Yang', 'Zhiyuan Cai', 'Hongmei Wang', 'Xi Wang', 'Luyang Luo', 'Mingxiang Wu', 'Xian Wu', 'Ronald Cheong Kin Chan', 'Yuk Ming Lau', 'Yefeng Zheng', 'Pranav Rajpurkar', 'Hao Chen']
['cs.CV', 'cs.CL']
The clinical adoption of artificial intelligence (AI) in medical imaging requires models that are both diagnostically accurate and interpretable to clinicians. While current multimodal biomedical foundation models prioritize performance, their black-box nature hinders explaining the decision-making process in clinicall...
2025-01-26T16:07:11Z
null
null
null
null
null
null
null
null
null
null
2501.15588
Tumor Detection, Segmentation and Classification Challenge on Automated 3D Breast Ultrasound: The TDSC-ABUS Challenge
['Gongning Luo', 'Mingwang Xu', 'Hongyu Chen', 'Xinjie Liang', 'Xing Tao', 'Dong Ni', 'Hyunsu Jeong', 'Chulhong Kim', 'Raphael Stock', 'Michael Baumgartner', 'Yannick Kirchhoff', 'Maximilian Rokuss', 'Klaus Maier-Hein', 'Zhikai Yang', 'Tianyu Fan', 'Nicolas Boutry', 'Dmitry Tereshchenko', 'Arthur Moine', 'Maximilien Ch...
['eess.IV', 'cs.CV']
Breast cancer is one of the most common causes of death among women worldwide. Early detection helps in reducing the number of deaths. Automated 3D Breast Ultrasound (ABUS) is a newer approach for breast screening, which has many advantages over handheld mammography such as safety, speed, and higher detection rate of b...
2025-01-26T16:30:30Z
null
null
null
null
null
null
null
null
null
null
2501.15830
SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Model
['Delin Qu', 'Haoming Song', 'Qizhi Chen', 'Yuanqi Yao', 'Xinyi Ye', 'Yan Ding', 'Zhigang Wang', 'JiaYuan Gu', 'Bin Zhao', 'Dong Wang', 'Xuelong Li']
['cs.RO', 'cs.AI']
In this paper, we claim that spatial understanding is the keypoint in robot manipulation, and propose SpatialVLA to explore effective spatial representations for the robot foundation model. Specifically, we introduce Ego3D Position Encoding to inject 3D information into the input observations of the visual-language-act...
2025-01-27T07:34:33Z
null
Robotics: Science and Systems, 2025
null
null
null
null
null
null
null
null
2501.16011
MEL: Legal Spanish Language Model
['David Betancur Sánchez', 'Nuria Aldama García', 'Álvaro Barbero Jiménez', 'Marta Guerrero Nieto', 'Patricia Marsà Morales', 'Nicolás Serrano Salas', 'Carlos García Hernán', 'Pablo Haya Coll', 'Elena Montiel Ponsoda', 'Pablo Calleja Ibáñez']
['cs.CL']
Legal texts, characterized by complex and specialized terminology, present a significant challenge for Language Models. Adding an underrepresented language, such as Spanish, to the mix makes it even more challenging. While pre-trained models like XLM-RoBERTa have shown capabilities in handling multilingual corpora, the...
2025-01-27T12:50:10Z
8 pages, 6 figures, 3 tables
null
null
MEL: Legal Spanish Language Model
['David Betancur Sánchez', 'Nuria Aldama-García', 'Álvaro Barbero Jiménez', 'Marta Guerrero Nieto', 'Patricia Marsa Morales', "Nicol'as Serrano Salas", "Carlos Garc'ia Hern'an", 'Pablo Haya Coll', 'Elena Montiel-Ponsoda', 'Pablo Calleja-Ibáñez']
2025
arXiv.org
0
16
['Computer Science']
2501.16207
From Informal to Formal -- Incorporating and Evaluating LLMs on Natural Language Requirements to Verifiable Formal Proofs
['Jialun Cao', 'Yaojie Lu', 'Meiziniu Li', 'Haoyang Ma', 'Haokun Li', 'Mengda He', 'Cheng Wen', 'Le Sun', 'Hongyu Zhang', 'Shengchao Qin', 'Shing-Chi Cheung', 'Cong Tian']
['cs.AI', 'cs.CL', 'cs.PL']
The research in AI-based formal mathematical reasoning has shown an unstoppable growth trend. These studies have excelled in mathematical competitions like IMO and have made significant progress. This paper focuses on formal verification, an immediate application scenario of formal reasoning, and breaks it down into su...
2025-01-27T17:00:56Z
20 pages
null
null
null
null
null
null
null
null
null
2501.16214
Provence: efficient and robust context pruning for retrieval-augmented generation
['Nadezhda Chirkova', 'Thibault Formal', 'Vassilina Nikoulina', 'Stéphane Clinchant']
['cs.CL', 'cs.IR']
Retrieval-augmented generation improves various aspects of large language models (LLMs) generation, but suffers from computational overhead caused by long contexts as well as the propagation of irrelevant retrieved information into generated responses. Context pruning deals with both aspects, by removing irrelevant par...
2025-01-27T17:06:56Z
Accepted to ICLR 2025
null
null
Provence: efficient and robust context pruning for retrieval-augmented generation
['Nadezhda Chirkova', 'Thibault Formal', 'Vassilina Nikoulina', 'S. Clinchant']
2025
International Conference on Learning Representations
1
0
['Computer Science']
2501.16239
Distilling foundation models for robust and efficient models in digital pathology
['Alexandre Filiot', 'Nicolas Dop', 'Oussama Tchita', 'Auriane Riou', 'Rémy Dubois', 'Thomas Peeters', 'Daria Valter', 'Marin Scalbert', 'Charlie Saillard', 'Geneviève Robin', 'Antoine Olivier']
['cs.CV', '68T45', 'I.4.9; J.3']
In recent years, the advent of foundation models (FM) for digital pathology has relied heavily on scaling the pre-training datasets and the model size, yielding large and powerful models. While it resulted in improving the performance on diverse downstream tasks, it also introduced increased computational cost and infe...
2025-01-27T17:35:39Z
Preprint
null
null
null
null
null
null
null
null
null
2501.16255
A foundation model for human-AI collaboration in medical literature mining
['Zifeng Wang', 'Lang Cao', 'Qiao Jin', 'Joey Chan', 'Nicholas Wan', 'Behdad Afzali', 'Hyun-Jin Cho', 'Chang-In Choi', 'Mehdi Emamverdi', 'Manjot K. Gill', 'Sun-Hyung Kim', 'Yijia Li', 'Yi Liu', 'Hanley Ong', 'Justin Rousseau', 'Irfan Sheikh', 'Jenny J. Wei', 'Ziyang Xu', 'Christopher M. Zallek', 'Kyungsang Kim', 'Yifa...
['cs.CL']
Systematic literature review is essential for evidence-based medicine, requiring comprehensive analysis of clinical trial publications. However, the application of artificial intelligence (AI) models for medical literature mining has been limited by insufficient training and evaluation across broad therapeutic areas an...
2025-01-27T17:55:37Z
null
null
null
null
null
null
null
null
null
null
2501.16372
Low-Rank Adapters Meet Neural Architecture Search for LLM Compression
['J. Pablo Muñoz', 'Jinjie Yuan', 'Nilesh Jain']
['cs.LG', 'cs.AI', 'cs.CL']
The rapid expansion of Large Language Models (LLMs) has posed significant challenges regarding the computational resources required for fine-tuning and deployment. Recent advancements in low-rank adapters have demonstrated their efficacy in parameter-efficient fine-tuning (PEFT) of these models. This retrospective pape...
2025-01-23T02:14:08Z
AAAI-25 Workshop on Connecting Low-rank Representations in AI
null
null
Low-Rank Adapters Meet Neural Architecture Search for LLM Compression
['J. P. Munoz', 'Jinjie Yuan', 'Nilesh Jain']
2025
arXiv.org
0
9
['Computer Science']
2501.16764
DiffSplat: Repurposing Image Diffusion Models for Scalable Gaussian Splat Generation
['Chenguo Lin', 'Panwang Pan', 'Bangbang Yang', 'Zeming Li', 'Yadong Mu']
['cs.CV']
Recent advancements in 3D content generation from text or a single image struggle with limited high-quality 3D datasets and inconsistency from 2D multi-view generation. We introduce DiffSplat, a novel 3D generative framework that natively generates 3D Gaussian splats by taming large-scale text-to-image diffusion models...
2025-01-28T07:38:59Z
Accepted to ICLR 2025; Project page: https://chenguolin.github.io/projects/DiffSplat
null
null
DiffSplat: Repurposing Image Diffusion Models for Scalable Gaussian Splat Generation
['Chenguo Lin', 'Panwang Pan', 'Bangbang Yang', 'Zeming Li', 'Yadong Mu']
2025
International Conference on Learning Representations
9
94
['Computer Science']
2501.16899
RDMM: Fine-Tuned LLM Models for On-Device Robotic Decision Making with Enhanced Contextual Awareness in Specific Domains
['Shady Nasrat', 'Myungsu Kim', 'Seonil Lee', 'Jiho Lee', 'Yeoncheol Jang', 'Seung-joon Yi']
['cs.RO', 'cs.AI']
Large language models (LLMs) represent a significant advancement in integrating physical robots with AI-driven systems. We showcase the capabilities of our framework within the context of the real-world household competition. This research introduces a framework that utilizes RDMM (Robotics Decision-Making Models), whi...
2025-01-28T12:35:06Z
null
null
null
null
null
null
null
null
null
null
2501.16937
TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models
['Makoto Shing', 'Kou Misaki', 'Han Bao', 'Sho Yokoi', 'Takuya Akiba']
['cs.LG', 'cs.AI', 'cs.CL']
Causal language models have demonstrated remarkable capabilities, but their size poses significant challenges for deployment in resource-constrained environments. Knowledge distillation, a widely-used technique for transferring knowledge from a large teacher model to a small student model, presents a promising approach...
2025-01-28T13:31:18Z
To appear at the 13th International Conference on Learning Representations (ICLR 2025) as a Spotlight presentation
null
null
null
null
null
null
null
null
null
2501.17088
Mamba-Shedder: Post-Transformer Compression for Efficient Selective Structured State Space Models
['J. Pablo Muñoz', 'Jinjie Yuan', 'Nilesh Jain']
['cs.LG', 'cs.AI', 'cs.CL', 'I.2.0']
Large pre-trained models have achieved outstanding results in sequence modeling. The Transformer block and its attention mechanism have been the main drivers of the success of these models. Recently, alternative architectures, such as Selective Structured State Space Models (SSMs), have been proposed to address the ine...
2025-01-28T17:22:01Z
NAACL-25 - Main track
null
null
null
null
null
null
null
null
null
2501.17144
FactCG: Enhancing Fact Checkers with Graph-Based Multi-Hop Data
['Deren Lei', 'Yaxi Li', 'Siyao Li', 'Mengya Hu', 'Rui Xu', 'Ken Archer', 'Mingyu Wang', 'Emily Ching', 'Alex Deng']
['cs.CL', 'cs.AI']
Prior research on training grounded factuality classification models to detect hallucinations in large language models (LLMs) has relied on public natural language inference (NLI) data and synthetic data. However, conventional NLI datasets are not well-suited for document-level reasoning, which is critical for detectin...
2025-01-28T18:45:07Z
NAACL 2025
null
null
FactCG: Enhancing Fact Checkers with Graph-Based Multi-Hop Data
['Deren Lei', 'Yaxi Li', 'Siyao Li', 'Mengya Hu', 'Rui Xu', 'Ken Archer', 'Mingyu Wang', 'Emily Ching', 'Alex Deng']
2025
North American Chapter of the Association for Computational Linguistics
2
36
['Computer Science']
2501.17161
SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training
['Tianzhe Chu', 'Yuexiang Zhai', 'Jihan Yang', 'Shengbang Tong', 'Saining Xie', 'Dale Schuurmans', 'Quoc V. Le', 'Sergey Levine', 'Yi Ma']
['cs.AI', 'cs.CV', 'cs.LG']
Supervised fine-tuning (SFT) and reinforcement learning (RL) are widely used post-training techniques for foundation models. However, their roles in enhancing model generalization capabilities remain unclear. This paper studies the difference between SFT and RL on generalization and memorization, focusing on text-based...
2025-01-28T18:59:44Z
Website at https://tianzhechu.com/SFTvsRL
null
null
null
null
null
null
null
null
null
2501.17195
Atla Selene Mini: A General Purpose Evaluation Model
['Andrei Alexandru', 'Antonia Calvi', 'Henry Broomfield', 'Jackson Golden', 'Kyle Dai', 'Mathias Leys', 'Maurice Burger', 'Max Bartolo', 'Roman Engeler', 'Sashank Pisupati', 'Toby Drane', 'Young Sun Park']
['cs.CL', 'cs.AI']
We introduce Atla Selene Mini, a state-of-the-art small language model-as-a-judge (SLMJ). Selene Mini is a general-purpose evaluator that outperforms the best SLMJs and GPT-4o-mini on overall performance across 11 out-of-distribution benchmarks, spanning absolute scoring, classification, and pairwise preference tasks. ...
2025-01-27T15:09:08Z
null
null
null
null
null
null
null
null
null
null
2501.17703
Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate
['Yubo Wang', 'Xiang Yue', 'Wenhu Chen']
['cs.CL']
Supervised Fine-Tuning (SFT) is commonly used to train language models to imitate annotated responses for given instructions. In this paper, we propose Critique Fine-Tuning (CFT), a method more effective than SFT for reasoning tasks. Instead of simply imitating correct responses, CFT trains models to critique noisy res...
2025-01-29T15:20:30Z
null
null
null
null
null
null
null
null
null
null
2501.17790
BreezyVoice: Adapting TTS for Taiwanese Mandarin with Enhanced Polyphone Disambiguation -- Challenges and Insights
['Chan-Jan Hsu', 'Yi-Cheng Lin', 'Chia-Chun Lin', 'Wei-Chih Chen', 'Ho Lam Chung', 'Chen-An Li', 'Yi-Chang Chen', 'Chien-Yu Yu', 'Ming-Ji Lee', 'Chien-Cheng Chen', 'Ru-Heng Huang', 'Hung-yi Lee', 'Da-Shan Shiu']
['cs.CL', 'cs.AI']
We present BreezyVoice, a Text-to-Speech (TTS) system specifically adapted for Taiwanese Mandarin, highlighting phonetic control abilities to address the unique challenges of polyphone disambiguation in the language. Building upon CosyVoice, we incorporate a $S^{3}$ tokenizer, a large language model (LLM), an optimal-t...
2025-01-29T17:31:26Z
null
null
null
null
null
null
null
null
null
null
2501.17811
Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling
['Xiaokang Chen', 'Zhiyu Wu', 'Xingchao Liu', 'Zizheng Pan', 'Wen Liu', 'Zhenda Xie', 'Xingkai Yu', 'Chong Ruan']
['cs.AI', 'cs.CL', 'cs.CV']
In this work, we introduce Janus-Pro, an advanced version of the previous work Janus. Specifically, Janus-Pro incorporates (1) an optimized training strategy, (2) expanded training data, and (3) scaling to larger model size. With these improvements, Janus-Pro achieves significant advancements in both multimodal underst...
2025-01-29T18:00:19Z
Research paper. arXiv admin note: text overlap with arXiv:2410.13848
null
null
Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling
['Xi-aokang Chen', 'Zhiyu Wu', 'Xingchao Liu', 'Zizheng Pan', 'Wen Liu', 'Zhenda Xie', 'Xingkai Yu', 'C. Ruan']
2025
arXiv.org
160
0
['Computer Science']
2501.17821
SSF: Sparse Long-Range Scene Flow for Autonomous Driving
['Ajinkya Khoche', 'Qingwen Zhang', 'Laura Pereira Sanchez', 'Aron Asefaw', 'Sina Sharif Mansouri', 'Patric Jensfelt']
['cs.CV']
Scene flow enables an understanding of the motion characteristics of the environment in the 3D world. It gains particular significance in the long-range, where object-based perception methods might fail due to sparse observations far away. Although significant advancements have been made in scene flow pipelines to hand...
2025-01-29T18:14:16Z
7 pages, 3 figures, accepted to International Conference on Robotics and Automation (ICRA) 2025
null
null
null
null
null
null
null
null
null
2501.18052
SAeUron: Interpretable Concept Unlearning in Diffusion Models with Sparse Autoencoders
['Bartosz Cywiński', 'Kamil Deja']
['cs.LG', 'cs.AI']
Diffusion models, while powerful, can inadvertently generate harmful or undesirable content, raising significant ethical and safety concerns. Recent machine unlearning approaches offer potential solutions but often lack transparency, making it difficult to understand the changes they introduce to the base model. In thi...
2025-01-29T23:29:47Z
null
null
null
SAeUron: Interpretable Concept Unlearning in Diffusion Models with Sparse Autoencoders
["Bartosz Cywi'nski", 'Kamil Deja']
2025
arXiv.org
9
62
['Computer Science']
2501.18107
Scaling Inference-Efficient Language Models
['Song Bian', 'Minghao Yan', 'Shivaram Venkataraman']
['cs.LG', 'cs.AI', 'cs.CL']
Scaling laws are powerful tools to predict the performance of large language models. However, current scaling laws fall short of accounting for inference costs. In this work, we first show that model architecture affects inference latency, where models of the same size can have up to 3.5x difference in latency. To tack...
2025-01-30T03:16:44Z
21 pages, 18 figures, ICML 2025
null
null
null
null
null
null
null
null
null
2501.18251
How to Select Datapoints for Efficient Human Evaluation of NLG Models?
['Vilém Zouhar', 'Peng Cui', 'Mrinmaya Sachan']
['cs.CL']
Human evaluation is the gold standard for evaluating text generation models. However, it is expensive. In order to fit budgetary constraints, a random subset of the test data is often chosen in practice for human evaluation. However, randomly selected data may not accurately represent test performance, making this appr...
2025-01-30T10:33:26Z
null
null
null
How to Select Datapoints for Efficient Human Evaluation of NLG Models?
['Vilém Zouhar', 'Peng Cui', 'Mrinmaya Sachan']
2025
arXiv.org
1
64
['Computer Science']
2501.18362
MedXpertQA: Benchmarking Expert-Level Medical Reasoning and Understanding
['Yuxin Zuo', 'Shang Qu', 'Yifei Li', 'Zhangren Chen', 'Xuekai Zhu', 'Ermo Hua', 'Kaiyan Zhang', 'Ning Ding', 'Bowen Zhou']
['cs.AI', 'cs.CL', 'cs.CV', 'cs.LG']
We introduce MedXpertQA, a highly challenging and comprehensive benchmark to evaluate expert-level medical knowledge and advanced reasoning. MedXpertQA includes 4,460 questions spanning 17 specialties and 11 body systems. It includes two subsets, Text for text evaluation and MM for multimodal evaluation. Notably, MM in...
2025-01-30T14:07:56Z
ICML 2025
null
null
null
null
null
null
null
null
null
2501.18427
SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute in Linear Diffusion Transformer
['Enze Xie', 'Junsong Chen', 'Yuyang Zhao', 'Jincheng Yu', 'Ligeng Zhu', 'Chengyue Wu', 'Yujun Lin', 'Zhekai Zhang', 'Muyang Li', 'Junyu Chen', 'Han Cai', 'Bingchen Liu', 'Daquan Zhou', 'Song Han']
['cs.CV']
This paper presents SANA-1.5, a linear Diffusion Transformer for efficient scaling in text-to-image generation. Building upon SANA-1.0, we introduce three key innovations: (1) Efficient Training Scaling: A depth-growth paradigm that enables scaling from 1.6B to 4.8B parameters with significantly reduced computational r...
2025-01-30T15:31:48Z
null
null
null
null
null
null
null
null
null
null
2501.18435
GENIE: Generative Note Information Extraction model for structuring EHR data
['Huaiyuan Ying', 'Hongyi Yuan', 'Jinsen Lu', 'Zitian Qu', 'Yang Zhao', 'Zhengyun Zhao', 'Isaac Kohane', 'Tianxi Cai', 'Sheng Yu']
['cs.CL']
Electronic Health Records (EHRs) hold immense potential for advancing healthcare, offering rich, longitudinal data that combines structured information with valuable insights from unstructured clinical notes. However, the unstructured nature of clinical text poses significant challenges for secondary applications. Trad...
2025-01-30T15:42:24Z
null
null
null
null
null
null
null
null
null
null
2501.18492
GuardReasoner: Towards Reasoning-based LLM Safeguards
['Yue Liu', 'Hongcheng Gao', 'Shengfang Zhai', 'Jun Xia', 'Tianyi Wu', 'Zhiwei Xue', 'Yulin Chen', 'Kenji Kawaguchi', 'Jiaheng Zhang', 'Bryan Hooi']
['cs.CR', 'cs.AI', 'cs.LG']
As LLMs increasingly impact safety-critical applications, ensuring their safety using guardrails remains a key challenge. This paper proposes GuardReasoner, a new safeguard for LLMs, by guiding the guard model to learn to reason. Concretely, we first create the GuardReasonerTrain dataset, which consists of 127K samples...
2025-01-30T17:06:06Z
22 pages, 18 figures
null
null
null
null
null
null
null
null
null
2501.18511
WILDCHAT-50M: A Deep Dive Into the Role of Synthetic Data in Post-Training
['Benjamin Feuer', 'Chinmay Hegde']
['cs.LG', 'cs.CL']
Language model (LLM) post-training, from DPO to distillation, can refine behaviors and unlock new skills, but the open science supporting these post-training techniques is still in its infancy. One limiting factor has been the difficulty of conducting large-scale comparative analyses of synthetic data generating models...
2025-01-30T17:21:44Z
ICML 2025
null
null
WILDCHAT-50M: A Deep Dive Into the Role of Synthetic Data in Post-Training
['Ben Feuer', 'Chinmay Hegde']
2025
arXiv.org
0
53
['Computer Science']
2501.18590
DiffusionRenderer: Neural Inverse and Forward Rendering with Video Diffusion Models
['Ruofan Liang', 'Zan Gojcic', 'Huan Ling', 'Jacob Munkberg', 'Jon Hasselgren', 'Zhi-Hao Lin', 'Jun Gao', 'Alexander Keller', 'Nandita Vijaykumar', 'Sanja Fidler', 'Zian Wang']
['cs.CV', 'cs.GR']
Understanding and modeling lighting effects are fundamental tasks in computer vision and graphics. Classic physically-based rendering (PBR) accurately simulates the light transport, but relies on precise scene representations--explicit 3D geometry, high-quality material properties, and lighting conditions--that are oft...
2025-01-30T18:59:11Z
CVPR 2025; project page: research.nvidia.com/labs/toronto-ai/DiffusionRenderer/
null
null
DiffusionRenderer: Neural Inverse and Forward Rendering with Video Diffusion Models
['Ruofan Liang', 'Zan Gojcic', 'Huan Ling', 'Jacob Munkberg', 'J. Hasselgren', 'Zhi-Hao Lin', 'Jun Gao', 'Alexander Keller', 'Nandita Vijaykumar', 'Sanja Fidler', 'Zian Wang']
2025
arXiv.org
11
91
['Computer Science']
2501.18670
High-Accuracy ECG Image Interpretation using Parameter-Efficient LoRA Fine-Tuning with Multimodal LLaMA 3.2
['Nandakishor M', 'Anjali M']
['cs.CV', 'cs.AI']
Electrocardiogram (ECG) interpretation is a cornerstone of cardiac diagnostics. This paper explores a practical approach to enhance ECG image interpretation using the multimodal LLaMA 3.2 model. We used a parameter-efficient fine-tuning strategy, Low-Rank Adaptation (LoRA), specifically designed to boost the model's ab...
2025-01-30T17:55:27Z
null
null
null
null
null
null
null
null
null
null
2501.18898
GestureLSM: Latent Shortcut based Co-Speech Gesture Generation with Spatial-Temporal Modeling
['Pinxin Liu', 'Luchuan Song', 'Junhua Huang', 'Haiyang Liu', 'Chenliang Xu']
['cs.CV', 'cs.GR']
Generating full-body human gestures from speech signals remains challenging in terms of both quality and speed. Existing approaches model different body regions such as the body, legs, and hands separately, which fails to capture the spatial interactions between them and results in unnatural and disjointed movements. Additionally, their...
2025-01-31T05:34:59Z
null
null
null
null
null
null
null
null
null
null
2501.18954
LLMDet: Learning Strong Open-Vocabulary Object Detectors under the Supervision of Large Language Models
['Shenghao Fu', 'Qize Yang', 'Qijie Mo', 'Junkai Yan', 'Xihan Wei', 'Jingke Meng', 'Xiaohua Xie', 'Wei-Shi Zheng']
['cs.CV']
Recent open-vocabulary detectors achieve promising performance with abundant region-level annotated data. In this work, we show that an open-vocabulary detector co-training with a large language model by generating image-level detailed captions for each image can further improve performance. To achieve the goal, we fir...
2025-01-31T08:27:31Z
null
null
null
null
null
null
null
null
null
null
2501.19054
Text-to-CAD Generation Through Infusing Visual Feedback in Large Language Models
['Ruiyu Wang', 'Yu Yuan', 'Shizhao Sun', 'Jiang Bian']
['cs.CV', 'cs.LG']
Creating Computer-Aided Design (CAD) models requires significant expertise and effort. Text-to-CAD, which converts textual descriptions into CAD parametric sequences, is crucial in streamlining this process. Recent studies have utilized ground-truth parametric sequences, known as sequential signals, as supervision to a...
2025-01-31T11:28:16Z
ICML 2025 camera ready
null
null
null
null
null
null
null
null
null
2501.19374
Fixing the Double Penalty in Data-Driven Weather Forecasting Through a Modified Spherical Harmonic Loss Function
['Christopher Subich', 'Syed Zahid Husain', 'Leo Separovic', 'Jing Yang']
['cs.LG', 'physics.ao-ph', 'I.2.6; I.2.1; J.2']
Recent advancements in data-driven weather forecasting models have delivered deterministic models that outperform the leading operational forecast systems based on traditional, physics-based models. However, these data-driven models are typically trained with a mean squared error loss function, which causes smoothing o...
2025-01-31T18:23:45Z
Accepted at ICML 2025
null
null
Fixing the Double Penalty in Data-Driven Weather Forecasting Through a Modified Spherical Harmonic Loss Function
['Christopher Subich', 'S. Husain', 'L. Šeparović', 'Jing Yang']
2025
arXiv.org
5
38
['Computer Science', 'Physics']
2501.19393
s1: Simple test-time scaling
['Niklas Muennighoff', 'Zitong Yang', 'Weijia Shi', 'Xiang Lisa Li', 'Li Fei-Fei', 'Hannaneh Hajishirzi', 'Luke Zettlemoyer', 'Percy Liang', 'Emmanuel Candès', 'Tatsunori Hashimoto']
['cs.CL', 'cs.AI', 'cs.LG']
Test-time scaling is a promising new approach to language modeling that uses extra test-time compute to improve performance. Recently, OpenAI's o1 model showed this capability but did not publicly share its methodology, leading to many replication efforts. We seek the simplest approach to achieve test-time scaling and ...
2025-01-31T18:48:08Z
46 pages (9 main), 10 figures, 15 tables
null
null
s1: Simple test-time scaling
['Niklas Muennighoff', 'Zitong Yang', 'Weijia Shi', 'Xiang Lisa Li', 'Fei-Fei Li', 'Hanna Hajishirzi', 'Luke S. Zettlemoyer', 'Percy Liang', 'Emmanuel J. Candes', 'Tatsunori Hashimoto']
2025
arXiv.org
392
72
['Computer Science']
2501.194
Vintix: Action Model via In-Context Reinforcement Learning
['Andrey Polubarov', 'Nikita Lyubaykin', 'Alexander Derevyagin', 'Ilya Zisman', 'Denis Tarasov', 'Alexander Nikulin', 'Vladislav Kurenkov']
['cs.LG', 'cs.AI', 'cs.RO']
In-Context Reinforcement Learning (ICRL) represents a promising paradigm for developing generalist agents that learn at inference time through trial-and-error interactions, analogous to how large language models adapt contextually, but with a focus on reward maximization. However, the scalability of ICRL beyond toy tas...
2025-01-31T18:57:08Z
Preprint. In review
null
null
Vintix: Action Model via In-Context Reinforcement Learning
['Andrey Polubarov', 'Nikita Lyubaykin', 'Alexander Derevyagin', 'Ilya Zisman', 'Denis Tarasov', 'Alexander Nikulin', 'Vladislav Kurenkov']
2025
arXiv.org
3
0
['Computer Science']
2502.00094
AIN: The Arabic INclusive Large Multimodal Model
['Ahmed Heakl', 'Sara Ghaboura', 'Omkar Thawkar', 'Fahad Shahbaz Khan', 'Hisham Cholakkal', 'Rao Muhammad Anwer', 'Salman Khan']
['cs.CV', 'cs.AI', 'cs.CL', 'cs.HC', 'cs.LG']
Amid the swift progress of large language models (LLMs) and their evolution into large multimodal models (LMMs), significant strides have been made in high-resource languages such as English and Chinese. While Arabic LLMs have seen notable progress, Arabic LMMs remain largely unexplored, often narrowly focusing on a fe...
2025-01-31T18:58:20Z
20 pages, 16 figures, ACL
null
null
AIN: The Arabic INclusive Large Multimodal Model
['Ahmed Heakl', 'Sara Ghaboura', 'Omkar Thawakar', 'F. Khan', 'Hisham Cholakkal', 'R. Anwer', 'Salman H. Khan']
2025
arXiv.org
1
0
['Computer Science']
2502.00196
DermaSynth: Rich Synthetic Image-Text Pairs Using Open Access Dermatology Datasets
['Abdurrahim Yilmaz', 'Furkan Yuceyalcin', 'Ece Gokyayla', 'Donghee Choi', 'Ozan Erdem', 'Ali Anil Demircali', 'Rahmetullah Varol', 'Ufuk Gorkem Kirabali', 'Gulsum Gencoglan', 'Joram M. Posma', 'Burak Temelkuran']
['cs.CV', 'cs.AI', 'cs.CL']
A major barrier to developing vision large language models (LLMs) in dermatology is the lack of a large image--text pair dataset. We introduce DermaSynth, a dataset comprising 92,020 synthetic image--text pairs curated from 45,205 images (13,568 clinical and 35,561 dermatoscopic) for dermatology-related clinical task...
2025-01-31T22:26:33Z
12 pages, 4 figures
null
null
DermaSynth: Rich Synthetic Image-Text Pairs Using Open Access Dermatology Datasets
['Abdurrahim Yilmaz', 'Furkan Yuceyalcin', 'Ece Gokyayla', 'Donghee Choi', 'Ozan Erdem Ali Anil Demircali', 'Rahmetullah Varol', 'Ufuk Gorkem Kirabali', 'G. Gencoglan', 'J. Posma', 'Burak Temelkuran']
2025
arXiv.org
0
19
['Computer Science']
2502.00203
Reward-aware Preference Optimization: A Unified Mathematical Framework for Model Alignment
['Shengyang Sun', 'Yian Zhang', 'Alexander Bukharin', 'David Mosallanezhad', 'Jiaqi Zeng', 'Soumye Singhal', 'Gerald Shen', 'Adithya Renduchintala', 'Tugrul Konuk', 'Yi Dong', 'Zhilin Wang', 'Dmitry Chichkov', 'Olivier Delalleau', 'Oleksii Kuchaiev']
['cs.LG', 'cs.CL']
The rapid development of large language model (LLM) alignment algorithms has resulted in a complex and fragmented landscape, with limited clarity on the effectiveness of different methods and their inter-connections. This paper introduces Reward-Aware Preference Optimization (RPO), a mathematical framework that unifies...
2025-01-31T22:39:04Z
8 pages, 4 figures; update author names
null
null
null
null
null
null
null
null
null
2502.00212
STP: Self-play LLM Theorem Provers with Iterative Conjecturing and Proving
['Kefan Dong', 'Tengyu Ma']
['cs.LG', 'cs.AI', 'cs.LO']
A fundamental challenge in formal theorem proving by LLMs is the lack of high-quality training data. Although reinforcement learning or expert iteration partially mitigates this issue by alternating between LLM generating proofs and finetuning them on correctly generated ones, performance quickly plateaus due to the sc...
2025-01-31T23:01:48Z
25 pages, 5 figures
null
null
null
null
null
null
null
null
null
2502.00258
ProxSparse: Regularized Learning of Semi-Structured Sparsity Masks for Pretrained LLMs
['Hongyi Liu', 'Rajarshi Saha', 'Zhen Jia', 'Youngsuk Park', 'Jiaji Huang', 'Shoham Sabach', 'Yu-Xiang Wang', 'George Karypis']
['cs.LG', 'cs.CL']
Large Language Models (LLMs) have demonstrated exceptional performance in natural language processing tasks, yet their massive size makes serving them inefficient and costly. Semi-structured pruning has emerged as an effective method for model acceleration, but existing approaches are suboptimal because they focus on l...
2025-02-01T01:35:23Z
ICML25
null
null
ProxSparse: Regularized Learning of Semi-Structured Sparsity Masks for Pretrained LLMs
['Hongyi Liu', 'Rajarshi Saha', 'Zhen Jia', 'Youngsuk Park', 'Jiaji Huang', 'Shoham Sabach', 'Yu-xiang Wang', 'George Karypis']
2025
arXiv.org
0
0
['Computer Science']
2502.00366
Prostate-Specific Foundation Models for Enhanced Detection of Clinically Significant Cancer
['Jeong Hoon Lee', 'Cynthia Xinran Li', 'Hassan Jahanandish', 'Indrani Bhattacharya', 'Sulaiman Vesal', 'Lichun Zhang', 'Shengtian Sang', 'Moon Hyung Choi', 'Simon John Christoph Soerensen', 'Steve Ran Zhou', 'Elijah Richard Sommer', 'Richard Fan', 'Pejman Ghanouni', 'Yuze Song', 'Tyler M. Seibert', 'Geoffrey A. Sonn',...
['eess.IV', 'cs.CV']
Accurate prostate cancer diagnosis remains challenging. Even when using MRI, radiologists exhibit low specificity and significant inter-observer variability, leading to potential delays or inaccuracies in identifying clinically significant cancers. This leads to numerous unnecessary biopsies and risks of missing clinic...
2025-02-01T08:42:33Z
44 pages
null
null
null
null
null
null
null
null
null
2502.00592
M+: Extending MemoryLLM with Scalable Long-Term Memory
['Yu Wang', 'Dmitry Krotov', 'Yuanzhe Hu', 'Yifan Gao', 'Wangchunshu Zhou', 'Julian McAuley', 'Dan Gutfreund', 'Rogerio Feris', 'Zexue He']
['cs.CL']
Equipping large language models (LLMs) with latent-space memory has attracted increasing attention as they can extend the context window of existing language models. However, retaining information from the distant past remains a challenge. For example, MemoryLLM (Wang et al., 2024a), as a representative work with laten...
2025-02-01T23:13:10Z
null
null
null
null
null
null
null
null
null
null
2502.00816
Sundial: A Family of Highly Capable Time Series Foundation Models
['Yong Liu', 'Guo Qin', 'Zhiyuan Shi', 'Zhi Chen', 'Caiyin Yang', 'Xiangdong Huang', 'Jianmin Wang', 'Mingsheng Long']
['cs.LG']
We introduce Sundial, a family of native, flexible, and scalable time series foundation models. To predict the next patch's distribution, we propose a TimeFlow Loss based on flow-matching, which facilitates native pre-training of Transformers on continuous-valued time series without discrete tokenization. Conditioned o...
2025-02-02T14:52:50Z
null
null
null
null
null
null
null
null
null
null
2502.00857
HintEval: A Comprehensive Framework for Hint Generation and Evaluation for Questions
['Jamshid Mozafari', 'Bhawna Piryani', 'Abdelrahman Abdallah', 'Adam Jatowt']
['cs.CL', 'cs.IR']
Large Language Models (LLMs) are transforming how people find information, and many users turn nowadays to chatbots to obtain answers to their questions. Despite the instant access to abundant information that LLMs offer, it is still important to promote critical thinking and problem-solving skills. Automatic hint gene...
2025-02-02T17:07:18Z
Submitted to SIGIR 2025
null
null
HintEval: A Comprehensive Framework for Hint Generation and Evaluation for Questions
['Jamshid Mozafari', 'Bhawna Piryani', 'Abdelrahman Abdallah', 'Adam Jatowt']
2025
arXiv.org
1
0
['Computer Science']
2502.00963
PDE-Controller: LLMs for Autoformalization and Reasoning of PDEs
['Mauricio Soroco', 'Jialin Song', 'Mengzhou Xia', 'Kye Emond', 'Weiran Sun', 'Wuyang Chen']
['cs.LG']
While recent AI-for-math has made strides in pure mathematics, areas of applied mathematics, particularly PDEs, remain underexplored despite their significant real-world applications. We present PDE-Controller, a framework that enables large language models (LLMs) to control systems governed by partial differential equ...
2025-02-03T00:03:41Z
null
null
null
null
null
null
null
null
null
null
2502.01051
Diffusion Model as a Noise-Aware Latent Reward Model for Step-Level Preference Optimization
['Tao Zhang', 'Cheng Da', 'Kun Ding', 'Huan Yang', 'Kun Jin', 'Yan Li', 'Tingting Gao', 'Di Zhang', 'Shiming Xiang', 'Chunhong Pan']
['cs.CV']
Preference optimization for diffusion models aims to align them with human preferences for images. Previous methods typically use Vision-Language Models (VLMs) as pixel-level reward models to approximate human preferences. However, when used for step-level preference optimization, these models face challenges in handli...
2025-02-03T04:51:28Z
25 pages, 26 tables, 15 figures
null
null
null
null
null
null
null
null
null
2502.01113
GFM-RAG: Graph Foundation Model for Retrieval Augmented Generation
['Linhao Luo', 'Zicheng Zhao', 'Gholamreza Haffari', 'Dinh Phung', 'Chen Gong', 'Shirui Pan']
['cs.IR', 'cs.AI', 'cs.CL']
Retrieval-augmented generation (RAG) has proven effective in integrating knowledge into large language models (LLMs). However, conventional RAGs struggle to capture complex relationships between pieces of knowledge, limiting their performance in intricate reasoning that requires integrating knowledge from multiple sour...
2025-02-03T07:04:29Z
19 pages, 6 figures
null
null
GFM-RAG: Graph Foundation Model for Retrieval Augmented Generation
['Linhao Luo', 'Zicheng Zhao', 'Gholamreza Haffari', 'D.Q. Phung', 'Chen Gong', 'Shirui Pan']
2025
arXiv.org
5
0
['Computer Science']
2502.01385
Detecting Backdoor Samples in Contrastive Language Image Pretraining
['Hanxun Huang', 'Sarah Erfani', 'Yige Li', 'Xingjun Ma', 'James Bailey']
['cs.LG', 'cs.CV']
Contrastive language-image pretraining (CLIP) has been found to be vulnerable to poisoning backdoor attacks where the adversary can achieve an almost perfect attack success rate on CLIP models by poisoning only 0.01% of the training dataset. This raises security concerns about the current practice of pretraining large-sc...
2025-02-03T14:21:05Z
ICLR2025
null
null
Detecting Backdoor Samples in Contrastive Language Image Pretraining
['Hanxun Huang', 'S. Erfani', 'Yige Li', 'Xingjun Ma', 'James Bailey']
2025
International Conference on Learning Representations
5
0
['Computer Science']
2502.01406
GRADIEND: Monosemantic Feature Learning within Neural Networks Applied to Gender Debiasing of Transformer Models
['Jonathan Drechsel', 'Steffen Herbold']
['cs.LG', 'cs.AI', 'cs.CL']
AI systems frequently exhibit and amplify social biases, including gender bias, leading to harmful consequences in critical areas. This study introduces a novel encoder-decoder approach that leverages model gradients to learn a single monosemantic feature neuron encoding gender information. We show that our method can ...
2025-02-03T14:38:27Z
null
null
null
null
null
null
null
null
null
null
2502.01416
Categorical Schrödinger Bridge Matching
['Grigoriy Ksenofontov', 'Alexander Korotin']
['cs.LG']
The Schrödinger Bridge (SB) is a powerful framework for solving generative modeling tasks such as unpaired domain translation. Most SB-related research focuses on the continuous data space $\mathbb{R}^{D}$ and leaves open theoretical and algorithmic questions about applying SB methods to discrete data, e.g., on finite spa...
2025-02-03T14:55:28Z
null
null
null
Categorical Schrödinger Bridge Matching
['Grigoriy Ksenofontov', 'Alexander Korotin']
2025
arXiv.org
1
63
['Computer Science']
2502.01456
Process Reinforcement through Implicit Rewards
['Ganqu Cui', 'Lifan Yuan', 'Zefan Wang', 'Hanbin Wang', 'Wendi Li', 'Bingxiang He', 'Yuchen Fan', 'Tianyu Yu', 'Qixin Xu', 'Weize Chen', 'Jiarui Yuan', 'Huayu Chen', 'Kaiyan Zhang', 'Xingtai Lv', 'Shuo Wang', 'Yuan Yao', 'Xu Han', 'Hao Peng', 'Yu Cheng', 'Zhiyuan Liu', 'Maosong Sun', 'Bowen Zhou', 'Ning Ding']
['cs.LG', 'cs.AI', 'cs.CL']
Dense process rewards have proven a more effective alternative to the sparse outcome-level rewards in the inference-time scaling of large language models (LLMs), particularly in tasks requiring complex multi-step reasoning. While dense rewards also offer an appealing choice for the reinforcement learning (RL) of LLMs s...
2025-02-03T15:43:48Z
20 pages. Model&Code&Data available at https://github.com/PRIME-RL/PRIME
null
null
Process Reinforcement through Implicit Rewards
['Ganqu Cui', 'Lifan Yuan', 'Zefan Wang', 'Hanbin Wang', 'Wendi Li', 'Bingxiang He', 'Yuchen Fan', 'Tianyu Yu', 'Qixin Xu', 'Weize Chen', 'Jiarui Yuan', 'Huayu Chen', 'Kaiyan Zhang', 'Xingtai Lv', 'Shuo Wang', 'Yuan Yao', 'Xu Han', 'Hao Peng', 'Yu Cheng', 'Zhiyuan Liu', 'Maosong Sun', 'Bowen Zhou', 'Ning Ding']
2025
arXiv.org
103
54
['Computer Science']
2502.01534
Preference Leakage: A Contamination Problem in LLM-as-a-judge
['Dawei Li', 'Renliang Sun', 'Yue Huang', 'Ming Zhong', 'Bohan Jiang', 'Jiawei Han', 'Xiangliang Zhang', 'Wei Wang', 'Huan Liu']
['cs.LG', 'cs.AI', 'cs.CL']
Large Language Models (LLMs) as judges and LLM-based data synthesis have emerged as two fundamental LLM-driven data annotation methods in model development. While their combination significantly enhances the efficiency of model training and evaluation, little attention has been given to the potential contamination brou...
2025-02-03T17:13:03Z
20 pages, 7 figures
null
null
null
null
null
null
null
null
null
2502.01562
Memento No More: Coaching AI Agents to Master Multiple Tasks via Hints Internalization
['Minttu Alakuijala', 'Ya Gao', 'Georgy Ananov', 'Samuel Kaski', 'Pekka Marttinen', 'Alexander Ilin', 'Harri Valpola']
['cs.LG']
As the general capabilities of artificial intelligence (AI) agents continue to evolve, their ability to learn to master multiple complex tasks through experience remains a key challenge. Current LLM agents, particularly those based on proprietary language models, typically rely on prompts to incorporate knowledge about...
2025-02-03T17:45:46Z
null
null
null
null
null
null
null
null
null
null
2502.01657
Improving Rule-based Reasoning in LLMs using Neurosymbolic Representations
['Varun Dhanraj', 'Chris Eliasmith']
['cs.LG', 'cs.AI']
Large language models (LLMs) continue to face challenges in reliably solving reasoning tasks, particularly those that require precise rule following, as often found in mathematical reasoning. This paper introduces a novel neurosymbolic method that improves LLM reasoning by encoding hidden states into neurosymbolic vect...
2025-01-31T20:29:51Z
null
null
null
Improving Rule-based Reasoning in LLMs using Neurosymbolic Representations
['Varun Dhanraj', 'Chris Eliasmith']
2025
null
0
39
['Computer Science']
2502.01717
Choose Your Model Size: Any Compression by a Single Gradient Descent
['Martin Genzel', 'Patrick Putzky', 'Pengfei Zhao', 'Sebastian Schulze', 'Mattes Mollenhauer', 'Robert Seidel', 'Stefan Dietzel', 'Thomas Wollmann']
['cs.LG']
The adoption of Foundation Models in resource-constrained environments remains challenging due to their large size and inference costs. A promising way to overcome these limitations is post-training compression, which aims to balance reduced model size against performance degradation. This work presents Any Compression...
2025-02-03T18:40:58Z
null
null
null
null
null
null
null
null
null
null
2502.01718
ACECODER: Acing Coder RL via Automated Test-Case Synthesis
['Huaye Zeng', 'Dongfu Jiang', 'Haozhe Wang', 'Ping Nie', 'Xiaotong Chen', 'Wenhu Chen']
['cs.SE', 'cs.AI', 'cs.CL']
Most progress in recent coder models has been driven by supervised fine-tuning (SFT), while the potential of reinforcement learning (RL) remains largely unexplored, primarily due to the lack of reliable reward data/model in the code domain. In this paper, we address this challenge by leveraging automated large-scale te...
2025-02-03T18:46:04Z
9 pages, 4 figure, 11 tables. Accepted to ACL 2025 main conference
null
null
null
null
null
null
null
null
null
2502.02016
A Periodic Bayesian Flow for Material Generation
['Hanlin Wu', 'Yuxuan Song', 'Jingjing Gong', 'Ziyao Cao', 'Yawen Ouyang', 'Jianbing Zhang', 'Hao Zhou', 'Wei-Ying Ma', 'Jingjing Liu']
['cs.LG', 'cs.AI']
Generative modeling of crystal data distribution is an important yet challenging task due to the unique periodic physical symmetry of crystals. Diffusion-based methods have shown early promise in modeling crystal distribution. More recently, Bayesian Flow Networks were introduced to aggregate noisy latent variables, re...
2025-02-04T05:07:13Z
Accepted to ICLR25
null
null
A Periodic Bayesian Flow for Material Generation
['Hanlin Wu', 'Yuxuan Song', 'Jingjing Gong', 'Ziyao Cao', 'Yawen Ouyang', 'Jianbing Zhang', 'Hao Zhou', 'Wei-Ying Ma', 'Jingjing Liu']
2025
International Conference on Learning Representations
3
121
['Computer Science']
2502.02095
LongDPO: Unlock Better Long-form Generation Abilities for LLMs via Critique-augmented Stepwise Information
['Bowen Ping', 'Jiali Zeng', 'Fandong Meng', 'Shuo Wang', 'Jie Zhou', 'Shanghang Zhang']
['cs.CL']
Long-form generation is crucial for academic paper writing and repo-level code generation. Despite this, current models, including GPT-4o, still exhibit unsatisfactory performance. Existing methods that utilize preference learning with outcome supervision often fail to provide detailed feedback for extended contexts. ...
2025-02-04T08:25:17Z
ACL 2025
null
null
LongDPO: Unlock Better Long-form Generation Abilities for LLMs via Critique-augmented Stepwise Information
['Bowen Ping', 'Jiali Zeng', 'Fandong Meng', 'Shuo Wang', 'Jie Zhou', 'Shanghang Zhang']
2025
arXiv.org
2
54
['Computer Science']
2502.02257
UNIP: Rethinking Pre-trained Attention Patterns for Infrared Semantic Segmentation
['Tao Zhang', 'Jinyong Wen', 'Zhen Chen', 'Kun Ding', 'Shiming Xiang', 'Chunhong Pan']
['cs.CV']
Pre-training techniques significantly enhance the performance of semantic segmentation tasks with limited training data. However, the efficacy under a large domain gap between pre-training (e.g. RGB) and fine-tuning (e.g. infrared) remains underexplored. In this study, we first benchmark the infrared semantic segmentat...
2025-02-04T12:08:20Z
ICLR 2025. 27 pages, 13 figures, 21 tables
null
null
UNIP: Rethinking Pre-trained Attention Patterns for Infrared Semantic Segmentation
['Tao Zhang', 'Jinyong Wen', 'Zhen Chen', 'Kun Ding', 'Shiming Xiang', 'Chunhong Pan']
2025
International Conference on Learning Representations
1
68
['Computer Science']
2502.02358
MotionLab: Unified Human Motion Generation and Editing via the Motion-Condition-Motion Paradigm
['Ziyan Guo', 'Zeyu Hu', 'Na Zhao', 'De Wen Soh']
['cs.CV']
Human motion generation and editing are key components of computer graphics and vision. However, current approaches in this field tend to offer isolated solutions tailored to specific tasks, which can be inefficient and impractical for real-world applications. While some efforts have aimed to unify motion-related tasks...
2025-02-04T14:43:26Z
null
null
null
null
null
null
null
null
null
null
2502.02384
STAIR: Improving Safety Alignment with Introspective Reasoning
['Yichi Zhang', 'Siyuan Zhang', 'Yao Huang', 'Zeyu Xia', 'Zhengwei Fang', 'Xiao Yang', 'Ranjie Duan', 'Dong Yan', 'Yinpeng Dong', 'Jun Zhu']
['cs.CL']
Ensuring the safety and harmlessness of Large Language Models (LLMs) has become as critical as their performance in applications. However, existing safety alignment methods typically suffer from safety-performance trade-offs and susceptibility to jailbreak attacks, primarily due to their reliance on direct ref...
2025-02-04T15:02:55Z
22 pages, 8 figures, ICML2025 Oral
null
null
null
null
null
null
null
null
null
2502.02465
Towards Consistent and Controllable Image Synthesis for Face Editing
['Mengting Wei', 'Tuomas Varanka', 'Yante Li', 'Xingxun Jiang', 'Huai-Qian Khor', 'Guoying Zhao']
['cs.CV']
Face editing methods, essential for tasks like virtual avatars, digital human synthesis and identity preservation, have traditionally been built upon GAN-based techniques, while recent focus has shifted to diffusion-based models due to their success in image reconstruction. However, diffusion models still face challeng...
2025-02-04T16:36:07Z
null
null
null
null
null
null
null
null
null
null
2502.02481
Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study
['Menglong Cui', 'Pengzhi Gao', 'Wei Liu', 'Jian Luan', 'Bin Wang']
['cs.CL']
Large language models (LLMs) have shown continuously improving multilingual capabilities, and even small-scale open-source models have demonstrated rapid performance enhancement. In this paper, we systematically explore the abilities of open LLMs with less than ten billion parameters to handle multilingual machine tran...
2025-02-04T16:57:03Z
Accept to NAACL2025 Main Conference
null
null
null
null
null
null
null
null
null
2502.02508
Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search
['Maohao Shen', 'Guangtao Zeng', 'Zhenting Qi', 'Zhang-Wei Hong', 'Zhenfang Chen', 'Wei Lu', 'Gregory Wornell', 'Subhro Das', 'David Cox', 'Chuang Gan']
['cs.CL', 'cs.AI']
Large language models (LLMs) have demonstrated remarkable reasoning capabilities across diverse domains. Recent studies have shown that increasing test-time computation enhances LLMs' reasoning capabilities. This typically involves extensive sampling at inference time guided by an external LLM verifier, resulting in a ...
2025-02-04T17:26:58Z
null
null
null
null
null
null
null
null
null
null
2502.02631
ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization
['Zechun Liu', 'Changsheng Zhao', 'Hanxian Huang', 'Sijia Chen', 'Jing Zhang', 'Jiawei Zhao', 'Scott Roy', 'Lisa Jin', 'Yunyang Xiong', 'Yangyang Shi', 'Lin Xiao', 'Yuandong Tian', 'Bilge Soran', 'Raghuraman Krishnamoorthi', 'Tijmen Blankevoort', 'Vikas Chandra']
['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV']
The optimal bit-width for achieving the best trade-off between quantized model size and accuracy has been a subject of ongoing debate. While some advocate for 4-bit quantization, others propose that 1.58-bit offers superior results. However, the lack of a cohesive framework for different bits has left such conclusions ...
2025-02-04T18:59:26Z
null
null
null
null
null
null
null
null
null
null
2502.02708
AsserT5: Test Assertion Generation Using a Fine-Tuned Code Language Model
['Severin Primbs', 'Benedikt Fein', 'Gordon Fraser']
['cs.SE']
Writing good software tests can be challenging; therefore, approaches that support developers are desirable. While generating complete tests automatically is such an approach commonly proposed in research, developers may already have specific test scenarios in mind and thus just require help in selecting the most suitab...
2025-02-04T20:42:22Z
Accepted for AST 2025 (https://conf.researchr.org/home/ast-2025)
null
10.1109/AST66626.2025.00008
null
null
null
null
null
null
null
2502.02737
SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model
['Loubna Ben Allal', 'Anton Lozhkov', 'Elie Bakouch', 'Gabriel Martín Blázquez', 'Guilherme Penedo', 'Lewis Tunstall', 'Andrés Marafioti', 'Hynek Kydlíček', 'Agustín Piqueres Lajarín', 'Vaibhav Srivastav', 'Joshua Lochner', 'Caleb Fahlgren', 'Xuan-Son Nguyen', 'Clémentine Fourrier', 'Ben Burtenshaw', 'Hugo Larcher', 'H...
['cs.CL']
While large language models have facilitated breakthroughs in many applications of artificial intelligence, their inherent largeness makes them computationally expensive and challenging to deploy in resource-constrained settings. In this paper, we document the development of SmolLM2, a state-of-the-art "small" (1.7 bil...
2025-02-04T21:43:16Z
null
null
null
SmolLM2: When Smol Goes Big - Data-Centric Training of a Small Language Model
['Loubna Ben Allal', 'Anton Lozhkov', 'Elie Bakouch', 'Gabriel Martín Blázquez', 'Guilherme Penedo', 'Lewis Tunstall', 'Andrés Marafioti', 'Hynek Kydlíček', 'Agustín Piqueres Lajarín', 'Vaibhav Srivastav', 'Joshua Lochner', 'Caleb Fahlgren', 'Xuan-Son Nguyen', 'Clémentine Fourrier', 'Ben Burtenshaw', 'Hugo Larche...
2025
arXiv.org
47
0
['Computer Science']
2502.02904
ScholaWrite: A Dataset of End-to-End Scholarly Writing Process
['Linghe Wang', 'Minhwa Lee', 'Ross Volkov', 'Luan Tuyen Chau', 'Dongyeop Kang']
['cs.HC', 'cs.CL', 'q-bio.NC']
Writing is a cognitively demanding task involving continuous decision-making, heavy use of working memory, and frequent switching between multiple activities. Scholarly writing is particularly complex as it requires authors to coordinate many pieces of multiform knowledge. To fully understand writers' cognitive thought...
2025-02-05T05:57:37Z
Equal contribution: Linghe Wang, Minhwa Lee | project page: https://minnesotanlp.github.io/scholawrite/
null
null
ScholaWrite: A Dataset of End-to-End Scholarly Writing Process
['Linghe Wang', 'Minhwa Lee', 'Ross Volkov', 'Luan Tuyen Chau', 'Dongyeop Kang']
2025
arXiv.org
4
46
['Computer Science', 'Biology']
2502.03128
Metis: A Foundation Speech Generation Model with Masked Generative Pre-training
['Yuancheng Wang', 'Jiachen Zheng', 'Junan Zhang', 'Xueyao Zhang', 'Huan Liao', 'Zhizheng Wu']
['cs.SD', 'cs.AI', 'cs.LG', 'eess.AS', 'eess.SP']
We introduce Metis, a foundation model for unified speech generation. Unlike previous task-specific or multi-task models, Metis follows a pre-training and fine-tuning paradigm. It is pre-trained on large-scale unlabeled speech data using masked generative modeling and then fine-tuned to adapt to diverse speech generati...
2025-02-05T12:36:21Z
null
null
null
Metis: A Foundation Speech Generation Model with Masked Generative Pre-training
['Yuancheng Wang', 'Jiachen Zheng', 'Junan Zhang', 'Xueyao Zhang', 'Huan Liao', 'Zhizheng Wu']
2025
arXiv.org
3
0
['Computer Science', 'Engineering']
2502.03212
Leveraging Broadcast Media Subtitle Transcripts for Automatic Speech Recognition and Subtitling
['Jakob Poncelet', 'Hugo Van hamme']
['eess.AS', 'cs.SD']
The recent advancement of speech recognition technology has been driven by large-scale datasets and attention-based architectures, but many challenges still remain, especially for low-resource languages and dialects. This paper explores the integration of weakly supervised transcripts from TV subtitles into automatic s...
2025-02-05T14:26:58Z
Preprint
null
null
null
null
null
null
null
null
null
2502.03333
RadVLM: A Multitask Conversational Vision-Language Model for Radiology
['Nicolas Deperrois', 'Hidetoshi Matsuo', 'Samuel Ruipérez-Campillo', 'Moritz Vandenhirtz', 'Sonia Laguna', 'Alain Ryser', 'Koji Fujimoto', 'Mizuho Nishio', 'Thomas M. Sutter', 'Julia E. Vogt', 'Jonas Kluckert', 'Thomas Frauenfelder', 'Christian Blüthgen', 'Farhad Nooralahzadeh', 'Michael Krauthammer']
['cs.CV', 'cs.AI']
The widespread use of chest X-rays (CXRs), coupled with a shortage of radiologists, has driven growing interest in automated CXR analysis and AI-assisted reporting. While existing vision-language models (VLMs) show promise in specific tasks such as report generation or abnormality detection, they often lack support for...
2025-02-05T16:27:02Z
21 pages, 15 figures
null
null
null
null
null
null
null
null
null
2502.03382
High-Fidelity Simultaneous Speech-To-Speech Translation
['Tom Labiausse', 'Laurent Mazaré', 'Edouard Grave', 'Patrick Pérez', 'Alexandre Défossez', 'Neil Zeghidour']
['cs.CL', 'cs.SD', 'eess.AS']
We introduce Hibiki, a decoder-only model for simultaneous speech translation. Hibiki leverages a multistream language model to synchronously process source and target speech, and jointly produces text and audio tokens to perform speech-to-text and speech-to-speech translation. We furthermore address the fundamental ch...
2025-02-05T17:18:55Z
null
null
null
High-Fidelity Simultaneous Speech-To-Speech Translation
['Tom Labiausse', 'Laurent Mazaré', 'Edouard Grave', 'Patrick Pérez', 'Alexandre Défossez', 'Neil Zeghidour']
2025
arXiv.org
1
42
['Computer Science', 'Engineering']
2502.03387
LIMO: Less is More for Reasoning
['Yixin Ye', 'Zhen Huang', 'Yang Xiao', 'Ethan Chern', 'Shijie Xia', 'Pengfei Liu']
['cs.CL', 'cs.AI']
We present a fundamental discovery that challenges our understanding of how complex reasoning emerges in large language models. While conventional wisdom suggests that sophisticated reasoning tasks demand extensive training data (>100,000 examples), we demonstrate that complex mathematical reasoning abilities can be ef...
2025-02-05T17:23:45Z
17 pages
null
null
LIMO: Less is More for Reasoning
['Yixin Ye', 'Zhen Huang', 'Yang Xiao', 'Ethan Chern', 'Shijie Xia', 'Pengfei Liu']
2,025
arXiv.org
166
0
['Computer Science']
2,502.03438
BFS-Prover: Scalable Best-First Tree Search for LLM-based Automatic Theorem Proving
['Ran Xin', 'Chenguang Xi', 'Jie Yang', 'Feng Chen', 'Hang Wu', 'Xia Xiao', 'Yifan Sun', 'Shen Zheng', 'Kai Shen']
['cs.AI']
Recent advancements in large language models (LLMs) have spurred growing interest in automatic theorem proving using Lean4, where effective tree search methods are crucial for navigating the underlying large proof search spaces. While the existing approaches primarily rely on value functions and/or Monte Carlo Tree Sea...
2025-02-05T18:33:36Z
null
null
null
BFS-Prover: Scalable Best-First Tree Search for LLM-based Automatic Theorem Proving
['Ran Xin', 'Chenguang Xi', 'Jie Yang', 'Feng Chen', 'Hang Wu', 'Xia Xiao', 'Yifan Sun', 'Shen Zheng', 'Kai Shen']
2,025
arXiv.org
16
32
['Computer Science']
2,502.03492
Teaching Language Models to Critique via Reinforcement Learning
['Zhihui Xie', 'Jie chen', 'Liyu Chen', 'Weichao Mao', 'Jingjing Xu', 'Lingpeng Kong']
['cs.LG', 'cs.AI', 'cs.CL']
Teaching large language models (LLMs) to critique and refine their outputs is crucial for building systems that can iteratively improve, yet it is fundamentally limited by the ability to provide accurate judgments and actionable suggestions. In this work, we study LLM critics for code generation and propose $\texttt{CT...
2025-02-05T02:18:46Z
null
null
null
Teaching Language Models to Critique via Reinforcement Learning
['Zhihui Xie', 'Jie chen', 'Liyu Chen', 'Weichao Mao', 'Jingjing Xu', 'Lingpeng Kong']
2,025
arXiv.org
13
0
['Computer Science']
2,502.03499
Omni-DNA: A Unified Genomic Foundation Model for Cross-Modal and Multi-Task Learning
['Zehui Li', 'Vallijah Subasri', 'Yifei Shen', 'Dongsheng Li', 'Yiren Zhao', 'Guy-Bart Stan', 'Caihua Shan']
['q-bio.GN', 'cs.AI', 'cs.LG']
Large Language Models (LLMs) demonstrate remarkable generalizability across diverse tasks, yet genomic foundation models (GFMs) still require separate finetuning for each downstream application, creating significant overhead as model sizes grow. Moreover, existing GFMs are constrained by rigid output formats, limiting ...
2025-02-05T09:20:52Z
null
null
null
Omni-DNA: A Unified Genomic Foundation Model for Cross-Modal and Multi-Task Learning
['Zehui Li', 'Vallijah Subasri', 'Yifei Shen', 'Dongsheng Li', 'Yiren Zhao', 'Guy-Bart Stan', 'Caihua Shan']
2,025
arXiv.org
0
0
['Computer Science', 'Biology']
2,502.03629
REALEDIT: Reddit Edits As a Large-scale Empirical Dataset for Image Transformations
['Peter Sushko', 'Ayana Bharadwaj', 'Zhi Yang Lim', 'Vasily Ilin', 'Ben Caffee', 'Dongping Chen', 'Mohammadreza Salehi', 'Cheng-Yu Hsieh', 'Ranjay Krishna']
['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG']
Existing image editing models struggle to meet real-world demands. Despite excelling in academic benchmarks, they have yet to be widely adopted for real user needs. Datasets that power these models use artificial edits, lacking the scale and ecological validity necessary to address the true diversity of user requests. ...
2025-02-05T21:35:48Z
Published at CVPR 2025
null
null
REALEDIT: Reddit Edits As a Large-scale Empirical Dataset for Image Transformations
['Peter Sushko', 'Ayana Bharadwaj', 'Zhi Yang Lim', 'Vasily Ilin', 'Ben Caffee', 'Dongping Chen', 'Mohammadreza Salehi', 'Cheng-Yu Hsieh', 'Ranjay Krishna']
2,025
arXiv.org
1
71
['Computer Science']
2,502.03738
Scaling Laws in Patchification: An Image Is Worth 50,176 Tokens And More
['Feng Wang', 'Yaodong Yu', 'Guoyizhe Wei', 'Wei Shao', 'Yuyin Zhou', 'Alan Yuille', 'Cihang Xie']
['cs.CV']
Since the introduction of Vision Transformer (ViT), patchification has long been regarded as a de facto image tokenization approach for plain visual architectures. By compressing the spatial size of images, this approach can effectively shorten the token sequence and reduce the computational cost of ViT-like plain arch...
2025-02-06T03:01:38Z
null
null
null
null
null
null
null
null
null
null
2,502.03793
It's All in The [MASK]: Simple Instruction-Tuning Enables BERT-like Masked Language Models As Generative Classifiers
['Benjamin Clavié', 'Nathan Cooper', 'Benjamin Warner']
['cs.CL', 'cs.AI']
While encoder-only models such as BERT and ModernBERT are ubiquitous in real-world NLP applications, their conventional reliance on task-specific classification heads can limit their applicability compared to decoder-based large language models (LLMs). In this work, we introduce ModernBERT-Large-Instruct, a 0.4B-parame...
2025-02-06T05:47:37Z
null
null
null
null
null
null
null
null
null
null
2,502.03979
Towards Unified Music Emotion Recognition across Dimensional and Categorical Models
['Jaeyong Kang', 'Dorien Herremans']
['cs.SD', 'cs.AI', 'eess.AS']
One of the most significant challenges in Music Emotion Recognition (MER) comes from the fact that emotion labels can be heterogeneous across datasets with regard to the emotion representation, including categorical (e.g., happy, sad) versus dimensional labels (e.g., valence-arousal). In this paper, we present a unifie...
2025-02-06T11:20:22Z
null
null
null
Towards Unified Music Emotion Recognition across Dimensional and Categorical Models
['Jaeyong Kang', 'Dorien Herremans']
2,025
arXiv.org
0
56
['Computer Science', 'Engineering']
2,502.04128
Llasa: Scaling Train-Time and Inference-Time Compute for Llama-based Speech Synthesis
['Zhen Ye', 'Xinfa Zhu', 'Chi-Min Chan', 'Xinsheng Wang', 'Xu Tan', 'Jiahe Lei', 'Yi Peng', 'Haohe Liu', 'Yizhu Jin', 'Zheqi Dai', 'Hongzhan Lin', 'Jianyi Chen', 'Xingjian Du', 'Liumeng Xue', 'Yunlin Chen', 'Zhifei Li', 'Lei Xie', 'Qiuqiang Kong', 'Yike Guo', 'Wei Xue']
['eess.AS', 'cs.AI', 'cs.CL', 'cs.MM', 'cs.SD']
Recent advances in text-based large language models (LLMs), particularly in the GPT series and the o1 model, have demonstrated the effectiveness of scaling both training-time and inference-time compute. However, current state-of-the-art TTS systems leveraging LLMs are often multi-stage, requiring separate models (e.g.,...
2025-02-06T15:04:00Z
null
null
null
Llasa: Scaling Train-Time and Inference-Time Compute for Llama-based Speech Synthesis
['Zhen Ye', 'Xinfa Zhu', 'Chi-min Chan', 'Xinsheng Wang', 'Xu Tan', 'Jiahe Lei', 'Yi Peng', 'Haohe Liu', 'Yizhu Jin', 'Zheqi Dai', 'Hongzhan Lin', 'Jianyi Chen', 'Xingjian Du', 'Liumeng Xue', 'Yunlin Chen', 'Zhifei Li', 'Lei Xie', 'Qiuqiang Kong', 'Yi-Ting Guo', 'Wei Xue']
2,025
arXiv.org
9
68
['Engineering', 'Computer Science']
2,502.04153
UltraIF: Advancing Instruction Following from the Wild
['Kaikai An', 'Li Sheng', 'Ganqu Cui', 'Shuzheng Si', 'Ning Ding', 'Yu Cheng', 'Baobao Chang']
['cs.CL', 'cs.AI']
Instruction-following made modern large language models (LLMs) helpful assistants. However, the key to taming LLMs on complex instructions remains mysterious, for that there are huge gaps between models trained by open-source community and those trained by leading companies. To bridge the gap, we propose a simple and s...
2025-02-06T15:39:16Z
null
null
null
null
null
null
null
null
null
null
2,502.04328
Ola: Pushing the Frontiers of Omni-Modal Language Model
['Zuyan Liu', 'Yuhao Dong', 'Jiahui Wang', 'Ziwei Liu', 'Winston Hu', 'Jiwen Lu', 'Yongming Rao']
['cs.CV', 'cs.CL', 'cs.MM', 'cs.SD', 'eess.AS', 'eess.IV']
Recent advances in large language models, particularly following GPT-4o, have sparked increasing interest in developing omni-modal models capable of understanding more modalities. While some open-source alternatives have emerged, there is still a notable lag behind specialized single-modality models in performance. In ...
2025-02-06T18:59:55Z
null
null
null
Ola: Pushing the Frontiers of Omni-Modal Language Model with Progressive Modality Alignment
['Zuyan Liu', 'Yuhao Dong', 'Jiahui Wang', 'Ziwei Liu', 'Winston Hu', 'Jiwen Lu', 'Yongming Rao']
2,025
arXiv.org
17
78
['Computer Science', 'Engineering']
2,502.0435
CodeSteer: Symbolic-Augmented Language Models via Code/Text Guidance
['Yongchao Chen', 'Yilun Hao', 'Yueying Liu', 'Yang Zhang', 'Chuchu Fan']
['cs.CL', 'cs.AI', 'cs.LG', 'cs.SC', 'cs.SE']
Existing methods fail to effectively steer Large Language Models (LLMs) between textual reasoning and code generation, leaving symbolic computing capabilities underutilized. We introduce CodeSteer, an effective method for guiding LLM code/text generation. We construct a comprehensive benchmark SymBench comprising 37 sy...
2025-02-04T15:53:59Z
28 pages, 12 figures
International Conference on Machine Learning (ICML'2025)
null
null
null
null
null
null
null
null
2,502.04404
Step Back to Leap Forward: Self-Backtracking for Boosting Reasoning of Language Models
['Xiao-Wen Yang', 'Xuan-Yi Zhu', 'Wen-Da Wei', 'Ding-Chu Zhang', 'Jie-Jing Shao', 'Zhi Zhou', 'Lan-Zhe Guo', 'Yu-Feng Li']
['cs.CL', 'cs.AI']
The integration of slow-thinking mechanisms into large language models (LLMs) offers a promising way toward achieving Level 2 AGI Reasoners, as exemplified by systems like OpenAI's o1. However, several significant challenges remain, including inefficient overthinking and an overreliance on auxiliary reward models. We p...
2025-02-06T08:52:43Z
This is a preprint under review, 15 pages, 13 figures
null
null
null
null
null
null
null
null
null
2,502.04465
FocalCodec: Low-Bitrate Speech Coding via Focal Modulation Networks
['Luca Della Libera', 'Francesco Paissan', 'Cem Subakan', 'Mirco Ravanelli']
['cs.LG', 'cs.AI', 'cs.SD', 'eess.AS']
Large language models have revolutionized natural language processing through self-supervised pretraining on massive datasets. Inspired by this success, researchers have explored adapting these methods to speech by discretizing continuous audio into tokens using neural audio codecs. However, existing approaches face li...
2025-02-06T19:24:50Z
18 pages
null
null
FocalCodec: Low-Bitrate Speech Coding via Focal Modulation Networks
['Luca Della Libera', 'F. Paissan', 'Cem Subakan', 'M. Ravanelli']
2,025
arXiv.org
1
0
['Computer Science', 'Engineering']