| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
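The IDs in the rows below were rendered as float64 by the dataset viewer, which inserted thousands separators and dropped trailing zeros (e.g. `2,504.2069` for arXiv ID `2504.20690`). A minimal sketch of the column schema and of undoing that rendering; the function name is illustrative, not from the source, and it assumes new-style arXiv IDs (`YYMM.NNNNN`, five-digit sequence numbers):

```python
# Columns of the arXiv / Semantic Scholar metadata table shown above.
COLUMNS = [
    "arxiv_id", "title", "authors", "categories", "summary", "published",
    "comments", "journal_ref", "doi", "ss_title", "ss_authors", "ss_year",
    "ss_venue", "ss_citationCount", "ss_referenceCount", "ss_fieldsOfStudy",
]

def restore_arxiv_id(value: float) -> str:
    """Undo float64 rendering of a new-style arXiv ID.

    IDs since 2015 are YYMM.NNNNN, so the fractional part must be
    zero-padded back to five digits (2504.2069 -> "2504.20690").
    """
    yymm, _, seq = f"{value:.5f}".partition(".")
    return f"{int(yymm):04d}.{seq}"

print(restore_arxiv_id(2504.2069))  # -> 2504.20690
```

The same float rendering affects `ss_year` (shown as `2,025` for 2025); those values are plain integers and need only the separator stripped.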
2504.20690 | In-Context Edit: Enabling Instructional Image Editing with In-Context
Generation in Large Scale Diffusion Transformer | ['Zechuan Zhang', 'Ji Xie', 'Yu Lu', 'Zongxin Yang', 'Yi Yang'] | ['cs.CV'] | Instruction-based image editing enables robust image modification via natural
language prompts, yet current methods face a precision-efficiency tradeoff.
Fine-tuning methods demand significant computational resources and large
datasets, while training-free techniques struggle with instruction
comprehension and edit qua... | 2025-04-29T12:14:47Z | Project Page: https://river-zhang.github.io/ICEdit-gh-pages/ | null | null | null | null | null | null | null | null | null |
2504.20703 | BrightCookies at SemEval-2025 Task 9: Exploring Data Augmentation for
Food Hazard Classification | ['Foteini Papadopoulou', 'Osman Mutlu', 'Neris Özen', 'Bas H. M. van der Velden', 'Iris Hendrickx', 'Ali Hürriyetoğlu'] | ['cs.CL'] | This paper presents our system developed for the SemEval-2025 Task 9: The
Food Hazard Detection Challenge. The shared task's objective is to evaluate
explainable classification systems for classifying hazards and products in two
levels of granularity from food recall incident reports. In this work, we
propose text augm... | 2025-04-29T12:34:28Z | null | null | null | null | null | null | null | null | null | null |
2504.20966 | Softpick: No Attention Sink, No Massive Activations with Rectified
Softmax | ['Zayd M. K. Zuhri', 'Erland Hilman Fuadi', 'Alham Fikri Aji'] | ['cs.LG'] | We introduce softpick, a rectified, not sum-to-one, drop-in replacement for
softmax in transformer attention mechanisms that eliminates attention sink and
massive activations. Our experiments with 340M and 1.8B parameter models
demonstrate that softpick achieves 0\% sink rate consistently. The softpick
transformers pro... | 2025-04-29T17:36:18Z | Updated to include experiments on 1.8B parameter models | null | null | null | null | null | null | null | null | null |
2504.20995 | TesserAct: Learning 4D Embodied World Models | ['Haoyu Zhen', 'Qiao Sun', 'Hongxin Zhang', 'Junyan Li', 'Siyuan Zhou', 'Yilun Du', 'Chuang Gan'] | ['cs.CV', 'cs.RO'] | This paper presents an effective approach for learning novel 4D embodied
world models, which predict the dynamic evolution of 3D scenes over time in
response to an embodied agent's actions, providing both spatial and temporal
consistency. We propose to learn a 4D world model by training on RGB-DN (RGB,
Depth, and Norma... | 2025-04-29T17:59:30Z | Project Page: https://tesseractworld.github.io/ | null | null | null | null | null | null | null | null | null |
2504.21039 | Llama-3.1-FoundationAI-SecurityLLM-Base-8B Technical Report | ['Paul Kassianik', 'Baturay Saglam', 'Alexander Chen', 'Blaine Nelson', 'Anu Vellore', 'Massimo Aufiero', 'Fraser Burch', 'Dhruv Kedia', 'Avi Zohary', 'Sajana Weerawardhena', 'Aman Priyanshu', 'Adam Swanda', 'Amy Chang', 'Hyrum Anderson', 'Kojin Oshiba', 'Omar Santos', 'Yaron Singer', 'Amin Karbasi'] | ['cs.CR', 'cs.AI'] | As transformer-based large language models (LLMs) increasingly permeate
society, they have revolutionized domains such as software engineering,
creative writing, and digital arts. However, their adoption in cybersecurity
remains limited due to challenges like scarcity of specialized training data
and complexity of repr... | 2025-04-28T08:41:12Z | null | null | null | null | null | null | null | null | null | null |
2504.21117 | Beyond One-Size-Fits-All: Inversion Learning for Highly Effective NLG
Evaluation Prompts | ['Hanhua Hong', 'Chenghao Xiao', 'Yang Wang', 'Yiqi Liu', 'Wenge Rong', 'Chenghua Lin'] | ['cs.CL'] | Evaluating natural language generation (NLG) systems is challenging due to
the diversity of valid outputs. While human evaluation is the gold standard, it
suffers from inconsistencies, lack of standardisation, and demographic biases,
limiting reproducibility. LLM-based evaluation offers a scalable alternative
but is hi... | 2025-04-29T18:56:12Z | 10 pages | null | null | Beyond One-Size-Fits-All: Inversion Learning for Highly Effective NLG Evaluation Prompts | ['Hanhua Hong', 'Chenghao Xiao', 'Yang Wang', 'Yiqi Liu', 'Wenge Rong', 'Chenghua Lin'] | 2025 | arXiv.org | 0 | 45 | ['Computer Science'] |
2504.21233 | Phi-4-Mini-Reasoning: Exploring the Limits of Small Reasoning Language
Models in Math | ['Haoran Xu', 'Baolin Peng', 'Hany Awadalla', 'Dongdong Chen', 'Yen-Chun Chen', 'Mei Gao', 'Young Jin Kim', 'Yunsheng Li', 'Liliang Ren', 'Yelong Shen', 'Shuohang Wang', 'Weijian Xu', 'Jianfeng Gao', 'Weizhu Chen'] | ['cs.CL'] | Chain-of-Thought (CoT) significantly enhances formal reasoning capabilities
in Large Language Models (LLMs) by training them to explicitly generate
intermediate reasoning steps. While LLMs readily benefit from such techniques,
improving reasoning in Small Language Models (SLMs) remains challenging due to
their limited ... | 2025-04-30T00:04:35Z | null | null | null | null | null | null | null | null | null | null |
2504.21318 | Phi-4-reasoning Technical Report | ['Marah Abdin', 'Sahaj Agarwal', 'Ahmed Awadallah', 'Vidhisha Balachandran', 'Harkirat Behl', 'Lingjiao Chen', 'Gustavo de Rosa', 'Suriya Gunasekar', 'Mojan Javaheripi', 'Neel Joshi', 'Piero Kauffmann', 'Yash Lara', 'Caio César Teodoro Mendes', 'Arindam Mitra', 'Besmira Nushi', 'Dimitris Papailiopoulos', 'Olli Saarikiv... | ['cs.AI', 'cs.CL'] | We introduce Phi-4-reasoning, a 14-billion parameter reasoning model that
achieves strong performance on complex reasoning tasks. Trained via supervised
fine-tuning of Phi-4 on carefully curated set of "teachable" prompts-selected
for the right level of complexity and diversity-and reasoning demonstrations
generated us... | 2025-04-30T05:05:09Z | null | null | null | Phi-4-reasoning Technical Report | ['Marah Abdin', 'Sahaj Agarwal', 'Ahmed Awadallah', 'Vidhisha Balachandran', 'Harkirat Singh Behl', 'Lingjiao Chen', 'Gustavo de Rosa', 'S. Gunasekar', 'Mojan Javaheripi', 'Neel Joshi', 'Piero Kauffmann', 'Yash Lara', 'C. C. T. Mendes', 'Arindam Mitra', 'Besmira Nushi', 'Dimitris Papailiopoulos', 'Olli Saarikivi', 'Shi... | 2025 | arXiv.org | 15 | 56 | ['Computer Science'] |
2504.21336 | UniBiomed: A Universal Foundation Model for Grounded Biomedical Image
Interpretation | ['Linshan Wu', 'Yuxiang Nie', 'Sunan He', 'Jiaxin Zhuang', 'Luyang Luo', 'Neeraj Mahboobani', 'Varut Vardhanabhuti', 'Ronald Cheong Kin Chan', 'Yifan Peng', 'Pranav Rajpurkar', 'Hao Chen'] | ['cs.CV'] | The integration of AI-assisted biomedical image analysis into clinical
practice demands AI-generated findings that are not only accurate but also
interpretable to clinicians. However, existing biomedical AI models generally
lack the ability to simultaneously generate diagnostic findings and localize
corresponding biome... | 2025-04-30T05:51:48Z | The first universal foundation model for grounded biomedical image
interpretation | null | null | null | null | null | null | null | null | null |
2504.21356 | Nexus-Gen: Unified Image Understanding, Generation, and Editing via
Prefilled Autoregression in Shared Embedding Space | ['Hong Zhang', 'Zhongjie Duan', 'Xingjun Wang', 'Yuze Zhao', 'Weiyi Lu', 'Zhipeng Di', 'Yixuan Xu', 'Yingda Chen', 'Yu Zhang'] | ['cs.CV', 'cs.AI'] | Unified multimodal generative models aim to integrate image understanding and
generation abilities, offering significant advantages in harnessing multimodal
corpora, particularly interleaved text-image data. However, existing unified
models exhibit limitations in image synthesis quality, autoregressive error
accumulati... | 2025-04-30T06:30:48Z | null | null | null | Nexus-Gen: A Unified Model for Image Understanding, Generation, and Editing | ['Hong Zhang', 'Zhongjie Duan', 'Xingjun Wang', 'Yingda Chen', 'Yuze Zhao', 'Yu Zhang'] | 2025 | arXiv.org | 6 | 28 | ['Computer Science'] |
2504.21467 | Multiview Point Cloud Registration via Optimization in an Autoencoder
Latent Space | ['Luc Vedrenne', 'Sylvain Faisan', 'Denis Fortun'] | ['cs.CV'] | Point cloud rigid registration is a fundamental problem in 3D computer
vision. In the multiview case, we aim to find a set of 6D poses to align a set
of objects. Methods based on pairwise registration rely on a subsequent
synchronization algorithm, which makes them poorly scalable with the number of
views. Generative a... | 2025-04-30T09:42:38Z | 14 pages, 19 figures, IEEE Transactions on Image Processing | null | 10.1109/TIP.2025.3565998 | null | null | null | null | null | null | null |
2504.21614 | Mcity Data Engine: Iterative Model Improvement Through Open-Vocabulary
Data Selection | ['Daniel Bogdoll', 'Rajanikant Patnaik Ananta', 'Abeyankar Giridharan', 'Isabel Moore', 'Gregory Stevens', 'Henry X. Liu'] | ['cs.CV'] | With an ever-increasing availability of data, it has become more and more
challenging to select and label appropriate samples for the training of machine
learning models. It is especially difficult to detect long-tail classes of
interest in large amounts of unlabeled data. This holds especially true for
Intelligent Tra... | 2025-04-30T13:10:59Z | null | null | null | null | null | null | null | null | null | null |
2504.21650 | HoloTime: Taming Video Diffusion Models for Panoramic 4D Scene
Generation | ['Haiyang Zhou', 'Wangbo Yu', 'Jiawen Guan', 'Xinhua Cheng', 'Yonghong Tian', 'Li Yuan'] | ['cs.CV'] | The rapid advancement of diffusion models holds the promise of
revolutionizing the application of VR and AR technologies, which typically
require scene-level 4D assets for user experience. Nonetheless, existing
diffusion models predominantly concentrate on modeling static 3D scenes or
object-level dynamics, constrainin... | 2025-04-30T13:55:28Z | Project Homepage: https://zhouhyocean.github.io/holotime/ Code:
https://github.com/PKU-YuanGroup/HoloTime | null | null | HoloTime: Taming Video Diffusion Models for Panoramic 4D Scene Generation | ['Haiyang Zhou', 'Wangbo Yu', 'Jiawen Guan', 'Xinhua Cheng', 'Yonghong Tian', 'Li Yuan'] | 2025 | arXiv.org | 1 | 0 | ['Computer Science'] |
2504.21776 | WebThinker: Empowering Large Reasoning Models with Deep Research
Capability | ['Xiaoxi Li', 'Jiajie Jin', 'Guanting Dong', 'Hongjin Qian', 'Yutao Zhu', 'Yongkang Wu', 'Ji-Rong Wen', 'Zhicheng Dou'] | ['cs.CL', 'cs.AI', 'cs.IR'] | Large reasoning models (LRMs), such as OpenAI-o1 and DeepSeek-R1, demonstrate
impressive long-horizon reasoning capabilities. However, their reliance on
static internal knowledge limits their performance on complex,
knowledge-intensive tasks and hinders their ability to produce comprehensive
research reports requiring ... | 2025-04-30T16:25:25Z | null | null | null | null | null | null | null | null | null | null |
2504.21798 | SWE-smith: Scaling Data for Software Engineering Agents | ['John Yang', 'Kilian Leret', 'Carlos E. Jimenez', 'Alexander Wettig', 'Kabir Khandpur', 'Yanzhe Zhang', 'Binyuan Hui', 'Ofir Press', 'Ludwig Schmidt', 'Diyi Yang'] | ['cs.SE', 'cs.AI', 'cs.CL'] | Despite recent progress in Language Models (LMs) for software engineering,
collecting training data remains a significant pain point. Existing datasets
are small, with at most 1,000s of training instances from 11 or fewer GitHub
repositories. The procedures to curate such datasets are often complex,
necessitating hundr... | 2025-04-30T16:56:06Z | All assets available at https://swesmith.com | null | null | SWE-smith: Scaling Data for Software Engineering Agents | ['John Yang', 'Kilian Leret', 'Carlos E. Jimenez', 'Alexander Wettig', 'Kabir Khandpur', 'Yanzhe Zhang', 'Binyuan Hui', 'Ofir Press', 'Ludwig Schmidt', 'Diyi Yang'] | 2025 | arXiv.org | 7 | 0 | ['Computer Science'] |
2505.00001 | Rosetta-PL: Propositional Logic as a Benchmark for Large Language Model
Reasoning | ['Shaun Baek', 'Shaun Esua-Mensah', 'Cyrus Tsui', 'Sejan Vigneswaralingam', 'Abdullah Alali', 'Michael Lu', 'Vasu Sharma', "Sean O'Brien", 'Kevin Zhu'] | ['cs.CL'] | Large Language Models (LLMs) are primarily trained on high-resource natural
languages, limiting their effectiveness in low-resource settings and in tasks
requiring deep logical reasoning. This research introduces Rosetta-PL, a
benchmark designed to evaluate LLMs' logical reasoning and generalization
capabilities in a c... | 2025-03-25T21:12:29Z | null | null | null | Rosetta-PL: Propositional Logic as a Benchmark for Large Language Model Reasoning | ['Shaun Baek', 'Shaun Esua-Mensah', 'Cyrus Tsui', 'Sejan Vigneswaralingam', 'Abdullah Alali', 'Michael Lu', 'Vasu Sharma', 'Kevin Zhu'] | 2025 | North American Chapter of the Association for Computational Linguistics | 0 | 19 | ['Computer Science'] |
2505.00022 | Aleph-Alpha-GermanWeb: Improving German-language LLM pre-training with
model-based data curation and synthetic data generation | ['Thomas F Burns', 'Letitia Parcalabescu', 'Stephan Wäldchen', 'Michael Barlow', 'Gregor Ziegltrum', 'Volker Stampa', 'Bastian Harren', 'Björn Deiseroth'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Scaling data quantity is essential for large language models (LLMs), yet
recent findings show that data quality can significantly boost performance and
training efficiency. We introduce a German-language dataset curation pipeline
that combines heuristic and model-based filtering techniques with synthetic
data generatio... | 2025-04-24T17:23:46Z | 10 pages, 3 figures | null | null | null | null | null | null | null | null | null |
2505.00334 | Quaternion Wavelet-Conditioned Diffusion Models for Image
Super-Resolution | ['Luigi Sigillo', 'Christian Bianchi', 'Aurelio Uncini', 'Danilo Comminiello'] | ['cs.CV', 'cs.LG'] | Image Super-Resolution is a fundamental problem in computer vision with broad
applications spacing from medical imaging to satellite analysis. The ability to
reconstruct high-resolution images from low-resolution inputs is crucial for
enhancing downstream tasks such as object detection and segmentation. While
deep lear... | 2025-05-01T06:17:33Z | Accepted for presentation at IJCNN 2025 | null | null | Quaternion Wavelet-Conditioned Diffusion Models for Image Super-Resolution | ['Luigi Sigillo', 'Christian Bianchi', 'A. Uncini', 'Danilo Comminiello'] | 2025 | arXiv.org | 1 | 49 | ['Computer Science'] |
2505.00568 | Multimodal Masked Autoencoder Pre-training for 3D MRI-Based Brain Tumor
Analysis with Missing Modalities | ['Lucas Robinet', 'Ahmad Berjaoui', 'Elizabeth Cohen-Jonathan Moyal'] | ['cs.CV', 'cs.AI'] | Multimodal magnetic resonance imaging (MRI) constitutes the first line of
investigation for clinicians in the care of brain tumors, providing crucial
insights for surgery planning, treatment monitoring, and biomarker
identification. Pre-training on large datasets have been shown to help models
learn transferable repres... | 2025-05-01T14:51:30Z | null | null | null | Multimodal Masked Autoencoder Pre-training for 3D MRI-Based Brain Tumor Analysis with Missing Modalities | ['Lucas Robinet', 'Ahmad Berjaoui', 'E. Cohen-Jonathan'] | 2025 | arXiv.org | 0 | 32 | ['Computer Science'] |
2505.00598 | Fast and Low-Cost Genomic Foundation Models via Outlier Removal | ['Haozheng Luo', 'Chenghao Qiu', 'Maojiang Su', 'Zhihan Zhou', 'Zoe Mehta', 'Guo Ye', 'Jerry Yao-Chieh Hu', 'Han Liu'] | ['cs.LG', 'cs.AI'] | To address the challenge of scarce computational resources in genomic
modeling, we introduce GERM, a genomic foundation model with strong compression
performance and fast adaptability. GERM improves upon models like DNABERT-2 by
eliminating outliers that hinder low-rank adaptation and post-training
quantization, enhanc... | 2025-05-01T15:31:09Z | International Conference on Machine Learning (ICML) 2025 | null | null | null | null | null | null | null | null | null |
2505.00703 | T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level
and Token-level CoT | ['Dongzhi Jiang', 'Ziyu Guo', 'Renrui Zhang', 'Zhuofan Zong', 'Hao Li', 'Le Zhuo', 'Shilin Yan', 'Pheng-Ann Heng', 'Hongsheng Li'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG'] | Recent advancements in large language models have demonstrated how
chain-of-thought (CoT) and reinforcement learning (RL) can improve performance.
However, applying such reasoning strategies to the visual generation domain
remains largely unexplored. In this paper, we present T2I-R1, a novel
reasoning-enhanced text-to-... | 2025-05-01T17:59:46Z | Project Page: https://github.com/CaraJ7/T2I-R1 | null | null | null | null | null | null | null | null | null |
2505.00949 | Llama-Nemotron: Efficient Reasoning Models | ['Akhiad Bercovich', 'Itay Levy', 'Izik Golan', 'Mohammad Dabbah', 'Ran El-Yaniv', 'Omri Puny', 'Ido Galil', 'Zach Moshe', 'Tomer Ronen', 'Najeeb Nabwani', 'Ido Shahaf', 'Oren Tropp', 'Ehud Karpas', 'Ran Zilberstein', 'Jiaqi Zeng', 'Soumye Singhal', 'Alexander Bukharin', 'Yian Zhang', 'Tugrul Konuk', 'Gerald Shen', 'Am... | ['cs.CL', 'cs.AI', 'cs.LG'] | We introduce the Llama-Nemotron series of models, an open family of
heterogeneous reasoning models that deliver exceptional reasoning capabilities,
inference efficiency, and an open license for enterprise use. The family comes
in three sizes -- Nano (8B), Super (49B), and Ultra (253B) -- and performs
competitively with... | 2025-05-02T01:35:35Z | null | null | null | null | null | null | null | null | null | null |
2505.01257 | CAMELTrack: Context-Aware Multi-cue ExpLoitation for Online Multi-Object
Tracking | ['Vladimir Somers', 'Baptiste Standaert', 'Victor Joos', 'Alexandre Alahi', 'Christophe De Vleeschouwer'] | ['cs.CV', 'cs.LG'] | Online multi-object tracking has been recently dominated by
tracking-by-detection (TbD) methods, where recent advances rely on increasingly
sophisticated heuristics for tracklet representation, feature fusion, and
multi-stage matching. The key strength of TbD lies in its modular design,
enabling the integration of spec... | 2025-05-02T13:26:23Z | null | null | null | null | null | null | null | null | null | null |
2505.01481 | VideoHallu: Evaluating and Mitigating Multi-modal Hallucinations on
Synthetic Video Understanding | ['Zongxia Li', 'Xiyang Wu', 'Guangyao Shi', 'Yubin Qin', 'Hongyang Du', 'Tianyi Zhou', 'Dinesh Manocha', 'Jordan Lee Boyd-Graber'] | ['cs.CV', 'cs.LG'] | Synthetic video generation has gained significant attention for its realism
and broad applications, but remains prone to violations of common sense and
physical laws. This highlights the need for reliable abnormality detectors that
understand such principles and are robust to hallucinations. To address this,
we introdu... | 2025-05-02T15:58:38Z | null | null | null | null | null | null | null | null | null | null |
2505.01583 | TEMPURA: Temporal Event Masked Prediction and Understanding for
Reasoning in Action | ['Jen-Hao Cheng', 'Vivian Wang', 'Huayu Wang', 'Huapeng Zhou', 'Yi-Hao Peng', 'Hou-I Liu', 'Hsiang-Wei Huang', 'Kuang-Ming Chen', 'Cheng-Yen Yang', 'Wenhao Chai', 'Yi-Ling Chen', 'Vibhav Vineet', 'Qin Cai', 'Jenq-Neng Hwang'] | ['cs.CV', 'cs.AI'] | Understanding causal event relationships and achieving fine-grained temporal
grounding in videos remain challenging for vision-language models. Existing
methods either compress video tokens to reduce temporal resolution, or treat
videos as unsegmented streams, which obscures fine-grained event boundaries and
limits the... | 2025-05-02T21:00:17Z | null | null | null | null | null | null | null | null | null | null |
2505.02009 | Towards Safer Pretraining: Analyzing and Filtering Harmful Content in
Webscale datasets for Responsible LLMs | ['Sai Krishna Mendu', 'Harish Yenala', 'Aditi Gulati', 'Shanu Kumar', 'Parag Agrawal'] | ['cs.CL', 'cs.LG'] | Large language models (LLMs) have become integral to various real-world
applications, leveraging massive, web-sourced datasets like Common Crawl, C4,
and FineWeb for pretraining. While these datasets provide linguistic data
essential for high-quality natural language generation, they often contain
harmful content, such... | 2025-05-04T06:37:20Z | 10 pages, 5 figures. Accepted at the International Joint Conferences
on Artificial Intelligence IJCAI 2025 (main track) | null | null | null | null | null | null | null | null | null |
2505.02214 | An Empirical Study of Qwen3 Quantization | ['Xingyu Zheng', 'Yuye Li', 'Haoran Chu', 'Yue Feng', 'Xudong Ma', 'Jie Luo', 'Jinyang Guo', 'Haotong Qin', 'Michele Magno', 'Xianglong Liu'] | ['cs.LG'] | The Qwen series has emerged as a leading family of open-source Large Language
Models (LLMs), demonstrating remarkable capabilities in natural language
understanding tasks. With the recent release of Qwen3, which exhibits superior
performance across diverse benchmarks, there is growing interest in deploying
these models... | 2025-05-04T18:43:44Z | null | null | null | null | null | null | null | null | null | null |
2505.02387 | RM-R1: Reward Modeling as Reasoning | ['Xiusi Chen', 'Gaotang Li', 'Ziqi Wang', 'Bowen Jin', 'Cheng Qian', 'Yu Wang', 'Hongru Wang', 'Yu Zhang', 'Denghui Zhang', 'Tong Zhang', 'Hanghang Tong', 'Heng Ji'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Reward modeling is essential for aligning large language models with human
preferences through reinforcement learning from human feedback. To provide
accurate reward signals, a reward model (RM) should stimulate deep thinking and
conduct interpretable reasoning before assigning a score or a judgment.
Inspired by recent... | 2025-05-05T06:11:12Z | 25 pages, 8 figures | null | null | RM-R1: Reward Modeling as Reasoning | ['Xiusi Chen', 'Gaotang Li', 'Ziqi Wang', 'Bowen Jin', 'Cheng Qian', 'Yu Wang', 'Hongru Wang', 'Yu Zhang', 'Denghui Zhang', 'Tong Zhang', 'Hanghang Tong', 'Heng Ji'] | 2025 | arXiv.org | 21 | 64 | ['Computer Science'] |
2505.02390 | Quantitative Analysis of Performance Drop in DeepSeek Model Quantization | ['Enbo Zhao', 'Yi Shen', 'Shuming Shi', 'Jieyun Huang', 'Zhihao Chen', 'Ning Wang', 'Siqi Xiao', 'Jian Zhang', 'Kai Wang', 'Shiguo Lian'] | ['cs.LG', 'cs.AI'] | Recently, there is a high demand for deploying DeepSeek-R1 and V3 locally,
possibly because the official service often suffers from being busy and some
organizations have data privacy concerns. While single-machine deployment
offers infrastructure simplicity, the models' 671B FP8 parameter configuration
exceeds the pra... | 2025-05-05T06:25:20Z | This version added the results of DeepSeek-V3-0324 | null | null | null | null | null | null | null | null | null |
2505.02393 | Uncertainty-Weighted Image-Event Multimodal Fusion for Video Anomaly
Detection | ['Sungheon Jeong', 'Jihong Park', 'Mohsen Imani'] | ['cs.CV'] | Most existing video anomaly detectors rely solely on RGB frames, which lack
the temporal resolution needed to capture abrupt or transient motion cues, key
indicators of anomalous events. To address this limitation, we propose
Image-Event Fusion for Video Anomaly Detection (IEF-VAD), a framework that
synthesizes event r... | 2025-05-05T06:33:20Z | null | null | null | null | null | null | null | null | null | null |
2505.02410 | Bielik 11B v2 Technical Report | ['Krzysztof Ociepa', 'Łukasz Flis', 'Krzysztof Wróbel', 'Adrian Gwoździej', 'Remigiusz Kinas'] | ['cs.CL', 'cs.AI', '68T50', 'I.2.7'] | We present Bielik 11B v2, a state-of-the-art language model optimized for
Polish text processing. Built on the Mistral 7B v0.2 architecture and scaled to
11B parameters using depth up-scaling, this model demonstrates exceptional
performance across Polish language benchmarks while maintaining strong
cross-lingual capabi... | 2025-05-05T07:03:41Z | null | null | null | Bielik 11B v2 Technical Report | ['Krzysztof Ociepa', 'Lukasz Flis', "Krzysztof Wr'obel", "Adrian Gwo'zdziej", 'Remigiusz Kinas'] | 2025 | arXiv.org | 0 | 56 | ['Computer Science'] |
2505.02466 | Tevatron 2.0: Unified Document Retrieval Toolkit across Scale, Language,
and Modality | ['Xueguang Ma', 'Luyu Gao', 'Shengyao Zhuang', 'Jiaqi Samantha Zhan', 'Jamie Callan', 'Jimmy Lin'] | ['cs.IR'] | Recent advancements in large language models (LLMs) have driven interest in
billion-scale retrieval models with strong generalization across retrieval
tasks and languages. Additionally, progress in large vision-language models has
created new opportunities for multimodal retrieval. In response, we have
updated the Teva... | 2025-05-05T08:52:49Z | Accepted in SIGIR 2025 (Demo) | null | null | null | null | null | null | null | null | null |
2505.02471 | Ming-Lite-Uni: Advancements in Unified Architecture for Natural
Multimodal Interaction | ['Inclusion AI', 'Biao Gong', 'Cheng Zou', 'Dandan Zheng', 'Hu Yu', 'Jingdong Chen', 'Jianxin Sun', 'Junbo Zhao', 'Jun Zhou', 'Kaixiang Ji', 'Lixiang Ru', 'Libin Wang', 'Qingpei Guo', 'Rui Liu', 'Weilong Chai', 'Xinyu Xiao', 'Ziyuan Huang'] | ['cs.CV'] | We introduce Ming-Lite-Uni, an open-source multimodal framework featuring a
newly designed unified visual generator and a native multimodal autoregressive
model tailored for unifying vision and language. Specifically, this project
provides an open-source implementation of the integrated MetaQueries and
M2-omni framewor... | 2025-05-05T08:56:12Z | https://github.com/inclusionAI/Ming/tree/Ming-Lite-Omni-Preview/Ming-unify | null | null | null | null | null | null | null | null | null |
2505.02550 | Bielik v3 Small: Technical Report | ['Krzysztof Ociepa', 'Łukasz Flis', 'Remigiusz Kinas', 'Krzysztof Wróbel', 'Adrian Gwoździej'] | ['cs.LG', 'cs.AI', 'cs.CL', '68T50', 'I.2.7'] | We introduce Bielik v3, a series of parameter-efficient generative text
models (1.5B and 4.5B) optimized for Polish language processing. These models
demonstrate that smaller, well-optimized architectures can achieve performance
comparable to much larger counterparts while requiring substantially fewer
computational re... | 2025-05-05T10:39:51Z | null | null | null | null | null | null | null | null | null | null |
2505.02625 | LLaMA-Omni2: LLM-based Real-time Spoken Chatbot with Autoregressive
Streaming Speech Synthesis | ['Qingkai Fang', 'Yan Zhou', 'Shoutao Guo', 'Shaolei Zhang', 'Yang Feng'] | ['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS'] | Real-time, intelligent, and natural speech interaction is an essential part
of the next-generation human-computer interaction. Recent advancements have
showcased the potential of building intelligent spoken chatbots based on large
language models (LLMs). In this paper, we introduce LLaMA-Omni 2, a series of
speech lang... | 2025-05-05T12:53:09Z | Preprint. Project: https://github.com/ictnlp/LLaMA-Omni2 | null | null | null | null | null | null | null | null | null |
2505.02707 | Voila: Voice-Language Foundation Models for Real-Time Autonomous
Interaction and Voice Role-Play | ['Yemin Shi', 'Yu Shu', 'Siwei Dong', 'Guangyi Liu', 'Jaward Sesay', 'Jingwen Li', 'Zhiting Hu'] | ['cs.AI', 'cs.CL', 'cs.SD'] | A voice AI agent that blends seamlessly into daily life would interact with
humans in an autonomous, real-time, and emotionally expressive manner. Rather
than merely reacting to commands, it would continuously listen, reason, and
respond proactively, fostering fluid, dynamic, and emotionally resonant
interactions. We i... | 2025-05-05T15:05:01Z | 18 pages, 7 figures, Website: https://voila.maitrix.org | null | null | Voila: Voice-Language Foundation Models for Real-Time Autonomous Interaction and Voice Role-Play | ['Yemin Shi', 'Yu Shu', 'Siwei Dong', 'Guangyi Liu', 'Jaward Sesay', 'Jingwen Li', 'Zhiting Hu'] | 2025 | arXiv.org | 0 | 62 | ['Computer Science'] |
2505.02819 | ReplaceMe: Network Simplification via Depth Pruning and Transformer
Block Linearization | ['Dmitriy Shopkhoev', 'Ammar Ali', 'Magauiya Zhussip', 'Valentin Malykh', 'Stamatios Lefkimmiatis', 'Nikos Komodakis', 'Sergey Zagoruyko'] | ['cs.CL'] | We introduce ReplaceMe, a generalized training-free depth pruning method that
effectively replaces transformer blocks with a linear operation, while
maintaining high performance for low compression ratios. In contrast to
conventional pruning approaches that require additional training or
fine-tuning, our approach requi... | 2025-05-05T17:47:42Z | null | null | null | null | null | null | null | null | null | null |
2505.02829 | LISAT: Language-Instructed Segmentation Assistant for Satellite Imagery | ['Jerome Quenum', 'Wen-Han Hsieh', 'Tsung-Han Wu', 'Ritwik Gupta', 'Trevor Darrell', 'David M. Chan'] | ['cs.AI'] | Segmentation models can recognize a pre-defined set of objects in images.
However, models that can reason over complex user queries that implicitly refer
to multiple objects of interest are still in their infancy. Recent advances in
reasoning segmentation--generating segmentation masks from complex, implicit
query text... | 2025-05-05T17:56:25Z | 28 pages, 10 figures, 19 tables | null | null | null | null | null | null | null | null | null |
2505.02835 | R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement
Learning | ['Yi-Fan Zhang', 'Xingyu Lu', 'Xiao Hu', 'Chaoyou Fu', 'Bin Wen', 'Tianke Zhang', 'Changyi Liu', 'Kaiyu Jiang', 'Kaibing Chen', 'Kaiyu Tang', 'Haojie Ding', 'Jiankang Chen', 'Fan Yang', 'Zhang Zhang', 'Tingting Gao', 'Liang Wang'] | ['cs.CV', 'cs.CL'] | Multimodal Reward Models (MRMs) play a crucial role in enhancing the
performance of Multimodal Large Language Models (MLLMs). While recent
advancements have primarily focused on improving the model structure and
training data of MRMs, there has been limited exploration into the
effectiveness of long-term reasoning capa... | 2025-05-05T17:59:50Z | Home page: https://github.com/yfzhang114/r1_reward | null | null | null | null | null | null | null | null | null |
2505.02881 | Rewriting Pre-Training Data Boosts LLM Performance in Math and Code | ['Kazuki Fujii', 'Yukito Tajima', 'Sakae Mizuki', 'Hinari Shimada', 'Taihei Shiotani', 'Koshiro Saito', 'Masanari Ohi', 'Masaki Kawamura', 'Taishi Nakamura', 'Takumi Okamoto', 'Shigeki Ishida', 'Kakeru Hattori', 'Youmi Ma', 'Hiroya Takamura', 'Rio Yokota', 'Naoaki Okazaki'] | ['cs.LG', 'cs.AI'] | The performance of large language models (LLMs) in program synthesis and
mathematical reasoning is fundamentally limited by the quality of their
pre-training corpora. We introduce two openly licensed datasets, released under
the Llama 3.3 Community License, that significantly enhance LLM performance by
systematically r... | 2025-05-05T07:38:43Z | null | null | null | Rewriting Pre-Training Data Boosts LLM Performance in Math and Code | ['Kazuki Fujii', 'Yukito Tajima', 'Sakae Mizuki', 'Hinari Shimada', 'Taihei Shiotani', 'Koshiro Saito', 'Masanari Ohi', 'Masaki Kawamura', 'Taishi Nakamura', 'Takumi Okamoto', 'Shigeki Ishida', 'Kakeru Hattori', 'Youmi Ma', 'Hiroya Takamura', 'Rio Yokota', 'Naoaki Okazaki'] | 2025 | arXiv.org | 1 | 26 | ['Computer Science'] |
2505.03005 | RADLADS: Rapid Attention Distillation to Linear Attention Decoders at
Scale | ['Daniel Goldstein', 'Eric Alcaide', 'Janna Lu', 'Eugene Cheah'] | ['cs.CL', 'cs.AI', 'cs.LG', 'I.2.7'] | We present Rapid Attention Distillation to Linear Attention Decoders at Scale
(RADLADS), a protocol for rapidly converting softmax attention transformers
into linear attention decoder models, along with two new RWKV-variant
architectures, and models converted from popular Qwen2.5 open source models in
7B, 32B, and 72B ... | 2025-05-05T20:03:28Z | null | null | null | null | null | null | null | null | null | null |
2,505.03059 | Improving Model Alignment Through Collective Intelligence of Open-Source
LLMS | ['Junlin Wang', 'Roy Xie', 'Shang Zhu', 'Jue Wang', 'Ben Athiwaratkun', 'Bhuwan Dhingra', 'Shuaiwen Leon Song', 'Ce Zhang', 'James Zou'] | ['cs.CL'] | Building helpful and harmless large language models (LLMs) requires an effective
model alignment approach based on human instructions and feedback, which
necessitates high-quality human-labeled data. Constructing such datasets is
often expensive and hard to scale, and may face potential limitations on
diversity and genera... | 2025-05-05T22:40:23Z | ICML 2025 | null | null | Improving Model Alignment Through Collective Intelligence of Open-Source LLMS | ['Junlin Wang', 'Roy Xie', 'Shang Zhu', 'Jue Wang', 'Ben Athiwaratkun', 'Bhuwan Dhingra', 'S. Song', 'Ce Zhang', 'James Zou'] | 2,025 | arXiv.org | 0 | 54 | ['Computer Science'] |
2,505.03186 | CoGenAV: Versatile Audio-Visual Representation Learning via
Contrastive-Generative Synchronization | ['Detao Bai', 'Zhiheng Ma', 'Xihan Wei', 'Liefeng Bo'] | ['cs.SD', 'cs.CV', 'eess.AS'] | The inherent synchronization between a speaker's lip movements, voice, and
the underlying linguistic content offers a rich source of information for
improving speech processing tasks, especially in challenging conditions where
traditional audio-only systems falter. We introduce CoGenAV, a powerful and
data-efficient mo... | 2025-05-06T05:07:11Z | null | null | null | null | null | null | null | null | null | null |
2,505.03318 | Unified Multimodal Chain-of-Thought Reward Model through Reinforcement
Fine-Tuning | ['Yibin Wang', 'Zhimin Li', 'Yuhang Zang', 'Chunyu Wang', 'Qinglin Lu', 'Cheng Jin', 'Jiaqi Wang'] | ['cs.CV'] | Recent advances in multimodal Reward Models (RMs) have shown significant
promise in delivering reward signals to align vision models with human
preferences. However, current RMs are generally restricted to providing direct
responses or engaging in shallow reasoning processes with limited depth, often
leading to inaccur... | 2025-05-06T08:46:41Z | project page: https://codegoat24.github.io/UnifiedReward/think | null | null | Unified Multimodal Chain-of-Thought Reward Model through Reinforcement Fine-Tuning | ['Yibin Wang', 'Zhimin Li', 'Yuhang Zang', 'Chunyu Wang', 'Qinglin Lu', 'Cheng Jin', 'Jiaqi Wang'] | 2,025 | arXiv.org | 11 | 40 | ['Computer Science'] |
2,505.03329 | FLUX-Text: A Simple and Advanced Diffusion Transformer Baseline for
Scene Text Editing | ['Rui Lan', 'Yancheng Bai', 'Xu Duan', 'Mingxing Li', 'Lei Sun', 'Xiangxiang Chu'] | ['cs.CV'] | The task of scene text editing is to modify or add texts on images while
maintaining the fidelity of newly generated text and visual coherence with the
background. Recent works based on latent diffusion models (LDM) show improved
text editing results, yet still face challenges and often generate inaccurate
or unrecogni... | 2025-05-06T08:56:28Z | 9 pages, 4 figures | null | null | FLUX-Text: A Simple and Advanced Diffusion Transformer Baseline for Scene Text Editing | ['Rui Lan', 'Yancheng Bai', 'Xu Duan', 'Mingxing Li', 'Lei Sun', 'Xiangxiang Chu'] | 2,025 | arXiv.org | 0 | 40 | ['Computer Science'] |
2,505.03335 | Absolute Zero: Reinforced Self-play Reasoning with Zero Data | ['Andrew Zhao', 'Yiran Wu', 'Yang Yue', 'Tong Wu', 'Quentin Xu', 'Yang Yue', 'Matthieu Lin', 'Shenzhi Wang', 'Qingyun Wu', 'Zilong Zheng', 'Gao Huang'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Reinforcement learning with verifiable rewards (RLVR) has shown promise in
enhancing the reasoning capabilities of large language models by learning
directly from outcome-based rewards. Recent RLVR works that operate under the
zero setting avoid supervision in labeling the reasoning process, but still
depend on manuall... | 2025-05-06T09:08:00Z | null | null | null | null | null | null | null | null | null | null |
2,505.03538 | RAIL: Region-Aware Instructive Learning for Semi-Supervised Tooth
Segmentation in CBCT | ['Chuyu Zhao', 'Hao Huang', 'Jiashuo Guo', 'Ziyu Shen', 'Zhongwei Zhou', 'Jie Liu', 'Zekuan Yu'] | ['cs.CV'] | Semi-supervised learning has become a compelling approach for 3D tooth
segmentation from CBCT scans, where labeled data is minimal. However, existing
methods still face two persistent challenges: limited corrective supervision in
structurally ambiguous or mislabeled regions during supervised training and
performance de... | 2025-05-06T13:50:57Z | null | null | null | null | null | null | null | null | null | null |
2,505.03673 | RoboOS: A Hierarchical Embodied Framework for Cross-Embodiment and
Multi-Agent Collaboration | ['Huajie Tan', 'Xiaoshuai Hao', 'Cheng Chi', 'Minglan Lin', 'Yaoxu Lyu', 'Mingyu Cao', 'Dong Liang', 'Zhuo Chen', 'Mengsi Lyu', 'Cheng Peng', 'Chenrui He', 'Yulong Ao', 'Yonghua Lin', 'Pengwei Wang', 'Zhongyuan Wang', 'Shanghang Zhang'] | ['cs.RO'] | The dawn of embodied intelligence has ushered in an unprecedented imperative
for resilient, cognition-enabled multi-agent collaboration across
next-generation ecosystems, revolutionizing paradigms in autonomous
manufacturing, adaptive service robotics, and cyber-physical production
architectures. However, current robot... | 2025-05-06T16:11:49Z | 22 pages, 10 figures | null | null | RoboOS: A Hierarchical Embodied Framework for Cross-Embodiment and Multi-Agent Collaboration | ['Huajie Tan', 'Xiaoshuai Hao', 'Minglan Lin', 'Pengwei Wang', 'Yaoxu Lyu', 'Mingyu Cao', 'Zhongyuan Wang', 'Shanghang Zhang'] | 2,025 | arXiv.org | 0 | 84 | ['Computer Science'] |
2,505.03688 | IndicSQuAD: A Comprehensive Multilingual Question Answering Dataset for
Indic Languages | ['Sharvi Endait', 'Ruturaj Ghatage', 'Aditya Kulkarni', 'Rajlaxmi Patil', 'Raviraj Joshi'] | ['cs.CL', 'cs.LG'] | The rapid progress in question-answering (QA) systems has predominantly
benefited high-resource languages, leaving Indic languages largely
underrepresented despite their vast native speaker base. In this paper, we
present IndicSQuAD, a comprehensive multi-lingual extractive QA dataset
covering nine major Indic language... | 2025-05-06T16:42:54Z | null | null | null | IndicSQuAD: A Comprehensive Multilingual Question Answering Dataset for Indic Languages | ['Sharvi Endait', 'Ruturaj Ghatage', 'Aditya Kulkarni', 'Rajlaxmi Patil', 'Raviraj Joshi'] | 2,025 | arXiv.org | 0 | 19 | ['Computer Science'] |
2,505.0373 | FlexiAct: Towards Flexible Action Control in Heterogeneous Scenarios | ['Shiyi Zhang', 'Junhao Zhuang', 'Zhaoyang Zhang', 'Ying Shan', 'Yansong Tang'] | ['cs.CV', 'cs.AI', 'cs.MM'] | Action customization involves generating videos where the subject performs
actions dictated by input control signals. Current methods use pose-guided or
global motion customization but are limited by strict constraints on spatial
structure, such as layout, skeleton, and viewpoint consistency, reducing
adaptability acro... | 2025-05-06T17:58:02Z | Accepted by Siggraph2025, Project Page:
https://shiyi-zh0408.github.io/projectpages/FlexiAct/ | null | null | FlexiAct: Towards Flexible Action Control in Heterogeneous Scenarios | ['Shiyi Zhang', 'Junhao Zhuang', 'Zhaoyang Zhang', 'Ying Shan', 'Yansong Tang'] | 2,025 | arXiv.org | 0 | 51 | ['Computer Science'] |
2,505.03733 | WebGen-Bench: Evaluating LLMs on Generating Interactive and Functional
Websites from Scratch | ['Zimu Lu', 'Yunqiao Yang', 'Houxing Ren', 'Haotian Hou', 'Han Xiao', 'Ke Wang', 'Weikang Shi', 'Aojun Zhou', 'Mingjie Zhan', 'Hongsheng Li'] | ['cs.CL'] | LLM-based agents have demonstrated great potential in generating and managing
code within complex codebases. In this paper, we introduce WebGen-Bench, a
novel benchmark designed to measure an LLM-based agent's ability to create
multi-file website codebases from scratch. It contains diverse instructions for
website gene... | 2025-05-06T17:59:15Z | null | null | null | null | null | null | null | null | null | null |
2,505.04321 | Generic Two-Mode Gaussian States as Quantum Sensors | ['Pritam Chattopadhyay', 'Saikat Sur', 'Jonas F. G. Santos'] | ['quant-ph'] | Gaussian quantum channels constitute a cornerstone of continuous-variable
quantum information science, underpinning a wide array of protocols in quantum
optics and quantum metrology. While the action of such channels on arbitrary
states is well-characterized under full channel knowledge, we address the
inverse problem,... | 2025-05-07T11:12:23Z | null | null | null | Generic Two-Mode Gaussian States as Quantum Sensors | ['Pritam Chattopadhyay', 'Saikat Sur', 'Jonas F. G. Santos'] | 2,025 | null | 1 | 61 | ['Physics'] |
2,505.04388 | The Aloe Family Recipe for Open and Specialized Healthcare LLMs | ['Dario Garcia-Gasulla', 'Jordi Bayarri-Planas', 'Ashwin Kumar Gururajan', 'Enrique Lopez-Cuena', 'Adrian Tormos', 'Daniel Hinjos', 'Pablo Bernabeu-Perez', 'Anna Arias-Duart', 'Pablo Agustin Martin-Torres', 'Marta Gonzalez-Mallo', 'Sergio Alvarez-Napagao', 'Eduard Ayguadé-Parra', 'Ulises Cortés'] | ['cs.CL', 'cs.AI'] | Purpose: With advancements in Large Language Models (LLMs) for healthcare,
the need arises for competitive open-source models to protect the public
interest. This work contributes to the field of open medical LLMs by optimizing
key stages of data preprocessing and training, while showing how to improve
model safety (th... | 2025-05-07T13:13:14Z | Follow-up work from arXiv:2405.01886 | null | null | null | null | null | null | null | null | null |
2,505.04512 | HunyuanCustom: A Multimodal-Driven Architecture for Customized Video
Generation | ['Teng Hu', 'Zhentao Yu', 'Zhengguang Zhou', 'Sen Liang', 'Yuan Zhou', 'Qin Lin', 'Qinglin Lu'] | ['cs.CV'] | Customized video generation aims to produce videos featuring specific
subjects under flexible user-defined conditions, yet existing methods often
struggle with identity consistency and limited input modalities. In this paper,
we propose HunyuanCustom, a multi-modal customized video generation framework
that emphasizes ... | 2025-05-07T15:33:18Z | null | null | null | HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation | ['Teng Hu', 'Zhentao Yu', 'Zhengguang Zhou', 'Sen Liang', 'Yuan Zhou', 'Qin Lin', 'Qinglin Lu'] | 2,025 | arXiv.org | 6 | 57 | ['Computer Science'] |
2,505.04601 | OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision
Encoders for Multimodal Learning | ['Xianhang Li', 'Yanqing Liu', 'Haoqin Tu', 'Hongru Zhu', 'Cihang Xie'] | ['cs.CV'] | OpenAI's CLIP, released in early 2021, has long been the go-to choice of
vision encoder for building multimodal foundation models. Although recent
alternatives such as SigLIP have begun to challenge this status quo, to our
knowledge none are fully open: their training data remains proprietary and/or
their training rec... | 2025-05-07T17:48:35Z | null | null | null | OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning | ['Xianhang Li', 'Yanqing Liu', 'Haoqin Tu', 'Hongru Zhu', 'Cihang Xie'] | 2,025 | arXiv.org | 2 | 53 | ['Computer Science'] |
2,505.0462 | On Path to Multimodal Generalist: General-Level and General-Bench | ['Hao Fei', 'Yuan Zhou', 'Juncheng Li', 'Xiangtai Li', 'Qingshan Xu', 'Bobo Li', 'Shengqiong Wu', 'Yaoting Wang', 'Junbao Zhou', 'Jiahao Meng', 'Qingyu Shi', 'Zhiyuan Zhou', 'Liangtao Shi', 'Minghe Gao', 'Daoan Zhang', 'Zhiqi Ge', 'Weiming Wu', 'Siliang Tang', 'Kaihang Pan', 'Yaobo Ye', 'Haobo Yuan', 'Tao Zhang', 'Tian... | ['cs.CV'] | The Multimodal Large Language Model (MLLM) is currently experiencing rapid
growth, driven by the advanced capabilities of LLMs. Unlike earlier
specialists, existing MLLMs are evolving towards a Multimodal Generalist
paradigm. Initially limited to understanding multiple modalities, these models
have advanced to not only... | 2025-05-07T17:59:32Z | ICML'25, 305 pages, 115 tables, 177 figures, project page:
https://generalist.top/ | null | null | null | null | null | null | null | null | null |
2,505.04622 | PrimitiveAnything: Human-Crafted 3D Primitive Assembly Generation with
Auto-Regressive Transformer | ['Jingwen Ye', 'Yuze He', 'Yanning Zhou', 'Yiqin Zhu', 'Kaiwen Xiao', 'Yong-Jin Liu', 'Wei Yang', 'Xiao Han'] | ['cs.GR', 'cs.CV'] | Shape primitive abstraction, which decomposes complex 3D shapes into simple
geometric elements, plays a crucial role in human visual cognition and has
broad applications in computer vision and graphics. While recent advances in 3D
content generation have shown remarkable progress, existing primitive
abstraction methods... | 2025-05-07T17:59:46Z | SIGGRAPH 2025. 14 pages, 15 figures | null | null | PrimitiveAnything: Human-Crafted 3D Primitive Assembly Generation with Auto-Regressive Transformer | ['Jingwen Ye', 'Yuze He', 'Yanning Zhou', 'Yiqin Zhu', 'Kaiwen Xiao', 'Yong-Jin Liu', 'Wei Yang', 'Xiao Han'] | 2,025 | arXiv.org | 1 | 78 | ['Computer Science'] |
2,505.04623 | EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via
Reinforcement Learning | ['Zhenghao Xing', 'Xiaowei Hu', 'Chi-Wing Fu', 'Wenhai Wang', 'Jifeng Dai', 'Pheng-Ann Heng'] | ['cs.CV', 'eess.AS'] | Multimodal large language models (MLLMs) have advanced perception across
text, vision, and audio, yet they often struggle with structured cross-modal
reasoning, particularly when integrating audio and visual signals. We introduce
EchoInk-R1, a reinforcement learning framework that enhances such reasoning in
MLLMs. Buil... | 2025-05-07T17:59:49Z | null | null | null | null | null | null | null | null | null | null |
2,505.04655 | Integration of Large Language Models and Traditional Deep Learning for
Social Determinants of Health Prediction | ['Paul Landes', 'Jimeng Sun', 'Adam Cross'] | ['cs.CL'] | Social Determinants of Health (SDoH) are economic, social and personal
circumstances that affect or influence an individual's health status. SDoHs
have been shown to be correlated with wellness outcomes, and therefore, are useful to
physicians in diagnosing diseases and in decision-making. In this work, we
automatically extra... | 2025-05-06T23:11:59Z | null | null | null | null | null | null | null | null | null | null |
2,505.05022 | SOAP: Style-Omniscient Animatable Portraits | ['Tingting Liao', 'Yujian Zheng', 'Adilbek Karmanov', 'Liwen Hu', 'Leyang Jin', 'Yuliang Xiu', 'Hao Li'] | ['cs.CV'] | Creating animatable 3D avatars from a single image remains challenging due to
style limitations (realistic, cartoon, anime) and difficulties in handling
accessories or hairstyles. While 3D diffusion models advance single-view
reconstruction for general objects, outputs often lack animation controls or
suffer from artif... | 2025-05-08T07:56:16Z | null | Siggraph 2025, page: https://tingtingliao.github.io/soap/ | 10.1145/3721238.3730691 | null | null | null | null | null | null | null |
2,505.05071 | FG-CLIP: Fine-Grained Visual and Textual Alignment | ['Chunyu Xie', 'Bin Wang', 'Fanjing Kong', 'Jincheng Li', 'Dawei Liang', 'Gengshen Zhang', 'Dawei Leng', 'Yuhui Yin'] | ['cs.CV', 'cs.AI'] | Contrastive Language-Image Pre-training (CLIP) excels in multimodal tasks
such as image-text retrieval and zero-shot classification but struggles with
fine-grained understanding due to its focus on coarse-grained short captions.
To address this, we propose Fine-Grained CLIP (FG-CLIP), which enhances
fine-grained unders... | 2025-05-08T09:06:53Z | Accepted at ICML 2025 | null | null | null | null | null | null | null | null | null |
2,505.05315 | Scalable Chain of Thoughts via Elastic Reasoning | ['Yuhui Xu', 'Hanze Dong', 'Lei Wang', 'Doyen Sahoo', 'Junnan Li', 'Caiming Xiong'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Large reasoning models (LRMs) have achieved remarkable progress on complex
tasks by generating extended chains of thought (CoT). However, their
uncontrolled output lengths pose significant challenges for real-world
deployment, where inference-time budgets on tokens, latency, or compute are
strictly constrained. We prop... | 2025-05-08T15:01:06Z | null | null | null | Scalable Chain of Thoughts via Elastic Reasoning | ['Yuhui Xu', 'Hanze Dong', 'Lei Wang', 'Doyen Sahoo', 'Junnan Li', 'Caiming Xiong'] | 2,025 | arXiv.org | 8 | 39 | ['Computer Science'] |
2,505.05422 | TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and
Generation | ['Haokun Lin', 'Teng Wang', 'Yixiao Ge', 'Yuying Ge', 'Zhichao Lu', 'Ying Wei', 'Qingfu Zhang', 'Zhenan Sun', 'Ying Shan'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Pioneering token-based works such as Chameleon and Emu3 have established a
foundation for multimodal unification but face challenges of high training
computational overhead and limited comprehension performance due to a lack of
high-level semantics. In this paper, we introduce TokLIP, a visual tokenizer
that enhances c... | 2025-05-08T17:12:19Z | Technical Report | null | null | TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation | ['Haokun Lin', 'Teng Wang', 'Yixiao Ge', 'Yuying Ge', 'Zhichao Lu', 'Ying Wei', 'Qingfu Zhang', 'Zhenan Sun', 'Ying Shan'] | 2,025 | arXiv.org | 5 | 71 | ['Computer Science'] |
2,505.05427 | Ultra-FineWeb: Efficient Data Filtering and Verification for
High-Quality LLM Training Data | ['Yudong Wang', 'Zixuan Fu', 'Jie Cai', 'Peijun Tang', 'Hongya Lyu', 'Yewei Fang', 'Zhi Zheng', 'Jie Zhou', 'Guoyang Zeng', 'Chaojun Xiao', 'Xu Han', 'Zhiyuan Liu'] | ['cs.CL'] | Data quality has become a key factor in enhancing model performance with the
rapid development of large language models (LLMs). Model-driven data filtering
has increasingly become a primary approach for acquiring high-quality data.
However, it still faces two main challenges: (1) the lack of an efficient data
verificat... | 2025-05-08T17:15:20Z | The datasets are available on
https://huggingface.co/datasets/openbmb/UltraFineWeb | null | null | Ultra-FineWeb: Efficient Data Filtering and Verification for High-Quality LLM Training Data | ['Yudong Wang', 'Zixuan Fu', 'Jie Cai', 'Peijun Tang', 'Hongya Lyu', 'Yewei Fang', 'Zhi Zheng', 'Jie Zhou', 'Guoyang Zeng', 'Chaojun Xiao', 'Xu Han', 'Zhiyuan Liu'] | 2,025 | arXiv.org | 1 | 61 | ['Computer Science'] |
2,505.05446 | Adaptive Markup Language Generation for Contextually-Grounded Visual
Document Understanding | ['Han Xiao', 'Yina Xie', 'Guanxin Tan', 'Yinghao Chen', 'Rui Hu', 'Ke Wang', 'Aojun Zhou', 'Hao Li', 'Hao Shao', 'Xudong Lu', 'Peng Gao', 'Yafei Wen', 'Xiaoxin Chen', 'Shuai Ren', 'Hongsheng Li'] | ['cs.CV', 'cs.CL'] | Visual Document Understanding has become essential with the increase of
text-rich visual content. This field poses significant challenges due to the
need for effective integration of visual perception and textual comprehension,
particularly across diverse document types with complex layouts. Moreover,
existing fine-tun... | 2025-05-08T17:37:36Z | CVPR2025 | null | null | null | null | null | null | null | null | null |
2,505.05469 | Generating Physically Stable and Buildable Brick Structures from Text | ['Ava Pun', 'Kangle Deng', 'Ruixuan Liu', 'Deva Ramanan', 'Changliu Liu', 'Jun-Yan Zhu'] | ['cs.CV'] | We introduce BrickGPT, the first approach for generating physically stable
interconnecting brick assembly models from text prompts. To achieve this, we
construct a large-scale, physically stable dataset of brick structures, along
with their associated captions, and train an autoregressive large language
model to predic... | 2025-05-08T17:58:18Z | Project page: https://avalovelace1.github.io/BrickGPT/ | null | null | null | null | null | null | null | null | null |
2,505.0547 | Flow-GRPO: Training Flow Matching Models via Online RL | ['Jie Liu', 'Gongye Liu', 'Jiajun Liang', 'Yangguang Li', 'Jiaheng Liu', 'Xintao Wang', 'Pengfei Wan', 'Di Zhang', 'Wanli Ouyang'] | ['cs.CV', 'cs.AI'] | We propose Flow-GRPO, the first method integrating online reinforcement
learning (RL) into flow matching models. Our approach uses two key strategies:
(1) an ODE-to-SDE conversion that transforms a deterministic Ordinary
Differential Equation (ODE) into an equivalent Stochastic Differential Equation
(SDE) that matches ... | 2025-05-08T17:58:45Z | Code: https://github.com/yifan123/flow_grpo | null | null | null | null | null | null | null | null | null |
2,505.05528 | X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on
CLIP | ['Hanxun Huang', 'Sarah Erfani', 'Yige Li', 'Xingjun Ma', 'James Bailey'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG'] | As Contrastive Language-Image Pre-training (CLIP) models are increasingly
adopted for diverse downstream tasks and integrated into large vision-language
models (VLMs), their susceptibility to adversarial perturbations has emerged as
a critical concern. In this work, we introduce \textbf{X-Transfer}, a novel
attack meth... | 2025-05-08T11:59:13Z | ICML 2025 | null | null | null | null | null | null | null | null | null |
2,505.05741 | Dome-DETR: DETR with Density-Oriented Feature-Query Manipulation for
Efficient Tiny Object Detection | ['Zhangchi Hu', 'Peixi Wu', 'Jie Chen', 'Huyue Zhu', 'Yijun Wang', 'Yansong Peng', 'Hebei Li', 'Xiaoyan Sun'] | ['cs.CV'] | Tiny object detection plays a vital role in drone surveillance, remote
sensing, and autonomous systems, enabling the identification of small targets
across vast landscapes. However, existing methods suffer from inefficient
feature leverage and high computational costs due to redundant feature
processing and rigid query... | 2025-05-09T02:44:06Z | null | null | null | null | null | null | null | null | null | null |
2,505.05895 | Leveraging Vision-Language Models for Visual Grounding and Analysis of
Automotive UI | ['Benjamin Raphael Ernhofer', 'Daniil Prokhorov', 'Jannica Langner', 'Dominik Bollmann'] | ['cs.CV', 'cs.AI'] | Modern automotive infotainment systems require intelligent and adaptive
solutions to handle frequent User Interface (UI) updates and diverse design
variations. We introduce a vision-language framework for understanding and
interacting with automotive infotainment systems, enabling seamless adaptation
across different U... | 2025-05-09T09:01:52Z | null | null | null | null | null | null | null | null | null | null |
2,505.06111 | UniVLA: Learning to Act Anywhere with Task-centric Latent Actions | ['Qingwen Bu', 'Yanting Yang', 'Jisong Cai', 'Shenyuan Gao', 'Guanghui Ren', 'Maoqing Yao', 'Ping Luo', 'Hongyang Li'] | ['cs.RO', 'cs.AI', 'cs.LG'] | A generalist robot should perform effectively across various environments.
However, most existing approaches heavily rely on scaling action-annotated data
to enhance their capabilities. Consequently, they are often limited to a single
physical specification and struggle to learn transferable knowledge across
different em... | 2025-05-09T15:11:13Z | Accepted to RSS 2025. Code is available at
https://github.com/OpenDriveLab/UniVLA | null | null | null | null | null | null | null | null | null |
2,505.06152 | MM-Skin: Enhancing Dermatology Vision-Language Model with an Image-Text
Dataset Derived from Textbooks | ['Wenqi Zeng', 'Yuqi Sun', 'Chenxi Ma', 'Weimin Tan', 'Bo Yan'] | ['cs.CV', 'cs.AI'] | Medical vision-language models (VLMs) have shown promise as clinical
assistants across various medical fields. However, a specialized dermatology VLM
capable of delivering professional and detailed diagnostic analysis remains
underdeveloped, primarily due to less specialized text descriptions in current
dermatology multi... | 2025-05-09T16:03:47Z | null | null | null | null | null | null | null | null | null | null |
2,505.06313 | AI Approaches to Qualitative and Quantitative News Analytics on NATO
Unity | ['Bohdan M. Pavlyshenko'] | ['cs.IR', 'cs.AI', 'cs.CL', 'cs.SI'] | The paper considers the use of GPT models with retrieval-augmented generation
(RAG) for qualitative and quantitative analytics on NATO sentiments, NATO unity
and NATO Article 5 trust opinion scores in different web sources: news sites
found via Google Search API, Youtube videos with comments, and Reddit
discussions. A ... | 2025-05-08T18:42:01Z | null | null | null | AI Approaches to Qualitative and Quantitative News Analytics on NATO Unity | ['Bohdan M. Pavlyshenko'] | 2,025 | arXiv.org | 0 | 17 | ['Computer Science'] |
2,505.06496 | xGen-small Technical Report | ['Erik Nijkamp', 'Bo Pang', 'Egor Pakhomov', 'Akash Gokul', 'Jin Qu', 'Silvio Savarese', 'Yingbo Zhou', 'Caiming Xiong'] | ['cs.CL', 'cs.AI'] | We introduce xGen-small, a family of 4B and 9B Transformer decoder models
optimized for long-context applications. Our vertically integrated pipeline
unites domain-balanced, frequency-aware data curation; multi-stage pre-training
with quality annealing and length extension to 128k tokens; and targeted
post-training via... | 2025-05-10T02:54:16Z | null | null | null | null | null | null | null | null | null | null |
2,505.06668 | StableMotion: Repurposing Diffusion-Based Image Priors for Motion
Estimation | ['Ziyi Wang', 'Haipeng Li', 'Lin Sui', 'Tianhao Zhou', 'Hai Jiang', 'Lang Nie', 'Shuaicheng Liu'] | ['cs.CV', 'cs.LG', 'eess.IV'] | We present StableMotion, a novel framework that leverages knowledge (geometry and
content priors) from pretrained large-scale image diffusion models to perform
motion estimation, solving single-image-based image rectification tasks such as
Stitched Image Rectangling (SIR) and Rolling Shutter Correction (RSC).
Specifically, ... | 2025-05-10T14:58:44Z | null | null | null | StableMotion: Repurposing Diffusion-Based Image Priors for Motion Estimation | ['Ziyi Wang', 'Haipeng Li', 'Lin Sui', 'Tianhao Zhou', 'Hai Jiang', 'Lang Nie', 'Shuaicheng Liu'] | 2,025 | arXiv.org | 1 | 55 | ['Computer Science', 'Engineering'] |
2,505.07004 | GuidedQuant: Large Language Model Quantization via Exploiting End Loss
Guidance | ['Jinuk Kim', 'Marwa El Halabi', 'Wonpyo Park', 'Clemens JS Schaefer', 'Deokjae Lee', 'Yeonhong Park', 'Jae W. Lee', 'Hyun Oh Song'] | ['cs.LG'] | Post-training quantization is a key technique for reducing the memory and
inference latency of large language models by quantizing weights and
activations without requiring retraining. However, existing methods either (1)
fail to account for the varying importance of hidden features to the end loss
or, when incorporati... | 2025-05-11T14:55:09Z | ICML 2025 | null | null | null | null | null | null | null | null | null |
2,505.07019 | A Vision-Language Foundation Model for Leaf Disease Identification | ['Khang Nguyen Quoc', 'Lan Le Thi Thu', 'Luyl-Da Quach'] | ['cs.CV'] | Leaf disease identification plays a pivotal role in smart agriculture.
However, many existing studies still struggle to integrate image and textual
modalities to compensate for each other's limitations. Furthermore, many of
these approaches rely on pretraining with constrained datasets such as
ImageNet, which lack doma... | 2025-05-11T15:30:06Z | null | null | null | null | null | null | null | null | null | null |
2,505.07086 | Multi-Objective-Guided Discrete Flow Matching for Controllable
Biological Sequence Design | ['Tong Chen', 'Yinuo Zhang', 'Sophia Tang', 'Pranam Chatterjee'] | ['cs.LG', 'q-bio.BM'] | Designing biological sequences that satisfy multiple, often conflicting,
functional and biophysical criteria remains a central challenge in biomolecule
engineering. While discrete flow matching models have recently shown promise
for efficient sampling in high-dimensional sequence spaces, existing approaches
address onl... | 2025-05-11T18:17:44Z | null | null | null | null | null | null | null | null | null | null |
2,505.07233 | DynamicRAG: Leveraging Outputs of Large Language Model as Feedback for
Dynamic Reranking in Retrieval-Augmented Generation | ['Jiashuo Sun', 'Xianrui Zhong', 'Sizhe Zhou', 'Jiawei Han'] | ['cs.CL', 'cs.AI'] | Retrieval-augmented generation (RAG) systems combine large language models
(LLMs) with external knowledge retrieval, making them highly effective for
knowledge-intensive tasks. A crucial but often under-explored component of
these systems is the reranker. Since irrelevant documents in RAG systems can
mislead the genera... | 2025-05-12T05:19:01Z | 24 pages, 7 figures, 15 tables | null | null | DynamicRAG: Leveraging Outputs of Large Language Model as Feedback for Dynamic Reranking in Retrieval-Augmented Generation | ['Jiashuo Sun', 'Xianrui Zhong', 'Sizhe Zhou', 'Jiawei Han'] | 2,025 | arXiv.org | 0 | 57 | ['Computer Science'] |
2,505.07263 | Skywork-VL Reward: An Effective Reward Model for Multimodal
Understanding and Reasoning | ['Xiaokun Wang', 'Peiyu Wang', 'Jiangbo Pei', 'Wei Shen', 'Yi Peng', 'Yunzhuo Hao', 'Weijie Qiu', 'Ai Jian', 'Tianyidan Xie', 'Xuchen Song', 'Yang Liu', 'Yahui Zhou'] | ['cs.CV'] | We propose Skywork-VL Reward, a multimodal reward model that provides reward
signals for both multimodal understanding and reasoning tasks. Our technical
approach comprises two key components: First, we construct a large-scale
multimodal preference dataset that covers a wide range of tasks and scenarios,
with responses... | 2025-05-12T06:23:08Z | null | null | null | Skywork-VL Reward: An Effective Reward Model for Multimodal Understanding and Reasoning | ['Xiaokun Wang', 'Peiyu Wang', 'Jiangbo Pei', 'Wei Shen', 'Yi Peng', 'Yunzhuo Hao', 'Weijie Qiu', 'Ai Jian', 'Tianyidan Xie', 'Xuchen Song', 'Yang Liu', 'Yahui Zhou'] | 2,025 | arXiv.org | 2 | 41 | ['Computer Science'] |
2,505.07286 | Piloting Structure-Based Drug Design via Modality-Specific Optimal
Schedule | ['Keyue Qiu', 'Yuxuan Song', 'Zhehuan Fan', 'Peidong Liu', 'Zhe Zhang', 'Mingyue Zheng', 'Hao Zhou', 'Wei-Ying Ma'] | ['q-bio.BM', 'cs.AI', 'cs.LG'] | Structure-Based Drug Design (SBDD) is crucial for identifying bioactive
molecules. Recent deep generative models are faced with challenges in geometric
structure modeling. A major bottleneck lies in the twisted probability path of
multi-modalities -- continuous 3D positions and discrete 2D topologies -- which
jointly d... | 2025-05-12T07:18:09Z | Accepted to ICML 2025 | null | null | null | null | null | null | null | null | null |
2,505.07291 | INTELLECT-2: A Reasoning Model Trained Through Globally Decentralized
Reinforcement Learning | ['Prime Intellect Team', 'Sami Jaghouar', 'Justus Mattern', 'Jack Min Ong', 'Jannik Straube', 'Manveer Basra', 'Aaron Pazdera', 'Kushal Thaman', 'Matthew Di Ferrante', 'Felix Gabriel', 'Fares Obeid', 'Kemal Erdem', 'Michael Keiblinger', 'Johannes Hagemann'] | ['cs.LG', 'cs.DC'] | We introduce INTELLECT-2, the first globally distributed reinforcement
learning (RL) training run of a 32 billion parameter language model. Unlike
traditional centralized training efforts, INTELLECT-2 trains a reasoning model
using fully asynchronous RL across a dynamic, heterogeneous swarm of
permissionless compute co... | 2025-05-12T07:24:33Z | 26 pages, 12 figures | null | null | null | null | null | null | null | null | null |
2,505.07447 | Unified Continuous Generative Models | ['Peng Sun', 'Yi Jiang', 'Tao Lin'] | ['cs.LG', 'cs.AI', 'cs.CV'] | Recent advances in continuous generative models, including multi-step
approaches like diffusion and flow-matching (typically requiring 8-1000
sampling steps) and few-step methods such as consistency models (typically 1-8
steps), have demonstrated impressive generative performance. However, existing
work often treats th... | 2025-05-12T11:15:39Z | https://github.com/LINs-lab/UCGM | null | null | null | null | null | null | null | null | null |
2,505.07538 | Selftok: Discrete Visual Tokens of Autoregression, by Diffusion, and for
Reasoning | ['Bohan Wang', 'Zhongqi Yue', 'Fengda Zhang', 'Shuo Chen', "Li'an Bi", 'Junzhe Zhang', 'Xue Song', 'Kennard Yanting Chan', 'Jiachun Pan', 'Weijia Wu', 'Mingze Zhou', 'Wang Lin', 'Kaihang Pan', 'Saining Zhang', 'Liyu Jia', 'Wentao Hu', 'Wei Zhao', 'Hanwang Zhang'] | ['cs.CV'] | We completely discard the conventional spatial prior in image representation
and introduce a novel discrete visual tokenizer: Self-consistency Tokenizer
(Selftok). At its design core, we compose an autoregressive (AR) prior --
mirroring the causal structure of language -- into visual tokens by using the
reverse diffusi... | 2025-05-12T13:19:08Z | null | null | null | null | null | null | null | null | null | null |
2505.07608 | MiMo: Unlocking the Reasoning Potential of Language Model -- From
Pretraining to Posttraining | ['LLM-Core Xiaomi', ':', 'Bingquan Xia', 'Bowen Shen', 'Cici', 'Dawei Zhu', 'Di Zhang', 'Gang Wang', 'Hailin Zhang', 'Huaqiu Liu', 'Jiebao Xiao', 'Jinhao Dong', 'Liang Zhao', 'Peidian Li', 'Peng Wang', 'Shihua Yu', 'Shimao Chen', 'Weikun Wang', 'Wenhan Ma', 'Xiangwei Deng', 'Yi Huang', 'Yifan Song', 'Zihan Jiang', 'Bow... | ['cs.CL', 'cs.AI', 'cs.LG'] | We present MiMo-7B, a large language model born for reasoning tasks, with
optimization across both pre-training and post-training stages. During
pre-training, we enhance the data preprocessing pipeline and employ a
three-stage data mixing strategy to strengthen the base model's reasoning
potential. MiMo-7B-Base is pre-... | 2025-05-12T14:30:11Z | null | null | null | null | null | null | null | null | null | null |
2505.07747 | Step1X-3D: Towards High-Fidelity and Controllable Generation of Textured
3D Assets | ['Weiyu Li', 'Xuanyang Zhang', 'Zheng Sun', 'Di Qi', 'Hao Li', 'Wei Cheng', 'Weiwei Cai', 'Shihao Wu', 'Jiarui Liu', 'Zihao Wang', 'Xiao Chen', 'Feipeng Tian', 'Jianxiong Pan', 'Zeming Li', 'Gang Yu', 'Xiangyu Zhang', 'Daxin Jiang', 'Ping Tan'] | ['cs.CV'] | While generative artificial intelligence has advanced significantly across
text, image, audio, and video domains, 3D generation remains comparatively
underdeveloped due to fundamental challenges such as data scarcity, algorithmic
limitations, and ecosystem fragmentation. To this end, we present Step1X-3D, an
open frame... | 2025-05-12T16:56:30Z | Technical report | null | null | Step1X-3D: Towards High-Fidelity and Controllable Generation of Textured 3D Assets | ['Weiyu Li', 'Xuanyang Zhang', 'Zheng Sun', 'Di Qi', 'Hao Li', 'Wei Cheng', 'Weiwei Cai', 'Shihao Wu', 'Jiarui Liu', 'Zihao Wang', 'Xiao Chen', 'Feipeng Tian', 'Jianxiong Pan', 'Zeming Li', 'Gang Yu', 'Xiangyu Zhang', 'Daxin Jiang', 'Ping Tan'] | 2025 | arXiv.org | 3 | 92 | ['Computer Science']
2505.07787 | Learning from Peers in Reasoning Models | ['Tongxu Luo', 'Wenyu Du', 'Jiaxi Bi', 'Stephen Chung', 'Zhengyang Tang', 'Hao Yang', 'Min Zhang', 'Benyou Wang'] | ['cs.CL'] | Large Reasoning Models (LRMs) have the ability to self-correct even when they
make mistakes in their reasoning paths. However, our study reveals that when
the reasoning process starts with a short but poor beginning, it becomes
difficult for the model to recover. We refer to this phenomenon as the "Prefix
Dominance Tra... | 2025-05-12T17:39:56Z | 29 pages, 32 figures | null | null | null | null | null | null | null | null | null |
2505.07809 | A Comparative Analysis of Static Word Embeddings for Hungarian | ['Máté Gedeon'] | ['cs.CL', 'cs.AI'] | This paper presents a comprehensive analysis of various static word
embeddings for Hungarian, including traditional models such as Word2Vec,
FastText, as well as static embeddings derived from BERT-based models using
different extraction methods. We evaluate these embeddings on both intrinsic
and extrinsic tasks to pro... | 2025-05-12T17:57:11Z | null | null | null | null | null | null | null | null | null | null |
2505.07849 | SweRank: Software Issue Localization with Code Ranking | ['Revanth Gangi Reddy', 'Tarun Suresh', 'JaeHyeok Doo', 'Ye Liu', 'Xuan Phi Nguyen', 'Yingbo Zhou', 'Semih Yavuz', 'Caiming Xiong', 'Heng Ji', 'Shafiq Joty'] | ['cs.SE', 'cs.AI', 'cs.IR'] | Software issue localization, the task of identifying the precise code
locations (files, classes, or functions) relevant to a natural language issue
description (e.g., bug report, feature request), is a critical yet
time-consuming aspect of software development. While recent LLM-based agentic
approaches demonstrate prom... | 2025-05-07T19:44:09Z | null | null | null | SweRank: Software Issue Localization with Code Ranking | ['R. Reddy', 'Tarun Suresh', 'Jae Doo', 'Ye Liu', 'Xuan-Phi Nguyen', 'Yingbo Zhou', 'Semih Yavuz', 'Caiming Xiong', 'Heng Ji', 'Shafiq Joty'] | 2025 | arXiv.org | 0 | 55 | ['Computer Science']
2505.07859 | Product of Experts with LLMs: Boosting Performance on ARC Is a Matter of
Perspective | ['Daniel Franzen', 'Jan Disselhoff', 'David Hartmann'] | ['cs.CL', 'cs.AI', 'cs.LG'] | The Abstraction and Reasoning Corpus (ARC-AGI) poses a significant challenge
for large language models (LLMs), exposing limitations in their abstract
reasoning abilities. In this work, we leverage task-specific data augmentations
throughout the training, generation, and scoring phases, and employ a
depth-first search a... | 2025-05-08T11:17:10Z | ICML 2025 camera-ready; 15 pages, 6 figures, 5 tables | null | null | null | null | null | null | null | null | null |
2505.08175 | Fast Text-to-Audio Generation with Adversarial Post-Training | ['Zachary Novack', 'Zach Evans', 'Zack Zukowski', 'Josiah Taylor', 'CJ Carr', 'Julian Parker', 'Adnan Al-Sinan', 'Gian Marco Iodice', 'Julian McAuley', 'Taylor Berg-Kirkpatrick', 'Jordi Pons'] | ['cs.SD', 'cs.AI', 'cs.LG', 'cs.MM', 'eess.AS'] | Text-to-audio systems, while increasingly performant, are slow at inference
time, thus making their latency impractical for many creative applications. We
present Adversarial Relativistic-Contrastive (ARC) post-training, the first
adversarial acceleration algorithm for diffusion/flow models not based on
distillation. W... | 2025-05-13T02:25:47Z | null | null | null | Fast Text-to-Audio Generation with Adversarial Post-Training | ['Zachary Novack', 'Zach Evans', 'Zack Zukowski', 'Josiah Taylor', 'CJ Carr', 'Julian Parker', 'Adnan Al-Sinan', 'Gian Marco Iodice', 'Julian McAuley', 'Taylor Berg-Kirkpatrick', 'Jordi Pons'] | 2025 | arXiv.org | 0 | 50 | ['Computer Science', 'Engineering']
2505.08311 | AM-Thinking-v1: Advancing the Frontier of Reasoning at 32B Scale | ['Yunjie Ji', 'Xiaoyu Tian', 'Sitong Zhao', 'Haotian Wang', 'Shuaiting Chen', 'Yiping Peng', 'Han Zhao', 'Xiangang Li'] | ['cs.CL'] | We present AM-Thinking-v1, a 32B dense language model that advances the
frontier of reasoning, embodying the collaborative spirit of open-source
innovation. Outperforming DeepSeek-R1 and rivaling leading Mixture-of-Experts
(MoE) models like Qwen3-235B-A22B and Seed1.5-Thinking, AM-Thinking-v1 achieves
impressive scores... | 2025-05-13T07:41:15Z | null | null | null | AM-Thinking-v1: Advancing the Frontier of Reasoning at 32B Scale | ['Yunjie Ji', 'Xiaoyu Tian', 'Sitong Zhao', 'Haotian Wang', 'Shuaiting Chen', 'Yiping Peng', 'Han Zhao', 'Xiangang Li'] | 2025 | arXiv.org | 3 | 35 | ['Computer Science']
2505.08435 | Hakim: Farsi Text Embedding Model | ['Mehran Sarmadi', 'Morteza Alikhani', 'Erfan Zinvandi', 'Zahra Pourbahman'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Recent advancements in text embedding have significantly improved natural
language understanding across many languages, yet Persian remains notably
underrepresented in large-scale embedding research. In this paper, we present
Hakim, a novel state-of-the-art Persian text embedding model that achieves an
8.5% performance ... | 2025-05-13T10:57:32Z | null | null | null | Hakim: Farsi Text Embedding Model | ['Mehran Sarmadi', 'Morteza Alikhani', 'Erfan Zinvandi', 'Zahra Pourbahman'] | 2025 | arXiv.org | 0 | 22 | ['Computer Science']
2505.08651 | Scaling Context, Not Parameters: Training a Compact 7B Language Model
for Efficient Long-Context Processing | ['Chen Wu', 'Yin Song'] | ['cs.CL', 'cs.LG'] | We present MegaBeam-Mistral-7B, a language model that supports 512K-token
context length. Our work addresses practical limitations in long-context
training, supporting real-world tasks such as compliance monitoring and
verification. Evaluated on three long-context benchmarks, our 7B-parameter
model demonstrates superio... | 2025-05-13T15:13:15Z | 8 pages, 6 figures, ACL 2025 (Industry Track) | null | null | Scaling Context, Not Parameters: Training a Compact 7B Language Model for Efficient Long-Context Processing | ['Chen Wu', 'Yin Song'] | 2025 | arXiv.org | 1 | 19 | ['Computer Science']
2505.08699 | Granite-speech: open-source speech-aware LLMs with strong English ASR
capabilities | ['George Saon', 'Avihu Dekel', 'Alexander Brooks', 'Tohru Nagano', 'Abraham Daniels', 'Aharon Satt', 'Ashish Mittal', 'Brian Kingsbury', 'David Haws', 'Edmilson Morais', 'Gakuto Kurata', 'Hagai Aronowitz', 'Ibrahim Ibrahim', 'Jeff Kuo', 'Kate Soule', 'Luis Lastras', 'Masayuki Suzuki', 'Ron Hoory', 'Samuel Thomas', 'Sas... | ['eess.AS'] | Granite-speech LLMs are compact and efficient speech language models
specifically designed for English ASR and automatic speech translation (AST).
The models were trained by modality aligning the 2B and 8B parameter variants
of granite-3.3-instruct to speech on publicly available open-source corpora
containing audio in... | 2025-05-13T15:58:57Z | 7 pages, 9 figures | null | null | Granite-speech: open-source speech-aware LLMs with strong English ASR capabilities | ['G. Saon', 'Avihu Dekel', 'Alexander Brooks', 'Tohru Nagano', 'Abraham Daniels', 'Aharon Satt', 'Ashish R. Mittal', 'Brian Kingsbury', 'David Haws', 'E. Morais', 'Gakuto Kurata', 'Hagai Aronowitz', 'Ibrahim Ibrahim', 'Jeff Kuo', 'Kate Soule', 'Luis A. Lastras', 'Masayuki Suzuki', 'R. Hoory', 'Samuel Thomas', 'Sashi No... | 2025 | null | 0 | 36 | ['Engineering']
2505.08742 | Applying the ACE2 Emulator to SST Green's Functions for the E3SMv3
Global Atmosphere Model | ['Elynn Wu', 'Finn Rebassoo', 'Pappu Paul', 'Cristian Proistosescu', 'Jacqueline Nugent', 'Daniel McCoy', 'Peter Caldwell', 'Christopher S. Bretherton'] | ['physics.ao-ph'] | Green's functions are a useful technique for interpreting atmospheric state
responses to changes in the spatial pattern of sea surface temperature (SST).
Here we train version 2 of the Ai2 Climate Emulator (ACE2) on reference
historical SST simulations of the US Department of Energy's EAMv3 global
atmosphere model. We ... | 2025-05-13T16:55:15Z | null | null | null | null | null | null | null | null | null | null |
2505.08762 | The Open Molecules 2025 (OMol25) Dataset, Evaluations, and Models | ['Daniel S. Levine', 'Muhammed Shuaibi', 'Evan Walter Clark Spotte-Smith', 'Michael G. Taylor', 'Muhammad R. Hasyim', 'Kyle Michel', 'Ilyes Batatia', 'Gábor Csányi', 'Misko Dzamba', 'Peter Eastman', 'Nathan C. Frey', 'Xiang Fu', 'Vahe Gharakhanyan', 'Aditi S. Krishnapriyan', 'Joshua A. Rackers', 'Sanjeev Raja', 'Ammar ... | ['physics.chem-ph'] | Machine learning (ML) models hold the promise of transforming atomic
simulations by delivering quantum chemical accuracy at a fraction of the
computational cost. Realization of this potential would enable high-throughput,
high-accuracy molecular screening campaigns to explore vast regions of chemical
space and facilita... | 2025-05-13T17:29:49Z | 60 pages, 8 figures | null | null | null | null | null | null | null | null | null |
2505.08783 | CodePDE: An Inference Framework for LLM-driven PDE Solver Generation | ['Shanda Li', 'Tanya Marwah', 'Junhong Shen', 'Weiwei Sun', 'Andrej Risteski', 'Yiming Yang', 'Ameet Talwalkar'] | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.NA', 'math.NA'] | Partial differential equations (PDEs) are fundamental to modeling physical
systems, yet solving them remains a complex challenge. Traditional numerical
solvers rely on expert knowledge to implement and are computationally
expensive, while neural-network-based solvers require large training datasets
and often lack inter... | 2025-05-13T17:58:08Z | null | null | null | CodePDE: An Inference Framework for LLM-driven PDE Solver Generation | ['Shanda Li', 'Tanya Marwah', 'Junhong Shen', 'Weiwei Sun', 'Andrej Risteski', 'Yiming Yang', 'Ameet Talwalkar'] | 2025 | arXiv.org | 2 | 62 | ['Computer Science', 'Mathematics']
2505.08787 | UniSkill: Imitating Human Videos via Cross-Embodiment Skill
Representations | ['Hanjung Kim', 'Jaehyun Kang', 'Hyolim Kang', 'Meedeum Cho', 'Seon Joo Kim', 'Youngwoon Lee'] | ['cs.RO', 'cs.CV'] | Mimicry is a fundamental learning mechanism in humans, enabling individuals
to learn new tasks by observing and imitating experts. However, applying this
ability to robots presents significant challenges due to the inherent
differences between human and robot embodiments in both their visual appearance
and physical cap... | 2025-05-13T17:59:22Z | Project Page: https://kimhanjung.github.io/UniSkill/ | null | null | null | null | null | null | null | null | null |
2505.08910 | Behind Maya: Building a Multilingual Vision Language Model | ['Nahid Alam', 'Karthik Reddy Kanjula', 'Surya Guthikonda', 'Timothy Chung', 'Bala Krishna S Vegesna', 'Abhipsha Das', 'Anthony Susevski', 'Ryan Sze-Yin Chan', 'S M Iftekhar Uddin', 'Shayekh Bin Islam', 'Roshan Santhosh', 'Snegha A', 'Drishti Sharma', 'Chen Liu', 'Isha Chaturvedi', 'Genta Indra Winata', 'Ashvanth. S', ... | ['cs.CV', 'cs.CL'] | In recent times, we have seen a rapid development of large Vision-Language
Models (VLMs). They have shown impressive results on academic benchmarks,
primarily in widely spoken languages but lack performance on low-resource
languages and varied cultural contexts. To address these limitations, we
introduce Maya, an open-... | 2025-05-13T19:01:12Z | Accepted at VLMs4ALL CVPR 2025 Workshop; corrected workshop name
spelling | null | null | null | null | null | null | null | null | null |
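Each row above is a single pipe-delimited record following the column schema in the table header (arxiv_id, title, authors, …, ss_fieldsOfStudy), with `null` marking missing values. A minimal, hypothetical sketch of splitting one such row back into named fields — `COLUMNS` and `parse_row` are illustrative helpers, not part of any dataset tooling — could look like:

```python
# Hypothetical helper: split one pipe-delimited row of this table into a dict
# keyed by the column names from the table header. "null" becomes None.
COLUMNS = [
    "arxiv_id", "title", "authors", "categories", "summary", "published",
    "comments", "journal_ref", "doi", "ss_title", "ss_authors", "ss_year",
    "ss_venue", "ss_citationCount", "ss_referenceCount", "ss_fieldsOfStudy",
]

def parse_row(line: str) -> dict:
    # Fields are separated by " | "; rows may omit trailing columns.
    parts = [p.strip() for p in line.split(" | ")]
    return {
        col: (None if val == "null" else val)
        for col, val in zip(COLUMNS, parts)
    }

row = parse_row(
    "2505.07447 | Unified Continuous Generative Models"
    " | ['Peng Sun', 'Yi Jiang', 'Tao Lin'] | ['cs.LG', 'cs.AI', 'cs.CV']"
    " | Recent advances in continuous generative models..."
    " | 2025-05-12T11:15:39Z | https://github.com/LINs-lab/UCGM | null | null"
)
print(row["title"])        # Unified Continuous Generative Models
print(row["journal_ref"])  # None
```

Note the simple split assumes no field ever contains the exact separator `" | "`; a real loader would read the dataset's native (e.g. Parquet) format instead of re-parsing rendered rows.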