| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2502.17057 | ExpandR: Teaching Dense Retrievers Beyond Queries with LLM Guidance | ['Sijia Yao', 'Pengcheng Huang', 'Zhenghao Liu', 'Yu Gu', 'Yukun Yan', 'Shi Yu', 'Ge Yu'] | ['cs.IR', 'cs.AI'] | Large language models (LLMs) have demonstrated significant potential in
enhancing dense retrieval through query augmentation. However, most existing
methods treat the LLM and the retriever as separate modules, overlooking the
alignment between generation and ranking objectives. In this work, we propose
ExpandR, a unifi... | 2025-02-24T11:15:41Z | 16 pages, 10 tables, 5 figures | null | null | null | null | null | null | null | null | null |
2502.17125 | LettuceDetect: A Hallucination Detection Framework for RAG Applications | ['Ádám Kovács', 'Gábor Recski'] | ['cs.CL', 'cs.AI'] | Retrieval Augmented Generation (RAG) systems remain vulnerable to
hallucinated answers despite incorporating external knowledge sources. We
present LettuceDetect, a framework that addresses two critical limitations in
existing hallucination detection methods: (1) the context window constraints of
traditional encoder-bas... | 2025-02-24T13:11:47Z | 6 pages | null | null | LettuceDetect: A Hallucination Detection Framework for RAG Applications | ['Adam Kovacs', 'Gábor Recski'] | 2025 | arXiv.org | 5 | 29 | ['Computer Science'] |
2502.17237 | MegaLoc: One Retrieval to Place Them All | ['Gabriele Berton', 'Carlo Masone'] | ['cs.CV'] | Retrieving images from the same location as a given query is an important
component of multiple computer vision tasks, like Visual Place Recognition,
Landmark Retrieval, Visual Localization, 3D reconstruction, and SLAM. However,
existing solutions are built to specifically work for one of these tasks, and
are known to ... | 2025-02-24T15:14:55Z | Tech Report | null | null | null | null | null | null | null | null | null |
2502.17239 | Baichuan-Audio: A Unified Framework for End-to-End Speech Interaction | ['Tianpeng Li', 'Jun Liu', 'Tao Zhang', 'Yuanbo Fang', 'Da Pan', 'Mingrui Wang', 'Zheng Liang', 'Zehuan Li', 'Mingan Lin', 'Guosheng Dong', 'Jianhua Xu', 'Haoze Sun', 'Zenan Zhou', 'Weipeng Chen'] | ['cs.CL', 'cs.SD', 'eess.AS'] | We introduce Baichuan-Audio, an end-to-end audio large language model that
seamlessly integrates audio understanding and generation. It features a
text-guided aligned speech generation mechanism, enabling real-time speech
interaction with both comprehension and generation capabilities. Baichuan-Audio
leverages a pre-tr... | 2025-02-24T15:16:34Z | null | null | null | Baichuan-Audio: A Unified Framework for End-to-End Speech Interaction | ['Tianpeng Li', 'Jun Liu', 'Tao Zhang', 'Yuanbo Fang', 'Da Pan', 'Mingrui Wang', 'Zheng Liang', 'Zehuan Li', 'Mingan Lin', 'Guosheng Dong', 'Jianhua Xu', 'Haoze Sun', 'Zenan Zhou', 'Weipeng Chen'] | 2025 | arXiv.org | 7 | 44 | ['Computer Science', 'Engineering'] |
2502.17387 | Big-Math: A Large-Scale, High-Quality Math Dataset for Reinforcement
Learning in Language Models | ['Alon Albalak', 'Duy Phung', 'Nathan Lile', 'Rafael Rafailov', 'Kanishk Gandhi', 'Louis Castricato', 'Anikait Singh', 'Chase Blagden', 'Violet Xiang', 'Dakota Mahan', 'Nick Haber'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Increasing interest in reasoning models has led math to become a prominent
testing ground for algorithmic and methodological improvements. However,
existing open math datasets either contain a small collection of high-quality,
human-written problems or a large corpus of machine-generated problems of
uncertain quality, ... | 2025-02-24T18:14:01Z | null | null | null | null | null | null | null | null | null | null |
2502.17424 | Emergent Misalignment: Narrow finetuning can produce broadly misaligned
LLMs | ['Jan Betley', 'Daniel Tan', 'Niels Warncke', 'Anna Sztyber-Betley', 'Xuchan Bao', 'Martín Soto', 'Nathan Labenz', 'Owain Evans'] | ['cs.CL', 'cs.AI', 'cs.CR', 'cs.LG'] | We present a surprising result regarding LLMs and alignment. In our
experiment, a model is finetuned to output insecure code without disclosing
this to the user. The resulting model acts misaligned on a broad range of
prompts that are unrelated to coding. It asserts that humans should be enslaved
by AI, gives malicious... | 2025-02-24T18:56:03Z | 40 pages, 38 figures An earlier revision of this paper was accepted
at ICML 2025. Since then, it has been updated to include new results on
training dynamics (4.7) and base models (4.8) | null | null | null | null | null | null | null | null | null |
2502.17425 | Introducing Visual Perception Token into Multimodal Large Language Model | ['Runpeng Yu', 'Xinyin Ma', 'Xinchao Wang'] | ['cs.CV', 'cs.LG'] | To utilize visual information, Multimodal Large Language Model (MLLM) relies
on the perception process of its vision encoder. The completeness and accuracy
of visual perception significantly influence the precision of spatial
reasoning, fine-grained understanding, and other tasks. However, MLLM still
lacks the autonomo... | 2025-02-24T18:56:12Z | null | null | null | null | null | null | null | null | null | null |
2502.17437 | Fractal Generative Models | ['Tianhong Li', 'Qinyi Sun', 'Lijie Fan', 'Kaiming He'] | ['cs.LG', 'cs.CV'] | Modularization is a cornerstone of computer science, abstracting complex
functions into atomic building blocks. In this paper, we introduce a new level
of modularization by abstracting generative models into atomic generative
modules. Analogous to fractals in mathematics, our method constructs a new type
of generative ... | 2025-02-24T18:59:56Z | null | null | null | Fractal Generative Models | ['Tianhong Li', 'Qinyi Sun', 'Lijie Fan', 'Kaiming He'] | 2025 | arXiv.org | 8 | 0 | ['Computer Science'] |
2502.17543 | Training a Generally Curious Agent | ['Fahim Tajwar', 'Yiding Jiang', 'Abitha Thankaraj', 'Sumaita Sadia Rahman', 'J Zico Kolter', 'Jeff Schneider', 'Ruslan Salakhutdinov'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Efficient exploration is essential for intelligent systems interacting with
their environment, but existing language models often fall short in scenarios
that require strategic information gathering. In this paper, we present
Paprika, a fine-tuning approach that enables language models to develop general
decision-makin... | 2025-02-24T18:56:58Z | ICML 2025. Project Website: https://paprika-llm.github.io | null | null | Training a Generally Curious Agent | ['Fahim Tajwar', 'Yiding Jiang', 'Abitha Thankaraj', 'Sumaita Sadia Rahman', 'J. Z. Kolter', 'Jeff Schneider', 'Ruslan Salakhutdinov'] | 2025 | arXiv.org | 3 | 86 | ['Computer Science'] |
2502.17579 | VANPY: Voice Analysis Framework | ['Gregory Koushnir', 'Michael Fire', 'Galit Fuhrmann Alpert', 'Dima Kagan'] | ['cs.SD', 'cs.LG', 'eess.AS'] | Voice data is increasingly being used in modern digital communications, yet
there is still a lack of comprehensive tools for automated voice analysis and
characterization. To this end, we developed the VANPY (Voice Analysis in
Python) framework for automated pre-processing, feature extraction, and
classification of voi... | 2025-02-17T21:12:57Z | null | null | null | null | null | null | null | null | null | null |
2502.17796 | LAM: Large Avatar Model for One-shot Animatable Gaussian Head | ['Yisheng He', 'Xiaodong Gu', 'Xiaodan Ye', 'Chao Xu', 'Zhengyi Zhao', 'Yuan Dong', 'Weihao Yuan', 'Zilong Dong', 'Liefeng Bo'] | ['cs.CV'] | We present LAM, an innovative Large Avatar Model for animatable Gaussian head
reconstruction from a single image. Unlike previous methods that require
extensive training on captured video sequences or rely on auxiliary neural
networks for animation and rendering during inference, our approach generates
Gaussian heads t... | 2025-02-25T02:57:45Z | Project Page: https://aigc3d.github.io/projects/LAM/ Source code:
https://github.com/aigc3d/LAM | null | null | LAM: Large Avatar Model for One-shot Animatable Gaussian Head | ['Yisheng He', 'Xiaodong Gu', 'Xiaodan Ye', 'Chao Xu', 'Zhengyi Zhao', 'Yuan Dong', 'Weihao Yuan', 'Zilong Dong', 'Liefeng Bo'] | 2025 | arXiv.org | 0 | 77 | ['Computer Science'] |
2502.18008 | NotaGen: Advancing Musicality in Symbolic Music Generation with Large
Language Model Training Paradigms | ['Yashan Wang', 'Shangda Wu', 'Jianhuai Hu', 'Xingjian Du', 'Yueqi Peng', 'Yongxin Huang', 'Shuai Fan', 'Xiaobing Li', 'Feng Yu', 'Maosong Sun'] | ['cs.SD', 'cs.AI', 'eess.AS'] | We introduce NotaGen, a symbolic music generation model aiming to explore the
potential of producing high-quality classical sheet music. Inspired by the
success of Large Language Models (LLMs), NotaGen adopts pre-training,
fine-tuning, and reinforcement learning paradigms (henceforth referred to as
the LLM training par... | 2025-02-25T09:12:07Z | null | null | null | null | null | null | null | null | null | null |
2502.18009 | Patient Trajectory Prediction: Integrating Clinical Notes with
Transformers | ['Sifal Klioui', 'Sana Sellami', 'Youssef Trardi'] | ['cs.LG'] | Predicting disease trajectories from electronic health records (EHRs) is a
complex task due to major challenges such as data non-stationarity, high
granularity of medical codes, and integration of multimodal data. EHRs contain
both structured data, such as diagnostic codes, and unstructured data, such as
clinical notes... | 2025-02-25T09:14:07Z | null | null | null | Patient Trajectory Prediction: Integrating Clinical Notes with Transformers | ['Sifal Klioui', 'Sana Sellami', 'Youssef Trardi'] | 2025 | BIOSTEC : HEALTHINF | 0 | 25 | ['Computer Science'] |
2502.18041 | OpenFly: A Comprehensive Platform for Aerial Vision-Language Navigation | ['Yunpeng Gao', 'Chenhui Li', 'Zhongrui You', 'Junli Liu', 'Zhen Li', 'Pengan Chen', 'Qizhi Chen', 'Zhonghan Tang', 'Liansheng Wang', 'Penghui Yang', 'Yiwen Tang', 'Yuhang Tang', 'Shuai Liang', 'Songyi Zhu', 'Ziqin Xiong', 'Yifei Su', 'Xinyi Ye', 'Jianan Li', 'Yan Ding', 'Dong Wang', 'Zhigang Wang', 'Bin Zhao', 'Xuelon... | ['cs.CV', 'cs.RO'] | Vision-Language Navigation (VLN) aims to guide agents by leveraging language
instructions and visual cues, playing a pivotal role in embodied AI. Indoor VLN
has been extensively studied, whereas outdoor aerial VLN remains underexplored.
The potential reason is that outdoor aerial view encompasses vast areas, making
dat... | 2025-02-25T09:57:18Z | null | null | null | null | null | null | null | null | null | null |
2502.18101 | Detecting Offensive Memes with Social Biases in Singapore Context Using
Multimodal Large Language Models | ['Cao Yuxuan', 'Wu Jiayang', 'Alistair Cheong Liang Chuen', 'Bryan Shan Guanrong', 'Theodore Lee Chong Jen', 'Sherman Chann Zhi Shen'] | ['cs.CV', 'cs.CL'] | Traditional online content moderation systems struggle to classify modern
multimodal means of communication, such as memes, a highly nuanced and
information-dense medium. This task is especially hard in a culturally diverse
society like Singapore, where low-resource languages are used and extensive
knowledge on local c... | 2025-02-25T11:15:49Z | Accepted at 3rd Workshop on Cross-Cultural Considerations in NLP
(C3NLP), co-located with NAACL 2025. This is an extended version with some
appendix moved to the main body | null | null | Detecting Offensive Memes with Social Biases in Singapore Context Using Multimodal Large Language Models | ['Yuxuan Cao', 'Jiayang Wu', 'Alistair Cheong Liang Chuen', 'Bryan Shan Guanrong', 'Theodore Lee Chong Jen', 'Sherman Chann Zhi Shen'] | 2025 | arXiv.org | 0 | 48 | ['Computer Science'] |
2502.18137 | SpargeAttention: Accurate and Training-free Sparse Attention
Accelerating Any Model Inference | ['Jintao Zhang', 'Chendong Xiang', 'Haofeng Huang', 'Jia Wei', 'Haocheng Xi', 'Jun Zhu', 'Jianfei Chen'] | ['cs.LG', 'cs.AI', 'cs.CV', 'cs.PF'] | An efficient attention implementation is essential for large models due to
its quadratic time complexity. Fortunately, attention commonly exhibits
sparsity, i.e., many values in the attention map are near zero, allowing for
the omission of corresponding computations. Many studies have utilized the
sparse pattern to acc... | 2025-02-25T12:02:17Z | @inproceedings{zhang2025spargeattn, title={Spargeattn: Accurate
sparse attention accelerating any model inference}, author={Zhang, Jintao and
Xiang, Chendong and Huang, Haofeng and Wei, Jia and Xi, Haocheng and Zhu, Jun
and Chen, Jianfei}, booktitle={International Conference on Machine Learning
(ICML)}, year={2... | Proceedings of the 42 nd International Conference on Machine
Learning, PMLR 267, 2025 (ICML 2025) | null | null | null | null | null | null | null | null |
2502.18186 | Steering Language Model to Stable Speech Emotion Recognition via
Contextual Perception and Chain of Thought | ['Zhixian Zhao', 'Xinfa Zhu', 'Xinsheng Wang', 'Shuiyuan Wang', 'Xuelong Geng', 'Wenjie Tian', 'Lei Xie'] | ['cs.SD', 'cs.CL', 'eess.AS'] | Large-scale audio language models (ALMs), such as Qwen2-Audio, are capable of
comprehending diverse audio signal, performing audio analysis and generating
textual responses. However, in speech emotion recognition (SER), ALMs often
suffer from hallucinations, resulting in misclassifications or irrelevant
outputs. To add... | 2025-02-25T13:26:25Z | null | null | null | null | null | null | null | null | null | null |
2502.18274 | Citrus: Leveraging Expert Cognitive Pathways in a Medical Language Model
for Advanced Medical Decision Support | ['Guoxin Wang', 'Minyu Gao', 'Shuai Yang', 'Ya Zhang', 'Lizhi He', 'Liang Huang', 'Hanlin Xiao', 'Yexuan Zhang', 'Wanyue Li', 'Lu Chen', 'Jintao Fei', 'Xin Li'] | ['cs.AI', 'cs.CL'] | Large language models (LLMs), particularly those with reasoning capabilities,
have rapidly advanced in recent years, demonstrating significant potential
across a wide range of applications. However, their deployment in healthcare,
especially in disease reasoning tasks, is hindered by the challenge of
acquiring expert-l... | 2025-02-25T15:05:12Z | null | null | null | Citrus: Leveraging Expert Cognitive Pathways in a Medical Language Model for Advanced Medical Decision Support | ['Guoxin Wang', 'Minyu Gao', 'Shuai Yang', 'Ya Zhang', 'Lizhi He', 'Liang Huang', 'Hanlin Xiao', 'Yexuan Zhang', 'Wanyue Li', 'Lu Chen', 'Jintao Fei', 'Xin Li'] | 2025 | arXiv.org | 2 | 86 | ['Computer Science'] |
2502.18277 | Self-Adjust Softmax | ['Chuanyang Zheng', 'Yihang Gao', 'Guoxuan Chen', 'Han Shi', 'Jing Xiong', 'Xiaozhe Ren', 'Chao Huang', 'Xin Jiang', 'Zhenguo Li', 'Yu Li'] | ['cs.CL'] | The softmax function is crucial in Transformer attention, which normalizes
each row of the attention scores with summation to one, achieving superior
performances over other alternative functions. However, the softmax function
can face a gradient vanishing issue when some elements of the attention scores
approach extre... | 2025-02-25T15:07:40Z | Tech Report | null | null | Self-Adjust Softmax | ['Chuanyang Zheng', 'Yihang Gao', 'Guoxuan Chen', 'Han Shi', 'Jing Xiong', 'Xiaozhe Ren', 'Chao Huang', 'Xin Jiang', 'Zhenguo Li', 'Yu Li'] | 2025 | arXiv.org | 1 | 108 | ['Computer Science'] |
2502.18316 | WiCkeD: A Simple Method to Make Multiple Choice Benchmarks More
Challenging | ['Ahmed Elhady', 'Eneko Agirre', 'Mikel Artetxe'] | ['cs.CL'] | We introduce WiCkeD, a simple method to increase the complexity of existing
multiple-choice benchmarks by randomly replacing a choice with "None of the
above", a method often used in educational tests. We show that WiCkeD can be
automatically applied to any existing benchmark, making it more challenging. We
apply WiCke... | 2025-02-25T16:09:38Z | null | null | null | null | null | null | null | null | null | null |
2502.18364 | ART: Anonymous Region Transformer for Variable Multi-Layer Transparent
Image Generation | ['Yifan Pu', 'Yiming Zhao', 'Zhicong Tang', 'Ruihong Yin', 'Haoxing Ye', 'Yuhui Yuan', 'Dong Chen', 'Jianmin Bao', 'Sirui Zhang', 'Yanbin Wang', 'Lin Liang', 'Lijuan Wang', 'Ji Li', 'Xiu Li', 'Zhouhui Lian', 'Gao Huang', 'Baining Guo'] | ['cs.CV'] | Multi-layer image generation is a fundamental task that enables users to
isolate, select, and edit specific image layers, thereby revolutionizing
interactions with generative models. In this paper, we introduce the Anonymous
Region Transformer (ART), which facilitates the direct generation of variable
multi-layer trans... | 2025-02-25T16:57:04Z | Project page: https://art-msra.github.io/ | null | null | ART: Anonymous Region Transformer for Variable Multi-Layer Transparent Image Generation | ['Yifan Pu', 'Yiming Zhao', 'Zhicong Tang', 'Ruihong Yin', 'Haoxing Ye', 'Yuhui Yuan', 'Dong Chen', 'Jianmin Bao', 'Sirui Zhang', 'Yanbin Wang', 'Lin Liang', 'Lijuan Wang', 'Ji Li', 'Xiu Li', 'Zhouhui Lian', 'Gao Huang', 'Baining Guo'] | 2025 | Computer Vision and Pattern Recognition | 5 | 56 | ['Computer Science'] |
2502.18411 | OmniAlign-V: Towards Enhanced Alignment of MLLMs with Human Preference | ['Xiangyu Zhao', 'Shengyuan Ding', 'Zicheng Zhang', 'Haian Huang', 'Maosong Cao', 'Weiyun Wang', 'Jiaqi Wang', 'Xinyu Fang', 'Wenhai Wang', 'Guangtao Zhai', 'Haodong Duan', 'Hua Yang', 'Kai Chen'] | ['cs.CV'] | Recent advancements in open-source multi-modal large language models (MLLMs)
have primarily focused on enhancing foundational capabilities, leaving a
significant gap in human preference alignment. This paper introduces
OmniAlign-V, a comprehensive dataset of 200K high-quality training samples
featuring diverse images, ... | 2025-02-25T18:05:14Z | null | null | null | null | null | null | null | null | null | null |
2502.18418 | Rank1: Test-Time Compute for Reranking in Information Retrieval | ['Orion Weller', 'Kathryn Ricci', 'Eugene Yang', 'Andrew Yates', 'Dawn Lawrie', 'Benjamin Van Durme'] | ['cs.IR', 'cs.CL', 'cs.LG'] | We introduce Rank1, the first reranking model trained to take advantage of
test-time compute. Rank1 demonstrates the applicability within retrieval of
using a reasoning language model (i.e. OpenAI's o1, Deepseek's R1, etc.) for
distillation in order to rapidly improve the performance of a smaller model. We
gather and o... | 2025-02-25T18:14:06Z | null | null | null | Rank1: Test-Time Compute for Reranking in Information Retrieval | ['Orion Weller', 'Kathryn Ricci', 'Eugene Yang', 'Andrew Yates', 'Dawn J. Lawrie', 'Benjamin Van Durme'] | 2025 | arXiv.org | 12 | 45 | ['Computer Science'] |
2502.18435 | What Makes the Preferred Thinking Direction for LLMs in Multiple-choice
Questions? | ['Yizhe Zhang', 'Richard Bai', 'Zijin Gu', 'Ruixiang Zhang', 'Jiatao Gu', 'Emmanuel Abbe', 'Samy Bengio', 'Navdeep Jaitly'] | ['cs.CL', 'cs.IT', 'cs.LG', 'math.IT'] | Language models usually use left-to-right (L2R) autoregressive factorization.
However, L2R factorization may not always be the best inductive bias.
Therefore, we investigate whether alternative factorizations of the text
distribution could be beneficial in some tasks. We investigate right-to-left
(R2L) training as a co... | 2025-02-25T18:30:25Z | 10 pages for the main text | null | null | null | null | null | null | null | null | null |
2502.18460 | DRAMA: Diverse Augmentation from Large Language Models to Smaller Dense
Retrievers | ['Xueguang Ma', 'Xi Victoria Lin', 'Barlas Oguz', 'Jimmy Lin', 'Wen-tau Yih', 'Xilun Chen'] | ['cs.CL', 'cs.IR'] | Large language models (LLMs) have demonstrated strong effectiveness and
robustness while fine-tuned as dense retrievers. However, their large parameter
size brings significant inference time computational challenges, including high
encoding costs for large-scale corpora and increased query latency, limiting
their pract... | 2025-02-25T18:59:07Z | ACL 2025 | null | null | null | null | null | null | null | null | null |
2502.18600 | Chain of Draft: Thinking Faster by Writing Less | ['Silei Xu', 'Wenhao Xie', 'Lingxiao Zhao', 'Pengcheng He'] | ['cs.CL', 'I.2.7'] | Large Language Models (LLMs) have demonstrated remarkable performance in
solving complex reasoning tasks through mechanisms like Chain-of-Thought (CoT)
prompting, which emphasizes verbose, step-by-step reasoning. However, humans
typically employ a more efficient strategy: drafting concise intermediate
thoughts that cap... | 2025-02-25T19:36:06Z | null | null | null | null | null | null | null | null | null | null |
2502.18679 | Discriminative Finetuning of Generative Large Language Models without
Reward Models and Human Preference Data | ['Siqi Guo', 'Ilgee Hong', 'Vicente Balmaseda', 'Changlong Yu', 'Liang Qiu', 'Xin Liu', 'Haoming Jiang', 'Tuo Zhao', 'Tianbao Yang'] | ['cs.CL'] | Supervised fine-tuning (SFT) has become a crucial step for aligning
pretrained large language models (LLMs) using supervised datasets of
input-output pairs. However, despite being supervised, SFT is inherently
limited by its generative training objective. To address its limitations, the
existing common strategy is to f... | 2025-02-25T22:38:55Z | 18 pages, 7 figures | null | null | Discriminative Finetuning of Generative Large Language Models without Reward Models and Human Preference Data | ['Siqi Guo', 'Ilgee Hong', 'Vicente Balmaseda', 'Changlong Yu', 'Liang Qiu', 'Xin Liu', 'Haoming Jiang', 'Tuo Zhao', 'Tianbao Yang'] | 2025 | null | 0 | 60 | ['Computer Science'] |
2502.18772 | Plutus: Benchmarking Large Language Models in Low-Resource Greek Finance | ['Xueqing Peng', 'Triantafillos Papadopoulos', 'Efstathia Soufleri', 'Polydoros Giannouris', 'Ruoyu Xiang', 'Yan Wang', 'Lingfei Qian', 'Jimin Huang', 'Qianqian Xie', 'Sophia Ananiadou'] | ['cs.CL'] | Despite Greece's pivotal role in the global economy, large language models
(LLMs) remain underexplored for Greek financial context due to the linguistic
complexity of Greek and the scarcity of domain-specific datasets. Previous
efforts in multilingual financial natural language processing (NLP) have
exposed considerabl... | 2025-02-26T03:04:01Z | 18 pages, 6 figures | null | null | Plutus: Benchmarking Large Language Models in Low-Resource Greek Finance | ['Xueqing Peng', 'Triantafillos Papadopoulos', 'Efstathia Soufleri', 'Polydoros Giannouris', 'Ruoyu Xiang', 'Yan Wang', 'Lingfei Qian', 'Jimin Huang', 'Qianqian Xie', 'Sophia Ananiadou'] | 2025 | arXiv.org | 3 | 0 | ['Computer Science'] |
2502.18886 | On Pruning State-Space LLMs | ['Tamer Ghattas', 'Michael Hassid', 'Roy Schwartz'] | ['cs.CL', 'cs.LG'] | Recent work proposed state-space models (SSMs) as an efficient alternative to
transformer-based LLMs. Can these models be pruned to further reduce their
computation costs? We adapt several pruning methods to the SSM structure, and
apply them to four SSM-based LLMs across multiple tasks. We find that such
models are qui... | 2025-02-26T07:04:20Z | null | null | null | null | null | null | null | null | null | null |
2502.18924 | MegaTTS 3: Sparse Alignment Enhanced Latent Diffusion Transformer for
Zero-Shot Speech Synthesis | ['Ziyue Jiang', 'Yi Ren', 'Ruiqi Li', 'Shengpeng Ji', 'Boyang Zhang', 'Zhenhui Ye', 'Chen Zhang', 'Bai Jionghao', 'Xiaoda Yang', 'Jialong Zuo', 'Yu Zhang', 'Rui Liu', 'Xiang Yin', 'Zhou Zhao'] | ['eess.AS', 'cs.LG', 'cs.SD'] | While recent zero-shot text-to-speech (TTS) models have significantly
improved speech quality and expressiveness, mainstream systems still suffer
from issues related to speech-text alignment modeling: 1) models without
explicit speech-text alignment modeling exhibit less robustness, especially for
hard sentences in pra... | 2025-02-26T08:22:00Z | null | null | null | null | null | null | null | null | null | null |
2502.18934 | Kanana: Compute-efficient Bilingual Language Models | ['Kanana LLM Team', 'Yunju Bak', 'Hojin Lee', 'Minho Ryu', 'Jiyeon Ham', 'Seungjae Jung', 'Daniel Wontae Nam', 'Taegyeong Eo', 'Donghun Lee', 'Doohae Jung', 'Boseop Kim', 'Nayeon Kim', 'Jaesun Park', 'Hyunho Kim', 'Hyunwoong Ko', 'Changmin Lee', 'Kyoung-Woon On', 'Seulye Baeg', 'Junrae Cho', 'Sunghee Jung', 'Jieun Kang... | ['cs.CL', 'cs.LG'] | We introduce Kanana, a series of bilingual language models that demonstrate
exceeding performance in Korean and competitive performance in English. The
computational cost of Kanana is significantly lower than that of
state-of-the-art models of similar size. The report details the techniques
employed during pre-training... | 2025-02-26T08:36:20Z | 40 pages, 15 figures | null | null | Kanana: Compute-efficient Bilingual Language Models | ['Kanana Llm Team Yunju Bak', 'Hojin Lee', 'Minho Ryu', 'Jiyeon Ham', 'Seungjae Jung', 'D. W. Nam', 'Taegyeong Eo', 'Donghun Lee', 'Doohae Jung', 'Boseop Kim', 'Nayeon Kim', 'Jaesun Park', 'Hyunho Kim', 'H. Ko', 'Changmin Lee', 'Kyoung-Woon On', 'Seulye Baeg', 'Junrae Cho', 'Sunghee Jung', 'Jieun Kang', 'EungGyun Kim',... | 2025 | arXiv.org | 1 | 86 | ['Computer Science'] |
2502.18940 | MathTutorBench: A Benchmark for Measuring Open-ended Pedagogical
Capabilities of LLM Tutors | ['Jakub Macina', 'Nico Daheim', 'Ido Hakimi', 'Manu Kapur', 'Iryna Gurevych', 'Mrinmaya Sachan'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Evaluating the pedagogical capabilities of AI-based tutoring models is
critical for making guided progress in the field. Yet, we lack a reliable,
easy-to-use, and simple-to-run evaluation that reflects the pedagogical
abilities of models. To fill this gap, we present MathTutorBench, an
open-source benchmark for holisti... | 2025-02-26T08:43:47Z | https://eth-lre.github.io/mathtutorbench | null | null | null | null | null | null | null | null | null |
2502.18968 | Know You First and Be You Better: Modeling Human-Like User Simulators
via Implicit Profiles | ['Kuang Wang', 'Xianfei Li', 'Shenghao Yang', 'Li Zhou', 'Feng Jiang', 'Haizhou Li'] | ['cs.CL'] | User simulators are crucial for replicating human interactions with dialogue
systems, supporting both collaborative training and automatic evaluation,
especially for large language models (LLMs). However, current role-playing
methods face challenges such as a lack of utterance-level authenticity and
user-level diversit... | 2025-02-26T09:26:54Z | 9 pages. Accepted to ACL 2025. Camera-ready version | null | null | Know You First and Be You Better: Modeling Human-Like User Simulators via Implicit Profiles | ['Kuang Wang', 'Xianfei Li', 'Shenghao Yang', 'Li Zhou', 'Feng Jiang', 'Haizhou Li'] | 2025 | arXiv.org | 0 | 63 | ['Computer Science'] |
2502.18969 | (Mis)Fitting: A Survey of Scaling Laws | ['Margaret Li', 'Sneha Kudugunta', 'Luke Zettlemoyer'] | ['cs.LG', 'cs.AI', 'cs.CL', 'stat.ME'] | Modern foundation models rely heavily on using scaling laws to guide crucial
training decisions. Researchers often extrapolate the optimal architecture and
hyperparameter settings from smaller training runs by describing the
relationship between loss, or task performance, and scale. All components of
this process va... | 2025-02-26T09:27:54Z | 41 pages, 3 figure, first two authors contributed equally. ICLR, 2025 | null | null | (Mis)Fitting: A Survey of Scaling Laws | ['Margaret Li', 'Sneha Kudugunta', 'Luke S. Zettlemoyer'] | 2025 | arXiv.org | 4 | 80 | ['Computer Science', 'Mathematics'] |
2502.19204 | Distill Any Depth: Distillation Creates a Stronger Monocular Depth
Estimator | ['Xiankang He', 'Dongyan Guo', 'Hongji Li', 'Ruibo Li', 'Ying Cui', 'Chi Zhang'] | ['cs.CV'] | Recent advances in zero-shot monocular depth estimation(MDE) have
significantly improved generalization by unifying depth distributions through
normalized depth representations and by leveraging large-scale unlabeled data
via pseudo-label distillation. However, existing methods that rely on global
depth normalization t... | 2025-02-26T15:10:05Z | project page: https://distill-any-depth-official.github.io/ | null | null | null | null | null | null | null | null | null |
2502.19285 | On the Importance of Text Preprocessing for Multimodal Representation
Learning and Pathology Report Generation | ['Ruben T. Lucassen', 'Tijn van de Luijtgaarden', 'Sander P. J. Moonemans', 'Gerben E. Breimer', 'Willeke A. M. Blokx', 'Mitko Veta'] | ['cs.CV'] | Vision-language models in pathology enable multimodal case retrieval and
automated report generation. Many of the models developed so far, however, have
been trained on pathology reports that include information which cannot be
inferred from paired whole slide images (e.g., patient history), potentially
leading to hall... | 2025-02-26T16:45:09Z | 11 pages, 1 figure | null | null | On the Importance of Text Preprocessing for Multimodal Representation Learning and Pathology Report Generation | ['Ruben T. Lucassen', 'Tijn van de Luijtgaarden', 'S. P. Moonemans', 'G. Breimer', 'W. Blokx', 'M. Veta'] | 2025 | arXiv.org | 0 | 25 | ['Computer Science'] |
2502.19293 | Pathology Report Generation and Multimodal Representation Learning for
Cutaneous Melanocytic Lesions | ['Ruben T. Lucassen', 'Sander P. J. Moonemans', 'Tijn van de Luijtgaarden', 'Gerben E. Breimer', 'Willeke A. M. Blokx', 'Mitko Veta'] | ['cs.CV'] | Millions of melanocytic skin lesions are examined by pathologists each year,
the majority of which concern common nevi (i.e., ordinary moles). While most of
these lesions can be diagnosed in seconds, writing the corresponding pathology
report is much more time-consuming. Automating part of the report writing
could, the... | 2025-02-26T16:52:10Z | 11 pages, 2 figures. arXiv admin note: text overlap with
arXiv:2502.19285 | null | null | null | null | null | null | null | null | null |
2502.19320 | Shh, don't say that! Domain Certification in LLMs | ['Cornelius Emde', 'Alasdair Paren', 'Preetham Arvind', 'Maxime Kayser', 'Tom Rainforth', 'Thomas Lukasiewicz', 'Bernard Ghanem', 'Philip H. S. Torr', 'Adel Bibi'] | ['cs.CL', 'cs.AI', 'cs.CR', 'cs.LG', 'stat.ML'] | Large language models (LLMs) are often deployed to perform constrained tasks,
with narrow domains. For example, customer support bots can be built on top of
LLMs, relying on their broad language understanding and capabilities to enhance
performance. However, these LLMs are adversarially susceptible, potentially
generat... | 2025-02-26T17:13:19Z | 10 pages, includes appendix Published in International Conference on
Learning Representations (ICLR) 2025 | International Conference on Learning Representations (ICLR) 2025 | null | null | null | null | null | null | null | null |
2502.19546 | Repurposing the scientific literature with vision-language models | ['Anton Alyakin', 'Jaden Stryker', 'Daniel Alexander Alber', 'Karl L. Sangwon', 'Jin Vivian Lee', 'Brandon Duderstadt', 'Akshay Save', 'David Kurland', 'Spencer Frome', 'Shrutika Singh', 'Jeff Zhang', 'Eunice Yang', 'Ki Yun Park', 'Cordelia Orillac', 'Aly A. Valliani', 'Sean Neifert', 'Albert Liu', 'Aneek Patel', 'Chri... | ['cs.AI', 'cs.CL', 'cs.HC'] | Leading vision-language models (VLMs) are trained on general Internet
content, overlooking scientific journals' rich, domain-specific knowledge.
Training on specialty-specific literature could yield high-performance,
task-specific tools, enabling generative AI to match generalist models in
specialty publishing, educati... | 2025-02-26T20:35:37Z | null | null | null | null | null | null | null | null | null | null |
2502.19587 | NeoBERT: A Next-Generation BERT | ['Lola Le Breton', 'Quentin Fournier', 'Mariam El Mezouar', 'John X. Morris', 'Sarath Chandar'] | ['cs.CL', 'cs.AI'] | Recent innovations in architecture, pre-training, and fine-tuning have led to
the remarkable in-context learning and reasoning abilities of large
auto-regressive language models such as LLaMA and DeepSeek. In contrast,
encoders like BERT and RoBERTa have not seen the same level of progress despite
being foundational fo... | 2025-02-26T22:00:22Z | 19 pages, 5 figures, 9 tables. Submitted to TMLR | null | null | null | null | null | null | null | null | null |
2502.19634 | MedVLM-R1: Incentivizing Medical Reasoning Capability of Vision-Language
Models (VLMs) via Reinforcement Learning | ['Jiazhen Pan', 'Che Liu', 'Junde Wu', 'Fenglin Liu', 'Jiayuan Zhu', 'Hongwei Bran Li', 'Chen Chen', 'Cheng Ouyang', 'Daniel Rueckert'] | ['cs.CV', 'cs.AI'] | Reasoning is a critical frontier for advancing medical image analysis, where
transparency and trustworthiness play a central role in both clinician trust
and regulatory approval. Although Medical Visual Language Models (VLMs) show
promise for radiological tasks, most existing VLMs merely produce final answers
without r... | 2025-02-26T23:57:34Z | null | null | null | null | null | null | null | null | null | null |
2502.19645 | Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success | ['Moo Jin Kim', 'Chelsea Finn', 'Percy Liang'] | ['cs.RO', 'cs.AI', 'cs.CV', 'cs.LG'] | Recent vision-language-action models (VLAs) build upon pretrained
vision-language models and leverage diverse robot datasets to demonstrate
strong task execution, language following ability, and semantic generalization.
Despite these successes, VLAs struggle with novel robot setups and require
fine-tuning to achieve go... | 2025-02-27T00:30:29Z | Accepted to Robotics: Science and Systems (RSS) 2025. Project
website: https://openvla-oft.github.io/ | null | null | Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success | ['Moo Jin Kim', 'Chelsea Finn', 'Percy Liang'] | 2025 | arXiv.org | 37 | 66 | ['Computer Science']
2502.19712 | Teaching Dense Retrieval Models to Specialize with Listwise Distillation
and LLM Data Augmentation | ['Manveer Singh Tamber', 'Suleman Kazi', 'Vivek Sourabh', 'Jimmy Lin'] | ['cs.IR'] | While the current state-of-the-art dense retrieval models exhibit strong
out-of-domain generalization, they might fail to capture nuanced
domain-specific knowledge. In principle, fine-tuning these models for
specialized retrieval tasks should yield higher effectiveness than relying on a
one-size-fits-all model, but in ... | 2025-02-27T03:07:49Z | null | null | null | null | null | null | null | null | null | null |
2502.19731 | Preference Learning Unlocks LLMs' Psycho-Counseling Skills | ['Mian Zhang', 'Shaun M. Eack', 'Zhiyu Zoey Chen'] | ['cs.CL'] | Applying large language models (LLMs) to assist in psycho-counseling is an
emerging and meaningful approach, driven by the significant gap between patient
needs and the availability of mental health support. However, current LLMs
struggle to consistently provide effective responses to client speeches,
largely due to th... | 2025-02-27T03:50:25Z | 10 pages, 6 figures | null | null | null | null | null | null | null | null | null |
2502.19868 | C-Drag: Chain-of-Thought Driven Motion Controller for Video Generation | ['Yuhao Li', 'Mirana Claire Angel', 'Salman Khan', 'Yu Zhu', 'Jinqiu Sun', 'Yanning Zhang', 'Fahad Shahbaz Khan'] | ['cs.CV'] | Trajectory-based motion control has emerged as an intuitive and efficient
approach for controllable video generation. However, the existing
trajectory-based approaches are usually limited to only generating the motion
trajectory of the controlled object and ignoring the dynamic interactions
between the controlled objec... | 2025-02-27T08:21:03Z | null | null | null | null | null | null | null | null | null | null |
2502.19937 | Image Referenced Sketch Colorization Based on Animation Creation
Workflow | ['Dingkun Yan', 'Xinrui Wang', 'Zhuoru Li', 'Suguru Saito', 'Yusuke Iwasawa', 'Yutaka Matsuo', 'Jiaxian Guo'] | ['cs.CV', 'cs.MM'] | Sketch colorization plays an important role in animation and digital
illustration production tasks. However, existing methods still meet problems in
that text-guided methods fail to provide accurate color and style reference,
hint-guided methods still involve manual operation, and image-referenced
methods are prone to ... | 2025-02-27T10:04:47Z | null | null | null | null | null | null | null | null | null | null |
2502.20056 | Enhanced Contrastive Learning with Multi-view Longitudinal Data for
Chest X-ray Report Generation | ['Kang Liu', 'Zhuoqi Ma', 'Xiaolu Kang', 'Yunan Li', 'Kun Xie', 'Zhicheng Jiao', 'Qiguang Miao'] | ['cs.CV', 'cs.AI'] | Automated radiology report generation offers an effective solution to
alleviate radiologists' workload. However, most existing methods focus
primarily on single or fixed-view images to model current disease conditions,
which limits diagnostic accuracy and overlooks disease progression. Although
some approaches utilize ... | 2025-02-27T12:59:04Z | Accepted by CVPR 2025 | null | null | Enhanced Contrastive Learning with Multi-view Longitudinal Data for Chest X-ray Report Generation | ['Kang Liu', 'Zhuoqi Ma', 'Xiaolu Kang', 'Yunan Li', 'Kun Xie', 'Zhicheng Jiao', 'Qiguang Miao'] | 2025 | Computer Vision and Pattern Recognition | 4 | 60 | ['Computer Science']
2502.20100 | Generative augmentations for improved cardiac ultrasound segmentation
using diffusion models | ['Gilles Van De Vyver', 'Aksel Try Lenz', 'Erik Smistad', 'Sindre Hellum Olaisen', 'Bjørnar Grenne', 'Espen Holte', 'Håvard Dalen', 'Lasse Løvstakken'] | ['eess.IV', 'cs.CV'] | One of the main challenges in current research on segmentation in cardiac
ultrasound is the lack of large and varied labeled datasets and the differences
in annotation conventions between datasets. This makes it difficult to design
robust segmentation models that generalize well to external datasets. This work
utilizes... | 2025-02-27T13:57:14Z | null | null | null | null | null | null | null | null | null | null |
2502.20122 | Self-Training Elicits Concise Reasoning in Large Language Models | ['Tergel Munkhbat', 'Namgyu Ho', 'Seo Hyun Kim', 'Yongjin Yang', 'Yujin Kim', 'Se-Young Yun'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Chain-of-thought (CoT) reasoning has enabled large language models (LLMs) to
utilize additional computation through intermediate tokens to solve complex
tasks. However, we posit that typical reasoning traces contain many redundant
tokens, incurring extraneous inference costs. Upon examination of the output
distribution... | 2025-02-27T14:14:50Z | 26 pages, 10 figures, 23 tables. Accepted to Findings of ACL 2025 | null | null | Self-Training Elicits Concise Reasoning in Large Language Models | ['Tergel Munkhbat', 'Namgyu Ho', 'Seohyun Kim', 'Yongjin Yang', 'Yujin Kim', 'Se-young Yun'] | 2025 | arXiv.org | 37 | 44 | ['Computer Science']
2502.20172 | Multimodal Representation Alignment for Image Generation: Text-Image
Interleaved Control Is Easier Than You Think | ['Liang Chen', 'Shuai Bai', 'Wenhao Chai', 'Weichu Xie', 'Haozhe Zhao', 'Leon Vinci', 'Junyang Lin', 'Baobao Chang'] | ['cs.CV', 'cs.CL'] | The field of advanced text-to-image generation is witnessing the emergence of
unified frameworks that integrate powerful text encoders, such as CLIP and T5,
with Diffusion Transformer backbones. Although there have been efforts to
control output images with additional conditions, like canny and depth map, a
comprehensi... | 2025-02-27T15:08:39Z | 13 pages, 9 figures, codebase in
https://github.com/chenllliang/DreamEngine | null | null | null | null | null | null | null | null | null |
2502.20272 | HVI: A New Color Space for Low-light Image Enhancement | ['Qingsen Yan', 'Yixu Feng', 'Cheng Zhang', 'Guansong Pang', 'Kangbiao Shi', 'Peng Wu', 'Wei Dong', 'Jinqiu Sun', 'Yanning Zhang'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Low-Light Image Enhancement (LLIE) is a crucial computer vision task that
aims to restore detailed visual information from corrupted low-light images.
Many existing LLIE methods are based on standard RGB (sRGB) space, which often
produce color bias and brightness artifacts due to inherent high color
sensitivity in sRGB... | 2025-02-27T16:59:51Z | Qingsen Yan, Yixu Feng, and Cheng Zhang contributed equally to this
work | null | null | null | null | null | null | null | null | null |
2502.20311 | Adapting Automatic Speech Recognition for Accented Air Traffic Control
Communications | ['Marcus Yu Zhe Wee', 'Justin Juin Hng Wong', 'Lynus Lim', 'Joe Yu Wei Tan', 'Prannaya Gupta', 'Dillion Lim', 'En Hao Tew', 'Aloysius Keng Siew Han', 'Yong Zhi Lim'] | ['cs.LG', 'cs.SD', 'eess.AS'] | Effective communication in Air Traffic Control (ATC) is critical to
maintaining aviation safety, yet the challenges posed by accented English
remain largely unaddressed in Automatic Speech Recognition (ASR) systems.
Existing models struggle with transcription accuracy for Southeast
Asian-accented (SEA-accented) speech,... | 2025-02-27T17:35:59Z | null | null | null | null | null | null | null | null | null | null |
2502.20317 | Mixture of Structural-and-Textual Retrieval over Text-rich Graph
Knowledge Bases | ['Yongjia Lei', 'Haoyu Han', 'Ryan A. Rossi', 'Franck Dernoncourt', 'Nedim Lipka', 'Mahantesh M Halappanavar', 'Jiliang Tang', 'Yu Wang'] | ['cs.LG', 'cs.AI', 'cs.IR'] | Text-rich Graph Knowledge Bases (TG-KBs) have become increasingly crucial for
answering queries by providing textual and structural knowledge. However,
current retrieval methods often retrieve these two types of knowledge in
isolation without considering their mutual reinforcement and some hybrid
methods even bypass st... | 2025-02-27T17:42:52Z | null | null | null | null | null | null | null | null | null | null |
2502.20321 | UniTok: A Unified Tokenizer for Visual Generation and Understanding | ['Chuofan Ma', 'Yi Jiang', 'Junfeng Wu', 'Jihan Yang', 'Xin Yu', 'Zehuan Yuan', 'Bingyue Peng', 'Xiaojuan Qi'] | ['cs.CV', 'cs.AI'] | Visual generative and understanding models typically rely on distinct
tokenizers to process images, presenting a key challenge for unifying them
within a single framework. Recent studies attempt to address this by connecting
the training of VQVAE (for autoregressive generation) and CLIP (for
understanding) to build a u... | 2025-02-27T17:47:01Z | null | null | null | UniTok: A Unified Tokenizer for Visual Generation and Understanding | ['Chuofan Ma', 'Yi Jiang', 'Junfeng Wu', 'Jihan Yang', 'Xin Yu', 'Zehuan Yuan', 'Bingyue Peng', 'Xiaojuan Qi'] | 2025 | arXiv.org | 15 | 78 | ['Computer Science']
2502.20323 | ARTalk: Speech-Driven 3D Head Animation via Autoregressive Model | ['Xuangeng Chu', 'Nabarun Goswami', 'Ziteng Cui', 'Hanqin Wang', 'Tatsuya Harada'] | ['cs.CV'] | Speech-driven 3D facial animation aims to generate realistic lip movements
and facial expressions for 3D head models from arbitrary audio clips. Although
existing diffusion-based methods are capable of producing natural motions,
their slow generation speed limits their application potential. In this paper,
we introduce... | 2025-02-27T17:49:01Z | More video demonstrations, code, models and data can be found on our
project website: http://xg-chu.site/project_artalk/ | null | null | null | null | null | null | null | null | null |
2502.20388 | Beyond Next-Token: Next-X Prediction for Autoregressive Visual
Generation | ['Sucheng Ren', 'Qihang Yu', 'Ju He', 'Xiaohui Shen', 'Alan Yuille', 'Liang-Chieh Chen'] | ['cs.CV'] | Autoregressive (AR) modeling, known for its next-token prediction paradigm,
underpins state-of-the-art language and visual generative models.
Traditionally, a ``token'' is treated as the smallest prediction unit, often a
discrete symbol in language or a quantized patch in vision. However, the
optimal token definition f... | 2025-02-27T18:59:08Z | Project page at \url{https://oliverrensu.github.io/project/xAR} | null | null | Beyond Next-Token: Next-X Prediction for Autoregressive Visual Generation | ['Sucheng Ren', 'Qihang Yu', 'Ju He', 'Xiaohui Shen', 'Alan L. Yuille', 'Liang-Chieh Chen'] | 2025 | arXiv.org | 11 | 64 | ['Computer Science']
2502.20578 | Interpreting CLIP with Hierarchical Sparse Autoencoders | ['Vladimir Zaigrajew', 'Hubert Baniecki', 'Przemyslaw Biecek'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Sparse autoencoders (SAEs) are useful for detecting and steering
interpretable features in neural networks, with particular potential for
understanding complex multimodal representations. Given their ability to
uncover interpretable features, SAEs are particularly valuable for analyzing
large-scale vision-language mode... | 2025-02-27T22:39:13Z | null | Proceedings of the 42nd International Conference on Machine
Learning (ICML 2025) | null | Interpreting CLIP with Hierarchical Sparse Autoencoders | ['Vladimir Zaigrajew', 'Hubert Baniecki', 'P. Biecek'] | 2025 | arXiv.org | 1 | 74 | ['Computer Science']
2502.20583 | LiteASR: Efficient Automatic Speech Recognition with Low-Rank
Approximation | ['Keisuke Kamahori', 'Jungo Kasai', 'Noriyuki Kojima', 'Baris Kasikci'] | ['cs.LG', 'cs.AI', 'cs.SD', 'eess.AS'] | Modern automatic speech recognition (ASR) models, such as OpenAI's Whisper,
rely on deep encoder-decoder architectures, and their encoders are a critical
bottleneck for efficient deployment due to high computational intensity. We
introduce LiteASR, a low-rank compression scheme for ASR encoders that
significantly reduc... | 2025-02-27T22:52:21Z | null | null | null | null | null | null | null | null | null | null |
2502.20795 | Plan2Align: Predictive Planning Based Test-Time Preference Alignment for
Large Language Models | ['Kuang-Da Wang', 'Teng-Ruei Chen', 'Yu Heng Hung', 'Guo-Xun Ko', 'Shuoyang Ding', 'Yueh-Hua Wu', 'Yu-Chiang Frank Wang', 'Chao-Han Huck Yang', 'Wen-Chih Peng', 'Ping-Chun Hsieh'] | ['cs.CL'] | Aligning Large Language Models with Preference Fine-Tuning is often
resource-intensive. Test-time alignment techniques that do not modify the
underlying models, such as prompting and guided decodings, offer a lightweight
alternative. However, existing test-time alignment methods primarily improve
short responses and fa... | 2025-02-28T07:24:33Z | Preprint. Code will be released at Plan2Align GitHub link:
https://github.com/NYCU-RL-Bandits-Lab/Plan2Align | null | null | null | null | null | null | null | null | null |
2502.20855 | MAMUT: A Novel Framework for Modifying Mathematical Formulas for the
Generation of Specialized Datasets for Language Model Training | ['Jonathan Drechsel', 'Anja Reusch', 'Steffen Herbold'] | ['cs.CL', 'cs.LG'] | Mathematical formulas are a fundamental and widely used component in various
scientific fields, serving as a universal language for expressing complex
concepts and relationships. While state-of-the-art transformer models excel in
processing and understanding natural language, they encounter challenges with
mathematical... | 2025-02-28T08:53:42Z | null | Transactions on Machine Learning Research (06/2025) | null | MAMUT: A Novel Framework for Modifying Mathematical Formulas for the Generation of Specialized Datasets for Language Model Training | ['Jonathan Drechsel', 'Anja Reusch', 'Steffen Herbold'] | 2,025 | arXiv.org | 1 | 40 | ['Computer Science'] |
2502.20864 | Do Language Models Understand Honorific Systems in Javanese? | ['Mohammad Rifqi Farhansyah', 'Iwan Darmawan', 'Adryan Kusumawardhana', 'Genta Indra Winata', 'Alham Fikri Aji', 'Derry Tanti Wijaya'] | ['cs.CL'] | The Javanese language features a complex system of honorifics that vary
according to the social status of the speaker, listener, and referent. Despite
its cultural and linguistic significance, there has been limited progress in
developing a comprehensive corpus to capture these variations for natural
language processin... | ACL 2025 - Main Conference | null | null | Do Language Models Understand Honorific Systems in Javanese? | ['Mohammad Rifqi Farhansyah', 'Iwan Darmawan', 'Adryan Kusumawardhana', 'Genta Indra Winata', 'Alham Fikri Aji', 'D. Wijaya'] | 2025 | arXiv.org | 1 | 38 | ['Computer Science']
2502.20936 | WebFAQ: A Multilingual Collection of Natural Q&A Datasets for Dense
Retrieval | ['Michael Dinzinger', 'Laura Caspari', 'Kanishka Ghosh Dastidar', 'Jelena Mitrović', 'Michael Granitzer'] | ['cs.CL', 'cs.AI', 'cs.IR'] | We present WebFAQ, a large-scale collection of open-domain question answering
datasets derived from FAQ-style schema.org annotations. In total, the data
collection consists of 96 million natural question-answer (QA) pairs across 75
languages, including 47 million (49%) non-English samples. WebFAQ further
serves as the ... | 10 pages, 3 figures, 7 tables | null | null | WebFAQ: A Multilingual Collection of Natural Q&A Datasets for Dense Retrieval | ['Michael Dinzinger', 'Laura Caspari', 'Kanishka Ghosh Dastidar', 'Jelena Mitrović', 'Michael Granitzer'] | 2025 | arXiv.org | 0 | 0 | ['Computer Science']
2502.21074 | CODI: Compressing Chain-of-Thought into Continuous Space via
Self-Distillation | ['Zhenyi Shen', 'Hanqi Yan', 'Linhai Zhang', 'Zhanghao Hu', 'Yali Du', 'Yulan He'] | ['cs.CL'] | Chain-of-Thought (CoT) reasoning enhances Large Language Models (LLMs) by
encouraging step-by-step reasoning in natural language. However, leveraging a
latent continuous space for reasoning may offer benefits in terms of both
efficiency and robustness. Prior implicit CoT methods attempt to bypass
language completely by... | 16 pages | null | null | CODI: Compressing Chain-of-Thought into Continuous Space via Self-Distillation | ['Zhenyi Shen', 'Hanqi Yan', 'Linhai Zhang', 'Zhanghao Hu', 'Yali Du', 'Yulan He'] | 2025 | arXiv.org | 27 | 47 | ['Computer Science']
2502.21208 | ARIES: Autonomous Reasoning with LLMs on Interactive Thought Graph
Environments | ['Pedro Gimenes', 'Zeyu Cao', 'Jeffrey Wong', 'Yiren Zhao'] | ['cs.AI', 'cs.LG'] | Recent research has shown that LLM performance on reasoning tasks can be
enhanced by scaling test-time compute. One promising approach, particularly
with decomposable problems, involves arranging intermediate solutions as a
graph on which transformations are performed to explore the solution space.
However, prior works... | null | null | null | ARIES: Autonomous Reasoning with LLMs on Interactive Thought Graph Environments | ['Pedro Gimenes', 'Zeyu Cao', 'Jeffrey T. H. Wong', 'Yiren Zhao'] | 2025 | arXiv.org | 0 | 18 | ['Computer Science']
2502.21228 | ECLeKTic: a Novel Challenge Set for Evaluation of Cross-Lingual
Knowledge Transfer | ['Omer Goldman', 'Uri Shaham', 'Dan Malkin', 'Sivan Eiger', 'Avinatan Hassidim', 'Yossi Matias', 'Joshua Maynez', 'Adi Mayrav Gilady', 'Jason Riesa', 'Shruti Rijhwani', 'Laura Rimell', 'Idan Szpektor', 'Reut Tsarfaty', 'Matan Eyal'] | ['cs.CL', 'cs.AI'] | To achieve equitable performance across languages, multilingual large
language models (LLMs) must be able to abstract knowledge beyond the language
in which it was acquired. However, the current literature lacks reliable ways
to measure LLMs' capability of cross-lingual knowledge transfer. To that end,
we present ECLeK... | 2025-02-28T16:59:30Z | null | null | null | null | null | null | null | null | null | null |
2502.21257 | RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract
to Concrete | ['Yuheng Ji', 'Huajie Tan', 'Jiayu Shi', 'Xiaoshuai Hao', 'Yuan Zhang', 'Hengyuan Zhang', 'Pengwei Wang', 'Mengdi Zhao', 'Yao Mu', 'Pengju An', 'Xinda Xue', 'Qinghang Su', 'Huaihai Lyu', 'Xiaolong Zheng', 'Jiaming Liu', 'Zhongyuan Wang', 'Shanghang Zhang'] | ['cs.RO', 'cs.CV'] | Recent advancements in Multimodal Large Language Models (MLLMs) have shown
remarkable capabilities across various multimodal contexts. However, their
application in robotic scenarios, particularly for long-horizon manipulation
tasks, reveals significant limitations. These limitations arise from the
current MLLMs lackin... | null | null | null | RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete | ['Yuheng Ji', 'Huajie Tan', 'Jiayu Shi', 'Xiaoshuai Hao', 'Yuan Zhang', 'Hengyuan Zhang', 'Pengwei Wang', 'Mengdi Zhao', 'Yao Mu', 'Pengju An', 'Xinda Xue', 'Qinghang Su', 'Huaihai Lyu', 'Xiaolong Zheng', 'Jiaming Liu', 'Zhongyuan Wang', 'Shanghang Zhang'] | 2025 | Computer Vision and Pattern Recognition | 15 | 95 | ['Computer Science']
2502.21291 | MIGE: Mutually Enhanced Multimodal Instruction-Based Image Generation
and Editing | ['Xueyun Tian', 'Wei Li', 'Bingbing Xu', 'Yige Yuan', 'Yuanzhuo Wang', 'Huawei Shen'] | ['cs.CV'] | Despite significant progress in diffusion-based image generation,
subject-driven generation and instruction-based editing remain challenging.
Existing methods typically treat them separately, struggling with limited
high-quality data and poor generalization. However, both tasks require
capturing complex visual variatio... | 2025-02-28T18:21:08Z | This paper has been accepted by ACM MM25 | null | null | null | null | null | null | null | null | null
2502.21309 | FANformer: Improving Large Language Models Through Effective Periodicity
Modeling | ['Yihong Dong', 'Ge Li', 'Xue Jiang', 'Yongding Tao', 'Kechi Zhang', 'Hao Zhu', 'Huanyu Liu', 'Jiazheng Ding', 'Jia Li', 'Jinliang Deng', 'Hong Mei'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Periodicity, as one of the most important basic characteristics, lays the
foundation for facilitating structured knowledge acquisition and systematic
cognitive processes within human learning paradigms. However, the potential
flaws of periodicity modeling in Transformer affect the learning efficiency and
establishment ... | 2025-02-28T18:52:24Z | null | null | null | null | null | null | null | null | null | null |
2502.21318 | How far can we go with ImageNet for Text-to-Image generation? | ['L. Degeorge', 'A. Ghosh', 'N. Dufour', 'D. Picard', 'V. Kalogeiton'] | ['cs.CV'] | Recent text-to-image generation models have achieved remarkable results by
training on billion-scale datasets, following a `bigger is better' paradigm
that prioritizes data quantity over availability (closed vs open source) and
reproducibility (data decay vs established collections). We challenge this
established parad... | 2025-02-28T18:59:42Z | null | null | null | null | null | null | null | null | null | null |
2503.00031 | Efficient Test-Time Scaling via Self-Calibration | ['Chengsong Huang', 'Langlin Huang', 'Jixuan Leng', 'Jiacheng Liu', 'Jiaxin Huang'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Increasing test-time computation is a straightforward approach to enhancing
the quality of responses in Large Language Models (LLMs). While Best-of-N
sampling and Self-Consistency with majority voting are simple and effective,
they require a fixed number of sampling responses for each query, regardless of
its complexit... | 2025-02-25T00:21:14Z | null | null | null | null | null | null | null | null | null | null |
2503.00084 | InspireMusic: Integrating Super Resolution and Large Language Model for
High-Fidelity Long-Form Music Generation | ['Chong Zhang', 'Yukun Ma', 'Qian Chen', 'Wen Wang', 'Shengkui Zhao', 'Zexu Pan', 'Hao Wang', 'Chongjia Ni', 'Trung Hieu Nguyen', 'Kun Zhou', 'Yidi Jiang', 'Chaohong Tan', 'Zhifu Gao', 'Zhihao Du', 'Bin Ma'] | ['cs.SD', 'cs.AI', 'cs.CL', 'eess.AS'] | We introduce InspireMusic, a framework integrated super resolution and large
language model for high-fidelity long-form music generation. A unified
framework generates high-fidelity music, songs, and audio, which incorporates
an autoregressive transformer with a super-resolution flow-matching model. This
framework enab... | 2025-02-28T09:58:25Z | Work in progress. Correspondence regarding this technical report
should be directed to {chong.zhang, yukun.ma}@alibaba-inc.com. Online demo
available on https://modelscope.cn/studios/iic/InspireMusic and
https://huggingface.co/spaces/FunAudioLLM/InspireMusic | null | null | InspireMusic: Integrating Super Resolution and Large Language Model for High-Fidelity Long-Form Music Generation | ['Chong Zhang', 'Yukun Ma', 'Qian Chen', 'Wen Wang', 'Shengkui Zhao', 'Zexu Pan', 'Hao Wang', 'Chongjia Ni', 'Trung Hieu Nguyen', 'Kun Zhou', 'Yidi Jiang', 'Chaohong Tan', 'Zhifu Gao', 'Zhihao Du', 'Bin Ma'] | 2025 | arXiv.org | 1 | 24 | ['Computer Science', 'Engineering']
2503.00118 | Novel $|V_{cb}|$ extraction method via boosted $bc$-tagging with in-situ
calibration | ['Yuzhe Zhao', 'Congqiao Li', 'Antonios Agapitos', 'Dawei Fu', 'Leyun Gao', 'Yajun Mao', 'Qiang Li'] | ['hep-ph'] | We present a novel method for measuring $|V_{cb}|$ at the LHC using an
advanced boosted-jet tagger to identify "$bc$ signatures". By associating
boosted $W \rightarrow bc$ signals with $bc$-matched jets from top-quark
decays, we enable an in-situ calibration of the tagger. This approach
significantly suppresses backgro... | 2025-02-28T19:00:25Z | 7 pages (main text), 6 figures | null | null | null | null | null | null | null | null | null |
2503.00205 | AnalogGenie: A Generative Engine for Automatic Discovery of Analog
Circuit Topologies | ['Jian Gao', 'Weidong Cao', 'Junyi Yang', 'Xuan Zhang'] | ['cs.LG', 'cs.AR'] | The massive and large-scale design of foundational semiconductor integrated
circuits (ICs) is crucial to sustaining the advancement of many emerging and
future technologies, such as generative AI, 5G/6G, and quantum computing.
Excitingly, recent studies have shown the great capabilities of foundational
models in expedi... | 2025-02-28T21:41:20Z | ICLR 2025 camera ready | null | null | null | null | null | null | null | null | null |
2503.00211 | SafeAuto: Knowledge-Enhanced Safe Autonomous Driving with Multimodal
Foundation Models | ['Jiawei Zhang', 'Xuan Yang', 'Taiqi Wang', 'Yu Yao', 'Aleksandr Petiushko', 'Bo Li'] | ['cs.RO', 'cs.AI', 'cs.LG', 'cs.SY', 'eess.SY'] | Traditional autonomous driving systems often struggle to connect high-level
reasoning with low-level control, leading to suboptimal and sometimes unsafe
behaviors. Recent advances in multimodal large language models (MLLMs), which
process both visual and textual data, offer an opportunity to unify perception
and reason... | 2025-02-28T21:53:47Z | null | null | null | null | null | null | null | null | null | null |
2503.00223 | DeepRetrieval: Hacking Real Search Engines and Retrievers with Large
Language Models via Reinforcement Learning | ['Pengcheng Jiang', 'Jiacheng Lin', 'Lang Cao', 'Runchu Tian', 'SeongKu Kang', 'Zifeng Wang', 'Jimeng Sun', 'Jiawei Han'] | ['cs.IR'] | Information retrieval systems are crucial for enabling effective access to
large document collections. Recent approaches have leveraged Large Language
Models (LLMs) to enhance retrieval performance through query augmentation, but
often rely on expensive supervised learning or distillation techniques that
require signif... | null | null | DeepRetrieval: Hacking Real Search Engines and Retrievers with Large Language Models via Reinforcement Learning | ['Pengcheng Jiang', 'Jiacheng Lin', 'Lang Cao', 'Runchu Tian', 'SeongKu Kang', 'Zifeng Wang', 'Jimeng Sun', 'Jiawei Han'] | 2025 | arXiv.org | 13 | 118 | ['Computer Science']
2503.00287 | Passivity-Centric Safe Reinforcement Learning for Contact-Rich Robotic
Tasks | ['Heng Zhang', 'Gokhan Solak', 'Sebastian Hjorth', 'Arash Ajoudani'] | ['cs.RO'] | Reinforcement learning (RL) has achieved remarkable success in various
robotic tasks; however, its deployment in real-world scenarios, particularly in
contact-rich environments, often overlooks critical safety and stability
aspects. Policies without passivity guarantees can result in system
instability, posing risks to... | 2025-03-01T01:34:02Z | revision version | null | null | null | null | null | null | null | null | null |
2503.00329 | ABC: Achieving Better Control of Multimodal Embeddings using VLMs | ['Benjamin Schneider', 'Florian Kerschbaum', 'Wenhu Chen'] | ['cs.CV', 'cs.LG'] | Visual embedding models excel at zero-shot tasks like visual retrieval and
classification. However, these models cannot be used for tasks that contain
ambiguity or require user instruction. These tasks necessitate a multimodal
embedding model, which outputs embeddings that combine visual and natural
language input. Exi... | 2025-03-01T03:29:02Z | null | null | null | null | null | null | null | null | null | null |
2503.00332 | Investigating the contribution of terrain-following coordinates and
conservation schemes in AI-driven precipitation forecasts | ['Yingkai Sha', 'John S. Schreck', 'William Chapman', 'David John Gagne II'] | ['physics.ao-ph', 'cs.AI'] | Artificial Intelligence (AI) weather prediction (AIWP) models often produce
"blurry" precipitation forecasts that overestimate drizzle and underestimate
extremes. This study provides a novel solution to tackle this problem --
integrating terrain-following coordinates with global mass and energy
conservation schemes int... | null | null | Investigating the contribution of terrain-following coordinates and conservation schemes in AI-driven precipitation forecasts | ['Yingkai Sha', 'John S. Schreck', 'William Chapman', 'David John Gagne'] | 2025 | arXiv.org | 1 | 28 | ['Physics', 'Computer Science']
2503.00493 | LLaSE-G1: Incentivizing Generalization Capability for LLaMA-based Speech
Enhancement | ['Boyi Kang', 'Xinfa Zhu', 'Zihan Zhang', 'Zhen Ye', 'Mingshuai Liu', 'Ziqian Wang', 'Yike Zhu', 'Guobin Ma', 'Jun Chen', 'Longshuai Xiao', 'Chao Weng', 'Wei Xue', 'Lei Xie'] | ['eess.AS', 'cs.AI', 'cs.CL', 'cs.SD'] | Recent advancements in language models (LMs) have demonstrated strong
capabilities in semantic understanding and contextual modeling, which have
flourished in generative speech enhancement (SE). However, many LM-based SE
approaches primarily focus on semantic information, often neglecting the
critical role of acoustic ... | 2025-03-01T13:44:50Z | ACL2025 main, Codes available at
https://github.com/Kevin-naticl/LLaSE-G1 | null | null | null | null | null | null | null | null | null |
2503.00533 | BodyGen: Advancing Towards Efficient Embodiment Co-Design | ['Haofei Lu', 'Zhe Wu', 'Junliang Xing', 'Jianshu Li', 'Ruoyu Li', 'Zhe Li', 'Yuanchun Shi'] | ['cs.RO', 'cs.LG', 'cs.SY', 'eess.SY'] | Embodiment co-design aims to optimize a robot's morphology and control policy
simultaneously. While prior work has demonstrated its potential for generating
environment-adaptive robots, this field still faces persistent challenges in
optimization efficiency due to the (i) combinatorial nature of morphological
search sp... | 2025-03-01T15:25:42Z | ICLR 2025 (Spotlight). Project Page: https://genesisorigin.github.io,
Code: https://github.com/GenesisOrigin/BodyGen | null | null | BodyGen: Advancing Towards Efficient Embodiment Co-Design | ['Haofei Lu', 'Zhe Wu', 'Junliang Xing', 'Jianshu Li', 'Ruoyu Li', 'Zhe Li', 'Yuanchun Shi'] | 2025 | International Conference on Learning Representations | 2 | 46 | ['Computer Science']
2503.00564 | ToolDial: Multi-turn Dialogue Generation Method for Tool-Augmented
Language Models | ['Jeonghoon Shim', 'Gyuhyeon Seo', 'Cheongsu Lim', 'Yohan Jo'] | ['cs.CL'] | Tool-Augmented Language Models (TALMs) leverage external APIs to answer user
queries across various domains. However, existing benchmark datasets for TALM
research often feature simplistic dialogues that do not reflect real-world
scenarios, such as the need for models to ask clarifying questions or
proactively call add... | 2025-03-01T17:23:51Z | Accepted to ICLR 2025 | null | null | null | null | null | null | null | null | null |
2503.00735 | LADDER: Self-Improving LLMs Through Recursive Problem Decomposition | ['Toby Simonds', 'Akira Yoshiyama'] | ['cs.LG', 'cs.AI'] | We introduce LADDER (Learning through Autonomous Difficulty-Driven Example
Recursion), a framework which enables Large Language Models to autonomously
improve their problem-solving capabilities through self-guided learning by
recursively generating and solving progressively simpler variants of complex
problems. Unlike ... | 2025-03-02T05:16:43Z | null | null | null | LADDER: Self-Improving LLMs Through Recursive Problem Decomposition | ['Toby Simonds', 'Akira Yoshiyama'] | 2025 | arXiv.org | 6 | 9 | ['Computer Science'] |
2503.00803 | HiMo: High-Speed Objects Motion Compensation in Point Clouds | ['Qingwen Zhang', 'Ajinkya Khoche', 'Yi Yang', 'Li Ling', 'Sina Sharif Mansouri', 'Olov Andersson', 'Patric Jensfelt'] | ['cs.CV', 'cs.RO'] | LiDAR point cloud is essential for autonomous vehicles, but motion
distortions from dynamic objects degrade the data quality. While previous work
has considered distortions caused by ego motion, distortions caused by other
moving objects remain largely overlooked, leading to errors in object shape and
position. This di... | 2025-03-02T08:55:12Z | 12 pages | null | null | null | null | null | null | null | null | null |
2503.00808 | Predictive Data Selection: The Data That Predicts Is the Data That
Teaches | ['Kashun Shum', 'Yuzhen Huang', 'Hongjian Zou', 'Qi Ding', 'Yixuan Liao', 'Xiaoxin Chen', 'Qian Liu', 'Junxian He'] | ['cs.CL'] | Language model pretraining involves training on extensive corpora, where data
quality plays a pivotal role. In this work, we aim to directly estimate the
contribution of data during pretraining and select pretraining data in an
efficient manner. Specifically, we draw inspiration from recent findings
showing that compre... | 2025-03-02T09:21:28Z | 22 pages | null | null | Predictive Data Selection: The Data That Predicts Is the Data That Teaches | ['Kashun Shum', 'Yuzhen Huang', 'Hongjian Zou', 'Qi Ding', 'Yixuan Liao', 'Xiaoxin Chen', 'Qian Liu', 'Junxian He'] | 2025 | arXiv.org | 4 | 43 | ['Computer Science'] |
2503.00865 | Babel: Open Multilingual Large Language Models Serving Over 90% of
Global Speakers | ['Yiran Zhao', 'Chaoqun Liu', 'Yue Deng', 'Jiahao Ying', 'Mahani Aljunied', 'Zhaodonghui Li', 'Lidong Bing', 'Hou Pong Chan', 'Yu Rong', 'Deli Zhao', 'Wenxuan Zhang'] | ['cs.CL', 'cs.AI'] | Large language models (LLMs) have revolutionized natural language processing
(NLP), yet open-source multilingual LLMs remain scarce, with existing models
often limited in language coverage. Such models typically prioritize
well-resourced languages, while widely spoken but under-resourced languages are
often overlooked.... | 2025-03-02T11:53:55Z | null | null | null | Babel: Open Multilingual Large Language Models Serving Over 90% of Global Speakers | ['Yiran Zhao', 'Chaoqun Liu', 'Yue Deng', 'Jiahao Ying', 'Mahani Aljunied', 'Zhaodonghui Li', 'Li Bing', 'Hou Pong Chan', 'Yu Rong', 'Deli Zhao', 'Wenxuan Zhang'] | 2025 | arXiv.org | 6 | 26 | ['Computer Science'] |
2503.00938 | From Poses to Identity: Training-Free Person Re-Identification via
Feature Centralization | ['Chao Yuan', 'Guiwei Zhang', 'Changxiao Ma', 'Tianyi Zhang', 'Guanglin Niu'] | ['cs.CV'] | Person re-identification (ReID) aims to extract accurate identity
representation features. However, during feature extraction, individual samples
are inevitably affected by noise (background, occlusions, and model
limitations). Considering that features from the same identity follow a normal
distribution around identit... | 2025-03-02T15:31:48Z | null | null | null | null | null | null | null | null | null | null |
2503.00955 | SemViQA: A Semantic Question Answering System for Vietnamese Information
Fact-Checking | ['Dien X. Tran', 'Nam V. Nguyen', 'Thanh T. Tran', 'Anh T. Hoang', 'Tai V. Duong', 'Di T. Le', 'Phuc-Lu Le'] | ['cs.CL', 'cs.AI'] | The rise of misinformation, exacerbated by Large Language Models (LLMs) like
GPT and Gemini, demands robust fact-checking solutions, especially for
low-resource languages like Vietnamese. Existing methods struggle with semantic
ambiguity, homonyms, and complex linguistic structures, often trading accuracy
for efficienc... | 2025-03-02T16:22:46Z | 18 pages | null | null | null | null | null | null | null | null | null |
2503.00958 | Layered Insights: Generalizable Analysis of Authorial Style by
Leveraging All Transformer Layers | ['Milad Alshomary', 'Nikhil Reddy Varimalla', 'Vishal Anand', 'Smaranda Muresan', 'Kathleen McKeown'] | ['cs.CL'] | We propose a new approach for the authorship attribution task that leverages
the various linguistic representations learned at different layers of
pre-trained transformer-based models. We evaluate our approach on three
datasets, comparing it to a state-of-the-art baseline in in-domain and
out-of-domain scenarios. We fo... | 2025-03-02T16:47:31Z | null | null | null | null | null | null | null | null | null | null |
2503.00985 | Enhancing Text Editing for Grammatical Error Correction: Arabic as a
Case Study | ['Bashar Alhafni', 'Nizar Habash'] | ['cs.CL'] | Text editing frames grammatical error correction (GEC) as a sequence tagging
problem, where edit tags are assigned to input tokens, and applying these edits
results in the corrected text. This approach has gained attention for its
efficiency and interpretability. However, while extensively explored for
English, text ed... | 2025-03-02T18:48:50Z | null | null | null | Enhancing Text Editing for Grammatical Error Correction: Arabic as a Case Study | ['Bashar Alhafni', 'Nizar Habash'] | 2025 | arXiv.org | 2 | 89 | ['Computer Science'] |
2503.01103 | Direct Discriminative Optimization: Your Likelihood-Based Visual
Generative Model is Secretly a GAN Discriminator | ['Kaiwen Zheng', 'Yongxin Chen', 'Huayu Chen', 'Guande He', 'Ming-Yu Liu', 'Jun Zhu', 'Qinsheng Zhang'] | ['cs.CV', 'cs.LG'] | While likelihood-based generative models, particularly diffusion and
autoregressive models, have achieved remarkable fidelity in visual generation,
the maximum likelihood estimation (MLE) objective, which minimizes the forward
KL divergence, inherently suffers from a mode-covering tendency that limits the
generation qu... | 2025-03-03T02:06:22Z | ICML 2025 Spotlight Project Page:
https://research.nvidia.com/labs/dir/ddo/ Code: https://github.com/NVlabs/DDO | null | null | null | null | null | null | null | null | null |
2503.01151 | ReaderLM-v2: Small Language Model for HTML to Markdown and JSON | ['Feng Wang', 'Zesheng Shi', 'Bo Wang', 'Nan Wang', 'Han Xiao'] | ['cs.CL', 'cs.AI', 'cs.IR', '68T50', 'I.2.7; I.2.10'] | We present ReaderLM-v2, a compact 1.5 billion parameter language model
designed for efficient web content extraction. Our model processes documents up
to 512K tokens, transforming messy HTML into clean Markdown or JSON formats
with high accuracy -- making it an ideal tool for grounding large language
models. The model'... | 2025-03-03T03:57:04Z | 9 pages, 10-12 refs | null | null | null | null | null | null | null | null | null |
2503.01183 | DiffRhythm: Blazingly Fast and Embarrassingly Simple End-to-End
Full-Length Song Generation with Latent Diffusion | ['Ziqian Ning', 'Huakang Chen', 'Yuepeng Jiang', 'Chunbo Hao', 'Guobin Ma', 'Shuai Wang', 'Jixun Yao', 'Lei Xie'] | ['eess.AS'] | Recent advancements in music generation have garnered significant attention,
yet existing approaches face critical limitations. Some current generative
models can only synthesize either the vocal track or the accompaniment track.
While some models can generate combined vocal and accompaniment, they typically
rely on me... | 2025-03-03T05:15:34Z | null | null | null | null | null | null | null | null | null | null |
2503.01342 | UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended
Language Interface | ['Hao Tang', 'Chenwei Xie', 'Haiyang Wang', 'Xiaoyi Bao', 'Tingyu Weng', 'Pandeng Li', 'Yun Zheng', 'Liwei Wang'] | ['cs.CV'] | Generalist models have achieved remarkable success in both language and
vision-language tasks, showcasing the potential of unified modeling. However,
effectively integrating fine-grained perception tasks like detection and
segmentation into these models remains a significant challenge. This is
primarily because these t... | 2025-03-03T09:27:24Z | null | null | null | null | null | null | null | null | null | null |
2503.01370 | Kiss3DGen: Repurposing Image Diffusion Models for 3D Asset Generation | ['Jiantao Lin', 'Xin Yang', 'Meixi Chen', 'Yingjie Xu', 'Dongyu Yan', 'Leyi Wu', 'Xinli Xu', 'Lie XU', 'Shunsi Zhang', 'Ying-Cong Chen'] | ['cs.GR', 'cs.CV', 'cs.MM'] | Diffusion models have achieved great success in generating 2D images.
However, the quality and generalizability of 3D content generation remain
limited. State-of-the-art methods often require large-scale 3D assets for
training, which are challenging to collect. In this work, we introduce
Kiss3DGen (Keep It Simple and S... | 2025-03-03T10:07:19Z | The first three authors contributed equally to this work | null | null | null | null | null | null | null | null | null |
2503.01437 | Eau De $Q$-Network: Adaptive Distillation of Neural Networks in Deep
Reinforcement Learning | ['Théo Vincent', 'Tim Faust', 'Yogesh Tripathi', 'Jan Peters', "Carlo D'Eramo"] | ['cs.LG', 'cs.AI'] | Recent works have successfully demonstrated that sparse deep reinforcement
learning agents can be competitive against their dense counterparts. This opens
up opportunities for reinforcement learning applications in fields where
inference time and memory requirements are cost-sensitive or limited by
hardware. Until now,... | 2025-03-03T11:39:03Z | Published at RLC 2025:
https://openreview.net/forum?id=Bb84iBj4wU#discussion | null | null | Eau De Q-Network: Adaptive Distillation of Neural Networks in Deep Reinforcement Learning | ['Théo Vincent', 'T. Faust', 'Yogesh Tripathi', 'Jan Peters', "Carlo D'Eramo"] | 2025 | arXiv.org | 0 | 52 | ['Computer Science'] |
2503.01493 | Llama-3.1-Sherkala-8B-Chat: An Open Large Language Model for Kazakh | ['Fajri Koto', 'Rituraj Joshi', 'Nurdaulet Mukhituly', 'Yuxia Wang', 'Zhuohan Xie', 'Rahul Pal', 'Daniil Orel', 'Parvez Mullah', 'Diana Turmakhan', 'Maiya Goloburda', 'Mohammed Kamran', 'Samujjwal Ghosh', 'Bokang Jia', 'Jonibek Mansurov', 'Mukhammed Togmanov', 'Debopriyo Banerjee', 'Nurkhan Laiyk', 'Akhmed Sakip', 'Xud... | ['cs.CL'] | Llama-3.1-Sherkala-8B-Chat, or Sherkala-Chat (8B) for short, is a
state-of-the-art instruction-tuned open generative large language model (LLM)
designed for Kazakh. Sherkala-Chat (8B) aims to enhance the inclusivity of LLM
advancements for Kazakh speakers. Adapted from the LLaMA-3.1-8B model,
Sherkala-Chat (8B) is trai... | 2025-03-03T13:05:48Z | Technical Report | null | null | null | null | null | null | null | null | null |
2503.01496 | Liger: Linearizing Large Language Models to Gated Recurrent Structures | ['Disen Lan', 'Weigao Sun', 'Jiaxi Hu', 'Jusen Du', 'Yu Cheng'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Transformers with linear recurrent modeling offer linear-time training and
constant-memory inference. Despite their demonstrated efficiency and
performance, pretraining such non-standard architectures from scratch remains
costly and risky. The linearization of large language models (LLMs) transforms
pretrained standard... | 2025-03-03T13:08:00Z | Accepted by ICML 2025, 15 pages | null | null | null | null | null | null | null | null | null |
2503.01565 | AutoLUT: LUT-Based Image Super-Resolution with Automatic Sampling and
Adaptive Residual Learning | ['Yuheng Xu', 'Shijie Yang', 'Xin Liu', 'Jie Liu', 'Jie Tang', 'Gangshan Wu'] | ['cs.CV', 'eess.IV'] | In recent years, the increasing popularity of Hi-DPI screens has driven a
rising demand for high-resolution images. However, the limited computational
power of edge devices poses a challenge in deploying complex super-resolution
neural networks, highlighting the need for efficient methods. While prior works
have made s... | 2025-03-03T14:09:36Z | Accepted by CVPR2025 | null | null | null | null | null | null | null | null | null |
2503.01710 | Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with
Single-Stream Decoupled Speech Tokens | ['Xinsheng Wang', 'Mingqi Jiang', 'Ziyang Ma', 'Ziyu Zhang', 'Songxiang Liu', 'Linqin Li', 'Zheng Liang', 'Qixi Zheng', 'Rui Wang', 'Xiaoqin Feng', 'Weizhen Bian', 'Zhen Ye', 'Sitong Cheng', 'Ruibin Yuan', 'Zhixian Zhao', 'Xinfa Zhu', 'Jiahao Pan', 'Liumeng Xue', 'Pengcheng Zhu', 'Yunlin Chen', 'Zhifei Li', 'Xie Chen',... | ['cs.SD', 'cs.AI', 'eess.AS'] | Recent advancements in large language models (LLMs) have driven significant
progress in zero-shot text-to-speech (TTS) synthesis. However, existing
foundation models rely on multi-stage processing or complex architectures for
predicting multiple codebooks, limiting efficiency and integration flexibility.
To overcome th... | 2025-03-03T16:23:10Z | Submitted to ACL 2025 | null | null | null | null | null | null | null | null | null |
2503.01743 | Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language
Models via Mixture-of-LoRAs | ['Microsoft', ':', 'Abdelrahman Abouelenin', 'Atabak Ashfaq', 'Adam Atkinson', 'Hany Awadalla', 'Nguyen Bach', 'Jianmin Bao', 'Alon Benhaim', 'Martin Cai', 'Vishrav Chaudhary', 'Congcong Chen', 'Dong Chen', 'Dongdong Chen', 'Junkun Chen', 'Weizhu Chen', 'Yen-Chun Chen', 'Yi-ling Chen', 'Qi Dai', 'Xiyang Dai', 'Ruchao F... | ['cs.CL', 'cs.AI', 'cs.LG'] | We introduce Phi-4-Mini and Phi-4-Multimodal, compact yet highly capable
language and multimodal models. Phi-4-Mini is a 3.8-billion-parameter language
model trained on high-quality web and synthetic data, significantly
outperforming recent open-source models of similar size and matching the
performance of models twice... | 2025-03-03T17:05:52Z | 39 pages | null | null | Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language Models via Mixture-of-LoRAs | ['Abdelrahman Abouelenin', 'Atabak Ashfaq', 'Adam Atkinson', 'H. Awadalla', 'Nguyen Bach', 'Jianmin Bao', 'A. Benhaim', 'Martin Cai', 'Vishrav Chaudhary', 'Congcong Chen', 'Dongdong Chen', 'Dongdong Chen', 'Junkun Chen', 'Weizhu Chen', 'Yen-Chun Chen', 'Yi-ling Chen', 'Qi Dai', 'Xiyang Dai', 'Ruchao Fan', 'Mei Gao', 'M... | 2025 | arXiv.org | 71 | 105 | ['Computer Science'] |