arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2502.11187 | TituLLMs: A Family of Bangla LLMs with Comprehensive Benchmarking | ['Shahriar Kabir Nahin', 'Rabindra Nath Nandi', 'Sagor Sarker', 'Quazi Sarwar Muhtaseem', 'Md Kowsher', 'Apu Chandraw Shill', 'Md Ibrahim', 'Mehadi Hasan Menon', 'Tareq Al Muntasir', 'Firoj Alam'] | ['cs.CL', 'cs.AI', '68T50', 'F.2.2; I.2.7'] | In this paper, we present TituLLMs, the first large pretrained Bangla LLMs,
available in 1b and 3b parameter sizes. Due to computational constraints during
both training and inference, we focused on smaller models. To train TituLLMs,
we collected a pretraining dataset of approximately ~37 billion tokens. We
extended th... | 2025-02-16T16:22:23Z | LLMs, Benchmarking, Large Language Models, Bangla, BanglaLLMs | null | null | TituLLMs: A Family of Bangla LLMs with Comprehensive Benchmarking | ['Shahriar Kabir Nahin', 'R. N. Nandi', 'Sagor Sarker', 'Quazi Sarwar Muhtaseem', 'Md Kowsher', 'Apu Chandraw Shill', 'Md Ibrahim', 'Mehadi Hasan Menon', 'Tareq Al Muntasir', 'Firoj Alam'] | 2025 | arXiv.org | 0 | 86 | ['Computer Science'] |
2502.11191 | Primus: A Pioneering Collection of Open-Source Datasets for
Cybersecurity LLM Training | ['Yao-Ching Yu', 'Tsun-Han Chiang', 'Cheng-Wei Tsai', 'Chien-Ming Huang', 'Wen-Kwang Tsao'] | ['cs.CR', 'cs.AI', 'cs.CL'] | Large Language Models (LLMs) have shown remarkable advancements in
specialized fields such as finance, law, and medicine. However, in
cybersecurity, we have noticed a lack of open-source datasets, with a
particular lack of high-quality cybersecurity pretraining corpora, even though
much research indicates that LLMs acq... | 2025-02-16T16:34:49Z | null | null | null | null | null | null | null | null | null | null |
2502.11223 | Asymmetric Conflict and Synergy in Post-training for LLM-based
Multilingual Machine Translation | ['Tong Zheng', 'Yan Wen', 'Huiwen Bao', 'Junfeng Guo', 'Heng Huang'] | ['cs.CL'] | The emergence of Large Language Models (LLMs) has advanced the multilingual
machine translation (MMT), yet the Curse of Multilinguality (CoM) remains a
major challenge. Existing work in LLM-based MMT typically mitigates this issue
via scaling up training and computation budget, which raises a critical
question: Is scal... | 2025-02-16T18:06:58Z | 22 pages | null | null | null | null | null | null | null | null | null |
2502.11275 | Cuckoo: An IE Free Rider Hatched by Massive Nutrition in LLM's Nest | ['Letian Peng', 'Zilong Wang', 'Feng Yao', 'Jingbo Shang'] | ['cs.CL'] | Massive high-quality data, both pre-training raw texts and post-training
annotations, have been carefully prepared to incubate advanced large language
models (LLMs). In contrast, for information extraction (IE), pre-training data,
such as BIO-tagged sequences, are hard to scale up. We show that IE models can
act as fre... | 2025-02-16T21:32:20Z | null | null | null | null | null | null | null | null | null | null |
2502.11431 | Any Information Is Just Worth One Single Screenshot: Unifying Search
With Visualized Information Retrieval | ['Ze Liu', 'Zhengyang Liang', 'Junjie Zhou', 'Zheng Liu', 'Defu Lian'] | ['cs.CL'] | With the popularity of multimodal techniques, it receives growing interests
to acquire useful information in visual forms. In this work, we formally define
an emerging IR paradigm called \textit{Visualized Information Retrieval}, or
\textbf{Vis-IR}, where multimodal information, such as texts, images, tables
and charts... | 2025-02-17T04:40:15Z | null | null | null | null | null | null | null | null | null | null |
2502.11492 | Why Vision Language Models Struggle with Visual Arithmetic? Towards
Enhanced Chart and Geometry Understanding | ['Kung-Hsiang Huang', 'Can Qin', 'Haoyi Qiu', 'Philippe Laban', 'Shafiq Joty', 'Caiming Xiong', 'Chien-Sheng Wu'] | ['cs.AI', 'cs.CL', 'cs.CV'] | Vision Language Models (VLMs) have achieved remarkable progress in multimodal
tasks, yet they often struggle with visual arithmetic, seemingly simple
capabilities like object counting or length comparison, which are essential for
relevant complex tasks like chart understanding and geometric reasoning. In
this work, we ... | 2025-02-17T06:54:49Z | Code and data are available at
https://github.com/SalesforceAIResearch/CogAlign | null | null | Why Vision Language Models Struggle with Visual Arithmetic? Towards Enhanced Chart and Geometry Understanding | ['Kung-Hsiang Huang', 'Can Qin', 'Haoyi Qiu', 'Philippe Laban', 'Shafiq Joty', 'Caiming Xiong', 'Chien-Sheng Wu'] | 2025 | arXiv.org | 5 | 41 | ['Computer Science'] |
2502.11520 | AURORA:Automated Training Framework of Universal Process Reward Models
via Ensemble Prompting and Reverse Verification | ['Xiaoyu Tan', 'Tianchu Yao', 'Chao Qu', 'Bin Li', 'Minghao Yang', 'Dakuan Lu', 'Haozhe Wang', 'Xihe Qiu', 'Wei Chu', 'Yinghui Xu', 'Yuan Qi'] | ['cs.CL'] | The reasoning capabilities of advanced large language models (LLMs) like o1
have revolutionized artificial intelligence applications. Nevertheless,
evaluating and optimizing complex reasoning processes remain significant
challenges due to diverse policy distributions and the inherent limitations of
human effort and acc... | 2025-02-17T07:41:27Z | Under Review | null | null | AURORA:Automated Training Framework of Universal Process Reward Models via Ensemble Prompting and Reverse Verification | ['Xiaoyu Tan', 'Tianchu Yao', 'Chao Qu', 'Bin Li', 'Minghao Yang', 'Dakuan Lu', 'Haozhe Wang', 'Xihe Qiu', 'Wei Chu', 'Yinghui Xu', 'Yuan Qi'] | 2025 | arXiv.org | 2 | 54 | ['Computer Science'] |
2502.11537 | Uncovering Untapped Potential in Sample-Efficient World Model Agents | ['Lior Cohen', 'Kaixin Wang', 'Bingyi Kang', 'Uri Gadot', 'Shie Mannor'] | ['cs.LG', 'cs.AI'] | World model (WM) agents enable sample-efficient reinforcement learning by
learning policies entirely from simulated experience. However, existing
token-based world models (TBWMs) are limited to visual inputs and discrete
actions, restricting their adoption and applicability. Moreover, although both
intrinsic motivation... | 2025-02-17T08:06:10Z | null | null | null | Uncovering Untapped Potential in Sample-Efficient World Model Agents | ['Lior Cohen', 'Kaixin Wang', 'Bingyi Kang', 'Uri Gadot', 'Shie Mannor'] | 2025 | null | 0 | 62 | ['Computer Science'] |
2502.11689 | Improve LLM-as-a-Judge Ability as a General Ability | ['Jiachen Yu', 'Shaoning Sun', 'Xiaohui Hu', 'Jiaxu Yan', 'Kaidong Yu', 'Xuelong Li'] | ['cs.CL'] | LLM-as-a-Judge leverages the generative and reasoning capabilities of large
language models (LLMs) to evaluate LLM responses across diverse scenarios,
providing accurate preference signals. This approach plays a vital role in
aligning LLMs with human values, ensuring ethical and reliable AI outputs that
align with soci... | 2025-02-17T11:28:43Z | null | null | null | null | null | null | null | null | null | null |
2502.12025 | SafeChain: Safety of Language Models with Long Chain-of-Thought
Reasoning Capabilities | ['Fengqing Jiang', 'Zhangchen Xu', 'Yuetai Li', 'Luyao Niu', 'Zhen Xiang', 'Bo Li', 'Bill Yuchen Lin', 'Radha Poovendran'] | ['cs.AI', 'cs.CL'] | Emerging large reasoning models (LRMs), such as DeepSeek-R1 models, leverage
long chain-of-thought (CoT) reasoning to generate structured intermediate
steps, enhancing their reasoning capabilities. However, long CoT does not
inherently guarantee safe outputs, potentially leading to harmful consequences
such as the intr... | 2025-02-17T16:57:56Z | null | null | null | SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities | ['Fengqing Jiang', 'Zhangchen Xu', 'Yuetai Li', 'Luyao Niu', 'Zhen Xiang', 'Bo Li', 'Bill Yuchen Lin', 'Radha Poovendran'] | 2025 | arXiv.org | 28 | 44 | ['Computer Science'] |
2502.12080 | HumanGif: Single-View Human Diffusion with Generative Prior | ['Shoukang Hu', 'Takuya Narihira', 'Kazumi Fukuda', 'Ryosuke Sawata', 'Takashi Shibuya', 'Yuki Mitsufuji'] | ['cs.CV'] | Previous 3D human creation methods have made significant progress in
synthesizing view-consistent and temporally aligned results from sparse-view
images or monocular videos. However, it remains challenging to produce
perpetually realistic, view-consistent, and temporally coherent human avatars
from a single image, as l... | 2025-02-17T17:55:27Z | Project page: https://skhu101.github.io/HumanGif/ | null | null | null | null | null | null | null | null | null |
2502.12082 | AdaSplash: Adaptive Sparse Flash Attention | ['Nuno Gonçalves', 'Marcos Treviso', 'André F. T. Martins'] | ['cs.CL', 'cs.LG'] | The computational cost of softmax-based attention in transformers limits
their applicability to long-context tasks. Adaptive sparsity, of which
$\alpha$-entmax attention is an example, offers a flexible data-dependent
alternative, but existing implementations are inefficient and do not leverage
the sparsity to obtain r... | 2025-02-17T17:56:23Z | Accepted as spotlight in ICML 2025 | null | null | null | null | null | null | null | null | null |
2502.12130 | Scaling Autonomous Agents via Automatic Reward Modeling And Planning | ['Zhenfang Chen', 'Delin Chen', 'Rui Sun', 'Wenjun Liu', 'Chuang Gan'] | ['cs.AI'] | Large language models (LLMs) have demonstrated remarkable capabilities across
a range of text-generation tasks. However, LLMs still struggle with problems
requiring multi-step decision-making and environmental feedback, such as online
shopping, scientific reasoning, and mathematical problem-solving. Unlike pure
text da... | 2025-02-17T18:49:25Z | ICLR2025, Project page: https://armap-agent.github.io | null | null | null | null | null | null | null | null | null |
2502.12135 | MagicArticulate: Make Your 3D Models Articulation-Ready | ['Chaoyue Song', 'Jianfeng Zhang', 'Xiu Li', 'Fan Yang', 'Yiwen Chen', 'Zhongcong Xu', 'Jun Hao Liew', 'Xiaoyang Guo', 'Fayao Liu', 'Jiashi Feng', 'Guosheng Lin'] | ['cs.CV', 'cs.GR'] | With the explosive growth of 3D content creation, there is an increasing
demand for automatically converting static 3D models into articulation-ready
versions that support realistic animation. Traditional approaches rely heavily
on manual annotation, which is both time-consuming and labor-intensive.
Moreover, the lack ... | 2025-02-17T18:53:27Z | Project: https://chaoyuesong.github.io/MagicArticulate | null | null | null | null | null | null | null | null | null |
2502.12138 | FLARE: Feed-forward Geometry, Appearance and Camera Estimation from
Uncalibrated Sparse Views | ['Shangzhan Zhang', 'Jianyuan Wang', 'Yinghao Xu', 'Nan Xue', 'Christian Rupprecht', 'Xiaowei Zhou', 'Yujun Shen', 'Gordon Wetzstein'] | ['cs.CV'] | We present FLARE, a feed-forward model designed to infer high-quality camera
poses and 3D geometry from uncalibrated sparse-view images (i.e., as few as 2-8
inputs), which is a challenging yet practical setting in real-world
applications. Our solution features a cascaded learning paradigm with camera
pose serving as th... | 2025-02-17T18:54:05Z | CVPR 2025. Website: https://zhanghe3z.github.io/FLARE/ | null | null | null | null | null | null | null | null | null |
2502.12143 | Small Models Struggle to Learn from Strong Reasoners | ['Yuetai Li', 'Xiang Yue', 'Zhangchen Xu', 'Fengqing Jiang', 'Luyao Niu', 'Bill Yuchen Lin', 'Bhaskar Ramasubramanian', 'Radha Poovendran'] | ['cs.AI'] | Large language models (LLMs) excel in complex reasoning tasks, and distilling
their reasoning capabilities into smaller models has shown promise. However, we
uncover an interesting phenomenon, which we term the Small Model Learnability
Gap: small models ($\leq$3B parameters) do not consistently benefit from long
chain-... | 2025-02-17T18:56:15Z | null | null | null | null | null | null | null | null | null | null |
2502.12147 | Learning Smooth and Expressive Interatomic Potentials for Physical
Property Prediction | ['Xiang Fu', 'Brandon M. Wood', 'Luis Barroso-Luque', 'Daniel S. Levine', 'Meng Gao', 'Misko Dzamba', 'C. Lawrence Zitnick'] | ['physics.comp-ph', 'cs.LG'] | Machine learning interatomic potentials (MLIPs) have become increasingly
effective at approximating quantum mechanical calculations at a fraction of the
computational cost. However, lower errors on held out test sets do not always
translate to improved results on downstream physical property prediction tasks.
In this p... | 2025-02-17T18:57:32Z | 20 pages, 14 figures, 6 tables | null | null | null | null | null | null | null | null | null |
2502.12148 | HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and
Generation | ['Ling Yang', 'Xinchen Zhang', 'Ye Tian', 'Chenming Shang', 'Minghao Xu', 'Wentao Zhang', 'Bin Cui'] | ['cs.CV'] | The remarkable success of the autoregressive paradigm has made significant
advancement in Multimodal Large Language Models (MLLMs), with powerful models
like Show-o, Transfusion and Emu3 achieving notable progress in unified image
understanding and generation. For the first time, we uncover a common
phenomenon: the und... | 2025-02-17T18:57:51Z | Code: https://github.com/Gen-Verse/HermesFlow | null | null | null | null | null | null | null | null | null |
2502.12170 | MUDDFormer: Breaking Residual Bottlenecks in Transformers via Multiway
Dynamic Dense Connections | ['Da Xiao', 'Qingye Meng', 'Shengping Li', 'Xingyuan Yuan'] | ['cs.LG', 'cs.AI', 'cs.CL'] | We propose MUltiway Dynamic Dense (MUDD) connections, a simple yet effective
method to address the limitations of residual connections and enhance
cross-layer information flow in Transformers. Unlike existing dense connection
approaches with static and shared connection weights, MUDD generates connection
weights dynami... | 2025-02-13T10:26:27Z | Accepted to the 42nd International Conference on Machine Learning
(ICML'25) | null | null | null | null | null | null | null | null | null |
2502.12202 | To Think or Not to Think: Exploring the Unthinking Vulnerability in
Large Reasoning Models | ['Zihao Zhu', 'Hongbao Zhang', 'Ruotong Wang', 'Ke Xu', 'Siwei Lyu', 'Baoyuan Wu'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Large Reasoning Models (LRMs) are designed to solve complex tasks by
generating explicit reasoning traces before producing final answers. However,
we reveal a critical vulnerability in LRMs -- termed Unthinking Vulnerability
-- wherein the thinking process can be bypassed by manipulating special
delimiter tokens. It is... | 2025-02-16T10:45:56Z | 39 pages, 13 tables, 14 figures | null | null | null | null | null | null | null | null | null |
2502.12221 | ReF Decompile: Relabeling and Function Call Enhanced Decompile | ['Yunlong Feng', 'Bohan Li', 'Xiaoming Shi', 'Qingfu Zhu', 'Wanxiang Che'] | ['cs.SE'] | The goal of decompilation is to convert compiled low-level code (e.g.,
assembly code) back into high-level programming languages, enabling analysis in
scenarios where source code is unavailable. This task supports various reverse
engineering applications, such as vulnerability identification, malware
analysis, and lega... | 2025-02-17T12:38:57Z | null | null | null | null | null | null | null | null | null | null |
2502.12342 | REAL-MM-RAG: A Real-World Multi-Modal Retrieval Benchmark | ['Navve Wasserman', 'Roi Pony', 'Oshri Naparstek', 'Adi Raz Goldfarb', 'Eli Schwartz', 'Udi Barzelay', 'Leonid Karlinsky'] | ['cs.IR', 'cs.CV'] | Accurate multi-modal document retrieval is crucial for Retrieval-Augmented
Generation (RAG), yet existing benchmarks do not fully capture real-world
challenges with their current design. We introduce REAL-MM-RAG, an
automatically generated benchmark designed to address four key properties
essential for real-world retri... | 2025-02-17T22:10:47Z | null | null | null | REAL-MM-RAG: A Real-World Multi-Modal Retrieval Benchmark | ['Navve Wasserman', 'Roi Pony', 'O. Naparstek', 'Adi Raz Goldfarb', 'Eli Schwartz', 'Udi Barzelay', 'Leonid Karlinsky'] | 2025 | arXiv.org | 3 | 38 | ['Computer Science'] |
2502.12404 | WMT24++: Expanding the Language Coverage of WMT24 to 55 Languages &
Dialects | ['Daniel Deutsch', 'Eleftheria Briakou', 'Isaac Caswell', 'Mara Finkelstein', 'Rebecca Galor', 'Juraj Juraska', 'Geza Kovacs', 'Alison Lui', 'Ricardo Rei', 'Jason Riesa', 'Shruti Rijhwani', 'Parker Riley', 'Elizabeth Salesky', 'Firas Trabelsi', 'Stephanie Winkler', 'Biao Zhang', 'Markus Freitag'] | ['cs.CL'] | As large language models (LLM) become more and more capable in languages
other than English, it is important to collect benchmark datasets in order to
evaluate their multilingual performance, including on tasks like machine
translation (MT). In this work, we extend the WMT24 dataset to cover 55
languages by collecting ... | 2025-02-18T00:39:30Z | null | null | null | WMT24++: Expanding the Language Coverage of WMT24 to 55 Languages & Dialects | ['Daniel Deutsch', 'Eleftheria Briakou', 'Isaac Caswell', 'Mara Finkelstein', 'Rebecca Galor', 'Juraj Juraska', 'Geza Kovacs', 'Alison Lui', 'Ricardo Rei', 'Jason Riesa', 'Shruti Rijhwani', 'Parker Riley', 'Elizabeth Salesky', 'Firas Trabelsi', 'Stephanie Winkler', 'Biao Zhang', 'Markus Freitag'] | 2025 | arXiv.org | 11 | 0 | ['Computer Science'] |
2502.12485 | Safe at the Margins: A General Approach to Safety Alignment in
Low-Resource English Languages -- A Singlish Case Study | ['Isaac Lim', 'Shaun Khoo', 'Roy Ka-Wei Lee', 'Watson Chua', 'Jia Yi Goh', 'Jessica Foo'] | ['cs.CL', 'cs.AI'] | Ensuring the safety of Large Language Models (LLMs) in diverse linguistic
settings remains challenging, particularly for low-resource languages. Existing
safety alignment methods are English-centric, limiting their effectiveness. We
systematically compare Supervised Fine-Tuning (SFT), Direct Preference
Optimization (DP... | 2025-02-18T03:11:06Z | null | null | null | Safe at the Margins: A General Approach to Safety Alignment in Low-Resource English Languages - A Singlish Case Study | ['Isaac Lim', 'Shaun Khoo', 'W. Chua', 'Goh Jiayi', 'Jessica Foo'] | 2025 | arXiv.org | 0 | 27 | ['Computer Science'] |
2502.12524 | YOLOv12: Attention-Centric Real-Time Object Detectors | ['Yunjie Tian', 'Qixiang Ye', 'David Doermann'] | ['cs.CV', 'cs.AI'] | Enhancing the network architecture of the YOLO framework has been crucial for
a long time, but has focused on CNN-based improvements despite the proven
superiority of attention mechanisms in modeling capabilities. This is because
attention-based models cannot match the speed of CNN-based models. This paper
proposes an ... | 2025-02-18T04:20:14Z | https://github.com/sunsmarterjie/yolov12 | null | null | null | null | null | null | null | null | null |
2502.12572 | TechSinger: Technique Controllable Multilingual Singing Voice Synthesis
via Flow Matching | ['Wenxiang Guo', 'Yu Zhang', 'Changhao Pan', 'Rongjie Huang', 'Li Tang', 'Ruiqi Li', 'Zhiqing Hong', 'Yongqi Wang', 'Zhou Zhao'] | ['cs.SD'] | Singing voice synthesis has made remarkable progress in generating natural
and high-quality voices. However, existing methods rarely provide precise
control over vocal techniques such as intensity, mixed voice, falsetto, bubble,
and breathy tones, thus limiting the expressive potential of synthetic voices.
We introduce... | 2025-02-18T06:25:07Z | Accepted by AAAI 2025 | null | null | null | null | null | null | null | null | null |
2502.12579 | CHATS: Combining Human-Aligned Optimization and Test-Time Sampling for
Text-to-Image Generation | ['Minghao Fu', 'Guo-Hua Wang', 'Liangfu Cao', 'Qing-Guo Chen', 'Zhao Xu', 'Weihua Luo', 'Kaifu Zhang'] | ['cs.CV'] | Diffusion models have emerged as a dominant approach for text-to-image
generation. Key components such as the human preference alignment and
classifier-free guidance play a crucial role in ensuring generation quality.
However, their independent application in current text-to-image models
continues to face significant c... | 2025-02-18T06:31:08Z | ICML 2025. The code is publicly available at
https://github.com/AIDC-AI/CHATS | null | null | null | null | null | null | null | null | null |
2502.12614 | Label Drop for Multi-Aspect Relation Modeling in Universal Information
Extraction | ['Lu Yang', 'Jiajia Li', 'En Ci', 'Lefei Zhang', 'Zuchao Li', 'Ping Wang'] | ['cs.CL', 'cs.AI'] | Universal Information Extraction (UIE) has garnered significant attention due
to its ability to address model explosion problems effectively. Extractive UIE
can achieve strong performance using a relatively small model, making it widely
adopted. Extractive UIEs generally rely on task instructions for different
tasks, i... | 2025-02-18T07:53:26Z | Accepted to NAACL-main 2025 | null | null | null | null | null | null | null | null | null |
2502.12671 | Baichuan-M1: Pushing the Medical Capability of Large Language Models | ['Bingning Wang', 'Haizhou Zhao', 'Huozhi Zhou', 'Liang Song', 'Mingyu Xu', 'Wei Cheng', 'Xiangrong Zeng', 'Yupeng Zhang', 'Yuqi Huo', 'Zecheng Wang', 'Zhengyun Zhao', 'Da Pan', 'Fei Kou', 'Fei Li', 'Fuzhong Chen', 'Guosheng Dong', 'Han Liu', 'Hongda Zhang', 'Jin He', 'Jinjie Yang', 'Kangxi Wu', 'Kegeng Wu', 'Lei Su', ... | ['cs.CL'] | The current generation of large language models (LLMs) is typically designed
for broad, general-purpose applications, while domain-specific LLMs, especially
in vertical fields like medicine, remain relatively scarce. In particular, the
development of highly efficient and practical LLMs for the medical domain is
challen... | 2025-02-18T09:21:12Z | 33 pages, technical report | null | null | Baichuan-M1: Pushing the Medical Capability of Large Language Models | ['Bingning Wang', 'Haizhou Zhao', 'Huozhi Zhou', 'Liang Song', 'Mingyu Xu', 'Wei Cheng', 'Xiangrong Zeng', 'Yupeng Zhang', 'Yuqi Huo', 'Zecheng Wang', 'Zhengyun Zhao', 'Da Pan', 'Fan Yang', 'Fei Kou', 'Fei Li', 'Fuzhong Chen', 'Guosheng Dong', 'Han Liu', 'Hongda Zhang', 'Jin He', 'Jinjie Yang', 'Kangxi Wu', 'Kegeng Wu'... | 2025 | arXiv.org | 10 | 74 | ['Computer Science'] |
2502.12759 | High-Fidelity Music Vocoder using Neural Audio Codecs | ['Luca A. Lanzendörfer', 'Florian Grötschla', 'Michael Ungersböck', 'Roger Wattenhofer'] | ['cs.SD', 'cs.LG'] | While neural vocoders have made significant progress in high-fidelity speech
synthesis, their application on polyphonic music has remained underexplored. In
this work, we propose DisCoder, a neural vocoder that leverages a generative
adversarial encoder-decoder architecture informed by a neural audio codec to
reconstru... | 2025-02-18T11:25:46Z | Accepted at ICASSP 2025 | null | null | High-Fidelity Music Vocoder using Neural Audio Codecs | ['Luca A. Lanzendörfer', 'Florian Grötschla', 'Michael Ungersböck', 'R. Wattenhofer'] | 2025 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 1 | 0 | ['Computer Science'] |
2502.12835 | Subword models struggle with word learning, but surprisal hides it | ['Bastian Bunzeck', 'Sina Zarrieß'] | ['cs.CL'] | We study word learning in subword and character language models with the
psycholinguistic lexical decision task. While subword LMs struggle to discern
words and non-words with high accuracy, character LMs solve this task easily
and consistently. Only when supplied with further contexts do subword LMs
perform similarly ... | 2025-02-18T13:09:16Z | Accepted to ACL 2025 (Main) | null | null | null | null | null | null | null | null | null |
2502.12892 | Archetypal SAE: Adaptive and Stable Dictionary Learning for Concept
Extraction in Large Vision Models | ['Thomas Fel', 'Ekdeep Singh Lubana', 'Jacob S. Prince', 'Matthew Kowal', 'Victor Boutin', 'Isabel Papadimitriou', 'Binxu Wang', 'Martin Wattenberg', 'Demba Ba', 'Talia Konkle'] | ['cs.CV'] | Sparse Autoencoders (SAEs) have emerged as a powerful framework for machine
learning interpretability, enabling the unsupervised decomposition of model
representations into a dictionary of abstract, human-interpretable concepts.
However, we reveal a fundamental limitation: existing SAEs exhibit severe
instability, as i... | 2025-02-18T14:29:11Z | null | Proceedings of the 42nd International Conference on Machine
Learning (ICML), 2025 | null | Archetypal SAE: Adaptive and Stable Dictionary Learning for Concept Extraction in Large Vision Models | ['Thomas Fel', 'Ekdeep Singh Lubana', 'Jacob S. Prince', 'Matthew Kowal', 'Victor Boutin', 'Isabel Papadimitriou', 'Binxu Wang', 'Martin Wattenberg', 'Demba Ba', 'Talia Konkle'] | 2025 | arXiv.org | 8 | 173 | ['Computer Science'] |
2502.12900 | Soundwave: Less is More for Speech-Text Alignment in LLMs | ['Yuhao Zhang', 'Zhiheng Liu', 'Fan Bu', 'Ruiyu Zhang', 'Benyou Wang', 'Haizhou Li'] | ['cs.CL', 'cs.AI', 'cs.SD'] | Existing end-to-end speech large language models (LLMs) usually rely on
large-scale annotated data for training, while data-efficient training has not
been discussed in depth. We focus on two fundamental problems between speech
and text: the representation space gap and sequence length inconsistency. We
propose Soundwa... | 2025-02-18T14:36:39Z | null | null | null | null | null | null | null | null | null | null |
2502.12982 | Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs | ['Longxu Dou', 'Qian Liu', 'Fan Zhou', 'Changyu Chen', 'Zili Wang', 'Ziqi Jin', 'Zichen Liu', 'Tongyao Zhu', 'Cunxiao Du', 'Penghui Yang', 'Haonan Wang', 'Jiaheng Liu', 'Yongchi Zhao', 'Xiachong Feng', 'Xin Mao', 'Man Tsung Yeung', 'Kunat Pipatanakul', 'Fajri Koto', 'Min Si Thu', 'Hynek Kydlíček', 'Zeyi Liu', 'Qunshu L... | ['cs.CL', 'cs.AI', 'cs.LG'] | Sailor2 is a family of cutting-edge multilingual language models for
South-East Asian (SEA) languages, available in 1B, 8B, and 20B sizes to suit
diverse applications. Building on Qwen2.5, Sailor2 undergoes continuous
pre-training on 500B tokens (400B SEA-specific and 100B replay tokens) to
support 13 SEA languages whi... | 2025-02-18T16:04:57Z | 49 pages, 16 figures. Technical Report of Sailor2:
https://sea-sailor.github.io/blog/sailor2/ | null | null | null | null | null | null | null | null | null |
2502.12990 | Artificial Intelligence-derived Vascular Age from Photoplethysmography:
A Novel Digital Biomarker for Cardiovascular Health | ['Guangkun Nie', 'Qinghao Zhao', 'Gongzheng Tang', 'Yaxin Li', 'Shenda Hong'] | ['eess.SP'] | With the increasing availability of wearable devices, photoplethysmography
(PPG) has emerged as a promising non-invasive tool for monitoring human
hemodynamics. We propose a deep learning framework to estimate vascular age
(AI-vascular age) from PPG signals, incorporating a distribution-aware loss to
address biases cau... | 2025-02-18T16:12:28Z | null | null | null | null | null | null | null | null | null | null |
2502.13061 | Robust Adaptation of Large Multimodal Models for Retrieval Augmented
Hateful Meme Detection | ['Jingbiao Mei', 'Jinghong Chen', 'Guangyu Yang', 'Weizhe Lin', 'Bill Byrne'] | ['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG'] | Hateful memes have become a significant concern on the Internet,
necessitating robust automated detection systems. While LMMs have shown promise
in hateful meme detection, they face notable challenges like sub-optimal
performance and limited out-of-domain generalization capabilities. Recent
studies further reveal the l... | 2025-02-18T17:07:29Z | Preprint. Under Review | null | null | Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection | ['Jingbiao Mei', 'Jinghong Chen', 'Guangyu Yang', 'Weizhe Lin', 'Bill Byrne'] | 2025 | null | 0 | 51 | ['Computer Science'] |
2502.13130 | Magma: A Foundation Model for Multimodal AI Agents | ['Jianwei Yang', 'Reuben Tan', 'Qianhui Wu', 'Ruijie Zheng', 'Baolin Peng', 'Yongyuan Liang', 'Yu Gu', 'Mu Cai', 'Seonghyeon Ye', 'Joel Jang', 'Yuquan Deng', 'Lars Liden', 'Jianfeng Gao'] | ['cs.CV', 'cs.AI', 'cs.HC', 'cs.LG', 'cs.RO'] | We present Magma, a foundation model that serves multimodal AI agentic tasks
in both the digital and physical worlds. Magma is a significant extension of
vision-language (VL) models in that it not only retains the VL understanding
ability (verbal intelligence) of the latter, but is also equipped with the
ability to pla... | 2025-02-18T18:55:21Z | 29 pages, 16 figures, technical report from MSR | null | null | null | null | null | null | null | null | null |
2502.13138 | AIDE: AI-Driven Exploration in the Space of Code | ['Zhengyao Jiang', 'Dominik Schmidt', 'Dhruv Srikanth', 'Dixing Xu', 'Ian Kaplan', 'Deniss Jacenko', 'Yuxiang Wu'] | ['cs.AI', 'cs.LG'] | Machine learning, the foundation of modern artificial intelligence, has
driven innovations that have fundamentally transformed the world. Yet, behind
advancements lies a complex and often tedious process requiring labor and
compute intensive iteration and experimentation. Engineers and scientists
developing machine lea... | 2025-02-18T18:57:21Z | null | null | null | null | null | null | null | null | null | null |
2502.13143 | SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and
Object Manipulation | ['Zekun Qi', 'Wenyao Zhang', 'Yufei Ding', 'Runpei Dong', 'Xinqiang Yu', 'Jingwen Li', 'Lingyun Xu', 'Baoyu Li', 'Xialin He', 'Guofan Fan', 'Jiazhao Zhang', 'Jiawei He', 'Jiayuan Gu', 'Xin Jin', 'Kaisheng Ma', 'Zhizheng Zhang', 'He Wang', 'Li Yi'] | ['cs.RO', 'cs.AI', 'cs.CV'] | Spatial intelligence is a critical component of embodied AI, promoting robots
to understand and interact with their environments. While recent advances have
enhanced the ability of VLMs to perceive object locations and positional
relationships, they still lack the capability to precisely understand object
orientations-... | 2025-02-18T18:59:02Z | Project page: https://qizekun.github.io/sofar/ | null | null | SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation | ['Zekun Qi', 'Wenyao Zhang', 'Yufei Ding', 'Runpei Dong', 'Xinqiang Yu', 'Jingwen Li', 'Lingyun Xu', 'Baoyu Li', 'Xialin He', 'Guo Fan', 'Jiazhao Zhang', 'Jiawei He', 'Jiayuan Gu', 'Xin Jin', 'Kaisheng Ma', 'Zhizheng Zhang', 'He Wang', 'Li Yi'] | 2025 | arXiv.org | 7 | 170 | ['Computer Science'] |
2502.13145 | Multimodal Mamba: Decoder-only Multimodal State Space Model via
Quadratic to Linear Distillation | ['Bencheng Liao', 'Hongyuan Tao', 'Qian Zhang', 'Tianheng Cheng', 'Yingyue Li', 'Haoran Yin', 'Wenyu Liu', 'Xinggang Wang'] | ['cs.CV'] | Recent Multimodal Large Language Models (MLLMs) have achieved remarkable
performance but face deployment challenges due to their quadratic computational
complexity, growing Key-Value cache requirements, and reliance on separate
vision encoders. We propose mmMamba, a framework for developing
linear-complexity native mul... | 2025-02-18T18:59:57Z | Code and model are available at https://github.com/hustvl/mmMamba | null | null | null | null | null | null | null | null | null |
2502.13167 | SmartLLM: Smart Contract Auditing using Custom Generative AI | ['Jun Kevin', 'Pujianto Yugopuspito'] | ['cs.CR', 'cs.AI'] | Smart contracts are essential to decentralized finance (DeFi) and blockchain
ecosystems but are increasingly vulnerable to exploits due to coding errors and
complex attack vectors. Traditional static analysis tools and existing
vulnerability detection methods often fail to address these challenges
comprehensively, lead... | 2025-02-17T06:22:05Z | null | null | null | null | null | null | null | null | null | null |
2502.13252 | Multilingual Language Model Pretraining using Machine-translated Data | ['Jiayi Wang', 'Yao Lu', 'Maurice Weber', 'Max Ryabinin', 'David Adelani', 'Yihong Chen', 'Raphael Tang', 'Pontus Stenetorp'] | ['cs.CL'] | High-resource languages such as English, enables the pretraining of
high-quality large language models (LLMs). The same can not be said for most
other languages as LLMs still underperform for non-English languages, likely
due to a gap in the quality and diversity of the available multilingual
pretraining corpora. In th... | 2025-02-18T19:27:53Z | null | null | null | null | null | null | null | null | null | null |
2502.13398 | GeLLMO: Generalizing Large Language Models for Multi-property Molecule
Optimization | ['Vishal Dey', 'Xiao Hu', 'Xia Ning'] | ['cs.LG', 'cs.AI', 'cs.CL', 'physics.chem-ph', 'q-bio.QM'] | Despite recent advancements, most computational methods for molecule
optimization are constrained to single- or double-property optimization tasks
and suffer from poor scalability and generalizability to novel optimization
tasks. Meanwhile, Large Language Models (LLMs) demonstrate remarkable
out-of-domain generalizabil... | 2025-02-19T03:14:11Z | Accepted to ACL Main 2025. Vishal Dey and Xiao Hu contributed equally
to this paper | null | null | GeLLM3O: Generalizing Large Language Models for Multi-property Molecule Optimization | ['Vishal Dey', 'Xiao Hu', 'Xia Ning'] | 2025 | arXiv.org | 4 | 69 | ['Computer Science', 'Physics', 'Biology'] |
2502.13449 | Mol-LLaMA: Towards General Understanding of Molecules in Large Molecular
Language Model | ['Dongki Kim', 'Wonbin Lee', 'Sung Ju Hwang'] | ['cs.LG', 'physics.chem-ph'] | Understanding molecules is key to understanding organisms and driving
advances in drug discovery, requiring interdisciplinary knowledge across
chemistry and biology. Although large molecular language models have achieved
notable success in task transfer, they often struggle to accurately analyze
molecular features due ... | 2025-02-19T05:49:10Z | Project Page: https://mol-llama.github.io/ | null | null | null | null | null | null | null | null | null |
2502.13458 | ThinkGuard: Deliberative Slow Thinking Leads to Cautious Guardrails | ['Xiaofei Wen', 'Wenxuan Zhou', 'Wenjie Jacky Mo', 'Muhao Chen'] | ['cs.CL', 'cs.AI', 'cs.CR', 'cs.LG'] | Ensuring the safety of large language models (LLMs) is critical as they are
deployed in real-world applications. Existing guardrails rely on rule-based
filtering or single-pass classification, limiting their ability to handle
nuanced safety violations. To address this, we propose ThinkGuard, a
critique-augmented guardr... | 2025-02-19T06:09:58Z | ACL 2025 | null | null | ThinkGuard: Deliberative Slow Thinking Leads to Cautious Guardrails | ['Xiaofei Wen', 'Wenxuan Zhou', 'W. Mo', 'Muhao Chen'] | 2025 | arXiv.org | 7 | 42 | ['Computer Science'] |
2502.13502 | PLDR-LLMs Learn A Generalizable Tensor Operator That Can Replace Its Own
Deep Neural Net At Inference | ['Burc Gokden'] | ['cs.CL', 'cs.AI', 'cs.LG'] | We show that Large Language Model from Power Law Decoder Representations
(PLDR-LLM) is a foundational model whose deductive outputs are invariant
tensors up to a small perturbation. PLDR-LLM learns a singularity condition for
the deductive outputs that enable the once-inferred energy-curvature tensor
$\mathbf{G}_{LM}$ ... | 2025-02-19T07:43:36Z | 15 pages, 1 figure, 12 tables, more ablation data included | null | null | null | null | null | null | null | null | null |
2502.13520 | A Large and Balanced Corpus for Fine-grained Arabic Readability
Assessment | ['Khalid N. Elmadani', 'Nizar Habash', 'Hanada Taha-Thomure'] | ['cs.CL'] | This paper introduces the Balanced Arabic Readability Evaluation Corpus
(BAREC), a large-scale, fine-grained dataset for Arabic readability assessment.
BAREC consists of 69,441 sentences spanning 1+ million words, carefully curated
to cover 19 readability levels, from kindergarten to postgraduate
comprehension. The cor... | 2025-02-19T08:16:11Z | Accepted at ACL 2025 Findings | null | null | null | null | null | null | null | null | null |
2502.13603 | Efficient Safety Retrofitting Against Jailbreaking for LLMs | ['Dario Garcia-Gasulla', 'Adrian Tormos', 'Anna Arias-Duart', 'Daniel Hinjos', 'Oscar Molina-Sedano', 'Ashwin Kumar Gururajan', 'Maria Eugenia Cardello'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Direct Preference Optimization (DPO) is an efficient alignment technique that
steers LLMs towards preferable outputs by training on preference data,
bypassing the need for explicit reward models. Its simplicity enables easy
adaptation to various domains and safety requirements. This paper examines
DPO's effectiveness i... | 2025-02-19T10:33:18Z | null | null | Efficient Safety Retrofitting Against Jailbreaking for LLMs | ['Dario Garcia-Gasulla', 'Adrián Tormos', 'Anna Arias-Duart', 'Daniel Hinjos', 'Oscar Molina-Sedano', 'Ashwin Kumar Gururajan', 'Maria Eugenia Cardello'] | 2025 | arXiv.org | 0 | 61 | ['Computer Science'] |
2502.13656 | Refining Sentence Embedding Model through Ranking Sentences Generation
with Large Language Models | ['Liyang He', 'Chenglong Liu', 'Rui Li', 'Zhenya Huang', 'Shulan Ruan', 'Jun Zhou', 'Enhong Chen'] | ['cs.CL'] | Sentence embedding is essential for many NLP tasks, with contrastive learning
methods achieving strong performance using annotated datasets like NLI. Yet,
the reliance on manual labels limits scalability. Recent studies leverage large
language models (LLMs) to generate sentence pairs, reducing annotation
dependency. Ho... | 2025-02-19T12:07:53Z | null | null | Refining Sentence Embedding Model through Ranking Sentences Generation with Large Language Models | ['Liyang He', 'Chenglong Liu', 'Rui Li', 'Zhenya Huang', 'Shulan Ruan', 'Jun Zhou', 'Enhong Chen'] | 2025 | arXiv.org | 1 | 65 | ['Computer Science'] |
2502.13685 | MoM: Linear Sequence Modeling with Mixture-of-Memories | ['Jusen Du', 'Weigao Sun', 'Disen Lan', 'Jiaxi Hu', 'Yu Cheng'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Linear sequence modeling methods, such as linear attention, state space
modeling, and linear RNNs, offer significant efficiency improvements by
reducing the complexity of training and inference. However, these methods
typically compress the entire input sequence into a single fixed-size memory
state, which leads to sub... | 2025-02-19T12:53:55Z | Technical report, 16 pages | null | null | null | null | null | null | null | null | null |
2502.13759 | Geolocation with Real Human Gameplay Data: A Large-Scale Dataset and
Human-Like Reasoning Framework | ['Zirui Song', 'Jingpu Yang', 'Yuan Huang', 'Jonathan Tonglet', 'Zeyu Zhang', 'Tao Cheng', 'Meng Fang', 'Iryna Gurevych', 'Xiuying Chen'] | ['cs.CV'] | Geolocation, the task of identifying an image's location, requires complex
reasoning and is crucial for navigation, monitoring, and cultural preservation.
However, current methods often produce coarse, imprecise, and non-interpretable
localization. A major challenge lies in the quality and scale of existing
geolocation... | 2025-02-19T14:21:25Z | Update new version | null | null | Geolocation with Real Human Gameplay Data: A Large-Scale Dataset and Human-Like Reasoning Framework | ['Zirui Song', 'Jingpu Yang', 'Yuan Huang', 'Jonathan Tonglet', 'Zeyu Zhang', 'Tao Cheng', 'Meng Fang', 'Iryna Gurevych', 'Xiuying Chen'] | 2025 | arXiv.org | 5 | 64 | ['Computer Science'] |
2502.13785 | Helix-mRNA: A Hybrid Foundation Model For Full Sequence mRNA
Therapeutics | ['Matthew Wood', 'Mathieu Klop', 'Maxime Allard'] | ['q-bio.GN', 'cs.AI'] | mRNA-based vaccines have become a major focus in the pharmaceutical industry.
The coding sequence as well as the Untranslated Regions (UTRs) of an mRNA can
strongly influence translation efficiency, stability, degradation, and other
factors that collectively determine a vaccine's effectiveness. However,
optimizing mRNA... | 2025-02-19T14:51:41Z | 8 pages, 3 figures, 3 tables | null | null | null | null | null | null | null | null | null |
2502.13898 | GroundCap: A Visually Grounded Image Captioning Dataset | ['Daniel A. P. Oliveira', 'Lourenço Teodoro', 'David Martins de Matos'] | ['cs.CV', 'cs.CL', 'I.2.10; I.2.7'] | Current image captioning systems lack the ability to link descriptive text to
specific visual elements, making their outputs difficult to verify. While
recent approaches offer some grounding capabilities, they cannot track object
identities across multiple references or ground both actions and objects
simultaneously. W... | 2025-02-19T17:31:59Z | 37 pages | null | null | null | null | null | null | null | null | null |
2502.13917 | TESS 2: A Large-Scale Generalist Diffusion Language Model | ['Jaesung Tae', 'Hamish Ivison', 'Sachin Kumar', 'Arman Cohan'] | ['cs.CL'] | We introduce TESS 2, a general instruction-following diffusion language model
that outperforms contemporary instruction-tuned diffusion models, as well as
matches and sometimes exceeds strong autoregressive (AR) models. We train TESS
2 by first adapting a strong AR model via continued pretraining with the usual
cross-e... | 2025-02-19T17:50:31Z | ACL 2025 camera-ready | null | null | TESS 2: A Large-Scale Generalist Diffusion Language Model | ['Jaesung Tae', 'Hamish Ivison', 'Sachin Kumar', 'Arman Cohan'] | 2025 | arXiv.org | 1 | 60 | ['Computer Science'] |
2502.13922 | LongPO: Long Context Self-Evolution of Large Language Models through
Short-to-Long Preference Optimization | ['Guanzheng Chen', 'Xin Li', 'Michael Qizhe Shieh', 'Lidong Bing'] | ['cs.CL', 'cs.LG'] | Large Language Models (LLMs) have demonstrated remarkable capabilities
through pretraining and alignment. However, superior short-context LLMs may
underperform in long-context scenarios due to insufficient long-context
alignment. This alignment process remains challenging due to the impracticality
of human annotation f... | 2025-02-19T17:59:03Z | ICLR 2025 | null | null | null | null | null | null | null | null | null |
2502.13923 | Qwen2.5-VL Technical Report | ['Shuai Bai', 'Keqin Chen', 'Xuejing Liu', 'Jialin Wang', 'Wenbin Ge', 'Sibo Song', 'Kai Dang', 'Peng Wang', 'Shijie Wang', 'Jun Tang', 'Humen Zhong', 'Yuanzhi Zhu', 'Mingkun Yang', 'Zhaohai Li', 'Jianqiang Wan', 'Pengfei Wang', 'Wei Ding', 'Zheren Fu', 'Yiheng Xu', 'Jiabo Ye', 'Xi Zhang', 'Tianbao Xie', 'Zesen Cheng',... | ['cs.CV', 'cs.CL'] | We introduce Qwen2.5-VL, the latest flagship model of Qwen vision-language
series, which demonstrates significant advancements in both foundational
capabilities and innovative functionalities. Qwen2.5-VL achieves a major leap
forward in understanding and interacting with the world through enhanced visual
recognition, p... | 2025-02-19T18:00:14Z | null | null | null | null | null | null | null | null | null | null |
2502.13967 | FlexTok: Resampling Images into 1D Token Sequences of Flexible Length | ['Roman Bachmann', 'Jesse Allardice', 'David Mizrahi', 'Enrico Fini', 'Oğuzhan Fatih Kar', 'Elmira Amirloo', 'Alaaeldin El-Nouby', 'Amir Zamir', 'Afshin Dehghan'] | ['cs.CV', 'cs.LG'] | Image tokenization has enabled major advances in autoregressive image
generation by providing compressed, discrete representations that are more
efficient to process than raw pixels. While traditional approaches use 2D grid
tokenization, recent methods like TiTok have shown that 1D tokenization can
achieve high generat... | 2025-02-19T18:59:44Z | ICML 2025. Project page at https://flextok.epfl.ch/ | null | null | null | null | null | null | null | null | null |
2502.13990 | Remote Sensing Semantic Segmentation Quality Assessment based on Vision
Language Model | ['Huiying Shi', 'Zhihong Tan', 'Zhihan Zhang', 'Hongchen Wei', 'Yaosi Hu', 'Yingxue Zhang', 'Zhenzhong Chen'] | ['eess.IV', 'cs.LG'] | The complexity of scenes and variations in image quality result in
significant variability in the performance of semantic segmentation methods of
remote sensing imagery (RSI) in supervised real-world scenarios. This makes the
evaluation of semantic segmentation quality in such scenarios an issue to be
resolved. However... | 2025-02-19T02:28:12Z | 16 pages,6 figures | null | null | null | null | null | null | null | null | null |
2502.13991 | Learning to Discover Regulatory Elements for Gene Expression Prediction | ['Xingyu Su', 'Haiyang Yu', 'Degui Zhi', 'Shuiwang Ji'] | ['q-bio.GN', 'cs.AI'] | We consider the problem of predicting gene expressions from DNA sequences. A
key challenge of this task is to find the regulatory elements that control gene
expressions. Here, we introduce Seq2Exp, a Sequence to Expression network
explicitly designed to discover and extract regulatory elements that drive
target gene ex... | 2025-02-19T03:25:49Z | null | null | null | null | null | null | null | null | null | null |
2502.13995 | FantasyID: Face Knowledge Enhanced ID-Preserving Video Generation | ['Yunpeng Zhang', 'Qiang Wang', 'Fan Jiang', 'Yaqi Fan', 'Mu Xu', 'Yonggang Qi'] | ['cs.GR', 'cs.CV'] | Tuning-free approaches adapting large-scale pre-trained video diffusion
models for identity-preserving text-to-video generation (IPT2V) have gained
popularity recently due to their efficacy and scalability. However, significant
challenges remain to achieve satisfied facial dynamics while keeping the
identity unchanged.... | 2025-02-19T06:50:27Z | null | null | null | FantasyID: Face Knowledge Enhanced ID-Preserving Video Generation | ['Yunpeng Zhang', 'Qiang Wang', 'Fan Jiang', 'Yaqi Fan', 'Mu Xu', 'Yonggang Qi'] | 2025 | arXiv.org | 4 | 0 | ['Computer Science'] |
2502.14044 | Enhancing Cognition and Explainability of Multimodal Foundation Models
with Self-Synthesized Data | ['Yucheng Shi', 'Quanzheng Li', 'Jin Sun', 'Xiang Li', 'Ninghao Liu'] | ['cs.CV', 'cs.LG'] | Large Multimodal Models (LMMs), or Vision-Language Models (VLMs), have shown
impressive capabilities in a wide range of visual tasks. However, they often
struggle with fine-grained visual reasoning, failing to identify
domain-specific objectives and provide justifiable explanations for their
predictions. To address the... | 2025-02-19T19:05:45Z | Accepted by ICLR 2025. Code: https://github.com/sycny/SelfSynthX | null | null | Enhancing Cognition and Explainability of Multimodal Foundation Models with Self-Synthesized Data | ['Yucheng Shi', 'Quanzheng Li', 'Jin Sun', 'Xiang Li', 'Ninghao Liu'] | 2025 | International Conference on Learning Representations | 3 | 55 | ['Computer Science'] |
2502.14301 | SEA-HELM: Southeast Asian Holistic Evaluation of Language Models | ['Yosephine Susanto', 'Adithya Venkatadri Hulagadri', 'Jann Railey Montalan', 'Jian Gang Ngui', 'Xian Bin Yong', 'Weiqi Leong', 'Hamsawardhini Rengarajan', 'Peerat Limkonchotiwat', 'Yifan Mai', 'William Chandra Tjhi'] | ['cs.CL', 'cs.AI'] | With the rapid emergence of novel capabilities in Large Language Models
(LLMs), the need for rigorous multilingual and multicultural benchmarks that
are integrated has become more pronounced. Though existing LLM benchmarks are
capable of evaluating specific capabilities of LLMs in English as well as in
various mid- to ... | 2025-02-20T06:32:45Z | null | null | null | SEA-HELM: Southeast Asian Holistic Evaluation of Language Models | ['Yosephine Susanto', 'Adithya Venkatadri Hulagadri', 'J. Montalan', 'Jian Gang Ngui', 'Xian Bin Yong', 'Weiqi Leong', 'Hamsawardhini Rengarajan', 'Peerat Limkonchotiwat', 'Yifan Mai', 'William-Chandra Tjhi'] | 2025 | arXiv.org | 0 | 106 | ['Computer Science'] |
2502.14377 | RelaCtrl: Relevance-Guided Efficient Control for Diffusion Transformers | ['Ke Cao', 'Jing Wang', 'Ao Ma', 'Jiasong Feng', 'Zhanjie Zhang', 'Xuanhua He', 'Shanyuan Liu', 'Bo Cheng', 'Dawei Leng', 'Yuhui Yin', 'Jie Zhang'] | ['cs.CV'] | The Diffusion Transformer plays a pivotal role in advancing text-to-image and
text-to-video generation, owing primarily to its inherent scalability. However,
existing controlled diffusion transformer methods incur significant parameter
and computational overheads and suffer from inefficient resource allocation due
to t... | 2025-02-20T09:10:05Z | Homepage: https://360cvgroup.github.io/RelaCtrl/ Github:
https://github.com/360CVGroup/RelaCtrl | null | null | RelaCtrl: Relevance-Guided Efficient Control for Diffusion Transformers | ['Ke Cao', 'Jing Wang', 'Ao Ma', 'Jiasong Feng', 'Zhanjie Zhang', 'Xuanhua He', 'Shanyuan Liu', 'Bo Cheng', 'Dawei Leng', 'Yuhui Yin', 'Jie Zhang'] | 2025 | arXiv.org | 4 | 37 | ['Computer Science'] |
2502.14429 | Early-Exit and Instant Confidence Translation Quality Estimation | ['Vilém Zouhar', 'Maike Züfle', 'Beni Egressy', 'Julius Cheng', 'Mrinmaya Sachan', 'Jan Niehues'] | ['cs.CL'] | Quality estimation is omnipresent in machine translation, for both evaluation
and generation. Unfortunately, quality estimation models are often opaque and
computationally expensive, making them impractical to be part of large-scale
pipelines. In this work, we tackle two connected challenges: (1) reducing the
cost of q... | 2025-02-20T10:27:13Z | null | null | null | Early-Exit and Instant Confidence Translation Quality Estimation | ['Vilém Zouhar', 'Maike Zufle', 'Béni Egressy', 'Julius Cheng', 'Jan Niehues'] | 2025 | arXiv.org | 1 | 0 | ['Computer Science'] |
2502.14458 | Llamba: Scaling Distilled Recurrent Models for Efficient Language
Processing | ['Aviv Bick', 'Tobias Katsch', 'Nimit Sohoni', 'Arjun Desai', 'Albert Gu'] | ['cs.LG', 'cs.AI'] | We introduce Llamba, a family of efficient recurrent language models
distilled from Llama-3.x into the Mamba architecture. The series includes
Llamba-1B, Llamba-3B, and Llamba-8B, which achieve higher inference throughput
and handle significantly larger batch sizes than Transformer-based models while
maintaining compar... | 2025-02-20T11:18:39Z | null | null | null | null | null | null | null | null | null | null |
2502.14502 | How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM? | ['Sergey Pletenev', 'Maria Marina', 'Daniil Moskovskiy', 'Vasily Konovalov', 'Pavel Braslavski', 'Alexander Panchenko', 'Mikhail Salnikov'] | ['cs.CL'] | The performance of Large Language Models (LLMs) on many tasks is greatly
limited by the knowledge learned during pre-training and stored in the model's
parameters. Low-rank adaptation (LoRA) is a popular and efficient training
technique for updating or domain-specific adaptation of LLMs. In this study, we
investigate h... | 2025-02-20T12:31:03Z | null | null | null | How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM? | ['Sergey Pletenev', 'Maria Marina', 'Daniil Moskovskiy', 'Vasily Konovalov', 'Pavel Braslavski', 'Alexander Panchenko', 'M. Salnikov'] | 2025 | North American Chapter of the Association for Computational Linguistics | 1 | 34 | ['Computer Science'] |
2502.14561 | Can LLMs Predict Citation Intent? An Experimental Analysis of In-context
Learning and Fine-tuning on Open LLMs | ['Paris Koloveas', 'Serafeim Chatzopoulos', 'Thanasis Vergoulis', 'Christos Tryfonopoulos'] | ['cs.CL', 'cs.DL'] | This work investigates the ability of open Large Language Models (LLMs) to
predict citation intent through in-context learning and fine-tuning. Unlike
traditional approaches relying on domain-specific pre-trained models like
SciBERT, we demonstrate that general-purpose LLMs can be adapted to this task
with minimal task... | 2025-02-20T13:45:42Z | null | null | null | null | null | null | null | null | null | null |
2502.14637 | ReQFlow: Rectified Quaternion Flow for Efficient and High-Quality
Protein Backbone Generation | ['Angxiao Yue', 'Zichong Wang', 'Hongteng Xu'] | ['cs.LG', 'cs.AI'] | Protein backbone generation plays a central role in de novo protein design
and is significant for many biological and medical applications. Although
diffusion and flow-based generative models provide potential solutions to this
challenging task, they often generate proteins with undesired designability and
suffer compu... | 2025-02-20T15:20:37Z | null | null | null | null | null | null | null | null | null | null |
2502.14638 | NAVIG: Natural Language-guided Analysis with Vision Language Models for
Image Geo-localization | ['Zheyuan Zhang', 'Runze Li', 'Tasnim Kabir', 'Jordan Boyd-Graber'] | ['cs.CL', 'cs.CV'] | Image geo-localization is the task of predicting the specific location of an
image and requires complex reasoning across visual, geographical, and cultural
contexts. While prior Vision Language Models (VLMs) have the best accuracy at
this task, there is a dearth of high-quality datasets and models for analytical
reason... | 2025-02-20T15:21:35Z | null | null | null | null | null | null | null | null | null | null |
2502.14669 | AlphaMaze: Enhancing Large Language Models' Spatial Intelligence via
GRPO | ['Alan Dao', 'Dinh Bach Vu'] | ['cs.CL'] | Large Language Models (LLMs) have demonstrated impressive capabilities in
language processing, yet they often struggle with tasks requiring genuine
visual spatial reasoning. In this paper, we introduce a novel two-stage
training framework designed to equip standard LLMs with visual reasoning
abilities for maze navigati... | 2025-02-20T16:05:18Z | null | null | null | AlphaMaze: Enhancing Large Language Models' Spatial Intelligence via GRPO | ['Alan Dao', 'Dinh Bach Vu'] | 2025 | arXiv.org | 4 | 20 | ['Computer Science'] |
2502.14673 | ChunkFormer: Masked Chunking Conformer For Long-Form Speech
Transcription | ['Khanh Le', 'Tuan Vu Ho', 'Dung Tran', 'Duc Thanh Chau'] | ['cs.SD', 'eess.AS'] | Deploying ASR models at an industrial scale poses significant challenges in
hardware resource management, especially for long-form transcription tasks
where audio may last for hours. Large Conformer models, despite their
capabilities, are limited to processing only 15 minutes of audio on an 80GB
GPU. Furthermore, varia... | 2025-02-20T16:06:06Z | Accepted to ICASSP 2025 | null | null | null | null | null | null | null | null | null |
2502.14706 | Building reliable sim driving agents by scaling self-play | ['Daphne Cornelisse', 'Aarav Pandya', 'Kevin Joseph', 'Joseph Suárez', 'Eugene Vinitsky'] | ['cs.AI', 'cs.RO'] | Simulation agents are essential for designing and testing systems that
interact with humans, such as autonomous vehicles (AVs). These agents serve
various purposes, from benchmarking AV performance to stress-testing system
limits, but all applications share one key requirement: reliability. To enable
sound experimentat... | 2025-02-20T16:30:45Z | v3 | null | null | Building reliable sim driving agents by scaling self-play | ['Daphne Cornelisse', 'Aarav Pandya', 'Kevin Joseph', 'Joseph Suárez', 'Eugene Vinitsky'] | 2025 | arXiv.org | 1 | 42 | ['Computer Science'] |
2502.14753 | MedVAE: Efficient Automated Interpretation of Medical Images with
Large-Scale Generalizable Autoencoders | ['Maya Varma', 'Ashwin Kumar', 'Rogier van der Sluijs', 'Sophie Ostmeier', 'Louis Blankemeier', 'Pierre Chambon', 'Christian Bluethgen', 'Jip Prince', 'Curtis Langlotz', 'Akshay Chaudhari'] | ['eess.IV', 'cs.AI', 'cs.CV'] | Medical images are acquired at high resolutions with large fields of view in
order to capture fine-grained features necessary for clinical decision-making.
Consequently, training deep learning models on medical images can incur large
computational costs. In this work, we address the challenge of downsizing
medical imag... | 2025-02-20T17:24:06Z | MIDL 2025 (Oral) | null | null | null | null | null | null | null | null | null |
2502.14786 | SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic
Understanding, Localization, and Dense Features | ['Michael Tschannen', 'Alexey Gritsenko', 'Xiao Wang', 'Muhammad Ferjad Naeem', 'Ibrahim Alabdulmohsin', 'Nikhil Parthasarathy', 'Talfan Evans', 'Lucas Beyer', 'Ye Xia', 'Basil Mustafa', 'Olivier Hénaff', 'Jeremiah Harmsen', 'Andreas Steiner', 'Xiaohua Zhai'] | ['cs.CV', 'cs.AI'] | We introduce SigLIP 2, a family of new multilingual vision-language encoders
that build on the success of the original SigLIP. In this second iteration, we
extend the original image-text training objective with several prior,
independently developed techniques into a unified recipe -- this includes
captioning-based pre... | 2025-02-20T18:08:29Z | Model checkpoints are available at
https://github.com/google-research/big_vision/tree/main/big_vision/configs/proj/image_text/README_siglip2.md | null | null | SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features | ['Michael Tschannen', 'Alexey Gritsenko', 'Xiao Wang', 'M. Naeem', 'Ibrahim M. Alabdulmohsin', 'Nikhil Parthasarathy', 'Talfan Evans', 'Lucas Beyer', 'Ye Xia', 'Basil Mustafa', 'Olivier Hénaff', 'Jeremiah Harmsen', 'A. Steiner', 'Xiao-Qi Zhai'] | 2025 | arXiv.org | 80 | 0 | ['Computer Science'] |
2502.14830 | Middle-Layer Representation Alignment for Cross-Lingual Transfer in
Fine-Tuned LLMs | ['Danni Liu', 'Jan Niehues'] | ['cs.CL', 'cs.AI'] | While large language models demonstrate remarkable capabilities at
task-specific applications through fine-tuning, extending these benefits across
diverse languages is essential for broad accessibility. However, effective
cross-lingual transfer is hindered by LLM performance gaps across languages and
the scarcity of fi... | 2025-02-20T18:45:43Z | ACL 2025 | null | null | null | null | null | null | null | null | null |
2502.14837 | Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent
Attention in Any Transformer-based LLMs | ['Tao Ji', 'Bin Guo', 'Yuanbin Wu', 'Qipeng Guo', 'Lixing Shen', 'Zhan Chen', 'Xipeng Qiu', 'Qi Zhang', 'Tao Gui'] | ['cs.CL', 'cs.AI'] | Multi-head Latent Attention (MLA) is an innovative architecture proposed by
DeepSeek, designed to ensure efficient and economical inference by
significantly compressing the Key-Value (KV) cache into a latent vector.
Compared to MLA, standard LLMs employing Multi-Head Attention (MHA) and its
variants such as Grouped-Que... | 2025-02-20T18:50:42Z | 16 pages, 8 figures | null | null | null | null | null | null | null | null | null |
2502.14854 | CLIPPER: Compression enables long-context synthetic data generation | ['Chau Minh Pham', 'Yapei Chang', 'Mohit Iyyer'] | ['cs.CL'] | LLM developers are increasingly reliant on synthetic data, but generating
high-quality data for complex long-context reasoning tasks remains challenging.
We introduce CLIPPER, a compression-based approach for generating synthetic
data tailored to narrative claim verification - a task that requires reasoning
over a book... | 2025-02-20T18:58:03Z | null | null | CLIPPER: Compression enables long-context synthetic data generation | ['Chau Minh Pham', 'Yapei Chang', 'Mohit Iyyer'] | 2025 | arXiv.org | 1 | 59 | ['Computer Science'] |
2502.14855 | Prompt-to-Leaderboard | ['Evan Frick', 'Connor Chen', 'Joseph Tennyson', 'Tianle Li', 'Wei-Lin Chiang', 'Anastasios N. Angelopoulos', 'Ion Stoica'] | ['cs.LG', 'cs.CL'] | Large language model (LLM) evaluations typically rely on aggregated metrics
like accuracy or human preference, averaging across users and prompts. This
averaging obscures user- and prompt-specific variations in model performance.
To address this, we propose Prompt-to-Leaderboard (P2L), a method that produces
leaderboar... | 2025-02-20T18:58:07Z | null | null | Prompt-to-Leaderboard | ['Evan Frick', 'Connor Chen', 'Joseph Tennyson', 'Tianle Li', 'Wei-Lin Chiang', 'Anastasios N. Angelopoulos', 'Ion Stoica'] | 2025 | arXiv.org | 7 | 42 | ['Computer Science'] |
2502.14856 | FR-Spec: Accelerating Large-Vocabulary Language Models via
Frequency-Ranked Speculative Sampling | ['Weilin Zhao', 'Tengyu Pan', 'Xu Han', 'Yudi Zhang', 'Ao Sun', 'Yuxiang Huang', 'Kaihuo Zhang', 'Weilun Zhao', 'Yuxuan Li', 'Jianyong Wang', 'Zhiyuan Liu', 'Maosong Sun'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Speculative sampling has emerged as an important technique for accelerating
the auto-regressive generation process of large language models (LLMs) by
utilizing a draft-then-verify mechanism to produce multiple tokens per forward
pass. While state-of-the-art speculative sampling methods use only a single
layer and a lan... | 2025-02-20T18:58:10Z | null | null | FR-Spec: Accelerating Large-Vocabulary Language Models via Frequency-Ranked Speculative Sampling | ['Weilin Zhao', 'Tengyu Pan', 'Xu Han', 'Yudi Zhang', 'Ao Sun', 'Yuxiang Huang', 'Kaihuo Zhang', 'Weilun Zhao', 'Yuxuan Li', 'Jianyong Wang', 'Zhiyuan Liu', 'Maosong Sun'] | 2025 | arXiv.org | 2 | 49 | ['Computer Science'] |
2502.14907 | GneissWeb: Preparing High Quality Data for LLMs at Scale | ['Hajar Emami Gohari', 'Swanand Ravindra Kadhe', 'Syed Yousaf Shah', 'Constantin Adam', 'Abdulhamid Adebayo', 'Praneet Adusumilli', 'Farhan Ahmed', 'Nathalie Baracaldo Angel', 'Santosh Borse', 'Yuan-Chi Chang', 'Xuan-Hong Dang', 'Nirmit Desai', 'Ravital Eres', 'Ran Iwamoto', 'Alexei Karve', 'Yan Koyfman', 'Wei-Han Lee', ... | ['cs.CL', 'cs.AI'] | Data quantity and quality play a vital role in determining the performance of
Large Language Models (LLMs). High-quality data, in particular, can
significantly boost the LLM's ability to generalize on a wide range of
downstream tasks. Large pre-training datasets for leading LLMs remain
inaccessible to the public, where... | 2025-02-19T00:14:29Z | null | null | null | null | null | null | null | null | null | null |
2502.15011 | CrossOver: 3D Scene Cross-Modal Alignment | ['Sayan Deb Sarkar', 'Ondrej Miksik', 'Marc Pollefeys', 'Daniel Barath', 'Iro Armeni'] | ['cs.CV'] | Multi-modal 3D object understanding has gained significant attention, yet
current approaches often assume complete data availability and rigid alignment
across all modalities. We present CrossOver, a novel framework for cross-modal
3D scene understanding via flexible, scene-level modality alignment. Unlike
traditional ... | 2025-02-20T20:05:30Z | Project Page: https://sayands.github.io/crossover/ | null | null | CrossOver: 3D Scene Cross-Modal Alignment | ['Sayan Deb Sarkar', 'O. Mikšík', 'Marc Pollefeys', 'Daniel Barath', 'Iro Armeni'] | 2025 | Computer Vision and Pattern Recognition | 2 | 45 | ['Computer Science'] |
2502.15167 | M3-AGIQA: Multimodal, Multi-Round, Multi-Aspect AI-Generated Image
Quality Assessment | ['Chuan Cui', 'Kejiang Chen', 'Zhihua Wei', 'Wen Shen', 'Weiming Zhang', 'Nenghai Yu'] | ['cs.CV'] | The rapid advancement of AI-generated image (AIGI) models presents new
challenges for evaluating image quality, particularly across three aspects:
perceptual quality, prompt correspondence, and authenticity. To address these
challenges, we introduce M3-AGIQA, a comprehensive framework that leverages
Multimodal Large La... | 2025-02-21T03:05:45Z | 24 pages. This work has been submitted to the ACM for possible
publication | null | null | M3-AGIQA: Multimodal, Multi-Round, Multi-Aspect AI-Generated Image Quality Assessment | ['Chuan Cui', 'Kejiang Chen', 'Zhihua Wei', 'Wen Shen', 'Weiming Zhang', 'Neng H. Yu'] | 2025 | arXiv.org | 0 | 63 | ['Computer Science'] |
2502.15168 | mStyleDistance: Multilingual Style Embeddings and their Evaluation | ['Justin Qiu', 'Jiacheng Zhu', 'Ajay Patel', 'Marianna Apidianaki', 'Chris Callison-Burch'] | ['cs.CL'] | Style embeddings are useful for stylistic analysis and style transfer;
however, only English style embeddings have been made available. We introduce
Multilingual StyleDistance (mStyleDistance), a multilingual style embedding
model trained using synthetic data and contrastive learning. We train the model
on data from ni... | 2025-02-21T03:11:41Z | arXiv admin note: substantial text overlap with arXiv:2410.12757 | null | null | null | null | null | null | null | null | null |
2502.15392 | Chitrarth: Bridging Vision and Language for a Billion People | ['Shaharukh Khan', 'Ayush Tarun', 'Abhinav Ravi', 'Ali Faraz', 'Akshat Patidar', 'Praveen Kumar Pokala', 'Anagha Bhangare', 'Raja Kolla', 'Chandra Khatri', 'Shubham Agarwal'] | ['cs.AI', 'cs.CL', 'cs.CV'] | Recent multimodal foundation models are primarily trained on English or high
resource European language data, which hinders their applicability to other
medium and low-resource languages. To address this limitation, we introduce
Chitrarth (Chitra: Image; Artha: Meaning), an inclusive Vision-Language Model
(VLM), specif... | 2025-02-21T11:38:40Z | null | null | null | Chitrarth: Bridging Vision and Language for a Billion People | ['Shaharukh Khan', 'Ayush K Tarun', 'Abhinav Ravi', 'Ali Faraz', 'Akshat Patidar', 'Praveen Pokala', 'Anagha Bhangare', 'Raja Kolla', 'Chandra Khatri', 'Shubham Agarwal'] | 2,025 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 1 | 54 | ['Computer Science'] |
2502.15429 | Pub-Guard-LLM: Detecting Fraudulent Biomedical Articles with Reliable
Explanations | ['Lihu Chen', 'Shuojie Fu', 'Gabriel Freedman', 'Cemre Zor', 'Guy Martin', 'James Kinross', 'Uddhav Vaghela', 'Ovidiu Serban', 'Francesca Toni'] | ['cs.CL'] | A significant and growing number of published scientific articles is found to
involve fraudulent practices, posing a serious threat to the credibility and
safety of research in fields such as medicine. We propose Pub-Guard-LLM, the
first large language model-based system tailored to fraud detection of
biomedical scient... | 2025-02-21T12:54:56Z | long paper under review | null | null | null | null | null | null | null | null | null |
2502.15543 | ParamMute: Suppressing Knowledge-Critical FFNs for Faithful
Retrieval-Augmented Generation | ['Pengcheng Huang', 'Zhenghao Liu', 'Yukun Yan', 'Haiyan Zhao', 'Xiaoyuan Yi', 'Hao Chen', 'Zhiyuan Liu', 'Maosong Sun', 'Tong Xiao', 'Ge Yu', 'Chenyan Xiong'] | ['cs.CL', 'cs.AI'] | Large language models (LLMs) integrated with retrieval-augmented generation
(RAG) have improved factuality by grounding outputs in external evidence.
However, they remain susceptible to unfaithful generation, where outputs
contradict retrieved context despite its relevance and accuracy. Existing
approaches aiming to im... | 2025-02-21T15:50:41Z | 22 pages, 7 figures, 7 tables | null | null | ParamMute: Suppressing Knowledge-Critical FFNs for Faithful Retrieval-Augmented Generation | ['Pengcheng Huang', 'Zhenghao Liu', 'Yukun Yan', 'Xiaoyuan Yi', 'Hao Chen', 'Zhiyuan Liu', 'Maosong Sun', 'Tong Xiao', 'Ge Yu', 'Chenyan Xiong'] | 2,025 | null | 2 | 62 | ['Computer Science'] |
2502.15589 | LightThinker: Thinking Step-by-Step Compression | ['Jintian Zhang', 'Yuqi Zhu', 'Mengshu Sun', 'Yujie Luo', 'Shuofei Qiao', 'Lun Du', 'Da Zheng', 'Huajun Chen', 'Ningyu Zhang'] | ['cs.CL', 'cs.AI', 'cs.IR', 'cs.LG', 'cs.MM'] | Large language models (LLMs) have shown remarkable performance in complex
reasoning tasks, but their efficiency is hindered by the substantial memory and
computational costs associated with generating lengthy tokens. In this paper,
we propose LightThinker, a novel method that enables LLMs to dynamically
compress interm... | 2025-02-21T16:57:22Z | null | null | null | LightThinker: Thinking Step-by-Step Compression | ['Jintian Zhang', 'Yuqi Zhu', 'Mengshu Sun', 'Yujie Luo', 'Shuofei Qiao', 'Lun Du', 'Da Zheng', 'Huajun Chen', 'Ningyu Zhang'] | 2,025 | arXiv.org | 34 | 47 | ['Computer Science'] |
2502.15610 | A general language model for peptide identification | ['Jixiu Zhai', 'Tianchi Lu', 'Haitian Zhong', 'Ziyang Xu', 'Yuhuan Liu', 'Shengrui Xu', 'Jingwan Wang', 'Dan Huang'] | ['cs.LG', 'cs.AI', '92C40, 68T07', 'I.2.6; J.3'] | Accurate identification of bioactive peptides (BPs) and protein
post-translational modifications (PTMs) is essential for understanding protein
function and advancing therapeutic discovery. However, most computational
methods remain limited in their generalizability across diverse peptide
functions. Here, we present PDe... | 2025-02-21T17:31:22Z | 24 pages, 9 figures, 4 tables, submitted to arXiv | null | null | A general language model for peptide identification | ['Jixiu Zhai', 'Tianchi Lu', 'Haitian Zhong', 'Ziyang Xu', 'Yuhuan Liu', 'Xueying Wang', 'Dan Huang'] | 2,025 | null | 0 | 66 | ['Computer Science'] |
2502.15637 | Mantis: Lightweight Calibrated Foundation Model for User-Friendly Time
Series Classification | ['Vasilii Feofanov', 'Songkang Wen', 'Marius Alonso', 'Romain Ilbert', 'Hongbo Guo', 'Malik Tiomoko', 'Lujia Pan', 'Jianfeng Zhang', 'Ievgen Redko'] | ['cs.LG', 'cs.AI', 'stat.ML'] | In recent years, there has been increasing interest in developing foundation
models for time series data that can generalize across diverse downstream
tasks. While numerous forecasting-oriented foundation models have been
introduced, there is a notable scarcity of models tailored for time series
classification. To addr... | 2025-02-21T18:06:09Z | null | null | null | null | null | null | null | null | null | null |
2502.15654 | Machine-generated text detection prevents language model collapse | ['George Drayson', 'Emine Yilmaz', 'Vasileios Lampos'] | ['cs.CL', 'cs.LG'] | As Large Language Models (LLMs) become increasingly prevalent, their
generated outputs are proliferating across the web, risking a future where
machine-generated content dilutes human-authored text. Since online data is the
primary resource for LLM pre-training, subsequent models could be trained on an
unknown portion ... | 2025-02-21T18:22:36Z | null | null | null | null | null | null | null | null | null | null |
2502.15798 | MaxSup: Overcoming Representation Collapse in Label Smoothing | ['Yuxuan Zhou', 'Heng Li', 'Zhi-Qi Cheng', 'Xudong Yan', 'Yifei Dong', 'Mario Fritz', 'Margret Keuper'] | ['cs.LG', 'cs.AI', 'cs.CV'] | Label Smoothing (LS) is widely adopted to reduce overconfidence in neural
network predictions and improve generalization. Despite these benefits, recent
studies reveal two critical issues with LS. First, LS induces overconfidence in
misclassified samples. Second, it compacts feature representations into overly
tight cl... | 2025-02-18T20:10:34Z | 24 pages, 15 tables, 5 figures. Preliminary work under review. Do not
distribute | null | null | null | null | null | null | null | null | null |
2502.15814 | Slamming: Training a Speech Language Model on One GPU in a Day | ['Gallil Maimon', 'Avishai Elmakies', 'Yossi Adi'] | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.SD', 'eess.AS'] | We introduce Slam, a recipe for training high-quality Speech Language Models
(SLMs) on a single academic GPU in 24 hours. We do so through empirical
analysis of model initialisation and architecture, synthetic training data,
preference optimisation with synthetic data and tweaking all other components.
We empirically d... | 2025-02-19T17:21:15Z | ACL 2025 (Findings) | null | null | Slamming: Training a Speech Language Model on One GPU in a Day | ['Gallil Maimon', 'Avishai Elmakies', 'Yossi Adi'] | 2,025 | arXiv.org | 3 | 80 | ['Computer Science', 'Engineering'] |
2502.15894 | RIFLEx: A Free Lunch for Length Extrapolation in Video Diffusion
Transformers | ['Min Zhao', 'Guande He', 'Yixiao Chen', 'Hongzhou Zhu', 'Chongxuan Li', 'Jun Zhu'] | ['cs.CV'] | Recent advancements in video generation have enabled models to synthesize
high-quality, minute-long videos. However, generating even longer videos with
temporal coherence remains a major challenge and existing length extrapolation
methods lead to temporal repetition or motion deceleration. In this work, we
systematical... | 2025-02-21T19:28:05Z | ICML 2025 | null | null | RIFLEx: A Free Lunch for Length Extrapolation in Video Diffusion Transformers | ['Min Zhao', 'Guande He', 'Yixiao Chen', 'Hongzhou Zhu', 'Chongxuan Li', 'Jun Zhu'] | 2,025 | arXiv.org | 11 | 54 | ['Computer Science'] |
2502.15920 | Self-Taught Agentic Long Context Understanding | ['Yufan Zhuang', 'Xiaodong Yu', 'Jialian Wu', 'Ximeng Sun', 'Ze Wang', 'Jiang Liu', 'Yusheng Su', 'Jingbo Shang', 'Zicheng Liu', 'Emad Barsoum'] | ['cs.CL', 'cs.AI'] | Answering complex, long-context questions remains a major challenge for large
language models (LLMs) as it requires effective question clarifications and
context retrieval. We propose Agentic Long-Context Understanding (AgenticLU), a
framework designed to enhance an LLM's understanding of such queries by
integrating ta... | 2025-02-21T20:29:36Z | Published at ACL 2025 Main Conference | null | null | null | null | null | null | null | null | null |
2502.16372 | COMPASS: Cross-embodiment Mobility Policy via Residual RL and Skill
Synthesis | ['Wei Liu', 'Huihua Zhao', 'Chenran Li', 'Joydeep Biswas', 'Soha Pouya', 'Yan Chang'] | ['cs.RO'] | As robots are increasingly deployed in diverse application domains,
generalizable cross-embodiment mobility policies are increasingly essential.
While classical mobility stacks have proven effective on specific robot
platforms, they pose significant challenges when scaling to new embodiments.
Learning-based methods, su... | 2025-02-22T22:26:30Z | null | null | null | COMPASS: Cross-embodiment Mobility Policy via Residual RL and Skill Synthesis | ['Wei Liu', 'Hui Zhao', 'Chenran Li', 'Joydeep Biswas', 'Soha Pouya', 'Yan Chang'] | 2,025 | arXiv.org | 0 | 20 | ['Computer Science'] |
2502.16666 | SBSC: Step-By-Step Coding for Improving Mathematical Olympiad
Performance | ['Kunal Singh', 'Ankan Biswas', 'Sayandeep Bhowmick', 'Pradeep Moturi', 'Siva Kishore Gollapalli'] | ['cs.AI', 'cs.CL', 'cs.LG'] | We propose Step-by-Step Coding (SBSC): a multi-turn math reasoning framework
that enables Large Language Models (LLMs) to generate sequence of programs for
solving Olympiad level math problems. At each step/turn, by leveraging the code
execution outputs and programs of previous steps, the model generates the next
sub-t... | 2025-02-23T17:51:26Z | Published as a full conference paper at ICLR 2025. Shorter(Early)
Version accepted at NeurIPS'24 MATH-AI track | null | null | null | null | null | null | null | null | null |
2502.16779 | Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain
Model | ['Yaxuan Huang', 'Xili Dai', 'Jianan Wang', 'Xianbiao Qi', 'Yixing Yuan', 'Xiangyu Yue'] | ['cs.CV', 'cs.AI'] | Room layout estimation from multiple-perspective images is poorly
investigated due to the complexities that emerge from multi-view geometry,
which requires multi-step solutions such as camera intrinsic and extrinsic
estimation, image matching, and triangulation. However, in 3D reconstruction,
the advancement of recent 3... | 2025-02-24T02:14:19Z | Accepted by ICLR 2025. Github
page: https://github.com/justacar/Plane-DUSt3R | null | null | null | null | null | null | null | null | null
2502.16839 | "Actionable Help" in Crises: A Novel Dataset and Resource-Efficient
Models for Identifying Request and Offer Social Media Posts | ['Rabindra Lamsal', 'Maria Rodriguez Read', 'Shanika Karunasekera', 'Muhammad Imran'] | ['cs.CL'] | During crises, social media serves as a crucial coordination tool, but the
vast influx of posts--from "actionable" requests and offers to generic content
like emotional support, behavioural guidance, or outdated
information--complicates effective classification. Although generative LLMs
(Large Language Models) can addr... | 2025-02-24T04:50:06Z | null | null | null | "Actionable Help" in Crises: A Novel Dataset and Resource-Efficient Models for Identifying Request and Offer Social Media Posts | ['Rabindra Lamsal', 'M. Read', 'S. Karunasekera', 'Muhammad Imran'] | 2,025 | arXiv.org | 0 | 46 | ['Computer Science'] |
2502.16943 | MAD-AD: Masked Diffusion for Unsupervised Brain Anomaly Detection | ['Farzad Beizaee', 'Gregory Lodygensky', 'Christian Desrosiers', 'Jose Dolz'] | ['cs.CV', 'eess.IV'] | Unsupervised anomaly detection in brain images is crucial for identifying
injuries and pathologies without access to labels. However, the accurate
localization of anomalies in medical images remains challenging due to the
inherent complexity and variability of brain structures and the scarcity of
annotated abnormal dat... | 2025-02-24T08:11:29Z | null | Information Processing in Medical Imaging (IPMI), 2025 | null | MAD-AD: Masked Diffusion for Unsupervised Brain Anomaly Detection | ['Farzad Beizaee', 'Gregory A. Lodygensky', 'Christian Desrosiers', 'J. Dolz'] | 2,025 | arXiv.org | 0 | 41 | ['Computer Science', 'Engineering'] |
2502.16982 | Muon is Scalable for LLM Training | ['Jingyuan Liu', 'Jianlin Su', 'Xingcheng Yao', 'Zhejun Jiang', 'Guokun Lai', 'Yulun Du', 'Yidao Qin', 'Weixin Xu', 'Enzhe Lu', 'Junjie Yan', 'Yanru Chen', 'Huabin Zheng', 'Yibo Liu', 'Shaowei Liu', 'Bohong Yin', 'Weiran He', 'Han Zhu', 'Yuzhi Wang', 'Jianzhou Wang', 'Mengnan Dong', 'Zheng Zhang', 'Yongsheng Kang', 'Ha... | ['cs.LG', 'cs.AI', 'cs.CL'] | Recently, the Muon optimizer based on matrix orthogonalization has
demonstrated strong results in training small-scale language models, but the
scalability to larger models has not been proven. We identify two crucial
techniques for scaling up Muon: (1) adding weight decay and (2) carefully
adjusting the per-parameter ... | 2025-02-24T09:12:29Z | null | null | null | null | null | null | null | null | null | null |
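Note on the `arxiv_id` column: it is typed `float64` (see the schema header), so the viewer renders ids with thousands separators and can silently drop trailing zeros — e.g. 2502.15610 appears as "2,502.1561". A minimal sketch of recovering the canonical id string, assuming every row in this dump uses the post-2014 `YYMM.NNNNN` arXiv scheme (five digits after the dot):

```python
def canonical_arxiv_id(raw: float) -> str:
    """Rebuild the canonical arXiv id string from a float64-typed value.

    Assumes the post-2014 id scheme YYMM.NNNNN (five digits after the dot),
    which holds for every row in this dump. Formatting with exactly five
    decimal places restores trailing zeros lost in float rendering.
    """
    return f"{raw:.5f}"

print(canonical_arxiv_id(2502.1561))   # 2502.15610
print(canonical_arxiv_id(2502.15168))  # 2502.15168
```

Pre-2015 ids (four digits after the dot) would need a different width, so this is a sketch for this particular dump rather than a general-purpose converter; storing the column as a string avoids the problem entirely.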