Dataset schema (column name: dtype, value or length statistics):
- arxiv_id: float64, values 1.5k to 2.51k
- title: string, lengths 9 to 178
- authors: string, lengths 2 to 22.8k
- categories: string, lengths 4 to 146
- summary: string, lengths 103 to 1.92k
- published: date string, 2015-02-06 10:44:00 to 2025-07-10 17:59:58
- comments: string, lengths 2 to 417
- journal_ref: string, 321 distinct values
- doi: string, 398 distinct values
- ss_title: string, lengths 8 to 159
- ss_authors: string, lengths 11 to 8.38k
- ss_year: float64, values 2.02k to 2.03k
- ss_venue: string, 281 distinct values
- ss_citationCount: float64, values 0 to 134k
- ss_referenceCount: float64, values 0 to 429
- ss_fieldsOfStudy: string, 47 distinct values

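Because `arxiv_id` is stored as float64, the viewer renders IDs with thousands separators (e.g. `2,502.04519` for arXiv ID 2502.04519) and trailing zeros are lost (e.g. `2502.0764`). Below is a minimal, hypothetical helper (not part of the dataset) for recovering the canonical ID string, assuming the post-2014 arXiv scheme of a 4-digit YYMM prefix plus a 5-digit sequence number:

```python
def float_to_arxiv_id(value: float) -> str:
    """Rebuild a canonical arXiv ID string from a float64-encoded value.

    Assumes the post-2014 arXiv scheme: YYMM.NNNNN (5-digit sequence number).
    """
    yymm = int(value)                      # e.g. 2502.0764 -> 2502
    seq = round((value - yymm) * 100_000)  # recover the 5-digit sequence, e.g. 7640
    return f"{yymm:04d}.{seq:05d}"

print(float_to_arxiv_id(2502.0764))   # -> 2502.07640
print(float_to_arxiv_id(2502.04519))  # -> 2502.04519
```

This restores the trailing zero that float formatting drops, which matters when reconstructing arXiv URLs from the IDs.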
[2502.04519] GenVC: Self-Supervised Zero-Shot Voice Conversion
Authors: Zexin Cai, Henry Li Xinyuan, Ashi Garg, Leibny Paola García-Perera, Kevin Duh, Sanjeev Khudanpur, Matthew Wiesner, Nicholas Andrews
Categories: eess.AS, cs.LG
Summary: Zero-shot voice conversion has recently made substantial progress, but many models still depend on external supervised systems to disentangle speaker identity and linguistic content. Furthermore, current methods often use parallel conversion, where the converted speech inherits the source utterance's temporal structure...
Published: 2025-02-06T21:40:09Z

[2502.05003] QuEST: Stable Training of LLMs with 1-Bit Weights and Activations
Authors: Andrei Panferov, Jiale Chen, Soroush Tabesh, Roberto L. Castro, Mahdi Nikdan, Dan Alistarh
Categories: cs.LG
Summary: One approach to reducing the massive costs of large language models (LLMs) is the use of quantized or sparse representations for training or deployment. While post-training compression methods are very popular, the question of obtaining even more accurate compressed models by directly training over such representations...
Published: 2025-02-07T15:23:34Z

[2502.05139] Meta Audiobox Aesthetics: Unified Automatic Quality Assessment for Speech, Music, and Sound
Authors: Andros Tjandra, Yi-Chiao Wu, Baishan Guo, John Hoffman, Brian Ellis, Apoorv Vyas, Bowen Shi, Sanyuan Chen, Matt Le, Nick Zacharov, Carleigh Wood, Ann Lee, Wei-Ning Hsu
Categories: cs.SD, cs.LG, eess.AS
Summary: The quantification of audio aesthetics remains a complex challenge in audio processing, primarily due to its subjective nature, which is influenced by human perception and cultural context. Traditional methods often depend on human listeners for evaluation, leading to inconsistencies and high resource demands. This pap...
Published: 2025-02-07T18:15:57Z
Comments: Repository: https://github.com/facebookresearch/audiobox-aesthetics Website: https://ai.meta.com/research/publications/meta-audiobox-aesthetics-unified-automatic-quality-assessment-for-speech-music-and-sound/

[2502.05153] Hummingbird: High Fidelity Image Generation via Multimodal Context Alignment
Authors: Minh-Quan Le, Gaurav Mittal, Tianjian Meng, A S M Iftekhar, Vishwas Suryanarayanan, Barun Patra, Dimitris Samaras, Mei Chen
Categories: cs.CV
Summary: While diffusion models are powerful in generating high-quality, diverse synthetic data for object-centric tasks, existing methods struggle with scene-aware tasks such as Visual Question Answering (VQA) and Human-Object Interaction (HOI) Reasoning, where it is critical to preserve scene attributes in generated images co...
Published: 2025-02-07T18:32:51Z
Comments: Accepted to ICLR 2025. Project page with code release: https://roar-ai.github.io/hummingbird
ss_title: Hummingbird: High Fidelity Image Generation via Multimodal Context Alignment
ss_authors: Minh-Quan Le, Gaurav Mittal, Tianjian Meng, A. S. M. Iftekhar, Vishwas Suryanarayanan, Barun Patra, Dimitris Samaras, Mei Chen
ss_year: 2025; ss_venue: International Conference on Learning Representations; ss_citationCount: 0; ss_referenceCount: 43; ss_fieldsOfStudy: Computer Science

[2502.05163] DuoGuard: A Two-Player RL-Driven Framework for Multilingual LLM Guardrails
Authors: Yihe Deng, Yu Yang, Junkai Zhang, Wei Wang, Bo Li
Categories: cs.CL, cs.LG
Summary: The rapid advancement of large language models (LLMs) has increased the need for guardrail models to ensure responsible use, particularly in detecting unsafe and illegal content. While substantial safety data exist in English, multilingual guardrail modeling remains underexplored due to the scarcity of open-source safe...
Published: 2025-02-07T18:45:03Z
Comments: 24 pages, 9 figures, 5 tables

[2502.05171] Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach
Authors: Jonas Geiping, Sean McLeish, Neel Jain, John Kirchenbauer, Siddharth Singh, Brian R. Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, Tom Goldstein
Categories: cs.LG, cs.CL
Summary: We study a novel language model architecture that is capable of scaling test-time computation by implicitly reasoning in latent space. Our model works by iterating a recurrent block, thereby unrolling to arbitrary depth at test-time. This stands in contrast to mainstream reasoning models that scale up compute by produc...
Published: 2025-02-07T18:55:02Z
Comments: The model is available at https://huggingface.co/tomg-group-umd/huginn-0125. Code and data recipe can be found at https://github.com/seal-rg/recurrent-pretraining

[2502.05173] VideoRoPE: What Makes for Good Video Rotary Position Embedding?
Authors: Xilin Wei, Xiaoran Liu, Yuhang Zang, Xiaoyi Dong, Pan Zhang, Yuhang Cao, Jian Tong, Haodong Duan, Qipeng Guo, Jiaqi Wang, Xipeng Qiu, Dahua Lin
Categories: cs.CV
Summary: While Rotary Position Embedding (RoPE) and its variants are widely adopted for their long-context capabilities, the extension of the 1D RoPE to video, with its complex spatio-temporal structure, remains an open challenge. This work first introduces a comprehensive analysis that identifies four key characteristics essen...
Published: 2025-02-07T18:56:04Z
ss_title: VideoRoPE: What Makes for Good Video Rotary Position Embedding?
ss_authors: Xilin Wei, Xiaoran Liu, Yuhang Zang, Xiao-wen Dong, Pan Zhang, Yuhang Cao, Jian Tong, Haodong Duan, Qipeng Guo, Jiaqi Wang, Xipeng Qiu, Dahua Lin
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 6; ss_referenceCount: 80; ss_fieldsOfStudy: Computer Science

[2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation
Authors: Yue Zhao, Fuzhao Xue, Scott Reed, Linxi Fan, Yuke Zhu, Jan Kautz, Zhiding Yu, Philipp Krähenbühl, De-An Huang
Categories: cs.CV
Summary: We introduce Quantized Language-Image Pretraining (QLIP), a visual tokenization method that combines state-of-the-art reconstruction quality with state-of-the-art zero-shot image understanding. QLIP trains a binary-spherical-quantization-based autoencoder with reconstruction and language-image alignment objectives. We ...
Published: 2025-02-07T18:59:57Z
Comments: Tech report. Project page: https://nvlabs.github.io/QLIP/

[2502.05179] FlashVideo: Flowing Fidelity to Detail for Efficient High-Resolution Video Generation
Authors: Shilong Zhang, Wenbo Li, Shoufa Chen, Chongjian Ge, Peize Sun, Yida Zhang, Yi Jiang, Zehuan Yuan, Binyue Peng, Ping Luo
Categories: cs.CV
Summary: DiT diffusion models have achieved great success in text-to-video generation, leveraging their scalability in model capacity and data scale. High content and motion fidelity aligned with text prompts, however, often require large model parameters and a substantial number of function evaluations (NFEs). Realistic and vi...
Published: 2025-02-07T18:59:59Z
Comments: Model and Weight: https://github.com/FoundationVision/FlashVideo
ss_title: FlashVideo: Flowing Fidelity to Detail for Efficient High-Resolution Video Generation
ss_authors: Shilong Zhang, Wenbo Li, Shoufa Chen, Chongjian Ge, Peize Sun, Yida Zhang, Yi Jiang, Zehuan Yuan, Binyue Peng, Ping Luo
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 6; ss_referenceCount: 69; ss_fieldsOfStudy: Computer Science

[2502.05364] Hypencoder: Hypernetworks for Information Retrieval
Authors: Julian Killingback, Hansi Zeng, Hamed Zamani
Categories: cs.IR, cs.LG
Summary: Existing information retrieval systems are largely constrained by their reliance on vector inner products to assess query-document relevance, which naturally limits the expressiveness of the relevance score they can produce. We propose a new paradigm; instead of representing a query as a vector, we use a small neural n...
Published: 2025-02-07T22:31:38Z
ss_title: Hypencoder: Hypernetworks for Information Retrieval
ss_authors: Julian Killingback, Hansi Zeng, Hamed Zamani
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 1; ss_referenceCount: 81; ss_fieldsOfStudy: Computer Science

[2502.05374] Towards LLM Unlearning Resilient to Relearning Attacks: A Sharpness-Aware Minimization Perspective and Beyond
Authors: Chongyu Fan, Jinghan Jia, Yihua Zhang, Anil Ramakrishna, Mingyi Hong, Sijia Liu
Categories: cs.LG, cs.CL
Summary: The LLM unlearning technique has recently been introduced to comply with data regulations and address the safety and ethical concerns of LLMs by removing the undesired data-model influence. However, state-of-the-art unlearning methods face a critical vulnerability: they are susceptible to ``relearning'' the removed inf...
Published: 2025-02-07T23:03:55Z
Comments: Accepted by ICML 2025

[2502.05478] OntoTune: Ontology-Driven Self-training for Aligning Large Language Models
Authors: Zhiqiang Liu, Chengtao Gan, Junjie Wang, Yichi Zhang, Zhongpu Bo, Mengshu Sun, Huajun Chen, Wen Zhang
Categories: cs.CL
Summary: Existing domain-specific Large Language Models (LLMs) are typically developed by fine-tuning general-purposed LLMs with large-scale domain-specific corpora. However, training on large-scale corpora often fails to effectively organize domain knowledge of LLMs, leading to fragmented understanding. Inspired by how humans ...
Published: 2025-02-08T07:38:45Z
Comments: Accepted by WWW25

[2502.05512] IndexTTS: An Industrial-Level Controllable and Efficient Zero-Shot Text-To-Speech System
Authors: Wei Deng, Siyi Zhou, Jingchen Shu, Jinchao Wang, Lu Wang
Categories: cs.SD, cs.AI, eess.AS
Summary: Recently, large language model (LLM) based text-to-speech (TTS) systems have gradually become the mainstream in the industry due to their high naturalness and powerful zero-shot voice cloning capabilities. Here, we introduce the IndexTTS system, which is mainly based on the XTTS and Tortoise model. We add some novel imp...
Published: 2025-02-08T10:23:20Z

[2502.05564] TabICL: A Tabular Foundation Model for In-Context Learning on Large Data
Authors: Jingang Qu, David Holzmüller, Gaël Varoquaux, Marine Le Morvan
Categories: cs.LG, cs.AI
Summary: The long-standing dominance of gradient-boosted decision trees on tabular data is currently challenged by tabular foundation models using In-Context Learning (ICL): setting the training data as context for the test data and predicting in a single forward pass without parameter updates. While TabPFNv2 foundation model e...
Published: 2025-02-08T13:25:04Z
Comments: Published at ICML 2025

[2502.05633] Mol-MoE: Training Preference-Guided Routers for Molecule Generation
Authors: Diego Calanzone, Pierluca D'Oro, Pierre-Luc Bacon
Categories: cs.LG
Summary: Recent advances in language models have enabled framing molecule generation as sequence modeling. However, existing approaches often rely on single-objective reinforcement learning, limiting their applicability to real-world drug design, where multiple competing properties must be optimized. Traditional multi-objective...
Published: 2025-02-08T16:28:33Z
Comments: We release our code and data at: https://github.com/ddidacus/mol-moe
ss_title: Mol-MoE: Training Preference-Guided Routers for Molecule Generation
ss_authors: Diego Calanzone, P. D'Oro, Pierre-Luc Bacon
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 1; ss_referenceCount: 48; ss_fieldsOfStudy: Computer Science

[2502.05664] CODESIM: Multi-Agent Code Generation and Problem Solving through Simulation-Driven Planning and Debugging
Authors: Md. Ashraful Islam, Mohammed Eunus Ali, Md Rizwan Parvez
Categories: cs.CL, cs.AI
Summary: Large Language Models (LLMs) have made significant strides in code generation and problem solving. Current approaches employ external tool-based iterative debuggers that use compiler or other tool-based runtime feedback to refine coarse programs generated by various methods. However, the effectiveness of these approach...
Published: 2025-02-08T18:43:59Z
Comments: Accepted in NAACL 2025 Findings

[2502.05674] ShiftySpeech: A Large-Scale Synthetic Speech Dataset with Distribution Shifts
Authors: Ashi Garg, Zexin Cai, Lin Zhang, Henry Li Xinyuan, Leibny Paola García-Perera, Kevin Duh, Sanjeev Khudanpur, Matthew Wiesner, Nicholas Andrews
Categories: eess.AS, cs.SD
Summary: The problem of synthetic speech detection has enjoyed considerable attention, with recent methods achieving low error rates across several established benchmarks. However, to what extent can low error rates on academic benchmarks translate to more realistic conditions? In practice, while the training set is fixed at on...
Published: 2025-02-08T19:49:09Z

[2502.05795] The Curse of Depth in Large Language Models
Authors: Wenfang Sun, Xinyuan Song, Pengxiang Li, Lu Yin, Yefeng Zheng, Shiwei Liu
Categories: cs.LG, cs.AI
Summary: In this paper, we introduce the Curse of Depth, a concept that highlights, explains, and addresses the recent observation in modern Large Language Models (LLMs) where nearly half of the layers are less effective than expected. We first confirm the wide existence of this phenomenon across the most popular families of LL...
Published: 2025-02-09T07:03:36Z

[2502.05878] Retrieval-augmented Large Language Models for Financial Time Series Forecasting
Authors: Mengxi Xiao, Zihao Jiang, Lingfei Qian, Zhengyu Chen, Yueru He, Yijing Xu, Yuecheng Jiang, Dong Li, Ruey-Ling Weng, Min Peng, Jimin Huang, Sophia Ananiadou, Qianqian Xie
Categories: cs.CL
Summary: Accurately forecasting stock price movements is critical for informed financial decision-making, supporting applications ranging from algorithmic trading to risk management. However, this task remains challenging due to the difficulty of retrieving subtle yet high-impact patterns from noisy financial time-series data, ...
Published: 2025-02-09T12:26:05Z
Comments: 11 pages, 4 figures

[2502.05932] Skill Expansion and Composition in Parameter Space
Authors: Tenglong Liu, Jianxiong Li, Yinan Zheng, Haoyi Niu, Yixing Lan, Xin Xu, Xianyuan Zhan
Categories: cs.LG, cs.AI, cs.RO
Summary: Humans excel at reusing prior knowledge to address new challenges and developing skills while solving problems. This paradigm becomes increasingly popular in the development of autonomous agents, as it develops systems that can self-evolve in response to new challenges like human beings. However, previous methods suffe...
Published: 2025-02-09T15:22:38Z
Comments: ICLR 2025, 37 pages

[2502.06253] Find Central Dogma Again: Leveraging Multilingual Transfer in Large Language Models
Authors: Wang Liang
Categories: q-bio.GN, 92-10, J.3
Summary: In recent years, large language models (LLMs) have achieved state-of-the-art results in various biological sequence analysis tasks, such as sequence classification, structure prediction, and function prediction. Similar to advancements in AI for other scientific fields, deeper research into biological LLMs has begun to...
Published: 2025-02-10T08:37:21Z
Comments: 31 pages, 8 figures

[2502.06352] LANTERN++: Enhancing Relaxed Speculative Decoding with Static Tree Drafting for Visual Auto-regressive Models
Authors: Sihwan Park, Doohyuk Jang, Sungyub Kim, Souvik Kundu, Eunho Yang
Categories: cs.CV
Summary: Speculative decoding has been widely used to accelerate auto-regressive (AR) text generation. However, its effectiveness for visual AR models remains limited due to token selection ambiguity, where multiple tokens share similarly low probabilities and thus reduce acceptance rates. Recently, relaxed speculative decoding...
Published: 2025-02-10T11:05:18Z
Comments: ICLR 2025 Workshop at SCOPE (Oral), 16 pages, 5 figures, short paper (6 pages exclude reference and appendix)
ss_title: LANTERN++: Enhanced Relaxed Speculative Decoding with Static Tree Drafting for Visual Auto-regressive Models
ss_authors: Sihwan Park, Doohyuk Jang, Sung-Yub Kim, Souvik Kundu, Eunho Yang
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 0; ss_referenceCount: 20; ss_fieldsOfStudy: Computer Science

[2502.06367] FOCUS -- Multi-View Foot Reconstruction From Synthetically Trained Dense Correspondences
Authors: Oliver Boyne, Roberto Cipolla
Categories: cs.CV
Summary: Surface reconstruction from multiple, calibrated images is a challenging task - often requiring a large number of collected images with significant overlap. We look at the specific case of human foot reconstruction. As with previous successful foot reconstruction work, we seek to extract rich per-pixel geometry cues fr...
Published: 2025-02-10T11:36:45Z
Comments: 13 pages, 11 figures

[2502.06394] SynthDetoxM: Modern LLMs are Few-Shot Parallel Detoxification Data Annotators
Authors: Daniil Moskovskiy, Nikita Sushko, Sergey Pletenev, Elena Tutubalina, Alexander Panchenko
Categories: cs.CL
Summary: Existing approaches to multilingual text detoxification are hampered by the scarcity of parallel multilingual datasets. In this work, we introduce a pipeline for the generation of multilingual parallel detoxification data. We also introduce SynthDetoxM, a manually collected and synthetically generated multilingual para...
Published: 2025-02-10T12:30:25Z
Comments: Accepted to NAACL 2025 Main Conference
ss_title: SynthDetoxM: Modern LLMs are Few-Shot Parallel Detoxification Data Annotators
ss_authors: Daniil Moskovskiy, Nikita Sushko, Sergey Pletenev, Elena Tutubalina, Alexander Panchenko
ss_year: 2025; ss_venue: North American Chapter of the Association for Computational Linguistics; ss_citationCount: 0; ss_referenceCount: 0; ss_fieldsOfStudy: Computer Science

[2502.06608] TripoSG: High-Fidelity 3D Shape Synthesis using Large-Scale Rectified Flow Models
Authors: Yangguang Li, Zi-Xin Zou, Zexiang Liu, Dehu Wang, Yuan Liang, Zhipeng Yu, Xingchao Liu, Yuan-Chen Guo, Ding Liang, Wanli Ouyang, Yan-Pei Cao
Categories: cs.CV, cs.AI
Summary: Recent advancements in diffusion techniques have propelled image and video generation to unprecedented levels of quality, significantly accelerating the deployment and application of generative AI. However, 3D shape generation technology has so far lagged behind, constrained by limitations in 3D data scale, complexity ...
Published: 2025-02-10T16:07:54Z

[2502.06692] Multi-label Scandinavian Language Identification (SLIDE)
Authors: Mariia Fedorova, Jonas Sebulon Frydenberg, Victoria Handford, Victoria Ovedie Chruickshank Langø, Solveig Helene Willoch, Marthe Løken Midtgaard, Yves Scherrer, Petter Mæhlum, David Samuel
Categories: cs.CL, cs.AI
Summary: Identifying closely related languages at sentence level is difficult, in particular because it is often impossible to assign a sentence to a single language. In this paper, we focus on multi-label sentence-level Scandinavian language identification (LID) for Danish, Norwegian Bokmål, Norwegian Nynorsk, and Swedish....
Published: 2025-02-10T17:16:55Z
ss_title: Multi-label Scandinavian Language Identification (SLIDE)
ss_authors: Mariia Fedorova, Jonas Sebulon Frydenberg, Victoria Handford, Victoria Ovedie Chruickshank Lango, Solveig Helene Willoch, M. Midtgaard, Yves Scherrer, Petter Maehlum, David Samuel
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 0; ss_referenceCount: 32; ss_fieldsOfStudy: Computer Science

[2502.06703] Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling
Authors: Runze Liu, Junqi Gao, Jian Zhao, Kaiyan Zhang, Xiu Li, Biqing Qi, Wanli Ouyang, Bowen Zhou
Categories: cs.CL
Summary: Test-Time Scaling (TTS) is an important method for improving the performance of Large Language Models (LLMs) by using additional computation during the inference phase. However, current studies do not systematically analyze how policy models, Process Reward Models (PRMs), and problem difficulty influence TTS. This lack...
Published: 2025-02-10T17:30:23Z

[2502.06734] Señorita-2M: A High-Quality Instruction-based Dataset for General Video Editing by Video Specialists
Authors: Bojia Zi, Penghui Ruan, Marco Chen, Xianbiao Qi, Shaozhe Hao, Shihao Zhao, Youze Huang, Bin Liang, Rong Xiao, Kam-Fai Wong
Categories: cs.CV
Summary: Recent advancements in video generation have spurred the development of video editing techniques, which can be divided into inversion-based and end-to-end methods. However, current video editing methods still suffer from several challenges. Inversion-based methods, though training-free and flexible, are time-consuming ...
Published: 2025-02-10T17:58:22Z

[2502.06755] Sparse Autoencoders for Scientifically Rigorous Interpretation of Vision Models
Authors: Samuel Stevens, Wei-Lun Chao, Tanya Berger-Wolf, Yu Su
Categories: cs.CV
Summary: To truly understand vision models, we must not only interpret their learned features but also validate these interpretations through controlled experiments. Current approaches either provide interpretable features without the ability to test their causal influence, or enable model editing without interpretable controls...
Published: 2025-02-10T18:32:41Z
Comments: Main text is 11 pages with 7 figures
ss_title: Sparse Autoencoders for Scientifically Rigorous Interpretation of Vision Models
ss_authors: Samuel Stevens, Wei-Lun Chao, Tanya Y. Berger-Wolf, Yu Su
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 6; ss_referenceCount: 61; ss_fieldsOfStudy: Computer Science

[2502.06764] History-Guided Video Diffusion
Authors: Kiwhan Song, Boyuan Chen, Max Simchowitz, Yilun Du, Russ Tedrake, Vincent Sitzmann
Categories: cs.LG, cs.CV
Summary: Classifier-free guidance (CFG) is a key technique for improving conditional generation in diffusion models, enabling more accurate control while enhancing sample quality. It is natural to extend this technique to video diffusion, which generates video conditioned on a variable number of context frames, collectively ref...
Published: 2025-02-10T18:44:25Z
Comments: Project Website: https://boyuan.space/history-guidance

[2502.06772] ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates
Authors: Ling Yang, Zhaochen Yu, Bin Cui, Mengdi Wang
Categories: cs.CL, cs.AI, cs.LG
Summary: We present that hierarchical LLM reasoning via scaling thought templates can effectively optimize the reasoning search space and outperform the mathematical reasoning capabilities of powerful LLMs like OpenAI o1-preview and DeepSeek V3. We train our ReasonFlux-32B model with only 8 GPUs and introduces three innovations...
Published: 2025-02-10T18:51:47Z
Comments: Code: https://github.com/Gen-Verse/ReasonFlux
ss_title: ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates
ss_authors: Ling Yang, Zhaochen Yu, Bin Cui, Mengdi Wang
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 18; ss_referenceCount: 58; ss_fieldsOfStudy: Computer Science

[2502.06781] Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning
Authors: Chengqi Lyu, Songyang Gao, Yuzhe Gu, Wenwei Zhang, Jianfei Gao, Kuikun Liu, Ziyi Wang, Shuaibin Li, Qian Zhao, Haian Huang, Weihan Cao, Jiangning Liu, Hongwei Liu, Junnan Liu, Songyang Zhang, Dahua Lin, Kai Chen
Categories: cs.CL, cs.LG
Summary: Reasoning abilities, especially those for solving complex math problems, are crucial components of general intelligence. Recent advances by proprietary companies, such as o-series models of OpenAI, have made remarkable progress on reasoning tasks. However, the complete technical details remain unrevealed, and the techn...
Published: 2025-02-10T18:57:29Z
Comments: We released our code, data, and model on https://github.com/InternLM/OREAL
ss_title: Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning
ss_authors: Chengqi Lyu, Songyang Gao, Yuzhe Gu, Wenwei Zhang, Jianfei Gao, Kuikun Liu, Ziyi Wang, Shuaibin Li, Qian Zhao, Haian Huang, Weihan Cao, Jiangning Liu, Hong-wei Liu, Junnan Liu, Songyang Zhang, Dahua Lin, Kai Chen
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 19; ss_referenceCount: 0; ss_fieldsOfStudy: Computer Science

[2502.06782] Lumina-Video: Efficient and Flexible Video Generation with Multi-scale Next-DiT
Authors: Dongyang Liu, Shicheng Li, Yutong Liu, Zhen Li, Kai Wang, Xinyue Li, Qi Qin, Yufei Liu, Yi Xin, Zhongyu Li, Bin Fu, Chenyang Si, Yuewen Cao, Conghui He, Ziwei Liu, Yu Qiao, Qibin Hou, Hongsheng Li, Peng Gao
Categories: cs.CV
Summary: Recent advancements have established Diffusion Transformers (DiTs) as a dominant framework in generative modeling. Building on this success, Lumina-Next achieves exceptional performance in the generation of photorealistic images with Next-DiT. However, its potential for video generation remains largely untapped, with s...
Published: 2025-02-10T18:58:11Z

[2502.06788] EVEv2: Improved Baselines for Encoder-Free Vision-Language Models
Authors: Haiwen Diao, Xiaotong Li, Yufeng Cui, Yueze Wang, Haoge Deng, Ting Pan, Wenxuan Wang, Huchuan Lu, Xinlong Wang
Categories: cs.CV, cs.AI
Summary: Existing encoder-free vision-language models (VLMs) are rapidly narrowing the performance gap with their encoder-based counterparts, highlighting the promising potential for unified multimodal systems with structural simplicity and efficient deployment. We systematically clarify the performance gap between VLMs using p...
Published: 2025-02-10T18:59:58Z
Comments: 19 pages, 9 figures
ss_title: EVEv2: Improved Baselines for Encoder-Free Vision-Language Models
ss_authors: Haiwen Diao, Xiaotong Li, Yufeng Cui, Yueze Wang, Haoge Deng, Ting Pan, Wenxuan Wang, Huchuan Lu, Xinlong Wang
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 8; ss_referenceCount: 0; ss_fieldsOfStudy: Computer Science

[2502.06814] Diffusion Instruction Tuning
Authors: Chen Jin, Ryutaro Tanno, Amrutha Saseendran, Tom Diethe, Philip Teare
Categories: cs.LG, cs.AI, cs.GR
Summary: We introduce Lavender, a simple supervised fine-tuning (SFT) method that boosts the performance of advanced vision-language models (VLMs) by leveraging state-of-the-art image generation models such as Stable Diffusion. Specifically, Lavender aligns the text-vision attention in the VLM transformer with the equivalent us...
Published: 2025-02-04T22:20:20Z
Comments: Project page at https://astrazeneca.github.io/vlm/
ss_title: Diffusion Instruction Tuning
ss_authors: Chen Jin, Ryutaro Tanno, Amrutha Saseendran, Tom Diethe, Philip Teare
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 0; ss_referenceCount: 89; ss_fieldsOfStudy: Computer Science

[2502.06858] LLM-Supported Natural Language to Bash Translation
Authors: Finnian Westenfelder, Erik Hemberg, Miguel Tulla, Stephen Moskal, Una-May O'Reilly, Silviu Chiricescu
Categories: cs.CL, cs.AI
Summary: The Bourne-Again Shell (Bash) command-line interface for Linux systems has complex syntax and requires extensive specialized knowledge. Using the natural language to Bash command (NL2SH) translation capabilities of large language models (LLMs) for command composition circumvents these issues. However, the NL2SH perform...
Published: 2025-02-07T19:35:55Z
Comments: 13 pages, NAACL 2025

[2502.06876] Mix Data or Merge Models? Balancing the Helpfulness, Honesty, and Harmlessness of Large Language Model via Model Merging
Authors: Jinluan Yang, Dingnan Jin, Anke Tang, Li Shen, Didi Zhu, Zhengyu Chen, Ziyu Zhao, Daixin Wang, Qing Cui, Zhiqiang Zhang, Jun Zhou, Fei Wu, Kun Kuang
Categories: cs.CL, cs.AI, cs.LG
Summary: Achieving balanced alignment of large language models (LLMs) in terms of Helpfulness, Honesty, and Harmlessness (3H optimization) constitutes a cornerstone of responsible AI. Existing methods like data mixture strategies face limitations, including heavy reliance on expert knowledge and conflicting optimization signals...
Published: 2025-02-08T11:56:58Z
ss_title: Mix Data or Merge Models? Balancing the Helpfulness, Honesty, and Harmlessness of Large Language Model via Model Merging
ss_authors: Jinluan Yang, Dingnan Jin, A. Tang, Li Shen, Didi Zhu, Zhengyu Chen, Daixin Wang, Qing Cui, Zhiqiang Zhang, Jun Zhou, Fei Wu, Kun Kuang
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 6; ss_referenceCount: 0; ss_fieldsOfStudy: Computer Science

[2502.06997] Conditional diffusion model with spatial attention and latent embedding for medical image segmentation
Authors: Behzad Hejrati, Soumyanil Banerjee, Carri Glide-Hurst, Ming Dong
Categories: eess.IV, cs.CV
Summary: Diffusion models have been used extensively for high quality image and video generation tasks. In this paper, we propose a novel conditional diffusion model with spatial attention and latent embedding (cDAL) for medical image segmentation. In cDAL, a convolutional neural network (CNN) based discriminator is used at eve...
Published: 2025-02-10T19:47:28Z
Comments: 13 pages, 5 figures, 3 tables, Accepted in MICCAI 2024
ss_title: Conditional Diffusion Model with Spatial Attention and Latent Embedding for Medical Image Segmentation
ss_authors: Behzad Hejrati, Soumyanil Banerjee, C. Glide-Hurst, Ming Dong
ss_year: 2024; ss_venue: International Conference on Medical Image Computing and Computer-Assisted Intervention; ss_citationCount: 0; ss_referenceCount: 31; ss_fieldsOfStudy: Medicine, Computer Science, Engineering

[2502.07272] GENERator: A Long-Context Generative Genomic Foundation Model
Authors: Wei Wu, Qiuyi Li, Mingyang Li, Kun Fu, Fuli Feng, Jieping Ye, Hui Xiong, Zheng Wang
Categories: cs.CL, q-bio.GN
Summary: Advancements in DNA sequencing technologies have significantly improved our ability to decode genomic sequences. However, the prediction and interpretation of these sequences remain challenging due to the intricate nature of genetic material. Large language models (LLMs) have introduced new opportunities for biological...
Published: 2025-02-11T05:39:49Z

[2502.07316] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction
Authors: Junlong Li, Daya Guo, Dejian Yang, Runxin Xu, Yu Wu, Junxian He
Categories: cs.CL, cs.AI
Summary: Reasoning is a fundamental capability of Large Language Models. While prior research predominantly focuses on enhancing narrow skills like math or code generation, improving performance on many other reasoning tasks remains challenging due to sparse and fragmented training data. To address this issue, we propose CodeI/...
Published: 2025-02-11T07:26:50Z
Comments: ICML 2025
ss_title: CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction
ss_authors: Junlong Li, Daya Guo, Dejian Yang, Runxin Xu, Yu Wu, Junxian He
ss_year: 2025; ss_venue: arXiv.org; ss_citationCount: 13; ss_referenceCount: 50; ss_fieldsOfStudy: Computer Science

[2502.07374] LLMs Can Easily Learn to Reason from Demonstrations Structure, not content, is what matters!
Authors: Dacheng Li, Shiyi Cao, Tyler Griggs, Shu Liu, Xiangxi Mo, Eric Tang, Sumanth Hegde, Kourosh Hakhamaneshi, Shishir G. Patil, Matei Zaharia, Joseph E. Gonzalez, Ion Stoica
Categories: cs.AI
Summary: Large reasoning models (LRMs) tackle complex reasoning problems by following long chain-of-thoughts (Long CoT) that incorporate reflection, backtracking, and self-validation. However, the training techniques and data requirements to elicit Long CoT remain poorly understood. In this work, we find that a Large Language m...
Published: 2025-02-11T08:48:48Z

[2502.07527] Nature Language Model: Deciphering the Language of Nature for Scientific Discovery
Authors: Yingce Xia, Peiran Jin, Shufang Xie, Liang He, Chuan Cao, Renqian Luo, Guoqing Liu, Yue Wang, Zequn Liu, Yuan-Jyue Chen, Zekun Guo, Yeqi Bai, Pan Deng, Yaosen Min, Ziheng Lu, Hongxia Hao, Han Yang, Jielan Li, Chang Liu, Jia Zhang, Jianwei Zhu, Ran Bi, Kehan Wu, Wei Zhang, ...
Categories: cs.AI, cs.LG
Summary: Foundation models have revolutionized natural language processing and artificial intelligence, significantly enhancing how machines comprehend and generate human languages. Inspired by the success of these foundation models, researchers have developed foundation models for individual scientific domains, including small...
Published: 2025-02-11T13:08:03Z
Comments: 95 pages

[2502.07599] DPO-Shift: Shifting the Distribution of Direct Preference Optimization
Authors: Xiliang Yang, Feng Jiang, Qianen Zhang, Lei Zhao, Xiao Li
Categories: cs.CL
Summary: Direct Preference Optimization (DPO) and its variants have become increasingly popular for aligning language models with human preferences. These methods aim to teach models to better distinguish between chosen (or preferred) and rejected (or dispreferred) responses. However, prior research has identified that the prob...
Published: 2025-02-11T14:49:44Z

2,502.0764
Goedel-Prover: A Frontier Model for Open-Source Automated Theorem Proving
['Yong Lin', 'Shange Tang', 'Bohan Lyu', 'Jiayun Wu', 'Hongzhou Lin', 'Kaiyu Yang', 'Jia Li', 'Mengzhou Xia', 'Danqi Chen', 'Sanjeev Arora', 'Chi Jin']
['cs.LG', 'cs.AI']
We introduce Goedel-Prover, an open-source language model that achieves state-of-the-art (as of April 5 2025) performance in automated formal proof generation for mathematical problems. A key challenge in this field is the scarcity of formalized mathematical statements and proofs, which we address through the following...
2025-02-11T15:27:35Z
null
null
null
Goedel-Prover: A Frontier Model for Open-Source Automated Theorem Proving
['Yong Lin', 'Shange Tang', 'Bohan Lyu', 'Jiayun Wu', 'Hongzhou Lin', 'Kaiyu Yang', 'Jia Li', 'Mengzhou Xia', 'Danqi Chen', 'Sanjeev Arora', 'Chi Jin']
2025
arXiv.org
26
51
['Computer Science']
2502.07671
Steering Protein Family Design through Profile Bayesian Flow
['Jingjing Gong', 'Yu Pei', 'Siyu Long', 'Yuxuan Song', 'Zhe Zhang', 'Wenhao Huang', 'Ziyao Cao', 'Shuyi Zhang', 'Hao Zhou', 'Wei-Ying Ma']
['q-bio.BM']
Protein family design emerges as a promising alternative by combining the advantages of de novo protein design and mutation-based directed evolution. In this paper, we propose ProfileBFN, the Profile Bayesian Flow Networks, specifically for generative modeling of protein families. ProfileBFN extends the discrete Bayesia...
2025-02-11T16:15:59Z
null
null
null
null
null
null
null
null
null
null
2502.07737
Next Block Prediction: Video Generation via Semi-Autoregressive Modeling
['Shuhuai Ren', 'Shuming Ma', 'Xu Sun', 'Furu Wei']
['cs.CV', 'cs.AI']
Next-Token Prediction (NTP) is a de facto approach for autoregressive (AR) video generation, but it suffers from suboptimal unidirectional dependencies and slow inference speed. In this work, we propose a semi-autoregressive (semi-AR) framework, called Next-Block Prediction (NBP), for video generation. By uniformly dec...
2025-02-11T17:57:53Z
project page: https://renshuhuai-andy.github.io/NBP-project/
null
null
null
null
null
null
null
null
null
2502.07760
Scalable Fingerprinting of Large Language Models
['Anshul Nasery', 'Jonathan Hayase', 'Creston Brooks', 'Peiyao Sheng', 'Himanshu Tyagi', 'Pramod Viswanath', 'Sewoong Oh']
['cs.CR', 'cs.LG']
Model fingerprinting has emerged as a powerful tool for model owners to identify their shared model given API access. However, to lower false discovery rate, fight fingerprint leakage, and defend against coalitions of model users attempting to bypass detection, we argue that {\em scalability} is critical, i.e., scaling...
2025-02-11T18:43:07Z
23 pages 15 figures
null
null
null
null
null
null
null
null
null
2502.07780
DarwinLM: Evolutionary Structured Pruning of Large Language Models
['Shengkun Tang', 'Oliver Sieberling', 'Eldar Kurtic', 'Zhiqiang Shen', 'Dan Alistarh']
['cs.LG', 'cs.CL']
Large Language Models (LLMs) have achieved significant success across various NLP tasks. However, their massive computational costs limit their widespread use, particularly in real-time applications. Structured pruning offers an effective solution by compressing models and directly providing end-to-end speed improvemen...
2025-02-11T18:59:35Z
Code: https://github.com/IST-DASLab/DarwinLM
null
null
null
null
null
null
null
null
null
2502.07864
TransMLA: Multi-Head Latent Attention Is All You Need
['Fanxu Meng', 'Pingzhi Tang', 'Xiaojuan Tang', 'Zengwei Yao', 'Xing Sun', 'Muhan Zhang']
['cs.LG', 'cs.AI']
In this paper, we present TransMLA, a framework that seamlessly converts any GQA-based pre-trained model into an MLA-based model. Our approach enables direct compatibility with DeepSeek's codebase, allowing these models to fully leverage DeepSeek-specific optimizations such as vLLM and SGlang. By compressing 93% of the...
2025-02-11T18:20:18Z
https://github.com/fxmeng/TransMLA
null
null
null
null
null
null
null
null
null
2502.07938
Adapting Multilingual Embedding Models to Historical Luxembourgish
['Andrianos Michail', 'Corina Julia Raclé', 'Juri Opitz', 'Simon Clematide']
['cs.CL']
The growing volume of digitized historical texts requires effective semantic search using text embeddings. However, pre-trained multilingual models face challenges with historical content due to OCR noise and outdated spellings. This study examines multilingual embeddings for cross-lingual semantic search in historical...
2025-02-11T20:35:29Z
To appear in LaTeCH-CLfL 2025
null
null
null
null
null
null
null
null
null
2502.07945
SurGrID: Controllable Surgical Simulation via Scene Graph to Image Diffusion
['Yannik Frisch', 'Ssharvien Kumar Sivakumar', 'Çağhan Köksal', 'Elsa Böhm', 'Felix Wagner', 'Adrian Gericke', 'Ghazal Ghazaei', 'Anirban Mukhopadhyay']
['cs.CV', 'cs.LG']
Surgical simulation offers a promising addition to conventional surgical training. However, available simulation tools lack photorealism and rely on hardcoded behaviour. Denoising Diffusion Models are a promising alternative for high-fidelity image synthesis, but existing state-of-the-art conditioning methods fall shor...
2025-02-11T20:49:13Z
null
null
10.1007/s11548-025-03397-y
null
null
null
null
null
null
null
2502.07972
Training Sparse Mixture Of Experts Text Embedding Models
['Zach Nussbaum', 'Brandon Duderstadt']
['cs.CL', 'cs.AI', 'cs.IR']
Transformer-based text embedding models have improved their performance on benchmarks like MIRACL and BEIR by increasing their parameter counts. However, this scaling approach introduces significant deployment challenges, including increased inference latency and memory usage. These challenges are particularly severe i...
2025-02-11T21:36:31Z
null
null
null
Training Sparse Mixture Of Experts Text Embedding Models
['Zach Nussbaum', 'Brandon Duderstadt']
2025
arXiv.org
2
38
['Computer Science']
2502.08127
Fino1: On the Transferability of Reasoning-Enhanced LLMs and Reinforcement Learning to Finance
['Lingfei Qian', 'Weipeng Zhou', 'Yan Wang', 'Xueqing Peng', 'Han Yi', 'Yilun Zhao', 'Jimin Huang', 'Qianqian Xie', 'Jian-yun Nie']
['cs.CL']
As the fundamental capability behind decision-making in finance, financial reasoning poses distinct challenges for LLMs. Although reinforcement learning (RL) has boosted generic reasoning, progress in finance is hindered by the absence of empirical study of building effective financial chain-of-thought (CoT) corpu...
2025-02-12T05:13:04Z
13 pages, 2 figures, 3 Tables
null
null
Fino1: On the Transferability of Reasoning-Enhanced LLMs and Reinforcement Learning to Finance
['Lingfei Qian', 'Weipeng Zhou', 'Yan Wang', 'Xueqing Peng', 'Han Yi', 'Yilun Zhao', 'Jimin Huang', 'Qianqian Xie', 'Jian-yun Nie']
2025
null
0
28
['Computer Science']
2502.08153
Stable rationality of hypersurfaces in schön affine varieties
['Taro Yoshino']
['math.AG', '14E08, 14M25']
In recent years, there has been a development in approaching rationality problems through the motivic methods (cf. [Kontsevich--Tschinkel'19], [Nicaise--Shinder'19], [Nicaise--Ottem'21]). This method requires the explicit construction of degeneration families of curves with favorable properties. While the specific cons...
2025-02-12T06:41:13Z
50 pages. arXiv admin note: text overlap with arXiv:2312.15605
null
null
Stable rationality of hypersurfaces in schön affine varieties
['Taro Yoshino']
2025
null
0
0
['Mathematics']
2502.08213
LLM Modules: Knowledge Transfer from a Large to a Small Model using Enhanced Cross-Attention
['Konstantin Kolomeitsev']
['cs.CL', 'cs.LG', 'I.2.7; D.2.11']
In this work, we propose an architecture of LLM Modules that enables the transfer of knowledge from a large pre-trained model to a smaller model using an Enhanced Cross-Attention mechanism. In the proposed scheme, the Qwen2-1.5B model is frozen and its representations are passed through specially designed attention lay...
2025-02-12T08:48:55Z
Code and pre-trained weights available at https://huggingface.co/kkolomeitsev/llm-modules
null
null
null
null
null
null
null
null
null
2502.08226
TRISHUL: Towards Region Identification and Screen Hierarchy Understanding for Large VLM based GUI Agents
['Kunal Singh', 'Shreyas Singh', 'Mukund Khanna']
['cs.CV', 'cs.AI', 'cs.LG']
Recent advancements in Large Vision Language Models (LVLMs) have enabled the development of LVLM-based Graphical User Interface (GUI) agents under various paradigms. Training-based approaches, such as CogAgent and SeeClick, struggle with cross-dataset and cross-platform generalization due to their reliance on dataset-s...
2025-02-12T09:12:30Z
8 pages 5 figures
null
null
null
null
null
null
null
null
null
2502.08468
mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data
['Haonan Chen', 'Liang Wang', 'Nan Yang', 'Yutao Zhu', 'Ziliang Zhao', 'Furu Wei', 'Zhicheng Dou']
['cs.CV', 'cs.AI', 'cs.CL']
Multimodal embedding models have gained significant attention for their ability to map data from different modalities, such as text and images, into a unified representation space. However, the limited labeled multimodal data often hinders embedding performance. Recent approaches have leveraged data synthesis to addres...
2025-02-12T15:03:33Z
null
null
null
null
null
null
null
null
null
null
2502.08489
Salamandra Technical Report
['Aitor Gonzalez-Agirre', 'Marc Pàmies', 'Joan Llop', 'Irene Baucells', 'Severino Da Dalt', 'Daniel Tamayo', 'José Javier Saiz', 'Ferran Espuña', 'Jaume Prats', 'Javier Aula-Blasco', 'Mario Mina', 'Iñigo Pikabea', 'Adrián Rubio', 'Alexander Shvets', 'Anna Sallés', 'Iñaki Lacunza', 'Jorge Palomar', 'Júlia Falcão', 'Lucí...
['cs.CL']
This work introduces Salamandra, a suite of open-source decoder-only large language models available in three different sizes: 2, 7, and 40 billion parameters. The models were trained from scratch on highly multilingual data that comprises text in 35 European languages and code. Our carefully curated corpus is made exc...
2025-02-12T15:26:08Z
null
null
null
null
null
null
null
null
null
null
2502.08769
Cluster and Predict Latent Patches for Improved Masked Image Modeling
['Timothée Darcet', 'Federico Baldassarre', 'Maxime Oquab', 'Julien Mairal', 'Piotr Bojanowski']
['cs.CV', 'cs.AI']
Masked Image Modeling (MIM) offers a promising approach to self-supervised representation learning; however, existing MIM models still lag behind the state of the art. In this paper, we systematically analyze target representations, loss functions, and architectures, to introduce CAPI - a novel pure-MIM framework that r...
2025-02-12T20:17:10Z
26 pages, 14 figures, accepted in TMLR 2025
null
null
Cluster and Predict Latent Patches for Improved Masked Image Modeling
['Timothée Darcet', 'Federico Baldassarre', 'Maxime Oquab', 'J. Mairal', 'Piotr Bojanowski']
2025
arXiv.org
6
0
['Computer Science']
2502.08807
InTAR: Inter-Task Auto-Reconfigurable Accelerator Design for High Data Volume Variation in DNNs
['Zifan He', 'Anderson Truong', 'Yingqi Cao', 'Jason Cong']
['cs.AR', 'cs.LG']
The rise of deep neural networks (DNNs) has driven an increased demand for computing power and memory. Modern DNNs exhibit high data volume variation (HDV) across tasks, which poses challenges for FPGA acceleration: conventional accelerators rely on fixed execution patterns (dataflow or sequential) that can lead to pip...
2025-02-12T21:43:51Z
FCCM 2025
null
null
null
null
null
null
null
null
null
2502.08820
Can a Single Model Master Both Multi-turn Conversations and Tool Use? CoALM: A Unified Conversational Agentic Language Model
['Emre Can Acikgoz', 'Jeremiah Greer', 'Akul Datta', 'Ze Yang', 'William Zeng', 'Oussama Elachqar', 'Emmanouil Koukoumidis', 'Dilek Hakkani-Tür', 'Gokhan Tur']
['cs.AI', 'cs.CL']
Large Language Models (LLMs) with API-calling capabilities have enabled the building of effective Language Agents (LA), while also revolutionizing the conventional task-oriented dialogue (TOD) paradigm. However, current approaches face a critical dilemma: TOD systems are often trained on a limited set of target APIs, requiring new...
2025-02-12T22:18:34Z
null
null
null
null
null
null
null
null
null
null
2502.09042
Typhoon T1: An Open Thai Reasoning Model
['Pittawat Taveekitworachai', 'Potsawee Manakul', 'Kasima Tharnpipitchai', 'Kunat Pipatanakul']
['cs.CL', 'cs.AI']
This paper introduces Typhoon T1, an open effort to develop an open Thai reasoning model. A reasoning model is a relatively new type of generative model built on top of large language models (LLMs). A reasoning model generates a long chain of thought before arriving at a final answer, an approach found to improve perfo...
2025-02-13T07:55:54Z
25 pages, 6 figures
null
null
null
null
null
null
null
null
null
2502.09056
Adapting Language-Specific LLMs to a Reasoning Model in One Day via Model Merging -- An Open Recipe
['Kunat Pipatanakul', 'Pittawat Taveekitworachai', 'Potsawee Manakul', 'Kasima Tharnpipitchai']
['cs.CL', 'cs.AI']
This paper investigates data selection and model merging methodologies aimed at incorporating advanced reasoning capabilities such as those of DeepSeek R1 into language-specific large language models (LLMs), with a particular focus on the Thai LLM. Our goal is to enhance the reasoning capabilities of language-specific ...
2025-02-13T08:10:45Z
9 pages
null
null
null
null
null
null
null
null
null
2502.09082
CoSER: Coordinating LLM-Based Persona Simulation of Established Roles
['Xintao Wang', 'Heng Wang', 'Yifei Zhang', 'Xinfeng Yuan', 'Rui Xu', 'Jen-tse Huang', 'Siyu Yuan', 'Haoran Guo', 'Jiangjie Chen', 'Shuchang Zhou', 'Wei Wang', 'Yanghua Xiao']
['cs.CL', 'cs.AI']
Role-playing language agents (RPLAs) have emerged as promising applications of large language models (LLMs). However, simulating established characters presents a challenging task for RPLAs, due to the lack of authentic character datasets and nuanced evaluation methods using such data. In this paper, we present CoSER, ...
2025-02-13T08:55:24Z
Accepted by ICML 2025
null
null
null
null
null
null
null
null
null
2502.09135
Interpreting and Steering Protein Language Models through Sparse Autoencoders
['Edith Natalia Villegas Garcia', 'Alessio Ansuini']
['cs.LG', 'q-bio.BM']
The rapid advancements in transformer-based language models have revolutionized natural language processing, yet understanding the internal mechanisms of these models remains a significant challenge. This paper explores the application of sparse autoencoders (SAE) to interpret the internal representations of protein la...
2025-02-13T10:11:36Z
11 pages, 6 figures
null
null
null
null
null
null
null
null
null
2502.09284
SparQLe: Speech Queries to Text Translation Through LLMs
['Amirbek Djanibekov', 'Hanan Aldarmaki']
['cs.CL', 'cs.AI']
With the growing influence of Large Language Models (LLMs), there is increasing interest in integrating speech representations with them to enable more seamless multi-modal processing and speech understanding. This study introduces a novel approach that combines self-supervised speech representations with instruction-t...
2025-02-13T12:57:15Z
null
null
null
null
null
null
null
null
null
null
2502.09387
Truth Knows No Language: Evaluating Truthfulness Beyond English
['Blanca Calvo Figueras', 'Eneko Sagarzazu', 'Julen Etxaniz', 'Jeremy Barnes', 'Pablo Gamallo', 'Iria De Dios Flores', 'Rodrigo Agerri']
['cs.CL', 'cs.AI', 'cs.CY']
We introduce a professionally translated extension of the TruthfulQA benchmark designed to evaluate truthfulness in Basque, Catalan, Galician, and Spanish. Truthfulness evaluations of large language models (LLMs) have primarily been conducted in English. However, the ability of LLMs to maintain truthfulness across lang...
2025-02-13T15:04:53Z
14 pages, 6 figures, 8 tables
null
null
null
null
null
null
null
null
null
2502.09509
EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling
['Theodoros Kouzelis', 'Ioannis Kakogeorgiou', 'Spyros Gidaris', 'Nikos Komodakis']
['cs.LG']
Latent generative models have emerged as a leading approach for high-quality image synthesis. These models rely on an autoencoder to compress images into a latent space, followed by a generative model to learn the latent distribution. We identify that existing autoencoders lack equivariance to semantic-preserving trans...
2025-02-13T17:21:51Z
Preprint
null
null
EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling
['Theodoros Kouzelis', 'Ioannis Kakogeorgiou', 'Spyros Gidaris', 'Nikos Komodakis']
2025
arXiv.org
8
71
['Computer Science']
2502.09604
SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models
['Yung-Sung Chuang', 'Benjamin Cohen-Wang', 'Shannon Zejiang Shen', 'Zhaofeng Wu', 'Hu Xu', 'Xi Victoria Lin', 'James Glass', 'Shang-Wen Li', 'Wen-tau Yih']
['cs.CL', 'cs.AI', 'cs.LG']
We introduce SelfCite, a novel self-supervised approach that aligns LLMs to generate high-quality, fine-grained, sentence-level citations for the statements in their generated responses. Instead of only relying on costly and labor-intensive annotations, SelfCite leverages a reward signal provided by the LLM itself thro...
2025-02-13T18:55:13Z
ICML 2025 main conference paper. The source code is available at https://github.com/facebookresearch/SelfCite
null
null
null
null
null
null
null
null
null
2502.09613
Latent Radiance Fields with 3D-aware 2D Representations
['Chaoyi Zhou', 'Xi Liu', 'Feng Luo', 'Siyu Huang']
['cs.CV']
Latent 3D reconstruction has shown great promise in empowering 3D semantic understanding and 3D generation by distilling 2D features into the 3D space. However, existing approaches struggle with the domain gap between 2D feature space and 3D representations, resulting in degraded rendering performance. To address this ...
2025-02-13T18:59:09Z
Accepted to ICLR 2025; Project page: https://latent-radiance-field.github.io/LRF
null
null
null
null
null
null
null
null
null
2502.09620
Exploring the Potential of Encoder-free Architectures in 3D LMMs
['Yiwen Tang', 'Zoey Guo', 'Zhuhao Wang', 'Ray Zhang', 'Qizhi Chen', 'Junli Liu', 'Delin Qu', 'Zhigang Wang', 'Dong Wang', 'Xuelong Li', 'Bin Zhao']
['cs.CV', 'cs.AI', 'cs.CL']
Encoder-free architectures have been preliminarily explored in the 2D visual domain, yet it remains an open question whether they can be effectively applied to 3D understanding scenarios. In this paper, we present the first comprehensive investigation into the potential of encoder-free architectures to alleviate the ch...
2025-02-13T18:59:45Z
During the review process, we discovered that a portion of the test dataset used in our submission contained content that may have infringed upon the commercial copyrights of others. Due to the conflict regarding these commercial copyrights, we have unfortunately had to retract the submission
null
null
Exploring the Potential of Encoder-free Architectures in 3D LMMs
['Yiwen Tang', 'Zoey Guo', 'Zhuhao Wang', 'Ray Zhang', 'Qizhi Chen', 'Junli Liu', 'Delin Qu', 'Zhigang Wang', 'Dong Wang', 'Xuelong Li', 'Bin Zhao']
2025
arXiv.org
11
46
['Computer Science']
2502.09650
Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples
['Chengqian Gao', 'Haonan Li', 'Liu Liu', 'Zeke Xie', 'Peilin Zhao', 'Zhiqiang Xu']
['cs.CL', 'cs.AI', 'cs.LG']
The alignment of large language models (LLMs) often assumes that using more clean data yields better outcomes, overlooking the match between model capacity and example difficulty. Challenging this, we propose a new principle: Preference data vary in difficulty, and overly difficult examples hinder alignment, by exceedi...
2025-02-11T17:01:11Z
Accepted at ICML 2025
null
null
Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples
['Chengqian Gao', 'Haonan Li', 'Liu Liu', 'Zeke Xie', 'Peilin Zhao', 'Zhiqiang Xu']
2025
arXiv.org
4
90
['Computer Science']
2502.09653
SASVi -- Segment Any Surgical Video
['Ssharvien Kumar Sivakumar', 'Yannik Frisch', 'Amin Ranem', 'Anirban Mukhopadhyay']
['eess.IV', 'cs.CV']
Purpose: Foundation models, trained on multitudes of public datasets, often require additional fine-tuning or re-prompting mechanisms to be applied to visually distinct target domains such as surgical videos. Further, without domain knowledge, they cannot model the specific semantics of the target domain. Hence, when a...
2025-02-12T00:29:41Z
null
null
10.1007/s11548-025-03408-y
null
null
null
null
null
null
null
2502.09692
AB-UPT: Scaling Neural CFD Surrogates for High-Fidelity Automotive Aerodynamics Simulations via Anchored-Branched Universal Physics Transformers
['Benedikt Alkin', 'Maurits Bleeker', 'Richard Kurle', 'Tobias Kronlachner', 'Reinhard Sonnleitner', 'Matthias Dorfer', 'Johannes Brandstetter']
['cs.LG', 'cs.AI']
Recent advances in neural surrogate modeling offer the potential for transformative innovations in applications such as automotive aerodynamics. Yet, industrial-scale problems often involve volumetric meshes with cell counts reaching 100 million, presenting major scalability challenges. Complex geometries further compl...
2025-02-13T17:58:07Z
Preprint. Github: https://github.com/Emmi-AI/AB-UPT
null
null
AB-UPT: Scaling Neural CFD Surrogates for High-Fidelity Automotive Aerodynamics Simulations via Anchored-Branched Universal Physics Transformers
['Maurits J. R. Bleeker', 'Matthias Dorfer', 'T. Kronlachner', 'Reinhard Sonnleitner', 'Benedikt Alkin', 'Johannes Brandstetter']
2025
null
3
76
['Computer Science']
2502.09814
INJONGO: A Multicultural Intent Detection and Slot-filling Dataset for 16 African Languages
['Hao Yu', 'Jesujoba O. Alabi', 'Andiswa Bukula', 'Jian Yun Zhuang', 'En-Shiun Annie Lee', 'Tadesse Kebede Guge', 'Israel Abebe Azime', 'Happy Buzaaba', 'Blessing Kudzaishe Sibanda', 'Godson K. Kalipe', 'Jonathan Mukiibi', 'Salomon Kabongo Kabenamualu', 'Mmasibidi Setaka', 'Lolwethu Ndolela', 'Nkiruka Odu', 'Rooweither...
['cs.CL']
Slot-filling and intent detection are well-established tasks in Conversational AI. However, current large-scale benchmarks for these tasks often exclude evaluations of low-resource languages and rely on translations from English benchmarks, thereby predominantly reflecting Western-centric concepts. In this paper, we in...
2025-02-13T23:17:10Z
null
null
null
null
null
null
null
null
null
null
2502.09927
Granite Vision: a lightweight, open-source multimodal model for enterprise Intelligence
['Granite Vision Team', 'Leonid Karlinsky', 'Assaf Arbelle', 'Abraham Daniels', 'Ahmed Nassar', 'Amit Alfassi', 'Bo Wu', 'Eli Schwartz', 'Dhiraj Joshi', 'Jovana Kondic', 'Nimrod Shabtay', 'Pengyuan Li', 'Roei Herzig', 'Shafiq Abedin', 'Shaked Perek', 'Sivan Harary', 'Udi Barzelay', 'Adi Raz Goldfarb', 'Aude Oliva', 'Be...
['cs.CV', 'cs.AI']
We introduce Granite Vision, a lightweight large language model with vision capabilities, specifically designed to excel in enterprise use cases, particularly in visual document understanding. Our model is trained on a comprehensive instruction-following dataset, including document-related tasks, such as content extrac...
2025-02-14T05:36:32Z
null
null
null
null
null
null
null
null
null
null
2502.09992
Large Language Diffusion Models
['Shen Nie', 'Fengqi Zhu', 'Zebin You', 'Xiaolu Zhang', 'Jingyang Ou', 'Jun Hu', 'Jun Zhou', 'Yankai Lin', 'Ji-Rong Wen', 'Chongxuan Li']
['cs.CL', 'cs.LG']
Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). We challenge this notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. LLaDA models distributions through a forward data masking process a...
2025-02-14T08:23:51Z
null
null
null
null
null
null
null
null
null
null
2502.10059
RealCam-I2V: Real-World Image-to-Video Generation with Interactive Complex Camera Control
['Teng Li', 'Guangcong Zheng', 'Rui Jiang', 'Shuigen Zhan', 'Tao Wu', 'Yehao Lu', 'Yining Lin', 'Chuanyun Deng', 'Yepan Xiong', 'Min Chen', 'Lin Cheng', 'Xi Li']
['cs.CV']
Recent advancements in camera-trajectory-guided image-to-video generation offer higher precision and better support for complex camera control compared to text-based approaches. However, they also introduce significant usability challenges, as users often struggle to provide precise camera parameters when working with ...
2025-02-14T10:21:49Z
Accepted by ICCV 2025
null
null
RealCam-I2V: Real-World Image-to-Video Generation with Interactive Complex Camera Control
['Teng Li', 'Guangcong Zheng', 'Rui Jiang', 'Shuigenzhan', 'Tao Wu', 'Yehao Lu', 'Yining Lin', 'Xi Li']
2025
arXiv.org
9
0
['Computer Science']
2502.10140
Small Models, Big Impact: Efficient Corpus and Graph-Based Adaptation of Small Multilingual Language Models for Low-Resource Languages
['Daniil Gurgurov', 'Ivan Vykopal', 'Josef van Genabith', 'Simon Ostermann']
['cs.CL']
Low-resource languages (LRLs) face significant challenges in natural language processing (NLP) due to limited data. While current state-of-the-art large language models (LLMs) still struggle with LRLs, smaller multilingual models (mLMs) such as mBERT and XLM-R offer greater promise due to a better fit of their capacity...
2025-02-14T13:10:39Z
Pre-print
null
null
null
null
null
null
null
null
null
2502.10173
Agentic End-to-End De Novo Protein Design for Tailored Dynamics Using a Language Diffusion Model
['Bo Ni', 'Markus J. Buehler']
['q-bio.BM', 'cond-mat.mes-hall', 'cond-mat.mtrl-sci', 'cs.LG']
Proteins are dynamic molecular machines whose biological functions, spanning enzymatic catalysis, signal transduction, and structural adaptation, are intrinsically linked to their motions. Designing proteins with targeted dynamic properties, however, remains a challenge due to the complex, degenerate relationships betw...
2025-02-14T14:07:54Z
null
null
null
null
null
null
null
null
null
null
2502.10248
Step-Video-T2V Technical Report: The Practice, Challenges, and Future of Video Foundation Model
['Guoqing Ma', 'Haoyang Huang', 'Kun Yan', 'Liangyu Chen', 'Nan Duan', 'Shengming Yin', 'Changyi Wan', 'Ranchen Ming', 'Xiaoniu Song', 'Xing Chen', 'Yu Zhou', 'Deshan Sun', 'Deyu Zhou', 'Jian Zhou', 'Kaijun Tan', 'Kang An', 'Mei Chen', 'Wei Ji', 'Qiling Wu', 'Wen Sun', 'Xin Han', 'Yanan Wei', 'Zheng Ge', 'Aojie Li', 'B...
['cs.CV', 'cs.CL']
We present Step-Video-T2V, a state-of-the-art text-to-video pre-trained model with 30B parameters and the ability to generate videos up to 204 frames in length. A deep compression Variational Autoencoder, Video-VAE, is designed for video generation tasks, achieving 16x16 spatial and 8x temporal compression ratios, whil...
2025-02-14T15:58:10Z
36 pages, 14 figures
null
null
Step-Video-T2V Technical Report: The Practice, Challenges, and Future of Video Foundation Model
['Guoqing Ma', 'Haoyang Huang', 'Kun Yan', 'Liangyu Chen', 'Nan Duan', 'Sheng-Siang Yin', 'Changyi Wan', 'Ranchen Ming', 'Xiaoniu Song', 'Xing Chen', 'Yu Zhou', 'Deshan Sun', 'Deyu Zhou', 'Jian Zhou', 'Kaijun Tan', 'Kang An', 'Mei Chen', 'Wei Ji', 'Qiling Wu', 'Wenzheng Sun', 'Xin Han', 'Yana Wei', 'Zheng Ge', 'Aojie L...
2025
arXiv.org
41
47
['Computer Science']
2502.10341
Organize the Web: Constructing Domains Enhances Pre-Training Data Curation
['Alexander Wettig', 'Kyle Lo', 'Sewon Min', 'Hannaneh Hajishirzi', 'Danqi Chen', 'Luca Soldaini']
['cs.CL']
Modern language models are trained on large, unstructured datasets consisting of trillions of tokens and obtained by crawling the web. The unstructured nature makes it difficult to reason about their contents and develop systematic approaches to data curation. In this paper, we unpack monolithic web corpora by developi...
2025-02-14T18:02:37Z
Accepted at ICML 2025. Project page: https://weborganizer.allen.ai
null
null
null
null
null
null
null
null
null
2502.10362
CLaMP 3: Universal Music Information Retrieval Across Unaligned Modalities and Unseen Languages
['Shangda Wu', 'Zhancheng Guo', 'Ruibin Yuan', 'Junyan Jiang', 'Seungheon Doh', 'Gus Xia', 'Juhan Nam', 'Xiaobing Li', 'Feng Yu', 'Maosong Sun']
['cs.SD', 'eess.AS']
CLaMP 3 is a unified framework developed to address challenges of cross-modal and cross-lingual generalization in music information retrieval. Using contrastive learning, it aligns all major music modalities--including sheet music, performance signals, and audio recordings--with multilingual text in a shared representa...
2025-02-14T18:42:25Z
20 pages, 8 figures, 12 tables, accepted by ACL 2025
null
null
null
null
null
null
null
null
null
2502.10373
OWLS: Scaling Laws for Multilingual Speech Recognition and Translation Models
['William Chen', 'Jinchuan Tian', 'Yifan Peng', 'Brian Yan', 'Chao-Han Huck Yang', 'Shinji Watanabe']
['cs.CL', 'cs.AI', 'cs.LG', 'eess.AS']
Neural scaling laws offer valuable insights for designing robust sequence processing architectures. While these laws have been extensively characterized in other modalities, their behavior in speech remains comparatively underexplored. In this work, we introduce OWLS, an open-access, reproducible suite of multilingual ...
2025-02-14T18:51:40Z
23 pages, 13 figures
null
null
null
null
null
null
null
null
null
2502.10385
Simplifying DINO via Coding Rate Regularization
['Ziyang Wu', 'Jingyuan Zhang', 'Druv Pai', 'XuDong Wang', 'Chandan Singh', 'Jianwei Yang', 'Jianfeng Gao', 'Yi Ma']
['cs.CV', 'cs.AI']
DINO and DINOv2 are two model families being widely used to learn representations from unlabeled imagery data at large scales. Their learned representations often enable state-of-the-art performance for downstream tasks, such as image classification and segmentation. However, they employ many empirically motivated desi...
2025-02-14T18:58:04Z
17 pages, 5 figures
null
null
Simplifying DINO via Coding Rate Regularization
['Ziyang Wu', 'Jingyuan Zhang', 'Druv Pai', 'XuDong Wang', 'Chandan Singh', 'Jianwei Yang', 'Jianfeng Gao', 'Yi Ma']
2025
arXiv.org
1
45
['Computer Science']
2502.10391
MM-RLHF: The Next Step Forward in Multimodal LLM Alignment
['Yi-Fan Zhang', 'Tao Yu', 'Haochen Tian', 'Chaoyou Fu', 'Peiyan Li', 'Jianshu Zeng', 'Wulin Xie', 'Yang Shi', 'Huanyu Zhang', 'Junkang Wu', 'Xue Wang', 'Yibo Hu', 'Bin Wen', 'Fan Yang', 'Zhang Zhang', 'Tingting Gao', 'Di Zhang', 'Liang Wang', 'Rong Jin', 'Tieniu Tan']
['cs.CL', 'cs.CV']
Despite notable advancements in Multimodal Large Language Models (MLLMs), most state-of-the-art models have not undergone thorough alignment with human preferences. This gap exists because current alignment research has primarily achieved progress in specific areas (e.g., hallucination reduction), while the broader que...
2025-02-14T18:59:51Z
Project Page: https://mm-rlhf.github.io/
null
null
MM-RLHF: The Next Step Forward in Multimodal LLM Alignment
['Yi-Fan Zhang', 'Tao Yu', 'Haochen Tian', 'Chaoyou Fu', 'Peiyan Li', 'Jianshu Zeng', 'Wulin Xie', 'Yang Shi', 'Huanyu Zhang', 'Junkang Wu', 'Xue Wang', 'Yibo Hu', 'Bin Wen', 'Fan Yang', 'Zhang Zhang', 'Tingting Gao', 'Di Zhang', 'Liang Wang', 'Rong Jin', 'Tien-Ping Tan']
2025
arXiv.org
21
0
['Computer Science']
2502.10392
TSP3D: Text-guided Sparse Voxel Pruning for Efficient 3D Visual Grounding
['Wenxuan Guo', 'Xiuwei Xu', 'Ziwei Wang', 'Jianjiang Feng', 'Jie Zhou', 'Jiwen Lu']
['cs.CV', 'cs.LG']
In this paper, we propose an efficient multi-level convolution architecture for 3D visual grounding. Conventional methods are difficult to meet the requirements of real-time inference due to the two-stage or point-based architecture. Inspired by the success of multi-level fully sparse convolutional architecture in 3D o...
2025-02-14T18:59:59Z
Accepted at CVPR2025 with a top score
null
null
null
null
null
null
null
null
null
2502.10582
Named entity recognition for Serbian legal documents: Design, methodology and dataset development
['Vladimir Kalušev', 'Branko Brkljač']
['cs.CL', '68T10, 68T30, 68T35, 68T50, 91F20', 'I.5.2; I.5.4; I.5.5; I.2.1; I.2.7; I.2; H.4.1']
Recent advancements in the field of natural language processing (NLP) and especially large language models (LLMs) and their numerous applications have brought research attention to design of different document processing tools and enhancements in the process of document archiving, search and retrieval. Domain of offici...
2025-02-14T22:23:39Z
9 pages, 6 figures, 1 table, associated NER4Legal_SRB model and dataset are available at https://huggingface.co/kalusev/NER4Legal_SRB , paper submitted to 15th International Conference on Information Society and Technology (ICIST), Kopaonik, Serbia, 9-12 March 2025, conference track: Generative AI and Large Lan...
null
null
null
null
null
null
null
null
null
2502.10645
BabyLM Turns 3: Call for papers for the 2025 BabyLM workshop
['Lucas Charpentier', 'Leshem Choshen', 'Ryan Cotterell', 'Mustafa Omer Gul', 'Michael Hu', 'Jaap Jumelet', 'Tal Linzen', 'Jing Liu', 'Aaron Mueller', 'Candace Ross', 'Raj Sanjay Shah', 'Alex Warstadt', 'Ethan Wilcox', 'Adina Williams']
['cs.CL']
BabyLM aims to dissolve the boundaries between cognitive modeling and language modeling. We call for both workshop papers and for researchers to join the 3rd BabyLM competition. As in previous years, we call for participants in the data-efficient pretraining challenge in the general track. This year, we also offer a ne...
2025-02-15T02:46:43Z
EMNLP 2025 BabyLM Workshop. arXiv admin note: text overlap with arXiv:2404.06214
null
null
null
null
null
null
null
null
null
2502.10810
SVBench: A Benchmark with Temporal Multi-Turn Dialogues for Streaming Video Understanding
['Zhenyu Yang', 'Yuhang Hu', 'Zemin Du', 'Dizhan Xue', 'Shengsheng Qian', 'Jiahong Wu', 'Fan Yang', 'Weiming Dong', 'Changsheng Xu']
['cs.CV']
Despite the significant advancements of Large Vision-Language Models (LVLMs) on established benchmarks, there remains a notable gap in suitable evaluation regarding their applicability in the emerging domain of long-context streaming video understanding. Current benchmarks for video understanding typically emphasize is...
2025-02-15T14:29:44Z
ICLR 2025 Accept (Spotlight)
null
null
null
null
null
null
null
null
null
2502.10841
SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers
['Di Qiu', 'Zhengcong Fei', 'Rui Wang', 'Jialin Bai', 'Changqian Yu', 'Mingyuan Fan', 'Guibin Chen', 'Xiang Wen']
['cs.CV']
We present SkyReels-A1, a simple yet effective framework built upon video diffusion Transformer to facilitate portrait image animation. Existing methodologies still encounter issues, including identity distortion, background instability, and unrealistic facial dynamics, particularly in head-only animation scenarios. Be...
2025-02-15T16:08:40Z
null
null
null
SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers
['Di Qiu', 'Zhengcong Fei', 'Rui Wang', 'Jialin Bai', 'Changqian Yu', 'Mingyuan Fan', 'Guibin Chen', 'Xiang Wen']
2025
arXiv.org
11
0
['Computer Science']
2502.10852
Multilingual Encoder Knows more than You Realize: Shared Weights Pretraining for Extremely Low-Resource Languages
['Zeli Su', 'Ziyin Zhang', 'Guixian Xu', 'Jianing Liu', 'XU Han', 'Ting Zhang', 'Yushuang Dong']
['cs.CL', 'cs.AI']
While multilingual language models like XLM-R have advanced multilingualism in NLP, they still perform poorly in extremely low-resource languages. This situation is exacerbated by the fact that modern LLMs such as LLaMA and Qwen support far fewer languages than XLM-R, making text generation models non-existent for many...
2025-02-15T16:53:10Z
ACL 2025 camera-ready
null
null
Multilingual Encoder Knows more than You Realize: Shared Weights Pretraining for Extremely Low-Resource Languages
['Zeli Su', 'Ziyin Zhang', 'Guixian Xu', 'Jianing Liu', 'XU Han', 'Ting Zhang', 'Yushuang Dong']
2025
arXiv.org
1
27
['Computer Science']
2502.10868
NitiBench: A Comprehensive Study of LLM Framework Capabilities for Thai Legal Question Answering
['Pawitsapak Akarajaradwong', 'Pirat Pothavorn', 'Chompakorn Chaksangchaichot', 'Panuthep Tasawong', 'Thitiwat Nopparatbundit', 'Sarana Nutanong']
['cs.CL']
The application of large language models (LLMs) in the legal domain holds significant potential for information retrieval and question answering, yet Thai legal QA systems face challenges due to a lack of standardized evaluation benchmarks and the complexity of Thai legal structures. This paper introduces NitiBench, a ...
2025-02-15T17:52:14Z
null
null
null
NitiBench: A Comprehensive Studies of LLM Frameworks Capabilities for Thai Legal Question Answering
['Pawitsapak Akarajaradwong', 'Pirat Pothavorn', 'Chompakorn Chaksangchaichot', 'Panuthep Tasawong', 'Thitiwat Nopparatbundit', 'Sarana Nutanong']
2025
arXiv.org
1
0
['Computer Science']
2502.10990
FinMTEB: Finance Massive Text Embedding Benchmark
['Yixuan Tang', 'Yi Yang']
['cs.CL', 'cs.IR']
Embedding models play a crucial role in representing and retrieving information across various NLP applications. Recent advances in large language models (LLMs) have further enhanced the performance of embedding models. While these models are often benchmarked on general-purpose datasets, real-world applications demand...
2025-02-16T04:23:52Z
https://github.com/yixuantt/FinMTEB
null
null
FinMTEB: Finance Massive Text Embedding Benchmark
['Yixuan Tang', 'Yi Yang']
2025
arXiv.org
2
75
['Computer Science']
2502.10996
RAS: Retrieval-And-Structuring for Knowledge-Intensive LLM Generation
['Pengcheng Jiang', 'Lang Cao', 'Ruike Zhu', 'Minhao Jiang', 'Yunyi Zhang', 'Jimeng Sun', 'Jiawei Han']
['cs.CL']
Large language models (LLMs) have achieved impressive performance on knowledge-intensive tasks, yet they often struggle with multi-step reasoning due to the unstructured nature of retrieved context. While retrieval-augmented generation (RAG) methods provide external information, the lack of explicit organization among ...
2025-02-16T05:01:49Z
under review
null
null
RAS: Retrieval-And-Structuring for Knowledge-Intensive LLM Generation
['Pengcheng Jiang', 'Lang Cao', 'Ruike Zhu', 'Minhao Jiang', 'Yunyi Zhang', 'Jimeng Sun', 'Jiawei Han']
2025
arXiv.org
4
77
['Computer Science']
2502.11079
Phantom: Subject-consistent video generation via cross-modal alignment
['Lijie Liu', 'Tianxiang Ma', 'Bingchuan Li', 'Zhuowei Chen', 'Jiawei Liu', 'Gen Li', 'Siyu Zhou', 'Qian He', 'Xinglong Wu']
['cs.CV', 'cs.AI']
The continuous development of foundational models for video generation is evolving into various applications, with subject-consistent video generation still in the exploratory stage. We refer to this as Subject-to-Video, which extracts subject elements from reference images and generates subject-consistent videos follo...
2025-02-16T11:02:50Z
null
null
null
null
null
null
null
null
null
null
2502.11084
Rewrite to Jailbreak: Discover Learnable and Transferable Implicit Harmfulness Instruction
['Yuting Huang', 'Chengyuan Liu', 'Yifeng Feng', 'Yiquan Wu', 'Chao Wu', 'Fei Wu', 'Kun Kuang']
['cs.CL']
As Large Language Models (LLMs) are widely applied in various domains, the safety of LLMs is increasingly attracting attention to avoid their powerful capabilities being misused. Existing jailbreak methods create a forced instruction-following scenario, or search adversarial prompts with prefix or suffix tokens to achi...
2025-02-16T11:43:39Z
22 pages, 10 figures, accepted to ACL 2025 findings
null
null
null
null
null
null
null
null
null
2502.11102
OptMATH: A Scalable Bidirectional Data Synthesis Framework for Optimization Modeling
['Hongliang Lu', 'Zhonglin Xie', 'Yaoyu Wu', 'Can Ren', 'Yuxuan Chen', 'Zaiwen Wen']
['cs.AI', 'cs.LG']
Despite the rapid development of large language models (LLMs), a fundamental challenge persists: the lack of high-quality optimization modeling datasets hampers LLMs' robust modeling of practical optimization problems from natural language descriptions (NL). This data scarcity also contributes to the generalization dif...
2025-02-16T12:38:37Z
This paper has 36 pages, 18 figures, and two co-first authors: Hongliang Lu and Zhonglin Xie
null
null
null
null
null
null
null
null
null
2502.11157
Dyve: Thinking Fast and Slow for Dynamic Process Verification
['Jianyuan Zhong', 'Zeju Li', 'Zhijian Xu', 'Xiangyu Wen', 'Qiang Xu']
['cs.AI']
We present Dyve, a dynamic process verifier that enhances reasoning error detection in large language models by integrating fast and slow thinking, inspired by Kahneman's Systems Theory. Dyve adaptively applies immediate token-level confirmation System 1 for straightforward steps and comprehensive analysis System 2 for...
2025-02-16T15:11:19Z
8 pages, 4 figures
null
null
Dyve: Thinking Fast and Slow for Dynamic Process Verification
['Jianyuan Zhong', 'Zeju Li', 'Zhijian Xu', 'Xiangyu Wen', 'Qiang Xu']
2025
arXiv.org
4
23
['Computer Science']
2502.11183
Don't Get Lost in the Trees: Streamlining LLM Reasoning by Overcoming Tree Search Exploration Pitfalls
['Ante Wang', 'Linfeng Song', 'Ye Tian', 'Dian Yu', 'Haitao Mi', 'Xiangyu Duan', 'Zhaopeng Tu', 'Jinsong Su', 'Dong Yu']
['cs.CL']
Recent advancements in tree search algorithms guided by verifiers have significantly enhanced the reasoning capabilities of large language models (LLMs), but at the cost of increased computational resources. In this work, we identify two key challenges contributing to this inefficiency: $\textit{over-exploration}$ due ...
2025-02-16T16:12:01Z
null
null
null
null
null
null
null
null
null
null