| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2408.10007 | P3P: Pseudo-3D Pre-training for Scaling 3D Voxel-based Masked
Autoencoders | ['Xuechao Chen', 'Ying Chen', 'Jialin Li', 'Qiang Nie', 'Hanqiu Deng', 'Yong Liu', 'Qixing Huang', 'Yang Li'] | ['cs.CV'] | 3D pre-training is crucial to 3D perception tasks. Nevertheless, limited by
the difficulties in collecting clean and complete 3D data, 3D pre-training has
persistently faced data scaling challenges. In this work, we introduce a novel
self-supervised pre-training framework that incorporates millions of images
into 3D pr... | 2024-08-19T13:59:53Z | Under review. Pre-print | null | null | null | null | null | null | null | null | null |
2408.10088 | Recent Surge in Public Interest in Transportation: Sentiment Analysis of
Baidu Apollo Go Using Weibo Data | ['Shiqi Wang', 'Zhouye Zhao', 'Yuhang Xie', 'Mingchuan Ma', 'Zirui Chen', 'Zeyu Wang', 'Bohao Su', 'Wenrui Xu', 'Tianyi Li'] | ['cs.SI', 'J.4'] | Urban mobility and transportation systems have been profoundly transformed by
the advancement of autonomous vehicle technologies. Baidu Apollo Go, a pioneer
robotaxi service from the Chinese tech giant Baidu, has recently been widely
deployed in major cities like Beijing and Wuhan, sparking increased
conversation and o... | 2024-08-19T15:29:56Z | null | null | null | Recent Surge in Public Interest in Transportation: Sentiment Analysis of Baidu Apollo Go Using Weibo Data | ['Shiqi Wang', 'Zhouye Zhao', 'Yuhang Xie', 'Mingchuan Ma', 'Zirui Chen', 'Zeyu Wang', 'Bohao Su', 'Wenrui Xu', 'Tianyi Li'] | 2024 | arXiv.org | 1 | 48 | ['Computer Science'] |
2408.10161 | NeuFlow v2: High-Efficiency Optical Flow Estimation on Edge Devices | ['Zhiyong Zhang', 'Aniket Gupta', 'Huaizu Jiang', 'Hanumant Singh'] | ['cs.CV', 'cs.AI', 'cs.RO'] | Real-time high-accuracy optical flow estimation is crucial for various
real-world applications. While recent learning-based optical flow methods have
achieved high accuracy, they often come with significant computational costs.
In this paper, we propose a highly efficient optical flow method that balances
high accuracy... | 2024-08-19T17:13:34Z | null | null | null | NeuFlow v2: High-Efficiency Optical Flow Estimation on Edge Devices | ['Zhiyong Zhang', 'Aniket Gupta', 'Huaizu Jiang', 'H. Singh'] | 2024 | arXiv.org | 1 | 39 | ['Computer Science'] |
2408.10188 | LongVILA: Scaling Long-Context Visual Language Models for Long Videos | ['Yukang Chen', 'Fuzhao Xue', 'Dacheng Li', 'Qinghao Hu', 'Ligeng Zhu', 'Xiuyu Li', 'Yunhao Fang', 'Haotian Tang', 'Shang Yang', 'Zhijian Liu', 'Ethan He', 'Hongxu Yin', 'Pavlo Molchanov', 'Jan Kautz', 'Linxi Fan', 'Yuke Zhu', 'Yao Lu', 'Song Han'] | ['cs.CV', 'cs.CL'] | Long-context capability is critical for multi-modal foundation models,
especially for long video understanding. We introduce LongVILA, a full-stack
solution for long-context visual-language models by co-designing the algorithm
and system. For model training, we upgrade existing VLMs to support long video
understanding ... | 2024-08-19T17:48:08Z | Code and models are available at
https://github.com/NVlabs/VILA/tree/main/longvila | null | null | LongVILA: Scaling Long-Context Visual Language Models for Long Videos | ['Fuzhao Xue', 'Yukang Chen', 'Dacheng Li', 'Qinghao Hu', 'Ligeng Zhu', 'Xiuyu Li', 'Yunhao Fang', 'Haotian Tang', 'Shang Yang', 'Zhijian Liu', 'Ethan He', 'Hongxu Yin', 'Pavlo Molchanov', 'Jan Kautz', 'Linxi Fan', 'Yuke Zhu', 'Yao Lu', 'Song Han'] | 2024 | International Conference on Learning Representations | 97 | 75 | ['Computer Science'] |
2408.10414 | Towards Automation of Human Stage of Decay Identification: An Artificial
Intelligence Approach | ['Anna-Maria Nau', 'Phillip Ditto', 'Dawnie Wolfe Steadman', 'Audris Mockus'] | ['cs.CV', 'cs.AI'] | Determining the stage of decomposition (SOD) is crucial for estimating the
postmortem interval and identifying human remains. Currently, labor-intensive
manual scoring methods are used for this purpose, but they are subjective and
do not scale for the emerging large-scale archival collections of human
decomposition pho... | 2024-08-19T21:00:40Z | 13 pages | null | null | null | null | null | null | null | null | null |
2408.10441 | Goldfish: Monolingual Language Models for 350 Languages | ['Tyler A. Chang', 'Catherine Arnett', 'Zhuowen Tu', 'Benjamin K. Bergen'] | ['cs.CL'] | For many low-resource languages, the only available language models are large
multilingual models trained on many languages simultaneously. However, using
FLORES perplexity as a metric, we find that these models perform worse than
bigrams for many languages (e.g. 24% of languages in XGLM 4.5B; 43% in BLOOM
7.1B). To fa... | 2024-08-19T22:31:21Z | null | null | null | Goldfish: Monolingual Language Models for 350 Languages | ['Tyler A. Chang', 'Catherine Arnett', 'Zhuowen Tu', 'Benjamin Bergen'] | 2024 | arXiv.org | 10 | 115 | ['Computer Science'] |
2408.10573 | Putting People in LLMs' Shoes: Generating Better Answers via Question
Rewriter | ['Junhao Chen', 'Bowen Wang', 'Zhouqiang Jiang', 'Yuta Nakashima'] | ['cs.CL', 'cs.AI'] | Large Language Models (LLMs) have demonstrated significant capabilities,
particularly in the domain of question answering (QA). However, their
effectiveness in QA is often undermined by the vagueness of user questions. To
address this issue, we introduce single-round instance-level prompt
optimization, referred to as q... | 2024-08-20T06:24:47Z | 7 pages, 4 figures, 5 tables and accepted at AAAI 2025 Main
Conference | null | null | Putting People in LLMs' Shoes: Generating Better Answers via Question Rewriter | ['Junhao Chen', 'Bowen Wang', 'Zhouqiang Jiang', 'Yuta Nakashima'] | 2024 | AAAI Conference on Artificial Intelligence | 1 | 29 | ['Computer Science'] |
2408.10605 | MUSES: 3D-Controllable Image Generation via Multi-Modal Agent
Collaboration | ['Yanbo Ding', 'Shaobin Zhuang', 'Kunchang Li', 'Zhengrong Yue', 'Yu Qiao', 'Yali Wang'] | ['cs.CV', 'cs.AI'] | Despite recent advancements in text-to-image generation, most existing
methods struggle to create images with multiple objects and complex spatial
relationships in the 3D world. To tackle this limitation, we introduce a
generic AI system, namely MUSES, for 3D-controllable image generation from user
queries. Specificall... | 2024-08-20T07:37:23Z | AAAI 2025 | null | null | MUSES: 3D-Controllable Image Generation via Multi-Modal Agent Collaboration | ['Yanbo Ding', 'Shaobin Zhuang', 'Kunchang Li', 'Zhengrong Yue', 'Yu Qiao', 'Yali Wang'] | 2024 | AAAI Conference on Artificial Intelligence | 2 | 62 | ['Computer Science'] |
2408.10613 | Task-level Distributionally Robust Optimization for Large Language
Model-based Dense Retrieval | ['Guangyuan Ma', 'Yongliang Ma', 'Xing Wu', 'Zhenpeng Su', 'Ming Zhou', 'Songlin Hu'] | ['cs.IR'] | Large Language Model-based Dense Retrieval (LLM-DR) optimizes over numerous
heterogeneous fine-tuning collections from different domains. However, the
discussion about its training data distribution is still minimal. Previous
studies rely on empirically assigned dataset choices or sampling ratios, which
inevitably lead... | 2024-08-20T07:48:19Z | Accepted by AAAI25. Source code is available at
https://github.com/ma787639046/tdro | null | null | null | null | null | null | null | null | null |
2408.10724 | Crafting Tomorrow's Headlines: Neural News Generation and Detection in
English, Turkish, Hungarian, and Persian | ['Cem Üyük', 'Danica Rovó', 'Shaghayegh Kolli', 'Rabia Varol', 'Georg Groh', 'Daryna Dementieva'] | ['cs.CL'] | In the era dominated by information overload and its facilitation with Large
Language Models (LLMs), the prevalence of misinformation poses a significant
threat to public discourse and societal well-being. A critical concern at
present involves the identification of machine-generated news. In this work, we
take a signi... | 2024-08-20T10:45:36Z | EMNLP 2024 NLP4PI Workshop | null | null | null | null | null | null | null | null | null |
2408.10771 | kNN Retrieval for Simple and Effective Zero-Shot Multi-speaker
Text-to-Speech | ['Karl El Hajal', 'Ajinkya Kulkarni', 'Enno Hermann', 'Mathew Magimai. -Doss'] | ['eess.AS', 'cs.AI', 'cs.LG', 'cs.SD'] | While recent zero-shot multi-speaker text-to-speech (TTS) models achieve
impressive results, they typically rely on extensive transcribed speech
datasets from numerous speakers and intricate training pipelines. Meanwhile,
self-supervised learning (SSL) speech features have emerged as effective
intermediate representati... | 2024-08-20T12:09:58Z | Accepted at NAACL 2025 | null | null | kNN Retrieval for Simple and Effective Zero-Shot Multi-speaker Text-to-Speech | ['Karl El Hajal', 'Ajinkya Kulkarni', 'Enno Hermann', 'Mathew Magimai.-Doss'] | 2024 | North American Chapter of the Association for Computational Linguistics | 1 | 46 | ['Engineering', 'Computer Science'] |
2408.10903 | BEYOND DIALOGUE: A Profile-Dialogue Alignment Framework Towards General
Role-Playing Language Model | ['Yeyong Yu', 'Runsheng Yu', 'Haojie Wei', 'Zhanqiu Zhang', 'Quan Qian'] | ['cs.CL', 'cs.HC'] | The rapid advancement of large language models (LLMs) has revolutionized
role-playing, enabling the development of general role-playing models. However,
current role-playing training has two significant issues: (I) Using a
predefined role profile to prompt dialogue training for specific scenarios
usually leads to incon... | 2024-08-20T14:47:38Z | null | null | null | null | null | null | null | null | null | null |
2408.10914 | To Code, or Not To Code? Exploring Impact of Code in Pre-training | ['Viraat Aryabumi', 'Yixuan Su', 'Raymond Ma', 'Adrien Morisot', 'Ivan Zhang', 'Acyr Locatelli', 'Marzieh Fadaee', 'Ahmet Üstün', 'Sara Hooker'] | ['cs.CL'] | Including code in the pre-training data mixture, even for models not
specifically designed for code, has become a common practice in LLMs
pre-training. While there has been anecdotal consensus among practitioners that
code data plays a vital role in general LLMs' performance, there is only
limited work analyzing the pr... | 2024-08-20T14:58:13Z | null | null | null | To Code, or Not To Code? Exploring Impact of Code in Pre-training | ['Viraat Aryabumi', 'Yixuan Su', 'Raymond Ma', 'Adrien Morisot', 'Ivan Zhang', 'Acyr F. Locatelli', 'Marzieh Fadaee', 'A. Ustun', 'Sara Hooker'] | 2024 | arXiv.org | 26 | 67 | ['Computer Science'] |
2408.11039 | Transfusion: Predict the Next Token and Diffuse Images with One
Multi-Modal Model | ['Chunting Zhou', 'Lili Yu', 'Arun Babu', 'Kushal Tirumala', 'Michihiro Yasunaga', 'Leonid Shamis', 'Jacob Kahn', 'Xuezhe Ma', 'Luke Zettlemoyer', 'Omer Levy'] | ['cs.AI', 'cs.CV'] | We introduce Transfusion, a recipe for training a multi-modal model over
discrete and continuous data. Transfusion combines the language modeling loss
function (next token prediction) with diffusion to train a single transformer
over mixed-modality sequences. We pretrain multiple Transfusion models up to 7B
parameters ... | 2024-08-20T17:48:20Z | 23 pages | null | null | Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model | ['Chunting Zhou', 'Lili Yu', 'Arun Babu', 'Kushal Tirumala', 'Michihiro Yasunaga', 'Leonid Shamis', 'Jacob Kahn', 'Xuezhe Ma', 'Luke S. Zettlemoyer', 'Omer Levy'] | 2024 | arXiv.org | 190 | 50 | ['Computer Science'] |
2408.11054 | Near, far: Patch-ordering enhances vision foundation models' scene
understanding | ['Valentinos Pariza', 'Mohammadreza Salehi', 'Gertjan Burghouts', 'Francesco Locatello', 'Yuki M. Asano'] | ['cs.CV', 'cs.AI'] | We introduce NeCo: Patch Neighbor Consistency, a novel self-supervised
training loss that enforces patch-level nearest neighbor consistency across a
student and teacher model. Compared to contrastive approaches that only yield
binary learning signals, i.e., 'attract' and 'repel', this approach benefits
from the more fi... | 2024-08-20T17:58:59Z | Accepted at ICLR25. The webpage is accessible at:
https://vpariza.github.io/NeCo/ | null | null | Near, far: Patch-ordering enhances vision foundation models' scene understanding | ['Valentinos Pariza', 'Mohammadreza Salehi', 'G. Burghouts', 'Francesco Locatello', 'Yuki M. Asano'] | 2024 | International Conference on Learning Representations | 1 | 70 | ['Computer Science'] |
2408.11172 | SubgoalXL: Subgoal-based Expert Learning for Theorem Proving | ['Xueliang Zhao', 'Lin Zheng', 'Haige Bo', 'Changran Hu', 'Urmish Thakker', 'Lingpeng Kong'] | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.LO'] | Formal theorem proving, a field at the intersection of mathematics and
computer science, has seen renewed interest with advancements in large language
models (LLMs). This paper introduces SubgoalXL, a novel approach that
synergizes subgoal-based proofs with expert learning to enhance LLMs'
capabilities in formal theore... | 2024-08-20T20:10:53Z | null | null | null | null | null | null | null | null | null | null |
2408.11227 | OCTCube-M: A 3D multimodal optical coherence tomography foundation model
for retinal and systemic diseases with cross-cohort and cross-device
validation | ['Zixuan Liu', 'Hanwen Xu', 'Addie Woicik', 'Linda G. Shapiro', 'Marian Blazes', 'Yue Wu', 'Verena Steffen', 'Catherine Cukras', 'Cecilia S. Lee', 'Miao Zhang', 'Aaron Y. Lee', 'Sheng Wang'] | ['eess.IV', 'cs.AI', 'cs.CV'] | We present OCTCube-M, a 3D OCT-based multi-modal foundation model for jointly
analyzing OCT and en face images. OCTCube-M first developed OCTCube, a 3D
foundation model pre-trained on 26,685 3D OCT volumes encompassing 1.62 million
2D OCT images. It then exploits a novel multi-modal contrastive learning
framework COEP ... | 2024-08-20T22:55:19Z | null | null | null | null | null | null | null | null | null | null |
2408.11281 | BearLLM: A Prior Knowledge-Enhanced Bearing Health Management Framework
with Unified Vibration Signal Representation | ['Haotian Peng', 'Jiawei Liu', 'Jinsong Du', 'Jie Gao', 'Wei Wang'] | ['cs.AI'] | We propose a bearing health management framework leveraging large language
models (BearLLM), a novel multimodal model that unifies multiple
bearing-related tasks by processing user prompts and vibration signals.
Specifically, we introduce a prior knowledge-enhanced unified vibration signal
representation to handle vari... | 2024-08-21T02:04:54Z | Accepted to AAAI2025 | null | null | BearLLM: A Prior Knowledge-Enhanced Bearing Health Management Framework with Unified Vibration Signal Representation | ['Haotian Peng', 'Jiawei Liu', 'Jinsong Du', 'Jie Gao', 'Wei Wang'] | 2024 | AAAI Conference on Artificial Intelligence | 1 | 43 | ['Computer Science'] |
2408.11294 | RedWhale: An Adapted Korean LLM Through Efficient Continual Pretraining | ['Anh-Dung Vo', 'Minseong Jung', 'Wonbeen Lee', 'Daewoo Choi'] | ['cs.CL'] | The field of Natural Language Processing (NLP) has seen significant
advancements with the development of Large Language Models (LLMs). However,
much of this research remains focused on English, often overlooking
low-resource languages like Korean. This oversight presents challenges due to
the unique non-alphabetic toke... | 2024-08-21T02:49:41Z | null | null | null | null | null | null | null | null | null | null |
2408.11791 | Critique-out-Loud Reward Models | ['Zachary Ankner', 'Mansheej Paul', 'Brandon Cui', 'Jonathan D. Chang', 'Prithviraj Ammanabrolu'] | ['cs.LG'] | Traditionally, reward models used for reinforcement learning from human
feedback (RLHF) are trained to directly predict preference scores without
leveraging the generation capabilities of the underlying large language model
(LLM). This limits the capabilities of reward models as they must reason
implicitly about the qu... | 2024-08-21T17:24:15Z | null | null | null | null | null | null | null | null | null | null |
2408.11796 | LLM Pruning and Distillation in Practice: The Minitron Approach | ['Sharath Turuvekere Sreenivas', 'Saurav Muralidharan', 'Raviraj Joshi', 'Marcin Chochowski', 'Ameya Sunil Mahabaleshwarkar', 'Gerald Shen', 'Jiaqi Zeng', 'Zijia Chen', 'Yoshi Suhara', 'Shizhe Diao', 'Chenhan Yu', 'Wei-Chun Chen', 'Hayley Ross', 'Oluwatobi Olabiyi', 'Ashwath Aithal', 'Oleksii Kuchaiev', 'Daniel Korzekw... | ['cs.CL', 'cs.AI', 'cs.LG'] | We present a comprehensive report on compressing the Llama 3.1 8B and Mistral
NeMo 12B models to 4B and 8B parameters, respectively, using pruning and
distillation. We explore two distinct pruning strategies: (1) depth pruning and
(2) joint hidden/attention/MLP (width) pruning, and evaluate the results on
common benchm... | 2024-08-21T17:38:48Z | v4: Update author order | null | null | LLM Pruning and Distillation in Practice: The Minitron Approach | ['Sharath Turuvekere Sreenivas', 'Saurav Muralidharan', 'Raviraj Joshi', 'Marcin Chochowski', 'M. Patwary', 'M. Shoeybi', 'Bryan Catanzaro', 'Jan Kautz', 'Pavlo Molchanov'] | 2024 | arXiv.org | 36 | 35 | ['Computer Science'] |
2408.11811 | EmbodiedSAM: Online Segment Any 3D Thing in Real Time | ['Xiuwei Xu', 'Huangxing Chen', 'Linqing Zhao', 'Ziwei Wang', 'Jie Zhou', 'Jiwen Lu'] | ['cs.CV', 'cs.RO'] | Embodied tasks require the agent to fully understand 3D scenes simultaneously
with its exploration, so an online, real-time, fine-grained and
highly-generalized 3D perception model is desperately needed. Since
high-quality 3D data is limited, directly training such a model in 3D is almost
infeasible. Meanwhile, vision ... | 2024-08-21T17:57:06Z | ICLR25 Oral. Project page: https://xuxw98.github.io/ESAM/ | null | null | EmbodiedSAM: Online Segment Any 3D Thing in Real Time | ['Xiuwei Xu', 'Huangxing Chen', 'Linqing Zhao', 'Ziwei Wang', 'Jie Zhou', 'Jiwen Lu'] | 2024 | International Conference on Learning Representations | 16 | 40 | ['Computer Science'] |
2408.11812 | Scaling Cross-Embodied Learning: One Policy for Manipulation,
Navigation, Locomotion and Aviation | ['Ria Doshi', 'Homer Walke', 'Oier Mees', 'Sudeep Dasari', 'Sergey Levine'] | ['cs.RO', 'cs.LG'] | Modern machine learning systems rely on large datasets to attain broad
generalization, and this often poses a challenge in robot learning, where each
robotic platform and task might have only a small dataset. By training a single
policy across many different kinds of robots, a robot learning method can
leverage much br... | 2024-08-21T17:57:51Z | Project website at https://crossformer-model.github.io/ | null | null | Scaling Cross-Embodied Learning: One Policy for Manipulation, Navigation, Locomotion and Aviation | ['Ria Doshi', 'H. Walke', 'Oier Mees', 'Sudeep Dasari', 'Sergey Levine'] | 2024 | Conference on Robot Learning | 60 | 69 | ['Computer Science'] |
2408.11851 | SAGE-RT: Synthetic Alignment data Generation for Safety Evaluation and
Red Teaming | ['Anurakt Kumar', 'Divyanshu Kumar', 'Jatan Loya', 'Nitin Aravind Birur', 'Tanay Baswa', 'Sahil Agarwal', 'Prashanth Harshangi'] | ['cs.AI', 'cs.CL', 'cs.CR'] | We introduce Synthetic Alignment data Generation for Safety Evaluation and
Red Teaming (SAGE-RT or SAGE) a novel pipeline for generating synthetic
alignment and red-teaming data. Existing methods fall short in creating nuanced
and diverse datasets, providing necessary control over the data generation and
validation pro... | 2024-08-14T08:38:31Z | null | null | null | null | null | null | null | null | null | null |
2408.11857 | Hermes 3 Technical Report | ['Ryan Teknium', 'Jeffrey Quesnelle', 'Chen Guang'] | ['cs.CL'] | Instruct (or "chat") tuned models have become the primary way in which most
people interact with large language models. As opposed to "base" or
"foundation" models, instruct-tuned models are optimized to respond to
imperative statements. We present Hermes 3, a neutrally-aligned generalist
instruct and tool use model wi... | 2024-08-15T20:17:33Z | null | null | null | Hermes 3 Technical Report | ['Ryan Teknium', 'Jeffrey Quesnelle', 'Chen Guang'] | 2024 | arXiv.org | 14 | 34 | ['Computer Science'] |
2408.11878 | Open-FinLLMs: Open Multimodal Large Language Models for Financial
Applications | ['Jimin Huang', 'Mengxi Xiao', 'Dong Li', 'Zihao Jiang', 'Yuzhe Yang', 'Yifei Zhang', 'Lingfei Qian', 'Yan Wang', 'Xueqing Peng', 'Yang Ren', 'Ruoyu Xiang', 'Zhengyu Chen', 'Xiao Zhang', 'Yueru He', 'Weiguang Han', 'Shunian Chen', 'Lihang Shen', 'Daniel Kim', 'Yangyang Yu', 'Yupeng Cao', 'Zhiyang Deng', 'Haohang Li', '... | ['cs.CL', 'cs.CE', 'q-fin.CP'] | Financial LLMs hold promise for advancing financial tasks and domain-specific
applications. However, they are limited by scarce corpora, weak multimodal
capabilities, and narrow evaluations, making them less suited for real-world
application. To address this, we introduce \textit{Open-FinLLMs}, the first
open-source mu... | 2024-08-20T16:15:28Z | 33 pages, 13 figures | null | null | null | null | null | null | null | null | null |
2408.11915 | Video-Foley: Two-Stage Video-To-Sound Generation via Temporal Event
Condition For Foley Sound | ['Junwon Lee', 'Jaekwon Im', 'Dabin Kim', 'Juhan Nam'] | ['cs.SD', 'cs.CV', 'cs.LG', 'cs.MM', 'eess.AS'] | Foley sound synthesis is crucial for multimedia production, enhancing user
experience by synchronizing audio and video both temporally and semantically.
Recent studies on automating this labor-intensive process through
video-to-sound generation face significant challenges. Systems lacking explicit
temporal features suf... | 2024-08-21T18:06:15Z | null | null | null | Video-Foley: Two-Stage Video-To-Sound Generation via Temporal Event Condition For Foley Sound | ['Junwon Lee', 'Jae-Yeol Im', 'Dabin Kim', 'Juhan Nam'] | 2024 | arXiv.org | 10 | 44 | ['Computer Science', 'Engineering'] |
2408.12109 | RoVRM: A Robust Visual Reward Model Optimized via Auxiliary Textual
Preference Data | ['Chenglong Wang', 'Yang Gan', 'Yifu Huo', 'Yongyu Mu', 'Murun Yang', 'Qiaozhi He', 'Tong Xiao', 'Chunliang Zhang', 'Tongran Liu', 'Quan Du', 'Di Yang', 'Jingbo Zhu'] | ['cs.CV', 'cs.CL'] | Large vision-language models (LVLMs) often fail to align with human
preferences, leading to issues like generating misleading content without
proper visual context (also known as hallucination). A promising solution to
this problem is using human-preference alignment techniques, such as best-of-n
sampling and reinforce... | 2024-08-22T03:49:18Z | Accepted by AAAI 2025 | null | null | null | null | null | null | null | null | null |
2408.12245 | Scalable Autoregressive Image Generation with Mamba | ['Haopeng Li', 'Jinyue Yang', 'Kexin Wang', 'Xuerui Qiu', 'Yuhong Chou', 'Xin Li', 'Guoqi Li'] | ['cs.CV'] | We introduce AiM, an autoregressive (AR) image generative model based on
Mamba architecture. AiM employs Mamba, a novel state-space model characterized
by its exceptional performance for long-sequence modeling with linear time
complexity, to supplant the commonly utilized Transformers in AR image
generation models, aim... | 2024-08-22T09:27:49Z | 9 pages, 8 figures | null | null | Scalable Autoregressive Image Generation with Mamba | ['Haopeng Li', 'Jinyue Yang', 'Kexin Wang', 'Xuerui Qiu', 'Yuhong Chou', 'Xin Li', 'Guoqi Li'] | 2024 | arXiv.org | 15 | 38 | ['Computer Science'] |
2408.12480 | Vintern-1B: An Efficient Multimodal Large Language Model for Vietnamese | ['Khang T. Doan', 'Bao G. Huynh', 'Dung T. Hoang', 'Thuc D. Pham', 'Nhat H. Pham', 'Quan T. M. Nguyen', 'Bang Q. Vo', 'Suong N. Hoang'] | ['cs.LG', 'cs.CL'] | In this report, we introduce Vintern-1B, a reliable 1-billion-parameters
multimodal large language model (MLLM) for Vietnamese language tasks. By
integrating the Qwen2-0.5B-Instruct language model with the
InternViT-300M-448px visual model, Vintern-1B is optimized for a range of
applications, including optical characte... | 2024-08-22T15:15:51Z | null | null | null | Vintern-1B: An Efficient Multimodal Large Language Model for Vietnamese | ['Khang T. Doan', 'Bao G. Huynh', 'D. T. Hoang', 'Thuc D. Pham', 'Nhat H. Pham', 'Quan T.M. Nguyen', 'Bang Q. Vo', 'Suong N. Hoang'] | 2024 | arXiv.org | 6 | 20 | ['Computer Science'] |
2408.12503 | The Russian-focused embedders' exploration: ruMTEB benchmark and Russian
embedding model design | ['Artem Snegirev', 'Maria Tikhonova', 'Anna Maksimova', 'Alena Fenogenova', 'Alexander Abramov'] | ['cs.CL', 'cs.AI'] | Embedding models play a crucial role in Natural Language Processing (NLP) by
creating text embeddings used in various tasks such as information retrieval
and assessing semantic text similarity. This paper focuses on research related
to embedding models in the Russian language. It introduces a new
Russian-focused embedd... | 2024-08-22T15:53:23Z | to appear in NAACL 2025 | null | null | The Russian-focused embedders' exploration: ruMTEB benchmark and Russian embedding model design | ['Artem Snegirev', 'Maria Tikhonova', 'Anna Maksimova', 'Alena Fenogenova', 'Alexander Abramov'] | 2024 | North American Chapter of the Association for Computational Linguistics | 6 | 62 | ['Computer Science'] |
2408.12528 | Show-o: One Single Transformer to Unify Multimodal Understanding and
Generation | ['Jinheng Xie', 'Weijia Mao', 'Zechen Bai', 'David Junhao Zhang', 'Weihao Wang', 'Kevin Qinghong Lin', 'Yuchao Gu', 'Zhijie Chen', 'Zhenheng Yang', 'Mike Zheng Shou'] | ['cs.CV'] | We present a unified transformer, i.e., Show-o, that unifies multimodal
understanding and generation. Unlike fully autoregressive models, Show-o
unifies autoregressive and (discrete) diffusion modeling to adaptively handle
inputs and outputs of various and mixed modalities. The unified model flexibly
supports a wide ra... | 2024-08-22T16:32:32Z | Technical Report | null | null | Show-o: One Single Transformer to Unify Multimodal Understanding and Generation | ['Jinheng Xie', 'Weijia Mao', 'Zechen Bai', 'David Junhao Zhang', 'Weihao Wang', 'Kevin Qinghong Lin', 'Yuchao Gu', 'Zhijie Chen', 'Zhenheng Yang', 'Mike Zheng Shou'] | 2024 | International Conference on Learning Representations | 228 | 81 | ['Computer Science'] |
2408.12547 | Towards Evaluating and Building Versatile Large Language Models for
Medicine | ['Chaoyi Wu', 'Pengcheng Qiu', 'Jinxin Liu', 'Hongfei Gu', 'Na Li', 'Ya Zhang', 'Yanfeng Wang', 'Weidi Xie'] | ['cs.CL'] | In this study, we present MedS-Bench, a comprehensive benchmark designed to
evaluate the performance of large language models (LLMs) in clinical contexts.
Unlike existing benchmarks that focus on multiple-choice question answering,
MedS-Bench spans 11 high-level clinical tasks, including clinical report
summarization, ... | 2024-08-22T17:01:34Z | null | null | null | Towards evaluating and building versatile large language models for medicine | ['Chaoyi Wu', 'Pengcheng Qiu', 'Jinxin Liu', 'Hongfei Gu', 'Na Li', 'Ya Zhang', 'Yanfeng Wang', 'Weidi Xie'] | 2024 | npj Digit. Medicine | 18 | 53 | ['Medicine', 'Computer Science'] |
2408.12569 | Sapiens: Foundation for Human Vision Models | ['Rawal Khirodkar', 'Timur Bagautdinov', 'Julieta Martinez', 'Su Zhaoen', 'Austin James', 'Peter Selednik', 'Stuart Anderson', 'Shunsuke Saito'] | ['cs.CV'] | We present Sapiens, a family of models for four fundamental human-centric
vision tasks -- 2D pose estimation, body-part segmentation, depth estimation,
and surface normal prediction. Our models natively support 1K high-resolution
inference and are extremely easy to adapt for individual tasks by simply
fine-tuning model... | 2024-08-22T17:37:27Z | ECCV 2024 (Oral) | null | null | null | null | null | null | null | null | null |
2408.12637 | Building and better understanding vision-language models: insights and
future directions | ['Hugo Laurençon', 'Andrés Marafioti', 'Victor Sanh', 'Léo Tronchon'] | ['cs.CV', 'cs.AI'] | The field of vision-language models (VLMs), which take images and texts as
inputs and output texts, is rapidly evolving and has yet to reach consensus on
several key aspects of the development pipeline, including data, architecture,
and training methods. This paper can be seen as a tutorial for building a VLM.
We begin... | 2024-08-22T17:47:24Z | null | null | null | null | null | null | null | null | null | null |
2408.12837 | Underwater SONAR Image Classification and Analysis using LIME-based
Explainable Artificial Intelligence | ['Purushothaman Natarajan', 'Athira Nambiar'] | ['cs.CV', 'cs.AI', 'cs.HC', 'cs.LG', '68T07 (Primary) 68T45, 68U10 (Secondary)', 'I.4.8; I.2.10; I.5.4'] | Deep learning techniques have revolutionized image classification by
mimicking human cognition and automating complex decision-making processes.
However, the deployment of AI systems in the wild, especially in high-security
domains such as defence, is curbed by the lack of explainability of the model.
To this end, eXpl... | 2024-08-23T04:54:18Z | 55 pages, 9 tables, 18 figures | null | null | Underwater SONAR Image Classification and Analysis using LIME-based Explainable Artificial Intelligence | ['Purushothaman Natarajan', 'Athira Nambiar'] | 2024 | arXiv.org | 0 | 86 | ['Computer Science'] |
2408.12902 | IAA: Inner-Adaptor Architecture Empowers Frozen Large Language Model
with Multimodal Capabilities | ['Bin Wang', 'Chunyu Xie', 'Dawei Leng', 'Yuhui Yin'] | ['cs.AI', 'cs.CL', 'cs.LG'] | In the field of multimodal large language models (MLLMs), common methods
typically involve unfreezing the language model during training to foster
profound visual understanding. However, the fine-tuning of such models with
vision-language data often leads to a diminution of their natural language
processing (NLP) capab... | 2024-08-23T08:10:13Z | AAAI 2025 | null | null | null | null | null | null | null | null | null |
2408.12963 | Open Llama2 Model for the Lithuanian Language | ['Artūras Nakvosas', 'Povilas Daniušis', 'Vytas Mulevičius'] | ['cs.CL', 'cs.AI', 'cs.LG'] | In this paper, we propose and describe the first open Llama2 large language
models (LLMs) for the Lithuanian language, including an accompanying
question/answer (Q/A) dataset and translations of popular LLM benchmarks. We
provide a brief review of open regional LLMs and detailed information on the
proposed LLMs and the... | 2024-08-23T10:18:39Z | 12 pages, 8 figures, 5 tables | Informatica, 2025 | 10.15388/25-INFOR592 | null | null | null | null | null | null | null |
2408.13010 | A Web-Based Solution for Federated Learning with LLM-Based Automation | ['Chamith Mawela', 'Chaouki Ben Issaid', 'Mehdi Bennis'] | ['cs.LG', 'stat.AP'] | Federated Learning (FL) offers a promising approach for collaborative machine
learning across distributed devices. However, its adoption is hindered by the
complexity of building reliable communication architectures and the need for
expertise in both machine learning and network programming. This paper presents
a compr... | 2024-08-23T11:57:02Z | null | null | null | A Web-Based Solution for Federated Learning with LLM-Based Automation | ['Chamith Mawela', 'Chaouki Ben Issaid', 'Mehdi Bennis'] | 2024 | arXiv.org | 0 | 0 | ['Computer Science', 'Mathematics'] |
2408.13106 | NEST: Self-supervised Fast Conformer as All-purpose Seasoning to Speech
Processing Tasks | ['He Huang', 'Taejin Park', 'Kunal Dhawan', 'Ivan Medennikov', 'Krishna C. Puvvada', 'Nithin Rao Koluguri', 'Weiqing Wang', 'Jagadeesh Balam', 'Boris Ginsburg'] | ['cs.SD', 'eess.AS'] | Self-supervised learning has been proved to benefit a wide range of speech
processing tasks, such as speech recognition/translation, speaker verification
and diarization, etc. However, most of current approaches are computationally
expensive. In this paper, we propose a simplified and more efficient
self-supervised lea... | 2024-08-23T14:32:18Z | Published in ICASSP 2025 | null | null | null | null | null | null | null | null | null |
2408.13257 | MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution
Real-World Scenarios that are Difficult for Humans? | ['Yi-Fan Zhang', 'Huanyu Zhang', 'Haochen Tian', 'Chaoyou Fu', 'Shuangqing Zhang', 'Junfei Wu', 'Feng Li', 'Kun Wang', 'Qingsong Wen', 'Zhang Zhang', 'Liang Wang', 'Rong Jin', 'Tieniu Tan'] | ['cs.CV'] | Comprehensive evaluation of Multimodal Large Language Models (MLLMs) has
recently garnered widespread attention in the research community. However, we
observe that existing benchmarks present several common barriers that make it
difficult to measure the significant challenges that models face in the real
world, includi... | 2024-08-23T17:59:51Z | Project Page: https://mme-realworld.github.io/; accepted by ICLR 2025 | null | null | null | null | null | null | null | null | null |
2408.13359 | Power Scheduler: A Batch Size and Token Number Agnostic Learning Rate
Scheduler | ['Yikang Shen', 'Matthew Stallone', 'Mayank Mishra', 'Gaoyuan Zhang', 'Shawn Tan', 'Aditya Prasad', 'Adriana Meza Soria', 'David D. Cox', 'Rameswar Panda'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Finding the optimal learning rate for language model pretraining is a
challenging task. This is not only because there is a complicated correlation
between learning rate, batch size, number of training tokens, model size, and
other hyperparameters but also because it is prohibitively expensive to perform
a hyperparamet... | 2024-08-23T20:22:20Z | null | null | null | null | null | null | null | null | null | null |
2408.13370 | BiGS: Bidirectional Gaussian Primitives for Relightable 3D Gaussian
Splatting | ['Zhenyuan Liu', 'Yu Guo', 'Xinyuan Li', 'Bernd Bickel', 'Ran Zhang'] | ['cs.CV', 'cs.GR'] | We present Bidirectional Gaussian Primitives, an image-based novel view
synthesis technique designed to represent and render 3D objects with surface
and volumetric materials under dynamic illumination. Our approach integrates
light intrinsic decomposition into the Gaussian splatting framework, enabling
real-time religh... | 2024-08-23T21:04:40Z | null | null | BiGS: Bidirectional Gaussian Primitives for Relightable 3D Gaussian Splatting | ['Zhenyuan Liu', 'Yu Guo', 'Xinyuan Li', 'Bernd Bickel', 'Ran Zhang'] | 2024 | arXiv.org | 2 | 40 | ['Computer Science'] |
2408.13402 | LLaVaOLMoBitnet1B: Ternary LLM goes Multimodal! | ['Jainaveen Sundaram', 'Ravi Iyer'] | ['cs.LG'] | Multimodal Large Language Models (MM-LLMs) have seen significant advancements
in the last year, demonstrating impressive performance across tasks. However,
to truly democratize AI, models must exhibit strong capabilities and be able to
run efficiently on small compute footprints accessible by most. Part of this
quest, ... | 2024-08-23T23:00:19Z | null | null | null | null | null | null | null | null | null | null |
2408.13632 | FungiTastic: A multi-modal dataset and benchmark for image
categorization | ['Lukas Picek', 'Klara Janouskova', 'Vojtech Cermak', 'Jiri Matas'] | ['cs.CV'] | We introduce a new, challenging benchmark and a dataset, FungiTastic, based
on fungal records continuously collected over a twenty-year span. The dataset
is labelled and curated by experts and consists of about 350k multimodal
observations of 6k fine-grained categories (species). The fungi observations
include photogra... | 2024-08-24T17:22:46Z | FGVC workshop, CVPR 2025 | null | null | FungiTastic: A multi-modal dataset and benchmark for image categorization | ['Lukáš Picek', 'Klára Janousková', 'Milan Šulc', 'Jirí Matas'] | 2024 | arXiv.org | 1 | 70 | ['Computer Science'] |
2408.13831 | Guardians of the Machine Translation Meta-Evaluation: Sentinel Metrics
Fall In! | ['Stefano Perrella', 'Lorenzo Proietti', 'Alessandro Scirè', 'Edoardo Barba', 'Roberto Navigli'] | ['cs.CL', 'cs.AI'] | Annually, at the Conference of Machine Translation (WMT), the Metrics Shared
Task organizers conduct the meta-evaluation of Machine Translation (MT)
metrics, ranking them according to their correlation with human judgments.
Their results guide researchers toward enhancing the next generation of metrics
and MT systems. ... | 2024-08-25T13:29:34Z | Presented at ACL 2024 Main Conference. 29 pages | null | null | Guardians of the Machine Translation Meta-Evaluation: Sentinel Metrics Fall In! | ['Stefano Perrella', 'Lorenzo Proietti', 'Alessandro Sciré', 'Edoardo Barba', 'Roberto Navigli'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 4 | 32 | ['Computer Science'] |
2408.13871 | AlphaViT: A Flexible Game-Playing AI for Multiple Games and Variable
Board Sizes | ['Kazuhisa Fujita'] | ['cs.LG', 'cs.AI'] | This paper presents novel game-playing AI agents based on the AlphaZero
framework, enhanced with Vision Transformer (ViT): AlphaViT, AlphaViD, and
AlphaVDA. These agents are designed to play multiple board games of various
sizes using a single network with shared weights, thereby overcoming
AlphaZero's limitation of fi... | 2024-08-25T15:40:21Z | null | null | null | AlphaViT: A Flexible Game-Playing AI for Multiple Games and Variable Board Sizes | ['Kazuhisa Fujita'] | 2024 | null | 0 | 0 | ['Computer Science'] |
2408.13920 | Wav2Small: Distilling Wav2Vec2 to 72K parameters for Low-Resource Speech
emotion recognition | ['Dionyssos Kounadis-Bastian', 'Oliver Schrüfer', 'Anna Derington', 'Hagen Wierstorf', 'Florian Eyben', 'Felix Burkhardt', 'Björn Schuller'] | ['cs.SD', 'eess.AS'] | Speech Emotion Recognition (SER) needs high computational resources to
overcome the challenge of substantial annotator disagreement. Today SER is
shifting towards dimensional annotations of arousal, dominance, and valence
(A/D/V). Universal metrics as the L2 distance prove unsuitable for evaluating
A/D/V accuracy due t... | 2024-08-25T19:13:56Z | apply review | null | null | null | null | null | null | null | null | null |
2408.14080 | SONICS: Synthetic Or Not -- Identifying Counterfeit Songs | ['Md Awsafur Rahman', 'Zaber Ibn Abdul Hakim', 'Najibul Haque Sarker', 'Bishmoy Paul', 'Shaikh Anowarul Fattah'] | ['cs.SD', 'cs.AI', 'cs.CV', 'cs.LG', 'eess.AS'] | The recent surge in AI-generated songs presents exciting possibilities and
challenges. These innovations necessitate the ability to distinguish between
human-composed and synthetic songs to safeguard artistic integrity and protect
human musical artistry. Existing research and datasets in fake song detection
only focus ... | 2024-08-26T08:02:57Z | Accepted to ICLR 2025. Project url: https://github.com/awsaf49/sonics | null | null | null | null | null | null | null | null | null |
2408.14236 | DSTI at LLMs4OL 2024 Task A: Intrinsic versus extrinsic knowledge for
type classification | ['Hanna Abi Akl'] | ['cs.CL', 'cs.AI', 'cs.LG'] | We introduce semantic towers, an extrinsic knowledge representation method,
and compare it to intrinsic knowledge in large language models for ontology
learning. Our experiments show a trade-off between performance and semantic
grounding for extrinsic knowledge compared to a fine-tuned model intrinsic
knowledge. We rep... | 2024-08-26T12:50:27Z | 8 pages, 4 figures, accepted for the LLMs4OL challenge at the
International Semantic Web Conference (ISWC) 2024 | null | null | DSTI at LLMs4OL 2024 Task A: Intrinsic versus extrinsic knowledge for type classification | ['Hanna Abi Akl'] | 2024 | LLMs4OL@ISWC | 1 | 13 | ['Computer Science'] |
2408.14587 | Efficient fine-tuning of 37-level GraphCast with the Canadian global
deterministic analysis | ['Christopher Subich'] | ['cs.LG', 'physics.ao-ph'] | This work describes a process for efficiently fine-tuning the GraphCast
data-driven forecast model to simulate another analysis system, here the Global
Deterministic Prediction System (GDPS) of Environment and Climate Change Canada
(ECCC). Using two years of training data (July 2019 -- December 2021) and 37
GPU-days of... | 2024-08-26T19:16:08Z | null | null | 10.1175/AIES-D-24-0101.1 | null | null | null | null | null | null | null |
2408.14774 | Instruct-SkillMix: A Powerful Pipeline for LLM Instruction Tuning | ['Simran Kaur', 'Simon Park', 'Anirudh Goyal', 'Sanjeev Arora'] | ['cs.LG', 'cs.CL'] | We introduce Instruct-SkillMix, an automated approach for creating diverse,
high quality SFT data for instruction-following. The pipeline involves two
stages, each leveraging an existing powerful LLM: (1) Skill extraction: uses
the LLM to extract core "skills" for instruction-following by directly
prompting the model. ... | 2024-08-27T04:31:58Z | null | International Conference on Learning Representations (ICLR 2025) | 10.48550/arXiv.2408.14774 | Instruct-SkillMix: A Powerful Pipeline for LLM Instruction Tuning | ['Simran Kaur', 'Simon Park', 'Anirudh Goyal', 'Sanjeev Arora'] | 2024 | arXiv.org | 10 | 0 | ['Computer Science'] |
2408.14849 | Project SHADOW: Symbolic Higher-order Associative Deductive reasoning On
Wikidata using LM probing | ['Hanna Abi Akl'] | ['cs.CL', 'cs.AI'] | We introduce SHADOW, a fine-tuned language model trained on an intermediate
task using associative deductive reasoning, and measure its performance on a
knowledge base construction task using Wikidata triple completion. We evaluate
SHADOW on the LM-KBC 2024 challenge and show that it outperforms the baseline
solution b... | 2024-08-27T08:01:13Z | 7 pages, 1 figure, accepted for the International Conference on
Natural Language Computing (NATL) 2024 | null | null | null | null | null | null | null | null | null |
2408.14960 | Multilingual Arbitrage: Optimizing Data Pools to Accelerate Multilingual
Progress | ['Ayomide Odumakinde', "Daniel D'souza", 'Pat Verga', 'Beyza Ermis', 'Sara Hooker'] | ['cs.CL', 'cs.AI'] | The use of synthetic data has played a critical role in recent state-of-art
breakthroughs. However, overly relying on a single oracle teacher model to
generate data has been shown to lead to model collapse and invite propagation
of biases. These limitations are particularly evident in multilingual settings,
where the a... | 2024-08-27T11:07:15Z | null | null | null | null | null | null | null | null | null | null |
2408.15221 | LLM Defenses Are Not Robust to Multi-Turn Human Jailbreaks Yet | ['Nathaniel Li', 'Ziwen Han', 'Ian Steneker', 'Willow Primack', 'Riley Goodside', 'Hugh Zhang', 'Zifan Wang', 'Cristina Menghini', 'Summer Yue'] | ['cs.LG', 'cs.CL', 'cs.CR', 'cs.CY'] | Recent large language model (LLM) defenses have greatly improved models'
ability to refuse harmful queries, even when adversarially attacked. However,
LLM defenses are primarily evaluated against automated adversarial attacks in a
single turn of conversation, an insufficient threat model for real-world
malicious use. W... | 2024-08-27T17:33:30Z | null | null | null | LLM Defenses Are Not Robust to Multi-Turn Human Jailbreaks Yet | ['Nathaniel Li', 'Ziwen Han', 'Ian Steneker', 'Willow E. Primack', 'Riley Goodside', 'Hugh Zhang', 'Zifan Wang', 'Cristina Menghini', 'Summer Yue'] | 2024 | arXiv.org | 57 | 118 | ['Computer Science'] |
2408.15237 | The Mamba in the Llama: Distilling and Accelerating Hybrid Models | ['Junxiong Wang', 'Daniele Paliotta', 'Avner May', 'Alexander M. Rush', 'Tri Dao'] | ['cs.LG', 'cs.AI'] | Linear RNN architectures, like Mamba, can be competitive with Transformer
models in language modeling while having advantageous deployment
characteristics. Given the focus on training large-scale Transformer models, we
consider the challenge of converting these pretrained models for deployment. We
demonstrate that it i... | 2024-08-27T17:56:11Z | NeurIPS 2024. v4 updates: mention concurrent work of speculative
decoding for SSM | null | null | null | null | null | null | null | null | null |
2408.15313 | Bi-Factorial Preference Optimization: Balancing Safety-Helpfulness in
Language Models | ['Wenxuan Zhang', 'Philip H. S. Torr', 'Mohamed Elhoseiny', 'Adel Bibi'] | ['cs.AI', 'cs.CL', 'cs.LG'] | Fine-tuning large language models (LLMs) on human preferences, typically
through reinforcement learning from human feedback (RLHF), has proven
successful in enhancing their capabilities. However, ensuring the safety of
LLMs during fine-tuning remains a critical concern, and mitigating the
potential conflicts in safety ... | 2024-08-27T17:31:21Z | The paper has been accepted in ICLR 2025 as spotlight presentation | null | null | Bi-Factorial Preference Optimization: Balancing Safety-Helpfulness in Language Models | ['Wenxuan Zhang', 'Philip H. S. Torr', 'Mohamed Elhoseiny', 'Adel Bibi'] | 2024 | International Conference on Learning Representations | 15 | 60 | ['Computer Science'] |
2408.15518 | Squid: Long Context as a New Modality for Energy-Efficient On-Device
Language Models | ['Wei Chen', 'Zhiyuan Li', 'Shuo Xin', 'Yihao Wang'] | ['cs.CL'] | This paper presents Dolphin, a novel decoder-decoder architecture for
energy-efficient processing of long contexts in language models. Our approach
addresses the significant energy consumption and latency challenges inherent in
on-device models. Dolphin employs a compact 0.5B parameter decoder to distill
extensive cont... | 2024-08-28T04:06:14Z | null | null | null | Dolphin: Long Context as a New Modality for Energy-Efficient On-Device Language Models | ['Wei Chen', 'Zhiyuan Li', 'Shuo Xin', 'Yihao Wang'] | 2024 | arXiv.org | 5 | 44 | ['Computer Science'] |
2408.15542 | Kangaroo: A Powerful Video-Language Model Supporting Long-context Video
Input | ['Jiajun Liu', 'Yibing Wang', 'Hanghang Ma', 'Xiaoping Wu', 'Xiaoqi Ma', 'Xiaoming Wei', 'Jianbin Jiao', 'Enhua Wu', 'Jie Hu'] | ['cs.CV', 'cs.AI', 'cs.MM'] | Rapid advancements have been made in extending Large Language Models (LLMs)
to Large Multi-modal Models (LMMs). However, extending input modality of LLMs
to video data remains a challenging endeavor, especially for long videos. Due
to insufficient access to large-scale high-quality video data and the excessive
compress... | 2024-08-28T05:34:14Z | null | null | null | Kangaroo: A Powerful Video-Language Model Supporting Long-context Video Input | ['Jiajun Liu', 'Yibing Wang', 'Hanghang Ma', 'Xiaoping Wu', 'Xiaoqi Ma', 'Xiaoming Wei', 'Jianbin Jiao', 'Enhua Wu', 'Jie Hu'] | 2024 | arXiv.org | 72 | 93 | ['Computer Science'] |
2408.15545 | SciLitLLM: How to Adapt LLMs for Scientific Literature Understanding | ['Sihang Li', 'Jin Huang', 'Jiaxi Zhuang', 'Yaorui Shi', 'Xiaochen Cai', 'Mingjun Xu', 'Xiang Wang', 'Linfeng Zhang', 'Guolin Ke', 'Hengxing Cai'] | ['cs.LG', 'cs.CL'] | Scientific literature understanding is crucial for extracting targeted
information and garnering insights, thereby significantly advancing scientific
discovery. Despite the remarkable success of Large Language Models (LLMs), they
face challenges in scientific literature understanding, primarily due to (1) a
lack of sci... | 2024-08-28T05:41:52Z | ICLR 2025 | null | null | null | null | null | null | null | null | null |
2408.15556 | Divide, Conquer and Combine: A Training-Free Framework for
High-Resolution Image Perception in Multimodal Large Language Models | ['Wenbin Wang', 'Liang Ding', 'Minyan Zeng', 'Xiabin Zhou', 'Li Shen', 'Yong Luo', 'Dacheng Tao'] | ['cs.CV'] | Multimodal large language models (MLLMs) have experienced significant
advancements recently, but still struggle to recognize and interpret intricate
details in high-resolution (HR) images effectively. While state-of-the-art
(SOTA) MLLMs claim to process images at 4K resolution, existing MLLM benchmarks
only support up ... | 2024-08-28T06:09:02Z | null | null | null | null | null | null | null | null | null | null |
2408.15666 | StyleRemix: Interpretable Authorship Obfuscation via Distillation and
Perturbation of Style Elements | ['Jillian Fisher', 'Skyler Hallinan', 'Ximing Lu', 'Mitchell Gordon', 'Zaid Harchaoui', 'Yejin Choi'] | ['cs.CL'] | Authorship obfuscation, rewriting a text to intentionally obscure the
identity of the author, is an important but challenging task. Current methods
using large language models (LLMs) lack interpretability and controllability,
often ignoring author-specific stylistic features, resulting in less robust
performance overal... | 2024-08-28T09:35:15Z | null | null | null | null | null | null | null | null | null | null |
2408.15710 | Conan-embedding: General Text Embedding with More and Better Negative
Samples | ['Shiyu Li', 'Yang Tang', 'Shizhe Chen', 'Xi Chen'] | ['cs.CL'] | With the growing popularity of RAG, the capabilities of embedding models are
gaining increasing attention. Embedding models are primarily trained through
contrastive loss learning, with negative examples being a key component.
Previous work has proposed various hard negative mining strategies, but these
strategies are ... | 2024-08-28T11:18:06Z | null | null | null | null | null | null | null | null | null | null |
2408.15787 | Interactive Agents: Simulating Counselor-Client Psychological Counseling
via Role-Playing LLM-to-LLM Interactions | ['Huachuan Qiu', 'Zhenzhong Lan'] | ['cs.CL', 'cs.IR'] | Virtual counselors powered by large language models (LLMs) aim to create
interactive support systems that effectively assist clients struggling with
mental health challenges. To replicate counselor-client conversations,
researchers have built an online mental health platform that allows
professional counselors to provi... | 2024-08-28T13:29:59Z | null | null | null | null | null | null | null | null | null | null |
2408.15966 | More Text, Less Point: Towards 3D Data-Efficient Point-Language
Understanding | ['Yuan Tang', 'Xu Han', 'Xianzhi Li', 'Qiao Yu', 'Jinfeng Xu', 'Yixue Hao', 'Long Hu', 'Min Chen'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Enabling Large Language Models (LLMs) to comprehend the 3D physical world
remains a significant challenge. Due to the lack of large-scale 3D-text pair
datasets, the success of LLMs has yet to be replicated in 3D understanding. In
this paper, we rethink this issue and propose a new task: 3D Data-Efficient
Point-Language... | 2024-08-28T17:38:44Z | null | null | More Text, Less Point: Towards 3D Data-Efficient Point-Language Understanding | ['Yuan Tang', 'Xu Han', 'Xianzhi Li', 'Qiao Yu', 'Jinfeng Xu', 'Yixue Hao', 'Long Hu', 'Min Chen'] | 2024 | AAAI Conference on Artificial Intelligence | 3 | 35 | ['Computer Science'] |
2408.15998 | Eagle: Exploring The Design Space for Multimodal LLMs with Mixture of
Encoders | ['Min Shi', 'Fuxiao Liu', 'Shihao Wang', 'Shijia Liao', 'Subhashree Radhakrishnan', 'Yilin Zhao', 'De-An Huang', 'Hongxu Yin', 'Karan Sapra', 'Yaser Yacoob', 'Humphrey Shi', 'Bryan Catanzaro', 'Andrew Tao', 'Jan Kautz', 'Zhiding Yu', 'Guilin Liu'] | ['cs.CV', 'cs.AI', 'cs.LG', 'cs.RO'] | The ability to accurately interpret complex visual information is a crucial
topic of multimodal large language models (MLLMs). Recent work indicates that
enhanced visual perception significantly reduces hallucinations and improves
performance on resolution-sensitive tasks, such as optical character
recognition and docu... | 2024-08-28T17:59:31Z | Github: https://github.com/NVlabs/Eagle, HuggingFace:
https://huggingface.co/NVEagle | null | null | null | null | null | null | null | null | null |
2408.16123 | ChartEye: A Deep Learning Framework for Chart Information Extraction | ['Osama Mustafa', 'Muhammad Khizer Ali', 'Momina Moetesum', 'Imran Siddiqi'] | ['cs.CV', 'cs.AI', 'cs.LG'] | The widespread use of charts and infographics as a means of data
visualization in various domains has inspired recent research in automated
chart understanding. However, information extraction from chart images is a
complex multitasked process due to style variations and, as a consequence, it
is challenging to design a... | 2024-08-28T20:22:39Z | 8 Pages, and 11 Figures | null | 10.1109/DICTA60407.2023.00082 | null | null | null | null | null | null | null |
2408.16245 | Large-Scale Multi-omic Biosequence Transformers for Modeling
Protein-Nucleic Acid Interactions | ['Sully F. Chen', 'Robert J. Steele', 'Glen M. Hocky', 'Beakal Lemeneh', 'Shivanand P. Lad', 'Eric K. Oermann'] | ['cs.LG', 'q-bio.BM'] | The transformer architecture has revolutionized bioinformatics and driven
progress in the understanding and prediction of the properties of biomolecules.
To date, most biosequence transformers have been trained on single-omic
data-either proteins or nucleic acids and have seen incredible success in
downstream tasks in ... | 2024-08-29T03:56:40Z | 41 pages, 5 figures | null | null | Large-Scale Multi-omic Biosequence Transformers for Modeling Protein-Nucleic Acid Interactions | ['Sully F. Chen', 'Robert J. Steele', 'Glen M. Hocky', 'Beakal Lemeneh', 'S. Lad', 'E. Oermann'] | 2024 | arXiv.org | 0 | 100 | ['Medicine', 'Computer Science', 'Biology'] |
2408.16293 | Physics of Language Models: Part 2.2, How to Learn From Mistakes on
Grade-School Math Problems | ['Tian Ye', 'Zicheng Xu', 'Yuanzhi Li', 'Zeyuan Allen-Zhu'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Language models have demonstrated remarkable performance in solving reasoning
tasks; however, even the strongest models still occasionally make reasoning
mistakes. Recently, there has been active research aimed at improving reasoning
accuracy, particularly by using pretrained language models to "self-correct"
their mis... | 2024-08-29T06:49:20Z | arXiv admin note: text overlap with arXiv:2407.20311 | null | null | null | null | null | null | null | null | null |
2408.16357 | Law of Vision Representation in MLLMs | ['Shijia Yang', 'Bohan Zhai', 'Quanzeng You', 'Jianbo Yuan', 'Hongxia Yang', 'Chenfeng Xu'] | ['cs.CV'] | We present the "Law of Vision Representation" in multimodal large language
models (MLLMs). It reveals a strong correlation between the combination of
cross-modal alignment, correspondence in vision representation, and MLLM
performance. We quantify the two factors using the cross-modal Alignment and
Correspondence score... | 2024-08-29T08:56:48Z | The code is available at
https://github.com/bronyayang/Law_of_Vision_Representation_in_MLLMs | null | null | null | null | null | null | null | null | null |
2408.16493 | Learning from Negative Samples in Generative Biomedical Entity Linking | ['Chanhwi Kim', 'Hyunjae Kim', 'Sihyeon Park', 'Jiwoo Lee', 'Mujeen Sung', 'Jaewoo Kang'] | ['cs.CL'] | Generative models have become widely used in biomedical entity linking
(BioEL) due to their excellent performance and efficient memory usage. However,
these models are usually trained only with positive samples, i.e., entities
that match the input mention's identifier, and do not explicitly learn from
hard negative sam... | 2024-08-29T12:44:01Z | ACL 2025 (Findings) | null | null | null | null | null | null | null | null | null |
2408.16500 | CogVLM2: Visual Language Models for Image and Video Understanding | ['Wenyi Hong', 'Weihan Wang', 'Ming Ding', 'Wenmeng Yu', 'Qingsong Lv', 'Yan Wang', 'Yean Cheng', 'Shiyu Huang', 'Junhui Ji', 'Zhao Xue', 'Lei Zhao', 'Zhuoyi Yang', 'Xiaotao Gu', 'Xiaohan Zhang', 'Guanyu Feng', 'Da Yin', 'Zihan Wang', 'Ji Qi', 'Xixuan Song', 'Peng Zhang', 'Debing Liu', 'Bin Xu', 'Juanzi Li', 'Yuxiao Do... | ['cs.CV'] | Beginning with VisualGLM and CogVLM, we are continuously exploring VLMs in
pursuit of enhanced vision-language fusion, efficient higher-resolution
architecture, and broader modalities and applications. Here we propose the
CogVLM2 family, a new generation of visual language models for image and video
understanding inclu... | 2024-08-29T12:59:12Z | null | null | null | CogVLM2: Visual Language Models for Image and Video Understanding | ['Wenyi Hong', 'Weihan Wang', 'Ming Ding', 'Wenmeng Yu', 'Qingsong Lv', 'Yan Wang', 'Yean Cheng', 'Shiyu Huang', 'Junhui Ji', 'Zhao Xue', 'Lei Zhao', 'Zhuoyi Yang', 'Xiaotao Gu', 'Xiaohan Zhang', 'Guanyu Feng', 'Da Yin', 'Zihan Wang', 'Ji Qi', 'Xixuan Song', 'Peng Zhang', 'De-Feng Liu', 'Bin Xu', 'Juanzi Li', 'Yu-Chen ... | 2024 | arXiv.org | 121 | 76 | ['Computer Science'] |
2408.16532 | WavTokenizer: an Efficient Acoustic Discrete Codec Tokenizer for Audio
Language Modeling | ['Shengpeng Ji', 'Ziyue Jiang', 'Wen Wang', 'Yifu Chen', 'Minghui Fang', 'Jialong Zuo', 'Qian Yang', 'Xize Cheng', 'Zehan Wang', 'Ruiqi Li', 'Ziang Zhang', 'Xiaoda Yang', 'Rongjie Huang', 'Yidi Jiang', 'Qian Chen', 'Siqi Zheng', 'Zhou Zhao'] | ['eess.AS', 'cs.LG', 'cs.MM', 'cs.SD', 'eess.SP'] | Language models have been effectively applied to modeling natural signals,
such as images, video, speech, and audio. A crucial component of these models
is the codec tokenizer, which compresses high-dimensional natural signals into
lower-dimensional discrete tokens. In this paper, we introduce WavTokenizer,
which offer... | 2024-08-29T13:43:36Z | Accepted by ICLR 2025 | null | null | null | null | null | null | null | null | null |
2408.16589 | CrisperWhisper: Accurate Timestamps on Verbatim Speech Transcriptions | ['Laurin Wagner', 'Bernhard Thallinger', 'Mario Zusag'] | ['cs.LG'] | We demonstrate that carefully adjusting the tokenizer of the Whisper speech
recognition model significantly improves the precision of word-level timestamps
when applying dynamic time warping to the decoder's cross-attention scores. We
fine-tune the model to produce more verbatim speech transcriptions and employ
several... | 2024-08-29T14:52:42Z | Published at INTERSPEECH2024 | null | null | CrisperWhisper: Accurate Timestamps on Verbatim Speech Transcriptions | ['Laurin Wagner', 'Bernhard Thallinger', 'M. Zusag'] | 2024 | Interspeech | 14 | 35 | ['Computer Science'] |
2408.16725 | Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming | ['Zhifei Xie', 'Changqiao Wu'] | ['cs.AI', 'cs.CL', 'cs.HC', 'cs.LG', 'cs.SD', 'eess.AS'] | Recent advances in language models have achieved significant progress.
GPT-4o, as a new milestone, has enabled real-time conversations with humans,
demonstrating near-human natural fluency. Such human-computer interaction
necessitates models with the capability to perform reasoning directly with the
audio modality and ... | 2024-08-29T17:18:53Z | Technical report, work in progress. Demo and code:
https://github.com/gpt-omni/mini-omni | null | null | null | null | null | null | null | null | null |
2408.16766 | CSGO: Content-Style Composition in Text-to-Image Generation | ['Peng Xing', 'Haofan Wang', 'Yanpeng Sun', 'Qixun Wang', 'Xu Bai', 'Hao Ai', 'Renyuan Huang', 'Zechao Li'] | ['cs.CV'] | The diffusion model has shown exceptional capabilities in controlled image
generation, which has further fueled interest in image style transfer. Existing
works mainly focus on training free-based methods (e.g., image inversion) due
to the scarcity of specific data. In this study, we present a data construction
pipelin... | 2024-08-29T17:59:30Z | null | null | null | CSGO: Content-Style Composition in Text-to-Image Generation | ['Peng Xing', 'Haofan Wang', 'Yanpeng Sun', 'Qixun Wang', 'Xu Bai', 'Hao Ai', 'Renyuan Huang', 'Zechao Li'] | 2024 | arXiv.org | 27 | 44 | ['Computer Science'] |
2408.17024 | InkubaLM: A small language model for low-resource African languages | ['Atnafu Lambebo Tonja', 'Bonaventure F. P. Dossou', 'Jessica Ojo', 'Jenalea Rajab', 'Fadel Thior', 'Eric Peter Wairagala', 'Anuoluwapo Aremu', 'Pelonomi Moiloa', 'Jade Abbott', 'Vukosi Marivate', 'Benjamin Rosman'] | ['cs.CL'] | High-resource language models often fall short in the African context, where
there is a critical need for models that are efficient, accessible, and locally
relevant, even amidst significant computing and data constraints. This paper
introduces InkubaLM, a small language model with 0.4 billion parameters, which
achieve... | 2024-08-30T05:42:31Z | null | null | null | InkubaLM: A small language model for low-resource African languages | ['A. Tonja', 'Bonaventure F. P. Dossou', 'Jessica Ojo', 'Jenalea Rajab', 'Fadel Thior', 'Eric Peter Wairagala', 'Aremu Anuoluwapo', 'Pelonomi Moiloa', 'Jade Abbott', 'V. Marivate', 'Benjamin Rosman'] | 2024 | arXiv.org | 11 | 38 | ['Computer Science'] |
2408.17081 | Stochastic Layer-Wise Shuffle for Improving Vision Mamba Training | ['Zizheng Huang', 'Haoxing Chen', 'Jiaqi Li', 'Jun Lan', 'Huijia Zhu', 'Weiqiang Wang', 'Limin Wang'] | ['cs.CV'] | Recent Vision Mamba (Vim) models exhibit nearly linear complexity in sequence
length, making them highly attractive for processing visual data. However, the
training methodologies and their potential are still not sufficiently explored.
In this paper, we investigate strategies for Vim and propose Stochastic
Layer-Wise ... | 2024-08-30T08:09:19Z | accpeted to ICML25 | Proceedings of the 42nd International Conference on Machine
Learning, 2025 | null | Stochastic Layer-Wise Shuffle for Improving Vision Mamba Training | ['Zizheng Huang', 'Haoxing Chen', 'Jiaqi Li', 'Jun Lan', 'Huijia Zhu', 'Weiqiang Wang', 'Limin Wang'] | 2024 | null | 2 | 101 | ['Computer Science'] |
2408.17175 | Codec Does Matter: Exploring the Semantic Shortcoming of Codec for Audio
Language Model | ['Zhen Ye', 'Peiwen Sun', 'Jiahe Lei', 'Hongzhan Lin', 'Xu Tan', 'Zheqi Dai', 'Qiuqiang Kong', 'Jianyi Chen', 'Jiahao Pan', 'Qifeng Liu', 'Yike Guo', 'Wei Xue'] | ['eess.AS', 'cs.AI', 'cs.CL', 'cs.SD'] | Recent advancements in audio generation have been significantly propelled by
the capabilities of Large Language Models (LLMs). The existing research on
audio LLM has primarily focused on enhancing the architecture and scale of
audio language models, as well as leveraging larger datasets, and generally,
acoustic codecs,... | 2024-08-30T10:24:07Z | null | null | null | null | null | null | null | null | null | null |
2408.17280 | Flexible and Effective Mixing of Large Language Models into a Mixture of
Domain Experts | ['Rhui Dih Lee', 'Laura Wynter', 'Raghu Kiran Ganti'] | ['cs.AI', 'cs.CL'] | We present a toolkit for creating low-cost Mixture-of-Domain-Experts (MOE)
from trained models. The toolkit can be used for creating a mixture from models
or from adapters. We perform extensive tests and offer guidance on defining the
architecture of the resulting MOE using the toolkit. A public repository is
available... | 2024-08-30T13:28:45Z | null | null | null | null | null | null | null | null | null | null |
2409.00063 | Urban Mobility Assessment Using LLMs | ['Prabin Bhandari', 'Antonios Anastasopoulos', 'Dieter Pfoser'] | ['cs.CY', 'cs.CL'] | Understanding urban mobility patterns and analyzing how people move around
cities helps improve the overall quality of life and supports the development
of more livable, efficient, and sustainable urban areas. A challenging aspect
of this work is the collection of mobility data by means of user tracking or
travel surve... | 2024-08-22T19:17:33Z | 13 pages, 10 Figures | null | null | Urban Mobility Assessment Using LLMs | ['Prabin Bhandari', 'Antonios Anastasopoulos', 'Dieter Pfoser'] | 2024 | SIGSPATIAL/GIS | 8 | 34 | ['Computer Science'] |
2409.00134 | MAPF-GPT: Imitation Learning for Multi-Agent Pathfinding at Scale | ['Anton Andreychuk', 'Konstantin Yakovlev', 'Aleksandr Panov', 'Alexey Skrynnik'] | ['cs.MA', 'cs.AI', 'cs.LG'] | Multi-agent pathfinding (MAPF) is a problem that generally requires finding
collision-free paths for multiple agents in a shared environment. Solving MAPF
optimally, even under restrictive assumptions, is NP-hard, yet efficient
solutions for this problem are critical for numerous applications, such as
automated warehou... | 2024-08-29T12:55:10Z | null | null | null | null | null | null | null | null | null | null |
2409.00286 | OnlySportsLM: Optimizing Sports-Domain Language Models with SOTA
Performance under Billion Parameters | ['Zexin Chen', 'Chengxi Li', 'Xiangyu Xie', 'Parijat Dube'] | ['cs.CL', 'cs.AI'] | This paper explores the potential of a small, domain-specific language model
trained exclusively on sports-related data. We investigate whether extensive
training data with specially designed small model structures can overcome model
size constraints. The study introduces the OnlySports collection, comprising
OnlySport... | 2024-08-30T22:39:35Z | 13 pages, 4 figures, 4 tables | null | null | OnlySportsLM: Optimizing Sports-Domain Language Models with SOTA Performance under Billion Parameters | ['Zexin Chen', 'Chengxi Li', 'Xiangyu Xie', 'Parijat Dube'] | 2024 | arXiv.org | 2 | 26 | ['Computer Science'] |
2409.00750 | MaskGCT: Zero-Shot Text-to-Speech with Masked Generative Codec
Transformer | ['Yuancheng Wang', 'Haoyue Zhan', 'Liwei Liu', 'Ruihong Zeng', 'Haotian Guo', 'Jiachen Zheng', 'Qiang Zhang', 'Xueyao Zhang', 'Shunsi Zhang', 'Zhizheng Wu'] | ['cs.SD', 'cs.AI', 'cs.LG', 'eess.AS'] | The recent large-scale text-to-speech (TTS) systems are usually grouped as
autoregressive and non-autoregressive systems. The autoregressive systems
implicitly model duration but exhibit certain deficiencies in robustness and
lack of duration controllability. Non-autoregressive systems require explicit
alignment inform... | 2024-09-01T15:26:30Z | null | null | null | MaskGCT: Zero-Shot Text-to-Speech with Masked Generative Codec Transformer | ['Yuancheng Wang', 'Haoyue Zhan', 'Liwei Liu', 'Ruihong Zeng', 'Haotian Guo', 'Jiachen Zheng', 'Qiang Zhang', 'Xueyao Zhang', 'Shunsi Zhang', 'Zhizheng Wu'] | 2024 | International Conference on Learning Representations | 61 | 76 | ['Computer Science', 'Engineering'] |
2409.00920 | ToolACE: Winning the Points of LLM Function Calling | ['Weiwen Liu', 'Xu Huang', 'Xingshan Zeng', 'Xinlong Hao', 'Shuai Yu', 'Dexun Li', 'Shuai Wang', 'Weinan Gan', 'Zhengying Liu', 'Yuanqing Yu', 'Zezhong Wang', 'Yuxian Wang', 'Wu Ning', 'Yutai Hou', 'Bin Wang', 'Chuhan Wu', 'Xinzhi Wang', 'Yong Liu', 'Yasheng Wang', 'Duyu Tang', 'Dandan Tu', 'Lifeng Shang', 'Xin Jiang',... | ['cs.LG', 'cs.AI', 'cs.CL'] | Function calling significantly extends the application boundary of large
language models, where high-quality and diverse training data is critical for
unlocking this capability. However, real function-calling data is quite
challenging to collect and annotate, while synthetic data generated by existing
pipelines tends t... | 2024-09-02T03:19:56Z | 21 pages, 22 figures | null | null | null | null | null | null | null | null | null |
2409.01071 | VideoLLaMB: Long-context Video Understanding with Recurrent Memory
Bridges | ['Yuxuan Wang', 'Cihang Xie', 'Yang Liu', 'Zilong Zheng'] | ['cs.CV', 'cs.CL'] | Recent advancements in large-scale video-language models have shown
significant potential for real-time planning and detailed interactions.
However, their high computational demands and the scarcity of annotated
datasets limit their practicality for academic researchers. In this work, we
introduce VideoLLaMB, a novel f... | 2024-09-02T08:52:58Z | null | null | null | null | null | null | null | null | null | null |
2409.01347 | Target-Driven Distillation: Consistency Distillation with Target
Timestep Selection and Decoupled Guidance | ['Cunzheng Wang', 'Ziyuan Guo', 'Yuxuan Duan', 'Huaxia Li', 'Nemo Chen', 'Xu Tang', 'Yao Hu'] | ['cs.CV'] | Consistency distillation methods have demonstrated significant success in
accelerating generative tasks of diffusion models. However, since previous
consistency distillation methods use simple and straightforward strategies in
selecting target timesteps, they usually struggle with blurs and detail losses
in generated i... | 2024-09-02T16:01:38Z | null | null | null | Target-Driven Distillation: Consistency Distillation with Target Timestep Selection and Decoupled Guidance | ['Cunzheng Wang', 'Ziyuan Guo', 'Yuxuan Duan', 'Huaxia Li', 'Nemo Chen', 'Xu Tang', 'Yao Hu'] | 2024 | AAAI Conference on Artificial Intelligence | 3 | 33 | ['Computer Science'] |
2409.01357 | Know When to Fuse: Investigating Non-English Hybrid Retrieval in the
Legal Domain | ['Antoine Louis', 'Gijs van Dijck', 'Gerasimos Spanakis'] | ['cs.CL', 'cs.IR'] | Hybrid search has emerged as an effective strategy to offset the limitations
of different matching paradigms, especially in out-of-domain contexts where
notable improvements in retrieval quality have been observed. However, existing
research predominantly focuses on a limited set of retrieval methods, evaluated
in pair... | 2024-09-02T16:19:13Z | Under review | null | null | null | null | null | null | null | null | null |
2409.01548 | VoxHakka: A Dialectally Diverse Multi-speaker Text-to-Speech System for
Taiwanese Hakka | ['Li-Wei Chen', 'Hung-Shin Lee', 'Chen-Chi Chang'] | ['cs.SD', 'cs.AI', 'cs.CL', 'eess.AS'] | This paper introduces VoxHakka, a text-to-speech (TTS) system designed for
Taiwanese Hakka, a critically under-resourced language spoken in Taiwan.
Leveraging the YourTTS framework, VoxHakka achieves high naturalness and
accuracy and low real-time factor in speech synthesis while supporting six
distinct Hakka dialects.... | 2024-09-03T02:37:34Z | Accepted to O-COCOSDA 2024 | null | null | null | null | null | null | null | null | null |
2409.01577 | EvoChart: A Benchmark and a Self-Training Approach Towards Real-World
Chart Understanding | ['Muye Huang', 'Han Lai', 'Xinyu Zhang', 'Wenjun Wu', 'Jie Ma', 'Lingling Zhang', 'Jun Liu'] | ['cs.CV'] | Chart understanding enables automated data analysis for humans, which
requires models to achieve highly accurate visual comprehension. While existing
Visual Language Models (VLMs) have shown progress in chart understanding, the
lack of high-quality training data and comprehensive evaluation benchmarks
hinders VLM chart... | 2024-09-03T03:23:00Z | This paper has been accepted at AAAI 2025 | null | null | null | null | null | null | null | null | null |
2409.01704 | General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model | ['Haoran Wei', 'Chenglong Liu', 'Jinyue Chen', 'Jia Wang', 'Lingyu Kong', 'Yanming Xu', 'Zheng Ge', 'Liang Zhao', 'Jianjian Sun', 'Yuang Peng', 'Chunrui Han', 'Xiangyu Zhang'] | ['cs.CV'] | Traditional OCR systems (OCR-1.0) are increasingly unable to meet people's
usage due to the growing demand for intelligent processing of man-made optical
characters. In this paper, we collectively refer to all artificial optical
signals (e.g., plain texts, math/molecular formulas, tables, charts, sheet
music, and even ... | 2024-09-03T08:41:31Z | null | null | null | General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model | ['Haoran Wei', 'Chenglong Liu', 'Jinyue Chen', 'Jia Wang', 'Lingyu Kong', 'Yanming Xu', 'Zheng Ge', 'Liang Zhao', 'Jian‐Yuan Sun', 'Yuang Peng', 'Chunrui Han', 'Xiangyu Zhang'] | 2024 | arXiv.org | 55 | 51 | ['Computer Science'] |
2409.01790 | Training on the Benchmark Is Not All You Need | ['Shiwen Ni', 'Xiangtao Kong', 'Chengming Li', 'Xiping Hu', 'Ruifeng Xu', 'Jia Zhu', 'Min Yang'] | ['cs.CL', 'cs.AI'] | The success of Large Language Models (LLMs) relies heavily on the huge amount
of pre-training data learned in the pre-training phase. The opacity of the
pre-training process and the training data causes the results of many benchmark
tests to become unreliable. If any model has been trained on a benchmark test
set, it c... | 2024-09-03T11:09:44Z | null | AAAI 2025 | null | null | null | null | null | null | null | null |
2409.02060 | OLMoE: Open Mixture-of-Experts Language Models | ['Niklas Muennighoff', 'Luca Soldaini', 'Dirk Groeneveld', 'Kyle Lo', 'Jacob Morrison', 'Sewon Min', 'Weijia Shi', 'Pete Walsh', 'Oyvind Tafjord', 'Nathan Lambert', 'Yuling Gu', 'Shane Arora', 'Akshita Bhagia', 'Dustin Schwenk', 'David Wadden', 'Alexander Wettig', 'Binyuan Hui', 'Tim Dettmers', 'Douwe Kiela', 'Ali Farh... | ['cs.CL', 'cs.AI', 'cs.LG'] | We introduce OLMoE, a fully open, state-of-the-art language model leveraging
sparse Mixture-of-Experts (MoE). OLMoE-1B-7B has 7 billion (B) parameters but
uses only 1B per input token. We pretrain it on 5 trillion tokens and further
adapt it to create OLMoE-1B-7B-Instruct. Our models outperform all available
models wit... | 2024-09-03T17:08:20Z | 63 pages (24 main), 36 figures, 17 tables | null | null | null | null | null | null | null | null | null |
2409.02078 | Political DEBATE: Efficient Zero-shot and Few-shot Classifiers for
Political Text | ['Michael Burnham', 'Kayla Kahn', 'Ryan Yank Wang', 'Rachel X. Peng'] | ['cs.CL'] | Social scientists quickly adopted large language models due to their ability
to annotate documents without supervised training, an ability known as
zero-shot learning. However, due to their compute demands, cost, and often
proprietary nature, these models are often at odds with replication and open
science standards. T... | 2024-09-03T17:26:17Z | 26 pages, 5 figures | null | null | null | null | null | null | null | null | null |
2409.02095 | DepthCrafter: Generating Consistent Long Depth Sequences for Open-world
Videos | ['Wenbo Hu', 'Xiangjun Gao', 'Xiaoyu Li', 'Sijie Zhao', 'Xiaodong Cun', 'Yong Zhang', 'Long Quan', 'Ying Shan'] | ['cs.CV', 'cs.AI', 'cs.GR'] | Estimating video depth in open-world scenarios is challenging due to the
diversity of videos in appearance, content motion, camera movement, and length.
We present DepthCrafter, an innovative method for generating temporally
consistent long depth sequences with intricate details for open-world videos,
without requiring... | 2024-09-03T17:52:03Z | Project webpage: https://depthcrafter.github.io | null | null | DepthCrafter: Generating Consistent Long Depth Sequences for Open-world Videos | ['Wenbo Hu', 'Xiangjun Gao', 'Xiaoyu Li', 'Sijie Zhao', 'Xiaodong Cun', 'Yong Zhang', 'Long Quan', 'Ying Shan'] | 2024 | Computer Vision and Pattern Recognition | 75 | 78 | ['Computer Science'] |
2409.02097 | LinFusion: 1 GPU, 1 Minute, 16K Image | ['Songhua Liu', 'Weihao Yu', 'Zhenxiong Tan', 'Xinchao Wang'] | ['cs.CV', 'cs.LG'] | Modern diffusion models, particularly those utilizing a Transformer-based
UNet for denoising, rely heavily on self-attention operations to manage complex
spatial relationships, thus achieving impressive generation performance.
However, this existing paradigm faces significant challenges in generating
high-resolution vi... | 2024-09-03T17:54:39Z | Work in Progress. Codes are available at
https://github.com/Huage001/LinFusion | null | null | LinFusion: 1 GPU, 1 Minute, 16K Image | ['Songhua Liu', 'Weihao Yu', 'Zhenxiong Tan', 'Xinchao Wang'] | 2024 | arXiv.org | 16 | 75 | ['Computer Science'] |
2409.02421 | MusicMamba: A Dual-Feature Modeling Approach for Generating Chinese
Traditional Music with Modal Precision | ['Jiatao Chen', 'Tianming Xie', 'Xing Tang', 'Jing Wang', 'Wenjing Dong', 'Bing Shi'] | ['cs.SD', 'eess.AS'] | In recent years, deep learning has significantly advanced the MIDI domain,
solidifying music generation as a key application of artificial intelligence.
However, existing research primarily focuses on Western music and encounters
challenges in generating melodies for Chinese traditional music, especially in
capturing m... | 2024-09-04T04:00:22Z | Accepted by ICASSP 2025 | null | null | MusicMamba: A Dual-Feature Modeling Approach for Generating Chinese Traditional Music with Modal Precision | ['Jiatao Chen', 'Tianming Xie', 'Xing Tang', 'Jing Wang', 'Wenjing Dong', 'Bing Shi'] | 2024 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 0 | 27 | ['Computer Science', 'Engineering'] |
2409.02483 | TASAR: Transfer-based Attack on Skeletal Action Recognition | ['Yunfeng Diao', 'Baiqi Wu', 'Ruixuan Zhang', 'Ajian Liu', 'Xiaoshuai Hao', 'Xingxing Wei', 'Meng Wang', 'He Wang'] | ['cs.CV', 'cs.AI'] | Skeletal sequence data, as a widely employed representation of human actions,
are crucial in Human Activity Recognition (HAR). Recently, adversarial attacks
have been proposed in this area, which exposes potential security concerns, and
more importantly provides a good tool for model robustness test. Within this
resear... | 2024-09-04T07:20:01Z | Accepted in ICLR 2025 | null | null | null | null | null | null | null | null | null |
2409.02685 | RouterRetriever: Routing over a Mixture of Expert Embedding Models | ['Hyunji Lee', 'Luca Soldaini', 'Arman Cohan', 'Minjoon Seo', 'Kyle Lo'] | ['cs.IR', 'cs.AI'] | Information retrieval methods often rely on a single embedding model trained
on large, general-domain datasets like MSMARCO. While this approach can produce
a retriever with reasonable overall performance, they often underperform models
trained on domain-specific data when testing on their respective domains. Prior
wor... | 2024-09-04T13:16:55Z | published at AAAI 2025 | null | null | RouterRetriever: Routing over a Mixture of Expert Embedding Models | ['Hyunji Lee', 'Luca Soldaini', 'Arman Cohan', 'Minjoon Seo', 'Kyle Lo'] | 2024 | AAAI Conference on Artificial Intelligence | 1 | 41 | ['Computer Science'] |
2409.02690 | Detecting Calls to Action in Multimodal Content: Analysis of the 2021
German Federal Election Campaign on Instagram | ['Michael Achmann-Denkler', 'Jakob Fehle', 'Mario Haim', 'Christian Wolff'] | ['cs.SI', 'cs.CL'] | This study investigates the automated classification of Calls to Action
(CTAs) within the 2021 German Instagram election campaign to advance the
understanding of mobilization in social media contexts. We analyzed over 2,208
Instagram stories and 712 posts using fine-tuned BERT models and OpenAI's GPT-4
models. The fine... | 2024-09-04T13:23:50Z | Accepted Archival Paper for the CPSS Workshop at KONVENS 2024. Camera
Ready Submission | null | null | Detecting Calls to Action in Multimodal Content: Analysis of the 2021 German Federal Election Campaign on Instagram | ['Michael Achmann-Denkler', 'Jakob Fehle', 'Mario Haim', 'Christian Wolff'] | 2024 | ACM Cyber-Physical System Security Workshop | 2 | 40 | ['Computer Science'] |