Dataset schema (per-column type, with the viewer's min/max summaries):

  arxiv_id           float64      values 1.5k to 2.51k
  title              string       lengths 9 to 178
  authors            string       lengths 2 to 22.8k
  categories         string       lengths 4 to 146
  summary            string       lengths 103 to 1.92k
  published          date string  2015-02-06 10:44:00 to 2025-07-10 17:59:58
  comments           string       lengths 2 to 417
  journal_ref        string       321 distinct values
  doi                string       398 distinct values
  ss_title           string       lengths 8 to 159
  ss_authors         string       lengths 11 to 8.38k
  ss_year            float64      values 2.02k to 2.03k
  ss_venue           string       281 distinct values
  ss_citationCount   float64      values 0 to 134k
  ss_referenceCount  float64      values 0 to 429
  ss_fieldsOfStudy   string       47 distinct values

Note that arxiv_id is stored as float64, so trailing zeros of arXiv identifiers are lost (e.g. 2412.1036 denotes arXiv:2412.10360).
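Rows in this schema need light normalization before use: the list-valued columns (authors, categories, ss_fieldsOfStudy) are stored as Python-repr strings, arxiv_id and ss_year are floats, and float storage drops trailing zeros from arXiv identifiers. A minimal sketch of such a normalizer (the helper name `normalize` and the sample row are my own, not part of the dataset):

```python
import ast

# Hypothetical raw row shaped like the schema above.
raw = {
    "arxiv_id": 2412.09871,
    "authors": "['Artidoro Pagnoni', 'Ram Pasunuru']",
    "categories": "['cs.CL']",
    "ss_year": 2024.0,
}

def normalize(row: dict) -> dict:
    out = dict(row)
    # arXiv IDs issued since 2015 have five digits after the dot, so a
    # fixed 5-decimal format recovers the canonical identifier string,
    # including trailing zeros that float storage dropped.
    if out.get("arxiv_id") is not None:
        out["arxiv_id"] = f"{out['arxiv_id']:.5f}"
    if out.get("ss_year") is not None:
        out["ss_year"] = int(out["ss_year"])
    # List-valued columns arrive as Python-repr strings.
    for key in ("authors", "categories", "ss_fieldsOfStudy"):
        if isinstance(out.get(key), str):
            out[key] = ast.literal_eval(out[key])
    return out
```

For example, `normalize({"arxiv_id": 2412.1036})["arxiv_id"]` yields the canonical string "2412.10360".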
arxiv_id: 2412.09871
title: Byte Latent Transformer: Patches Scale Better Than Tokens
authors: ['Artidoro Pagnoni', 'Ram Pasunuru', 'Pedro Rodriguez', 'John Nguyen', 'Benjamin Muller', 'Margaret Li', 'Chunting Zhou', 'Lili Yu', 'Jason Weston', 'Luke Zettlemoyer', 'Gargi Ghosh', 'Mike Lewis', 'Ari Holtzman', 'Srinivasan Iyer']
categories: ['cs.CL']
summary: We introduce the Byte Latent Transformer (BLT), a new byte-level LLM architecture that, for the first time, matches tokenization-based LLM performance at scale with significant improvements in inference efficiency and robustness. BLT encodes bytes into dynamically sized patches, which serve as the primary units of comp...
published: 2024-12-13T05:33:32Z
comments: null
journal_ref: null
doi: null
ss_title: Byte Latent Transformer: Patches Scale Better Than Tokens
ss_authors: ['Artidoro Pagnoni', 'Ramakanth Pasunuru', 'Pedro Rodriguez', 'John Nguyen', 'Benjamin Muller', 'Margaret Li', 'Chunting Zhou', 'Lili Yu', 'Jason Weston', 'Luke S. Zettlemoyer', 'Gargi Ghosh', 'Mike Lewis', 'Ari Holtzman', 'Srinivasan Iyer']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 30
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.09951
title: WiseAD: Knowledge Augmented End-to-End Autonomous Driving with Vision-Language Model
authors: ['Songyan Zhang', 'Wenhui Huang', 'Zihui Gao', 'Hao Chen', 'Chen Lv']
categories: ['cs.CV']
summary: The emergence of general human knowledge and impressive logical reasoning capacity in rapidly progressed vision-language models (VLMs) have driven increasing interest in applying VLMs to high-level autonomous driving tasks, such as scene understanding and decision-making. However, an in-depth study on the relationship ...
published: 2024-12-13T08:14:24Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.09957
title: Romanized to Native Malayalam Script Transliteration Using an Encoder-Decoder Framework
authors: ['Bajiyo Baiju', 'Kavya Manohar', 'Leena G Pillai', 'Elizabeth Sherly']
categories: ['cs.CL']
summary: In this work, we present the development of a reverse transliteration model to convert romanized Malayalam to native script using an encoder-decoder framework built with attention-based bidirectional Long Short Term Memory (Bi-LSTM) architecture. To train the model, we have used curated and combined collection of 4.3 m...
published: 2024-12-13T08:33:26Z
comments: 5 pages
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.10028
title: Mr. DETR++: Instructive Multi-Route Training for Detection Transformers with Mixture-of-Experts
authors: ['Chang-Bin Zhang', 'Yujie Zhong', 'Kai Han']
categories: ['cs.CV']
summary: Existing methods enhance the training of detection transformers by incorporating an auxiliary one-to-many assignment. In this work, we treat the model as a multi-task framework, simultaneously performing one-to-one and one-to-many predictions. We investigate the roles of each component in the transformer decoder across...
published: 2024-12-13T10:39:27Z
comments: Under review. Extended version of our CVPR 2025 paper, see arXiv:2412.10028v3
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.10117
title: CosyVoice 2: Scalable Streaming Speech Synthesis with Large Language Models
authors: ['Zhihao Du', 'Yuxuan Wang', 'Qian Chen', 'Xian Shi', 'Xiang Lv', 'Tianyu Zhao', 'Zhifu Gao', 'Yexin Yang', 'Changfeng Gao', 'Hui Wang', 'Fan Yu', 'Huadai Liu', 'Zhengyan Sheng', 'Yue Gu', 'Chong Deng', 'Wen Wang', 'Shiliang Zhang', 'Zhijie Yan', 'Jingren Zhou']
categories: ['cs.SD', 'cs.AI', 'cs.LG', 'eess.AS']
summary: In our previous work, we introduced CosyVoice, a multilingual speech synthesis model based on supervised discrete speech tokens. By employing progressive semantic decoding with two popular generative models, language models (LMs) and Flow Matching, CosyVoice demonstrated high prosody naturalness, content consistency, a...
published: 2024-12-13T12:59:39Z
comments: Tech report, work in progress
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.10151
title: VLR-Bench: Multilingual Benchmark Dataset for Vision-Language Retrieval Augmented Generation
authors: ['Hyeonseok Lim', 'Dongjae Shin', 'Seohyun Song', 'Inho Won', 'Minjun Kim', 'Junghun Yuk', 'Haneol Jang', 'KyungTae Lim']
categories: ['cs.CV', 'cs.AI', 'cs.CL']
summary: We propose the VLR-Bench, a visual question answering (VQA) benchmark for evaluating vision language models (VLMs) based on retrieval augmented generation (RAG). Unlike existing evaluation datasets for external knowledge-based VQA, the proposed VLR-Bench includes five input passages. This allows testing of the ability ...
published: 2024-12-13T14:11:26Z
comments: The 31st International Conference on Computational Linguistics (COLING 2025), 19 pages
journal_ref: null
doi: null
ss_title: VLR-Bench: Multilingual Benchmark Dataset for Vision-Language Retrieval Augmented Generation
ss_authors: ['HyeonSeok Lim', 'Dongjae Shin', 'Seohyun Song', 'Inho Won', 'Minjun Kim', 'Junghun Yuk', 'Haneol Jang', 'KyungTae Lim']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 1
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.10193
title: Simple Guidance Mechanisms for Discrete Diffusion Models
authors: ['Yair Schiff', 'Subham Sekhar Sahoo', 'Hao Phung', 'Guanghan Wang', 'Sam Boshar', 'Hugo Dalla-torre', 'Bernardo P. de Almeida', 'Alexander Rush', 'Thomas Pierrot', 'Volodymyr Kuleshov']
categories: ['cs.LG']
summary: Diffusion models for continuous data gained widespread adoption owing to their high quality generation and control mechanisms. However, controllable diffusion on discrete data faces challenges given that continuous guidance methods do not directly apply to discrete diffusion. Here, we provide a straightforward derivati...
published: 2024-12-13T15:08:30Z
comments: ICLR 2025; Code to reproduce our experiments is available here: https://github.com/kuleshov-group/discrete-diffusion-guidance
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.10255
title: AniSora: Exploring the Frontiers of Animation Video Generation in the Sora Era
authors: ['Yudong Jiang', 'Baohan Xu', 'Siqian Yang', 'Mingyu Yin', 'Jing Liu', 'Chao Xu', 'Siqi Wang', 'Yidi Wu', 'Bingwen Zhu', 'Xinwen Zhang', 'Xingyu Zheng', 'Jixuan Xu', 'Yue Zhang', 'Jinlong Hou', 'Huyang Sun']
categories: ['cs.GR', 'cs.AI']
summary: Animation has gained significant interest in the recent film and TV industry. Despite the success of advanced video generation models like Sora, Kling, and CogVideoX in generating natural videos, they lack the same effectiveness in handling animation videos. Evaluating animation video generation is also a great challen...
published: 2024-12-13T16:24:58Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.10302
title: DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding
authors: ['Zhiyu Wu', 'Xiaokang Chen', 'Zizheng Pan', 'Xingchao Liu', 'Wen Liu', 'Damai Dai', 'Huazuo Gao', 'Yiyang Ma', 'Chengyue Wu', 'Bingxuan Wang', 'Zhenda Xie', 'Yu Wu', 'Kai Hu', 'Jiawei Wang', 'Yaofeng Sun', 'Yukun Li', 'Yishi Piao', 'Kang Guan', 'Aixin Liu', 'Xin Xie', 'Yuxiang You', 'Kai Dong', 'Xingkai Yu', 'Haowei Z...
categories: ['cs.CV', 'cs.AI', 'cs.CL']
summary: We present DeepSeek-VL2, an advanced series of large Mixture-of-Experts (MoE) Vision-Language Models that significantly improves upon its predecessor, DeepSeek-VL, through two key major upgrades. For the vision component, we incorporate a dynamic tiling vision encoding strategy designed for processing high-resolution i...
published: 2024-12-13T17:37:48Z
comments: null
journal_ref: null
doi: null
ss_title: DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding
ss_authors: ['Zhiyu Wu', 'Xi-aokang Chen', 'Zizheng Pan', 'Xingchao Liu', 'Wen Liu', 'Damai Dai', 'Huazuo Gao', 'Yiyang Ma', 'Chengyue Wu', 'Bing-Li Wang', 'Zhenda Xie', 'Yu Wu', 'Kai Hu', 'Jiawei Wang', 'Yaofeng Sun', 'Yukun Li', 'Yishi Piao', 'Kang Guan', 'A. Liu', 'Xin Xie', 'Yu-mei You', 'Kaihong Dong', 'Xingkai Yu', 'Haowei Z...
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 161
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.10316
title: BrushEdit: All-In-One Image Inpainting and Editing
authors: ['Yaowei Li', 'Yuxuan Bian', 'Xuan Ju', 'Zhaoyang Zhang', 'Junhao Zhuang', 'Ying Shan', 'Yuexian Zou', 'Qiang Xu']
categories: ['cs.CV', 'cs.AI']
summary: Image editing has advanced significantly with the development of diffusion models using both inversion-based and instruction-based methods. However, current inversion-based approaches struggle with big modifications (e.g., adding or removing objects) due to the structured nature of inversion noise, which hinders substa...
published: 2024-12-13T17:58:06Z
comments: WebPage available at https://liyaowei-stu.github.io/project/BrushEdit/
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.10337
title: Generative AI in Medicine
authors: ['Divya Shanmugam', 'Monica Agrawal', 'Rajiv Movva', 'Irene Y. Chen', 'Marzyeh Ghassemi', 'Maia Jacobs', 'Emma Pierson']
categories: ['cs.LG', 'cs.AI', 'cs.CY', 'cs.HC']
summary: The increased capabilities of generative AI have dramatically expanded its possible use cases in medicine. We provide a comprehensive overview of generative AI use cases for clinicians, patients, clinical trial organizers, researchers, and trainees. We then discuss the many challenges -- including maintaining privacy a...
published: 2024-12-13T18:32:21Z
comments: To appear in the Annual Review of Biomedical Data Science, August 2025
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.10345
title: TraceVLA: Visual Trace Prompting Enhances Spatial-Temporal Awareness for Generalist Robotic Policies
authors: ['Ruijie Zheng', 'Yongyuan Liang', 'Shuaiyi Huang', 'Jianfeng Gao', 'Hal Daumé III', 'Andrey Kolobov', 'Furong Huang', 'Jianwei Yang']
categories: ['cs.RO', 'cs.AI']
summary: Although large vision-language-action (VLA) models pretrained on extensive robot datasets offer promising generalist policies for robotic learning, they still struggle with spatial-temporal dynamics in interactive robotics, making them less effective in handling complex tasks, such as manipulation. In this work, we int...
published: 2024-12-13T18:40:51Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.10360
title: Apollo: An Exploration of Video Understanding in Large Multimodal Models
authors: ['Orr Zohar', 'Xiaohan Wang', 'Yann Dubois', 'Nikhil Mehta', 'Tong Xiao', 'Philippe Hansen-Estruch', 'Licheng Yu', 'Xiaofang Wang', 'Felix Juefei-Xu', 'Ning Zhang', 'Serena Yeung-Levy', 'Xide Xia']
categories: ['cs.CV', 'cs.AI']
summary: Despite the rapid integration of video perception capabilities into Large Multimodal Models (LMMs), the underlying mechanisms driving their video understanding remain poorly understood. Consequently, many design decisions in this domain are made without proper justification or analysis. The high computational cost of t...
published: 2024-12-13T18:53:24Z
comments: https://apollo-lmms.github.io
journal_ref: null
doi: null
ss_title: Apollo: An Exploration of Video Understanding in Large Multimodal Models
ss_authors: ['Orr Zohar', 'Xiaohan Wang', 'Yann Dubois', 'Nikhil Mehta', 'Tong Xiao', 'Philippe Hansen-Estruch', 'Licheng Yu', 'Xiaofang Wang', 'Felix Juefei-Xu', 'Ning Zhang', 'S. Yeung-Levy', 'Xide Xia']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 28
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.10893
title: BgGPT 1.0: Extending English-centric LLMs to other languages
authors: ['Anton Alexandrov', 'Veselin Raychev', 'Dimitar I. Dimitrov', 'Ce Zhang', 'Martin Vechev', 'Kristina Toutanova']
categories: ['cs.CL', 'cs.AI', 'cs.LG']
summary: We present BgGPT-Gemma-2-27B-Instruct and BgGPT-Gemma-2-9B-Instruct: continually pretrained and fine-tuned versions of Google's Gemma-2 models, specifically optimized for Bulgarian language understanding and generation. Leveraging Gemma-2's multilingual capabilities and over 100 billion tokens of Bulgarian and English ...
published: 2024-12-14T16:49:52Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.11376
title: ChatTime: A Unified Multimodal Time Series Foundation Model Bridging Numerical and Textual Data
authors: ['Chengsen Wang', 'Qi Qi', 'Jingyu Wang', 'Haifeng Sun', 'Zirui Zhuang', 'Jinming Wu', 'Lei Zhang', 'Jianxin Liao']
categories: ['cs.CL', 'cs.LG']
summary: Human experts typically integrate numerical and textual multimodal information to analyze time series. However, most traditional deep learning predictors rely solely on unimodal numerical data, using a fixed-length window for training and prediction on a single dataset, and cannot adapt to different scenarios. The powe...
published: 2024-12-16T02:04:06Z
comments: Accepted by AAAI 2025
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.11439
title: Bayesian Flow Is All You Need to Sample Out-of-Distribution Chemical Spaces
authors: ['Nianze Tao']
categories: ['cs.LG', 'cs.AI', 'physics.chem-ph']
summary: Generating novel molecules with higher properties than the training space, namely the out-of-distribution generation, is important for ${de~novo}$ drug design. However, it is not easy for distribution learning-based models, for example diffusion models, to solve this challenge as these methods are designed to fit the d...
published: 2024-12-16T04:43:54Z
comments: 27 pages, 10 figures, 8 tables
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.11475
title: OmniVLM: A Token-Compressed, Sub-Billion-Parameter Vision-Language Model for Efficient On-Device Inference
authors: ['Wei Chen', 'Zhiyuan Li', 'Shuo Xin']
categories: ['cs.CV']
summary: We present OmniVLM, a sub-billion-parameter vision-language model for efficient on-device inference. OmniVLM introduces a token compression mechanism that reduces visual token sequence length from 729 to 81 tokens, significantly reducing computational overhead while preserving visual-semantic fidelity. Through a multi-...
published: 2024-12-16T06:38:00Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.11538
title: MERaLiON-SpeechEncoder: Towards a Speech Foundation Model for Singapore and Beyond
authors: ['Muhammad Huzaifah', 'Geyu Lin', 'Tianchi Liu', 'Hardik B. Sailor', 'Kye Min Tan', 'Tarun K. Vangani', 'Qiongqiong Wang', 'Jeremy H. M. Wong', 'Nancy F. Chen', 'Ai Ti Aw']
categories: ['cs.CL', 'cs.AI', 'eess.AS']
summary: This technical report describes the MERaLiON-SpeechEncoder, a foundation model designed to support a wide range of downstream speech applications. Developed as part of Singapore's National Multimodal Large Language Model Programme, the MERaLiON-SpeechEncoder is tailored to address the speech processing needs in Singapo...
published: 2024-12-16T08:15:19Z
comments: null
journal_ref: null
doi: null
ss_title: MERaLiON-SpeechEncoder: Towards a Speech Foundation Model for Singapore and Beyond
ss_authors: ['M. Huzaifah', 'Tianchi Liu', 'Hardik B. Sailor', 'Kye Min Tan', 'T. K. Vangani', 'Qiongqiong Wang', 'Jeremy H. M. Wong', 'Nancy F. Chen', 'AiTi Aw']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 2
ss_referenceCount: 36
ss_fieldsOfStudy: ['Computer Science', 'Engineering']

arxiv_id: 2412.11618
title: EvoLlama: Enhancing LLMs' Understanding of Proteins via Multimodal Structure and Sequence Representations
authors: ['Nuowei Liu', 'Changzhi Sun', 'Tao Ji', 'Junfeng Tian', 'Jianxin Tang', 'Yuanbin Wu', 'Man Lan']
categories: ['cs.LG', 'cs.AI']
summary: Current Large Language Models (LLMs) for understanding proteins primarily treats amino acid sequences as a text modality. Meanwhile, Protein Language Models (PLMs), such as ESM-2, have learned massive sequential evolutionary knowledge from the universe of natural protein sequences. Furthermore, structure-based encoders...
published: 2024-12-16T10:01:33Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.11699
title: CoinMath: Harnessing the Power of Coding Instruction for Math LLMs
authors: ['Chengwei Wei', 'Bin Wang', 'Jung-jae Kim', 'Guimei Liu', 'Nancy F. Chen']
categories: ['cs.CL']
summary: Large Language Models (LLMs) have shown strong performance in solving mathematical problems, with code-based solutions proving particularly effective. However, the best practice to leverage coding instruction data to enhance mathematical reasoning remains underexplored. This study investigates three key questions: (1) ...
published: 2024-12-16T12:21:11Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.11704
title: ElChat: Adapting Chat Language Models Using Only Target Unlabeled Language Data
authors: ['Atsuki Yamaguchi', 'Terufumi Morishita', 'Aline Villavicencio', 'Nikolaos Aletras']
categories: ['cs.CL', 'cs.AI']
summary: Vocabulary expansion (VE) is the de-facto approach to language adaptation of large language models (LLMs) by adding new tokens and continuing pre-training on target data. While this is effective for base models trained on unlabeled data, it poses challenges for chat models trained to follow instructions through labeled...
published: 2024-12-16T12:26:28Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.11755
title: Generative Inbetweening through Frame-wise Conditions-Driven Video Generation
authors: ['Tianyi Zhu', 'Dongwei Ren', 'Qilong Wang', 'Xiaohe Wu', 'Wangmeng Zuo']
categories: ['cs.CV']
summary: Generative inbetweening aims to generate intermediate frame sequences by utilizing two key frames as input. Although remarkable progress has been made in video generation models, generative inbetweening still faces challenges in maintaining temporal stability due to the ambiguous interpolation path between two key fram...
published: 2024-12-16T13:19:41Z
comments: null
journal_ref: null
doi: null
ss_title: Generative Inbetweening through Frame-wise Conditions-Driven Video Generation
ss_authors: ['Tianyi Zhu', 'Dongwei Ren', 'Qilong Wang', 'Xiaohe Wu', 'Wangmeng Zuo']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 3
ss_referenceCount: 62
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.11785
title: InterDyn: Controllable Interactive Dynamics with Video Diffusion Models
authors: ['Rick Akkerman', 'Haiwen Feng', 'Michael J. Black', 'Dimitrios Tzionas', 'Victoria Fernández Abrevaya']
categories: ['cs.CV']
summary: Predicting the dynamics of interacting objects is essential for both humans and intelligent systems. However, existing approaches are limited to simplified, toy settings and lack generalizability to complex, real-world environments. Recent advances in generative models have enabled the prediction of state transitions b...
published: 2024-12-16T13:57:02Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.11815
title: ColorFlow: Retrieval-Augmented Image Sequence Colorization
authors: ['Junhao Zhuang', 'Xuan Ju', 'Zhaoyang Zhang', 'Yong Liu', 'Shiyi Zhang', 'Chun Yuan', 'Ying Shan']
categories: ['cs.CV']
summary: Automatic black-and-white image sequence colorization while preserving character and object identity (ID) is a complex task with significant market demand, such as in cartoon or comic series colorization. Despite advancements in visual colorization using large-scale generative models like diffusion models, challenges w...
published: 2024-12-16T14:32:49Z
comments: Project Page: https://zhuang2002.github.io/ColorFlow/
journal_ref: null
doi: null
ss_title: ColorFlow: Retrieval-Augmented Image Sequence Colorization
ss_authors: ['Junhao Zhuang', 'Xu Ju', 'Zhaoyang Zhang', 'Yong Liu', 'Shiyi Zhang', 'Chun Yuan', 'Ying Shan']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 1
ss_referenceCount: 84
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.11834
title: Wonderful Matrices: Combining for a More Efficient and Effective Foundation Model Architecture
authors: ['Jingze Shi', 'Bingheng Wu']
categories: ['cs.LG', 'cs.AI', 'cs.CL']
summary: In order to make the foundation model more efficient and effective, our idea is combining sequence transformation and state transformation. First, we prove the availability of rotary position embedding in the state space duality algorithm, which reduces the perplexity of the hybrid quadratic causal self-attention and s...
published: 2024-12-16T14:56:28Z
comments: The code is open-sourced at https://github.com/LoserCheems/WonderfulMatrices
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.11863
title: GeoX: Geometric Problem Solving Through Unified Formalized Vision-Language Pre-training
authors: ['Renqiu Xia', 'Mingsheng Li', 'Hancheng Ye', 'Wenjie Wu', 'Hongbin Zhou', 'Jiakang Yuan', 'Tianshuo Peng', 'Xinyu Cai', 'Xiangchao Yan', 'Bin Wang', 'Conghui He', 'Botian Shi', 'Tao Chen', 'Junchi Yan', 'Bo Zhang']
categories: ['cs.CV', 'cs.CL']
summary: Despite their proficiency in general tasks, Multi-modal Large Language Models (MLLMs) struggle with automatic Geometry Problem Solving (GPS), which demands understanding diagrams, interpreting symbols, and performing complex reasoning. This limitation arises from their pre-training on natural images and texts, along wi...
published: 2024-12-16T15:20:03Z
comments: Our code is available at https://github.com/Alpha-Innovator/GeoX
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.11912
title: CharacterBench: Benchmarking Character Customization of Large Language Models
authors: ['Jinfeng Zhou', 'Yongkang Huang', 'Bosi Wen', 'Guanqun Bi', 'Yuxuan Chen', 'Pei Ke', 'Zhuang Chen', 'Xiyao Xiao', 'Libiao Peng', 'Kuntian Tang', 'Rongsheng Zhang', 'Le Zhang', 'Tangjie Lv', 'Zhipeng Hu', 'Hongning Wang', 'Minlie Huang']
categories: ['cs.CL']
summary: Character-based dialogue (aka role-playing) enables users to freely customize characters for interaction, which often relies on LLMs, raising the need to evaluate LLMs' character customization capability. However, existing benchmarks fail to ensure a robust evaluation as they often only involve a single character categ...
published: 2024-12-16T15:55:34Z
comments: AAAI 2025
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.11940
title: The Impact of Token Granularity on the Predictive Power of Language Model Surprisal
authors: ['Byung-Doh Oh', 'William Schuler']
categories: ['cs.CL']
summary: Word-by-word language model surprisal is often used to model the incremental processing of human readers, which raises questions about how various choices in language modeling influence its predictive power. One factor that has been overlooked in cognitive modeling is the granularity of subword tokens, which explicitly...
published: 2024-12-16T16:24:58Z
comments: ACL 2025; results with Natural Stories alignment issue corrected (commit 4700daa)
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.11948
title: OpenReviewer: A Specialized Large Language Model for Generating Critical Scientific Paper Reviews
authors: ['Maximilian Idahl', 'Zahra Ahmadi']
categories: ['cs.AI']
summary: We present OpenReviewer, an open-source system for generating high-quality peer reviews of machine learning and AI conference papers. At its core is Llama-OpenReviewer-8B, an 8B parameter language model specifically fine-tuned on 79,000 expert reviews from top conferences. Given a PDF paper submission and review templa...
published: 2024-12-16T16:31:00Z
comments: NAACL 2025 System Demonstrations Track (Camera-ready version) Demo: https://huggingface.co/spaces/maxidl/openreviewer Model: https://huggingface.co/maxidl/Llama-OpenReviewer-8B
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.11974
title: Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning
authors: ['Qi Sun', 'Pengfei Hong', 'Tej Deep Pala', 'Vernon Toh', 'U-Xuan Tan', 'Deepanway Ghosal', 'Soujanya Poria']
categories: ['cs.RO', 'cs.AI', 'cs.CL', 'cs.CV']
summary: Traditional reinforcement learning-based robotic control methods are often task-specific and fail to generalize across diverse environments or unseen objects and instructions. Visual Language Models (VLMs) demonstrate strong scene understanding and planning capabilities but lack the ability to generate actionable polic...
published: 2024-12-16T16:58:28Z
comments: https://github.com/declare-lab/Emma-X, https://huggingface.co/declare-lab/Emma-X
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.12032
title: FSFM: A Generalizable Face Security Foundation Model via Self-Supervised Facial Representation Learning
authors: ['Gaojian Wang', 'Feng Lin', 'Tong Wu', 'Zhenguang Liu', 'Zhongjie Ba', 'Kui Ren']
categories: ['cs.CV', 'cs.AI']
summary: This work asks: with abundant, unlabeled real faces, how to learn a robust and transferable facial representation that boosts various face security tasks with respect to generalization performance? We make the first attempt and propose a self-supervised pretraining framework to learn fundamental representations of real...
published: 2024-12-16T17:58:45Z
comments: 21 pages, 11 figures, project page: https://fsfm-3c.github.io
journal_ref: null
doi: null
ss_title: FSFM: A Generalizable Face Security Foundation Model via Self-Supervised Facial Representation Learning
ss_authors: ['Gaojian Wang', 'Feng Lin', 'Tong Wu', 'Zhenguang Liu', 'Zhongjie Ba', 'Kui Ren']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 0
ss_referenceCount: 136
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.12094
title: SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator
authors: ['Guoxuan Chen', 'Han Shi', 'Jiawei Li', 'Yihang Gao', 'Xiaozhe Ren', 'Yimeng Chen', 'Xin Jiang', 'Zhenguo Li', 'Weiyang Liu', 'Chao Huang']
categories: ['cs.CL', 'cs.AI', 'cs.LG']
summary: Large Language Models (LLMs) have exhibited exceptional performance across a spectrum of natural language processing tasks. However, their substantial sizes pose considerable challenges, particularly in computational demands and inference speed, due to their quadratic complexity. In this work, we have identified a key ...
published: 2024-12-16T18:58:57Z
comments: Accepted to ICML 2025
journal_ref: null
doi: null
ss_title: SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator
ss_authors: ['Guoxuan Chen', 'Han Shi', 'Jiawei Li', 'Yihang Gao', 'Xiaozhe Ren', 'Yimeng Chen', 'Xin Jiang', 'Zhenguo Li', 'Weiyang Liu', 'Chao Huang']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 11
ss_referenceCount: 46
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.12225
title: DLF: Disentangled-Language-Focused Multimodal Sentiment Analysis
authors: ['Pan Wang', 'Qiang Zhou', 'Yawen Wu', 'Tianlong Chen', 'Jingtong Hu']
categories: ['cs.LG', 'cs.AI', 'cs.CL', 'cs.MM']
summary: Multimodal Sentiment Analysis (MSA) leverages heterogeneous modalities, such as language, vision, and audio, to enhance the understanding of human sentiment. While existing models often focus on extracting shared information across modalities or directly fusing heterogeneous modalities, such approaches can introduce re...
published: 2024-12-16T10:03:44Z
comments: AAAI 2025 accepted
journal_ref: null
doi: null
ss_title: DLF: Disentangled-Language-Focused Multimodal Sentiment Analysis
ss_authors: ['Pan Wang', 'Qiang Zhou', 'Yawen Wu', 'Tianlong Chen', 'Jingtong Hu']
ss_year: 2024
ss_venue: AAAI Conference on Artificial Intelligence
ss_citationCount: 3
ss_referenceCount: 42
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.12318
title: Graph-Guided Textual Explanation Generation Framework
authors: ['Shuzhou Yuan', 'Jingyi Sun', 'Ran Zhang', 'Michael Färber', 'Steffen Eger', 'Pepa Atanasova', 'Isabelle Augenstein']
categories: ['cs.CL']
summary: Natural language explanations (NLEs) are commonly used to provide plausible free-text explanations of a model's reasoning about its predictions. However, recent work has questioned their faithfulness, as they may not accurately reflect the model's internal reasoning process regarding its predicted answer. In contrast, ...
published: 2024-12-16T19:35:55Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.12345
title: Critical groups and partitions of finite groups
authors: ['Daniela Bubboloni', 'Nicolas Pinzauti']
categories: ['math.GR', 'math.CO', '05C25, 20D99, 06A15']
summary: We define a class of finite groups based on the properties of the closed twins of their power graphs and study the structure of those groups. As a byproduct, we obtain results about finite groups admitting a partition by cyclic subgroups.
published: 2024-12-16T20:39:39Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.12505
title: DocFusion: A Unified Framework for Document Parsing Tasks
authors: ['Mingxu Chai', 'Ziyu Shen', 'Chong Zhang', 'Yue Zhang', 'Xiao Wang', 'Shihan Dou', 'Jihua Kang', 'Jiazheng Zhang', 'Qi Zhang']
categories: ['cs.CL']
summary: Document parsing is essential for analyzing complex document structures and extracting fine-grained information, supporting numerous downstream applications. However, existing methods often require integrating multiple independent models to handle various parsing tasks, leading to high complexity and maintenance overhe...
published: 2024-12-17T03:20:00Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.12559
title: EXIT: Context-Aware Extractive Compression for Enhancing Retrieval-Augmented Generation
authors: ['Taeho Hwang', 'Sukmin Cho', 'Soyeong Jeong', 'Hoyun Song', 'SeungYoon Han', 'Jong C. Park']
categories: ['cs.CL', 'cs.AI', 'cs.IR']
summary: We introduce EXIT, an extractive context compression framework that enhances both the effectiveness and efficiency of retrieval-augmented generation (RAG) in question answering (QA). Current RAG systems often struggle when retrieval models fail to rank the most relevant documents, leading to the inclusion of more conte...
published: 2024-12-17T05:38:27Z
comments: Findings of ACL 2025
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.12627
title: Make Imagination Clearer! Stable Diffusion-based Visual Imagination for Multimodal Machine Translation
authors: ['Andong Chen', 'Yuchen Song', 'Kehai Chen', 'Muyun Yang', 'Tiejun Zhao', 'Min Zhang']
categories: ['cs.CL']
summary: Visual information has been introduced for enhancing machine translation (MT), and its effectiveness heavily relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations. In this paper, we introduce a stable diffusion-based imagination network into a multimodal large la...
published: 2024-12-17T07:41:23Z
comments: Work in progress
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.12863
title: DISC: Plug-and-Play Decoding Intervention with Similarity of Characters for Chinese Spelling Check
authors: ['Ziheng Qiao', 'Houquan Zhou', 'Yumeng Liu', 'Zhenghua Li', 'Min Zhang', 'Bo Zhang', 'Chen Li', 'Ji Zhang', 'Fei Huang']
categories: ['cs.CL', 'cs.AI']
summary: One key characteristic of the Chinese spelling check (CSC) task is that incorrect characters are usually similar to the correct ones in either phonetics or glyph. To accommodate this, previous works usually leverage confusion sets, which suffer from two problems, i.e., difficulty in determining which character pairs to...
published: 2024-12-17T12:44:06Z
comments: null
journal_ref: null
doi: null
ss_title: DISC: Plug-and-Play Decoding Intervention with Similarity of Characters for Chinese Spelling Check
ss_authors: ['Ziheng Qiao', 'Houquan Zhou', 'Yumeng Liu', 'Zhenghua Li', 'Min Zhang', 'Bo Zhang', 'Chen Li', 'Ji Zhang', 'Fei Huang']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 0
ss_referenceCount: 24
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.12888
title: ArtAug: Enhancing Text-to-Image Generation through Synthesis-Understanding Interaction
authors: ['Zhongjie Duan', 'Qianyi Zhao', 'Cen Chen', 'Daoyuan Chen', 'Wenmeng Zhou', 'Yaliang Li', 'Yingda Chen']
categories: ['cs.CV', 'cs.AI']
summary: The emergence of diffusion models has significantly advanced image synthesis. The recent studies of model interaction and self-corrective reasoning approach in large language models offer new insights for enhancing text-to-image models. Inspired by these studies, we propose a novel method called ArtAug for enhancing te...
published: 2024-12-17T13:12:31Z
comments: 18 pages, 8 figures
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.12953
title: Efficient Diffusion Transformer Policies with Mixture of Expert Denoisers for Multitask Learning
authors: ['Moritz Reuss', 'Jyothish Pari', 'Pulkit Agrawal', 'Rudolf Lioutikov']
categories: ['cs.LG', 'cs.RO']
summary: Diffusion Policies have become widely used in Imitation Learning, offering several appealing properties, such as generating multimodal and discontinuous behavior. As models are becoming larger to capture more complex capabilities, their computational demands increase, as shown by recent scaling laws. Therefore, continu...
published: 2024-12-17T14:34:51Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.13018
title: OmniEval: An Omnidirectional and Automatic RAG Evaluation Benchmark in Financial Domain
authors: ['Shuting Wang', 'Jiejun Tan', 'Zhicheng Dou', 'Ji-Rong Wen']
categories: ['cs.CL']
summary: As a typical and practical application of Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) techniques have gained extensive attention, particularly in vertical domains where LLMs may lack domain-specific knowledge. In this paper, we introduce an omnidirectional and automatic RAG benchmark, OmniEval, i...
published: 2024-12-17T15:38:42Z
comments: null
journal_ref: null
doi: null
ss_title: OmniEval: An Omnidirectional and Automatic RAG Evaluation Benchmark in Financial Domain
ss_authors: ['Shuting Wang', 'Jiejun Tan', 'Zhicheng Dou', 'Ji-Rong Wen']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 7
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2412.13059
title: 3D MedDiffusion: A 3D Medical Diffusion Model for Controllable and High-quality Medical Image Generation
authors: ['Haoshen Wang', 'Zhentao Liu', 'Kaicong Sun', 'Xiaodong Wang', 'Dinggang Shen', 'Zhiming Cui']
categories: ['eess.IV', 'cs.CV']
summary: The generation of medical images presents significant challenges due to their high-resolution and three-dimensional nature. Existing methods often yield suboptimal performance in generating high-quality 3D medical images, and there is currently no universal generative framework for medical imaging. In this paper, we in...
published: 2024-12-17T16:25:40Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.13061
title: VidTok: A Versatile and Open-Source Video Tokenizer
authors: ['Anni Tang', 'Tianyu He', 'Junliang Guo', 'Xinle Cheng', 'Li Song', 'Jiang Bian']
categories: ['cs.CV', 'cs.AI', 'cs.LG']
summary: Encoding video content into compact latent tokens has become a fundamental step in video generation and understanding, driven by the need to address the inherent redundancy in pixel-level representations. Consequently, there is a growing demand for high-performance, open-source video tokenizers as video-centric researc...
published: 2024-12-17T16:27:11Z
comments: Code & Models: https://github.com/microsoft/VidTok
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2412.13071
title: CLASP: Contrastive Language-Speech Pretraining for Multilingual Multimodal Information Retrieval
authors: ['Mohammad Mahdi Abootorabi', 'Ehsaneddin Asgari']
categories: ['cs.CL', 'cs.IR', 'cs.SD', 'eess.AS']
summary: This study introduces CLASP (Contrastive Language-Speech Pretraining), a multilingual, multimodal representation tailored for audio-text information retrieval. CLASP leverages the synergy between spoken content and textual data. During training, we utilize our newly introduced speech-text dataset, which encompasses 15 ...
published: 2024-12-17T16:38:10Z
comments: accepted at ECIR 2025, 13 pages, 4 figures
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

2,412.13126
A Knowledge-enhanced Pathology Vision-language Foundation Model for Cancer Diagnosis
['Xiao Zhou', 'Luoyi Sun', 'Dexuan He', 'Wenbin Guan', 'Ruifen Wang', 'Lifeng Wang', 'Xin Sun', 'Kun Sun', 'Ya Zhang', 'Yanfeng Wang', 'Weidi Xie']
['eess.IV', 'cs.CV']
Deep learning has enabled the development of highly robust foundation models for various pathological tasks across diverse diseases and patient cohorts. Among these models, vision-language pre-training, which leverages large-scale paired data to align pathology image and text embedding spaces, and provides a novel zero...
2024-12-17T17:45:21Z
null
null
null
null
null
null
null
null
null
null
2,412.13147
Are Your LLMs Capable of Stable Reasoning?
['Junnan Liu', 'Hongwei Liu', 'Linchen Xiao', 'Ziyi Wang', 'Kuikun Liu', 'Songyang Gao', 'Wenwei Zhang', 'Songyang Zhang', 'Kai Chen']
['cs.AI', 'cs.CL']
The rapid advancement of large language models (LLMs) has shown remarkable progress in complex reasoning tasks. However, a significant disparity exists between benchmark performances and real-world applications. We attribute this gap primarily to current evaluation protocols and metrics, which inadequately capture the ...
2024-12-17T18:12:47Z
ACL 2025 Camera
null
null
null
null
null
null
null
null
null
2,412.13187
HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction
['Chen Bao', 'Jiarui Xu', 'Xiaolong Wang', 'Abhinav Gupta', 'Homanga Bharadhwaj']
['cs.CV', 'cs.LG']
How can we predict future interaction trajectories of human hands in a scene given high-level colloquial task specifications in the form of natural language? In this paper, we extend the classic hand trajectory prediction task to two tasks involving explicit or implicit language queries. Our proposed tasks require exte...
2024-12-17T18:58:33Z
Preprint. Under Review
null
null
null
null
null
null
null
null
null
2,412.13194
Proposer-Agent-Evaluator(PAE): Autonomous Skill Discovery For Foundation Model Internet Agents
['Yifei Zhou', 'Qianlan Yang', 'Kaixiang Lin', 'Min Bai', 'Xiong Zhou', 'Yu-Xiong Wang', 'Sergey Levine', 'Erran Li']
['cs.LG', 'cs.AI', 'cs.CV']
The vision of a broadly capable and goal-directed agent, such as an Internet-browsing agent in the digital world and a household humanoid in the physical world, has rapidly advanced, thanks to the generalization capability of foundation models. Such a generalist agent needs to have a large and diverse skill repertoire,...
2024-12-17T18:59:50Z
null
null
null
null
null
null
null
null
null
null
2,412.13211
ManiSkill-HAB: A Benchmark for Low-Level Manipulation in Home Rearrangement Tasks
['Arth Shukla', 'Stone Tao', 'Hao Su']
['cs.RO', 'cs.AI', 'cs.CV', 'cs.LG']
High-quality benchmarks are the foundation for embodied AI research, enabling significant advancements in long-horizon navigation, manipulation and rearrangement tasks. However, as frontier tasks in robotics get more advanced, they require faster simulation speed, more intricate test environments, and larger demonstrat...
2024-12-09T01:29:24Z
null
null
null
null
null
null
null
null
null
null
2,412.13303
FastVLM: Efficient Vision Encoding for Vision Language Models
['Pavan Kumar Anasosalu Vasu', 'Fartash Faghri', 'Chun-Liang Li', 'Cem Koc', 'Nate True', 'Albert Antony', 'Gokul Santhanam', 'James Gabriel', 'Peter Grasch', 'Oncel Tuzel', 'Hadi Pouransari']
['cs.CV', 'cs.AI', 'cs.LG']
Scaling the input image resolution is essential for enhancing the performance of Vision Language Models (VLMs), particularly in text-rich image understanding tasks. However, popular visual encoders such as ViTs become inefficient at high resolutions due to the large number of tokens and high encoding latency caused by ...
2024-12-17T20:09:55Z
CVPR 2025
null
null
FastVLM: Efficient Vision Encoding for Vision Language Models
['Pavan Kumar Anasosalu Vasu', 'Fartash Faghri', 'Chun-Liang Li', 'Cem Koc', 'Nate True', 'Albert Antony', 'Gokul Santhanam', 'James Gabriel', 'Peter Grasch', 'Oncel Tuzel', 'Hadi Pouransari']
2,024
arXiv.org
9
90
['Computer Science']
2,412.13335
Training Dynamics of a 1.7B LLaMa Model: A Data-Efficient Approach
['Miles Q. Li', 'Benjamin C. M. Fung', 'Shih-Chia Huang']
['cs.CL', 'cs.AI']
Pretraining large language models is a complex endeavor influenced by multiple factors, including model architecture, data quality, training continuity, and hardware constraints. In this paper, we share insights gained from the experience of training DMaS-LLaMa-Lite, a fully open source, 1.7-billion-parameter, LLaMa-ba...
2024-12-17T21:15:52Z
null
null
null
null
null
null
null
null
null
null
2,412.13462
SAVGBench: Benchmarking Spatially Aligned Audio-Video Generation
['Kazuki Shimada', 'Christian Simon', 'Takashi Shibuya', 'Shusuke Takahashi', 'Yuki Mitsufuji']
['cs.SD', 'cs.MM', 'eess.AS']
This work addresses the lack of multimodal generative models capable of producing high-quality videos with spatially aligned audio. While recent advancements in generative models have been successful in video generation, they often overlook the spatial alignment between audio and visuals, which is essential for immersi...
2024-12-18T03:18:03Z
5 pages, 3 figures
null
null
SAVGBench: Benchmarking Spatially Aligned Audio-Video Generation
['Kazuki Shimada', 'Christian Simon', 'Takashi Shibuya', 'Shusuke Takahashi', 'Yuki Mitsufuji']
2,024
arXiv.org
0
0
['Computer Science', 'Engineering']
2,412.13663
Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference
['Benjamin Warner', 'Antoine Chaffin', 'Benjamin Clavié', 'Orion Weller', 'Oskar Hallström', 'Said Taghadouini', 'Alexis Gallagher', 'Raja Biswas', 'Faisal Ladhak', 'Tom Aarsen', 'Nathan Cooper', 'Griffin Adams', 'Jeremy Howard', 'Iacopo Poli']
['cs.CL', 'cs.AI']
Encoder-only transformer models such as BERT offer a great performance-size tradeoff for retrieval and classification tasks with respect to larger decoder-only models. Despite being the workhorse of numerous production pipelines, there have been limited Pareto improvements to BERT since its release. In this paper, we i...
2024-12-18T09:39:44Z
null
null
null
null
null
null
null
null
null
null
2,412.13702
Typhoon 2: A Family of Open Text and Multimodal Thai Large Language Models
['Kunat Pipatanakul', 'Potsawee Manakul', 'Natapong Nitarach', 'Warit Sirichotedumrong', 'Surapon Nonesung', 'Teetouch Jaknamon', 'Parinthapat Pengpun', 'Pittawat Taveekitworachai', 'Adisai Na-Thalang', 'Sittipong Sripaisarnmongkol', 'Krisanapong Jirayoot', 'Kasima Tharnpipitchai']
['cs.CL', 'cs.AI']
This paper introduces Typhoon 2, a series of text and multimodal large language models optimized for the Thai language. The series includes models for text, vision, and audio. Typhoon2-Text builds on state-of-the-art open models, such as Llama 3 and Qwen2, and we perform continual pre-training on a mixture of English a...
2024-12-18T10:45:24Z
technical report, 55 pages
null
null
Typhoon 2: A Family of Open Text and Multimodal Thai Large Language Models
['Kunat Pipatanakul', 'Potsawee Manakul', 'Natapong Nitarach', 'Warit Sirichotedumrong', 'Surapon Nonesung', 'Teetouch Jaknamon', 'Parinthapat Pengpun', 'Pittawat Taveekitworachai', 'Adisai Na-Thalang', 'Sittipong Sripaisarnmongkol', 'Krisanapong Jirayoot', 'Kasima Tharnpipitchai']
2,024
arXiv.org
2
0
['Computer Science']
2,412.1386
Domain-adaptative Continual Learning for Low-resource Tasks: Evaluation on Nepali
['Sharad Duwal', 'Suraj Prasai', 'Suresh Manandhar']
['cs.CL', 'cs.LG']
Continual learning has emerged as an important research direction due to the infeasibility of retraining large language models (LLMs) from scratch in the event of new data availability. Of great interest is the domain-adaptive pre-training (DAPT) paradigm, which focuses on continually training a pre-trained language mo...
2024-12-18T13:53:59Z
10 pages, 2 figures
null
null
Domain-adaptative Continual Learning for Low-resource Tasks: Evaluation on Nepali
['Sharad Duwal', 'Suraj Prasai', 'Suresh Manandhar']
2,024
arXiv.org
1
44
['Computer Science']
2,412.13871
LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer
['Yipeng Zhang', 'Yifan Liu', 'Zonghao Guo', 'Yidan Zhang', 'Xuesong Yang', 'Xiaoying Zhang', 'Chi Chen', 'Jun Song', 'Bo Zheng', 'Yuan Yao', 'Zhiyuan Liu', 'Tat-Seng Chua', 'Maosong Sun']
['cs.CV']
Vision transformers (ViTs) are widely employed in multimodal large language models (MLLMs) for visual encoding. However, they exhibit inferior performance on tasks regarding fine-grained visual perception. We attribute this to the limitations of ViTs in capturing diverse multi-modal visual levels, such as low-level det...
2024-12-18T14:07:46Z
null
null
null
null
null
null
null
null
null
null
2,412.13922
Pipeline Analysis for Developing Instruct LLMs in Low-Resource Languages: A Case Study on Basque
['Ander Corral', 'Ixak Sarasua', 'Xabier Saralegi']
['cs.CL', 'cs.AI', 'cs.LG']
Large language models (LLMs) are typically optimized for resource-rich languages like English, exacerbating the gap between high-resource and underrepresented languages. This work presents a detailed analysis of strategies for developing a model capable of following instructions in a low-resource language, specifically...
2024-12-18T15:05:59Z
null
null
null
null
null
null
null
null
null
null
2,412.14042
CAD-Recode: Reverse Engineering CAD Code from Point Clouds
['Danila Rukhovich', 'Elona Dupont', 'Dimitrios Mallis', 'Kseniya Cherenkova', 'Anis Kacem', 'Djamila Aouada']
['cs.CV']
Computer-Aided Design (CAD) models are typically constructed by sequentially drawing parametric sketches and applying CAD operations to obtain a 3D model. The problem of 3D CAD reverse engineering consists of reconstructing the sketch and CAD operation sequences from 3D representations such as point clouds. In this pap...
2024-12-18T16:55:42Z
null
null
null
null
null
null
null
null
null
null
2,412.14123
AnySat: One Earth Observation Model for Many Resolutions, Scales, and Modalities
['Guillaume Astruc', 'Nicolas Gonthier', 'Clement Mallet', 'Loic Landrieu']
['cs.CV']
Geospatial models must adapt to the diversity of Earth observation data in terms of resolutions, scales, and modalities. However, existing approaches expect fixed input configurations, which limits their practical applicability. We propose AnySat, a multimodal model based on joint embedding predictive architecture (JEP...
2024-12-18T18:11:53Z
null
null
null
AnySat: One Earth Observation Model for Many Resolutions, Scales, and Modalities
['Guillaume Astruc', 'Nicolas Gonthier', 'Clement Mallet', 'Loic Landrieu']
2,024
null
15
81
['Computer Science']
2,412.14135
Scaling of Search and Learning: A Roadmap to Reproduce o1 from Reinforcement Learning Perspective
['Zhiyuan Zeng', 'Qinyuan Cheng', 'Zhangyue Yin', 'Bo Wang', 'Shimin Li', 'Yunhua Zhou', 'Qipeng Guo', 'Xuanjing Huang', 'Xipeng Qiu']
['cs.AI', 'cs.LG']
OpenAI o1 represents a significant milestone in Artificial Intelligence, achieving expert-level performance on many challenging tasks that require strong reasoning ability. OpenAI has claimed that the main technique behind o1 is reinforcement learning. Recent works use alternative approaches like knowledge ...
2024-12-18T18:24:47Z
null
null
null
null
null
null
null
null
null
null
2,412.1414
GLIDER: Grading LLM Interactions and Decisions using Explainable Ranking
['Darshan Deshpande', 'Selvan Sunitha Ravi', 'Sky CH-Wang', 'Bartosz Mielczarek', 'Anand Kannappan', 'Rebecca Qian']
['cs.CL', 'cs.AI']
The LLM-as-judge paradigm is increasingly being adopted for automated evaluation of model outputs. While LLM judges have shown promise on constrained evaluation tasks, closed source LLMs display critical shortcomings when deployed in real world applications due to challenges of fine grained metrics and explainability, ...
2024-12-18T18:41:12Z
null
null
null
null
null
null
null
null
null
null
2,412.14158
AKiRa: Augmentation Kit on Rays for optical video generation
['Xi Wang', 'Robin Courant', 'Marc Christie', 'Vicky Kalogeiton']
['cs.CV', 'cs.AI', 'cs.MM']
Recent advances in text-conditioned video diffusion have greatly improved video quality. However, these methods offer limited or sometimes no control to users on camera aspects, including dynamic camera motion, zoom, distorted lens and focus shifts. These motion and optical aspects are crucial for adding controllabilit...
2024-12-18T18:53:22Z
null
null
null
null
null
null
null
null
null
null
2,412.14169
Autoregressive Video Generation without Vector Quantization
['Haoge Deng', 'Ting Pan', 'Haiwen Diao', 'Zhengxiong Luo', 'Yufeng Cui', 'Huchuan Lu', 'Shiguang Shan', 'Yonggang Qi', 'Xinlong Wang']
['cs.CV']
This paper presents a novel approach that enables autoregressive video generation with high efficiency. We propose to reformulate the video generation problem as a non-quantized autoregressive modeling of temporal frame-by-frame prediction and spatial set-by-set prediction. Unlike raster-scan prediction in prior autore...
2024-12-18T18:59:53Z
Accepted to ICLR 2025. Project page at https://github.com/baaivision/NOVA
null
null
Autoregressive Video Generation without Vector Quantization
['Haoge Deng', 'Ting Pan', 'Haiwen Diao', 'Zhengxiong Luo', 'Yufeng Cui', 'Huchuan Lu', 'Shiguang Shan', 'Yonggang Qi', 'Xinlong Wang']
2,024
arXiv.org
31
0
['Computer Science']
2,412.14172
Learning from Massive Human Videos for Universal Humanoid Pose Control
['Jiageng Mao', 'Siheng Zhao', 'Siqi Song', 'Tianheng Shi', 'Junjie Ye', 'Mingtong Zhang', 'Haoran Geng', 'Jitendra Malik', 'Vitor Guizilini', 'Yue Wang']
['cs.RO', 'cs.AI', 'cs.CL', 'cs.CV']
Scalable learning of humanoid robots is crucial for their deployment in real-world applications. While traditional approaches primarily rely on reinforcement learning or teleoperation to achieve whole-body control, they are often limited by the diversity of simulated environments and the high costs of demonstration col...
2024-12-18T18:59:56Z
null
null
null
null
null
null
null
null
null
null
2,412.14173
AniDoc: Animation Creation Made Easier
['Yihao Meng', 'Hao Ouyang', 'Hanlin Wang', 'Qiuyu Wang', 'Wen Wang', 'Ka Leong Cheng', 'Zhiheng Liu', 'Yujun Shen', 'Huamin Qu']
['cs.CV']
The production of 2D animation follows an industry-standard workflow, encompassing four essential stages: character design, keyframe animation, in-betweening, and coloring. Our research focuses on reducing the labor costs in the above process by harnessing the potential of increasingly powerful generative AI. Using vid...
2024-12-18T18:59:59Z
Project page and code: https://yihao-meng.github.io/AniDoc_demo
null
null
AniDoc: Animation Creation Made Easier
['Yihao Meng', 'Ouyang Hao', 'Hanlin Wang', 'Qiuyu Wang', 'Wen Wang', 'Ka Leong Cheng', 'Zhiheng Liu', 'Yujun Shen', 'Huamin Qu']
2,024
arXiv.org
6
60
['Computer Science']
2,412.14197
Advancing Vehicle Plate Recognition: Multitasking Visual Language Models with VehiclePaliGemma
['Nouar AlDahoul', 'Myles Joshua Toledo Tan', 'Raghava Reddy Tera', 'Hezerul Abdul Karim', 'Chee How Lim', 'Manish Kumar Mishra', 'Yasir Zaki']
['cs.CV', 'cs.LG']
License plate recognition (LPR) involves automated systems that utilize cameras and computer vision to read vehicle license plates. Such plates collected through LPR can then be compared against databases to identify stolen vehicles, uninsured drivers, crime suspects, and more. The LPR system plays a significant role i...
2024-12-14T16:22:10Z
33 pages, 9 figures
null
null
Advancing Vehicle Plate Recognition: Multitasking Visual Language Models with VehiclePaliGemma
['Nouar Aldahoul', 'M. J. Tan', 'Raghava Reddy Tera', 'Hezerul Bin Abdul Karim', 'Chee How Lim', 'Manish Kumar Mishra', 'Yasir Zaki']
2,024
arXiv.org
1
0
['Computer Science']
2,412.14203
BlenderLLM: Training Large Language Models for Computer-Aided Design with Self-improvement
['Yuhao Du', 'Shunian Chen', 'Wenbo Zan', 'Peizhao Li', 'Mingxuan Wang', 'Dingjie Song', 'Bo Li', 'Yan Hu', 'Benyou Wang']
['cs.HC', 'cs.AI']
The application of Large Language Models (LLMs) in Computer-Aided Design (CAD) remains an underexplored area, despite their remarkable advancements in other domains. In this paper, we present BlenderLLM, a novel framework for training LLMs specifically for CAD tasks leveraging a self-improvement methodology. To support...
2024-12-16T14:34:02Z
null
null
null
BlenderLLM: Training Large Language Models for Computer-Aided Design with Self-improvement
['Yuhao Du', 'Shunian Chen', 'Wenbo Zan', 'Peizhao Li', 'Mingxuan Wang', 'Dingjie Song', 'Bo Li', 'Yan Hu', 'Benyou Wang']
2,024
arXiv.org
3
0
['Computer Science']
2,412.1447
Agent-SafetyBench: Evaluating the Safety of LLM Agents
['Zhexin Zhang', 'Shiyao Cui', 'Yida Lu', 'Jingzhuo Zhou', 'Junxiao Yang', 'Hongning Wang', 'Minlie Huang']
['cs.CL']
As large language models (LLMs) are increasingly deployed as agents, their integration into interactive environments and tool use introduce new safety challenges beyond those associated with the models themselves. However, the absence of comprehensive benchmarks for evaluating agent safety presents a significant barrie...
2024-12-19T02:35:15Z
26 pages
null
null
null
null
null
null
null
null
null
2,412.14475
MegaPairs: Massive Data Synthesis For Universal Multimodal Retrieval
['Junjie Zhou', 'Zheng Liu', 'Ze Liu', 'Shitao Xiao', 'Yueze Wang', 'Bo Zhao', 'Chen Jason Zhang', 'Defu Lian', 'Yongping Xiong']
['cs.CV', 'cs.CL']
Despite the rapidly growing demand for multimodal retrieval, progress in this field remains severely constrained by a lack of training data. In this paper, we introduce MegaPairs, a novel data synthesis method that leverages vision language models (VLMs) and open-domain images, together with a massive synthetic dataset...
2024-12-19T02:49:55Z
null
null
null
null
null
null
null
null
null
null
2,412.1451
PA-RAG: RAG Alignment via Multi-Perspective Preference Optimization
['Jiayi Wu', 'Hengyi Cai', 'Lingyong Yan', 'Hao Sun', 'Xiang Li', 'Shuaiqiang Wang', 'Dawei Yin', 'Ming Gao']
['cs.CL', 'cs.AI']
The emergence of Retrieval-augmented generation (RAG) has alleviated the issues of outdated and hallucinatory content in the generation of large language models (LLMs), yet it still reveals numerous limitations. When a general-purpose LLM serves as the RAG generator, it often suffers from inadequate response informativ...
2024-12-19T04:18:51Z
null
null
null
PA-RAG: RAG Alignment via Multi-Perspective Preference Optimization
['Jiayi Wu', 'Hengyi Cai', 'Lingyong Yan', 'Hao Sun', 'Xiang Li', 'Shuaiqiang Wang', 'Dawei Yin', 'Ming Gao']
2,024
North American Chapter of the Association for Computational Linguistics
1
33
['Computer Science']
2,412.14574
Sliding Windows Are Not the End: Exploring Full Ranking with Long-Context Large Language Models
['Wenhan Liu', 'Xinyu Ma', 'Yutao Zhu', 'Ziliang Zhao', 'Shuaiqiang Wang', 'Dawei Yin', 'Zhicheng Dou']
['cs.IR', 'cs.CL']
Large Language Models (LLMs) have shown exciting performance in listwise passage ranking. Due to the limited input length, existing methods often adopt the sliding window strategy. Such a strategy, though effective, is inefficient as it involves repetitive and serialized processing, which usually re-evaluates relevant ...
2024-12-19T06:44:59Z
14 pages
null
null
Sliding Windows Are Not the End: Exploring Full Ranking with Long-Context Large Language Models
['Wenhan Liu', 'Xinyu Ma', 'Yutao Zhu', 'Ziliang Zhao', 'Shuaiqiang Wang', 'Dawei Yin', 'Zhicheng Dou']
2,024
arXiv.org
2
0
['Computer Science']
2,412.1468
A Light-Weight Framework for Open-Set Object Detection with Decoupled Feature Alignment in Joint Space
['Yonghao He', 'Hu Su', 'Haiyong Yu', 'Cong Yang', 'Wei Sui', 'Cong Wang', 'Song Liu']
['cs.CV', 'cs.AI', 'cs.RO']
Open-set object detection (OSOD) is highly desirable for robotic manipulation in unstructured environments. However, existing OSOD methods often fail to meet the requirements of robotic applications due to their high computational burden and complex deployment. To address this issue, this paper proposes a light-weight ...
2024-12-19T09:32:53Z
null
null
null
A Light-Weight Framework for Open-Set Object Detection with Decoupled Feature Alignment in Joint Space
['Yonghao He', 'Hu Su', 'Haiyong Yu', 'Cong Yang', 'Wei Sui', 'Cong Wang', 'Song Liu']
2,024
arXiv.org
1
0
['Computer Science']
2,412.15084
AceMath: Advancing Frontier Math Reasoning with Post-Training and Reward Modeling
['Zihan Liu', 'Yang Chen', 'Mohammad Shoeybi', 'Bryan Catanzaro', 'Wei Ping']
['cs.CL', 'cs.AI', 'cs.LG']
In this paper, we introduce AceMath, a suite of frontier math models that excel in solving complex math problems, along with highly effective reward models capable of evaluating generated solutions and reliably identifying the correct ones. To develop the instruction-tuned math models, we propose a supervised fine-tuni...
2024-12-19T17:29:44Z
null
null
null
null
null
null
null
null
null
null
2,412.15115
Qwen2.5 Technical Report
['Qwen', ':', 'An Yang', 'Baosong Yang', 'Beichen Zhang', 'Binyuan Hui', 'Bo Zheng', 'Bowen Yu', 'Chengyuan Li', 'Dayiheng Liu', 'Fei Huang', 'Haoran Wei', 'Huan Lin', 'Jian Yang', 'Jianhong Tu', 'Jianwei Zhang', 'Jianxin Yang', 'Jiaxi Yang', 'Jingren Zhou', 'Junyang Lin', 'Kai Dang', 'Keming Lu', 'Keqin Bao', 'Kexin Y...
['cs.CL']
In this report, we introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen 2.5 has been significantly improved during both the pre-training and post-training stages. In terms of pre-training, we have scaled the high-quality pre-trai...
2024-12-19T17:56:09Z
null
null
null
Qwen2.5 Technical Report
['Qwen An Yang', 'Baosong Yang', 'Beichen Zhang', 'Binyuan Hui', 'Bo Zheng', 'Bowen Yu', 'Chengyuan Li', 'Dayiheng Liu', 'Fei Huang', 'Guanting Dong', 'Haoran Wei', 'Huan Lin', 'Jian Yang', 'Jianhong Tu', 'Jianwei Zhang', 'Jianxin Yang', 'Jiaxin Yang', 'Jingren Zhou', 'Junyang Lin', 'Kai Dang', 'Keming Lu', 'Keqin Bao'...
2,024
arXiv.org
1,406
0
['Computer Science']
2,412.15119
Parallelized Autoregressive Visual Generation
['Yuqing Wang', 'Shuhuai Ren', 'Zhijie Lin', 'Yujin Han', 'Haoyuan Guo', 'Zhenheng Yang', 'Difan Zou', 'Jiashi Feng', 'Xihui Liu']
['cs.CV']
Autoregressive models have emerged as a powerful approach for visual generation but suffer from slow inference speed due to their sequential token-by-token prediction process. In this paper, we propose a simple yet effective approach for parallelized autoregressive visual generation that improves generation efficiency ...
2024-12-19T17:59:54Z
CVPR 2025 Accepted - Project Page: https://yuqingwang1029.github.io/PAR-project
null
null
null
null
null
null
null
null
null
2,412.15195
Preventing Local Pitfalls in Vector Quantization via Optimal Transport
['Borui Zhang', 'Wenzhao Zheng', 'Jie Zhou', 'Jiwen Lu']
['cs.CV', 'cs.LG']
Vector-quantized networks (VQNs) have exhibited remarkable performance across various tasks, yet they are prone to training instability, which complicates the training process due to the necessity for techniques such as subtle initialization and model distillation. In this study, we identify the local minima issue as t...
2024-12-19T18:58:14Z
Code is available at https://github.com/zbr17/OptVQ
null
null
null
null
null
null
null
null
null
2,412.152
DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation
['Wang Zhao', 'Yan-Pei Cao', 'Jiale Xu', 'Yuejiang Dong', 'Ying Shan']
['cs.CV', 'cs.AI', 'cs.GR']
Procedural Content Generation (PCG) is powerful in creating high-quality 3D contents, yet controlling it to produce desired shapes is difficult and often requires extensive parameter tuning. Inverse Procedural Content Generation aims to automatically find the best parameters under the input condition. However, existing...
2024-12-19T18:58:46Z
Project page: https://thuzhaowang.github.io/projects/DI-PCG/
null
null
DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation
['Wang Zhao', 'Yanpei Cao', 'Jiale Xu', 'Yuejiang Dong', 'Ying Shan']
2,024
arXiv.org
3
0
['Computer Science']
2,412.15205
FlowAR: Scale-wise Autoregressive Image Generation Meets Flow Matching
['Sucheng Ren', 'Qihang Yu', 'Ju He', 'Xiaohui Shen', 'Alan Yuille', 'Liang-Chieh Chen']
['cs.CV']
Autoregressive (AR) modeling has achieved remarkable success in natural language processing by enabling models to generate text with coherence and contextual understanding through next token prediction. Recently, in image generation, VAR proposes scale-wise autoregressive modeling, which extends the next token predicti...
2024-12-19T18:59:31Z
null
null
null
FlowAR: Scale-wise Autoregressive Image Generation Meets Flow Matching
['Sucheng Ren', 'Qihang Yu', 'Ju He', 'Xiaohui Shen', 'Alan L. Yuille', 'Liang-Chieh Chen']
2,024
arXiv.org
21
0
['Computer Science']
2,412.15213
Flowing from Words to Pixels: A Noise-Free Framework for Cross-Modality Evolution
['Qihao Liu', 'Xi Yin', 'Alan Yuille', 'Andrew Brown', 'Mannat Singh']
['cs.CV']
Diffusion models, and their generalization, flow matching, have had a remarkable impact on the field of media generation. Here, the conventional approach is to learn the complex mapping from a simple source distribution of Gaussian noise to the target media distribution. For cross-modal tasks such as text-to-image gene...
2024-12-19T18:59:56Z
CVPR 2025 camera-ready version. Project page: https://cross-flow.github.io/
null
null
Flowing from Words to Pixels: A Noise-Free Framework for Cross-Modality Evolution
['Qihao Liu', 'Xi Yin', 'Alan L. Yuille', 'Andrew Brown', 'Mannat Singh']
2,024
Computer Vision and Pattern Recognition
2
97
['Computer Science']
2,412.15252
NER-RoBERTa: Fine-Tuning RoBERTa for Named Entity Recognition (NER) within low-resource languages
['Abdulhady Abas Abdullah', 'Srwa Hasan Abdulla', 'Dalia Mohammad Toufiq', 'Halgurd S. Maghdid', 'Tarik A. Rashid', 'Pakshan F. Farho', 'Shadan Sh. Sabr', 'Akar H. Taher', 'Darya S. Hamad', 'Hadi Veisi', 'Aras T. Asaad']
['cs.CL', 'cs.AI']
Nowadays, Natural Language Processing (NLP) is an important tool for most people's daily life routines, ranging from understanding speech, translation, named entity recognition (NER), and text categorization, to generative text models such as ChatGPT. Due to the existence of big data and consequently large corpora for ...
2024-12-15T07:07:17Z
null
null
null
NER-RoBERTa: Fine-Tuning RoBERTa for Named Entity Recognition (NER) within low-resource languages
['A. A. Abdullah', 'Srwa Hasan Abdulla', 'Dalia Mohammad Toufiq', 'H. S. Maghdid', 'Tarik A. Rashid', 'Pakshan F. Farho', 'S. Sabr', 'Akar H. Taher', 'D. S. Hamad', 'Hadi Veisi', 'Aras T. Asaad']
2,024
arXiv.org
2
0
['Computer Science']
2,412.15258
DisEmbed: Transforming Disease Understanding through Embeddings
['Salman Faroz']
['cs.CL', 'cs.LG']
The medical domain is vast and diverse, with many existing embedding models focused on general healthcare applications. However, these models often struggle to capture a deep understanding of diseases due to their broad generalization across the entire medical field. To address this gap, I present DisEmbed, a disease-f...
2024-12-16T12:04:22Z
null
null
null
null
null
null
null
null
null
null
2,412.15322
MMAudio: Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis
['Ho Kei Cheng', 'Masato Ishii', 'Akio Hayakawa', 'Takashi Shibuya', 'Alexander Schwing', 'Yuki Mitsufuji']
['cs.CV', 'cs.LG', 'cs.SD', 'eess.AS']
We propose to synthesize high-quality and synchronized audio, given video and optional text conditions, using a novel multimodal joint training framework MMAudio. In contrast to single-modality training conditioned on (limited) video data only, MMAudio is jointly trained with larger-scale, readily available text-audio ...
2024-12-19T18:59:55Z
Accepted to CVPR 2025. Project page: https://hkchengrex.github.io/MMAudio
null
null
null
null
null
null
null
null
null
2,412.1545
Fietje: An open, efficient LLM for Dutch
['Bram Vanroy']
['cs.CL']
This paper introduces Fietje, a family of small language models (SLMs) specifically designed for the Dutch language. The model is based on Phi 2, an English-centric model of 2.7 billion parameters. Fietje demonstrated competitive results with larger language models upon its release. A core emphasis of this work is tran...
2024-12-19T23:06:01Z
null
null
null
null
null
null
null
null
null
null
2,412.15495
TL-Training: A Task-Feature-Based Framework for Training Large Language Models in Tool Use
['Junjie Ye', 'Yilong Wu', 'Sixian Li', 'Yuming Yang', 'Tao Gui', 'Qi Zhang', 'Xuanjing Huang', 'Peng Wang', 'Zhongchao Shi', 'Jianping Fan', 'Zhengyin Du']
['cs.CL', 'cs.AI']
Large language models (LLMs) achieve remarkable advancements by leveraging tools to interact with external environments, a critical step toward generalized AI. However, the standard supervised fine-tuning (SFT) approach, which relies on large-scale datasets, often overlooks task-specific characteristics in tool use, le...
2024-12-20T02:21:36Z
null
null
null
TL-Training: A Task-Feature-Based Framework for Training Large Language Models in Tool Use
['Junjie Ye', 'Yilong Wu', 'Sixian Li', 'Yuming Yang', 'Tao Gui', 'Qi Zhang', 'Xuanjing Huang', 'Peng Wang', 'Zhongchao Shi', 'Jianping Fan', 'Zhengyin Du']
2,024
arXiv.org
3
0
['Computer Science']
2,412.15499
A Robust Prototype-Based Network with Interpretable RBF Classifier Foundations
['Sascha Saralajew', 'Ashish Rana', 'Thomas Villmann', 'Ammar Shaker']
['cs.LG', 'cs.AI', 'cs.CV']
Prototype-based classification learning methods are known to be inherently interpretable. However, this paradigm suffers from major limitations compared to deep models, such as lower performance. This led to the development of the so-called deep Prototype-Based Networks (PBNs), also known as prototypical parts models. ...
2024-12-20T02:25:31Z
To appear at AAAI 2025. Includes the Appendix of the AAAI submission. In v2, the font size has been increased in some figures. In v3, an incorrect hyperparameter specification (Table 6; $\lambda$) has been corrected
null
null
null
null
null
null
null
null
null
2,412.15594
Template-Driven LLM-Paraphrased Framework for Tabular Math Word Problem Generation
['Xiaoqiang Kang', 'Zimu Wang', 'Xiaobo Jin', 'Wei Wang', 'Kaizhu Huang', 'Qiufeng Wang']
['cs.CL']
Solving tabular math word problems (TMWPs) has become a critical role in evaluating the mathematical reasoning ability of large language models (LLMs), where large-scale TMWP samples are commonly required for LLM fine-tuning. Since the collection of high-quality TMWP datasets is costly and time-consuming, recent resear...
2024-12-20T06:34:57Z
Accepted at AAAI 2025, extended version with appendix
null
null
Template-Driven LLM-Paraphrased Framework for Tabular Math Word Problem Generation
['Xiaoqiang Kang', 'Zimu Wang', 'Xiao-Bo Jin', 'Wei Wang', 'Kaizhu Huang', 'Qiufeng Wang']
2,024
arXiv.org
0
0
['Computer Science']
2,412.15606
Multi-modal Agent Tuning: Building a VLM-Driven Agent for Efficient Tool Usage
['Zhi Gao', 'Bofei Zhang', 'Pengxiang Li', 'Xiaojian Ma', 'Tao Yuan', 'Yue Fan', 'Yuwei Wu', 'Yunde Jia', 'Song-Chun Zhu', 'Qing Li']
['cs.AI', 'cs.CV']
The advancement of large language models (LLMs) prompts the development of multi-modal agents, which are used as a controller to call external tools, providing a feasible way to solve practical tasks. In this paper, we propose a multi-modal agent tuning method that automatically generates multi-modal tool-usage data an...
2024-12-20T07:00:46Z
ICLR 2025, https://mat-agent.github.io/
null
null
Multi-modal Agent Tuning: Building a VLM-Driven Agent for Efficient Tool Usage
['Zhi Gao', 'Bofei Zhang', 'Pengxiang Li', 'Xiaojian Ma', 'Tao Yuan', 'Yue Fan', 'Yuwei Wu', 'Yunde Jia', 'Song-Chun Zhu', 'Qing Li']
2,024
arXiv.org
11
0
['Computer Science']
2,412.15832
AIFS-CRPS: Ensemble forecasting using a model trained with a loss function based on the Continuous Ranked Probability Score
['Simon Lang', 'Mihai Alexe', 'Mariana C. A. Clare', 'Christopher Roberts', 'Rilwan Adewoyin', 'Zied Ben Bouallègue', 'Matthew Chantry', 'Jesper Dramsch', 'Peter D. Dueben', 'Sara Hahner', 'Pedro Maciel', 'Ana Prieto-Nemesio', "Cathal O'Brien", 'Florian Pinault', 'Jan Polster', 'Baudouin Raoult', 'Steffen Tietsche', 'M...
['physics.ao-ph']
Over the last three decades, ensemble forecasts have become an integral part of forecasting the weather. They provide users with more complete information than single forecasts, as they permit estimating the probability of weather events by representing the sources of uncertainties and accounting for the day-to-day var...
2024-12-20T12:15:54Z
null
null
null
null
null
null
null
null
null
null
2,412.15838
Align Anything: Training All-Modality Models to Follow Instructions with Language Feedback
['Jiaming Ji', 'Jiayi Zhou', 'Hantao Lou', 'Boyuan Chen', 'Donghai Hong', 'Xuyao Wang', 'Wenqi Chen', 'Kaile Wang', 'Rui Pan', 'Jiahao Li', 'Mohan Wang', 'Josef Dai', 'Tianyi Qiu', 'Hua Xu', 'Dong Li', 'Weipeng Chen', 'Jun Song', 'Bo Zheng', 'Yaodong Yang']
['cs.AI', 'cs.CL']
Reinforcement learning from human feedback (RLHF) has proven effective in enhancing the instruction-following capabilities of large language models; however, it remains underexplored in the cross-modality domain. As the number of modalities increases, aligning all-modality models with human intentions -- such as instru...
2024-12-20T12:27:16Z
null
null
null
null
null
null
null
null
null
null
2,412.15907
Development of a Large-scale Dataset of Chest Computed Tomography Reports in Japanese and a High-performance Finding Classification Model
['Yosuke Yamagishi', 'Yuta Nakamura', 'Tomohiro Kikuchi', 'Yuki Sonoda', 'Hiroshi Hirakawa', 'Shintaro Kano', 'Satoshi Nakamura', 'Shouhei Hanaoka', 'Takeharu Yoshikawa', 'Osamu Abe']
['cs.CL', 'cs.AI']
Background: Recent advances in large language models highlight the need for high-quality multilingual medical datasets. While Japan leads globally in CT scanner deployment and utilization, the lack of large-scale Japanese radiology datasets has hindered the development of specialized language models for medical imaging...
2024-12-20T13:59:11Z
Dataset available at https://huggingface.co/datasets/YYama0/CT-RATE-JPN
null
null
null
null
null
null
null
null
null
2,412.16112
CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up
['Songhua Liu', 'Zhenxiong Tan', 'Xinchao Wang']
['cs.CV']
Diffusion Transformers (DiT) have become a leading architecture in image generation. However, the quadratic complexity of attention mechanisms, which are responsible for modeling token-wise relationships, results in significant latency when generating high-resolution images. To address this issue, we aim at a linear at...
2024-12-20T17:57:09Z
null
null
null
CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up
['Songhua Liu', 'Zhenxiong Tan', 'Xinchao Wang']
2,024
arXiv.org
10
0
['Computer Science']
2,412.16145
Offline Reinforcement Learning for LLM Multi-Step Reasoning
['Huaijie Wang', 'Shibo Hao', 'Hanze Dong', 'Shenao Zhang', 'Yilin Bao', 'Ziran Yang', 'Yi Wu']
['cs.LG', 'cs.AI', 'cs.CL']
Improving the multi-step reasoning ability of large language models (LLMs) with offline reinforcement learning (RL) is essential for quickly adapting them to complex tasks. While Direct Preference Optimization (DPO) has shown promise in aligning LLMs with human preferences, it is less suitable for multi-step reasoning ...
2024-12-20T18:49:45Z
null
null
null
null
null
null
null
null
null
null
2,412.16158
HoVLE: Unleashing the Power of Monolithic Vision-Language Models with Holistic Vision-Language Embedding
['Chenxin Tao', 'Shiqian Su', 'Xizhou Zhu', 'Chenyu Zhang', 'Zhe Chen', 'Jiawen Liu', 'Wenhai Wang', 'Lewei Lu', 'Gao Huang', 'Yu Qiao', 'Jifeng Dai']
['cs.CV']
The rapid advance of Large Language Models (LLMs) has catalyzed the development of Vision-Language Models (VLMs). Monolithic VLMs, which avoid modality-specific encoders, offer a promising alternative to the compositional ones but face the challenge of inferior performance. Most existing monolithic VLMs require tuning ...
2024-12-20T18:59:59Z
null
null
null
null
null
null
null
null
null
null
2,412.16178
Context Clues: Evaluating Long Context Models for Clinical Prediction Tasks on EHRs
['Michael Wornow', 'Suhana Bedi', 'Miguel Angel Fuentes Hernandez', 'Ethan Steinberg', 'Jason Alan Fries', 'Christopher Re', 'Sanmi Koyejo', 'Nigam H. Shah']
['cs.LG', 'cs.AI', 'cs.CE']
Foundation Models (FMs) trained on Electronic Health Records (EHRs) have achieved state-of-the-art results on numerous clinical prediction tasks. However, most existing EHR FMs have context windows of <1k tokens. This prevents them from modeling full patient EHRs which can exceed 10k's of events. Recent advancements in...
2024-12-09T21:58:27Z
null
null
null
Context Clues: Evaluating Long Context Models for Clinical Prediction Tasks on EHRs
['Michael Wornow', 'Suhana Bedi', 'Miguel Angel Fuentes Hernandez', 'E. Steinberg', 'J. Fries', 'Christopher Ré', 'Oluwasanmi Koyejo', 'Nigam H. Shah']
2,024
arXiv.org
6
55
['Computer Science']
2,412.16256
Aria-UI: Visual Grounding for GUI Instructions
['Yuhao Yang', 'Yue Wang', 'Dongxu Li', 'Ziyang Luo', 'Bei Chen', 'Chao Huang', 'Junnan Li']
['cs.HC', 'cs.AI']
Digital agents for automating tasks across different platforms by directly manipulating the GUIs are increasingly important. For these agents, grounding from language instructions to target elements remains a significant challenge due to reliance on HTML or AXTree inputs. In this paper, we introduce Aria-UI, a large mu...
2024-12-20T07:16:57Z
ACL 2025
null
null
null
null
null
null
null
null
null
2,412.16262
VirusT5: Harnessing Large Language Models to Predicting SARS-CoV-2 Evolution
['Vishwajeet Marathe', 'Deewan Bajracharya', 'Changhui Yan']
['q-bio.QM', 'cs.AI']
During a virus's evolution, various regions of the genome are subjected to distinct levels of functional constraints. Combined with factors like codon bias and DNA repair efficiency, these constraints contribute to unique mutation patterns within the genome or a specific gene. In this project, we harnessed the power of La...
2024-12-20T08:46:42Z
This is a preprint of a paper submitted to IEEE for consideration
null
null
null
null
null
null
null
null
null
2,412.16526
Text2midi: Generating Symbolic Music from Captions
['Keshav Bhandari', 'Abhinaba Roy', 'Kyra Wang', 'Geeta Puri', 'Simon Colton', 'Dorien Herremans']
['cs.SD', 'cs.AI', 'cs.CL', 'eess.AS']
This paper introduces text2midi, an end-to-end model to generate MIDI files from textual descriptions. Leveraging the growing popularity of multimodal generative approaches, text2midi capitalizes on the extensive availability of textual data and the success of large language models (LLMs). Our end-to-end system harness...
2024-12-21T08:09:12Z
9 pages, 3 figures, Accepted at the 39th AAAI Conference on Artificial Intelligence (AAAI 2025)
Proceedings of the 39th AAAI Conference on Artificial Intelligence (AAAI 2025)
null
null
null
null
null
null
null
null
2,412.16855
GME: Improving Universal Multimodal Retrieval by Multimodal LLMs
['Xin Zhang', 'Yanzhao Zhang', 'Wen Xie', 'Mingxin Li', 'Ziqi Dai', 'Dingkun Long', 'Pengjun Xie', 'Meishan Zhang', 'Wenjie Li', 'Min Zhang']
['cs.CL', 'cs.IR']
Universal Multimodal Retrieval (UMR) aims to enable search across various modalities using a unified model, where queries and candidates can consist of pure text, images, or a combination of both. Previous work has attempted to adopt multimodal large language models (MLLMs) to realize UMR using only text data. However,...
2024-12-22T04:40:24Z
Accepted to CVPR 2025, models at https://huggingface.co/Alibaba-NLP/gme-Qwen2-VL-2B-Instruct
null
null
null
null
null
null
null
null
null
2,412.17041
An OpenMind for 3D medical vision self-supervised learning
['Tassilo Wald', 'Constantin Ulrich', 'Jonathan Suprijadi', 'Sebastian Ziegler', 'Michal Nohel', 'Robin Peretzke', 'Gregor Köhler', 'Klaus H. Maier-Hein']
['cs.CV', 'cs.AI', 'cs.LG', 'eess.IV']
The field of self-supervised learning (SSL) for 3D medical images lacks consistency and standardization. While many methods have been developed, it is impossible to identify the current state-of-the-art, due to i) varying and small pretraining datasets, ii) varying architectures, and iii) being evaluated on differing d...
2024-12-22T14:38:28Z
Pre-Print; Dataset, Benchmark and Codebase available through https://github.com/MIC-DKFZ/nnssl
null
null
null
null
null
null
null
null
null