| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2405.21060 | Transformers are SSMs: Generalized Models and Efficient Algorithms
Through Structured State Space Duality | ['Tri Dao', 'Albert Gu'] | ['cs.LG'] | While Transformers have been the main architecture behind deep learning's
success in language modeling, state-space models (SSMs) such as Mamba have
recently been shown to match or outperform Transformers at small to medium
scale. We show that these families of models are actually quite closely
related, and develop a r... | 2024-05-31T17:50:01Z | ICML 2024 | null | null | Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality | ['Tri Dao', 'Albert Gu'] | 2024 | International Conference on Machine Learning | 544 | 0 | ['Computer Science'] |
2405.21075 | Video-MME: The First-Ever Comprehensive Evaluation Benchmark of
Multi-modal LLMs in Video Analysis | ['Chaoyou Fu', 'Yuhan Dai', 'Yongdong Luo', 'Lei Li', 'Shuhuai Ren', 'Renrui Zhang', 'Zihan Wang', 'Chenyu Zhou', 'Yunhang Shen', 'Mengdan Zhang', 'Peixian Chen', 'Yanwei Li', 'Shaohui Lin', 'Sirui Zhao', 'Ke Li', 'Tong Xu', 'Xiawu Zheng', 'Enhong Chen', 'Caifeng Shan', 'Ran He', 'Xing Sun'] | ['cs.CV', 'cs.CL'] | In the quest for artificial general intelligence, Multi-modal Large Language
Models (MLLMs) have emerged as a focal point in recent advancements. However,
the predominant focus remains on developing their capabilities in static image
understanding. The potential of MLLMs in processing sequential visual data is
still in... | 2024-05-31T17:59:47Z | Project Page: https://video-mme.github.io | null | null | null | null | null | null | null | null | null |
2406.00093 | Bootstrap3D: Improving Multi-view Diffusion Model with Synthetic Data | ['Zeyi Sun', 'Tong Wu', 'Pan Zhang', 'Yuhang Zang', 'Xiaoyi Dong', 'Yuanjun Xiong', 'Dahua Lin', 'Jiaqi Wang'] | ['cs.CV', 'cs.AI', 'cs.GR', 'cs.LG', 'cs.MM'] | Recent years have witnessed remarkable progress in multi-view diffusion
models for 3D content creation. However, there remains a significant gap in
image quality and prompt-following ability compared to 2D diffusion models. A
critical bottleneck is the scarcity of high-quality 3D objects with detailed
captions. To addr... | 2024-05-31T17:59:56Z | Project Page: https://sunzey.github.io/Bootstrap3D/ | null | null | Bootstrap3D: Improving Multi-view Diffusion Model with Synthetic Data | ['Zeyi Sun', 'Tong Wu', 'Pan Zhang', 'Yuhang Zang', 'Xiao-wen Dong', 'Yuanjun Xiong', 'Dahua Lin', 'Jiaqi Wang'] | 2024 | null | 0 | 95 | ['Computer Science'] |
2406.00153 | $μ$LO: Compute-Efficient Meta-Generalization of Learned Optimizers | ['Benjamin Thérien', 'Charles-Étienne Joseph', 'Boris Knyazev', 'Edouard Oyallon', 'Irina Rish', 'Eugene Belilovsky'] | ['cs.LG'] | Learned optimizers (LOs) can significantly reduce the wall-clock training
time of neural networks, substantially reducing training costs. However, they
can struggle to optimize unseen tasks (meta-generalize), especially when
training networks wider than those seen during meta-training. To address this,
we derive the Ma... | 2024-05-31T19:28:47Z | null | null | null | μLO: Compute-Efficient Meta-Generalization of Learned Optimizers | ['Benjamin Thérien', 'Charles-Étienne Joseph', 'Boris Knyazev', 'Edouard Oyallon', 'Irina Rish', 'Eugene Belilovsky'] | 2024 | arXiv.org | 4 | 49 | ['Computer Science'] |
2406.00314 | CASE: Efficient Curricular Data Pre-training for Building Assistive
Psychology Expert Models | ['Sarthak Harne', 'Monjoy Narayan Choudhury', 'Madhav Rao', 'TK Srikanth', 'Seema Mehrotra', 'Apoorva Vashisht', 'Aarushi Basu', 'Manjit Sodhi'] | ['cs.CL', 'cs.AI', 'cs.LG'] | The limited availability of psychologists necessitates efficient
identification of individuals requiring urgent mental healthcare. This study
explores the use of Natural Language Processing (NLP) pipelines to analyze text
data from online mental health forums used for consultations. By analyzing
forum posts, these pipe... | 2024-06-01T06:17:32Z | null | null | null | CASE: Efficient Curricular Data Pre-training for Building Assistive Psychology Expert Models | ['Sarthak Harne', 'Monjoy Narayan Choudhury', 'Madhav Rao', 'T. Srikanth', 'Seema Mehrotra', 'Apoorva Vashisht', 'Aarushi Basu', 'M. Sodhi'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 0 | 22 | ['Computer Science'] |
2406.00380 | HonestLLM: Toward an Honest and Helpful Large Language Model | ['Chujie Gao', 'Siyuan Wu', 'Yue Huang', 'Dongping Chen', 'Qihui Zhang', 'Zhengyan Fu', 'Yao Wan', 'Lichao Sun', 'Xiangliang Zhang'] | ['cs.CL', 'cs.AI'] | Large Language Models (LLMs) have achieved remarkable success across various
industries due to their exceptional generative capabilities. However, for safe
and effective real-world deployments, ensuring honesty and helpfulness is
critical. This paper addresses the question: Can we prioritize the helpfulness
of LLMs whi... | 2024-06-01T09:36:16Z | null | null | null | HonestLLM: Toward an Honest and Helpful Large Language Model | ['Chujie Gao', 'Siyuan Wu', 'Yue Huang', 'Dongping Chen', 'Qihui Zhang', 'Zhengyan Fu', 'Yao Wan', 'Lichao Sun', 'Xiangliang Zhang'] | 2024 | Neural Information Processing Systems | 8 | 70 | ['Computer Science'] |
2406.00492 | A Deep Learning Model for Coronary Artery Segmentation and Quantitative
Stenosis Detection in Angiographic Images | ['Baixiang Huang', 'Yu Luo', 'Guangyu Wei', 'Songyan He', 'Yushuang Shao', 'Xueying Zeng'] | ['eess.IV', 'cs.CV', 'cs.LG'] | Coronary artery disease (CAD) is a leading cause of cardiovascular-related
mortality, and accurate stenosis detection is crucial for effective clinical
decision-making. Coronary angiography remains the gold standard for diagnosing
CAD, but manual analysis of angiograms is prone to errors and subjectivity.
This study ai... | 2024-06-01T16:45:33Z | null | null | null | A Deep Learning Model for Coronary Artery Segmentation and Quantitative Stenosis Detection in Angiographic Images | ['Baixiang Huang', 'Yu Luo', 'Guangyu Wei', 'Songyan He', 'Yushuang Shao', 'Xueying Zeng'] | 2024 | null | 0 | 43 | ['Engineering', 'Computer Science'] |
2406.00515 | A Survey on Large Language Models for Code Generation | ['Juyong Jiang', 'Fan Wang', 'Jiasi Shen', 'Sungju Kim', 'Sunghun Kim'] | ['cs.CL', 'cs.AI', 'cs.SE'] | Large Language Models (LLMs) have garnered remarkable advancements across
diverse code-related tasks, known as Code LLMs, particularly in code generation
that generates source code with LLM from natural language descriptions. This
burgeoning field has captured significant interest from both academic
researchers and ind... | 2024-06-01T17:48:15Z | null | null | null | A Survey on Large Language Models for Code Generation | ['Juyong Jiang', 'Fan Wang', 'Jiasi Shen', 'Sungju Kim', 'Sunghun Kim'] | 2024 | arXiv.org | 204 | 245 | ['Computer Science'] |
2406.00605 | LongSkywork: A Training Recipe for Efficiently Extending Context Length
in Large Language Models | ['Liang Zhao', 'Tianwen Wei', 'Liang Zeng', 'Cheng Cheng', 'Liu Yang', 'Peng Cheng', 'Lijie Wang', 'Chenxia Li', 'Xuejie Wu', 'Bo Zhu', 'Yimeng Gan', 'Rui Hu', 'Shuicheng Yan', 'Han Fang', 'Yahui Zhou'] | ['cs.CL', 'cs.AI'] | We introduce LongSkywork, a long-context Large Language Model (LLM) capable
of processing up to 200,000 tokens. We provide a training recipe for
efficiently extending context length of LLMs. We identify that the critical
element in enhancing long-context processing capability is to incorporate a
long-context SFT stage ... | 2024-06-02T03:34:41Z | null | null | null | LongSkywork: A Training Recipe for Efficiently Extending Context Length in Large Language Models | ['Liang Zhao', 'Tianwen Wei', 'Liang Zeng', 'Cheng Cheng', 'Liu Yang', 'Peng Cheng', 'Lijie Wang', 'Chenxia Li', 'X. Wu', 'Bo Zhu', 'Y. Gan', 'Rui Hu', 'Shuicheng Yan', 'Han Fang', 'Yahui Zhou'] | 2024 | arXiv.org | 11 | 34 | ['Computer Science'] |
2406.00856 | DistilDIRE: A Small, Fast, Cheap and Lightweight Diffusion Synthesized
Deepfake Detection | ['Yewon Lim', 'Changyeon Lee', 'Aerin Kim', 'Oren Etzioni'] | ['cs.CV', 'cs.CR', 'cs.LG'] | A dramatic influx of diffusion-generated images has marked recent years,
posing unique challenges to current detection technologies. While the task of
identifying these images falls under binary classification, a seemingly
straightforward category, the computational load is significant when employing
the "reconstructio... | 2024-06-02T20:22:38Z | 6 pages, 1 figure | null | null | null | null | null | null | null | null | null |
2406.00899 | YODAS: Youtube-Oriented Dataset for Audio and Speech | ['Xinjian Li', 'Shinnosuke Takamichi', 'Takaaki Saeki', 'William Chen', 'Sayaka Shiota', 'Shinji Watanabe'] | ['cs.CL', 'cs.SD', 'eess.AS'] | In this study, we introduce YODAS (YouTube-Oriented Dataset for Audio and
Speech), a large-scale, multilingual dataset comprising currently over 500k
hours of speech data in more than 100 languages, sourced from both labeled and
unlabeled YouTube speech datasets. The labeled subsets, including manual or
automatic subti... | 2024-06-02T23:43:27Z | ASRU 2023 | null | null | null | null | null | null | null | null | null |
2406.00977 | Dragonfly: Multi-Resolution Zoom-In Encoding Enhances Vision-Language
Models | ['Rahul Thapa', 'Kezhen Chen', 'Ian Covert', 'Rahul Chalamala', 'Ben Athiwaratkun', 'Shuaiwen Leon Song', 'James Zou'] | ['cs.CV', 'cs.AI'] | Recent advances in vision-language models (VLMs) have demonstrated the
advantages of processing images at higher resolutions and utilizing multi-crop
features to preserve native resolution details. However, despite these
improvements, existing vision transformers (ViTs) still struggle to capture
fine-grained details fr... | 2024-06-03T04:17:12Z | null | null | null | null | null | null | null | null | null | null |
2406.01006 | SemCoder: Training Code Language Models with Comprehensive Semantics
Reasoning | ['Yangruibo Ding', 'Jinjun Peng', 'Marcus J. Min', 'Gail Kaiser', 'Junfeng Yang', 'Baishakhi Ray'] | ['cs.CL', 'cs.AI', 'cs.SE'] | Code Large Language Models (Code LLMs) have excelled at tasks like code
completion but often miss deeper semantics such as execution effects and
dynamic states. This paper aims to bridge the gap between Code LLMs' reliance
on static text data and the need for semantic understanding for complex tasks
like debugging and ... | 2024-06-03T05:36:57Z | NeurIPS 2024 Camera-ready | null | null | SemCoder: Training Code Language Models with Comprehensive Semantics | ['Yangruibo Ding', 'Jinjun Peng', 'Marcus J. Min', 'Gail E. Kaiser', 'Junfeng Yang', 'Baishakhi Ray'] | 2024 | Neural Information Processing Systems | 21 | 52 | ['Computer Science'] |
2406.01175 | NeoRL: Efficient Exploration for Nonepisodic RL | ['Bhavya Sukhija', 'Lenart Treven', 'Florian Dörfler', 'Stelian Coros', 'Andreas Krause'] | ['cs.LG'] | We study the problem of nonepisodic reinforcement learning (RL) for nonlinear
dynamical systems, where the system dynamics are unknown and the RL agent has
to learn from a single trajectory, i.e., without resets. We propose Nonepisodic
Optimistic RL (NeoRL), an approach based on the principle of optimism in the
face of... | 2024-06-03T10:14:32Z | null | null | null | null | null | null | null | null | null | null |
2406.01179 | Are AI-Generated Text Detectors Robust to Adversarial Perturbations? | ['Guanhua Huang', 'Yuchen Zhang', 'Zhe Li', 'Yongjian You', 'Mingze Wang', 'Zhouwang Yang'] | ['cs.CL', 'cs.AI'] | The widespread use of large language models (LLMs) has sparked concerns about
the potential misuse of AI-generated text, as these models can produce content
that closely resembles human-generated text. Current detectors for AI-generated
text (AIGT) lack robustness against adversarial perturbations, with even minor
chan... | 2024-06-03T10:21:48Z | Accepted to ACL 2024 main conference | null | null | Are AI-Generated Text Detectors Robust to Adversarial Perturbations? | ['Guanhua Huang', 'Yuchen Zhang', 'Zhe Li', 'Yongjian You', 'Mingze Wang', 'Zhouwang Yang'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 6 | 53 | ['Computer Science'] |
2406.01188 | UniAnimate: Taming Unified Video Diffusion Models for Consistent Human
Image Animation | ['Xiang Wang', 'Shiwei Zhang', 'Changxin Gao', 'Jiayu Wang', 'Xiaoqiang Zhou', 'Yingya Zhang', 'Luxin Yan', 'Nong Sang'] | ['cs.CV'] | Recent diffusion-based human image animation techniques have demonstrated
impressive success in synthesizing videos that faithfully follow a given
reference identity and a sequence of desired movement poses. Despite this,
there are still two limitations: i) an extra reference model is required to
align the identity ima... | 2024-06-03T10:51:10Z | Project page: https://unianimate.github.io/ | null | null | UniAnimate: Taming Unified Video Diffusion Models for Consistent Human Image Animation | ['Xiang Wang', 'Shiwei Zhang', 'Changxin Gao', 'Jiayu Wang', 'Xiaoqiang Zhou', 'Yingya Zhang', 'Luxin Yan', 'Nong Sang'] | 2024 | arXiv.org | 41 | 79 | ['Computer Science'] |
2406.01198 | Automatic Essay Multi-dimensional Scoring with Fine-tuning and Multiple
Regression | ['Kun Sun', 'Rong Wang'] | ['cs.CL', 'cs.AI'] | Automated essay scoring (AES) involves predicting a score that reflects the
writing quality of an essay. Most existing AES systems produce only a single
overall score. However, users and L2 learners expect scores across different
dimensions (e.g., vocabulary, grammar, coherence) for English essays in
real-world applica... | 2024-06-03T10:59:50Z | null | null | null | null | null | null | null | null | null | null |
2406.01326 | TabPedia: Towards Comprehensive Visual Table Understanding with Concept
Synergy | ['Weichao Zhao', 'Hao Feng', 'Qi Liu', 'Jingqun Tang', 'Shu Wei', 'Binghong Wu', 'Lei Liao', 'Yongjie Ye', 'Hao Liu', 'Wengang Zhou', 'Houqiang Li', 'Can Huang'] | ['cs.CV'] | Tables contain factual and quantitative data accompanied by various
structures and contents that pose challenges for machine comprehension.
Previous methods generally design task-specific architectures and objectives
for individual tasks, resulting in modal isolation and intricate workflows. In
this paper, we present a... | 2024-06-03T13:54:05Z | Accepted by NeurIPS 2024 | null | null | null | null | null | null | null | null | null |
2406.01465 | AIFS -- ECMWF's data-driven forecasting system | ['Simon Lang', 'Mihai Alexe', 'Matthew Chantry', 'Jesper Dramsch', 'Florian Pinault', 'Baudouin Raoult', 'Mariana C. A. Clare', 'Christian Lessig', 'Michael Maier-Gerber', 'Linus Magnusson', 'Zied Ben Bouallègue', 'Ana Prieto Nemesio', 'Peter D. Dueben', 'Andrew Brown', 'Florian Pappenberger', 'Florence Rabier'] | ['physics.ao-ph'] | Machine learning-based weather forecasting models have quickly emerged as a
promising methodology for accurate medium-range global weather forecasting.
Here, we introduce the Artificial Intelligence Forecasting System (AIFS), a
data driven forecast model developed by the European Centre for Medium-Range
Weather Forecas... | 2024-06-03T15:55:10Z | null | null | null | null | null | null | null | null | null | null |
2406.01493 | Learning Temporally Consistent Video Depth from Video Diffusion Priors | ['Jiahao Shao', 'Yuanbo Yang', 'Hongyu Zhou', 'Youmin Zhang', 'Yujun Shen', 'Vitor Guizilini', 'Yue Wang', 'Matteo Poggi', 'Yiyi Liao'] | ['cs.CV'] | This work addresses the challenge of streamed video depth estimation, which
expects not only per-frame accuracy but, more importantly, cross-frame
consistency. We argue that sharing contextual information between frames or
clips is pivotal in fostering temporal consistency. Therefore, we reformulate
depth prediction in... | 2024-06-03T16:20:24Z | null | null | null | Learning Temporally Consistent Video Depth from Video Diffusion Priors | ['Jiahao Shao', 'Yuanbo Yang', 'Hongyu Zhou', 'Youmin Zhang', 'Yujun Shen', 'Matteo Poggi', 'Yiyi Liao'] | 2024 | arXiv.org | 43 | 91 | ['Computer Science'] |
2406.01563 | LoFiT: Localized Fine-tuning on LLM Representations | ['Fangcong Yin', 'Xi Ye', 'Greg Durrett'] | ['cs.CL'] | Recent work in interpretability shows that large language models (LLMs) can
be adapted for new tasks in a learning-free way: it is possible to intervene on
LLM representations to elicit desired behaviors for alignment. For instance,
adding certain bias vectors to the outputs of certain attention heads is
reported to bo... | 2024-06-03T17:45:41Z | NeurIPS 2024 Camera Ready | null | null | LoFiT: Localized Fine-tuning on LLM Representations | ['Fangcong Yin', 'Xi Ye', 'Greg Durrett'] | 2024 | Neural Information Processing Systems | 23 | 58 | ['Computer Science'] |
2406.01574 | MMLU-Pro: A More Robust and Challenging Multi-Task Language
Understanding Benchmark | ['Yubo Wang', 'Xueguang Ma', 'Ge Zhang', 'Yuansheng Ni', 'Abhranil Chandra', 'Shiguang Guo', 'Weiming Ren', 'Aaran Arulraj', 'Xuan He', 'Ziyan Jiang', 'Tianle Li', 'Max Ku', 'Kai Wang', 'Alex Zhuang', 'Rongqi Fan', 'Xiang Yue', 'Wenhu Chen'] | ['cs.CL'] | In the age of large-scale language models, benchmarks like the Massive
Multitask Language Understanding (MMLU) have been pivotal in pushing the
boundaries of what AI can achieve in language comprehension and reasoning
across diverse domains. However, as models continue to improve, their
performance on these benchmarks ... | 2024-06-03T17:53:00Z | This version has been accepted and published at NeurIPS 2024 Track
Datasets and Benchmarks (Spotlight) | null | null | MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark | ['Yubo Wang', 'Xueguang Ma', 'Ge Zhang', 'Yuansheng Ni', 'Abhranil Chandra', 'Shiguang Guo', 'Weiming Ren', 'Aaran Arulraj', 'Xuan He', 'Ziyan Jiang', 'Tianle Li', 'Max W.F. Ku', 'Kai Wang', 'Alex Zhuang', 'Rongqi "Richard" Fan', 'Xiang Yue', 'Wenhu Chen'] | 2024 | Neural Information Processing Systems | 465 | 56 | ['Computer Science'] |
2406.01835 | An Open Multilingual System for Scoring Readability of Wikipedia | ['Mykola Trokhymovych', 'Indira Sen', 'Martin Gerlach'] | ['cs.CL', 'cs.AI'] | With over 60M articles, Wikipedia has become the largest platform for open
and freely accessible knowledge. While it has more than 15B monthly visits, its
content is believed to be inaccessible to many readers due to the lack of
readability of its text. However, previous investigations of the readability of
Wikipedia h... | 2024-06-03T23:07:18Z | null | null | null | An Open Multilingual System for Scoring Readability of Wikipedia | ['Mykola Trokhymovych', 'Indira Sen', 'Martin Gerlach'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 5 | 86 | ['Computer Science'] |
2406.01867 | MoLA: Motion Generation and Editing with Latent Diffusion Enhanced by
Adversarial Training | ['Kengo Uchida', 'Takashi Shibuya', 'Yuhta Takida', 'Naoki Murata', 'Julian Tanke', 'Shusuke Takahashi', 'Yuki Mitsufuji'] | ['cs.CV'] | In text-to-motion generation, controllability as well as generation quality
and speed has become increasingly critical. The controllability challenges
include generating a motion of a length that matches the given textual
description and editing the generated motions according to control signals,
such as the start-end ... | 2024-06-04T00:38:44Z | CVPR 2025 HuMoGen Workshop | null | null | null | null | null | null | null | null | null |
2406.01900 | Follow-Your-Emoji: Fine-Controllable and Expressive Freestyle Portrait
Animation | ['Yue Ma', 'Hongyu Liu', 'Hongfa Wang', 'Heng Pan', 'Yingqing He', 'Junkun Yuan', 'Ailing Zeng', 'Chengfei Cai', 'Heung-Yeung Shum', 'Wei Liu', 'Qifeng Chen'] | ['cs.CV'] | We present Follow-Your-Emoji, a diffusion-based framework for portrait
animation, which animates a reference portrait with target landmark sequences.
The main challenge of portrait animation is to preserve the identity of the
reference portrait and transfer the target expression to this portrait while
maintaining tempo... | 2024-06-04T02:05:57Z | Project Page: https://follow-your-emoji.github.io/ | null | null | null | null | null | null | null | null | null |
2406.01981 | Zyda: A 1.3T Dataset for Open Language Modeling | ['Yury Tokpanov', 'Beren Millidge', 'Paolo Glorioso', 'Jonathan Pilault', 'Adam Ibrahim', 'James Whittington', 'Quentin Anthony'] | ['cs.CL', 'cs.AI'] | The size of large language models (LLMs) has scaled dramatically in recent
years and their computational and data requirements have surged
correspondingly. State-of-the-art language models, even at relatively smaller
sizes, typically require training on at least a trillion tokens. This rapid
advancement has eclipsed th... | 2024-06-04T05:47:17Z | null | null | null | null | null | null | null | null | null | null |
2406.02106 | MARS: Benchmarking the Metaphysical Reasoning Abilities of Language
Models with a Multi-task Evaluation Dataset | ['Weiqi Wang', 'Yangqiu Song'] | ['cs.CL'] | To enable Large Language Models (LLMs) to function as conscious agents with
generalizable reasoning capabilities, it is crucial that they possess the
reasoning ability to comprehend situational changes (transitions) in
distribution triggered by environmental factors or actions from other agents.
Despite its fundamental... | 2024-06-04T08:35:04Z | ACL2025 | null | null | null | null | null | null | null | null | null |
2406.02251 | Modeling Emotional Trajectories in Written Stories Utilizing
Transformers and Weakly-Supervised Learning | ['Lukas Christ', 'Shahin Amiriparian', 'Manuel Milling', 'Ilhan Aslan', 'Björn W. Schuller'] | ['cs.CL', 'cs.AI'] | Telling stories is an integral part of human communication which can evoke
emotions and influence the affective states of the audience. Automatically
modeling emotional trajectories in stories has thus attracted considerable
scholarly interest. However, as most existing works have been limited to
unsupervised dictionar... | 2024-06-04T12:17:16Z | Accepted to ACL 2024 Findings. arXiv admin note: text overlap with
arXiv:2212.11382 | null | null | Modeling Emotional Trajectories in Written Stories Utilizing Transformers and Weakly-Supervised Learning | ['Lukas Christ', 'S. Amiriparian', 'M. Milling', 'Ilhan Aslan', 'Björn W. Schuller'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 0 | 60 | ['Computer Science'] |
2406.02255 | MidiCaps: A large-scale MIDI dataset with text captions | ['Jan Melechovsky', 'Abhinaba Roy', 'Dorien Herremans'] | ['eess.AS', 'cs.LG', 'cs.MM', 'cs.SD'] | Generative models guided by text prompts are increasingly becoming more
popular. However, no text-to-MIDI models currently exist due to the lack of a
captioned MIDI dataset. This work aims to enable research that combines LLMs
with symbolic music by presenting, the first openly available large-scale MIDI
dataset with t... | 2024-06-04T12:21:55Z | Accepted in ISMIR2024 | Proceedings of ISMIR 2024 | null | MidiCaps: A Large-Scale MIDI Dataset With Text Captions | ['J. Melechovský', 'Abhinaba Roy', 'Dorien Herremans'] | 2024 | International Society for Music Information Retrieval Conference | 13 | 31 | ['Computer Science', 'Engineering'] |
2406.02285 | Towards Supervised Performance on Speaker Verification with
Self-Supervised Learning by Leveraging Large-Scale ASR Models | ['Victor Miara', 'Theo Lepage', 'Reda Dehak'] | ['eess.AS', 'cs.LG', 'cs.SD'] | Recent advancements in Self-Supervised Learning (SSL) have shown promising
results in Speaker Verification (SV). However, narrowing the performance gap
with supervised systems remains an ongoing challenge. Several studies have
observed that speech representations from large-scale ASR models contain
valuable speaker inf... | 2024-06-04T12:58:19Z | accepted at INTERSPEECH 2024 | Proc. Interspeech 2024, Kos, Greece, Sept. 2024, pp. 2660--2664 | 10.21437/Interspeech.2024-486 | Towards Supervised Performance on Speaker Verification with Self-Supervised Learning by Leveraging Large-Scale ASR Models | ['Victor Miara', 'Theo Lepage', 'Réda Dehak'] | 2024 | Interspeech | 1 | 35 | ['Computer Science', 'Engineering'] |
2406.02301 | mCoT: Multilingual Instruction Tuning for Reasoning Consistency in
Language Models | ['Huiyuan Lai', 'Malvina Nissim'] | ['cs.CL'] | Large language models (LLMs) with Chain-of-thought (CoT) have recently
emerged as a powerful technique for eliciting reasoning to improve various
downstream tasks. As most research mainly focuses on English, with few
explorations in a multilingual context, the question of how reliable this
reasoning capability is in di... | 2024-06-04T13:30:45Z | Accepted to ACL 2024 main (Corrected Figure 2 (a)) | null | null | null | null | null | null | null | null | null |
2406.02332 | Extended Mind Transformers | ['Phoebe Klett', 'Thomas Ahle'] | ['cs.LG', 'cs.CL'] | Pre-trained language models demonstrate general intelligence and common
sense, but long inputs quickly become a bottleneck for memorizing information
at inference time. We resurface a simple method, Memorizing Transformers (Wu et
al., 2022), that gives the model access to a bank of pre-computed memories. We
show that i... | 2024-06-04T14:00:25Z | null | null | null | null | null | null | null | null | null | null |
2406.02345 | Progressive Confident Masking Attention Network for Audio-Visual
Segmentation | ['Yuxuan Wang', 'Jinchao Zhu', 'Feng Dong', 'Shuyue Zhu'] | ['cs.CV', 'cs.AI', 'cs.LG', 'cs.MM'] | Audio and visual signals typically occur simultaneously, and humans possess
an innate ability to correlate and synchronize information from these two
modalities. Recently, a challenging problem known as Audio-Visual Segmentation
(AVS) has emerged, intending to produce segmentation maps for sounding objects
within a sce... | 2024-06-04T14:21:41Z | 23 pages, 11 figures, submitted to Elsevier Knowledge-Based System | null | null | Progressive Confident Masking Attention Network for Audio-Visual Segmentation | ['Yuxuan Wang', 'Feng Dong', 'Jinchao Zhu'] | 2024 | arXiv.org | 0 | 59 | ['Computer Science'] |
2406.02347 | Flash Diffusion: Accelerating Any Conditional Diffusion Model for Few
Steps Image Generation | ['Clément Chadebec', 'Onur Tasar', 'Eyal Benaroche', 'Benjamin Aubin'] | ['cs.CV', 'cs.AI', 'cs.LG'] | In this paper, we propose an efficient, fast, and versatile distillation
method to accelerate the generation of pre-trained diffusion models: Flash
Diffusion. The method reaches state-of-the-art performances in terms of FID and
CLIP-Score for few steps image generation on the COCO2014 and COCO2017
datasets, while requi... | 2024-06-04T14:23:27Z | Accepted to AAAI 2025 | null | null | Flash Diffusion: Accelerating Any Conditional Diffusion Model for Few Steps Image Generation | ['Clément Chadebec', 'O. Tasar', 'Eyal Benaroche', 'Benjamin Aubin'] | 2024 | AAAI Conference on Artificial Intelligence | 14 | 80 | ['Computer Science'] |
2406.02511 | V-Express: Conditional Dropout for Progressive Training of Portrait
Video Generation | ['Cong Wang', 'Kuan Tian', 'Jun Zhang', 'Yonghang Guan', 'Feng Luo', 'Fei Shen', 'Zhiwei Jiang', 'Qing Gu', 'Xiao Han', 'Wei Yang'] | ['cs.CV', 'cs.AI'] | In the field of portrait video generation, the use of single images to
generate portrait videos has become increasingly prevalent. A common approach
involves leveraging generative models to enhance adapters for controlled
generation. However, control signals (e.g., text, audio, reference image, pose,
depth map, etc.) c... | 2024-06-04T17:32:52Z | null | null | null | null | null | null | null | null | null | null |
2406.02539 | Parrot: Multilingual Visual Instruction Tuning | ['Hai-Long Sun', 'Da-Wei Zhou', 'Yang Li', 'Shiyin Lu', 'Chao Yi', 'Qing-Guo Chen', 'Zhao Xu', 'Weihua Luo', 'Kaifu Zhang', 'De-Chuan Zhan', 'Han-Jia Ye'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG'] | The rapid development of Multimodal Large Language Models (MLLMs), such as
GPT-4o, marks a significant step toward artificial general intelligence.
Existing methods typically align vision encoders with LLMs via supervised
fine-tuning (SFT), but this often deteriorates their ability to handle multiple
languages as train... | 2024-06-04T17:56:28Z | Accepted to ICML 2025. Code and dataset are available at:
https://github.com/AIDC-AI/Parrot | null | null | null | null | null | null | null | null | null |
2406.02548 | Open-YOLO 3D: Towards Fast and Accurate Open-Vocabulary 3D Instance
Segmentation | ['Mohamed El Amine Boudjoghra', 'Angela Dai', 'Jean Lahoud', 'Hisham Cholakkal', 'Rao Muhammad Anwer', 'Salman Khan', 'Fahad Shahbaz Khan'] | ['cs.CV'] | Recent works on open-vocabulary 3D instance segmentation show strong promise,
but at the cost of slow inference speed and high computation requirements. This
high computation cost is typically due to their heavy reliance on 3D clip
features, which require computationally expensive 2D foundation models like
Segment Anyt... | 2024-06-04T17:59:31Z | ICLR 2025 (Oral) | null | null | Open-YOLO 3D: Towards Fast and Accurate Open-Vocabulary 3D Instance Segmentation | ['Mohamed El Amine Boudjoghra', 'Angela Dai', 'Jean Lahoud', 'Hisham Cholakkal', 'R. Anwer', 'Salman H. Khan', 'F. Khan'] | 2024 | International Conference on Learning Representations | 6 | 55 | ['Computer Science'] |
2406.02560 | Less Peaky and More Accurate CTC Forced Alignment by Label Priors | ['Ruizhe Huang', 'Xiaohui Zhang', 'Zhaoheng Ni', 'Li Sun', 'Moto Hira', 'Jeff Hwang', 'Vimal Manohar', 'Vineel Pratap', 'Matthew Wiesner', 'Shinji Watanabe', 'Daniel Povey', 'Sanjeev Khudanpur'] | ['eess.AS', 'cs.AI', 'cs.CL', 'cs.LG'] | Connectionist temporal classification (CTC) models are known to have peaky
output distributions. Such behavior is not a problem for automatic speech
recognition (ASR), but it can cause inaccurate forced alignments (FA),
especially at finer granularity, e.g., phoneme level. This paper aims at
alleviating the peaky behav... | 2024-04-22T17:40:08Z | Accepted by ICASSP 2024. Github repo:
https://github.com/huangruizhe/audio/tree/aligner_label_priors | null | null | null | null | null | null | null | null | null |
2406.02780 | LADI v2: Multi-label Dataset and Classifiers for Low-Altitude Disaster
Imagery | ['Samuel Scheele', 'Katherine Picchione', 'Jeffrey Liu'] | ['cs.CV', 'cs.AI', 'cs.LG', '68T45', 'J.2'] | ML-based computer vision models are promising tools for supporting emergency
management operations following natural disasters. Aerial photographs taken from
small manned and unmanned aircraft can be available soon after a disaster and
provide valuable information from multiple perspectives for situational
awareness and... | 2024-06-04T20:51:04Z | null | null | null | null | null | null | null | null | null | null |
2406.02856 | Xmodel-LM Technical Report | ['Yichuan Wang', 'Yang Liu', 'Yu Yan', 'Qun Wang', 'Xucheng Huang', 'Ling Jiang'] | ['cs.CL', 'cs.AI'] | We introduce Xmodel-LM, a compact and efficient 1.1B language model
pre-trained on around 2 trillion tokens. Trained on our self-built dataset
(Xdata), which balances Chinese and English corpora based on downstream task
optimization, Xmodel-LM exhibits remarkable performance despite its smaller
size. It notably surpass... | 2024-06-05T02:12:06Z | null | null | null | Xmodel-LM Technical Report | ['Yichuan Wang', 'Yang Liu', 'Yu Yan', 'Qun Wang', 'Xucheng Huang', 'Ling Jiang'] | 2024 | arXiv.org | 1 | 31 | ['Computer Science'] |
2406.02880 | Controllable Talking Face Generation by Implicit Facial Keypoints
Editing | ['Dong Zhao', 'Jiaying Shi', 'Wenjun Li', 'Shudong Wang', 'Shenghui Xu', 'Zhaoming Pan'] | ['cs.CV', 'cs.AI'] | Audio-driven talking face generation has garnered significant interest within
the domain of digital human research. Existing methods are encumbered by
intricate model architectures that are intricately dependent on each other,
complicating the process of re-editing image or video inputs. In this work, we
present Contro... | 2024-06-05T02:54:46Z | null | null | null | Controllable Talking Face Generation by Implicit Facial Keypoints Editing | ['Dong Zhao', 'Jiaying Shi', 'Wenjun Li', 'Shudong Wang', 'Shenghui Xu', 'Zhaoming Pan'] | 2024 | arXiv.org | 0 | 33 | ['Computer Science'] |
2406.03008 | DriVLMe: Enhancing LLM-based Autonomous Driving Agents with Embodied and
Social Experiences | ['Yidong Huang', 'Jacob Sansom', 'Ziqiao Ma', 'Felix Gervits', 'Joyce Chai'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Recent advancements in foundation models (FMs) have unlocked new prospects in
autonomous driving, yet the experimental settings of these studies are
preliminary, over-simplified, and fail to capture the complexity of real-world
driving scenarios in human environments. It remains under-explored whether FM
agents can han... | 2024-06-05T07:14:44Z | 2024 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS) | null | null | null | null | null | null | null | null | null |
2406.03044 | Population Transformer: Learning Population-level Representations of
Neural Activity | ['Geeling Chau', 'Christopher Wang', 'Sabera Talukder', 'Vighnesh Subramaniam', 'Saraswati Soedarmadji', 'Yisong Yue', 'Boris Katz', 'Andrei Barbu'] | ['cs.LG', 'q-bio.NC'] | We present a self-supervised framework that learns population-level codes for
arbitrary ensembles of neural recordings at scale. We address key challenges in
scaling models with neural time-series data, namely, sparse and variable
electrode distribution across subjects and datasets. The Population Transformer
(PopT) st... | 2024-06-05T08:15:09Z | ICLR 2025, Project page
https://glchau.github.io/population-transformer/ | null | null | Population Transformer: Learning Population-level Representations of Neural Activity | ['Geeling Chau', 'Christopher Wang', 'Sabera Talukder', 'Vighnesh Subramaniam', 'Saraswati Soedarmadji', 'Yisong Yue', 'B. Katz', 'Andrei Barbu'] | 2024 | International Conference on Learning Representations | 6 | 58 | ['Computer Science', 'Biology', 'Medicine'] |
2406.03136 | Computational Limits of Low-Rank Adaptation (LoRA) Fine-Tuning for
Transformer Models | ['Jerry Yao-Chieh Hu', 'Maojiang Su', 'En-Jui Kuo', 'Zhao Song', 'Han Liu'] | ['cs.LG', 'cs.AI', 'cs.CC', 'stat.ML'] | We study the computational limits of Low-Rank Adaptation (LoRA) for
finetuning transformer-based models using fine-grained complexity theory. Our
key observation is that the existence of low-rank decompositions within the
gradient computation of LoRA adaptation leads to possible algorithmic speedup.
This allows us to (... | 2024-06-05T10:44:08Z | Accepted at ICLR 2025. v2 matches the camera-ready version | null | null | null | null | null | null | null | null | null |
2406.03184 | Ouroboros3D: Image-to-3D Generation via 3D-aware Recursive Diffusion | ['Hao Wen', 'Zehuan Huang', 'Yaohui Wang', 'Xinyuan Chen', 'Lu Sheng'] | ['cs.CV'] | Existing single image-to-3D creation methods typically involve a two-stage
process, first generating multi-view images, and then using these images for 3D
reconstruction. However, training these two stages separately leads to
significant data bias in the inference phase, thus affecting the quality of
reconstructed resu... | 2024-06-05T12:15:22Z | See our project page at https://costwen.github.io/Ouroboros3D/ | null | null | null | null | null | null | null | null | null |
2406.03344 | Audio Mamba: Bidirectional State Space Model for Audio Representation
Learning | ['Mehmet Hamza Erol', 'Arda Senocak', 'Jiu Feng', 'Joon Son Chung'] | ['cs.SD', 'cs.AI', 'eess.AS'] | Transformers have rapidly become the preferred choice for audio
classification, surpassing methods based on CNNs. However, Audio Spectrogram
Transformers (ASTs) exhibit quadratic scaling due to self-attention. The
removal of this quadratic self-attention cost presents an appealing direction.
Recently, state space model... | 2024-06-05T15:00:59Z | Code is available at https://github.com/mhamzaerol/Audio-Mamba-AuM | null | null | null | null | null | null | null | null | null |
2406.03363 | LLM-based Rewriting of Inappropriate Argumentation using Reinforcement
Learning from Machine Feedback | ['Timon Ziegenbein', 'Gabriella Skitalinskaya', 'Alireza Bayat Makou', 'Henning Wachsmuth'] | ['cs.CL'] | Ensuring that online discussions are civil and productive is a major
challenge for social media platforms. Such platforms usually rely both on users
and on automated detection tools to flag inappropriate arguments of other
users, which moderators then review. However, this kind of post-hoc moderation
is expensive and t... | 2024-06-05T15:18:08Z | null | null | null | LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback | ['Timon Ziegenbein', 'Gabriella Skitalinskaya', 'Alireza Bayat Makou', 'Henning Wachsmuth'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 8 | 71 | ['Computer Science'] |
2406.03368 | IrokoBench: A New Benchmark for African Languages in the Age of Large
Language Models | ['David Ifeoluwa Adelani', 'Jessica Ojo', 'Israel Abebe Azime', 'Jian Yun Zhuang', 'Jesujoba O. Alabi', 'Xuanli He', 'Millicent Ochieng', 'Sara Hooker', 'Andiswa Bukula', 'En-Shiun Annie Lee', 'Chiamaka Chukwuneke', 'Happy Buzaaba', 'Blessing Sibanda', 'Godson Kalipe', 'Jonathan Mukiibi', 'Salomon Kabongo', 'Foutse Yue... | ['cs.CL', 'cs.AI'] | Despite the widespread adoption of Large language models (LLMs), their
remarkable capabilities remain limited to a few high-resource languages.
Additionally, many low-resource languages (\eg African languages) are often
evaluated only on basic text classification tasks due to the lack of
appropriate or comprehensive be... | 2024-06-05T15:23:08Z | Accepted to NAACL 2025 (main conference) | null | null | IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models | ['David Ifeoluwa Adelani', 'Jessica Ojo', 'Israel Abebe Azime', 'Zhuang Yun Jian', 'Jesujoba Oluwadara Alabi', 'Xuanli He', 'Millicent Ochieng', 'Sara Hooker', 'Andiswa Bukula', 'En-Shiun Annie Lee', 'Chiamaka Chukwuneke', 'Happy Buzaaba', 'Blessing K. Sibanda', 'Godson Kalipe', 'Jonathan Mukiibi', 'Salomon Kabongo KAB... | 2024 | North American Chapter of the Association for Computational Linguistics | 10 | 69 | ['Computer Science'] |
2406.03459 | LW-DETR: A Transformer Replacement to YOLO for Real-Time Detection | ['Qiang Chen', 'Xiangbo Su', 'Xinyu Zhang', 'Jian Wang', 'Jiahui Chen', 'Yunpeng Shen', 'Chuchu Han', 'Ziliang Chen', 'Weixiang Xu', 'Fanrong Li', 'Shan Zhang', 'Kun Yao', 'Errui Ding', 'Gang Zhang', 'Jingdong Wang'] | ['cs.CV'] | In this paper, we present a light-weight detection transformer, LW-DETR,
which outperforms YOLOs for real-time object detection. The architecture is a
simple stack of a ViT encoder, a projector, and a shallow DETR decoder. Our
approach leverages recent advanced techniques, such as training-effective
techniques, e.g., i... | 2024-06-05T17:07:24Z | null | null | null | LW-DETR: A Transformer Replacement to YOLO for Real-Time Detection | ['Qiang Chen', 'Xiangbo Su', 'Xinyu Zhang', 'Jian Wang', 'Jiahui Chen', 'Yunpeng Shen', 'Chuchu Han', 'Ziliang Chen', 'Weixiang Xu', 'Fanrong Li', 'Shan Zhang', 'Kun Yao', 'Errui Ding', 'Gang Zhang', 'Jingdong Wang'] | 2024 | arXiv.org | 21 | 73 | ['Computer Science'] |
2406.03496 | Wings: Learning Multimodal LLMs without Text-only Forgetting | ['Yi-Kai Zhang', 'Shiyin Lu', 'Yang Li', 'Yanqing Ma', 'Qing-Guo Chen', 'Zhao Xu', 'Weihua Luo', 'Kaifu Zhang', 'De-Chuan Zhan', 'Han-Jia Ye'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Multimodal large language models (MLLMs), initiated with a trained LLM, first
align images with text and then fine-tune on multimodal mixed inputs. However,
the MLLM catastrophically forgets the text-only instructions, which do not
include images and can be addressed within the initial LLM. In this paper, we
present Wi... | 2024-06-05T17:59:40Z | null | null | null | Wings: Learning Multimodal LLMs without Text-only Forgetting | ['Yi-Kai Zhang', 'Shiyin Lu', 'Yang Li', 'Yanqing Ma', 'Qing-Guo Chen', 'Zhao Xu', 'Weihua Luo', 'Kaifu Zhang', 'De-chuan Zhan', 'Han-Jia Ye'] | 2024 | Neural Information Processing Systems | 10 | 149 | ['Computer Science'] |
2406.03520 | VideoPhy: Evaluating Physical Commonsense for Video Generation | ['Hritik Bansal', 'Zongyu Lin', 'Tianyi Xie', 'Zeshun Zong', 'Michal Yarom', 'Yonatan Bitton', 'Chenfanfu Jiang', 'Yizhou Sun', 'Kai-Wei Chang', 'Aditya Grover'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Recent advances in internet-scale video data pretraining have led to the
development of text-to-video generative models that can create high-quality
videos across a broad range of visual concepts, synthesize realistic motions
and render complex objects. Hence, these generative models have the potential
to become genera... | 2024-06-05T17:53:55Z | 43 pages, 29 figures, 12 tables. Added CogVideo and Dream Machine in
v2 | null | null | VideoPhy: Evaluating Physical Commonsense for Video Generation | ['Hritik Bansal', 'Zongyu Lin', 'Tianyi Xie', 'Zeshun Zong', 'Michal Yarom', 'Yonatan Bitton', 'Chenfanfu Jiang', 'Yizhou Sun', 'Kai-Wei Chang', 'Aditya Grover'] | 2024 | International Conference on Learning Representations | 45 | 119 | ['Computer Science'] |
2406.03686 | BindGPT: A Scalable Framework for 3D Molecular Design via Language
Modeling and Reinforcement Learning | ['Artem Zholus', 'Maksim Kuznetsov', 'Roman Schutski', 'Rim Shayakhmetov', 'Daniil Polykovskiy', 'Sarath Chandar', 'Alex Zhavoronkov'] | ['cs.LG'] | Generating novel active molecules for a given protein is an extremely
challenging task for generative models that requires an understanding of the
complex physical interactions between the molecule and its environment. In this
paper, we present a novel generative model, BindGPT which uses a conceptually
simple but powe... | 2024-06-06T02:10:50Z | null | null | null | BindGPT: A Scalable Framework for 3D Molecular Design via Language Modeling and Reinforcement Learning | ['Artem Zholus', 'Maksim Kuznetsov', 'Roman Schutski', 'Shayakhmetov Rim', 'Daniil A Polykovskiy', 'Sarath Chandar', 'Alex Zhavoronkov'] | 2024 | arXiv.org | 6 | 61 | ['Computer Science'] |
2406.03736 | Your Absorbing Discrete Diffusion Secretly Models the Conditional
Distributions of Clean Data | ['Jingyang Ou', 'Shen Nie', 'Kaiwen Xue', 'Fengqi Zhu', 'Jiacheng Sun', 'Zhenguo Li', 'Chongxuan Li'] | ['cs.LG', 'cs.CL'] | Discrete diffusion models with absorbing processes have shown promise in
language modeling. The key quantities to be estimated are the ratios between
the marginal probabilities of two transitive states at all timesteps, called
the concrete score. In this paper, we reveal that the concrete score in
absorbing diffusion c... | 2024-06-06T04:22:11Z | null | null | null | null | null | null | null | null | null | null |
2406.03822 | SilentCipher: Deep Audio Watermarking | ['Mayank Kumar Singh', 'Naoya Takahashi', 'Weihsiang Liao', 'Yuki Mitsufuji'] | ['cs.SD', 'cs.CR', 'eess.AS'] | In the realm of audio watermarking, it is challenging to simultaneously
encode imperceptible messages while enhancing the message capacity and
robustness. Although recent advancements in deep learning-based methods bolster
the message capacity and robustness over traditional methods, the encoded
messages introduce audi... | 2024-06-06T07:58:31Z | null | null | 10.21437/Interspeech.2024-174 | SilentCipher: Deep Audio Watermarking | ['Mayank Kumar Singh', 'Naoya Takahashi', 'Wei-Hsiang Liao', 'Yuki Mitsufuji'] | 2024 | Interspeech | 10 | 27 | ['Computer Science', 'Engineering'] |
2406.03872 | BLSP-Emo: Towards Empathetic Large Speech-Language Models | ['Chen Wang', 'Minpeng Liao', 'Zhongqiang Huang', 'Junhong Wu', 'Chengqing Zong', 'Jiajun Zhang'] | ['cs.CL', 'cs.SD', 'eess.AS'] | The recent release of GPT-4o showcased the potential of end-to-end multimodal
models, not just in terms of low latency but also in their ability to
understand and generate expressive speech with rich emotions. While the details
are unknown to the open research community, it likely involves significant
amounts of curate... | 2024-06-06T09:02:31Z | null | null | null | null | null | null | null | null | null | null |
2406.03877 | Bench2Drive: Towards Multi-Ability Benchmarking of Closed-Loop
End-To-End Autonomous Driving | ['Xiaosong Jia', 'Zhenjie Yang', 'Qifeng Li', 'Zhiyuan Zhang', 'Junchi Yan'] | ['cs.RO', 'cs.CV'] | In an era marked by the rapid scaling of foundation models, autonomous
driving technologies are approaching a transformative threshold where
end-to-end autonomous driving (E2E-AD) emerges due to its potential of scaling
up in the data-driven manner. However, existing E2E-AD methods are mostly
evaluated under the open-l... | 2024-06-06T09:12:30Z | Accepted by NeurIPS 2024 Datasets and Benchmarks Track. Official
Repo: https://github.com/Thinklab-SJTU/Bench2Drive | null | null | null | null | null | null | null | null | null |
2406.03949 | UltraMedical: Building Specialized Generalists in Biomedicine | ['Kaiyan Zhang', 'Sihang Zeng', 'Ermo Hua', 'Ning Ding', 'Zhang-Ren Chen', 'Zhiyuan Ma', 'Haoxin Li', 'Ganqu Cui', 'Biqing Qi', 'Xuekai Zhu', 'Xingtai Lv', 'Hu Jinfang', 'Zhiyuan Liu', 'Bowen Zhou'] | ['cs.CL'] | Large Language Models (LLMs) have demonstrated remarkable capabilities across
various domains and are moving towards more specialized areas. Recent advanced
proprietary models such as GPT-4 and Gemini have achieved significant
advancements in biomedicine, which have also raised privacy and security
challenges. The cons... | 2024-06-06T10:50:26Z | Camera ready version for NeurIPS 2024 D&B Track | null | null | UltraMedical: Building Specialized Generalists in Biomedicine | ['Kaiyan Zhang', 'Sihang Zeng', 'Ermo Hua', 'Ning Ding', 'Zhangren Chen', 'Zhiyuan Ma', 'Haoxin Li', 'Ganqu Cui', 'Biqing Qi', 'Xuekai Zhu', 'Xingtai Lv', 'Jinfang Hu', 'Zhiyuan Liu', 'Bowen Zhou'] | 2024 | Neural Information Processing Systems | 33 | 71 | ['Computer Science'] |
2406.04093 | Scaling and evaluating sparse autoencoders | ['Leo Gao', 'Tom Dupré la Tour', 'Henk Tillman', 'Gabriel Goh', 'Rajan Troll', 'Alec Radford', 'Ilya Sutskever', 'Jan Leike', 'Jeffrey Wu'] | ['cs.LG', 'cs.AI'] | Sparse autoencoders provide a promising unsupervised approach for extracting
interpretable features from a language model by reconstructing activations from
a sparse bottleneck layer. Since language models learn many concepts,
autoencoders need to be very large to recover all relevant features. However,
studying the pr... | 2024-06-06T14:10:12Z | null | null | null | Scaling and evaluating sparse autoencoders | ['Leo Gao', "Tom Dupr'e la Tour", 'Henk Tillman', 'Gabriel Goh', 'Rajan Troll', 'Alec Radford', 'I. Sutskever', 'Jan Leike', 'Jeffrey Wu'] | 2024 | International Conference on Learning Representations | 163 | 69 | ['Computer Science'] |
2406.04151 | AgentGym: Evolving Large Language Model-based Agents across Diverse
Environments | ['Zhiheng Xi', 'Yiwen Ding', 'Wenxiang Chen', 'Boyang Hong', 'Honglin Guo', 'Junzhe Wang', 'Dingwen Yang', 'Chenyang Liao', 'Xin Guo', 'Wei He', 'Songyang Gao', 'Lu Chen', 'Rui Zheng', 'Yicheng Zou', 'Tao Gui', 'Qi Zhang', 'Xipeng Qiu', 'Xuanjing Huang', 'Zuxuan Wu', 'Yu-Gang Jiang'] | ['cs.AI', 'cs.CL'] | Building generalist agents that can handle diverse tasks and evolve
themselves across different environments is a long-term goal in the AI
community. Large language models (LLMs) are considered a promising foundation
to build such agents due to their generalized capabilities. Current approaches
either have LLM-based ag... | 2024-06-06T15:15:41Z | Project site: https://agentgym.github.io | null | null | null | null | null | null | null | null | null |
2406.04202 | Legal Documents Drafting with Fine-Tuned Pre-Trained Large Language
Model | ['Chun-Hsien Lin', 'Pu-Jen Cheng'] | ['cs.CL', 'cs.AI'] | With the development of large-scale Language Models (LLM), fine-tuning
pre-trained LLM has become a mainstream paradigm for solving downstream tasks
of natural language processing. However, training a language model in the legal
field requires a large number of legal documents so that the language model can
learn legal... | 2024-06-06T16:00:20Z | 12th International Conference on Software Engineering & Trends (SE
2024), April 27 ~ 28, 2024, Copenhagen, Denmark Volume Editors : David C.
Wyld, Dhinaharan Nagamalai (Eds) ISBN : 978-1-923107-24-3 | null | null | Legal Documents Drafting with Fine-Tuned Pre-Trained Large Language Model | ['Chun-Hsien Lin', 'Pu-Jen Cheng'] | 2024 | Software Engineering & Trends | 5 | 15 | ['Computer Science'] |
2406.04221 | Matching Anything by Segmenting Anything | ['Siyuan Li', 'Lei Ke', 'Martin Danelljan', 'Luigi Piccinelli', 'Mattia Segu', 'Luc Van Gool', 'Fisher Yu'] | ['cs.CV'] | The robust association of the same objects across video frames in complex
scenes is crucial for many applications, especially Multiple Object Tracking
(MOT). Current methods predominantly rely on labeled domain-specific video
datasets, which limits the cross-domain generalization of learned similarity
embeddings. We pr... | 2024-06-06T16:20:07Z | CVPR 2024 Highlight. code at: https://github.com/siyuanliii/masa | null | null | null | null | null | null | null | null | null |
2406.04233 | FairytaleQA Translated: Enabling Educational Question and Answer
Generation in Less-Resourced Languages | ['Bernardo Leite', 'Tomás Freitas Osório', 'Henrique Lopes Cardoso'] | ['cs.CL', 'cs.AI'] | Question Answering (QA) datasets are crucial in assessing reading
comprehension skills for both machines and humans. While numerous datasets have
been developed in English for this purpose, a noticeable void exists in
less-resourced languages. To alleviate this gap, our paper introduces
machine-translated versions of F... | 2024-06-06T16:31:47Z | Preprint - Accepted for publication at ECTEL 2024 | EC-TEL 2024, Lecture Notes in Computer Science, vol. 15159, pp.
222-236, Springer, 2024 | 10.1007/978-3-031-72315-5_16 | null | null | null | null | null | null | null |
2406.04264 | MLVU: Benchmarking Multi-task Long Video Understanding | ['Junjie Zhou', 'Yan Shu', 'Bo Zhao', 'Boya Wu', 'Zhengyang Liang', 'Shitao Xiao', 'Minghao Qin', 'Xi Yang', 'Yongping Xiong', 'Bo Zhang', 'Tiejun Huang', 'Zheng Liu'] | ['cs.CV', 'cs.AI', 'cs.CL'] | The evaluation of Long Video Understanding (LVU) performance poses an
important but challenging research problem. Despite previous efforts, the
existing video understanding benchmarks are severely constrained by several
issues, especially the insufficient lengths of videos, a lack of diversity in
video types and evalua... | 2024-06-06T17:09:32Z | null | null | null | null | null | null | null | null | null | null |
2406.04271 | Buffer of Thoughts: Thought-Augmented Reasoning with Large Language
Models | ['Ling Yang', 'Zhaochen Yu', 'Tianjun Zhang', 'Shiyi Cao', 'Minkai Xu', 'Wentao Zhang', 'Joseph E. Gonzalez', 'Bin Cui'] | ['cs.CL'] | We introduce Buffer of Thoughts (BoT), a novel and versatile
thought-augmented reasoning approach for enhancing accuracy, efficiency and
robustness of large language models (LLMs). Specifically, we propose
meta-buffer to store a series of informative high-level thoughts, namely
thought-template, distilled from the prob... | 2024-06-06T17:22:08Z | NeurIPS 2024 Spotlight. Project:
https://github.com/YangLing0818/buffer-of-thought-llm | null | null | Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models | ['Ling Yang', 'Zhaochen Yu', 'Tianjun Zhang', 'Shiyi Cao', 'Minkai Xu', 'Wentao Zhang', 'Joseph E. Gonzalez', 'Bin Cui'] | 2024 | Neural Information Processing Systems | 45 | 54 | ['Computer Science'] |
2406.04277 | VideoTetris: Towards Compositional Text-to-Video Generation | ['Ye Tian', 'Ling Yang', 'Haotian Yang', 'Yuan Gao', 'Yufan Deng', 'Jingmin Chen', 'Xintao Wang', 'Zhaochen Yu', 'Xin Tao', 'Pengfei Wan', 'Di Zhang', 'Bin Cui'] | ['cs.CV'] | Diffusion models have demonstrated great success in text-to-video (T2V)
generation. However, existing methods may face challenges when handling complex
(long) video generation scenarios that involve multiple objects or dynamic
changes in object numbers. To address these limitations, we propose
VideoTetris, a novel fram... | 2024-06-06T17:25:33Z | NeurIPS 2024. Code: https://github.com/YangLing0818/VideoTetris | null | null | VideoTetris: Towards Compositional Text-to-Video Generation | ['Ye Tian', 'Ling Yang', 'Haotian Yang', 'Yuan Gao', 'Yufan Deng', 'Jingmin Chen', 'Xintao Wang', 'Zhaochen Yu', 'Xin Tao', 'Pengfei Wan', 'Di Zhang', 'Bin Cui'] | 2024 | Neural Information Processing Systems | 19 | 78 | ['Computer Science'] |
2406.04292 | VISTA: Visualized Text Embedding For Universal Multi-Modal Retrieval | ['Junjie Zhou', 'Zheng Liu', 'Shitao Xiao', 'Bo Zhao', 'Yongping Xiong'] | ['cs.IR', 'cs.CL', 'cs.CV'] | Multi-modal retrieval becomes increasingly popular in practice. However, the
existing retrievers are mostly text-oriented, which lack the capability to
process visual information. Despite the presence of vision-language models like
CLIP, the current methods are severely limited in representing the text-only
and image-o... | 2024-06-06T17:37:47Z | Accepted to ACL 2024 main conference | null | null | null | null | null | null | null | null | null |
2406.04313 | Improving Alignment and Robustness with Circuit Breakers | ['Andy Zou', 'Long Phan', 'Justin Wang', 'Derek Duenas', 'Maxwell Lin', 'Maksym Andriushchenko', 'Rowan Wang', 'Zico Kolter', 'Matt Fredrikson', 'Dan Hendrycks'] | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV', 'cs.CY'] | AI systems can take harmful actions and are highly vulnerable to adversarial
attacks. We present an approach, inspired by recent advances in representation
engineering, that interrupts the models as they respond with harmful outputs
with "circuit breakers." Existing techniques aimed at improving alignment, such
as refu... | 2024-06-06T17:57:04Z | Code and models are available at
https://github.com/GraySwanAI/circuit-breakers | null | null | null | null | null | null | null | null | null |
2406.04314 | Aesthetic Post-Training Diffusion Models from Generic Preferences with
Step-by-step Preference Optimization | ['Zhanhao Liang', 'Yuhui Yuan', 'Shuyang Gu', 'Bohan Chen', 'Tiankai Hang', 'Mingxi Cheng', 'Ji Li', 'Liang Zheng'] | ['cs.CV'] | Generating visually appealing images is fundamental to modern text-to-image
generation models. A potential solution to better aesthetics is direct
preference optimization (DPO), which has been applied to diffusion models to
improve general image quality including prompt alignment and aesthetics.
Popular DPO methods pro... | 2024-06-06T17:57:09Z | CVPR 2025. Project Page: https://rockeycoss.github.io/spo.github.io/ | null | null | Aesthetic Post-Training Diffusion Models from Generic Preferences with Step-by-step Preference Optimization | ['Zhanhao Liang', 'Yuhui Yuan', 'Shuyang Gu', 'Bohan Chen', 'Tiankai Hang', 'Mingxi Cheng', 'Ji Li', 'Liang Zheng'] | 2024 | null | 14 | 34 | ['Computer Science'] |
2406.04321 | VidMuse: A Simple Video-to-Music Generation Framework with
Long-Short-Term Modeling | ['Zeyue Tian', 'Zhaoyang Liu', 'Ruibin Yuan', 'Jiahao Pan', 'Qifeng Liu', 'Xu Tan', 'Qifeng Chen', 'Wei Xue', 'Yike Guo'] | ['cs.CV', 'cs.LG', 'cs.MM', 'cs.SD'] | In this work, we systematically study music generation conditioned solely on
the video. First, we present a large-scale dataset comprising 360K video-music
pairs, including various genres such as movie trailers, advertisements, and
documentaries. Furthermore, we propose VidMuse, a simple framework for
generating music ... | 2024-06-06T17:58:11Z | The code and datasets are available at
https://github.com/ZeyueT/VidMuse/ | null | null | null | null | null | null | null | null | null |
2406.04325 | ShareGPT4Video: Improving Video Understanding and Generation with Better
Captions | ['Lin Chen', 'Xilin Wei', 'Jinsong Li', 'Xiaoyi Dong', 'Pan Zhang', 'Yuhang Zang', 'Zehui Chen', 'Haodong Duan', 'Bin Lin', 'Zhenyu Tang', 'Li Yuan', 'Yu Qiao', 'Dahua Lin', 'Feng Zhao', 'Jiaqi Wang'] | ['cs.CV'] | We present the ShareGPT4Video series, aiming to facilitate the video
understanding of large video-language models (LVLMs) and the video generation
of text-to-video models (T2VMs) via dense and precise captions. The series
comprises: 1) ShareGPT4Video, 40K GPT4V annotated dense captions of videos with
various lengths an... | 2024-06-06T17:58:54Z | Project Page: https://sharegpt4video.github.io/ | null | null | null | null | null | null | null | null | null |
2406.04330 | Parameter-Inverted Image Pyramid Networks | ['Xizhou Zhu', 'Xue Yang', 'Zhaokai Wang', 'Hao Li', 'Wenhan Dou', 'Junqi Ge', 'Lewei Lu', 'Yu Qiao', 'Jifeng Dai'] | ['cs.CV'] | Image pyramids are commonly used in modern computer vision tasks to obtain
multi-scale features for precise understanding of images. However, image
pyramids process multiple resolutions of images using the same large-scale
model, which requires significant computational cost. To overcome this issue,
we propose a novel ... | 2024-06-06T17:59:10Z | null | null | null | Parameter-Inverted Image Pyramid Networks | ['Xizhou Zhu', 'Xue Yang', 'Zhaokai Wang', 'Hao Li', 'Wenhan Dou', 'Junqi Ge', 'Lewei Lu', 'Yu Qiao', 'Jifeng Dai'] | 2024 | Neural Information Processing Systems | 0 | 68 | ['Computer Science'] |
2406.04334 | DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and
Effective for LMMs | ['Lingchen Meng', 'Jianwei Yang', 'Rui Tian', 'Xiyang Dai', 'Zuxuan Wu', 'Jianfeng Gao', 'Yu-Gang Jiang'] | ['cs.CV'] | Most large multimodal models (LMMs) are implemented by feeding visual tokens
as a sequence into the first layer of a large language model (LLM). The
resulting architecture is simple but significantly increases computation and
memory costs, as it has to handle a large number of additional tokens in its
input layer. This... | 2024-06-06T17:59:34Z | Project Page: https://deepstack-vl.github.io/ | null | null | null | null | null | null | null | null | null |
2406.04449 | MAIRA-2: Grounded Radiology Report Generation | ['Shruthi Bannur', 'Kenza Bouzid', 'Daniel C. Castro', 'Anton Schwaighofer', 'Anja Thieme', 'Sam Bond-Taylor', 'Maximilian Ilse', 'Fernando Pérez-García', 'Valentina Salvatelli', 'Harshita Sharma', 'Felix Meissen', 'Mercy Ranjit', 'Shaury Srivastav', 'Julia Gong', 'Noel C. F. Codella', 'Fabian Falck', 'Ozan Oktay', 'Ma... | ['cs.CL', 'cs.CV'] | Radiology reporting is a complex task requiring detailed medical image
understanding and precise language generation, for which generative multimodal
models offer a promising solution. However, to impact clinical practice, models
must achieve a high level of both verifiable performance and utility. We
augment the utili... | 2024-06-06T19:12:41Z | 72 pages, 21 figures. v2 updates the model and adds results on the
PadChest-GR dataset | null | null | null | null | null | null | null | null | null |
2406.04496 | Time Sensitive Knowledge Editing through Efficient Finetuning | ['Xiou Ge', 'Ali Mousavi', 'Edouard Grave', 'Armand Joulin', 'Kun Qian', 'Benjamin Han', 'Mostafa Arefiyan', 'Yunyao Li'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Large Language Models (LLMs) have demonstrated impressive capability in
different tasks and are bringing transformative changes to many domains.
However, keeping the knowledge in LLMs up-to-date remains a challenge once
pretraining is complete. It is thus essential to design effective methods to
both update obsolete kn... | 2024-06-06T20:41:36Z | ACL 2024 main | null | null | null | null | null | null | null | null | null |
2406.04823 | BERTs are Generative In-Context Learners | ['David Samuel'] | ['cs.CL', 'cs.AI'] | While in-context learning is commonly associated with causal language models,
such as GPT, we demonstrate that this capability also 'emerges' in masked
language models. Through an embarrassingly simple inference technique, we
enable an existing masked model, DeBERTa, to perform generative tasks without
additional train... | 2024-06-07T10:48:45Z | 26 pages, NeurIPS 2024 | null | null | BERTs are Generative In-Context Learners | ['David Samuel'] | 2024 | Neural Information Processing Systems | 8 | 62 | ['Computer Science'] |
2406.04904 | XTTS: a Massively Multilingual Zero-Shot Text-to-Speech Model | ['Edresson Casanova', 'Kelly Davis', 'Eren Gölge', 'Görkem Göknar', 'Iulian Gulea', 'Logan Hart', 'Aya Aljafari', 'Joshua Meyer', 'Reuben Morais', 'Samuel Olayemi', 'Julian Weber'] | ['eess.AS', 'cs.CL', 'cs.SD'] | Most Zero-shot Multi-speaker TTS (ZS-TTS) systems support only a single
language. Although models like YourTTS, VALL-E X, Mega-TTS 2, and Voicebox
explored Multilingual ZS-TTS they are limited to just a few high/medium
resource languages, limiting the applications of these models in most of the
low/medium resource lang... | 2024-06-07T12:56:11Z | Accepted at INTERSPEECH 2024 | null | null | XTTS: a Massively Multilingual Zero-Shot Text-to-Speech Model | ['Edresson Casanova', 'Kelly Davis', 'Eren Gölge', 'Görkem Göknar', 'Iulian Gulea', 'Logan Hart', 'Aya Aljafari', 'Joshua Meyer', 'Reuben Morais', 'Samuel Olayemi', 'Julian Weber'] | 2024 | Interspeech | 85 | 35 | ['Computer Science', 'Engineering'] |
2406.04927 | LLM-based speaker diarization correction: A generalizable approach | ['Georgios Efstathiadis', 'Vijay Yadav', 'Anzar Abbas'] | ['eess.AS', 'cs.CL'] | Speaker diarization is necessary for interpreting conversations transcribed
using automated speech recognition (ASR) tools. Despite significant
developments in diarization methods, diarization accuracy remains an issue.
Here, we investigate the use of large language models (LLMs) for diarization
correction as a post-pr... | 2024-06-07T13:33:22Z | null | Speech Communication, Volume 170, 2025, Page 103224 | 10.1016/j.specom.2025.103224 | LLM-based speaker diarization correction: A generalizable approach | ['Georgios Efstathiadis', 'Vijay Yadav', 'Anzar Abbas'] | 2024 | Speech Communication | 5 | 53 | ['Computer Science', 'Engineering'] |
2406.05074 | Hibou: A Family of Foundational Vision Transformers for Pathology | ['Dmitry Nechaev', 'Alexey Pchelnikov', 'Ekaterina Ivanova'] | ['eess.IV', 'cs.CV'] | Pathology, the microscopic examination of diseased tissue, is critical for
diagnosing various medical conditions, particularly cancers. Traditional
methods are labor-intensive and prone to human error. Digital pathology, which
converts glass slides into high-resolution digital images for analysis by
computer algorithms... | 2024-06-07T16:45:53Z | null | null | null | null | null | null | null | null | null | null |
2406.05113 | LlavaGuard: An Open VLM-based Framework for Safeguarding Vision Datasets
and Models | ['Lukas Helff', 'Felix Friedrich', 'Manuel Brack', 'Kristian Kersting', 'Patrick Schramowski'] | ['cs.CV', 'cs.AI', 'cs.LG'] | This paper introduces LlavaGuard, a suite of VLM-based vision safeguards that
address the critical need for reliable guardrails in the era of large-scale
data and models. To this end, we establish a novel open framework, describing a
customizable safety taxonomy, data preprocessing, augmentation, and training
setup. Fo... | 2024-06-07T17:44:32Z | In Proceedings of the 42st International Conference on Machine
Learning (ICML 2025), Project page at
https://ml-research.github.io/human-centered-genai/projects/llavaguard/index.html | null | null | null | null | null | null | null | null | null |
2406.05223 | CorDA: Context-Oriented Decomposition Adaptation of Large Language
Models for Task-Aware Parameter-Efficient Fine-tuning | ['Yibo Yang', 'Xiaojie Li', 'Zhongzhu Zhou', 'Shuaiwen Leon Song', 'Jianlong Wu', 'Liqiang Nie', 'Bernard Ghanem'] | ['cs.LG', 'cs.AI'] | Current parameter-efficient fine-tuning (PEFT) methods build adapters widely
agnostic of the context of downstream task to learn, or the context of
important knowledge to maintain. As a result, there is often a performance gap
compared to full-parameter fine-tuning, and meanwhile the fine-tuned model
suffers from catas... | 2024-06-07T19:10:35Z | NeurIPS 2024 | null | null | null | null | null | null | null | null | null |
2406.05285 | VISTA3D: A Unified Segmentation Foundation Model For 3D Medical Imaging | ['Yufan He', 'Pengfei Guo', 'Yucheng Tang', 'Andriy Myronenko', 'Vishwesh Nath', 'Ziyue Xu', 'Dong Yang', 'Can Zhao', 'Benjamin Simon', 'Mason Belue', 'Stephanie Harmon', 'Baris Turkbey', 'Daguang Xu', 'Wenqi Li'] | ['cs.CV'] | Foundation models for interactive segmentation in 2D natural images and
videos have sparked significant interest in building 3D foundation models for
medical imaging. However, the domain gaps and clinical use cases for 3D medical
imaging require a dedicated model that diverges from existing 2D solutions.
Specifically, ... | 2024-06-07T22:41:39Z | null | null | null | null | null | null | null | null | null | null |
2406.05298 | Spectral Codecs: Improving Non-Autoregressive Speech Synthesis with
Spectrogram-Based Audio Codecs | ['Ryan Langman', 'Ante Jukić', 'Kunal Dhawan', 'Nithin Rao Koluguri', 'Jason Li'] | ['eess.AS'] | Historically, most speech models in machine-learning have used the
mel-spectrogram as a speech representation. Recently, discrete audio tokens
produced by neural audio codecs have become a popular alternate speech
representation for speech synthesis tasks such as text-to-speech (TTS).
However, the data distribution pro... | 2024-06-07T23:47:51Z | null | null | null | null | null | null | null | null | null | null |
2406.05347 | MSAGPT: Neural Prompting Protein Structure Prediction via MSA Generative
Pre-Training | ['Bo Chen', 'Zhilei Bei', 'Xingyi Cheng', 'Pan Li', 'Jie Tang', 'Le Song'] | ['q-bio.BM', 'cs.AI', 'cs.LG'] | Multiple Sequence Alignment (MSA) plays a pivotal role in unveiling the
evolutionary trajectories of protein families. The accuracy of protein
structure predictions is often compromised for protein sequences that lack
sufficient homologous information to construct high quality MSA. Although
various methods have been pr... | 2024-06-08T04:23:57Z | NeurIPS 2024 | null | null | null | null | null | null | null | null | null |
2406.05491 | One Perturbation is Enough: On Generating Universal Adversarial
Perturbations against Vision-Language Pre-training Models | ['Hao Fang', 'Jiawei Kong', 'Wenbo Yu', 'Bin Chen', 'Jiawei Li', 'Hao Wu', 'Shutao Xia', 'Ke Xu'] | ['cs.CV', 'cs.CR'] | Vision-Language Pre-training (VLP) models have exhibited unprecedented
capability in many applications by taking full advantage of the multimodal
alignment. However, previous studies have shown they are vulnerable to
maliciously crafted adversarial samples. Despite recent success, these methods
are generally instance-s... | 2024-06-08T15:01:54Z | null | null | null | One Perturbation is Enough: On Generating Universal Adversarial Perturbations against Vision-Language Pre-training Models | ['Hao Fang', 'Jiawei Kong', 'Wenbo Yu', 'Bin Chen', 'Jiawei Li', 'Shutao Xia', 'Ke Xu'] | 2024 | arXiv.org | 14 | 130 | ['Computer Science']
2406.05587 | Creativity Has Left the Chat: The Price of Debiasing Language Models | ['Behnam Mohammadi'] | ['cs.CL', 'cs.AI'] | Large Language Models (LLMs) have revolutionized natural language processing
but can exhibit biases and may generate toxic content. While alignment
techniques like Reinforcement Learning from Human Feedback (RLHF) reduce these
issues, their impact on creativity, defined as syntactic and semantic
diversity, remains unex... | 2024-06-08T22:14:51Z | null | null | null | null | null | null | null | null | null | null |
2406.05629 | Separating the "Chirp" from the "Chat": Self-supervised Visual Grounding
of Sound and Language | ['Mark Hamilton', 'Andrew Zisserman', 'John R. Hershey', 'William T. Freeman'] | ['cs.CV', 'cs.CL', 'cs.IR', 'cs.LG', 'cs.SD', 'eess.AS'] | We present DenseAV, a novel dual encoder grounding architecture that learns
high-resolution, semantically meaningful, and audio-visually aligned features
solely through watching videos. We show that DenseAV can discover the
``meaning'' of words and the ``location'' of sounds without explicit
localization supervision. F... | 2024-06-09T03:38:21Z | Computer Vision and Pattern Recognition 2024 | null | null | null | null | null | null | null | null | null |
2406.05661 | MS-HuBERT: Mitigating Pre-training and Inference Mismatch in Masked
Language Modelling methods for learning Speech Representations | ['Hemant Yadav', 'Sunayana Sitaram', 'Rajiv Ratn Shah'] | ['cs.CL'] | In recent years, self-supervised pre-training methods have gained significant
traction in learning high-level information from raw speech. Among these
methods, HuBERT has demonstrated SOTA performance in automatic speech
recognition (ASR). However, HuBERT's performance lags behind data2vec due to
disparities in pre-tra... | 2024-06-09T06:30:28Z | 4 pages, submitted to interspeech2024 | null | null | null | null | null | null | null | null | null |
2406.05688 | Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based
Interactions | ['Cheng Tan', 'Dongxin Lyu', 'Siyuan Li', 'Zhangyang Gao', 'Jingxuan Wei', 'Siqi Ma', 'Zicheng Liu', 'Stan Z. Li'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Large Language Models (LLMs) have demonstrated wide-ranging applications
across various fields and have shown significant potential in the academic
peer-review process. However, existing applications are primarily limited to
static review generation based on submitted papers, which fail to capture the
dynamic and itera... | 2024-06-09T08:24:17Z | Under review | null | null | null | null | null | null | null | null | null |
2406.05707 | QGEval: Benchmarking Multi-dimensional Evaluation for Question
Generation | ['Weiping Fu', 'Bifan Wei', 'Jianxiang Hu', 'Zhongmin Cai', 'Jun Liu'] | ['cs.CL', 'cs.AI'] | Automatically generated questions often suffer from problems such as unclear
expression or factual inaccuracies, requiring a reliable and comprehensive
evaluation of their quality. Human evaluation is widely used in the field of
question generation (QG) and serves as the gold standard for automatic metrics.
However, th... | 2024-06-09T09:51:55Z | Accepted by EMNLP 2024 | null | null | QGEval: Benchmarking Multi-dimensional Evaluation for Question Generation | ['Weiping Fu', 'Bifan Wei', 'Jianxiang Hu', 'Zhongmin Cai', 'Jun Liu'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 5 | 42 | ['Computer Science']
2406.05763 | WenetSpeech4TTS: A 12,800-hour Mandarin TTS Corpus for Large Speech
Generation Model Benchmark | ['Linhan Ma', 'Dake Guo', 'Kun Song', 'Yuepeng Jiang', 'Shuai Wang', 'Liumeng Xue', 'Weiming Xu', 'Huan Zhao', 'Binbin Zhang', 'Lei Xie'] | ['eess.AS'] | With the development of large text-to-speech (TTS) models and scale-up of the
training data, state-of-the-art TTS systems have achieved impressive
performance. In this paper, we present WenetSpeech4TTS, a multi-domain Mandarin
corpus derived from the open-sourced WenetSpeech dataset. Tailored for the
text-to-speech tas... | 2024-06-09T12:32:42Z | Accepted by INTERSPEECH2024 | null | null | WenetSpeech4TTS: A 12,800-hour Mandarin TTS Corpus for Large Speech Generation Model Benchmark | ['Linhan Ma', 'Dake Guo', 'Kun Song', 'Yuepeng Jiang', 'Shuai Wang', 'Liumeng Xue', 'Weiming Xu', 'Huan Zhao', 'Binbin Zhang', 'Lei Xie'] | 2024 | Interspeech | 29 | 20 | ['Computer Science', 'Engineering']
2406.05768 | TLCM: Training-efficient Latent Consistency Model for Image Generation
with 2-8 Steps | ['Qingsong Xie', 'Zhenyi Liao', 'Zhijie Deng', 'Chen chen', 'Haonan Lu'] | ['cs.CV', 'cs.AI'] | Distilling latent diffusion models (LDMs) into ones that are fast to sample
from is attracting growing research interest. However, the majority of existing
methods face two critical challenges: (1) They hinge on long training using a
huge volume of real data. (2) They routinely lead to quality degradation for
generatio... | 2024-06-09T12:55:50Z | null | null | null | TLCM: Training-efficient Latent Consistency Model for Image Generation with 2-8 Steps | ['Qingsong Xie', 'Zhenyi Liao', 'Chen Chen', 'Zhijie Deng', 'Shixiang Tang', 'H. Lu'] | 2024 | null | 1 | 42 | ['Computer Science']
2406.05930 | Semisupervised Neural Proto-Language Reconstruction | ['Liang Lu', 'Peirong Xie', 'David R. Mortensen'] | ['cs.CL'] | Existing work implementing comparative reconstruction of ancestral languages
(proto-languages) has usually required full supervision. However, historical
reconstruction models are only of practical value if they can be trained with a
limited amount of labeled data. We propose a semisupervised historical
reconstruction ... | 2024-06-09T22:46:41Z | Accepted to ACL 2024; v2: correct typo | null | null | null | null | null | null | null | null | null |
2406.05946 | Safety Alignment Should Be Made More Than Just a Few Tokens Deep | ['Xiangyu Qi', 'Ashwinee Panda', 'Kaifeng Lyu', 'Xiao Ma', 'Subhrajit Roy', 'Ahmad Beirami', 'Prateek Mittal', 'Peter Henderson'] | ['cs.CR', 'cs.AI'] | The safety alignment of current Large Language Models (LLMs) is vulnerable.
Relatively simple attacks, or even benign fine-tuning, can jailbreak aligned
models. We argue that many of these vulnerabilities are related to a shared
underlying issue: safety alignment can take shortcuts, wherein the alignment
adapts a model... | 2024-06-10T00:35:23Z | null | null | null | Safety Alignment Should Be Made More Than Just a Few Tokens Deep | ['Xiangyu Qi', 'Ashwinee Panda', 'Kaifeng Lyu', 'Xiao Ma', 'Subhrajit Roy', 'Ahmad Beirami', 'Prateek Mittal', 'Peter Henderson'] | 2024 | International Conference on Learning Representations | 142 | 87 | ['Computer Science']
2406.05955 | Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated
Parameters | ['Yixin Song', 'Haotong Xie', 'Zhengyan Zhang', 'Bo Wen', 'Li Ma', 'Zeyu Mi', 'Haibo Chen'] | ['cs.LG', 'cs.CL'] | Exploiting activation sparsity is a promising approach to significantly
accelerating the inference process of large language models (LLMs) without
compromising performance. However, activation sparsity is determined by
activation functions, and commonly used ones like SwiGLU and GeGLU exhibit
limited sparsity. Simply r... | 2024-06-10T01:21:59Z | null | null | null | null | null | null | null | null | null | null |
2406.06046 | MATES: Model-Aware Data Selection for Efficient Pretraining with Data
Influence Models | ['Zichun Yu', 'Spandan Das', 'Chenyan Xiong'] | ['cs.CL', 'cs.LG'] | Pretraining data selection has the potential to improve language model
pretraining efficiency by utilizing higher-quality data from massive web data
corpora. Current data selection methods, which rely on either hand-crafted
rules or larger reference models, are conducted statically and do not capture
the evolving data ... | 2024-06-10T06:27:42Z | Accepted to NeurIPS 2024 | null | null | null | null | null | null | null | null | null |
2406.06087 | GAIA: Rethinking Action Quality Assessment for AI-Generated Videos | ['Zijian Chen', 'Wei Sun', 'Yuan Tian', 'Jun Jia', 'Zicheng Zhang', 'Jiarui Wang', 'Ru Huang', 'Xiongkuo Min', 'Guangtao Zhai', 'Wenjun Zhang'] | ['cs.CV'] | Assessing action quality is both imperative and challenging due to its
significant impact on the quality of AI-generated videos, further complicated
by the inherently ambiguous nature of actions within AI-generated video (AIGV).
Current action quality assessment (AQA) algorithms predominantly focus on
actions from real... | 2024-06-10T08:18:07Z | Accepted by NeurIPS2024 Dataset and Benchmark Track as Spotlight. 33
pages, 15 figures | null | null | null | null | null | null | null | null | null |
2406.06110 | Recurrent Context Compression: Efficiently Expanding the Context Window
of LLM | ['Chensen Huang', 'Guibo Zhu', 'Xuepeng Wang', 'Yifei Luo', 'Guojing Ge', 'Haoran Chen', 'Dong Yi', 'Jinqiao Wang'] | ['cs.CL', 'cs.AI'] | To extend the context length of Transformer-based large language models
(LLMs) and improve comprehension capabilities, we often face limitations due to
computational resources and bounded memory storage capacity. This work
introduces a method called Recurrent Context Compression (RCC), designed to
efficiently expand th... | 2024-06-10T08:50:59Z | null | null | null | null | null | null | null | null | null | null |
2406.06185 | EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech
Enhancement and Dereverberation | ['Julius Richter', 'Yi-Chiao Wu', 'Steven Krenn', 'Simon Welker', 'Bunlong Lay', 'Shinji Watanabe', 'Alexander Richard', 'Timo Gerkmann'] | ['eess.AS', 'cs.LG', 'cs.SD'] | We release the EARS (Expressive Anechoic Recordings of Speech) dataset, a
high-quality speech dataset comprising 107 speakers from diverse backgrounds,
totaling in 100 hours of clean, anechoic speech data. The dataset covers a
large range of different speaking styles, including emotional speech, different
reading style... | 2024-06-10T11:28:29Z | Accepted at Interspeech 2024 | null | null | null | null | null | null | null | null | null |
2406.06316 | Tx-LLM: A Large Language Model for Therapeutics | ['Juan Manuel Zambrano Chaves', 'Eric Wang', 'Tao Tu', 'Eeshit Dhaval Vaishnav', 'Byron Lee', 'S. Sara Mahdavi', 'Christopher Semturs', 'David Fleet', 'Vivek Natarajan', 'Shekoofeh Azizi'] | ['cs.CL', 'cs.AI', 'cs.CE', 'cs.LG'] | Developing therapeutics is a lengthy and expensive process that requires the
satisfaction of many different criteria, and AI models capable of expediting
the process would be invaluable. However, the majority of current AI approaches
address only a narrowly defined set of tasks, often circumscribed within a
particular ... | 2024-06-10T14:33:02Z | null | null | null | null | null | null | null | null | null | null |
2406.06331 | MedExQA: Medical Question Answering Benchmark with Multiple Explanations | ['Yunsoo Kim', 'Jinge Wu', 'Yusuf Abdulle', 'Honghan Wu'] | ['cs.CL', 'cs.AI'] | This paper introduces MedExQA, a novel benchmark in medical
question-answering, to evaluate large language models' (LLMs) understanding of
medical knowledge through explanations. By constructing datasets across five
distinct medical specialties that are underrepresented in current datasets and
further incorporating mul... | 2024-06-10T14:47:04Z | Accepted to ACL2024 BioNLP Workshop | null | null | null | null | null | null | null | null | null |