arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2409.12887 | Enhancing Unsupervised Sentence Embeddings via Knowledge-Driven Data Augmentation and Gaussian-Decayed Contrastive Learning | ['Peichao Lai', 'Zhengfeng Zhang', 'Wentao Zhang', 'Fangcheng Fu', 'Bin Cui'] | ['cs.CL'] | Recently, using large language models (LLMs) for data augmentation has led to considerable improvements in unsupervised sentence embedding models. However, existing methods encounter two primary challenges: limited data diversity and high data noise. Current approaches often neglect fine-grained knowledge, such as enti... | 2024-09-19T16:29:58Z | null | null | null | null | null | null | null | null | null | null |
2409.12917 | Training Language Models to Self-Correct via Reinforcement Learning | ['Aviral Kumar', 'Vincent Zhuang', 'Rishabh Agarwal', 'Yi Su', 'John D Co-Reyes', 'Avi Singh', 'Kate Baumli', 'Shariq Iqbal', 'Colton Bishop', 'Rebecca Roelofs', 'Lei M Zhang', 'Kay McKinney', 'Disha Shrivastava', 'Cosmin Paduraru', 'George Tucker', 'Doina Precup', 'Feryal Behbahani', 'Aleksandra Faust'] | ['cs.LG'] | Self-correction is a highly desirable capability of large language models (LLMs), yet it has consistently been found to be largely ineffective in modern LLMs. Current methods for training self-correction typically depend on either multiple models, a more advanced model, or additional forms of supervision. To address th... | 2024-09-19T17:16:21Z | null | null | null | Training Language Models to Self-Correct via Reinforcement Learning | ['Aviral Kumar', 'Vincent Zhuang', 'Rishabh Agarwal', 'Yi Su', 'John D. Co-Reyes', 'Avi Singh', 'Kate Baumli', 'Shariq Iqbal', 'Colton Bishop', 'Rebecca Roelofs', 'Lei M. Zhang', 'Kay McKinney', 'Disha Shrivastava', 'Cosmin Paduraru', 'George Tucker', 'D. Precup', 'Feryal M. P. Behbahani', 'Aleksandra Faust'] | 2024 | International Conference on Learning Representations | 177 | 58 | ['Computer Science'] |
2409.12957 | 3DTopia-XL: Scaling High-quality 3D Asset Generation via Primitive Diffusion | ['Zhaoxi Chen', 'Jiaxiang Tang', 'Yuhao Dong', 'Ziang Cao', 'Fangzhou Hong', 'Yushi Lan', 'Tengfei Wang', 'Haozhe Xie', 'Tong Wu', 'Shunsuke Saito', 'Liang Pan', 'Dahua Lin', 'Ziwei Liu'] | ['cs.CV', 'cs.GR'] | The increasing demand for high-quality 3D assets across various industries necessitates efficient and automated 3D content creation. Despite recent advancements in 3D generative models, existing methods still face challenges with optimization speed, geometric fidelity, and the lack of assets for physically based render... | 2024-09-19T17:59:06Z | CVPR 2025, Code https://github.com/3DTopia/3DTopia-XL Project Page https://3dtopia.github.io/3DTopia-XL/ | null | null | 3DTopia-XL: Scaling High-quality 3D Asset Generation via Primitive Diffusion | ['Zhaoxi Chen', 'Jiaxiang Tang', 'Yuhao Dong', 'Ziang Cao', 'Fangzhou Hong', 'Yushi Lan', 'Tengfei Wang', 'Haozhe Xie', 'Tong Wu', 'Shunsuke Saito', 'Liang Pan', 'Dahua Lin', 'Ziwei Liu'] | 2024 | arXiv.org | 23 | 86 | ['Computer Science'] |
2409.12958 | MURI: High-Quality Instruction Tuning Datasets for Low-Resource Languages via Reverse Instructions | ['Abdullatif Köksal', 'Marion Thaler', 'Ayyoob Imani', 'Ahmet Üstün', 'Anna Korhonen', 'Hinrich Schütze'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Instruction tuning enhances large language models (LLMs) by aligning them with human preferences across diverse tasks. Traditional approaches to create instruction tuning datasets face serious challenges for low-resource languages due to their dependence on data annotation. This work introduces a novel method, Multilin... | 2024-09-19T17:59:20Z | null | null | null | null | null | null | null | null | null | null |
2409.12961 | Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution | ['Zuyan Liu', 'Yuhao Dong', 'Ziwei Liu', 'Winston Hu', 'Jiwen Lu', 'Yongming Rao'] | ['cs.CV'] | Visual data comes in various forms, ranging from small icons of just a few pixels to long videos spanning hours. Existing multi-modal LLMs usually standardize these diverse visual inputs to a fixed resolution for visual encoders and yield similar numbers of tokens for LLMs. This approach is non-optimal for multimodal u... | 2024-09-19T17:59:51Z | Accepted to ICLR 2025 | null | null | null | null | null | null | null | null | null |
2409.13191 | Diabetica: Adapting Large Language Model to Enhance Multiple Medical Tasks in Diabetes Care and Management | ['Lai Wei', 'Zhen Ying', 'Muyang He', 'Yutong Chen', 'Qian Yang', 'Yanzhe Hong', 'Jiaping Lu', 'Kaipeng Zheng', 'Shaoting Zhang', 'Xiaoying Li', 'Weiran Huang', 'Ying Chen'] | ['cs.CL', 'cs.AI', 'cs.CE', 'cs.LG'] | Diabetes is a chronic disease with a significant global health burden, requiring multi-stakeholder collaboration for optimal management. Large language models (LLMs) have shown promise in various healthcare scenarios, but their effectiveness across diverse diabetes tasks remains unproven. Our study introduced a framewo... | 2024-09-20T03:47:54Z | Accepted by ICLR 2025 SCI-FM workshop | null | null | null | null | null | null | null | null | null |
2409.13198 | Exploring Scaling Laws for Local SGD in Large Language Model Training | ['Qiaozhi He', 'Xiaomin Zhuang', 'Zhihua Wu'] | ['cs.CL', 'cs.LG', 'stat.ML'] | This paper investigates scaling laws for local SGD in LLM training, a distributed optimization algorithm that facilitates training on loosely connected devices. Through extensive experiments, we show that local SGD achieves competitive results compared to conventional methods, given equivalent model parameters, dataset... | 2024-09-20T04:02:48Z | Technical Report | null | null | Exploring Scaling Laws for Local SGD in Large Language Model Training | ['Qiaozhi He', 'Xiaomin Zhuang', 'Zhihua Wu'] | 2024 | arXiv.org | 4 | 20 | ['Computer Science', 'Mathematics'] |
2409.13268 | JoyHallo: Digital human model for Mandarin | ['Sheng Shi', 'Xuyang Cao', 'Jun Zhao', 'Guoxin Wang'] | ['cs.CV'] | In audio-driven video generation, creating Mandarin videos presents significant challenges. Collecting comprehensive Mandarin datasets is difficult, and the complex lip movements in Mandarin further complicate model training compared to English. In this study, we collected 29 hours of Mandarin speech video from JD Heal... | 2024-09-20T06:57:42Z | null | null | null | null | null | null | null | null | null | null |
2409.13321 | SLaVA-CXR: Small Language and Vision Assistant for Chest X-ray Report Automation | ['Jinge Wu', 'Yunsoo Kim', 'Daqian Shi', 'David Cliffton', 'Fenglin Liu', 'Honghan Wu'] | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV'] | Inspired by the success of large language models (LLMs), there is growing research interest in developing LLMs in the medical domain to assist clinicians. However, for hospitals, using closed-source commercial LLMs involves privacy issues, and developing open-source public LLMs requires large-scale computational resour... | 2024-09-20T08:28:46Z | null | null | null | null | null | null | null | null | null | null |
2409.13523 | EMMeTT: Efficient Multimodal Machine Translation Training | ['Piotr Żelasko', 'Zhehuai Chen', 'Mengru Wang', 'Daniel Galvez', 'Oleksii Hrinchuk', 'Shuoyang Ding', 'Ke Hu', 'Jagadeesh Balam', 'Vitaly Lavrukhin', 'Boris Ginsburg'] | ['cs.CL', 'cs.SD', 'eess.AS'] | A rising interest in the modality extension of foundation language models warrants discussion on the most effective, and efficient, multimodal training approach. This work focuses on neural machine translation (NMT) and proposes a joint multimodal training regime of Speech-LLM to include automatic speech translation (A... | 2024-09-20T14:03:23Z | 4 pages, submitted to ICASSP 2025 | null | null | EMMeTT: Efficient Multimodal Machine Translation Training | ['Piotr Zelasko', 'Zhehuai Chen', 'Mengru Wang', 'Daniel Galvez', 'Oleksii Hrinchuk', 'Shuoyang Ding', 'Ke Hu', 'Jagadeesh Balam', 'Vitaly Lavrukhin', 'Boris Ginsburg'] | 2024 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 1 | 33 | ['Computer Science', 'Engineering'] |
2409.13598 | Prithvi WxC: Foundation Model for Weather and Climate | ['Johannes Schmude', 'Sujit Roy', 'Will Trojak', 'Johannes Jakubik', 'Daniel Salles Civitarese', 'Shraddha Singh', 'Julian Kuehnert', 'Kumar Ankur', 'Aman Gupta', 'Christopher E Phillips', 'Romeo Kienzler', 'Daniela Szwarcman', 'Vishal Gaur', 'Rajat Shinde', 'Rohit Lal', 'Arlindo Da Silva', 'Jorge Luis Guevara Diaz', '... | ['cs.LG', 'physics.ao-ph'] | Triggered by the realization that AI emulators can rival the performance of traditional numerical weather prediction models running on HPC systems, there is now an increasing number of large AI models that address use cases such as forecasting, downscaling, or nowcasting. While the parallel developments in the AI liter... | 2024-09-20T15:53:17Z | null | null | null | Prithvi WxC: Foundation Model for Weather and Climate | ['J. Schmude', 'Sujit Roy', 'Will Trojak', 'Johannes Jakubik', 'D. S. Civitarese', 'Shraddha Singh', 'Julian Kuehnert', 'Kumar Ankur', 'Aman Gupta', 'C. Phillips', 'Romeo Kienzler', 'Daniela Szwarcman', 'Vishal Gaur', 'Rajat Shinde', 'Rohit Lal', 'Arlindo Da Silva', 'Jorge Luis Guevara Diaz', 'Anne Jones', 'S. Pfreunds... | 2024 | arXiv.org | 10 | 58 | ['Computer Science', 'Physics'] |
2409.13710 | You can remove GPT2's LayerNorm by fine-tuning | ['Stefan Heimersheim'] | ['cs.CL', 'cs.LG'] | The LayerNorm (LN) layer in GPT-style transformer models has long been a hindrance to mechanistic interpretability. LN is a crucial component required to stabilize the training of large language models, and LN or the similar RMSNorm have been used in practically all large language models based on the transformer archit... | 2024-09-06T16:17:06Z | Presented at the Attributing Model Behavior at Scale (ATTRIB) and Interpretable AI: Past, Present, and Future workshops at NeurIPS 2024 | null | null | You can remove GPT2's LayerNorm by fine-tuning | ['Stefan Heimersheim'] | 2024 | arXiv.org | 5 | 33 | ['Computer Science'] |
2409.13832 | GTSinger: A Global Multi-Technique Singing Corpus with Realistic Music Scores for All Singing Tasks | ['Yu Zhang', 'Changhao Pan', 'Wenxiang Guo', 'Ruiqi Li', 'Zhiyuan Zhu', 'Jialei Wang', 'Wenhao Xu', 'Jingyu Lu', 'Zhiqing Hong', 'Chuxin Wang', 'LiChao Zhang', 'Jinzheng He', 'Ziyue Jiang', 'Yuxin Chen', 'Chen Yang', 'Jiecheng Zhou', 'Xinyu Cheng', 'Zhou Zhao'] | ['eess.AS', 'cs.CL', 'cs.SD'] | The scarcity of high-quality and multi-task singing datasets significantly hinders the development of diverse controllable and personalized singing tasks, as existing singing datasets suffer from low quality, limited diversity of languages and singers, absence of multi-technique information and realistic music scores, ... | 2024-09-20T18:18:14Z | Accepted by NeurIPS 2024 (Spotlight) | null | null | null | null | null | null | null | null | null |
2409.13870 | Instruct-Tuning Pretrained Causal Language Models for Ancient Greek Papyrology and Epigraphy | ['Eric Cullhed'] | ['cs.CL', 'cs.AI', 'cs.LG'] | This article presents an experiment in fine-tuning a pretrained causal language model (Meta's Llama 3.1 8B Instruct) to assist with restoring missing or illegible characters in ancient Greek inscriptions and documentary papyri. Utilizing a straightforward instruction-based approach and a 95%/5% train/test split, the pa... | 2024-09-20T19:49:45Z | 9 pages, 1 table. To be submitted | null | null | null | null | null | null | null | null | null |
2409.13882 | Tabular Data Generation using Binary Diffusion | ['Vitaliy Kinakh', 'Slava Voloshynovskiy'] | ['cs.LG', 'cs.AI'] | Generating synthetic tabular data is critical in machine learning, especially when real data is limited or sensitive. Traditional generative models often face challenges due to the unique characteristics of tabular data, such as mixed data types and varied distributions, and require complex preprocessing or large pretr... | 2024-09-20T20:22:28Z | Accepted to 3rd Table Representation Learning Workshop @ NeurIPS 2024 | null | null | null | null | null | null | null | null | null |
2409.14074 | MultiMed: Multilingual Medical Speech Recognition via Attention Encoder Decoder | ['Khai Le-Duc', 'Phuc Phan', 'Tan-Hanh Pham', 'Bach Phan Tat', 'Minh-Huong Ngo', 'Chris Ngo', 'Thanh Nguyen-Tang', 'Truong-Son Hy'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Multilingual automatic speech recognition (ASR) in the medical domain serves as a foundational task for various downstream applications such as speech translation, spoken language understanding, and voice-activated assistants. This technology improves patient care by enabling efficient communication across language bar... | 2024-09-21T09:05:48Z | ACL 2025, 38 pages | null | null | null | null | null | null | null | null | null |
2409.14128 | Present and Future Generalization of Synthetic Image Detectors | ['Pablo Bernabeu-Perez', 'Enrique Lopez-Cuena', 'Dario Garcia-Gasulla'] | ['cs.CV', 'cs.AI', 'cs.LG'] | The continued release of increasingly realistic image generation models creates a demand for synthetic image detectors. To build effective detectors we must first understand how factors like data source diversity, training methodologies and image alterations affect their generalization capabilities. This work conducts ... | 2024-09-21T12:46:17Z | 21 pages, 12 figures | null | null | null | null | null | null | null | null | null |
2409.14485 | Video-XL: Extra-Long Vision Language Model for Hour-Scale Video Understanding | ['Yan Shu', 'Zheng Liu', 'Peitian Zhang', 'Minghao Qin', 'Junjie Zhou', 'Zhengyang Liang', 'Tiejun Huang', 'Bo Zhao'] | ['cs.CV'] | Long video understanding poses a significant challenge for current Multi-modal Large Language Models (MLLMs). Notably, the MLLMs are constrained by their limited context lengths and the substantial costs while processing long videos. Although several existing methods attempt to reduce visual tokens, their strategies en... | 2024-09-22T15:13:31Z | null | null | null | Video-XL: Extra-Long Vision Language Model for Hour-Scale Video Understanding | ['Yan Shu', 'Peitian Zhang', 'Zheng Liu', 'Minghao Qin', 'Junjie Zhou', 'Tiejun Huang', 'Bo Zhao'] | 2024 | arXiv.org | 59 | 43 | ['Computer Science'] |
2409.14713 | Phantom of Latent for Large Language and Vision Models | ['Byung-Kwan Lee', 'Sangyun Chung', 'Chae Won Kim', 'Beomchan Park', 'Yong Man Ro'] | ['cs.CV'] | The success of visual instruction tuning has accelerated the development of large language and vision models (LLVMs). Following the scaling laws of instruction-tuned large language models (LLMs), LLVMs either have further increased their sizes, reaching 26B, 34B, and even 80B parameters. While this increase in model si... | 2024-09-23T05:19:06Z | Code is available in https://github.com/ByungKwanLee/Phantom | null | null | null | null | null | null | null | null | null |
2409.14826 | ToolPlanner: A Tool Augmented LLM for Multi Granularity Instructions with Path Planning and Feedback | ['Qinzhuo Wu', 'Wei Liu', 'Jian Luan', 'Bin Wang'] | ['cs.CL', 'cs.AI'] | Recently, tool-augmented LLMs have gained increasing attention. Given an instruction, tool-augmented LLMs can interact with various external tools in multiple rounds and provide a final answer. However, previous LLMs were trained on overly detailed instructions, which included API names or parameters, while real users ... | 2024-09-23T08:58:48Z | null | null | null | null | null | null | null | null | null | null |
2409.15250 | ReVLA: Reverting Visual Domain Limitation of Robotic Foundation Models | ['Sombit Dey', 'Jan-Nico Zaech', 'Nikolay Nikolov', 'Luc Van Gool', 'Danda Pani Paudel'] | ['cs.CV', 'cs.RO'] | Recent progress in large language models and access to large-scale robotic datasets has sparked a paradigm shift in robotics models transforming them into generalists able to adapt to various tasks, scenes, and robot modalities. A large step for the community are open Vision Language Action models which showcase strong... | 2024-09-23T17:47:59Z | Accepted at ICRA-2025, Atlanta | null | null | null | null | null | null | null | null | null |
2409.15278 | PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language Instructions | ['Weifeng Lin', 'Xinyu Wei', 'Renrui Zhang', 'Le Zhuo', 'Shitian Zhao', 'Siyuan Huang', 'Huan Teng', 'Junlin Xie', 'Yu Qiao', 'Peng Gao', 'Hongsheng Li'] | ['cs.CV'] | This paper presents a versatile image-to-image visual assistant, PixWizard, designed for image generation, manipulation, and translation based on free-form language instructions. To this end, we tackle a variety of vision tasks into a unified image-text-to-image generation framework and curate an Omni Pixel-to-Pixel In... | 2024-09-23T17:59:46Z | Code is released at https://github.com/AFeng-x/PixWizard | null | null | PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language Instructions | ['Weifeng Lin', 'Xinyu Wei', 'Renrui Zhang', 'Le Zhuo', 'Shitian Zhao', 'Siyuan Huang', 'Junlin Xie', 'Y. Qiao', 'Peng Gao', 'Hongsheng Li'] | 2024 | International Conference on Learning Representations | 14 | 162 | ['Computer Science'] |
2409.15355 | Block-Attention for Efficient Prefilling | ['Dongyang Ma', 'Yan Wang', 'Lan Tian'] | ['cs.LG', 'cs.AI', 'cs.CL'] | We introduce Block-attention, an attention mechanism designed to address the increased inference latency and cost in Retrieval-Augmented Generation (RAG) scenarios. Traditional approaches often encode the entire context in an auto-regressive manner. Instead, Block-attention divides retrieved documents into discrete blo... | 2024-09-14T02:34:26Z | ICLR 2025 | null | null | null | null | null | null | null | null | null |
2409.15371 | Balancing LoRA Performance and Efficiency with Simple Shard Sharing | ['Jiale Kang', 'Qingyu Yin'] | ['cs.CL', 'cs.AI'] | Parameter-Efficient Fine-Tuning (PEFT) methods, particularly Low-Rank Adaptation (LoRA), effectively reduce the number of trainable parameters in Large Language Models (LLMs). However, as model scales continue to grow, the demand for computational resources remains a significant challenge. Existing LoRA variants often ... | 2024-09-19T10:26:42Z | null | null | null | Balancing LoRA Performance and Efficiency with Simple Shard Sharing | ['Jiale Kang', 'Qingyu Yin'] | 2024 | null | 0 | 26 | ['Computer Science'] |
2409.15398 | Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI | ['Ambrish Rawat', 'Stefan Schoepf', 'Giulio Zizzo', 'Giandomenico Cornacchia', 'Muhammad Zaid Hameed', 'Kieran Fraser', 'Erik Miehling', 'Beat Buesser', 'Elizabeth M. Daly', 'Mark Purcell', 'Prasanna Sattigeri', 'Pin-Yu Chen', 'Kush R. Varshney'] | ['cs.CR', 'cs.AI', 'cs.LG'] | As generative AI, particularly large language models (LLMs), become increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge and put a focus on adversarial threats in natural language and multi-modal systems. Red-teaming has gained importance in proactively identifying weakne... | 2024-09-23T10:18:10Z | null | null | null | null | null | null | null | null | null | null |
2409.15672 | Language-based Audio Moment Retrieval | ['Hokuto Munakata', 'Taichi Nishimura', 'Shota Nakada', 'Tatsuya Komatsu'] | ['eess.AS', 'cs.CL', 'cs.SD'] | In this paper, we propose and design a new task called audio moment retrieval (AMR). Unlike conventional language-based audio retrieval tasks that search for short audio clips from an audio database, AMR aims to predict relevant moments in untrimmed long audio based on a text query. Given the lack of prior work in AMR,... | 2024-09-24T02:24:48Z | null | null | null | null | null | null | null | null | null | null |
2409.15700 | Making Text Embedders Few-Shot Learners | ['Chaofan Li', 'MingHao Qin', 'Shitao Xiao', 'Jianlyu Chen', 'Kun Luo', 'Yingxia Shao', 'Defu Lian', 'Zheng Liu'] | ['cs.IR', 'cs.CL'] | Large language models (LLMs) with decoder-only architectures demonstrate remarkable in-context learning (ICL) capabilities. This feature enables them to effectively handle both familiar and novel tasks by utilizing examples provided within their input context. Recognizing the potential of this capability, we propose le... | 2024-09-24T03:30:19Z | null | null | null | null | null | null | null | null | null | null |
2409.15804 | NER-Luxury: Named entity recognition for the fashion and luxury domain | ['Akim Mousterou'] | ['cs.CL'] | In this study, we address multiple challenges of developing a named-entity recognition model in English for the fashion and luxury industry, namely the entity disambiguation, French technical jargon in multiple sub-sectors, scarcity of the ESG methodology, and a disparate company structures of the sector with small and... | 2024-09-24T06:58:28Z | 28 pages, 6 figures | null | null | null | null | null | null | null | null | null |
2409.15933 | SLIMER-IT: Zero-Shot NER on Italian Language | ['Andrew Zamai', 'Leonardo Rigutini', 'Marco Maggini', 'Andrea Zugarini'] | ['cs.CL', 'cs.IR'] | Traditional approaches to Named Entity Recognition (NER) frame the task into a BIO sequence labeling problem. Although these systems often excel in the downstream task at hand, they require extensive annotated data and struggle to generalize to out-of-distribution input domains and unseen entity types. On the contrary,... | 2024-09-24T09:57:25Z | null | null | null | null | null | null | null | null | null | null |
2409.15977 | TCSinger: Zero-Shot Singing Voice Synthesis with Style Transfer and Multi-Level Style Control | ['Yu Zhang', 'Ziyue Jiang', 'Ruiqi Li', 'Changhao Pan', 'Jinzheng He', 'Rongjie Huang', 'Chuxin Wang', 'Zhou Zhao'] | ['eess.AS', 'cs.CL', 'cs.SD'] | Zero-shot singing voice synthesis (SVS) with style transfer and style control aims to generate high-quality singing voices with unseen timbres and styles (including singing method, emotion, rhythm, technique, and pronunciation) from audio and text prompts. However, the multifaceted nature of singing styles poses a sign... | 2024-09-24T11:18:09Z | Accepted by EMNLP 2024 | Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 1960-1975 | 10.18653/v1/2024.emnlp-main.117 | TCSinger: Zero-Shot Singing Voice Synthesis with Style Transfer and Multi-Level Style Control | ['Yu Zhang', 'Ziyue Jiang', 'Ruiqi Li', 'Changhao Pan', 'Jinzheng He', 'Rongjie Huang', 'Chuxin Wang', 'Zhou Zhao'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 8 | 39 | ['Computer Science', 'Engineering'] |
2409.15997 | Improvements to SDXL in NovelAI Diffusion V3 | ['Juan Ossa', 'Eren Doğan', 'Alex Birch', 'F. Johnson'] | ['cs.CV', 'cs.AI', 'cs.LG'] | In this technical report, we document the changes we made to SDXL in the process of training NovelAI Diffusion V3, our state of the art anime image generation model. | 2024-09-24T11:57:12Z | 14 pages, 8 figures | null | null | null | null | null | null | null | null | null |
2409.16016 | VascX Models: Model Ensembles for Retinal Vascular Analysis from Color Fundus Images | ['Jose Vargas Quiros', 'Bart Liefers', 'Karin van Garderen', 'Jeroen Vermeulen', 'Eyened Reading Center', 'Sinergia Consortium', 'Caroline Klaver'] | ['eess.IV', 'cs.CV', 'q-bio.TO'] | We introduce VascX models, a comprehensive set of model ensembles for analyzing retinal vasculature from color fundus images (CFIs). Annotated CFIs were aggregated from public datasets. Additional CFIs, mainly from the population-based Rotterdam Study were annotated by graders for arteries and veins at pixel level, re... | 2024-09-24T12:19:31Z | null | null | null | null | null | null | null | null | null | null |
2409.16040 | Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts | ['Xiaoming Shi', 'Shiyu Wang', 'Yuqi Nie', 'Dianqi Li', 'Zhou Ye', 'Qingsong Wen', 'Ming Jin'] | ['cs.LG', 'cs.AI'] | Deep learning for time series forecasting has seen significant advancements over the past decades. However, despite the success of large-scale pre-training in language and vision domains, pre-trained time series models remain limited in scale and operate at a high cost, hindering the development of larger capable forec... | 2024-09-24T12:42:18Z | Accepted by the 13th International Conference on Learning Representations (ICLR 2025) | null | null | Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts | ['X. Shi', 'Shiyu Wang', 'Yuqi Nie', 'Dianqi Li', 'Zhou Ye', 'Qingsong Wen', 'Ming Jin'] | 2024 | International Conference on Learning Representations | 56 | 98 | ['Computer Science'] |
2409.16117 | Generative Speech Foundation Model Pretraining for High-Quality Speech Extraction and Restoration | ['Pin-Jui Ku', 'Alexander H. Liu', 'Roman Korostik', 'Sung-Feng Huang', 'Szu-Wei Fu', 'Ante Jukić'] | ['eess.AS', 'cs.SD'] | This paper proposes a generative pretraining foundation model for high-quality speech restoration tasks. By directly operating on complex-valued short-time Fourier transform coefficients, our model does not rely on any vocoders for time-domain signal reconstruction. As a result, our model simplifies the synthesis proce... | 2024-09-24T14:24:47Z | 5 pages, Submitted to ICASSP 2025. The implementation and configuration could be found in https://github.com/NVIDIA/NeMo/blob/main/examples/audio/conf/flow_matching_generative_ssl_pretraining.yaml The audio demo page could be found in https://kuray107.github.io/ssl_gen25-examples/index.html | null | null | Generative Speech Foundation Model Pretraining for High-Quality Speech Extraction and Restoration | ['Pin-Jui Ku', 'Alexander H. Liu', 'Roman Korostik', 'Sung-Feng Huang', 'Szu-Wei Fu', "Ante Juki'c"] | 2024 | arXiv.org | 4 | 42 | ['Engineering', 'Computer Science'] |
2409.16211 | MaskBit: Embedding-free Image Generation via Bit Tokens | ['Mark Weber', 'Lijun Yu', 'Qihang Yu', 'Xueqing Deng', 'Xiaohui Shen', 'Daniel Cremers', 'Liang-Chieh Chen'] | ['cs.CV', 'cs.LG'] | Masked transformer models for class-conditional image generation have become a compelling alternative to diffusion models. Typically comprising two stages - an initial VQGAN model for transitioning between latent space and image space, and a subsequent Transformer model for image generation within latent space - these ... | 2024-09-24T16:12:12Z | Accepted to TMLR w. featured and reproducibility certification. Project page: https://weber-mark.github.io/projects/maskbit.html | null | null | MaskBit: Embedding-free Image Generation via Bit Tokens | ['Mark Weber', 'Lijun Yu', 'Qihang Yu', 'Xueqing Deng', 'Xiaohui Shen', 'Daniel Cremers', 'Liang-Chieh Chen'] | 2024 | Trans. Mach. Learn. Res. | 40 | 59 | ['Computer Science'] |
2409.16235 | EuroLLM: Multilingual Language Models for Europe | ['Pedro Henrique Martins', 'Patrick Fernandes', 'João Alves', 'Nuno M. Guerreiro', 'Ricardo Rei', 'Duarte M. Alves', 'José Pombal', 'Amin Farajian', 'Manuel Faysse', 'Mateusz Klimaszewski', 'Pierre Colombo', 'Barry Haddow', 'José G. C. de Souza', 'Alexandra Birch', 'André F. T. Martins'] | ['cs.CL'] | The quality of open-weight LLMs has seen significant improvement, yet they remain predominantly focused on English. In this paper, we introduce the EuroLLM project, aimed at developing a suite of open-weight multilingual LLMs capable of understanding and generating text in all official European Union languages, as well... | 2024-09-24T16:51:36Z | null | null | null | EuroLLM: Multilingual Language Models for Europe | ['P. Martins', 'Patrick Fernandes', 'João Alves', 'Nuno M. Guerreiro', 'Ricardo Rei', 'Duarte M. Alves', 'José P. Pombal', 'Amin Farajian', 'Manuel Faysse', 'Mateusz Klimaszewski', 'Pierre Colombo', 'B. Haddow', 'José G. C. de Souza', 'Alexandra Birch', 'André Martins'] | 2024 | arXiv.org | 40 | 46 | ['Computer Science'] |
2409.16563 | Enhancing disease detection in radiology reports through fine-tuning lightweight LLM on weak labels | ['Yishu Wei', 'Xindi Wang', 'Hanley Ong', 'Yiliang Zhou', 'Adam Flanders', 'George Shih', 'Yifan Peng'] | ['cs.AI'] | Despite significant progress in applying large language models (LLMs) to the medical domain, several limitations still prevent them from practical applications. Among these are the constraints on model size and the lack of cohort-specific labeled datasets. In this work, we investigated the potential of improving a ligh... | 2024-09-25T02:29:44Z | null | null | null | Enhancing disease detection in radiology reports through fine-tuning lightweight LLM on weak labels | ['Yishu Wei', 'Xindi Wang', 'Hanley Ong', 'Yiliang Zhou', 'Adam E. Flanders', 'George Shih', 'Yifan Peng'] | 2024 | AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science | 2 | 15 | ['Computer Science', 'Medicine'] |
2409.16925 | Game4Loc: A UAV Geo-Localization Benchmark from Game Data | ['Yuxiang Ji', 'Boyong He', 'Zhuoyue Tan', 'Liaoni Wu'] | ['cs.CV'] | The vision-based geo-localization technology for UAV, serving as a secondary source of GPS information in addition to the global navigation satellite systems (GNSS), can still operate independently in the GPS-denied environment. Recent deep learning based methods attribute this as the task of image matching and retriev... | 2024-09-25T13:33:28Z | AAAI 2025, Project page: https://yux1angji.github.io/game4loc/ | null | null | Game4Loc: A UAV Geo-Localization Benchmark from Game Data | ['Yuxiang Ji', 'Boyong He', 'Zhuoyue Tan', 'Liaoni Wu'] | 2024 | AAAI Conference on Artificial Intelligence | 4 | 31 | ['Computer Science'] |
2409.17058 | Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors | ['Aiping Zhang', 'Zongsheng Yue', 'Renjing Pei', 'Wenqi Ren', 'Xiaochun Cao'] | ['cs.CV'] | Diffusion-based image super-resolution (SR) methods have achieved remarkable success by leveraging large pre-trained text-to-image diffusion models as priors. However, these methods still face two challenges: the requirement for dozens of sampling steps to achieve satisfactory results, which limits efficiency in real s... | 2024-09-25T16:15:21Z | The code is available at https://github.com/ArcticHare105/S3Diff | null | null | null | null | null | null | null | null | null |
2409.17066 | VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models | ['Yifei Liu', 'Jicheng Wen', 'Yang Wang', 'Shengyu Ye', 'Li Lyna Zhang', 'Ting Cao', 'Cheng Li', 'Mao Yang'] | ['cs.AI'] | Scaling model size significantly challenges the deployment and inference of Large Language Models (LLMs). Due to the redundancy in LLM weights, recent research has focused on pushing weight-only quantization to extremely low-bit (even down to 2 bits). It reduces memory requirements, optimizes storage costs, and decreas... | 2024-09-25T16:25:45Z | EMNLP 2024, Main, Poster | null | null | null | null | null | null | null | null | null |
2,409.17095 | General Detection-based Text Line Recognition | ['Raphael Baena', 'Syrine Kalleli', 'Mathieu Aubry'] | ['cs.CV'] | We introduce a general detection-based approach to text line recognition, be
it printed (OCR) or handwritten (HTR), with Latin, Chinese, or ciphered
characters. Detection-based approaches have until now been largely discarded
for HTR because reading characters separately is often challenging, and
character-level annota... | 2024-09-25T17:05:55Z | null | null | null | General Detection-based Text Line Recognition | ['Raphael Baena', 'Syrine Kalleli', 'Mathieu Aubry'] | 2024 | Neural Information Processing Systems | 0 | 61 | ['Computer Science'] |
2409.17106 | Text2CAD: Generating Sequential CAD Models from Beginner-to-Expert Level
Text Prompts | ['Mohammad Sadil Khan', 'Sankalp Sinha', 'Talha Uddin Sheikh', 'Didier Stricker', 'Sk Aziz Ali', 'Muhammad Zeshan Afzal'] | ['cs.CV', 'cs.GR'] | Prototyping complex computer-aided design (CAD) models in modern softwares
can be very time-consuming. This is due to the lack of intelligent systems that
can quickly generate simpler intermediate parts. We propose Text2CAD, the first
AI framework for generating text-to-parametric CAD models using
designer-friendly ins... | 2024-09-25T17:19:33Z | Accepted in NeurIPS 2024 (Spotlight) | null | null | Text2CAD: Generating Sequential CAD Models from Beginner-to-Expert Level Text Prompts | ['Mohammad Sadil Khan', 'Sankalp Sinha', 'Talha Uddin Sheikh', 'Didier Stricker', 'Sk Aziz Ali', 'Muhammad Zeshan Afzal'] | 2024 | arXiv.org | 11 | 55 | ['Computer Science'] |
2409.17115 | Programming Every Example: Lifting Pre-training Data Quality Like
Experts at Scale | ['Fan Zhou', 'Zengzhi Wang', 'Qian Liu', 'Junlong Li', 'Pengfei Liu'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Large language model pre-training has traditionally relied on human experts
to craft heuristics for improving the corpora quality, resulting in numerous
rules developed to date. However, these rules lack the flexibility to address
the unique characteristics of individual example effectively. Meanwhile,
applying tailore... | 2024-09-25T17:28:13Z | 47 pages, 13 figures, 34 tables | null | null | null | null | null | null | null | null | null |
2409.17145 | DreamWaltz-G: Expressive 3D Gaussian Avatars from Skeleton-Guided 2D
Diffusion | ['Yukun Huang', 'Jianan Wang', 'Ailing Zeng', 'Zheng-Jun Zha', 'Lei Zhang', 'Xihui Liu'] | ['cs.CV', 'cs.GR', 'cs.LG'] | Leveraging pretrained 2D diffusion models and score distillation sampling
(SDS), recent methods have shown promising results for text-to-3D avatar
generation. However, generating high-quality 3D avatars capable of expressive
animation remains challenging. In this work, we present DreamWaltz-G, a novel
learning framewor... | 2024-09-25T17:59:45Z | Project page: https://yukun-huang.github.io/DreamWaltz-G/ | null | null | DreamWaltz-G: Expressive 3D Gaussian Avatars from Skeleton-Guided 2D Diffusion | ['Yukun Huang', 'Jianan Wang', 'Ailing Zeng', 'Zhengxian Zha', 'Lei Zhang', 'Xihui Liu'] | 2024 | IEEE Transactions on Pattern Analysis and Machine Intelligence | 7 | 83 | ['Computer Science', 'Medicine'] |
2409.17146 | Molmo and PixMo: Open Weights and Open Data for State-of-the-Art
Vision-Language Models | ['Matt Deitke', 'Christopher Clark', 'Sangho Lee', 'Rohun Tripathi', 'Yue Yang', 'Jae Sung Park', 'Mohammadreza Salehi', 'Niklas Muennighoff', 'Kyle Lo', 'Luca Soldaini', 'Jiasen Lu', 'Taira Anderson', 'Erin Bransom', 'Kiana Ehsani', 'Huong Ngo', 'YenSung Chen', 'Ajay Patel', 'Mark Yatskar', 'Chris Callison-Burch', 'An... | ['cs.CV', 'cs.CL', 'cs.LG'] | Today's most advanced vision-language models (VLMs) remain proprietary. The
strongest open-weight models rely heavily on synthetic data from proprietary
VLMs to achieve good performance, effectively distilling these closed VLMs into
open ones. As a result, the community has been missing foundational knowledge
about how... | 2024-09-25T17:59:51Z | Updated with ablations and more technical details | null | null | null | null | null | null | null | null | null |
2409.17312 | BabyLlama-2: Ensemble-Distilled Models Consistently Outperform Teachers
With Limited Data | ['Jean-Loup Tastet', 'Inar Timiryasov'] | ['cs.CL', 'cs.LG', 'I.2.7'] | We present BabyLlama-2, a 345 million parameter model distillation-pretrained
from two teachers on a 10 million word corpus for the BabyLM competition. On
BLiMP and SuperGLUE benchmarks, BabyLlama-2 outperforms baselines trained on
both 10 and 100 million word datasets with the same data mix, as well as its
teacher mod... | 2024-09-25T19:46:49Z | 9 pages, 3 figures, 5 tables, submitted to the BabyLM Challenge
(CoNLL 2024 Shared Task) | null | null | BabyLlama-2: Ensemble-Distilled Models Consistently Outperform Teachers With Limited Data | ['J. Tastet', 'I. Timiryasov'] | 2024 | arXiv.org | 4 | 20 | ['Computer Science'] |
2409.17433 | HDFlow: Enhancing LLM Complex Problem-Solving with Hybrid Thinking and
Dynamic Workflows | ['Wenlin Yao', 'Haitao Mi', 'Dong Yu'] | ['cs.CL', 'cs.AI'] | Despite recent advancements in large language models (LLMs), their
performance on complex reasoning problems requiring multi-step thinking and
combining various skills is still limited. To address this, we propose a novel
framework HDFlow for complex reasoning with LLMs that combines fast and slow
thinking modes in an ... | 2024-09-25T23:52:17Z | 27 pages, 5 figures | null | null | null | null | null | null | null | null | null |
2409.17692 | MIO: A Foundation Model on Multimodal Tokens | ['Zekun Wang', 'King Zhu', 'Chunpu Xu', 'Wangchunshu Zhou', 'Jiaheng Liu', 'Yibo Zhang', 'Jiashuo Wang', 'Ning Shi', 'Siyu Li', 'Yizhi Li', 'Haoran Que', 'Zhaoxiang Zhang', 'Yuanxing Zhang', 'Ge Zhang', 'Ke Xu', 'Jie Fu', 'Wenhao Huang'] | ['cs.CL', 'cs.AI', 'cs.LG'] | In this paper, we introduce MIO, a novel foundation model built on multimodal
tokens, capable of understanding and generating speech, text, images, and
videos in an end-to-end, autoregressive manner. While the emergence of large
language models (LLMs) and multimodal large language models (MM-LLMs) propels
advancements ... | 2024-09-26T09:57:16Z | Technical Report. Codes and models are available in
https://github.com/MIO-Team/MIO | null | null | null | null | null | null | null | null | null |
2409.17808 | Generative Modeling of Molecular Dynamics Trajectories | ['Bowen Jing', 'Hannes Stärk', 'Tommi Jaakkola', 'Bonnie Berger'] | ['q-bio.BM', 'cs.LG'] | Molecular dynamics (MD) is a powerful technique for studying microscopic
phenomena, but its computational cost has driven significant interest in the
development of deep learning-based surrogate models. We introduce generative
modeling of molecular trajectories as a paradigm for learning flexible
multi-task surrogate m... | 2024-09-26T13:02:28Z | NeurIPS 2024 | null | null | null | null | null | null | null | null | null |
2409.17827 | BeanCounter: A low-toxicity, large-scale, and open dataset of
business-oriented text | ['Siyan Wang', 'Bradford Levy'] | ['cs.CL'] | Many of the recent breakthroughs in language modeling have resulted from
scaling effectively the same model architecture to larger datasets. In this
vein, recent work has highlighted performance gains from increasing training
dataset size and quality, suggesting a need for novel sources of large-scale
datasets. In this... | 2024-09-26T13:26:46Z | null | null | null | BeanCounter: A low-toxicity, large-scale, and open dataset of business-oriented text | ['Siyan Wang', 'Bradford Levy'] | 2024 | Neural Information Processing Systems | 2 | 91 | ['Computer Science'] |
2409.17874 | DarkSAM: Fooling Segment Anything Model to Segment Nothing | ['Ziqi Zhou', 'Yufei Song', 'Minghui Li', 'Shengshan Hu', 'Xianlong Wang', 'Leo Yu Zhang', 'Dezhong Yao', 'Hai Jin'] | ['cs.AI'] | Segment Anything Model (SAM) has recently gained much attention for its
outstanding generalization to unseen data and tasks. Despite its promising
prospect, the vulnerabilities of SAM, especially to universal adversarial
perturbation (UAP) have not been thoroughly investigated yet. In this paper, we
propose DarkSAM, th... | 2024-09-26T14:20:14Z | This paper has been accepted by the 38th Annual Conference on Neural
Information Processing Systems (NeurIPS'24) | null | null | DarkSAM: Fooling Segment Anything Model to Segment Nothing | ['Ziqi Zhou', 'Yufei Song', 'Minghui Li', 'Shengshan Hu', 'Xianlong Wang', 'Leo Yu Zhang', 'Dezhong Yao', 'Hai Jin'] | 2024 | Neural Information Processing Systems | 12 | 45 | ['Computer Science'] |
2409.17892 | EMMA-500: Enhancing Massively Multilingual Adaptation of Large Language
Models | ['Shaoxiong Ji', 'Zihao Li', 'Indraneil Paul', 'Jaakko Paavola', 'Peiqin Lin', 'Pinzhen Chen', "Dayyán O'Brien", 'Hengyu Luo', 'Hinrich Schütze', 'Jörg Tiedemann', 'Barry Haddow'] | ['cs.CL'] | In this work, we introduce EMMA-500, a large-scale multilingual language
model continue-trained on texts across 546 languages designed for enhanced
multilingual performance, focusing on improving language coverage for
low-resource languages. To facilitate continual pre-training, we compile the
MaLA corpus, a comprehens... | 2024-09-26T14:40:45Z | null | null | null | null | null | null | null | null | null | null |
2409.17912 | Atlas-Chat: Adapting Large Language Models for Low-Resource Moroccan
Arabic Dialect | ['Guokan Shang', 'Hadi Abdine', 'Yousef Khoubrane', 'Amr Mohamed', 'Yassine Abbahaddou', 'Sofiane Ennadir', 'Imane Momayiz', 'Xuguang Ren', 'Eric Moulines', 'Preslav Nakov', 'Michalis Vazirgiannis', 'Eric Xing'] | ['cs.CL'] | We introduce Atlas-Chat, the first-ever collection of LLMs specifically
developed for dialectal Arabic. Focusing on Moroccan Arabic, also known as
Darija, we construct our instruction dataset by consolidating existing Darija
language resources, creating novel datasets both manually and synthetically,
and translating En... | 2024-09-26T14:56:38Z | null | null | null | null | null | null | null | null | null | null |
2409.18042 | EMOVA: Empowering Language Models to See, Hear and Speak with Vivid
Emotions | ['Kai Chen', 'Yunhao Gou', 'Runhui Huang', 'Zhili Liu', 'Daxin Tan', 'Jing Xu', 'Chunwei Wang', 'Yi Zhu', 'Yihan Zeng', 'Kuo Yang', 'Dingdong Wang', 'Kun Xiang', 'Haoyuan Li', 'Haoli Bai', 'Jianhua Han', 'Xiaohui Li', 'Weike Jin', 'Nian Xie', 'Yu Zhang', 'James T. Kwok', 'Hengshuang Zhao', 'Xiaodan Liang', 'Dit-Yan Yeu... | ['cs.CV', 'cs.CL'] | GPT-4o, an omni-modal model that enables vocal conversations with diverse
emotions and tones, marks a milestone for omni-modal foundation models.
However, empowering Large Language Models to perceive and generate images,
texts, and speeches end-to-end with publicly available data remains challenging
for the open-source... | 2024-09-26T16:44:02Z | Accepted by CVPR 2025. Project Page: https://emova-ollm.github.io/ | null | null | null | null | null | null | null | null | null |
2409.18111 | E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding | ['Ye Liu', 'Zongyang Ma', 'Zhongang Qi', 'Yang Wu', 'Ying Shan', 'Chang Wen Chen'] | ['cs.CV'] | Recent advances in Video Large Language Models (Video-LLMs) have demonstrated
their great potential in general-purpose video understanding. To verify the
significance of these models, a number of benchmarks have been proposed to
diagnose their capabilities in different scenarios. However, existing
benchmarks merely eva... | 2024-09-26T17:53:04Z | Accepted to NeurIPS 2024 Datasets and Benchmarks Track | null | null | null | null | null | null | null | null | null |
2409.18124 | Lotus: Diffusion-based Visual Foundation Model for High-quality Dense
Prediction | ['Jing He', 'Haodong Li', 'Wei Yin', 'Yixun Liang', 'Leheng Li', 'Kaiqiang Zhou', 'Hongbo Zhang', 'Bingbing Liu', 'Ying-Cong Chen'] | ['cs.CV'] | Leveraging the visual priors of pre-trained text-to-image diffusion models
offers a promising solution to enhance zero-shot generalization in dense
prediction tasks. However, existing methods often uncritically use the original
diffusion formulation, which may not be optimal due to the fundamental
differences between d... | 2024-09-26T17:58:55Z | The first two authors contributed equally. Project page:
https://lotus3d.github.io/ | null | null | null | null | null | null | null | null | null |
2409.18125 | LLaVA-3D: A Simple yet Effective Pathway to Empowering LMMs with
3D-awareness | ['Chenming Zhu', 'Tai Wang', 'Wenwei Zhang', 'Jiangmiao Pang', 'Xihui Liu'] | ['cs.CV'] | Recent advancements in Large Multimodal Models (LMMs) have greatly enhanced
their proficiency in 2D visual understanding tasks, enabling them to
effectively process and understand images and videos. However, the development
of LMMs with 3D scene understanding capabilities has been hindered by the lack
of large-scale 3D... | 2024-09-26T17:59:11Z | Project page: https://zcmax.github.io/projects/LLaVA-3D/ | null | null | null | null | null | null | null | null | null |
2409.18193 | GrEmLIn: A Repository of Green Baseline Embeddings for 87 Low-Resource
Languages Injected with Multilingual Graph Knowledge | ['Daniil Gurgurov', 'Rishu Kumar', 'Simon Ostermann'] | ['cs.CL'] | Contextualized embeddings based on large language models (LLMs) are available
for various languages, but their coverage is often limited for lower resourced
languages. Using LLMs for such languages is often difficult due to a high
computational cost; not only during training, but also during inference. Static
word embe... | 2024-09-26T18:10:26Z | Long paper, accepted to NAACL 2025 Findings | null | null | GrEmLIn: A Repository of Green Baseline Embeddings for 87 Low-Resource Languages Injected with Multilingual Graph Knowledge | ['Daniil Gurgurov', 'Rishu Kumar', 'Simon Ostermann'] | 2024 | North American Chapter of the Association for Computational Linguistics | 2 | 80 | ['Computer Science'] |
2409.18412 | SciDFM: A Large Language Model with Mixture-of-Experts for Science | ['Liangtai Sun', 'Danyu Luo', 'Da Ma', 'Zihan Zhao', 'Baocai Chen', 'Zhennan Shen', 'Su Zhu', 'Lu Chen', 'Xin Chen', 'Kai Yu'] | ['cs.CL', 'cs.AI'] | Recently, there has been a significant upsurge of interest in leveraging
large language models (LLMs) to assist scientific discovery. However, most LLMs
only focus on general science, while they lack domain-specific knowledge, such
as chemical molecules and amino acid sequences. To bridge these gaps, we
introduce SciDF... | 2024-09-27T03:00:29Z | 12 pages, 1 figure, 9 tables. Technical Report, accepted by NeurIPS
2024 Workshop FM4Science | null | null | null | null | null | null | null | null | null |
2409.18511 | Do We Need Domain-Specific Embedding Models? An Empirical Investigation | ['Yixuan Tang', 'Yi Yang'] | ['cs.CL', 'cs.IR'] | Embedding models play a crucial role in representing and retrieving
information across various NLP applications. Recent advancements in Large
Language Models (LLMs) have further enhanced the performance of embedding
models, which are trained on massive amounts of text covering almost every
domain. These models are ofte... | 2024-09-27T07:46:06Z | https://github.com/yixuantt/FinMTEB, The newer version:
arXiv:2502.10990 | null | null | null | null | null | null | null | null | null |
2409.18695 | KALE-LM: Unleash The Power Of AI For Science Via Knowledge And Logic
Enhanced Large Model | ['Weichen Dai', 'Yezeng Chen', 'Zijie Dai', 'Yubo Liu', 'Zhijie Huang', 'Yixuan Pan', 'Baiyang Song', 'Chengli Zhong', 'Xinhe Li', 'Zeyu Wang', 'Zhuoying Feng', 'Yi Zhou'] | ['cs.AI', 'cs.CE', 'cs.CL'] | Artificial intelligence is gradually demonstrating its immense potential, and
increasing attention is being given to how AI can be harnessed to advance
scientific research. In this vision paper, we present our perspectives on how
AI can better assist scientific inquiry and explore corresponding technical
approach. We h... | 2024-09-27T12:33:57Z | null | null | null | null | null | null | null | null | null | null |
2409.18747 | Cottention: Linear Transformers With Cosine Attention | ['Gabriel Mongaras', 'Trevor Dohm', 'Eric C. Larson'] | ['cs.LG'] | Attention mechanisms, particularly softmax attention, have been instrumental
in the success of transformer-based models such as GPT. However, the quadratic
memory complexity of softmax attention with respect to sequence length poses
significant challenges for processing longer sequences. We introduce
Cottention, a nove... | 2024-09-27T13:38:36Z | 12 pages, 5 figures | null | null | null | null | null | null | null | null | null |
2409.18869 | Emu3: Next-Token Prediction is All You Need | ['Xinlong Wang', 'Xiaosong Zhang', 'Zhengxiong Luo', 'Quan Sun', 'Yufeng Cui', 'Jinsheng Wang', 'Fan Zhang', 'Yueze Wang', 'Zhen Li', 'Qiying Yu', 'Yingli Zhao', 'Yulong Ao', 'Xuebin Min', 'Tao Li', 'Boya Wu', 'Bo Zhao', 'Bowen Zhang', 'Liangdong Wang', 'Guang Liu', 'Zheqi He', 'Xi Yang', 'Jingjing Liu', 'Yonghua Lin',... | ['cs.CV'] | While next-token prediction is considered a promising path towards artificial
general intelligence, it has struggled to excel in multimodal tasks, which are
still dominated by diffusion models (e.g., Stable Diffusion) and compositional
approaches (e.g., CLIP combined with LLMs). In this paper, we introduce Emu3, a
new ... | 2024-09-27T16:06:11Z | Project Page: https://emu.baai.ac.cn | null | null | Emu3: Next-Token Prediction is All You Need | ['Xinlong Wang', 'Xiaosong Zhang', 'Zhengxiong Luo', 'Quan Sun', 'Yufeng Cui', 'Jinsheng Wang', 'Fan Zhang', 'Yueze Wang', 'Zhen Li', 'Qiying Yu', 'Yingli Zhao', 'Yulong Ao', 'Xuebin Min', 'Tao Li', 'Boya Wu', 'Bo Zhao', 'Bowen Zhang', 'Lian-zi Wang', 'Guang Liu', 'Zheqi He', 'Xi Yang', 'Jingjing Liu', 'Yonghua Lin', '... | 2024 | arXiv.org | 233 | 102 | ['Computer Science'] |
2409.19339 | Visual Question Decomposition on Multimodal Large Language Models | ['Haowei Zhang', 'Jianzhe Liu', 'Zhen Han', 'Shuo Chen', 'Bailan He', 'Volker Tresp', 'Zhiqiang Xu', 'Jindong Gu'] | ['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG'] | Question decomposition has emerged as an effective strategy for prompting
Large Language Models (LLMs) to answer complex questions. However, while
existing methods primarily focus on unimodal language models, the question
decomposition capability of Multimodal Large Language Models (MLLMs) has yet to
be explored. To th... | 2024-09-28T12:49:16Z | Accepted to EMNLP2024 Findings | null | null | Visual Question Decomposition on Multimodal Large Language Models | ['Haowei Zhang', 'Jianzhe Liu', 'Zhen Han', 'Shuo Chen', 'Bailan He', 'Volker Tresp', 'Zhiqiang Xu', 'Jindong Gu'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 2 | 34 | ['Computer Science'] |
2409.19467 | INSIGHTBUDDY-AI: Medication Extraction and Entity Linking using Large
Language Models and Ensemble Learning | ['Pablo Romero', 'Lifeng Han', 'Goran Nenadic'] | ['cs.CL', 'cs.AI'] | Medication Extraction and Mining play an important role in healthcare NLP
research due to its practical applications in hospital settings, such as their
mapping into standard clinical knowledge bases (SNOMED-CT, BNF, etc.). In this
work, we investigate state-of-the-art LLMs in text mining tasks on medications
and their... | 2024-09-28T22:06:06Z | ongoing work, 24 pages | null | null | INSIGHTBUDDY-AI: Medication Extraction and Entity Linking using Large Language Models and Ensemble Learning | ['Pablo Romero', 'Lifeng Han', 'Goran Nenadic'] | 2024 | arXiv.org | 1 | 24 | ['Computer Science'] |
2409.19510 | Making LLMs Better Many-to-Many Speech-to-Text Translators with
Curriculum Learning | ['Yexing Du', 'Youcheng Pan', 'Ziyang Ma', 'Bo Yang', 'Yifan Yang', 'Keqi Deng', 'Xie Chen', 'Yang Xiang', 'Ming Liu', 'Bing Qin'] | ['cs.CL'] | Multimodal Large Language Models (MLLMs) have achieved significant success in
Speech-to-Text Translation (S2TT) tasks. While most existing research has
focused on English-centric translation directions, the exploration of
many-to-many translation is still limited by the scarcity of parallel data. To
address this, we pr... | 2024-09-29T01:48:09Z | Accepted in ACL 2025 (Main) | null | null | Making LLMs Better Many-to-Many Speech-to-Text Translators with Curriculum Learning | ['Yexing Du', 'Ziyang Ma', 'Yifan Yang', 'Keqi Deng', 'Xie Chen', 'Bo Yang', 'Yang Xiang', 'Ming Liu', 'Bing Qin'] | 2024 | null | 9 | 33 | ['Computer Science'] |
2409.19521 | GenTel-Safe: A Unified Benchmark and Shielding Framework for Defending
Against Prompt Injection Attacks | ['Rongchang Li', 'Minjie Chen', 'Chang Hu', 'Han Chen', 'Wenpeng Xing', 'Meng Han'] | ['cs.CR', 'cs.LG'] | Large Language Models (LLMs) like GPT-4, LLaMA, and Qwen have demonstrated
remarkable success across a wide range of applications. However, these models
remain inherently vulnerable to prompt injection attacks, which can bypass
existing safety mechanisms, highlighting the urgent need for more robust attack
detection me... | 2024-09-29T02:35:38Z | null | null | null | GenTel-Safe: A Unified Benchmark and Shielding Framework for Defending Against Prompt Injection Attacks | ['Rongchang Li', 'Minjie Chen', 'Chang Hu', 'Han Chen', 'Wenpeng Xing', 'Meng Han'] | 2024 | arXiv.org | 2 | 46 | ['Computer Science'] |
2409.19581 | DiMB-RE: Mining the Scientific Literature for Diet-Microbiome
Associations | ['Gibong Hong', 'Veronica Hindle', 'Nadine M. Veasley', 'Hannah D. Holscher', 'Halil Kilicoglu'] | ['cs.CL'] | Objective: To develop a corpus annotated for diet-microbiome associations
from the biomedical literature and train natural language processing (NLP)
models to identify these associations, thereby improving the understanding of
their role in health and disease, and supporting personalized nutrition
strategies. Materials... | 2024-09-29T06:58:26Z | Accepted for publication in Journal of the American Medical
Informatics Association. Please refer to the supplementary data if needed:
https://doi.org/10.1093/jamia/ocaf054 | null | 10.1093/jamia/ocaf054 | null | null | null | null | null | null | null |
2409.19603 | One Token to Seg Them All: Language Instructed Reasoning Segmentation in
Videos | ['Zechen Bai', 'Tong He', 'Haiyang Mei', 'Pichao Wang', 'Ziteng Gao', 'Joya Chen', 'Lei Liu', 'Zheng Zhang', 'Mike Zheng Shou'] | ['cs.CV', 'cs.AI'] | We introduce VideoLISA, a video-based multimodal large language model
designed to tackle the problem of language-instructed reasoning segmentation in
videos. Leveraging the reasoning capabilities and world knowledge of large
language models, and augmented by the Segment Anything Model, VideoLISA
generates temporally co... | 2024-09-29T07:47:15Z | Accepted by NeurlPS 2024 | null | null | null | null | null | null | null | null | null |
2409.19667 | Can Large Language Models Analyze Graphs like Professionals? A
Benchmark, Datasets and Models | ['Xin Sky Li', 'Weize Chen', 'Qizhi Chu', 'Haopeng Li', 'Zhaojun Sun', 'Ran Li', 'Chen Qian', 'Yiwei Wei', 'Zhiyuan Liu', 'Chuan Shi', 'Maosong Sun', 'Cheng Yang'] | ['cs.CL', 'cs.AI'] | The need to analyze graphs is ubiquitous across various fields, from social
networks to biological research and recommendation systems. Therefore, enabling
the ability of large language models (LLMs) to process graphs is an important
step toward more advanced general intelligence. However, current LLM benchmarks
on gra... | 2024-09-29T11:38:45Z | NeurIPS 2024 | null | null | null | null | null | null | null | null | null |
2409.19750 | AstroMLab 2: AstroLLaMA-2-70B Model and Benchmarking Specialised LLMs
for Astronomy | ['Rui Pan', 'Tuan Dung Nguyen', 'Hardik Arora', 'Alberto Accomazzi', 'Tirthankar Ghosal', 'Yuan-Sen Ting'] | ['astro-ph.IM', 'cs.CL'] | Continual pretraining of large language models on domain-specific data has
been proposed to enhance performance on downstream tasks. In astronomy, the
previous absence of astronomy-focused benchmarks has hindered objective
evaluation of these specialized LLM models. Leveraging a recent initiative to
curate high-quality... | 2024-09-29T16:02:22Z | 10 pages, 1 figure, 1 table, accepted to AI4S: The 5th Workshop on
Artificial Intelligence and Machine Learning for Scientific Applications at
the International Conference for High Performance Computing, Networking,
Storage, and Analysis (SC24). Models will be released at
https://huggingface.co/AstroMLab. Astro... | null | null | null | null | null | null | null | null | null |
2409.19830 | GameLabel-10K: Collecting Image Preference Data Through Mobile Game
Crowdsourcing | ['Jonathan Zhou'] | ['cs.CV'] | The rise of multi-billion parameter models has sparked an intense hunger for
data across deep learning. This study explores the possibility of replacing
paid annotators with video game players who are rewarded with in-game currency
for good performance. We collaborate with the developers of a mobile historical
strategy... | 2024-09-30T00:00:49Z | 7 pages, 7 images | null | null | null | null | null | null | null | null | null |
2409.19911 | Replace Anyone in Videos | ['Xiang Wang', 'Shiwei Zhang', 'Haonan Qiu', 'Ruihang Chu', 'Zekun Li', 'Yingya Zhang', 'Changxin Gao', 'Yuehuan Wang', 'Chunhua Shen', 'Nong Sang'] | ['cs.CV'] | The field of controllable human-centric video generation has witnessed
remarkable progress, particularly with the advent of diffusion models. However,
achieving precise and localized control over human motion in videos, such as
replacing or inserting individuals while preserving desired motion patterns,
still remains a... | 2024-09-30T03:27:33Z | null | null | null | null | null | null | null | null | null | null |
2409.19946 | Illustrious: an Open Advanced Illustration Model | ['Sang Hyun Park', 'Jun Young Koh', 'Junha Lee', 'Joy Song', 'Dongha Kim', 'Hoyeon Moon', 'Hyunju Lee', 'Min Song'] | ['cs.CV'] | In this work, we share the insights for achieving state-of-the-art quality in
our text-to-image anime image generative model, called Illustrious. To achieve
high resolution, dynamic color range images, and high restoration ability, we
focus on three critical approaches for model improvement. First, we delve into
the si... | 2024-09-30T04:59:12Z | null | null | null | null | null | null | null | null | null | null |
2409.20007 | DeSTA2: Developing Instruction-Following Speech Language Model Without
Speech Instruction-Tuning Data | ['Ke-Han Lu', 'Zhehuai Chen', 'Szu-Wei Fu', 'Chao-Han Huck Yang', 'Jagadeesh Balam', 'Boris Ginsburg', 'Yu-Chiang Frank Wang', 'Hung-yi Lee'] | ['eess.AS', 'cs.CL', 'cs.SD'] | Recent end-to-end speech language models (SLMs) have expanded upon the
capabilities of large language models (LLMs) by incorporating pre-trained
speech models. However, these SLMs often undergo extensive speech
instruction-tuning to bridge the gap between speech and text modalities. This
requires significant annotation... | 2024-09-30T07:01:21Z | Accepted by ICASSP 2025 | null | null | null | null | null | null | null | null | null |
2409.20196 | Melody-Guided Music Generation | ['Shaopeng Wei', 'Manzhen Wei', 'Haoyu Wang', 'Yu Zhao', 'Gang Kou'] | ['cs.SD', 'cs.AI', 'eess.AS'] | We present the Melody-Guided Music Generation (MG2) model, a novel approach
using melody to guide the text-to-music generation that, despite a simple
method and limited resources, achieves excellent performance. Specifically, we
first align the text with audio waveforms and their associated melodies using
the newly pro... | 2024-09-30T11:13:35Z | 16 pages, 8 figure, 8 tables | null | null | Melody-Guided Music Generation | ['Shaopeng Wei', 'Manzhen Wei', 'Haoyu Wang', 'Yu Zhao', 'Gang Kou'] | 2024 | null | 2 | 0 | ['Computer Science', 'Engineering'] |
2409.20201 | AfriHuBERT: A self-supervised speech representation model for African
languages | ['Jesujoba O. Alabi', 'Xuechen Liu', 'Dietrich Klakow', 'Junichi Yamagishi'] | ['cs.CL', 'cs.SD', 'eess.AS'] | In this work, we present AfriHuBERT, an extension of mHuBERT-147, a compact
self-supervised learning (SSL) model pretrained on 147 languages. While
mHuBERT-147 covered 16 African languages, we expand this to 1,226 through
continued pretraining on 10K+ hours of speech data from diverse sources,
benefiting an African pop... | 2024-09-30T11:28:33Z | Interspeech 2025 | null | null | AfriHuBERT: A self-supervised speech representation model for African languages | ['Jesujoba Oluwadara Alabi', 'Xuechen Liu', 'D. Klakow', 'Junichi Yamagishi'] | 2024 | arXiv.org | 3 | 52 | ['Computer Science', 'Engineering'] |
2409.20537 | Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained
Transformers | ['Lirui Wang', 'Xinlei Chen', 'Jialiang Zhao', 'Kaiming He'] | ['cs.RO', 'cs.CV', 'cs.LG'] | One of the roadblocks for training generalist robotic models today is
heterogeneity. Previous robot learning methods often collect data to train with
one specific embodiment for one task, which is expensive and prone to
overfitting. This work studies the problem of learning policy representations
through heterogeneous ... | 2024-09-30T17:39:41Z | See the project website (https://liruiw.github.io/hpt/) for code and
videos | Neurips 2024 | null | null | null | null | null | null | null | null |
2409.20551 | UniAff: A Unified Representation of Affordances for Tool Usage and
Articulation with Vision-Language Models | ['Qiaojun Yu', 'Siyuan Huang', 'Xibin Yuan', 'Zhengkai Jiang', 'Ce Hao', 'Xin Li', 'Haonan Chang', 'Junbo Wang', 'Liu Liu', 'Hongsheng Li', 'Peng Gao', 'Cewu Lu'] | ['cs.RO'] | Previous studies on robotic manipulation are based on a limited understanding
of the underlying 3D motion constraints and affordances. To address these
challenges, we propose a comprehensive paradigm, termed UniAff, that integrates
3D object-centric manipulation and task understanding in a unified formulation.
Specific... | 2024-09-30T17:52:05Z | ICRA 2025 | null | null | null | null | null | null | null | null | null |
2410.00025 | Improving Spoken Language Modeling with Phoneme Classification: A Simple
Fine-tuning Approach | ['Maxime Poli', 'Emmanuel Chemla', 'Emmanuel Dupoux'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Recent progress in Spoken Language Modeling has shown that learning language
directly from speech is feasible. Generating speech through a pipeline that
operates at the text level typically loses nuances, intonations, and non-verbal
vocalizations. Modeling directly from speech opens up the path to more natural
and expr... | 2024-09-16T10:29:15Z | Accepted at EMNLP 2024 main conference. 9 pages, 4 figures | null | null | Improving Spoken Language Modeling with Phoneme Classification: A Simple Fine-tuning Approach | ['Maxime Poli', 'Emmanuel Chemla', 'Emmanuel Dupoux'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 3 | 31 | ['Computer Science', 'Engineering'] |
2410.00037 | Moshi: a speech-text foundation model for real-time dialogue | ['Alexandre Défossez', 'Laurent Mazaré', 'Manu Orsini', 'Amélie Royer', 'Patrick Pérez', 'Hervé Jégou', 'Edouard Grave', 'Neil Zeghidour'] | ['eess.AS', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.SD'] | We introduce Moshi, a speech-text foundation model and full-duplex spoken
dialogue framework. Current systems for spoken dialogue rely on pipelines of
independent components, namely voice activity detection, speech recognition,
textual dialogue and text-to-speech. Such frameworks cannot emulate the
experience of real c... | 2024-09-17T17:55:39Z | null | null | null | Moshi: a speech-text foundation model for real-time dialogue | ["Alexandre D'efossez", "Laurent Mazar'e", 'Manu Orsini', "Am'elie Royer", "Patrick P'erez", "Herv'e J'egou", 'Edouard Grave', 'Neil Zeghidour'] | 2024 | arXiv.org | 150 | 0 | ['Computer Science', 'Engineering'] |
2410.00086 | ACE: All-round Creator and Editor Following Instructions via Diffusion
Transformer | ['Zhen Han', 'Zeyinzi Jiang', 'Yulin Pan', 'Jingfeng Zhang', 'Chaojie Mao', 'Chenwei Xie', 'Yu Liu', 'Jingren Zhou'] | ['cs.CV', 'cs.AI'] | Diffusion models have emerged as a powerful generative technology and have
been found to be applicable in various scenarios. Most existing foundational
diffusion models are primarily designed for text-guided visual generation and
do not support multi-modal conditions, which are essential for many visual
editing tasks. ... | 2024-09-30T17:56:27Z | null | null | null | null | null | null | null | null | null | null |
2410.00163 | Adapting LLMs for the Medical Domain in Portuguese: A Study on
Fine-Tuning and Model Evaluation | ['Pedro Henrique Paiola', 'Gabriel Lino Garcia', 'João Renato Ribeiro Manesco', 'Mateus Roder', 'Douglas Rodrigues', 'João Paulo Papa'] | ['cs.CL', 'cs.AI'] | This study evaluates the performance of large language models (LLMs) as
medical agents in Portuguese, aiming to develop a reliable and relevant virtual
assistant for healthcare professionals. The HealthCareMagic-100k-en and MedQuAD
datasets, translated from English using GPT-3.5, were used to fine-tune the
ChatBode-7B ... | 2024-09-30T19:10:03Z | This work has been submitted to the IEEE for possible publication | null | null | null | null | null | null | null | null | null |
2410.00337 | SyntheOcc: Synthesize Geometric-Controlled Street View Images through 3D
Semantic MPIs | ['Leheng Li', 'Weichao Qiu', 'Yingjie Cai', 'Xu Yan', 'Qing Lian', 'Bingbing Liu', 'Ying-Cong Chen'] | ['cs.CV'] | The advancement of autonomous driving is increasingly reliant on high-quality
annotated datasets, especially in the task of 3D occupancy prediction, where
the occupancy labels require dense 3D annotation with significant human effort.
In this paper, we propose SyntheOcc, which denotes a diffusion model that
Synthesize ... | 2024-10-01T02:29:24Z | null | null | null | null | null | null | null | null | null | null |
2410.00361 | PclGPT: A Large Language Model for Patronizing and Condescending
Language Detection | ['Hongbo Wang', 'Mingda Li', 'Junyu Lu', 'Hebin Xia', 'Liang Yang', 'Bo Xu', 'Ruizhu Liu', 'Hongfei Lin'] | ['cs.CL'] | Disclaimer: Samples in this paper may be harmful and cause discomfort!
Patronizing and condescending language (PCL) is a form of speech directed at
vulnerable groups. As an essential branch of toxic language, this type of
language exacerbates conflicts and confrontations among Internet communities
and detrimentally i... | 2024-10-01T03:19:13Z | Accepted for EMNLP2024 (Findings) | null | null | PclGPT: A Large Language Model for Patronizing and Condescending Language Detection | ['Hongbo Wang', 'Mingda Li', 'Junyu Lu', 'Hebin Xia', 'Liang Yang', 'Bo Xu', 'Ruizhu Liu', 'Hongfei Lin'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 0 | 44 | ['Computer Science']
2410.00683 | Efficient Technical Term Translation: A Knowledge Distillation Approach
for Parenthetical Terminology Translation | ['Jiyoon Myung', 'Jihyeon Park', 'Jungki Son', 'Kyungro Lee', 'Joohyung Han'] | ['cs.CL', 'cs.AI'] | This paper addresses the challenge of accurately translating technical terms,
which are crucial for clear communication in specialized fields. We introduce
the Parenthetical Terminology Translation (PTT) task, designed to mitigate
potential inaccuracies by displaying the original term in parentheses alongside
its trans... | 2024-10-01T13:40:28Z | Paper accepted in EMNLPW 2024 | null | null | null | null | null | null | null | null | null |
2410.00741 | VideoCLIP-XL: Advancing Long Description Understanding for Video CLIP
Models | ['Jiapeng Wang', 'Chengyu Wang', 'Kunzhe Huang', 'Jun Huang', 'Lianwen Jin'] | ['cs.CL', 'cs.CV', 'cs.MM'] | Contrastive Language-Image Pre-training (CLIP) has been widely studied and
applied in numerous applications. However, the emphasis on brief summary texts
during pre-training prevents CLIP from understanding long descriptions. This
issue is particularly acute regarding videos given that videos often contain
abundant det... | 2024-10-01T14:33:22Z | EMNLP 2024 Main conference | null | null | null | null | null | null | null | null | null |
2410.00751 | Thinking Outside of the Differential Privacy Box: A Case Study in Text
Privatization with Language Model Prompting | ['Stephen Meisenbacher', 'Florian Matthes'] | ['cs.CL'] | The field of privacy-preserving Natural Language Processing has risen in
popularity, particularly at a time when concerns about privacy grow with the
proliferation of Large Language Models. One solution consistently appearing in
recent literature has been the integration of Differential Privacy (DP) into
NLP techniques... | 2024-10-01T14:46:15Z | 10 pages, 3 tables, Accepted to EMNLP 2024 (Main) | null | null | Thinking Outside of the Differential Privacy Box: A Case Study in Text Privatization with Language Model Prompting | ['Stephen Meisenbacher', 'Florian Matthes'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 3 | 37 | ['Computer Science']
2410.00775 | Decoding Hate: Exploring Language Models' Reactions to Hate Speech | ['Paloma Piot', 'Javier Parapar'] | ['cs.CL'] | Hate speech is a harmful form of online expression, often manifesting as
derogatory posts. It is a significant risk in digital environments. With the
rise of Large Language Models (LLMs), there is concern about their potential to
replicate hate speech patterns, given their training on vast amounts of
unmoderated intern... | 2024-10-01T15:16:20Z | null | Proceedings of the 2025 Conference of the Nations of the Americas
Chapter of the Association for Computational Linguistics: Human Language
Technologies | 10.18653/v1/2025.naacl-long.45 | null | null | null | null | null | null | null |
2410.00822 | VHASR: A Multimodal Speech Recognition System With Vision Hotwords | ['Jiliang Hu', 'Zuchao Li', 'Ping Wang', 'Haojun Ai', 'Lefei Zhang', 'Hai Zhao'] | ['cs.SD', 'cs.CL', 'eess.AS'] | The image-based multimodal automatic speech recognition (ASR) model enhances
speech recognition performance by incorporating audio-related image. However,
some works suggest that introducing image information to model does not help
improving ASR performance. In this paper, we propose a novel approach
effectively utiliz... | 2024-10-01T16:06:02Z | 14 pages, 6 figures, accepted by EMNLP 2024 | null | null | null | null | null | null | null | null | null |
2410.00847 | Uncertainty-aware Reward Model: Teaching Reward Models to Know What is
Unknown | ['Xingzhou Lou', 'Dong Yan', 'Wei Shen', 'Yuzi Yan', 'Jian Xie', 'Junge Zhang'] | ['cs.LG'] | Reward models (RMs) are essential for aligning large language models (LLM)
with human expectations. However, existing RMs struggle to capture the
stochastic and uncertain nature of human preferences and fail to assess the
reliability of reward predictions. To address these challenges, we introduce
the Uncertainty-aware... | 2024-10-01T16:29:59Z | null | null | Uncertainty-aware Reward Model: Teaching Reward Models to Know What is Unknown | ['Xingzhou Lou', 'Dong Yan', 'Wei Shen', 'Yuzi Yan', 'Jian Xie', 'Junge Zhang'] | 2024 | arXiv.org | 28 | 70 | ['Computer Science']
2410.01036 | MOSEL: 950,000 Hours of Speech Data for Open-Source Speech Foundation
Model Training on EU Languages | ['Marco Gaido', 'Sara Papi', 'Luisa Bentivogli', 'Alessio Brutti', 'Mauro Cettolo', 'Roberto Gretter', 'Marco Matassoni', 'Mohamed Nabih', 'Matteo Negri'] | ['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS'] | The rise of foundation models (FMs), coupled with regulatory efforts
addressing their risks and impacts, has sparked significant interest in
open-source models. However, existing speech FMs (SFMs) fall short of full
compliance with the open-source principles, even if claimed otherwise, as no
existing SFM has model weig... | 2024-10-01T19:54:10Z | Accepted at EMNLP 2024 Main Conference | null | null | null | null | null | null | null | null | null |
2410.01044 | RATIONALYST: Mining Implicit Rationales for Process Supervision of
Reasoning | ['Dongwei Jiang', 'Guoxuan Wang', 'Yining Lu', 'Andrew Wang', 'Jingyu Zhang', 'Chuyu Liu', 'Benjamin Van Durme', 'Daniel Khashabi'] | ['cs.AI', 'cs.CL'] | The reasoning steps generated by LLMs might be incomplete, as they mimic
logical leaps common in everyday communication found in their pre-training
data: underlying rationales are frequently left implicit (unstated). To address
this challenge, we introduce RATIONALYST, a model for process-supervision of
reasoning based... | 2024-10-01T20:05:51Z | Our code, data, and model can be found at this repository:
https://github.com/JHU-CLSP/Rationalyst | null | null | RATIONALYST: Mining Implicit Rationales for Process Supervision of Reasoning | ['Dongwei Jiang', 'Guoxuan Wang', 'Yining Lu', 'Andrew Wang', 'Jingyu Zhang', 'Chuyu Liu', 'Benjamin Van Durme', 'Daniel Khashabi'] | 2024 | null | 0 | 39 | ['Computer Science']
2410.01131 | nGPT: Normalized Transformer with Representation Learning on the
Hypersphere | ['Ilya Loshchilov', 'Cheng-Ping Hsieh', 'Simeng Sun', 'Boris Ginsburg'] | ['cs.LG', 'cs.AI'] | We propose a novel neural network architecture, the normalized Transformer
(nGPT) with representation learning on the hypersphere. In nGPT, all vectors
forming the embeddings, MLP, attention matrices and hidden states are unit norm
normalized. The input stream of tokens travels on the surface of a hypersphere,
with eac... | 2024-10-01T23:50:09Z | null | null | null | null | null | null | null | null | null | null |
2410.01201 | Were RNNs All We Needed? | ['Leo Feng', 'Frederick Tung', 'Mohamed Osama Ahmed', 'Yoshua Bengio', 'Hossein Hajimirsadeghi'] | ['cs.LG', 'cs.AI'] | The introduction of Transformers in 2017 reshaped the landscape of deep
learning. Originally proposed for sequence modelling, Transformers have since
achieved widespread success across various domains. However, the scalability
limitations of Transformers - particularly with respect to sequence length -
have sparked ren... | 2024-10-02T03:06:49Z | null | null | null | null | null | null | null | null | null | null |
2410.01257 | HelpSteer2-Preference: Complementing Ratings with Preferences | ['Zhilin Wang', 'Alexander Bukharin', 'Olivier Delalleau', 'Daniel Egert', 'Gerald Shen', 'Jiaqi Zeng', 'Oleksii Kuchaiev', 'Yi Dong'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Reward models are critical for aligning models to follow instructions, and
are typically trained following one of two popular paradigms: Bradley-Terry
style or Regression style. However, there is a lack of evidence that either
approach is better than the other, when adequately matched for data. This is
primarily becaus... | 2024-10-02T06:05:52Z | Accepted to ICLR 2025; 28 pages, 3 figures | null | null | HelpSteer2-Preference: Complementing Ratings with Preferences | ['Zhilin Wang', 'Alexander Bukharin', 'Olivier Delalleau', 'Daniel Egert', 'Gerald Shen', 'Jiaqi Zeng', 'Oleksii Kuchaiev', 'Yi Dong'] | 2024 | International Conference on Learning Representations | 59 | 34 | ['Computer Science']
2410.01345 | Towards Generalizable Vision-Language Robotic Manipulation: A Benchmark
and LLM-guided 3D Policy | ['Ricardo Garcia', 'Shizhe Chen', 'Cordelia Schmid'] | ['cs.RO', 'cs.CV'] | Generalizing language-conditioned robotic policies to new tasks remains a
significant challenge, hampered by the lack of suitable simulation benchmarks.
In this paper, we address this gap by introducing GemBench, a novel benchmark
to assess generalization capabilities of vision-language robotic manipulation
policies. G... | 2024-10-02T09:02:34Z | ICRA 2025 | null | null | Towards Generalizable Vision-Language Robotic Manipulation: A Benchmark and LLM-guided 3D Policy | ['Ricardo Garcia', 'Shizhe Chen', 'Cordelia Schmid'] | 2024 | arXiv.org | 14 | 58 | ['Computer Science']
2410.01469 | TIGER: Time-frequency Interleaved Gain Extraction and Reconstruction for
Efficient Speech Separation | ['Mohan Xu', 'Kai Li', 'Guo Chen', 'Xiaolin Hu'] | ['cs.SD', 'cs.AI', 'eess.AS'] | In recent years, much speech separation research has focused primarily on
improving model performance. However, for low-latency speech processing
systems, high efficiency is equally important. Therefore, we propose a speech
separation model with significantly reduced parameters and computational costs:
Time-frequency I... | 2024-10-02T12:21:06Z | Accepted by ICLR 2025, demo page: https://cslikai.cn/TIGER/ | null | null | null | null | null | null | null | null | null |
2410.01524 | HarmAug: Effective Data Augmentation for Knowledge Distillation of
Safety Guard Models | ['Seanie Lee', 'Haebin Seong', 'Dong Bok Lee', 'Minki Kang', 'Xiaoyin Chen', 'Dominik Wagner', 'Yoshua Bengio', 'Juho Lee', 'Sung Ju Hwang'] | ['cs.CL', 'cs.LG'] | Safety guard models that detect malicious queries aimed at large language
models (LLMs) are essential for ensuring the secure and responsible deployment
of LLMs in real-world applications. However, deploying existing safety guard
models with billions of parameters alongside LLMs on mobile devices is
impractical due to ... | 2024-10-02T13:12:13Z | ICLR 2025 | null | null | HarmAug: Effective Data Augmentation for Knowledge Distillation of Safety Guard Models | ['Seanie Lee', 'Haebin Seong', 'Dong Bok Lee', 'Minki Kang', 'Xiaoyin Chen', 'Dominik Wagner', 'Y. Bengio', 'Juho Lee', 'Sung Ju Hwang'] | 2024 | International Conference on Learning Representations | 6 | 62 | ['Computer Science']
2410.01560 | OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source
Instruction Data | ['Shubham Toshniwal', 'Wei Du', 'Ivan Moshkov', 'Branislav Kisacanin', 'Alexan Ayrapetyan', 'Igor Gitman'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Mathematical reasoning continues to be a critical challenge in large language
model (LLM) development with significant interest. However, most of the
cutting-edge progress in mathematical reasoning with LLMs has become
\emph{closed-source} due to lack of access to training data. This lack of data
access limits research... | 2024-10-02T14:00:09Z | null | null | null | null | null | null | null | null | null | null |