| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2402.19305 | HyenaPixel: Global Image Context with Convolutions | ['Julian Spravil', 'Sebastian Houben', 'Sven Behnke'] | ['cs.CV'] | In computer vision, a larger effective receptive field (ERF) is associated with better performance. While attention natively supports global context, its quadratic complexity limits its applicability to tasks that benefit from high-resolution input. In this work, we extend Hyena, a convolution-based attention replaceme... | 2024-02-29T16:10:49Z | null | null | null | HyenaPixel: Global Image Context with Convolutions | ['Julian Spravil', 'Sebastian Houben', 'Sven Behnke'] | 2024 | European Conference on Artificial Intelligence | 1 | 68 | ['Computer Science'] |
| 2402.19411 | PaECTER: Patent-level Representation Learning using Citation-informed Transformers | ['Mainak Ghosh', 'Sebastian Erhardt', 'Michael E. Rose', 'Erik Buunk', 'Dietmar Harhoff'] | ['cs.IR', 'cs.CL', 'cs.LG'] | PaECTER is a publicly available, open-source document-level encoder specific for patents. We fine-tune BERT for Patents with examiner-added citation information to generate numerical representations for patent documents. PaECTER performs better in similarity tasks than current state-of-the-art models used in the patent... | 2024-02-29T18:09:03Z | 7 pages, 3 figures | null | null | null | null | null | null | null | null | null |
| 2402.19427 | Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models | ['Soham De', 'Samuel L. Smith', 'Anushan Fernando', 'Aleksandar Botev', 'George Cristian-Muraru', 'Albert Gu', 'Ruba Haroun', 'Leonard Berrada', 'Yutian Chen', 'Srivatsan Srinivasan', 'Guillaume Desjardins', 'Arnaud Doucet', 'David Budden', 'Yee Whye Teh', 'Razvan Pascanu', 'Nando De Freitas', 'Caglar Gulcehre'] | ['cs.LG', 'cs.CL'] | Recurrent neural networks (RNNs) have fast inference and scale efficiently on long sequences, but they are difficult to train and hard to scale. We propose Hawk, an RNN with gated linear recurrences, and Griffin, a hybrid model that mixes gated linear recurrences with local attention. Hawk exceeds the reported performa... | 2024-02-29T18:24:46Z | 25 pages, 11 figures | null | null | null | null | null | null | null | null | null |
| 2403.00043 | RiNALMo: General-Purpose RNA Language Models Can Generalize Well on Structure Prediction Tasks | ['Rafael Josip Penić', 'Tin Vlašić', 'Roland G. Huber', 'Yue Wan', 'Mile Šikić'] | ['q-bio.BM', 'cs.LG'] | While RNA has recently been recognized as an interesting small-molecule drug target, many challenges remain to be addressed before we take full advantage of it. This emphasizes the necessity to improve our understanding of its structures and functions. Over the years, sequencing technologies have produced an enormous a... | 2024-02-29T14:50:58Z | 31 pages, 9 figures | Nat. Commun. 16, 5671 (2025) | 10.1038/s41467-025-60872-5 | null | null | null | null | null | null | null |
| 2403.00212 | Transcription and translation of videos using fine-tuned XLSR Wav2Vec2 on custom dataset and mBART | ['Aniket Tathe', 'Anand Kamble', 'Suyash Kumbharkar', 'Atharva Bhandare', 'Anirban C. Mitra'] | ['cs.CL', 'cs.CV', 'cs.LG', 'cs.SD', 'eess.AS'] | This research addresses the challenge of training an ASR model for personalized voices with minimal data. Utilizing just 14 minutes of custom audio from a YouTube video, we employ Retrieval-Based Voice Conversion (RVC) to create a custom Common Voice 16.0 corpus. Subsequently, a Cross-lingual Self-supervised Representa... | 2024-03-01T01:15:45Z | null | null | null | Transcription and translation of videos using fine-tuned XLSR Wav2Vec2 on custom dataset and mBART | ['Aniket Tathe', 'Anand Kamble', 'Suyash Kumbharkar', 'Atharva Bhandare', 'Anirban C. Mitra'] | 2024 | arXiv.org | 1 | 13 | ['Computer Science', 'Engineering'] |
| 2403.00476 | TempCompass: Do Video LLMs Really Understand Videos? | ['Yuanxin Liu', 'Shicheng Li', 'Yi Liu', 'Yuxiang Wang', 'Shuhuai Ren', 'Lei Li', 'Sishuo Chen', 'Xu Sun', 'Lu Hou'] | ['cs.CV'] | Recently, there is a surge in interest surrounding video large language models (Video LLMs). However, existing benchmarks fail to provide a comprehensive feedback on the temporal perception ability of Video LLMs. On the one hand, most of them are unable to distinguish between different temporal aspects (e.g., speed, di... | 2024-03-01T12:02:19Z | null | null | null | TempCompass: Do Video LLMs Really Understand Videos? | ['Yuanxin Liu', 'Shicheng Li', 'Yi Liu', 'Yuxiang Wang', 'Shuhuai Ren', 'Lei Li', 'Sishuo Chen', 'Xu Sun', 'Lu Hou'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 141 | 39 | ['Computer Science'] |
| 2403.00522 | VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks | ['Xiangxiang Chu', 'Jianlin Su', 'Bo Zhang', 'Chunhua Shen'] | ['cs.CV'] | Large language models are built on top of a transformer-based architecture to process textual inputs. For example, the LLaMA stands out among many open-source implementations. Can the same transformer be used to process 2D images? In this paper, we answer this question by unveiling a LLaMA-like vision transformer in pl... | 2024-03-01T13:30:51Z | Accepted to ECCV2024 | null | null | null | null | null | null | null | null | null |
| 2403.00712 | Rethinking Inductive Biases for Surface Normal Estimation | ['Gwangbin Bae', 'Andrew J. Davison'] | ['cs.CV'] | Despite the growing demand for accurate surface normal estimation models, existing methods use general-purpose dense prediction models, adopting the same inductive biases as other tasks. In this paper, we discuss the inductive biases needed for surface normal estimation and propose to (1) utilize the per-pixel ray dire... | 2024-03-01T17:54:37Z | CVPR 2024 (camera-ready version will be uploaded in March 2024) | null | null | null | null | null | null | null | null | null |
| 2403.00818 | DenseMamba: State Space Models with Dense Hidden Connection for Efficient Large Language Models | ['Wei He', 'Kai Han', 'Yehui Tang', 'Chengcheng Wang', 'Yujie Yang', 'Tianyu Guo', 'Yunhe Wang'] | ['cs.CL', 'cs.LG'] | Large language models (LLMs) face a daunting challenge due to the excessive computational and memory requirements of the commonly used Transformer architecture. While state space model (SSM) is a new type of foundational network architecture offering lower computational complexity, their performance has yet to fully ri... | 2024-02-26T09:21:59Z | null | null | null | null | null | null | null | null | null | null |
| 2403.00835 | CLLMs: Consistency Large Language Models | ['Siqi Kou', 'Lanxiang Hu', 'Zhezhi He', 'Zhijie Deng', 'Hao Zhang'] | ['cs.CL', 'cs.AI'] | Parallel decoding methods such as Jacobi decoding show promise for more efficient LLM inference as it breaks the sequential nature of the LLM decoding process and transforms it into parallelizable computation. However, in practice, it achieves little speedup compared to traditional autoregressive (AR) decoding, primari... | 2024-02-28T20:17:04Z | In the proceedings of the 41st International Conference on Machine Learning (ICML) 2024 | null | null | null | null | null | null | null | null | null |
| 2403.00946 | Fine-tuning with Very Large Dropout | ['Jianyu Zhang', 'Léon Bottou'] | ['cs.LG', 'cs.CV'] | It is impossible today to pretend that the practice of machine learning is always compatible with the idea that training and testing data follow the same distribution. Several authors have recently used ensemble techniques to show how scenarios involving multiple data distributions are best served by representations th... | 2024-03-01T19:50:22Z | Fine-tuning with very large dropout outperforms weight-averaging and ensemble on ResNet and large vision transformer | null | null | null | null | null | null | null | null | null |
| 2403.01031 | Peacock: A Family of Arabic Multimodal Large Language Models and Benchmarks | ['Fakhraddin Alwajih', 'El Moatez Billah Nagoudi', 'Gagan Bhatia', 'Abdelrahman Mohamed', 'Muhammad Abdul-Mageed'] | ['cs.CL', 'cs.AI'] | Multimodal large language models (MLLMs) have proven effective in a wide range of tasks requiring complex reasoning and linguistic comprehension. However, due to a lack of high-quality multimodal resources in languages other than English, success of MLLMs remains relatively limited to English-based settings. This poses... | 2024-03-01T23:38:02Z | null | null | null | null | null | null | null | null | null | null |
| 2403.01081 | LAB: Large-Scale Alignment for ChatBots | ['Shivchander Sudalairaj', 'Abhishek Bhandwaldar', 'Aldo Pareja', 'Kai Xu', 'David D. Cox', 'Akash Srivastava'] | ['cs.CL', 'cs.LG'] | This work introduces LAB (Large-scale Alignment for chatBots), a novel methodology designed to overcome the scalability challenges in the instruction-tuning phase of large language model (LLM) training. Leveraging a taxonomy-guided synthetic data generation process and a multi-phase tuning framework, LAB significantly ... | 2024-03-02T03:48:37Z | Corresponding Author: Akash Srivastava. Equal Contribution: Shivchander Sudalairaj, Abhishek Bhandwaldar, Aldo Pareja, Akash Srivastava, Code: https://github.com/instructlab | null | null | null | null | null | null | null | null | null |
| 2403.01306 | ICC: Quantifying Image Caption Concreteness for Multimodal Dataset Curation | ['Moran Yanuka', 'Morris Alper', 'Hadar Averbuch-Elor', 'Raja Giryes'] | ['cs.LG', 'cs.CV'] | Web-scale training on paired text-image data is becoming increasingly central to multimodal learning, but is challenged by the highly noisy nature of datasets in the wild. Standard data filtering approaches succeed in removing mismatched text-image pairs, but permit semantically related but highly abstract or subjectiv... | 2024-03-02T20:36:10Z | Accepted to ACL 2024 (Finding). For Project webpage, see https://moranyanuka.github.io/icc/ | null | null | Mitigating Open-Vocabulary Caption Hallucinations | ['Moran Yanuka', 'Morris Alper', 'Hadar Averbuch-Elor', 'Raja Giryes'] | 2023 | Conference on Empirical Methods in Natural Language Processing | 6 | 103 | ['Computer Science'] |
| 2403.01308 | VBART: The Turkish LLM | ['Meliksah Turker', 'Mehmet Erdi Ari', 'Aydin Han'] | ['cs.CL', 'cs.AI', 'cs.LG'] | We present VBART, the first Turkish sequence-to-sequence Large Language Models (LLMs) pre-trained on a large corpus from scratch. VBART are compact LLMs based on good ideas leveraged from BART and mBART models and come in two sizes, Large and XLarge. Fine-tuned VBART models surpass the prior state-of-the-art results in... | 2024-03-02T20:40:11Z | null | null | null | VBART: The Turkish LLM | ['Meliksah Turker', 'Mehmet Erdi Ari', 'Aydin Han'] | 2024 | arXiv.org | 4 | 53 | ['Computer Science'] |
| 2403.01422 | DreamFrame: Enhancing Video Understanding via Automatically Generated QA and Style-Consistent Keyframes | ['Zhende Song', 'Chenchen Wang', 'Jiamu Sheng', 'Chi Zhang', 'Shengji Tang', 'Jiayuan Fan', 'Tao Chen'] | ['cs.CV'] | Recent large vision-language models (LVLMs) for video understanding are primarily fine-tuned with various videos scraped from online platforms. Existing datasets, such as ActivityNet, require considerable human labor for structuring and annotation before effectively utilized for tuning LVLMs. While current LVLMs are pr... | 2024-03-03T07:43:39Z | null | null | null | null | null | null | null | null | null | null |
| 2403.01469 | KorMedMCQA: Multi-Choice Question Answering Benchmark for Korean Healthcare Professional Licensing Examinations | ['Sunjun Kweon', 'Byungjin Choi', 'Gyouk Chu', 'Junyeong Song', 'Daeun Hyeon', 'Sujin Gan', 'Jueon Kim', 'Minkyu Kim', 'Rae Woong Park', 'Edward Choi'] | ['cs.CL'] | We present KorMedMCQA, the first Korean Medical Multiple-Choice Question Answering benchmark, derived from professional healthcare licensing examinations conducted in Korea between 2012 and 2024. The dataset contains 7,469 questions from examinations for doctor, nurse, pharmacist, and dentist, covering a wide range of ... | 2024-03-03T10:31:49Z | null | null | null | KorMedMCQA: Multi-Choice Question Answering Benchmark for Korean Healthcare Professional Licensing Examinations | ['Sunjun Kweon', 'B. Choi', 'Minkyu Kim', 'Rae Woong Park', 'Edward Choi'] | 2024 | arXiv.org | 8 | 20 | ['Computer Science'] |
| 2403.01487 | InfiMM-HD: A Leap Forward in High-Resolution Multimodal Understanding | ['Haogeng Liu', 'Quanzeng You', 'Xiaotian Han', 'Yiqi Wang', 'Bohan Zhai', 'Yongfei Liu', 'Yunzhe Tao', 'Huaibo Huang', 'Ran He', 'Hongxia Yang'] | ['cs.CV'] | Multimodal Large Language Models (MLLMs) have experienced significant advancements recently. Nevertheless, challenges persist in the accurate recognition and comprehension of intricate details within high-resolution images. Despite being indispensable for the development of robust MLLMs, this area remains underinvestig... | 2024-03-03T11:39:41Z | null | null | null | null | null | null | null | null | null | null |
| 2403.01598 | APISR: Anime Production Inspired Real-World Anime Super-Resolution | ['Boyang Wang', 'Fengyu Yang', 'Xihang Yu', 'Chao Zhang', 'Hanbin Zhao'] | ['eess.IV', 'cs.AI', 'cs.CV'] | While real-world anime super-resolution (SR) has gained increasing attention in the SR community, existing methods still adopt techniques from the photorealistic domain. In this paper, we analyze the anime production workflow and rethink how to use characteristics of it for the sake of the real-world anime SR. First, w... | 2024-03-03T19:52:43Z | null | null | null | null | null | null | null | null | null | null |
| 2403.01616 | Towards Comprehensive Vietnamese Retrieval-Augmented Generation and Large Language Models | ['Nguyen Quang Duc', 'Le Hai Son', 'Nguyen Duc Nhan', 'Nguyen Dich Nhat Minh', 'Le Thanh Huong', 'Dinh Viet Sang'] | ['cs.CL'] | This paper presents our contributions towards advancing the state of Vietnamese language understanding and generation through the development and dissemination of open datasets and pre-trained models for Vietnamese Retrieval-Augmented Generation (RAG) and Large Language Models (LLMs). | 2024-03-03T21:24:35Z | null | null | null | Towards Comprehensive Vietnamese Retrieval-Augmented Generation and Large Language Models | ['Nguyen Quang Duc', 'Le Hai Son', 'Nguyen Duc Nhan', 'Nguyen Dich Nhat Minh', 'Le Thanh Huong', 'D. V. Sang'] | 2024 | arXiv.org | 2 | 8 | ['Computer Science'] |
| 2403.01643 | Cost-Effective Attention Mechanisms for Low Resource Settings: Necessity & Sufficiency of Linear Transformations | ['Peyman Hosseini', 'Mehran Hosseini', 'Ignacio Castro', 'Matthew Purver'] | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV', '68T07 (Primary) 68T45, 68T50, 68T10, 15A03, 15A04 (Secondary)', 'I.2.6; I.2.7; I.2.10; I.4.0; I.5.0; I.7.0'] | From natural language processing to vision, Scaled Dot Product Attention (SDPA) is the backbone of most modern deep learning applications. Unfortunately, its memory and computational requirements can be prohibitive in low-resource settings. In this paper, we improve its efficiency without sacrificing its versatility. W... | 2024-03-03T23:40:35Z | null | null | null | Cost-Effective Attention Mechanisms for Low Resource Settings: Necessity & Sufficiency of Linear Transformations | ['Peyman Hosseini', 'Mehran Hosseini', 'Ignacio Castro', 'Matthew Purver'] | 2024 | null | 1 | 0 | ['Computer Science'] |
| 2403.01779 | OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on | ['Yuhao Xu', 'Tao Gu', 'Weifeng Chen', 'Chengcai Chen'] | ['cs.CV'] | We present OOTDiffusion, a novel network architecture for realistic and controllable image-based virtual try-on (VTON). We leverage the power of pretrained latent diffusion models, designing an outfitting UNet to learn the garment detail features. Without a redundant warping process, the garment features are precisely ... | 2024-03-04T07:17:44Z | null | null | null | null | null | null | null | null | null | null |
| 2403.01817 | NusaBERT: Teaching IndoBERT to be Multilingual and Multicultural | ['Wilson Wongso', 'David Samuel Setiawan', 'Steven Limcorn', 'Ananto Joyoadikusumo'] | ['cs.CL'] | Indonesia's linguistic landscape is remarkably diverse, encompassing over 700 languages and dialects, making it one of the world's most linguistically rich nations. This diversity, coupled with the widespread practice of code-switching and the presence of low-resource regional languages, presents unique challenges for ... | 2024-03-04T08:05:34Z | null | null | null | NusaBERT: Teaching IndoBERT to be Multilingual and Multicultural | ['Wilson Wongso', 'David Samuel Setiawan', 'Steven Limcorn', 'Ananto Joyoadikusumo'] | 2024 | arXiv.org | 1 | 38 | ['Computer Science'] |
| 2403.01851 | Rethinking LLM Language Adaptation: A Case Study on Chinese Mixtral | ['Yiming Cui', 'Xin Yao'] | ['cs.CL', 'cs.AI'] | Mixtral, a representative sparse mixture of experts (SMoE) language model, has received significant attention due to its unique model design and superior performance. Based on Mixtral-8x7B-v0.1, in this paper, we propose Chinese-Mixtral and Chinese-Mixtral-Instruct with improved Chinese language abilities by adopting f... | 2024-03-04T09:01:10Z | 13 pages | null | null | Rethinking LLM Language Adaptation: A Case Study on Chinese Mixtral | ['Yiming Cui', 'Xin Yao'] | 2024 | arXiv.org | 5 | 27 | ['Computer Science'] |
| 2403.01897 | Fostering the Ecosystem of Open Neural Encoders for Portuguese with Albertina PT* Family | ['Rodrigo Santos', 'João Rodrigues', 'Luís Gomes', 'João Silva', 'António Branco', 'Henrique Lopes Cardoso', 'Tomás Freitas Osório', 'Bernardo Leite'] | ['cs.CL'] | To foster the neural encoding of Portuguese, this paper contributes foundation encoder models that represent an expansion of the still very scarce ecosystem of large language models specifically developed for this language that are fully open, in the sense that they are open source and openly distributed for free under... | 2024-03-04T09:56:47Z | null | null | null | null | null | null | null | null | null | null |
| 2403.01924 | To Generate or to Retrieve? On the Effectiveness of Artificial Contexts for Medical Open-Domain Question Answering | ['Giacomo Frisoni', 'Alessio Cocchieri', 'Alex Presepi', 'Gianluca Moro', 'Zaiqiao Meng'] | ['cs.CL', 'cs.AI'] | Medical open-domain question answering demands substantial access to specialized knowledge. Recent efforts have sought to decouple knowledge from model parameters, counteracting architectural scaling and allowing for training on common low-resource hardware. The retrieve-then-read paradigm has become ubiquitous, with m... | 2024-03-04T10:41:52Z | ACL 2024 (camera-ready paper) | null | null | null | null | null | null | null | null | null |
| 2403.02084 | ResAdapter: Domain Consistent Resolution Adapter for Diffusion Models | ['Jiaxiang Cheng', 'Pan Xie', 'Xin Xia', 'Jiashi Li', 'Jie Wu', 'Yuxi Ren', 'Huixia Li', 'Xuefeng Xiao', 'Min Zheng', 'Lean Fu'] | ['cs.CV'] | Recent advancement in text-to-image models (e.g., Stable Diffusion) and corresponding personalized technologies (e.g., DreamBooth and LoRA) enables individuals to generate high-quality and imaginative images. However, they often suffer from limitations when generating images with resolutions outside of their trained do... | 2024-03-04T14:36:56Z | Accepted by AAAI 2025 | null | null | null | null | null | null | null | null | null |
| 2403.02107 | Iterated $Q$-Network: Beyond One-Step Bellman Updates in Deep Reinforcement Learning | ['Théo Vincent', 'Daniel Palenicek', 'Boris Belousov', 'Jan Peters', "Carlo D'Eramo"] | ['cs.LG', 'cs.AI'] | The vast majority of Reinforcement Learning methods is largely impacted by the computation effort and data requirements needed to obtain effective estimates of action-value functions, which in turn determine the quality of the overall performance and the sample-efficiency of the learning procedure. Typically, action-va... | 2024-03-04T15:07:33Z | Published at TMLR: https://openreview.net/forum?id=Lt2H8Bd8jF | null | null | null | null | null | null | null | null | null |
| 2403.02127 | LOCR: Location-Guided Transformer for Optical Character Recognition | ['Yu Sun', 'Dongzhan Zhou', 'Chen Lin', 'Conghui He', 'Wanli Ouyang', 'Han-Sen Zhong'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Academic documents are packed with texts, equations, tables, and figures, requiring comprehensive understanding for accurate Optical Character Recognition (OCR). While end-to-end OCR methods offer improved accuracy over layout-based approaches, they often grapple with significant repetition issues, especially with comp... | 2024-03-04T15:34:12Z | null | null | null | LOCR: Location-Guided Transformer for Optical Character Recognition | ['Yu Sun', 'Dongzhan Zhou', 'Chen Lin', 'Conghui He', 'Wanli Ouyang', 'Han-Sen Zhong'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 1 | 31 | ['Computer Science'] |
| 2403.02151 | TripoSR: Fast 3D Object Reconstruction from a Single Image | ['Dmitry Tochilkin', 'David Pankratz', 'Zexiang Liu', 'Zixuan Huang', 'Adam Letts', 'Yangguang Li', 'Ding Liang', 'Christian Laforte', 'Varun Jampani', 'Yan-Pei Cao'] | ['cs.CV'] | This technical report introduces TripoSR, a 3D reconstruction model leveraging transformer architecture for fast feed-forward 3D generation, producing 3D mesh from a single image in under 0.5 seconds. Building upon the LRM network architecture, TripoSR integrates substantial improvements in data processing, model desig... | 2024-03-04T16:00:56Z | Model: https://huggingface.co/stabilityai/TripoSR Code: https://github.com/VAST-AI-Research/TripoSR Demo: https://huggingface.co/spaces/stabilityai/TripoSR | null | null | null | null | null | null | null | null | null |
| 2403.02177 | ProTrix: Building Models for Planning and Reasoning over Tables with Sentence Context | ['Zirui Wu', 'Yansong Feng'] | ['cs.CL'] | Tables play a crucial role in conveying information in various domains. We propose a Plan-then-Reason framework to answer different types of user queries over tables with sentence context. The framework first plans the reasoning paths over the context, then assigns each step to program-based or textual reasoning to rea... | 2024-03-04T16:21:19Z | EMNLP 2024 Findings | null | null | ProTrix: Building Models for Planning and Reasoning over Tables with Sentence Context | ['Zirui Wu', 'Yansong Feng'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 12 | 61 | ['Computer Science'] |
| 2403.02178 | Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models | ['Changyu Chen', 'Xiting Wang', 'Ting-En Lin', 'Ang Lv', 'Yuchuan Wu', 'Xin Gao', 'Ji-Rong Wen', 'Rui Yan', 'Yongbin Li'] | ['cs.CL', 'cs.AI', 'cs.LG'] | In reasoning tasks, even a minor error can cascade into inaccurate results, leading to suboptimal performance of large language models in such domains. Earlier fine-tuning approaches sought to mitigate this by leveraging more precise supervisory signals from human labeling, larger models, or self-sampling, although at ... | 2024-03-04T16:21:54Z | Accepted by ACL 2024 | null | null | null | null | null | null | null | null | null |
| 2403.02270 | FENICE: Factuality Evaluation of summarization based on Natural language Inference and Claim Extraction | ['Alessandro Scirè', 'Karim Ghonim', 'Roberto Navigli'] | ['cs.CL'] | Recent advancements in text summarization, particularly with the advent of Large Language Models (LLMs), have shown remarkable performance. However, a notable challenge persists as a substantial number of automatically-generated summaries exhibit factual inconsistencies, such as hallucinations. In response to this issu... | 2024-03-04T17:57:18Z | ACL 2024 camera ready. Code and data at https://github.com/Babelscape/FENICE | null | null | null | null | null | null | null | null | null |
| 2403.02302 | Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation | ['Maksim Kuprashevich', 'Grigorii Alekseenko', 'Irina Tolstykh'] | ['cs.CV', 'cs.AI', 'cs.LG', 'I.2.0; I.4.0; I.4.9'] | Multimodal Large Language Models (MLLMs) have recently gained immense popularity. Powerful commercial models like ChatGPT-4V and Gemini, as well as open-source ones such as LLaVA, are essentially general-purpose models and are applied to solve a wide variety of tasks, including those in computer vision. These neural ne... | 2024-03-04T18:32:12Z | null | null | null | null | null | null | null | null | null | null |
| 2403.02333 | Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning | ['Yiming Huang', 'Xiao Liu', 'Yeyun Gong', 'Zhibin Gou', 'Yelong Shen', 'Nan Duan', 'Weizhu Chen'] | ['cs.CL', 'cs.AI'] | Large language models (LLMs) have shown great potential in complex reasoning tasks, yet their performance is often hampered by the scarcity of high-quality and reasoning-focused training datasets. Addressing this challenge, we propose Key-Point-Driven Data Synthesis (KPDDS), a novel data synthesis framework that synthe... | 2024-03-04T18:58:30Z | In progress | null | null | null | null | null | null | null | null | null |
| 2403.02411 | NiNformer: A Network in Network Transformer with Token Mixing Generated Gating Function | ['Abdullah Nazhat Abdullah', 'Tarkan Aydin'] | ['cs.CV', 'cs.LG'] | The attention mechanism is the primary component of the transformer architecture; it has led to significant advancements in deep learning spanning many domains and covering multiple tasks. In computer vision, the attention mechanism was first incorporated in the Vision Transformer ViT, and then its usage has expanded i... | 2024-03-04T19:08:20Z | Neural Comput & Applic (2025) | null | 10.1007/s00521-025-11226-1 | NiNformer: A Network in Network Transformer with Token Mixing Generated Gating Function | ['Abdullah Nazhat Abdullah', 'Tarkan Aydin'] | 2024 | Neural computing & applications (Print) | 0 | 56 | ['Computer Science'] |
| 2403.02513 | Balancing Enhancement, Harmlessness, and General Capabilities: Enhancing Conversational LLMs with Direct RLHF | ['Chen Zheng', 'Ke Sun', 'Hang Wu', 'Chenguang Xi', 'Xun Zhou'] | ['cs.CL'] | In recent advancements in Conversational Large Language Models (LLMs), a concerning trend has emerged, showing that many new base LLMs experience a knowledge reduction in their foundational capabilities following Supervised Fine-Tuning (SFT). This process often leads to issues such as forgetting or a decrease in the ba... | 2024-03-04T22:02:12Z | null | null | null | Balancing Enhancement, Harmlessness, and General Capabilities: Enhancing Conversational LLMs with Direct RLHF | ['Chen Zheng', 'Ke Sun', 'Hang Wu', 'Chenguang Xi', 'Xun Zhou'] | 2024 | arXiv.org | 12 | 45 | ['Computer Science'] |
| 2403.02522 | HeAR -- Health Acoustic Representations | ['Sebastien Baur', 'Zaid Nabulsi', 'Wei-Hung Weng', 'Jake Garrison', 'Louis Blankemeier', 'Sam Fishman', 'Christina Chen', 'Sujay Kakarmath', 'Minyoi Maimbolwa', 'Nsala Sanjase', 'Brian Shuma', 'Yossi Matias', 'Greg S. Corrado', 'Shwetak Patel', 'Shravya Shetty', 'Shruthi Prabhakara', 'Monde Muyoyeta', 'Diego Ardila'] | ['cs.LG', 'cs.AI'] | Health acoustic sounds such as coughs and breaths are known to contain useful health signals with significant potential for monitoring health and disease, yet are underexplored in the medical machine learning community. The existing deep learning systems for health acoustics are often narrowly trained and evaluated on ... | 2024-03-04T22:26:25Z | 4 tables, 4 figures, 6 supplementary tables, 3 supplementary figures | null | null | null | null | null | null | null | null | null |
| 2403.02677 | Finetuned Multimodal Language Models Are High-Quality Image-Text Data Filters | ['Weizhi Wang', 'Khalil Mrini', 'Linjie Yang', 'Sateesh Kumar', 'Yu Tian', 'Xifeng Yan', 'Heng Wang'] | ['cs.CV', 'cs.CL'] | We propose a novel framework for filtering image-text data by leveraging fine-tuned Multimodal Language Models (MLMs). Our approach outperforms predominant filtering methods (e.g., CLIPScore) via integrating the recent advances in MLMs. We design four distinct yet complementary metrics to holistically measure the quali... | 2024-03-05T06:05:15Z | Project Website: https://mlm-filter.github.io | null | null | null | null | null | null | null | null | null |
| 2403.02712 | Breeze-7B Technical Report | ['Chan-Jan Hsu', 'Chang-Le Liu', 'Feng-Ting Liao', 'Po-Chun Hsu', 'Yi-Chang Chen', 'Da-Shan Shiu'] | ['cs.CL'] | Breeze-7B is an open-source language model based on Mistral-7B, designed to address the need for improved language comprehension and chatbot-oriented capabilities in Traditional Chinese. This technical report provides an overview of the additional pretraining, finetuning, and evaluation stages for the Breeze-7B model. ... | 2024-03-05T07:08:06Z | null | null | null | Breeze-7B Technical Report | ['Chan-Jan Hsu', 'Chang-Le Liu', 'Fengting Liao', 'Po-Chun Hsu', 'Yi-Chang Chen', 'Da-shan Shiu'] | 2024 | arXiv.org | 2 | 21 | ['Computer Science'] |
| 2403.02715 | Crossing Linguistic Horizons: Finetuning and Comprehensive Evaluation of Vietnamese Large Language Models | ['Sang T. Truong', 'Duc Q. Nguyen', 'Toan Nguyen', 'Dong D. Le', 'Nhi N. Truong', 'Tho Quan', 'Sanmi Koyejo'] | ['cs.CL', 'cs.AI', '68T50'] | Recent advancements in large language models (LLMs) have underscored their importance in the evolution of artificial intelligence. However, despite extensive pretraining on multilingual datasets, available open-sourced LLMs exhibit limited effectiveness in processing Vietnamese. The challenge is exacerbated by the abse... | 2024-03-05T07:13:28Z | 51 pages | null | null | Crossing Linguistic Horizons: Finetuning and Comprehensive Evaluation of Vietnamese Large Language Models | ['Sang T. Truong', 'D. Q. Nguyen', 'Toan Nguyen', 'Dong D. Le', 'Nhi N. Truong', 'Tho Quan', 'Oluwasanmi Koyejo'] | 2024 | NAACL-HLT | 2 | 57 | ['Computer Science'] |
| 2403.02745 | CURATRON: Complete and Robust Preference Data for Rigorous Alignment of Large Language Models | ['Son The Nguyen', 'Niranjan Uma Naresh', 'Theja Tulabandhula'] | ['cs.AI', 'cs.CL'] | This paper addresses the challenges of aligning large language models (LLMs) with human values via preference learning (PL), focusing on incomplete and corrupted data in preference datasets. We propose a novel method for robustly and completely recalibrating values within these datasets to enhance LLMs' resilience agai... | 2024-03-05T07:58:12Z | null | null | null | CURATRON: Complete and Robust Preference Data for Rigorous Alignment of Large Language Models | ['S. Nguyen', 'Niranjan Uma Naresh', 'Theja Tulabandhula'] | 2024 | DASH | 0 | 106 | ['Computer Science'] |
| 2403.02884 | MathScale: Scaling Instruction Tuning for Mathematical Reasoning | ['Zhengyang Tang', 'Xingxing Zhang', 'Benyou Wang', 'Furu Wei'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Large language models (LLMs) have demonstrated remarkable capabilities in problem-solving. However, their proficiency in solving mathematical problems remains inadequate. We propose MathScale, a simple and scalable method to create high-quality mathematical reasoning data using frontier LLMs (e.g., {\tt GPT-3.5}). Insp... | 2024-03-05T11:42:59Z | Work in progress | null | null | null | null | null | null | null | null | null |
2403.03100 | NaturalSpeech 3: Zero-Shot Speech Synthesis with Factorized Codec and
Diffusion Models | ['Zeqian Ju', 'Yuancheng Wang', 'Kai Shen', 'Xu Tan', 'Detai Xin', 'Dongchao Yang', 'Yanqing Liu', 'Yichong Leng', 'Kaitao Song', 'Siliang Tang', 'Zhizheng Wu', 'Tao Qin', 'Xiang-Yang Li', 'Wei Ye', 'Shikun Zhang', 'Jiang Bian', 'Lei He', 'Jinyu Li', 'Sheng Zhao'] | ['eess.AS', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.SD'] | While recent large-scale text-to-speech (TTS) models have achieved
significant progress, they still fall short in speech quality, similarity, and
prosody. Considering speech intricately encompasses various attributes (e.g.,
content, prosody, timbre, and acoustic details) that pose significant
challenges for generation,... | 2024-03-05T16:35:25Z | Achieving human-level quality and naturalness on multi-speaker
datasets (e.g., LibriSpeech) in a zero-shot way | null | null | NaturalSpeech 3: Zero-Shot Speech Synthesis with Factorized Codec and Diffusion Models | ['Zeqian Ju', 'Yuancheng Wang', 'Kai Shen', 'Xu Tan', 'Detai Xin', 'Dongchao Yang', 'Yanqing Liu', 'Yichong Leng', 'Kaitao Song', 'Siliang Tang', 'Zhizheng Wu', 'Tao Qin', 'Xiang-Yang Li', 'Wei Ye', 'Shikun Zhang', 'Jiang Bian', 'Lei He', 'Jinyu Li', 'Sheng Zhao'] | 2024 | International Conference on Machine Learning | 180 | 75 | ['Engineering', 'Computer Science'] |
2403.03163 | Design2Code: Benchmarking Multimodal Code Generation for Automated
Front-End Engineering | ['Chenglei Si', 'Yanzhe Zhang', 'Ryan Li', 'Zhengyuan Yang', 'Ruibo Liu', 'Diyi Yang'] | ['cs.CL', 'cs.CV', 'cs.CY'] | Generative AI has made rapid advancements in recent years, achieving
unprecedented capabilities in multimodal understanding and code generation.
This can enable a new paradigm of front-end development in which multimodal
large language models (MLLMs) directly convert visual designs into code
implementations. In this wo... | 2024-03-05T17:56:27Z | NAACL 2025; The first two authors contributed equally | null | null | null | null | null | null | null | null | null |
2403.03170 | SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context
Misinformation Detection | ['Peng Qi', 'Zehong Yan', 'Wynne Hsu', 'Mong Li Lee'] | ['cs.MM', 'cs.AI', 'cs.CL', 'cs.CV', 'cs.CY'] | Misinformation is a prevalent societal issue due to its potential high risks.
Out-of-context (OOC) misinformation, where authentic images are repurposed with
false text, is one of the easiest and most effective ways to mislead audiences.
Current methods focus on assessing image-text consistency but lack convincing
expl... | 2024-03-05T18:04:59Z | To appear in CVPR 2024 | null | null | null | null | null | null | null | null | null |
2403.03181 | Behavior Generation with Latent Actions | ['Seungjae Lee', 'Yibin Wang', 'Haritheja Etukuru', 'H. Jin Kim', 'Nur Muhammad Mahi Shafiullah', 'Lerrel Pinto'] | ['cs.LG', 'cs.AI', 'cs.RO'] | Generative modeling of complex behaviors from labeled datasets has been a
longstanding problem in decision making. Unlike language or image generation,
decision making requires modeling actions - continuous-valued vectors that are
multimodal in their distribution, potentially drawn from uncurated sources,
where generat... | 2024-03-05T18:19:29Z | Github repo: https://github.com/jayLEE0301/vq_bet_official | PMLR 235:26991-27008, 2024 | null | null | null | null | null | null | null | null |
2403.03206 | Scaling Rectified Flow Transformers for High-Resolution Image Synthesis | ['Patrick Esser', 'Sumith Kulal', 'Andreas Blattmann', 'Rahim Entezari', 'Jonas Müller', 'Harry Saini', 'Yam Levi', 'Dominik Lorenz', 'Axel Sauer', 'Frederic Boesel', 'Dustin Podell', 'Tim Dockhorn', 'Zion English', 'Kyle Lacey', 'Alex Goodwin', 'Yannik Marek', 'Robin Rombach'] | ['cs.CV'] | Diffusion models create data from noise by inverting the forward paths of
data towards noise and have emerged as a powerful generative modeling technique
for high-dimensional, perceptual data such as images and videos. Rectified flow
is a recent generative model formulation that connects data and noise in a
straight li... | 2024-03-05T18:45:39Z | null | null | null | Scaling Rectified Flow Transformers for High-Resolution Image Synthesis | ['Patrick Esser', 'Sumith Kulal', 'A. Blattmann', 'Rahim Entezari', 'Jonas Muller', 'Harry Saini', 'Yam Levi', 'Dominik Lorenz', 'Axel Sauer', 'Frederic Boesel', 'Dustin Podell', 'Tim Dockhorn', 'Zion English', 'Kyle Lacey', 'Alex Goodwin', 'Yannik Marek', 'Robin Rombach'] | 2024 | International Conference on Machine Learning | 1410 | 75 | ['Computer Science'] |
2403.03218 | The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning | ['Nathaniel Li', 'Alexander Pan', 'Anjali Gopal', 'Summer Yue', 'Daniel Berrios', 'Alice Gatti', 'Justin D. Li', 'Ann-Kathrin Dombrowski', 'Shashwat Goel', 'Long Phan', 'Gabriel Mukobi', 'Nathan Helm-Burger', 'Rassin Lababidi', 'Lennart Justen', 'Andrew B. Liu', 'Michael Chen', 'Isabelle Barrass', 'Oliver Zhang', 'Xiao... | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.CY'] | The White House Executive Order on Artificial Intelligence highlights the
risks of large language models (LLMs) empowering malicious actors in developing
biological, cyber, and chemical weapons. To measure these risks of malicious
use, government institutions and major AI labs are developing evaluations for
hazardous c... | 2024-03-05T18:59:35Z | See the project page at https://wmdp.ai | null | null | null | null | null | null | null | null | null |
2403.03234 | Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling | ['Yair Schiff', 'Chia-Hsiang Kao', 'Aaron Gokaslan', 'Tri Dao', 'Albert Gu', 'Volodymyr Kuleshov'] | ['q-bio.GN', 'cs.LG'] | Large-scale sequence modeling has sparked rapid advances that now extend into
biology and genomics. However, modeling genomic sequences introduces challenges
such as the need to model long-range token interactions, the effects of
upstream and downstream regions of the genome, and the reverse complementarity
(RC) of DNA... | 2024-03-05T01:42:51Z | ICML 2024; Code to reproduce our experiments is available at
https://github.com/kuleshov-group/caduceus | null | null | null | null | null | null | null | null | null |
2403.03419 | Negating Negatives: Alignment with Human Negative Samples via
Distributional Dispreference Optimization | ['Shitong Duan', 'Xiaoyuan Yi', 'Peng Zhang', 'Yan Liu', 'Zheng Liu', 'Tun Lu', 'Xing Xie', 'Ning Gu'] | ['cs.CL', 'cs.AI'] | Large language models (LLMs) have revolutionized the role of AI, yet pose
potential social risks. To steer LLMs towards human preference, alignment
technologies have been introduced and gained increasing attention.
Nevertheless, existing methods heavily rely on high-quality positive-negative
training pairs, suffering f... | 2024-03-06T03:02:38Z | Accepted by EMNLP 2024(Findings) | null | null | null | null | null | null | null | null | null |
2403.03432 | Mixture-of-LoRAs: An Efficient Multitask Tuning for Large Language
Models | ['Wenfeng Feng', 'Chuzhan Hao', 'Yuewei Zhang', 'Yu Han', 'Hao Wang'] | ['cs.CL', 'cs.AI'] | Instruction Tuning has the potential to stimulate or enhance specific
capabilities of large language models (LLMs). However, achieving the right
balance of data is crucial to prevent catastrophic forgetting and interference
between tasks. To address these limitations and enhance training flexibility,
we propose the Mix... | 2024-03-06T03:33:48Z | 10 pages, COLING24 Accepted | null | null | null | null | null | null | null | null | null |
2403.03507 | GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection | ['Jiawei Zhao', 'Zhenyu Zhang', 'Beidi Chen', 'Zhangyang Wang', 'Anima Anandkumar', 'Yuandong Tian'] | ['cs.LG'] | Training Large Language Models (LLMs) presents significant memory challenges,
predominantly due to the growing size of weights and optimizer states. Common
memory-reduction approaches, such as low-rank adaptation (LoRA), add a
trainable low-rank matrix to the frozen pre-trained weight in each layer,
reducing trainable ... | 2024-03-06T07:29:57Z | ICML 2024 (Oral) | null | null | GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection | ['Jiawei Zhao', 'Zhenyu (Allen) Zhang', 'Beidi Chen', 'Zhangyang Wang', 'Anima Anandkumar', 'Yuandong Tian'] | 2024 | International Conference on Machine Learning | 230 | 57 | ['Computer Science'] |
2403.03542 | DPOT: Auto-Regressive Denoising Operator Transformer for Large-Scale PDE
Pre-Training | ['Zhongkai Hao', 'Chang Su', 'Songming Liu', 'Julius Berner', 'Chengyang Ying', 'Hang Su', 'Anima Anandkumar', 'Jian Song', 'Jun Zhu'] | ['cs.LG', 'cs.NA', 'math.NA'] | Pre-training has been investigated to improve the efficiency and performance
of training neural operators in data-scarce settings. However, it is largely in
its infancy due to the inherent complexity and diversity, such as long
trajectories, multiple scales and varying dimensions of partial differential
equations (PDEs... | 2024-03-06T08:38:34Z | null | null | null | DPOT: Auto-Regressive Denoising Operator Transformer for Large-Scale PDE Pre-Training | ['Zhongkai Hao', 'Chang Su', 'Songming Liu', 'Julius Berner', 'Chengyang Ying', 'Hang Su', 'Anima Anandkumar', 'Jian Song', 'Jun Zhu'] | 2024 | International Conference on Machine Learning | 37 | 55 | ['Computer Science', 'Mathematics'] |
2403.03640 | Apollo: A Lightweight Multilingual Medical LLM towards Democratizing
Medical AI to 6B People | ['Xidong Wang', 'Nuo Chen', 'Junyin Chen', 'Yidong Wang', 'Guorui Zhen', 'Chunxian Zhang', 'Xiangbo Wu', 'Yan Hu', 'Anningzhe Gao', 'Xiang Wan', 'Haizhou Li', 'Benyou Wang'] | ['cs.CL', 'cs.AI'] | Despite the vast repository of global medical knowledge predominantly being
in English, local languages are crucial for delivering tailored healthcare
services, particularly in areas with limited medical resources. To extend the
reach of medical AI advancements to a broader population, we aim to develop
medical LLMs ac... | 2024-03-06T11:56:02Z | Preprint | null | null | Apollo: A Lightweight Multilingual Medical LLM towards Democratizing Medical AI to 6B People | ['Xidong Wang', 'Nuo Chen', 'Junying Chen', 'Yan Hu', 'Yidong Wang', 'Xiangbo Wu', 'Anningzhe Gao', 'Xiang Wan', 'Haizhou Li', 'Benyou Wang'] | 2024 | null | 28 | 63 | ['Computer Science'] |
2403.03853 | ShortGPT: Layers in Large Language Models are More Redundant Than You
Expect | ['Xin Men', 'Mingyu Xu', 'Qingyu Zhang', 'Bingning Wang', 'Hongyu Lin', 'Yaojie Lu', 'Xianpei Han', 'Weipeng Chen'] | ['cs.CL'] | As Large Language Models (LLMs) continue to advance in performance, their
size has escalated significantly, with current LLMs containing billions or even
trillions of parameters. However, in this study, we discovered that many layers
of LLMs exhibit high similarity, and some layers play a negligible role in
network fun... | 2024-03-06T17:04:18Z | null | null | null | null | null | null | null | null | null | null |
2403.03883 | SaulLM-7B: A pioneering Large Language Model for Law | ['Pierre Colombo', 'Telmo Pessoa Pires', 'Malik Boudiaf', 'Dominic Culver', 'Rui Melo', 'Caio Corro', 'Andre F. T. Martins', 'Fabrizio Esposito', 'Vera Lúcia Raposo', 'Sofia Morgado', 'Michael Desa'] | ['cs.CL'] | In this paper, we introduce SaulLM-7B, a large language model (LLM) tailored
for the legal domain. With 7 billion parameters, SaulLM-7B is the first LLM
designed explicitly for legal text comprehension and generation. Leveraging the
Mistral 7B architecture as its foundation, SaulLM-7B is trained on an English
legal cor... | 2024-03-06T17:42:16Z | null | null | null | null | null | null | null | null | null | null |
2403.03952 | Bridging Language and Items for Retrieval and Recommendation | ['Yupeng Hou', 'Jiacheng Li', 'Zhankui He', 'An Yan', 'Xiusi Chen', 'Julian McAuley'] | ['cs.IR'] | This paper introduces BLaIR, a series of pretrained sentence embedding models
specialized for recommendation scenarios. BLaIR is trained to learn
correlations between item metadata and potential natural language context,
which is useful for retrieving and recommending items. To pretrain BLaIR, we
collect Amazon Reviews... | 2024-03-06T18:56:36Z | null | null | null | null | null | null | null | null | null | null |
2403.04132 | Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference | ['Wei-Lin Chiang', 'Lianmin Zheng', 'Ying Sheng', 'Anastasios Nikolas Angelopoulos', 'Tianle Li', 'Dacheng Li', 'Hao Zhang', 'Banghua Zhu', 'Michael Jordan', 'Joseph E. Gonzalez', 'Ion Stoica'] | ['cs.AI', 'cs.CL'] | Large Language Models (LLMs) have unlocked new capabilities and applications;
however, evaluating the alignment with human preferences still poses
significant challenges. To address this issue, we introduce Chatbot Arena, an
open platform for evaluating LLMs based on human preferences. Our methodology
employs a pairwis... | 2024-03-07T01:22:38Z | null | null | null | null | null | null | null | null | null | null |
2403.04197 | Large Language Models are In-Context Molecule Learners | ['Jiatong Li', 'Wei Liu', 'Zhihao Ding', 'Wenqi Fan', 'Yuqiang Li', 'Qing Li'] | ['cs.CL', 'cs.AI'] | Large Language Models (LLMs) have demonstrated exceptional performance in
biochemical tasks, especially the molecule caption translation task, which aims
to bridge the gap between molecules and natural language texts. However,
previous methods in adapting LLMs to the molecule-caption translation task
required extra dom... | 2024-03-07T03:58:28Z | Accepted by IEEE TKDE | null | null | null | null | null | null | null | null | null |
2403.04224 | Aligners: Decoupling LLMs and Alignment | ['Lilian Ngweta', 'Mayank Agarwal', 'Subha Maity', 'Alex Gittens', 'Yuekai Sun', 'Mikhail Yurochkin'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Large Language Models (LLMs) need to be aligned with human expectations to
ensure their safety and utility in most applications. Alignment is challenging,
costly, and needs to be repeated for every LLM and alignment criterion. We
propose to decouple LLMs and alignment by training aligner models that can be
used to alig... | 2024-03-07T04:54:56Z | Short version accepted as a Tiny Paper at the International
Conference on Learning Representations (ICLR) 2024. Long version accepted to
the Conference on Empirical Methods in Natural Language Processing (EMNLP)
2024 Findings | null | null | null | null | null | null | null | null | null |
2403.04652 | Yi: Open Foundation Models by 01.AI | ['01. AI', ':', 'Alex Young', 'Bei Chen', 'Chao Li', 'Chengen Huang', 'Ge Zhang', 'Guanwei Zhang', 'Guoyin Wang', 'Heng Li', 'Jiangcheng Zhu', 'Jianqun Chen', 'Jing Chang', 'Kaidong Yu', 'Peng Liu', 'Qiang Liu', 'Shawn Yue', 'Senbin Yang', 'Shiming Yang', 'Wen Xie', 'Wenhao Huang', 'Xiaohui Hu', 'Xiaoyi Ren', 'Xinyao N... | ['cs.CL', 'cs.AI'] | We introduce the Yi model family, a series of language and multimodal models
that demonstrate strong multi-dimensional capabilities. The Yi model family is
based on 6B and 34B pretrained language models, then we extend them to chat
models, 200K long context models, depth-upscaled models, and vision-language
models. Our... | 2024-03-07T16:52:49Z | null | null | null | null | null | null | null | null | null | null |
2403.04692 | PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K
Text-to-Image Generation | ['Junsong Chen', 'Chongjian Ge', 'Enze Xie', 'Yue Wu', 'Lewei Yao', 'Xiaozhe Ren', 'Zhongdao Wang', 'Ping Luo', 'Huchuan Lu', 'Zhenguo Li'] | ['cs.CV'] | In this paper, we introduce PixArt-\Sigma, a Diffusion Transformer
model~(DiT) capable of directly generating images at 4K resolution.
PixArt-\Sigma represents a significant advancement over its predecessor,
PixArt-\alpha, offering images of markedly higher fidelity and improved
alignment with text prompts. A key featu... | 2024-03-07T17:41:37Z | Project Page: https://pixart-alpha.github.io/PixArt-sigma-project/ | null | null | null | null | null | null | null | null | null |
2403.04706 | Common 7B Language Models Already Possess Strong Math Capabilities | ['Chen Li', 'Weiqi Wang', 'Jingcheng Hu', 'Yixuan Wei', 'Nanning Zheng', 'Han Hu', 'Zheng Zhang', 'Houwen Peng'] | ['cs.CL', 'cs.AI'] | Mathematical capabilities were previously believed to emerge in common
language models only at a very large scale or require extensive math-related
pre-training. This paper shows that the LLaMA-2 7B model with common
pre-training already exhibits strong mathematical abilities, as evidenced by
its impressive accuracy of... | 2024-03-07T18:00:40Z | null | null | null | null | null | null | null | null | null | null |
2403.04770 | Social Orientation: A New Feature for Dialogue Analysis | ['Todd Morrill', 'Zhaoyuan Deng', 'Yanda Chen', 'Amith Ananthram', 'Colin Wayne Leach', 'Kathleen McKeown'] | ['cs.CL', 'cs.LG'] | There are many settings where it is useful to predict and explain the success
or failure of a dialogue. Circumplex theory from psychology models the social
orientations (e.g., Warm-Agreeable, Arrogant-Calculating) of conversation
participants and can be used to predict and explain the outcome of social
interactions. Ou... | 2024-02-26T01:55:45Z | Accepted to LREC-COLING 2024 | null | null | null | null | null | null | null | null | null |
2403.04814 | Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks | ['Linyuan Gong', 'Sida Wang', 'Mostafa Elhoushi', 'Alvin Cheung'] | ['cs.CL', 'cs.AI', 'cs.LG', 'cs.SE'] | We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for
evaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)
task. This benchmark focuses on syntax-aware completions of program structures
such as code blocks and conditional expressions, and includes 17,720 examples
from multipl... | 2024-03-07T05:05:56Z | 22 pages; ICML 2024 Oral: https://icml.cc/virtual/2024/oral/35482 | null | null | null | null | null | null | null | null | null |
2403.04908 | Self-Adapting Large Visual-Language Models to Edge Devices across Visual
Modalities | ['Kaiwen Cai', 'Zhekai Duan', 'Gaowen Liu', 'Charles Fleming', 'Chris Xiaoxuan Lu'] | ['cs.CV'] | Recent advancements in Vision-Language (VL) models have sparked interest in
their deployment on edge devices, yet challenges in handling diverse visual
modalities, manual annotation, and computational constraints remain. We
introduce EdgeVL, a novel framework that bridges this gap by seamlessly
integrating dual-modalit... | 2024-03-07T21:34:40Z | ECCV2024 Accepted | null | null | null | null | null | null | null | null | null |
2403.05034 | CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction
Model | ['Zhengyi Wang', 'Yikai Wang', 'Yifei Chen', 'Chendong Xiang', 'Shuo Chen', 'Dajiang Yu', 'Chongxuan Li', 'Hang Su', 'Jun Zhu'] | ['cs.CV', 'cs.LG'] | Feed-forward 3D generative models like the Large Reconstruction Model (LRM)
have demonstrated exceptional generation speed. However, the transformer-based
methods do not leverage the geometric priors of the triplane component in their
architecture, often leading to sub-optimal quality given the limited size of 3D
data ... | 2024-03-08T04:25:29Z | Project page: https://ml.cs.tsinghua.edu.cn/~zhengyi/CRM/ | null | null | CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model | ['Zhengyi Wang', 'Yikai Wang', 'Yifei Chen', 'Chendong Xiang', 'Shuo Chen', 'Dajiang Yu', 'Chongxuan Li', 'Hang Su', 'Jun Zhu'] | 2024 | European Conference on Computer Vision | 136 | 70 | ['Computer Science'] |
2403.05121 | CogView3: Finer and Faster Text-to-Image Generation via Relay Diffusion | ['Wendi Zheng', 'Jiayan Teng', 'Zhuoyi Yang', 'Weihan Wang', 'Jidong Chen', 'Xiaotao Gu', 'Yuxiao Dong', 'Ming Ding', 'Jie Tang'] | ['cs.CV'] | Recent advancements in text-to-image generative systems have been largely
driven by diffusion models. However, single-stage text-to-image diffusion
models still face challenges, in terms of computational efficiency and the
refinement of image details. To tackle the issue, we propose CogView3, an
innovative cascaded fra... | 2024-03-08T07:32:50Z | null | null | null | CogView3: Finer and Faster Text-to-Image Generation via Relay Diffusion | ['Wendi Zheng', 'Jiayan Teng', 'Zhuoyi Yang', 'Weihan Wang', 'Jidong Chen', 'Xiaotao Gu', 'Yuxiao Dong', 'Ming Ding', 'Jie Tang'] | 2024 | European Conference on Computer Vision | 41 | 35 | ['Computer Science'] |
2403.05135 | ELLA: Equip Diffusion Models with LLM for Enhanced Semantic Alignment | ['Xiwei Hu', 'Rui Wang', 'Yixiao Fang', 'Bin Fu', 'Pei Cheng', 'Gang Yu'] | ['cs.CV'] | Diffusion models have demonstrated remarkable performance in the domain of
text-to-image generation. However, most widely used models still employ CLIP as
their text encoder, which constrains their ability to comprehend dense prompts,
encompassing multiple objects, detailed attributes, complex relationships,
long-text ... | 2024-03-08T08:08:10Z | Project Page: https://ella-diffusion.github.io/ | null | null | null | null | null | null | null | null | null |
2403.05139 | Improving Diffusion Models for Authentic Virtual Try-on in the Wild | ['Yisol Choi', 'Sangkyung Kwak', 'Kyungmin Lee', 'Hyungwon Choi', 'Jinwoo Shin'] | ['cs.CV'] | This paper considers image-based virtual try-on, which renders an image of a
person wearing a curated garment, given a pair of images depicting the person
and the garment, respectively. Previous works adapt existing exemplar-based
inpainting diffusion models for virtual try-on to improve the naturalness of
the generate... | 2024-03-08T08:12:18Z | ECCV 2024 | null | null | Improving Diffusion Models for Authentic Virtual Try-on in the Wild | ['Yisol Choi', 'Sangkyung Kwak', 'Kyungmin Lee', 'Hyungwon Choi', 'Jinwoo Shin'] | 2024 | European Conference on Computer Vision | 29 | 60 | ['Computer Science'] |
2403.05286 | LLM4Decompile: Decompiling Binary Code with Large Language Models | ['Hanzhuo Tan', 'Qi Luo', 'Jing Li', 'Yuqun Zhang'] | ['cs.PL', 'cs.CL'] | Decompilation aims to convert binary code to high-level source code, but
traditional tools like Ghidra often produce results that are difficult to read
and execute. Motivated by the advancements in Large Language Models (LLMs), we
propose LLM4Decompile, the first and largest open-source LLM series (1.3B to
33B) trained... | 2024-03-08T13:10:59Z | null | null | null | LLM4Decompile: Decompiling Binary Code with Large Language Models | ['Hanzhuo Tan', 'Qi Luo', 'Jing Li', 'Yuqun Zhang'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 28 | 60 | ['Computer Science'] |
2403.05419 | Rethinking Transformers Pre-training for Multi-Spectral Satellite
Imagery | ['Mubashir Noman', 'Muzammal Naseer', 'Hisham Cholakkal', 'Rao Muhammad Anwar', 'Salman Khan', 'Fahad Shahbaz Khan'] | ['cs.CV'] | Recent advances in unsupervised learning have demonstrated the ability of
large vision models to achieve promising results on downstream tasks by
pre-training on large amount of unlabelled data. Such pre-training techniques
have also been explored recently in the remote sensing domain due to the
availability of large a... | 2024-03-08T16:18:04Z | Accepted at CVPR 2024 | null | null | null | null | null | null | null | null | null |
2403.05493 | To Err Is Human, but Llamas Can Learn It Too | ['Agnes Luhtaru', 'Taido Purason', 'Martin Vainikko', 'Maksym Del', 'Mark Fishel'] | ['cs.CL'] | This study explores enhancing grammatical error correction (GEC) through
artificial error generation (AEG) using language models (LMs). Specifically, we
fine-tune Llama 2-based LMs for error generation and find that this approach
yields synthetic errors akin to human errors. Next, we train GEC Llama models
with the hel... | 2024-03-08T18:04:03Z | null | null | null | To Err Is Human, but Llamas Can Learn It Too | ['Agnes Luhtaru', 'Taido Purason', 'Martin Vainikko', 'Maksym Del', 'Mark Fishel'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 2 | 65 | ['Computer Science'] |
2403.05525 | DeepSeek-VL: Towards Real-World Vision-Language Understanding | ['Haoyu Lu', 'Wen Liu', 'Bo Zhang', 'Bingxuan Wang', 'Kai Dong', 'Bo Liu', 'Jingxiang Sun', 'Tongzheng Ren', 'Zhuoshu Li', 'Hao Yang', 'Yaofeng Sun', 'Chengqi Deng', 'Hanwei Xu', 'Zhenda Xie', 'Chong Ruan'] | ['cs.AI'] | We present DeepSeek-VL, an open-source Vision-Language (VL) Model designed
for real-world vision and language understanding applications. Our approach is
structured around three key dimensions:
We strive to ensure our data is diverse, scalable, and extensively covers
real-world scenarios including web screenshots, PD... | 2024-03-08T18:46:00Z | https://github.com/deepseek-ai/DeepSeek-VL | null | null | null | null | null | null | null | null | null |
2403.05530 | Gemini 1.5: Unlocking multimodal understanding across millions of tokens
of context | ['Gemini Team', 'Petko Georgiev', 'Ving Ian Lei', 'Ryan Burnell', 'Libin Bai', 'Anmol Gulati', 'Garrett Tanzer', 'Damien Vincent', 'Zhufeng Pan', 'Shibo Wang', 'Soroosh Mariooryad', 'Yifan Ding', 'Xinyang Geng', 'Fred Alcober', 'Roy Frostig', 'Mark Omernick', 'Lexi Walker', 'Cosmin Paduraru', 'Christina Sorokin', 'Andr... | ['cs.CL', 'cs.AI'] | In this report, we introduce the Gemini 1.5 family of models, representing
the next generation of highly compute-efficient multimodal models capable of
recalling and reasoning over fine-grained information from millions of tokens
of context, including multiple long documents and hours of video and audio. The
family inc... | 2024-03-08T18:54:20Z | null | null | null | null | null | null | null | null | null | null |
2403.05973 | Calibrating Large Language Models Using Their Generations Only | ['Dennis Ulmer', 'Martin Gubri', 'Hwaran Lee', 'Sangdoo Yun', 'Seong Joon Oh'] | ['cs.CL', 'cs.AI', 'cs.LG'] | As large language models (LLMs) are increasingly deployed in user-facing
applications, building trust and maintaining safety by accurately quantifying a
model's confidence in its prediction becomes even more important. However,
finding effective ways to calibrate LLMs - especially when the only interface
to the models ... | 2024-03-09T17:46:24Z | null | null | null | Calibrating Large Language Models Using Their Generations Only | ['Dennis Ulmer', 'Martin Gubri', 'Hwaran Lee', 'Sangdoo Yun', 'Seong Joon Oh'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 28 | 81 | ['Computer Science'] |
2403.06009 | Detectors for Safe and Reliable LLMs: Implementations, Uses, and
Limitations | ['Swapnaja Achintalwar', 'Adriana Alvarado Garcia', 'Ateret Anaby-Tavor', 'Ioana Baldini', 'Sara E. Berger', 'Bishwaranjan Bhattacharjee', 'Djallel Bouneffouf', 'Subhajit Chaudhury', 'Pin-Yu Chen', 'Lamogha Chiazor', 'Elizabeth M. Daly', 'Kirushikesh DB', 'Rogério Abreu de Paula', 'Pierre Dognin', 'Eitan Farchi', 'Soum... | ['cs.LG'] | Large language models (LLMs) are susceptible to a variety of risks, from
non-faithful output to biased and toxic generations. Due to several limiting
factors surrounding LLMs (training cost, API access, data availability, etc.),
it may not always be feasible to impose direct safety constraints on a deployed
model. Ther... | 2024-03-09T21:07:16Z | null | null | null | Detectors for Safe and Reliable LLMs: Implementations, Uses, and Limitations | ['Swapnaja Achintalwar', 'Adriana Alvarado Garcia', 'Ateret Anaby-Tavor', 'Ioana Baldini', 'Sara E. Berger', 'Bishwaranjan Bhattacharjee', 'Djallel Bouneffouf', 'Subhajit Chaudhury', 'Pin-Yu Chen', 'Lamogha Chiazor', 'Elizabeth M. Daly', "Rog'erio Abreu de Paula", 'Pierre L. Dognin', 'E. Farchi', 'Soumya Ghosh', 'Micha... | 2024 | arXiv.org | 11 | 155 | ['Computer Science'] |
2403.06018 | Few-Shot Cross-Lingual Transfer for Prompting Large Language Models in
Low-Resource Languages | ['Christopher Toukmaji'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Large pre-trained language models (PLMs) are at the forefront of advances in
Natural Language Processing. One widespread use case of PLMs is "prompting" -
or in-context learning - where a user provides a description of a task and some
completed examples of the task to a PLM as context before prompting the PLM to
perfor... | 2024-03-09T21:36:13Z | 47 pages, 26 figures; a thesis submitted in partial satisfaction of
the requirements for the degree of Bachelor of Science in Computer Science at
the University of California - Santa Cruz | null | null | Few-Shot Cross-Lingual Transfer for Prompting Large Language Models in Low-Resource Languages | ['Christopher Toukmaji'] | 2024 | arXiv.org | 1 | 50 | ['Computer Science'] |
2403.06098 | VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video
Diffusion Models | ['Wenhao Wang', 'Yi Yang'] | ['cs.CV', 'cs.CL'] | The arrival of Sora marks a new era for text-to-video diffusion models,
bringing significant advancements in video generation and potential
applications. However, Sora, along with other text-to-video diffusion models,
is highly reliant on prompts, and there is no publicly available dataset that
features a study of text... | 2024-03-10T05:40:12Z | Accepted by NeurIPS 2024 (Datasets and Benchmarks Track) | null | null | null | null | null | null | null | null | null |
2403.06164 | Platypose: Calibrated Zero-Shot Multi-Hypothesis 3D Human Motion
Estimation | ['Paweł A. Pierzchlewicz', 'Caio O. da Silva', 'R. James Cotton', 'Fabian H. Sinz'] | ['cs.CV'] | Single camera 3D pose estimation is an ill-defined problem due to inherent
ambiguities from depth, occlusion or keypoint noise. Multi-hypothesis pose
estimation accounts for this uncertainty by providing multiple 3D poses
consistent with the 2D measurements. Current research has predominantly
concentrated on generating... | null | null | null | Platypose: Calibrated Zero-Shot Multi-Hypothesis 3D Human Motion Estimation | ['Paweł Antoni Pierzchlewicz', 'Caio da Silva', 'R. J. Cotton', 'Fabian H. Sinz'] | 2024 | arXiv.org | 0 | 54 | ['Computer Science'] |
2403.06350 | IndicLLMSuite: A Blueprint for Creating Pre-training and Fine-Tuning
Datasets for Indian Languages | ['Mohammed Safi Ur Rahman Khan', 'Priyam Mehta', 'Ananth Sankar', 'Umashankar Kumaravelan', 'Sumanth Doddapaneni', 'Suriyaprasaad B', 'Varun Balan G', 'Sparsh Jain', 'Anoop Kunchukuttan', 'Pratyush Kumar', 'Raj Dabre', 'Mitesh M. Khapra'] | ['cs.CL'] | Despite the considerable advancements in English LLMs, the progress in
building comparable models for other languages has been hindered due to the
scarcity of tailored resources. Our work aims to bridge this divide by
introducing an expansive suite of resources specifically designed for the
development of Indic LLMs, c... | 2024-03-11T00:46:56Z | ACL-2024 Outstanding Paper | null | 10.18653/v1/2024.acl-long.843 | IndicLLMSuite: A Blueprint for Creating Pre-training and Fine-Tuning Datasets for Indian Languages | ['Mohammed Safi Ur Rahman Khan', 'Priyam Mehta', 'Ananth Sankar', 'Umashankar Kumaravelan', 'Sumanth Doddapaneni', 'G. Suriyaprasaad', 'G. VarunBalan', 'Sparsh Jain', 'Anoop Kunchukuttan', 'Pratyush Kumar', 'Raj Dabre', 'Mitesh M. Khapra'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 34 | 53 | ['Computer Science'] |
2403.06354 | Amharic LLaMA and LLaVA: Multimodal LLMs for Low Resource Languages | ['Michael Andersland'] | ['cs.CL'] | Large Language Models (LLMs) like GPT-4 and LLaMA have shown incredible
proficiency at natural language processing tasks and have even begun to excel
at tasks across other modalities such as vision and audio. Despite their
success, LLMs often struggle to perform well on low-resource languages because
there is so little... | 2024-03-11T01:04:36Z | null | null | null | null | null | null | null | null | null | null |
2403.06399 | GlossLM: A Massively Multilingual Corpus and Pretrained Model for
Interlinear Glossed Text | ['Michael Ginn', 'Lindia Tjuatja', 'Taiqi He', 'Enora Rice', 'Graham Neubig', 'Alexis Palmer', 'Lori Levin'] | ['cs.CL'] | Language documentation projects often involve the creation of annotated text
in a format such as interlinear glossed text (IGT), which captures fine-grained
morphosyntactic analyses in a morpheme-by-morpheme format. However, there are
few existing resources providing large amounts of standardized, easily
accessible IGT... | 2024-03-11T03:21:15Z | EMNLP 2024. First two authors are equal contribution | null | null | null | null | null | null | null | null | null |
2403.06412 | CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in
Korean | ['Eunsu Kim', 'Juyoung Suk', 'Philhoon Oh', 'Haneul Yoo', 'James Thorne', 'Alice Oh'] | ['cs.CL'] | Despite the rapid development of large language models (LLMs) for the Korean
language, there remains an obvious lack of benchmark datasets that test the
requisite Korean cultural and linguistic knowledge. Because many existing
Korean benchmark datasets are derived from the English counterparts through
translation, they... | 2024-03-11T03:54:33Z | null | null | null | null | null | null | null | null | null | null |
2403.06754 | ALaRM: Align Language Models via Hierarchical Rewards Modeling | ['Yuhang Lai', 'Siyuan Wang', 'Shujun Liu', 'Xuanjing Huang', 'Zhongyu Wei'] | ['cs.CL', 'cs.AI', 'cs.LG'] | We introduce ALaRM, the first framework modeling hierarchical rewards in
reinforcement learning from human feedback (RLHF), which is designed to enhance
the alignment of large language models (LLMs) with human preferences. The
framework addresses the limitations of current alignment approaches, which
often struggle wit... | 2024-03-11T14:28:40Z | 15 pages, 6 figures | null | null | ALaRM: Align Language Models via Hierarchical Rewards Modeling | ['Yuhang Lai', 'Siyuan Wang', 'Shujun Liu', 'Xuanjing Huang', 'Zhongyu Wei'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 5 | 51 | ['Computer Science']
2403.06765 | ConspEmoLLM: Conspiracy Theory Detection Using an Emotion-Based Large
Language Model | ['Zhiwei Liu', 'Boyang Liu', 'Paul Thompson', 'Kailai Yang', 'Sophia Ananiadou'] | ['cs.CL'] | The internet has brought both benefits and harms to society. A prime example
of the latter is misinformation, including conspiracy theories, which flood the
web. Recent advances in natural language processing, particularly the emergence
of large language models (LLMs), have improved the prospects of accurate
misinforma... | 2024-03-11T14:35:45Z | Work in progress | null | 10.3233/FAIA241060 | ConspEmoLLM: Conspiracy Theory Detection Using an Emotion-Based Large Language Model | ['Zhiwei Liu', 'Boyang Liu', 'Paul Thompson', 'Kailai Yang', 'Raghav Jain', 'Sophia Ananiadou'] | 2024 | European Conference on Artificial Intelligence | 3 | 43 | ['Computer Science']
2403.06789 | SPLADE-v3: New baselines for SPLADE | ['Carlos Lassance', 'Hervé Déjean', 'Thibault Formal', 'Stéphane Clinchant'] | ['cs.IR', 'cs.CL'] | A companion to the release of the latest version of the SPLADE library. We
describe changes to the training structure and present our latest series of
models -- SPLADE-v3. We compare this new version to BM25, SPLADE++, as well as
re-rankers, and showcase its effectiveness via a meta-analysis over more than
40 query set... | 2024-03-11T15:04:55Z | Technical report | null | null | SPLADE-v3: New baselines for SPLADE | ['Carlos Lassance', 'Hervé Déjean', 'Thibault Formal', 'S. Clinchant'] | 2024 | arXiv.org | 29 | 20 | ['Computer Science']
2403.06801 | CT2Rep: Automated Radiology Report Generation for 3D Medical Imaging | ['Ibrahim Ethem Hamamci', 'Sezgin Er', 'Bjoern Menze'] | ['eess.IV', 'cs.CV'] | Medical imaging plays a crucial role in diagnosis, with radiology reports
serving as vital documentation. Automating report generation has emerged as a
critical need to alleviate the workload of radiologists. While machine learning
has facilitated report generation for 2D medical imaging, extending this to 3D
has been ... | 2024-03-11T15:17:45Z | null | null | CT2Rep: Automated Radiology Report Generation for 3D Medical Imaging | ['Ibrahim Ethem Hamamci', 'Sezgin Er', 'Bjoern H Menze'] | 2024 | International Conference on Medical Image Computing and Computer-Assisted Intervention | 30 | 30 | ['Engineering', 'Computer Science']
2403.06892 | Real-time Transformer-based Open-Vocabulary Detection with Efficient
Fusion Head | ['Tiancheng Zhao', 'Peng Liu', 'Xuan He', 'Lu Zhang', 'Kyusong Lee'] | ['cs.CV', 'cs.CL'] | End-to-end transformer-based detectors (DETRs) have shown exceptional
performance in both closed-set and open-vocabulary object detection (OVD) tasks
through the integration of language modalities. However, their demanding
computational requirements have hindered their practical application in
real-time object detectio... | 2024-03-11T16:48:25Z | Preprint | null | null | Real-time Transformer-based Open-Vocabulary Detection with Efficient Fusion Head | ['Tiancheng Zhao', 'Peng Liu', 'Xuan He', 'Lu Zhang', 'Kyusong Lee'] | 2024 | arXiv.org | 8 | 50 | ['Computer Science']
2403.06970 | MRL Parsing Without Tears: The Case of Hebrew | ['Shaltiel Shmidman', 'Avi Shmidman', 'Moshe Koppel', 'Reut Tsarfaty'] | ['cs.CL'] | Syntactic parsing remains a critical tool for relation extraction and
information extraction, especially in resource-scarce languages where LLMs are
lacking. Yet in morphologically rich languages (MRLs), where parsers need to
identify multiple lexical units in each token, existing systems suffer in
latency and setup co... | 2024-03-11T17:54:33Z | null | null | null | MRL Parsing Without Tears: The Case of Hebrew | ['Shaltiel Shmidman', 'Avi Shmidman', 'Moshe Koppel', 'Reut Tsarfaty'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 6 | 23 | ['Computer Science']
2403.06976 | BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed
Dual-Branch Diffusion | ['Xuan Ju', 'Xian Liu', 'Xintao Wang', 'Yuxuan Bian', 'Ying Shan', 'Qiang Xu'] | ['cs.CV'] | Image inpainting, the process of restoring corrupted images, has seen
significant advancements with the advent of diffusion models (DMs). Despite
these advancements, current DM adaptations for inpainting, which involve
modifications to the sampling strategy or the development of
inpainting-specific DMs, frequently suff... | 2024-03-11T17:59:31Z | null | null | null | null | null | null | null | null | null | null |
2403.06977 | VideoMamba: State Space Model for Efficient Video Understanding | ['Kunchang Li', 'Xinhao Li', 'Yi Wang', 'Yinan He', 'Yali Wang', 'Limin Wang', 'Yu Qiao'] | ['cs.CV'] | Addressing the dual challenges of local redundancy and global dependencies in
video understanding, this work innovatively adapts the Mamba to the video
domain. The proposed VideoMamba overcomes the limitations of existing 3D
convolution neural networks and video transformers. Its linear-complexity
operator enables effi... | 2024-03-11T17:59:34Z | 19 Pages, 7 Figures, 8 Tables | null | null | VideoMamba: State Space Model for Efficient Video Understanding | ['Kunchang Li', 'Xinhao Li', 'Yi Wang', 'Yinan He', 'Yali Wang', 'Limin Wang', 'Yu Qiao'] | 2024 | European Conference on Computer Vision | 214 | 95 | ['Computer Science']
2403.07350 | VLKEB: A Large Vision-Language Model Knowledge Editing Benchmark | ['Han Huang', 'Haitian Zhong', 'Tao Yu', 'Qiang Liu', 'Shu Wu', 'Liang Wang', 'Tieniu Tan'] | ['cs.CL', 'cs.AI', 'cs.CV'] | Recently, knowledge editing on large language models (LLMs) has received
considerable attention. Compared to this, editing Large Vision-Language Models
(LVLMs) faces extra challenges from diverse data modalities and complicated
model components, and data for LVLMs editing are limited. The existing LVLM
editing benchmar... | 2024-03-12T06:16:33Z | NeurIPS 2024, Datasets and Benchmarks Track | null | null | VLKEB: A Large Vision-Language Model Knowledge Editing Benchmark | ['Han Huang', 'Haitian Zhong', 'Q. Liu', 'Shu Wu', 'Liang Wang', 'Tien-Ping Tan'] | 2024 | Neural Information Processing Systems | 11 | 37 | ['Computer Science']
2403.07508 | MoAI: Mixture of All Intelligence for Large Language and Vision Models | ['Byung-Kwan Lee', 'Beomchan Park', 'Chae Won Kim', 'Yong Man Ro'] | ['cs.CV'] | The rise of large language models (LLMs) and instruction tuning has led to
the current trend of instruction-tuned large language and vision models
(LLVMs). This trend involves either meticulously curating numerous instruction
tuning datasets tailored to specific objectives or enlarging LLVMs to manage
vast amounts of v... | 2024-03-12T10:44:13Z | ECCV 2024. Code available: https://github.com/ByungKwanLee/MoAI | null | null | null | null | null | null | null | null | null |
2403.07652 | Harder Tasks Need More Experts: Dynamic Routing in MoE Models | ['Quzhe Huang', 'Zhenwei An', 'Nan Zhuang', 'Mingxu Tao', 'Chen Zhang', 'Yang Jin', 'Kun Xu', 'Kun Xu', 'Liwei Chen', 'Songfang Huang', 'Yansong Feng'] | ['cs.LG', 'cs.CL'] | In this paper, we introduce a novel dynamic expert selection framework for
Mixture of Experts (MoE) models, aiming to enhance computational efficiency and
model performance by adjusting the number of activated experts based on input
difficulty. Unlike traditional MoE approaches that rely on fixed Top-K routing,
which a... | 2024-03-12T13:41:15Z | null | null | null | null | null | null | null | null | null | null |
2403.07691 | ORPO: Monolithic Preference Optimization without Reference Model | ['Jiwoo Hong', 'Noah Lee', 'James Thorne'] | ['cs.CL', 'cs.AI'] | While recent preference alignment algorithms for language models have
demonstrated promising results, supervised fine-tuning (SFT) remains imperative
for achieving successful convergence. In this paper, we study the crucial role
of SFT within the context of preference alignment, emphasizing that a minor
penalty for the... | 2024-03-12T14:34:08Z | Preprint | null | null | null | null | null | null | null | null | null |
2403.07720 | Multi-modal Auto-regressive Modeling via Visual Words | ['Tianshuo Peng', 'Zuchao Li', 'Lefei Zhang', 'Hai Zhao', 'Ping Wang', 'Bo Du'] | ['cs.CV', 'cs.AI'] | Large Language Models (LLMs), benefiting from the auto-regressive modelling
approach performed on massive unannotated texts corpora, demonstrates powerful
perceptual and reasoning capabilities. However, as for extending
auto-regressive modelling to multi-modal scenarios to build Large Multi-modal
Models (LMMs), there l... | 2024-03-12T14:58:52Z | ACM MM 2024 | null | null | Multi-modal Auto-regressive Modeling via Visual Tokens | ['Tianshuo Peng', 'Zuchao Li', 'Lefei Zhang', 'Hai Zhao', 'Ping Wang', 'Bo Du'] | 2024 | ACM Multimedia | 5 | 30 | ['Computer Science']
2403.07807 | StyleGaussian: Instant 3D Style Transfer with Gaussian Splatting | ['Kunhao Liu', 'Fangneng Zhan', 'Muyu Xu', 'Christian Theobalt', 'Ling Shao', 'Shijian Lu'] | ['cs.CV'] | We introduce StyleGaussian, a novel 3D style transfer technique that allows
instant transfer of any image's style to a 3D scene at 10 frames per second
(fps). Leveraging 3D Gaussian Splatting (3DGS), StyleGaussian achieves style
transfer without compromising its real-time rendering ability and multi-view
consistency. I... | 2024-03-12T16:44:52Z | null | null | null | StyleGaussian: Instant 3D Style Transfer with Gaussian Splatting | ['Kunhao Liu', 'Fangneng Zhan', 'Muyu Xu', 'C. Theobalt', 'Ling Shao', 'Shijian Lu'] | 2024 | SIGGRAPH Asia Technical Communications | 39 | 55 | ['Computer Science']
2403.07815 | Chronos: Learning the Language of Time Series | ['Abdul Fatir Ansari', 'Lorenzo Stella', 'Caner Turkmen', 'Xiyuan Zhang', 'Pedro Mercado', 'Huibin Shen', 'Oleksandr Shchur', 'Syama Sundar Rangapuram', 'Sebastian Pineda Arango', 'Shubham Kapoor', 'Jasper Zschiegner', 'Danielle C. Maddix', 'Hao Wang', 'Michael W. Mahoney', 'Kari Torkkola', 'Andrew Gordon Wilson', 'Mic... | ['cs.LG', 'cs.AI'] | We introduce Chronos, a simple yet effective framework for pretrained
probabilistic time series models. Chronos tokenizes time series values using
scaling and quantization into a fixed vocabulary and trains existing
transformer-based language model architectures on these tokenized time series
via the cross-entropy loss... | 2024-03-12T16:53:54Z | Code and model checkpoints available at
https://github.com/amazon-science/chronos-forecasting | null | null | null | null | null | null | null | null | null |
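A note on the `arxiv_id` column: the schema above types it as `float64`, so exported values lose trailing zeros (e.g. 2403.0635 for the identifier 2403.06350) and may pick up thousands separators in the viewer's rendering. Since all IDs in this dataset fall in the post-2015 scheme (five digits after the dot), fixed-point formatting recovers the canonical form. A minimal sketch, assuming the raw float values and a hypothetical helper name:

```python
def canonical_arxiv_id(raw: float) -> str:
    """Restore a post-2015 arXiv identifier from a float64-typed column.

    arXiv IDs issued since 2015 always have exactly five digits after the
    dot, so formatting with five fixed decimals re-adds any trailing zero
    that float parsing discarded. Not valid for pre-2015 IDs (four digits).
    """
    return f"{raw:.5f}"

# e.g. canonical_arxiv_id(2403.0635) -> "2403.06350"
```

This only holds because the dataset's date range (2015-02 onward, per the schema summary) guarantees the five-digit format; a column of this kind is more robustly stored as a string in the first place.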