Dataset schema (field: type, observed length/value range or distinct-value count):

arxiv_id: float64, values 1.5k – 2.51k
title: string, length 9 – 178
authors: string, length 2 – 22.8k
categories: string, length 4 – 146
summary: string, length 103 – 1.92k
published: date, 2015-02-06 10:44:00 – 2025-07-10 17:59:58
comments: string, length 2 – 417
journal_ref: string, 321 distinct values
doi: string, 398 distinct values
ss_title: string, length 8 – 159
ss_authors: string, length 11 – 8.38k
ss_year: float64, values 2.02k – 2.03k
ss_venue: string, 281 distinct values
ss_citationCount: float64, values 0 – 134k
ss_referenceCount: float64, values 0 – 429
ss_fieldsOfStudy: string, 47 distinct values
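Because arxiv_id is stored as float64, trailing zeros of the five-digit arXiv sequence number are silently dropped (the WebLINX entry below appears as 2,402.0593 rather than 2402.05930). A minimal recovery sketch — the helper name is my own, and it assumes every ID uses the post-2015 YYMM.NNNNN scheme, which is consistent with the published-date range above:

```python
def arxiv_id_from_float(x: float) -> str:
    """Recover a canonical new-style arXiv ID from its float64 form.

    float64 storage drops trailing zeros from the five-digit sequence
    number (e.g. 2402.0593 should read 2402.05930). Assumes the
    post-2015 YYMM.NNNNN identifier scheme.
    """
    prefix = int(x)                   # YYMM part
    frac = round(x - prefix, 5)       # fractional part, float noise removed
    seq = int(round(frac * 100_000))  # restore the 5-digit sequence number
    return f"{prefix:04d}.{seq:05d}"
```

For example, `arxiv_id_from_float(2402.0593)` returns `"2402.05930"`, the canonical form of the WebLINX entry.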
2402.05195
$λ$-ECLIPSE: Multi-Concept Personalized Text-to-Image Diffusion Models by Leveraging CLIP Latent Space
['Maitreya Patel', 'Sangmin Jung', 'Chitta Baral', 'Yezhou Yang']
['cs.CV', 'cs.CL']
Despite the recent advances in personalized text-to-image (P-T2I) generative models, it remains challenging to perform finetuning-free multi-subject-driven T2I in a resource-efficient manner. Predominantly, contemporary approaches, involving the training of Hypernetworks and Multimodal Large Language Models (MLLMs), re...
2024-02-07T19:07:10Z
Project page: https://eclipse-t2i.github.io/Lambda-ECLIPSE/
null
null
λ-ECLIPSE: Multi-Concept Personalized Text-to-Image Diffusion Models by Leveraging CLIP Latent Space
['Maitreya Patel', 'Sangmin Jung', 'Chitta Baral', 'Yezhou Yang']
2024
Trans. Mach. Learn. Res.
35
56
['Computer Science']
2402.05369
Noise Contrastive Alignment of Language Models with Explicit Rewards
['Huayu Chen', 'Guande He', 'Lifan Yuan', 'Ganqu Cui', 'Hang Su', 'Jun Zhu']
['cs.LG', 'cs.CL']
User intentions are typically formalized as evaluation rewards to be maximized when fine-tuning language models (LMs). Existing alignment methods, such as Direct Preference Optimization (DPO), are mainly tailored for pairwise preference data where rewards are implicitly defined rather than explicitly given. In this pap...
2024-02-08T02:58:47Z
NeurIPS 2024
null
null
Noise Contrastive Alignment of Language Models with Explicit Rewards
['Huayu Chen', 'Guande He', 'Lifan Yuan', 'Hang Su', 'Jun Zhu']
2024
Neural Information Processing Systems
56
53
['Computer Science']
2402.05406
Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes
['Lucio Dery', 'Steven Kolawole', 'Jean-François Kagy', 'Virginia Smith', 'Graham Neubig', 'Ameet Talwalkar']
['cs.LG', 'cs.CL']
Structured pruning is a promising approach to create smaller, faster LLMs. However, existing methods typically rely on backward passes, which can inflate memory requirements and compute costs. In this work we introduce Bonsai, a gradient-free structured pruning method that eliminates the need for backpropagation, signi...
2024-02-08T04:48:26Z
19 pages, 6 figures, 16 tables
null
null
null
null
null
null
null
null
null
2402.05424
Neural Circuit Diagrams: Robust Diagrams for the Communication, Implementation, and Analysis of Deep Learning Architectures
['Vincent Abbott']
['cs.LG']
Diagrams matter. Unfortunately, the deep learning community has no standard method for diagramming architectures. The current combination of linear algebra notation and ad-hoc diagrams fails to offer the necessary precision to understand architectures in all their detail. However, this detail is critical for faithful i...
2024-02-08T05:42:13Z
null
Transactions on Machine Learning Research (2024)
null
Neural Circuit Diagrams: Robust Diagrams for the Communication, Implementation, and Analysis of Deep Learning Architectures
['Vincent Abbott']
2024
Trans. Mach. Learn. Res.
6
50
['Computer Science']
2402.05457
It's Never Too Late: Fusing Acoustic Information into Large Language Models for Automatic Speech Recognition
['Chen Chen', 'Ruizhe Li', 'Yuchen Hu', 'Sabato Marco Siniscalchi', 'Pin-Yu Chen', 'Ensiong Chng', 'Chao-Han Huck Yang']
['cs.CL', 'cs.AI', 'cs.MM', 'cs.SD', 'eess.AS']
Recent studies have shown that large language models (LLMs) can be successfully used for generative error correction (GER) on top of the automatic speech recognition (ASR) output. Specifically, an LLM is utilized to carry out a direct mapping from the N-best hypotheses list generated by an ASR system to th...
2024-02-08T07:21:45Z
Accepted to ICLR 2024, 17 pages. This work will be open sourced under MIT license
null
null
null
null
null
null
null
null
null
2402.05672
Multilingual E5 Text Embeddings: A Technical Report
['Liang Wang', 'Nan Yang', 'Xiaolong Huang', 'Linjun Yang', 'Rangan Majumder', 'Furu Wei']
['cs.CL', 'cs.IR']
This technical report presents the training methodology and evaluation results of the open-source multilingual E5 text embedding models, released in mid-2023. Three embedding models of different sizes (small / base / large) are provided, offering a balance between the inference efficiency and embedding quality. The tra...
2024-02-08T13:47:50Z
6 pages
null
null
null
null
null
null
null
null
null
2402.05755
Spirit LM: Interleaved Spoken and Written Language Model
['Tu Anh Nguyen', 'Benjamin Muller', 'Bokai Yu', 'Marta R. Costa-jussa', 'Maha Elbayad', 'Sravya Popuri', 'Christophe Ropers', 'Paul-Ambroise Duquenne', 'Robin Algayres', 'Ruslan Mavlyutov', 'Itai Gat', 'Mary Williamson', 'Gabriel Synnaeve', 'Juan Pino', 'Benoit Sagot', 'Emmanuel Dupoux']
['cs.CL', 'cs.SD', 'eess.AS']
We introduce Spirit LM, a foundation multimodal language model that freely mixes text and speech. Our model is based on a 7B pretrained text language model that we extend to the speech modality by continuously training it on text and speech units. Speech and text sequences are concatenated as a single stream of tokens,...
2024-02-08T15:39:32Z
null
null
null
null
null
null
null
null
null
null
2402.05804
InkSight: Offline-to-Online Handwriting Conversion by Teaching Vision-Language Models to Read and Write
['Blagoj Mitrevski', 'Arina Rak', 'Julian Schnitzler', 'Chengkun Li', 'Andrii Maksai', 'Jesse Berent', 'Claudiu Musat']
['cs.CV', 'cs.AI']
Digital note-taking is gaining popularity, offering a durable, editable, and easily indexable way of storing notes in a vectorized form, known as digital ink. However, a substantial gap remains between this way of note-taking and traditional pen-and-paper note-taking, a practice that is still favored by a vast majority...
2024-02-08T16:41:41Z
Accepted by Transactions on Machine Learning Research
null
null
null
null
null
null
null
null
null
2402.05856
Structure-Informed Protein Language Model
['Zuobai Zhang', 'Jiarui Lu', 'Vijil Chenthamarakshan', 'Aurélie Lozano', 'Payel Das', 'Jian Tang']
['q-bio.BM', 'cs.LG']
Protein language models are a powerful tool for learning protein representations through pre-training on vast protein sequence datasets. However, traditional protein language models lack explicit structural supervision, despite its relevance to protein function. To address this issue, we introduce the integration of re...
2024-02-07T09:32:35Z
null
null
null
null
null
null
null
null
null
null
2402.05892
Mamba-ND: Selective State Space Modeling for Multi-Dimensional Data
['Shufan Li', 'Harkanwar Singh', 'Aditya Grover']
['cs.CV']
In recent years, Transformers have become the de-facto architecture for sequence modeling on text and a variety of multi-dimensional data, such as images and video. However, the use of self-attention layers in a Transformer incurs prohibitive compute and memory complexity that scales quadratically w.r.t. the sequence l...
2024-02-08T18:30:50Z
24 pages, 7 figures
null
null
Mamba-ND: Selective State Space Modeling for Multi-Dimensional Data
['Shufan Li', 'Harkanwar Singh', 'Aditya Grover']
2024
European Conference on Computer Vision
64
66
['Computer Science']
2402.05904
FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs
['Eun Cheol Choi', 'Emilio Ferrara']
['cs.CL', 'cs.CY', 'cs.HC', 'cs.SI']
Our society is facing rampant misinformation harming public health and trust. To address the societal challenge, we introduce FACT-GPT, a system leveraging Large Language Models (LLMs) to automate the claim matching stage of fact-checking. FACT-GPT, trained on a synthetic dataset, identifies social media content that a...
2024-02-08T18:43:05Z
null
null
null
null
null
null
null
null
null
null
2402.05930
WebLINX: Real-World Website Navigation with Multi-Turn Dialogue
['Xing Han Lù', 'Zdeněk Kasner', 'Siva Reddy']
['cs.CL', 'cs.CV', 'cs.LG']
We propose the problem of conversational web navigation, where a digital agent controls a web browser and follows user instructions to solve real-world tasks in a multi-turn dialogue fashion. To support this problem, we introduce WEBLINX - a large-scale benchmark of 100K interactions across 2300 expert demonstrations o...
2024-02-08T18:58:02Z
null
null
10.5555/3692070.3693410
null
null
null
null
null
null
null
2402.06094
Rethinking Data Selection for Supervised Fine-Tuning
['Ming Shen']
['cs.CL']
Although supervised finetuning (SFT) has emerged as an essential technique to align large language models with humans, it is considered superficial, with style learning being its nature. At the same time, recent works indicate the importance of data selection for SFT, showing that finetuning with high-quality and diver...
2024-02-08T23:02:04Z
null
null
null
null
null
null
null
null
null
null
2402.06332
InternLM-Math: Open Math Large Language Models Toward Verifiable Reasoning
['Huaiyuan Ying', 'Shuo Zhang', 'Linyang Li', 'Zhejian Zhou', 'Yunfan Shao', 'Zhaoye Fei', 'Yichuan Ma', 'Jiawei Hong', 'Kuikun Liu', 'Ziyi Wang', 'Yudong Wang', 'Zijian Wu', 'Shuaibin Li', 'Fengzhe Zhou', 'Hongwei Liu', 'Songyang Zhang', 'Wenwei Zhang', 'Hang Yan', 'Xipeng Qiu', 'Jiayu Wang', 'Kai Chen', 'Dahua Lin']
['cs.CL']
The math abilities of large language models can represent their abstract reasoning ability. In this paper, we introduce and open-source our math reasoning LLMs InternLM-Math which is continue pre-trained from InternLM2. We unify chain-of-thought reasoning, reward modeling, formal reasoning, data augmentation, and code ...
2024-02-09T11:22:08Z
null
null
null
null
null
null
null
null
null
null
2402.06363
StruQ: Defending Against Prompt Injection with Structured Queries
['Sizhe Chen', 'Julien Piet', 'Chawin Sitawarin', 'David Wagner']
['cs.CR']
Recent advances in Large Language Models (LLMs) enable exciting LLM-integrated applications, which perform text-based tasks by utilizing their advanced language understanding capabilities. However, as LLMs have improved, so have the attacks against them. Prompt injection attacks are an important threat: they trick the ...
2024-02-09T12:15:51Z
To appear at USENIX Security Symposium 2025. Key words: prompt injection defense, LLM security, LLM-integrated applications
null
null
StruQ: Defending Against Prompt Injection with Structured Queries
['Sizhe Chen', 'Julien Piet', 'Chawin Sitawarin', 'David Wagner']
2024
arXiv.org
89
65
['Computer Science']
2402.06475
Large Language Models for Captioning and Retrieving Remote Sensing Images
['João Daniel Silva', 'João Magalhães', 'Devis Tuia', 'Bruno Martins']
['cs.CV']
Image captioning and cross-modal retrieval are examples of tasks that involve the joint analysis of visual and linguistic information. In connection to remote sensing imagery, these tasks can help non-expert users in extracting relevant Earth observation information for a variety of applications. Still, despite some pr...
2024-02-09T15:31:01Z
null
null
null
null
null
null
null
null
null
null
2402.06584
G-SciEdBERT: A Contextualized LLM for Science Assessment Tasks in German
['Ehsan Latif', 'Gyeong-Geon Lee', 'Knut Neumann', 'Tamara Kastorff', 'Xiaoming Zhai']
['cs.CL', 'cs.AI']
The advancement of natural language processing has paved the way for automated scoring systems in various languages, such as German (e.g., German BERT [G-BERT]). Automatically scoring written responses to science questions in German is a complex task and challenging for standard G-BERT as they lack contextual knowledge...
2024-02-09T18:05:03Z
Accepted by EDM and Submitted to JEDM
null
null
null
null
null
null
null
null
null
2402.06617
FaBERT: Pre-training BERT on Persian Blogs
['Mostafa Masumi', 'Seyed Soroush Majd', 'Mehrnoush Shamsfard', 'Hamid Beigy']
['cs.CL']
We introduce FaBERT, a Persian BERT-base model pre-trained on the HmBlogs corpus, encompassing both informal and formal Persian texts. FaBERT is designed to excel in traditional Natural Language Understanding (NLU) tasks, addressing the intricacies of diverse sentence structures and linguistic styles prevalent in the P...
2024-02-09T18:50:51Z
null
null
null
null
null
null
null
null
null
null
2402.06619
Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning
['Shivalika Singh', 'Freddie Vargus', 'Daniel Dsouza', 'Börje F. Karlsson', 'Abinaya Mahendiran', 'Wei-Yin Ko', 'Herumb Shandilya', 'Jay Patel', 'Deividas Mataciunas', 'Laura OMahony', 'Mike Zhang', 'Ramith Hettiarachchi', 'Joseph Wilson', 'Marina Machado', 'Luisa Souza Moura', 'Dominik Krzemiński', 'Hakimeh Fadaei', '...
['cs.CL', 'cs.AI']
Datasets are foundational to many breakthroughs in modern artificial intelligence. Many recent achievements in the space of natural language processing (NLP) can be attributed to the finetuning of pre-trained models on a diverse set of tasks that enables a large language model (LLM) to respond to instructions. Instruct...
2024-02-09T18:51:49Z
null
null
null
null
null
null
null
null
null
null
2402.06698
FNSPID: A Comprehensive Financial News Dataset in Time Series
['Zihan Dong', 'Xinyu Fan', 'Zhiyuan Peng']
['q-fin.ST']
Financial market predictions utilize historical data to anticipate future stock prices and market trends. Traditionally, these predictions have focused on the statistical analysis of quantitative factors, such as stock prices, trading volumes, inflation rates, and changes in industrial production. Recent advancements i...
2024-02-09T04:26:13Z
null
null
null
null
null
null
null
null
null
null
2402.06852
ChemLLM: A Chemical Large Language Model
['Di Zhang', 'Wei Liu', 'Qian Tan', 'Jingdan Chen', 'Hang Yan', 'Yuliang Yan', 'Jiatong Li', 'Weiran Huang', 'Xiangyu Yue', 'Wanli Ouyang', 'Dongzhan Zhou', 'Shufei Zhang', 'Mao Su', 'Han-Sen Zhong', 'Yuqiang Li']
['cs.AI', 'cs.CL']
Large language models (LLMs) have made impressive progress in chemistry applications. However, the community lacks an LLM specifically designed for chemistry. The main challenges are two-fold: firstly, most chemical data and scientific knowledge are stored in structured databases, which limits the model's ability to su...
2024-02-10T01:11:59Z
9 pages, 5 figures
null
null
null
null
null
null
null
null
null
2402.06888
Analysis of Self-Supervised Speech Models on Children's Speech and Infant Vocalizations
['Jialu Li', 'Mark Hasegawa-Johnson', 'Nancy L. McElwain']
['eess.AS', 'cs.SD']
To understand why self-supervised learning (SSL) models have empirically achieved strong performances on several speech-processing downstream tasks, numerous studies have focused on analyzing the encoded information of the SSL layer representations in adult speech. Limited work has investigated how pre-training and fin...
2024-02-10T05:20:50Z
Accepted to 2024 ICASSP Workshop of Self-supervision in Audio, Speech and Beyond (SASB)
null
null
Analysis of Self-Supervised Speech Models on Children’s Speech and Infant Vocalizations
['Jialu Li', 'M. Hasegawa-Johnson', 'Nancy L. McElwain']
2024
2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)
3
36
['Engineering', 'Computer Science', 'Medicine']
2402.06894
GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators
['Yuchen Hu', 'Chen Chen', 'Chao-Han Huck Yang', 'Ruizhe Li', 'Dong Zhang', 'Zhehuai Chen', 'Eng Siong Chng']
['cs.CL', 'cs.AI', 'cs.LG', 'cs.SD', 'eess.AS']
Recent advances in large language models (LLMs) have stepped forward the development of multilingual speech and machine translation by its reduced representation errors and incorporated external knowledge. However, both translation tasks typically utilize beam search decoding and top-1 hypothesis selection for inferenc...
2024-02-10T07:20:49Z
18 pages, Accepted by ACL 2024. This work is open sourced at: https://github.com/YUCHEN005/GenTranslate
null
null
GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators
['Yuchen Hu', 'Chen Chen', 'Chao-Han Huck Yang', 'Ruizhe Li', 'Dong Zhang', 'Zhehuai Chen', 'E. Chng']
2024
Annual Meeting of the Association for Computational Linguistics
21
66
['Computer Science', 'Engineering']
2402.06994
A Change Detection Reality Check
['Isaac Corley', 'Caleb Robinson', 'Anthony Ortiz']
['cs.CV', 'cs.LG']
In recent years, there has been an explosion of proposed change detection deep learning architectures in the remote sensing literature. These approaches claim to offer state-of-the-art performance on different standard benchmark datasets. However, has the field truly made significant progress? In this paper we perform ...
2024-02-10T17:02:53Z
null
null
null
null
null
null
null
null
null
null
2402.07023
Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations
['Ankit Pal', 'Malaikannan Sankarasubbu']
['cs.CL', 'cs.AI', 'cs.CV', 'cs.HC', 'cs.LG']
Large language models have the potential to be valuable in the healthcare industry, but it's crucial to verify their safety and effectiveness through rigorous evaluation. For this purpose, we comprehensively evaluated both open-source LLMs and Google's new multimodal LLM called Gemini across Medical reasoning, hallucin...
2024-02-10T19:08:28Z
Preprint version, Under Review
null
null
null
null
null
null
null
null
null
2402.07148
X-LoRA: Mixture of Low-Rank Adapter Experts, a Flexible Framework for Large Language Models with Applications in Protein Mechanics and Molecular Design
['Eric L. Buehler', 'Markus J. Buehler']
['cond-mat.soft', 'cond-mat.dis-nn', 'cs.AI', 'cs.CL', 'cs.LG', 'q-bio.QM']
We report a mixture of expert strategy to create fine-tuned large language models using a deep layer-wise token-level approach based on low-rank adaptation (LoRA). Starting with a set of pre-trained LoRA adapters, our gating strategy uses the hidden states to dynamically mix adapted layers, allowing the resulting X-LoR...
2024-02-11T10:23:34Z
null
null
null
null
null
null
null
null
null
null
2402.07319
ODIN: Disentangled Reward Mitigates Hacking in RLHF
['Lichang Chen', 'Chen Zhu', 'Davit Soselia', 'Jiuhai Chen', 'Tianyi Zhou', 'Tom Goldstein', 'Heng Huang', 'Mohammad Shoeybi', 'Bryan Catanzaro']
['cs.LG', 'cs.AI', 'cs.CL']
In this work, we study the issue of reward hacking on the response length, a challenge emerging in Reinforcement Learning from Human Feedback (RLHF) on LLMs. A well-formatted, verbose but less helpful response from the LLMs can often deceive LLMs or even human evaluators to achieve high scores. The same issue also hold...
2024-02-11T22:40:12Z
null
null
null
ODIN: Disentangled Reward Mitigates Hacking in RLHF
['Lichang Chen', 'Chen Zhu', 'Davit Soselia', 'Jiuhai Chen', 'Tianyi Zhou', 'Tom Goldstein', 'Heng Huang', 'M. Shoeybi', 'Bryan Catanzaro']
2024
International Conference on Machine Learning
66
58
['Computer Science']
2402.07440
Benchmarking and Building Long-Context Retrieval Models with LoCo and M2-BERT
['Jon Saad-Falcon', 'Daniel Y. Fu', 'Simran Arora', 'Neel Guha', 'Christopher Ré']
['cs.IR', 'cs.LG']
Retrieval pipelines-an integral component of many machine learning systems-perform poorly in domains where documents are long (e.g., 10K tokens or more) and where identifying the relevant document requires synthesizing information across the entire text. Developing long-context retrieval encoders suitable for these dom...
2024-02-12T06:43:52Z
International Conference on Machine Learning (ICML) 2024
null
null
Benchmarking and Building Long-Context Retrieval Models with LoCo and M2-BERT
['Jon Saad-Falcon', 'Daniel Y. Fu', 'Simran Arora', 'Neel Guha', "Christopher R'e"]
2024
International Conference on Machine Learning
18
58
['Computer Science']
2402.07596
Sheet Music Transformer: End-To-End Optical Music Recognition Beyond Monophonic Transcription
['Antonio Ríos-Vila', 'Jorge Calvo-Zaragoza', 'Thierry Paquet']
['cs.CV', 'cs.SD', 'eess.AS']
State-of-the-art end-to-end Optical Music Recognition (OMR) has, to date, primarily been carried out using monophonic transcription techniques to handle complex score layouts, such as polyphony, often by resorting to simplifications or specific adaptations. Despite their efficacy, these approaches imply challenges rela...
2024-02-12T11:52:21Z
Submitted to the International Conference on Document Analysis and Recognition 2024
null
null
null
null
null
null
null
null
null
2402.07625
Autonomous Data Selection with Zero-shot Generative Classifiers for Mathematical Texts
['Yifan Zhang', 'Yifan Luo', 'Yang Yuan', 'Andrew C Yao']
['cs.CL', 'cs.AI', 'cs.LG']
We present Autonomous Data Selection (AutoDS), a method that leverages base language models themselves as zero-shot "generative classifiers" to automatically curate high-quality mathematical texts. Unlike prior approaches that require human annotations or training a dedicated data filter, AutoDS relies solely on a mode...
2024-02-12T13:09:21Z
22 pages, 9 figures
null
null
Autonomous Data Selection with Zero-shot Generative Classifiers for Mathematical Texts
['Yifan Zhang', 'Yifan Luo', 'Yang Yuan', 'A. C. Yao']
2024
null
19
50
['Computer Science']
2402.07630
G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering
['Xiaoxin He', 'Yijun Tian', 'Yifei Sun', 'Nitesh V. Chawla', 'Thomas Laurent', 'Yann LeCun', 'Xavier Bresson', 'Bryan Hooi']
['cs.LG']
Given a graph with textual attributes, we enable users to `chat with their graph': that is, to ask questions about the graph using a conversational interface. In response to a user's questions, our method provides textual replies and highlights the relevant parts of the graph. While existing works integrate large langu...
2024-02-12T13:13:04Z
null
null
null
null
null
null
null
null
null
null
2402.07688
CyberMetric: A Benchmark Dataset based on Retrieval-Augmented Generation for Evaluating LLMs in Cybersecurity Knowledge
['Norbert Tihanyi', 'Mohamed Amine Ferrag', 'Ridhi Jain', 'Tamas Bisztray', 'Merouane Debbah']
['cs.AI', 'cs.CR']
Large Language Models (LLMs) are increasingly used across various domains, from software development to cyber threat intelligence. Understanding all the different fields of cybersecurity, which includes topics such as cryptography, reverse engineering, and risk assessment, poses a challenge even for human experts. To a...
2024-02-12T14:53:28Z
null
null
null
null
null
null
null
null
null
null
2402.07827
Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model
['Ahmet Üstün', 'Viraat Aryabumi', 'Zheng-Xin Yong', 'Wei-Yin Ko', "Daniel D'souza", 'Gbemileke Onilude', 'Neel Bhandari', 'Shivalika Singh', 'Hui-Lee Ooi', 'Amr Kayid', 'Freddie Vargus', 'Phil Blunsom', 'Shayne Longpre', 'Niklas Muennighoff', 'Marzieh Fadaee', 'Julia Kreutzer', 'Sara Hooker']
['cs.CL']
Recent breakthroughs in large language models (LLMs) have centered around a handful of data-rich languages. What does it take to broaden access to breakthroughs beyond first-class citizen languages? Our work introduces Aya, a massively multilingual generative language model that follows instructions in 101 languages of...
2024-02-12T17:34:13Z
null
null
null
Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model
['A. Ustun', 'Viraat Aryabumi', 'Zheng-Xin Yong', 'Wei-Yin Ko', "Daniel D'souza", 'Gbemileke Onilude', 'Neel Bhandari', 'Shivalika Singh', 'Hui-Lee Ooi', 'Amr Kayid', 'Freddie Vargus', 'Phil Blunsom', 'Shayne Longpre', 'Niklas Muennighoff', 'Marzieh Fadaee', 'Julia Kreutzer', 'Sara Hooker']
2024
Annual Meeting of the Association for Computational Linguistics
231
158
['Computer Science']
2402.07865
Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models
['Siddharth Karamcheti', 'Suraj Nair', 'Ashwin Balakrishna', 'Percy Liang', 'Thomas Kollar', 'Dorsa Sadigh']
['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG']
Visually-conditioned language models (VLMs) have seen growing adoption in applications such as visual dialogue, scene understanding, and robotic task planning; adoption that has fueled a wealth of new models such as LLaVa, InstructBLIP, and PaLI-3. Despite the volume of new releases, key design decisions around image p...
2024-02-12T18:21:14Z
Published at ICML 2024. 22 pages, 11 figures. Training code and models: https://github.com/TRI-ML/prismatic-vlms. Evaluation code: https://github.com/TRI-ML/vlm-evaluation
null
null
null
null
null
null
null
null
null
2402.07894
MODIPHY: Multimodal Obscured Detection for IoT using PHantom Convolution-Enabled Faster YOLO
['Shubhabrata Mukherjee', 'Cory Beard', 'Zhu Li']
['cs.CV']
Low-light conditions and occluded scenarios impede object detection in real-world Internet of Things (IoT) applications like autonomous vehicles and security systems. While advanced machine learning models strive for accuracy, their computational demands clash with the limitations of resource-constrained devices, hampe...
2024-02-12T18:56:53Z
This paper has been accepted for publication at the IEEE International Conference on Image Processing (ICIP) 2024
null
null
null
null
null
null
null
null
null
2402.08183
Pixel Sentence Representation Learning
['Chenghao Xiao', 'Zhuoxu Huang', 'Danlu Chen', 'G Thomas Hudson', 'Yizhi Li', 'Haoran Duan', 'Chenghua Lin', 'Jie Fu', 'Jungong Han', 'Noura Al Moubayed']
['cs.CL', 'cs.CV']
Pretrained language models are long known to be subpar in capturing sentence and document-level semantics. Though heavily investigated, transferring perturbation-based methods from unsupervised visual representation learning to NLP remains an unsolved problem. This is largely due to the discreteness of subword units br...
2024-02-13T02:46:45Z
null
null
null
null
null
null
null
null
null
null
2402.08268
World Model on Million-Length Video And Language With Blockwise RingAttention
['Hao Liu', 'Wilson Yan', 'Matei Zaharia', 'Pieter Abbeel']
['cs.LG']
Enabling long-context understanding remains a key challenge in scaling existing sequence models -- a crucial component in developing generally intelligent models that can process and operate over long temporal horizons that potentially consist of millions of tokens. In this paper, we aim to address these challenges by ...
2024-02-13T07:47:36Z
null
null
null
null
null
null
null
null
null
null
2402.08327
PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers
['Weizhe Lin', 'Jingbiao Mei', 'Jinghong Chen', 'Bill Byrne']
['cs.CL']
Large Multimodal Models (LMMs) excel in natural language and visual understanding but are challenged by exacting tasks such as Knowledge-based Visual Question Answering (KB-VQA) which involve the retrieval of relevant information from document collections to use in shaping answers to questions. We present an extensive ...
2024-02-13T09:47:07Z
ACL 2024; Project page: https://preflmr.github.io/
null
null
null
null
null
null
null
null
null
2402.08666
Improving Generalization in Semantic Parsing by Increasing Natural Language Variation
['Irina Saparina', 'Mirella Lapata']
['cs.CL']
Text-to-SQL semantic parsing has made significant progress in recent years, with various models demonstrating impressive performance on the challenging Spider benchmark. However, it has also been shown that these models often struggle to generalize even when faced with small perturbations of previously (accurately) par...
2024-02-13T18:48:23Z
EACL 2024
null
null
Improving Generalization in Semantic Parsing by Increasing Natural Language Variation
['Irina Saparina', 'Mirella Lapata']
2024
Conference of the European Chapter of the Association for Computational Linguistics
2
43
['Computer Science']
2402.08777
DNABERT-S: Pioneering Species Differentiation with Species-Aware DNA Embeddings
['Zhihan Zhou', 'Weimin Wu', 'Harrison Ho', 'Jiayi Wang', 'Lizhen Shi', 'Ramana V Davuluri', 'Zhong Wang', 'Han Liu']
['q-bio.GN', 'cs.AI', 'cs.CE', 'cs.CL']
We introduce DNABERT-S, a tailored genome model that develops species-aware embeddings to naturally cluster and segregate DNA sequences of different species in the embedding space. Differentiating species from genomic sequences (i.e., DNA and RNA) is vital yet challenging, since many real-world species remain uncharact...
2024-02-13T20:21:29Z
null
null
null
null
null
null
null
null
null
null
2402.08875
Advancing Human Action Recognition with Foundation Models trained on Unlabeled Public Videos
['Yang Qian', 'Yinan Sun', 'Ali Kargarandehkordi', 'Parnian Azizian', 'Onur Cezmi Mutlu', 'Saimourya Surabhi', 'Pingyi Chen', 'Zain Jabbar', 'Dennis Paul Wall', 'Peter Washington']
['cs.CV']
The increasing variety and quantity of tagged multimedia content on a variety of online platforms offer a unique opportunity to advance the field of human action recognition. In this study, we utilize 283,582 unique, unlabeled TikTok video clips, categorized into 386 hashtags, to train a domain-specific foundation mode...
2024-02-14T00:41:10Z
10 pages
null
null
Advancing Human Action Recognition with Foundation Models trained on Unlabeled Public Videos
['Yang Qian', 'Yinan Sun', 'A. Kargarandehkordi', 'Parnian Azizian', 'O. Mutlu', 'Saimourya Surabhi', 'Pingyi Chen', 'Zain Jabbar', 'Dennis P. Wall', 'Peter Washington']
2024
null
1
41
['Computer Science']
2402.09025
SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks
['Jiwon Song', 'Kyungseok Oh', 'Taesu Kim', 'Hyungjun Kim', 'Yulhwa Kim', 'Jae-Joon Kim']
['cs.CL', 'cs.LG']
Large language models (LLMs) have proven to be highly effective across various natural language processing tasks. However, their large number of parameters poses significant challenges for practical deployment. Pruning, a technique aimed at reducing the size and complexity of LLMs, offers a potential solution by removi...
2024-02-14T09:01:13Z
ICML 2024
null
null
null
null
null
null
null
null
null
2402.09099
Neuron-based Multifractal Analysis of Neuron Interaction Dynamics in Large Models
['Xiongye Xiao', 'Heng Ping', 'Chenyu Zhou', 'Defu Cao', 'Yaxing Li', 'Yi-Zhuo Zhou', 'Shixuan Li', 'Nikos Kanakaris', 'Paul Bogdan']
['cs.AI']
In recent years, there has been increasing attention on the capabilities of large models, particularly in handling complex tasks that small-scale models are unable to perform. Notably, large language models (LLMs) have demonstrated ``intelligent'' abilities such as complex reasoning and abstract language comprehension,...
2024-02-14T11:20:09Z
ICLR 2025: https://openreview.net/forum?id=nt8gBX58Kh
null
null
null
null
null
null
null
null
null
2402.09151
Chinese MentalBERT: Domain-Adaptive Pre-training on Social Media for Chinese Mental Health Text Analysis
['Wei Zhai', 'Hongzhi Qi', 'Qing Zhao', 'Jianqiang Li', 'Ziqi Wang', 'Han Wang', 'Bing Xiang Yang', 'Guanghui Fu']
['cs.CL', 'cs.LG']
In the current environment, psychological issues are prevalent and widespread, with social media serving as a key outlet for individuals to share their feelings. This results in the generation of vast quantities of data daily, where negative emotions have the potential to precipitate crisis situations. There is a recog...
2024-02-14T13:08:25Z
null
null
null
null
null
null
null
null
null
null
2402.09205
Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents
['Cheng Qian', 'Bingxiang He', 'Zhong Zhuang', 'Jia Deng', 'Yujia Qin', 'Xin Cong', 'Zhong Zhang', 'Jie Zhou', 'Yankai Lin', 'Zhiyuan Liu', 'Maosong Sun']
['cs.CL', 'cs.AI', 'cs.HC']
Current language model-driven agents often lack mechanisms for effective user participation, which is crucial given the vagueness commonly found in user instructions. Although adept at devising strategies and performing tasks, these agents struggle with seeking clarification and grasping precise user intentions. To bri...
2024-02-14T14:36:30Z
26 pages, 5 tables, 6 figures
null
null
null
null
null
null
null
null
null
2402.09353
DoRA: Weight-Decomposed Low-Rank Adaptation
['Shih-Yang Liu', 'Chien-Yi Wang', 'Hongxu Yin', 'Pavlo Molchanov', 'Yu-Chiang Frank Wang', 'Kwang-Ting Cheng', 'Min-Hung Chen']
['cs.CL', 'cs.CV']
Among the widely used parameter-efficient fine-tuning (PEFT) methods, LoRA and its variants have gained considerable popularity because of avoiding additional inference costs. However, there still often exists an accuracy gap between these methods and full fine-tuning (FT). In this work, we first introduce a novel weig...
2024-02-14T17:59:34Z
ICML2024(Oral)
null
null
DoRA: Weight-Decomposed Low-Rank Adaptation
['Shih-Yang Liu', 'Chien-Yi Wang', 'Hongxu Yin', 'Pavlo Molchanov', 'Yu-Chiang Frank Wang', 'Kwang-Ting Cheng', 'Min-Hung Chen']
2024
International Conference on Machine Learning
423
59
['Computer Science']
2402.09371
Transformers Can Achieve Length Generalization But Not Robustly
['Yongchao Zhou', 'Uri Alon', 'Xinyun Chen', 'Xuezhi Wang', 'Rishabh Agarwal', 'Denny Zhou']
['cs.LG', 'cs.AI', 'cs.CL']
Length generalization, defined as the ability to extrapolate from shorter training sequences to longer test ones, is a significant challenge for language models. This issue persists even with large-scale Transformers handling relatively straightforward tasks. In this paper, we test the Transformer's ability of length g...
2024-02-14T18:18:29Z
null
null
null
null
null
null
null
null
null
null
2402.09391
LlaSMol: Advancing Large Language Models for Chemistry with a Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset
['Botao Yu', 'Frazier N. Baker', 'Ziqi Chen', 'Xia Ning', 'Huan Sun']
['cs.AI', 'cs.CE', 'cs.CL']
Chemistry plays a crucial role in many domains, such as drug discovery and material science. While large language models (LLMs) such as GPT-4 exhibit remarkable capabilities on natural language processing tasks, existing research indicates that their performance on chemistry tasks is discouragingly low. In this paper, ...
2024-02-14T18:42:25Z
Accepted by COLM 2024
null
null
LlaSMol: Advancing Large Language Models for Chemistry with a Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset
['Botao Yu', 'Frazier N. Baker', 'Ziqi Chen', 'Xia Ning', 'Huan Sun']
2024
arXiv.org
51
82
['Computer Science']
2402.09739
QuRating: Selecting High-Quality Data for Training Language Models
['Alexander Wettig', 'Aatmik Gupta', 'Saumya Malik', 'Danqi Chen']
['cs.CL', 'cs.LG']
Selecting high-quality pre-training data is important for creating capable language models, but existing methods rely on simple heuristics. We introduce QuRating, a method for selecting pre-training data that can capture human intuitions about data quality. In this paper, we investigate four qualities - writing style, ...
2024-02-15T06:36:07Z
Accepted at ICML 2024. The results for top-k selection have been corrected. The code, models and data are available at https://github.com/princeton-nlp/QuRating
null
null
QuRating: Selecting High-Quality Data for Training Language Models
['Alexander Wettig', 'Aatmik Gupta', 'Saumya Malik', 'Danqi Chen']
2024
International Conference on Machine Learning
81
0
['Computer Science']
2402.09759
Efficient Language Adaptive Pre-training: Extending State-of-the-Art Large Language Models for Polish
['Szymon Ruciński']
['cs.CL', 'cs.AI']
This study explores the potential of fine-tuning foundational English Large Language Models (LLMs) for generating Polish text. The first step involves Language Adaptive Pre-training (LAPT) on a high-quality dataset of 3.11 GB, consisting of 276 million Polish tokens. The LAPT is followed by additional fine-tuning aimed...
2024-02-15T07:17:10Z
10 pages
null
null
null
null
null
null
null
null
null
2402.09844
Jack of All Trades, Master of Some, a Multi-Purpose Transformer Agent
['Quentin Gallouédec', 'Edward Beeching', 'Clément Romac', 'Emmanuel Dellandréa']
['cs.AI']
The search for a general model that can operate seamlessly across multiple domains remains a key goal in machine learning research. The prevailing methodology in Reinforcement Learning (RL) typically limits models to a single task within a unimodal framework, a limitation that contrasts with the broader vision of a ver...
2024-02-15T10:01:55Z
null
38th Workshop on Aligning Reinforcement Learning Experimentalists and Theorists (ARLET 2024)
null
null
null
null
null
null
null
null
2402.09906
Generative Representational Instruction Tuning
['Niklas Muennighoff', 'Hongjin Su', 'Liang Wang', 'Nan Yang', 'Furu Wei', 'Tao Yu', 'Amanpreet Singh', 'Douwe Kiela']
['cs.CL', 'cs.AI', 'cs.LG']
All text-based language problems can be reduced to either generation or embedding. Current models only perform well at one or the other. We introduce generative representational instruction tuning (GRIT) whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between th...
2024-02-15T12:12:19Z
67 pages (16 main), 25 figures, 34 tables
null
null
null
null
null
null
null
null
null
2402.10176
OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset
['Shubham Toshniwal', 'Ivan Moshkov', 'Sean Narenthiran', 'Daria Gitman', 'Fei Jia', 'Igor Gitman']
['cs.CL', 'cs.AI', 'cs.LG']
Recent work has shown the immense potential of synthetically generated datasets for training large language models (LLMs), especially for acquiring targeted skills. Current large-scale math instruction tuning datasets such as MetaMathQA (Yu et al., 2024) and MAmmoTH (Yue et al., 2024) are constructed using outputs from...
2024-02-15T18:26:11Z
Camera-ready version for NeurIPS 2024
null
null
OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset
['Shubham Toshniwal', 'Ivan Moshkov', 'Sean Narenthiran', 'Daria Gitman', 'Fei Jia', 'Igor Gitman']
2024
Neural Information Processing Systems
97
40
['Computer Science']
2402.10207
Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment
['Rui Yang', 'Xiaoman Pan', 'Feng Luo', 'Shuang Qiu', 'Han Zhong', 'Dong Yu', 'Jianshu Chen']
['cs.LG', 'cs.AI', 'cs.CL']
We consider the problem of multi-objective alignment of foundation models with human preferences, which is a critical step towards helpful and harmless AI systems. However, it is generally costly and unstable to fine-tune large foundation models using reinforcement learning (RL), and the multi-dimensionality, heterogen...
2024-02-15T18:58:31Z
Accepted by ICML 2024
null
null
Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment
['Rui Yang', 'Xiaoman Pan', 'Feng Luo', 'Shuang Qiu', 'Han Zhong', 'Dong Yu', 'Jianshu Chen']
2024
International Conference on Machine Learning
83
73
['Computer Science']
2402.10210
Self-Play Fine-Tuning of Diffusion Models for Text-to-Image Generation
['Huizhuo Yuan', 'Zixiang Chen', 'Kaixuan Ji', 'Quanquan Gu']
['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV', 'stat.ML']
Fine-tuning Diffusion Models remains an underexplored frontier in generative artificial intelligence (GenAI), especially when compared with the remarkable progress made in fine-tuning Large Language Models (LLMs). While cutting-edge diffusion models such as Stable Diffusion (SD) and SDXL rely on supervised fine-tuning,...
2024-02-15T18:59:18Z
28 pages, 8 figures, 10 tables
null
null
Self-Play Fine-Tuning of Diffusion Models for Text-to-Image Generation
['Huizhuo Yuan', 'Zixiang Chen', 'Kaixuan Ji', 'Quanquan Gu']
2024
Neural Information Processing Systems
29
49
['Computer Science', 'Mathematics']
2402.10373
BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains
['Yanis Labrak', 'Adrien Bazoge', 'Emmanuel Morin', 'Pierre-Antoine Gourraud', 'Mickael Rouvier', 'Richard Dufour']
['cs.CL', 'cs.AI', 'cs.LG']
Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presen...
2024-02-15T23:39:04Z
Accepted at ACL 2024 - Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics - Volume 1: Long Papers (ACL 2024)
null
BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains
['Yanis Labrak', 'Adrien Bazoge', 'Emmanuel Morin', 'P. Gourraud', 'Mickael Rouvier', 'Richard Dufour']
2024
Annual Meeting of the Association for Computational Linguistics
228
69
['Computer Science']
2402.10422
Pushing the Limits of Zero-shot End-to-End Speech Translation
['Ioannis Tsiamas', 'Gerard I. Gállego', 'José A. R. Fonollosa', 'Marta R. Costa-jussà']
['cs.CL']
Data scarcity and the modality gap between the speech and text modalities are two major obstacles of end-to-end Speech Translation (ST) systems, thus hindering their performance. Prior work has attempted to mitigate these challenges by leveraging external MT data and optimizing distance metrics that bring closer the sp...
2024-02-16T03:06:37Z
ACL 2024 (Findings)
null
null
null
null
null
null
null
null
null
2402.10453
Steering Conversational Large Language Models for Long Emotional Support Conversations
['Navid Madani', 'Sougata Saha', 'Rohini Srihari']
['cs.CL']
In this study, we address the challenge of enabling large language models (LLMs) to consistently adhere to emotional support strategies in extended conversations. We focus on the steerability of the Llama-2 and Llama-3 suite of models, examining their ability to maintain these strategies throughout interactions. To ass...
2024-02-16T05:03:01Z
null
null
null
null
null
null
null
null
null
null
2402.10597
Efficiency at Scale: Investigating the Performance of Diminutive Language Models in Clinical Tasks
['Niall Taylor', 'Upamanyu Ghose', 'Omid Rohanian', 'Mohammadmahdi Nouriborji', 'Andrey Kormilitzin', 'David Clifton', 'Alejo Nevado-Holgado']
['cs.CL', 'cs.AI']
The entry of large language models (LLMs) into research and commercial spaces has led to a trend of ever-larger models, with initial promises of generalisability, followed by a widespread desire to downsize and create specialised models without the need for complete fine-tuning, using Parameter Efficient Fine-tuning (P...
2024-02-16T11:30:11Z
null
null
null
null
null
null
null
null
null
null
2402.10631
BitDistiller: Unleashing the Potential of Sub-4-Bit LLMs via Self-Distillation
['Dayou Du', 'Yijia Zhang', 'Shijie Cao', 'Jiaqi Guo', 'Ting Cao', 'Xiaowen Chu', 'Ningyi Xu']
['cs.CL']
The upscaling of Large Language Models (LLMs) has yielded impressive advances in natural language processing, yet it also poses significant deployment challenges. Weight quantization has emerged as a widely embraced solution to reduce memory and computational demands. This paper introduces BitDistiller, a framework tha...
2024-02-16T12:27:15Z
null
null
null
null
null
null
null
null
null
null
2402.10712
An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Language Model Inference
['Atsuki Yamaguchi', 'Aline Villavicencio', 'Nikolaos Aletras']
['cs.CL', 'cs.AI']
The development of state-of-the-art generative large language models (LLMs) disproportionately relies on English-centric tokenizers, vocabulary and pre-training data. Despite the fact that some LLMs have multilingual capabilities, recent studies have shown that their inference efficiency deteriorates when generating te...
2024-02-16T14:15:15Z
Accepted at EMNLP 2024 Findings
null
null
null
null
null
null
null
null
null
2402.10884
Multi-modal Preference Alignment Remedies Degradation of Visual Instruction Tuning on Language Models
['Shengzhi Li', 'Rongyu Lin', 'Shichao Pei']
['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG']
Multi-modal large language models (MLLMs) are expected to support multi-turn queries of interchanging image and text modalities in production. However, the current MLLMs trained with visual-question-answering (VQA) datasets could suffer from degradation, as VQA datasets lack the diversity and complexity of the original...
2024-02-16T18:42:08Z
Project code, model and data: https://github.com/findalexli/mllm-dpo
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14188-14200, 2024
10.18653/v1/2024.acl-long.765
null
null
null
null
null
null
null
2402.10886
Reviewer2: Optimizing Review Generation Through Prompt Generation
['Zhaolin Gao', 'Kianté Brantley', 'Thorsten Joachims']
['cs.CL']
Recent developments in LLMs offer new opportunities for assisting authors in improving their work. In this paper, we envision a use case where authors can receive LLM-generated reviews that uncover weak points in the current draft. While initial methods for automated review generation already exist, these methods tend ...
2024-02-16T18:43:10Z
null
null
null
null
null
null
null
null
null
null
2402.11073
AFaCTA: Assisting the Annotation of Factual Claim Detection with Reliable LLM Annotators
['Jingwei Ni', 'Minjing Shi', 'Dominik Stammbach', 'Mrinmaya Sachan', 'Elliott Ash', 'Markus Leippold']
['cs.CL', 'cs.AI']
With the rise of generative AI, automated fact-checking methods to combat misinformation are becoming more and more important. However, factual claim detection, the first step in a fact-checking pipeline, suffers from two key issues that limit its scalability and generalizability: (1) inconsistency in definitions of th...
2024-02-16T20:59:57Z
ACL2024 Main Conference
null
null
null
null
null
null
null
null
null
2402.11095
GIM: Learning Generalizable Image Matcher From Internet Videos
['Xuelun Shen', 'Zhipeng Cai', 'Wei Yin', 'Matthias Müller', 'Zijun Li', 'Kaixuan Wang', 'Xiaozhi Chen', 'Cheng Wang']
['cs.CV']
Image matching is a fundamental computer vision problem. While learning-based methods achieve state-of-the-art performance on existing benchmarks, they generalize poorly to in-the-wild images. Such methods typically need to train separate models for different scene types and are impractical when the scene type is unkno...
2024-02-16T21:48:17Z
Accepted to ICLR 2024 for spotlight presentation
null
null
GIM: Learning Generalizable Image Matcher From Internet Videos
['Xuelun Shen', 'Zhipeng Cai', 'Wei Yin', 'Matthias Müller', 'Zijun Li', 'Kaixuan Wang', 'Xiaozhi Chen', 'Cheng Wang']
2024
International Conference on Learning Representations
30
40
['Computer Science']
2402.11111
Language Models as Science Tutors
['Alexis Chevalier', 'Jiayi Geng', 'Alexander Wettig', 'Howard Chen', 'Sebastian Mizera', 'Toni Annala', 'Max Jameson Aragon', 'Arturo Rodríguez Fanlo', 'Simon Frieder', 'Simon Machado', 'Akshara Prabhakar', 'Ellie Thieu', 'Jiachen T. Wang', 'Zirui Wang', 'Xindi Wu', 'Mengzhou Xia', 'Wenhan Xia', 'Jiatong Yu', 'Jun-Jie...
['cs.CL']
NLP has recently made exciting progress toward training language models (LMs) with strong scientific problem-solving skills. However, model development has not focused on real-life use-cases of LMs for science, including applications in education that require processing long scientific documents. To address this, we in...
2024-02-16T22:24:13Z
8 pages without bibliography and appendix, 26 pages total
null
null
null
null
null
null
null
null
null
2402.11161
PEDANTS: Cheap but Effective and Interpretable Answer Equivalence
['Zongxia Li', 'Ishani Mondal', 'Yijun Liang', 'Huy Nghiem', 'Jordan Lee Boyd-Graber']
['cs.CL', 'cs.AI']
Question answering (QA) can only make progress if we know if an answer is correct, but current answer correctness (AC) metrics struggle with verbose, free-form answers from large language models (LLMs). There are two challenges with current short-form QA evaluations: a lack of diverse styles of evaluation data and an o...
2024-02-17T01:56:19Z
Efficient PEDANTS Classifier for short-form QA in github: https://github.com/zli12321/qa_metrics. arXiv admin note: text overlap with arXiv:2401.13170
Empirical Methods in Natural Language Processing 2024
null
PEDANTS: Cheap but Effective and Interpretable Answer Equivalence
['Zongxia Li', 'Ishani Mondal', 'Huy Nghiem', 'Yijun Liang', 'Jordan L. Boyd-Graber']
2024
Conference on Empirical Methods in Natural Language Processing
21
57
['Computer Science']
2402.11176
KnowTuning: Knowledge-aware Fine-tuning for Large Language Models
['Yougang Lyu', 'Lingyong Yan', 'Shuaiqiang Wang', 'Haibo Shi', 'Dawei Yin', 'Pengjie Ren', 'Zhumin Chen', 'Maarten de Rijke', 'Zhaochun Ren']
['cs.CL', 'cs.AI']
Despite their success at many natural language processing (NLP) tasks, large language models still struggle to effectively leverage knowledge for knowledge-intensive tasks, manifesting limitations such as generating incomplete, non-factual, or illogical answers. These limitations stem from inadequate knowledge awarenes...
2024-02-17T02:54:32Z
EMNLP 2024 main paper
null
null
KnowTuning: Knowledge-aware Fine-tuning for Large Language Models
['Yougang Lyu', 'Lingyong Yan', 'Shuaiqiang Wang', 'Haibo Shi', 'Dawei Yin', 'Pengjie Ren', 'Zhumin Chen', 'M. D. Rijke', 'Zhaochun Ren']
2024
Conference on Empirical Methods in Natural Language Processing
7
113
['Computer Science']
2402.11187
LaCo: Large Language Model Pruning via Layer Collapse
['Yifei Yang', 'Zouying Cao', 'Hai Zhao']
['cs.CL', 'cs.AI']
Large language models (LLMs) based on transformer are witnessing a notable trend of size expansion, which brings considerable costs to both model training and inference. However, existing methods such as model quantization, knowledge distillation, and model pruning are constrained by various issues, including hardware ...
2024-02-17T04:16:30Z
Accepted as Findings of EMNLP2024
null
null
LaCo: Large Language Model Pruning via Layer Collapse
['Yifei Yang', 'Zouying Cao', 'Hai Zhao']
2024
Conference on Empirical Methods in Natural Language Processing
64
42
['Computer Science']
2402.11248
CoLLaVO: Crayon Large Language and Vision mOdel
['Byung-Kwan Lee', 'Beomchan Park', 'Chae Won Kim', 'Yong Man Ro']
['cs.CV']
The remarkable success of Large Language Models (LLMs) and instruction tuning drives the evolution of Vision Language Models (VLMs) towards a versatile general-purpose model. Yet, it remains unexplored whether current VLMs genuinely possess quality object-level image understanding capabilities determined from 'what obj...
2024-02-17T11:03:02Z
ACL 2024 Findings. Code available: https://github.com/ByungKwanLee/CoLLaVO
null
null
null
null
null
null
null
null
null
2402.11337
Learning by Reconstruction Produces Uninformative Features For Perception
['Randall Balestriero', 'Yann LeCun']
['cs.CV', 'cs.AI', 'stat.ML']
Input space reconstruction is an attractive representation learning paradigm. Despite interpretability of the reconstruction and generation, we identify a misalignment between learning by reconstruction, and learning for perception. We show that the former allocates a model's capacity towards a subspace of the data exp...
2024-02-17T17:08:16Z
null
null
null
null
null
null
null
null
null
null
2402.11485
LEIA: Facilitating Cross-lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation
['Ikuya Yamada', 'Ryokan Ri']
['cs.CL', 'cs.AI', 'cs.LG']
Adapting English-based large language models (LLMs) to other languages has become increasingly popular due to the efficiency and potential of cross-lingual transfer. However, existing language adaptation methods often overlook the benefits of cross-lingual supervision. In this study, we introduce LEIA, a language adapt...
2024-02-18T07:24:34Z
ACL Findings 2024
null
null
null
null
null
null
null
null
null
2402.11530
Efficient Multimodal Learning from Data-centric Perspective
['Muyang He', 'Yexin Liu', 'Boya Wu', 'Jianhao Yuan', 'Yueze Wang', 'Tiejun Huang', 'Bo Zhao']
['cs.CV']
Multimodal Large Language Models (MLLMs) have demonstrated notable capabilities in general visual understanding and reasoning tasks. However, their deployment is hindered by substantial computational costs in both training and inference, limiting accessibility to the broader research and user communities. A straightfor...
2024-02-18T10:09:10Z
null
null
null
Efficient Multimodal Learning from Data-centric Perspective
['Muyang He', 'Yexin Liu', 'Boya Wu', 'Jianhao Yuan', 'Yueze Wang', 'Tiejun Huang', 'Bo Zhao']
2024
arXiv.org
88
63
['Computer Science']
2402.11566
Boosting Semi-Supervised 2D Human Pose Estimation by Revisiting Data Augmentation and Consistency Training
['Huayi Zhou', 'Mukun Luo', 'Fei Jiang', 'Yue Ding', 'Hongtao Lu', 'Kui Jia']
['cs.CV']
The 2D human pose estimation (HPE) is a basic visual problem. However, its supervised learning requires massive keypoint labels, which is labor-intensive to collect. Thus, we aim at boosting a pose estimator by excavating extra unlabeled data with semi-supervised learning (SSL). Most previous SSHPE methods are consiste...
2024-02-18T12:27:59Z
under review. Semi-Supervised 2D Human Pose Estimation
null
null
null
null
null
null
null
null
null
2402.11684
ALLaVA: Harnessing GPT4V-Synthesized Data for Lite Vision-Language Models
['Guiming Hardy Chen', 'Shunian Chen', 'Ruifei Zhang', 'Junying Chen', 'Xiangbo Wu', 'Zhiyi Zhang', 'Zhihong Chen', 'Jianquan Li', 'Xiang Wan', 'Benyou Wang']
['cs.CL', 'cs.AI']
Large vision-language models (LVLMs) have shown promise in a broad range of vision-language tasks with their strong reasoning and generalization capabilities. However, they require considerable computational resources for training and deployment. This study aims to bridge the performance gap between traditional-scale L...
2024-02-18T19:26:49Z
22 pages
null
null
ALLaVA: Harnessing GPT4V-Synthesized Data for Lite Vision-Language Models
['Guiming Hardy Chen', 'Shunian Chen', 'Ruifei Zhang', 'Junying Chen', 'Xiangbo Wu', 'Zhiyi Zhang', 'Zhihong Chen', 'Jianquan Li', 'Xiang Wan', 'Benyou Wang']
2024
null
139
44
['Computer Science']
2402.11746
Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic
['Rishabh Bhardwaj', 'Do Duc Anh', 'Soujanya Poria']
['cs.CL', 'cs.AI']
Aligned language models face a significant limitation as their fine-tuning often results in compromised safety. To tackle this, we propose a simple method RESTA that performs LLM safety realignment. RESTA stands for REstoring Safety through Task Arithmetic. At its core, it involves a simple arithmetic addition of a saf...
2024-02-19T00:18:09Z
null
null
null
Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic
['Rishabh Bhardwaj', 'Do Duc Anh', 'Soujanya Poria']
2024
Annual Meeting of the Association for Computational Linguistics
48
49
['Computer Science']
2402.11801
Enhancing Empathetic Response Generation by Augmenting LLMs with Small-scale Empathetic Models
['Zhou Yang', 'Zhaochun Ren', 'Wang Yufeng', 'Shizhong Peng', 'Haizhou Sun', 'Xiaofei Zhu', 'Xiangwen Liao']
['cs.HC']
Empathetic response generation is increasingly significant in AI, necessitating nuanced emotional and cognitive understanding coupled with articulate response expression. Current large language models (LLMs) excel in response expression; however, they lack the ability to deeply understand emotional and cognitive nuance...
2024-02-19T03:12:12Z
12 pages, 4 figures
null
null
null
null
null
null
null
null
null
2402.11809
Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding
['Hanling Yi', 'Feng Lin', 'Hongbin Li', 'Peiyang Ning', 'Xiaotian Yu', 'Rong Xiao']
['cs.CL', 'cs.AI', 'cs.LG']
This research aims to accelerate the inference speed of large language models (LLMs) with billions of parameters. We propose \textbf{S}mart \textbf{P}arallel \textbf{A}uto-\textbf{C}orrect d\textbf{E}coding (SPACE), an innovative approach designed for achieving lossless acceleration of LLMs. By integrating semi-autoreg...
2024-02-19T03:39:10Z
Accepted by ACL 2024 Findings
null
null
null
null
null
null
null
null
null
2402.11811
FIPO: Free-form Instruction-oriented Prompt Optimization with Preference Dataset and Modular Fine-tuning Schema
['Junru Lu', 'Siyu An', 'Min Zhang', 'Yulan He', 'Di Yin', 'Xing Sun']
['cs.CL']
When the quality of naive prompts is carefully optimized by human experts, the task performance of large language models (LLMs) can be significantly improved. However, expert-based prompt optimizations are expensive. Herein, some works have proposed Automatic Prompt Optimization (APO), to optimize naive prompts accordi...
2024-02-19T03:56:44Z
COLING 2025, Final Version
null
null
FIPO: Free-form Instruction-oriented Prompt Optimization with Preference Dataset and Modular Fine-tuning Schema
['Junru Lu', 'Siyu An', 'Min Zhang', 'Yulan He', 'Di Yin', 'Xing Sun']
2024
International Conference on Computational Linguistics
2
91
['Computer Science']
2402.11819
Head-wise Shareable Attention for Large Language Models
['Zouying Cao', 'Yifei Yang', 'Hai Zhao']
['cs.CL']
Large Language Models (LLMs) suffer from huge number of parameters, which restricts their deployment on edge devices. Weight sharing is one promising solution that encourages weight reuse, effectively reducing memory usage with less performance drop. However, current weight sharing techniques primarily focus on small-s...
2024-02-19T04:19:36Z
17 pages, 7 figures, 21 tables, EMNLP'24 Findings
null
null
null
null
null
null
null
null
null
2402.11882
NOTE: Notable generation Of patient Text summaries through Efficient approach based on direct preference optimization
['Imjin Ahn', 'Hansle Gwon', 'Young-Hak Kim', 'Tae Joon Jun', 'Sanghyun Park']
['cs.CV', 'J.3']
The discharge summary is one of the critical documents in the patient journey, encompassing all events experienced during hospitalization, including multiple visits, medications, tests, surgery/procedures, and admissions/discharge. Providing a summary of the patient's progress is crucial, as it significantly influences f...
2024-02-19T06:43:25Z
13 pages, 3 figures, 5 tables
null
null
null
null
null
null
null
null
null
2402.11883
InMD-X: Large Language Models for Internal Medicine Doctors
['Hansle Gwon', 'Imjin Ahn', 'Hyoje Jung', 'Byeolhee Kim', 'Young-Hak Kim', 'Tae Joon Jun']
['cs.CV']
In this paper, we introduce InMD-X, a collection of multiple large language models specifically designed to cater to the unique characteristics and demands of Internal Medicine Doctors (IMD). InMD-X represents a groundbreaking development in natural language processing, offering a suite of language models fine-tuned fo...
2024-02-19T06:46:16Z
null
null
null
null
null
null
null
null
null
null
2402.11929
DiLightNet: Fine-grained Lighting Control for Diffusion-based Image Generation
['Chong Zeng', 'Yue Dong', 'Pieter Peers', 'Youkang Kong', 'Hongzhi Wu', 'Xin Tong']
['cs.CV', 'cs.GR']
This paper presents a novel method for exerting fine-grained lighting control during text-driven diffusion-based image generation. While existing diffusion models already have the ability to generate images under any lighting condition, without additional guidance these models tend to correlate image content and lighti...
2024-02-19T08:17:21Z
Accepted to SIGGRAPH 2024. Project page: https://dilightnet.github.io/
ACM SIGGRAPH 2024 Conference Proceedings
10.1145/3641519.3657396
null
null
null
null
null
null
null
2402.11975
Compress to Impress: Unleashing the Potential of Compressive Memory in Real-World Long-Term Conversations
['Nuo Chen', 'Hongguang Li', 'Juhua Huang', 'Baoyuan Wang', 'Jia Li']
['cs.CL']
Existing retrieval-based methods have made significant strides in maintaining long-term conversations. However, these approaches face challenges in memory database management and accurate memory retrieval, hindering their efficacy in dynamic, real-world interactions. This study introduces a novel framework, COmpressive...
2024-02-19T09:19:50Z
17pages, 5 figures
null
null
Compress to Impress: Unleashing the Potential of Compressive Memory in Real-World Long-Term Conversations
['Nuo Chen', 'Hongguang Li', 'Juhua Huang', 'Baoyuan Wang', 'Jia Li']
2024
arXiv.org
11
46
['Computer Science']
2402.12052
Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs
['Jiejun Tan', 'Zhicheng Dou', 'Yutao Zhu', 'Peidong Guo', 'Kun Fang', 'Ji-Rong Wen']
['cs.CL']
The integration of large language models (LLMs) and search engines represents a significant evolution in knowledge acquisition methodologies. However, determining the knowledge that an LLM already possesses and the knowledge that requires the help of a search engine remains an unresolved issue. Most existing methods so...
2024-02-19T11:11:08Z
Accepted by ACL 2024 main conference. Repo: https://github.com/plageon/SlimPLM
null
null
Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs
['Jiejun Tan', 'Zhicheng Dou', 'Yutao Zhu', 'Peidong Guo', 'Kun Fang', 'Ji-Rong Wen']
2024
Annual Meeting of the Association for Computational Linguistics
30
77
['Computer Science']
2402.12195
Browse and Concentrate: Comprehending Multimodal Content via prior-LLM Context Fusion
['Ziyue Wang', 'Chi Chen', 'Yiqi Zhu', 'Fuwen Luo', 'Peng Li', 'Ming Yan', 'Ji Zhang', 'Fei Huang', 'Maosong Sun', 'Yang Liu']
['cs.CL']
With the bloom of Large Language Models (LLMs), Multimodal Large Language Models (MLLMs) that incorporate LLMs with pre-trained vision models have recently demonstrated impressive performance across diverse vision-language tasks. However, they fall short to comprehend context involving multiple images. A primary reason...
2024-02-19T14:59:07Z
17 pages, 5 figures
null
null
null
null
null
null
null
null
null
2402.12204
Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages
['Yuanchi Zhang', 'Yile Wang', 'Zijun Liu', 'Shuo Wang', 'Xiaolong Wang', 'Peng Li', 'Maosong Sun', 'Yang Liu']
['cs.CL']
While large language models (LLMs) have been pre-trained on multilingual corpora, their performance still lags behind in most languages compared to a few resource-rich languages. One common approach to mitigate this issue is to translate training data from resource-rich languages into other languages and then continue ...
2024-02-19T15:07:32Z
null
null
null
null
null
null
null
null
null
null
2402.12208
Language-Codec: Bridging Discrete Codec Representations and Speech Language Models
['Shengpeng Ji', 'Minghui Fang', 'Jialong Zuo', 'Ziyue Jiang', 'Dingdong Wang', 'Hanting Wang', 'Hai Huang', 'Zhou Zhao']
['eess.AS', 'cs.SD']
In recent years, large language models have achieved significant success in generative tasks related to speech, audio, music, and other signal domains. A crucial element of these models is the discrete acoustic codecs, which serve as an intermediate representation replacing the mel-spectrogram. However, there exist sev...
2024-02-19T15:12:12Z
ACL 2025 Main
null
null
null
null
null
null
null
null
null
2402.12226
AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling
['Jun Zhan', 'Junqi Dai', 'Jiasheng Ye', 'Yunhua Zhou', 'Dong Zhang', 'Zhigeng Liu', 'Xin Zhang', 'Ruibin Yuan', 'Ge Zhang', 'Linyang Li', 'Hang Yan', 'Jie Fu', 'Tao Gui', 'Tianxiang Sun', 'Yugang Jiang', 'Xipeng Qiu']
['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG']
We introduce AnyGPT, an any-to-any multimodal language model that utilizes discrete representations for the unified processing of various modalities, including speech, text, images, and music. AnyGPT can be trained stably without any alterations to the current large language model (LLM) architecture or training paradig...
2024-02-19T15:33:10Z
28 pages, 16 figures, under review, work in progress
null
null
null
null
null
null
null
null
null
2402.12332
Triple-Encoders: Representations That Fire Together, Wire Together
['Justus-Jonas Erker', 'Florian Mai', 'Nils Reimers', 'Gerasimos Spanakis', 'Iryna Gurevych']
['cs.CL']
Search-based dialog models typically re-encode the dialog history at every turn, incurring high cost. Curved Contrastive Learning, a representation learning method that encodes relative distances between utterances into the embedding space via a bi-encoder, has recently shown promising results for dialog modeling at fa...
2024-02-19T18:06:02Z
accepted at ACL 2024 (main conference)
null
null
Triple-Encoders: Representations That Fire Together, Wire Together
['Justus-Jonas Erker', 'Florian Mai', 'Nils Reimers', 'Gerasimos Spanakis', 'Iryna Gurevych']
2024
Annual Meeting of the Association for Computational Linguistics
2
38
['Computer Science']
2402.12336
Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models
['Christian Schlarmann', 'Naman Deep Singh', 'Francesco Croce', 'Matthias Hein']
['cs.LG', 'cs.AI', 'cs.CV', 'stat.ML']
Multi-modal foundation models like OpenFlamingo, LLaVA, and GPT-4 are increasingly used for various real-world tasks. Prior work has shown that these models are highly vulnerable to adversarial attacks on the vision modality. These attacks can be leveraged to spread fake information or defraud users, and thus pose a si...
2024-02-19T18:09:48Z
ICML 2024 Oral
null
null
null
null
null
null
null
null
null
2402.12354
LoRA+: Efficient Low Rank Adaptation of Large Models
['Soufiane Hayou', 'Nikhil Ghosh', 'Bin Yu']
['cs.LG', 'cs.AI', 'cs.CL', 'stat.ML']
In this paper, we show that Low Rank Adaptation (LoRA) as originally introduced in Hu et al. (2021) leads to suboptimal finetuning of models with large width (embedding dimension). This is due to the fact that adapter matrices A and B in LoRA are updated with the same learning rate. Using scaling arguments for large wi...
2024-02-19T18:33:49Z
27 pages
null
null
null
null
null
null
null
null
null
2402.12374
Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding
['Zhuoming Chen', 'Avner May', 'Ruslan Svirschevski', 'Yuhsun Huang', 'Max Ryabinin', 'Zhihao Jia', 'Beidi Chen']
['cs.CL']
As the usage of large language models (LLMs) grows, performing efficient inference with these models becomes increasingly important. While speculative decoding has recently emerged as a promising direction for speeding up inference, existing methods are limited in their ability to scale to larger speculation budgets, a...
2024-02-19T18:58:32Z
null
null
null
Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding
['Zhuoming Chen', 'Avner May', 'Ruslan Svirschevski', 'Yuhsun Huang', 'Max Ryabinin', 'Zhihao Jia', 'Beidi Chen']
2024
arXiv.org
52
48
['Computer Science']
2402.12376
FiT: Flexible Vision Transformer for Diffusion Model
['Zeyu Lu', 'Zidong Wang', 'Di Huang', 'Chengyue Wu', 'Xihui Liu', 'Wanli Ouyang', 'Lei Bai']
['cs.CV']
Nature is infinitely resolution-free. In the context of this reality, existing diffusion models, such as Diffusion Transformers, often face challenges when processing image resolutions outside of their trained domain. To overcome this limitation, we present the Flexible Vision Transformer (FiT), a transformer architect...
2024-02-19T18:59:07Z
null
null
null
null
null
null
null
null
null
null
2402.12399
Turn Waste into Worth: Rectifying Top-$k$ Router of MoE
['Zhiyuan Zeng', 'Qipeng Guo', 'Zhaoye Fei', 'Zhangyue Yin', 'Yunhua Zhou', 'Linyang Li', 'Tianxiang Sun', 'Hang Yan', 'Dahua Lin', 'Xipeng Qiu']
['cs.LG', 'cs.AI', 'cs.CL']
Sparse Mixture of Experts (MoE) models are popular for training large language models due to their computational efficiency. However, the commonly used top-$k$ routing mechanism suffers from redundancy computation and memory costs due to the unbalanced routing. Some experts are overflow, where the exceeding tokens are ...
2024-02-17T06:23:27Z
null
null
null
null
null
null
null
null
null
null
2402.12479
In value-based deep reinforcement learning, a pruned network is a good network
['Johan Obando-Ceron', 'Aaron Courville', 'Pablo Samuel Castro']
['cs.LG', 'cs.AI']
Recent work has shown that deep reinforcement learning agents have difficulty in effectively using their network parameters. We leverage prior insights into the advantages of sparse training techniques and demonstrate that gradual magnitude pruning enables value-based agents to maximize parameter effectiveness. This re...
2024-02-19T19:34:07Z
null
null
null
null
null
null
null
null
null
null
2402.12652
PDEformer: Towards a Foundation Model for One-Dimensional Partial Differential Equations
['Zhanhong Ye', 'Xiang Huang', 'Leheng Chen', 'Hongsheng Liu', 'Zidong Wang', 'Bin Dong']
['math.NA', 'cs.NA']
This paper introduces PDEformer, a neural solver for partial differential equations (PDEs) capable of simultaneously addressing various types of PDEs. We propose to represent the PDE in the form of a computational graph, facilitating the seamless integration of both symbolic and numerical information inherent in a PDE....
2024-02-20T02:02:29Z
null
null
null
null
null
null
null
null
null
null
2402.12749
Me LLaMA: Foundation Large Language Models for Medical Applications
['Qianqian Xie', 'Qingyu Chen', 'Aokun Chen', 'Cheng Peng', 'Yan Hu', 'Fongci Lin', 'Xueqing Peng', 'Jimin Huang', 'Jeffrey Zhang', 'Vipina Keloth', 'Xinyu Zhou', 'Lingfei Qian', 'Huan He', 'Dennis Shung', 'Lucila Ohno-Machado', 'Yonghui Wu', 'Hua Xu', 'Jiang Bian']
['cs.CL', 'cs.AI']
Recent advancements in large language models (LLMs) like ChatGPT and LLaMA show promise in medical applications, yet challenges remain in medical language comprehension. This study presents Me-LLaMA, a new medical LLM family based on open-source LLaMA models, optimized for medical text analysis and diagnosis by leverag...
2024-02-20T06:37:31Z
21 pages, 4 figures, 8 tables
null
null
Me LLaMA: Foundation Large Language Models for Medical Applications
['Qianqian Xie', 'Qingyu Chen', 'Aokun Chen', 'C.A.I. Peng', 'Yan Hu', 'Fongci Lin', 'Xueqing Peng', 'Jimin Huang', 'Jeffrey Zhang', 'V. Keloth', 'Xinyu Zhou', 'Lingfei Qian', 'Huan He', 'Dennis Shung', 'Lucila Ohno-Machado', 'Yonghui Wu', 'Hua Xu', 'Jiang Bian']
2024
null
4
42
['Computer Science']
2402.12840
ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic
['Fajri Koto', 'Haonan Li', 'Sara Shatnawi', 'Jad Doughman', 'Abdelrahman Boda Sadallah', 'Aisha Alraeesi', 'Khalid Almubarak', 'Zaid Alyafeai', 'Neha Sengupta', 'Shady Shehata', 'Nizar Habash', 'Preslav Nakov', 'Timothy Baldwin']
['cs.CL']
The focus of language model evaluation has transitioned towards reasoning and knowledge-intensive tasks, driven by advancements in pretraining large models. While state-of-the-art models are partially trained on large Arabic texts, evaluating their performance in Arabic remains challenging due to the limited availabili...
2024-02-20T09:07:41Z
Findings of ACL 2024
null
null
ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic
['Fajri Koto', 'Haonan Li', 'Sara Shatnawi', 'Jad Doughman', 'A. Sadallah', 'A. Alraeesi', 'Khalid Almubarak', 'Zaid Alyafeai', 'Neha Sengupta', 'Shady Shehata', 'Nizar Habash', 'Preslav Nakov', 'Timothy Baldwin']
2024
Annual Meeting of the Association for Computational Linguistics
44
58
['Computer Science']
2402.13022
SoMeLVLM: A Large Vision Language Model for Social Media Processing
['Xinnong Zhang', 'Haoyu Kuang', 'Xinyi Mou', 'Hanjia Lyu', 'Kun Wu', 'Siming Chen', 'Jiebo Luo', 'Xuanjing Huang', 'Zhongyu Wei']
['cs.CL', 'cs.MM']
The growth of social media, characterized by its multimodal nature, has led to the emergence of diverse phenomena and challenges, which calls for an effective approach to uniformly solve automated tasks. The powerful Large Vision Language Models make it possible to handle a variety of tasks simultaneously, but even wit...
2024-02-20T14:02:45Z
null
null
10.18653/v1/2024.findings-acl.140
null
null
null
null
null
null
null