Dataset schema (column name: type, observed range or number of distinct values):

arxiv_id: float64, 1.5k to 2.51k
title: string, lengths 9 to 178
authors: string, lengths 2 to 22.8k
categories: string, lengths 4 to 146
summary: string, lengths 103 to 1.92k
published: date string, 2015-02-06 10:44:00 to 2025-07-10 17:59:58
comments: string, lengths 2 to 417
journal_ref: string, 321 distinct values
doi: string, 398 distinct values
ss_title: string, lengths 8 to 159
ss_authors: string, lengths 11 to 8.38k
ss_year: float64, 2.02k to 2.03k
ss_venue: string, 281 distinct values
ss_citationCount: float64, 0 to 134k
ss_referenceCount: float64, 0 to 429
ss_fieldsOfStudy: string, 47 distinct values
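Each record below lists these 16 fields in the column order above, one field per line, with missing values shown as null. Because arxiv_id is stored as float64 rather than as a string, trailing zeros of the identifier are not preserved (new-style arXiv IDs carry exactly five digits after the period), so a loader would normally cast it back to its canonical string form. Below is a minimal sketch of such a loading step, assuming pandas and a hypothetical papers.parquet export of this dataset; the file name and code are illustrative and not part of the dump itself.

```python
import pandas as pd

def restore_arxiv_id(value: float) -> str:
    """Turn a float64 arxiv_id such as 2402.1335 back into the canonical
    five-digit string form '2402.13350' (assumes post-2014 arXiv IDs)."""
    yymm, _, number = f"{value:.5f}".partition(".")
    return f"{yymm}.{number}"

# "papers.parquet" is a placeholder; the dump does not name its source file.
df = pd.read_parquet("papers.parquet")
df["arxiv_id"] = df["arxiv_id"].map(restore_arxiv_id)
df["ss_year"] = df["ss_year"].astype("Int64")  # nullable int: 2024, not 2024.0

print(df[["arxiv_id", "title", "ss_citationCount"]].head())
```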
2402.13064
Synthetic Data (Almost) from Scratch: Generalized Instruction Tuning for Language Models
['Haoran Li', 'Qingxiu Dong', 'Zhengyang Tang', 'Chaojun Wang', 'Xingxing Zhang', 'Haoyang Huang', 'Shaohan Huang', 'Xiaolong Huang', 'Zeqiang Huang', 'Dongdong Zhang', 'Yuxian Gu', 'Xin Cheng', 'Xun Wang', 'Si-Qing Chen', 'Li Dong', 'Wei Lu', 'Zhifang Sui', 'Benyou Wang', 'Wai Lam', 'Furu Wei']
['cs.CL']
We introduce Generalized Instruction Tuning (called GLAN), a general and scalable method for instruction tuning of Large Language Models (LLMs). Unlike prior work that relies on seed examples or existing datasets to construct instruction tuning data, GLAN exclusively utilizes a pre-curated taxonomy of human knowledge a...
2024-02-20T15:00:35Z
Work in progress
null
null
null
null
null
null
null
null
null
2402.13126
VGMShield: Mitigating Misuse of Video Generative Models
['Yan Pang', 'Baicheng Chen', 'Yang Zhang', 'Tianhao Wang']
['cs.CR', 'cs.AI', 'cs.CV', 'cs.LG', 'eess.IV']
With the rapid advancement in video generation, people can conveniently use video generation models to create videos tailored to their specific desires. As a result, there are also growing concerns about the potential misuse of video generation for spreading illegal content and misinformation. In this work, we introd...
2024-02-20T16:39:23Z
18 pages
null
null
null
null
null
null
null
null
null
2402.13178
Benchmarking Retrieval-Augmented Generation for Medicine
['Guangzhi Xiong', 'Qiao Jin', 'Zhiyong Lu', 'Aidong Zhang']
['cs.CL', 'cs.AI']
While large language models (LLMs) have achieved state-of-the-art performance on a wide range of medical question answering (QA) tasks, they still face challenges with hallucinations and outdated knowledge. Retrieval-augmented generation (RAG) is a promising solution and has been widely adopted. However, a RAG system c...
2024-02-20T17:44:06Z
Homepage: https://teddy-xionggz.github.io/benchmark-medical-rag/
null
null
null
null
null
null
null
null
null
2402.13217
VideoPrism: A Foundational Visual Encoder for Video Understanding
['Long Zhao', 'Nitesh B. Gundavarapu', 'Liangzhe Yuan', 'Hao Zhou', 'Shen Yan', 'Jennifer J. Sun', 'Luke Friedman', 'Rui Qian', 'Tobias Weyand', 'Yue Zhao', 'Rachel Hornung', 'Florian Schroff', 'Ming-Hsuan Yang', 'David A. Ross', 'Huisheng Wang', 'Hartwig Adam', 'Mikhail Sirotenko', 'Ting Liu', 'Boqing Gong']
['cs.CV', 'cs.AI']
We introduce VideoPrism, a general-purpose video encoder that tackles diverse video understanding tasks with a single frozen model. We pretrain VideoPrism on a heterogeneous corpus containing 36M high-quality video-caption pairs and 582M video clips with noisy parallel text (e.g., ASR transcripts). The pretraining appr...
2024-02-20T18:29:49Z
Accepted to ICML 2024. v2: added retrieval results on MSRVTT (1K-A), more data analyses, and ablation studies; v3: released models at https://github.com/google-deepmind/videoprism
null
null
VideoPrism: A Foundational Visual Encoder for Video Understanding
['Long Zhao', 'N. B. Gundavarapu', 'Liangzhe Yuan', 'Hao Zhou', 'Shen Yan', 'Jennifer J. Sun', 'Luke Friedman', 'Rui Qian', 'Tobias Weyand', 'Yue Zhao', 'Rachel Hornung', 'Florian Schroff', 'Ming Yang', 'David A. Ross', 'Huisheng Wang', 'Hartwig Adam', 'Mikhail Sirotenko', 'Ting Liu', 'Boqing Gong']
2024
International Conference on Machine Learning
36
144
['Computer Science']
2402.13228
Smaug: Fixing Failure Modes of Preference Optimisation with DPO-Positive
['Arka Pal', 'Deep Karkhanis', 'Samuel Dooley', 'Manley Roberts', 'Siddartha Naidu', 'Colin White']
['cs.CL', 'cs.AI', 'cs.LG']
Direct Preference Optimisation (DPO) is effective at significantly improving the performance of large language models (LLMs) on downstream tasks such as reasoning, summarisation, and alignment. Using pairs of preferred and dispreferred data, DPO models the relative probability of picking one response over another. In t...
2024-02-20T18:42:34Z
null
null
null
null
null
null
null
null
null
null
2402.13232
A Touch, Vision, and Language Dataset for Multimodal Alignment
['Letian Fu', 'Gaurav Datta', 'Huang Huang', 'William Chung-Ho Panitch', 'Jaimyn Drake', 'Joseph Ortiz', 'Mustafa Mukadam', 'Mike Lambeta', 'Roberto Calandra', 'Ken Goldberg']
['cs.CV', 'cs.RO']
Touch is an important sensing modality for humans, but it has not yet been incorporated into a multimodal generative language model. This is partially due to the difficulty of obtaining natural language labels for tactile data and the complexity of aligning tactile readings with both visual observations and language de...
2024-02-20T18:47:56Z
null
null
null
null
null
null
null
null
null
null
2402.13253
BiMediX: Bilingual Medical Mixture of Experts LLM
['Sara Pieri', 'Sahal Shaji Mullappilly', 'Fahad Shahbaz Khan', 'Rao Muhammad Anwer', 'Salman Khan', 'Timothy Baldwin', 'Hisham Cholakkal']
['cs.CL']
In this paper, we introduce BiMediX, the first bilingual medical mixture of experts LLM designed for seamless interaction in both English and Arabic. Our model facilitates a wide range of medical interactions in English and Arabic, including multi-turn chats to inquire about additional details such as patient symptoms ...
2024-02-20T18:59:26Z
Accepted to EMNLP 2024 (Findings)
Findings of the Association for Computational Linguistics: EMNLP 2024, pages 16984-17002
10.18653/v1/2024.findings-emnlp.989
BiMediX: Bilingual Medical Mixture of Experts LLM
['Sara Pieri', 'Sahal Shaji Mullappilly', 'F. Khan', 'R. Anwer', 'Salman H. Khan', 'Timothy Baldwin', 'Hisham Cholakkal']
2024
Conference on Empirical Methods in Natural Language Processing
15
43
['Computer Science']
2402.13350
PIRB: A Comprehensive Benchmark of Polish Dense and Hybrid Text Retrieval Methods
['Sławomir Dadas', 'Michał Perełkiewicz', 'Rafał Poświata']
['cs.CL']
We present Polish Information Retrieval Benchmark (PIRB), a comprehensive evaluation framework encompassing 41 text information retrieval tasks for Polish. The benchmark incorporates existing datasets as well as 10 new, previously unpublished datasets covering diverse topics such as medicine, law, business, physics, an...
2024-02-20T19:53:36Z
null
null
null
null
null
null
null
null
null
null
2402.13516
ProSparse: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models
['Chenyang Song', 'Xu Han', 'Zhengyan Zhang', 'Shengding Hu', 'Xiyu Shi', 'Kuai Li', 'Chen Chen', 'Zhiyuan Liu', 'Guangli Li', 'Tao Yang', 'Maosong Sun']
['cs.LG', 'cs.AI', 'cs.CL', 'I.2.7']
Activation sparsity refers to the existence of considerable weakly-contributed elements among activation outputs. As a prevalent property of the models using the ReLU activation function, activation sparsity has been proven a promising paradigm to boost model inference efficiency. Nevertheless, most large language mode...
2024-02-21T03:58:49Z
19 pages, 4 figures, 9 tables
null
null
ProSparse: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models
['Chenyang Song', 'Xu Han', 'Zhengyan Zhang', 'Shengding Hu', 'Xiyu Shi', 'Kuai Li', 'Chen Chen', 'Zhiyuan Liu', 'Guanglin Li', 'Tao Yang', 'Maosong Sun']
2024
International Conference on Computational Linguistics
32
98
['Computer Science']
2402.13583
LongWanjuan: Towards Systematic Measurement for Long Text Quality
['Kai Lv', 'Xiaoran Liu', 'Qipeng Guo', 'Hang Yan', 'Conghui He', 'Xipeng Qiu', 'Dahua Lin']
['cs.CL']
The quality of training data are crucial for enhancing the long-text capabilities of foundation models. Despite existing efforts to refine data quality through heuristic rules and evaluations based on data diversity and difficulty, there's a lack of systematic approaches specifically tailored for assessing long texts. ...
2024-02-21T07:27:18Z
Update Figures
null
null
LongWanjuan: Towards Systematic Measurement for Long Text Quality
['Kai Lv', 'Xiaoran Liu', 'Qipeng Guo', 'Hang Yan', 'Conghui He', 'Xipeng Qiu', 'Dahua Lin']
2024
Conference on Empirical Methods in Natural Language Processing
4
67
['Computer Science']
2402.13604
Breaking the HISCO Barrier: Automatic Occupational Standardization with OccCANINE
['Christian Møller Dahl', 'Torben Johansen', 'Christian Vedel']
['cs.CL', 'econ.EM', 'I.2.7; I.7.0']
This paper introduces a new tool, OccCANINE, to automatically transform occupational descriptions into the HISCO classification system. The manual work involved in processing and classifying occupational descriptions is error-prone, tedious, and time-consuming. We finetune a preexisting language model (CANINE) to do th...
2024-02-21T08:10:43Z
All code and guides on how to use OccCANINE is available on GitHub https://github.com/christianvedels/OccCANINE
null
null
null
null
null
null
null
null
null
2402.13616
YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information
['Chien-Yao Wang', 'I-Hau Yeh', 'Hong-Yuan Mark Liao']
['cs.CV']
Today's deep learning methods focus on how to design the most appropriate objective functions so that the prediction results of the model can be closest to the ground truth. Meanwhile, an appropriate architecture that can facilitate acquisition of enough information for prediction has to be designed. Existing methods i...
2024-02-21T08:42:53Z
null
null
null
null
null
null
null
null
null
null
2402.13643
Class-Aware Mask-Guided Feature Refinement for Scene Text Recognition
['Mingkun Yang', 'Biao Yang', 'Minghui Liao', 'Yingying Zhu', 'Xiang Bai']
['cs.CV']
Scene text recognition is a rapidly developing field that faces numerous challenges due to the complexity and diversity of scene text, including complex backgrounds, diverse fonts, flexible arrangements, and accidental occlusions. In this paper, we propose a novel approach called Class-Aware Mask-guided feature refinem...
2024-02-21T09:22:45Z
Accepted by Pattern Recognition
null
null
null
null
null
null
null
null
null
2402.13718
$\infty$Bench: Extending Long Context Evaluation Beyond 100K Tokens
['Xinrong Zhang', 'Yingfa Chen', 'Shengding Hu', 'Zihang Xu', 'Junhao Chen', 'Moo Khai Hao', 'Xu Han', 'Zhen Leng Thai', 'Shuo Wang', 'Zhiyuan Liu', 'Maosong Sun']
['cs.CL']
Processing and reasoning over long contexts is crucial for many practical applications of Large Language Models (LLMs), such as document comprehension and agent construction. Despite recent strides in making LLMs process contexts with more than 100K tokens, there is currently a lack of a standardized benchmark to evalu...
2024-02-21T11:30:29Z
null
2023.12.15ARR
null
∞Bench: Extending Long Context Evaluation Beyond 100K Tokens
['Xinrong Zhang', 'Yingfa Chen', 'Shengding Hu', 'Zihang Xu', 'Junhao Chen', 'Moo Khai Hao', 'Xu Han', 'Z. Thai', 'Shuo Wang', 'Zhiyuan Liu', 'Maosong Sun']
2024
Volume 1
195
53
['Computer Science']
2402.13753
LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens
['Yiran Ding', 'Li Lyna Zhang', 'Chengruidong Zhang', 'Yuanyuan Xu', 'Ning Shang', 'Jiahang Xu', 'Fan Yang', 'Mao Yang']
['cs.CL']
Large context window is a desirable feature in large language models (LLMs). However, due to high fine-tuning costs, scarcity of long texts, and catastrophic values introduced by new token positions, current extended context windows are limited to around 128k tokens. This paper introduces LongRoPE that, for the first t...
2024-02-21T12:30:33Z
null
null
null
null
null
null
null
null
null
null
2402.13852
Neural Control System for Continuous Glucose Monitoring and Maintenance
['Azmine Toushik Wasi']
['cs.LG', 'cs.AI', 'cs.NE', 'cs.SY', 'eess.SY', 'stat.ML']
Precise glucose level monitoring is critical for people with diabetes to avoid serious complications. While there are several methods for continuous glucose level monitoring, research on maintenance devices is limited. To mitigate the gap, we provide a novel neural control system for continuous glucose monitoring and m...
2024-02-21T14:56:36Z
9 Pages, 4 figures, ICLR 2024 Tiny Papers Track https://openreview.net/forum?id=Te4P3Cn54g
The Second Tiny Papers Track at ICLR 2024
null
null
null
null
null
null
null
null
2402.13929
SDXL-Lightning: Progressive Adversarial Diffusion Distillation
['Shanchuan Lin', 'Anran Wang', 'Xiao Yang']
['cs.CV', 'cs.AI', 'cs.LG']
We propose a diffusion distillation method that achieves new state-of-the-art in one-step/few-step 1024px text-to-image generation based on SDXL. Our method combines progressive and adversarial distillation to achieve a balance between quality and mode coverage. In this paper, we discuss the theoretical analysis, discr...
2024-02-21T16:51:05Z
null
null
null
SDXL-Lightning: Progressive Adversarial Diffusion Distillation
['Shanchuan Lin', 'Anran Wang', 'Xiao Yang']
2024
arXiv.org
134
79
['Computer Science']
2402.13963
Towards Building Multilingual Language Model for Medicine
['Pengcheng Qiu', 'Chaoyi Wu', 'Xiaoman Zhang', 'Weixiong Lin', 'Haicheng Wang', 'Ya Zhang', 'Yanfeng Wang', 'Weidi Xie']
['cs.CL']
The development of open-source, multilingual medical language models can benefit a wide, linguistically diverse audience from different regions. To promote this domain, we present contributions from the following: First, we construct a multilingual medical corpus, containing approximately 25.5B tokens encompassing 6 ma...
2024-02-21T17:47:20Z
null
null
null
null
null
null
null
null
null
null
2402.13991
Analysing The Impact of Sequence Composition on Language Model Pre-Training
['Yu Zhao', 'Yuanbin Qu', 'Konrad Staniszewski', 'Szymon Tworkowski', 'Wei Liu', 'Piotr Miłoś', 'Yuxiang Wu', 'Pasquale Minervini']
['cs.CL']
Most language model pre-training frameworks concatenate multiple documents into fixed-length sequences and use causal masking to compute the likelihood of each token given its context; this strategy is widely adopted due to its simplicity and efficiency. However, to this day, the influence of the pre-training sequence ...
2024-02-21T18:23:16Z
null
Analysing The Impact of Sequence Composition on Language Model Pre-Training (Zhao et al., ACL 2024)
10.18653/v1/2024.acl-long.427
Analysing The Impact of Sequence Composition on Language Model Pre-Training
['Yu Zhao', 'Yuanbin Qu', 'Konrad Staniszewski', 'Szymon Tworkowski', 'Wei Liu', "Piotr Milo's", 'Yuxiang Wu', 'Pasquale Minervini']
2024
Annual Meeting of the Association for Computational Linguistics
15
43
['Computer Science']
2402.14285
Symbolic Music Generation with Non-Differentiable Rule Guided Diffusion
['Yujia Huang', 'Adishree Ghatare', 'Yuanzhe Liu', 'Ziniu Hu', 'Qinsheng Zhang', 'Chandramouli S Sastry', 'Siddharth Gururani', 'Sageev Oore', 'Yisong Yue']
['cs.SD', 'cs.LG', 'eess.AS']
We study the problem of symbolic music generation (e.g., generating piano rolls), with a technical focus on non-differentiable rule guidance. Musical rules are often expressed in symbolic form on note characteristics, such as note density or chord progression, many of which are non-differentiable which pose a challenge...
2024-02-22T04:55:58Z
ICML 2024 (Oral)
null
null
null
null
null
null
null
null
null
2402.14289
TinyLLaVA: A Framework of Small-scale Large Multimodal Models
['Baichuan Zhou', 'Ying Hu', 'Xi Weng', 'Junlong Jia', 'Jie Luo', 'Xien Liu', 'Ji Wu', 'Lei Huang']
['cs.LG', 'cs.CL']
We present the TinyLLaVA framework that provides a unified perspective in designing and analyzing the small-scale Large Multimodal Models (LMMs). We empirically study the effects of different vision encoders, connection modules, language models, training data and training recipes. Our extensive experiments showed that ...
2024-02-22T05:05:30Z
Our model weights and codes will be made public at https://github.com/DLCV-BUAA/TinyLLaVABench
null
null
null
null
null
null
null
null
null
2402.14310
Hint-before-Solving Prompting: Guiding LLMs to Effectively Utilize Encoded Knowledge
['Jinlan Fu', 'Shenzhen Huangfu', 'Hang Yan', 'See-Kiong Ng', 'Xipeng Qiu']
['cs.CL']
Large Language Models (LLMs) have recently showcased remarkable generalizability in various domains. Despite their extensive knowledge, LLMs still face challenges in efficiently utilizing encoded knowledge to develop accurate and logical reasoning processes. To mitigate this problem, we introduced Hint-before-Solving P...
2024-02-22T05:58:03Z
18 pages
null
null
Hint-before-Solving Prompting: Guiding LLMs to Effectively Utilize Encoded Knowledge
['Jinlan Fu', 'Shenzhen Huangfu', 'Hang Yan', 'See-Kiong Ng', 'Xipeng Qiu']
2024
arXiv.org
8
35
['Computer Science']
2402.14318
Assessing generalization capability of text ranking models in Polish
['Sławomir Dadas', 'Małgorzata Grębowiec']
['cs.CL']
Retrieval-augmented generation (RAG) is becoming an increasingly popular technique for integrating internal knowledge bases with large language models. In a typical RAG pipeline, three models are used, responsible for the retrieval, reranking, and generation stages. In this article, we focus on the reranking problem fo...
2024-02-22T06:21:41Z
null
null
null
null
null
null
null
null
null
null
2402.14327
Subobject-level Image Tokenization
['Delong Chen', 'Samuel Cahyawijaya', 'Jianfeng Liu', 'Baoyuan Wang', 'Pascale Fung']
['cs.CV', 'cs.CL']
Patch-based image tokenization ignores the morphology of the visual world, limiting effective and efficient learning of image understanding. Inspired by subword tokenization, we introduce subobject-level adaptive token segmentation and explore several approaches, including superpixel, SAM, and a proposed Efficient and ...
2024-02-22T06:47:44Z
null
null
null
Subobject-level Image Tokenization
['Delong Chen', 'Samuel Cahyawijaya', 'Jianfeng Liu', 'Baoyuan Wang', 'Pascale Fung']
2024
arXiv.org
9
91
['Computer Science']
2402.14379
Novi jezički modeli za srpski jezik
['Mihailo Škorić']
['cs.CL']
The paper will briefly present the development history of transformer-based language models for the Serbian language. Several new models for text generation and vectorization, trained on the resources of the Society for Language Resources and Technologies, will also be presented. Ten selected vectorization models for S...
2024-02-22T08:48:21Z
in Serbian language
null
null
null
null
null
null
null
null
null
2402.14407
Learning an Actionable Discrete Diffusion Policy via Large-Scale Actionless Video Pre-Training
['Haoran He', 'Chenjia Bai', 'Ling Pan', 'Weinan Zhang', 'Bin Zhao', 'Xuelong Li']
['cs.LG', 'cs.CV', 'cs.RO']
Learning a generalist embodied agent capable of completing multiple tasks poses challenges, primarily stemming from the scarcity of action-labeled robotic datasets. In contrast, a vast amount of human videos exist, capturing intricate tasks and interactions with the physical world. Promising prospects arise for utilizi...
2024-02-22T09:48:47Z
Accepted by NeurIPS 2024. 24 pages
null
null
null
null
null
null
null
null
null
2402.14499
"My Answer is C": First-Token Probabilities Do Not Match Text Answers in Instruction-Tuned Language Models
['Xinpeng Wang', 'Bolei Ma', 'Chengzhi Hu', 'Leon Weber-Genzel', 'Paul Röttger', 'Frauke Kreuter', 'Dirk Hovy', 'Barbara Plank']
['cs.CL']
The open-ended nature of language generation makes the evaluation of autoregressive large language models (LLMs) challenging. One common evaluation approach uses multiple-choice questions (MCQ) to limit the response space. The model is then evaluated by ranking the candidate answers by the log probability of the first ...
2024-02-22T12:47:33Z
ACL 2024 Findings
null
null
null
null
null
null
null
null
null
2402.14526
Balanced Data Sampling for Language Model Training with Clustering
['Yunfan Shao', 'Linyang Li', 'Zhaoye Fei', 'Hang Yan', 'Dahua Lin', 'Xipeng Qiu']
['cs.CL', 'cs.AI']
Data plays a fundamental role in the training of Large Language Models (LLMs). While attention has been paid to the collection and composition of datasets, determining the data sampling strategy in training remains an open question. Most LLMs are trained with a simple strategy, random sampling. However, this sampling s...
2024-02-22T13:20:53Z
ACL 2024 (findings), Code is released at https://github.com/choosewhatulike/cluster-clip
null
null
null
null
null
null
null
null
null
2402.14545
Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective
['Zihao Yue', 'Liang Zhang', 'Qin Jin']
['cs.CL', 'cs.CV']
Large Multimodal Models (LMMs) often suffer from multimodal hallucinations, wherein they may create content that is not present in the visual inputs. In this paper, we explore a new angle of this issue: overly detailed training data hinders the model's ability to timely terminate generation, leading to continued output...
2024-02-22T13:33:13Z
Accepted to ACL 2024
null
null
null
null
null
null
null
null
null
2402.14654
Multi-HMR: Multi-Person Whole-Body Human Mesh Recovery in a Single Shot
['Fabien Baradel', 'Matthieu Armando', 'Salma Galaaoui', 'Romain Brégier', 'Philippe Weinzaepfel', 'Grégory Rogez', 'Thomas Lucas']
['cs.CV']
We present Multi-HMR, a strong single-shot model for multi-person 3D human mesh recovery from a single RGB image. Predictions encompass the whole body, i.e., including hands and facial expressions, using the SMPL-X parametric model and 3D location in the camera coordinate system. Our model detects people by predicting c...
2024-02-22T16:05:13Z
Accepted at ECCV'24 - Code: https://github.com/naver/multi-hmr
null
null
null
null
null
null
null
null
null
2402.14658
OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement
['Tianyu Zheng', 'Ge Zhang', 'Tianhao Shen', 'Xueling Liu', 'Bill Yuchen Lin', 'Jie Fu', 'Wenhu Chen', 'Xiang Yue']
['cs.SE', 'cs.AI', 'cs.CL']
The introduction of large language models has significantly advanced code generation. However, open-source models often lack the execution capabilities and iterative refinement of advanced systems like the GPT-4 Code Interpreter. To address this, we introduce OpenCodeInterpreter, a family of open-source code systems de...
2024-02-22T16:06:23Z
null
null
null
OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement
['Tianyu Zheng', 'Ge Zhang', 'Tianhao Shen', 'Xueling Liu', 'Bill Yuchen Lin', 'Jie Fu', 'Wenhu Chen', 'Xiang Yue']
2024
Annual Meeting of the Association for Computational Linguistics
131
45
['Computer Science']
2402.14710
IEPile: Unearthing Large-Scale Schema-Based Information Extraction Corpus
['Honghao Gui', 'Lin Yuan', 'Hongbin Ye', 'Ningyu Zhang', 'Mengshu Sun', 'Lei Liang', 'Huajun Chen']
['cs.CL', 'cs.AI', 'cs.DB', 'cs.IR', 'cs.LG']
Large Language Models (LLMs) demonstrate remarkable potential across various domains; however, they exhibit a significant performance gap in Information Extraction (IE). Note that high-quality instruction data is the vital key for enhancing the specific capabilities of LLMs, while current IE datasets tend to be small i...
2024-02-22T17:11:38Z
ACL 2024 (short); 21 pages; Github: https://github.com/zjunlp/IEPile
null
null
null
null
null
null
null
null
null
2402.14714
Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models
['Seungduk Kim', 'Seungtaek Choi', 'Myeongho Jeong']
['cs.CL', 'cs.AI']
This report introduces \texttt{EEVE-Korean-v1.0}, a Korean adaptation of large language models that exhibit remarkable capabilities across English and Korean text understanding. Building on recent highly capable but English-centric LLMs, such as SOLAR-10.7B and Phi-2, where non-English texts are inefficiently processed...
2024-02-22T17:12:39Z
null
null
null
Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models
['Seungduk Kim', 'Seungtaek Choi', 'Myeongho Jeong']
2024
arXiv.org
7
31
['Computer Science']
2402.14740
Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs
['Arash Ahmadian', 'Chris Cremer', 'Matthias Gallé', 'Marzieh Fadaee', 'Julia Kreutzer', 'Olivier Pietquin', 'Ahmet Üstün', 'Sara Hooker']
['cs.LG', 'I.2.7']
AI alignment in the shape of Reinforcement Learning from Human Feedback (RLHF) is increasingly treated as a crucial ingredient for high performance large language models. Proximal Policy Optimization (PPO) has been positioned by recent literature as the canonical method for the RL part of RLHF. However, it involves bot...
2024-02-22T17:52:34Z
27 pages, 7 figures, 2 tables
null
null
null
null
null
null
null
null
null
2402.14776
2D Matryoshka Sentence Embeddings
['Xianming Li', 'Zongxi Li', 'Jing Li', 'Haoran Xie', 'Qing Li']
['cs.CL', 'cs.LG']
Common approaches rely on fixed-length embedding vectors from language models as sentence embeddings for downstream tasks such as semantic textual similarity (STS). Such methods are limited in their flexibility due to unknown computational constraints and budgets across various applications. Matryoshka Representation L...
2024-02-22T18:35:05Z
Decoupled with ESE
null
null
2D Matryoshka Sentence Embeddings
['Xianming Li', 'Zongxi Li', 'Jing Li', 'Haoran Xie', 'Qing Li']
2024
arXiv.org
4
34
['Computer Science']
2402.14811
Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking
['Nikhil Prakash', 'Tamar Rott Shaham', 'Tal Haklay', 'Yonatan Belinkov', 'David Bau']
['cs.CL', 'cs.LG']
Fine-tuning on generalized tasks such as instruction following, code generation, and mathematics has been shown to enhance language models' performance on a range of tasks. Nevertheless, explanations of how such fine-tuning influences the internal computations in these models remain elusive. We study how fine-tuning af...
2024-02-22T18:59:24Z
ICLR 2024. 26 pages, 13 figures. Code and data at https://finetuning.baulab.info/
null
null
Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking
['Nikhil Prakash', 'Tamar Rott Shaham', 'Tal Haklay', 'Yonatan Belinkov', 'David Bau']
2024
International Conference on Learning Representations
67
51
['Computer Science']
2402.14830
Orca-Math: Unlocking the potential of SLMs in Grade School Math
['Arindam Mitra', 'Hamed Khanpour', 'Corby Rosset', 'Ahmed Awadallah']
['cs.CL', 'cs.AI']
Mathematical word problem-solving has long been recognized as a complex task for small language models (SLMs). A recent study hypothesized that the smallest model size, needed to achieve over 80% accuracy on the GSM8K benchmark, is 34 billion parameters. To reach this level of performance with smaller models, researche...
2024-02-16T23:44:38Z
null
null
null
null
null
null
null
null
null
null
2402.14905
MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases
['Zechun Liu', 'Changsheng Zhao', 'Forrest Iandola', 'Chen Lai', 'Yuandong Tian', 'Igor Fedorov', 'Yunyang Xiong', 'Ernie Chang', 'Yangyang Shi', 'Raghuraman Krishnamoorthi', 'Liangzhen Lai', 'Vikas Chandra']
['cs.LG', 'cs.AI', 'cs.CL']
This paper addresses the growing need for efficient large language models (LLMs) on mobile devices, driven by increasing cloud costs and latency concerns. We focus on designing top-quality LLMs with fewer than a billion parameters, a practical choice for mobile deployment. Contrary to prevailing belief emphasizing the ...
2024-02-22T18:58:55Z
ICML 2024. Code is available at https://github.com/facebookresearch/MobileLLM
null
null
MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases
['Zechun Liu', 'Changsheng Zhao', 'Forrest N. Iandola', 'Chen Lai', 'Yuandong Tian', 'Igor Fedorov', 'Yunyang Xiong', 'Ernie Chang', 'Yangyang Shi', 'Raghuraman Krishnamoorthi', 'Liangzhen Lai', 'Vikas Chandra']
2024
International Conference on Machine Learning
103
65
['Computer Science']
2402.14992
tinyBenchmarks: evaluating LLMs with fewer examples
['Felipe Maia Polo', 'Lucas Weber', 'Leshem Choshen', 'Yuekai Sun', 'Gongjun Xu', 'Mikhail Yurochkin']
['cs.CL', 'cs.AI', 'cs.LG', 'stat.ML']
The versatility of large language models (LLMs) led to the creation of diverse benchmarks that thoroughly test a variety of language models' abilities. These benchmarks consist of tens of thousands of examples making evaluation of LLMs very expensive. In this paper, we investigate strategies to reduce the number of eva...
2024-02-22T22:05:23Z
Proceedings of the 41st International Conference on Machine Learning (ICML)
null
null
null
null
null
null
null
null
null
2402.15043
KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models
['Zhuohao Yu', 'Chang Gao', 'Wenjin Yao', 'Yidong Wang', 'Wei Ye', 'Jindong Wang', 'Xing Xie', 'Yue Zhang', 'Shikun Zhang']
['cs.CL', 'cs.AI', 'cs.LG']
Automatic evaluation methods for large language models (LLMs) are hindered by data contamination, leading to inflated assessments of their effectiveness. Existing strategies, which aim to detect contaminated texts, focus on quantifying contamination status instead of accurately gauging model performance. In this paper,...
2024-02-23T01:30:39Z
Accepted to ACL 2024 (main conference); 19 pages, 5 figures, 19 tables, code is available at: https://github.com/zhuohaoyu/KIEval
null
null
null
null
null
null
null
null
null
2402.15059
ColBERT-XM: A Modular Multi-Vector Representation Model for Zero-Shot Multilingual Information Retrieval
['Antoine Louis', 'Vageesh Saxena', 'Gijs van Dijck', 'Gerasimos Spanakis']
['cs.CL', 'cs.IR']
State-of-the-art neural retrievers predominantly focus on high-resource languages like English, which impedes their adoption in retrieval scenarios involving other languages. Current approaches circumvent the lack of high-quality labeled data in non-English languages by leveraging multilingual pretrained language model...
2024-02-23T02:21:24Z
Under review. Code is available at https://github.com/ant-louis/xm-retrievers
null
null
null
null
null
null
null
null
null
2402.15343
NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data
['Sergei Bogdanov', 'Alexandre Constantin', 'Timothée Bernard', 'Benoit Crabbé', 'Etienne Bernard']
['cs.CL', 'cs.AI', 'cs.LG']
Large Language Models (LLMs) have shown impressive abilities in data annotation, opening the way for new approaches to solve classic NLP problems. In this paper, we show how to use LLMs to create NuNER, a compact language representation model specialized in the Named Entity Recognition (NER) task. NuNER can be fine-tun...
2024-02-23T14:23:51Z
null
null
null
null
null
null
null
null
null
null
2402.15391
Genie: Generative Interactive Environments
['Jake Bruce', 'Michael Dennis', 'Ashley Edwards', 'Jack Parker-Holder', 'Yuge Shi', 'Edward Hughes', 'Matthew Lai', 'Aditi Mavalankar', 'Richie Steigerwald', 'Chris Apps', 'Yusuf Aytar', 'Sarah Bechtle', 'Feryal Behbahani', 'Stephanie Chan', 'Nicolas Heess', 'Lucy Gonzalez', 'Simon Osindero', 'Sherjil Ozair', 'Scott R...
['cs.LG', 'cs.AI', 'cs.CV']
We introduce Genie, the first generative interactive environment trained in an unsupervised manner from unlabelled Internet videos. The model can be prompted to generate an endless variety of action-controllable virtual worlds described through text, synthetic images, photographs, and even sketches. At 11B parameters, ...
2024-02-23T15:47:26Z
https://sites.google.com/corp/view/genie-2024/
null
null
Genie: Generative Interactive Environments
['Jake Bruce', 'Michael Dennis', 'Ashley Edwards', 'Jack Parker-Holder', 'Yuge Shi', 'Edward Hughes', 'Matthew Lai', 'Aditi Mavalankar', 'Richie Steigerwald', 'Chris Apps', 'Y. Aytar', 'Sarah Bechtle', 'Feryal M. P. Behbahani', 'Stephanie Chan', 'N. Heess', 'Lucy Gonzalez', 'Simon Osindero', 'Sherjil Ozair', 'Scott Ree...
2024
International Conference on Machine Learning
188
80
['Computer Science']
2402.15449
Repetition Improves Language Model Embeddings
['Jacob Mitchell Springer', 'Suhas Kotha', 'Daniel Fried', 'Graham Neubig', 'Aditi Raghunathan']
['cs.CL', 'cs.LG']
Recent approaches to improving the extraction of text embeddings from autoregressive large language models (LLMs) have largely focused on improvements to data, backbone pretrained language models, or improving task-differentiation via instructions. In this work, we address an architectural limitation of autoregressive ...
2024-02-23T17:25:10Z
36 pages, 11 figures, 16 tables
null
null
null
null
null
null
null
null
null
2402.15506
AgentOhana: Design Unified Data and Training Pipeline for Effective Agent Learning
['Jianguo Zhang', 'Tian Lan', 'Rithesh Murthy', 'Zhiwei Liu', 'Weiran Yao', 'Ming Zhu', 'Juntao Tan', 'Thai Hoang', 'Zuxin Liu', 'Liangwei Yang', 'Yihao Feng', 'Shirley Kokane', 'Tulika Awalgaonkar', 'Juan Carlos Niebles', 'Silvio Savarese', 'Shelby Heinecke', 'Huan Wang', 'Caiming Xiong']
['cs.AI', 'cs.CL', 'cs.LG']
Autonomous agents powered by large language models (LLMs) have garnered significant research attention. However, fully harnessing the potential of LLMs for agent-based tasks presents inherent challenges due to the heterogeneous nature of diverse data sources featuring multi-turn trajectories. In this paper, we introduc...
2024-02-23T18:56:26Z
Add GitHub repo link at \url{https://github.com/SalesforceAIResearch/xLAM} and HuggingFace model link at \url{https://huggingface.co/Salesforce/xLAM-v0.1-r}
null
null
null
null
null
null
null
null
null
2402.15648
MambaIR: A Simple Baseline for Image Restoration with State-Space Model
['Hang Guo', 'Jinmin Li', 'Tao Dai', 'Zhihao Ouyang', 'Xudong Ren', 'Shu-Tao Xia']
['cs.CV']
Recent years have seen significant advancements in image restoration, largely attributed to the development of modern deep neural networks, such as CNNs and Transformers. However, existing restoration backbones often face the dilemma between global receptive fields and efficient computation, hindering their application...
2024-02-23T23:15:54Z
Accepted by ECCV2024
null
null
MambaIR: A Simple Baseline for Image Restoration with State-Space Model
['Hang Guo', 'Jinmin Li', 'Tao Dai', 'Zhihao Ouyang', 'Xudong Ren', 'Shu-Tao Xia']
2024
European Conference on Computer Vision
249
96
['Computer Science']
2402.15729
How Do Humans Write Code? Large Models Do It the Same Way Too
['Long Li', 'Xuzheng He', 'Haozhe Wang', 'Linlin Wang', 'Liang He']
['cs.AI', 'cs.CL', 'cs.PL']
Program-of-Thought (PoT) replaces natural language-based Chain-of-Thought (CoT) as the most popular method in Large Language Models (LLMs) mathematical reasoning tasks by utilizing external tool calls to circumvent computational errors. However, our evaluation of the GPT-4 and Llama series reveals that using PoT introd...
2024-02-24T05:40:01Z
null
null
null
null
null
null
null
null
null
null
2402.15761
Res-VMamba: Fine-Grained Food Category Visual Classification Using Selective State Space Models with Deep Residual Learning
['Chi-Sheng Chen', 'Guan-Ying Chen', 'Dong Zhou', 'Di Jiang', 'Dai-Shi Chen']
['cs.CV', 'cs.AI']
Food classification is the foundation for developing food vision tasks and plays a key role in the burgeoning field of computational nutrition. Due to the complexity of food requiring fine-grained classification, recent academic research mainly modifies Convolutional Neural Networks (CNNs) and/or Vision Transformers (V...
2024-02-24T08:20:39Z
14 pages, 3 figures
null
null
Res-VMamba: Fine-Grained Food Category Visual Classification Using Selective State Space Models with Deep Residual Learning
['Chi-Sheng Chen', 'Guan-Ying Chen', 'Dong Zhou', 'Di Jiang', 'Daishi Chen']
2024
arXiv.org
24
67
['Computer Science']
2402.15861
MATHWELL: Generating Educational Math Word Problems Using Teacher Annotations
['Bryan R Christ', 'Jonathan Kropko', 'Thomas Hartvigsen']
['cs.CL']
Math word problems are critical K-8 educational tools, but writing them is time consuming and requires extensive expertise. To be educational, problems must be solvable, have accurate answers, and, most importantly, be educationally appropriate. We propose that language models have potential to support K-8 math educati...
2024-02-24T17:08:45Z
24 pages, 10 figures Accepted to EMNLP 2024 (Findings)
null
null
null
null
null
null
null
null
null
2402.15865
HIR-Diff: Unsupervised Hyperspectral Image Restoration Via Improved Diffusion Models
['Li Pang', 'Xiangyu Rui', 'Long Cui', 'Hongzhong Wang', 'Deyu Meng', 'Xiangyong Cao']
['cs.CV', 'eess.IV']
Hyperspectral image (HSI) restoration aims at recovering clean images from degraded observations and plays a vital role in downstream tasks. Existing model-based methods have limitations in accurately modeling the complex image characteristics with handcraft priors, and deep learning-based methods suffer from poor gene...
2024-02-24T17:15:05Z
null
null
null
null
null
null
null
null
null
null
2402.16029
GraphWiz: An Instruction-Following Language Model for Graph Problems
['Nuo Chen', 'Yuhan Li', 'Jianheng Tang', 'Jia Li']
['cs.CL']
Large language models (LLMs) have achieved impressive success across several fields, but their proficiency in understanding and resolving complex graph problems is less explored. To bridge this gap, we introduce GraphInstruct, a novel and comprehensive instruction-tuning dataset designed to equip language models with t...
2024-02-25T08:41:32Z
27pages, 15 tables
null
null
null
null
null
null
null
null
null
2402.16065
Training a Bilingual Language Model by Mapping Tokens onto a Shared Character Space
['Aviad Rom', 'Kfir Bar']
['cs.CL', 'cs.LG']
We train a bilingual Arabic-Hebrew language model using a transliterated version of Arabic texts in Hebrew, to ensure both languages are represented in the same script. Given the morphological, structural similarities, and the extensive number of cognates shared among Arabic and Hebrew, we assess the performance of a l...
2024-02-25T11:26:39Z
null
null
null
null
null
null
null
null
null
null
2402.16107
Knowledge Fusion of Chat LLMs: A Preliminary Technical Report
['Fanqi Wan', 'Ziyi Yang', 'Longguang Zhong', 'Xiaojun Quan', 'Xinting Huang', 'Wei Bi']
['cs.CL']
Recently, FuseLLM introduced the concept of knowledge fusion to transfer the collective knowledge of multiple structurally varied LLMs into a target LLM through lightweight continual training. In this report, we extend the scalability and flexibility of the FuseLLM framework to realize the fusion of chat LLMs, resultin...
2024-02-25T15:11:58Z
Technical Report, work in progress
null
null
Knowledge Fusion of Chat LLMs: A Preliminary Technical Report
['Fanqi Wan', 'Ziyi Yang', 'Longguang Zhong', 'Xiaojun Quan', 'Xinting Huang', 'Wei Bi']
2024
null
1
40
['Computer Science']
2402.16153
ChatMusician: Understanding and Generating Music Intrinsically with LLM
['Ruibin Yuan', 'Hanfeng Lin', 'Yi Wang', 'Zeyue Tian', 'Shangda Wu', 'Tianhao Shen', 'Ge Zhang', 'Yuhang Wu', 'Cong Liu', 'Ziya Zhou', 'Ziyang Ma', 'Liumeng Xue', 'Ziyu Wang', 'Qin Liu', 'Tianyu Zheng', 'Yizhi Li', 'Yinghao Ma', 'Yiming Liang', 'Xiaowei Chi', 'Ruibo Liu', 'Zili Wang', 'Pengfei Li', 'Jingcheng Wu', 'Ch...
['cs.SD', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.MM', 'eess.AS']
While Large Language Models (LLMs) demonstrate impressive capabilities in text generation, we find that their ability has yet to be generalized to music, humanity's creative language. We introduce ChatMusician, an open-source LLM that integrates intrinsic musical abilities. It is based on continual pre-training and fin...
2024-02-25T17:19:41Z
GitHub: https://shanghaicannon.github.io/ChatMusician/
null
null
null
null
null
null
null
null
null
2402.16352
MathGenie: Generating Synthetic Data with Question Back-translation for Enhancing Mathematical Reasoning of LLMs
['Zimu Lu', 'Aojun Zhou', 'Houxing Ren', 'Ke Wang', 'Weikang Shi', 'Junting Pan', 'Mingjie Zhan', 'Hongsheng Li']
['cs.CL', 'cs.AI']
Large language models (LLMs) have exhibited great potential in mathematical reasoning. However, there remains a performance gap in this area between existing open-source models and closed-source models such as GPT-4. In this paper, we introduce MathGenie, a novel method for generating diverse and reliable math problems...
2024-02-26T07:17:25Z
ACL 2024 camera ready
null
null
null
null
null
null
null
null
null
2402.16444
ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors
['Zhexin Zhang', 'Yida Lu', 'Jingyuan Ma', 'Di Zhang', 'Rui Li', 'Pei Ke', 'Hao Sun', 'Lei Sha', 'Zhifang Sui', 'Hongning Wang', 'Minlie Huang']
['cs.CL']
The safety of Large Language Models (LLMs) has gained increasing attention in recent years, but there still lacks a comprehensive approach for detecting safety issues within LLMs' responses in an aligned, customizable and explainable manner. In this paper, we propose ShieldLM, an LLM-based safety detector, which aligns...
2024-02-26T09:43:02Z
19 pages. Camera ready version of EMNLP 2024 Findings
null
null
ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors
['Zhexin Zhang', 'Yida Lu', 'Jingyuan Ma', 'Di Zhang', 'Rui Li', 'Pei Ke', 'Hao Sun', 'Lei Sha', 'Zhifang Sui', 'Hongning Wang', 'Minlie Huang']
2024
Conference on Empirical Methods in Natural Language Processing
31
40
['Computer Science']
2402.16445
ProLLaMA: A Protein Language Model for Multi-Task Protein Language Processing
['Liuzhenghao Lv', 'Zongying Lin', 'Hao Li', 'Yuyang Liu', 'Jiaxi Cui', 'Calvin Yu-Chian Chen', 'Li Yuan', 'Yonghong Tian']
['cs.CE', 'q-bio.BM']
Large Language Models (LLMs) have achieved remarkable performance in multiple Natural Language Processing (NLP) tasks. Under the premise that protein sequences constitute the protein language, Protein Language Models(PLMs) have advanced the field of protein engineering. However, as of now, unlike LLMs in NLP, PLMs cann...
2024-02-26T09:43:52Z
null
null
null
null
null
null
null
null
null
null
2402.16472
mEdIT: Multilingual Text Editing via Instruction Tuning
['Vipul Raheja', 'Dimitris Alikaniotis', 'Vivek Kulkarni', 'Bashar Alhafni', 'Dhruv Kumar']
['cs.CL', 'cs.AI', 'I.2.7']
We introduce mEdIT, a multi-lingual extension to CoEdIT -- the recent state-of-the-art text editing models for writing assistance. mEdIT models are trained by fine-tuning multi-lingual large, pre-trained language models (LLMs) via instruction tuning. They are designed to take instructions from the user specifying the a...
2024-02-26T10:33:36Z
Accepted to NAACL 2024 (Main). 23 pages, 8 tables, 11 figures
null
null
null
null
null
null
null
null
null
2402.16602
Rethinking Negative Instances for Generative Named Entity Recognition
['Yuyang Ding', 'Juntao Li', 'Pinzheng Wang', 'Zecheng Tang', 'Bowen Yan', 'Min Zhang']
['cs.CL']
Large Language Models (LLMs) have demonstrated impressive capabilities for generalizing in unseen tasks. In the Named Entity Recognition (NER) task, recent advancements have seen the remarkable improvement of LLMs in a broad range of entity domains via instruction tuning, by adopting entity-centric schema. In this work...
2024-02-26T14:30:37Z
ACL 2024 Findings
null
null
Rethinking Negative Instances for Generative Named Entity Recognition
['Yuyang Ding', 'Juntao Li', 'Pinzheng Wang', 'Zecheng Tang', 'Bowen Yan', 'Min Zhang']
2024
Annual Meeting of the Association for Computational Linguistics
13
49
['Computer Science']
2402.16641
Towards Open-ended Visual Quality Comparison
['Haoning Wu', 'Hanwei Zhu', 'Zicheng Zhang', 'Erli Zhang', 'Chaofeng Chen', 'Liang Liao', 'Chunyi Li', 'Annan Wang', 'Wenxiu Sun', 'Qiong Yan', 'Xiaohong Liu', 'Guangtao Zhai', 'Shiqi Wang', 'Weisi Lin']
['cs.CV']
Comparative settings (e.g. pairwise choice, listwise ranking) have been adopted by a wide range of subjective studies for image quality assessment (IQA), as it inherently standardizes the evaluation criteria across different observers and offer more clear-cut responses. In this work, we extend the edge of emerging larg...
2024-02-26T15:10:56Z
Fix typos
null
null
null
null
null
null
null
null
null
2402.16671
StructLM: Towards Building Generalist Models for Structured Knowledge Grounding
['Alex Zhuang', 'Ge Zhang', 'Tianyu Zheng', 'Xinrun Du', 'Junjie Wang', 'Weiming Ren', 'Stephen W. Huang', 'Jie Fu', 'Xiang Yue', 'Wenhu Chen']
['cs.CL']
Structured data sources, such as tables, graphs, and databases, are ubiquitous knowledge sources. Despite the demonstrated capabilities of large language models (LLMs) on plain text, their proficiency in interpreting and utilizing structured data remains limited. Our investigation reveals a notable deficiency in LLMs' ...
2024-02-26T15:47:01Z
Technical Report
null
null
null
null
null
null
null
null
null
2402.16689
Adaptation of Biomedical and Clinical Pretrained Models to French Long Documents: A Comparative Study
['Adrien Bazoge', 'Emmanuel Morin', 'Beatrice Daille', 'Pierre-Antoine Gourraud']
['cs.CL', 'cs.AI']
Recently, pretrained language models based on BERT have been introduced for the French biomedical domain. Although these models have achieved state-of-the-art results on biomedical and clinical NLP tasks, they are constrained by a limited input sequence length of 512 tokens, which poses challenges when applied to clini...
2024-02-26T16:05:33Z
null
null
null
null
null
null
null
null
null
null
2402.16775
A Comprehensive Evaluation of Quantization Strategies for Large Language Models
['Renren Jin', 'Jiangcun Du', 'Wuwei Huang', 'Wei Liu', 'Jian Luan', 'Bin Wang', 'Deyi Xiong']
['cs.CL', 'cs.AI']
Increasing the number of parameters in large language models (LLMs) usually improves performance in downstream tasks but raises compute and memory costs, making deployment difficult in resource-limited settings. Quantization techniques, which reduce the bits needed for model weights or activations with minimal performa...
2024-02-26T17:45:36Z
ACL 2024 Findings
null
null
null
null
null
null
null
null
null
2402.16819
Nemotron-4 15B Technical Report
['Jupinder Parmar', 'Shrimai Prabhumoye', 'Joseph Jennings', 'Mostofa Patwary', 'Sandeep Subramanian', 'Dan Su', 'Chen Zhu', 'Deepak Narayanan', 'Aastha Jhunjhunwala', 'Ayush Dattagupta', 'Vibhu Jawa', 'Jiwei Liu', 'Ameya Mahabaleshwarkar', 'Osvald Nitski', 'Annika Brundyn', 'James Maki', 'Miguel Martinez', 'Jiaxuan Yo...
['cs.CL', 'cs.AI', 'cs.LG']
We introduce Nemotron-4 15B, a 15-billion-parameter large multilingual language model trained on 8 trillion text tokens. Nemotron-4 15B demonstrates strong performance when assessed on English, multilingual, and coding tasks: it outperforms all existing similarly-sized open models on 4 out of 7 downstream evaluation ar...
2024-02-26T18:43:45Z
null
null
null
null
null
null
null
null
null
null
2402.16829
GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning
['Aivin V. Solatorio']
['cs.LG', 'cs.CL']
Embedding models are integral to AI applications like semantic search, personalized recommendations, and retrieval augmented generation for LLMs, necessitating high-quality training data. However, the limited scalability of manual data curation prompts the need for automated methods to ensure data integrity. Traditiona...
2024-02-26T18:55:15Z
GISTEmbed GitHub repository at https://github.com/avsolatorio/GISTEmbed
null
null
GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning
['Aivin V. Solatorio']
2024
arXiv.org
24
31
['Computer Science']
2402.16840
MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT
['Omkar Thawakar', 'Ashmal Vayani', 'Salman Khan', 'Hisham Cholakal', 'Rao M. Anwer', 'Michael Felsberg', 'Tim Baldwin', 'Eric P. Xing', 'Fahad Shahbaz Khan']
['cs.CL']
"Bigger the better" has been the predominant trend in recent Large Language Models (LLMs) development. However, LLMs do not suit well for scenarios that require on-device processing, energy efficiency, low memory footprint, and response efficiency. These requisites are crucial for privacy, security, and sustainable dep...
2024-02-26T18:59:03Z
Code available at : https://github.com/mbzuai-oryx/MobiLlama
null
null
null
null
null
null
null
null
null
2402.16918
m2mKD: Module-to-Module Knowledge Distillation for Modular Transformers
['Ka Man Lo', 'Yiming Liang', 'Wenyu Du', 'Yuantao Fan', 'Zili Wang', 'Wenhao Huang', 'Lei Ma', 'Jie Fu']
['cs.LG', 'cs.CV']
Modular neural architectures are gaining attention for their powerful generalization and efficient adaptation to new domains. However, training these models poses challenges due to optimization difficulties arising from intrinsic sparse connectivity. Leveraging knowledge from monolithic models through techniques like k...
2024-02-26T04:47:32Z
null
null
null
m2mKD: Module-to-Module Knowledge Distillation for Modular Transformers
['Ka Man Lo', 'Yiming Liang', 'Wenyu Du', 'Yuantao Fan', 'Zili Wang', 'Wenhao Huang', 'Lei Ma', 'Jie Fu']
2024
arXiv.org
2
45
['Computer Science']
2402.16928
CLAP: Learning Transferable Binary Code Representations with Natural Language Supervision
['Hao Wang', 'Zeyu Gao', 'Chao Zhang', 'Zihan Sha', 'Mingyang Sun', 'Yuchen Zhou', 'Wenyu Zhu', 'Wenju Sun', 'Han Qiu', 'Xi Xiao']
['cs.SE', 'cs.AI']
Binary code representation learning has shown significant performance in binary analysis tasks. But existing solutions often have poor transferability, particularly in few-shot and zero-shot scenarios where few or no training samples are available for the tasks. To address this problem, we present CLAP (Contrastive Lan...
2024-02-26T13:49:52Z
null
null
null
null
null
null
null
null
null
null
2402.17016
Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings
['Isabelle Mohr', 'Markus Krimmel', 'Saba Sturua', 'Mohammad Kalim Akram', 'Andreas Koukounas', 'Michael Günther', 'Georgios Mastrapas', 'Vinit Ravishankar', 'Joan Fontanals Martínez', 'Feng Wang', 'Qi Liu', 'Ziniu Yu', 'Jie Fu', 'Saahil Ognawala', 'Susana Guzman', 'Bo Wang', 'Maximilian Werk', 'Nan Wang', 'Han Xiao']
['cs.CL', 'cs.AI', 'cs.IR', '68T50', 'I.2.7']
We introduce a novel suite of state-of-the-art bilingual text embedding models that are designed to support English and another target language. These models are capable of processing lengthy text inputs with up to 8192 tokens, making them highly versatile for a range of natural language processing tasks such as text r...
2024-02-26T20:53:12Z
null
null
null
null
null
null
null
null
null
null
2402.17113
Transparent Image Layer Diffusion using Latent Transparency
['Lvmin Zhang', 'Maneesh Agrawala']
['cs.CV', 'cs.GR']
We present LayerDiffuse, an approach enabling large-scale pretrained latent diffusion models to generate transparent images. The method allows generation of single transparent images or of multiple transparent layers. The method learns a "latent transparency" that encodes alpha channel transparency into the latent mani...
2024-02-27T01:19:53Z
44 pages, 37 figures, github.com/layerdiffusion/LayerDiffuse
null
null
null
null
null
null
null
null
null
2402.17245
Playground v2.5: Three Insights towards Enhancing Aesthetic Quality in Text-to-Image Generation
['Daiqing Li', 'Aleks Kamko', 'Ehsan Akhgari', 'Ali Sabet', 'Linmiao Xu', 'Suhail Doshi']
['cs.CV', 'cs.AI']
In this work, we share three insights for achieving state-of-the-art aesthetic quality in text-to-image generative models. We focus on three critical aspects for model improvement: enhancing color and contrast, improving generation across multiple aspect ratios, and improving human-centric fine details. First, we delve...
2024-02-27T06:31:52Z
Model weights: https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic
null
null
null
null
null
null
null
null
null
2402.17300
VoCo: A Simple-yet-Effective Volume Contrastive Learning Framework for 3D Medical Image Analysis
['Linshan Wu', 'Jiaxin Zhuang', 'Hao Chen']
['eess.IV']
Self-Supervised Learning (SSL) has demonstrated promising results in 3D medical image analysis. However, the lack of high-level semantics in pre-training still heavily hinders the performance of downstream tasks. We observe that 3D medical images contain relatively consistent contextual position information, i.e., cons...
2024-02-27T08:22:55Z
Accepted by CVPR 2024. The camera-ready version will soon be available
null
null
null
null
null
null
null
null
null
2402.17497
REAR: A Relevance-Aware Retrieval-Augmented Framework for Open-Domain Question Answering
['Yuhao Wang', 'Ruiyang Ren', 'Junyi Li', 'Wayne Xin Zhao', 'Jing Liu', 'Ji-Rong Wen']
['cs.CL', 'cs.IR']
Considering the limited internal parametric knowledge, retrieval-augmented generation (RAG) has been widely used to extend the knowledge scope of large language models (LLMs). Despite the extensive efforts on RAG research, in existing methods, LLMs cannot precisely assess the relevance of retrieved documents, thus like...
2024-02-27T13:22:51Z
Accepted to EMNLP 2024 Main Conference. Published on ACL Anthology: https://aclanthology.org/2024.emnlp-main.321.pdf
null
null
null
null
null
null
null
null
null
2402.17645
SongComposer: A Large Language Model for Lyric and Melody Generation in Song Composition
['Shuangrui Ding', 'Zihan Liu', 'Xiaoyi Dong', 'Pan Zhang', 'Rui Qian', 'Junhao Huang', 'Conghui He', 'Dahua Lin', 'Jiaqi Wang']
['cs.SD', 'cs.AI', 'cs.CL', 'eess.AS']
Creating lyrics and melodies for the vocal track in a symbolic format, known as song composition, demands expert musical knowledge of melody, an advanced understanding of lyrics, and precise alignment between them. Despite achievements in sub-tasks such as lyric generation, lyric-to-melody, and melody-to-lyric, etc, a ...
2024-02-27T16:15:28Z
ACL 2025 main. project page: https://pjlab-songcomposer.github.io/ code: https://github.com/pjlab-songcomposer/songcomposer
null
null
SongComposer: A Large Language Model for Lyric and Melody Generation in Song Composition
['Shuangrui Ding', 'Zihan Liu', 'Xiao-wen Dong', 'Pan Zhang', 'Rui Qian', 'Junhao Huang', 'Conghui He', 'Dahua Lin', 'Jiaqi Wang']
2024
null
1
50
['Computer Science', 'Engineering']
2402.17660
TorchMD-Net 2.0: Fast Neural Network Potentials for Molecular Simulations
['Raul P. Pelaez', 'Guillem Simeon', 'Raimondas Galvelis', 'Antonio Mirarchi', 'Peter Eastman', 'Stefan Doerr', 'Philipp Thölke', 'Thomas E. Markland', 'Gianni De Fabritiis']
['cs.LG', 'physics.bio-ph', 'physics.chem-ph', 'physics.comp-ph']
Achieving a balance between computational speed, prediction accuracy, and universal applicability in molecular simulations has been a persistent challenge. This paper presents substantial advancements in the TorchMD-Net software, a pivotal step forward in the shift from conventional force fields to neural network-based...
2024-02-27T16:27:06Z
Version accepted in Journal of Chemical Theory and Computation
null
10.1021/acs.jctc.4c00253
null
null
null
null
null
null
null
2402.17701
Real-time Low-latency Music Source Separation using Hybrid Spectrogram-TasNet
['Satvik Venkatesh', 'Arthur Benilov', 'Philip Coleman', 'Frederic Roskam']
['eess.AS', 'cs.LG', 'cs.SD', 'I.5.1; I.5.4']
There have been significant advances in deep learning for music demixing in recent years. However, there has been little attention given to how these neural networks can be adapted for real-time low-latency applications, which could be helpful for hearing aids, remixing audio streams and live shows. In this paper, we i...
2024-02-27T17:26:33Z
Accepted to ICASSP 2024
null
null
Real-Time Low-Latency Music Source Separation Using Hybrid Spectrogram-Tasnet
['Satvik Venkatesh', 'Arthur Benilov', 'Philip Coleman', 'Frederic Roskam']
2024
IEEE International Conference on Acoustics, Speech, and Signal Processing
6
37
['Engineering', 'Computer Science']
2402.17733
Tower: An Open Multilingual Large Language Model for Translation-Related Tasks
['Duarte M. Alves', 'José Pombal', 'Nuno M. Guerreiro', 'Pedro H. Martins', 'João Alves', 'Amin Farajian', 'Ben Peters', 'Ricardo Rei', 'Patrick Fernandes', 'Sweta Agrawal', 'Pierre Colombo', 'José G. C. de Souza', 'André F. T. Martins']
['cs.CL']
While general-purpose large language models (LLMs) demonstrate proficiency on multiple tasks within the domain of translation, approaches based on open LLMs are competitive only when specializing on a single task. In this paper, we propose a recipe for tailoring LLMs to multiple tasks present in translation workflows. ...
2024-02-27T18:09:36Z
null
null
null
Tower: An Open Multilingual Large Language Model for Translation-Related Tasks
['Duarte M. Alves', 'José P. Pombal', 'Nuno M. Guerreiro', 'P. Martins', 'João Alves', 'Amin Farajian', 'Ben Peters', 'Ricardo Rei', 'Patrick Fernandes', 'Sweta Agrawal', 'Pierre Colombo', 'José G. C. de Souza', 'André Martins']
2024
arXiv.org
157
96
['Computer Science']
2402.17764
The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
['Shuming Ma', 'Hongyu Wang', 'Lingxiao Ma', 'Lei Wang', 'Wenhui Wang', 'Shaohan Huang', 'Li Dong', 'Ruiping Wang', 'Jilong Xue', 'Furu Wei']
['cs.CL', 'cs.LG']
Recent research, such as BitNet, is paving the way for a new era of 1-bit Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant, namely BitNet b1.58, in which every single parameter (or weight) of the LLM is ternary {-1, 0, 1}. It matches the full-precision (i.e., FP16 or BF16) Transformer LLM wi...
2024-02-27T18:56:19Z
Work in progress
null
null
null
null
null
null
null
null
null
2,402.17766
ShapeLLM: Universal 3D Object Understanding for Embodied Interaction
['Zekun Qi', 'Runpei Dong', 'Shaochen Zhang', 'Haoran Geng', 'Chunrui Han', 'Zheng Ge', 'Li Yi', 'Kaisheng Ma']
['cs.CV']
This paper presents ShapeLLM, the first 3D Multimodal Large Language Model (LLM) designed for embodied interaction, exploring a universal 3D object understanding with 3D point clouds and languages. ShapeLLM is built upon an improved 3D encoder by extending ReCon to ReCon++ that benefits from multi-view image distillati...
2024-02-27T18:57:12Z
Accepted at ECCV 2024
null
null
null
null
null
null
null
null
null
2,402.1781
BioT5+: Towards Generalized Biological Understanding with IUPAC Integration and Multi-task Tuning
['Qizhi Pei', 'Lijun Wu', 'Kaiyuan Gao', 'Xiaozhuan Liang', 'Yin Fang', 'Jinhua Zhu', 'Shufang Xie', 'Tao Qin', 'Rui Yan']
['q-bio.QM', 'cs.AI', 'cs.CE', 'cs.LG', 'q-bio.BM']
Recent research trends in computational biology have increasingly focused on integrating text and bio-entity modeling, especially in the context of molecules and proteins. However, previous efforts like BioT5 faced challenges in generalizing across diverse tasks and lacked a nuanced understanding of molecular structure...
2024-02-27T12:43:09Z
Accepted by ACL 2024 (Findings)
null
null
null
null
null
null
null
null
null
2,402.17811
TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space
['Shaolei Zhang', 'Tian Yu', 'Yang Feng']
['cs.CL', 'cs.AI', 'cs.LG']
Large Language Models (LLMs) sometimes produce hallucinations; in particular, they may generate untruthful responses despite knowing the correct knowledge. Activating the truthfulness within an LLM is the key to fully unlocking its knowledge potential. In this paper, we propose TruthX, an inference-time interv...
2024-02-27T14:45:04Z
Accepted to ACL 2024 main conference, Project Page: https://ictnlp.github.io/TruthX-site/
null
null
null
null
null
null
null
null
null
2,402.17834
Stable LM 2 1.6B Technical Report
['Marco Bellagente', 'Jonathan Tow', 'Dakota Mahan', 'Duy Phung', 'Maksym Zhuravinskyi', 'Reshinth Adithyan', 'James Baicoianu', 'Ben Brooks', 'Nathan Cooper', 'Ashish Datta', 'Meng Lee', 'Emad Mostaque', 'Michael Pieler', 'Nikhil Pinnaparju', 'Paulo Rocha', 'Harry Saini', 'Hannah Teufel', 'Niccolo Zanichelli', 'Carlos...
['cs.CL', 'stat.ML']
We introduce StableLM 2 1.6B, the first in a new generation of our language model series. In this technical report, we present in detail the data and training procedure leading to the base and instruction-tuned versions of StableLM 2 1.6B. The weights for both models are available via Hugging Face for anyone to downloa...
2024-02-27T19:00:07Z
23 pages, 6 figures
null
null
null
null
null
null
null
null
null
2,402.17916
Adversarial Math Word Problem Generation
['Roy Xie', 'Chengxuan Huang', 'Junlin Wang', 'Bhuwan Dhingra']
['cs.CL', 'cs.AI']
Large language models (LLMs) have significantly transformed the educational landscape. As current plagiarism detection tools struggle to keep pace with LLMs' rapid advancements, the educational community faces the challenge of assessing students' true problem-solving abilities in the presence of LLMs. In this work, we ...
2024-02-27T22:07:52Z
Code/data: https://github.com/ruoyuxie/adversarial_mwps_generation
null
null
null
null
null
null
null
null
null
2,402.17946
SparseLLM: Towards Global Pruning for Pre-trained Language Models
['Guangji Bai', 'Yijiang Li', 'Chen Ling', 'Kibaek Kim', 'Liang Zhao']
['cs.CL']
The transformative impact of large language models (LLMs) like LLaMA and GPT on natural language processing is countered by their prohibitive computational demands. Pruning has emerged as a pivotal compression strategy, introducing sparsity to enhance both memory and computational efficiency. Yet, traditional global pr...
2024-02-28T00:09:07Z
NeurIPS 2024
null
null
SparseLLM: Towards Global Pruning of Pre-trained Language Models
['Guangji Bai', 'Yijiang Li', 'Chen Ling', 'Kibaek Kim', 'Liang Zhao']
2,024
Neural Information Processing Systems
11
40
['Computer Science']
2,402.1806
Benchmarking Large Language Models on Answering and Explaining Challenging Medical Questions
['Hanjie Chen', 'Zhouxiang Fang', 'Yash Singla', 'Mark Dredze']
['cs.CL']
LLMs have demonstrated impressive performance in answering medical questions, such as achieving passing scores on medical licensing examinations. However, medical board exams or general clinical questions do not capture the complexity of realistic clinical cases. Moreover, the lack of reference explanations means we ca...
2024-02-28T05:44:41Z
NAACL 2025
null
null
null
null
null
null
null
null
null
2,402.18153
Diffusion-Based Neural Network Weights Generation
['Bedionita Soro', 'Bruno Andreis', 'Hayeon Lee', 'Wonyong Jeong', 'Song Chong', 'Frank Hutter', 'Sung Ju Hwang']
['cs.LG', 'cs.AI']
Transfer learning has gained significant attention in recent deep learning research due to its ability to accelerate convergence and enhance performance on new tasks. However, its success is often contingent on the similarity between source and target data, and training on numerous datasets can be costly, leading to bl...
2024-02-28T08:34:23Z
32 pages
null
null
null
null
null
null
null
null
null
2,402.18191
Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation
['Yuan Ge', 'Yilun Liu', 'Chi Hu', 'Weibin Meng', 'Shimin Tao', 'Xiaofeng Zhao', 'Hongxia Ma', 'Li Zhang', 'Boxing Chen', 'Hao Yang', 'Bei Li', 'Tong Xiao', 'Jingbo Zhu']
['cs.CL']
With contributions from the open-source community, a vast amount of instruction tuning (IT) data has emerged. Given the significant resource allocation required for training and evaluating models, it is advantageous to have an efficient method for selecting high-quality IT data. However, existing methods for instructio...
2024-02-28T09:27:29Z
Accepted by EMNLP2024
https://aclanthology.org/2024.emnlp-main.28/
null
null
null
null
null
null
null
null
2,402.18329
Robust Synthetic Data-Driven Detection of Living-Off-the-Land Reverse Shells
['Dmitrijs Trizna', 'Luca Demetrio', 'Battista Biggio', 'Fabio Roli']
['cs.CR', 'cs.LG']
Living-off-the-land (LOTL) techniques pose a significant challenge to security operations, exploiting legitimate tools to execute malicious commands that evade traditional detection methods. To address this, we present a robust augmentation framework for cyber defense systems as Security Information and Event Managemen...
2024-02-28T13:49:23Z
null
null
null
null
null
null
null
null
null
null
2,402.18334
Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation
['Nihal V. Nayak', 'Yiyang Nan', 'Avi Trost', 'Stephen H. Bach']
['cs.CL', 'cs.LG']
We introduce Bonito, an open-source model for conditional task generation that converts unannotated text into task-specific training datasets for instruction tuning. We aim to enable zero-shot task adaptation of large language models on users' specialized, private data. We train Bonito by fine-tuning a pretrained large...
2024-02-28T13:54:57Z
ACL Findings 2024
null
null
null
null
null
null
null
null
null
2,402.18381
Large Language Models As Evolution Strategies
['Robert Tjarko Lange', 'Yingtao Tian', 'Yujin Tang']
['cs.AI', 'cs.LG', 'cs.NE']
Large Transformer models are capable of implementing a plethora of so-called in-context learning algorithms. These include gradient descent, classification, sequence completion, transformation, and improvement. In this work, we investigate whether large language models (LLMs), which never explicitly encountered the tas...
2024-02-28T15:02:17Z
11 pages, 14 figures
null
null
null
null
null
null
null
null
null
2,402.18567
Diffusion Language Models Are Versatile Protein Learners
['Xinyou Wang', 'Zaixiang Zheng', 'Fei Ye', 'Dongyu Xue', 'Shujian Huang', 'Quanquan Gu']
['cs.LG', 'q-bio.BM']
This paper introduces diffusion protein language model (DPLM), a versatile protein language model that demonstrates strong generative and predictive capabilities for protein sequences. We first pre-train scalable DPLMs from evolutionary-scale protein sequences within a generative self-supervised discrete diffusion prob...
2024-02-28T18:57:56Z
ICML 2024 camera-ready version
null
null
Diffusion Language Models Are Versatile Protein Learners
['Xinyou Wang', 'Zaixiang Zheng', 'Fei Ye', 'Dongyu Xue', 'Shujian Huang', 'Quanquan Gu']
2,024
International Conference on Machine Learning
50
0
['Computer Science', 'Biology']
2,402.18571
Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards
['Haoxiang Wang', 'Yong Lin', 'Wei Xiong', 'Rui Yang', 'Shizhe Diao', 'Shuang Qiu', 'Han Zhao', 'Tong Zhang']
['cs.LG', 'cs.AI', 'cs.CL', 'stat.ML']
Fine-grained control over large language models (LLMs) remains a significant challenge, hindering their adaptability to diverse user needs. While Reinforcement Learning from Human Feedback (RLHF) shows promise in aligning LLMs, its reliance on scalar rewards often limits its ability to capture diverse user preferences ...
2024-02-28T18:58:25Z
The code and model are released at https://github.com/Haoxiang-Wang/directional-preference-alignment
null
null
null
null
null
null
null
null
null
2,402.18589
Verif.ai: Towards an Open-Source Scientific Generative Question-Answering System with Referenced and Verifiable Answers
['Miloš Košprdić', 'Adela Ljajić', 'Bojana Bašaragin', 'Darija Medvecki', 'Nikola Milošević']
['cs.IR', 'cs.AI', 'cs.CL', 'cs.LG']
In this paper, we present the current progress of the project Verif.ai, an open-source scientific generative question-answering system with referenced and verified answers. The components of the system are (1) an information retrieval system combining semantic and lexical search techniques over scientific papers (PubMe...
2024-02-09T10:25:01Z
Accepted as a short paper at The Sixteenth International Conference on Evolving Internet (INTERNET 2024)
The Sixteenth International Conference on Evolving Internet (INTERNET 2024)
null
Verif.ai: Towards an Open-Source Scientific Generative Question-Answering System with Referenced and Verifiable Answers
['Milos Kosprdic', 'Adela Ljajić', 'Bojana Bašaragin', 'Darija Medvecki', 'Nikola Milosevic']
2,024
arXiv.org
3
15
['Computer Science']
2,402.18668
Simple linear attention language models balance the recall-throughput tradeoff
['Simran Arora', 'Sabri Eyuboglu', 'Michael Zhang', 'Aman Timalsina', 'Silas Alberti', 'Dylan Zinsley', 'James Zou', 'Atri Rudra', 'Christopher Ré']
['cs.CL', 'cs.LG']
Recent work has shown that attention-based language models excel at recall, the ability to ground generations in tokens previously seen in context. However, the efficiency of attention-based models is bottlenecked during inference by the KV-cache's aggressive memory consumption. In this work, we explore whether we can...
2024-02-28T19:28:27Z
null
null
null
null
null
null
null
null
null
null
2,402.18766
Advancing Generative AI for Portuguese with Open Decoder Gervásio PT*
['Rodrigo Santos', 'João Silva', 'Luís Gomes', 'João Rodrigues', 'António Branco']
['cs.CL']
To advance the neural decoding of Portuguese, in this paper we present a fully open Transformer-based, instruction-tuned decoder model that sets a new state of the art in this respect. To develop this decoder, which we named Gervásio PT*, a strong LLaMA 2 7B model was used as a starting point, and its further improve...
2024-02-29T00:19:13Z
null
null
null
null
null
null
null
null
null
null
2,402.18848
SwitchLight: Co-design of Physics-driven Architecture and Pre-training Framework for Human Portrait Relighting
['Hoon Kim', 'Minje Jang', 'Wonjun Yoon', 'Jisoo Lee', 'Donghyun Na', 'Sanghyun Woo']
['cs.CV']
We introduce a co-designed approach for human portrait relighting that combines a physics-guided architecture with a pre-training framework. Drawing on the Cook-Torrance reflectance model, we have meticulously configured the architecture design to precisely simulate light-surface interactions. Furthermore, to overcome ...
2024-02-29T04:52:04Z
CVPR2024. Live demos available at https://www.beeble.ai/
null
null
null
null
null
null
null
null
null
2,402.19043
WDM: 3D Wavelet Diffusion Models for High-Resolution Medical Image Synthesis
['Paul Friedrich', 'Julia Wolleb', 'Florentin Bieder', 'Alicia Durrer', 'Philippe C. Cattin']
['eess.IV', 'cs.CV']
Due to the three-dimensional nature of CT- or MR-scans, generative modeling of medical images is a particularly challenging task. Existing approaches mostly apply patch-wise, slice-wise, or cascaded generation techniques to fit the high-dimensional data into the limited GPU memory. However, these approaches may introdu...
2024-02-29T11:11:05Z
Accepted at DGM4MICCAI 2024. Project page: https://pfriedri.github.io/wdm-3d-io Code: https://github.com/pfriedri/wdm-3d
null
10.1007/978-3-031-72744-3_2
WDM: 3D Wavelet Diffusion Models for High-Resolution Medical Image Synthesis
['Paul Friedrich', 'Julia Wolleb', 'Florentin Bieder', 'Alicia Durrer', 'Philippe C. Cattin']
2,024
DGM4MICCAI@MICCAI
20
37
['Engineering', 'Computer Science']
2,402.19155
Beyond Language Models: Byte Models are Digital World Simulators
['Shangda Wu', 'Xu Tan', 'Zili Wang', 'Rui Wang', 'Xiaobing Li', 'Maosong Sun']
['cs.LG']
Traditional deep learning often overlooks bytes, the basic units of the digital world, where all forms of information and operations are encoded and manipulated in binary format. Inspired by the success of next token prediction in natural language processing, we introduce bGPT, a model with next byte prediction to simu...
2024-02-29T13:38:07Z
19 pages, 5 figures, 5 tables
null
null
null
null
null
null
null
null
null
2,402.19159
Trajectory Consistency Distillation: Improved Latent Consistency Distillation by Semi-Linear Consistency Function with Trajectory Mapping
['Jianbin Zheng', 'Minghui Hu', 'Zhongyi Fan', 'Chaoyue Wang', 'Changxing Ding', 'Dacheng Tao', 'Tat-Jen Cham']
['cs.CV']
Latent Consistency Model (LCM) extends the Consistency Model to the latent space and leverages the guided consistency distillation technique to achieve impressive performance in accelerating text-to-image synthesis. However, we observed that LCM struggles to generate images with both clarity and detailed intricacy. Con...
2024-02-29T13:44:14Z
Project Page: https://mhh0318.github.io/tcd
null
null
Trajectory Consistency Distillation
['Jianbin Zheng', 'Minghui Hu', 'Zhongyi Fan', 'Chaoyue Wang', 'Changxing Ding', 'Dacheng Tao', 'Tat-Jen Cham']
2,024
arXiv.org
30
56
['Computer Science']
2,402.19173
StarCoder 2 and The Stack v2: The Next Generation
['Anton Lozhkov', 'Raymond Li', 'Loubna Ben Allal', 'Federico Cassano', 'Joel Lamy-Poirier', 'Nouamane Tazi', 'Ao Tang', 'Dmytro Pykhtar', 'Jiawei Liu', 'Yuxiang Wei', 'Tianyang Liu', 'Max Tian', 'Denis Kocetkov', 'Arthur Zucker', 'Younes Belkada', 'Zijian Wang', 'Qian Liu', 'Dmitry Abulkhanov', 'Indraneil Paul', 'Zhua...
['cs.SE', 'cs.AI']
The BigCode project, an open-scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder2. In partnership with Software Heritage (SWH), we build The Stack v2 on top of the digital commons of their source code archive. Alongside the SWH repositories...
2024-02-29T13:53:35Z
null
null
null
null
null
null
null
null
null
null