Dataset columns (16 fields; statistics as reported by the dataset viewer). Each record below lists its fields in this order, with `null` marking missing values; the `ss_*` fields come from Semantic Scholar and are null for papers without a matched S2 entry.

| Column | Type | Reported statistics |
| --- | --- | --- |
| arxiv_id | float64 | min 1.5k, max 2.51k |
| title | string | lengths 9–178 |
| authors | string | lengths 2–22.8k |
| categories | string | lengths 4–146 |
| summary | string | lengths 103–1.92k |
| published | string (datetime) | 2015-02-06 10:44:00 to 2025-07-10 17:59:58 |
| comments | string | lengths 2–417 |
| journal_ref | string | 321 classes |
| doi | string | 398 classes |
| ss_title | string | lengths 8–159 |
| ss_authors | string | lengths 11–8.38k |
| ss_year | float64 | min 2.02k, max 2.03k |
| ss_venue | string | 281 classes |
| ss_citationCount | float64 | min 0, max 134k |
| ss_referenceCount | float64 | min 0, max 429 |
| ss_fieldsOfStudy | string | 47 classes |
2401.14280
RomanSetu: Efficiently unlocking multilingual capabilities of Large Language Models via Romanization
['Jaavid Aktar Husain', 'Raj Dabre', 'Aswanth Kumar', 'Jay Gala', 'Thanmay Jayakumar', 'Ratish Puduppully', 'Anoop Kunchukuttan']
['cs.CL', 'cs.AI']
This study addresses the challenge of extending Large Language Models (LLMs) to non-English languages that use non-Roman scripts. We propose an approach that utilizes the romanized form of text as an interface for LLMs, hypothesizing that its frequent informal use and shared tokens with English enhance cross-lingual al...
2024-01-25T16:11:41Z
Accepted to ACL 2024
null
null
null
null
null
null
null
null
null
2401.14373
TURNA: A Turkish Encoder-Decoder Language Model for Enhanced Understanding and Generation
['Gökçe Uludoğan', 'Zeynep Yirmibeşoğlu Balal', 'Furkan Akkurt', 'Melikşah Türker', 'Onur Güngör', 'Susan Üsküdarlı']
['cs.CL', 'cs.AI', 'cs.LG']
The recent advances in natural language processing have predominantly favored well-resourced English-centric models, resulting in a significant gap with low-resource languages. In this work, we introduce the language model TURNA, which is developed for the low-resource language Turkish and is capable of both natural la...
2024-01-25T18:24:13Z
null
null
null
TURNA: A Turkish Encoder-Decoder Language Model for Enhanced Understanding and Generation
['Gokcce Uludougan', 'Zeynep Yirmibecsouglu Balal', 'Furkan Akkurt', 'Melikcsah Turker', 'Onur Gungor', 'S. Uskudarli']
2024
Annual Meeting of the Association for Computational Linguistics
12
66
['Computer Science']
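List-valued columns (`authors`, `categories`, `ss_authors`, `ss_fieldsOfStudy`) are stored as Python-style list literals, as in the TURNA record above. A minimal decoding sketch follows; the helper name is mine, not part of the dataset.

```python
import ast

def parse_list_column(value):
    # Columns such as `categories` hold Python-style list literals,
    # e.g. "['cs.CL', 'cs.AI', 'cs.LG']". Malformed or truncated
    # literals fail to parse and are returned as None here.
    if value is None:
        return None
    try:
        return ast.literal_eval(value)
    except (ValueError, SyntaxError):
        return None

categories = parse_list_column("['cs.CL', 'cs.AI', 'cs.LG']")
assert categories == ["cs.CL", "cs.AI", "cs.LG"]
```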
2401.14391
Rethinking Patch Dependence for Masked Autoencoders
['Letian Fu', 'Long Lian', 'Renhao Wang', 'Baifeng Shi', 'Xudong Wang', 'Adam Yala', 'Trevor Darrell', 'Alexei A. Efros', 'Ken Goldberg']
['cs.CV']
In this work, we examine the impact of inter-patch dependencies in the decoder of masked autoencoders (MAE) on representation learning. We decompose the decoding mechanism for masked reconstruction into self-attention between mask tokens and cross-attention between masked and visible tokens. Our findings reveal that MA...
2024-01-25T18:49:57Z
Transactions on Machine Learning Research (TMLR) 2025
null
null
null
null
null
null
null
null
null
2401.14398
pix2gestalt: Amodal Segmentation by Synthesizing Wholes
['Ege Ozguroglu', 'Ruoshi Liu', 'Dídac Surís', 'Dian Chen', 'Achal Dave', 'Pavel Tokmakov', 'Carl Vondrick']
['cs.CV', 'cs.LG']
We introduce pix2gestalt, a framework for zero-shot amodal segmentation, which learns to estimate the shape and appearance of whole objects that are only partially visible behind occlusions. By capitalizing on large-scale diffusion models and transferring their representations to this task, we learn a conditional diffu...
2024-01-25T18:57:36Z
Website: https://gestalt.cs.columbia.edu/
null
null
null
null
null
null
null
null
null
2401.14400
Modular Adaptation of Multilingual Encoders to Written Swiss German Dialect
['Jannis Vamvas', 'Noëmi Aepli', 'Rico Sennrich']
['cs.CL']
Creating neural text encoders for written Swiss German is challenging due to a dearth of training data combined with dialectal variation. In this paper, we build on several existing multilingual encoders and adapt them to Swiss German using continued pre-training. Evaluation on three diverse downstream tasks shows that...
2024-01-25T18:59:32Z
First Workshop on Modular and Open Multilingual NLP (MOOMIN 2024)
null
null
Modular Adaptation of Multilingual Encoders to Written Swiss German Dialect
['Jannis Vamvas', 'Noëmi Aepli', 'Rico Sennrich']
2024
MOOMIN
0
23
['Computer Science']
2401.14489
The Case for Co-Designing Model Architectures with Hardware
['Quentin Anthony', 'Jacob Hatef', 'Deepak Narayanan', 'Stella Biderman', 'Stas Bekman', 'Junqi Yin', 'Aamir Shafi', 'Hari Subramoni', 'Dhabaleswar Panda']
['cs.DC', 'cs.AI']
While GPUs are responsible for training the vast majority of state-of-the-art deep learning models, the implications of their architecture are often overlooked when designing new deep learning (DL) models. As a consequence, modifying a DL model to be more amenable to the target hardware can significantly improve the ru...
2024-01-25T19:50:31Z
null
null
null
null
null
null
null
null
null
null
2401.14688
Taiyi-Diffusion-XL: Advancing Bilingual Text-to-Image Generation with Large Vision-Language Model Support
['Xiaojun Wu', 'Dixiang Zhang', 'Ruyi Gan', 'Junyu Lu', 'Ziwei Wu', 'Renliang Sun', 'Jiaxing Zhang', 'Pingjian Zhang', 'Yan Song']
['cs.CL']
Recent advancements in text-to-image models have significantly enhanced image generation capabilities, yet a notable gap of open-source models persists in bilingual or Chinese language support. To address this need, we present Taiyi-Diffusion-XL, a new Chinese and English bilingual text-to-image model which is develope...
2024-01-26T07:17:50Z
Taiyi-Diffusion-XL Tech Report
null
null
null
null
null
null
null
null
null
2401.14818
Developing ChemDFM as a large language foundation model for chemistry
['Zihan Zhao', 'Da Ma', 'Lu Chen', 'Liangtai Sun', 'Zihao Li', 'Yi Xia', 'Bo Chen', 'Hongshen Xu', 'Zichen Zhu', 'Su Zhu', 'Shuai Fan', 'Guodong Shen', 'Kai Yu', 'Xin Chen']
['cs.CL', 'cs.DL']
Artificial intelligence (AI) has played an increasingly important role in chemical research. However, most models currently used in chemistry are specialist models that require training and tuning for specific tasks. A more generic and efficient solution would be an AI model that could address many tasks and support fr...
2024-01-26T12:45:55Z
10 pages, 12 figures, 12 tables. Published on Cell Report Physical Science, DOI: https://doi.org/10.1016/j.xcrp.2025.102523
Cell Rep. Phys. Sci. 6 (2025) 102523
10.1016/j.xcrp.2025.102523
null
null
null
null
null
null
null
2401.15006
Airavata: Introducing Hindi Instruction-tuned LLM
['Jay Gala', 'Thanmay Jayakumar', 'Jaavid Aktar Husain', 'Aswanth Kumar M', 'Mohammed Safi Ur Rahman Khan', 'Diptesh Kanojia', 'Ratish Puduppully', 'Mitesh M. Khapra', 'Raj Dabre', 'Rudra Murthy', 'Anoop Kunchukuttan']
['cs.CL', 'cs.AI']
We announce the initial release of "Airavata," an instruction-tuned LLM for Hindi. Airavata was created by fine-tuning OpenHathi with diverse, instruction-tuning Hindi datasets to make it better suited for assistive tasks. Along with the model, we also share the IndicInstruct dataset, which is a collection of diverse i...
2024-01-26T17:07:08Z
Work in progress
null
null
null
null
null
null
null
null
null
2401.15391
MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries
['Yixuan Tang', 'Yi Yang']
['cs.CL']
Retrieval-augmented generation (RAG) augments large language models (LLM) by retrieving relevant knowledge, showing promising potential in mitigating LLM hallucinations and enhancing response quality, thereby facilitating the great adoption of LLMs in practice. However, we find that existing RAG systems are inadequate ...
2024-01-27T11:41:48Z
Link: https://github.com/yixuantt/MultiHop-RAG/
null
null
null
null
null
null
null
null
null
2401.15896
M2-Encoder: Advancing Bilingual Image-Text Understanding by Large-scale Efficient Pretraining
['Qingpei Guo', 'Furong Xu', 'Hanxiao Zhang', 'Wang Ren', 'Ziping Ma', 'Lin Ju', 'Jian Wang', 'Jingdong Chen', 'Ming Yang']
['cs.CV', 'cs.AI']
Vision-language foundation models like CLIP have revolutionized the field of artificial intelligence. Nevertheless, VLM models supporting multi-language, e.g., in both Chinese and English, have lagged due to the relative scarcity of large-scale pretraining datasets. Toward this end, we introduce a comprehensive bilingu...
2024-01-29T05:43:33Z
null
null
null
null
null
null
null
null
null
null
2401.15947
MoE-LLaVA: Mixture of Experts for Large Vision-Language Models
['Bin Lin', 'Zhenyu Tang', 'Yang Ye', 'Jinfa Huang', 'Junwu Zhang', 'Yatian Pang', 'Peng Jin', 'Munan Ning', 'Jiebo Luo', 'Li Yuan']
['cs.CV']
Recent advances demonstrate that scaling Large Vision-Language Models (LVLMs) effectively improves downstream task performances. However, existing scaling methods enable all model parameters to be active for each token in the calculation, which brings massive training and inferring costs. In this work, we propose a sim...
2024-01-29T08:13:40Z
update author
null
null
null
null
null
null
null
null
null
2401.16122
DeFlow: Decoder of Scene Flow Network in Autonomous Driving
['Qingwen Zhang', 'Yi Yang', 'Heng Fang', 'Ruoyu Geng', 'Patric Jensfelt']
['cs.CV', 'cs.RO']
Scene flow estimation determines a scene's 3D motion field, by predicting the motion of points in the scene, especially for aiding tasks in autonomous driving. Many networks with large-scale point clouds as input use voxelization to create a pseudo-image for real-time running. However, the voxelization process often re...
2024-01-29T12:47:55Z
7 pages, 4 figures, Code check https://github.com/KTH-RPL/deflow, accepted by ICRA 2024
null
null
null
null
null
null
null
null
null
2401.16182
LLaMandement: Large Language Models for Summarization of French Legislative Proposals
['Joseph Gesnouin', 'Yannis Tannier', 'Christophe Gomes Da Silva', 'Hatim Tapory', 'Camille Brier', 'Hugo Simon', 'Raphael Rozenberg', 'Hermann Woehrel', 'Mehdi El Yakaabi', 'Thomas Binder', 'Guillaume Marie', 'Emilie Caron', 'Mathile Nogueira', 'Thomas Fontas', 'Laure Puydebois', 'Marie Theophile', 'Stephane Morandi',...
['cs.CL', 'cs.AI']
This report introduces LLaMandement, a state-of-the-art Large Language Model, fine-tuned by the French government and designed to enhance the efficiency and efficacy of processing parliamentary sessions (including the production of bench memoranda and documents required for interministerial meetings) by generating neut...
2024-01-29T14:23:51Z
21 pages, 9 figures
null
null
LLaMandement: Large Language Models for Summarization of French Legislative Proposals
['Joseph Gesnouin', 'Yannis Tannier', 'Christophe Gomes Da Silva', 'Hatim Tapory', 'Camille Brier', 'Hugo Simon', 'Raphael Rozenberg', 'Hermann Woehrel', 'Mehdi El Yakaabi', 'Thomas Binder', 'Guillaume Marie', 'Emilie Caron', 'Mathile Nogueira', 'Thomas Fontas', 'Laure Puydebois', 'Marie Theophile', 'Stephane Morandi',...
2024
arXiv.org
8
52
['Computer Science']
2401.16224
Diffutoon: High-Resolution Editable Toon Shading via Diffusion Models
['Zhongjie Duan', 'Chengyu Wang', 'Cen Chen', 'Weining Qian', 'Jun Huang']
['cs.CV']
Toon shading is a type of non-photorealistic rendering task of animation. Its primary purpose is to render objects with a flat and stylized appearance. As diffusion models have ascended to the forefront of image synthesis methodologies, this paper delves into an innovative form of toon shading based on diffusion models...
2024-01-29T15:21:37Z
null
null
null
Diffutoon: High-Resolution Editable Toon Shading via Diffusion Models
['Zhongjie Duan', 'Chengyu Wang', 'Cen Chen', 'Weining Qian', 'Jun Huang']
2024
International Joint Conference on Artificial Intelligence
7
46
['Computer Science']
2401.16265
CO2: Efficient Distributed Training with Full Communication-Computation Overlap
['Weigao Sun', 'Zhen Qin', 'Weixuan Sun', 'Shidi Li', 'Dong Li', 'Xuyang Shen', 'Yu Qiao', 'Yiran Zhong']
['cs.CL', 'cs.DC']
The fundamental success of large language models hinges upon the efficacious implementation of large-scale distributed training techniques. Nevertheless, building a vast, high-performance cluster featuring high-speed communication interconnectivity is prohibitively costly, and accessible only to prominent entities. In ...
2024-01-29T16:12:31Z
ICLR 2024 Spotlight. Yiran Zhong is the corresponding author. Code is available at: https://github.com/OpenNLPLab/CO2
null
null
null
null
null
null
null
null
null
2401.16420
InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Model
['Xiaoyi Dong', 'Pan Zhang', 'Yuhang Zang', 'Yuhang Cao', 'Bin Wang', 'Linke Ouyang', 'Xilin Wei', 'Songyang Zhang', 'Haodong Duan', 'Maosong Cao', 'Wenwei Zhang', 'Yining Li', 'Hang Yan', 'Yang Gao', 'Xinyue Zhang', 'Wei Li', 'Jingwen Li', 'Kai Chen', 'Conghui He', 'Xingcheng Zhang', 'Yu Qiao', 'Dahua Lin', 'Jiaqi Wan...
['cs.CV', 'cs.CL']
We introduce InternLM-XComposer2, a cutting-edge vision-language model excelling in free-form text-image composition and comprehension. This model goes beyond conventional vision-language understanding, adeptly crafting interleaved text-image content from diverse inputs like outlines, detailed textual specifications, a...
2024-01-29T18:59:02Z
Code and models are available at https://github.com/InternLM/InternLM-XComposer
null
null
InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Model
['Xiao-wen Dong', 'Pan Zhang', 'Yuhang Zang', 'Yuhang Cao', 'Bin Wang', 'Linke Ouyang', 'Xilin Wei', 'Songyang Zhang', 'Haodong Duan', 'Maosong Cao', 'Wenwei Zhang', 'Yining Li', 'Hang Yan', 'Yang Gao', 'Xinyue Zhang', 'Wei Li', 'Jingwen Li', 'Kai Chen', 'Conghui He', 'Xingcheng Zhang', 'Yu Qiao', 'Dahua Lin', 'Jiaqi W...
2024
arXiv.org
268
92
['Computer Science']
2401.16421
Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation
['Zhenyu He', 'Guhao Feng', 'Shengjie Luo', 'Kai Yang', 'Liwei Wang', 'Jingjing Xu', 'Zhi Zhang', 'Hongxia Yang', 'Di He']
['cs.LG', 'cs.AI', 'cs.CL', 'stat.ML']
In this work, we leverage the intrinsic segmentation of language sequences and design a new positional encoding method called Bilevel Positional Encoding (BiPE). For each position, our BiPE blends an intra-segment encoding and an inter-segment encoding. The intra-segment encoding identifies the locations within a segme...
2024-01-29T18:59:07Z
17 pages, 7 figures, 8 tables; ICML 2024 Camera Ready version; Code: https://github.com/zhenyuhe00/BiPE
null
null
null
null
null
null
null
null
null
2401.16437
A Benchmark Dataset for Tornado Detection and Prediction using Full-Resolution Polarimetric Weather Radar Data
['Mark S. Veillette', 'James M. Kurdzo', 'Phillip M. Stepanian', 'John Y. N. Cho', 'Siddharth Samsi', 'Joseph McDonald']
['physics.ao-ph', 'cs.LG']
Weather radar is the primary tool used by forecasters to detect and warn for tornadoes in near-real time. In order to assist forecasters in warning the public, several algorithms have been developed to automatically detect tornadic signatures in weather radar observations. Recently, Machine Learning (ML) algorithms, wh...
2024-01-26T21:47:39Z
37 pages, 15 Figures, 2 Tables
null
null
null
null
null
null
null
null
null
2401.16456
SHViT: Single-Head Vision Transformer with Memory Efficient Macro Design
['Seokju Yun', 'Youngmin Ro']
['cs.CV']
Recently, efficient Vision Transformers have shown great performance with low latency on resource-constrained devices. Conventionally, they use 4x4 patch embeddings and a 4-stage structure at the macro level, while utilizing sophisticated attention with multi-head configuration at the micro level. This paper aims to ad...
2024-01-29T09:12:23Z
CVPR 2024
null
null
null
null
null
null
null
null
null
2401.16468
InstructIR: High-Quality Image Restoration Following Human Instructions
['Marcos V. Conde', 'Gregor Geigle', 'Radu Timofte']
['cs.CV', 'cs.LG', 'eess.IV']
Image restoration is a fundamental problem that involves recovering a high-quality clean image from its degraded observation. All-In-One image restoration models can effectively restore images from various types and levels of degradation using degradation-specific information as prompts to guide the restoration model. ...
2024-01-29T18:53:33Z
European Conference on Computer Vision (ECCV) 2024
null
null
InstructIR: High-Quality Image Restoration Following Human Instructions
['Marcos V. Conde', 'Gregor Geigle', 'R. Timofte']
2024
European Conference on Computer Vision
58
109
['Computer Science', 'Engineering']
2401.16640
TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese
['Nicholas Kluge Corrêa', 'Sophia Falk', 'Shiza Fatimah', 'Aniket Sen', 'Nythamar de Oliveira']
['cs.CL', 'cs.LG']
Large language models (LLMs) have significantly advanced natural language processing, but their progress has yet to be equal across languages. While most LLMs are trained in high-resource languages like English, multilingual models generally underperform monolingual ones. Additionally, aspects of their multilingual fou...
2024-01-30T00:25:54Z
21 pages, 5 figures
Machine Learning With Applications, 16, 100558
10.1016/j.mlwa.2024.100558
null
null
null
null
null
null
null
2401.16658
OWSM v3.1: Better and Faster Open Whisper-Style Speech Models based on E-Branchformer
['Yifan Peng', 'Jinchuan Tian', 'William Chen', 'Siddhant Arora', 'Brian Yan', 'Yui Sudo', 'Muhammad Shakeel', 'Kwanghee Choi', 'Jiatong Shi', 'Xuankai Chang', 'Jee-weon Jung', 'Shinji Watanabe']
['cs.CL', 'eess.AS']
Recent studies have highlighted the importance of fully open foundation models. The Open Whisper-style Speech Model (OWSM) is an initial step towards reproducing OpenAI Whisper using public data and open-source toolkits. However, previous versions of OWSM (v1 to v3) are still based on standard Transformer, which might ...
2024-01-30T01:22:18Z
Accepted at INTERSPEECH 2024. Webpage: https://www.wavlab.org/activities/2024/owsm/
null
null
OWSM v3.1: Better and Faster Open Whisper-Style Speech Models based on E-Branchformer
['Yifan Peng', 'Jinchuan Tian', 'William Chen', 'Siddhant Arora', 'Brian Yan', 'Yui Sudo', 'Muhammad Shakeel', 'Kwanghee Choi', 'Jiatong Shi', 'Xuankai Chang', 'Jee-weon Jung', 'Shinji Watanabe']
2024
Interspeech
54
59
['Computer Science', 'Engineering']
2401.16818
H2O-Danube-1.8B Technical Report
['Philipp Singer', 'Pascal Pfeiffer', 'Yauhen Babakhin', 'Maximilian Jeblick', 'Nischay Dhankhar', 'Gabor Fodor', 'Sri Satish Ambati']
['cs.CL', 'cs.LG']
We present H2O-Danube, a series of small 1.8B language models consisting of H2O-Danube-1.8B, trained on 1T tokens, and the incremental improved H2O-Danube2-1.8B trained on an additional 2T tokens. Our models exhibit highly competitive metrics across a multitude of benchmarks and, as of the time of this writing, H2O-Dan...
2024-01-30T08:45:08Z
null
null
null
null
null
null
null
null
null
null
2401.17230
ESPnet-SPK: full pipeline speaker embedding toolkit with reproducible recipes, self-supervised front-ends, and off-the-shelf models
['Jee-weon Jung', 'Wangyou Zhang', 'Jiatong Shi', 'Zakaria Aldeneh', 'Takuya Higuchi', 'Barry-John Theobald', 'Ahmed Hussen Abdelaziz', 'Shinji Watanabe']
['cs.SD', 'cs.AI', 'eess.AS']
This paper introduces ESPnet-SPK, a toolkit designed with several objectives for training speaker embedding extractors. First, we provide an open-source platform for researchers in the speaker recognition community to effortlessly build models. We provide several models, ranging from x-vector to recent SKA-TDNN. Throug...
2024-01-30T18:18:27Z
5 pages, 3 figures, 7 tables, Interspeech 2024
null
null
null
null
null
null
null
null
null
2401.17270
YOLO-World: Real-Time Open-Vocabulary Object Detection
['Tianheng Cheng', 'Lin Song', 'Yixiao Ge', 'Wenyu Liu', 'Xinggang Wang', 'Ying Shan']
['cs.CV']
The You Only Look Once (YOLO) series of detectors have established themselves as efficient and practical tools. However, their reliance on predefined and trained object categories limits their applicability in open scenarios. Addressing this limitation, we introduce YOLO-World, an innovative approach that enhances YOLO...
2024-01-30T18:59:38Z
Work still in progress. Code & models are available at: https://github.com/AILab-CVC/YOLO-World
null
null
YOLO-World: Real-Time Open-Vocabulary Object Detection
['Tianheng Cheng', 'Lin Song', 'Yixiao Ge', 'Wenyu Liu', 'Xinggang Wang', 'Ying Shan']
2024
Computer Vision and Pattern Recognition
301
69
['Computer Science']
2401.17396
Fine-tuning Transformer-based Encoder for Turkish Language Understanding Tasks
['Savas Yildirim']
['cs.CL', 'cs.AI']
Deep learning-based and lately Transformer-based language models have been dominating the studies of natural language processing in the last years. Thanks to their accurate and fast fine-tuning characteristics, they have outperformed traditional machine learning-based approaches and achieved state-of-the-art results fo...
2024-01-30T19:27:04Z
null
null
null
Fine-tuning Transformer-based Encoder for Turkish Language Understanding Tasks
['Savaş Yıldırım']
2024
arXiv.org
7
34
['Computer Science']
2401.17851
Instruction-Guided Scene Text Recognition
['Yongkun Du', 'Zhineng Chen', 'Yuchen Su', 'Caiyan Jia', 'Yu-Gang Jiang']
['cs.CV']
Multi-modal models have shown appealing performance in visual recognition tasks, as free-form text-guided training evokes the ability to understand fine-grained visual content. However, current models cannot be trivially applied to scene text recognition (STR) due to the compositional difference between natural and tex...
2024-01-31T14:13:01Z
Accepted by TPAMI
null
null
null
null
null
null
null
null
null
2401.17948
HyperZ$\cdot$Z$\cdot$W Operator Connects Slow-Fast Networks for Full Context Interaction
['Harvie Zhang']
['cs.CV']
The self-attention mechanism utilizes large implicit weight matrices, programmed through dot product-based activations with very few trainable parameters, to enable long sequence modeling. In this paper, we investigate the possibility of discarding residual learning by employing large implicit kernels to achieve full c...
2024-01-31T15:57:21Z
10 pages, 6 figures, 5 tables
null
null
null
null
null
null
null
null
null
2401.18034
Paramanu: A Family of Novel Efficient Generative Foundation Language Models for Indian Languages
['Mitodru Niyogi', 'Arnab Bhattacharya']
['cs.CL', 'cs.AI']
We present "Paramanu", a family of novel language models (LM) for Indian languages, consisting of auto-regressive monolingual, bilingual, and multilingual models pretrained from scratch. Currently, it covers 10 languages (Assamese, Bangla, Hindi, Konkani, Maithili, Marathi, Odia, Sanskrit, Tamil, Telugu) across 5 scrip...
2024-01-31T17:58:10Z
null
null
null
null
null
null
null
null
null
null
2401.18058
LongAlign: A Recipe for Long Context Alignment of Large Language Models
['Yushi Bai', 'Xin Lv', 'Jiajie Zhang', 'Yuze He', 'Ji Qi', 'Lei Hou', 'Jie Tang', 'Yuxiao Dong', 'Juanzi Li']
['cs.CL', 'cs.LG']
Extending large language models to effectively handle long contexts requires instruction fine-tuning on input sequences of similar length. To address this, we present LongAlign -- a recipe of the instruction data, training, and evaluation for long context alignment. First, we construct a long instruction-following data...
2024-01-31T18:29:39Z
null
null
null
null
null
null
null
null
null
null
2401.18079
KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
['Coleman Hooper', 'Sehoon Kim', 'Hiva Mohammadzadeh', 'Michael W. Mahoney', 'Yakun Sophia Shao', 'Kurt Keutzer', 'Amir Gholami']
['cs.LG']
LLMs are seeing growing use for applications which require large context windows, and with these large context windows KV cache activations surface as the dominant contributor to memory consumption during inference. Quantization is a promising approach for compressing KV cache activations; however, existing solutions f...
2024-01-31T18:58:14Z
NeurIPS 2024
null
null
KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
['Coleman Hooper', 'Sehoon Kim', 'Hiva Mohammadzadeh', 'Michael W. Mahoney', 'Y. Shao', 'Kurt Keutzer', 'A. Gholami']
2024
Neural Information Processing Systems
224
51
['Computer Science']
2402.00075
D-Nikud: Enhancing Hebrew Diacritization with LSTM and Pretrained Models
['Adi Rosenthal', 'Nadav Shaked']
['cs.CL']
D-Nikud, a novel approach to Hebrew diacritization that integrates the strengths of LSTM networks and BERT-based (transformer) pre-trained model. Inspired by the methodologies employed in Nakdimon, we integrate it with the TavBERT pre-trained model, our system incorporates advanced architectural choices and diverse tra...
2024-01-30T22:07:12Z
null
null
null
null
null
null
null
null
null
null
2402.00126
Common Sense Reasoning for Deepfake Detection
['Yue Zhang', 'Ben Colman', 'Xiao Guo', 'Ali Shahriyari', 'Gaurav Bharaj']
['cs.CV', 'cs.CL']
State-of-the-art deepfake detection approaches rely on image-based features extracted via neural networks. While these approaches trained in a supervised manner extract likely fake features, they may fall short in representing unnatural `non-physical' semantic facial attributes -- blurry hairlines, double eyebrows, rig...
2024-01-31T19:11:58Z
null
null
null
Common Sense Reasoning for Deep Fake Detection
['Yue Zhang', 'Ben Colman', 'Ali Shahriyari', 'Gaurav Bharaj']
2024
European Conference on Computer Vision
35
67
['Computer Science']
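Only records with a matched Semantic Scholar entry have non-null `ss_*` fields, and the matched `ss_title` can diverge from the arXiv title, as in the record above ("Deepfake Detection" vs. "Deep Fake Detection"). A hedged sketch for splitting matched rows and surfacing such divergences, reusing the `df` from the loading sketch near the top of this card:

```python
# Assumes the `df` from the loading sketch above.
matched = df[df["ss_title"].notna()]
print(f"{len(matched)} of {len(df)} records carry Semantic Scholar metadata")

# Rows where the matched S2 title diverges from the arXiv title,
# e.g. "Deepfake Detection" vs. "Deep Fake Detection" in the record above.
mismatch = matched[matched["title"].str.lower()
                   != matched["ss_title"].str.lower()]
print(mismatch[["arxiv_id", "title", "ss_title"]].to_string(index=False))
```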
2402.00159
Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
['Luca Soldaini', 'Rodney Kinney', 'Akshita Bhagia', 'Dustin Schwenk', 'David Atkinson', 'Russell Authur', 'Ben Bogin', 'Khyathi Chandu', 'Jennifer Dumas', 'Yanai Elazar', 'Valentin Hofmann', 'Ananya Harsh Jha', 'Sachin Kumar', 'Li Lucy', 'Xinxi Lyu', 'Nathan Lambert', 'Ian Magnusson', 'Jacob Morrison', 'Niklas Muennig...
['cs.CL']
Information about pretraining corpora used to train the current best-performing language models is seldom discussed: commercial models rarely detail their data, and even open models are often released without accompanying training data or recipes to reproduce them. As a result, it is challenging to conduct and advance ...
2024-01-31T20:29:50Z
Accepted at ACL 2024; Dataset: https://hf.co/datasets/allenai/dolma; Code: https://github.com/allenai/dolma
null
null
null
null
null
null
null
null
null
2402.00160
Emergency Department Decision Support using Clinical Pseudo-notes
['Simon A. Lee', 'Sujay Jain', 'Alex Chen', 'Kyoka Ono', 'Jennifer Fang', 'Akos Rudas', 'Jeffrey N. Chiang']
['cs.CL']
In this work, we introduce the Multiple Embedding Model for EHR (MEME), an approach that serializes multimodal EHR tabular data into text using pseudo-notes, mimicking clinical text generation. This conversion not only preserves better representations of categorical data and learns contexts but also enables the effecti...
2024-01-31T20:31:56Z
null
npj Digital Medicine 8 (1), 394, 2025
10.1038/s41746-025-01777-x
Emergency Department Decision Support using Clinical Pseudo-notes
['Simon A. Lee', 'Sujay Jain', 'Alex Chen', 'Kyoka Ono', 'Jennifer Fang', 'Á. Rudas', 'Jeffrey N. Chiang']
2024
null
12
54
['Computer Science']
2402.00281
Guided Interpretable Facial Expression Recognition via Spatial Action Unit Cues
['Soufiane Belharbi', 'Marco Pedersoli', 'Alessandro Lameiras Koerich', 'Simon Bacon', 'Eric Granger']
['cs.CV']
Although state-of-the-art classifiers for facial expression recognition (FER) can achieve a high level of accuracy, they lack interpretability, an important feature for end-users. Experts typically associate spatial action units (\aus) from a codebook to facial regions for the visual interpretation of expressions. In t...
2024-02-01T02:13:49Z
15 pages, 11 figures, 3 tables, International Conference on Automatic Face and Gesture Recognition (FG 2024)
null
null
Guided Interpretable Facial Expression Recognition via Spatial Action Unit Cues
['Soufiane Belharbi', 'Marco Pedersoli', 'A. Koerich', 'Simon Bacon', 'Eric Granger']
2024
IEEE International Conference on Automatic Face & Gesture Recognition
14
94
['Computer Science']
2402.00300
Self-supervised learning of video representations from a child's perspective
['A. Emin Orhan', 'Wentao Wang', 'Alex N. Wang', 'Mengye Ren', 'Brenden M. Lake']
['cs.CV', 'cs.LG', 'cs.NE', 'q-bio.NC']
Children learn powerful internal models of the world around them from a few years of egocentric visual experience. Can such internal models be learned from a child's visual experience with highly generic learning algorithms or do they require strong inductive biases? Recent advances in collecting large-scale, longitudi...
2024-02-01T03:27:26Z
v3 updates results with significantly improved models; v2 was published as a conference paper at CogSci 2024; code & models available from https://github.com/eminorhan/video-models
null
null
Self-supervised learning of video representations from a child's perspective
['A. Orhan', 'Wentao Wang', 'Alex N. Wang', 'Mengye Ren', 'B. Lake']
2024
arXiv.org
4
23
['Computer Science', 'Biology']
2402.00453
Instruction Makes a Difference
['Tosin Adewumi', 'Nudrat Habib', 'Lama Alkhaled', 'Elisa Barney']
['cs.CV', 'cs.CL']
We introduce Instruction Document Visual Question Answering (iDocVQA) dataset and Large Language Document (LLaDoc) model, for training Language-Vision (LV) models for document analysis and predictions on document images, respectively. Usually, deep neural networks for the DocVQA task are trained on datasets lacking ins...
2024-02-01T09:43:30Z
Accepted at the 16th IAPR International Workshop On Document Analysis Systems (DAS)
null
null
Instruction Makes a Difference
['Tosin P. Adewumi', 'Nudrat Habib', 'Lama Alkhaled', 'Elisa Barney']
2024
International Workshop on Document Analysis Systems
1
47
['Computer Science']
2402.00691
Comparative Study of Large Language Model Architectures on Frontier
['Junqi Yin', 'Avishek Bose', 'Guojing Cong', 'Isaac Lyngaas', 'Quentin Anthony']
['cs.DC']
Large language models (LLMs) have garnered significant attention in both the AI community and beyond. Among these, the Generative Pre-trained Transformer (GPT) has emerged as the dominant architecture, spawning numerous variants. However, these variants have undergone pre-training under diverse conditions, including va...
2024-02-01T15:50:37Z
null
null
null
Comparative Study of Large Language Model Architectures on Frontier
['Junqi Yin', 'A. Bose', 'Guojing Cong', 'Isaac Lyngaas', 'Quentin Anthony']
2024
IEEE International Parallel and Distributed Processing Symposium
7
47
['Computer Science']
2402.00769
AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data
['Fu-Yun Wang', 'Zhaoyang Huang', 'Weikang Bian', 'Xiaoyu Shi', 'Keqiang Sun', 'Guanglu Song', 'Yu Liu', 'Hongsheng Li']
['cs.CV', 'cs.LG']
This paper introduces an effective method for computation-efficient personalized style video generation without requiring access to any personalized video data. It reduces the necessary generation time of similarly sized video diffusion models from 25 seconds to around 1 second while maintaining the same level of perfo...
2024-02-01T16:58:11Z
Accepted as a Short Paper by SIGGRAPH ASIA 2024 Technical Communications. This is a short version of the original work. Project Page: https://animatelcm.github.io/
null
null
null
null
null
null
null
null
null
2402.00786
CroissantLLM: A Truly Bilingual French-English Language Model
['Manuel Faysse', 'Patrick Fernandes', 'Nuno M. Guerreiro', 'António Loison', 'Duarte M. Alves', 'Caio Corro', 'Nicolas Boizard', 'João Alves', 'Ricardo Rei', 'Pedro H. Martins', 'Antoni Bigata Casademunt', 'François Yvon', 'André F. T. Martins', 'Gautier Viaud', 'Céline Hudelot', 'Pierre Colombo']
['cs.CL', 'cs.LG']
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware. To that end, we pioneer the approach of training an intrinsic...
2024-02-01T17:17:55Z
null
null
null
CroissantLLM: A Truly Bilingual French-English Language Model
['Manuel Faysse', 'Patrick Fernandes', 'Nuno M. Guerreiro', 'António Loison', 'Duarte M. Alves', 'Caio Corro', 'Nicolas Boizard', 'João Alves', 'Ricardo Rei', 'P. Martins', 'Antoni Bigata Casademunt', 'François Yvon', 'André Martins', 'Gautier Viaud', "C'eline Hudelot", 'Pierre Colombo']
2024
Trans. Mach. Learn. Res.
37
84
['Computer Science']
2402.00838
OLMo: Accelerating the Science of Language Models
['Dirk Groeneveld', 'Iz Beltagy', 'Pete Walsh', 'Akshita Bhagia', 'Rodney Kinney', 'Oyvind Tafjord', 'Ananya Harsh Jha', 'Hamish Ivison', 'Ian Magnusson', 'Yizhong Wang', 'Shane Arora', 'David Atkinson', 'Russell Authur', 'Khyathi Raghavi Chandu', 'Arman Cohan', 'Jennifer Dumas', 'Yanai Elazar', 'Yuling Gu', 'Jack Hess...
['cs.CL']
Language models (LMs) have become ubiquitous in both NLP research and in commercial product offerings. As their commercial importance has surged, the most powerful models have become closed off, gated behind proprietary interfaces, with important details of their training data, architectures, and development undisclose...
2024-02-01T18:28:55Z
null
null
null
null
null
null
null
null
null
null
2402.00841
Tiny Titans: Can Smaller Large Language Models Punch Above Their Weight in the Real World for Meeting Summarization?
['Xue-Yong Fu', 'Md Tahmid Rahman Laskar', 'Elena Khasanova', 'Cheng Chen', 'Shashi Bhushan TN']
['cs.CL']
Large Language Models (LLMs) have demonstrated impressive capabilities to solve a wide range of tasks without being explicitly fine-tuned on task-specific datasets. However, deploying LLMs in the real world is not trivial, as it requires substantial computing resources. In this paper, we investigate whether smaller, co...
2024-02-01T18:31:34Z
Accepted by NAACL 2024 (Industry Track). The first two authors contributed equally to this work
null
null
Tiny Titans: Can Smaller Large Language Models Punch Above Their Weight in the Real World for Meeting Summarization?
['Xue-Yong Fu', 'Md Tahmid Rahman Laskar', 'Elena Khasanova', 'Cheng Chen', 'TN ShashiBhushan']
2024
North American Chapter of the Association for Computational Linguistics
23
27
['Computer Science']
2402.00847
BootsTAP: Bootstrapped Training for Tracking-Any-Point
['Carl Doersch', 'Pauline Luc', 'Yi Yang', 'Dilara Gokay', 'Skanda Koppula', 'Ankush Gupta', 'Joseph Heyward', 'Ignacio Rocco', 'Ross Goroshin', 'João Carreira', 'Andrew Zisserman']
['cs.CV', 'stat.ML']
To endow models with greater understanding of physics and motion, it is useful to enable them to perceive how solid surfaces move and deform in real scenes. This can be formalized as Tracking-Any-Point (TAP), which requires the algorithm to track any point on solid surfaces in a video, potentially densely in space and ...
2024-02-01T18:38:55Z
null
null
null
null
null
null
null
null
null
null
2402.00856
Towards Efficient Exact Optimization of Language Model Alignment
['Haozhe Ji', 'Cheng Lu', 'Yilin Niu', 'Pei Ke', 'Hongning Wang', 'Jun Zhu', 'Jie Tang', 'Minlie Huang']
['cs.CL']
The alignment of language models with human preferences is vital for their application in real-world tasks. The problem is formulated as optimizing the model's policy to maximize the expected reward that reflects human preferences with minimal deviation from the initial policy. While considered as a straightforward sol...
2024-02-01T18:51:54Z
24 pages, 9 figures
Forty-first International Conference on Machine Learning (ICML 2024)
null
null
null
null
null
null
null
null
2402.00892
EVA-GAN: Enhanced Various Audio Generation via Scalable Generative Adversarial Networks
['Shijia Liao', 'Shiyi Lan', 'Arun George Zachariah']
['cs.SD', 'cs.AI', 'cs.LG', 'eess.AS']
The advent of Large Models marks a new era in machine learning, significantly outperforming smaller models by leveraging vast datasets to capture and synthesize complex patterns. Despite these advancements, the exploration into scaling, especially in the audio generation domain, remains limited, with previous efforts d...
2024-01-31T03:31:03Z
null
null
null
EVA-GAN: Enhanced Various Audio Generation via Scalable Generative Adversarial Networks
['Shijia Liao', 'Shiyi Lan', 'Arun George Zachariah']
2024
arXiv.org
1
25
['Computer Science', 'Engineering']
2402.01002
AI-generated faces influence gender stereotypes and racial homogenization
['Nouar AlDahoul', 'Talal Rahwan', 'Yasir Zaki']
['cs.CV', 'cs.AI']
Text-to-image generative AI models such as Stable Diffusion are used daily by millions worldwide. However, the extent to which these models exhibit racial and gender stereotypes is not yet fully understood. Here, we document significant biases in Stable Diffusion across six races, two genders, 32 professions, and eight...
2024-02-01T20:32:14Z
47 pages, 19 figures
null
null
null
null
null
null
null
null
null
2402.01030
Executable Code Actions Elicit Better LLM Agents
['Xingyao Wang', 'Yangyi Chen', 'Lifan Yuan', 'Yizhe Zhang', 'Yunzhu Li', 'Hao Peng', 'Heng Ji']
['cs.CL', 'cs.AI']
Large Language Model (LLM) agents, capable of performing a broad range of actions, such as invoking tools and controlling robots, show great potential in tackling real-world challenges. LLM agents are typically prompted to produce actions by generating JSON or text in a pre-defined format, which is usually limited by c...
2024-02-01T21:38:58Z
Accepted by ICML 2024; Code, data, model, and demo are available at https://github.com/xingyaoww/code-act
null
null
null
null
null
null
null
null
null
2402.01053
Plan-Grounded Large Language Models for Dual Goal Conversational Settings
['Diogo Glória-Silva', 'Rafael Ferreira', 'Diogo Tavares', 'David Semedo', 'João Magalhães']
['cs.CL', 'cs.AI']
Training Large Language Models (LLMs) to follow user instructions has been shown to supply the LLM with ample capacity to converse fluently while being aligned with humans. Yet, it is not completely clear how an LLM can lead a plan-grounded conversation in mixed-initiative settings where instructions flow in both direc...
2024-02-01T22:56:39Z
null
null
null
null
null
null
null
null
null
null
2402.01306
KTO: Model Alignment as Prospect Theoretic Optimization
['Kawin Ethayarajh', 'Winnie Xu', 'Niklas Muennighoff', 'Dan Jurafsky', 'Douwe Kiela']
['cs.LG', 'cs.AI']
Kahneman & Tversky's $\textit{prospect theory}$ tells us that humans perceive random variables in a biased but well-defined manner (1992); for example, humans are famously loss-averse. We show that objectives for aligning LLMs with human feedback implicitly incorporate many of these biases -- the success of these objec...
2024-02-02T10:53:36Z
ICML 2024
null
null
null
null
null
null
null
null
null
2402.01469
AMOR: A Recipe for Building Adaptable Modular Knowledge Agents Through Process Feedback
['Jian Guan', 'Wei Wu', 'Zujie Wen', 'Peng Xu', 'Hongning Wang', 'Minlie Huang']
['cs.CL']
The notable success of large language models (LLMs) has sparked an upsurge in building language agents to complete various complex tasks. We present AMOR, an agent framework based on open-source LLMs, which reasons with external knowledge bases and adapts to specific domains through human supervision to the reasoning p...
2024-02-02T14:56:48Z
NeurIPS 2024
null
null
AMOR: A Recipe for Building Adaptable Modular Knowledge Agents Through Process Feedback
['Jian Guan', 'Wei Wu', 'Zujie Wen', 'Peng Xu', 'Hongning Wang', 'Minlie Huang']
2024
Neural Information Processing Systems
20
57
['Computer Science']
2402.01528
Decoding Speculative Decoding
['Minghao Yan', 'Saurabh Agarwal', 'Shivaram Venkataraman']
['cs.LG', 'cs.CL']
Speculative Decoding is a widely used technique to speed up inference for Large Language Models (LLMs) without sacrificing quality. When performing inference, speculative decoding uses a smaller draft model to generate speculative tokens and then uses the target LLM to verify those draft tokens. The speedup provided by...
2024-02-02T16:15:24Z
Proceedings of the 2025 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2025)
null
null
null
null
null
null
null
null
null
2402.01613
Nomic Embed: Training a Reproducible Long Context Text Embedder
['Zach Nussbaum', 'John X. Morris', 'Brandon Duderstadt', 'Andriy Mulyar']
['cs.CL', 'cs.AI']
This technical report describes the training of nomic-embed-text-v1, the first fully reproducible, open-source, open-weights, open-data, 8192 context length English text embedding model that outperforms both OpenAI Ada-002 and OpenAI text-embedding-3-small on the short-context MTEB benchmark and the long context LoCo b...
2024-02-02T18:23:18Z
Accepted to TMLR https://openreview.net/forum?id=IPmzyQSiQE
null
null
Nomic Embed: Training a Reproducible Long Context Text Embedder
['Zach Nussbaum', 'John X. Morris', 'Brandon Duderstadt', 'Andriy Mulyar']
2024
Trans. Mach. Learn. Res.
124
76
['Computer Science']
2402.01728
Hardware Phi-1.5B: A Large Language Model Encodes Hardware Domain Specific Knowledge
['Weimin Fu', 'Shijie Li', 'Yifang Zhao', 'Haocheng Ma', 'Raj Dutta', 'Xuan Zhang', 'Kaichen Yang', 'Yier Jin', 'Xiaolong Guo']
['cs.CL', 'cs.AI', 'cs.AR']
In the rapidly evolving semiconductor industry, where research, design, verification, and manufacturing are intricately linked, the potential of Large Language Models to revolutionize hardware design and security verification is immense. The primary challenge, however, lies in the complexity of hardware specific issues...
2024-01-27T22:49:43Z
6 pages, 6 figures
29th IEEE/ACM Asia and South Pacific Design Automation Conference (ASP-DAC); 2024 January; Incheon Songdo Convensia, South Korea
null
Hardware Phi-1.5B: A Large Language Model Encodes Hardware Domain Specific Knowledge
['Weimin Fu', 'Shijie Li', 'Yifang Zhao', 'Haocheng Ma', 'R. Dutta', 'Xuan Zhang', 'Kaichen Yang', 'Yier Jin', 'Xiaolong Guo']
2024
Asia and South Pacific Design Automation Conference
10
37
['Computer Science']
2402.01758
Aalap: AI Assistant for Legal & Paralegal Functions in India
['Aman Tiwari', 'Prathamesh Kalamkar', 'Atreyo Banerjee', 'Saurabh Karn', 'Varun Hemachandran', 'Smita Gupta']
['cs.CY', 'cs.AI', 'cs.CL']
Using proprietary Large Language Models on legal tasks poses challenges due to data privacy issues, domain data heterogeneity, domain knowledge sophistication, and domain objectives uniqueness. We created Aalap, a fine-tuned Mistral 7B model on instructions data related to specific Indian legal tasks. The performance ...
2024-01-30T12:39:58Z
null
null
null
Aalap: AI Assistant for Legal & Paralegal Functions in India
['Aman Tiwari', 'Prathamesh Kalamkar', 'Atreyo Banerjee', 'S. Karn', 'V. Hemachandran', 'Smita Gupta']
2024
arXiv.org
1
30
['Computer Science']
2402.01771
BlackMamba: Mixture of Experts for State-Space Models
['Quentin Anthony', 'Yury Tokpanov', 'Paolo Glorioso', 'Beren Millidge']
['cs.CL', 'cs.AI', 'cs.DC', 'cs.LG']
State-space models (SSMs) have recently demonstrated competitive performance to transformers at large-scale language modeling benchmarks while achieving linear time and memory complexity as a function of sequence length. Mamba, a recently released SSM model, shows impressive performance in both language modeling and lo...
2024-02-01T07:15:58Z
null
null
null
null
null
null
null
null
null
null
2402.01831
Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities
['Zhifeng Kong', 'Arushi Goel', 'Rohan Badlani', 'Wei Ping', 'Rafael Valle', 'Bryan Catanzaro']
['cs.SD', 'cs.LG', 'eess.AS']
Augmenting large language models (LLMs) to understand audio -- including non-speech sounds and non-verbal speech -- is critically important for diverse real-world applications of LLMs. In this paper, we propose Audio Flamingo, a novel audio language model with 1) strong audio understanding abilities, 2) the ability to ...
2024-02-02T18:58:34Z
ICML 2024
null
null
Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities
['Zhifeng Kong', 'Arushi Goel', 'Rohan Badlani', 'Wei Ping', 'Rafael Valle', 'Bryan Catanzaro']
2024
International Conference on Machine Learning
94
93
['Computer Science', 'Engineering']
2402.01912
Natural language guidance of high-fidelity text-to-speech with synthetic annotations
['Dan Lyth', 'Simon King']
['cs.SD', 'cs.CL', 'eess.AS']
Text-to-speech models trained on large-scale datasets have demonstrated impressive in-context learning capabilities and naturalness. However, control of speaker identity and style in these models typically requires conditioning on reference speech recordings, limiting creative applications. Alternatively, natural langu...
2024-02-02T21:29:34Z
null
null
null
null
null
null
null
null
null
null
2402.01935
Code Representation Learning At Scale
['Dejiao Zhang', 'Wasi Ahmad', 'Ming Tan', 'Hantian Ding', 'Ramesh Nallapati', 'Dan Roth', 'Xiaofei Ma', 'Bing Xiang']
['cs.CL']
Recent studies have shown that code language models at scale demonstrate significant performance gains on downstream tasks, i.e., code generation. However, most of the existing works on code representation learning train models at a hundred million parameter scale using very limited pretraining corpora. In this work, w...
2024-02-02T22:19:15Z
10 pages
ICLR 2024
null
null
null
null
null
null
null
null
2402.01980
SOCIALITE-LLAMA: An Instruction-Tuned Model for Social Scientific Tasks
['Gourab Dey', 'Adithya V Ganesan', 'Yash Kumar Lal', 'Manal Shah', 'Shreyashee Sinha', 'Matthew Matero', 'Salvatore Giorgi', 'Vivek Kulkarni', 'H. Andrew Schwartz']
['cs.CL']
Social science NLP tasks, such as emotion or humor detection, are required to capture the semantics along with the implicit pragmatics from text, often with limited amounts of training data. Instruction tuning has been shown to improve the many capabilities of large language models (LLMs) such as commonsense reasoning,...
2024-02-03T01:33:16Z
Short paper accepted to EACL 2024. 4 pgs, 2 tables
null
null
null
null
null
null
null
null
null
2402.01981
Self-Debiasing Large Language Models: Zero-Shot Recognition and Reduction of Stereotypes
['Isabel O. Gallegos', 'Ryan A. Rossi', 'Joe Barrow', 'Md Mehrab Tanjim', 'Tong Yu', 'Hanieh Deilamsalehy', 'Ruiyi Zhang', 'Sungchul Kim', 'Franck Dernoncourt']
['cs.CL', 'cs.AI', 'cs.CY', 'cs.LG']
Large language models (LLMs) have shown remarkable advances in language generation and understanding but are also prone to exhibiting harmful social biases. While recognition of these behaviors has generated an abundance of bias mitigation techniques, most require modifications to the training data, model parameters, o...
2024-02-03T01:40:11Z
null
null
null
null
null
null
null
null
null
null
2402.02207
Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models
['Yongshuo Zong', 'Ondrej Bohdal', 'Tingyang Yu', 'Yongxin Yang', 'Timothy Hospedales']
['cs.LG']
Current vision large language models (VLLMs) exhibit remarkable capabilities yet are prone to generate harmful content and are vulnerable to even the simplest jailbreaking attacks. Our initial analysis finds that this is due to the presence of harmful data during vision-language instruction fine-tuning, and that VLLM f...
2024-02-03T16:43:42Z
ICML 2024
null
null
Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models
['Yongshuo Zong', 'Ondrej Bohdal', 'Tingyang Yu', 'Yongxin Yang', 'Timothy M. Hospedales']
2024
International Conference on Machine Learning
73
47
['Computer Science']
2402.02263
MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers
['Yatong Bai', 'Mo Zhou', 'Vishal M. Patel', 'Somayeh Sojoudi']
['cs.LG', 'cs.AI', 'cs.CV', '68T07']
Adversarial robustness often comes at the cost of degraded accuracy, impeding real-life applications of robust classification models. Training-based solutions for better trade-offs are limited by incompatibilities with already-trained high-performance large models, necessitating the exploration of training-free ensembl...
2024-02-03T21:12:36Z
null
null
null
MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers
['Yatong Bai', 'Mo Zhou', 'Vishal M. Patel', 'S. Sojoudi']
2024
Trans. Mach. Learn. Res.
8
63
['Computer Science']
2402.02368
Timer: Generative Pre-trained Transformers Are Large Time Series Models
['Yong Liu', 'Haoran Zhang', 'Chenyu Li', 'Xiangdong Huang', 'Jianmin Wang', 'Mingsheng Long']
['cs.LG', 'stat.ML']
Deep learning has contributed remarkably to the advancement of time series analysis. Still, deep models can encounter performance bottlenecks in real-world data-scarce scenarios, which can be concealed due to the performance saturation with small models on current benchmarks. Meanwhile, large models have demonstrated g...
2024-02-04T06:55:55Z
null
null
null
null
null
null
null
null
null
null
2402.02416
Aligner: Efficient Alignment by Learning to Correct
['Jiaming Ji', 'Boyuan Chen', 'Hantao Lou', 'Donghai Hong', 'Borong Zhang', 'Xuehai Pan', 'Juntao Dai', 'Tianyi Qiu', 'Yaodong Yang']
['cs.CL', 'cs.AI', 'cs.LG']
With the rapid development of large language models (LLMs) and ever-evolving practical requirements, finding an efficient and effective alignment method has never been more critical. However, the tension between the complexity of current alignment methods and the need for rapid iteration in deployment scenarios necessi...
2024-02-04T09:24:51Z
Accepted by NeurIPS 2024 Oral Presentation
null
null
null
null
null
null
null
null
null
2402.02464
A Graph is Worth $K$ Words: Euclideanizing Graph using Pure Transformer
['Zhangyang Gao', 'Daize Dong', 'Cheng Tan', 'Jun Xia', 'Bozhen Hu', 'Stan Z. Li']
['cs.LG', 'cs.AI', 'cs.SI']
Can we model Non-Euclidean graphs as pure language or even Euclidean vectors while retaining their inherent information? The Non-Euclidean property have posed a long term challenge in graph modeling. Despite recent graph neural networks and graph transformers efforts encoding graphs as Euclidean vectors, recovering the...
2024-02-04T12:29:40Z
null
null
null
null
null
null
null
null
null
null
2402.02574
Spatio-temporal Prompting Network for Robust Video Feature Extraction
['Guanxiong Sun', 'Chi Wang', 'Zhaoyu Zhang', 'Jiankang Deng', 'Stefanos Zafeiriou', 'Yang Hua']
['cs.CV', 'cs.LG']
Frame quality deterioration is one of the main challenges in the field of video understanding. To compensate for the information loss caused by deteriorated frames, recent approaches exploit transformer-based integration modules to obtain spatio-temporal information. However, these integration modules are heavy and com...
2024-02-04T17:52:04Z
null
2023 International Conference on Computer Vision (ICCV) 13541-13551
10.1109/ICCV51070.2023.01250
Spatio-temporal Prompting Network for Robust Video Feature Extraction
['Guanxiong Sun', 'Chi Wang', 'Zhaoyu Zhang', 'Jiankang Deng', 'S. Zafeiriou', 'Yang Hua']
2023
IEEE International Conference on Computer Vision
4
70
['Computer Science']
2402.02583
DiffEditor: Boosting Accuracy and Flexibility on Diffusion-based Image Editing
['Chong Mou', 'Xintao Wang', 'Jiechong Song', 'Ying Shan', 'Jian Zhang']
['cs.CV', 'cs.LG']
Large-scale Text-to-Image (T2I) diffusion models have revolutionized image generation over the last few years. Although owning diverse and high-quality generation capabilities, translating these abilities to fine-grained image editing remains challenging. In this paper, we propose DiffEditor to rectify two weaknesses i...
2024-02-04T18:50:29Z
null
null
null
DiffEditor: Boosting Accuracy and Flexibility on Diffusion-Based Image Editing
['Chong Mou', 'Xintao Wang', 'Jie Song', 'Ying Shan', 'Jian Zhang']
2024
Computer Vision and Pattern Recognition
55
0
['Computer Science']
2402.02592
Unified Training of Universal Time Series Forecasting Transformers
['Gerald Woo', 'Chenghao Liu', 'Akshat Kumar', 'Caiming Xiong', 'Silvio Savarese', 'Doyen Sahoo']
['cs.LG', 'cs.AI']
Deep learning for time series forecasting has traditionally operated within a one-model-per-dataset framework, limiting its potential to leverage the game-changing impact of large pre-trained models. The concept of universal forecasting, emerging from pre-training on a vast collection of time series datasets, envisions...
2024-02-04T20:00:45Z
null
null
null
null
null
null
null
null
null
null
2402.02622
DenseFormer: Enhancing Information Flow in Transformers via Depth Weighted Averaging
['Matteo Pagliardini', 'Amirkeivan Mohtashami', 'Francois Fleuret', 'Martin Jaggi']
['cs.CL', 'cs.LG']
The transformer architecture by Vaswani et al. (2017) is now ubiquitous across application domains, from natural language processing to speech processing and image understanding. We propose DenseFormer, a simple modification to the standard architecture that improves the perplexity of the model without increasing its s...
2024-02-04T21:44:09Z
null
null
null
null
null
null
null
null
null
null
2402.02632
GIRT-Model: Automated Generation of Issue Report Templates
['Nafiseh Nikeghbal', 'Amir Hossein Kargaran', 'Abbas Heydarnoori']
['cs.SE', 'cs.CL']
Platforms such as GitHub and GitLab introduce Issue Report Templates (IRTs) to enable more effective issue management and better alignment with developer expectations. However, these templates are not widely adopted in most repositories, and there is currently no tool available to aid developers in generating them. In ...
2024-02-04T22:53:38Z
Accepted to be published at the 21st IEEE/ACM International Conference on Mining Software Repositories (MSR 2024)
null
10.1145/3643991.3644906
null
null
null
null
null
null
null
2402.02754
Focal Modulation Networks for Interpretable Sound Classification
['Luca Della Libera', 'Cem Subakan', 'Mirco Ravanelli']
['cs.SD', 'cs.LG', 'eess.AS']
The increasing success of deep neural networks has raised concerns about their inherent black-box nature, posing challenges related to interpretability and trust. While there has been extensive exploration of interpretation techniques in vision and language, interpretability in the audio domain has received limited att...
2024-02-05T06:20:52Z
Accepted to ICASSP 2024 XAI-SA Workshop
null
null
Focal Modulation Networks for Interpretable Sound Classification
['Luca Della Libera', 'Cem Subakan', 'M. Ravanelli']
2024
2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)
2
52
['Computer Science', 'Engineering']
2402.02834
Shortened LLaMA: Depth Pruning for Large Language Models with Comparison of Retraining Methods
['Bo-Kyeong Kim', 'Geonmin Kim', 'Tae-Ho Kim', 'Thibault Castells', 'Shinkook Choi', 'Junho Shin', 'Hyoung-Kyu Song']
['cs.LG', 'cs.CL']
Structured pruning of modern large language models (LLMs) has emerged as a way of decreasing their high computational needs. Width pruning reduces the size of projection weight matrices (e.g., by removing attention heads) while maintaining the number of layers. Depth pruning, in contrast, removes entire layers or block...
2024-02-05T09:44:49Z
Update (arXiv-v2): continued pretraining for severe pruning ratios, compatibility with quantization, and enhanced baselines. Preliminary work (arXiv-v1) accepted at ICLR 2024 Workshop on ME-FoMo: https://openreview.net/forum?id=18VGxuOdpu
null
null
Shortened LLaMA: A Simple Depth Pruning for Large Language Models
['Bo-Kyeong Kim', 'Geonmin Kim', 'Tae-Ho Kim', 'Thibault Castells', 'Shinkook Choi', 'Junho Shin', 'Hyoung-Kyu Song']
2024
arXiv.org
40
69
['Computer Science']
2402.03166
RRWNet: Recursive Refinement Network for effective retinal artery/vein segmentation and classification
['José Morano', 'Guilherme Aresta', 'Hrvoje Bogunović']
['eess.IV', 'cs.CV']
The caliber and configuration of retinal blood vessels serve as important biomarkers for various diseases and medical conditions. A thorough analysis of the retinal vasculature requires the segmentation of the blood vessels and their classification into arteries and veins, typically performed on color fundus images obt...
2024-02-05T16:35:29Z
null
Expert Systems with Applications, 2024
10.1016/j.eswa.2024.124970
RRWNet: Recursive Refinement Network for Effective Retinal Artery/Vein Segmentation and Classification
['José Morano', 'Guilherme Aresta', "Hrvoje Bogunovi'c"]
2024
Expert systems with applications
2
81
['Computer Science', 'Engineering']
2402.03216
BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation
['Jianlv Chen', 'Shitao Xiao', 'Peitian Zhang', 'Kun Luo', 'Defu Lian', 'Zheng Liu']
['cs.CL', 'cs.AI', 'cs.LG']
In this paper, we present a new embedding model, called M3-Embedding, which is distinguished for its versatility in Multi-Linguality, Multi-Functionality, and Multi-Granularity. It can support more than 100 working languages, leading to new state-of-the-art performances on multi-lingual and cross-lingual retrieval task...
2024-02-05T17:26:49Z
null
null
null
BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation
['Jianlv Chen', 'Shitao Xiao', 'Peitian Zhang', 'Kun Luo', 'Defu Lian', 'Zheng Liu']
2,024
Annual Meeting of the Association for Computational Linguistics
449
60
['Computer Science']
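M3-Embedding's "self-knowledge distillation" uses the combination of its retrieval scores as a teacher for each individual scoring head (dense, sparse/lexical, multi-vector). The loss below is a hedged reconstruction of that idea; the fusion weights, temperature, and KL formulation are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch of self-knowledge distillation across retrieval functionalities:
# the fused score distribution teaches each single-functionality head.
import torch
import torch.nn.functional as F

def self_kd_loss(s_dense, s_sparse, s_multi, tau=1.0):
    # each tensor: [batch, num_candidates] similarity logits
    s_fused = s_dense + 0.3 * s_sparse + s_multi          # weights are illustrative
    teacher = F.softmax(s_fused.detach() / tau, dim=-1)
    losses = [F.kl_div(F.log_softmax(s / tau, dim=-1), teacher,
                       reduction="batchmean")
              for s in (s_dense, s_sparse, s_multi)]
    return sum(losses) / len(losses)
```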
2,402.03284
Deal, or no deal (or who knows)? Forecasting Uncertainty in Conversations using Large Language Models
['Anthony Sicilia', 'Hyunwoo Kim', 'Khyathi Raghavi Chandu', 'Malihe Alikhani', 'Jack Hessel']
['cs.CL', 'cs.AI', 'cs.LG']
Effective interlocutors account for the uncertain goals, beliefs, and emotions of others. But even the best human conversationalist cannot perfectly anticipate the trajectory of a dialogue. How well can language models represent inherent uncertainty in conversations? We propose FortUne Dial, an expansion of the long-st...
2024-02-05T18:39:47Z
2 Figures; 7 Tables; 27 pages
null
null
null
null
null
null
null
null
null
2,402.0329
InstanceDiffusion: Instance-level Control for Image Generation
['Xudong Wang', 'Trevor Darrell', 'Sai Saketh Rambhatla', 'Rohit Girdhar', 'Ishan Misra']
['cs.CV', 'cs.AI', 'cs.LG']
Text-to-image diffusion models produce high quality images but do not offer control over individual instances in the image. We introduce InstanceDiffusion that adds precise instance-level control to text-to-image diffusion models. InstanceDiffusion supports free-form language conditions per instance and allows flexible...
2024-02-05T18:49:17Z
Preprint; Project page: https://people.eecs.berkeley.edu/~xdwang/projects/InstDiff/
null
null
null
null
null
null
null
null
null
2,402.033
DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models
['Zhihong Shao', 'Peiyi Wang', 'Qihao Zhu', 'Runxin Xu', 'Junxiao Song', 'Xiao Bi', 'Haowei Zhang', 'Mingchuan Zhang', 'Y. K. Li', 'Y. Wu', 'Daya Guo']
['cs.CL', 'cs.AI', 'cs.LG']
Mathematical reasoning poses a significant challenge for language models due to its complex and structured nature. In this paper, we introduce DeepSeekMath 7B, which continues pre-training DeepSeek-Coder-Base-v1.5 7B with 120B math-related tokens sourced from Common Crawl, together with natural language and code data. ...
2024-02-05T18:55:32Z
null
null
null
null
null
null
null
null
null
null
2,402.03477
Arabic Synonym BERT-based Adversarial Examples for Text Classification
['Norah Alshahrani', 'Saied Alshahrani', 'Esma Wali', 'Jeanna Matthews']
['cs.CL']
Text classification systems have been proven vulnerable to adversarial text examples, modified versions of the original text examples that are often unnoticed by human eyes, yet can force text classification models to alter their classification. Often, research works quantifying the impact of adversarial text attacks h...
2024-02-05T19:39:07Z
This paper is accepted at The 18th Conference of the European Chapter of the Association for Computational Linguistics (Student Research Workshop), March 17-22, 2024
null
null
Arabic Synonym BERT-based Adversarial Examples for Text Classification
['Norah M. Alshahrani', 'Saied Alshahrani', 'Esma Wali', 'J. Matthews']
2,024
Conference of the European Chapter of the Association for Computational Linguistics
6
53
['Computer Science']
2,402.03686
Are Machines Better at Complex Reasoning? Unveiling Human-Machine Inference Gaps in Entailment Verification
['Soumya Sanyal', 'Tianyi Xiao', 'Jiacheng Liu', 'Wenya Wang', 'Xiang Ren']
['cs.CL', 'cs.AI']
Making inferences in text comprehension to understand the meaning is essential in language processing. This work studies the entailment verification (EV) problem of multi-sentence premises that requires a system to make multiple inferences implicitly. Studying EV for such complex premises is important because modern NL...
2024-02-06T04:14:09Z
null
null
null
null
null
null
null
null
null
null
2,402.03766
MobileVLM V2: Faster and Stronger Baseline for Vision Language Model
['Xiangxiang Chu', 'Limeng Qiao', 'Xinyu Zhang', 'Shuang Xu', 'Fei Wei', 'Yang Yang', 'Xiaofei Sun', 'Yiming Hu', 'Xinyang Lin', 'Bo Zhang', 'Chunhua Shen']
['cs.CV', 'cs.AI']
We introduce MobileVLM V2, a family of significantly improved vision language models upon MobileVLM, which proves that a delicate orchestration of novel architectural design, an improved training scheme tailored for mobile VLMs, and rich high-quality dataset curation can substantially benefit VLMs' performance. Specifi...
2024-02-06T07:16:36Z
null
null
null
MobileVLM V2: Faster and Stronger Baseline for Vision Language Model
['Xiangxiang Chu', 'Limeng Qiao', 'Xinyu Zhang', 'Shuang Xu', 'Fei Wei', 'Yang Yang', 'Xiaofei Sun', 'Yiming Hu', 'Xinyang Lin', 'Bo Zhang', 'Chunhua Shen']
2,024
arXiv.org
109
68
['Computer Science']
2,402.03774
Learning a Decision Tree Algorithm with Transformers
['Yufan Zhuang', 'Liyuan Liu', 'Chandan Singh', 'Jingbo Shang', 'Jianfeng Gao']
['cs.LG', 'cs.AI', 'cs.CL']
Decision trees are renowned for their ability to achieve high predictive performance while remaining interpretable, especially on tabular data. Traditionally, they are constructed through recursive algorithms, where they partition the data at every node in a tree. However, identifying a good partition is challenging, a...
2024-02-06T07:40:53Z
null
null
null
null
null
null
null
null
null
null
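For contrast with the transformer-learned construction the abstract above studies, here is the classical recursive procedure it mentions, partitioning the data at every node with a greedy threshold split. This is a textbook sketch with an illustrative misclassification-count impurity, not the paper's method.

```python
# Classical greedy decision-tree induction: at each node, try every
# (feature, threshold) split and recurse on the best one.
# y: non-negative integer class labels.
import numpy as np

def majority(y):
    return int(np.bincount(y).argmax())

def build_tree(X, y, depth=0, max_depth=3):
    if depth == max_depth or len(np.unique(y)) == 1:
        return {"leaf": majority(y)}
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:   # last value would give an empty side
            mask = X[:, j] <= t
            err = sum(int(np.sum(y[m] != majority(y[m]))) for m in (mask, ~mask))
            if best is None or err < best[0]:
                best = (err, j, t, mask)
    if best is None:                        # no informative split available
        return {"leaf": majority(y)}
    _, j, t, mask = best
    return {"feat": j, "thr": float(t),
            "lo": build_tree(X[mask], y[mask], depth + 1, max_depth),
            "hi": build_tree(X[~mask], y[~mask], depth + 1, max_depth)}
```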
2,402.03804
ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs
['Zhengyan Zhang', 'Yixin Song', 'Guanghui Yu', 'Xu Han', 'Yankai Lin', 'Chaojun Xiao', 'Chenyang Song', 'Zhiyuan Liu', 'Zeyu Mi', 'Maosong Sun']
['cs.LG', 'cs.AI']
Sparse computation offers a compelling solution for the inference of Large Language Models (LLMs) in low-resource scenarios by dynamically skipping the computation of inactive neurons. While traditional approaches focus on ReLU-based LLMs, leveraging zeros in activation values, we broaden the scope of sparse LLMs beyon...
2024-02-06T08:45:51Z
null
null
null
null
null
null
null
null
null
null
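The ReLU$^2$ of the title above is squared ReLU, and the abstract's efficiency argument is that exact zeros in FFN activations let inference skip the corresponding weight rows. A small sketch, with an assumed FFN width:

```python
# Squared ReLU and the sparsity it exposes: neurons with non-positive
# pre-activations output exact zeros, so their down-projection rows
# can be skipped at inference time.
import torch

def relu_squared(x):
    return torch.relu(x) ** 2

def skippable_fraction(pre_act):
    return (pre_act <= 0).float().mean().item()

h = torch.randn(4, 11008)        # 11008 = LLaMA-7B FFN width, for illustration
print(skippable_fraction(h))     # ~0.5 for Gaussian inputs; trained LLMs can be sparser
```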
2,402.03885
MOMENT: A Family of Open Time-series Foundation Models
['Mononito Goswami', 'Konrad Szafer', 'Arjun Choudhry', 'Yifu Cai', 'Shuo Li', 'Artur Dubrawski']
['cs.LG', 'cs.AI']
We introduce MOMENT, a family of open-source foundation models for general-purpose time series analysis. Pre-training large models on time series data is challenging due to (1) the absence of a large and cohesive public time series repository, and (2) diverse time series characteristics which make multi-dataset trainin...
2024-02-06T10:48:46Z
Accepted at ICML'24. This is a revision. See changelog in the Appendix
null
null
null
null
null
null
null
null
null
2,402.04249
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal
['Mantas Mazeika', 'Long Phan', 'Xuwang Yin', 'Andy Zou', 'Zifan Wang', 'Norman Mu', 'Elham Sakhaee', 'Nathaniel Li', 'Steven Basart', 'Bo Li', 'David Forsyth', 'Dan Hendrycks']
['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV']
Automated red teaming holds substantial promise for uncovering and mitigating the risks associated with the malicious use of large language models (LLMs), yet the field lacks a standardized evaluation framework to rigorously assess new methods. To address this issue, we introduce HarmBench, a standardized evaluation fr...
2024-02-06T18:59:08Z
Website: https://www.harmbench.org
null
null
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal
['Mantas Mazeika', 'Long Phan', 'Xuwang Yin', 'Andy Zou', 'Zifan Wang', 'Norman Mu', 'Elham Sakhaee', 'Nathaniel Li', 'Steven Basart', 'Bo Li', 'David Forsyth', 'Dan Hendrycks']
2,024
International Conference on Machine Learning
419
107
['Computer Science']
2,402.04252
EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
['Quan Sun', 'Jinsheng Wang', 'Qiying Yu', 'Yufeng Cui', 'Fan Zhang', 'Xiaosong Zhang', 'Xinlong Wang']
['cs.CV']
Scaling up contrastive language-image pretraining (CLIP) is critical for empowering both vision and multimodal models. We present EVA-CLIP-18B, the largest and most powerful open-source CLIP model to date, with 18-billion parameters. With only 6-billion training samples seen, EVA-CLIP-18B achieves an exceptional 80.7% ...
2024-02-06T18:59:48Z
null
null
null
null
null
null
null
null
null
null
2,402.04324
ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation
['Weiming Ren', 'Huan Yang', 'Ge Zhang', 'Cong Wei', 'Xinrun Du', 'Wenhao Huang', 'Wenhu Chen']
['cs.CV']
Image-to-video (I2V) generation aims to use the initial frame (alongside a text prompt) to create a video sequence. A grand challenge in I2V generation is to maintain visual consistency throughout the video: existing methods often struggle to preserve the integrity of the subject, background, and style from the first f...
2024-02-06T19:08:18Z
Project Page: https://tiger-ai-lab.github.io/ConsistI2V/
null
null
ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation
['Weiming Ren', 'Harry Yang', 'Ge Zhang', 'Cong Wei', 'Xinrun Du', 'Stephen W. Huang', 'Wenhu Chen']
2,024
Trans. Mach. Learn. Res.
66
73
['Computer Science']
2,402.04379
Fine-Tuned Language Models Generate Stable Inorganic Materials as Text
['Nate Gruver', 'Anuroop Sriram', 'Andrea Madotto', 'Andrew Gordon Wilson', 'C. Lawrence Zitnick', 'Zachary Ulissi']
['cs.LG', 'cond-mat.mtrl-sci']
We propose fine-tuning large language models for generation of stable materials. While unorthodox, fine-tuning large language models on text-encoded atomistic data is simple to implement yet reliable, with around 90% of sampled structures obeying physical constraints on atom positions and charges. Using energy above hu...
2024-02-06T20:35:28Z
ICLR 2024. Code available at: https://github.com/facebookresearch/crystal-llm
null
null
Fine-Tuned Language Models Generate Stable Inorganic Materials as Text
['Nate Gruver', 'Anuroop Sriram', 'Andrea Madotto', 'A. Wilson', 'C. L. Zitnick', 'Zachary W. Ulissi', 'Meta Fair']
2,024
International Conference on Learning Representations
67
41
['Computer Science', 'Physics']
2,402.04588
UltraLink: An Open-Source Knowledge-Enhanced Multilingual Supervised Fine-tuning Dataset
['Haoyu Wang', 'Shuo Wang', 'Yukun Yan', 'Xujia Wang', 'Zhiyu Yang', 'Yuzhuang Xu', 'Zhenghao Liu', 'Liner Yang', 'Ning Ding', 'Xu Han', 'Zhiyuan Liu', 'Maosong Sun']
['cs.CL']
Open-source large language models (LLMs) have gained significant strength across diverse fields. Nevertheless, the majority of studies primarily concentrate on English, with only limited exploration into the realm of multilingual abilities. In this work, we therefore construct an open-source multilingual supervised fin...
2024-02-07T05:05:53Z
Work in Progress
null
null
UltraLink: An Open-Source Knowledge-Enhanced Multilingual Supervised Fine-tuning Dataset
['Haoyu Wang', 'Shuo Wang', 'Yukun Yan', 'Xujia Wang', 'Zhiyu Yang', 'Yuzhuang Xu', 'Zhenghao Liu', 'Ning Ding', 'Xu Han', 'Zhiyuan Liu', 'Maosong Sun']
2,024
Annual Meeting of the Association for Computational Linguistics
0
21
['Computer Science']
2,402.04624
MEMORYLLM: Towards Self-Updatable Large Language Models
['Yu Wang', 'Yifan Gao', 'Xiusi Chen', 'Haoming Jiang', 'Shiyang Li', 'Jingfeng Yang', 'Qingyu Yin', 'Zheng Li', 'Xian Li', 'Bing Yin', 'Jingbo Shang', 'Julian McAuley']
['cs.CL']
Existing Large Language Models (LLMs) usually remain static after deployment, which might make it hard to inject new knowledge into the model. We aim to build models containing a considerable portion of self-updatable parameters, enabling the model to integrate new knowledge effectively and efficiently. To this end, we...
2024-02-07T07:14:11Z
13 pages, 9 figures
null
null
null
null
null
null
null
null
null
2,402.04717
InstructScene: Instruction-Driven 3D Indoor Scene Synthesis with Semantic Graph Prior
['Chenguo Lin', 'Yadong Mu']
['cs.CV']
Comprehending natural language instructions is an appealing property for 3D indoor scene synthesis systems. Existing methods directly model object joint distributions and express object relations implicitly within a scene, thereby hindering the controllability of generation. We introduce InstructScene, a novel generative...
2024-02-07T10:09:00Z
Accepted by ICLR 2024 for spotlight presentation; Project page: https://chenguolin.github.io/projects/InstructScene
null
null
InstructScene: Instruction-Driven 3D Indoor Scene Synthesis with Semantic Graph Prior
['Chenguo Lin', 'Yadong Mu']
2,024
International Conference on Learning Representations
40
83
['Computer Science']
2,402.04792
Direct Language Model Alignment from Online AI Feedback
['Shangmin Guo', 'Biao Zhang', 'Tianlin Liu', 'Tianqi Liu', 'Misha Khalman', 'Felipe Llinares', 'Alexandre Rame', 'Thomas Mesnard', 'Yao Zhao', 'Bilal Piot', 'Johan Ferret', 'Mathieu Blondel']
['cs.AI', 'cs.CL', 'cs.HC']
Direct alignment from preferences (DAP) methods, such as DPO, have recently emerged as efficient alternatives to reinforcement learning from human feedback (RLHF), that do not require a separate reward model. However, the preference datasets used in DAP methods are usually collected ahead of training and never updated,...
2024-02-07T12:31:13Z
18 pages, 9 figures, 4 tables
null
null
null
null
null
null
null
null
null
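Since the abstract above contrasts fixed preference datasets with preferences collected during training, a sketch of the DPO objective helps locate where the online feedback enters: the (winner, loser) pair comes from an AI annotator judging two fresh samples from the current policy. The loss below is standard DPO; the surrounding online loop is an assumption about the setup, not a verbatim rendering of the paper.

```python
# Standard DPO loss; in an online-feedback loop, the winner/loser pair
# comes from two fresh samples of the current policy, ranked by an LLM
# annotator rather than drawn from a pre-collected preference dataset.
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # logp_*: summed token log-probs of winner/loser under policy and reference
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()

loss = dpo_loss(torch.tensor([-4.2]), torch.tensor([-5.0]),
                torch.tensor([-4.5]), torch.tensor([-4.9]))
```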
2,402.04841
Data-efficient Large Vision Models through Sequential Autoregression
['Jianyuan Guo', 'Zhiwei Hao', 'Chengcheng Wang', 'Yehui Tang', 'Han Wu', 'Han Hu', 'Kai Han', 'Chang Xu']
['cs.CV']
Training general-purpose vision models on purely sequential visual data, eschewing linguistic inputs, has heralded a new frontier in visual understanding. These models are intended to not only comprehend but also seamlessly transition to out-of-domain tasks. However, current endeavors are hamstrung by an over-reliance on ...
2024-02-07T13:41:53Z
15 pages
ICML 2024
null
null
null
null
null
null
null
null
2,402.04914
Personalized Text Generation with Fine-Grained Linguistic Control
['Bashar Alhafni', 'Vivek Kulkarni', 'Dhruv Kumar', 'Vipul Raheja']
['cs.CL']
As the text generation capabilities of large language models become increasingly prominent, recent studies have focused on controlling particular aspects of the generated text to make it more personalized. However, most research on controllable text generation focuses on controlling the content or modeling specific hig...
2024-02-07T14:41:08Z
null
null
null
null
null
null
null
null
null
null
2,402.05
Pedagogical Alignment of Large Language Models
['Shashank Sonkar', 'Kangqi Ni', 'Sapana Chaudhary', 'Richard G. Baraniuk']
['cs.CL']
Large Language Models (LLMs), when used in educational settings without pedagogical fine-tuning, often provide immediate answers rather than guiding students through the problem-solving process. This approach falls short of pedagogically best practices and limits their effectiveness as educational tools. We term the ob...
2024-02-07T16:15:59Z
Accepted at EMNLP 2024 Findings Track
null
null
null
null
null
null
null
null
null
2,402.05008
EfficientViT-SAM: Accelerated Segment Anything Model Without Accuracy Loss
['Zhuoyang Zhang', 'Han Cai', 'Song Han']
['cs.CV', 'cs.AI', 'cs.LG']
We present EfficientViT-SAM, a new family of accelerated segment anything models. We retain SAM's lightweight prompt encoder and mask decoder while replacing the heavy image encoder with EfficientViT. For the training, we begin with the knowledge distillation from the SAM-ViT-H image encoder to EfficientViT. Subsequent...
2024-02-07T16:28:36Z
CVPR 2024 Workshop (Efficient Large Vision Models)
null
null
EfficientViT-SAM: Accelerated Segment Anything Model Without Accuracy Loss
['Zhuoyang Zhang', 'Han Cai', 'Song Han']
2,024
null
3
24
['Computer Science']
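The first training stage the EfficientViT-SAM abstract describes, distilling the SAM-ViT-H image encoder into EfficientViT, amounts to matching image embeddings against a frozen teacher. A minimal sketch, assuming SAM's 256x64x64 embedding grid; the MSE matching loss is an illustrative choice, not necessarily the paper's.

```python
# Feature distillation from a frozen SAM-ViT-H image encoder to a
# lightweight student encoder: match image embeddings while keeping
# SAM's prompt encoder and mask decoder unchanged.
import torch
import torch.nn.functional as F

def encoder_distill_loss(student_feats, teacher_feats):
    return F.mse_loss(student_feats, teacher_feats.detach())

student_out = torch.randn(2, 256, 64, 64, requires_grad=True)  # student encoder output
teacher_out = torch.randn(2, 256, 64, 64)                      # frozen SAM-ViT-H output
encoder_distill_loss(student_out, teacher_out).backward()
```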
2,402.05044
SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models
['Lijun Li', 'Bowen Dong', 'Ruohui Wang', 'Xuhao Hu', 'Wangmeng Zuo', 'Dahua Lin', 'Yu Qiao', 'Jing Shao']
['cs.CL', 'cs.AI', 'cs.CR', 'cs.LG']
In the rapidly evolving landscape of Large Language Models (LLMs), ensuring robust safety measures is paramount. To meet this crucial need, we propose SALAD-Bench, a safety benchmark specifically designed for evaluating LLMs, attack, and defense methods. Distinguished by its breadth, SALAD-Bench transcends conve...
2024-02-07T17:33:54Z
Accepted at ACL 2024 Findings
null
null
SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models
['Lijun Li', 'Bowen Dong', 'Ruohui Wang', 'Xuhao Hu', 'Wangmeng Zuo', 'Dahua Lin', 'Yu Qiao', 'Jing Shao']
2,024
Annual Meeting of the Association for Computational Linguistics
106
59
['Computer Science']
2,402.05054
LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation
['Jiaxiang Tang', 'Zhaoxi Chen', 'Xiaokang Chen', 'Tengfei Wang', 'Gang Zeng', 'Ziwei Liu']
['cs.CV']
3D content creation has achieved significant progress in terms of both quality and speed. Although current feed-forward models can produce 3D objects in seconds, their resolution is constrained by the intensive computation required during training. In this paper, we introduce Large Multi-View Gaussian Model (LGM), a no...
2024-02-07T17:57:03Z
Project page: https://me.kiui.moe/lgm/
null
null
null
null
null
null
null
null
null
2,402.0512
More Agents Is All You Need
['Junyou Li', 'Qin Zhang', 'Yangbin Yu', 'Qiang Fu', 'Deheng Ye']
['cs.CL', 'cs.AI', 'cs.LG']
We find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated. Also, this method, termed Agent Forest, is orthogonal to existing complicated methods to further enhance LLMs, while the degree of enhancement is correlated to the tas...
2024-02-03T05:55:24Z
Published at Transactions on Machine Learning Research (TMLR)
null
null
More Agents Is All You Need
['Junyou Li', 'Qin Zhang', 'Yangbin Yu', 'Qiang Fu', 'Deheng Ye']
2,024
Trans. Mach. Learn. Res.
73
40
['Computer Science']
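The sampling-and-voting method in the final record above reduces to a few lines. The sketch below assumes an arbitrary `generate` callable standing in for an LLM call with nonzero temperature; answer normalization and tie-breaking are left out.

```python
# Sampling-and-voting: query the model N times, return the majority answer.
from collections import Counter
import random

def sample_and_vote(generate, prompt, n_agents=10):
    answers = [generate(prompt) for _ in range(n_agents)]
    return Counter(answers).most_common(1)[0][0]

# stochastic stub standing in for a real LLM call:
stub = lambda p: random.choice(["42", "42", "41"])
print(sample_and_vote(stub, "What is 6 x 7?"))
```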