Dataset schema (per-column type and value statistics):

  arxiv_id           float64      values 1.5k to 2.51k (arXiv IDs parsed as floats)
  title              string       lengths 9 to 178
  authors            string       lengths 2 to 22.8k
  categories         string       lengths 4 to 146
  summary            string       lengths 103 to 1.92k
  published          date string  2015-02-06 10:44:00 to 2025-07-10 17:59:58
  comments           string       lengths 2 to 417
  journal_ref        string       321 distinct values
  doi                string       398 distinct values
  ss_title           string       lengths 8 to 159
  ss_authors         string       lengths 11 to 8.38k
  ss_year            float64      values 2.02k to 2.03k (years stored as floats)
  ss_venue           string       281 distinct values
  ss_citationCount   float64      values 0 to 134k
  ss_referenceCount  float64      values 0 to 429
  ss_fieldsOfStudy   string       47 distinct values

Each record below lists these 16 fields in this order, one per line; ss_* fields are null when no Semantic Scholar match exists.
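Because arxiv_id is stored as float64, IDs lose their canonical zero-padded form (for example, 2309.1602 below is really arXiv ID 2309.16020). A minimal sketch for restoring the YYMM.NNNNN string form, assuming every ID in this dataset uses the 5-digit sequence numbers arXiv adopted in 2015 (consistent with the published-date range above); the function name is illustrative:

```python
def restore_arxiv_id(value: float) -> str:
    """Convert a float64 arxiv_id back to its canonical YYMM.NNNNN string.

    Assumes post-2014 arXiv IDs, which always carry a 5-digit sequence
    number, so a trailing zero dropped by float parsing can be re-padded.
    """
    yymm = int(value)                      # e.g. 2309
    seq = round((value - yymm) * 100_000)  # e.g. 0.1602 -> 16020
    return f"{yymm:04d}.{seq:05d}"

print(restore_arxiv_id(2309.1602))   # 2309.16020
print(restore_arxiv_id(2309.11497))  # 2309.11497
```

Rounding (rather than truncating) absorbs the small binary-float representation error before re-padding.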
2309.11497
FreeU: Free Lunch in Diffusion U-Net
['Chenyang Si', 'Ziqi Huang', 'Yuming Jiang', 'Ziwei Liu']
['cs.CV']
In this paper, we uncover the untapped potential of diffusion U-Net, which serves as a "free lunch" that substantially improves the generation quality on the fly. We initially investigate the key contributions of the U-Net architecture to the denoising process and identify that its main backbone primarily contributes t...
2023-09-20T17:56:18Z
Method update: we proposed structure-based scaling to enhance the performance of FreeU. Project page: https://chenyangsi.top/FreeU/
null
null
null
null
null
null
null
null
null
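The authors, categories, ss_authors, and ss_fieldsOfStudy fields are stored as stringified Python lists rather than native arrays. A minimal parsing sketch, assuming rows arrive as plain strings (the sample values are copied from the record above; the function name is illustrative):

```python
import ast

def parse_list_field(raw: str) -> list[str]:
    """Safely evaluate a stringified Python list such as "['cs.CV']".

    ast.literal_eval only accepts literals, so it is safe on untrusted
    strings where eval() would not be.
    """
    value = ast.literal_eval(raw)
    if not isinstance(value, list):
        raise ValueError(f"expected a list, got {type(value).__name__}")
    return value

authors = parse_list_field("['Chenyang Si', 'Ziqi Huang', 'Yuming Jiang', 'Ziwei Liu']")
categories = parse_list_field("['cs.CV']")
print(len(authors), categories[0])  # 4 cs.CV
```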
2309.11566
SignBank+: Preparing a Multilingual Sign Language Dataset for Machine Translation Using Large Language Models
['Amit Moryossef', 'Zifan Jiang']
['cs.CL']
We introduce SignBank+, a clean version of the SignBank dataset, optimized for machine translation between spoken language text and SignWriting, a phonetic sign language writing system. In addition to previous work that employs complex factorization techniques to enable translation between text and SignWriting, we show...
2023-09-20T18:08:28Z
null
null
null
SignBank+: Preparing a Multilingual Sign Language Dataset for Machine Translation Using Large Language Models
['Amit Moryossef', 'Zifan Jiang']
2023
null
0
27
['Computer Science']
2309.11568
BTLM-3B-8K: 7B Parameter Performance in a 3B Parameter Model
['Nolan Dey', 'Daria Soboleva', 'Faisal Al-Khateeb', 'Bowen Yang', 'Ribhu Pathria', 'Hemant Khachane', 'Shaheer Muhammad', 'Zhiming Chen', 'Robert Myers', 'Jacob Robert Steeves', 'Natalia Vassilieva', 'Marvin Tom', 'Joel Hestness']
['cs.AI', 'cs.CL', 'cs.LG']
We introduce the Bittensor Language Model, called "BTLM-3B-8K", a new state-of-the-art 3 billion parameter open-source language model. BTLM-3B-8K was trained on 627B tokens from the SlimPajama dataset with a mixture of 2,048 and 8,192 context lengths. BTLM-3B-8K outperforms all existing 3B parameter models by 2-5.5% ac...
2023-09-20T18:12:56Z
null
null
null
null
null
null
null
null
null
null
2309.11674
A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models
['Haoran Xu', 'Young Jin Kim', 'Amr Sharaf', 'Hany Hassan Awadalla']
['cs.CL']
Generative Large Language Models (LLMs) have achieved remarkable advancements in various NLP tasks. However, these advances have not been reflected in the translation task, especially those with moderate model sizes (i.e., 7B or 13B parameters), which still lag behind conventional supervised encoder-decoder translation...
2023-09-20T22:53:15Z
Accepted at ICLR 2024
null
null
null
null
null
null
null
null
null
2309.11925
Scaling up COMETKIWI: Unbabel-IST 2023 Submission for the Quality Estimation Shared Task
['Ricardo Rei', 'Nuno M. Guerreiro', 'José Pombal', 'Daan van Stigt', 'Marcos Treviso', 'Luisa Coheur', 'José G. C. de Souza', 'André F. T. Martins']
['cs.CL']
We present the joint contribution of Unbabel and Instituto Superior Técnico to the WMT 2023 Shared Task on Quality Estimation (QE). Our team participated on all tasks: sentence- and word-level quality prediction (task 1) and fine-grained error span detection (task 2). For all tasks, we build on the COMETKIWI-22 model...
2023-09-21T09:38:56Z
null
null
null
Scaling up CometKiwi: Unbabel-IST 2023 Submission for the Quality Estimation Shared Task
['Ricardo Rei', 'Nuno M. Guerreiro', 'José P. Pombal', 'Daan van Stigt', 'Marcos Vinícius Treviso', 'Luísa Coheur', 'José G. C. de Souza', 'André F. T. Martins']
2023
Conference on Machine Translation
63
17
['Computer Science']
2309.11998
LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset
['Lianmin Zheng', 'Wei-Lin Chiang', 'Ying Sheng', 'Tianle Li', 'Siyuan Zhuang', 'Zhanghao Wu', 'Yonghao Zhuang', 'Zhuohan Li', 'Zi Lin', 'Eric P. Xing', 'Joseph E. Gonzalez', 'Ion Stoica', 'Hao Zhang']
['cs.CL', 'cs.AI']
Studying how people interact with large language models (LLMs) in real-world scenarios is increasingly important due to their widespread use in various applications. In this paper, we introduce LMSYS-Chat-1M, a large-scale dataset containing one million real-world conversations with 25 state-of-the-art LLMs. This datas...
2023-09-21T12:13:55Z
null
null
null
null
null
null
null
null
null
null
2309.12053
AceGPT, Localizing Large Language Models in Arabic
['Huang Huang', 'Fei Yu', 'Jianqing Zhu', 'Xuening Sun', 'Hao Cheng', 'Dingjie Song', 'Zhihong Chen', 'Abdulmohsen Alharthi', 'Bang An', 'Juncai He', 'Ziche Liu', 'Zhiyi Zhang', 'Junying Chen', 'Jianquan Li', 'Benyou Wang', 'Lian Zhang', 'Ruoyu Sun', 'Xiang Wan', 'Haizhou Li', 'Jinchao Xu']
['cs.CL']
This paper is devoted to the development of a localized Large Language Model (LLM) specifically for Arabic, a language imbued with unique cultural characteristics inadequately addressed by current mainstream models. Significant concerns emerge when addressing cultural sensitivity and local values. To address this, the ...
2023-09-21T13:20:13Z
Accepted to NAACL main conference. https://github.com/FreedomIntelligence/AceGPT
null
null
null
null
null
null
null
null
null
2309.12161
Code Soliloquies for Accurate Calculations in Large Language Models
['Shashank Sonkar', 'MyCo Le', 'Xinghe Chen', 'Naiming Liu', 'Debshila Basu Mallick', 'Richard G. Baraniuk']
['cs.CL']
High-quality conversational datasets are crucial for the successful development of Intelligent Tutoring Systems (ITS) that utilize a Large Language Model (LLM) backend. Synthetic student-teacher dialogues, generated using advanced GPT-4 models, are a common strategy for creating these datasets. However, subjects like p...
2023-09-21T15:16:58Z
null
null
null
null
null
null
null
null
null
null
2309.12284
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
['Longhui Yu', 'Weisen Jiang', 'Han Shi', 'Jincheng Yu', 'Zhengying Liu', 'Yu Zhang', 'James T. Kwok', 'Zhenguo Li', 'Adrian Weller', 'Weiyang Liu']
['cs.CL', 'cs.AI']
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problem due to the complex reasoning procedures. ...
2023-09-21T17:45:42Z
To appear at ICLR 2024 (Spotlight). Project Page: https://meta-math.github.io/
null
null
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
['L. Yu', 'Weisen Jiang', 'Han Shi', 'Jincheng Yu', 'Zhengying Liu', 'Yu Zhang', 'James T. Kwok', 'Zheng Li', 'Adrian Weller', 'Weiyang Liu']
2023
International Conference on Learning Representations
395
84
['Computer Science']
2309.12307
LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models
['Yukang Chen', 'Shengju Qian', 'Haotian Tang', 'Xin Lai', 'Zhijian Liu', 'Song Han', 'Jiaya Jia']
['cs.CL', 'cs.AI', 'cs.LG']
We present LongLoRA, an efficient fine-tuning approach that extends the context sizes of pre-trained large language models (LLMs), with limited computation cost. Typically, training LLMs with long context sizes is computationally expensive, requiring extensive training hours and GPU resources. For example, training on ...
2023-09-21T17:59:11Z
Code, models, dataset, and demo are available at https://github.com/dvlab-research/LongLoRA
null
null
null
null
null
null
null
null
null
2309.12871
AnglE-optimized Text Embeddings
['Xianming Li', 'Jing Li']
['cs.CL', 'cs.AI', 'cs.LG']
High-quality text embedding is pivotal in improving semantic textual similarity (STS) tasks, which are crucial components in Large Language Model (LLM) applications. However, a common challenge existing text embedding models face is the problem of vanishing gradients, primarily due to their reliance on the cosine funct...
2023-09-22T13:52:42Z
Published at the Proceedings of ACL24. AoE: Angle-optimized Embeddings for Semantic Textual Similarity (https://aclanthology.org/2024.acl-long.101/)
null
null
null
null
null
null
null
null
null
2309.13202
Investigating Large Language Models and Control Mechanisms to Improve Text Readability of Biomedical Abstracts
['Zihao Li', 'Samuel Belkadi', 'Nicolo Micheletti', 'Lifeng Han', 'Matthew Shardlow', 'Goran Nenadic']
['cs.CL', 'cs.AI']
Biomedical literature often uses complex language and inaccessible professional terminologies. That is why simplification plays an important role in improving public health literacy. Applying Natural Language Processing (NLP) models to automate such tasks allows for quick and direct accessibility for lay readers. In th...
2023-09-22T22:47:32Z
Accepted by IEEE-ICHI 2024 https://ieeeichi2024.github.io/
null
null
null
null
null
null
null
null
null
2309.13259
EMelodyGen: Emotion-Conditioned Melody Generation in ABC Notation with the Musical Feature Template
['Monan Zhou', 'Xiaobing Li', 'Feng Yu', 'Wei Li']
['cs.IR', 'cs.AI', 'cs.SD', 'eess.AS']
The EMelodyGen system focuses on emotional melody generation in ABC notation controlled by the musical feature template. Owing to the scarcity of well-structured and emotionally labeled sheet music, we designed a template for controlling emotional melody generation by statistical correlations between musical features a...
2023-09-23T04:46:28Z
6 pages, 4 figures, accepted by ICMEW2025
2025 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Nantes, France, 2025
null
null
null
null
null
null
null
null
2309.13353
Beyond Grids: Exploring Elastic Input Sampling for Vision Transformers
['Adam Pardyl', 'Grzegorz Kurzejamski', 'Jan Olszewski', 'Tomasz Trzciński', 'Bartosz Zieliński']
['cs.CV']
Vision transformers have excelled in various computer vision tasks but mostly rely on rigid input sampling using a fixed-size grid of patches. It limits their applicability in real-world problems, such as active visual exploration, where patches have various scales and positions. Our paper addresses this limitation by ...
2023-09-23T12:03:30Z
WACV 2025
null
null
Beyond Grids: Exploring Elastic Input Sampling for Vision Transformers
['Adam Pardyl', 'Grzegorz Kurzejamski', 'Jan Olszewski', 'Tomasz Trzciński', 'Bartosz Zieliński']
2023
IEEE Workshop/Winter Conference on Applications of Computer Vision
1
36
['Computer Science']
2309.13567
MentaLLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models
['Kailai Yang', 'Tianlin Zhang', 'Ziyan Kuang', 'Qianqian Xie', 'Jimin Huang', 'Sophia Ananiadou']
['cs.CL']
With the development of web technology, social media texts are becoming a rich source for automatic mental health analysis. As traditional discriminative methods bear the problem of low interpretability, the recent large language models have been explored for interpretable mental health analysis on social media, which ...
2023-09-24T06:46:08Z
Accepted by WWW 2024
null
10.1145/3589334.3648137
MentaLLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models
['Kailai Yang', 'Tianlin Zhang', 'Zi-Zhou Kuang', 'Qianqian Xie', 'Sophia Ananiadou']
2023
The Web Conference
58
62
['Computer Science']
2309.13876
Reproducing Whisper-Style Training Using an Open-Source Toolkit and Publicly Available Data
['Yifan Peng', 'Jinchuan Tian', 'Brian Yan', 'Dan Berrebbi', 'Xuankai Chang', 'Xinjian Li', 'Jiatong Shi', 'Siddhant Arora', 'William Chen', 'Roshan Sharma', 'Wangyou Zhang', 'Yui Sudo', 'Muhammad Shakeel', 'Jee-weon Jung', 'Soumi Maiti', 'Shinji Watanabe']
['cs.CL', 'cs.SD', 'eess.AS']
Pre-training speech models on large volumes of data has achieved remarkable success. OpenAI Whisper is a multilingual multitask model trained on 680k hours of supervised speech data. It generalizes well to various speech recognition and translation benchmarks even in a zero-shot setup. However, the full pipeline for de...
2023-09-25T05:01:34Z
Accepted at ASRU 2023
null
null
Reproducing Whisper-Style Training Using An Open-Source Toolkit And Publicly Available Data
['Yifan Peng', 'Jinchuan Tian', 'Brian Yan', 'Dan Berrebbi', 'Xuankai Chang', 'Xinjian Li', 'Jiatong Shi', 'Siddhant Arora', 'William Chen', 'Roshan Sharma', 'Wangyou Zhang', 'Yui Sudo', 'Muhammad Shakeel', 'Jee-weon Jung', 'Soumi Maiti', 'Shinji Watanabe']
2023
Automatic Speech Recognition & Understanding
41
70
['Computer Science', 'Engineering']
2309.14113
HyperTrack: Neural Combinatorics for High Energy Physics
['Mikael Mieskolainen']
['hep-ph', 'cs.LG', 'hep-ex']
Combinatorial inverse problems in high energy physics span enormous algorithmic challenges. This work presents a new deep learning driven clustering algorithm that utilizes a space-time non-local trainable graph constructor, a graph neural network, and a set transformer. The model is trained with loss functions at the ...
2023-09-25T13:12:08Z
CHEP 2023 proceedings. 8 pages (max)
null
null
null
null
null
null
null
null
null
2309.14316
Physics of Language Models: Part 3.1, Knowledge Storage and Extraction
['Zeyuan Allen-Zhu', 'Yuanzhi Li']
['cs.CL', 'cs.AI', 'cs.LG']
Large language models (LLMs) can store a vast amount of world knowledge, often extractable via question-answering (e.g., "What is Abraham Lincoln's birthday?"). However, do they answer such questions based on exposure to similar questions during training (i.e., cheating), or by genuinely learning to extract knowledge f...
2023-09-25T17:37:20Z
V2 polishes writing + fixes author name; V3 includes additional Llama experiments and writing improvements
null
null
null
null
null
null
null
null
null
2309.14322
Small-scale proxies for large-scale Transformer training instabilities
['Mitchell Wortsman', 'Peter J. Liu', 'Lechao Xiao', 'Katie Everett', 'Alex Alemi', 'Ben Adlam', 'John D. Co-Reyes', 'Izzeddin Gur', 'Abhishek Kumar', 'Roman Novak', 'Jeffrey Pennington', 'Jascha Sohl-dickstein', 'Kelvin Xu', 'Jaehoon Lee', 'Justin Gilmer', 'Simon Kornblith']
['cs.LG']
Teams that have trained large Transformer-based models have reported training instabilities at large scale that did not appear when training with the same hyperparameters at smaller scales. Although the causes of such instabilities are of scientific interest, the amount of resources required to reproduce them has made ...
2023-09-25T17:48:51Z
null
null
null
null
null
null
null
null
null
null
2309.14402
Physics of Language Models: Part 3.2, Knowledge Manipulation
['Zeyuan Allen-Zhu', 'Yuanzhi Li']
['cs.CL', 'cs.AI', 'cs.LG']
Language models can store vast factual knowledge, yet their ability to flexibly use this knowledge for downstream tasks (e.g., via instruction finetuning) remains questionable. This paper investigates four fundamental knowledge manipulation tasks: retrieval (e.g., "What is person A's attribute X?"), classification (e.g...
2023-09-25T17:50:41Z
V2 polishes writing and includes additional Llama/Mistral experiments and larger data; but the conclusions remain unchanged
null
null
Physics of Language Models: Part 3.2, Knowledge Manipulation
['Zeyuan Allen-Zhu', 'Yuanzhi Li']
2023
International Conference on Learning Representations
105
37
['Computer Science']
2309.14405
Joint Audio and Speech Understanding
['Yuan Gong', 'Alexander H. Liu', 'Hongyin Luo', 'Leonid Karlinsky', 'James Glass']
['cs.SD', 'cs.AI', 'eess.AS']
Humans are surrounded by audio signals that include both speech and non-speech sounds. The recognition and understanding of speech and non-speech audio events, along with a profound comprehension of the relationship between them, constitute fundamental cognitive capabilities. For the first time, we build a machine lear...
2023-09-25T17:59:05Z
Accepted at ASRU 2023. Code, dataset, and pretrained models are at https://github.com/yuangongnd/ltu. Interactive demo at https://huggingface.co/spaces/yuangongfdu/ltu-2
null
null
null
null
null
null
null
null
null
2309.14507
Noise-Robust DSP-Assisted Neural Pitch Estimation with Very Low Complexity
['Krishna Subramani', 'Jean-Marc Valin', 'Jan Buethe', 'Paris Smaragdis', 'Mike Goodwin']
['eess.AS', 'cs.SD']
Pitch estimation is an essential step of many speech processing algorithms, including speech coding, synthesis, and enhancement. Recently, pitch estimators based on deep neural networks (DNNs) have been outperforming well-established DSP-based techniques. Unfortunately, these new estimators can be impractical to d...
2023-09-25T20:14:31Z
Submitted to ICASSP 2024, 5 pages
null
null
Noise-Robust DSP-Assisted Neural Pitch Estimation With Very Low Complexity
['K. Subramani', 'J. Valin', 'Jan Büthe', 'Paris Smaragdis', 'Mike Goodwin']
2023
IEEE International Conference on Acoustics, Speech, and Signal Processing
3
32
['Engineering', 'Computer Science']
2309.14509
DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models
['Sam Ade Jacobs', 'Masahiro Tanaka', 'Chengming Zhang', 'Minjia Zhang', 'Shuaiwen Leon Song', 'Samyam Rajbhandari', 'Yuxiong He']
['cs.LG', 'cs.CL', 'cs.DC']
Computation in a typical Transformer-based large language model (LLM) can be characterized by batch size, hidden dimension, number of layers, and sequence length. Until now, system works for accelerating LLM training have focused on the first three dimensions: data parallelism for batch size, tensor parallelism for hid...
2023-09-25T20:15:57Z
null
null
null
null
null
null
null
null
null
null
2309.14859
Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation
['Shih-Ying Yeh', 'Yu-Guan Hsieh', 'Zhidong Gao', 'Bernard B W Yang', 'Giyeong Oh', 'Yanmin Gong']
['cs.CV', 'cs.AI', 'cs.GR', 'cs.LG']
Text-to-image generative models have garnered immense attention for their ability to produce high-fidelity images from text prompts. Among these, Stable Diffusion distinguishes itself as a leading open-source model in this fast-growing field. However, the intricacies of fine-tuning these models pose multiple challenges...
2023-09-26T11:36:26Z
In International Conference on Learning Representations 12 (ICLR 2024) [79 pages, 54 figures, 7 tables]
null
null
Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation
['Shin-Ying Yeh', 'Yu-Guan Hsieh', 'Zhidong Gao', 'Bernard B. W. Yang', 'Giyeong Oh', 'Yanmin Gong']
2023
arXiv.org
87
0
['Computer Science']
2309.15088
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
['Ronak Pradeep', 'Sahel Sharifymoghaddam', 'Jimmy Lin']
['cs.IR', 'cs.CL']
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-determin...
2023-09-26T17:31:57Z
null
null
null
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
['Ronak Pradeep', 'Sahel Sharifymoghaddam', 'Jimmy Lin']
2023
arXiv.org
43
41
['Computer Science']
2309.15103
LAVIE: High-Quality Video Generation with Cascaded Latent Diffusion Models
['Yaohui Wang', 'Xinyuan Chen', 'Xin Ma', 'Shangchen Zhou', 'Ziqi Huang', 'Yi Wang', 'Ceyuan Yang', 'Yinan He', 'Jiashuo Yu', 'Peiqing Yang', 'Yuwei Guo', 'Tianxing Wu', 'Chenyang Si', 'Yuming Jiang', 'Cunjian Chen', 'Chen Change Loy', 'Bo Dai', 'Dahua Lin', 'Yu Qiao', 'Ziwei Liu']
['cs.CV']
This work aims to learn a high-quality text-to-video (T2V) generative model by leveraging a pre-trained text-to-image (T2I) model as a basis. It is a highly desirable yet challenging task to simultaneously a) accomplish the synthesis of visually realistic and temporally coherent videos while b) preserving the strong cr...
2023-09-26T17:52:03Z
Project webpage: https://vchitect.github.io/LaVie-project/
null
null
LAVIE: High-Quality Video Generation with Cascaded Latent Diffusion Models
['Yaohui Wang', 'Xinyuan Chen', 'Xin Ma', 'Shangchen Zhou', 'Ziqi Huang', 'Yi Wang', 'Ceyuan Yang', 'Yinan He', 'Jiashuo Yu', 'Pe-der Yang', 'Yuwei Guo', 'Tianxing Wu', 'Chenyang Si', 'Yuming Jiang', 'Cunjian Chen', 'Chen Change Loy', 'Bo Dai', 'Dahua Lin', 'Y. Qiao', 'Ziwei Liu']
2023
International Journal of Computer Vision
231
76
['Computer Science']
2309.15112
InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition
['Pan Zhang', 'Xiaoyi Dong', 'Bin Wang', 'Yuhang Cao', 'Chao Xu', 'Linke Ouyang', 'Zhiyuan Zhao', 'Haodong Duan', 'Songyang Zhang', 'Shuangrui Ding', 'Wenwei Zhang', 'Hang Yan', 'Xinyue Zhang', 'Wei Li', 'Jingwen Li', 'Kai Chen', 'Conghui He', 'Xingcheng Zhang', 'Yu Qiao', 'Dahua Lin', 'Jiaqi Wang']
['cs.CV']
We propose InternLM-XComposer, a vision-language large model that enables advanced image-text comprehension and composition. The innovative nature of our model is highlighted by three appealing properties: 1) Interleaved Text-Image Composition: InternLM-XComposer can effortlessly generate coherent and contextual articl...
2023-09-26T17:58:20Z
Code and models are available at https://github.com/InternLM/InternLM-XComposer
null
null
null
null
null
null
null
null
null
2309.15217
Ragas: Automated Evaluation of Retrieval Augmented Generation
['Shahul Es', 'Jithin James', 'Luis Espinosa-Anke', 'Steven Schockaert']
['cs.CL']
We introduce Ragas (Retrieval Augmented Generation Assessment), a framework for reference-free evaluation of Retrieval Augmented Generation (RAG) pipelines. RAG systems are composed of a retrieval and an LLM based generation module, and provide LLMs with knowledge from a reference textual database, which enables them t...
2023-09-26T19:23:54Z
Reference-free (not tied to having ground truth available) evaluation framework for retrieval augmented generation
null
null
null
null
null
null
null
null
null
2309.15317
Joint Prediction and Denoising for Large-scale Multilingual Self-supervised Learning
['William Chen', 'Jiatong Shi', 'Brian Yan', 'Dan Berrebbi', 'Wangyou Zhang', 'Yifan Peng', 'Xuankai Chang', 'Soumi Maiti', 'Shinji Watanabe']
['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS']
Multilingual self-supervised learning (SSL) has often lagged behind state-of-the-art (SOTA) methods due to the expenses and complexity required to handle many languages. This further harms the reproducibility of SSL, which is already limited to few research groups due to its resource usage. We show that more powerful t...
2023-09-26T23:55:57Z
Accepted to ASRU 2023
null
null
null
null
null
null
null
null
null
2309.15505
Finite Scalar Quantization: VQ-VAE Made Simple
['Fabian Mentzer', 'David Minnen', 'Eirikur Agustsson', 'Michael Tschannen']
['cs.CV', 'cs.LG']
We propose to replace vector quantization (VQ) in the latent representation of VQ-VAEs with a simple scheme termed finite scalar quantization (FSQ), where we project the VAE representation down to a few dimensions (typically less than 10). Each dimension is quantized to a small set of fixed values, leading to an (impli...
2023-09-27T09:13:40Z
Code: https://github.com/google-research/google-research/tree/master/fsq
null
null
Finite Scalar Quantization: VQ-VAE Made Simple
['Fabian Mentzer', 'David C. Minnen', 'E. Agustsson', 'Michael Tschannen']
2023
International Conference on Learning Representations
190
51
['Computer Science']
2309.15818
Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation
['David Junhao Zhang', 'Jay Zhangjie Wu', 'Jia-Wei Liu', 'Rui Zhao', 'Lingmin Ran', 'Yuchao Gu', 'Difei Gao', 'Mike Zheng Shou']
['cs.CV']
Significant advancements have been achieved in the realm of large-scale pre-trained text-to-video Diffusion Models (VDMs). However, previous methods either rely solely on pixel-based VDMs, which come with high computational costs, or on latent-based VDMs, which often struggle with precise text-video alignment. In this ...
2023-09-27T17:44:18Z
project page is https://showlab.github.io/Show-1
null
null
null
null
null
null
null
null
null
2309.16020
GeoCLIP: Clip-Inspired Alignment between Locations and Images for Effective Worldwide Geo-localization
['Vicente Vivanco Cepeda', 'Gaurav Kumar Nayak', 'Mubarak Shah']
['cs.CV', 'cs.LG']
Worldwide Geo-localization aims to pinpoint the precise location of images taken anywhere on Earth. This task has considerable challenges due to immense variation in geographic landscapes. The image-to-image retrieval-based approaches fail to solve this problem on a global scale as it is not feasible to construct a lar...
2023-09-27T20:54:56Z
Accepted at NeurIPS 2023
null
null
null
null
null
null
null
null
null
2309.16039
Effective Long-Context Scaling of Foundation Models
['Wenhan Xiong', 'Jingyu Liu', 'Igor Molybog', 'Hejia Zhang', 'Prajjwal Bhargava', 'Rui Hou', 'Louis Martin', 'Rashi Rungta', 'Karthik Abinav Sankararaman', 'Barlas Oguz', 'Madian Khabsa', 'Han Fang', 'Yashar Mehdad', 'Sharan Narang', 'Kshitiz Malik', 'Angela Fan', 'Shruti Bhosale', 'Sergey Edunov', 'Mike Lewis', 'Sino...
['cs.CL']
We present a series of long-context LLMs that support effective context windows of up to 32,768 tokens. Our model series are built through continual pretraining from Llama 2 with longer training sequences and on a dataset where long texts are upsampled. We perform extensive evaluation on language modeling, synthetic co...
2023-09-27T21:41:49Z
null
null
null
Effective Long-Context Scaling of Foundation Models
['Wenhan Xiong', 'Jingyu Liu', 'Igor Molybog', 'Hejia Zhang', 'Prajjwal Bhargava', 'Rui Hou', 'Louis Martin', 'Rashi Rungta', 'Karthik Abinav Sankararaman', 'Barlas Oğuz', 'Madian Khabsa', 'Han Fang', 'Yashar Mehdad', 'Sharan Narang', 'Kshitiz Malik', 'Angela Fan', 'Shruti Bhosale', 'Sergey Edunov', 'Mike Lewis', 'Sino...
2023
North American Chapter of the Association for Computational Linguistics
231
66
['Computer Science']
2309.16058
AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model
['Seungwhan Moon', 'Andrea Madotto', 'Zhaojiang Lin', 'Tushar Nagarajan', 'Matt Smith', 'Shashank Jain', 'Chun-Fu Yeh', 'Prakash Murugesan', 'Peyman Heidari', 'Yue Liu', 'Kavya Srinet', 'Babak Damavandi', 'Anuj Kumar']
['cs.LG', 'cs.CL', 'cs.CV']
We present Any-Modality Augmented Language Model (AnyMAL), a unified model that reasons over diverse input modality signals (i.e. text, image, video, audio, IMU motion sensor), and generates textual responses. AnyMAL inherits the powerful text-based reasoning abilities of the state-of-the-art LLMs including LLaMA-2 (70...
2023-09-27T22:50:51Z
null
null
null
null
null
null
null
null
null
null
2309.16287
Predicting performance difficulty from piano sheet music images
['Pedro Ramoneda', 'Jose J. Valero-Mas', 'Dasaem Jeong', 'Xavier Serra']
['cs.SD', 'cs.DL', 'eess.AS']
Estimating the performance difficulty of a musical score is crucial in music education for adequately designing the learning curriculum of the students. Although the Music Information Retrieval community has recently shown interest in this task, existing approaches mainly use machine-readable scores, leaving the broade...
2023-09-28T09:33:47Z
null
null
null
null
null
null
null
null
null
null
2309.16374
MHG-GNN: Combination of Molecular Hypergraph Grammar with Graph Neural Network
['Akihiro Kishimoto', 'Hiroshi Kajino', 'Masataka Hirose', 'Junta Fuchiwaki', 'Indra Priyadarsini', 'Lisa Hamada', 'Hajime Shinohara', 'Daiju Nakano', 'Seiji Takeda']
['cs.LG']
Property prediction plays an important role in material discovery. As an initial step to eventually develop a foundation model for material science, we introduce a new autoencoder called the MHG-GNN, which combines graph neural network (GNN) with Molecular Hypergraph Grammar (MHG). Results on a variety of property pred...
2023-09-28T12:19:43Z
8 pages, 1 figure
null
null
MHG-GNN: Combination of Molecular Hypergraph Grammar with Graph Neural Network
['Akihiro Kishimoto', 'Hiroshi Kajino', 'Masataka Hirose', 'Junta Fuchiwaki', 'Indra Priyadarsini', 'Lisa Hamada', 'Hajime Shinohara', 'D. Nakano', 'Seiji Takeda']
2023
arXiv.org
5
45
['Computer Science']
2309.16418
Efficient Supervised Training of Audio Transformers for Music Representation Learning
['Pablo Alonso-Jiménez', 'Xavier Serra', 'Dmitry Bogdanov']
['cs.SD', 'eess.AS']
In this work, we address music representation learning using convolution-free transformers. We build on top of existing spectrogram-based audio transformers such as AST and train our models on a supervised task using patchout training similar to PaSST. In contrast to previous works, we study how specific design decisio...
2023-09-28T13:11:48Z
Accepted at the 2023 International Society for Music Information Retrieval Conference (ISMIR'23)
null
null
null
null
null
null
null
null
null
2309.16496
CCEdit: Creative and Controllable Video Editing via Diffusion Models
['Ruoyu Feng', 'Wenming Weng', 'Yanhui Wang', 'Yuhui Yuan', 'Jianmin Bao', 'Chong Luo', 'Zhibo Chen', 'Baining Guo']
['cs.CV']
In this paper, we present CCEdit, a versatile generative video editing framework based on diffusion models. Our approach employs a novel trident network structure that separates structure and appearance control, ensuring precise and creative editing capabilities. Utilizing the foundational ControlNet architecture, we m...
2023-09-28T15:03:44Z
null
null
null
CCEdit: Creative and Controllable Video Editing via Diffusion Models
['Ruoyu Feng', 'Wenming Weng', 'Yanhui Wang', 'Yuhui Yuan', 'Jianmin Bao', 'Chong Luo', 'Zhibo Chen', 'Baining Guo']
2023
Computer Vision and Pattern Recognition
49
64
['Computer Science']
2309.16588
Vision Transformers Need Registers
['Timothée Darcet', 'Maxime Oquab', 'Julien Mairal', 'Piotr Bojanowski']
['cs.CV']
Transformers have recently emerged as a powerful tool for learning visual representations. In this paper, we identify and characterize artifacts in feature maps of both supervised and self-supervised ViT networks. The artifacts correspond to high-norm tokens appearing during inference primarily in low-informative backg...
2023-09-28T16:45:46Z
null
null
null
Vision Transformers Need Registers
['Timothée Darcet', 'Maxime Oquab', 'J. Mairal', 'Piotr Bojanowski']
2023
International Conference on Learning Representations
357
34
['Computer Science']
2309.16609
Qwen Technical Report
['Jinze Bai', 'Shuai Bai', 'Yunfei Chu', 'Zeyu Cui', 'Kai Dang', 'Xiaodong Deng', 'Yang Fan', 'Wenbin Ge', 'Yu Han', 'Fei Huang', 'Binyuan Hui', 'Luo Ji', 'Mei Li', 'Junyang Lin', 'Runji Lin', 'Dayiheng Liu', 'Gao Liu', 'Chengqiang Lu', 'Keming Lu', 'Jianxin Ma', 'Rui Men', 'Xingzhang Ren', 'Xuancheng Ren', 'Chuanqi Ta...
['cs.CL']
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model seri...
2023-09-28T17:07:49Z
59 pages, 5 figures
null
null
null
null
null
null
null
null
null
2309.16671
Demystifying CLIP Data
['Hu Xu', 'Saining Xie', 'Xiaoqing Ellen Tan', 'Po-Yao Huang', 'Russell Howes', 'Vasu Sharma', 'Shang-Wen Li', 'Gargi Ghosh', 'Luke Zettlemoyer', 'Christoph Feichtenhofer']
['cs.CV', 'cs.CL']
Contrastive Language-Image Pre-training (CLIP) is an approach that has advanced research and applications in computer vision, fueling modern recognition systems and generative models. We believe that the main ingredient to the success of CLIP is its data and not the model architecture or pre-training objective. However...
2023-09-28T17:59:56Z
17 pages. arXiv admin note: text overlap with arXiv:2103.00020 by other authors
null
null
null
null
null
null
null
null
null
2309.16676
On a Seldom Oversight in Fermi's Calculations: Seventy Years Later
['Sergei K. Suslov']
['physics.hist-ph']
We discuss an unfortunate mistake, for a Dirac free particle, in the last Fermi lecture notes on quantum mechanics, in a course given at the University of Chicago in winter and spring of 1954. As is demonstrated, the correct result can be obtained by a simple matrix multiplication. An attempt to collect a relevant bibl...
2023-07-09T17:12:09Z
14 pages, 4 figures, 51 references
null
null
null
null
null
null
null
null
null
2309.16844
DeBERTinha: A Multistep Approach to Adapt DebertaV3 XSmall for Brazilian Portuguese Natural Language Processing Task
['Israel Campiotti', 'Matheus Rodrigues', 'Yuri Albuquerque', 'Rafael Azevedo', 'Alyson Andrade']
['cs.CL']
This paper presents an approach for adapting the DebertaV3 XSmall model pre-trained in English for Brazilian Portuguese natural language processing (NLP) tasks. A key aspect of the methodology involves a multistep training process to ensure the model is effectively tuned for the Portuguese language. Initial datasets fr...
2023-09-28T20:53:25Z
6 pages, 1 table
null
null
DeBERTinha: A Multistep Approach to Adapt DebertaV3 XSmall for Brazilian Portuguese Natural Language Processing Task
['Israel Campiotti', 'Matheus Rodrigues', 'Yuri Albuquerque', 'Rafael Azevedo', 'Alyson Andrade']
2023
arXiv.org
3
17
['Computer Science']
2309.16921
YOLOR-Based Multi-Task Learning
['Hung-Shuo Chang', 'Chien-Yao Wang', 'Richard Robert Wang', 'Gene Chou', 'Hong-Yuan Mark Liao']
['cs.CV']
Multi-task learning (MTL) aims to learn multiple tasks using a single model and jointly improve all of them assuming generalization and shared semantics. Reducing conflicts between tasks during joint learning is difficult and generally requires careful network design and extremely large models. We propose building on Y...
2023-09-29T01:42:21Z
null
null
null
YOLOR-Based Multi-Task Learning
['Hung-Shuo Chang', 'Chien-Yao Wang', 'Richard Robert Wang', 'Gene Chou', 'Hongpeng Liao']
2023
arXiv.org
16
44
['Computer Science']
2309.16948
Denoising Diffusion Bridge Models
['Linqi Zhou', 'Aaron Lou', 'Samar Khanna', 'Stefano Ermon']
['cs.CV', 'cs.AI']
Diffusion models are powerful generative models that map noise to data using stochastic processes. However, for many applications such as image editing, the model input comes from a distribution that is not random noise. As such, diffusion models must rely on cumbersome methods like guidance or projected sampling to in...
2023-09-29T03:24:24Z
Github: https://github.com/alexzhou907/DDBM/
null
null
null
null
null
null
null
null
null
2309.17012
Benchmarking Cognitive Biases in Large Language Models as Evaluators
['Ryan Koo', 'Minhwa Lee', 'Vipul Raheja', 'Jong Inn Park', 'Zae Myung Kim', 'Dongyeop Kang']
['cs.CL', 'cs.AI', 'cs.LG', 'I.2.7']
Large Language Models are cognitively biased judges. Large Language Models (LLMs) have recently been shown to be effective as automatic evaluators with simple prompting and in-context learning. In this work, we assemble 15 LLMs of four different size ranges and evaluate their output responses by preference ranking from...
2023-09-29T06:53:10Z
Published at ACL 2024. 29 pages, 9 figures, 14 tables
null
null
null
null
null
null
null
null
null
2309.17050
Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models
['Antoine Louis', 'Gijs van Dijck', 'Gerasimos Spanakis']
['cs.CL']
Many individuals are likely to face a legal dispute at some point in their lives, but their lack of understanding of how to navigate these complex issues often renders them vulnerable. The advancement of natural language processing opens new avenues for bridging this legal literacy gap through the development of automa...
2023-09-29T08:23:19Z
Under review. Code is available at https://github.com/maastrichtlawtech/lleqa
null
null
Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models
['Antoine Louis', 'G. van Dijck', 'Gerasimos Spanakis']
2023
AAAI Conference on Artificial Intelligence
41
90
['Computer Science']
2309.17102
Guiding Instruction-based Image Editing via Multimodal Large Language Models
['Tsu-Jui Fu', 'Wenze Hu', 'Xianzhi Du', 'William Yang Wang', 'Yinfei Yang', 'Zhe Gan']
['cs.CV']
Instruction-based image editing improves the controllability and flexibility of image manipulation via natural commands without elaborate descriptions or regional masks. However, human instructions are sometimes too brief for current methods to capture and follow. Multimodal large language models (MLLMs) show promising...
2023-09-29T10:01:50Z
ICLR'24 (Spotlight) ; Project at https://mllm-ie.github.io ; Code at https://github.com/tsujuifu/pytorch_mgie
null
null
null
null
null
null
null
null
null
2309.17134
Promoting Generalized Cross-lingual Question Answering in Few-resource Scenarios via Self-knowledge Distillation
['Casimiro Pio Carrino', 'Carlos Escolano', 'José A. R. Fonollosa']
['cs.CL']
Despite substantial progress in multilingual extractive Question Answering (QA), models with high and uniformly distributed performance across languages remain challenging, especially for languages with limited resources. We study cross-lingual transfer mainly focusing on the Generalized Cross-Lingual Transfer (G-XLT) ...
2023-09-29T10:54:59Z
Submitted to the Journal of Artificial Intelligence Research (JAIR)
null
null
null
null
null
null
null
null
null
2309.17179
Alphazero-like Tree-Search can Guide Large Language Model Decoding and Training
['Xidong Feng', 'Ziyu Wan', 'Muning Wen', 'Stephen Marcus McAleer', 'Ying Wen', 'Weinan Zhang', 'Jun Wang']
['cs.LG', 'cs.AI', 'cs.CL']
Recent works like Tree-of-Thought (ToT) and Reasoning via Planning (RAP) aim to augment the reasoning capabilities of LLMs by using tree-search algorithms to guide multi-step reasoning. These methods rely on prompting a pre-trained model to serve as a value function and focus on problems with low search depth. As a res...
2023-09-29T12:20:19Z
null
null
null
null
null
null
null
null
null
null
2309.17207
Memory Gym: Towards Endless Tasks to Benchmark Memory Capabilities of Agents
['Marco Pleines', 'Matthias Pallasch', 'Frank Zimmer', 'Mike Preuss']
['cs.LG']
Memory Gym presents a suite of 2D partially observable environments, namely Mortar Mayhem, Mystery Path, and Searing Spotlights, designed to benchmark memory capabilities in decision-making agents. These environments, originally with finite tasks, are expanded into innovative, endless formats, mirroring the escalating ...
2023-09-29T12:59:28Z
40 pages, 12 figures, 7 tables, accepted at JMLR
null
null
null
null
null
null
null
null
null
2309.17352
Improving Audio Captioning Models with Fine-grained Audio Features, Text Embedding Supervision, and LLM Mix-up Augmentation
['Shih-Lun Wu', 'Xuankai Chang', 'Gordon Wichern', 'Jee-weon Jung', 'François Germain', 'Jonathan Le Roux', 'Shinji Watanabe']
['cs.SD', 'eess.AS']
Automated audio captioning (AAC) aims to generate informative descriptions for various sounds from nature and/or human activities. In recent years, AAC has quickly attracted research interest, with state-of-the-art systems now relying on a sequence-to-sequence (seq2seq) backbone powered by strong models such as Transfo...
2023-09-29T15:57:46Z
ICASSP 2024 camera-ready paper. Winner of the DCASE 2023 Challenge Task 6A: Automated Audio Captioning (AAC)
null
null
null
null
null
null
null
null
null
2309.17425
Data Filtering Networks
['Alex Fang', 'Albin Madappally Jose', 'Amit Jain', 'Ludwig Schmidt', 'Alexander Toshev', 'Vaishaal Shankar']
['cs.AI', 'cs.LG']
Large training sets have become a cornerstone of machine learning and are the foundation for recent advances in language modeling and multimodal learning. While data curation for pre-training is often still ad-hoc, one common paradigm is to first collect a massive pool of data from the Web and then filter this candidat...
2023-09-29T17:37:29Z
null
null
null
null
null
null
null
null
null
null
2309.17444
LLM-grounded Video Diffusion Models
['Long Lian', 'Baifeng Shi', 'Adam Yala', 'Trevor Darrell', 'Boyi Li']
['cs.CV', 'cs.AI', 'cs.CL']
Text-conditioned diffusion models have emerged as a promising tool for neural video generation. However, current models still struggle with intricate spatiotemporal prompts and often generate restricted or incorrect motion. To address these limitations, we introduce LLM-grounded Video Diffusion (LVD). Instead of direct...
2023-09-29T17:54:46Z
ICLR 2024. Project Page: https://llm-grounded-video-diffusion.github.io/
null
null
LLM-grounded Video Diffusion Models
['Long Lian', 'Baifeng Shi', 'Adam Yala', 'Trevor Darrell', 'Boyi Li']
2023
International Conference on Learning Representations
55
57
['Computer Science']
2309.17448
SMPLer-X: Scaling Up Expressive Human Pose and Shape Estimation
['Zhongang Cai', 'Wanqi Yin', 'Ailing Zeng', 'Chen Wei', 'Qingping Sun', 'Yanjun Wang', 'Hui En Pang', 'Haiyi Mei', 'Mingyuan Zhang', 'Lei Zhang', 'Chen Change Loy', 'Lei Yang', 'Ziwei Liu']
['cs.CV']
Expressive human pose and shape estimation (EHPS) unifies body, hands, and face motion capture with numerous applications. Despite encouraging progress, current state-of-the-art methods still depend largely on a confined set of training datasets. In this work, we investigate scaling up EHPS towards the first generalist...
2023-09-29T17:58:06Z
Homepage: https://caizhongang.github.io/projects/SMPLer-X/
null
null
SMPLer-X: Scaling Up Expressive Human Pose and Shape Estimation
['Zhongang Cai', 'Wanqi Yin', 'Ailing Zeng', 'Chen Wei', 'Qingping Sun', 'Yanjun Wang', 'Hui En Pang', 'Haiyi Mei', 'Mingyuan Zhang', 'Lei Zhang', 'Chen Change Loy', 'Lei Yang', 'Ziwei Liu']
2023
Neural Information Processing Systems
87
70
['Computer Science']
2309.17452
ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving
['Zhibin Gou', 'Zhihong Shao', 'Yeyun Gong', 'Yelong Shen', 'Yujiu Yang', 'Minlie Huang', 'Nan Duan', 'Weizhu Chen']
['cs.CL', 'cs.AI']
Large language models have made significant progress in various language tasks, yet they still struggle with complex mathematics. In this paper, we propose ToRA a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical problems by seamlessly integrating natural language reasoning with the ...
2023-09-29T17:59:38Z
ICLR 2024; First two authors equal contribution
null
null
ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving
['Zhibin Gou', 'Zhihong Shao', 'Yeyun Gong', 'Yelong Shen', 'Yujiu Yang', 'Minlie Huang', 'Nan Duan', 'Weizhu Chen']
2023
International Conference on Learning Representations
168
85
['Computer Science']
2309.17453
Efficient Streaming Language Models with Attention Sinks
['Guangxuan Xiao', 'Yuandong Tian', 'Beidi Chen', 'Song Han', 'Mike Lewis']
['cs.CL', 'cs.AI']
Deploying Large Language Models (LLMs) in streaming applications such as multi-round dialogue, where long interactions are expected, is urgently needed but poses two major challenges. Firstly, during the decoding stage, caching previous tokens' Key and Value states (KV) consumes extensive memory. Secondly, popular LLMs...
2023-09-29T17:59:56Z
ICLR 2024
null
null
null
null
null
null
null
null
null
2310.00120
Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs
['Jean Kossaifi', 'Nikola Kovachki', 'Kamyar Azizzadenesheli', 'Anima Anandkumar']
['cs.LG']
Memory complexity and data scarcity have so far prohibited learning solution operators of partial differential equations (PDEs) at high resolutions. We address these limitations by introducing a new data efficient and highly parallelizable operator learning approach with reduced memory requirement and better generaliza...
2023-09-29T20:18:52Z
null
null
null
null
null
null
null
null
null
null
2310.00274
AfriSpeech-200: Pan-African Accented Speech Dataset for Clinical and General Domain ASR
['Tobi Olatunji', 'Tejumade Afonja', 'Aditya Yadavalli', 'Chris Chinenye Emezue', 'Sahib Singh', 'Bonaventure F. P. Dossou', 'Joanne Osuchukwu', 'Salomey Osei', 'Atnafu Lambebo Tonja', 'Naome Etori', 'Clinton Mbataku']
['cs.CL']
Africa has a very low doctor-to-patient ratio. At very busy clinics, doctors could see 30+ patients per day -- a heavy patient burden compared with developed countries -- but productivity tools such as clinical automatic speech recognition (ASR) are lacking for these overworked clinicians. However, clinical ASR is matu...
2023-09-30T06:38:43Z
Accepted to TACL 2023. This is a pre-MIT Press publication version
null
null
AfriSpeech-200: Pan-African Accented Speech Dataset for Clinical and General Domain ASR
['Tobi Olatunji', 'Tejumade Afonja', 'Aditya Yadavalli', 'Chris Chinenye Emezue', 'Sahib Singh', 'Bonaventure F. P. Dossou', 'Joanne I. Osuchukwu', 'Salomey Osei', 'A. Tonja', 'Naome A. Etori', 'Clinton Mbataku']
2023
Transactions of the Association for Computational Linguistics
19
80
['Computer Science']
2310.00426
PixArt-$α$: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis
['Junsong Chen', 'Jincheng Yu', 'Chongjian Ge', 'Lewei Yao', 'Enze Xie', 'Yue Wu', 'Zhongdao Wang', 'James Kwok', 'Ping Luo', 'Huchuan Lu', 'Zhenguo Li']
['cs.CV']
The most advanced text-to-image (T2I) models require significant training costs (e.g., millions of GPU hours), seriously hindering the fundamental innovation for the AIGC community while increasing CO2 emissions. This paper introduces PIXART-$\alpha$, a Transformer-based T2I diffusion model whose image generation quali...
2023-09-30T16:18:00Z
Project Page: https://pixart-alpha.github.io
null
null
PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis
['Junsong Chen', 'Jincheng Yu', 'Chongjian Ge', 'Lewei Yao', 'Enze Xie', 'Yue Wu', 'Zhongdao Wang', 'James T. Kwok', 'Ping Luo', 'Huchuan Lu', 'Zhenguo Li']
2023
International Conference on Learning Representations
460
79
['Computer Science']
2310.00566
Empowering Many, Biasing a Few: Generalist Credit Scoring through Large Language Models
['Duanyu Feng', 'Yongfu Dai', 'Jimin Huang', 'Yifang Zhang', 'Qianqian Xie', 'Weiguang Han', 'Zhengyu Chen', 'Alejandro Lopez-Lira', 'Hao Wang']
['cs.LG', 'cs.AI', 'cs.CL', 'cs.CY']
In the financial industry, credit scoring is a fundamental element, shaping access to credit and determining the terms of loans for individuals and businesses alike. Traditional credit scoring methods, however, often grapple with challenges such as narrow knowledge scope and isolated evaluation of credit tasks. Our wor...
2023-10-01T03:50:34Z
null
null
null
Empowering Many, Biasing a Few: Generalist Credit Scoring through Large Language Models
['Duanyu Feng', 'Yongfu Dai', 'Jimin Huang', 'Yifang Zhang', 'Qianqian Xie', 'Weiguang Han', 'Alejandro Lopez-Lira', 'Hao Wang']
2023
arXiv.org
12
79
['Computer Science']
2310.00673
Learning Type Inference for Enhanced Dataflow Analysis
['Lukas Seidel', 'Sedick David Baker Effendi', 'Xavier Pinho', 'Konrad Rieck', 'Brink van der Merwe', 'Fabian Yamaguchi']
['cs.LG', 'cs.CR']
Statically analyzing dynamically-typed code is a challenging endeavor, as even seemingly trivial tasks such as determining the targets of procedure calls are non-trivial without knowing the types of objects at compile time. Addressing this challenge, gradual typing is increasingly added to dynamically-typed languages, ...
2023-10-01T13:52:28Z
- fixed last author's name - fixed header
28th European Symposium on Research in Computer Security (ESORICS) 2023
null
Learning Type Inference for Enhanced Dataflow Analysis
['Lukas Seidel', 'Sedick Baker Effendi', 'Xavier Pinho', 'Konrad Rieck', 'Brink van der Merwe', 'Fabian Yamaguchi']
2023
European Symposium on Research in Computer Security
2
46
['Computer Science']
2310.00752
TIGERScore: Towards Building Explainable Metric for All Text Generation Tasks
['Dongfu Jiang', 'Yishan Li', 'Ge Zhang', 'Wenhao Huang', 'Bill Yuchen Lin', 'Wenhu Chen']
['cs.CL', 'cs.AI']
We present TIGERScore, a \textbf{T}rained metric that follows \textbf{I}nstruction \textbf{G}uidance to perform \textbf{E}xplainable, and \textbf{R}eference-free evaluation over a wide spectrum of text generation tasks. Different from other automatic evaluation methods that only provide arcane scores, TIGERScore is gui...
2023-10-01T18:01:51Z
null
null
null
null
null
null
null
null
null
null
2310.00796
SIP: Injecting a Structural Inductive Bias into a Seq2Seq Model by Simulation
['Matthias Lindemann', 'Alexander Koller', 'Ivan Titov']
['cs.CL']
Strong inductive biases enable learning from little data and help generalization outside of the training distribution. Popular neural architectures such as Transformers lack strong structural inductive biases for seq2seq NLP tasks on their own. Consequently, they struggle with systematic generalization beyond the train...
2023-10-01T21:19:12Z
ACL 2024 camera-ready
null
null
Injecting a Structural Inductive Bias into a Seq2Seq Model by Simulation
['Matthias Lindemann', 'Alexander Koller', 'Ivan Titov']
2023
Annual Meeting of the Association for Computational Linguistics
5
52
['Computer Science']
2310.01018
Controlling Vision-Language Models for Multi-Task Image Restoration
['Ziwei Luo', 'Fredrik K. Gustafsson', 'Zheng Zhao', 'Jens Sjölund', 'Thomas B. Schön']
['cs.CV']
Vision-language models such as CLIP have shown great impact on diverse downstream tasks for zero-shot or label-free predictions. However, when it comes to low-level vision such as image restoration their performance deteriorates dramatically due to corrupted inputs. In this paper, we present a degradation-aware vision-...
2023-10-02T09:10:16Z
Accepted by ICLR 2024. Project page: https://algolzw.github.io/daclip-uir/index.html
null
null
null
null
null
null
null
null
null
2310.01045
Tool-Augmented Reward Modeling
['Lei Li', 'Yekun Chai', 'Shuohuan Wang', 'Yu Sun', 'Hao Tian', 'Ningyu Zhang', 'Hua Wu']
['cs.CL']
Reward modeling (a.k.a., preference modeling) is instrumental for aligning large language models with human preferences, particularly within the context of reinforcement learning from human feedback (RLHF). While conventional reward models (RMs) have exhibited remarkable scalability, they oft struggle with fundamental ...
2023-10-02T09:47:40Z
ICLR 2024 Spotlight
null
null
Tool-Augmented Reward Modeling
['Lei Li', 'Yekun Chai', 'Shuohuan Wang', 'Yu Sun', 'Hao Tian', 'Ningyu Zhang', 'Hua Wu']
2023
International Conference on Learning Representations
14
52
['Computer Science']
2310.01074
Back to the Future: Towards Explainable Temporal Reasoning with Large Language Models
['Chenhan Yuan', 'Qianqian Xie', 'Jimin Huang', 'Sophia Ananiadou']
['cs.CL', 'cs.AI']
Temporal reasoning is a crucial NLP task, providing a nuanced understanding of time-sensitive contexts within textual data. Although recent advancements in LLMs have demonstrated their potential in temporal reasoning, the predominant focus has been on tasks such as temporal expression and temporal relation extraction. ...
2023-10-02T10:35:23Z
14 pages, 5 figures, code and dataset: https://github.com/chenhan97/TimeLlama
null
null
null
null
null
null
null
null
null
2310.01119
Synthetic Data Generation in Low-Resource Settings via Fine-Tuning of Large Language Models
['Jean Kaddour', 'Qi Liu']
['cs.CL', 'cs.LG']
The in-context learning ability of large language models (LLMs) enables them to generalize to novel downstream tasks with relatively few labeled examples. However, they require enormous computational resources to be deployed. Alternatively, smaller models can solve specific tasks if fine-tuned with enough labeled examp...
2023-10-02T11:49:05Z
null
null
null
Synthetic Data Generation in Low-Resource Settings via Fine-Tuning of Large Language Models
['Jean Kaddour', 'Qi Liu']
2023
null
2
38
['Computer Science']
2310.01188
Quantifying the Plausibility of Context Reliance in Neural Machine Translation
['Gabriele Sarti', 'Grzegorz Chrupała', 'Malvina Nissim', 'Arianna Bisazza']
['cs.CL', 'cs.AI', 'cs.HC', 'cs.LG', 'I.2.7']
Establishing whether language models can use contextual information in a human-plausible way is important to ensure their trustworthiness in real-world settings. However, the questions of when and which parts of the context affect model generations are typically tackled separately, with current plausibility evaluations...
2023-10-02T13:26:43Z
ICLR 2024 Camera Ready. Code: https://github.com/gsarti/pecore. Artifacts: https://huggingface.co/collections/gsarti/pecore-iclr-2024-65edab42e28439e21b612c2e
null
null
Quantifying the Plausibility of Context Reliance in Neural Machine Translation
['Gabriele Sarti', 'Grzegorz Chrupała', 'M. Nissim', 'Arianna Bisazza']
2023
International Conference on Learning Representations
5
79
['Computer Science']
2310.01208
Label Supervised LLaMA Finetuning
['Zongxi Li', 'Xianming Li', 'Yuzhang Liu', 'Haoran Xie', 'Jing Li', 'Fu-lee Wang', 'Qing Li', 'Xiaoqin Zhong']
['cs.CL']
The recent success of Large Language Models (LLMs) has gained significant attention in both academia and industry. Substantial efforts have been made to enhance the zero- and few-shot generalization capabilities of open-source LLMs through finetuning. Currently, the prevailing approach is instruction-tuning, which trai...
2023-10-02T13:53:03Z
null
null
null
Label Supervised LLaMA Finetuning
['Zongxi Li', 'Xianming Li', 'Yuzhang Liu', 'Haoran Xie', 'Jing Li', 'F. Wang', 'Qing Li', 'Xiaoqin Zhong']
2023
arXiv.org
23
31
['Computer Science']
2310.01210
Towards Robust Cardiac Segmentation using Graph Convolutional Networks
['Gilles Van De Vyver', 'Sarina Thomas', 'Guy Ben-Yosef', 'Sindre Hellum Olaisen', 'Håvard Dalen', 'Lasse Løvstakken', 'Erik Smistad']
['eess.IV', 'cs.CV', 'cs.LG']
Fully automatic cardiac segmentation can be a fast and reproducible method to extract clinical measurements from an echocardiography examination. The U-Net architecture is the current state-of-the-art deep learning architecture for medical segmentation and can segment cardiac structures in real-time with average errors...
2023-10-02T13:55:06Z
This work has been submitted to the IEEE for possible publication
null
null
null
null
null
null
null
null
null
2310.01218
Making LLaMA SEE and Draw with SEED Tokenizer
['Yuying Ge', 'Sijie Zhao', 'Ziyun Zeng', 'Yixiao Ge', 'Chen Li', 'Xintao Wang', 'Ying Shan']
['cs.CV']
The great success of Large Language Models (LLMs) has expanded the potential of multimodality, contributing to the gradual evolution of General Artificial Intelligence (AGI). A true AGI agent should not only possess the capability to perform predefined multi-tasks but also exhibit emergent abilities in an open-world co...
2023-10-02T14:03:02Z
Project released at: https://github.com/AILab-CVC/SEED. arXiv admin note: substantial text overlap with arXiv:2307.08041
null
null
null
null
null
null
null
null
null
2310.01324
ZeroI2V: Zero-Cost Adaptation of Pre-trained Transformers from Image to Video
['Xinhao Li', 'Yuhan Zhu', 'Limin Wang']
['cs.CV']
Adapting image models to the video domain has emerged as an efficient paradigm for solving video recognition tasks. Due to the huge number of parameters and effective transferability of image models, performing full fine-tuning is less efficient and even unnecessary. Thus, recent research is shifting its focus toward p...
2023-10-02T16:41:20Z
Accepted by ECCV2024
null
null
ZeroI2V: Zero-Cost Adaptation of Pre-trained Transformers from Image to Video
['Xinhao Li', 'Limin Wang']
2023
European Conference on Computer Vision
9
87
['Computer Science']
2310.01377
UltraFeedback: Boosting Language Models with Scaled AI Feedback
['Ganqu Cui', 'Lifan Yuan', 'Ning Ding', 'Guanming Yao', 'Bingxiang He', 'Wei Zhu', 'Yuan Ni', 'Guotong Xie', 'Ruobing Xie', 'Yankai Lin', 'Zhiyuan Liu', 'Maosong Sun']
['cs.CL', 'cs.AI', 'cs.LG']
Learning from human feedback has become a pivot technique in aligning large language models (LLMs) with human preferences. However, acquiring vast and premium human feedback is bottlenecked by time, labor, and human capability, resulting in small sizes or limited topics of current datasets. This further hinders feedbac...
2023-10-02T17:40:01Z
ICML 2024 camera ready
null
null
UltraFeedback: Boosting Language Models with High-quality Feedback
['Ganqu Cui', 'Lifan Yuan', 'Ning Ding', 'Guanming Yao', 'Wei Zhu', 'Yuan Ni', 'Guotong Xie', 'Zhiyuan Liu', 'Maosong Sun']
2023
International Conference on Machine Learning
413
81
['Computer Science']
2310.01596
ImagenHub: Standardizing the evaluation of conditional image generation models
['Max Ku', 'Tianle Li', 'Kai Zhang', 'Yujie Lu', 'Xingyu Fu', 'Wenwen Zhuang', 'Wenhu Chen']
['cs.CV', 'cs.GR', 'cs.MM']
Recently, a myriad of conditional image generation and editing models have been developed to serve different downstream tasks, including text-to-image generation, text-guided image editing, subject-driven image generation, control-guided image generation, etc. However, we observe huge inconsistencies in experimental co...
2023-10-02T19:41:42Z
Accepted to ICLR2024 Camera Ready
null
null
null
null
null
null
null
null
null
2310.01602
CAT-LM: Training Language Models on Aligned Code And Tests
['Nikitha Rao', 'Kush Jain', 'Uri Alon', 'Claire Le Goues', 'Vincent J. Hellendoorn']
['cs.SE', 'cs.AI']
Testing is an integral part of the software development process. Yet, writing tests is time-consuming and therefore often neglected. Classical test generation tools such as EvoSuite generate behavioral test suites by optimizing for coverage, but tend to produce tests that are hard to understand. Language models trained...
2023-10-02T19:52:22Z
null
null
null
CAT-LM Training Language Models on Aligned Code And Tests
['Nikitha Rao', 'Kush Jain', 'Uri Alon', 'Claire Le Goues', 'Vincent J. Hellendoorn']
2023
International Conference on Automated Software Engineering
47
52
['Computer Science']
2310.01809
Mel-Band RoFormer for Music Source Separation
['Ju-Chiang Wang', 'Wei-Tsung Lu', 'Minz Won']
['cs.SD', 'eess.AS']
Recently, multi-band spectrogram-based approaches such as Band-Split RNN (BSRNN) have demonstrated promising results for music source separation. In our recent work, we introduce the BS-RoFormer model which inherits the idea of band-split scheme in BSRNN at the front-end, and then uses the hierarchical Transformer with...
2023-10-03T05:53:23Z
submitted as an ISMIR 2023 late-breaking and demo paper
null
null
null
null
null
null
null
null
null
2310.01852
LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment
['Bin Zhu', 'Bin Lin', 'Munan Ning', 'Yang Yan', 'Jiaxi Cui', 'HongFa Wang', 'Yatian Pang', 'Wenhao Jiang', 'Junwu Zhang', 'Zongwei Li', 'Wancai Zhang', 'Zhifeng Li', 'Wei Liu', 'Li Yuan']
['cs.CV', 'cs.AI']
The video-language (VL) pretraining has achieved remarkable improvement in multiple downstream tasks. However, the current VL pretraining framework is hard to extend to multiple modalities (N modalities, N>=3) beyond vision and language. We thus propose LanguageBind, taking the language as the bind across different mod...
2023-10-03T07:33:27Z
Accepted by ICLR 2024
null
null
LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment
['Bin Zhu', 'Bin Lin', 'Munan Ning', 'Yang Yan', 'Jiaxi Cui', 'Hongfa Wang', 'Yatian Pang', 'Wenhao Jiang', 'Junwu Zhang', 'Zongwei Li', 'Wancai Zhang', 'Zhifeng Li', 'Wei Liu', 'Liejie Yuan']
2023
International Conference on Learning Representations
229
76
['Computer Science']
2310.01889
Ring Attention with Blockwise Transformers for Near-Infinite Context
['Hao Liu', 'Matei Zaharia', 'Pieter Abbeel']
['cs.CL']
Transformers have emerged as the architecture of choice for many state-of-the-art AI models, showcasing exceptional performance across a wide range of AI applications. However, the memory demands imposed by Transformers limit their ability to handle long sequences, thereby posing challenges in utilizing videos, actions...
2023-10-03T08:44:50Z
Code: https://github.com/lhao499/llm_large_context
null
null
Ring Attention with Blockwise Transformers for Near-Infinite Context
['Hao Liu', 'Matei Zaharia', 'Pieter Abbeel']
2023
International Conference on Learning Representations
258
44
['Computer Science']
2310.02031
OceanGPT: A Large Language Model for Ocean Science Tasks
['Zhen Bi', 'Ningyu Zhang', 'Yida Xue', 'Yixin Ou', 'Daxiong Ji', 'Guozhou Zheng', 'Huajun Chen']
['cs.CL', 'cs.AI', 'cs.CE', 'cs.LG', 'cs.RO']
Ocean science, which delves into the oceans that are reservoirs of life and biodiversity, is of great significance given that oceans cover over 70% of our planet's surface. Recently, advances in Large Language Models (LLMs) have transformed the paradigm in science. Despite the success in other domains, current LLMs oft...
2023-10-03T13:17:35Z
ACL2024. Project Website: http://oceangpt.zjukg.cn/
null
null
null
null
null
null
null
null
null
2310.02074
ACE: A fast, skillful learned global atmospheric model for climate prediction
['Oliver Watt-Meyer', 'Gideon Dresdner', 'Jeremy McGibbon', 'Spencer K. Clark', 'Brian Henn', 'James Duncan', 'Noah D. Brenowitz', 'Karthik Kashinath', 'Michael S. Pritchard', 'Boris Bonev', 'Matthew E. Peters', 'Christopher S. Bretherton']
['physics.ao-ph', 'cs.LG']
Existing ML-based atmospheric models are not suitable for climate prediction, which requires long-term stability and physical consistency. We present ACE (AI2 Climate Emulator), a 200M-parameter, autoregressive machine learning emulator of an existing comprehensive 100-km resolution global atmospheric model. The formul...
2023-10-03T14:15:06Z
Accepted at Tackling Climate Change with Machine Learning: workshop at NeurIPS 2023
null
null
ACE: A fast, skillful learned global atmospheric model for climate prediction
['Oliver Watt‐Meyer', 'Gideon Dresdner', 'J. McGibbon', 'Spencer K. Clark', 'Brian Henn', 'James P. C. Duncan', 'Noah D. Brenowitz', 'K. Kashinath', 'Michael S. Pritchard', 'B. Bonev', 'Matthew E. Peters', 'Christopher S. Bretherton']
2023
arXiv.org
47
25
['Physics', 'Computer Science']
2310.02575
AdaMerging: Adaptive Model Merging for Multi-Task Learning
['Enneng Yang', 'Zhenyi Wang', 'Li Shen', 'Shiwei Liu', 'Guibing Guo', 'Xingwei Wang', 'Dacheng Tao']
['cs.LG', 'cs.CV']
Multi-task learning (MTL) aims to empower a model to tackle multiple tasks simultaneously. A recent development known as task arithmetic has revealed that several models, each fine-tuned for distinct tasks, can be directly merged into a single model to execute MTL without necessitating a retraining process using the in...
2023-10-04T04:26:33Z
International Conference on Learning Representations (ICLR 2024)
null
null
AdaMerging: Adaptive Model Merging for Multi-Task Learning
['Enneng Yang', 'Zhenyi Wang', 'Li Shen', 'Shiwei Liu', 'Guibing Guo', 'Xingwei Wang', 'Dacheng Tao']
2023
International Conference on Learning Representations
125
87
['Computer Science']
2310.02601
MagicDrive: Street View Generation with Diverse 3D Geometry Control
['Ruiyuan Gao', 'Kai Chen', 'Enze Xie', 'Lanqing Hong', 'Zhenguo Li', 'Dit-Yan Yeung', 'Qiang Xu']
['cs.CV', 'cs.AI']
Recent advancements in diffusion models have significantly enhanced the data synthesis with 2D control. Yet, precise 3D control in street view generation, crucial for 3D perception tasks, remains elusive. Specifically, utilizing Bird's-Eye View (BEV) as the primary condition often leads to challenges in geometry contro...
2023-10-04T06:14:06Z
Project Page: https://flymin.github.io/magicdrive; Figure 7 updated
null
null
null
null
null
null
null
null
null
2310.02743
Reward Model Ensembles Help Mitigate Overoptimization
['Thomas Coste', 'Usman Anwar', 'Robert Kirk', 'David Krueger']
['cs.LG']
Reinforcement learning from human feedback (RLHF) is a standard approach for fine-tuning large language models to follow instructions. As part of this process, learned reward models are used to approximately model human preferences. However, as imperfect representations of the "true" reward, these learned reward models...
2023-10-04T11:34:22Z
Accepted at ICLR 2024
null
null
Reward Model Ensembles Help Mitigate Overoptimization
['Thomas Coste', 'Usman Anwar', 'Robert Kirk', 'D. Krueger']
2,023
International Conference on Learning Representations
139
55
['Computer Science']
2,310.03024
AstroCLIP: A Cross-Modal Foundation Model for Galaxies
['Liam Parker', 'Francois Lanusse', 'Siavash Golkar', 'Leopoldo Sarra', 'Miles Cranmer', 'Alberto Bietti', 'Michael Eickenberg', 'Geraud Krawezik', 'Michael McCabe', 'Ruben Ohana', 'Mariel Pettee', 'Bruno Regaldo-Saint Blancard', 'Tiberiu Tesileanu', 'Kyunghyun Cho', 'Shirley Ho']
['astro-ph.IM', 'cs.AI', 'cs.LG']
We present AstroCLIP, a single, versatile model that can embed both galaxy images and spectra into a shared, physically meaningful latent space. These embeddings can then be used - without any model fine-tuning - for a variety of downstream tasks including (1) accurate in-modality and cross-modality semantic similarity...
2023-10-04T17:59:38Z
18 pages, accepted in Monthly Notices of the Royal Astronomical Society, Presented at the NeurIPS 2023 AI4Science Workshop
null
10.1093/mnras/stae1450
null
null
null
null
null
null
null
2,310.03269
InstructProtein: Aligning Human and Protein Language via Knowledge Instruction
['Zeyuan Wang', 'Qiang Zhang', 'Keyan Ding', 'Ming Qin', 'Xiang Zhuang', 'Xiaotong Li', 'Huajun Chen']
['q-bio.BM', 'cs.CL']
Large Language Models (LLMs) have revolutionized the field of natural language processing, but they fall short in comprehending biological sequences such as proteins. To address this challenge, we propose InstructProtein, an innovative LLM that possesses bidirectional generation capabilities in both human and protein l...
2023-10-05T02:45:39Z
null
null
null
null
null
null
null
null
null
null
2,310.03477
Tik-to-Tok: Translating Language Models One Token at a Time: An Embedding Initialization Strategy for Efficient Language Adaptation
['François Remy', 'Pieter Delobelle', 'Bettina Berendt', 'Kris Demuynck', 'Thomas Demeester']
['cs.CL', 'cs.AI']
Training monolingual language models for low and mid-resource languages is made challenging by limited and often inadequate pretraining data. In this study, we propose a novel model conversion strategy to address this issue, adapting high-resource monolingual language models to a new target language. By generalizing o...
2023-10-05T11:45:29Z
As first reviewed at TACL
null
null
null
null
null
null
null
null
null
2,310.03668
GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction
['Oscar Sainz', 'Iker García-Ferrero', 'Rodrigo Agerri', 'Oier Lopez de Lacalle', 'German Rigau', 'Eneko Agirre']
['cs.CL']
Large Language Models (LLMs) combined with instruction tuning have made significant progress when generalizing to unseen tasks. However, they have been less successful in Information Extraction (IE), lagging behind task-specific models. Typically, IE tasks are characterized by complex annotation guidelines that describ...
2023-10-05T16:43:13Z
The Twelfth International Conference on Learning Representations - ICLR 2024
null
null
null
null
null
null
null
null
null
2,310.03708
Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization
['Zhanhui Zhou', 'Jie Liu', 'Jing Shao', 'Xiangyu Yue', 'Chao Yang', 'Wanli Ouyang', 'Yu Qiao']
['cs.LG', 'cs.AI']
A single language model, even when aligned with labelers through reinforcement learning from human feedback (RLHF), may not suit all human preferences. Recent approaches therefore prefer customization, gathering multi-dimensional feedback, and creating distinct reward models for each dimension. Different language model...
2023-10-05T17:35:26Z
Findings of ACL 2024
null
null
null
null
null
null
null
null
null
2,310.03731
MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning
['Ke Wang', 'Houxing Ren', 'Aojun Zhou', 'Zimu Lu', 'Sichun Luo', 'Weikang Shi', 'Renrui Zhang', 'Linqi Song', 'Mingjie Zhan', 'Hongsheng Li']
['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG']
The recently released GPT-4 Code Interpreter has demonstrated remarkable proficiency in solving challenging math problems, primarily attributed to its ability to seamlessly reason with natural language, generate code, execute code, and continue reasoning based on the execution output. In this paper, we present a method...
2023-10-05T17:52:09Z
The state-of-the-art open-source language models for mathematical reasoning
null
null
null
null
null
null
null
null
null
2,310.03739
Aligning Text-to-Image Diffusion Models with Reward Backpropagation
['Mihir Prabhudesai', 'Anirudh Goyal', 'Deepak Pathak', 'Katerina Fragkiadaki']
['cs.CV', 'cs.AI', 'cs.LG', 'cs.RO']
Text-to-image diffusion models have recently emerged at the forefront of image generation, powered by very large-scale unsupervised or weakly supervised text-to-image training datasets. Due to their unsupervised training, controlling their behavior in downstream tasks, such as maximizing human-perceived image quality, ...
2023-10-05T17:59:18Z
This paper is subsumed by a later paper of ours: arXiv:2407.08737
null
null
Aligning Text-to-Image Diffusion Models with Reward Backpropagation
['Mihir Prabhudesai', 'Anirudh Goyal', 'Deepak Pathak', 'Katerina Fragkiadaki']
2,023
arXiv.org
133
55
['Computer Science']
2,310.03744
Improved Baselines with Visual Instruction Tuning
['Haotian Liu', 'Chunyuan Li', 'Yuheng Li', 'Yong Jae Lee']
['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG']
Large multimodal models (LMM) have recently shown encouraging progress with visual instruction tuning. In this note, we show that the fully-connected vision-language cross-modal connector in LLaVA is surprisingly powerful and data-efficient. With simple modifications to LLaVA, namely, using CLIP-ViT-L-336px with an MLP...
2023-10-05T17:59:56Z
Camera ready, CVPR 2024 (highlight). LLaVA project page: https://llava-vl.github.io
null
null
Improved Baselines with Visual Instruction Tuning
['Haotian Liu', 'Chunyuan Li', 'Yuheng Li', 'Yong Jae Lee']
2,023
Computer Vision and Pattern Recognition
2,834
71
['Computer Science']
2,310.03842
PepMLM: Target Sequence-Conditioned Generation of Therapeutic Peptide Binders via Span Masked Language Modeling
['Tianlai Chen', 'Madeleine Dumas', 'Rio Watson', 'Sophia Vincoff', 'Christina Peng', 'Lin Zhao', 'Lauren Hong', 'Sarah Pertsemlidis', 'Mayumi Shaepers-Cheu', 'Tian Zi Wang', 'Divya Srijay', 'Connor Monticello', 'Pranay Vure', 'Rishab Pulugurta', 'Kseniia Kholina', 'Shrey Goel', 'Matthew P. DeLisa', 'Ray Truant', 'Hect...
['q-bio.BM']
Target proteins that lack accessible binding pockets and conformational stability have posed increasing challenges for drug development. Induced proximity strategies, such as PROTACs and molecular glues, have thus gained attention as pharmacological alternatives, but still require small molecule docking at binding pock...
2023-10-05T18:59:51Z
null
null
null
PepMLM: Target Sequence-Conditioned Generation of Therapeutic Peptide Binders via Span Masked Language Modeling
['Tianlai Chen', 'Madeleine Dumas', 'Rio Watson', 'Sophia Vincoff', 'Christina Peng', 'Lin Zhao', 'Lauren Hong', 'Sarah Pertsemlidis', 'Mayumi Shaepers-Cheu', 'Tian Wang', 'Divya Srijay', 'Connor Monticello', 'Pranay Vure', 'Rishab Pulugurta', 'Kseniia Kholina', 'Shrey Goel', 'M. DeLisa', 'R. Truant', 'Hector C. Aguila...
2,023
arXiv.org
18
53
['Biology', 'Medicine']
2,310.04378
Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference
['Simian Luo', 'Yiqin Tan', 'Longbo Huang', 'Jian Li', 'Hang Zhao']
['cs.CV', 'cs.LG']
Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (Song et al.), we propose Latent Consistency Models (LCMs), enabling swift inference w...
2023-10-06T17:11:58Z
null
null
null
Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference
['Simian Luo', 'Yiqin Tan', 'Longbo Huang', 'Jian Li', 'Hang Zhao']
2,023
arXiv.org
479
38
['Computer Science']
2,310.04418
Functional Interpolation for Relative Positions Improves Long Context Transformers
['Shanda Li', 'Chong You', 'Guru Guruganesh', 'Joshua Ainslie', 'Santiago Ontanon', 'Manzil Zaheer', 'Sumit Sanghai', 'Yiming Yang', 'Sanjiv Kumar', 'Srinadh Bhojanapalli']
['cs.LG']
Preventing the performance decay of Transformers on inputs longer than those used for training has been an important challenge in extending the context length of these models. Though the Transformer architecture has fundamentally no limits on the input sequence lengths it can process, the choice of position encoding us...
2023-10-06T17:59:11Z
26 pages; ICLR 2024 camera ready version
null
null
null
null
null
null
null
null
null
2,310.04484
Ada-Instruct: Adapting Instruction Generators for Complex Reasoning
['Wanyun Cui', 'Qianle Wang']
['cs.CL', 'cs.AI']
Instructions augmentation is a crucial step for unleashing the full potential of large language models (LLMs) in downstream tasks. Existing Self-Instruct methods primarily simulate new instructions from a few initial instructions with in-context learning. However, our study identifies a critical flaw in this approach: ...
2023-10-06T13:28:04Z
null
null
null
null
null
null
null
null
null
null
2,310.04562
Towards Foundation Models for Knowledge Graph Reasoning
['Mikhail Galkin', 'Xinyu Yuan', 'Hesham Mostafa', 'Jian Tang', 'Zhaocheng Zhu']
['cs.CL', 'cs.AI']
Foundation models in language and vision have the ability to run inference on any textual and visual inputs thanks to the transferable representations such as a vocabulary of tokens in language. Knowledge graphs (KGs) have different entity and relation vocabularies that generally do not overlap. The key challenge of de...
2023-10-06T20:00:07Z
ICLR 2024
null
null
null
null
null
null
null
null
null
2,310.04564
ReLU Strikes Back: Exploiting Activation Sparsity in Large Language Models
['Iman Mirzadeh', 'Keivan Alizadeh', 'Sachin Mehta', 'Carlo C Del Mundo', 'Oncel Tuzel', 'Golnoosh Samei', 'Mohammad Rastegari', 'Mehrdad Farajtabar']
['cs.LG', 'cs.AI']
Large Language Models (LLMs) with billions of parameters have drastically transformed AI applications. However, their demanding computation during inference has raised significant challenges for deployment on resource-constrained devices. Despite recent trends favoring alternative activation functions such as GELU or S...
2023-10-06T20:01:33Z
preprint
null
null
null
null
null
null
null
null
null
2,310.04799
Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New Languages
['Shih-Cheng Huang', 'Pin-Zu Li', 'Yu-Chi Hsu', 'Kuang-Ming Chen', 'Yu Tung Lin', 'Shih-Kai Hsiao', 'Richard Tzong-Han Tsai', 'Hung-yi Lee']
['cs.CL']
Recently, the development of open-source large language models (LLMs) has advanced rapidly. Nevertheless, due to data constraints, the capabilities of most open-source LLMs are primarily focused on English. To address this issue, we introduce the concept of $\textit{chat vector}$ to equip pre-trained language models wi...
2023-10-07T13:34:21Z
ACL 2024 camera-ready version
null
null
Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New Languages
['Shih-Cheng Huang', 'Pin-Zu Li', 'Yu-Chi Hsu', 'Kuang-Ming Chen', 'Yu Tung Lin', 'Shih-Kai Hsiao', 'Richard Tzong-Han Tsai', 'Hung-yi Lee']
2,023
Annual Meeting of the Association for Computational Linguistics
17
39
['Computer Science']
2,310.04901
WAIT: Feature Warping for Animation to Illustration video Translation using GANs
['Samet Hicsonmez', 'Nermin Samet', 'Fidan Samet', 'Oguz Bakir', 'Emre Akbas', 'Pinar Duygulu']
['cs.CV']
In this paper, we explore a new domain for video-to-video translation. Motivated by the availability of animation movies that are adopted from illustrated books for children, we aim to stylize these videos with the style of the original illustrations. Current state-of-the-art video-to-video translation models rely on h...
2023-10-07T19:45:24Z
Accepted to Neurocomputing
null
null
null
null
null
null
null
null
null