arxiv_id: float64, 1.5k to 2.51k
title: string, length 9 to 178
authors: string, length 2 to 22.8k
categories: string, length 4 to 146
summary: string, length 103 to 1.92k
published: date, 2015-02-06 10:44:00 to 2025-07-10 17:59:58
comments: string, length 2 to 417
journal_ref: string, 321 classes
doi: string, 398 classes
ss_title: string, length 8 to 159
ss_authors: string, length 11 to 8.38k
ss_year: float64, 2.02k to 2.03k
ss_venue: string, 281 classes
ss_citationCount: float64, 0 to 134k
ss_referenceCount: float64, 0 to 429
ss_fieldsOfStudy: string, 47 classes

Each record below lists these 16 fields in this order, one value per line (null where a field is empty).
2305.14342
Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training
['Hong Liu', 'Zhiyuan Li', 'David Hall', 'Percy Liang', 'Tengyu Ma']
['cs.LG', 'cs.CL', 'math.OC']
Given the massive cost of language model pre-training, a non-trivial improvement of the optimization algorithm would lead to a material reduction on the time and cost of training. Adam and its variants have been state-of-the-art for years, and more sophisticated second-order (Hessian-based) optimizers often incur too m...
2023-05-23T17:59:21Z
null
null
null
null
null
null
null
null
null
null
2305.14378
Predicting Stock Market Time-Series Data using CNN-LSTM Neural Network Model
['Aadhitya A', 'Rajapriya R', 'Vineetha R S', 'Anurag M Bagde']
['q-fin.ST', 'cs.LG']
Stock market is often important as it represents the ownership claims on businesses. Without sufficient stocks, a company cannot perform well in finance. Predicting a stock market performance of a company is nearly hard because every time the prices of a company stock keeps changing and not constant. So, its complex to...
2023-05-21T08:00:23Z
8 pages, 9 figures, 5 tables
null
null
Predicting Stock Market Time-Series Data using CNN-LSTM Neural Network Model
['A. Aadhitya', 'R. Rajapriya', 'S. VineethaR', 'Anurag M Bagde']
2023
arXiv.org
9
20
['Economics', 'Computer Science']
2305.14458
Dancing Between Success and Failure: Edit-level Simplification Evaluation using SALSA
['David Heineman', 'Yao Dou', 'Mounica Maddela', 'Wei Xu']
['cs.CL']
Large language models (e.g., GPT-4) are uniquely capable of producing highly rated text simplification, yet current human evaluation methods fail to provide a clear understanding of systems' specific strengths and weaknesses. To address this limitation, we introduce SALSA, an edit-based human annotation framework that ...
2023-05-23T18:30:49Z
Accepted to EMNLP 2023
null
null
null
null
null
null
null
null
null
2305.14463
ReadMe++: Benchmarking Multilingual Language Models for Multi-Domain Readability Assessment
['Tarek Naous', 'Michael J. Ryan', 'Anton Lavrouk', 'Mohit Chandra', 'Wei Xu']
['cs.CL', 'cs.AI', 'cs.LG']
We present a comprehensive evaluation of large language models for multilingual readability assessment. Existing evaluation resources lack domain and language diversity, limiting the ability for cross-domain and cross-lingual analyses. This paper introduces ReadMe++, a multilingual multi-domain dataset with human annot...
2023-05-23T18:37:30Z
Accepted to EMNLP 2024 Main Conference
null
null
ReadMe++: Benchmarking Multilingual Language Models for Multi-Domain Readability Assessment
['Tarek Naous', 'Michael Joseph Ryan', 'Mohit Chandra', 'Wei Xu']
2023
Conference on Empirical Methods in Natural Language Processing
8
101
['Computer Science', 'Medicine']
2305.14471
CGCE: A Chinese Generative Chat Evaluation Benchmark for General and Financial Domains
['Xuanyu Zhang', 'Bingbing Li', 'Qing Yang']
['cs.CL']
Generative chat models, such as ChatGPT and GPT-4, have revolutionized natural language generation (NLG) by incorporating instructions and human feedback to achieve significant performance improvements. However, the lack of standardized evaluation benchmarks for chat models, particularly for Chinese and domain-specific...
2023-05-23T18:54:15Z
null
null
null
null
null
null
null
null
null
null
2305.14481
FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models
['Konstantin Dobler', 'Gerard de Melo']
['cs.CL']
Using model weights pretrained on a high-resource language as a warm start can reduce the need for data and compute to obtain high-quality language models for other, especially low-resource, languages. However, if we want to use a new tokenizer specialized for the target language, we cannot transfer the source model's ...
2023-05-23T19:21:53Z
Accepted to EMNLP 2023 Main Conference (Long Paper). Code: https://github.com/konstantinjdobler/focus
null
10.18653/v1/2023.emnlp-main.829
null
null
null
null
null
null
null
2305.14677
Optimal Linear Subspace Search: Learning to Construct Fast and High-Quality Schedulers for Diffusion Models
['Zhongjie Duan', 'Chengyu Wang', 'Cen Chen', 'Jun Huang', 'Weining Qian']
['cs.CV']
In recent years, diffusion models have become the most popular and powerful methods in the field of image synthesis, even rivaling human artists in artistic creativity. However, the key issue currently limiting the application of diffusion models is its extremely slow generation process. Although several methods were p...
2023-05-24T03:33:30Z
13 pages, 5 figures
null
10.1145/3583780.3614999
Optimal Linear Subspace Search: Learning to Construct Fast and High-Quality Schedulers for Diffusion Models
['Zhongjie Duan', 'Chengyu Wang', 'Cen Chen', 'Jun Huang', 'Weining Qian']
2023
International Conference on Information and Knowledge Management
12
45
['Computer Science']
2305.14688
ExpertPrompting: Instructing Large Language Models to be Distinguished Experts
['Benfeng Xu', 'An Yang', 'Junyang Lin', 'Quan Wang', 'Chang Zhou', 'Yongdong Zhang', 'Zhendong Mao']
['cs.CL', 'cs.AI']
The answering quality of an aligned large language model (LLM) can be drastically improved if treated with proper crafting of prompts. In this paper, we propose ExpertPrompting to elicit the potential of LLMs to answer as distinguished experts. We first utilize In-Context Learning to automatically synthesize detailed a...
2023-05-24T03:51:31Z
null
null
null
null
null
null
null
null
null
null
2305.14718
Leftover Lunch: Advantage-based Offline Reinforcement Learning for Language Models
['Ashutosh Baheti', 'Ximing Lu', 'Faeze Brahman', 'Ronan Le Bras', 'Maarten Sap', 'Mark Riedl']
['cs.CL']
Reinforcement Learning with Human Feedback (RLHF) is the most prominent method for Language Model (LM) alignment. However, RLHF is an unstable and data-hungry process that continually requires new high-quality LM-generated data for finetuning. We introduce Advantage-Leftover Lunch RL (A-LoL), a new class of offline pol...
2023-05-24T04:42:17Z
published at ICLR 2024
null
null
null
null
null
null
null
null
null
2305.14720
BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing
['Dongxu Li', 'Junnan Li', 'Steven C. H. Hoi']
['cs.CV', 'cs.AI']
Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulties preserving the subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that...
2023-05-24T04:51:04Z
null
null
null
BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing
['Dongxu Li', 'Junnan Li', 'Steven C. H. Hoi']
2023
Neural Information Processing Systems
331
40
['Computer Science']
2305.14734
Advancements in Arabic Grammatical Error Detection and Correction: An Empirical Investigation
['Bashar Alhafni', 'Go Inoue', 'Christian Khairallah', 'Nizar Habash']
['cs.CL']
Grammatical error correction (GEC) is a well-explored problem in English with many existing models and datasets. However, research on GEC in morphologically rich languages has been limited due to challenges such as data scarcity and language complexity. In this paper, we present the first results on Arabic GEC using tw...
2023-05-24T05:12:58Z
Accepted to EMNLP 2023
null
null
null
null
null
null
null
null
null
2305.14749
gRNAde: Geometric Deep Learning for 3D RNA inverse design
['Chaitanya K. Joshi', 'Arian R. Jamasb', 'Ramon Viñas', 'Charles Harris', 'Simon V. Mathis', 'Alex Morehead', 'Rishabh Anand', 'Pietro Liò']
['cs.LG', 'q-bio.BM', 'q-bio.QM']
Computational RNA design tasks are often posed as inverse problems, where sequences are designed based on adopting a single desired secondary structure without considering 3D conformational diversity. We introduce gRNAde, a geometric RNA design pipeline operating on 3D RNA backbones to design sequences that explicitly ...
2023-05-24T05:46:56Z
ICLR 2025 camera-ready version (Spotlight presentation). Previously titled 'Multi-State RNA Design with Geometric Multi-Graph Neural Networks', presented at ICML 2023 Computational Biology Workshop
null
null
gRNAde: Geometric Deep Learning for 3D RNA inverse design
['Chaitanya K. Joshi', 'Arian R. Jamasb', 'Ramón Viñas', 'Charles Harris', 'Simon V. Mathis', 'P. Liò']
2023
bioRxiv
18
63
['Medicine', 'Computer Science', 'Biology']
2305.14761
UniChart: A Universal Vision-language Pretrained Model for Chart Comprehension and Reasoning
['Ahmed Masry', 'Parsa Kavehzadeh', 'Xuan Long Do', 'Enamul Hoque', 'Shafiq Joty']
['cs.CL']
Charts are very popular for analyzing data, visualizing key insights and answering complex reasoning questions about data. To facilitate chart-based data analysis using natural language, several downstream tasks have been introduced recently such as chart question answering and chart summarization. However, most of the...
2023-05-24T06:11:17Z
null
null
null
null
null
null
null
null
null
null
2305.14783
Disentangled Phonetic Representation for Chinese Spelling Correction
['Zihong Liang', 'Xiaojun Quan', 'Qifan Wang']
['cs.CL']
Chinese Spelling Correction (CSC) aims to detect and correct erroneous characters in Chinese texts. Although efforts have been made to introduce phonetic information (Hanyu Pinyin) in this task, they typically merge phonetic representations with character representations, which tends to weaken the representation effect...
2023-05-24T06:39:12Z
Accepted to ACL 2023 Main Conference
null
null
null
null
null
null
null
null
null
2305.14788
Adapting Language Models to Compress Contexts
['Alexis Chevalier', 'Alexander Wettig', 'Anirudh Ajith', 'Danqi Chen']
['cs.CL']
Transformer-based language models (LMs) are powerful and widely-applicable tools, but their usefulness is constrained by a finite context window and the expensive computational cost of processing long text documents. We propose to adapt pre-trained LMs into AutoCompressors. These language models are capable of compress...
2023-05-24T06:42:44Z
Accepted to EMNLP 2023; added results for Llama-2-7B model
null
null
Adapting Language Models to Compress Contexts
['Alexis Chevalier', 'Alexander Wettig', 'Anirudh Ajith', 'Danqi Chen']
2023
Conference on Empirical Methods in Natural Language Processing
192
58
['Computer Science']
2305.14904
Identifying Informational Sources in News Articles
['Alexander Spangher', 'Nanyun Peng', 'Jonathan May', 'Emilio Ferrara']
['cs.CL', 'cs.AI', 'cs.CY']
News articles are driven by the informational sources journalists use in reporting. Modeling when, how and why sources get used together in stories can help us better understand the information we consume and even help journalists with the task of producing it. In this work, we take steps toward this goal by constructi...
2023-05-24T08:56:35Z
13 pages
null
null
null
null
null
null
null
null
null
2305.14964
Detecting Multidimensional Political Incivility on Social Media
['Sagi Pendzel', 'Nir Lotan', 'Alon Zoizner', 'Einat Minkov']
['cs.CL']
The rise of social media has been argued to intensify uncivil and hostile online political discourse. Yet, to date, there is a lack of clarity on what incivility means in the political sphere. In this work, we utilize a multidimensional perspective of political incivility, developed in the fields of political science a...
2023-05-24T09:57:12Z
null
null
null
null
null
null
null
null
null
null
2305.15011
Bactrian-X: Multilingual Replicable Instruction-Following Models with Low-Rank Adaptation
['Haonan Li', 'Fajri Koto', 'Minghao Wu', 'Alham Fikri Aji', 'Timothy Baldwin']
['cs.CL']
Instruction tuning has shown great promise in improving the performance of large language models. However, research on multilingual instruction tuning has been limited due to the scarcity of high-quality instruction-response datasets across different languages. To bridge this gap, we present Bactrian-X, a comprehensive...
2023-05-24T10:50:31Z
null
null
null
null
null
null
null
null
null
null
2305.15017
Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems
['Marek Kadlčík', 'Michal Štefánik', 'Ondřej Sotolář', 'Vlastimil Martinek']
['cs.LG', 'cs.AI', 'cs.CL']
Despite outstanding performance in many tasks, language models are notoriously inclined to make factual errors in tasks requiring arithmetic computation. We address this deficiency by creating Calc-X, a collection of datasets that demonstrates the appropriate use of a calculator in reasoning chains. Calc-X is suitable ...
2023-05-24T10:58:20Z
Published in EMNLP 2023: Main track
null
null
Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems
['Marek Kadlcík', 'Michal Štefánik']
2023
Conference on Empirical Methods in Natural Language Processing
15
21
['Computer Science']
2305.15062
Lawyer LLaMA Technical Report
['Quzhe Huang', 'Mingxu Tao', 'Chen Zhang', 'Zhenwei An', 'Cong Jiang', 'Zhibin Chen', 'Zirui Wu', 'Yansong Feng']
['cs.CL', 'cs.AI']
Large Language Models (LLMs), like LLaMA, have exhibited remarkable performance across various tasks. Nevertheless, when deployed to specific domains such as law or medicine, the models still confront the challenge of a deficiency in domain-specific knowledge and an inadequate capability to leverage that knowledge to r...
2023-05-24T11:52:07Z
null
null
null
null
null
null
null
null
null
null
2305.15077
Contrastive Learning of Sentence Embeddings from Scratch
['Junlei Zhang', 'Zhenzhong Lan', 'Junxian He']
['cs.CL']
Contrastive learning has been the dominant approach to train state-of-the-art sentence embeddings. Previous studies have typically learned sentence embeddings either through the use of human-annotated natural language inference (NLI) data or via large-scale unlabeled sentences in an unsupervised manner. However, even i...
2023-05-24T11:56:21Z
Emnlp 2023
null
null
null
null
null
null
null
null
null
2305.15194
DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models
['Sungnyun Kim', 'Junsoo Lee', 'Kibeom Hong', 'Daesik Kim', 'Namhyuk Ahn']
['cs.CV', 'cs.AI', 'cs.LG']
In this study, we aim to extend the capabilities of diffusion-based text-to-image (T2I) generation models by incorporating diverse modalities beyond textual description, such as sketch, box, color palette, and style embedding, within a single model. We thus design a multimodal T2I diffusion model, coined as DiffBlender...
2023-05-24T14:31:20Z
Project page: https://sungnyun.github.io/diffblender/
null
null
DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models
['Sungnyun Kim', 'Junsoo Lee', 'Kibeom Hong', 'Daesik Kim', 'Namhyuk Ahn']
2023
arXiv.org
15
61
['Computer Science']
2305.15225
SAIL: Search-Augmented Instruction Learning
['Hongyin Luo', 'Yung-Sung Chuang', 'Yuan Gong', 'Tianhua Zhang', 'Yoon Kim', 'Xixin Wu', 'Danny Fox', 'Helen Meng', 'James Glass']
['cs.CL']
Large language models (LLMs) have been significantly improved by instruction fine-tuning, but still lack transparency and the ability to utilize up-to-date knowledge and information. In this work, we propose search-augmented instruction learning (SAIL), which grounds the language generation and instruction following ab...
2023-05-24T15:07:30Z
null
null
null
SAIL: Search-Augmented Instruction Learning
['Hongyin Luo', 'Yung-Sung Chuang', 'Yuan Gong', 'Tianhua Zhang', 'Yoon Kim', 'Xixin Wu', 'D. Fox', 'H. Meng', 'James R. Glass']
2023
Conference on Empirical Methods in Natural Language Processing
27
49
['Computer Science']
2305.15272
ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers
['Jingfeng Yao', 'Xinggang Wang', 'Shusheng Yang', 'Baoyuan Wang']
['cs.CV']
Recently, plain vision Transformers (ViTs) have shown impressive performance on various computer vision tasks, thanks to their strong modeling capacity and large-scale pretraining. However, they have not yet conquered the problem of image matting. We hypothesize that image matting could also be boosted by ViTs and pres...
2023-05-24T15:59:35Z
codes: https://github.com/hustvl/ViTMatte
null
null
null
null
null
null
null
null
null
2305.15324
Model evaluation for extreme risks
['Toby Shevlane', 'Sebastian Farquhar', 'Ben Garfinkel', 'Mary Phuong', 'Jess Whittlestone', 'Jade Leung', 'Daniel Kokotajlo', 'Nahema Marchal', 'Markus Anderljung', 'Noam Kolt', 'Lewis Ho', 'Divya Siddarth', 'Shahar Avin', 'Will Hawkins', 'Been Kim', 'Iason Gabriel', 'Vijay Bolina', 'Jack Clark', 'Yoshua Bengio', 'Pau...
['cs.AI', 'K.4.1']
Current approaches to building general-purpose AI systems tend to produce systems with both beneficial and harmful capabilities. Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills. We explain why model evaluation is c...
2023-05-24T16:38:43Z
Fixed typos; added citation
null
null
null
null
null
null
null
null
null
2305.15334
Gorilla: Large Language Model Connected with Massive APIs
['Shishir G. Patil', 'Tianjun Zhang', 'Xin Wang', 'Joseph E. Gonzalez']
['cs.CL', 'cs.AI']
Large Language Models (LLMs) have seen an impressive wave of advances recently, with models now excelling in a variety of tasks, such as mathematical reasoning and program synthesis. However, their potential to effectively use tools via API calls remains unfulfilled. This is a challenging task even for today's state-of...
2023-05-24T16:48:11Z
null
null
null
null
null
null
null
null
null
null
2305.15391
A Neural Space-Time Representation for Text-to-Image Personalization
['Yuval Alaluf', 'Elad Richardson', 'Gal Metzer', 'Daniel Cohen-Or']
['cs.CV']
A key aspect of text-to-image personalization methods is the manner in which the target concept is represented within the generative process. This choice greatly affects the visual fidelity, downstream editability, and disk space needed to store the learned concept. In this paper, we explore a new text-conditioning spa...
2023-05-24T17:53:07Z
Project page available at https://neuraltextualinversion.github.io/NeTI/
null
null
null
null
null
null
null
null
null
2305.15541
Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation
['Yuan Yang', 'Siheng Xiong', 'Ali Payani', 'Ehsan Shareghi', 'Faramarz Fekri']
['cs.CL', 'cs.AI']
Translating natural language sentences to first-order logic (NL-FOL translation) is a longstanding challenge in the NLP and formal logic literature. This paper introduces LogicLLaMA, a LLaMA-7B model fine-tuned for NL-FOL translation using LoRA on a single GPU. LogicLLaMA is capable of directly translating natural lang...
2023-05-24T19:59:51Z
null
null
null
Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation
['Yuan Yang', 'Siheng Xiong', 'Ali Payani', 'Ehsan Shareghi', 'F. Fekri']
2023
Annual Meeting of the Association for Computational Linguistics
41
33
['Computer Science']
2305.15798
BK-SDM: A Lightweight, Fast, and Cheap Version of Stable Diffusion
['Bo-Kyeong Kim', 'Hyoung-Kyu Song', 'Thibault Castells', 'Shinkook Choi']
['cs.LG']
Text-to-image (T2I) generation with Stable Diffusion models (SDMs) involves high computing demands due to billion-scale parameters. To enhance efficiency, recent studies have reduced sampling steps and applied network quantization while retaining the original architectures. The lack of architectural reduction attempts ...
2023-05-25T07:28:28Z
ECCV 2024 Camera-Ready Version
null
10.1007/978-3-031-72949-2
Computer Vision - ECCV 2024 - 18th European Conference, Milan, Italy, September 29-October 4, 2024, Proceedings, Part LIV
['Bo-Kyeong Kim', 'Hyoung-Kyu Song', 'Thibault Castells', 'Shinkook Choi']
2023
European Conference on Computer Vision
11
89
['Computer Science']
2305.16023
NaSGEC: a Multi-Domain Chinese Grammatical Error Correction Dataset from Native Speaker Texts
['Yue Zhang', 'Bo Zhang', 'Haochen Jiang', 'Zhenghua Li', 'Chen Li', 'Fei Huang', 'Min Zhang']
['cs.CL']
We introduce NaSGEC, a new dataset to facilitate research on Chinese grammatical error correction (CGEC) for native speaker texts from multiple domains. Previous CGEC research primarily focuses on correcting texts from a single domain, especially learner essays. To broaden the target domain, we annotate multiple refere...
2023-05-25T13:05:52Z
Accepted by ACL 2023 (Findings, long paper)
null
null
null
null
null
null
null
null
null
2305.16037
GenerateCT: Text-Conditional Generation of 3D Chest CT Volumes
['Ibrahim Ethem Hamamci', 'Sezgin Er', 'Anjany Sekuboyina', 'Enis Simsar', 'Alperen Tezcan', 'Ayse Gulnihan Simsek', 'Sevval Nil Esirgun', 'Furkan Almas', 'Irem Dogan', 'Muhammed Furkan Dasdelen', 'Chinmay Prabhakar', 'Hadrien Reynaud', 'Sarthak Pati', 'Christian Bluethgen', 'Mehmet Kemal Ozdemir', 'Bjoern Menze']
['cs.CV']
GenerateCT, the first approach to generating 3D medical imaging conditioned on free-form medical text prompts, incorporates a text encoder and three key components: a novel causal vision transformer for encoding 3D CT volumes, a text-image transformer for aligning CT and text tokens, and a text-conditional super-resolu...
2023-05-25T13:16:39Z
null
null
null
GenerateCT: Text-Conditional Generation of 3D Chest CT Volumes
['Ibrahim Ethem Hamamci', 'Sezgin Er', 'Enis Simsar', 'A. Sekuboyina', 'Chinmay Prabhakar', 'A. Tezcan', 'Ayse Gulnihan Simsek', 'S. Esirgun', 'Furkan Almas', 'Irem Dougan', 'M. F. Dasdelen', 'Hadrien Reynaud', 'Sarthak Pati', 'Christian Bluethgen', 'M. K. Ozdemir', 'Bjoern H Menze']
2023
European Conference on Computer Vision
24
48
['Computer Science']
2305.16066
Guided Attention for Next Active Object @ EGO4D STA Challenge
['Sanket Thakur', 'Cigdem Beyan', 'Pietro Morerio', 'Vittorio Murino', 'Alessio Del Bue']
['cs.CV']
In this technical report, we describe the Guided-Attention mechanism based solution for the short-term anticipation (STA) challenge for the EGO4D challenge. It combines the object detections, and the spatiotemporal features extracted from video clips, enhancing the motion and contextual information, and further decodin...
2023-05-25T13:56:30Z
Winner of CVPR@2023 Ego4D STA challenge. arXiv admin note: substantial text overlap with arXiv:2305.12953
null
null
null
null
null
null
null
null
null
2305.16264
Scaling Data-Constrained Language Models
['Niklas Muennighoff', 'Alexander M. Rush', 'Boaz Barak', 'Teven Le Scao', 'Aleksandra Piktus', 'Nouamane Tazi', 'Sampo Pyysalo', 'Thomas Wolf', 'Colin Raffel']
['cs.CL', 'cs.AI', 'cs.LG']
The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-...
2023-05-25T17:18:55Z
50 pages (9 main), 39 figures, 15 tables
null
null
null
null
null
null
null
null
null
2305.16300
Landmark Attention: Random-Access Infinite Context Length for Transformers
['Amirkeivan Mohtashami', 'Martin Jaggi']
['cs.CL', 'cs.LG']
While Transformers have shown remarkable success in natural language processing, their attention mechanism's large memory requirements have limited their ability to handle longer contexts. Prior approaches, such as recurrent memory or retrieval-based augmentation, have either compromised the random-access flexibility o...
2023-05-25T17:53:42Z
Published as a conference paper at NeurIPS 2023 - 37th Conference on Neural Information Processing Systems
null
null
null
null
null
null
null
null
null
2305.16307
IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages
['Jay Gala', 'Pranjal A. Chitale', 'Raghavan AK', 'Varun Gumma', 'Sumanth Doddapaneni', 'Aswanth Kumar', 'Janki Nawale', 'Anupama Sujatha', 'Ratish Puduppully', 'Vivek Raghavan', 'Pratyush Kumar', 'Mitesh M. Khapra', 'Raj Dabre', 'Anoop Kunchukuttan']
['cs.CL']
India has a rich linguistic landscape with languages from 4 major language families spoken by over a billion people. 22 of these languages are listed in the Constitution of India (referred to as scheduled languages) are the focus of this work. Given the linguistic diversity, high-quality and accessible Machine Translat...
2023-05-25T17:57:43Z
Accepted at TMLR
null
null
null
null
null
null
null
null
null
2305.16315
NAP: Neural 3D Articulation Prior
['Jiahui Lei', 'Congyue Deng', 'Bokui Shen', 'Leonidas Guibas', 'Kostas Daniilidis']
['cs.CV']
We propose Neural 3D Articulation Prior (NAP), the first 3D deep generative model to synthesize 3D articulated object models. Despite the extensive research on generating 3D objects, compositions, or scenes, there remains a lack of focus on capturing the distribution of articulated objects, a common object category for...
2023-05-25T17:59:35Z
project page: https://www.cis.upenn.edu/~leijh/projects/nap
null
null
null
null
null
null
null
null
null
2305.16355
PandaGPT: One Model To Instruction-Follow Them All
['Yixuan Su', 'Tian Lan', 'Huayang Li', 'Jialu Xu', 'Yan Wang', 'Deng Cai']
['cs.CL', 'cs.CV']
We present PandaGPT, an approach to emPower large lANguage moDels with visual and Auditory instruction-following capabilities. Our pilot experiments show that PandaGPT can perform complex tasks such as detailed image description generation, writing stories inspired by videos, and answering questions about audios. More ...
2023-05-25T04:16:07Z
Technical report, work in progress. Our project page is at https://panda-gpt.github.io/
null
null
PandaGPT: One Model To Instruction-Follow Them All
['Yixuan Su', 'Tian Lan', 'Huayang Li', 'Jialu Xu', 'Yan Wang', 'Deng Cai']
2023
Tsinghua Interdisciplinary Workshop on Logic, Language and Meaning
295
37
['Computer Science']
2305.16504
On the Tool Manipulation Capability of Open-source Large Language Models
['Qiantong Xu', 'Fenglu Hong', 'Bo Li', 'Changran Hu', 'Zhengyu Chen', 'Jian Zhang']
['cs.CL', 'cs.AI', 'cs.LG']
Recent studies on software tool manipulation with large language models (LLMs) mostly rely on closed model APIs. The industrial adoption of these models is substantially constrained due to the security and robustness risks in exposing information to closed LLM API services. In this paper, we ask can we enhance open-sou...
2023-05-25T22:10:20Z
null
null
null
On the Tool Manipulation Capability of Open-source Large Language Models
['Qiantong Xu', 'Fenglu Hong', 'B. Li', 'Changran Hu', 'Zheng Chen', 'Jian Zhang']
2023
arXiv.org
78
68
['Computer Science']
2305.16636
DataFinder: Scientific Dataset Recommendation from Natural Language Descriptions
['Vijay Viswanathan', 'Luyu Gao', 'Tongshuang Wu', 'Pengfei Liu', 'Graham Neubig']
['cs.IR', 'cs.CL', 'cs.DL']
Modern machine learning relies on datasets to develop and validate research ideas. Given the growth of publicly available data, finding the right dataset to use is increasingly difficult. Any research question imposes explicit and implicit constraints on how well a given dataset will enable researchers to answer this q...
2023-05-26T05:22:36Z
To appear at ACL 2023. Code published at https://github.com/viswavi/datafinder
null
null
DataFinder: Scientific Dataset Recommendation from Natural Language Descriptions
['Vijay Viswanathan', 'Luyu Gao', 'Tongshuang Sherry Wu', 'Pengfei Liu', 'Graham Neubig']
2023
Annual Meeting of the Association for Computational Linguistics
13
53
['Computer Science']
2305.16739
AlignScore: Evaluating Factual Consistency with a Unified Alignment Function
['Yuheng Zha', 'Yichi Yang', 'Ruichen Li', 'Zhiting Hu']
['cs.CL']
Many text generation applications require the generated text to be factually consistent with input information. Automatic evaluation of factual consistency is challenging. Previous work has developed various metrics that often depend on specific functions, such as natural language inference (NLI) or question answering ...
2023-05-26T08:41:59Z
19 pages, 5 figures, ACL2023
null
null
null
null
null
null
null
null
null
2305.16765
Backpack Language Models
['John Hewitt', 'John Thickstun', 'Christopher D. Manning', 'Percy Liang']
['cs.CL']
We present Backpacks: a new neural architecture that marries strong modeling performance with an interface for interpretability and control. Backpacks learn multiple non-contextual sense vectors for each word in a vocabulary, and represent a word in a sequence as a context-dependent, non-negative linear combination of ...
2023-05-26T09:26:23Z
ACL 2023 Camera-Ready
null
null
Backpack Language Models
['John Hewitt', 'John Thickstun', 'Christopher D. Manning', 'Percy Liang']
2023
Annual Meeting of the Association for Computational Linguistics
16
59
['Computer Science']
2305.16799
To Revise or Not to Revise: Learning to Detect Improvable Claims for Argumentative Writing Support
['Gabriella Skitalinskaya', 'Henning Wachsmuth']
['cs.CL']
Optimizing the phrasing of argumentative text is crucial in higher education and professional development. However, assessing whether and how the different claims in a text should be revised is a hard task, especially for novice writers. In this work, we explore the main challenges to identifying argumentative claims i...
2023-05-26T10:19:54Z
Accepted as a long paper at ACL 2023
null
null
null
null
null
null
null
null
null
2305.16944
Learning to Imagine: Visually-Augmented Natural Language Generation
['Tianyi Tang', 'Yushuo Chen', 'Yifan Du', 'Junyi Li', 'Wayne Xin Zhao', 'Ji-Rong Wen']
['cs.CL']
People often imagine relevant scenes to aid in the writing process. In this work, we aim to utilize visual information for composition in the same manner as humans. We propose a method, LIVE, that makes pre-trained language models (PLMs) Learn to Imagine for Visuallyaugmented natural language gEneration. First, we imag...
2023-05-26T13:59:45Z
Accepted by ACL 2023
null
null
null
null
null
null
null
null
null
2305.16958
MixCE: Training Autoregressive Language Models by Mixing Forward and Reverse Cross-Entropies
['Shiyue Zhang', 'Shijie Wu', 'Ozan Irsoy', 'Steven Lu', 'Mohit Bansal', 'Mark Dredze', 'David Rosenberg']
['cs.CL', 'cs.AI', 'cs.LG']
Autoregressive language models are trained by minimizing the cross-entropy of the model distribution Q relative to the data distribution P -- that is, minimizing the forward cross-entropy, which is equivalent to maximum likelihood estimation (MLE). We have observed that models trained in this way may "over-generalize",...
2023-05-26T14:14:51Z
ACL 2023 (22 pages)
null
null
null
null
null
null
null
null
null
2305.16960
Training Socially Aligned Language Models on Simulated Social Interactions
['Ruibo Liu', 'Ruixin Yang', 'Chenyan Jia', 'Ge Zhang', 'Denny Zhou', 'Andrew M. Dai', 'Diyi Yang', 'Soroush Vosoughi']
['cs.CL', 'cs.AI', 'cs.CY', 'cs.HC']
Social alignment in AI systems aims to ensure that these models behave according to established societal values. However, unlike humans, who derive consensus on value judgments through social interaction, current language models (LMs) are trained to rigidly replicate their training corpus in isolation, leading to subpa...
2023-05-26T14:17:36Z
Code, data, and models can be downloaded via https://github.com/agi-templar/Stable-Alignment
null
null
Training Socially Aligned Language Models on Simulated Social Interactions
['Ruibo Liu', 'Ruixin Yang', 'Chenyan Jia', 'Ge Zhang', 'Denny Zhou', 'Andrew M. Dai', 'Diyi Yang', 'Soroush Vosoughi']
2,023
International Conference on Learning Representations
56
79
['Computer Science']
2,305.17438
On the Importance of Backbone to the Adversarial Robustness of Object Detectors
['Xiao Li', 'Hang Chen', 'Xiaolin Hu']
['cs.CV', 'cs.AI', 'cs.CR', 'cs.LG']
Object detection is a critical component of various security-sensitive applications, such as autonomous driving and video surveillance. However, existing object detectors are vulnerable to adversarial attacks, which poses a significant challenge to their reliability and security. Through experiments, first, we found th...
2023-05-27T10:26:23Z
Accepted by IEEE TIFS
null
null
null
null
null
null
null
null
null
2,305.17493
The Curse of Recursion: Training on Generated Data Makes Models Forget
['Ilia Shumailov', 'Zakhar Shumaylov', 'Yiren Zhao', 'Yarin Gal', 'Nicolas Papernot', 'Ross Anderson']
['cs.LG', 'cs.AI', 'cs.CL', 'cs.CR', 'cs.CV']
Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring abou...
2023-05-27T15:10:41Z
Fixed typos in eqn 4,5
null
null
null
null
null
null
null
null
null
2,305.17718
FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions
['Noam Rotstein', 'David Bensaid', 'Shaked Brody', 'Roy Ganz', 'Ron Kimmel']
['cs.CV', 'cs.AI', 'cs.CL']
The advent of vision-language pre-training techniques enhanced substantial progress in the development of models for image captioning. However, these models frequently produce generic captions and may omit semantically important image details. This limitation can be traced back to the image-text datasets; while their c...
2023-05-28T13:16:03Z
null
null
null
null
null
null
null
null
null
null
2,305.17721
Rethinking Masked Language Modeling for Chinese Spelling Correction
['Hongqiu Wu', 'Shaohua Zhang', 'Yuchen Zhang', 'Hai Zhao']
['cs.CL']
In this paper, we study Chinese Spelling Correction (CSC) as a joint decision made by two separate models: a language model and an error model. Through empirical analysis, we find that fine-tuning BERT tends to over-fit the error model while under-fitting the language model, resulting in poor generalization to out-of-distr...
2023-05-28T13:19:12Z
Accepted by ACL'2023
null
null
null
null
null
null
null
null
null
2,305.17746
Whitening-based Contrastive Learning of Sentence Embeddings
['Wenjie Zhuo', 'Yifan Sun', 'Xiaohan Wang', 'Linchao Zhu', 'Yi Yang']
['cs.CL']
This paper presents a whitening-based contrastive learning method for sentence embedding learning (WhitenedCSE), which combines contrastive learning with a novel shuffled group whitening. Generally, contrastive learning pulls distortions of a single sample (i.e., positive samples) close and pushes negative samples far aw...
2023-05-28T14:58:10Z
ACL 2023 Main Conference(Oral)
null
null
WhitenedCSE: Whitening-based Contrastive Learning of Sentence Embeddings
['Wenjie Zhuo', 'Yifan Sun', 'Xiaohan Wang', 'Linchao Zhu', 'Yezhou Yang']
2,023
Annual Meeting of the Association for Computational Linguistics
21
54
['Computer Science']
2,305.18098
BigTranslate: Augmenting Large Language Models with Multilingual Translation Capability over 100 Languages
['Wen Yang', 'Chong Li', 'Jiajun Zhang', 'Chengqing Zong']
['cs.CL']
Large language models (LLMs) demonstrate promising translation performance among various natural languages. However, many LLMs especially the open-sourced ones, such as BLOOM and LLaMA, are English-dominant and support only dozens of natural languages, making the potential of LLMs on language translation less explored....
2023-05-29T14:07:52Z
16 pages, 4 figures. Our model is available at https://github.com/ZNLP/BigTranslate
null
null
null
null
null
null
null
null
null
2,305.18149
Multiscale Positive-Unlabeled Detection of AI-Generated Texts
['Yuchuan Tian', 'Hanting Chen', 'Xutao Wang', 'Zheyuan Bai', 'Qinghua Zhang', 'Ruifeng Li', 'Chao Xu', 'Yunhe Wang']
['cs.CL', 'cs.AI']
Recent releases of Large Language Models (LLMs), e.g. ChatGPT, are astonishing at generating human-like texts, but they may impact the authenticity of texts. Previous works proposed methods to detect these AI-generated texts, including simple ML classifiers, pretrained-model-based zero-shot methods, and finetuned langu...
2023-05-29T15:25:00Z
ICLR2024 (Spotlight)
null
null
null
null
null
null
null
null
null
2,305.18203
Concept Decomposition for Visual Exploration and Inspiration
['Yael Vinker', 'Andrey Voynov', 'Daniel Cohen-Or', 'Ariel Shamir']
['cs.CV']
A creative idea is often born from transforming, combining, and modifying ideas from existing visual examples capturing various concepts. However, one cannot simply copy the concept as a whole, and inspiration is achieved by examining certain aspects of the concept. Hence, it is often necessary to separate a concept in...
2023-05-29T16:56:56Z
https://inspirationtree.github.io/inspirationtree/
null
null
null
null
null
null
null
null
null
2,305.18283
CommonAccent: Exploring Large Acoustic Pretrained Models for Accent Classification Based on Common Voice
['Juan Zuluaga-Gomez', 'Sara Ahmed', 'Danielius Visockas', 'Cem Subakan']
['cs.CL', 'cs.AI', 'cs.LG', 'eess.AS']
Despite the recent advancements in Automatic Speech Recognition (ASR), the recognition of accented speech still remains a dominant problem. In order to create more inclusive ASR systems, research has shown that the integration of accent information, as part of a larger ASR framework, can lead to the mitigation of accen...
2023-05-29T17:53:35Z
To appear in Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH 2023
null
null
null
null
null
null
null
null
null
2,305.18290
Direct Preference Optimization: Your Language Model is Secretly a Reward Model
['Rafael Rafailov', 'Archit Sharma', 'Eric Mitchell', 'Stefano Ermon', 'Christopher D. Manning', 'Chelsea Finn']
['cs.LG', 'cs.AI', 'cs.CL']
While large-scale unsupervised language models (LMs) learn broad world knowledge and some reasoning skills, achieving precise control of their behavior is difficult due to the completely unsupervised nature of their training. Existing methods for gaining such steerability collect human labels of the relative quality of...
2023-05-29T17:57:46Z
null
null
null
Direct Preference Optimization: Your Language Model is Secretly a Reward Model
['Rafael Rafailov', 'Archit Sharma', 'E. Mitchell', 'Stefano Ermon', 'Christopher D. Manning', 'Chelsea Finn']
2,023
Neural Information Processing Systems
4,190
60
['Computer Science']
2,305.18474
Make-An-Audio 2: Temporal-Enhanced Text-to-Audio Generation
['Jiawei Huang', 'Yi Ren', 'Rongjie Huang', 'Dongchao Yang', 'Zhenhui Ye', 'Chen Zhang', 'Jinglin Liu', 'Xiang Yin', 'Zejun Ma', 'Zhou Zhao']
['cs.SD', 'cs.LG', 'cs.MM', 'eess.AS']
Large diffusion models have been successful in text-to-audio (T2A) synthesis tasks, but they often suffer from common issues such as semantic misalignment and poor temporal consistency due to limited natural language understanding and data scarcity. Additionally, 2D spatial structures widely used in T2A works lead to u...
2023-05-29T10:41:28Z
null
null
null
Make-An-Audio 2: Temporal-Enhanced Text-to-Audio Generation
['Jia-Bin Huang', 'Yi Ren', 'Rongjie Huang', 'Dongchao Yang', 'Zhenhui Ye', 'Chen Zhang', 'Jinglin Liu', 'Xiang Yin', 'Zejun Ma', 'Zhou Zhao']
2,023
arXiv.org
64
56
['Computer Science', 'Engineering']
2,305.18701
Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures
['Devdhar Patel', 'Terrence Sejnowski', 'Hava Siegelmann']
['cs.AI', 'cs.SY', 'eess.SY']
The current reinforcement learning framework focuses exclusively on performance, often at the expense of efficiency. In contrast, biological control achieves remarkable performance while also optimizing computational energy expenditure and decision frequency. We propose a Decision Bounded Markov Decision Process (DB-MD...
2023-05-30T02:59:06Z
50 Pages, 9 Figures, 6 Tables. Replacement after being published in the journal Neural Computation
Neural Computation, 1-30 (2024)
10.1162/neco_a_01718
Optimizing Attention and Cognitive Control Costs Using Temporally Layered Architectures
['Devdhar Patel', 'T. Sejnowski', 'H. Siegelmann']
2,023
Neural Computation
2
52
['Computer Science', 'Engineering', 'Medicine']
2,305.18802
LibriTTS-R: A Restored Multi-Speaker Text-to-Speech Corpus
['Yuma Koizumi', 'Heiga Zen', 'Shigeki Karita', 'Yifan Ding', 'Kohei Yatabe', 'Nobuyuki Morioka', 'Michiel Bacchiani', 'Yu Zhang', 'Wei Han', 'Ankur Bapna']
['eess.AS', 'cs.SD']
This paper introduces a new speech dataset called ``LibriTTS-R'' designed for text-to-speech (TTS) use. It is derived by applying speech restoration to the LibriTTS corpus, which consists of 585 hours of speech data at 24 kHz sampling rate from 2,456 speakers and the corresponding texts. The constituent samples of Libr...
2023-05-30T07:30:21Z
Accepted to Interspeech 2023
null
null
null
null
null
null
null
null
null
2,305.18939
DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification
['Regina Stodden', 'Omar Momen', 'Laura Kallmeyer']
['cs.CL']
Text simplification is an intralingual translation task in which documents, or sentences of a complex source text are simplified for a target audience. The success of automatic text simplification systems is highly dependent on the quality of parallel data used for training and evaluation. To advance sentence simplific...
2023-05-30T11:07:46Z
Accepted to ACL 2023
null
10.18653/v1/2023.acl-long.908
DEplain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification
['Regina Stodden', 'Omar Momen', 'Laura Kallmeyer']
2,023
Annual Meeting of the Association for Computational Linguistics
15
72
['Computer Science']
2,305.19269
Make-A-Voice: Unified Voice Synthesis With Discrete Representation
['Rongjie Huang', 'Chunlei Zhang', 'Yongqi Wang', 'Dongchao Yang', 'Luping Liu', 'Zhenhui Ye', 'Ziyue Jiang', 'Chao Weng', 'Zhou Zhao', 'Dong Yu']
['eess.AS', 'cs.AI', 'cs.CL', 'cs.SD']
Various applications of voice synthesis have been developed independently despite the fact that they generate "voice" as output in common. In addition, the majority of voice synthesis models currently rely on annotated audio data, but it is crucial to scale them to self-supervised datasets in order to effectively captu...
2023-05-30T17:59:26Z
null
null
null
Make-A-Voice: Unified Voice Synthesis With Discrete Representation
['Rongjie Huang', 'Chunlei Zhang', 'Yongqiang Wang', 'Dongchao Yang', 'Lu Liu', 'Zhenhui Ye', 'Ziyue Jiang', 'Chao Weng', 'Zhou Zhao', 'Dong Yu']
2,023
arXiv.org
27
47
['Engineering', 'Computer Science']
2,305.19370
Blockwise Parallel Transformer for Large Context Models
['Hao Liu', 'Pieter Abbeel']
['cs.CL', 'cs.LG']
Transformers have emerged as the cornerstone of state-of-the-art natural language processing models, showcasing exceptional performance across a wide range of AI applications. However, the memory demands posed by the self-attention mechanism and the large feedforward network in Transformers limit their ability to handl...
2023-05-30T19:25:51Z
null
null
null
Blockwise Parallel Transformer for Large Context Models
['Hao Liu', 'P. Abbeel']
2,023
null
11
61
['Computer Science']
2,305.19435
AdANNS: A Framework for Adaptive Semantic Search
['Aniket Rege', 'Aditya Kusupati', 'Sharan Ranjit S', 'Alan Fan', 'Qingqing Cao', 'Sham Kakade', 'Prateek Jain', 'Ali Farhadi']
['cs.LG', 'cs.IR']
Web-scale search systems learn an encoder to embed a given query which is then hooked into an approximate nearest neighbor search (ANNS) pipeline to retrieve similar data points. To accurately capture tail queries and data points, learned representations typically are rigid, high-dimensional vectors that are generally ...
2023-05-30T22:05:47Z
25 pages, 15 figures. NeurIPS 2023 camera ready publication
null
null
AdANNS: A Framework for Adaptive Semantic Search
['Aniket Rege', 'Aditya Kusupati', 'S. SharanRanjit', 'Alan Fan', 'Qingqing Cao', 'S. Kakade', 'Prateek Jain', 'Ali Farhadi']
2,023
Neural Information Processing Systems
6
56
['Computer Science']
2,305.19466
The Impact of Positional Encoding on Length Generalization in Transformers
['Amirhossein Kazemnejad', 'Inkit Padhi', 'Karthikeyan Natesan Ramamurthy', 'Payel Das', 'Siva Reddy']
['cs.CL', 'cs.AI', 'cs.LG']
Length generalization, the ability to generalize from small training context sizes to larger ones, is a critical challenge in the development of Transformer-based language models. Positional encoding (PE) has been identified as a major factor influencing length generalization, but the exact impact of different PE schem...
2023-05-31T00:29:55Z
Accepted at NeurIPS 2023; 15 pages and 22 pages Appendix
null
null
The Impact of Positional Encoding on Length Generalization in Transformers
['Amirhossein Kazemnejad', 'Inkit Padhi', 'K. Ramamurthy', 'Payel Das', 'Siva Reddy']
2,023
Neural Information Processing Systems
209
62
['Computer Science']
2,305.19689
Assessing Word Importance Using Models Trained for Semantic Tasks
['Dávid Javorský', 'Ondřej Bojar', 'François Yvon']
['cs.CL']
Many NLP tasks require automatically identifying the most significant words in a text. In this work, we derive word significance from models trained to solve semantic tasks: Natural Language Inference and Paraphrase Identification. Using an attribution method aimed to explain the predictions of these models, we derive i...
2023-05-31T09:34:26Z
Published in the Findings of ACL 2023
null
null
null
null
null
null
null
null
null
2,305.19840
BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language
['Konrad Wojtasik', 'Vadim Shishkin', 'Kacper Wołowiec', 'Arkadiusz Janz', 'Maciej Piasecki']
['cs.IR', 'cs.AI', 'cs.CL']
The BEIR dataset is a large, heterogeneous benchmark for Information Retrieval (IR) in zero-shot settings, garnering considerable attention within the research community. However, BEIR and analogous datasets are predominantly restricted to the English language. Our objective is to establish extensive large-scale resour...
2023-05-31T13:29:07Z
null
null
null
BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language
['Konrad Wojtasik', 'Vadim Shishkin', 'Kacper Wolowiec', 'Arkadiusz Janz', 'Maciej Piasecki']
2,023
International Conference on Language Resources and Evaluation
11
52
['Computer Science']
2,305.19974
Correcting Semantic Parses with Natural Language through Dynamic Schema Encoding
['Parker Glenn', 'Parag Pravin Dakle', 'Preethi Raghavan']
['cs.CL']
In addressing the task of converting natural language to SQL queries, there are several semantic and syntactic challenges. It becomes increasingly important to understand and remedy the points of failure as the performance of semantic parsing systems improve. We explore semantic parse correction with natural language f...
2023-05-31T16:01:57Z
ACL 2023 Workshop on NLP for Conversational AI
null
null
Correcting Semantic Parses with Natural Language through Dynamic Schema Encoding
['Parker Glenn', 'Parag Dakle', 'Preethi Raghavan']
2,023
NLP4CONVAI
3
33
['Computer Science']
2,305.20050
Let's Verify Step by Step
['Hunter Lightman', 'Vineet Kosaraju', 'Yura Burda', 'Harri Edwards', 'Bowen Baker', 'Teddy Lee', 'Jan Leike', 'John Schulman', 'Ilya Sutskever', 'Karl Cobbe']
['cs.LG', 'cs.AI', 'cs.CL']
In recent years, large language models have greatly improved in their ability to perform complex multi-step reasoning. However, even state-of-the-art models still regularly produce logical mistakes. To train more reliable models, we can turn either to outcome supervision, which provides feedback for a final result, or ...
2023-05-31T17:24:00Z
null
null
null
Let's Verify Step by Step
['Hunter Lightman', 'Vineet Kosaraju', 'Yura Burda', 'Harrison Edwards', 'Bowen Baker', 'Teddy Lee', 'Jan Leike', 'John Schulman', 'I. Sutskever', 'K. Cobbe']
2,023
International Conference on Learning Representations
1,241
33
['Computer Science']
2,306.00103
ManagerTower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning
['Xiao Xu', 'Bei Li', 'Chenfei Wu', 'Shao-Yen Tseng', 'Anahita Bhiwandiwalla', 'Shachar Rosenman', 'Vasudev Lal', 'Wanxiang Che', 'Nan Duan']
['cs.CV', 'cs.CL', 'cs.LG']
Two-Tower Vision-Language (VL) models have shown promising improvements on various downstream VL tasks. Although the most advanced work improves performance by building bridges between encoders, it suffers from ineffective layer-by-layer utilization of uni-modal representations and cannot flexibly exploit different lev...
2023-05-31T18:23:57Z
Accepted by ACL 2023 Main Conference, Oral
null
null
null
null
null
null
null
null
null
2,306.00107
MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training
['Yizhi Li', 'Ruibin Yuan', 'Ge Zhang', 'Yinghao Ma', 'Xingran Chen', 'Hanzhi Yin', 'Chenghao Xiao', 'Chenghua Lin', 'Anton Ragni', 'Emmanouil Benetos', 'Norbert Gyenge', 'Roger Dannenberg', 'Ruibo Liu', 'Wenhu Chen', 'Gus Xia', 'Yemin Shi', 'Wenhao Huang', 'Zili Wang', 'Yike Guo', 'Jie Fu']
['cs.SD', 'cs.AI', 'cs.CL', 'cs.LG', 'eess.AS']
Self-supervised learning (SSL) has recently emerged as a promising paradigm for training generalisable models on large-scale data in the fields of vision, text, and speech. Although SSL has been proven effective in speech and audio, its application to music audio has yet to be thoroughly explored. This is partially due...
2023-05-31T18:27:43Z
accepted by ICLR 2024
null
null
MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training
['Yizhi Li', 'Ruibin Yuan', 'Ge Zhang', 'Yi Ma', 'Xingran Chen', 'Hanzhi Yin', 'Chen-Li Lin', 'A. Ragni', 'Emmanouil Benetos', 'N. Gyenge', 'R. Dannenberg', 'Ruibo Liu', 'Wenhu Chen', 'Gus G. Xia', 'Yemin Shi', 'Wen-Fen Huang', 'Yi-Ting Guo', 'Jie Fu']
2,023
International Conference on Learning Representations
130
64
['Computer Science', 'Engineering']
2,306.00110
MuseCoco: Generating Symbolic Music from Text
['Peiling Lu', 'Xin Xu', 'Chenfei Kang', 'Botao Yu', 'Chengyi Xing', 'Xu Tan', 'Jiang Bian']
['cs.SD', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.MM', 'eess.AS']
Generating music from text descriptions is a user-friendly mode since the text is a relatively easy interface for user engagement. While some approaches utilize texts to control music audio generation, editing musical elements in generated audio is challenging for users. In contrast, symbolic music offers ease of editi...
2023-05-31T18:34:16Z
null
null
null
null
null
null
null
null
null
null
2,306.00121
Multilingual Multi-Figurative Language Detection
['Huiyuan Lai', 'Antonio Toral', 'Malvina Nissim']
['cs.CL']
Figures of speech help people express abstract concepts and evoke stronger emotions than literal expressions, thereby making texts more creative and engaging. Due to its pervasive and fundamental character, figurative language understanding has been addressed in Natural Language Processing, but it's highly understudied...
2023-05-31T18:52:41Z
Accepted to ACL 2023 (Findings)
null
null
Multilingual Multi-Figurative Language Detection
['Huiyuan Lai', 'Antonio Toral', 'M. Nissim']
2,023
Annual Meeting of the Association for Computational Linguistics
1
50
['Computer Science']
2,306.00124
Pre-Trained Language-Meaning Models for Multilingual Parsing and Generation
['Chunliu Wang', 'Huiyuan Lai', 'Malvina Nissim', 'Johan Bos']
['cs.CL']
Pre-trained language models (PLMs) have achieved great success in NLP and have recently been used for tasks in computational semantics. However, these tasks do not fully benefit from PLMs since meaning representations are not explicitly included in the pre-training stage. We introduce multilingual pre-trained language-...
2023-05-31T19:00:33Z
Accepted by ACL2023 findings
null
null
Pre-Trained Language-Meaning Models for Multilingual Parsing and Generation
['Chunliu Wang', 'Huiyuan Lai', 'M. Nissim', 'Johan Bos']
2,023
Annual Meeting of the Association for Computational Linguistics
13
55
['Computer Science']
2,306.00437
Responsibility Perspective Transfer for Italian Femicide News
['Gosse Minnema', 'Huiyuan Lai', 'Benedetta Muscato', 'Malvina Nissim']
['cs.CL']
Different ways of linguistically expressing the same real-world event can lead to different perceptions of what happened. Previous work has shown that different descriptions of gender-based violence (GBV) influence the reader's perception of who is to blame for the violence, possibly reinforcing stereotypes which see t...
2023-06-01T08:27:00Z
Accepted for publication in Findings of ACL 2023
null
null
Responsibility Perspective Transfer for Italian Femicide News
['Gosse Minnema', 'Huiyuan Lai', 'Benedetta Muscato', 'M. Nissim']
2,023
Annual Meeting of the Association for Computational Linguistics
3
29
['Computer Science']
2,306.00637
Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models
['Pablo Pernias', 'Dominic Rampas', 'Mats L. Richter', 'Christopher J. Pal', 'Marc Aubreville']
['cs.CV']
We introduce W\"urstchen, a novel architecture for text-to-image synthesis that combines competitive performance with unprecedented cost-effectiveness for large-scale text-to-image diffusion models. A key contribution of our work is to develop a latent diffusion technique in which we learn a detailed but extremely comp...
2023-06-01T13:00:53Z
Corresponding to "W\"urstchen v2"
The Twelfth International Conference on Learning Representations (ICLR), 2024
null
null
null
null
null
null
null
null
2,306.00745
Column Type Annotation using ChatGPT
['Keti Korini', 'Christian Bizer']
['cs.CL']
Column type annotation is the task of annotating the columns of a relational table with the semantic type of the values contained in each column. Column type annotation is an important pre-processing step for data search and data integration in the context of data lakes. State-of-the-art column type annotation methods ...
2023-06-01T14:40:52Z
null
null
null
Column Type Annotation using ChatGPT
['Keti Korini', 'Christian Bizer']
2,023
VLDB Workshops
28
37
['Computer Science']
2,306.00814
Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis
['Hubert Siuzdak']
['cs.SD', 'cs.LG', 'eess.AS']
Recent advancements in neural vocoding are predominantly driven by Generative Adversarial Networks (GANs) operating in the time-domain. While effective, this approach neglects the inductive bias offered by time-frequency representations, resulting in redundant and computationally-intensive upsampling operations. Fourier-...
2023-06-01T15:40:32Z
null
null
null
null
null
null
null
null
null
null
2,306.00890
LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day
['Chunyuan Li', 'Cliff Wong', 'Sheng Zhang', 'Naoto Usuyama', 'Haotian Liu', 'Jianwei Yang', 'Tristan Naumann', 'Hoifung Poon', 'Jianfeng Gao']
['cs.CV', 'cs.CL']
Conversational generative AI has demonstrated remarkable promise for empowering biomedical practitioners, but current investigations focus on unimodal text. Multimodal conversational AI has seen rapid progress by leveraging billions of image-text pairs from the public web, but such general-domain vision-language models...
2023-06-01T16:50:07Z
17 pages; Website: https://aka.ms/llava-med
null
null
null
null
null
null
null
null
null
2,306.00917
Vocabulary-free Image Classification
['Alessandro Conti', 'Enrico Fini', 'Massimiliano Mancini', 'Paolo Rota', 'Yiming Wang', 'Elisa Ricci']
['cs.CV']
Recent advances in large vision-language models have revolutionized the image classification paradigm. Despite showing impressive zero-shot capabilities, a pre-defined set of categories, a.k.a. the vocabulary, is assumed at test time for composing the textual prompts. However, such assumption can be impractical when th...
2023-06-01T17:19:43Z
Accepted at NeurIPS2023, 19 pages, 8 figures, code is available at https://github.com/altndrr/vic
null
null
null
null
null
null
null
null
null
2,306.00926
Inserting Anybody in Diffusion Models via Celeb Basis
['Ge Yuan', 'Xiaodong Cun', 'Yong Zhang', 'Maomao Li', 'Chenyang Qi', 'Xintao Wang', 'Ying Shan', 'Huicheng Zheng']
['cs.CV']
Exquisite demand exists for customizing the pretrained large text-to-image model, $\textit{e.g.}$, Stable Diffusion, to generate innovative concepts, such as the users themselves. However, the newly-added concept from previous customization methods often shows weaker combination abilities than the original ones even gi...
2023-06-01T17:30:24Z
Project page: http://celeb-basis.github.io ; Github repository: https://github.com/ygtxr1997/CelebBasis
null
null
null
null
null
null
null
null
null
2,306.00958
LIV: Language-Image Representations and Rewards for Robotic Control
['Yecheng Jason Ma', 'William Liang', 'Vaidehi Som', 'Vikash Kumar', 'Amy Zhang', 'Osbert Bastani', 'Dinesh Jayaraman']
['cs.RO', 'cs.AI', 'cs.LG']
We present Language-Image Value learning (LIV), a unified objective for vision-language representation and reward learning from action-free videos with text annotations. Exploiting a novel connection between dual reinforcement learning and mutual information contrastive learning, the LIV objective trains a multi-modal ...
2023-06-01T17:52:23Z
Extended version of ICML 2023 camera-ready; Project website: https://penn-pal-lab.github.io/LIV/
null
null
LIV: Language-Image Representations and Rewards for Robotic Control
['Yecheng Jason Ma', 'William Liang', 'Vaidehi Som', 'Vikash Kumar', 'Amy Zhang', 'Osbert Bastani', 'Dinesh Jayaraman']
2,023
International Conference on Machine Learning
130
69
['Computer Science']
2,306.00973
Intelligent Grimm -- Open-ended Visual Storytelling via Latent Diffusion Models
['Chang Liu', 'Haoning Wu', 'Yujie Zhong', 'Xiaoyun Zhang', 'Yanfeng Wang', 'Weidi Xie']
['cs.CV']
Generative models have recently exhibited exceptional capabilities in text-to-image generation, but still struggle to generate image sequences coherently. In this work, we focus on a novel, yet challenging task of generating a coherent image sequence based on a given storyline, denoted as open-ended visual storytelling...
2023-06-01T17:58:50Z
Accepted by CVPR 2024. Project Page: https://haoningwu3639.github.io/StoryGen_Webpage/
null
null
Intelligent Grimm - Open-ended Visual Storytelling via Latent Diffusion Models
['Chang Liu', 'Haoning Wu', 'Yujie Zhong', 'Xiaoyu Zhang', 'Weidi Xie']
2,023
Computer Vision and Pattern Recognition
44
72
['Computer Science']
2,306.00978
AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
['Ji Lin', 'Jiaming Tang', 'Haotian Tang', 'Shang Yang', 'Wei-Ming Chen', 'Wei-Chen Wang', 'Guangxuan Xiao', 'Xingyu Dang', 'Chuang Gan', 'Song Han']
['cs.CL']
Large language models (LLMs) have transformed numerous AI applications. On-device LLM is becoming increasingly important: running LLMs locally on edge devices can reduce the cloud computing cost and protect users' privacy. However, the astronomical model size and the limited hardware resource pose significant deploymen...
2023-06-01T17:59:10Z
MLSys 2024 Best Paper Award. Code available at: https://github.com/mit-han-lab/llm-awq
null
null
null
null
null
null
null
null
null
2,306.00983
StyleDrop: Text-to-Image Generation in Any Style
['Kihyuk Sohn', 'Nataniel Ruiz', 'Kimin Lee', 'Daniel Castro Chin', 'Irina Blok', 'Huiwen Chang', 'Jarred Barber', 'Lu Jiang', 'Glenn Entis', 'Yuanzhen Li', 'Yuan Hao', 'Irfan Essa', 'Michael Rubinstein', 'Dilip Krishnan']
['cs.CV', 'cs.AI']
Pre-trained large text-to-image models synthesize impressive images with an appropriate use of text prompts. However, ambiguities inherent in natural language and out-of-distribution effects make it hard to synthesize image styles, that leverage a specific design pattern, texture or material. In this paper, we introduc...
2023-06-01T17:59:51Z
Preprint. Project page at https://styledrop.github.io
null
null
null
null
null
null
null
null
null
2,306.00989
Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles
['Chaitanya Ryali', 'Yuan-Ting Hu', 'Daniel Bolya', 'Chen Wei', 'Haoqi Fan', 'Po-Yao Huang', 'Vaibhav Aggarwal', 'Arkabandhu Chowdhury', 'Omid Poursaeed', 'Judy Hoffman', 'Jitendra Malik', 'Yanghao Li', 'Christoph Feichtenhofer']
['cs.CV', 'cs.LG']
Modern hierarchical vision transformers have added several vision-specific components in the pursuit of supervised classification performance. While these components lead to effective accuracies and attractive FLOP counts, the added complexity actually makes these transformers slower than their vanilla ViT counterparts...
2023-06-01T17:59:58Z
ICML 2023 Oral version. Code+Models: https://github.com/facebookresearch/hiera
null
null
null
null
null
null
null
null
null
2,306.01116
The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only
['Guilherme Penedo', 'Quentin Malartic', 'Daniel Hesslow', 'Ruxandra Cojocaru', 'Alessandro Cappelli', 'Hamza Alobeidli', 'Baptiste Pannier', 'Ebtesam Almazrouei', 'Julien Launay']
['cs.CL', 'cs.AI']
Large language models are commonly trained on a mixture of filtered web data and curated high-quality corpora, such as social media conversations, books, or technical papers. This curation process is believed to be necessary to produce performant models with broad zero-shot generalization abilities. However, as larger ...
2023-06-01T20:03:56Z
null
null
null
null
null
null
null
null
null
null
2,306.01533
Enhance Temporal Relations in Audio Captioning with Sound Event Detection
['Zeyu Xie', 'Xuenan Xu', 'Mengyue Wu', 'Kai Yu']
['cs.SD', 'eess.AS']
Automated audio captioning aims at generating natural language descriptions for given audio clips, not only detecting and classifying sounds, but also summarizing the relationships between audio events. Recent research advances in audio captioning have introduced additional guidance to improve the accuracy of audio eve...
2023-06-02T13:36:34Z
Interspeech 2023
null
10.21437/Interspeech.2023-1614
null
null
null
null
null
null
null
2,306.01545
PassGPT: Password Modeling and (Guided) Generation with Large Language Models
['Javier Rando', 'Fernando Perez-Cruz', 'Briland Hitaj']
['cs.CL', 'cs.AI', 'cs.CR']
Large language models (LLMs) successfully model natural language from vast amounts of text without the need for explicit supervision. In this paper, we investigate the efficacy of LLMs in modeling passwords. We present PassGPT, a LLM trained on password leaks for password generation. PassGPT outperforms existing method...
2023-06-02T13:49:53Z
null
null
null
PassGPT: Password Modeling and (Guided) Generation with Large Language Models
['Javier Rando', 'F. Pérez-Cruz', 'B. Hitaj']
2,023
European Symposium on Research in Computer Security
10
53
['Computer Science']
2,306.01567
Segment Anything in High Quality
['Lei Ke', 'Mingqiao Ye', 'Martin Danelljan', 'Yifan Liu', 'Yu-Wing Tai', 'Chi-Keung Tang', 'Fisher Yu']
['cs.CV']
The recent Segment Anything Model (SAM) represents a big leap in scaling up segmentation models, allowing for powerful zero-shot capabilities and flexible prompting. Despite being trained with 1.1 billion masks, SAM's mask prediction quality falls short in many cases, particularly when dealing with objects that have in...
2023-06-02T14:23:59Z
NeurIPS 2023. We propose HQ-SAM to upgrade SAM for high-quality zero-shot segmentation. Github: https://github.com/SysCV/SAM-HQ
null
null
Segment Anything in High Quality
['Lei Ke', 'Mingqiao Ye', 'Martin Danelljan', 'Yifan Liu', 'Yu-Wing Tai', 'Chi-Keung Tang', 'F. Yu']
2,023
Neural Information Processing Systems
341
59
['Computer Science']
2,306.01708
TIES-Merging: Resolving Interference When Merging Models
['Prateek Yadav', 'Derek Tam', 'Leshem Choshen', 'Colin Raffel', 'Mohit Bansal']
['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV']
Transfer learning - i.e., further fine-tuning a pre-trained model on a downstream task - can confer significant advantages, including improved downstream performance, faster convergence, and better sample efficiency. These advantages have led to a proliferation of task-specific fine-tuned models, which typically can on...
2023-06-02T17:31:32Z
Published at NeurIPS 2023, 23 Pages, 13 Figures, 14 Tables
null
null
TIES-Merging: Resolving Interference When Merging Models
['Prateek Yadav', 'Derek Tam', 'Leshem Choshen', 'Colin Raffel', 'Mohit Bansal']
2,023
Neural Information Processing Systems
319
90
['Computer Science']
2,306.02018
VideoComposer: Compositional Video Synthesis with Motion Controllability
['Xiang Wang', 'Hangjie Yuan', 'Shiwei Zhang', 'Dayou Chen', 'Jiuniu Wang', 'Yingya Zhang', 'Yujun Shen', 'Deli Zhao', 'Jingren Zhou']
['cs.CV']
The pursuit of controllability as a higher standard of visual content creation has yielded remarkable progress in customizable image synthesis. However, achieving controllable video synthesis remains challenging due to the large variation of temporal dynamics and the requirement of cross-frame temporal consistency. Bas...
2023-06-03T06:29:02Z
The first four authors contributed equally. Project page: https://videocomposer.github.io
null
null
null
null
null
null
null
null
null
2,306.02069
MultiLegalPile: A 689GB Multilingual Legal Corpus
['Joel Niklaus', 'Veton Matoshi', 'Matthias Stürmer', 'Ilias Chalkidis', 'Daniel E. Ho']
['cs.CL', 'cs.AI', 'cs.LG', '68T50', 'I.2']
Large, high-quality datasets are crucial for training Large Language Models (LLMs). However, so far, there are few datasets available for specialized critical domains such as law and the available ones are often only for the English language. We curate and release MultiLegalPile, a 689GB corpus in 24 languages from 17 ...
2023-06-03T10:10:38Z
Accepted to ACL 2024
null
null
MultiLegalPile: A 689GB Multilingual Legal Corpus
['Joel Niklaus', 'Veton Matoshi', 'Matthias Sturmer', 'Ilias Chalkidis', 'Daniel E. Ho']
2,023
Annual Meeting of the Association for Computational Linguistics
44
79
['Computer Science']
2,306.02231
Fine-Tuning Language Models with Advantage-Induced Policy Alignment
['Banghua Zhu', 'Hiteshi Sharma', 'Felipe Vieira Frujeri', 'Shi Dong', 'Chenguang Zhu', 'Michael I. Jordan', 'Jiantao Jiao']
['cs.CL', 'cs.AI', 'cs.LG', 'cs.SY', 'eess.SY']
Reinforcement learning from human feedback (RLHF) has emerged as a reliable approach to aligning large language models (LLMs) to human preferences. Among the plethora of RLHF techniques, proximal policy optimization (PPO) is of the most widely used methods. Despite its popularity, however, PPO may suffer from mode coll...
2023-06-04T01:59:40Z
null
null
null
Fine-Tuning Language Models with Advantage-Induced Policy Alignment
['Banghua Zhu', 'Hiteshi Sharma', 'F. Frujeri', 'Shi Dong', 'Chenguang Zhu', 'Michael I. Jordan', 'Jiantao Jiao']
2,023
arXiv.org
41
37
['Computer Science', 'Engineering']
2,306.02254
A Technical Report for Polyglot-Ko: Open-Source Large-Scale Korean Language Models
['Hyunwoong Ko', 'Kichang Yang', 'Minho Ryu', 'Taekyoon Choi', 'Seungmu Yang', 'Jiwung Hyun', 'Sungho Park', 'Kyubyong Park']
['cs.CL']
Polyglot is a pioneering project aimed at enhancing the non-English language performance of multilingual language models. Despite the availability of various multilingual models such as mBERT (Devlin et al., 2019), XGLM (Lin et al., 2022), and BLOOM (Scao et al., 2022), researchers and developers often resort to buildi...
2023-06-04T04:04:04Z
null
null
null
A Technical Report for Polyglot-Ko: Open-Source Large-Scale Korean Language Models
['H. Ko', 'Kichang Yang', 'Minho Ryu', 'Taekyoon Choi', 'Seungmu Yang', 'Jiwung Hyun', 'Sung-Yong Park', 'Kyubyong Park']
2,023
arXiv.org
30
18
['Computer Science']
2,306.02317
SpellMapper: A non-autoregressive neural spellchecker for ASR customization with candidate retrieval based on n-gram mappings
['Alexandra Antonova', 'Evelina Bakhturina', 'Boris Ginsburg']
['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS']
Contextual spelling correction models are an alternative to shallow fusion to improve automatic speech recognition (ASR) quality given user vocabulary. To deal with large user vocabularies, most of these models include candidate retrieval mechanisms, usually based on minimum edit distance between fragments of ASR hypot...
2023-06-04T10:00:12Z
Accepted by INTERSPEECH 2023
null
null
SpellMapper: A non-autoregressive neural spellchecker for ASR customization with candidate retrieval based on n-gram mappings
['Alexandra Antonova', 'E. Bakhturina', 'Boris Ginsburg']
2,023
Interspeech
6
25
['Computer Science', 'Engineering']
2,306.02507
Deep learning powered real-time identification of insects using citizen science data
['Shivani Chiranjeevi', 'Mojdeh Sadaati', 'Zi K Deng', 'Jayanth Koushik', 'Talukder Z Jubery', 'Daren Mueller', 'Matthew E O Neal', 'Nirav Merchant', 'Aarti Singh', 'Asheesh K Singh', 'Soumik Sarkar', 'Arti Singh', 'Baskar Ganapathysubramanian']
['cs.CV']
Insect-pests significantly impact global agricultural productivity and quality. Effective management involves identifying the full insect community, including beneficial insects and harmful pests, to develop and implement integrated pest management strategies. Automated identification of insects under real-world condit...
2023-06-04T23:56:53Z
null
null
null
Deep learning powered real-time identification of insects using citizen science data
['Shivani Chiranjeevi', 'Mojdeh Sadaati', 'Ziqing Deng', 'Jayanth Koushik', 'T. Jubery', 'D. Mueller', 'Matthew E O Neal', 'Nirav C. Merchant', 'Aarti Singh', 'Ashutosh Kumar Singh', 'S. Sarkar', 'Arti Singh', 'B. Ganapathysubramanian']
2,023
arXiv.org
14
42
['Computer Science']
2,306.02561
LLM-Blender: Ensembling Large Language Models with Pairwise Ranking and Generative Fusion
['Dongfu Jiang', 'Xiang Ren', 'Bill Yuchen Lin']
['cs.CL', 'cs.AI', 'cs.LG']
We present LLM-Blender, an ensembling framework designed to attain consistently superior performance by leveraging the diverse strengths of multiple open-source large language models (LLMs). Our framework consists of two modules: PairRanker and GenFuser, addressing the observation that optimal LLMs for different exampl...
2023-06-05T03:32:26Z
Accepted to ACL 2023 (main conference); Project website: https://yuchenlin.xyz/LLM-Blender/ V3 update: fix a few typos and update a few citations; V2 update: The experiments on summarization, translation, and constrained generation tasks in the prior version have been moved to the appendix
null
null
LLM-Blender: Ensembling Large Language Models with Pairwise Ranking and Generative Fusion
['Dongfu Jiang', 'Xiang Ren', 'Bill Yuchen Lin']
2,023
Annual Meeting of the Association for Computational Linguistics
334
53
['Computer Science']
2,306.02707
Orca: Progressive Learning from Complex Explanation Traces of GPT-4
['Subhabrata Mukherjee', 'Arindam Mitra', 'Ganesh Jawahar', 'Sahaj Agarwal', 'Hamid Palangi', 'Ahmed Awadallah']
['cs.CL', 'cs.LG']
Recent research has focused on enhancing the capability of smaller models through imitation learning, drawing on the outputs generated by large foundation models (LFMs). A number of issues impact the quality of these models, ranging from limited imitation signals from shallow LFM outputs; small scale homogeneous traini...
2023-06-05T08:58:39Z
null
null
null
null
null
null
null
null
null
null
2,306.02771
Identifying the style by a qualified reader on a short fragment of generated poetry
['Boris Orekhov']
['cs.CL', 'cs.AI', 'cs.LG']
Style is an important concept in today's challenges in natural language generating. After the success in the field of image style transfer, the task of text style transfer became actual and attractive. Researchers are also interested in the tasks of style reproducing in generation of the poetic text. Evaluation of styl...
2023-06-05T10:55:15Z
6 pages, 2 tables
null
null
null
null
null
null
null
null
null
2,306.02796
MCTS: A Multi-Reference Chinese Text Simplification Dataset
['Ruining Chong', 'Luming Lu', 'Liner Yang', 'Jinran Nie', 'Zhenghao Liu', 'Shuo Wang', 'Shuhan Zhou', 'Yaoxin Li', 'Erhong Yang']
['cs.CL']
Text simplification aims to make the text easier to understand by applying rewriting transformations. There has been very little research on Chinese text simplification for a long time. The lack of generic evaluation data is an essential reason for this phenomenon. In this paper, we introduce MCTS, a multi-reference Ch...
2023-06-05T11:46:36Z
Accepted to COLING 2024
null
null
MCTS: A Multi-Reference Chinese Text Simplification Dataset
['Ruining Chong', 'Luming Lu', 'Liner Yang', 'Jinran Nie', 'Shuhan Zhou', 'Yaoxin Li', 'Erhong Yang']
2,023
International Conference on Language Resources and Evaluation
1
42
['Computer Science']
2,306.02858
Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
['Hang Zhang', 'Xin Li', 'Lidong Bing']
['cs.CL', 'cs.CV', 'cs.SD', 'eess.AS']
We present Video-LLaMA a multi-modal framework that empowers Large Language Models (LLMs) with the capability of understanding both visual and auditory content in the video. Video-LLaMA bootstraps cross-modal training from the frozen pre-trained visual and audio encoders and the frozen LLMs. Unlike previous works that ...
2023-06-05T13:17:27Z
Accepted by EMNLP 2023's demo track; Code, Pretrained Model, and Dataset: https://github.com/DAMO-NLP-SG/Video-LLaMA
null
null
Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
['Hang Zhang', 'Xin Li', 'Lidong Bing']
2,023
Conference on Empirical Methods in Natural Language Processing
1,068
42
['Computer Science', 'Engineering']