Dataset schema, as reported by the dataset viewer (column, dtype, length/value statistics):

arxiv_id: float64, values 1.5k to 2.51k
title: string, lengths 9 to 178
authors: string, lengths 2 to 22.8k
categories: string, lengths 4 to 146
summary: string, lengths 103 to 1.92k
published: date string, 2015-02-06 10:44:00 to 2025-07-10 17:59:58
comments: string, lengths 2 to 417
journal_ref: string, 321 distinct values
doi: string, 398 distinct values
ss_title: string, lengths 8 to 159
ss_authors: string, lengths 11 to 8.38k
ss_year: float64, values 2.02k to 2.03k
ss_venue: string, 281 distinct values
ss_citationCount: float64, values 0 to 134k
ss_referenceCount: float64, values 0 to 429
ss_fieldsOfStudy: string, 47 distinct values

Note: arxiv_id and ss_year are stored as float64, so the viewer renders them with thousands separators and dropped trailing zeros (e.g. 2,003.0195 for the ID 2003.01950, and 2,020 for the year 2020).
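Because the arxiv_id column is typed float64, identifiers lose their zero padding (2003.0195 is really 2003.01950). A minimal sketch of restoring the canonical YYMM.NNNNN form, assuming every ID in this dataset uses the post-2014 five-digit sequence number:

```python
def restore_arxiv_id(value: float) -> str:
    """Rebuild a canonical arXiv identifier (YYMM.NNNNN) from a float64 cell.

    Storing IDs as floats drops trailing zeros of the sequence number;
    formatting with exactly five decimal places restores them.
    """
    return f"{value:.5f}"
```

For example, `restore_arxiv_id(2003.0195)` yields `"2003.01950"`, while IDs that lost nothing, such as 2002.09836, round-trip unchanged.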
arxiv_id: 2002.09836
title: Fill in the BLANC: Human-free quality estimation of document summaries
authors: ['Oleg Vasilyev', 'Vedant Dharnidharka', 'John Bohannon']
categories: ['cs.CL']
summary:
We present BLANC, a new approach to the automatic estimation of document summary quality. Our goal is to measure the functional performance of a summary with an objective, reproducible, and fully automated method. Our approach achieves this by measuring the performance boost gained by a pre-trained language model with access to a document summary while carrying out its language understanding task on the document's text. We present evidence that BLANC scores have as good correlation with human evaluations as do the ROUGE family of summary quality measurements. And unlike ROUGE, the BLANC method does not require human-written reference summaries, allowing for fully human-free summary quality estimation.
published: 2020-02-23T06:21:43Z
comments: 10 pages, 9 figures, 3 tables. In: Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems (Eval4NLP, Nov. 2020) p.11-20, ACL
journal_ref: Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems (Nov. 2020) 11-20
doi: null
ss_title: Fill in the BLANC: Human-free quality estimation of document summaries
ss_authors: ['Oleg V. Vasilyev', 'Vedant Dharnidharka', 'John Bohannon']
ss_year: 2020
ss_venue: EVAL4NLP
ss_citationCount: 119
ss_referenceCount: 32
ss_fieldsOfStudy: ['Computer Science']

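The BLANC idea in the abstract reduces to a difference in masked-token reconstruction accuracy: how much better a pretrained LM fills in masked document tokens when it also sees the summary. A toy sketch (the paper's exact bookkeeping of helped/hurt reconstructions may differ; the hit counts here are hypothetical outputs of a masked-LM run):

```python
def blanc(hits_with_summary: int, hits_with_filler: int, total_masked: int) -> float:
    """Toy BLANC-style score: the accuracy boost a masked LM gets on the
    document's text when the summary is prepended, versus a neutral filler
    of the same length. Positive means the summary carries useful content.
    """
    if total_masked <= 0:
        raise ValueError("need at least one masked token")
    return (hits_with_summary - hits_with_filler) / total_masked
```

A useless summary scores near zero; a summary that helps the model recover 15 more of 100 masked tokens scores 0.15.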
arxiv_id: 2002.10857
title: Circle Loss: A Unified Perspective of Pair Similarity Optimization
authors: ['Yifan Sun', 'Changmao Cheng', 'Yuhan Zhang', 'Chi Zhang', 'Liang Zheng', 'Zhongdao Wang', 'Yichen Wei']
categories: ['cs.CV']
summary:
This paper provides a pair similarity optimization viewpoint on deep feature learning, aiming to maximize the within-class similarity $s_p$ and minimize the between-class similarity $s_n$. We find a majority of loss functions, including the triplet loss and the softmax plus cross-entropy loss, embed $s_n$ and $s_p$ into similarity pairs and seek to reduce $(s_n-s_p)$. Such an optimization manner is inflexible, because the penalty strength on every single similarity score is restricted to be equal. Our intuition is that if a similarity score deviates far from the optimum, it should be emphasized. To this end, we simply re-weight each similarity to highlight the less-optimized similarity scores. It results in a Circle loss, which is named due to its circular decision boundary. The Circle loss has a unified formula for two elemental deep feature learning approaches, i.e. learning with class-level labels and pair-wise labels. Analytically, we show that the Circle loss offers a more flexible optimization approach towards a more definite convergence target, compared with the loss functions optimizing $(s_n-s_p)$. Experimentally, we demonstrate the superiority of the Circle loss on a variety of deep feature learning tasks. On face recognition, person re-identification, as well as several fine-grained image retrieval datasets, the achieved performance is on par with the state of the art.
published: 2020-02-25T13:56:40Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

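The re-weighting the Circle loss abstract describes can be written down directly: each similarity score is weighted by how far it sits from its optimum, so less-optimized scores dominate the loss. A pure-Python sketch of the unified formula for one anchor, with illustrative margin m and scale gamma (the paper tunes these per task):

```python
import math

def circle_loss(sp, sn, m=0.25, gamma=64.0):
    """Unified Circle loss for one anchor.

    sp: within-class similarity scores, sn: between-class scores (both ~[0, 1]).
    Scores far from their optimum get larger weights, giving the circular
    decision boundary the loss is named after.
    """
    op, on = 1.0 + m, -m        # optima for s_p and s_n
    dp, dn = 1.0 - m, m         # decision margins Delta_p, Delta_n
    ap = [max(0.0, op - s) for s in sp]   # weight grows as s_p falls short of op
    an = [max(0.0, s - on) for s in sn]   # weight grows as s_n exceeds on
    pos = sum(math.exp(-gamma * a * (s - dp)) for a, s in zip(ap, sp))
    neg = sum(math.exp(gamma * a * (s - dn)) for a, s in zip(an, sn))
    return math.log1p(pos * neg)
```

A well-separated pair (high s_p, low s_n) yields a near-zero loss, while an ambiguous pair is penalized heavily.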
arxiv_id: 2002.10957
title: MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers
authors: ['Wenhui Wang', 'Furu Wei', 'Li Dong', 'Hangbo Bao', 'Nan Yang', 'Ming Zhou']
categories: ['cs.CL']
summary:
Pre-trained language models (e.g., BERT (Devlin et al., 2018) and its variants) have achieved remarkable success in varieties of NLP tasks. However, these models usually consist of hundreds of millions of parameters which brings challenges for fine-tuning and online serving in real-life applications due to latency and capacity constraints. In this work, we present a simple and effective approach to compress large Transformer (Vaswani et al., 2017) based pre-trained models, termed as deep self-attention distillation. The small model (student) is trained by deeply mimicking the self-attention module, which plays a vital role in Transformer networks, of the large model (teacher). Specifically, we propose distilling the self-attention module of the last Transformer layer of the teacher, which is effective and flexible for the student. Furthermore, we introduce the scaled dot-product between values in the self-attention module as the new deep self-attention knowledge, in addition to the attention distributions (i.e., the scaled dot-product of queries and keys) that have been used in existing works. Moreover, we show that introducing a teacher assistant (Mirzadeh et al., 2019) also helps the distillation of large pre-trained Transformer models. Experimental results demonstrate that our monolingual model outperforms state-of-the-art baselines in different parameter size of student models. In particular, it retains more than 99% accuracy on SQuAD 2.0 and several GLUE benchmark tasks using 50% of the Transformer parameters and computations of the teacher model. We also obtain competitive results in applying deep self-attention distillation to multilingual pre-trained models.
published: 2020-02-25T15:21:10Z
comments: Code and models: https://github.com/microsoft/unilm/tree/master/minilm
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2002.12328
title: Few-shot Natural Language Generation for Task-Oriented Dialog
authors: ['Baolin Peng', 'Chenguang Zhu', 'Chunyuan Li', 'Xiujun Li', 'Jinchao Li', 'Michael Zeng', 'Jianfeng Gao']
categories: ['cs.CL']
summary:
As a crucial component in task-oriented dialog systems, the Natural Language Generation (NLG) module converts a dialog act represented in a semantic form into a response in natural language. The success of traditional template-based or statistical models typically relies on heavily annotated data, which is infeasible for new domains. Therefore, it is pivotal for an NLG system to generalize well with limited labelled data in real applications. To this end, we present FewShotWoz, the first NLG benchmark to simulate the few-shot learning setting in task-oriented dialog systems. Further, we develop the SC-GPT model. It is pre-trained on a large set of annotated NLG corpus to acquire the controllable generation ability, and fine-tuned with only a few domain-specific labels to adapt to new domains. Experiments on FewShotWoz and the large Multi-Domain-WOZ datasets show that the proposed SC-GPT significantly outperforms existing methods, measured by various automatic metrics and human evaluations.
published: 2020-02-27T18:48:33Z
comments: Project website: https://aka.ms/scgpt ; Code and data: https://github.com/pengbaolin/SC-GPT
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2003.00104
title: AraBERT: Transformer-based Model for Arabic Language Understanding
authors: ['Wissam Antoun', 'Fady Baly', 'Hazem Hajj']
categories: ['cs.CL']
summary:
The Arabic language is a morphologically rich language with relatively few resources and a less explored syntax compared to English. Given these limitations, Arabic Natural Language Processing (NLP) tasks like Sentiment Analysis (SA), Named Entity Recognition (NER), and Question Answering (QA), have proven to be very challenging to tackle. Recently, with the surge of transformers based models, language-specific BERT based models have proven to be very efficient at language understanding, provided they are pre-trained on a very large corpus. Such models were able to set new standards and achieve state-of-the-art results for most NLP tasks. In this paper, we pre-trained BERT specifically for the Arabic language in the pursuit of achieving the same success that BERT did for the English language. The performance of AraBERT is compared to multilingual BERT from Google and other state-of-the-art approaches. The results showed that the newly developed AraBERT achieved state-of-the-art performance on most tested Arabic NLP tasks. The pretrained araBERT models are publicly available on https://github.com/aub-mind/arabert hoping to encourage research and applications for Arabic NLP.
published: 2020-02-28T22:59:24Z
comments: Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020), Marseille, France (2020)
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2003.00196
title: First Order Motion Model for Image Animation
authors: ['Aliaksandr Siarohin', 'Stéphane Lathuilière', 'Sergey Tulyakov', 'Elisa Ricci', 'Nicu Sebe']
categories: ['cs.CV', 'cs.AI']
summary:
Image animation consists of generating a video sequence so that an object in a source image is animated according to the motion of a driving video. Our framework addresses this problem without using any annotation or prior information about the specific object to animate. Once trained on a set of videos depicting objects of the same category (e.g. faces, human bodies), our method can be applied to any object of this class. To achieve this, we decouple appearance and motion information using a self-supervised formulation. To support complex motions, we use a representation consisting of a set of learned keypoints along with their local affine transformations. A generator network models occlusions arising during target motions and combines the appearance extracted from the source image and the motion derived from the driving video. Our framework scores best on diverse benchmarks and on a variety of object categories. Our source code is publicly available.
published: 2020-02-29T07:08:56Z
comments: NeurIPS 2019
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2003.00653
title: Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies
authors: ['Wei Jin', 'Yaxin Li', 'Han Xu', 'Yiqi Wang', 'Shuiwang Ji', 'Charu Aggarwal', 'Jiliang Tang']
categories: ['cs.LG', 'cs.CR', 'stat.ML']
summary:
Deep neural networks (DNNs) have achieved significant performance in various tasks. However, recent studies have shown that DNNs can be easily fooled by small perturbation on the input, called adversarial attacks. As the extensions of DNNs to graphs, Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability. Adversary can mislead GNNs to give wrong predictions by modifying the graph structure such as manipulating a few edges. This vulnerability has arisen tremendous concerns for adapting GNNs in safety-critical applications and has attracted increasing research attention in recent years. Thus, it is necessary and timely to provide a comprehensive overview of existing graph adversarial attacks and the countermeasures. In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods. Furthermore, we have developed a repository with representative algorithms (https://github.com/DSE-MSU/DeepRobust/tree/master/deeprobust/graph). The repository enables us to conduct empirical studies to deepen our understandings on attacks and defenses on graphs.
published: 2020-03-02T04:32:38Z
comments: Accepted by SIGKDD Explorations
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2003.00744
title: PhoBERT: Pre-trained language models for Vietnamese
authors: ['Dat Quoc Nguyen', 'Anh Tuan Nguyen']
categories: ['cs.CL', 'cs.AI']
summary:
We present PhoBERT with two versions, PhoBERT-base and PhoBERT-large, the first public large-scale monolingual language models pre-trained for Vietnamese. Experimental results show that PhoBERT consistently outperforms the recent best pre-trained multilingual model XLM-R (Conneau et al., 2020) and improves the state-of-the-art in multiple Vietnamese-specific NLP tasks including Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference. We release PhoBERT to facilitate future research and downstream applications for Vietnamese NLP. Our PhoBERT models are available at https://github.com/VinAIResearch/PhoBERT
published: 2020-03-02T10:21:17Z
comments: EMNLP 2020 (Findings)
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2003.01309
title: Controllable Time-Delay Transformer for Real-Time Punctuation Prediction and Disfluency Detection
authors: ['Qian Chen', 'Mengzhe Chen', 'Bo Li', 'Wen Wang']
categories: ['cs.CL', 'cs.SD', 'eess.AS']
summary:
With the increased applications of automatic speech recognition (ASR) in recent years, it is essential to automatically insert punctuation marks and remove disfluencies in transcripts, to improve the readability of the transcripts as well as the performance of subsequent applications, such as machine translation, dialogue systems, and so forth. In this paper, we propose a Controllable Time-delay Transformer (CT-Transformer) model that jointly completes the punctuation prediction and disfluency detection tasks in real time. The CT-Transformer model facilitates freezing partial outputs with controllable time delay to fulfill the real-time constraints in partial decoding required by subsequent applications. We further propose a fast decoding strategy to minimize latency while maintaining competitive performance. Experimental results on the IWSLT2011 benchmark dataset and an in-house Chinese annotated dataset demonstrate that the proposed approach outperforms the previous state-of-the-art models on F-scores and achieves a competitive inference speed.
published: 2020-03-03T03:17:29Z
comments: 4 pages, 2 figures, accepted by ICASSP 2020
journal_ref: null
doi: null
ss_title: Controllable Time-Delay Transformer for Real-Time Punctuation Prediction and Disfluency Detection
ss_authors: ['Qian Chen', 'Mengzhe Chen', 'Bo Li', 'Wen Wang']
ss_year: 2020
ss_venue: IEEE International Conference on Acoustics, Speech, and Signal Processing
ss_citationCount: 36
ss_referenceCount: 35
ss_fieldsOfStudy: ['Computer Science', 'Engineering']

arxiv_id: 2003.01355
title: CLUECorpus2020: A Large-scale Chinese Corpus for Pre-training Language Model
authors: ['Liang Xu', 'Xuanwei Zhang', 'Qianqian Dong']
categories: ['cs.CL']
summary:
In this paper, we introduce the Chinese corpus from CLUE organization, CLUECorpus2020, a large-scale corpus that can be used directly for self-supervised learning such as pre-training of a language model, or language generation. It has 100G raw corpus with 35 billion Chinese characters, which is retrieved from Common Crawl. To better understand this corpus, we conduct language understanding experiments on both small and large scale, and results show that the models trained on this corpus can achieve excellent performance on Chinese. We release a new Chinese vocabulary with a size of 8K, which is only one-third of the vocabulary size used in Chinese Bert released by Google. It saves computational cost and memory while works as good as original vocabulary. We also release both large and tiny versions of the pre-trained model on this corpus. The former achieves the state-of-the-art result, and the latter retains most precision while accelerating training and prediction speed for eight times compared to Bert-base. To facilitate future work on self-supervised learning on Chinese, we release our dataset, new vocabulary, codes, and pre-trained models on Github.
published: 2020-03-03T06:39:27Z
comments: 8 pages, 9 tables
journal_ref: null
doi: null
ss_title: CLUECorpus2020: A Large-scale Chinese Corpus for Pre-training Language Model
ss_authors: ['Liang Xu', 'Xuanwei Zhang', 'Qianqian Dong']
ss_year: 2020
ss_venue: arXiv.org
ss_citationCount: 71
ss_referenceCount: 14
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2003.01950
title: AlignTTS: Efficient Feed-Forward Text-to-Speech System without Explicit Alignment
authors: ['Zhen Zeng', 'Jianzong Wang', 'Ning Cheng', 'Tian Xia', 'Jing Xiao']
categories: ['eess.AS', 'cs.CL', 'cs.SD']
summary:
Targeting at both high efficiency and performance, we propose AlignTTS to predict the mel-spectrum in parallel. AlignTTS is based on a Feed-Forward Transformer which generates mel-spectrum from a sequence of characters, and the duration of each character is determined by a duration predictor. Instead of adopting the attention mechanism in Transformer TTS to align text to mel-spectrum, the alignment loss is presented to consider all possible alignments in training by use of dynamic programming. Experiments on the LJSpeech dataset show that our model achieves not only state-of-the-art performance which outperforms Transformer TTS by 0.03 in mean opinion score (MOS), but also a high efficiency which is more than 50 times faster than real-time.
published: 2020-03-04T08:44:32Z
comments: will be presented in ICASSP 2020
journal_ref: null
doi: null
ss_title: Aligntts: Efficient Feed-Forward Text-to-Speech System Without Explicit Alignment
ss_authors: ['Zhen Zeng', 'Jianzong Wang', 'Ning Cheng', 'Tian Xia', 'Jing Xiao']
ss_year: 2020
ss_venue: IEEE International Conference on Acoustics, Speech, and Signal Processing
ss_citationCount: 56
ss_referenceCount: 21
ss_fieldsOfStudy: ['Computer Science', 'Engineering']

arxiv_id: 2003.02838
title: Accelerator-aware Neural Network Design using AutoML
authors: ['Suyog Gupta', 'Berkin Akin']
categories: ['eess.SP', 'cs.LG', 'stat.ML']
summary:
While neural network hardware accelerators provide a substantial amount of raw compute throughput, the models deployed on them must be co-designed for the underlying hardware architecture to obtain the optimal system performance. We present a class of computer vision models designed using hardware-aware neural architecture search and customized to run on the Edge TPU, Google's neural network hardware accelerator for low-power, edge devices. For the Edge TPU in Coral devices, these models enable real-time image classification performance while achieving accuracy typically seen only with larger, compute-heavy models running in data centers. On Pixel 4's Edge TPU, these models improve the accuracy-latency tradeoff over existing SoTA mobile models.
published: 2020-03-05T21:34:22Z
comments: Accepted paper at the On-device Intelligence Workshop at MLSys Conference 2020
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2003.05002
title: TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages
authors: ['Jonathan H. Clark', 'Eunsol Choi', 'Michael Collins', 'Dan Garrette', 'Tom Kwiatkowski', 'Vitaly Nikolaev', 'Jennimaria Palomaki']
categories: ['cs.CL', 'cs.LG']
summary:
Confidently making progress on multilingual modeling requires challenging, trustworthy evaluations. We present TyDi QA---a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology---the set of linguistic features each language expresses---such that we expect models performing well on this set to generalize across a large number of the world's languages. We present a quantitative analysis of the data quality and example-level qualitative linguistic analyses of observed language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but don't know the answer yet, and the data is collected directly in each language without the use of translation.
published: 2020-03-10T21:11:53Z
comments: To appear in Transactions of the Association for Computational Linguistics (TACL) 2020. Please use this as the citation
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2003.06505
title: AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data
authors: ['Nick Erickson', 'Jonas Mueller', 'Alexander Shirkov', 'Hang Zhang', 'Pedro Larroy', 'Mu Li', 'Alexander Smola']
categories: ['stat.ML', 'cs.LG']
summary:
We introduce AutoGluon-Tabular, an open-source AutoML framework that requires only a single line of Python to train highly accurate machine learning models on an unprocessed tabular dataset such as a CSV file. Unlike existing AutoML frameworks that primarily focus on model/hyperparameter selection, AutoGluon-Tabular succeeds by ensembling multiple models and stacking them in multiple layers. Experiments reveal that our multi-layer combination of many models offers better use of allocated training time than seeking out the best. A second contribution is an extensive evaluation of public and commercial AutoML platforms including TPOT, H2O, AutoWEKA, auto-sklearn, AutoGluon, and Google AutoML Tables. Tests on a suite of 50 classification and regression tasks from Kaggle and the OpenML AutoML Benchmark reveal that AutoGluon is faster, more robust, and much more accurate. We find that AutoGluon often even outperforms the best-in-hindsight combination of all of its competitors. In two popular Kaggle competitions, AutoGluon beat 99% of the participating data scientists after merely 4h of training on the raw data.
published: 2020-03-13T23:10:39Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2003.06713
title: Document Ranking with a Pretrained Sequence-to-Sequence Model
authors: ['Rodrigo Nogueira', 'Zhiying Jiang', 'Jimmy Lin']
categories: ['cs.IR', 'cs.LG']
summary:
This work proposes a novel adaptation of a pretrained sequence-to-sequence model to the task of document ranking. Our approach is fundamentally different from a commonly-adopted classification-based formulation of ranking, based on encoder-only pretrained transformer architectures such as BERT. We show how a sequence-to-sequence model can be trained to generate relevance labels as "target words", and how the underlying logits of these target words can be interpreted as relevance probabilities for ranking. On the popular MS MARCO passage ranking task, experimental results show that our approach is at least on par with previous classification-based models and can surpass them with larger, more-recent models. On the test collection from the TREC 2004 Robust Track, we demonstrate a zero-shot transfer-based approach that outperforms previous state-of-the-art models requiring in-dataset cross-validation. Furthermore, we find that our approach significantly outperforms an encoder-only model in a data-poor regime (i.e., with few training examples). We investigate this observation further by varying target words to probe the model's use of latent knowledge.
published: 2020-03-14T22:29:50Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

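The "target words as relevance labels" trick in the abstract above reduces to a two-way softmax over the logits of the two target tokens at the decoder's first step; documents are then ranked by the resulting probability. A small sketch (the logit values are illustrative):

```python
import math

def relevance_prob(logit_true: float, logit_false: float) -> float:
    """P(relevant) from the decoder logits of the 'true'/'false' target
    words, via a numerically stable two-way softmax."""
    m = max(logit_true, logit_false)  # subtract max before exponentiating
    zt = math.exp(logit_true - m)
    zf = math.exp(logit_false - m)
    return zt / (zt + zf)
```

Equal logits give 0.5; the larger the margin in favor of the 'true' token, the closer the score gets to 1.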
arxiv_id: 2003.06792
title: Learning Enriched Features for Real Image Restoration and Enhancement
authors: ['Syed Waqas Zamir', 'Aditya Arora', 'Salman Khan', 'Munawar Hayat', 'Fahad Shahbaz Khan', 'Ming-Hsuan Yang', 'Ling Shao']
categories: ['cs.CV']
summary:
With the goal of recovering high-quality image content from its degraded version, image restoration enjoys numerous applications, such as in surveillance, computational photography, medical imaging, and remote sensing. Recently, convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration task. Existing CNN-based methods typically operate either on full-resolution or on progressively low-resolution representations. In the former case, spatially precise but contextually less robust results are achieved, while in the latter case, semantically reliable but spatially less accurate outputs are generated. In this paper, we present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network and receiving strong contextual information from the low-resolution representations. The core of our approach is a multi-scale residual block containing several key elements: (a) parallel multi-resolution convolution streams for extracting multi-scale features, (b) information exchange across the multi-resolution streams, (c) spatial and channel attention mechanisms for capturing contextual information, and (d) attention based multi-scale feature aggregation. In a nutshell, our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details. Extensive experiments on five real image benchmark datasets demonstrate that our method, named as MIRNet, achieves state-of-the-art results for a variety of image processing tasks, including image denoising, super-resolution, and image enhancement. The source code and pre-trained models are available at https://github.com/swz30/MIRNet.
published: 2020-03-15T11:04:30Z
comments: Accepted for publication at ECCV 2020
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2003.10286
title: PathVQA: 30000+ Questions for Medical Visual Question Answering
authors: ['Xuehai He', 'Yichen Zhang', 'Luntian Mou', 'Eric Xing', 'Pengtao Xie']
categories: ['cs.CL', 'cs.AI']
summary:
Is it possible to develop an "AI Pathologist" to pass the board-certified examination of the American Board of Pathology? To achieve this goal, the first step is to create a visual question answering (VQA) dataset where the AI agent is presented with a pathology image together with a question and is asked to give the correct answer. Our work makes the first attempt to build such a dataset. Different from creating general-domain VQA datasets where the images are widely accessible and there are many crowdsourcing workers available and capable of generating question-answer pairs, developing a medical VQA dataset is much more challenging. First, due to privacy concerns, pathology images are usually not publicly available. Second, only well-trained pathologists can understand pathology images, but they barely have time to help create datasets for AI research. To address these challenges, we resort to pathology textbooks and online digital libraries. We develop a semi-automated pipeline to extract pathology images and captions from textbooks and generate question-answer pairs from captions using natural language processing. We collect 32,799 open-ended questions from 4,998 pathology images where each question is manually checked to ensure correctness. To our best knowledge, this is the first dataset for pathology VQA. Our dataset will be released publicly to promote research in medical VQA.
published: 2020-03-07T17:55:41Z
comments: null
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2003.10555
title: ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
authors: ['Kevin Clark', 'Minh-Thang Luong', 'Quoc V. Le', 'Christopher D. Manning']
categories: ['cs.CL']
summary:
Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pre-training task called replaced token detection. Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pre-training task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute.
published: 2020-03-23T21:17:42Z
comments: ICLR 2020
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

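The replaced-token-detection setup described above can be sketched in a few lines: corrupt some positions with generator samples and record the per-token labels the discriminator must predict. Here the "generator" is just a uniform draw from a toy vocabulary (the real model uses a small masked LM), and all names are illustrative:

```python
import random

def corrupt(tokens, generator_vocab, replace_prob=0.15, rng=None):
    """Build one replaced-token-detection training pair.

    Each position is, with probability replace_prob, overwritten by a
    generator sample. Labels mark positions the discriminator should flag
    as replaced; a sample that happens to equal the original token stays
    labeled as original.
    """
    rng = rng or random.Random(0)
    corrupted, replaced = [], []
    for tok in tokens:
        if rng.random() < replace_prob:
            sample = rng.choice(generator_vocab)
            corrupted.append(sample)
            replaced.append(sample != tok)
        else:
            corrupted.append(tok)
            replaced.append(False)
    return corrupted, replaced
```

Unlike MLM, the discriminator's loss is defined over every position, not just the ~15% that were corrupted, which is the sample-efficiency argument of the abstract.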
arxiv_id: 2003.10564
title: Improving Yorùbá Diacritic Restoration
authors: ['Iroro Orife', 'David I. Adelani', 'Timi Fasubaa', 'Victor Williamson', 'Wuraola Fisayo Oyewusi', 'Olamilekan Wahab', 'Kola Tubosun']
categories: ['cs.CL']
summary:
Yor\`ub\'a is a widely spoken West African language with a writing system rich in orthographic and tonal diacritics. They provide morphological information, are crucial for lexical disambiguation, pronunciation and are vital for any computational Speech or Natural Language Processing tasks. However diacritic marks are commonly excluded from electronic texts due to limited device and application support as well as general education on proper usage. We report on recent efforts at dataset cultivation. By aggregating and improving disparate texts from the web and various personal libraries, we were able to significantly grow our clean Yor\`ub\'a dataset from a majority Bibilical text corpora with three sources to millions of tokens from over a dozen sources. We evaluate updated diacritic restoration models on a new, general purpose, public-domain Yor\`ub\'a evaluation dataset of modern journalistic news text, selected to be multi-purpose and reflecting contemporary usage. All pre-trained models, datasets and source-code have been released as an open-source project to advance efforts on Yor\`ub\'a language technology.
published: 2020-03-23T22:07:15Z
comments: Accepted to ICLR 2020 AfricaNLP workshop
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2003.11080
title: XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization
authors: ['Junjie Hu', 'Sebastian Ruder', 'Aditya Siddhant', 'Graham Neubig', 'Orhan Firat', 'Melvin Johnson']
categories: ['cs.CL', 'cs.LG']
summary:
Much recent progress in applications of machine learning models to NLP has been driven by benchmarks that evaluate models across a wide variety of tasks. However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing. To this end, we introduce the Cross-lingual TRansfer Evaluation of Multilingual Encoders XTREME benchmark, a multi-task benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks. We demonstrate that while models tested on English reach human performance on many tasks, there is still a sizable gap in the performance of cross-lingually transferred models, particularly on syntactic and sentence retrieval tasks. There is also a wide spread of results across languages. We release the benchmark to encourage research on cross-lingual learning methods that transfer linguistic knowledge across a diverse and representative set of languages and tasks.
published: 2020-03-24T19:09:37Z
comments: In Proceedings of the 37th International Conference on Machine Learning (ICML). July 2020
journal_ref: null
doi: null
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2003.11982
title: In defence of metric learning for speaker recognition
authors: ['Joon Son Chung', 'Jaesung Huh', 'Seongkyu Mun', 'Minjae Lee', 'Hee Soo Heo', 'Soyeon Choe', 'Chiheon Ham', 'Sunghwan Jung', 'Bong-Jin Lee', 'Icksang Han']
categories: ['eess.AS', 'cs.SD']
summary:
The objective of this paper is 'open-set' speaker recognition of unseen speakers, where ideal embeddings should be able to condense information into a compact utterance-level representation that has small intra-speaker and large inter-speaker distance. A popular belief in speaker recognition is that networks trained with classification objectives outperform metric learning methods. In this paper, we present an extensive evaluation of most popular loss functions for speaker recognition on the VoxCeleb dataset. We demonstrate that the vanilla triplet loss shows competitive performance compared to classification-based losses, and those trained with our proposed metric learning objective outperform state-of-the-art methods.
published: 2020-03-26T15:43:10Z
comments: The code can be found at https://github.com/clovaai/voxceleb_trainer
journal_ref: null
doi: 10.21437/Interspeech.2020-1064
ss_title: null
ss_authors: null
ss_year: null
ss_venue: null
ss_citationCount: null
ss_referenceCount: null
ss_fieldsOfStudy: null

arxiv_id: 2003.12039
title: RAFT: Recurrent All-Pairs Field Transforms for Optical Flow
authors: ['Zachary Teed', 'Jia Deng']
categories: ['cs.CV']
summary:
We introduce Recurrent All-Pairs Field Transforms (RAFT), a new deep network architecture for optical flow. RAFT extracts per-pixel features, builds multi-scale 4D correlation volumes for all pairs of pixels, and iteratively updates a flow field through a recurrent unit that performs lookups on the correlation volumes. RAFT achieves state-of-the-art performance. On KITTI, RAFT achieves an F1-all error of 5.10%, a 16% error reduction from the best published result (6.10%). On Sintel (final pass), RAFT obtains an end-point-error of 2.855 pixels, a 30% error reduction from the best published result (4.098 pixels). In addition, RAFT has strong cross-dataset generalization as well as high efficiency in inference time, training speed, and parameter count. Code is available at https://github.com/princeton-vl/RAFT.
2020-03-26T17:12:42Z
fixed a formatting issue, Eq 7. no change in content
null
null
null
null
null
null
null
null
null
2003.13016
A Dataset of German Legal Documents for Named Entity Recognition
['Elena Leitner', 'Georg Rehm', 'Julián Moreno-Schneider']
['cs.CL', 'cs.IR']
We describe a dataset developed for Named Entity Recognition in German federal court decisions. It consists of approx. 67,000 sentences with over 2 million tokens. The resource contains 54,000 manually annotated entities, mapped to 19 fine-grained semantic classes: person, judge, lawyer, country, city, street, landscape, organization, company, institution, court, brand, law, ordinance, European legal norm, regulation, contract, court decision, and legal literature. The legal documents were, furthermore, automatically annotated with more than 35,000 TimeML-based time expressions. The dataset, which is available under a CC-BY 4.0 license in the CoNLL-2002 format, was developed for training an NER service for German legal documents in the EU project Lynx.
2020-03-29T13:20:43Z
Proceedings of the 12th Language Resources and Evaluation Conference (LREC 2020). To appear
null
null
A Dataset of German Legal Documents for Named Entity Recognition
['Elena Leitner', 'Georg Rehm', 'J. Moreno-Schneider']
2020
International Conference on Language Resources and Evaluation
51
39
['Computer Science']
2003.13145
Can AI help in screening Viral and COVID-19 pneumonia?
['Muhammad E. H. Chowdhury', 'Tawsifur Rahman', 'Amith Khandakar', 'Rashid Mazhar', 'Muhammad Abdul Kadir', 'Zaid Bin Mahbub', 'Khandaker Reajul Islam', 'Muhammad Salman Khan', 'Atif Iqbal', 'Nasser Al-Emadi', 'Mamun Bin Ibne Reaz', 'T. I. Islam']
['cs.LG', 'cs.CV']
Coronavirus disease (COVID-19) is a pandemic disease, which has already caused thousands of casualties and infected several million people worldwide. Any technological tool enabling rapid screening of the COVID-19 infection with high accuracy can be crucially helpful to healthcare professionals. The main clinical tool currently in use for the diagnosis of COVID-19 is the Reverse transcription polymerase chain reaction (RT-PCR), which is expensive, less sensitive and requires specialized medical personnel. X-ray imaging is an easily accessible tool that can be an excellent alternative in the COVID-19 diagnosis. This research was undertaken to investigate the utility of artificial intelligence (AI) in the rapid and accurate detection of COVID-19 from chest X-ray images. The aim of this paper is to propose a robust technique for automatic detection of COVID-19 pneumonia from digital chest X-ray images applying pre-trained deep-learning algorithms while maximizing the detection accuracy. A public database was created by the authors combining several public databases and also by collecting images from recently published articles. The database contains a mixture of 423 COVID-19, 1485 viral pneumonia, and 1579 normal chest X-ray images. A transfer learning technique was used with the help of image augmentation to train and validate several pre-trained deep Convolutional Neural Networks (CNNs). The networks were trained to classify two different schemes: i) normal and COVID-19 pneumonia; ii) normal, viral and COVID-19 pneumonia with and without image augmentation. The classification accuracy, precision, sensitivity, and specificity for both the schemes were 99.7%, 99.7%, 99.7% and 99.55% and 97.9%, 97.95%, 97.9%, and 98.8%, respectively.
2020-03-29T21:37:21Z
12 pages, 9 Figures
IEEE Access 2020
10.1109/ACCESS.2020.3010287
Can AI Help in Screening Viral and COVID-19 Pneumonia?
['M. Chowdhury', 'Tawsifur Rahman', 'A. Khandakar', 'R. Mazhar', 'M. A. Kadir', 'Z. Mahbub', 'Khandakar R. Islam', 'Muhammad Salman Khan', 'A. Iqbal', 'N. Al-Emadi', 'M. Reaz']
2020
IEEE Access
1356
74
['Computer Science', 'Engineering']
2003.13630
TResNet: High Performance GPU-Dedicated Architecture
['Tal Ridnik', 'Hussam Lawen', 'Asaf Noy', 'Emanuel Ben Baruch', 'Gilad Sharir', 'Itamar Friedman']
['cs.CV', 'cs.LG', 'eess.IV']
Many deep learning models, developed in recent years, reach higher ImageNet accuracy than ResNet50, with fewer or comparable FLOPS count. While FLOPs are often seen as a proxy for network efficiency, when measuring actual GPU training and inference throughput, vanilla ResNet50 is usually significantly faster than its recent competitors, offering better throughput-accuracy trade-off. In this work, we introduce a series of architecture modifications that aim to boost neural networks' accuracy, while retaining their GPU training and inference efficiency. We first demonstrate and discuss the bottlenecks induced by FLOPs-optimizations. We then suggest alternative designs that better utilize GPU structure and assets. Finally, we introduce a new family of GPU-dedicated models, called TResNet, which achieve better accuracy and efficiency than previous ConvNets. Using a TResNet model, with similar GPU throughput to ResNet50, we reach 80.8 top-1 accuracy on ImageNet. Our TResNet models also transfer well and achieve state-of-the-art accuracy on competitive single-label classification datasets such as Stanford cars (96.0%), CIFAR-10 (99.0%), CIFAR-100 (91.5%) and Oxford-Flowers (99.1%). They also perform well on multi-label classification and object detection tasks. Implementation is available at: https://github.com/mrT23/TResNet.
2020-03-30T17:04:47Z
11 pages, 5 figures
null
null
null
null
null
null
null
null
null
2003.13678
Designing Network Design Spaces
['Ilija Radosavovic', 'Raj Prateek Kosaraju', 'Ross Girshick', 'Kaiming He', 'Piotr Dollár']
['cs.CV', 'cs.LG']
In this work, we present a new network design paradigm. Our goal is to help advance the understanding of network design and discover design principles that generalize across settings. Instead of focusing on designing individual network instances, we design network design spaces that parametrize populations of networks. The overall process is analogous to classic manual design of networks, but elevated to the design space level. Using our methodology we explore the structure aspect of network design and arrive at a low-dimensional design space consisting of simple, regular networks that we call RegNet. The core insight of the RegNet parametrization is surprisingly simple: widths and depths of good networks can be explained by a quantized linear function. We analyze the RegNet design space and arrive at interesting findings that do not match the current practice of network design. The RegNet design space provides simple and fast networks that work well across a wide range of flop regimes. Under comparable training settings and flops, the RegNet models outperform the popular EfficientNet models while being up to 5x faster on GPUs.
2020-03-30T17:57:47Z
CVPR 2020
null
null
null
null
null
null
null
null
null
2004.00033
Give your Text Representation Models some Love: the Case for Basque
['Rodrigo Agerri', 'Iñaki San Vicente', 'Jon Ander Campos', 'Ander Barrena', 'Xabier Saralegi', 'Aitor Soroa', 'Eneko Agirre']
['cs.CL']
Word embeddings and pre-trained language models allow to build rich representations of text and have enabled improvements across most NLP tasks. Unfortunately they are very expensive to train, and many small companies and research groups tend to use models that have been pre-trained and made available by third parties, rather than building their own. This is suboptimal as, for many languages, the models have been trained on smaller (or lower quality) corpora. In addition, monolingual pre-trained models for non-English languages are not always available. At best, models for those languages are included in multilingual versions, where each language shares the quota of substrings and parameters with the rest of the languages. This is particularly true for smaller languages such as Basque. In this paper we show that a number of monolingual models (FastText word embeddings, FLAIR and BERT language models) trained with larger Basque corpora produce much better results than publicly available versions in downstream NLP tasks, including topic classification, sentiment classification, PoS tagging and NER. This work sets a new state-of-the-art in those tasks for Basque. All benchmarks and models used in this work are publicly available.
2020-03-31T18:01:56Z
Accepted at LREC 2020; 8 pages, 7 tables
null
null
Give your Text Representation Models some Love: the Case for Basque
['Rodrigo Agerri', 'Iñaki San Vicente', 'Jon Ander Campos', 'Ander Barrena', 'X. Saralegi', 'Aitor Soroa Etxabe', 'Eneko Agirre']
2020
International Conference on Language Resources and Evaluation
63
24
['Computer Science']
2004.00584
Deep Entity Matching with Pre-Trained Language Models
['Yuliang Li', 'Jinfeng Li', 'Yoshihiko Suhara', 'AnHai Doan', 'Wang-Chiew Tan']
['cs.DB', 'cs.CL']
We present Ditto, a novel entity matching system based on pre-trained Transformer-based language models. We fine-tune and cast EM as a sequence-pair classification problem to leverage such models with a simple architecture. Our experiments show that a straightforward application of language models such as BERT, DistilBERT, or RoBERTa pre-trained on large text corpora already significantly improves the matching quality and outperforms previous state-of-the-art (SOTA), by up to 29% of F1 score on benchmark datasets. We also developed three optimization techniques to further improve Ditto's matching capability. Ditto allows domain knowledge to be injected by highlighting important pieces of input information that may be of interest when making matching decisions. Ditto also summarizes strings that are too long so that only the essential information is retained and used for EM. Finally, Ditto adapts a SOTA technique on data augmentation for text to EM to augment the training data with (difficult) examples. This way, Ditto is forced to learn "harder" to improve the model's matching capability. The optimizations we developed further boost the performance of Ditto by up to 9.8%. Perhaps more surprisingly, we establish that Ditto can achieve the previous SOTA results with at most half the number of labeled data. Finally, we demonstrate Ditto's effectiveness on a real-world large-scale EM task. On matching two company datasets consisting of 789K and 412K records, Ditto achieves a high F1 score of 96.5%.
2020-04-01T17:14:10Z
To appear in VLDB 2021
null
10.14778/3421424.3421431
Deep entity matching with pre-trained language models
['Yuliang Li', 'Jinfeng Li', 'Yoshihiko Suhara', 'A. Doan', 'W. Tan']
2020
Proceedings of the VLDB Endowment
391
68
['Computer Science']
2004.01092
NUBES: A Corpus of Negation and Uncertainty in Spanish Clinical Texts
['Salvador Lima', 'Naiara Perez', 'Montse Cuadros', 'German Rigau']
['cs.CL']
This paper introduces the first version of the NUBes corpus (Negation and Uncertainty annotations in Biomedical texts in Spanish). The corpus is part of an on-going research and currently consists of 29,682 sentences obtained from anonymised health records annotated with negation and uncertainty. The article includes an exhaustive comparison with similar corpora in Spanish, and presents the main annotation and design decisions. Additionally, we perform preliminary experiments using deep learning algorithms to validate the annotated dataset. As far as we know, NUBes is the largest publicly available corpus for negation in Spanish and the first that also incorporates the annotation of speculation cues, scopes, and events.
2020-04-02T15:51:31Z
Accepted at the Twelfth International Conference on Language Resources and Evaluation (LREC 2020)
null
null
null
null
null
null
null
null
null
2004.01401
XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation
['Yaobo Liang', 'Nan Duan', 'Yeyun Gong', 'Ning Wu', 'Fenfei Guo', 'Weizhen Qi', 'Ming Gong', 'Linjun Shou', 'Daxin Jiang', 'Guihong Cao', 'Xiaodong Fan', 'Ruofei Zhang', 'Rahul Agrawal', 'Edward Cui', 'Sining Wei', 'Taroon Bharti', 'Ying Qiao', 'Jiun-Hung Chen', 'Winnie Wu', 'Shuguang Liu', 'Fan Yang', 'Daniel Campos', 'Rangan Majumder', 'Ming Zhou']
['cs.CL']
In this paper, we introduce XGLUE, a new benchmark dataset that can be used to train large-scale cross-lingual pre-trained models using multilingual and bilingual corpora and evaluate their performance across a diverse set of cross-lingual tasks. Comparing to GLUE(Wang et al., 2019), which is labeled in English for natural language understanding tasks only, XGLUE has two main advantages: (1) it provides 11 diversified tasks that cover both natural language understanding and generation scenarios; (2) for each task, it provides labeled data in multiple languages. We extend a recent cross-lingual pre-trained model Unicoder(Huang et al., 2019) to cover both understanding and generation tasks, which is evaluated on XGLUE as a strong baseline. We also evaluate the base versions (12-layer) of Multilingual BERT, XLM and XLM-R for comparison.
2020-04-03T07:03:12Z
null
null
null
null
null
null
null
null
null
null
2004.01804
Google Landmarks Dataset v2 -- A Large-Scale Benchmark for Instance-Level Recognition and Retrieval
['Tobias Weyand', 'Andre Araujo', 'Bingyi Cao', 'Jack Sim']
['cs.CV']
While image retrieval and instance recognition techniques are progressing rapidly, there is a need for challenging datasets to accurately measure their performance -- while posing novel challenges that are relevant for practical applications. We introduce the Google Landmarks Dataset v2 (GLDv2), a new benchmark for large-scale, fine-grained instance recognition and image retrieval in the domain of human-made and natural landmarks. GLDv2 is the largest such dataset to date by a large margin, including over 5M images and 200k distinct instance labels. Its test set consists of 118k images with ground truth annotations for both the retrieval and recognition tasks. The ground truth construction involved over 800 hours of human annotator work. Our new dataset has several challenging properties inspired by real world applications that previous datasets did not consider: An extremely long-tailed class distribution, a large fraction of out-of-domain test photos and large intra-class variability. The dataset is sourced from Wikimedia Commons, the world's largest crowdsourced collection of landmark photos. We provide baseline results for both recognition and retrieval tasks based on state-of-the-art methods as well as competitive results from a public challenge. We further demonstrate the suitability of the dataset for transfer learning by showing that image embeddings trained on it achieve competitive retrieval performance on independent datasets. The dataset images, ground-truth and metric scoring code are available at https://github.com/cvdfoundation/google-landmark.
2020-04-03T22:52:17Z
CVPR20 camera-ready (oral) + appendices
null
null
Google Landmarks Dataset v2 – A Large-Scale Benchmark for Instance-Level Recognition and Retrieval
['Tobias Weyand', 'A. Araújo', 'Bingyi Cao', 'Jack Sim']
2020
Computer Vision and Pattern Recognition
373
70
['Computer Science']
2004.02349
TAPAS: Weakly Supervised Table Parsing via Pre-training
['Jonathan Herzig', 'Paweł Krzysztof Nowak', 'Thomas Müller', 'Francesco Piccinno', 'Julian Martin Eisenschlos']
['cs.IR', 'cs.AI', 'cs.CL', 'cs.LG']
Answering natural language questions over tables is usually seen as a semantic parsing task. To alleviate the collection cost of full logical forms, one popular approach focuses on weak supervision consisting of denotations instead of logical forms. However, training semantic parsers from weak supervision poses difficulties, and in addition, the generated logical forms are only used as an intermediate step prior to retrieving the denotation. In this paper, we present TAPAS, an approach to question answering over tables without generating logical forms. TAPAS trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such selection. TAPAS extends BERT's architecture to encode tables as input, initializes from an effective joint pre-training of text segments and tables crawled from Wikipedia, and is trained end-to-end. We experiment with three different semantic parsing datasets, and find that TAPAS outperforms or rivals semantic parsing models by improving state-of-the-art accuracy on SQA from 55.1 to 67.2 and performing on par with the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model architecture. We additionally find that transfer learning, which is trivial in our setting, from WIKISQL to WIKITQ, yields 48.7 accuracy, 4.2 points above the state-of-the-art.
2020-04-05T23:18:37Z
Accepted to ACL 2020
null
10.18653/v1/2020.acl-main.398
null
null
null
null
null
null
null
2004.02814
Leveraging the Inherent Hierarchy of Vacancy Titles for Automated Job Ontology Expansion
['Jeroen Van Hautte', 'Vincent Schelstraete', 'Mikaël Wornoo']
['cs.CL', 'cs.LG']
Machine learning plays an ever-bigger part in online recruitment, powering intelligent matchmaking and job recommendations across many of the world's largest job platforms. However, the main text is rarely enough to fully understand a job posting: more often than not, much of the required information is condensed into the job title. Several organised efforts have been made to map job titles onto a hand-made knowledge base as to provide this information, but these only cover around 60% of online vacancies. We introduce a novel, purely data-driven approach towards the detection of new job titles. Our method is conceptually simple, extremely efficient and competitive with traditional NER-based approaches. Although the standalone application of our method does not outperform a finetuned BERT model, it can be applied as a preprocessing step as well, substantially boosting accuracy across several architectures.
2020-04-06T16:55:41Z
Accepted to the Proceedings of the 6th International Workshop on Computational Terminology (COMPUTERM 2020)
null
null
Leveraging the Inherent Hierarchy of Vacancy Titles for Automated Job Ontology Expansion
['Jeroen Van Hautte', 'Vincent Schelstraete', 'Mikael Wornoo']
2020
COMPUTERM
4
16
['Computer Science']
2004.02967
Evolving Normalization-Activation Layers
['Hanxiao Liu', 'Andrew Brock', 'Karen Simonyan', 'Quoc V. Le']
['cs.LG', 'cs.CV', 'cs.NE', 'stat.ML']
Normalization layers and activation functions are fundamental components in deep networks and typically co-locate with each other. Here we propose to design them using an automated approach. Instead of designing them separately, we unify them into a single tensor-to-tensor computation graph, and evolve its structure starting from basic mathematical functions. Examples of such mathematical functions are addition, multiplication and statistical moments. The use of low-level mathematical functions, in contrast to the use of high-level modules in mainstream NAS, leads to a highly sparse and large search space which can be challenging for search methods. To address the challenge, we develop efficient rejection protocols to quickly filter out candidate layers that do not work well. We also use multi-objective evolution to optimize each layer's performance across many architectures to prevent overfitting. Our method leads to the discovery of EvoNorms, a set of new normalization-activation layers with novel, and sometimes surprising structures that go beyond existing design patterns. For example, some EvoNorms do not assume that normalization and activation functions must be applied sequentially, nor need to center the feature maps, nor require explicit activation functions. Our experiments show that EvoNorms work well on image classification models including ResNets, MobileNets and EfficientNets but also transfer well to Mask R-CNN with FPN/SpineNet for instance segmentation and to BigGAN for image synthesis, outperforming BatchNorm and GroupNorm based layers in many cases.
2020-04-06T19:52:48Z
null
null
null
null
null
null
null
null
null
null
2004.02984
MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices
['Zhiqing Sun', 'Hongkun Yu', 'Xiaodan Song', 'Renjie Liu', 'Yiming Yang', 'Denny Zhou']
['cs.CL', 'cs.LG']
Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained models with hundreds of millions of parameters. However, these models suffer from heavy model sizes and high latency such that they cannot be deployed to resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model. Like the original BERT, MobileBERT is task-agnostic, that is, it can be generically applied to various downstream NLP tasks via simple fine-tuning. Basically, MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERT_LARGE model. Then, we conduct knowledge transfer from this teacher to MobileBERT. Empirical studies show that MobileBERT is 4.3x smaller and 5.5x faster than BERT_BASE while achieving competitive results on well-known benchmarks. On the natural language inference tasks of GLUE, MobileBERT achieves a GLUE score of 77.7 (0.6 lower than BERT_BASE), and 62 ms latency on a Pixel 4 phone. On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a dev F1 score of 90.0/79.2 (1.5/2.1 higher than BERT_BASE).
2020-04-06T20:20:58Z
Accepted to ACL 2020
null
null
MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices
['Zhiqing Sun', 'Hongkun Yu', 'Xiaodan Song', 'Renjie Liu', 'Yiming Yang', 'Denny Zhou']
2020
Annual Meeting of the Association for Computational Linguistics
820
66
['Computer Science']
2004.03289
KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding
['Jiyeon Ham', 'Yo Joong Choe', 'Kyubyong Park', 'Ilji Choi', 'Hyungjoon Soh']
['cs.CL']
Natural language inference (NLI) and semantic textual similarity (STS) are key tasks in natural language understanding (NLU). Although several benchmark datasets for those tasks have been released in English and a few other languages, there are no publicly available NLI or STS datasets in the Korean language. Motivated by this, we construct and release new datasets for Korean NLI and STS, dubbed KorNLI and KorSTS, respectively. Following previous approaches, we machine-translate existing English training sets and manually translate development and test sets into Korean. To accelerate research on Korean NLU, we also establish baselines on KorNLI and KorSTS. Our datasets are publicly available at https://github.com/kakaobrain/KorNLUDatasets.
2020-04-07T11:49:15Z
Findings of EMNLP 2020. Datasets available at https://github.com/kakaobrain/KorNLUDatasets
null
null
null
null
null
null
null
null
null
2004.03329
MedDialog: Two Large-scale Medical Dialogue Datasets
['Xuehai He', 'Shu Chen', 'Zeqian Ju', 'Xiangyu Dong', 'Hongchao Fang', 'Sicheng Wang', 'Yue Yang', 'Jiaqi Zeng', 'Ruisi Zhang', 'Ruoyu Zhang', 'Meng Zhou', 'Penghui Zhu', 'Pengtao Xie']
['cs.LG', 'cs.AI', 'cs.CL', 'stat.ML']
Medical dialogue systems are promising in assisting in telemedicine to increase access to healthcare services, improve the quality of patient care, and reduce medical costs. To facilitate the research and development of medical dialogue systems, we build two large-scale medical dialogue datasets: MedDialog-EN and MedDialog-CN. MedDialog-EN is an English dataset containing 0.3 million conversations between patients and doctors and 0.5 million utterances. MedDialog-CN is a Chinese dataset containing 1.1 million conversations and 4 million utterances. To our best knowledge, MedDialog-(EN,CN) are the largest medical dialogue datasets to date. The dataset is available at https://github.com/UCSD-AI4H/Medical-Dialogue-System
2020-04-07T13:07:09Z
null
null
null
MedDialog: A Large-scale Medical Dialogue Dataset
['Shu Chen', 'Zeqian Ju', 'Xiangyu Dong', 'Hongchao Fang', 'Sicheng Wang', 'Yue Yang', 'Jiaqi Zeng', 'Ruisi Zhang', 'Ruoyu Zhang', 'Meng Zhou', 'Penghui Zhu', 'Pengtao Xie']
2020
arXiv.org
179
2
['Computer Science', 'Mathematics']
2004.03659
The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews
['Elena Tutubalina', 'Ilseyar Alimova', 'Zulfat Miftahutdinov', 'Andrey Sakhovskiy', 'Valentin Malykh', 'Sergey Nikolenko']
['cs.CL']
The Russian Drug Reaction Corpus (RuDReC) is a new partially annotated corpus of consumer reviews in Russian about pharmaceutical products for the detection of health-related named entities and the effectiveness of pharmaceutical products. The corpus itself consists of two parts, the raw one and the labelled one. The raw part includes 1.4 million health-related user-generated texts collected from various Internet sources, including social media. The labelled part contains 500 consumer reviews about drug therapy with drug- and disease-related information. Labels for sentences include health-related issues or their absence. The sentences with one are additionally labelled at the expression level for identification of fine-grained subtypes such as drug classes and drug forms, drug indications, and drug reactions. Further, we present a baseline model for named entity recognition (NER) and multi-label sentence classification tasks on this corpus. The macro F1 score of 74.85% in the NER task was achieved by our RuDR-BERT model. For the sentence classification task, our model achieves the macro F1 score of 68.82% gaining 7.47% over the score of BERT model trained on Russian data. We make the RuDReC corpus and pretrained weights of domain-specific BERT models freely available at https://github.com/cimm-kzn/RuDReC
2020-04-07T19:26:13Z
9 pages, 9 tables, 4 figures
Bioinformatics, 2020
10.1093/bioinformatics/btaa675
The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews
['E. Tutubalina', 'I. Alimova', 'Z. Miftahutdinov', 'Andrey Sakhovskiy', 'Valentin Malykh', 'S. Nikolenko']
2020
Bioinform.
44
26
['Computer Science', 'Medicine']
2004.03720
Byte Pair Encoding is Suboptimal for Language Model Pretraining
['Kaj Bostrom', 'Greg Durrett']
['cs.CL', 'I.2.7']
The success of pretrained transformer language models (LMs) in natural language processing has led to a wide range of pretraining setups. In particular, these models employ a variety of subword tokenization methods, most notably byte-pair encoding (BPE) (Sennrich et al., 2016; Gage, 1994), the WordPiece method (Schuster and Nakajima, 2012), and unigram language modeling (Kudo, 2018), to segment text. However, to the best of our knowledge, the literature does not contain a direct evaluation of the impact of tokenization on language model pretraining. We analyze differences between BPE and unigram LM tokenization, finding that the latter method recovers subword units that align more closely with morphology and avoids problems stemming from BPE's greedy construction procedure. We then compare the fine-tuned task performance of identical transformer masked language models pretrained with these tokenizations. Across downstream tasks and two languages (English and Japanese), we find that the unigram LM tokenization method matches or outperforms BPE. We hope that developers of future pretrained LMs will consider adopting the unigram LM method over the more prevalent BPE.
2020-04-07T21:21:06Z
5 pages, 3 figures. To be published in Findings of EMNLP 2020
null
null
Byte Pair Encoding is Suboptimal for Language Model Pretraining
['Kaj Bostrom', 'Greg Durrett']
2020
Findings
215
28
['Computer Science']
2004.04037
DynaBERT: Dynamic BERT with Adaptive Width and Depth
['Lu Hou', 'Zhiqi Huang', 'Lifeng Shang', 'Xin Jiang', 'Xiao Chen', 'Qun Liu']
['cs.CL', 'cs.LG']
The pre-trained language models like BERT, though powerful in many natural language processing tasks, are both computation and memory expensive. To alleviate this problem, one approach is to compress them for specific tasks before deployment. However, recent works on BERT compression usually compress the large BERT model to a fixed smaller size. They can not fully satisfy the requirements of different edge devices with various hardware performances. In this paper, we propose a novel dynamic BERT model (abbreviated as DynaBERT), which can flexibly adjust the size and latency by selecting adaptive width and depth. The training process of DynaBERT includes first training a width-adaptive BERT and then allowing both adaptive width and depth, by distilling knowledge from the full-sized model to small sub-networks. Network rewiring is also used to keep the more important attention heads and neurons shared by more sub-networks. Comprehensive experiments under various efficiency constraints demonstrate that our proposed dynamic BERT (or RoBERTa) at its largest size has comparable performance as BERT-base (or RoBERTa-base), while at smaller widths and depths consistently outperforms existing BERT compression methods. Code is available at https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/DynaBERT.
2020-04-08T15:06:28Z
NeurIPS-2020 (Spotlight)
null
null
null
null
null
null
null
null
null
2004.04270
The Spotify Podcast Dataset
['Ann Clifton', 'Aasish Pappu', 'Sravana Reddy', 'Yongze Yu', 'Jussi Karlgren', 'Ben Carterette', 'Rosie Jones']
['cs.CL']
Podcasts are a relatively new form of audio media. Episodes appear on a regular cadence, and come in many different formats and levels of formality. They can be formal news journalism or conversational chat; fiction or non-fiction. They are rapidly growing in popularity and yet have been relatively little studied. As an audio format, podcasts are more varied in style and production types than, say, broadcast news, and contain many more genres than typically studied in video research. The medium is therefore a rich domain with many research avenues for the IR and NLP communities. We present the Spotify Podcast Dataset, a set of approximately 100K podcast episodes comprised of raw audio files along with accompanying ASR transcripts. This represents over 47,000 hours of transcribed audio, and is an order of magnitude larger than previous speech-to-text corpora.
2020-04-08T21:25:00Z
4 pages, 3 figures
null
null
null
null
null
null
null
null
null
2004.04315
Large Arabic Twitter Dataset on COVID-19
['Sarah Alqurashi', 'Ahmad Alhindi', 'Eisa Alanazi']
['cs.SI', 'cs.CL']
The 2019 coronavirus disease (COVID-19), which emerged in late December 2019 in China, is now rapidly spreading across the globe. At the time of writing this paper, the number of global confirmed cases has passed two and a half million with over 180,000 fatalities. Many countries have enforced strict social distancing policies to contain the spread of the virus. This has changed the daily life of tens of millions of people, and urged people to turn their discussions online, e.g., via online social media sites like Twitter. In this work, we describe the first Arabic tweets dataset on COVID-19 that we have been collecting since January 1st, 2020. The dataset would help researchers and policy makers in studying different societal issues related to the pandemic. Many other tasks related to behavioral change, information sharing, misinformation and rumors spreading can also be analyzed.
2020-04-09T01:07:12Z
null
null
null
null
null
null
null
null
null
null
2004.04410
Att-HACK: An Expressive Speech Database with Social Attitudes
['Clément Le Moine', 'Nicolas Obin']
['eess.AS']
This paper presents Att-HACK, the first large database of acted speech with social attitudes. Available databases of expressive speech are rare and very often restricted to the primary emotions: anger, joy, sadness, fear. This greatly limits the scope of the research on expressive speech. Besides, a fundamental aspect of speech prosody is always ignored and missing from such databases: its variety, i.e. the possibility to repeat an utterance while varying its prosody. This paper represents a first attempt to widen the scope of expressivity in speech, by providing a database of acted speech with social attitudes: friendly, seductive, dominant, and distant. The proposed database comprises 25 speakers interpreting 100 utterances in 4 social attitudes, with 3-5 repetitions each per attitude for a total of around 30 hours of speech. The Att-HACK is freely available for academic research under a Creative Commons Licence.
2020-04-09T08:09:59Z
5 pages, 5 figures
null
null
Att-HACK: An Expressive Speech Database with Social Attitudes
['Clément Le Moine', 'Nicolas Obin']
2,020
Proceedings of the International Conference on Speech Prosody
19
35
['Engineering', 'Computer Science', 'Psychology']
2,004.0446
PANDORA Talks: Personality and Demographics on Reddit
['Matej Gjurković', 'Mladen Karan', 'Iva Vukojević', 'Mihaela Bošnjak', 'Jan Šnajder']
['cs.CL', 'cs.CY', 'cs.SI']
Personality and demographics are important variables in social sciences, while in NLP they can aid in interpretability and removal of societal biases. However, datasets with both personality and demographic labels are scarce. To address this, we present PANDORA, the first large-scale dataset of Reddit comments labeled with three personality models (including the well-established Big 5 model) and demographics (age, gender, and location) for more than 10k users. We showcase the usefulness of this dataset on three experiments, where we leverage the more readily available data from other personality models to predict the Big 5 traits, analyze gender classification biases arising from psycho-demographic variables, and carry out a confirmatory and exploratory analysis based on psychological theories. Finally, we present benchmark prediction models for all personality and demographic variables.
2020-04-09T10:08:05Z
Proceedings of the Ninth International Workshop on Natural Language Processing for Social Media, NAACL 2021, https://www.aclweb.org/anthology/2021.socialnlp-1.12
null
null
null
null
null
null
null
null
null
2,004.04696
BLEURT: Learning Robust Metrics for Text Generation
['Thibault Sellam', 'Dipanjan Das', 'Ankur P. Parikh']
['cs.CL']
Text generation has made significant advances in the last few years. Yet, evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. BLEURT provides state-of-the-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out-of-distribution.
2020-04-09T17:26:52Z
Accepted at ACL 2020
null
null
BLEURT: Learning Robust Metrics for Text Generation
['Thibault Sellam', 'Dipanjan Das', 'Ankur P. Parikh']
2,020
Annual Meeting of the Association for Computational Linguistics
1,511
52
['Computer Science']
2,004.04906
Dense Passage Retrieval for Open-Domain Question Answering
['Vladimir Karpukhin', 'Barlas Oğuz', 'Sewon Min', 'Patrick Lewis', 'Ledell Wu', 'Sergey Edunov', 'Danqi Chen', 'Wen-tau Yih']
['cs.CL']
Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art results on multiple open-domain QA benchmarks.
2020-04-10T04:53:17Z
EMNLP 2020
null
null
null
null
null
null
null
null
null
2,004.0515
Longformer: The Long-Document Transformer
['Iz Beltagy', 'Matthew E. Peters', 'Arman Cohan']
['cs.CL']
Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop and TriviaQA. We finally introduce the Longformer-Encoder-Decoder (LED), a Longformer variant for supporting long document generative sequence-to-sequence tasks, and demonstrate its effectiveness on the arXiv summarization dataset.
2020-04-10T17:54:09Z
Version 2 introduces the Longformer-Encoder-Decoder (LED) model
null
null
null
null
null
null
null
null
null
2,004.05665
Minimizing FLOPs to Learn Efficient Sparse Representations
['Biswajit Paria', 'Chih-Kuan Yeh', 'Ian E. H. Yen', 'Ning Xu', 'Pradeep Ravikumar', 'Barnabás Póczos']
['cs.LG', 'stat.ML']
Deep representation learning has become one of the most widely adopted approaches for visual search, recommendation, and identification. Retrieval of such representations from a large database is, however, computationally challenging. Approximate methods based on learning compact representations have been widely explored for this problem, such as locality sensitive hashing, product quantization, and PCA. In this work, in contrast to learning compact representations, we propose to learn high-dimensional and sparse representations that have representational capacity similar to dense embeddings while being more efficient, owing to sparse matrix multiplication operations, which can be much faster than dense multiplication. Following the key insight that the number of operations decreases quadratically with the sparsity of embeddings provided the non-zero entries are distributed uniformly across dimensions, we propose a novel approach to learn such distributed sparse embeddings via a carefully constructed regularization function that directly minimizes a continuous relaxation of the number of floating-point operations (FLOPs) incurred during retrieval. Our experiments show that our approach is competitive with the other baselines and yields a similar or better speed-vs-accuracy tradeoff on practical datasets.
2020-04-12T18:09:02Z
Published at ICLR 2020
null
null
Minimizing FLOPs to Learn Efficient Sparse Representations
['Biswajit Paria', 'Chih-Kuan Yeh', 'N. Xu', 'B. Póczos', 'Pradeep Ravikumar', 'I. E. Yen']
2,020
International Conference on Learning Representations
69
90
['Computer Science', 'Mathematics']
2,004.05707
VGCN-BERT: Augmenting BERT with Graph Embedding for Text Classification
['Zhibin Lu', 'Pan Du', 'Jian-Yun Nie']
['cs.CL', 'cs.LG', 'stat.ML', 'I.2.4; I.2.7']
Much progress has been made recently on text classification with methods based on neural networks. In particular, models using attention mechanisms, such as BERT, have been shown to capture the contextual information within a sentence or document. However, their ability to capture global information about the vocabulary of a language is more limited. The latter is the strength of Graph Convolutional Networks (GCN). In this paper, we propose the VGCN-BERT model, which combines the capability of BERT with a Vocabulary Graph Convolutional Network (VGCN). Local and global information interact through different layers of BERT, allowing them to influence each other and to jointly build a final representation for classification. In our experiments on several text classification datasets, our approach outperforms BERT and GCN alone, and achieves higher effectiveness than that reported in previous studies.
2020-04-12T22:02:33Z
12 pages, 2 figures
in J. M. Jose et al. (Eds.): ECIR 2020, LNCS 12035, pp.369-382, 2020
null
VGCN-BERT: Augmenting BERT with Graph Embedding for Text Classification
['Zhibin Lu', 'Pan Du', 'J. Nie']
2,020
European Conference on Information Retrieval
127
34
['Computer Science']
2,004.06364
Polar nano-clusters in nominally paraelectric ceramics demonstrating high microwave tunability for wireless communication
['Hangfeng Zhang', 'Henry Giddens', 'Yajun Yue', 'Xinzhao Xu', 'Vicente Araullo-Peters', 'Vladimir Koval', 'Matteo Palma', 'Isaac Abrahams', 'Haixue Yan', 'Yang Hao']
['physics.app-ph', 'cond-mat.mtrl-sci']
Dielectric materials with high tunability at microwave frequencies are key components in the design of microwave communication systems. Dense Ba0.6Sr0.4TiO3 (BST) ceramics with different grain sizes were prepared in order to optimise the dielectric tunability via polar nano-cluster effects. Dielectric permittivity and loss measurements were carried out at both high and low frequencies and were supported by results from X-ray powder diffraction, scanning and transmission electron microscopies, Raman spectroscopy, and piezoresponse force microscopy. The concentration of polar nano-clusters, whose sizes are found to be in the range 20 to 50 nm, and the dielectric tunability increase with increasing grain size. A novel method for measuring the microwave tunability of bulk dielectrics is presented. The highest tunability, 32%, is achieved in ceramics with an average grain size of 10 μm. The tunability of BST ceramics under an applied DC field is demonstrated in a prototype small resonant antenna.
2020-04-14T09:00:33Z
25pages, 6 figures
Journal of the European Ceramic Society,2020
null
Polar nano-clusters in nominally paraelectric ceramics demonstrating high microwave tunability for wireless communication
['Hangfeng Zhang', 'H. Giddens', 'Y. Yue', 'Xinzhao Xu', 'V. Araullo-Peters', 'V. Koval', 'M. Palma', 'I. Abrahams', 'Haixue Yan', 'Y. Hao']
2,020
null
37
45
['Materials Science', 'Physics']
2,004.06465
Deep Learning Models for Multilingual Hate Speech Detection
['Sai Saketh Aluru', 'Binny Mathew', 'Punyajoy Saha', 'Animesh Mukherjee']
['cs.SI', 'cs.CL']
Hate speech detection is a challenging problem, with most of the available datasets in only one language: English. In this paper, we conduct a large-scale analysis of multilingual hate speech in 9 languages from 16 different sources. We observe that in the low-resource setting, simple models such as LASER embeddings with logistic regression perform best, while in the high-resource setting BERT-based models perform better. For zero-shot classification, languages such as Italian and Portuguese achieve good results. Our proposed framework could be used as an efficient solution for low-resource languages. These models could also act as good baselines for future multilingual hate speech detection tasks. We have made our code and experimental settings public for other researchers at https://github.com/punyajoy/DE-LIMIT.
2020-04-14T13:14:27Z
16 pages, Accepted at ECML-PKDD 2020
null
null
null
null
null
null
null
null
null
2,004.06824
Melanoma Detection using Adversarial Training and Deep Transfer Learning
['Hasib Zunair', 'A. Ben Hamza']
['eess.IV', 'cs.CV']
Skin lesion datasets consist predominantly of normal samples with only a small percentage of abnormal ones, giving rise to the class imbalance problem. Also, skin lesion images are largely similar in overall appearance owing to the low inter-class variability. In this paper, we propose a two-stage framework for automatic classification of skin lesion images using adversarial training and transfer learning toward melanoma detection. In the first stage, we leverage the inter-class variation of the data distribution for the task of conditional image synthesis by learning the inter-class mapping and synthesizing under-represented class samples from the over-represented ones using unpaired image-to-image translation. In the second stage, we train a deep convolutional neural network for skin lesion classification using the original training set combined with the newly synthesized under-represented class samples. The training of this classifier is carried out by minimizing the focal loss function, which assists the model in learning from hard examples, while down-weighting the easy ones. Experiments conducted on a dermatology image benchmark demonstrate the superiority of our proposed approach over several standard baseline methods, achieving significant performance improvements. Interestingly, we show through feature visualization and analysis that our method leads to context based lesion assessment that can reach an expert dermatologist level.
2020-04-14T22:46:20Z
Published in the Journal of Physics in Medicine and Biology (PMB), April 2020. Codes at https://github.com/hasibzunair/adversarial-lesions
null
10.1088/1361-6560/ab86d3
null
null
null
null
null
null
null
2,004.0687
Coreferential Reasoning Learning for Language Representation
['Deming Ye', 'Yankai Lin', 'Jiaju Du', 'Zhenghao Liu', 'Peng Li', 'Maosong Sun', 'Zhiyuan Liu']
['cs.CL']
Language representation models such as BERT can effectively capture contextual semantic information from plain text, and have been proven to achieve promising results in many downstream NLP tasks with appropriate fine-tuning. However, most existing language representation models cannot explicitly handle coreference, which is essential to the coherent understanding of the whole discourse. To address this issue, we present CorefBERT, a novel language representation model that can capture the coreferential relations in context. The experimental results show that, compared with existing baseline models, CorefBERT achieves consistent and significant improvements on various downstream NLP tasks that require coreferential reasoning, while maintaining comparable performance to previous models on other common NLP tasks. The source code and experiment details of this paper can be obtained from https://github.com/thunlp/CorefBERT.
2020-04-15T03:57:45Z
Accepted by EMNLP2020
null
null
null
null
null
null
null
null
null
2,004.07067
Gestalt: a Stacking Ensemble for SQuAD2.0
['Mohamed El-Geish']
['cs.CL', 'cs.LG', 'stat.ML']
We propose a deep-learning system -- for the SQuAD2.0 task -- that finds, or indicates the lack of, a correct answer to a question in a context paragraph. Our goal is to learn an ensemble of heterogeneous SQuAD2.0 models that, when blended properly, outperforms the best model in the ensemble per se. We created a stacking ensemble that combines top-N predictions from two models, based on ALBERT and RoBERTa, into a multiclass classification task to pick the best answer out of their predictions. We explored various ensemble configurations, input representations, and model architectures. For evaluation, we examined test-set EM and F1 scores; our best-performing ensemble incorporated a CNN-based meta-model and scored 87.117 and 90.306, respectively -- a relative improvement of 0.55% for EM and 0.61% for F1 scores, compared to the baseline performance of the best model in the ensemble, an ALBERT-based model, at 86.644 for EM and 89.760 for F1.
2020-04-02T08:09:22Z
11 pages, 7 figures, Stanford CS224n Natural Language Processing with Deep Learning
null
null
null
null
null
null
null
null
null
2,004.0718
SPECTER: Document-level Representation Learning using Citation-informed Transformers
['Arman Cohan', 'Sergey Feldman', 'Iz Beltagy', 'Doug Downey', 'Daniel S. Weld']
['cs.CL']
Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power. For applications on scientific documents, such as classification and recommendation, the embeddings power strong performance on end tasks. We propose SPECTER, a new method to generate document-level embedding of scientific documents based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, SPECTER can be easily applied to downstream applications without task-specific fine-tuning. Additionally, to encourage further research on document-level models, we introduce SciDocs, a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction, to document classification and recommendation. We show that SPECTER outperforms a variety of competitive baselines on the benchmark.
2020-04-15T16:05:51Z
ACL 2020
null
null
null
null
null
null
null
null
null
2,004.07667
Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection
['Shauli Ravfogel', 'Yanai Elazar', 'Hila Gonen', 'Michael Twiton', 'Yoav Goldberg']
['cs.CL', 'cs.LG']
The ability to control for the kinds of information encoded in neural representation has a variety of use cases, especially in light of the challenge of interpreting these models. We present Iterative Null-space Projection (INLP), a novel method for removing information from neural representations. Our method is based on repeated training of linear classifiers that predict a certain property we aim to remove, followed by projection of the representations on their null-space. By doing so, the classifiers become oblivious to that target property, making it hard to linearly separate the data according to it. While applicable for multiple uses, we evaluate our method on bias and fairness use-cases, and show that our method is able to mitigate bias in word embeddings, as well as to increase fairness in a setting of multi-class classification.
2020-04-16T14:02:50Z
Accepted as a long paper in ACL 2020
null
null
null
null
null
null
null
null
null
2,004.07807
Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network
['Md. Rezaul Karim', 'Bharathi Raja Chakravarthi', 'John P. McCrae', 'Michael Cochez']
['cs.CL', 'cs.LG', 'stat.ML']
Exponential growth of social media and micro-blogging sites not only provides platforms for empowering freedom of expression and individual voices, but also enables people to express anti-social behaviour like online harassment, cyberbullying, and hate speech. Numerous works have been proposed to utilize these data for the analysis of social and anti-social behaviour, document characterization, and sentiment analysis, mostly for highly resourced languages such as English. However, some languages are under-resourced, e.g., South Asian languages like Bengali, Tamil, Assamese, and Telugu, which lack computational resources for NLP tasks. In this paper, we provide several classification benchmarks for Bengali, an under-resourced language. We prepared three datasets expressing hate, commonly used topics, and opinions for hate speech detection, document classification, and sentiment analysis, respectively. We built the largest Bengali word embedding models to date, based on 250 million articles, which we call BengFastText. We perform three different experiments, covering document classification, sentiment analysis, and hate speech detection. We incorporate word embeddings into a Multichannel Convolutional-LSTM (MConv-LSTM) network for predicting different types of hate speech, document classification, and sentiment analysis. Experiments demonstrate that BengFastText can correctly capture the semantics of words from their respective contexts. Evaluations against several baseline embedding models, e.g., Word2Vec and GloVe, yield up to 92.30%, 82.25%, and 90.45% F1-scores for document classification, sentiment analysis, and hate speech detection, respectively, in 5-fold cross-validation tests.
2020-04-11T22:17:04Z
This paper is under review in the Journal of Natural Language Engineering
null
null
null
null
null
null
null
null
null
2,004.08531
MatchboxNet: 1D Time-Channel Separable Convolutional Neural Network Architecture for Speech Commands Recognition
['Somshubra Majumdar', 'Boris Ginsburg']
['eess.AS']
We present MatchboxNet, an end-to-end neural network for speech command recognition. MatchboxNet is a deep residual network composed of blocks of 1D time-channel separable convolution, batch-normalization, ReLU, and dropout layers. MatchboxNet reaches state-of-the-art accuracy on the Google Speech Commands dataset while having significantly fewer parameters than similar models. The small footprint of MatchboxNet makes it an attractive candidate for devices with limited computational resources. The model is highly scalable, so model accuracy can be improved with modest additional memory and compute. Finally, we show how intensive data augmentation using an auxiliary noise dataset improves robustness in the presence of background noise.
2020-04-18T05:49:27Z
null
null
10.21437/Interspeech.2020-1058
null
null
null
null
null
null
null
2,004.08955
ResNeSt: Split-Attention Networks
['Hang Zhang', 'Chongruo Wu', 'Zhongyue Zhang', 'Yi Zhu', 'Haibin Lin', 'Zhi Zhang', 'Yue Sun', 'Tong He', 'Jonas Mueller', 'R. Manmatha', 'Mu Li', 'Alexander Smola']
['cs.CV']
It is well known that featuremap attention and multi-path representation are important for visual recognition. In this paper, we present a modularized architecture, which applies channel-wise attention on different network branches to leverage their success in capturing cross-feature interactions and learning diverse representations. Our design results in a simple and unified computation block, which can be parameterized using only a few variables. Our model, named ResNeSt, outperforms EfficientNet in the accuracy-latency trade-off on image classification. In addition, ResNeSt has achieved superior transfer learning results on several public benchmarks when serving as the backbone, and has been adopted by the winning entries of the COCO-LVIS challenge. The source code for the complete system and pretrained models is publicly available.
2020-04-19T20:40:31Z
null
null
null
null
null
null
null
null
null
null
2,004.09015
Incorporating External Knowledge through Pre-training for Natural Language to Code Generation
['Frank F. Xu', 'Zhengbao Jiang', 'Pengcheng Yin', 'Bogdan Vasilescu', 'Graham Neubig']
['cs.CL']
Open-domain code generation aims to generate code in a general-purpose programming language (such as Python) from natural language (NL) intents. Motivated by the intuition that developers usually retrieve resources on the web when writing code, we explore the effectiveness of incorporating two varieties of external knowledge into NL-to-code generation: automatically mined NL-code pairs from the online programming QA forum StackOverflow and programming language API documentation. Our evaluations show that combining the two sources with data augmentation and retrieval-based data re-sampling improves the current state-of-the-art by up to 2.2% absolute BLEU score on the code generation testbed CoNaLa. The code and resources are available at https://github.com/neulab/external-knowledge-codegen.
2020-04-20T01:45:27Z
Accepted by ACL 2020
null
null
Incorporating External Knowledge through Pre-training for Natural Language to Code Generation
['Frank F. Xu', 'Zhengbao Jiang', 'Pengcheng Yin', 'Bogdan Vasilescu', 'Graham Neubig']
2,020
Annual Meeting of the Association for Computational Linguistics
84
32
['Computer Science']
2,004.09813
Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation
['Nils Reimers', 'Iryna Gurevych']
['cs.CL']
We present an easy and efficient method to extend existing sentence embedding models to new languages. This makes it possible to create multilingual versions of previously monolingual models. The training is based on the idea that a translated sentence should be mapped to the same location in the vector space as the original sentence. We use the original (monolingual) model to generate sentence embeddings for the source language and then train a new system on translated sentences to mimic the original model. Compared to other methods for training multilingual sentence embeddings, this approach has several advantages: it is easy to extend existing models to new languages with relatively few samples, it is easier to ensure desired properties for the vector space, and the hardware requirements for training are lower. We demonstrate the effectiveness of our approach for 50+ languages from various language families. Code to extend sentence embedding models to more than 400 languages is publicly available.
2020-04-21T08:20:25Z
Accepted at EMNLP 2020
null
null
Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation
['Nils Reimers', 'Iryna Gurevych']
2,020
Conference on Empirical Methods in Natural Language Processing
1,035
39
['Computer Science']
2,004.10404
Logical Natural Language Generation from Open-Domain Tables
['Wenhu Chen', 'Jianshu Chen', 'Yu Su', 'Zhiyu Chen', 'William Yang Wang']
['cs.CL', 'cs.AI']
Neural natural language generation (NLG) models have recently shown remarkable progress in fluency and coherence. However, existing studies on neural NLG are primarily focused on surface-level realizations with limited emphasis on logical inference, an important aspect of human thinking and language. In this paper, we suggest a new NLG task where a model is tasked with generating natural language statements that can be \emph{logically entailed} by the facts in an open-domain semi-structured table. To facilitate the study of the proposed logical NLG problem, we use the existing TabFact dataset \cite{chen2019tabfact} featured with a wide range of logical/symbolic inferences as our testbed, and propose new automatic metrics to evaluate the fidelity of generation models w.r.t.\ logical inference. The new task poses challenges to the existing monotonic generation frameworks due to the mismatch between sequence order and logical order. In our experiments, we comprehensively survey different generation architectures (LSTM, Transformer, Pre-Trained LM) trained with different algorithms (RL, Adversarial Training, Coarse-to-Fine) on the dataset and made following observations: 1) Pre-Trained LM can significantly boost both the fluency and logical fidelity metrics, 2) RL and Adversarial Training are trading fluency for fidelity, 3) Coarse-to-Fine generation can help partially alleviate the fidelity issue while maintaining high language fluency. The code and data are available at \url{https://github.com/wenhuchen/LogicNLG}.
2020-04-22T06:03:10Z
Accepted to ACL 2020 as Long Paper
null
null
null
null
null
null
null
null
null
2,004.10568
Up or Down? Adaptive Rounding for Post-Training Quantization
['Markus Nagel', 'Rana Ali Amjad', 'Mart van Baalen', 'Christos Louizos', 'Tijmen Blankevoort']
['cs.LG', 'cs.CV', 'stat.ML']
When quantizing neural networks, assigning each floating-point weight to its nearest fixed-point value is the predominant approach. We find that, perhaps surprisingly, this is not the best we can do. In this paper, we propose AdaRound, a better weight-rounding mechanism for post-training quantization that adapts to the data and the task loss. AdaRound is fast, does not require fine-tuning of the network, and only uses a small amount of unlabelled data. We start by theoretically analyzing the rounding problem for a pre-trained neural network. By approximating the task loss with a Taylor series expansion, the rounding task is posed as a quadratic unconstrained binary optimization problem. We simplify this to a layer-wise local loss and propose to optimize this loss with a soft relaxation. AdaRound not only outperforms rounding-to-nearest by a significant margin but also establishes a new state-of-the-art for post-training quantization on several networks and tasks. Without fine-tuning, we can quantize the weights of Resnet18 and Resnet50 to 4 bits while staying within an accuracy loss of 1%.
2020-04-22T13:44:28Z
Published as a conference paper at ICML 2020
null
null
null
null
null
null
null
null
null
2,004.10934
YOLOv4: Optimal Speed and Accuracy of Object Detection
['Alexey Bochkovskiy', 'Chien-Yao Wang', 'Hong-Yuan Mark Liao']
['cs.CV', 'eess.IV']
There are a huge number of features which are said to improve Convolutional Neural Network (CNN) accuracy. Practical testing of combinations of such features on large datasets, and theoretical justification of the result, is required. Some features operate on certain models exclusively and for certain problems exclusively, or only for small-scale datasets; while some features, such as batch-normalization and residual-connections, are applicable to the majority of models, tasks, and datasets. We assume that such universal features include Weighted-Residual-Connections (WRC), Cross-Stage-Partial-connections (CSP), Cross mini-Batch Normalization (CmBN), Self-adversarial-training (SAT) and Mish-activation. We use new features: WRC, CSP, CmBN, SAT, Mish activation, Mosaic data augmentation, DropBlock regularization, and CIoU loss, and combine some of them to achieve state-of-the-art results: 43.5% AP (65.7% AP50) for the MS COCO dataset at a realtime speed of ~65 FPS on Tesla V100. Source code is at https://github.com/AlexeyAB/darknet
2020-04-23T02:10:02Z
null
null
null
null
null
null
null
null
null
null
2,004.10964
Don't Stop Pretraining: Adapt Language Models to Domains and Tasks
['Suchin Gururangan', 'Ana Marasović', 'Swabha Swayamdipta', 'Kyle Lo', 'Iz Beltagy', 'Doug Downey', 'Noah A. Smith']
['cs.CL', 'cs.LG']
Language models pretrained on text from a wide variety of sources form the foundation of today's NLP. In light of the success of these broad-coverage models, we investigate whether it is still helpful to tailor a pretrained model to the domain of a target task. We present a study across four domains (biomedical and computer science publications, news, and reviews) and eight classification tasks, showing that a second phase of pretraining in-domain (domain-adaptive pretraining) leads to performance gains, under both high- and low-resource settings. Moreover, adapting to the task's unlabeled data (task-adaptive pretraining) improves performance even after domain-adaptive pretraining. Finally, we show that adapting to a task corpus augmented using simple data selection strategies is an effective alternative, especially when resources for domain-adaptive pretraining might be unavailable. Overall, we consistently find that multi-phase adaptive pretraining offers large gains in task performance.
2020-04-23T04:21:19Z
ACL 2020
null
null
Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
['Suchin Gururangan', 'Ana Marasović', 'Swabha Swayamdipta', 'Kyle Lo', 'Iz Beltagy', 'Doug Downey', 'Noah A. Smith']
2,020
Annual Meeting of the Association for Computational Linguistics
2,454
77
['Computer Science']
2,004.11362
Supervised Contrastive Learning
['Prannay Khosla', 'Piotr Teterwak', 'Chen Wang', 'Aaron Sarna', 'Yonglong Tian', 'Phillip Isola', 'Aaron Maschinot', 'Ce Liu', 'Dilip Krishnan']
['cs.LG', 'cs.CV', 'stat.ML']
Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state of the art performance in the unsupervised training of deep image models. Modern batch contrastive approaches subsume or significantly outperform traditional contrastive losses such as triplet, max-margin and the N-pairs loss. In this work, we extend the self-supervised batch contrastive approach to the fully-supervised setting, allowing us to effectively leverage label information. Clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes. We analyze two possible versions of the supervised contrastive (SupCon) loss, identifying the best-performing formulation of the loss. On ResNet-200, we achieve top-1 accuracy of 81.4% on the ImageNet dataset, which is 0.8% above the best number reported for this architecture. We show consistent outperformance over cross-entropy on other datasets and two ResNet variants. The loss shows benefits for robustness to natural corruptions and is more stable to hyperparameter settings such as optimizers and data augmentations. Our loss function is simple to implement, and reference TensorFlow code is released at https://t.ly/supcon.
2020-04-23T17:58:56Z
null
null
null
null
null
null
null
null
null
null
2,004.11579
Probabilistically Masked Language Model Capable of Autoregressive Generation in Arbitrary Word Order
['Yi Liao', 'Xin Jiang', 'Qun Liu']
['cs.CL']
Masked language models and autoregressive language models are two types of language models. While pretrained masked language models such as BERT dominate natural language understanding (NLU) tasks, autoregressive language models such as GPT are especially capable at natural language generation (NLG). In this paper, we propose a probabilistic masking scheme for the masked language model, which we call the probabilistically masked language model (PMLM). We implement a specific PMLM with a uniform prior distribution on the masking ratio, named u-PMLM. We prove that u-PMLM is equivalent to an autoregressive permutated language model. One main advantage of the model is that it supports text generation in arbitrary order with surprisingly good quality, which could potentially enable new applications beyond traditional unidirectional generation. Besides, the pretrained u-PMLM also outperforms BERT on a set of downstream NLU tasks.
2020-04-24T07:38:19Z
Accepted by ACL 2020
null
null
Probabilistically Masked Language Model Capable of Autoregressive Generation in Arbitrary Word Order
['Yi Liao', 'Xin Jiang', 'Qun Liu']
2,020
Annual Meeting of the Association for Computational Linguistics
40
34
['Computer Science']
2,004.11867
Improving Massively Multilingual Neural Machine Translation and Zero-Shot Translation
['Biao Zhang', 'Philip Williams', 'Ivan Titov', 'Rico Sennrich']
['cs.CL']
Massively multilingual models for neural machine translation (NMT) are theoretically attractive, but often underperform bilingual models and deliver poor zero-shot translations. In this paper, we explore ways to improve them. We argue that multilingual NMT requires stronger modeling capacity to support language pairs with varying typological characteristics, and overcome this bottleneck via language-specific components and deepening NMT architectures. We identify the off-target translation issue (i.e. translating into a wrong target language) as the major source of the inferior zero-shot performance, and propose random online backtranslation to enforce the translation of unseen training language pairs. Experiments on OPUS-100 (a novel multilingual dataset with 100 languages) show that our approach substantially narrows the performance gap with bilingual models in both one-to-many and many-to-many settings, and improves zero-shot performance by ~10 BLEU, approaching conventional pivot-based methods.
2020-04-24T17:21:32Z
ACL2020
null
null
null
null
null
null
null
null
null
2,004.12184
A Named Entity Based Approach to Model Recipes
['Nirav Diwan', 'Devansh Batra', 'Ganesh Bagler']
['cs.CL', 'cs.IR']
Traditional cooking recipes follow a structure which can be modelled very well if the rules and semantics of the different sections of the recipe text are analyzed and represented accurately. We propose a structure that can accurately represent the recipe, as well as a pipeline to infer the best representation of the recipe in this uniform structure. The Ingredients section in a recipe typically lists the ingredients required and corresponding attributes such as quantity, temperature, and processing state. This can be modelled by defining these attributes and their values. The physical entities which make up a recipe can be broadly classified into utensils, ingredients and their combinations that are related by cooking techniques. The Instructions section lists a series of events in which a cooking technique or process is applied upon these utensils and ingredients. We model these relationships in the form of tuples. Thus, using a combination of these methods we model cooking recipes in the RecipeDB dataset to show the efficacy of our method. This mined information model can have several applications, which include translating recipes between languages, determining similarity between recipes, generation of novel recipes and estimation of the nutritional profile of recipes. For the recognition of ingredient attributes, we train Named Entity Recognition (NER) models and analyze the inferences with the help of K-Means clustering. Our model achieved an F1 score of 0.95 across all datasets. We use a similar NER tagging model for labelling cooking techniques (F1 score = 0.88) and utensils (F1 score = 0.90) within the Instructions section. Finally, we determine the temporal sequence of relationships between ingredients, utensils and cooking techniques for modelling the instruction steps.
2020-04-25T16:37:26Z
36th IEEE International Conference on Data Engineering (ICDE 2020), DECOR Workshop; 6 pages, 5 figures
null
null
A Named Entity Based Approach to Model Recipes
['Nirav Diwan', 'Devansh Batra', 'Ganesh Bagler']
2,020
2020 IEEE 36th International Conference on Data Engineering Workshops (ICDEW)
12
13
['Computer Science', 'Physics']
2,004.12832
ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT
['Omar Khattab', 'Matei Zaharia']
['cs.IR', 'cs.CL']
Recent progress in Natural Language Understanding (NLU) is driving fast-paced advances in Information Retrieval (IR), largely owed to fine-tuning deep language models (LMs) for document ranking. While remarkably effective, the ranking models based on these LMs increase computational cost by orders of magnitude over prior approaches, particularly as they must feed each query-document pair through a massive neural network to compute a single relevance score. To tackle this, we present ColBERT, a novel ranking model that adapts deep LMs (in particular, BERT) for efficient retrieval. ColBERT introduces a late interaction architecture that independently encodes the query and the document using BERT and then employs a cheap yet powerful interaction step that models their fine-grained similarity. By delaying and yet retaining this fine-granular interaction, ColBERT can leverage the expressiveness of deep LMs while simultaneously gaining the ability to pre-compute document representations offline, considerably speeding up query processing. Beyond reducing the cost of re-ranking the documents retrieved by a traditional model, ColBERT's pruning-friendly interaction mechanism enables leveraging vector-similarity indexes for end-to-end retrieval directly from a large document collection. We extensively evaluate ColBERT using two recent passage search datasets. Results show that ColBERT's effectiveness is competitive with existing BERT-based models (and outperforms every non-BERT baseline), while executing two orders-of-magnitude faster and requiring four orders-of-magnitude fewer FLOPs per query.
2020-04-27T14:21:03Z
Accepted at SIGIR 2020
null
null
ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT
['O. Khattab', 'M. Zaharia']
2,020
Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
1,389
45
['Computer Science']
2,004.12992
MakeItTalk: Speaker-Aware Talking-Head Animation
['Yang Zhou', 'Xintong Han', 'Eli Shechtman', 'Jose Echevarria', 'Evangelos Kalogerakis', 'Dingzeyu Li']
['cs.CV', 'cs.GR']
We present a method that generates expressive talking heads from a single facial image with audio as the only input. In contrast to previous approaches that attempt to learn direct mappings from audio to raw pixels or points for creating talking faces, our method first disentangles the content and speaker information in the input audio signal. The audio content robustly controls the motion of lips and nearby facial regions, while the speaker information determines the specifics of facial expressions and the rest of the talking head dynamics. Another key component of our method is the prediction of facial landmarks reflecting speaker-aware dynamics. Based on this intermediate representation, our method is able to synthesize photorealistic videos of entire talking heads with full range of motion and also animate artistic paintings, sketches, 2D cartoon characters, Japanese mangas, stylized caricatures in a single unified framework. We present extensive quantitative and qualitative evaluation of our method, in addition to user studies, demonstrating generated talking heads of significantly higher quality compared to prior state-of-the-art.
2020-04-27T17:56:15Z
SIGGRAPH Asia 2020, 15 pages, 13 figures
null
10.1145/3414685.3417774
null
null
null
null
null
null
null
2,004.13637
Recipes for building an open-domain chatbot
['Stephen Roller', 'Emily Dinan', 'Naman Goyal', 'Da Ju', 'Mary Williamson', 'Yinhan Liu', 'Jing Xu', 'Myle Ott', 'Kurt Shuster', 'Eric M. Smith', 'Y-Lan Boureau', 'Jason Weston']
['cs.CL', 'cs.AI']
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent persona. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
2020-04-28T16:33:25Z
null
null
null
Recipes for Building an Open-Domain Chatbot
['Stephen Roller', 'Emily Dinan', 'Naman Goyal', 'Da Ju', 'Mary Williamson', 'Yinhan Liu', 'Jing Xu', 'Myle Ott', 'Kurt Shuster', 'Eric Michael Smith', 'Y-Lan Boureau', 'J. Weston']
2,020
Conference of the European Chapter of the Association for Computational Linguistics
1,021
84
['Computer Science']
2,004.13796
TextGAIL: Generative Adversarial Imitation Learning for Text Generation
['Qingyang Wu', 'Lei Li', 'Zhou Yu']
['cs.CL', 'cs.LG']
Generative Adversarial Networks (GANs) for text generation have recently received many criticisms, as they perform worse than their MLE counterparts. We suspect previous text GANs' inferior performance is due to the lack of a reliable guiding signal in their discriminators. To address this problem, we propose a generative adversarial imitation learning framework for text generation that uses large pre-trained language models to provide more reliable reward guidance. Our approach uses a contrastive discriminator and proximal policy optimization (PPO) to stabilize and improve text generation performance. For evaluation, we conduct experiments on a diverse set of unconditional and conditional text generation tasks. Experimental results show that TextGAIL achieves better performance in terms of both quality and diversity than the MLE baseline. We also validate our intuition that TextGAIL's discriminator demonstrates the capability of providing reasonable rewards with an additional task.
2020-04-07T00:24:35Z
AAAI 2021
null
null
null
null
null
null
null
null
null
2,004.13845
DARE: Data Augmented Relation Extraction with GPT-2
['Yannis Papanikolaou', 'Andrea Pierleoni']
['cs.CL', 'cs.LG', 'stat.ML']
Real-world Relation Extraction (RE) tasks are challenging to deal with, either due to limited training data or class imbalance issues. In this work, we present Data Augmented Relation Extraction (DARE), a simple method to augment training data by properly fine-tuning GPT-2 to generate examples for specific relation types. The generated training data is then used in combination with the gold dataset to train a BERT-based RE classifier. In a series of experiments we show the advantages of our method, which leads to improvements of up to 11 F1 score points against a strong baseline. DARE also achieves a new state of the art on three widely used biomedical RE datasets, surpassing the previous best results by 4.7 F1 points on average.
2020-04-06T14:38:36Z
null
null
null
null
null
null
null
null
null
null
2,004.13922
Revisiting Pre-Trained Models for Chinese Natural Language Processing
['Yiming Cui', 'Wanxiang Che', 'Ting Liu', 'Bing Qin', 'Shijin Wang', 'Guoping Hu']
['cs.CL']
Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks, and successive variants have been proposed to further improve the performance of the pre-trained language models. In this paper, we revisit Chinese pre-trained language models to examine their effectiveness in a non-English language and release the Chinese pre-trained language model series to the community. We also propose a simple but effective model called MacBERT, which improves upon RoBERTa in several ways, especially the masking strategy that adopts MLM as correction (Mac). We carried out extensive experiments on eight Chinese NLP tasks to revisit the existing pre-trained language models as well as the proposed MacBERT. Experimental results show that MacBERT achieves state-of-the-art performance on many NLP tasks, and we also ablate model details, yielding several findings that may help future research. Resources available: https://github.com/ymcui/MacBERT
2020-04-29T02:08:30Z
12 pages, to appear at Findings of EMNLP 2020
null
10.18653/v1/2020.findings-emnlp.58
null
null
null
null
null
null
null
2,004.14166
SpellGCN: Incorporating Phonological and Visual Similarities into Language Models for Chinese Spelling Check
['Xingyi Cheng', 'Weidi Xu', 'Kunlong Chen', 'Shaohua Jiang', 'Feng Wang', 'Taifeng Wang', 'Wei Chu', 'Yuan Qi']
['cs.CL']
Chinese Spelling Check (CSC) is a task to detect and correct spelling errors in Chinese natural language. Existing methods have attempted to incorporate similarity knowledge between Chinese characters. However, they take the similarity knowledge as either an external input resource or just heuristic rules. This paper proposes to incorporate phonological and visual similarity knowledge into language models for CSC via a specialized graph convolutional network (SpellGCN). The model builds a graph over the characters, and SpellGCN is trained to map this graph into a set of inter-dependent character classifiers. These classifiers are applied to the representations extracted by another network, such as BERT, enabling the whole network to be end-to-end trainable. Experiments (the dataset and all code for this paper are available at https://github.com/ACL2020SpellGCN/SpellGCN) are conducted on three human-annotated datasets. Our method achieves superior performance against previous models by a large margin.
2020-04-26T03:34:06Z
Accepted by ACL2020
null
null
null
null
null
null
null
null
null
2,004.14253
GePpeTto Carves Italian into a Language Model
['Lorenzo De Mattei', 'Michele Cafagna', "Felice Dell'Orletta", 'Malvina Nissim', 'Marco Guerini']
['cs.CL']
In the last few years, pre-trained neural architectures have provided impressive improvements across several NLP tasks. Still, generative language models are available mainly for English. We develop GePpeTto, the first generative language model for Italian, built using the GPT-2 architecture. We provide a thorough analysis of GePpeTto's quality by means of both an automatic and a human-based evaluation. The automatic assessment consists of (i) calculating perplexity across different genres and (ii) a profiling analysis of GePpeTto's writing characteristics. We find that GePpeTto's production is a sort of bonsai version of human production, with shorter yet still complex sentences. Human evaluation is performed on a sentence completion task, where GePpeTto's output is judged as natural more often than not, and as much closer to the original human texts than to those of a simpler language model which we take as a baseline.
2020-04-29T15:02:01Z
null
null
null
null
null
null
null
null
null
null
2,004.14255
Efficient Document Re-Ranking for Transformers by Precomputing Term Representations
['Sean MacAvaney', 'Franco Maria Nardini', 'Raffaele Perego', 'Nicola Tonellotto', 'Nazli Goharian', 'Ophir Frieder']
['cs.IR']
Deep pretrained transformer networks are effective at various ranking tasks, such as question answering and ad-hoc document ranking. However, their computational expense renders them cost-prohibitive in practice. Our proposed approach, called PreTTR (Precomputing Transformer Term Representations), considerably reduces the query-time latency of deep transformer networks (up to a 42x speedup on web document ranking), making these networks more practical to use in a real-time ranking scenario. Specifically, we precompute part of the document term representations at indexing time (without a query), and merge them with the query representation at query time to compute the final ranking score. Due to the large size of the token representations, we also propose an effective approach to reduce the storage requirement by training a compression layer to match attention scores. Our compression technique reduces the storage required by up to 95%, and it can be applied without a substantial degradation in ranking performance.
2020-04-29T15:04:22Z
Accepted at SIGIR 2020 (long)
null
10.1145/3397271.3401093
Efficient Document Re-Ranking for Transformers by Precomputing Term Representations
['Sean MacAvaney', 'F. M. Nardini', 'R. Perego', 'N. Tonellotto', 'Nazli Goharian', 'O. Frieder']
2,020
Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
120
53
['Computer Science']
2,004.14900
MLSUM: The Multilingual Summarization Corpus
['Thomas Scialom', 'Paul-Alexis Dray', 'Sylvain Lamprier', 'Benjamin Piwowarski', 'Jacopo Staiano']
['cs.CL']
We present MLSUM, the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, Turkish. Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset.
2020-04-30T15:58:34Z
null
null
null
MLSUM: The Multilingual Summarization Corpus
['Thomas Scialom', 'Paul-Alexis Dray', 'S. Lamprier', 'Benjamin Piwowarski', 'Jacopo Staiano']
2,020
Conference on Empirical Methods in Natural Language Processing
177
75
['Computer Science']
2,004.14963
Data and Representation for Turkish Natural Language Inference
['Emrah Budur', 'Rıza Özçelik', 'Tunga Güngör', 'Christopher Potts']
['cs.CL']
Large annotated datasets in NLP are overwhelmingly in English. This is an obstacle to progress in other languages. Unfortunately, obtaining new annotated resources for each task in each language would be prohibitively expensive. At the same time, commercial machine translation systems are now robust. Can we leverage these systems to translate English-language datasets automatically? In this paper, we offer a positive response for natural language inference (NLI) in Turkish. We translated two large English NLI datasets into Turkish and had a team of experts validate their translation quality and fidelity to the original labels. Using these datasets, we address core issues of representation for Turkish NLI. We find that in-language embeddings are essential and that morphological parsing can be avoided where the training set is large. Finally, we show that models trained on our machine-translated datasets are successful on human-translated evaluation sets. We share all code, models, and data publicly.
2020-04-30T17:12:52Z
Accepted to EMNLP 2020
null
null
null
null
null
null
null
null
null
2,004.15011
TLDR: Extreme Summarization of Scientific Documents
['Isabel Cachola', 'Kyle Lo', 'Arman Cohan', 'Daniel S. Weld']
['cs.CL']
We introduce TLDR generation, a new form of extreme summarization, for scientific papers. TLDR generation involves high source compression and requires expert background knowledge and understanding of complex domain-specific language. To facilitate study on this task, we introduce SciTLDR, a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden. We propose CATTS, a simple yet effective learning strategy for generating TLDRs that exploits titles as an auxiliary training signal. CATTS improves upon strong baselines under both automated metrics and human evaluations. Data and code are publicly available at https://github.com/allenai/scitldr.
2020-04-30T17:56:18Z
null
null
null
null
null
null
null
null
null
null
2,005.00052
MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer
['Jonas Pfeiffer', 'Ivan Vulić', 'Iryna Gurevych', 'Sebastian Ruder']
['cs.CL']
The main goal behind state-of-the-art pre-trained multilingual models such as multilingual BERT and XLM-R is enabling and bootstrapping NLP applications in low-resource languages through zero-shot or few-shot cross-lingual transfer. However, due to limited model capacity, their transfer performance is the weakest exactly on such low-resource languages and languages unseen during pre-training. We propose MAD-X, an adapter-based framework that enables high portability and parameter-efficient transfer to arbitrary tasks and languages by learning modular language and task representations. In addition, we introduce a novel invertible adapter architecture and a strong baseline method for adapting a pre-trained multilingual model to a new language. MAD-X outperforms the state of the art in cross-lingual transfer across a representative set of typologically diverse languages on named entity recognition and causal commonsense reasoning, and achieves competitive results on question answering. Our code and adapters are available at AdapterHub.ml
2020-04-30T18:54:43Z
EMNLP 2020
null
null
null
null
null
null
null
null
null
2,005.00085
AI4Bharat-IndicNLP Corpus: Monolingual Corpora and Word Embeddings for Indic Languages
['Anoop Kunchukuttan', 'Divyanshu Kakwani', 'Satish Golla', 'Gokul N. C.', 'Avik Bhattacharyya', 'Mitesh M. Khapra', 'Pratyush Kumar']
['cs.CL']
We present the IndicNLP corpus, a large-scale, general-domain corpus containing 2.7 billion words for 10 Indian languages from two language families. We share pre-trained word embeddings trained on these corpora. We create news article category classification datasets for 9 languages to evaluate the embeddings. We show that the IndicNLP embeddings significantly outperform publicly available pre-trained embedding on multiple evaluation tasks. We hope that the availability of the corpus will accelerate Indic NLP research. The resources are available at https://github.com/ai4bharat-indicnlp/indicnlp_corpus.
2020-04-30T20:21:02Z
7 pages, 8 tables, https://github.com/ai4bharat-indicnlp/indicnlp_corpus
null
null
AI4Bharat-IndicNLP Corpus: Monolingual Corpora and Word Embeddings for Indic Languages
['Anoop Kunchukuttan', 'Divyanshu Kakwani', 'S. Golla', 'C. GokulN.', 'Avik Bhattacharyya', 'Mitesh M. Khapra', 'Pratyush Kumar']
2,020
arXiv.org
83
27
['Computer Science']
2,005.00247
AdapterFusion: Non-Destructive Task Composition for Transfer Learning
['Jonas Pfeiffer', 'Aishwarya Kamath', 'Andreas Rücklé', 'Kyunghyun Cho', 'Iryna Gurevych']
['cs.CL']
Sequential fine-tuning and multi-task learning are methods aiming to incorporate knowledge from multiple tasks; however, they suffer from catastrophic forgetting and difficulties in dataset balancing. To address these shortcomings, we propose AdapterFusion, a new two-stage learning algorithm that leverages knowledge from multiple tasks. First, in the knowledge extraction stage we learn task-specific parameters called adapters, which encapsulate the task-specific information. We then combine the adapters in a separate knowledge composition step. We show that by separating the two stages, i.e., knowledge extraction and knowledge composition, the classifier can effectively exploit the representations learned from multiple tasks in a non-destructive manner. We empirically evaluate AdapterFusion on 16 diverse NLU tasks, and find that it effectively combines various types of knowledge at different layers of the model. We show that our approach outperforms traditional strategies such as full fine-tuning as well as multi-task learning. Our code and adapters are available at AdapterHub.ml.
2020-05-01T07:03:42Z
null
Proceedings of EACL 2021
null
null
null
null
null
null
null
null
2,005.00341
Jukebox: A Generative Model for Music
['Prafulla Dhariwal', 'Heewoo Jun', 'Christine Payne', 'Jong Wook Kim', 'Alec Radford', 'Ilya Sutskever']
['eess.AS', 'cs.LG', 'cs.SD', 'stat.ML']
We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multi-scale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non-cherry-picked samples at https://jukebox.openai.com, along with model weights and code at https://github.com/openai/jukebox
2020-04-30T09:02:45Z
null
null
null
null
null
null
null
null
null
null
2,005.00547
GoEmotions: A Dataset of Fine-Grained Emotions
['Dorottya Demszky', 'Dana Movshovitz-Attias', 'Jeongwoo Ko', 'Alan Cowen', 'Gaurav Nemade', 'Sujith Ravi']
['cs.CL']
Understanding emotion expressed in language has a wide range of applications, from building empathetic chatbots to detecting harmful online behavior. Progress in this area can be accelerated by large-scale datasets with a fine-grained typology, adaptable to multiple downstream tasks. We introduce GoEmotions, the largest manually annotated dataset of 58k English Reddit comments, labeled for 27 emotion categories or Neutral. We demonstrate the high quality of the annotations via Principal Preserved Component Analysis. We conduct transfer learning experiments with existing emotion benchmarks to show that our dataset generalizes well to other domains and different emotion taxonomies. Our BERT-based model achieves an average F1-score of .46 across our proposed taxonomy, leaving much room for improvement.
2020-05-01T18:00:02Z
Accepted to ACL 2020
null
null
GoEmotions: A Dataset of Fine-Grained Emotions
['Dorottya Demszky', 'Dana Movshovitz-Attias', 'Jeongwoo Ko', 'Alan S. Cowen', 'Gaurav Nemade', 'Sujith Ravi']
2,020
Annual Meeting of the Association for Computational Linguistics
726
44
['Computer Science']
2,005.00628
Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work?
['Yada Pruksachatkun', 'Jason Phang', 'Haokun Liu', 'Phu Mon Htut', 'Xiaoyi Zhang', 'Richard Yuanzhe Pang', 'Clara Vania', 'Katharina Kann', 'Samuel R. Bowman']
['cs.CL']
While pretrained models such as BERT have shown large gains across natural language understanding tasks, their performance can be improved by further training the model on a data-rich intermediate task, before fine-tuning it on a target task. However, it is still poorly understood when and why intermediate-task training is beneficial for a given target task. To investigate this, we perform a large-scale study on the pretrained RoBERTa model with 110 intermediate-target task combinations. We further evaluate all trained models with 25 probing tasks meant to reveal the specific skills that drive transfer. We observe that intermediate tasks requiring high-level inference and reasoning abilities tend to work best. We also observe that target task performance is strongly correlated with higher-level abilities such as coreference resolution. However, we fail to observe more granular correlations between probing and target task performance, highlighting the need for further work on broad-coverage probing benchmarks. We also observe evidence that the forgetting of knowledge learned during pretraining may limit our analysis, highlighting the need for further work on transfer learning methods in these settings.
2020-05-01T21:49:34Z
ACL 2020
null
null
Intermediate-Task Transfer Learning with Pretrained Language Models: When and Why Does It Work?
['Yada Pruksachatkun', 'Jason Phang', 'Haokun Liu', 'Phu Mon Htut', 'Xiaoyi Zhang', 'Richard Yuanzhe Pang', 'Clara Vania', 'Katharina Kann', 'Samuel R. Bowman']
2,020
Annual Meeting of the Association for Computational Linguistics
197
60
['Computer Science']
2,005.00630
KLEJ: Comprehensive Benchmark for Polish Language Understanding
['Piotr Rybak', 'Robert Mroczkowski', 'Janusz Tracz', 'Ireneusz Gawlik']
['cs.CL']
In recent years, a series of Transformer-based models unlocked major improvements in general natural language understanding (NLU) tasks. Such a fast pace of research would not be possible without general NLU benchmarks, which allow for a fair comparison of the proposed methods. However, such benchmarks are available only for a handful of languages. To alleviate this issue, we introduce a comprehensive multi-task benchmark for Polish language understanding, accompanied by an online leaderboard. It consists of a diverse set of tasks, adopted from existing datasets for named entity recognition, question answering, textual entailment, and others. We also introduce a new sentiment analysis task for the e-commerce domain, named Allegro Reviews (AR). To ensure a common evaluation scheme and promote models that generalize to different NLU tasks, the benchmark includes datasets from varying domains and applications. Additionally, we release HerBERT, a Transformer-based model trained specifically for the Polish language, which has the best average performance and obtains the best results for three out of nine tasks. Finally, we provide an extensive evaluation, including several standard baselines and recently proposed multilingual Transformer-based models.
2020-05-01T21:55:40Z
null
null
null
null
null
null
null
null
null
null
2,005.00661
On Faithfulness and Factuality in Abstractive Summarization
['Joshua Maynez', 'Shashi Narayan', 'Bernd Bohnet', 'Ryan McDonald']
['cs.CL']
It is well known that the standard likelihood training and approximate decoding objectives in neural text generation models lead to less human-like responses for open-ended tasks such as language modeling and story generation. In this paper we have analyzed limitations of these models for abstractive document summarization and found that these models are highly prone to hallucinate content that is unfaithful to the input document. We conducted a large scale human evaluation of several neural abstractive summarization systems to better understand the types of hallucinations they produce. Our human annotators found substantial amounts of hallucinated content in all model generated summaries. However, our analysis does show that pretrained models are better summarizers not only in terms of raw metrics, i.e., ROUGE, but also in generating faithful and factual summaries as evaluated by humans. Furthermore, we show that textual entailment measures better correlate with faithfulness than standard metrics, potentially leading the way to automatic evaluation metrics as well as training and decoding criteria.
2020-05-02T00:09:16Z
ACL 2020, 14 pages
null
null
null
null
null
null
null
null
null
2,005.01107
Simplifying Paragraph-level Question Generation via Transformer Language Models
['Luis Enrico Lopez', 'Diane Kathryn Cruz', 'Jan Christian Blaise Cruz', 'Charibeth Cheng']
['cs.CL']
Question generation (QG) is a natural language generation task where a model is trained to ask questions corresponding to some input text. Most recent approaches frame QG as a sequence-to-sequence problem and rely on additional features and mechanisms to increase performance; however, these often increase model complexity, and can rely on auxiliary data unavailable in practical use. A single Transformer-based unidirectional language model leveraging transfer learning can be used to produce high quality questions while disposing of additional task-specific complexity. Our QG model, finetuned from GPT-2 Small, outperforms several paragraph-level QG baselines on the SQuAD dataset by 0.95 METEOR points. Human evaluators rated questions as easy to answer, relevant to their context paragraph, and corresponding well to natural human speech. Also introduced is a new set of baseline scores on the RACE dataset, which has not previously been used for QG tasks. Further experimentation with varying model capacities and datasets with non-identification type questions is recommended in order to verify the robustness of pretrained Transformer-based LMs as question generators.
2020-05-03T14:57:24Z
To appear in PRICAI 2021. Formerly titled "Transformer-based End-to-End Question Generation."
null
null
Simplifying Paragraph-Level Question Generation via Transformer Language Models
['Luis Enrico Lopez', 'Diane Kathryn Cruz', 'Jan Christian Blaise Cruz', 'C. Cheng']
2,020
Pacific Rim International Conference on Artificial Intelligence
28
20
['Computer Science']
2,005.01643
Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
['Sergey Levine', 'Aviral Kumar', 'George Tucker', 'Justin Fu']
['cs.LG', 'cs.AI', 'stat.ML']
In this tutorial article, we aim to provide the reader with the conceptual tools needed to get started on research on offline reinforcement learning algorithms: reinforcement learning algorithms that utilize previously collected data, without additional online data collection. Offline reinforcement learning algorithms hold tremendous promise for making it possible to turn large datasets into powerful decision making engines. Effective offline reinforcement learning methods would be able to extract policies with the maximum possible utility out of the available data, thereby allowing automation of a wide range of decision-making domains, from healthcare and education to robotics. However, the limitations of current algorithms make this difficult. We will aim to provide the reader with an understanding of these challenges, particularly in the context of modern deep reinforcement learning methods, and describe some potential solutions that have been explored in recent work to mitigate these challenges, along with recent applications, and a discussion of perspectives on open problems in the field.
2020-05-04T17:00:15Z
null
null
null
null
null
null
null
null
null
null
2,005.01996
NTIRE 2020 Challenge on Real-World Image Super-Resolution: Methods and Results
['Andreas Lugmayr', 'Martin Danelljan', 'Radu Timofte', 'Namhyuk Ahn', 'Dongwoon Bai', 'Jie Cai', 'Yun Cao', 'Junyang Chen', 'Kaihua Cheng', 'SeYoung Chun', 'Wei Deng', 'Mostafa El-Khamy', 'Chiu Man Ho', 'Xiaozhong Ji', 'Amin Kheradmand', 'Gwantae Kim', 'Hanseok Ko', 'Kanghyu Lee', 'Jungwon Lee', 'Hao Li', 'Ziluan Liu', 'Zhi-Song Liu', 'Shuai Liu', 'Yunhua Lu', 'Zibo Meng', 'Pablo Navarrete Michelini', 'Christian Micheloni', 'Kalpesh Prajapati', 'Haoyu Ren', 'Yong Hyeok Seo', 'Wan-Chi Siu', 'Kyung-Ah Sohn', 'Ying Tai', 'Rao Muhammad Umer', 'Shuangquan Wang', 'Huibing Wang', 'Timothy Haoning Wu', 'Haoning Wu', 'Biao Yang', 'Fuzhi Yang', 'Jaejun Yoo', 'Tongtong Zhao', 'Yuanbo Zhou', 'Haijie Zhuo', 'Ziyao Zong', 'Xueyi Zou']
['eess.IV', 'cs.CV']
This paper reviews the NTIRE 2020 challenge on real world super-resolution. It focuses on the participating methods and final results. The challenge addresses the real world setting, where paired true high and low-resolution images are unavailable. For training, only one set of source input images is therefore provided along with a set of unpaired high-quality target images. In Track 1: Image Processing artifacts, the aim is to super-resolve images with synthetically generated image processing artifacts. This allows for quantitative benchmarking of the approaches with respect to a ground-truth image. In Track 2: Smartphone Images, real low-quality smart phone images have to be super-resolved. In both tracks, the ultimate goal is to achieve the best perceptual quality, evaluated using a human study. This is the second challenge on the subject, following AIM 2019, targeting to advance the state-of-the-art in super-resolution. To measure the performance we use the benchmark protocol from AIM 2019. In total 22 teams competed in the final testing phase, demonstrating new and innovative solutions to the problem.
2020-05-05T08:17:04Z
null
null
null
null
null
null
null
null
null
null
2,005.02068
Establishing Baselines for Text Classification in Low-Resource Languages
['Jan Christian Blaise Cruz', 'Charibeth Cheng']
['cs.CL']
While transformer-based finetuning techniques have proven effective in tasks that involve low-resource, low-data environments, a lack of properly established baselines and benchmark datasets makes it hard to compare different approaches that are aimed at tackling the low-resource setting. In this work, we provide three contributions. First, we introduce two previously unreleased datasets as benchmark datasets for text classification and low-resource multilabel text classification for the low-resource language Filipino. Second, we pretrain better BERT and DistilBERT models for use within the Filipino setting. Third, we introduce a simple degradation test that benchmarks a model's resistance to performance degradation as the number of training samples is reduced. We analyze our pretrained models' degradation speeds and look towards the use of this method for comparing models aimed at operating within the low-resource setting. We release all our models and datasets for the research community to use.
2020-05-05T11:17:07Z
We release all our models, finetuning code, and data at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
null
null
null
null
null
null
null
null
null
2,005.02539
Speak to your Parser: Interactive Text-to-SQL with Natural Language Feedback
['Ahmed Elgohary', 'Saghar Hosseini', 'Ahmed Hassan Awadallah']
['cs.CL']
We study the task of semantic parse correction with natural language feedback. Given a natural language utterance, most semantic parsing systems pose the problem as one-shot translation where the utterance is mapped to a corresponding logical form. In this paper, we investigate a more interactive scenario where humans can further interact with the system by providing free-form natural language feedback to correct the system when it generates an inaccurate interpretation of an initial utterance. We focus on natural language to SQL systems and construct SPLASH, a dataset of utterances, incorrect SQL interpretations and the corresponding natural language feedback. We compare various reference models for the correction task and show that incorporating such a rich form of feedback can significantly improve the overall semantic parsing accuracy while retaining the flexibility of natural language interaction. While the estimated human correction accuracy is 81.5%, our best model achieves only 25.1%, which leaves a large gap for improvement in future research. SPLASH is publicly available at https://aka.ms/Splash_dataset.
2020-05-05T23:58:09Z
ACL 2020
null
null
null
null
null
null
null
null
null
2,005.03521
The Danish Gigaword Project
['Leon Strømberg-Derczynski', 'Manuel R. Ciosici', 'Rebekah Baglini', 'Morten H. Christiansen', 'Jacob Aarup Dalsgaard', 'Riccardo Fusaroli', 'Peter Juel Henrichsen', 'Rasmus Hvingelby', 'Andreas Kirkedal', 'Alex Speed Kjeldsen', 'Claus Ladefoged', 'Finn Årup Nielsen', 'Malte Lau Petersen', 'Jonathan Hvithamar Rystrøm', 'Daniel Varab']
['cs.CL']
Danish language technology has been hindered by a lack of broad-coverage corpora at the scale modern NLP prefers. This paper describes the Danish Gigaword Corpus, the result of a focused effort to provide a diverse and freely-available one billion word corpus of Danish text. The Danish Gigaword corpus covers a wide array of time periods, domains, speakers' socio-economic status, and Danish dialects.
2020-05-07T14:40:56Z
Identical to the NoDaLiDa 2021 version
null
null
The Danish Gigaword Corpus
['Leon Derczynski', 'Manuel R. Ciosici', 'R. Baglini', 'Morten H. Christiansen', 'Jacob Aarup Dalsgaard', 'Riccardo Fusaroli', 'P. Henrichsen', 'Rasmus Hvingelby', 'Andreas Søeborg Kirkedal', 'Alex Speed Kjeldsen', 'Claus Ladefoged', 'F. Nielsen', 'Jens Madsen', 'M. Petersen', 'Jonathan H. Rystrøm', 'Daniel Varab']
2,020
Nordic Conference of Computational Linguistics
19
34
['Computer Science']
2,005.03754
FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization
['Esin Durmus', 'He He', 'Mona Diab']
['cs.CL']
Neural abstractive summarization models are prone to generate content inconsistent with the source document, i.e. unfaithful. Existing automatic metrics do not capture such mistakes effectively. We tackle the problem of evaluating faithfulness of a generated summary given its source document. We first collect human annotations of faithfulness for outputs from numerous models on two datasets. We find that current models exhibit a trade-off between abstractiveness and faithfulness: outputs with less word overlap with the source document are more likely to be unfaithful. Next, we propose an automatic question answering (QA) based metric for faithfulness, FEQA, which leverages recent advances in reading comprehension. Given question-answer pairs generated from the summary, a QA model extracts answers from the document; non-matched answers indicate unfaithful information in the summary. Among metrics based on word overlap, embedding similarity, and learned language understanding models, our QA-based metric has significantly higher correlation with human faithfulness scores, especially on highly abstractive summaries.
2020-05-07T21:00:08Z
Accepted to ACL 2020
null
10.18653/v1/2020.acl-main.454
null
null
null
null
null
null
null
2,005.04132
Asteroid: the PyTorch-based audio source separation toolkit for researchers
['Manuel Pariente', 'Samuele Cornell', 'Joris Cosentino', 'Sunit Sivasankaran', 'Efthymios Tzinis', 'Jens Heitkaemper', 'Michel Olvera', 'Fabian-Robert Stöter', 'Mathieu Hu', 'Juan M. Martín-Doñas', 'David Ditter', 'Ariel Frank', 'Antoine Deleforge', 'Emmanuel Vincent']
['eess.AS', 'cs.SD']
This paper describes Asteroid, the PyTorch-based audio source separation toolkit for researchers. Inspired by the most successful neural source separation systems, it provides all neural building blocks required to build such a system. To improve reproducibility, Kaldi-style recipes on common audio source separation datasets are also provided. This paper describes the software architecture of Asteroid and its most important features. By showing experimental results obtained with Asteroid's recipes, we show that our implementations are at least on par with most results reported in reference papers. The toolkit is publicly available at https://github.com/mpariente/asteroid .
2020-05-08T16:18:34Z
Submitted to Interspeech 2020
null
null
null
null
null
null
null
null
null
2,005.05106
Multi-band MelGAN: Faster Waveform Generation for High-Quality Text-to-Speech
['Geng Yang', 'Shan Yang', 'Kai Liu', 'Peng Fang', 'Wei Chen', 'Lei Xie']
['cs.SD', 'eess.AS']
In this paper, we propose multi-band MelGAN, a much faster waveform generation model targeting high-quality text-to-speech. Specifically, we improve the original MelGAN by the following aspects. First, we increase the receptive field of the generator, which is proven to be beneficial to speech generation. Second, we substitute the feature matching loss with the multi-resolution STFT loss to better measure the difference between fake and real speech. Together with pre-training, this improvement leads to both better quality and better training stability. More importantly, we extend MelGAN with multi-band processing: the generator takes mel-spectrograms as input and produces sub-band signals which are subsequently summed back to full-band signals as discriminator input. The proposed multi-band MelGAN has achieved high MOS of 4.34 and 4.22 in waveform generation and TTS, respectively. With only 1.91M parameters, our model effectively reduces the total computational complexity of the original MelGAN from 5.85 to 0.95 GFLOPS. Our Pytorch implementation, which will be open-sourced shortly, can achieve a real-time factor of 0.03 on CPU without hardware specific optimization.
2020-05-11T13:48:41Z
Submitted to Interspeech2020
null
null
null
null
null
null
null
null
null
2,005.05535
DeepFaceLab: Integrated, flexible and extensible face-swapping framework
['Ivan Perov', 'Daiheng Gao', 'Nikolay Chervoniy', 'Kunlin Liu', 'Sugasa Marangonda', 'Chris Umé', 'Dpfks', 'Carl Shift Facenheim', 'Luis RP', 'Jian Jiang', 'Sheng Zhang', 'Pingyu Wu', 'Bo Zhou', 'Weiming Zhang']
['cs.CV', 'cs.LG', 'cs.MM', 'eess.IV']
Deepfake defense requires not only research on detection but also effort on generation methods. However, current deepfake methods suffer from obscure workflows and poor performance. To solve this problem, we present DeepFaceLab, the current dominant deepfake framework for face-swapping. It provides the necessary tools as well as an easy-to-use way to conduct high-quality face-swapping. It also offers a flexible and loose coupling structure for people who need to strengthen their pipeline with other features without writing complicated boilerplate code. We detail the principles that drive the implementation of DeepFaceLab and introduce its pipeline, through which every aspect of the pipeline can be modified painlessly by users to achieve their customization purpose. It is noteworthy that DeepFaceLab could achieve cinema-quality results with high fidelity. We demonstrate the advantage of our system by comparing our approach with other face-swapping methods. For more information, please visit: https://github.com/iperov/DeepFaceLab/.
2020-05-12T03:26:55Z
null
null
null
Deepfacelab: Integrated, flexible and extensible face-swapping framework
['Kunlin Liu', 'Ivan Perov', 'Daiheng Gao', 'Nikolay Chervoniy', 'Wenbo Zhou', 'Weiming Zhang']
2,020
Pattern Recognition
232
46
['Computer Science', 'Engineering']
2,005.05635
SKEP: Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis
['Hao Tian', 'Can Gao', 'Xinyan Xiao', 'Hao Liu', 'Bolei He', 'Hua Wu', 'Haifeng Wang', 'Feng Wu']
['cs.CL']
Recently, sentiment analysis has seen remarkable advance with the help of pre-training approaches. However, sentiment knowledge, such as sentiment words and aspect-sentiment pairs, is ignored in the process of pre-training, despite the fact that they are widely used in traditional sentiment analysis approaches. In this paper, we introduce Sentiment Knowledge Enhanced Pre-training (SKEP) in order to learn a unified sentiment representation for multiple sentiment analysis tasks. With the help of automatically-mined knowledge, SKEP conducts sentiment masking and constructs three sentiment knowledge prediction objectives, so as to embed sentiment information at the word, polarity and aspect level into pre-trained sentiment representation. In particular, the prediction of aspect-sentiment pairs is converted into multi-label classification, aiming to capture the dependency between words in a pair. Experiments on three kinds of sentiment tasks show that SKEP significantly outperforms a strong pre-training baseline, and achieves new state-of-the-art results on most of the test datasets. We release our code at https://github.com/baidu/Senta.
2020-05-12T09:23:32Z
Accepted by ACL2020
null
null
null
null
null
null
null
null
null