Dataset schema (16 columns; dtype and observed range as reported by the dataset viewer):

arxiv_id            float64   1.5k to 2.51k
title               string    length 9 to 178
authors             string    length 2 to 22.8k
categories          string    length 4 to 146
summary             string    length 103 to 1.92k
published           date      2015-02-06 10:44:00 to 2025-07-10 17:59:58
comments            string    length 2 to 417
journal_ref         string    321 distinct values
doi                 string    398 distinct values
ss_title            string    length 8 to 159
ss_authors          string    length 11 to 8.38k
ss_year             float64   2.02k to 2.03k
ss_venue            string    281 distinct values
ss_citationCount    float64   0 to 134k
ss_referenceCount   float64   0 to 429
ss_fieldsOfStudy    string    47 distinct values
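One quirk visible in the schema: arxiv_id is stored as float64, so IDs lose trailing zeros when rendered (e.g. 2009.033 instead of the canonical 2009.03300). Since every row here was published in 2015 or later, the sequence part of a new-style arXiv ID is always five digits, and the canonical string can be recovered by fixed-precision formatting. A minimal sketch (the function name is ours, not part of the dataset):

```python
def restore_arxiv_id(raw: float) -> str:
    """Recover the canonical arXiv ID string from the float64 column.

    New-style arXiv IDs have the form YYMM.NNNNN; since January 2015 the
    sequence part NNNNN is always five digits, so formatting with exactly
    five decimal places restores zeros dropped by the float representation.
    """
    return f"{raw:.5f}"

# e.g. restore_arxiv_id(2009.033) -> "2009.03300"
```

Note this only holds for post-2015 IDs; pre-2015 IDs use a four-digit sequence part and would need four decimal places instead.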
2009.01719
Grounded Language Learning Fast and Slow
['Felix Hill', 'Olivier Tieleman', 'Tamara von Glehn', 'Nathaniel Wong', 'Hamza Merzic', 'Stephen Clark']
['cs.CL', 'cs.AI']
Recent work has shown that large text-based neural language models, trained with conventional supervised learning objectives, acquire a surprising propensity for few- and one-shot learning. Here, we show that an embodied agent situated in a simulated 3D world, and endowed with a novel dual-coding external memory, can exhibit similar one-shot word learning when trained with conventional reinforcement learning algorithms. After a single introduction to a novel object via continuous visual perception and a language prompt ("This is a dax"), the agent can re-identify the object and manipulate it as instructed ("Put the dax on the bed"). In doing so, it seamlessly integrates short-term, within-episode knowledge of the appropriate referent for the word "dax" with long-term lexical and motor knowledge acquired across episodes (i.e. "bed" and "putting"). We find that, under certain training conditions and with a particular memory writing mechanism, the agent's one-shot word-object binding generalizes to novel exemplars within the same ShapeNet category, and is effective in settings with unfamiliar numbers of objects. We further show how dual-coding memory can be exploited as a signal for intrinsic motivation, stimulating the agent to seek names for objects that may be useful for later executing instructions. Together, the results demonstrate that deep neural networks can exploit meta-learning, episodic memory and an explicitly multi-modal environment to account for 'fast-mapping', a fundamental pillar of human cognitive development and a potentially transformative capacity for agents that interact with human users.
2020-09-03T14:52:03Z
comments, journal_ref, doi, and all ss_* fields: null
2009.02252
KILT: a Benchmark for Knowledge Intensive Language Tasks
['Fabio Petroni', 'Aleksandra Piktus', 'Angela Fan', 'Patrick Lewis', 'Majid Yazdani', 'Nicola De Cao', 'James Thorne', 'Yacine Jernite', 'Vladimir Karpukhin', 'Jean Maillard', 'Vassilis Plachouras', 'Tim Rocktäschel', 'Sebastian Riedel']
['cs.CL', 'cs.AI', 'cs.IR', 'cs.LG']
Challenging problems such as open-domain question answering, fact checking, slot filling and entity linking require access to large, external knowledge sources. While some models do well on individual tasks, developing general models is difficult as each task might require computationally expensive indexing of custom knowledge sources, in addition to dedicated infrastructure. To catalyze research on models that condition on specific information in large textual resources, we present a benchmark for knowledge-intensive language tasks (KILT). All tasks in KILT are grounded in the same snapshot of Wikipedia, reducing engineering turnaround through the re-use of components, as well as accelerating research into task-agnostic memory architectures. We test both task-specific and general baselines, evaluating downstream performance in addition to the ability of the models to provide provenance. We find that a shared dense vector index coupled with a seq2seq model is a strong baseline, outperforming more tailor-made approaches for fact checking, open-domain question answering and dialogue, and yielding competitive results on entity linking and slot filling, by generating disambiguated text. KILT data and code are available at https://github.com/facebookresearch/KILT.
2020-09-04T15:32:19Z
accepted at NAACL 2021
journal_ref, doi, and all ss_* fields: null
2009.03300
Measuring Massive Multitask Language Understanding
['Dan Hendrycks', 'Collin Burns', 'Steven Basart', 'Andy Zou', 'Mantas Mazeika', 'Dawn Song', 'Jacob Steinhardt']
['cs.CY', 'cs.AI', 'cs.CL', 'cs.LG']
We propose a new test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability. We find that while most recent models have near random-chance accuracy, the very largest GPT-3 model improves over random chance by almost 20 percentage points on average. However, on every one of the 57 tasks, the best models still need substantial improvements before they can reach expert-level accuracy. Models also have lopsided performance and frequently do not know when they are wrong. Worse, they still have near-random accuracy on some socially important subjects such as morality and law. By comprehensively evaluating the breadth and depth of a model's academic and professional understanding, our test can be used to analyze models across many tasks and to identify important shortcomings.
2020-09-07T17:59:25Z
ICLR 2021; the test and code are available at https://github.com/hendrycks/test
null
null
Measuring Massive Multitask Language Understanding
['Dan Hendrycks', 'Collin Burns', 'Steven Basart', 'Andy Zou', 'Mantas Mazeika', 'D. Song', 'J. Steinhardt']
2020
International Conference on Learning Representations
4,587
35
['Computer Science']
2009.04534
Pay Attention when Required
['Swetha Mandava', 'Szymon Migacz', 'Alex Fit Florea']
['cs.LG', 'cs.CL']
Transformer-based models consist of interleaved feed-forward blocks, which capture content meaning, and relatively more expensive self-attention blocks, which capture context meaning. In this paper, we explored trade-offs and orderings of these blocks to improve upon the current Transformer architecture, and proposed the PAR Transformer. It needs 35% less compute time than Transformer-XL, achieved by replacing ~63% of the self-attention blocks with feed-forward blocks, while retaining the perplexity on the WikiText-103 language modelling benchmark. We further validated our results on the text8 and enwik8 datasets, as well as on the BERT model.
2020-09-09T19:39:15Z
9 pages, 5 figures, 7 tables
journal_ref, doi, and all ss_* fields: null
2009.05166
FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding
['Yuwei Fang', 'Shuohang Wang', 'Zhe Gan', 'Siqi Sun', 'Jingjing Liu']
['cs.CL']
Large-scale cross-lingual language models (LM), such as mBERT, Unicoder and XLM, have achieved great success in cross-lingual representation learning. However, when applied to zero-shot cross-lingual transfer tasks, most existing methods use only single-language input for LM finetuning, without leveraging the intrinsic cross-lingual alignment between different languages that proves essential for multilingual tasks. In this paper, we propose FILTER, an enhanced fusion method that takes cross-lingual data as input for XLM finetuning. Specifically, FILTER first encodes text input in the source language and its translation in the target language independently in the shallow layers, then performs cross-language fusion to extract multilingual knowledge in the intermediate layers, and finally performs further language-specific encoding. During inference, the model makes predictions based on the text input in the target language and its translation in the source language. For simple tasks such as classification, translated text in the target language shares the same label as the source language. However, this shared label becomes less accurate or even unavailable for more complex tasks such as question answering, NER and POS tagging. To tackle this issue, we further propose an additional KL-divergence self-teaching loss for model training, based on auto-generated soft pseudo-labels for translated text in the target language. Extensive experiments demonstrate that FILTER achieves new state of the art on two challenging multilingual multi-task benchmarks, XTREME and XGLUE.
2020-09-10T22:42:15Z
Accepted to AAAI 2021; Top-1 Performance on XTREME (https://sites.research.google/xtreme, September 8, 2020) and XGLUE (https://microsoft.github.io/XGLUE, September 14, 2020) benchmark
null
null
FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding
['Yuwei Fang', 'Shuohang Wang', 'Zhe Gan', 'S. Sun', 'Jingjing Liu']
2020
AAAI Conference on Artificial Intelligence
58
33
['Computer Science']
2009.05387
IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding
['Bryan Wilie', 'Karissa Vincentio', 'Genta Indra Winata', 'Samuel Cahyawijaya', 'Xiaohong Li', 'Zhi Yuan Lim', 'Sidik Soleman', 'Rahmad Mahendra', 'Pascale Fung', 'Syafri Bahar', 'Ayu Purwarianti']
['cs.CL']
Although Indonesian is known to be the fourth most frequently used language on the internet, research progress on this language in natural language processing (NLP) has been slow due to a lack of available resources. In response, we introduce the first vast resource for training, evaluating, and benchmarking Indonesian natural language understanding (IndoNLU) tasks. IndoNLU includes twelve tasks, ranging from single-sentence classification to sentence-pair sequence labeling, with different levels of complexity. The datasets for the tasks span different domains and styles to ensure task diversity. We also provide a set of Indonesian pre-trained models (IndoBERT) trained on Indo4B, a large and clean Indonesian dataset collected from publicly available sources such as social media texts, blogs, news, and websites. We release baseline models for all twelve tasks, as well as the framework for benchmark evaluation, enabling everyone to benchmark their system performance.
2020-09-11T12:21:41Z
This paper will be presented at AACL-IJCNLP 2020 (with new results and acknowledgment)
journal_ref, doi, and all ss_* fields: null
2009.06978
Dialogue Response Ranking Training with Large-Scale Human Feedback Data
['Xiang Gao', 'Yizhe Zhang', 'Michel Galley', 'Chris Brockett', 'Bill Dolan']
['cs.CL']
Existing open-domain dialog models are generally trained to minimize the perplexity of target human responses. However, some human replies are more engaging than others, spawning more followup interactions. Current conversational models are increasingly capable of producing turns that are context-relevant, but in order to produce compelling agents, these models need to be able to predict and optimize for turns that are genuinely engaging. We leverage social media feedback data (number of replies and upvotes) to build a large-scale training dataset for feedback prediction. To alleviate possible distortion between the feedback and engagingness, we convert the ranking problem to a comparison of response pairs which involve few confounding factors. We trained DialogRPT, a set of GPT-2-based models, on 133M pairs of human feedback data, and the resulting ranker outperformed several baselines. In particular, our ranker outperforms the conventional dialog perplexity baseline by a large margin on predicting Reddit feedback. We finally combine the feedback prediction models and a human-like scoring model to rank machine-generated dialog responses. Crowd-sourced human evaluation shows that our ranking method correlates better with real human preferences than baseline models.
2020-09-15T10:50:05Z
Accepted to appear at EMNLP 2020
null
null
Dialogue Response Ranking Training with Large-Scale Human Feedback Data
['Xiang Gao', 'Yizhe Zhang', 'Michel Galley', 'Chris Brockett', 'Bill Dolan']
2020
Conference on Empirical Methods in Natural Language Processing
107
35
['Computer Science']
2009.07047
Old Photo Restoration via Deep Latent Space Translation
['Ziyu Wan', 'Bo Zhang', 'Dongdong Chen', 'Pan Zhang', 'Dong Chen', 'Jing Liao', 'Fang Wen']
['cs.CV', 'cs.GR']
We propose to restore old photos that suffer from severe degradation through a deep learning approach. Unlike conventional restoration tasks that can be solved through supervised learning, the degradation in real photos is complex, and the domain gap between synthetic images and real old photos makes the network fail to generalize. Therefore, we propose a novel triplet domain translation network by leveraging real photos along with massive synthetic image pairs. Specifically, we train two variational autoencoders (VAEs) to respectively transform old photos and clean photos into two latent spaces. The translation between these two latent spaces is learned with synthetic paired data. This translation generalizes well to real photos because the domain gap is closed in the compact latent space. Besides, to address multiple degradations mixed in one old photo, we design a global branch with a partial nonlocal block targeting the structured defects, such as scratches and dust spots, and a local branch targeting the unstructured defects, such as noise and blurriness. The two branches are fused in the latent space, leading to improved capability to restore old photos from multiple defects. Furthermore, we apply another face refinement network to recover fine details of faces in the old photos, thus ultimately generating photos with enhanced perceptual quality. With comprehensive experiments, the proposed pipeline demonstrates superior performance over state-of-the-art methods as well as existing commercial tools in terms of visual quality for old photo restoration.
2020-09-14T08:51:53Z
15 pages. arXiv admin note: substantial text overlap with arXiv:2004.09484
null
null
Old Photo Restoration via Deep Latent Space Translation
['Ziyu Wan', 'Bo Zhang', 'Dongdong Chen', 'P. Zhang', 'Dong Chen', 'Jing Liao', 'Fang Wen']
2020
IEEE Transactions on Pattern Analysis and Machine Intelligence
68
88
['Computer Science', 'Medicine']
2009.07185
Critical Thinking for Language Models
['Gregor Betz', 'Christian Voigt', 'Kyle Richardson']
['cs.CL', 'cs.AI']
This paper takes a first step towards a critical thinking curriculum for neural auto-regressive language models. We introduce a synthetic corpus of deductively valid arguments, and generate artificial argumentative texts to train and evaluate GPT-2. Significant transfer learning effects can be observed: Training a model on three simple core schemes allows it to accurately complete conclusions of different, and more complex types of arguments, too. The language models generalize the core argument schemes in a correct way. Moreover, we obtain consistent and promising results for NLU benchmarks. In particular, pre-training on the argument schemes raises zero-shot accuracy on the GLUE diagnostics by up to 15 percentage points. The findings suggest that intermediary pre-training on texts that exemplify basic reasoning abilities (such as typically covered in critical thinking textbooks) might help language models to acquire a broad range of reasoning skills. The synthetic argumentative texts presented in this paper are a promising starting point for building such a "critical thinking curriculum for language models."
2020-09-15T15:49:19Z
comments, journal_ref, doi, and all ss_* fields: null
2009.08366
GraphCodeBERT: Pre-training Code Representations with Data Flow
['Daya Guo', 'Shuo Ren', 'Shuai Lu', 'Zhangyin Feng', 'Duyu Tang', 'Shujie Liu', 'Long Zhou', 'Nan Duan', 'Alexey Svyatkovskiy', 'Shengyu Fu', 'Michele Tufano', 'Shao Kun Deng', 'Colin Clement', 'Dawn Drain', 'Neel Sundaresan', 'Jian Yin', 'Daxin Jiang', 'Ming Zhou']
['cs.SE', 'cs.CL']
Pre-trained models for programming language have achieved dramatic empirical improvements on a variety of code-related tasks such as code search, code completion, and code summarization. However, existing pre-trained models regard a code snippet as a sequence of tokens, ignoring the inherent structure of code, which provides crucial code semantics and would enhance the code understanding process. We present GraphCodeBERT, a pre-trained model for programming language that considers the inherent structure of code. Instead of taking a syntactic-level structure of code such as the abstract syntax tree (AST), we use data flow in the pre-training stage, a semantic-level structure of code that encodes the "where-the-value-comes-from" relation between variables. Such a semantic-level structure is neat and avoids the unnecessarily deep hierarchy of the AST, which makes the model more efficient. We develop GraphCodeBERT based on the Transformer. In addition to the task of masked language modeling, we introduce two structure-aware pre-training tasks: one predicts code structure edges, and the other aligns representations between source code and code structure. We implement the model efficiently with a graph-guided masked attention function to incorporate the code structure. We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement. Results show that code structure and the newly introduced pre-training tasks improve GraphCodeBERT, which achieves state-of-the-art performance on the four downstream tasks. We further show that the model prefers structure-level attentions over token-level attentions in the task of code search.
2020-09-17T15:25:56Z
Accepted by ICLR2021
journal_ref, doi, and all ss_* fields: null
2009.08820
FarsTail: A Persian Natural Language Inference Dataset
['Hossein Amirkhani', 'Mohammad AzariJafari', 'Zohreh Pourjafari', 'Soroush Faridan-Jahromi', 'Zeinab Kouhkan', 'Azadeh Amirak']
['cs.CL']
Natural language inference (NLI) is known as one of the central tasks in natural language processing (NLP), encapsulating many fundamental aspects of language understanding. With the considerable achievements of data-hungry deep learning methods in NLP tasks, a great amount of effort has been devoted to developing more diverse datasets for different languages. In this paper, we present a new dataset for the NLI task in the Persian language, also known as Farsi, which is one of the dominant languages in the Middle East. This dataset, named FarsTail, includes 10,367 samples, which are provided both in the Persian language and in an indexed format to be useful for non-Persian researchers. The samples are generated from 3,539 multiple-choice questions with the least amount of annotator intervention, in a way similar to the SciTail dataset. A carefully designed multi-step process is adopted to ensure the quality of the dataset. We also present the results of traditional and state-of-the-art methods on FarsTail, including different embedding methods such as word2vec, fastText, ELMo, BERT, and LASER, as well as different modeling approaches such as DecompAtt, ESIM, HBMP, and ULMFiT, to provide a solid baseline for future research. The best obtained test accuracy is 83.38%, which shows that there is considerable room for improving current methods to be useful for real-world NLP applications in different languages. We also investigate the extent to which the models exploit superficial clues, also known as dataset biases, in FarsTail, and partition the test set into easy and hard subsets according to the success of biased models. The dataset is available at https://github.com/dml-qom/FarsTail
2020-09-18T13:04:04Z
null
Soft Computing (2023)
10.1007/s00500-023-08959-3
all ss_* fields: null
2009.09761
DiffWave: A Versatile Diffusion Model for Audio Synthesis
['Zhifeng Kong', 'Wei Ping', 'Jiaji Huang', 'Kexin Zhao', 'Bryan Catanzaro']
['eess.AS', 'cs.CL', 'cs.LG', 'cs.SD', 'stat.ML']
In this work, we propose DiffWave, a versatile diffusion probabilistic model for conditional and unconditional waveform generation. The model is non-autoregressive, and converts the white noise signal into a structured waveform through a Markov chain with a constant number of steps at synthesis. It is efficiently trained by optimizing a variant of the variational bound on the data likelihood. DiffWave produces high-fidelity audio in different waveform generation tasks, including neural vocoding conditioned on mel spectrogram, class-conditional generation, and unconditional generation. We demonstrate that DiffWave matches a strong WaveNet vocoder in terms of speech quality (MOS: 4.44 versus 4.43), while synthesizing orders of magnitude faster. In particular, it significantly outperforms autoregressive and GAN-based waveform models in the challenging unconditional generation task in terms of audio quality and sample diversity under various automatic and human evaluations.
2020-09-21T11:20:38Z
ICLR 2021 (oral)
journal_ref, doi, and all ss_* fields: null
2009.10053
Latin BERT: A Contextual Language Model for Classical Philology
['David Bamman', 'Patrick J. Burns']
['cs.CL']
We present Latin BERT, a contextual language model for the Latin language, trained on 642.7 million words from a variety of sources spanning the Classical era to the 21st century. In a series of case studies, we illustrate the affordances of this language-specific model both for work in natural language processing for Latin and in using computational methods for traditional scholarship: we show that Latin BERT achieves a new state of the art for part-of-speech tagging on all three Universal Dependency datasets for Latin and can be used for predicting missing text (including critical emendations); we create a new dataset for assessing word sense disambiguation for Latin and demonstrate that Latin BERT outperforms static word embeddings; and we show that it can be used for semantically-informed search by querying contextual nearest neighbors. We publicly release trained models to help drive future work in this space.
2020-09-21T17:47:44Z
null
null
null
Latin BERT: A Contextual Language Model for Classical Philology
['David Bamman', 'P. Burns']
2020
arXiv.org
79
61
['Computer Science']
2009.10277
Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application
['Chris J. Kennedy', 'Geoff Bacon', 'Alexander Sahn', 'Claudia von Vacano']
['cs.CL', 'cs.LG', 'cs.SI', 'I.2.7']
We propose a general method for measuring complex variables on a continuous, interval spectrum by combining supervised deep learning with the Constructing Measures approach to faceted Rasch item response theory (IRT). We decompose the target construct, hate speech in our case, into multiple constituent components that are labeled as ordinal survey items. Those survey responses are transformed via IRT into a debiased, continuous outcome measure. Our method estimates the survey interpretation bias of the human labelers and eliminates that influence on the generated continuous measure. We further estimate the response quality of each labeler using faceted IRT, allowing responses from low-quality labelers to be removed. Our faceted Rasch scaling procedure integrates naturally with a multitask deep learning architecture for automated prediction on new data. The ratings on the theorized components of the target outcome are used as supervised, ordinal variables for the neural networks' internal concept learning. We test the use of an activation function (ordinal softmax) and loss function (ordinal cross-entropy) designed to exploit the structure of ordinal outcome variables. Our multitask architecture leads to a new form of model interpretation because each continuous prediction can be directly explained by the constituent components in the penultimate layer. We demonstrate this new method on a dataset of 50,000 social media comments sourced from YouTube, Twitter, and Reddit and labeled by 11,000 U.S.-based Amazon Mechanical Turk workers to measure a continuous spectrum from hate speech to counterspeech. We evaluate Universal Sentence Encoders, BERT, and RoBERTa as language representation models for the comment text, and compare our predictive accuracy to Google Jigsaw's Perspective API models, showing significant improvement over this standard benchmark.
2020-09-22T02:15:05Z
35 pages, 10 figures
journal_ref, doi, and all ss_* fields: null
2009.10297
CodeBLEU: a Method for Automatic Evaluation of Code Synthesis
['Shuo Ren', 'Daya Guo', 'Shuai Lu', 'Long Zhou', 'Shujie Liu', 'Duyu Tang', 'Neel Sundaresan', 'Ming Zhou', 'Ambrosio Blanco', 'Shuai Ma']
['cs.SE', 'cs.CL']
Evaluation metrics play a vital role in the growth of an area, as they define the standard for distinguishing between good and bad models. In the area of code synthesis, the commonly used evaluation metrics are BLEU and perfect accuracy, but they are not suitable for evaluating code: BLEU was originally designed to evaluate natural language and neglects important syntactic and semantic features of code, while perfect accuracy is too strict and thus underestimates different outputs with the same semantic logic. To remedy this, we introduce a new automatic evaluation metric, dubbed CodeBLEU. It absorbs the strength of BLEU in the n-gram match and further injects code syntax via abstract syntax trees (AST) and code semantics via data flow. We conduct experiments evaluating the correlation coefficient between CodeBLEU and quality scores assigned by programmers on three code synthesis tasks, i.e., text-to-code, code translation, and code refinement. Experimental results show that our proposed CodeBLEU achieves a better correlation with programmer-assigned scores than BLEU and accuracy.
2020-09-22T03:10:49Z
8 pages, 6 figures
null
null
CodeBLEU: a Method for Automatic Evaluation of Code Synthesis
['Shuo Ren', 'Daya Guo', 'Shuai Lu', 'Long Zhou', 'Shujie Liu', 'Duyu Tang', 'M. Zhou', 'Ambrosio Blanco', 'Shuai Ma']
2020
arXiv.org
546
32
['Computer Science']
2009.11462
RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
['Samuel Gehman', 'Suchin Gururangan', 'Maarten Sap', 'Yejin Choi', 'Noah A. Smith']
['cs.CL']
Pretrained neural language models (LMs) are prone to generating racist, sexist, or otherwise toxic language which hinders their safe deployment. We investigate the extent to which pretrained LMs can be prompted to generate toxic language, and the effectiveness of controllable text generation algorithms at preventing such toxic degeneration. We create and release RealToxicityPrompts, a dataset of 100K naturally occurring, sentence-level prompts derived from a large corpus of English web text, paired with toxicity scores from a widely-used toxicity classifier. Using RealToxicityPrompts, we find that pretrained LMs can degenerate into toxic text even from seemingly innocuous prompts. We empirically assess several controllable generation methods, and find that while data- or compute-intensive methods (e.g., adaptive pretraining on non-toxic data) are more effective at steering away from toxicity than simpler solutions (e.g., banning "bad" words), no current method is failsafe against neural toxic degeneration. To pinpoint the potential cause of such persistent toxic degeneration, we analyze two web text corpora used to pretrain several LMs (including GPT-2; Radford et al., 2019), and find a significant amount of offensive, factually unreliable, and otherwise toxic content. Our work provides a test bed for evaluating toxic generations by LMs and stresses the need for better data selection processes for pretraining.
2020-09-24T03:17:19Z
Findings in EMNLP 2020
journal_ref, doi, and all ss_* fields: null
2009.11616
N-LTP: An Open-source Neural Language Technology Platform for Chinese
['Wanxiang Che', 'Yunlong Feng', 'Libo Qin', 'Ting Liu']
['cs.CL']
We introduce N-LTP, an open-source neural language technology platform supporting six fundamental Chinese NLP tasks: lexical analysis (Chinese word segmentation, part-of-speech tagging, and named entity recognition), syntactic parsing (dependency parsing), and semantic parsing (semantic dependency parsing and semantic role labeling). Unlike existing state-of-the-art toolkits such as Stanza, which adopt an independent model for each task, N-LTP adopts a multi-task framework with a shared pre-trained model, which has the advantage of capturing shared knowledge across relevant Chinese tasks. In addition, a knowledge distillation method (arXiv:1907.04829), in which the single-task model teaches the multi-task model, is introduced to encourage the multi-task model to surpass its single-task teacher. Finally, we provide a collection of easy-to-use APIs and a visualization tool that let users use and view the processing results more easily and directly. To the best of our knowledge, this is the first toolkit to support six fundamental Chinese NLP tasks. Source code, documentation, and pre-trained models are available at https://github.com/HIT-SCIR/ltp.
2020-09-24T11:45:39Z
Accepted to appear in EMNLP 2021 (Demo)
null
null
N-LTP: An Open-source Neural Language Technology Platform for Chinese
['Wanxiang Che', 'ylfeng', 'Libo Qin', 'Ting Liu']
2020
Conference on Empirical Methods in Natural Language Processing
113
38
['Computer Science']
2009.12756
Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval
['Wenhan Xiong', 'Xiang Lorraine Li', 'Srini Iyer', 'Jingfei Du', 'Patrick Lewis', 'William Yang Wang', 'Yashar Mehdad', 'Wen-tau Yih', 'Sebastian Riedel', 'Douwe Kiela', 'Barlas Oğuz']
['cs.CL']
We propose a simple and efficient multi-hop dense retrieval approach for answering complex open-domain questions, which achieves state-of-the-art performance on two multi-hop datasets, HotpotQA and multi-evidence FEVER. Contrary to previous work, our method does not require access to any corpus-specific information, such as inter-document hyperlinks or human-annotated entity markers, and can be applied to any unstructured text corpus. Our system also yields a much better efficiency-accuracy trade-off, matching the best published accuracy on HotpotQA while being 10 times faster at inference time.
2020-09-27T06:12:29Z
null
null
null
Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval
['Wenhan Xiong', 'Xiang Lorraine Li', 'Srini Iyer', 'Jingfei Du', 'Patrick Lewis', 'William Yang Wang', 'Yashar Mehdad', 'Wen-tau Yih', 'Sebastian Riedel', 'Douwe Kiela', 'Barlas Oğuz']
2020
International Conference on Learning Representations
194
61
['Computer Science']
2009.13013
SPARTA: Efficient Open-Domain Question Answering via Sparse Transformer Matching Retrieval
['Tiancheng Zhao', 'Xiaopeng Lu', 'Kyusong Lee']
['cs.CL', 'cs.LG']
We introduce SPARTA, a novel neural retrieval method that shows great promise in performance, generalization, and interpretability for open-domain question answering. Unlike many neural ranking methods that use dense vector nearest neighbor search, SPARTA learns a sparse representation that can be efficiently implemented as an inverted index. The resulting representation enables scalable neural retrieval that does not require expensive approximate vector search and leads to better performance than its dense counterpart. We validated our approach on 4 open-domain question answering (OpenQA) tasks and 11 retrieval question answering (ReQA) tasks. SPARTA achieves new state-of-the-art results across a variety of open-domain question answering tasks on both English and Chinese datasets, including open SQuAD, Natural Questions, CMRC, etc. Analysis also confirms that the proposed method creates human-interpretable representations and allows flexible control over the trade-off between performance and efficiency.
2020-09-28T02:11:02Z
11 pages
journal_ref, doi, and all ss_* fields: null
2009.13081
What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams
['Di Jin', 'Eileen Pan', 'Nassim Oufattole', 'Wei-Hung Weng', 'Hanyi Fang', 'Peter Szolovits']
['cs.CL', 'cs.AI']
Open domain question answering (OpenQA) tasks have been recently attracting more and more attention from the natural language processing (NLP) community. In this work, we present the first free-form multiple-choice OpenQA dataset for solving medical problems, MedQA, collected from the professional medical board exams. It covers three languages: English, simplified Chinese, and traditional Chinese, and contains 12,723, 34,251, and 14,123 questions for the three languages, respectively. We implement both rule-based and popular neural methods by sequentially combining a document retriever and a machine comprehension model. Through experiments, we find that even the current best method can only achieve 36.7\%, 42.0\%, and 70.1\% of test accuracy on the English, traditional Chinese, and simplified Chinese questions, respectively. We expect MedQA to present great challenges to existing OpenQA systems and hope that it can serve as a platform to promote much stronger OpenQA models from the NLP community in the future.
2020-09-28T05:07:51Z
Submitted to AAAI 2021
null
null
What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams
['Di Jin', 'Eileen Pan', 'Nassim Oufattole', 'W. Weng', 'Hanyi Fang', 'Peter Szolovits']
2020
Applied Sciences
820
49
['Computer Science']
2009.14725
A Vietnamese Dataset for Evaluating Machine Reading Comprehension
['Kiet Van Nguyen', 'Duc-Vu Nguyen', 'Anh Gia-Tuan Nguyen', 'Ngan Luu-Thuy Nguyen']
['cs.CL']
Over 97 million people speak Vietnamese as their native language in the world. However, there are few research studies on machine reading comprehension (MRC) for Vietnamese, the task of understanding a text and answering questions related to it. Due to the lack of benchmark datasets for Vietnamese, we present the Vietnamese Question Answering Dataset (UIT-ViQuAD), a new dataset for evaluating MRC models in Vietnamese, a low-resource language. This dataset comprises over 23,000 human-generated question-answer pairs based on 5,109 passages of 174 Vietnamese articles from Wikipedia. In particular, we propose a new process of dataset creation for Vietnamese MRC. Our in-depth analyses show that answering questions in our dataset requires abilities beyond simple word matching, demanding both single-sentence and multi-sentence inference. In addition, we conduct experiments on state-of-the-art MRC methods for English and Chinese as the first experimental models on UIT-ViQuAD. We also estimate human performance on the dataset and compare it to the experimental results of powerful machine learning models. As a result, the substantial differences between human performance and the best model performance on the dataset indicate that improvements can be made on UIT-ViQuAD in future research. Our dataset is freely available on our website to encourage the research community to overcome challenges in Vietnamese MRC.
2020-09-30T15:06:56Z
Accepted by The 28th International Conference on Computational Linguistics (COLING 2020)
null
null
null
null
null
null
null
null
null
2009.14794
Rethinking Attention with Performers
['Krzysztof Choromanski', 'Valerii Likhosherstov', 'David Dohan', 'Xingyou Song', 'Andreea Gane', 'Tamas Sarlos', 'Peter Hawkins', 'Jared Davis', 'Afroz Mohiuddin', 'Lukasz Kaiser', 'David Belanger', 'Lucy Colwell', 'Adrian Weller']
['cs.LG', 'cs.CL', 'stat.ML']
We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. To approximate softmax attention-kernels, Performers use a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+), which may be of independent interest for scalable kernel methods. FAVOR+ can be also used to efficiently model kernelizable attention mechanisms beyond softmax. This representational power is crucial to accurately compare softmax with other kernels for the first time on large-scale tasks, beyond the reach of regular Transformers, and investigate optimal attention-kernels. Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. We tested Performers on a rich set of tasks stretching from pixel-prediction through text models to protein sequence modeling. We demonstrate competitive results with other examined efficient sparse and dense attention methods, showcasing effectiveness of the novel attention-learning paradigm leveraged by Performers.
2020-09-30T17:09:09Z
Published as a conference paper + oral presentation at ICLR 2021. 38 pages. See https://github.com/google-research/google-research/tree/master/protein_lm for protein language model code, and https://github.com/google-research/google-research/tree/master/performer for Performer code. See https://ai.googleblog.com/2020/10/rethinking-attention-with-performers.html for Google AI Blog
null
null
null
null
null
null
null
null
null
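The FAVOR+ mechanism summarized in the Performers abstract rests on positive random features for the softmax kernel SM(x, y) = exp(x·y). The following is an illustrative numpy sketch, not the authors' code: it draws plain i.i.d. Gaussian projection rows, whereas the paper additionally orthogonalizes them to reduce estimator variance.

```python
import numpy as np

def positive_random_features(x, W):
    """FAVOR+ positive features: phi(x) = exp(Wx - ||x||^2 / 2) / sqrt(m),
    chosen so that E[phi(x) . phi(y)] = exp(x . y), the softmax kernel."""
    m = W.shape[0]
    return np.exp(W @ x - (x @ x) / 2.0) / np.sqrt(m)

rng = np.random.default_rng(0)
d, m = 4, 50_000
# NOTE: the paper uses *orthogonal* rows for W; i.i.d. Gaussian rows are
# used here for brevity (an unbiased but higher-variance estimator).
W = rng.standard_normal((m, d))
x = 0.3 * rng.standard_normal(d)
y = 0.3 * rng.standard_normal(d)
approx = positive_random_features(x, W) @ positive_random_features(y, W)
exact = np.exp(x @ y)  # the softmax-kernel value being estimated
```

Because every feature is positive, the kernel estimate can never go negative, which is what keeps the attention approximation stable.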
2010.00133
CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
['Nikita Nangia', 'Clara Vania', 'Rasika Bhalerao', 'Samuel R. Bowman']
['cs.CL', 'cs.AI']
Pretrained language models, especially masked language models (MLMs) have seen success across many NLP tasks. However, there is ample evidence that they use the cultural biases that are undoubtedly present in the corpora they are trained on, implicitly creating harm with biased representations. To measure some forms of social bias in language models against protected demographic groups in the US, we introduce the Crowdsourced Stereotype Pairs benchmark (CrowS-Pairs). CrowS-Pairs has 1508 examples that cover stereotypes dealing with nine types of bias, like race, religion, and age. In CrowS-Pairs a model is presented with two sentences: one that is more stereotyping and another that is less stereotyping. The data focuses on stereotypes about historically disadvantaged groups and contrasts them with advantaged groups. We find that all three of the widely-used MLMs we evaluate substantially favor sentences that express stereotypes in every category in CrowS-Pairs. As work on building less biased models advances, this dataset can be used as a benchmark to evaluate progress.
2020-09-30T22:38:40Z
EMNLP 2020
null
null
null
null
null
null
null
null
null
2010.00571
Understanding tables with intermediate pre-training
['Julian Martin Eisenschlos', 'Syrine Krichene', 'Thomas Müller']
['cs.CL', 'cs.AI', 'cs.IR', 'cs.LG']
Table entailment, the binary classification task of finding if a sentence is supported or refuted by the content of a table, requires parsing language and table structure as well as numerical and discrete reasoning. While there is extensive work on textual entailment, table entailment is less well studied. We adapt TAPAS (Herzig et al., 2020), a table-based BERT model, to recognize entailment. Motivated by the benefits of data augmentation, we create a balanced dataset of millions of automatically created training examples which are learned in an intermediate step prior to fine-tuning. This new data is not only useful for table entailment, but also for SQA (Iyyer et al., 2017), a sequential table QA task. To be able to use long examples as input of BERT models, we evaluate table pruning techniques as a pre-processing step to drastically improve the training and prediction efficiency at a moderate drop in accuracy. The different methods set the new state-of-the-art on the TabFact (Chen et al., 2020) and SQA datasets.
2020-10-01T17:43:27Z
Accepted to EMNLP Findings 2020
null
null
Understanding tables with intermediate pre-training
['Julian Martin Eisenschlos', 'Syrine Krichene', 'Thomas Müller']
2020
Findings
121
58
['Computer Science']
2010.00747
Contrastive Learning of Medical Visual Representations from Paired Images and Text
['Yuhao Zhang', 'Hang Jiang', 'Yasuhide Miura', 'Christopher D. Manning', 'Curtis P. Langlotz']
['cs.CV', 'cs.CL', 'cs.LG']
Learning visual representations of medical images (e.g., X-rays) is core to medical image understanding but its progress has been held back by the scarcity of human annotations. Existing work commonly relies on fine-tuning weights transferred from ImageNet pretraining, which is suboptimal due to drastically different image characteristics, or rule-based label extraction from the textual report data paired with medical images, which is inaccurate and hard to generalize. Meanwhile, several recent studies show exciting results from unsupervised contrastive learning from natural images, but we find these methods help little on medical images because of their high inter-class similarity. We propose ConVIRT, an alternative unsupervised strategy to learn medical visual representations by exploiting naturally occurring paired descriptive text. Our new method of pretraining medical image encoders with the paired text data via a bidirectional contrastive objective between the two modalities is domain-agnostic, and requires no additional expert input. We test ConVIRT by transferring our pretrained weights to 4 medical image classification tasks and 2 zero-shot retrieval tasks, and show that it leads to image representations that considerably outperform strong baselines in most settings. Notably, in all 4 classification tasks, our method requires only 10\% as much labeled training data as an ImageNet initialized counterpart to achieve better or comparable performance, demonstrating superior data efficiency.
2020-10-02T02:10:18Z
First published in 2020. Accepted at Machine Learning for Healthcare (MLHC) 2022
null
null
Contrastive Learning of Medical Visual Representations from Paired Images and Text
['Yuhao Zhang', 'Hang Jiang', 'Yasuhide Miura', 'Christopher D. Manning', 'C. Langlotz']
2020
Machine Learning in Health Care
774
59
['Computer Science']
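The bidirectional contrastive objective described in the ConVIRT abstract treats each image's paired report as the positive and the rest of the batch as negatives, in both directions. A minimal numpy sketch of that idea; the temperature `tau` and direction weight `lam` are illustrative placeholders, not the paper's hyper-parameters.

```python
import numpy as np

def info_nce(sim, axis):
    """Cross-entropy along `axis`, with the diagonal as the correct match."""
    shifted = sim - sim.max(axis=axis, keepdims=True)
    logp = shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))
    return -np.mean(np.diagonal(logp))

def bidirectional_contrastive_loss(img, txt, tau=0.1, lam=0.5):
    # cosine similarities between every image/text pair in the batch
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    sim = (img @ txt.T) / tau
    # image-to-text term plus text-to-image term
    return lam * info_nce(sim, axis=1) + (1.0 - lam) * info_nce(sim, axis=0)

# perfectly aligned pairs score a near-zero loss; mismatched pairs do not
aligned = bidirectional_contrastive_loss(np.eye(4), np.eye(4))
mismatched = bidirectional_contrastive_loss(np.eye(4), np.roll(np.eye(4), 1, axis=0))
```

Symmetrizing the loss over both directions is what makes the objective "bidirectional": both encoders are pushed toward a shared embedding space.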
2010.00904
Autoregressive Entity Retrieval
['Nicola De Cao', 'Gautier Izacard', 'Sebastian Riedel', 'Fabio Petroni']
['cs.CL', 'cs.IR', 'cs.LG', 'stat.ML']
Entities are at the center of how we represent and aggregate knowledge. For instance, Encyclopedias such as Wikipedia are structured by entities (e.g., one per Wikipedia article). The ability to retrieve such entities given a query is fundamental for knowledge-intensive tasks such as entity linking and open-domain question answering. Current approaches can be understood as classifiers among atomic labels, one for each entity. Their weight vectors are dense entity representations produced by encoding entity meta information such as their descriptions. This approach has several shortcomings: (i) context and entity affinity is mainly captured through a vector dot product, potentially missing fine-grained interactions; (ii) a large memory footprint is needed to store dense representations when considering large entity sets; (iii) an appropriately hard set of negative data has to be subsampled at training time. In this work, we propose GENRE, the first system that retrieves entities by generating their unique names, left to right, token-by-token in an autoregressive fashion. This mitigates the aforementioned technical issues since: (i) the autoregressive formulation directly captures relations between context and entity name, effectively cross encoding both; (ii) the memory footprint is greatly reduced because the parameters of our encoder-decoder architecture scale with vocabulary size, not entity count; (iii) the softmax loss is computed without subsampling negative data. We experiment with more than 20 datasets on entity disambiguation, end-to-end entity linking and document retrieval tasks, achieving new state-of-the-art or very competitive results while using a tiny fraction of the memory footprint of competing systems. Finally, we demonstrate that new entities can be added by simply specifying their names. Code and pre-trained models at https://github.com/facebookresearch/GENRE.
2020-10-02T10:13:31Z
Accepted (spotlight) at International Conference on Learning Representations (ICLR) 2021. Code at https://github.com/facebookresearch/GENRE. 20 pages, 9 figures, 8 tables
null
null
null
null
null
null
null
null
null
2010.00980
MultiCQA: Zero-Shot Transfer of Self-Supervised Text Matching Models on a Massive Scale
['Andreas Rücklé', 'Jonas Pfeiffer', 'Iryna Gurevych']
['cs.CL', 'cs.IR']
We study the zero-shot transfer capabilities of text matching models on a massive scale, by self-supervised training on 140 source domains from community question answering forums in English. We investigate model performance on nine benchmarks of answer selection and question similarity tasks, and show that all 140 models transfer surprisingly well, with the large majority of models substantially outperforming common IR baselines. We also demonstrate that considering a broad selection of source domains is crucial for obtaining the best zero-shot transfer performance, in contrast to the standard procedure that merely relies on the largest and most similar domains. In addition, we extensively study how to best combine multiple source domains. We propose combining self-supervised and supervised multi-task learning on all available source domains. Our best zero-shot transfer model considerably outperforms in-domain BERT and the previous state of the art on six benchmarks. Fine-tuning our model with in-domain data yields additional large gains and achieves the new state of the art on all nine benchmarks.
2020-10-02T13:22:12Z
EMNLP-2020
null
null
MultiCQA: Zero-Shot Transfer of Self-Supervised Text Matching Models on a Massive Scale
['Andreas Rücklé', 'Jonas Pfeiffer', 'Iryna Gurevych']
2020
Conference on Empirical Methods in Natural Language Processing
38
46
['Computer Science']
2010.01057
LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention
['Ikuya Yamada', 'Akari Asai', 'Hiroyuki Shindo', 'Hideaki Takeda', 'Yuji Matsumoto']
['cs.CL', 'cs.LG']
Entity representations are useful in natural language tasks involving entities. In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed model treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. Our model is trained using a new pretraining task based on the masked language model of BERT. The task involves predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question answering). Our source code and pretrained representations are available at https://github.com/studio-ousia/luke.
2020-10-02T15:38:03Z
EMNLP 2020
null
null
LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention
['Ikuya Yamada', 'Akari Asai', 'Hiroyuki Shindo', 'Hideaki Takeda', 'Yuji Matsumoto']
2020
Conference on Empirical Methods in Natural Language Processing
676
46
['Computer Science']
2010.01073
Efficient Image Super-Resolution Using Pixel Attention
['Hengyuan Zhao', 'Xiangtao Kong', 'Jingwen He', 'Yu Qiao', 'Chao Dong']
['eess.IV', 'cs.CV']
This work aims at designing a lightweight convolutional neural network for image super-resolution (SR). With simplicity in mind, we construct a concise and effective network with a newly proposed pixel attention scheme. Pixel attention (PA) is similar to channel attention and spatial attention in formulation. The difference is that PA produces 3D attention maps instead of a 1D attention vector or a 2D map. This attention scheme introduces fewer additional parameters but generates better SR results. On the basis of PA, we propose two building blocks, for the main branch and the reconstruction branch, respectively. The first, the SC-PA block, has the same structure as Self-Calibrated convolution but with our PA layer. This block is much more efficient than conventional residual/dense blocks, owing to its two-branch architecture and attention scheme. The second, the UPA block, combines nearest-neighbor upsampling, convolution, and PA layers. It improves the final reconstruction quality at little parameter cost. Our final model, PAN, achieves performance similar to the lightweight networks SRResNet and CARN, but with only 272K parameters (17.92% of SRResNet and 17.09% of CARN). The effectiveness of each proposed component is also validated by an ablation study. The code is available at https://github.com/zhaohengyuan1/PAN.
2020-10-02T16:04:33Z
17 pages, 5 figures, conference, accepted by ECCVW (AIM2020 ESR Challenge)
null
null
null
null
null
null
null
null
null
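The pixel attention operation described in the abstract above (a full 3D attention map, one gate per channel and pixel, produced by a 1×1 convolution followed by a sigmoid) can be sketched in a few lines of numpy. The weights `w` and bias `b` are placeholders for learned parameters; this is an illustration of the formulation, not the released PAN code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pixel_attention(x, w, b):
    """x: (C, H, W) feature map; w: (C, C) 1x1-conv kernel; b: (C,) bias.
    Returns x reweighted by a 3D attention map with one gate per element,
    unlike a 1D channel-attention vector or a 2D spatial-attention map."""
    # a 1x1 convolution is a per-pixel linear map over channels
    z = np.einsum('oc,chw->ohw', w, x) + b[:, None, None]
    return x * sigmoid(z)

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 5, 5))
w = rng.standard_normal((3, 3))
b = np.zeros(3)
out = pixel_attention(x, w, b)
```

Since the sigmoid gate lies in (0, 1), each output element is a damped copy of the input element, which is why PA adds expressiveness with so few extra parameters.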
2010.01815
High-resolution Piano Transcription with Pedals by Regressing Onset and Offset Times
['Qiuqiang Kong', 'Bochen Li', 'Xuchen Song', 'Yuan Wan', 'Yuxuan Wang']
['cs.SD', 'eess.AS']
Automatic music transcription (AMT) is the task of transcribing audio recordings into symbolic representations. Recently, neural network-based methods have been applied to AMT and have achieved state-of-the-art results. However, many previous systems only detect the onset and offset of notes frame-wise, so the transcription resolution is limited to the frame hop size. There is a lack of research on using different strategies to encode onset and offset targets for training. In addition, previous AMT systems are sensitive to misaligned onset and offset labels in audio recordings. Furthermore, there is limited research on sustain-pedal transcription on large-scale datasets. In this article, we propose a high-resolution AMT system trained by regressing precise onset and offset times of piano notes. At inference, we propose an algorithm to analytically calculate the precise onset and offset times of piano notes and pedal events. We show that our AMT system is robust to misaligned onset and offset labels compared to previous systems. Our proposed system achieves an onset F1 of 96.72% on the MAESTRO dataset, outperforming the previous Onsets and Frames system's 94.80%. Our system achieves a pedal onset F1 score of 91.86%, which is the first benchmark result on the MAESTRO dataset. We have released the source code and checkpoints of our work at https://github.com/bytedance/piano_transcription.
2020-10-05T06:57:11Z
12 pages
null
null
null
null
null
null
null
null
null
2010.02405
Simple and Effective Few-Shot Named Entity Recognition with Structured Nearest Neighbor Learning
['Yi Yang', 'Arzoo Katiyar']
['cs.CL']
We present a simple few-shot named entity recognition (NER) system based on nearest neighbor learning and structured inference. Our system uses a supervised NER model trained on the source domain as a feature extractor. Across several test domains, we show that a nearest neighbor classifier in this feature space is far more effective than the standard meta-learning approaches. We further propose a cheap but effective method to capture the label dependencies between entity tags without expensive CRF training. We show that our method of combining structured decoding with nearest neighbor learning achieves state-of-the-art performance on standard few-shot NER evaluation tasks, improving F1 scores by $6\%$ to $16\%$ absolute points over prior meta-learning based systems.
2020-10-06T00:25:50Z
Accepted by EMNLP 2020
null
null
Frustratingly Simple Few-Shot Named Entity Recognition with Structured Nearest Neighbor Learning
['Yi Yang', 'Arzoo Katiyar']
2020
Conference on Empirical Methods in Natural Language Processing
65
29
['Computer Science']
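The nearest-neighbor core of the few-shot NER system above is easy to sketch: embed tokens with a source-domain NER encoder, then give each test token the tag of its closest support token. This is an illustrative sketch only; it omits the paper's structured (label-dependency) decoding, and all names are hypothetical.

```python
import numpy as np

def nearest_neighbor_tags(token_feats, support_feats, support_tags):
    """Assign each token the tag of its nearest support token (Euclidean)."""
    # pairwise squared distances, shape (n_tokens, n_support)
    diff = token_feats[:, None, :] - support_feats[None, :, :]
    dist = (diff ** 2).sum(axis=-1)
    return [support_tags[i] for i in dist.argmin(axis=1)]

# toy 2-D "features": two labeled support tokens, two test tokens
support_feats = np.array([[0.0, 0.0], [10.0, 10.0]])
support_tags = ['O', 'PER']
tokens = np.array([[1.0, 0.0], [9.0, 10.0]])
predicted = nearest_neighbor_tags(tokens, support_feats, support_tags)
```

No episodic meta-training is needed: the only learned component is the feature extractor, which is exactly the simplification the abstract highlights.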
2010.02502
Denoising Diffusion Implicit Models
['Jiaming Song', 'Chenlin Meng', 'Stefano Ermon']
['cs.LG', 'cs.CV']
Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples $10 \times$ to $50 \times$ faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.
2020-10-06T06:15:51Z
ICLR 2021; updated connections with ODEs at page 6, fixed some typos in the proof
null
null
null
null
null
null
null
null
null
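The deterministic (η = 0) DDIM update behind the faster sampling has a compact closed form: predict the clean sample from the model's noise estimate, then re-noise it at the earlier timestep. A numpy sketch with the cumulative ᾱ values passed in explicitly; variable names are illustrative.

```python
import numpy as np

def ddim_step(x_t, eps_hat, abar_t, abar_prev):
    """One deterministic DDIM step (eta = 0).
    abar_t, abar_prev: cumulative alpha-bar at the current/earlier timestep."""
    # invert the forward process to get the predicted clean sample x0
    x0_pred = (x_t - np.sqrt(1.0 - abar_t) * eps_hat) / np.sqrt(abar_t)
    # re-apply the *same* predicted noise at the earlier noise level
    return np.sqrt(abar_prev) * x0_pred + np.sqrt(1.0 - abar_prev) * eps_hat

# sanity check: if eps_hat is the exact noise used to corrupt x0, one step
# lands exactly on the forward-process point at abar_prev
x0 = np.array([1.0, -2.0])
eps = np.array([0.5, 0.25])
abar_t, abar_prev = 0.5, 0.8
x_t = np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * eps
x_prev = ddim_step(x_t, eps, abar_t, abar_prev)
```

Because no fresh noise is injected, consecutive timesteps can be skipped, which is what allows the 10× to 50× wall-clock speedups reported in the abstract.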
2010.02559
LEGAL-BERT: The Muppets straight out of Law School
['Ilias Chalkidis', 'Manos Fergadiotis', 'Prodromos Malakasiotis', 'Nikolaos Aletras', 'Ion Androutsopoulos']
['cs.CL']
BERT has achieved impressive performance in several NLP tasks. However, there has been limited investigation on its adaptation guidelines in specialised domains. Here we focus on the legal domain, where we explore several approaches for applying BERT models to downstream legal tasks, evaluating on multiple datasets. Our findings indicate that the previous guidelines for pre-training and fine-tuning, often blindly followed, do not always generalize well in the legal domain. Thus we propose a systematic investigation of the available strategies when applying BERT in specialised domains. These are: (a) use the original BERT out of the box, (b) adapt BERT by additional pre-training on domain-specific corpora, and (c) pre-train BERT from scratch on domain-specific corpora. We also propose a broader hyper-parameter search space when fine-tuning for downstream tasks and we release LEGAL-BERT, a family of BERT models intended to assist legal NLP research, computational law, and legal technology applications.
2020-10-06T09:06:07Z
5 pages, short paper in Findings of EMNLP 2020
null
null
LEGAL-BERT: “Preparing the Muppets for Court”
['Ilias Chalkidis', 'Manos Fergadiotis', 'Prodromos Malakasiotis', 'Nikolaos Aletras', 'Ion Androutsopoulos']
2020
Findings
265
32
['Computer Science']
2010.02666
Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation
['Sebastian Hofstätter', 'Sophia Althammer', 'Michael Schröder', 'Mete Sertkan', 'Allan Hanbury']
['cs.IR']
Retrieval and ranking models are the backbone of many applications such as web search, open-domain QA, and text-based recommender systems. The latency of neural ranking models at query time is largely dependent on the architecture and on deliberate choices by their designers to trade off effectiveness for higher efficiency. The focus on low query latency in a rising number of efficient ranking architectures makes them feasible for production deployment. In machine learning, an increasingly common approach to closing the effectiveness gap of more efficient models is to apply knowledge distillation from a large teacher model to a smaller student model. We find that different ranking architectures tend to produce output scores of different magnitudes. Based on this finding, we propose a cross-architecture training procedure with a margin-focused loss (Margin-MSE) that adapts knowledge distillation to the varying score output distributions of different BERT and non-BERT passage ranking architectures. We apply the teacher's scores as additional fine-grained labels to existing training triples of the MSMARCO-Passage collection. We evaluate our procedure by distilling knowledge from state-of-the-art concatenated BERT models to four different efficient architectures (TK, ColBERT, PreTT, and a BERT CLS dot product model). We show that across our evaluated architectures our Margin-MSE knowledge distillation significantly improves re-ranking effectiveness without compromising their efficiency. Additionally, we show that our general distillation method improves nearest-neighbor-based index retrieval with the BERT dot product model, offering competitive results with specialized and much more costly training methods. To benefit the community, we publish the teacher-score training files in a ready-to-use package.
2020-10-06T12:35:53Z
Updated paper with dense retrieval results and query-level analysis
null
null
null
null
null
null
null
null
null
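The Margin-MSE loss described above is compact enough to state directly: because different ranking architectures produce scores of different magnitudes, the student is trained to match the teacher's positive-minus-negative score *margin* rather than its raw scores. A minimal numpy sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def margin_mse(student_pos, student_neg, teacher_pos, teacher_neg):
    """MSE between the student's and the teacher's pos-neg score margins."""
    student_margin = student_pos - student_neg
    teacher_margin = teacher_pos - teacher_neg
    return float(np.mean((student_margin - teacher_margin) ** 2))

# identical margins (here: 1.0 vs 1.0) give zero loss even though the raw
# teacher and student scores live on completely different scales
loss = margin_mse(np.array([2.0]), np.array([1.0]),
                  np.array([7.0]), np.array([6.0]))
```

Shifting all of one model's scores by a constant leaves its margins, and hence the loss, unchanged, which is exactly the invariance that makes the loss work across architectures.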
2010.02810
Swiss Parliaments Corpus, an Automatically Aligned Swiss German Speech to Standard German Text Corpus
['Michel Plüss', 'Lukas Neukom', 'Christian Scheller', 'Manfred Vogel']
['cs.CL', 'cs.LG']
We present the Swiss Parliaments Corpus (SPC), an automatically aligned Swiss German speech to Standard German text corpus. This first version of the corpus is based on publicly available data of the Bernese cantonal parliament and consists of 293 hours of data. It was created using a novel forced sentence alignment procedure and an alignment quality estimator, which can be used to trade off corpus size and quality. We trained Automatic Speech Recognition (ASR) models as baselines on different subsets of the data and achieved a Word Error Rate (WER) of 0.278 and a BLEU score of 0.586 on the SPC test set. The corpus is freely available for download.
2020-10-06T15:18:21Z
8 pages, 0 figures
null
null
null
null
null
null
null
null
null
2010.03295
COMETA: A Corpus for Medical Entity Linking in the Social Media
['Marco Basaldella', 'Fangyu Liu', 'Ehsan Shareghi', 'Nigel Collier']
['cs.CL']
Whilst there has been growing progress in Entity Linking (EL) for general language, existing datasets fail to address the complex nature of health terminology in layman's language. Meanwhile, there is a growing need for applications that can understand the public's voice in the health domain. To address this we introduce a new corpus called COMETA, consisting of 20k English biomedical entity mentions from Reddit, expert-annotated with links to SNOMED CT, a widely-used medical knowledge graph. Our corpus satisfies a combination of desirable properties, from scale and coverage to diversity and quality, that to the best of our knowledge has not been met by any of the existing resources in the field. Through benchmark experiments on 20 EL baselines, from string-based to neural models, we shed light on the ability of these systems to perform complex inference on entities and concepts under 2 challenging evaluation scenarios. Our experimental results on COMETA illustrate that no silver bullet exists and even the best mainstream techniques still have a significant performance gap to fill, while the best solution relies on combining different views of the data.
2020-10-07T09:16:45Z
Accepted to EMNLP 2020
null
null
null
null
null
null
null
null
null
2010.03636
MOCHA: A Dataset for Training and Evaluating Generative Reading Comprehension Metrics
['Anthony Chen', 'Gabriel Stanovsky', 'Sameer Singh', 'Matt Gardner']
['cs.CL', 'cs.LG']
Posing reading comprehension as a generation problem provides a great deal of flexibility, allowing for open-ended questions with few restrictions on possible answers. However, progress is impeded by existing generation metrics, which rely on token overlap and are agnostic to the nuances of reading comprehension. To address this, we introduce a benchmark for training and evaluating generative reading comprehension metrics: MOdeling Correctness with Human Annotations. MOCHA contains 40K human judgement scores on model outputs from 6 diverse question answering datasets and an additional set of minimal pairs for evaluation. Using MOCHA, we train a Learned Evaluation metric for Reading Comprehension, LERC, to mimic human judgement scores. LERC outperforms baseline metrics by 10 to 36 absolute Pearson points on held-out annotations. When we evaluate robustness on minimal pairs, LERC achieves 80% accuracy, outperforming baselines by 14 to 26 absolute percentage points while leaving significant room for improvement. MOCHA presents a challenging problem for developing accurate and robust generative reading comprehension metrics.
2020-10-07T20:22:54Z
null
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
10.18653/v1/2020.emnlp-main.528
null
null
null
null
null
null
null
2010.04159
Deformable DETR: Deformable Transformers for End-to-End Object Detection
['Xizhou Zhu', 'Weijie Su', 'Lewei Lu', 'Bin Li', 'Xiaogang Wang', 'Jifeng Dai']
['cs.CV']
DETR has recently been proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we propose Deformable DETR, whose attention modules attend only to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10× fewer training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach. Code is released at https://github.com/fundamentalvision/Deformable-DETR.
2020-10-08T17:59:21Z
ICLR 2021 Oral
null
null
null
null
null
null
null
null
null
2010.04245
Query-Key Normalization for Transformers
['Alex Henry', 'Prudhvi Raj Dachapally', 'Shubham Pawar', 'Yuxuan Chen']
['cs.CL', 'cs.AI', 'cs.LG']
Low-resource language translation is a challenging but socially valuable NLP task. Building on recent work adapting the Transformer's normalization to this setting, we propose QKNorm, a normalization technique that modifies the attention mechanism to make the softmax function less prone to arbitrary saturation without sacrificing expressivity. Specifically, we apply $\ell_2$ normalization along the head dimension of each query and key matrix prior to multiplying them and then scale up by a learnable parameter instead of dividing by the square root of the embedding dimension. We show improvements averaging 0.928 BLEU over state-of-the-art bilingual benchmarks for 5 low-resource translation pairs from the TED Talks corpus and IWSLT'15.
2020-10-08T20:12:35Z
8 pages, 2 figures, accepted at Findings of EMNLP 2020
null
null
Query-Key Normalization for Transformers
['Alex Henry', 'Prudhvi Raj Dachapally', 'S. Pawar', 'Yuxuan Chen']
2020
Findings
91
41
['Computer Science']
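The QKNorm abstract above describes the modification concretely: ℓ2-normalize each query and key vector along the head dimension so the attention logits become cosine similarities, then multiply by a learnable scalar instead of dividing by √d. A single-head, unbatched numpy sketch (illustrative; `g` stands in for the learnable scale parameter):

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def qknorm_attention(Q, K, V, g):
    """Q: (n_q, d); K, V: (n_k, d); g: learnable scalar scale."""
    Qn = Q / np.linalg.norm(Q, axis=-1, keepdims=True)
    Kn = K / np.linalg.norm(K, axis=-1, keepdims=True)
    # cosine-similarity logits are bounded in [-g, g], so the softmax
    # cannot saturate arbitrarily as ||Q|| and ||K|| grow
    logits = g * (Qn @ Kn.T)
    return softmax(logits) @ V

rng = np.random.default_rng(1)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((5, 8))
V = rng.standard_normal((5, 8))
out = qknorm_attention(Q, K, V, g=5.0)
```

A useful consequence of the normalization: rescaling the queries or keys leaves the output unchanged, since only their directions matter.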
2010.04295
Widget Captioning: Generating Natural Language Description for Mobile User Interface Elements
['Yang Li', 'Gang Li', 'Luheng He', 'Jingjie Zheng', 'Hong Li', 'Zhiwei Guan']
['cs.LG', 'cs.AI', 'cs.CL', 'cs.HC']
Natural language descriptions of user interface (UI) elements such as alternative text are crucial for accessibility and language-based interaction in general. Yet, these descriptions are constantly missing in mobile UIs. We propose widget captioning, a novel task for automatically generating language descriptions for UI elements from multimodal input including both the image and the structural representations of user interfaces. We collected a large-scale dataset for widget captioning with crowdsourcing. Our dataset contains 162,859 language phrases created by human workers for annotating 61,285 UI elements across 21,750 unique UI screens. We thoroughly analyze the dataset, and train and evaluate a set of deep model configurations to investigate how each feature modality as well as the choice of learning strategies impact the quality of predicted captions. The task formulation and the dataset as well as our benchmark models contribute a solid basis for this novel multimodal captioning task that connects language and user interfaces.
2020-10-08T22:56:03Z
16 pages, EMNLP 2020
null
null
Widget Captioning: Generating Natural Language Description for Mobile User Interface Elements
['Y. Li', 'Gang Li', 'Luheng He', 'Jingjie Zheng', 'Hong Li', 'Zhiwei Guan']
2020
Conference on Empirical Methods in Natural Language Processing
110
39
['Computer Science']
2010.04806
AutoQA: From Databases To QA Semantic Parsers With Only Synthetic Training Data
['Silei Xu', 'Sina J. Semnani', 'Giovanni Campagna', 'Monica S. Lam']
['cs.CL']
We propose AutoQA, a methodology and toolkit to generate semantic parsers that answer questions on databases, with no manual effort. Given a database schema and its data, AutoQA automatically generates a large set of high-quality questions for training that covers different database operations. It uses automatic paraphrasing combined with template-based parsing to find alternative expressions of an attribute in different parts of speech. It also uses a novel filtered auto-paraphraser to generate correct paraphrases of entire sentences. We apply AutoQA to the Schema2QA dataset and obtain an average logical form accuracy of 62.9% when tested on natural questions, which is only 6.4% lower than a model trained with expert natural language annotations and paraphrase data collected from crowdworkers. To demonstrate the generality of AutoQA, we also apply it to the Overnight dataset. AutoQA achieves 69.8% answer accuracy, 16.4% higher than the state-of-the-art zero-shot models and only 5.2% lower than the same model trained with human data.
2020-10-09T21:06:57Z
To appear in EMNLP 2020
null
null
null
null
null
null
null
null
null
2010.05171
fairseq S2T: Fast Speech-to-Text Modeling with fairseq
['Changhan Wang', 'Yun Tang', 'Xutai Ma', 'Anne Wu', 'Sravya Popuri', 'Dmytro Okhonko', 'Juan Pino']
['cs.CL', 'eess.AS']
We introduce fairseq S2T, a fairseq extension for speech-to-text (S2T) modeling tasks such as end-to-end speech recognition and speech-to-text translation. It follows fairseq's careful design for scalability and extensibility. We provide end-to-end workflows from data pre-processing, model training to offline (online) inference. We implement state-of-the-art RNN-based, Transformer-based as well as Conformer-based models and open-source detailed training recipes. Fairseq's machine translation models and language models can be seamlessly integrated into S2T workflows for multi-task learning or transfer learning. Fairseq S2T documentation and examples are available at https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text.
2020-10-11T05:36:54Z
Post-conference updates (accepted to AACL 2020 Demo)
null
null
null
null
null
null
null
null
null
2010.05338
We Can Detect Your Bias: Predicting the Political Ideology of News Articles
['Ramy Baly', 'Giovanni Da San Martino', 'James Glass', 'Preslav Nakov']
['cs.CL']
We explore the task of predicting the leading political ideology or bias of news articles. First, we collect and release a large dataset of 34,737 articles that were manually annotated for political ideology -left, center, or right-, which is well-balanced across both topics and media. We further use a challenging experimental setup where the test examples come from media that were not seen during training, which prevents the model from learning to detect the source of the target news article instead of predicting its political ideology. From a modeling perspective, we propose an adversarial media adaptation, as well as a specially adapted triplet loss. We further add background information about the source, and we show that it is quite helpful for improving article-level prediction. Our experimental results show very sizable improvements over using state-of-the-art pre-trained Transformers in this challenging setup.
2020-10-11T20:27:55Z
Political bias, bias in news, neural networks bias, adversarial adaptation, triplet loss, transformers, recurrent neural networks
EMNLP-2020
null
null
null
null
null
null
null
null
2010.05609
Load What You Need: Smaller Versions of Multilingual BERT
['Amine Abdaoui', 'Camille Pradel', 'Grégoire Sigel']
['cs.CL', 'cs.AI', 'cs.LG']
Pre-trained Transformer-based models are achieving state-of-the-art results on a variety of Natural Language Processing data sets. However, the size of these models is often a drawback for their deployment in real production applications. In the case of multilingual models, most of the parameters are located in the embeddings layer. Therefore, reducing the vocabulary size should have an important impact on the total number of parameters. In this paper, we propose to generate smaller models that handle a smaller number of languages according to the targeted corpora. We present an evaluation of smaller versions of multilingual BERT on the XNLI data set, but we believe that this method may be applied to other multilingual transformers. The obtained results confirm that we can generate smaller models that keep comparable results, while reducing the total number of parameters by up to 45%. We compared our models with DistilmBERT (a distilled version of multilingual BERT) and showed that unlike language reduction, distillation induced a 1.7% to 6% drop in the overall accuracy on the XNLI data set. The presented models and code are publicly available.
2020-10-12T11:29:06Z
null
SustaiNLP / EMNLP 2020
null
null
null
null
null
null
null
null
2010.05646
HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis
['Jungil Kong', 'Jaehyeon Kim', 'Jaekyoung Bae']
['cs.SD', 'cs.LG', 'eess.AS']
Several recent works on speech synthesis have employed generative adversarial networks (GANs) to produce raw waveforms. Although such methods improve the sampling efficiency and memory usage, their sample quality has not yet reached that of autoregressive and flow-based generative models. In this work, we propose HiFi-GAN, which achieves both efficient and high-fidelity speech synthesis. As speech audio consists of sinusoidal signals with various periods, we demonstrate that modeling the periodic patterns of audio is crucial for enhancing sample quality. A subjective human evaluation (mean opinion score, MOS) of a single speaker dataset indicates that our proposed method demonstrates similarity to human quality while generating 22.05 kHz high-fidelity audio 167.9 times faster than real-time on a single V100 GPU. We further show the generality of HiFi-GAN to the mel-spectrogram inversion of unseen speakers and end-to-end speech synthesis. Finally, a small footprint version of HiFi-GAN generates samples 13.4 times faster than real-time on CPU with comparable quality to an autoregressive counterpart.
2020-10-12T12:33:43Z
NeurIPS 2020. Code available at https://github.com/jik876/hifi-gan
null
null
null
null
null
null
null
null
null
2010.05700
Reformulating Unsupervised Style Transfer as Paraphrase Generation
['Kalpesh Krishna', 'John Wieting', 'Mohit Iyyer']
['cs.CL']
Modern NLP defines the task of style transfer as modifying the style of a given sentence without appreciably changing its semantics, which implies that the outputs of style transfer systems should be paraphrases of their inputs. However, many existing systems purportedly designed for style transfer inherently warp the input's meaning through attribute transfer, which changes semantic properties such as sentiment. In this paper, we reformulate unsupervised style transfer as a paraphrase generation problem, and present a simple methodology based on fine-tuning pretrained language models on automatically generated paraphrase data. Despite its simplicity, our method significantly outperforms state-of-the-art style transfer systems on both human and automatic evaluations. We also survey 23 style transfer papers and discover that existing automatic metrics can be easily gamed and propose fixed variants. Finally, we pivot to a more real-world style transfer setting by collecting a large dataset of 15M sentences in 11 diverse styles, which we use for an in-depth analysis of our system.
2020-10-12T13:31:01Z
EMNLP 2020 camera-ready (26 pages)
null
null
Reformulating Unsupervised Style Transfer as Paraphrase Generation
['Kalpesh Krishna', 'J. Wieting', 'Mohit Iyyer']
2020
Conference on Empirical Methods in Natural Language Processing
242
112
['Computer Science']
2010.05987
SLEDGE-Z: A Zero-Shot Baseline for COVID-19 Literature Search
['Sean MacAvaney', 'Arman Cohan', 'Nazli Goharian']
['cs.CL', 'cs.IR']
With worldwide concerns surrounding the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), there is a rapidly growing body of scientific literature on the virus. Clinicians, researchers, and policy-makers need to be able to search these articles effectively. In this work, we present a zero-shot ranking algorithm that adapts to COVID-related scientific literature. Our approach filters training data from another collection down to medical-related queries, uses a neural re-ranking model pre-trained on scientific text (SciBERT), and filters the target document collection. This approach ranks top among zero-shot methods on the TREC COVID Round 1 leaderboard, and exhibits a P@5 of 0.80 and an nDCG@10 of 0.68 when evaluated on both Round 1 and 2 judgments. Despite not relying on TREC-COVID data, our method outperforms models that do. As one of the first search methods to thoroughly evaluate COVID-19 search, we hope that this serves as a strong baseline and helps in the global crisis.
2020-10-12T19:28:29Z
EMNLP 2020. This article draws heavily from arXiv:2005.02365
null
null
null
null
null
null
null
null
null
2010.06000
MedICaT: A Dataset of Medical Images, Captions, and Textual References
['Sanjay Subramanian', 'Lucy Lu Wang', 'Sachin Mehta', 'Ben Bogin', 'Madeleine van Zuylen', 'Sravanthi Parasa', 'Sameer Singh', 'Matt Gardner', 'Hannaneh Hajishirzi']
['cs.CV', 'cs.CL']
Understanding the relationship between figures and text is key to scientific document understanding. Medical figures in particular are quite complex, often consisting of several subfigures (75% of figures in our dataset), with detailed text describing their content. Previous work studying figures in scientific papers focused on classifying figure content rather than understanding how images relate to the text. To address challenges in figure retrieval and figure-to-text alignment, we introduce MedICaT, a dataset of medical images in context. MedICaT consists of 217K images from 131K open access biomedical papers, and includes captions, inline references for 74% of figures, and manually annotated subfigures and subcaptions for a subset of figures. Using MedICaT, we introduce the task of subfigure to subcaption alignment in compound figures and demonstrate the utility of inline references in image-text matching. Our data and code can be accessed at https://github.com/allenai/medicat.
2020-10-12T19:56:08Z
EMNLP-Findings 2020
null
null
MedICaT: A Dataset of Medical Images, Captions, and Textual References
['Sanjay Subramanian', 'Lucy Lu Wang', 'Sachin Mehta', 'Ben Bogin', 'Madeleine van Zuylen', 'S. Parasa', 'Sameer Singh', 'Matt Gardner', 'Hannaneh Hajishirzi']
2020
Findings
74
24
['Computer Science']
2010.06032
Measuring and Reducing Gendered Correlations in Pre-trained Models
['Kellie Webster', 'Xuezhi Wang', 'Ian Tenney', 'Alex Beutel', 'Emily Pitler', 'Ellie Pavlick', 'Jilin Chen', 'Ed Chi', 'Slav Petrov']
['cs.CL']
Pre-trained models have revolutionized natural language understanding. However, researchers have found they can encode artifacts undesired in many applications, such as professions correlating with one gender more than another. We explore such gendered correlations as a case study for how to address unintended correlations in pre-trained models. We define metrics and reveal that it is possible for models with similar accuracy to encode correlations at very different rates. We show how measured correlations can be reduced with general-purpose techniques, and highlight the trade offs different strategies have. With these results, we make recommendations for training robust models: (1) carefully evaluate unintended correlations, (2) be mindful of seemingly innocuous configuration differences, and (3) focus on general mitigations.
2020-10-12T21:15:29Z
null
null
null
null
null
null
null
null
null
null
2010.06060
BioMegatron: Larger Biomedical Domain Language Model
['Hoo-Chang Shin', 'Yang Zhang', 'Evelina Bakhturina', 'Raul Puri', 'Mostofa Patwary', 'Mohammad Shoeybi', 'Raghav Mani']
['cs.CL']
There has been an influx of biomedical domain-specific language models, showing language models pre-trained on biomedical text perform better on biomedical domain benchmarks than those trained on general domain text corpora such as Wikipedia and Books. Yet, most works do not study the factors affecting each domain language application deeply. Additionally, the study of model size on domain-specific models has been mostly missing. We empirically study and evaluate several factors that can affect performance on domain language applications, such as the sub-word vocabulary set, model size, pre-training corpus, and domain transfer. We show consistent improvements on benchmarks with our larger BioMegatron model trained on a larger domain corpus, contributing to our understanding of domain language model applications. We demonstrate noticeable improvements over the previous state-of-the-art (SOTA) on standard biomedical NLP benchmarks of named entity recognition, relation extraction, and question answering. Model checkpoints and code are available at [https://ngc.nvidia.com] and [https://github.com/NVIDIA/NeMo].
2020-10-12T22:46:10Z
Accepted for publication at EMNLP 2020
null
null
null
null
null
null
null
null
null
2010.06192
Revisiting BFloat16 Training
['Pedram Zamirai', 'Jian Zhang', 'Christopher R. Aberger', 'Christopher De Sa']
['cs.LG', 'stat.ML']
State-of-the-art generic low-precision training algorithms use a mix of 16-bit and 32-bit precision, creating the folklore that 16-bit hardware compute units alone are not enough to maximize model accuracy. As a result, deep learning accelerators are forced to support both 16-bit and 32-bit floating-point units (FPUs), which is more costly than only using 16-bit FPUs for hardware design. We ask: can we train deep learning models only with 16-bit floating-point units, while still matching the model accuracy attained by 32-bit training? Towards this end, we study 16-bit-FPU training on the widely adopted BFloat16 unit. While these units conventionally use nearest rounding to cast output to 16-bit precision, we show that nearest rounding for model weight updates often cancels small updates, which degrades the convergence and model accuracy. Motivated by this, we study two simple techniques well-established in numerical analysis, stochastic rounding and Kahan summation, to remedy the model accuracy degradation in 16-bit-FPU training. We demonstrate that these two techniques can enable up to 7% absolute validation accuracy gain in 16-bit-FPU training. This leads to 0.1% lower to 0.2% higher validation accuracy compared to 32-bit training across seven deep learning applications.
2020-10-13T05:38:07Z
null
null
null
null
null
null
null
null
null
null
2010.06354
The Tatoeba Translation Challenge -- Realistic Data Sets for Low Resource and Multilingual MT
['Jörg Tiedemann']
['cs.CL']
This paper describes the development of a new benchmark for machine translation that provides training and test data for thousands of language pairs covering over 500 languages and tools for creating state-of-the-art translation models from that collection. The main goal is to trigger the development of open translation tools and models with a much broader coverage of the World's languages. Using the package it is possible to work on realistic low-resource scenarios avoiding artificially reduced setups that are common when demonstrating zero-shot or few-shot learning. For the first time, this package provides a comprehensive collection of diverse data sets in hundreds of languages with systematic language and script annotation and data splits to extend the narrow coverage of existing benchmarks. Together with the data release, we also provide a growing number of pre-trained baseline models for individual language pairs and selected language groups.
2020-10-13T13:12:21Z
to appear at the 5th Conference on Machine Translation (WMT20)
null
null
The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT
['J. Tiedemann']
2020
Conference on Machine Translation
171
16
['Computer Science']
2010.06395
Aspect-based Document Similarity for Research Papers
['Malte Ostendorff', 'Terry Ruas', 'Till Blume', 'Bela Gipp', 'Georg Rehm']
['cs.CL', 'cs.IR']
Traditional document similarity measures provide a coarse-grained distinction between similar and dissimilar documents. Typically, they do not consider in what aspects two documents are similar. This limits the granularity of applications like recommender systems that rely on document similarity. In this paper, we extend similarity with aspect information by performing a pairwise document classification task. We evaluate our aspect-based document similarity for research papers. Paper citations indicate the aspect-based similarity, i.e., the section title in which a citation occurs acts as a label for the pair of citing and cited paper. We apply a series of Transformer models such as RoBERTa, ELECTRA, XLNet, and BERT variations and compare them to an LSTM baseline. We perform our experiments on two newly constructed datasets of 172,073 research paper pairs from the ACL Anthology and CORD-19 corpus. Our results show SciBERT as the best performing system. A qualitative examination validates our quantitative results. Our findings motivate future research of aspect-based document similarity and the development of a recommender system based on the evaluated techniques. We make our datasets, code, and trained models publicly available.
2020-10-13T13:51:21Z
Accepted for publication at COLING 2020
null
null
Aspect-based Document Similarity for Research Papers
['Malte Ostendorff', 'Terry Ruas', 'Till Blume', 'Bela Gipp', 'Georg Rehm']
2020
International Conference on Computational Linguistics
27
56
['Computer Science']
2010.07290
XPDNet for MRI Reconstruction: an application to the 2020 fastMRI challenge
['Zaccharie Ramzi', 'Philippe Ciuciu', 'Jean-Luc Starck']
['eess.IV', 'cs.CV', 'cs.LG', 'physics.med-ph', 'stat.ML']
We present a new neural network, the XPDNet, for MRI reconstruction from periodically under-sampled multi-coil data. We inform the design of this network by taking best practices from MRI reconstruction and computer vision. We show that this network can achieve state-of-the-art reconstruction results, as shown by its ranking of second in the fastMRI 2020 challenge.
2020-10-15T14:45:00Z
8 pages, 3 figures, presented as an oral at the 2021 ISMRM conference
null
null
null
null
null
null
null
null
null
2010.07611
Layer-adaptive sparsity for the Magnitude-based Pruning
['Jaeho Lee', 'Sejun Park', 'Sangwoo Mo', 'Sungsoo Ahn', 'Jinwoo Shin']
['cs.LG']
Recent discoveries on neural network pruning reveal that, with a carefully chosen layerwise sparsity, a simple magnitude-based pruning achieves state-of-the-art tradeoff between sparsity and performance. However, without a clear consensus on "how to choose," the layerwise sparsities are mostly selected algorithm-by-algorithm, often resorting to handcrafted heuristics or an extensive hyperparameter search. To fill this gap, we propose a novel importance score for global pruning, coined layer-adaptive magnitude-based pruning (LAMP) score; the score is a rescaled version of weight magnitude that incorporates the model-level $\ell_2$ distortion incurred by pruning, and does not require any hyperparameter tuning or heavy computation. Under various image classification setups, LAMP consistently outperforms popular existing schemes for layerwise sparsity selection. Furthermore, we observe that LAMP continues to outperform baselines even in weight-rewinding setups, while the connectivity-oriented layerwise sparsity (the strongest baseline overall) performs worse than a simple global magnitude-based pruning in this case. Code: https://github.com/jaeho-lee/layer-adaptive-sparsity
2020-10-15T09:14:02Z
ICLR 2021. Changed title (previous ver: A deeper look at the layerwise sparsity of magnitude-based pruning)
null
null
null
null
null
null
null
null
null
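The LAMP score summarized in the abstract above ("a rescaled version of weight magnitude that incorporates the model-level ℓ2 distortion incurred by pruning") can be realized compactly. The formula below (each squared magnitude divided by the sum of squares of all weights at least as large) is one concrete reading of that description, sketched here for illustration rather than taken from the authors' release.

```python
def lamp_scores(layer_weights):
    # Sort by magnitude (ascending), then score each weight by its squared
    # magnitude over the suffix sum of squared magnitudes -- i.e. relative
    # to the l2 mass of the weights that would survive if it were pruned.
    ws = sorted(abs(w) for w in layer_weights)
    sq = [w * w for w in ws]
    suffix = [0.0] * (len(sq) + 1)
    for i in range(len(sq) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + sq[i]
    # No hyperparameters, no gradients: just magnitudes and suffix sums.
    return [sq[i] / suffix[i] for i in range(len(sq))]
```

Scores are monotone in the sorted order and the largest weight always scores 1.0, so thresholding the scores globally induces a layerwise sparsity pattern without any per-layer tuning.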
2010.08240
Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks
['Nandan Thakur', 'Nils Reimers', 'Johannes Daxenberger', 'Iryna Gurevych']
['cs.CL']
There are two approaches for pairwise sentence scoring: Cross-encoders, which perform full-attention over the input pair, and Bi-encoders, which map each input independently to a dense vector space. While cross-encoders often achieve higher performance, they are too slow for many practical use cases. Bi-encoders, on the other hand, require substantial training data and fine-tuning over the target task to achieve competitive performance. We present a simple yet efficient data augmentation strategy called Augmented SBERT, where we use the cross-encoder to label a larger set of input pairs to augment the training data for the bi-encoder. We show that, in this process, selecting the sentence pairs is non-trivial and crucial for the success of the method. We evaluate our approach on multiple tasks (in-domain) as well as on a domain adaptation task. Augmented SBERT achieves an improvement of up to 6 points for in-domain and of up to 37 points for domain adaptation tasks compared to the original bi-encoder performance.
2020-10-16T08:43:27Z
Accepted at NAACL 2021
null
null
null
null
null
null
null
null
null
2010.08895
Fourier Neural Operator for Parametric Partial Differential Equations
['Zongyi Li', 'Nikola Kovachki', 'Kamyar Azizzadenesheli', 'Burigede Liu', 'Kaushik Bhattacharya', 'Andrew Stuart', 'Anima Anandkumar']
['cs.LG', 'cs.NA', 'math.NA']
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces. Recently, this has been generalized to neural operators that learn mappings between function spaces. For partial differential equations (PDEs), neural operators directly learn the mapping from any functional parametric dependence to the solution. Thus, they learn an entire family of PDEs, in contrast to classical methods which solve one instance of the equation. In this work, we formulate a new neural operator by parameterizing the integral kernel directly in Fourier space, allowing for an expressive and efficient architecture. We perform experiments on Burgers' equation, Darcy flow, and Navier-Stokes equation. The Fourier neural operator is the first ML-based method to successfully model turbulent flows with zero-shot super-resolution. It is up to three orders of magnitude faster compared to traditional PDE solvers. Additionally, it achieves superior accuracy compared to previous learning-based solvers under fixed resolution.
2020-10-18T00:34:21Z
null
null
null
null
null
null
null
null
null
null
2010.09885
ChemBERTa: Large-Scale Self-Supervised Pretraining for Molecular Property Prediction
['Seyone Chithrananda', 'Gabriel Grand', 'Bharath Ramsundar']
['cs.LG', 'cs.CL', 'physics.chem-ph', 'q-bio.BM', 'I.2.7; I.2.1; J.2; J.3']
GNNs and chemical fingerprints are the predominant approaches to representing molecules for property prediction. However, in NLP, transformers have become the de-facto standard for representation learning thanks to their strong downstream task transfer. In parallel, the software ecosystem around transformers is maturing rapidly, with libraries like HuggingFace and BertViz enabling streamlined training and introspection. In this work, we make one of the first attempts to systematically evaluate transformers on molecular property prediction tasks via our ChemBERTa model. ChemBERTa scales well with pretraining dataset size, offering competitive downstream performance on MoleculeNet and useful attention-based visualization modalities. Our results suggest that transformers offer a promising avenue of future work for molecular representation learning and property prediction. To facilitate these efforts, we release a curated dataset of 77M SMILES from PubChem suitable for large-scale self-supervised pretraining.
2020-10-19T21:41:41Z
Submitted to NeurIPS 2020 ML for Molecules Workshop
null
null
null
null
null
null
null
null
null
2010.09931
Smooth activations and reproducibility in deep networks
['Gil I. Shamir', 'Dong Lin', 'Lorenzo Coviello']
['cs.LG', 'cs.NE', 'stat.ML']
Deep networks are gradually penetrating almost every domain in our lives due to their amazing success. However, with substantive performance accuracy improvements comes the price of \emph{irreproducibility}. Two identical models, trained on the exact same training dataset may exhibit large differences in predictions on individual examples even when average accuracy is similar, especially when trained on highly distributed parallel systems. The popular Rectified Linear Unit (ReLU) activation has been key to recent success of deep networks. We demonstrate, however, that ReLU is also a catalyzer to irreproducibility in deep networks. We show that not only can activations smoother than ReLU provide better accuracy, but they can also provide better accuracy-reproducibility tradeoffs. We propose a new family of activations; Smooth ReLU (\emph{SmeLU}), designed to give such better tradeoffs, while also keeping the mathematical expression simple, and thus implementation cheap. SmeLU is monotonic, mimics ReLU, while providing continuous gradients, yielding better reproducibility. We generalize SmeLU to give even more flexibility and then demonstrate that SmeLU and its generalized form are special cases of a more general methodology of REctified Smooth Continuous Unit (RESCU) activations. Empirical results demonstrate the superior accuracy-reproducibility tradeoffs with smooth activations, SmeLU in particular.
2020-10-20T00:06:47Z
null
null
null
null
null
null
null
null
null
null
2010.10137
PROP: Pre-training with Representative Words Prediction for Ad-hoc Retrieval
['Xinyu Ma', 'Jiafeng Guo', 'Ruqing Zhang', 'Yixing Fan', 'Xiang Ji', 'Xueqi Cheng']
['cs.IR', 'H.3.3']
Recently pre-trained language representation models such as BERT have shown great success when fine-tuned on downstream tasks including information retrieval (IR). However, pre-training objectives tailored for ad-hoc retrieval have not been well explored. In this paper, we propose Pre-training with Representative wOrds Prediction (PROP) for ad-hoc retrieval. PROP is inspired by the classical statistical language model for IR, specifically the query likelihood model, which assumes that the query is generated as the piece of text representative of the "ideal" document. Based on this idea, we construct the representative words prediction (ROP) task for pre-training. Given an input document, we sample a pair of word sets according to the document language model, where the set with higher likelihood is deemed as more representative of the document. We then pre-train the Transformer model to predict the pairwise preference between the two word sets, jointly with the Masked Language Model (MLM) objective. By further fine-tuning on a variety of representative downstream ad-hoc retrieval tasks, PROP achieves significant improvements over baselines without pre-training or with other pre-training methods. We also show that PROP can achieve exciting performance under both the zero- and low-resource IR settings. The code and pre-trained models are available at https://github.com/Albert-Ma/PROP.
2020-10-20T09:04:56Z
Accepted by WSDM2021
null
10.1145/3437963.3441777
PROP: Pre-training with Representative Words Prediction for Ad-hoc Retrieval
['Xinyu Ma', 'Jiafeng Guo', 'Ruqing Zhang', 'Yixing Fan', 'Xiang Ji', 'Xueqi Cheng']
2020
Web Search and Data Mining
98
50
['Computer Science']
2010.10392
CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters
['Hicham El Boukkouri', 'Olivier Ferret', 'Thomas Lavergne', 'Hiroshi Noji', 'Pierre Zweigenbaum', 'Junichi Tsujii']
['cs.CL']
Due to the compelling improvements brought by BERT, many recent representation models adopted the Transformer architecture as their main building block, consequently inheriting the wordpiece tokenization system despite it not being intrinsically linked to the notion of Transformers. While this system is thought to achieve a good balance between the flexibility of characters and the efficiency of full words, using predefined wordpiece vocabularies from the general domain is not always suitable, especially when building models for specialized domains (e.g., the medical domain). Moreover, adopting a wordpiece tokenization shifts the focus from the word level to the subword level, making the models conceptually more complex and arguably less convenient in practice. For these reasons, we propose CharacterBERT, a new variant of BERT that drops the wordpiece system altogether and uses a Character-CNN module instead to represent entire words by consulting their characters. We show that this new model improves the performance of BERT on a variety of medical domain tasks while at the same time producing robust, word-level and open-vocabulary representations.
2020-10-20T15:58:53Z
13 pages, 8 figures and 3 tables. Accepted at COLING 2020
null
null
null
null
null
null
null
null
null
2010.10499
Optimal Subarchitecture Extraction For BERT
['Adrian de Wynter', 'Daniel J. Perry']
['cs.CL', 'cs.LG']
We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as "Bort", is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of $5.5\%$ the original BERT-large architecture, and $16\%$ of the net size. Bort is also able to be pretrained in $288$ GPU hours, which is $1.2\%$ of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large (Liu et al., 2019), and about $33\%$ of that of the world-record, in GPU hours, required to train BERT-large on the same hardware. It is also $7.9$x faster on a CPU, as well as being better performing than other compressed variants of the architecture, and some of the non-compressed variants: it obtains performance improvements of between $0.3\%$ and $31\%$, absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks.
2020-10-20T17:53:01Z
Preprint. Under review. Corrected typos on v2
null
null
null
null
null
null
null
null
null
2010.10864
A Short Note on the Kinetics-700-2020 Human Action Dataset
['Lucas Smaira', 'João Carreira', 'Eric Noland', 'Ellen Clancy', 'Amy Wu', 'Andrew Zisserman']
['cs.CV', 'cs.LG']
We describe the 2020 edition of the DeepMind Kinetics human action dataset, which replenishes and extends the Kinetics-700 dataset. In this new version, there are at least 700 video clips from different YouTube videos for each of the 700 classes. This paper details the changes introduced for this new release of the dataset and includes a comprehensive set of statistics as well as baseline results using the I3D network.
2020-10-21T09:47:09Z
null
null
null
null
null
null
null
null
null
null
2010.10906
German's Next Language Model
['Branden Chan', 'Stefan Schweter', 'Timo Möller']
['cs.CL', 'cs.LG']
In this work we present the experiments which led to the creation of our BERT- and ELECTRA-based German language models, GBERT and GELECTRA. By varying the input training data, model size, and the presence of Whole Word Masking (WWM) we were able to attain SoTA performance across a set of document classification and named entity recognition (NER) tasks for both models of base and large size. We adopt an evaluation-driven approach in training these models and our results indicate that both adding more data and utilizing WWM improve model performance. By benchmarking against existing German models, we show that these models are the best German models to date. Our trained models will be made publicly available to the research community.
2020-10-21T11:28:23Z
Accepted by COLING2020
null
null
null
null
null
null
null
null
null
2010.10999
Is Retriever Merely an Approximator of Reader?
['Sohee Yang', 'Minjoon Seo']
['cs.CL']
The state of the art in open-domain question answering (QA) relies on an efficient retriever that drastically reduces the search space for the expensive reader. A rather overlooked question in the community is the relationship between the retriever and the reader, and in particular, if the whole purpose of the retriever is just a fast approximation for the reader. Our empirical evidence indicates that the answer is no, and that the reader and the retriever are complementary to each other even in terms of accuracy only. We make a careful conjecture that the architectural constraint of the retriever, which has been originally intended for enabling approximate search, seems to also make the model more robust in large-scale search. We then propose to distill the reader into the retriever so that the retriever absorbs the strength of the reader while keeping its own benefit. Experimental results show that our method can enhance the document recall rate as well as the end-to-end QA accuracy of off-the-shelf retrievers in open-domain QA tasks.
2020-10-21T13:40:15Z
null
null
null
null
null
null
null
null
null
null
2010.11125
Beyond English-Centric Multilingual Machine Translation
['Angela Fan', 'Shruti Bhosale', 'Holger Schwenk', 'Zhiyi Ma', 'Ahmed El-Kishky', 'Siddharth Goyal', 'Mandeep Baines', 'Onur Celebi', 'Guillaume Wenzek', 'Vishrav Chaudhary', 'Naman Goyal', 'Tom Birch', 'Vitaliy Liptchinsky', 'Sergey Edunov', 'Edouard Grave', 'Michael Auli', 'Armand Joulin']
['cs.CL', 'cs.LG']
Existing work in translation has demonstrated the potential of massively multilingual machine translation by training a single model able to translate between any pair of languages. However, much of this work is English-Centric, training only on data which was translated from or to English. While this is supported by large sources of training data, it does not reflect translation needs worldwide. In this work, we create a true Many-to-Many multilingual translation model that can translate directly between any pair of 100 languages. We build and open source a training dataset that covers thousands of language directions with supervised data, created through large-scale mining. Then, we explore how to effectively increase model capacity through a combination of dense scaling and language-specific sparse parameters to create high quality models. Our focus on non-English-Centric models brings gains of more than 10 BLEU when directly translating between non-English directions while performing competitively to the best single systems of WMT. We open-source our scripts so that others may reproduce the data, evaluation, and final M2M-100 model.
2020-10-21T17:01:23Z
null
null
null
null
null
null
null
null
null
null
2010.11386
Distilling Dense Representations for Ranking using Tightly-Coupled Teachers
['Sheng-Chieh Lin', 'Jheng-Hong Yang', 'Jimmy Lin']
['cs.IR', 'cs.CL']
We present an approach to ranking with dense representations that applies knowledge distillation to improve the recently proposed late-interaction ColBERT model. Specifically, we distill the knowledge from ColBERT's expressive MaxSim operator for computing relevance scores into a simple dot product, thus enabling single-step ANN search. Our key insight is that during distillation, tight coupling between the teacher model and the student model enables more flexible distillation strategies and yields better learned representations. We empirically show that our approach improves query latency and greatly reduces the onerous storage requirements of ColBERT, while only making modest sacrifices in terms of effectiveness. By combining our dense representations with sparse representations derived from document expansion, we are able to approach the effectiveness of a standard cross-encoder reranker using BERT that is orders of magnitude slower.
2020-10-22T02:26:01Z
null
null
null
Distilling Dense Representations for Ranking using Tightly-Coupled Teachers
['Sheng-Chieh Lin', 'Jheng-Hong Yang', 'Jimmy J. Lin']
2020
arXiv.org
122
28
['Computer Science']
2010.11430
Self-training and Pre-training are Complementary for Speech Recognition
['Qiantong Xu', 'Alexei Baevski', 'Tatiana Likhomanenko', 'Paden Tomasello', 'Alexis Conneau', 'Ronan Collobert', 'Gabriel Synnaeve', 'Michael Auli']
['cs.LG', 'cs.SD', 'eess.AS']
Self-training and unsupervised pre-training have emerged as effective approaches to improve speech recognition systems using unlabeled data. However, it is not clear whether they learn similar patterns or if they can be effectively combined. In this paper, we show that pseudo-labeling and pre-training with wav2vec 2.0 are complementary in a variety of labeled data setups. Using just 10 minutes of labeled data from Libri-light as well as 53k hours of unlabeled data from LibriVox achieves WERs of 3.0%/5.2% on the clean and other test sets of Librispeech - rivaling the best published systems trained on 960 hours of labeled data only a year ago. Training on all labeled data of Librispeech achieves WERs of 1.5%/3.1%.
2020-10-22T04:15:37Z
null
null
null
Self-Training and Pre-Training are Complementary for Speech Recognition
['Qiantong Xu', 'Alexei Baevski', 'Tatiana Likhomanenko', 'Paden Tomasello', 'Alexis Conneau', 'R. Collobert', 'Gabriel Synnaeve', 'Michael Auli']
2020
IEEE International Conference on Acoustics, Speech, and Signal Processing
173
38
['Computer Science', 'Engineering']
2010.11784
Self-Alignment Pretraining for Biomedical Entity Representations
['Fangyu Liu', 'Ehsan Shareghi', 'Zaiqiao Meng', 'Marco Basaldella', 'Nigel Collier']
['cs.CL', 'cs.AI', 'cs.LG']
Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SapBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SapBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BioBERT, SciBERT, and PubMedBERT, our pretraining scheme proves to be both effective and robust.
2020-10-22T14:59:57Z
NAACL 2021 camera-ready version
null
null
null
null
null
null
null
null
null
2010.11856
XOR QA: Cross-lingual Open-Retrieval Question Answering
['Akari Asai', 'Jungo Kasai', 'Jonathan H. Clark', 'Kenton Lee', 'Eunsol Choi', 'Hannaneh Hajishirzi']
['cs.CL']
Multilingual question answering tasks typically assume answers exist in the same language as the question. Yet in practice, many languages face both information scarcity -- where languages have few reference articles -- and information asymmetry -- where questions reference concepts from other cultures. This work extends open-retrieval question answering to a cross-lingual setting enabling questions from one language to be answered via answer content from another language. We construct a large-scale dataset built on questions from TyDi QA lacking same-language answers. Our task formulation, called Cross-lingual Open Retrieval Question Answering (XOR QA), includes 40k information-seeking questions from across 7 diverse non-English languages. Based on this dataset, we introduce three new tasks that involve cross-lingual document retrieval using multi-lingual and English resources. We establish baselines with state-of-the-art machine translation systems and cross-lingual pretrained models. Experimental results suggest that XOR QA is a challenging task that will facilitate the development of novel techniques for multilingual question answering. Our data and code are available at https://nlp.cs.washington.edu/xorqa.
2020-10-22T16:47:17Z
Published as a conference paper at NAACL-HLT 2021 (long)
null
null
null
null
null
null
null
null
null
2010.11929
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
['Alexey Dosovitskiy', 'Lucas Beyer', 'Alexander Kolesnikov', 'Dirk Weissenborn', 'Xiaohua Zhai', 'Thomas Unterthiner', 'Mostafa Dehghani', 'Matthias Minderer', 'Georg Heigold', 'Sylvain Gelly', 'Jakob Uszkoreit', 'Neil Houlsby']
['cs.CV', 'cs.AI', 'cs.LG']
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
2020-10-22T17:55:59Z
Fine-tuning code and pre-trained models are available at https://github.com/google-research/vision_transformer. ICLR camera-ready version with 2 small modifications: 1) Added a discussion of CLS vs GAP classifier in the appendix, 2) Fixed an error in exaFLOPs computation in Figure 5 and Table 6 (relative performance of models is basically not affected)
null
null
null
null
null
null
null
null
null
2010.11934
mT5: A massively multilingual pre-trained text-to-text transformer
['Linting Xue', 'Noah Constant', 'Adam Roberts', 'Mihir Kale', 'Rami Al-Rfou', 'Aditya Siddhant', 'Aditya Barua', 'Colin Raffel']
['cs.CL']
The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We detail the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. We also describe a simple technique to prevent "accidental translation" in the zero-shot setting, where a generative model chooses to (partially) translate its prediction into the wrong language. All of the code and model checkpoints used in this work are publicly available.
2020-10-22T17:58:14Z
null
null
null
mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer
['Linting Xue', 'Noah Constant', 'Adam Roberts', 'Mihir Kale', 'Rami Al-Rfou', 'Aditya Siddhant', 'Aditya Barua', 'Colin Raffel']
2020
North American Chapter of the Association for Computational Linguistics
2570
55
['Computer Science']
2010.12148
ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding
['Dongling Xiao', 'Yu-Kun Li', 'Han Zhang', 'Yu Sun', 'Hao Tian', 'Hua Wu', 'Haifeng Wang']
['cs.CL', 'cs.LG']
Coarse-grained linguistic information, such as named entities or phrases, facilitates adequate representation learning in pre-training. Previous works mainly focus on extending the objective of BERT's Masked Language Modeling (MLM) from masking individual tokens to contiguous sequences of n tokens. We argue that such a contiguous masking method neglects to model the intra-dependencies and inter-relations of coarse-grained linguistic information. As an alternative, we propose ERNIE-Gram, an explicitly n-gram masking method to enhance the integration of coarse-grained information into pre-training. In ERNIE-Gram, n-grams are masked and predicted directly using explicit n-gram identities rather than contiguous sequences of n tokens. Furthermore, ERNIE-Gram employs a generator model to sample plausible n-gram identities as optional n-gram masks and predict them in both coarse-grained and fine-grained manners to enable comprehensive n-gram prediction and relation modeling. We pre-train ERNIE-Gram on English and Chinese text corpora and fine-tune on 19 downstream tasks. Experimental results show that ERNIE-Gram outperforms previous pre-training models like XLNet and RoBERTa by a large margin, and achieves comparable results with state-of-the-art methods. The source codes and pre-trained models have been released at https://github.com/PaddlePaddle/ERNIE.
2020-10-23T03:42:20Z
Accepted by NAACL-HLT 2021. Codes will be released at https://github.com/PaddlePaddle/ERNIE
null
null
null
null
null
null
null
null
null
2010.12321
BARThez: a Skilled Pretrained French Sequence-to-Sequence Model
['Moussa Kamal Eddine', 'Antoine J. -P. Tixier', 'Michalis Vazirgiannis']
['cs.CL']
Inductive transfer learning has taken the entire NLP field by storm, with models such as BERT and BART setting new state of the art on countless NLU tasks. However, most of the available models and research have been conducted for English. In this work, we introduce BARThez, the first large-scale pretrained seq2seq model for French. Being based on BART, BARThez is particularly well-suited for generative tasks. We evaluate BARThez on five discriminative tasks from the FLUE benchmark and two generative tasks from a novel summarization dataset, OrangeSum, that we created for this research. We show BARThez to be very competitive with state-of-the-art BERT-based French language models such as CamemBERT and FlauBERT. We also continue the pretraining of a multilingual BART on BARThez' corpus, and show our resulting model, mBARThez, to significantly boost BARThez' generative performance. Code, data and models are publicly available.
2020-10-23T11:57:33Z
More experiments and results, human evaluation, reorganization of paper
null
null
null
null
null
null
null
null
null
2010.12421
TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification
['Francesco Barbieri', 'Jose Camacho-Collados', 'Leonardo Neves', 'Luis Espinosa-Anke']
['cs.CL', 'cs.SI']
The experimental landscape in natural language processing for social media is too fragmented. Each year, new shared tasks and datasets are proposed, ranging from classics like sentiment analysis to irony detection or emoji prediction. Therefore, it is unclear what the current state of the art is, as there is neither a standardized evaluation protocol nor a strong set of baselines trained on such domain-specific data. In this paper, we propose a new evaluation framework (TweetEval) consisting of seven heterogeneous Twitter-specific classification tasks. We also provide a strong set of baselines as a starting point, and compare different language modeling pre-training strategies. Our initial experiments show the effectiveness of starting off with existing pre-trained generic language models and continuing to train them on Twitter corpora.
2020-10-23T14:11:04Z
Findings of EMNLP 2020. TweetEval benchmark available at https://github.com/cardiffnlp/tweeteval
null
null
null
null
null
null
null
null
null
2010.12725
Compositional Generalization and Natural Language Variation: Can a Semantic Parsing Approach Handle Both?
['Peter Shaw', 'Ming-Wei Chang', 'Panupong Pasupat', 'Kristina Toutanova']
['cs.CL']
Sequence-to-sequence models excel at handling natural language variation, but have been shown to struggle with out-of-distribution compositional generalization. This has motivated new specialized architectures with stronger compositional biases, but most of these approaches have only been evaluated on synthetically-generated datasets, which are not representative of natural language variation. In this work we ask: can we develop a semantic parsing approach that handles both natural language variation and compositional generalization? To better assess this capability, we propose new train and test splits of non-synthetic datasets. We demonstrate that strong existing approaches do not perform well across a broad set of evaluations. We also propose NQG-T5, a hybrid model that combines a high-precision grammar-based approach with a pre-trained sequence-to-sequence model. It outperforms existing approaches across several compositional generalization challenges on non-synthetic data, while also being competitive with the state-of-the-art on standard evaluations. While still far from solving this problem, our study highlights the importance of diverse evaluations and the open challenge of handling both compositional generalization and natural language variation in semantic parsing.
2020-10-24T00:38:27Z
ACL 2021
null
null
null
null
null
null
null
null
null
2010.12821
Rethinking embedding coupling in pre-trained language models
['Hyung Won Chung', 'Thibault Févry', 'Henry Tsai', 'Melvin Johnson', 'Sebastian Ruder']
['cs.CL', 'cs.LG']
We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that allocating additional capacity to the output embedding provides benefits to the model that persist through the fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the number of parameters at the fine-tuning stage.
2020-10-24T07:43:00Z
null
null
null
Rethinking embedding coupling in pre-trained language models
['Hyung Won Chung', 'Thibault Févry', 'Henry Tsai', 'Melvin Johnson', 'Sebastian Ruder']
2020
International Conference on Learning Representations
143
73
['Computer Science']
2010.12871
Large Scale Legal Text Classification Using Transformer Models
['Zein Shaheen', 'Gerhard Wohlgenannt', 'Erwin Filtz']
['cs.CL', 'cs.AI']
Large multi-label text classification is a challenging Natural Language Processing (NLP) problem that is concerned with text classification for datasets with thousands of labels. We tackle this problem in the legal domain, where datasets such as JRC-Acquis and EURLEX57K, labeled with the EuroVoc vocabulary, were created within the legal information systems of the European Union. The EuroVoc taxonomy includes around 7000 concepts. In this work, we study the performance of various recent transformer-based models in combination with strategies such as generative pretraining, gradual unfreezing and discriminative learning rates in order to reach competitive classification performance, and present new state-of-the-art results of 0.661 (F1) for JRC-Acquis and 0.754 for EURLEX57K. Furthermore, we quantify the impact of individual steps, such as language model fine-tuning or gradual unfreezing in an ablation study, and provide reference dataset splits created with an iterative stratification algorithm.
2020-10-24T11:03:01Z
null
null
null
null
null
null
null
null
null
null
2010.13002
Pre-trained Summarization Distillation
['Sam Shleifer', 'Alexander M. Rush']
['cs.CL', 'cs.AI']
Recent state-of-the-art approaches to summarization utilize large pre-trained Transformer models. Distilling these models to smaller student models has become critically important for practical use; however, there are many different distillation methods proposed in the NLP literature. Recent work on distilling BERT for classification and regression tasks shows strong performance using direct knowledge distillation. Alternatively, machine translation practitioners distill using pseudo-labeling, where a small model is trained on the translations of a larger model. A third, simpler approach is to 'shrink and fine-tune' (SFT), which avoids any explicit distillation by copying parameters to a smaller student model and then fine-tuning. We compare these three approaches for distillation of Pegasus and BART, the current and former state of the art, pre-trained summarization models, and find that SFT outperforms knowledge distillation and pseudo-labeling on the CNN/DailyMail dataset, but under-performs pseudo-labeling on the more abstractive XSUM dataset. PyTorch Code and checkpoints of different sizes are available through Hugging Face transformers here http://tiny.cc/4iy0tz.
2020-10-24T23:15:43Z
null
null
null
Pre-trained Summarization Distillation
['Sam Shleifer', 'Alexander M. Rush']
2020
arXiv.org
103
32
['Computer Science']
2010.13154
Attention is All You Need in Speech Separation
['Cem Subakan', 'Mirco Ravanelli', 'Samuele Cornell', 'Mirko Bronzi', 'Jianyuan Zhong']
['eess.AS', 'cs.LG', 'cs.SD', 'eess.SP']
Recurrent Neural Networks (RNNs) have long been the dominant architecture in sequence-to-sequence learning. RNNs, however, are inherently sequential models that do not allow parallelization of their computations. Transformers are emerging as a natural alternative to standard RNNs, replacing recurrent computations with a multi-head attention mechanism. In this paper, we propose the SepFormer, a novel RNN-free Transformer-based neural network for speech separation. The SepFormer learns short and long-term dependencies with a multi-scale approach that employs transformers. The proposed model achieves state-of-the-art (SOTA) performance on the standard WSJ0-2/3mix datasets. It reaches an SI-SNRi of 22.3 dB on WSJ0-2mix and an SI-SNRi of 19.5 dB on WSJ0-3mix. The SepFormer inherits the parallelization advantages of Transformers and achieves a competitive performance even when downsampling the encoded representation by a factor of 8. It is thus significantly faster and it is less memory-demanding than the latest speech separation systems with comparable performance.
2020-10-25T16:28:54Z
Accepted to ICASSP 2021
null
null
null
null
null
null
null
null
null
2010.13652
Dutch Humor Detection by Generating Negative Examples
['Thomas Winters', 'Pieter Delobelle']
['cs.CL', 'cs.AI', '68T50', 'I.2.7; I.2.6']
Detecting if a text is humorous is a hard task to do computationally, as it usually requires linguistic and common sense insights. In machine learning, humor detection is usually modeled as a binary classification task, trained to predict if the given text is a joke or another type of text. Rather than using completely different non-humorous texts, we propose using text generation algorithms for imitating the original joke dataset to increase the difficulty for the learning algorithm. We constructed several different joke and non-joke datasets to test the humor detection abilities of different language technologies. In particular, we compare the humor detection capabilities of classic neural network approaches with the state-of-the-art Dutch language model RobBERT. In doing so, we create and compare the first Dutch humor detection systems. We found that while other language models perform well when the non-jokes came from completely different domains, RobBERT was the only one that was able to distinguish jokes from generated negative examples. This performance illustrates the usefulness of using text generation to create negative datasets for humor recognition, and also shows that transformer models are a large step forward in humor detection.
2020-10-26T15:15:10Z
Accepted at the Proceedings of the 32nd Benelux Conference on Artificial Intelligence (BNAIC 2020) and the 29th Belgian Dutch Conference on Machine Learning (Benelearn 2020)
null
null
Dutch Humor Detection by Generating Negative Examples
['Thomas Winters', 'Pieter Delobelle']
2020
arXiv.org
11
39
['Computer Science']
2010.13886
MarbleNet: Deep 1D Time-Channel Separable Convolutional Neural Network for Voice Activity Detection
['Fei Jia', 'Somshubra Majumdar', 'Boris Ginsburg']
['eess.AS', 'cs.SD']
We present MarbleNet, an end-to-end neural network for Voice Activity Detection (VAD). MarbleNet is a deep residual network composed from blocks of 1D time-channel separable convolution, batch-normalization, ReLU and dropout layers. When compared to a state-of-the-art VAD model, MarbleNet is able to achieve similar performance with roughly 1/10-th the parameter cost. We further conduct extensive ablation studies on different training methods and choices of parameters in order to study the robustness of MarbleNet in real-world VAD tasks.
2020-10-26T20:26:05Z
Accepted to ICASSP 2021
null
null
null
null
null
null
null
null
null
2010.13956
Recent Developments on ESPnet Toolkit Boosted by Conformer
['Pengcheng Guo', 'Florian Boyer', 'Xuankai Chang', 'Tomoki Hayashi', 'Yosuke Higuchi', 'Hirofumi Inaguma', 'Naoyuki Kamo', 'Chenda Li', 'Daniel Garcia-Romero', 'Jiatong Shi', 'Jing Shi', 'Shinji Watanabe', 'Kun Wei', 'Wangyou Zhang', 'Yuekai Zhang']
['eess.AS', 'cs.SD']
In this study, we present recent developments on ESPnet: End-to-End Speech Processing toolkit, which mainly involves a recently proposed architecture called Conformer, Convolution-augmented Transformer. This paper shows the results for a wide range of end-to-end speech processing applications, such as automatic speech recognition (ASR), speech translation (ST), speech separation (SS) and text-to-speech (TTS). Our experiments reveal various training tips and significant performance benefits obtained with the Conformer on different tasks. These results are competitive with or even outperform the current state-of-the-art Transformer models. We are preparing to release all-in-one recipes using open source and publicly available corpora for all the above tasks with pre-trained models. Our aim for this work is to contribute to our research community by reducing the burden of preparing state-of-the-art research environments, which usually require substantial resources.
2020-10-26T23:49:23Z
null
null
null
null
null
null
null
null
null
null
2010.14235
Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles
['Yao Lu', 'Yue Dong', 'Laurent Charlin']
['cs.CL', 'cs.AI']
Multi-document summarization is a challenging task for which there exists little large-scale datasets. We propose Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles. Multi-XScience introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references. Our work is inspired by extreme summarization, a dataset construction protocol that favours abstractive modeling approaches. Descriptive statistics and empirical results---using several state-of-the-art models trained on the Multi-XScience dataset---reveal that Multi-XScience is well suited for abstractive models.
2020-10-27T12:10:19Z
EMNLP 2020
null
null
Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles
['Yao Lu', 'Yue Dong', 'Laurent Charlin']
2020
Conference on Empirical Methods in Natural Language Processing
120
31
['Computer Science']
2010.14568
Strongly Incremental Constituency Parsing with Graph Neural Networks
['Kaiyu Yang', 'Jia Deng']
['cs.CL']
Parsing sentences into syntax trees can benefit downstream applications in NLP. Transition-based parsers build trees by executing actions in a state transition system. They are computationally efficient, and can leverage machine learning to predict actions based on partial trees. However, existing transition-based parsers are predominantly based on the shift-reduce transition system, which does not align with how humans are known to parse sentences. Psycholinguistic research suggests that human parsing is strongly incremental: humans grow a single parse tree by adding exactly one token at each step. In this paper, we propose a novel transition system called attach-juxtapose. It is strongly incremental; it represents a partial sentence using a single tree; each action adds exactly one token into the partial tree. Based on our transition system, we develop a strongly incremental parser. At each step, it encodes the partial tree using a graph neural network and predicts an action. We evaluate our parser on Penn Treebank (PTB) and Chinese Treebank (CTB). On PTB, it outperforms existing parsers trained with only constituency trees; and it performs on par with state-of-the-art parsers that use dependency trees as additional training data. On CTB, our parser establishes a new state of the art. Code is available at https://github.com/princeton-vl/attach-juxtapose-parser.
2020-10-27T19:19:38Z
Accepted to NeurIPS 2020
null
null
null
null
null
null
null
null
null
2010.14819
Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets
['Kai Han', 'Yunhe Wang', 'Qiulin Zhang', 'Wei Zhang', 'Chunjing Xu', 'Tong Zhang']
['cs.CV']
To obtain excellent deep neural architectures, a series of techniques are carefully designed in EfficientNets. The giant formula for simultaneously enlarging the resolution, depth and width provides us with a Rubik's cube for neural networks, so that we can find networks with high efficiency and excellent performance by twisting the three dimensions. This paper aims to explore the twisting rules for obtaining deep neural networks with minimum model sizes and computational costs. Different from network enlarging, we observe that resolution and depth are more important than width for tiny networks. Therefore, the original method, i.e., the compound scaling in EfficientNet, is no longer suitable. To this end, we summarize a tiny formula for downsizing neural architectures through a series of smaller models derived from EfficientNet-B0 with the FLOPs constraint. Experimental results on the ImageNet benchmark illustrate that our TinyNet performs much better than the smaller versions of EfficientNets using the inverted giant formula. For instance, our TinyNet-E achieves a 59.9% Top-1 accuracy with only 24M FLOPs, which is about 1.9% higher than that of the previous best MobileNetV3 with similar computational cost. Code will be available at https://github.com/huawei-noah/ghostnet/tree/master/tinynet_pytorch, and https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/tinynet.
2020-10-28T08:49:45Z
NeurIPS 2020
null
null
null
null
null
null
null
null
null
2010.15052
Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases
['Ryan Steed', 'Aylin Caliskan']
['cs.CY', 'cs.CV']
Recent advances in machine learning leverage massive datasets of unlabeled images from the web to learn general-purpose image representations for tasks from image classification to face recognition. But do unsupervised computer vision models automatically learn implicit patterns and embed social biases that could have harmful downstream effects? We develop a novel method for quantifying biased associations between representations of social concepts and attributes in images. We find that state-of-the-art unsupervised models trained on ImageNet, a popular benchmark image dataset curated from internet images, automatically learn racial, gender, and intersectional biases. We replicate 8 previously documented human biases from social psychology, from the innocuous, as with insects and flowers, to the potentially harmful, as with race and gender. Our results closely match three hypotheses about intersectional bias from social psychology. For the first time in unsupervised computer vision, we also quantify implicit human biases about weight, disabilities, and several ethnicities. When compared with statistical patterns in online image datasets, our findings suggest that machine learning models can automatically learn bias from the way people are stereotypically portrayed on the web.
2020-10-28T15:55:49Z
10 pages, 3 figures. Replaced example image completions of real people with completions of artificial people
null
10.1145/3442188.3445932
null
null
null
null
null
null
null
2,011.00677
IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model for Indonesian NLP
['Fajri Koto', 'Afshin Rahimi', 'Jey Han Lau', 'Timothy Baldwin']
['cs.CL']
Although the Indonesian language is spoken by almost 200 million people and is the 10th most widely spoken language in the world, it is under-represented in NLP research. Previous work on Indonesian has been hampered by a lack of annotated datasets, a sparsity of language resources, and a lack of resource standardization. In this work, we release the IndoLEM dataset comprising seven tasks for the Indonesian language, spanning morpho-syntax, semantics, and discourse. We additionally release IndoBERT, a new pre-trained language model for Indonesian, and evaluate it over IndoLEM, in addition to benchmarking it against existing resources. Our experiments show that IndoBERT achieves state-of-the-art performance over most of the tasks in IndoLEM.
2020-11-02T01:54:56Z
Accepted at COLING 2020 - The 28th International Conference on Computational Linguistics
null
null
IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model for Indonesian NLP
['Fajri Koto', 'Afshin Rahimi', 'Jey Han Lau', 'Timothy Baldwin']
2,020
International Conference on Computational Linguistics
263
66
['Computer Science']
2,011.01513
CharBERT: Character-aware Pre-trained Language Model
['Wentao Ma', 'Yiming Cui', 'Chenglei Si', 'Ting Liu', 'Shijin Wang', 'Guoping Hu']
['cs.CL']
Most pre-trained language models (PLMs) construct word representations at the subword level with Byte-Pair Encoding (BPE) or its variations, by which OOV (out-of-vocab) words can mostly be avoided. However, those methods split a word into subword units, making the representation incomplete and fragile. In this paper, we propose a character-aware pre-trained language model named CharBERT, which improves on previous methods (such as BERT and RoBERTa) to tackle these problems. We first construct the contextual word embedding for each token from the sequential character representations, then fuse the character representations and the subword representations by a novel heterogeneous interaction module. We also propose a new pre-training task named NLM (Noisy LM) for unsupervised character representation learning. We evaluate our method on question answering, sequence labeling, and text classification tasks, on both the original datasets and adversarial misspelling test sets. The experimental results show that our method can significantly improve the performance and robustness of PLMs simultaneously. Pretrained models, evaluation sets, and code are available at https://github.com/wtma/CharBERT
2020-11-03T07:13:06Z
12 pages, to appear at COLING 2020
null
10.18653/v1/2020.coling-main.4
null
null
null
null
null
null
null
2,011.03706
ESPnet-se: end-to-end speech enhancement and separation toolkit designed for asr integration
['Chenda Li', 'Jing Shi', 'Wangyou Zhang', 'Aswin Shanmugam Subramanian', 'Xuankai Chang', 'Naoyuki Kamo', 'Moto Hira', 'Tomoki Hayashi', 'Christoph Boeddeker', 'Zhuo Chen', 'Shinji Watanabe']
['eess.AS', 'cs.SD']
We present ESPnet-SE, which is designed for the quick development of speech enhancement and speech separation systems in a single framework, along with the optional downstream speech recognition module. ESPnet-SE is a new project which integrates rich automatic speech recognition related models, resources and systems to support and validate the proposed front-end implementation (i.e., speech enhancement and separation). It is capable of processing both single-channel and multi-channel data, with various functionalities including dereverberation, denoising and source separation. We provide all-in-one recipes including data pre-processing, feature extraction, training and evaluation pipelines for a wide range of benchmark datasets. This paper describes the design of the toolkit, several important functionalities, especially the speech recognition integration, which differentiates ESPnet-SE from other open source toolkits, and experimental results with major benchmark datasets.
2020-11-07T06:14:18Z
Accepted by SLT 2021
null
10.1109/SLT48900.2021.9383615
ESPnet-SE: End-To-End Speech Enhancement and Separation Toolkit Designed for ASR Integration
['Chenda Li', 'Jing Shi', 'Wangyou Zhang', 'A. Subramanian', 'Xuankai Chang', 'Naoyuki Kamo', 'Moto Hira', 'Tomoki Hayashi', 'Christoph Boeddeker', 'Zhuo Chen', 'Shinji Watanabe']
2,020
Spoken Language Technology Workshop
82
54
['Computer Science', 'Engineering']
2,011.04784
EstBERT: A Pretrained Language-Specific BERT for Estonian
['Hasan Tanvir', 'Claudia Kittask', 'Sandra Eiche', 'Kairit Sirts']
['cs.CL']
This paper presents EstBERT, a large pretrained transformer-based language-specific BERT model for Estonian. Recent work has evaluated multilingual BERT models on Estonian tasks and found them to outperform the baselines. Still, based on existing studies on other languages, a language-specific BERT model is expected to improve over the multilingual ones. We first describe the EstBERT pretraining process and then present the results of models based on finetuned EstBERT for multiple NLP tasks, including POS and morphological tagging, named entity recognition and text classification. The evaluation results show that the models based on EstBERT outperform multilingual BERT models on five tasks out of six, providing further evidence that training language-specific BERT models is still useful, even when multilingual models are available.
2020-11-09T21:33:53Z
NoDaLiDa 2021
null
null
EstBERT: A Pretrained Language-Specific BERT for Estonian
['Hasan Tanvir', 'Claudia Kittask', 'Kairit Sirts']
2,020
Nordic Conference of Computational Linguistics
37
26
['Computer Science']
2,011.06294
Real-Time Intermediate Flow Estimation for Video Frame Interpolation
['Zhewei Huang', 'Tianyuan Zhang', 'Wen Heng', 'Boxin Shi', 'Shuchang Zhou']
['cs.CV', 'cs.LG']
Real-time video frame interpolation (VFI) is very useful in video processing, media players, and display devices. We propose RIFE, a Real-time Intermediate Flow Estimation algorithm for VFI. To realize a high-quality flow-based VFI method, RIFE uses a neural network named IFNet that can estimate the intermediate flows end-to-end at much higher speed. A privileged distillation scheme is designed to stabilize IFNet training and improve the overall performance. RIFE does not rely on pre-trained optical flow models and can support arbitrary-timestep frame interpolation with the temporal encoding input. Experiments demonstrate that RIFE achieves state-of-the-art performance on several public benchmarks. Compared with the popular SuperSlomo and DAIN methods, RIFE is 4--27 times faster and produces better results. Furthermore, RIFE can be extended to wider applications thanks to temporal encoding. The code is available at https://github.com/megvii-research/ECCV2022-RIFE.
2020-11-12T10:12:06Z
Accepted to ECCV 2022
null
null
null
null
null
null
null
null
null
2,011.06993
FLERT: Document-Level Features for Named Entity Recognition
['Stefan Schweter', 'Alan Akbik']
['cs.CL']
Current state-of-the-art approaches for named entity recognition (NER) typically consider text at the sentence-level and thus do not model information that crosses sentence boundaries. However, the use of transformer-based models for NER offers natural options for capturing document-level features. In this paper, we perform a comparative evaluation of document-level features in the two standard NER architectures commonly considered in the literature, namely "fine-tuning" and "feature-based LSTM-CRF". We evaluate different hyperparameters for document-level features such as context window size and enforcing document-locality. We present experiments from which we derive recommendations for how to model document context and present new state-of-the-art scores on several CoNLL-03 benchmark datasets. Our approach is integrated into the Flair framework to facilitate reproduction of our experiments.
2020-11-13T16:13:59Z
null
null
null
null
null
null
null
null
null
null
2,011.09127
3D-FRONT: 3D Furnished Rooms with layOuts and semaNTics
['Huan Fu', 'Bowen Cai', 'Lin Gao', 'Lingxiao Zhang', 'Jiaming Wang', 'Cao Li', 'Zengqi Xun', 'Chengyue Sun', 'Rongfei Jia', 'Binqiang Zhao', 'Hao Zhang']
['cs.CV']
We introduce 3D-FRONT (3D Furnished Rooms with layOuts and semaNTics), a new, large-scale, and comprehensive repository of synthetic indoor scenes highlighted by professionally designed layouts and a large number of rooms populated by high-quality textured 3D models with style compatibility. From layout semantics down to texture details of individual objects, our dataset is freely available to the academic community and beyond. Currently, 3D-FRONT contains 18,968 rooms diversely furnished by 3D objects, far surpassing all publicly available scene datasets. In addition, the 13,151 furniture objects all come with high-quality textures. While the floorplans and layout designs are directly sourced from professional creations, the interior designs in terms of furniture styles, color, and textures have been carefully curated based on a recommender system we develop to attain consistent styles as expert designs. Furthermore, we release Trescope, a light-weight rendering tool, to support benchmark rendering of 2D images and annotations from 3D-FRONT. We demonstrate two applications, interior scene synthesis and texture synthesis, that are especially tailored to the strengths of our new dataset. The project page is at: https://tianchi.aliyun.com/specials/promotion/alibaba-3d-scene-dataset.
2020-11-18T07:14:55Z
Project page: https://tianchi.aliyun.com/specials/promotion/alibaba-3d-scene-dataset
null
null
3D-FRONT: 3D Furnished Rooms with layOuts and semaNTics
['Huan Fu', 'Bowen Cai', 'Lin Gao', 'Ling-Xiao Zhang', 'Cao Li', 'Zengqi Xun', 'Chengyue Sun', 'Yiyun Fei', 'Yu-qiong Zheng', 'Ying Li', 'Yi Liu', 'Peng Liu', 'Lin Ma', 'Le Weng', 'Xiaohang Hu', 'Xin Ma', 'Qian Qian', 'Rongfei Jia', 'Binqiang Zhao', 'H. Zhang']
2,020
IEEE International Conference on Computer Vision
276
50
['Computer Science']
2,011.09468
Gradient Starvation: A Learning Proclivity in Neural Networks
['Mohammad Pezeshki', 'Sékou-Oumar Kaba', 'Yoshua Bengio', 'Aaron Courville', 'Doina Precup', 'Guillaume Lajoie']
['cs.LG', 'math.DS', 'stat.ML']
We identify and formalize a fundamental gradient descent phenomenon resulting in a learning proclivity in over-parameterized neural networks. Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task, despite the presence of other predictive features that fail to be discovered. This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks. Using tools from Dynamical Systems theory, we identify simple properties of learning dynamics during gradient descent that lead to this imbalance, and prove that such a situation can be expected given certain statistical structure in training data. Based on our proposed formalism, we develop guarantees for a novel regularization method aimed at decoupling feature learning dynamics, improving accuracy and robustness in cases hindered by gradient starvation. We illustrate our findings with simple and real-world out-of-distribution (OOD) generalization experiments.
2020-11-18T18:52:08Z
Proceeding of NeurIPS 2021
null
null
Gradient Starvation: A Learning Proclivity in Neural Networks
['M. Pezeshki', 'S. Kaba', 'Y. Bengio', 'Aaron C. Courville', 'Doina Precup', 'Guillaume Lajoie']
2,020
Neural Information Processing Systems
269
117
['Computer Science', 'Mathematics']
2,011.1245
Sparse R-CNN: End-to-End Object Detection with Learnable Proposals
['Peize Sun', 'Rufeng Zhang', 'Yi Jiang', 'Tao Kong', 'Chenfeng Xu', 'Wei Zhan', 'Masayoshi Tomizuka', 'Lei Li', 'Zehuan Yuan', 'Changhu Wang', 'Ping Luo']
['cs.CV']
We present Sparse R-CNN, a purely sparse method for object detection in images. Existing works on object detection heavily rely on dense object candidates, such as $k$ anchor boxes pre-defined on all grids of an image feature map of size $H\times W$. In our method, however, a fixed sparse set of learned object proposals, of total length $N$, is provided to the object recognition head to perform classification and localization. By reducing the $HWk$ (up to hundreds of thousands) hand-designed object candidates to $N$ (e.g. 100) learnable proposals, Sparse R-CNN completely avoids all efforts related to object candidate design and many-to-one label assignment. More importantly, final predictions are directly output without the non-maximum suppression post-procedure. Sparse R-CNN demonstrates accuracy, run-time and training convergence performance on par with the well-established detector baselines on the challenging COCO dataset, e.g., achieving 45.0 AP in the standard $3\times$ training schedule and running at 22 fps using a ResNet-50 FPN model. We hope our work could inspire re-thinking the convention of dense priors in object detectors. The code is available at: https://github.com/PeizeSun/SparseR-CNN.
2020-11-25T00:01:28Z
add test-dev; add crowdhuman
null
null
null
null
null
null
null
null
null
2,011.13205
SLURP: A Spoken Language Understanding Resource Package
['Emanuele Bastianelli', 'Andrea Vanzo', 'Pawel Swietojanski', 'Verena Rieser']
['cs.CL', 'cs.LG']
Spoken Language Understanding infers semantic meaning directly from audio data, and thus promises to reduce error propagation and misunderstandings in end-user applications. However, publicly available SLU resources are limited. In this paper, we release SLURP, a new SLU package containing the following: (1) A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets; (2) Competitive baselines based on state-of-the-art NLU and ASR systems; (3) A new transparent metric for entity labelling which enables a detailed error analysis for identifying potential areas of improvement. SLURP is available at https://github.com/pswietojanski/slurp.
2020-11-26T09:58:20Z
Published at the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP-2020)
null
null
null
null
null
null
null
null
null
2,011.13456
Score-Based Generative Modeling through Stochastic Differential Equations
['Yang Song', 'Jascha Sohl-Dickstein', 'Diederik P. Kingma', 'Abhishek Kumar', 'Stefano Ermon', 'Ben Poole']
['cs.LG', 'stat.ML']
Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a. score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.
2020-11-26T19:39:10Z
ICLR 2021 (Oral)
null
null
null
null
null
null
null
null
null
2,012.00413
CPM: A Large-scale Generative Chinese Pre-trained Language Model
['Zhengyan Zhang', 'Xu Han', 'Hao Zhou', 'Pei Ke', 'Yuxian Gu', 'Deming Ye', 'Yujia Qin', 'Yusheng Su', 'Haozhe Ji', 'Jian Guan', 'Fanchao Qi', 'Xiaozhi Wang', 'Yanan Zheng', 'Guoyang Zeng', 'Huanqi Cao', 'Shengqi Chen', 'Daixuan Li', 'Zhenbo Sun', 'Zhiyuan Liu', 'Minlie Huang', 'Wentao Han', 'Jie Tang', 'Juanzi Li', 'Xiaoyan Zhu', 'Maosong Sun']
['cs.CL']
Pre-trained Language Models (PLMs) have proven to be beneficial for various downstream NLP tasks. Recently, GPT-3, with 175 billion parameters and 570GB training data, drew a lot of attention due to its capacity for few-shot (even zero-shot) learning. However, applying GPT-3 to address Chinese NLP tasks is still challenging, as the training corpus of GPT-3 is primarily English, and the parameters are not publicly available. In this technical report, we release the Chinese Pre-trained Language Model (CPM) with generative pre-training on large-scale Chinese training data. To the best of our knowledge, CPM, with 2.6 billion parameters and 100GB Chinese training data, is the largest Chinese pre-trained language model, which could facilitate several downstream Chinese NLP tasks, such as conversation, essay generation, cloze test, and language understanding. Extensive experiments demonstrate that CPM achieves strong performance on many NLP tasks in the settings of few-shot (even zero-shot) learning. The code and parameters are available at https://github.com/TsinghuaAI/CPM-Generate.
2020-12-01T11:32:56Z
null
null
null
CPM: A Large-scale Generative Chinese Pre-trained Language Model
['Zhengyan Zhang', 'Xu Han', 'Hao Zhou', 'Pei Ke', 'Yuxian Gu', 'Deming Ye', 'Yujia Qin', 'Yusheng Su', 'Haozhe Ji', 'Jian Guan', 'Fanchao Qi', 'Xiaozhi Wang', 'Yanan Zheng', 'Guoyang Zeng', 'Huanqi Cao', 'S. Chen', 'Daixuan Li', 'Zhenbo Sun', 'Zhiyuan Liu', 'Minlie Huang', 'Wentao Han', 'Jie Tang', 'Juan-Zi Li', 'Xiaoyan Zhu', 'Maosong Sun']
2,020
AI Open
119
42
['Computer Science']
2,012.00483
ClimaText: A Dataset for Climate Change Topic Detection
['Francesco S. Varini', 'Jordan Boyd-Graber', 'Massimiliano Ciaramita', 'Markus Leippold']
['cs.CL', 'cs.AI']
Climate change communication in the mass media and other textual sources may affect and shape public perception. Extracting climate change information from these sources is an important task, e.g., for filtering content and e-discovery, sentiment analysis, automatic summarization, question-answering, and fact-checking. However, automating this process is a challenge, as climate change is a complex, fast-moving, and often ambiguous topic with scarce resources for popular text-based AI tasks. In this paper, we introduce ClimaText, a dataset for sentence-based climate change topic detection, which we make publicly available. We explore different approaches to identify the climate change topic in various text sources. We find that popular keyword-based models are not adequate for such a complex and evolving task. Context-based algorithms like BERT (Devlin et al., 2018) can detect, in addition to many trivial cases, a variety of complex and implicit topic patterns. Nevertheless, our analysis reveals a great potential for improvement in several directions, such as capturing the discussion of indirect effects of climate change. Hence, we hope this work can serve as a good starting point for further research on this topic.
2020-12-01T13:42:37Z
Accepted for the Tackling Climate Change with Machine Learning Workshop at NeurIPS 2020
null
null
null
null
null
null
null
null
null