arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1910.12592 | BUT System Description to VoxCeleb Speaker Recognition Challenge 2019 | ['Hossein Zeinali', 'Shuai Wang', 'Anna Silnova', 'Pavel Matějka', 'Oldřich Plchot'] | ['eess.AS', 'cs.CL', 'cs.SD'] | In this report, we describe the submission of Brno University of Technology
(BUT) team to the VoxCeleb Speaker Recognition Challenge (VoxSRC) 2019. We also
provide a brief analysis of different systems on VoxCeleb-1 test sets.
Submitted systems for both Fixed and Open conditions are a fusion of 4
Convolutional Neural Network (CNN) topologies. The first and second networks
have ResNet34 topology and use two-dimensional CNNs. The last two networks are
one-dimensional CNN and are based on the x-vector extraction topology. Some of
the networks are fine-tuned using additive margin angular softmax. Kaldi FBanks
and Kaldi PLPs were used as features. The difference between Fixed and Open
systems lies in the used training data and fusion strategy. The best systems
for Fixed and Open conditions achieved 1.42% and 1.26% EER on the challenge
evaluation set respectively. | 2019-10-16T11:27:27Z | null | null | null | BUT System Description to VoxCeleb Speaker Recognition Challenge 2019 | ['Hossein Zeinali', 'Shuai Wang', 'Anna Silnova', 'P. Matejka', 'Oldrich Plchot'] | 2019 | arXiv.org | 248 | 16 | ['Engineering', 'Computer Science'] |
1910.12840 | Evaluating the Factual Consistency of Abstractive Text Summarization | ['Wojciech Kryściński', 'Bryan McCann', 'Caiming Xiong', 'Richard Socher'] | ['cs.CL'] | Currently used metrics for assessing summarization algorithms do not account
for whether summaries are factually consistent with source documents. We
propose a weakly-supervised, model-based approach for verifying factual
consistency and identifying conflicts between source documents and a generated
summary. Training data is generated by applying a series of rule-based
transformations to the sentences of source documents. The factual consistency
model is then trained jointly for three tasks: 1) identify whether sentences
remain factually consistent after transformation, 2) extract a span in the
source documents to support the consistency prediction, 3) extract a span in
the summary sentence that is inconsistent if one exists. Transferring this
model to summaries generated by several state-of-the art models reveals that
this highly scalable approach substantially outperforms previous models,
including those trained with strong supervision using standard datasets for
natural language inference and fact checking. Additionally, human evaluation
shows that the auxiliary span extraction tasks provide useful assistance in the
process of verifying factual consistency. | 2019-10-28T17:51:44Z | 11 pages, 7 tables, 1 algorithm | null | null | null | null | null | null | null | null | null |
1910.13267 | BPE-Dropout: Simple and Effective Subword Regularization | ['Ivan Provilkov', 'Dmitrii Emelianenko', 'Elena Voita'] | ['cs.CL'] | Subword segmentation is widely used to address the open vocabulary problem in
machine translation. The dominant approach to subword segmentation is Byte Pair
Encoding (BPE), which keeps the most frequent words intact while splitting the
rare ones into multiple tokens. While multiple segmentations are possible even
with the same vocabulary, BPE splits words into unique sequences; this may
prevent a model from better learning the compositionality of words and being
robust to segmentation errors. So far, the only way to overcome this BPE
imperfection, its deterministic nature, was to create another subword
segmentation algorithm (Kudo, 2018). In contrast, we show that BPE itself
incorporates the ability to produce multiple segmentations of the same word. We
introduce BPE-dropout - a simple and effective subword regularization method
based on and compatible with conventional BPE. It stochastically corrupts the
segmentation procedure of BPE, which leads to producing multiple segmentations
within the same fixed BPE framework. Using BPE-dropout during training and the
standard BPE during inference improves translation quality up to 3 BLEU
compared to BPE and up to 0.9 BLEU compared to the previous subword
regularization. | 2019-10-29T13:42:56Z | ACL 2020 (camera-ready) | null | null | BPE-Dropout: Simple and Effective Subword Regularization | ['Ivan Provilkov', 'Dmitrii Emelianenko', 'Elena Voita'] | 2019 | Annual Meeting of the Association for Computational Linguistics | 289 | 31 | ['Computer Science'] |
1910.13461 | BART: Denoising Sequence-to-Sequence Pre-training for Natural Language
Generation, Translation, and Comprehension | ['Mike Lewis', 'Yinhan Liu', 'Naman Goyal', 'Marjan Ghazvininejad', 'Abdelrahman Mohamed', 'Omer Levy', 'Ves Stoyanov', 'Luke Zettlemoyer'] | ['cs.CL', 'cs.LG', 'stat.ML'] | We present BART, a denoising autoencoder for pretraining sequence-to-sequence
models. BART is trained by (1) corrupting text with an arbitrary noising
function, and (2) learning a model to reconstruct the original text. It uses a
standard Transformer-based neural machine translation architecture which,
despite its simplicity, can be seen as generalizing BERT (due to the
bidirectional encoder), GPT (with the left-to-right decoder), and many other
more recent pretraining schemes. We evaluate a number of noising approaches,
finding the best performance by both randomly shuffling the order of the
original sentences and using a novel in-filling scheme, where spans of text are
replaced with a single mask token. BART is particularly effective when fine
tuned for text generation but also works well for comprehension tasks. It
matches the performance of RoBERTa with comparable training resources on GLUE
and SQuAD, achieves new state-of-the-art results on a range of abstractive
dialogue, question answering, and summarization tasks, with gains of up to 6
ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system
for machine translation, with only target language pretraining. We also report
ablation experiments that replicate other pretraining schemes within the BART
framework, to better measure which factors most influence end-task performance. | 2019-10-29T18:01:00Z | null | null | null | BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension | ['M. Lewis', 'Yinhan Liu', 'Naman Goyal', 'Marjan Ghazvininejad', 'Abdel-rahman Mohamed', 'Omer Levy', 'Veselin Stoyanov', 'Luke Zettlemoyer'] | 2019 | Annual Meeting of the Association for Computational Linguistics | 10934 | 36 | ['Computer Science', 'Mathematics'] |
1910.13793 | Time to Take Emoji Seriously: They Vastly Improve Casual Conversational
Models | ['Pieter Delobelle', 'Bettina Berendt'] | ['cs.CL'] | Graphical emoji are ubiquitous in modern-day online conversations. So is a
single thumbs-up emoji able to signify an agreement, without any words. We
argue that the current state-of-the-art systems are ill-equipped to correctly
interpret these emoji, especially in a conversational context. However, in a
casual context, the benefits might be high: a better understanding of users'
utterances and more natural, emoji-rich responses.
With this in mind, we modify BERT to fully support emoji, both from the
Unicode Standard and custom emoji. This modified BERT is then trained on a
corpus of question-answer (QA) tuples with a high number of emoji, where we're
able to increase the 1-of-100 accuracy from 12.7% for the current
state-of-the-art to 17.8% for our model with emoji support. | 2019-10-30T12:11:36Z | Accepted at Benelearn 2019 | null | null | Time to Take Emoji Seriously: They Vastly Improve Casual Conversational Models | ['Pieter Delobelle', 'Bettina Berendt'] | 2019 | BNAIC/BENELEARN | 11 | 37 | ['Computer Science'] |
1910.14296 | LIMIT-BERT : Linguistic Informed Multi-Task BERT | ['Junru Zhou', 'Zhuosheng Zhang', 'Hai Zhao', 'Shuailiang Zhang'] | ['cs.CL', 'cs.LG'] | In this paper, we present a Linguistic Informed Multi-Task BERT (LIMIT-BERT)
for learning language representations across multiple linguistic tasks by
Multi-Task Learning (MTL). LIMIT-BERT includes five key linguistic syntax and
semantics tasks: Part-Of-Speech (POS) tags, constituent and dependency
syntactic parsing, span and dependency semantic role labeling (SRL). Besides,
LIMIT-BERT adopts linguistics mask strategy: Syntactic and Semantic Phrase
Masking, which masks all of the tokens corresponding to a syntactic/semantic
phrase. Different from recent Multi-Task Deep Neural Networks (MT-DNN) (Liu et
al., 2019), our LIMIT-BERT is linguistically motivated and learning in a
semi-supervised method which provides large amounts of linguistic-task data as
same as BERT learning corpus. As a result, LIMIT-BERT not only improves
linguistic tasks performance but also benefits from a regularization effect and
linguistic information that leads to more general representations to help adapt
to new tasks and domains. LIMIT-BERT obtains new state-of-the-art or
competitive results on both span and dependency semantic parsing on Propbank
benchmarks and both dependency and constituent syntactic parsing on Penn
Treebank. | 2019-10-31T08:14:51Z | EMNLP 2020, ACL Findings | null | null | null | null | null | null | null | null | null |
1910.14659 | Masked Language Model Scoring | ['Julian Salazar', 'Davis Liang', 'Toan Q. Nguyen', 'Katrin Kirchhoff'] | ['cs.CL', 'cs.LG', 'eess.AS', 'stat.ML'] | Pretrained masked language models (MLMs) require finetuning for most NLP
tasks. Instead, we evaluate MLMs out of the box via their pseudo-log-likelihood
scores (PLLs), which are computed by masking tokens one by one. We show that
PLLs outperform scores from autoregressive language models like GPT-2 in a
variety of tasks. By rescoring ASR and NMT hypotheses, RoBERTa reduces an
end-to-end LibriSpeech model's WER by 30% relative and adds up to +1.7 BLEU on
state-of-the-art baselines for low-resource translation pairs, with further
gains from domain adaptation. We attribute this success to PLL's unsupervised
expression of linguistic acceptability without a left-to-right bias, greatly
improving on scores from GPT-2 (+10 points on island effects, NPI licensing in
BLiMP). One can finetune MLMs to give scores without masking, enabling
computation in a single inference pass. In all, PLLs and their associated
pseudo-perplexities (PPPLs) enable plug-and-play use of the growing number of
pretrained MLMs; e.g., we use a single cross-lingual model to rescore
translations in multiple languages. We release our library for language model
scoring at https://github.com/awslabs/mlm-scoring. | 2019-10-31T17:51:21Z | ACL 2020 camera-ready (presented July 2020) | Proceedings of the 58th Annual Meeting of the Association for
Computational Linguistics (2020), 2699-2712 | 10.18653/v1/2020.acl-main.240 | null | null | null | null | null | null | null |
1911.00536 | DialoGPT: Large-Scale Generative Pre-training for Conversational
Response Generation | ['Yizhe Zhang', 'Siqi Sun', 'Michel Galley', 'Yen-Chun Chen', 'Chris Brockett', 'Xiang Gao', 'Jianfeng Gao', 'Jingjing Liu', 'Bill Dolan'] | ['cs.CL', 'cs.LG'] | We present a large, tunable neural conversational response generation model,
DialoGPT (dialogue generative pre-trained transformer). Trained on 147M
conversation-like exchanges extracted from Reddit comment chains over a period
spanning from 2005 through 2017, DialoGPT extends the Hugging Face PyTorch
transformer to attain a performance close to human both in terms of automatic
and human evaluation in single-turn dialogue settings. We show that
conversational systems that leverage DialoGPT generate more relevant,
contentful and context-consistent responses than strong baseline systems. The
pre-trained model and training pipeline are publicly released to facilitate
research into neural response generation and the development of more
intelligent open-domain dialogue systems. | 2019-11-01T18:16:54Z | Accepted by ACL 2020 system demonstration | null | null | DIALOGPT : Large-Scale Generative Pre-training for Conversational Response Generation | ['Yizhe Zhang', 'Siqi Sun', 'Michel Galley', 'Yen-Chun Chen', 'Chris Brockett', 'Xiang Gao', 'Jianfeng Gao', 'Jingjing Liu', 'W. Dolan'] | 2019 | Annual Meeting of the Association for Computational Linguistics | 1529 | 32 | ['Computer Science'] |
1911.01547 | On the Measure of Intelligence | ['François Chollet'] | ['cs.AI'] | To make deliberate progress towards more intelligent and more human-like
artificial systems, we need to be following an appropriate feedback signal: we
need to be able to define and evaluate intelligence in a way that enables
comparisons between two systems, as well as comparisons with humans. Over the
past hundred years, there has been an abundance of attempts to define and
measure intelligence, across both the fields of psychology and AI. We summarize
and critically assess these definitions and evaluation approaches, while making
apparent the two historical conceptions of intelligence that have implicitly
guided them. We note that in practice, the contemporary AI community still
gravitates towards benchmarking intelligence by comparing the skill exhibited
by AIs and humans at specific tasks such as board games and video games. We
argue that solely measuring skill at any given task falls short of measuring
intelligence, because skill is heavily modulated by prior knowledge and
experience: unlimited priors or unlimited training data allow experimenters to
"buy" arbitrary levels of skills for a system, in a way that masks the system's
own generalization power. We then articulate a new formal definition of
intelligence based on Algorithmic Information Theory, describing intelligence
as skill-acquisition efficiency and highlighting the concepts of scope,
generalization difficulty, priors, and experience. Using this definition, we
propose a set of guidelines for what a general AI benchmark should look like.
Finally, we present a benchmark closely following these guidelines, the
Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors
designed to be as close as possible to innate human priors. We argue that ARC
can be used to measure a human-like form of general fluid intelligence and that
it enables fair general intelligence comparisons between AI systems and humans. | 2019-11-05T00:31:38Z | null | null | null | null | null | null | null | null | null | null |
1911.02116 | Unsupervised Cross-lingual Representation Learning at Scale | ['Alexis Conneau', 'Kartikay Khandelwal', 'Naman Goyal', 'Vishrav Chaudhary', 'Guillaume Wenzek', 'Francisco Guzmán', 'Edouard Grave', 'Myle Ott', 'Luke Zettlemoyer', 'Veselin Stoyanov'] | ['cs.CL'] | This paper shows that pretraining multilingual language models at scale leads
to significant performance gains for a wide range of cross-lingual transfer
tasks. We train a Transformer-based masked language model on one hundred
languages, using more than two terabytes of filtered CommonCrawl data. Our
model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a
variety of cross-lingual benchmarks, including +14.6% average accuracy on XNLI,
+13% average F1 score on MLQA, and +2.4% F1 score on NER. XLM-R performs
particularly well on low-resource languages, improving 15.7% in XNLI accuracy
for Swahili and 11.4% for Urdu over previous XLM models. We also present a
detailed empirical analysis of the key factors that are required to achieve
these gains, including the trade-offs between (1) positive transfer and
capacity dilution and (2) the performance of high and low resource languages at
scale. Finally, we show, for the first time, the possibility of multilingual
modeling without sacrificing per-language performance; XLM-R is very
competitive with strong monolingual models on the GLUE and XNLI benchmarks. We
will make our code, data and models publicly available. | 2019-11-05T22:42:00Z | ACL 2020 (+ updated results) | null | null | Unsupervised Cross-lingual Representation Learning at Scale | ['Alexis Conneau', 'Kartikay Khandelwal', 'Naman Goyal', 'Vishrav Chaudhary', 'Guillaume Wenzek', 'Francisco Guzmán', 'Edouard Grave', 'Myle Ott', 'Luke Zettlemoyer', 'Veselin Stoyanov'] | 2019 | Annual Meeting of the Association for Computational Linguistics | 6627 | 42 | ['Computer Science'] |
1911.02150 | Fast Transformer Decoding: One Write-Head is All You Need | ['Noam Shazeer'] | ['cs.NE', 'cs.CL', 'cs.LG'] | Multi-head attention layers, as used in the Transformer neural sequence
model, are a powerful alternative to RNNs for moving information across and
between sequences. While training these layers is generally fast and simple,
due to parallelizability across the length of the sequence, incremental
inference (where such parallelization is impossible) is often slow, due to the
memory-bandwidth cost of repeatedly loading the large "keys" and "values"
tensors. We propose a variant called multi-query attention, where the keys and
values are shared across all of the different attention "heads", greatly
reducing the size of these tensors and hence the memory bandwidth requirements
of incremental decoding. We verify experimentally that the resulting models can
indeed be much faster to decode, and incur only minor quality degradation from
the baseline. | 2019-11-06T00:19:05Z | null | null | null | null | null | null | null | null | null | null |
1911.02671 | Open Domain Web Keyphrase Extraction Beyond Language Modeling | ['Lee Xiong', 'Chuan Hu', 'Chenyan Xiong', 'Daniel Campos', 'Arnold Overwijk'] | ['cs.CL', 'cs.IR'] | This paper studies keyphrase extraction in real-world scenarios where
documents are from diverse domains and have variant content quality. We curate
and release OpenKP, a large scale open domain keyphrase extraction dataset with
near one hundred thousand web documents and expert keyphrase annotations. To
handle the variations of domain and content quality, we develop BLING-KPE, a
neural keyphrase extraction model that goes beyond language understanding using
visual presentations of documents and weak supervision from search queries.
Experimental results on OpenKP confirm the effectiveness of BLING-KPE and the
contributions of its neural architecture, visual features, and search log weak
supervision. Zero-shot evaluations on DUC-2001 demonstrate the improved
generalization ability of learning from the open domain data compared to a
specific domain. | 2019-11-06T23:12:56Z | null | EMNLP-IJCNLP 2019 | null | null | null | null | null | null | null | null |
1911.02782 | S2ORC: The Semantic Scholar Open Research Corpus | ['Kyle Lo', 'Lucy Lu Wang', 'Mark Neumann', 'Rodney Kinney', 'Dan S. Weld'] | ['cs.CL', 'cs.DL'] | We introduce S2ORC, a large corpus of 81.1M English-language academic papers
spanning many academic disciplines. The corpus consists of rich metadata, paper
abstracts, resolved bibliographic references, as well as structured full text
for 8.1M open access papers. Full text is annotated with automatically-detected
inline mentions of citations, figures, and tables, each linked to their
corresponding paper objects. In S2ORC, we aggregate papers from hundreds of
academic publishers and digital archives into a unified source, and create the
largest publicly-available collection of machine-readable academic text to
date. We hope this resource will facilitate research and development of tools
and tasks for text mining over academic text. | 2019-11-07T07:34:43Z | ACL 2020 | null | null | GORC: A large contextual citation graph of academic papers | ['Kyle Lo', 'Lucy Lu Wang', 'Mark Neumann', 'Rodney Michael Kinney', 'Daniel S. Weld'] | 2019 | arXiv.org | 10 | 53 | ['Computer Science'] |
1911.02855 | Dice Loss for Data-imbalanced NLP Tasks | ['Xiaoya Li', 'Xiaofei Sun', 'Yuxian Meng', 'Junjun Liang', 'Fei Wu', 'Jiwei Li'] | ['cs.CL'] | Many NLP tasks such as tagging and machine reading comprehension are faced
with the severe data imbalance issue: negative examples significantly outnumber
positive examples, and the huge number of background examples (or easy-negative
examples) overwhelms the training. The most commonly used cross entropy (CE)
criterion is actually an accuracy-oriented objective, and thus creates a
discrepancy between training and test: at training time, each training instance
contributes equally to the objective function, while at test time F1 score
concerns more about positive examples. In this paper, we propose to use dice
loss in replacement of the standard cross-entropy objective for data-imbalanced
NLP tasks. Dice loss is based on the Sorensen-Dice coefficient or Tversky
index, which attaches similar importance to false positives and false
negatives, and is more immune to the data-imbalance issue. To further alleviate
the dominating influence from easy-negative examples in training, we propose to
associate training examples with dynamically adjusted weights to deemphasize
easy-negative examples. Theoretical analysis shows that this strategy narrows
down the gap between the F1 score in evaluation and the dice loss in training.
With the proposed training objective, we observe significant performance boost
on a wide range of data imbalanced NLP tasks. Notably, we are able to achieve
SOTA results on CTB5, CTB6 and UD1.4 for the part of speech tagging task; SOTA
results on CoNLL03, OntoNotes5.0, MSRA and OntoNotes4.0 for the named entity
recognition task; along with competitive results on the tasks of machine
reading comprehension and paraphrase identification. | 2019-11-07T11:14:05Z | null | null | null | null | null | null | null | null | null | null |
1911.02969 | BERTs of a feather do not generalize together: Large variability in
generalization across models with similar test set performance | ['R. Thomas McCoy', 'Junghyun Min', 'Tal Linzen'] | ['cs.CL'] | If the same neural network architecture is trained multiple times on the same
dataset, will it make similar linguistic generalizations across runs? To study
this question, we fine-tuned 100 instances of BERT on the Multi-genre Natural
Language Inference (MNLI) dataset and evaluated them on the HANS dataset, which
evaluates syntactic generalization in natural language inference. On the MNLI
development set, the behavior of all instances was remarkably consistent, with
accuracy ranging between 83.6% and 84.8%. In stark contrast, the same models
varied widely in their generalization performance. For example, on the simple
case of subject-object swap (e.g., determining that "the doctor visited the
lawyer" does not entail "the lawyer visited the doctor"), accuracy ranged from
0.00% to 66.2%. Such variation is likely due to the presence of many local
minima that are equally attractive to a low-bias learner such as a neural
network; decreasing the variability may therefore require models with stronger
inductive biases. | 2019-11-07T16:20:40Z | 11 pages, 7 figures; accepted to the 2020 BlackboxNLP workshop | null | null | null | null | null | null | null | null | null |
1911.03090 | What Would Elsa Do? Freezing Layers During Transformer Fine-Tuning | ['Jaejun Lee', 'Raphael Tang', 'Jimmy Lin'] | ['cs.CL'] | Pretrained transformer-based language models have achieved state of the art
across countless tasks in natural language processing. These models are highly
expressive, comprising at least a hundred million parameters and a dozen
layers. Recent evidence suggests that only a few of the final layers need to be
fine-tuned for high quality on downstream tasks. Naturally, a subsequent
research question is, "how many of the last layers do we need to fine-tune?" In
this paper, we precisely answer this question. We examine two recent pretrained
language models, BERT and RoBERTa, across standard tasks in textual entailment,
semantic similarity, sentiment analysis, and linguistic acceptability. We vary
the number of final layers that are fine-tuned, then study the resulting change
in task-specific effectiveness. We show that only a fourth of the final layers
need to be fine-tuned to achieve 90% of the original quality. Surprisingly, we
also find that fine-tuning all layers does not always help. | 2019-11-08T07:05:20Z | 5 pages | null | null | null | null | null | null | null | null | null |
1911.03531 | Neural Arabic Text Diacritization: State of the Art Results and a Novel
Approach for Machine Translation | ['Ali Fadel', 'Ibraheem Tuffaha', "Bara' Al-Jawarneh", 'Mahmoud Al-Ayyoub'] | ['cs.CL', 'cs.LG'] | In this work, we present several deep learning models for the automatic
diacritization of Arabic text. Our models are built using two main approaches,
viz. Feed-Forward Neural Network (FFNN) and Recurrent Neural Network (RNN),
with several enhancements such as 100-hot encoding, embeddings, Conditional
Random Field (CRF) and Block-Normalized Gradient (BNG). The models are tested
on the only freely available benchmark dataset and the results show that our
models are either better or on par with other models, which require
language-dependent post-processing steps, unlike ours. Moreover, we show that
diacritics in Arabic can be used to enhance the models of NLP tasks such as
Machine Translation (MT) by proposing the Translation over Diacritization (ToD)
approach. | 2019-11-08T20:52:12Z | 18 pages, 17 figures, 14 tables | null | 10.18653/v1/D19-5229 | Neural Arabic Text Diacritization: State of the Art Results and a Novel Approach for Machine Translation | ['A. Fadel', 'Ibraheem Tuffaha', "Bara' Al-Jawarneh", 'M. Al-Ayyoub'] | 2019 | Conference on Empirical Methods in Natural Language Processing | 31 | 27 | ['Computer Science'] |
1911.03705 | CommonGen: A Constrained Text Generation Challenge for Generative
Commonsense Reasoning | ['Bill Yuchen Lin', 'Wangchunshu Zhou', 'Ming Shen', 'Pei Zhou', 'Chandra Bhagavatula', 'Yejin Choi', 'Xiang Ren'] | ['cs.CL', 'cs.AI', 'cs.CV'] | Recently, large-scale pre-trained language models have demonstrated
impressive performance on several commonsense-reasoning benchmark datasets.
However, building machines with commonsense to compose realistically plausible
sentences remains challenging. In this paper, we present a constrained text
generation task, CommonGen associated with a benchmark dataset, to explicitly
test machines for the ability of generative commonsense reasoning. Given a set
of common concepts (e.g., {dog, frisbee, catch, throw}); the task is to
generate a coherent sentence describing an everyday scenario using these
concepts (e.g., "a man throws a frisbee and his dog catches it").
The CommonGen task is challenging because it inherently requires 1)
relational reasoning with background commonsense knowledge, and 2)
compositional generalization ability to work on unseen concept combinations.
Our dataset, constructed through a combination of crowdsourced and existing
caption corpora, consists of 79k commonsense descriptions over 35k unique
concept-sets. Experiments show that there is a large gap between
state-of-the-art text generation models (e.g., T5) and human performance.
Furthermore, we demonstrate that the learned generative commonsense reasoning
capability can be transferred to improve downstream tasks such as CommonsenseQA
by generating additional context. | 2019-11-09T14:53:59Z | Accepted to EMNLP 2020 Findings. Add one more human reference for
each test example: Table 1,3 & Figure 4 & Section 3.3, 3.4 are updated.
Project page: https://inklab.usc.edu/CommonGen/ | null | null | CommonGen: A Constrained Text Generation Dataset Towards Generative Commonsense Reasoning | ['Bill Yuchen Lin', 'Ming Shen', 'Yu Xing', 'Pei Zhou', 'Xiang Ren'] | 2019 | arXiv.org | 16 | 53 | ['Computer Science'] |
1911.03814 | Scalable Zero-shot Entity Linking with Dense Entity Retrieval | ['Ledell Wu', 'Fabio Petroni', 'Martin Josifoski', 'Sebastian Riedel', 'Luke Zettlemoyer'] | ['cs.CL'] | This paper introduces a conceptually simple, scalable, and highly effective
BERT-based entity linking model, along with an extensive evaluation of its
accuracy-speed trade-off. We present a two-stage zero-shot linking algorithm,
where each entity is defined only by a short textual description. The first
stage does retrieval in a dense space defined by a bi-encoder that
independently embeds the mention context and the entity descriptions. Each
candidate is then re-ranked with a cross-encoder, that concatenates the mention
and entity text. Experiments demonstrate that this approach is state of the art
on recent zero-shot benchmarks (6 point absolute gains) and also on more
established non-zero-shot evaluations (e.g. TACKBP-2010), despite its relative
simplicity (e.g. no explicit entity embeddings or manually engineered mention
tables). We also show that bi-encoder linking is very fast with nearest
neighbour search (e.g. linking with 5.9 million candidates in 2 milliseconds),
and that much of the accuracy gain from the more expensive cross-encoder can be
transferred to the bi-encoder via knowledge distillation. Our code and models
are available at https://github.com/facebookresearch/BLINK. | 2019-11-10T01:01:45Z | accepted at EMNLP 2020 | null | null | Zero-shot Entity Linking with Dense Entity Retrieval | ['Ledell Yu Wu', 'F. Petroni', 'Martin Josifoski', 'Sebastian Riedel', 'Luke Zettlemoyer'] | 2019 | arXiv.org | 181 | 23 | ['Computer Science'] |
1911.03854 | r/Fakeddit: A New Multimodal Benchmark Dataset for Fine-grained Fake
News Detection | ['Kai Nakamura', 'Sharon Levy', 'William Yang Wang'] | ['cs.CL', 'cs.CY', 'cs.IR'] | Fake news has altered society in negative ways in politics and culture. It
has adversely affected both online social network systems as well as offline
communities and conversations. Using automatic machine learning classification
models is an efficient way to combat the widespread dissemination of fake news.
However, a lack of effective, comprehensive datasets has been a problem for
fake news research and detection model development. Prior fake news datasets do
not provide multimodal text and image data, metadata, comment data, and
fine-grained fake news categorization at the scale and breadth of our dataset.
We present Fakeddit, a novel multimodal dataset consisting of over 1 million
samples from multiple categories of fake news. After being processed through
several stages of review, the samples are labeled according to 2-way, 3-way,
and 6-way classification categories through distant supervision. We construct
hybrid text+image models and perform extensive experiments for multiple
variations of classification, demonstrating the importance of the novel aspect
of multimodality and fine-grained classification unique to Fakeddit. | 2019-11-10T05:06:38Z | Accepted LREC 2020 | null | null | null | null | null | null | null | null | null |
1911.03882 | Pre-train and Plug-in: Flexible Conditional Text Generation with
Variational Auto-Encoders | ['Yu Duan', 'Canwen Xu', 'Jiaxin Pei', 'Jialong Han', 'Chenliang Li'] | ['cs.CL', 'cs.LG', 'stat.ML'] | Conditional Text Generation has drawn much attention as a topic of Natural
Language Generation (NLG) which provides the possibility for humans to control
the properties of generated contents. Current conditional generation models
cannot handle emerging conditions due to their joint end-to-end learning
fashion. When a new condition is added, these techniques require full retraining.
In this paper, we present a new framework named Pre-train and Plug-in
Variational Auto-Encoder (PPVAE) towards flexible conditional text generation.
PPVAE decouples the text generation module from the condition representation
module to allow "one-to-many" conditional generation. When a fresh condition
emerges, only a lightweight network needs to be trained and works as a plug-in
for PPVAE, which is efficient and desirable for real-world applications.
Extensive experiments demonstrate the superiority of PPVAE against the existing
alternatives with better conditionality and diversity but less training effort. | 2019-11-10T09:23:42Z | Accepted as a long paper at ACL 2020 | null | null | Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders | ['Yu Duan', 'Jiaxin Pei', 'Canwen Xu', 'Chenliang Li'] | 2019 | Annual Meeting of the Association for Computational Linguistics | 43 | 42 | ['Computer Science', 'Mathematics'] |
1911.03894 | CamemBERT: a Tasty French Language Model | ['Louis Martin', 'Benjamin Muller', 'Pedro Javier Ortiz Suárez', 'Yoann Dupont', 'Laurent Romary', 'Éric Villemonte de la Clergerie', 'Djamé Seddah', 'Benoît Sagot'] | ['cs.CL'] | Pretrained language models are now ubiquitous in Natural Language Processing.
Despite their success, most available models have either been trained on
English data or on the concatenation of data in multiple languages. This makes
practical use of such models --in all languages except English-- very limited.
In this paper, we investigate the feasibility of training monolingual
Transformer-based language models for other languages, taking French as an
example and evaluating our language models on part-of-speech tagging,
dependency parsing, named entity recognition and natural language inference
tasks. We show that the use of web crawled data is preferable to the use of
Wikipedia data. More surprisingly, we show that a relatively small web crawled
dataset (4GB) leads to results that are as good as those obtained using larger
datasets (130+GB). Our best performing model CamemBERT reaches or improves the
state of the art in all four downstream tasks. | 2019-11-10T10:46:37Z | ACL 2020 long paper. Web site: https://camembert-model.fr | Proceedings of the 58th Annual Meeting of the Association for
Computational Linguistics, July 2020, Online | 10.18653/v1/2020.acl-main.645 | null | null | null | null | null | null | null |
1911.04211 | NegBERT: A Transfer Learning Approach for Negation Detection and Scope
Resolution | ['Aditya Khandelwal', 'Suraj Sawant'] | ['cs.CL'] | Negation is an important characteristic of language, and a major component of
information extraction from text. This subtask is of considerable importance to
the biomedical domain. Over the years, multiple approaches have been explored
to address this problem: Rule-based systems, Machine Learning classifiers,
Conditional Random Field Models, CNNs and more recently BiLSTMs. In this paper,
we look at applying Transfer Learning to this problem. First, we extensively
review previous literature addressing Negation Detection and Scope Resolution
across the 3 datasets that have gained popularity over the years: the BioScope
Corpus, the Sherlock dataset, and the SFU Review Corpus. We then explore the
decision choices involved with using BERT, a popular transfer learning model,
for this task, and report state-of-the-art results for scope resolution across
all 3 datasets. Our model, referred to as NegBERT, achieves a token level F1
score on scope resolution of 92.36 on the Sherlock dataset, 95.68 on the
BioScope Abstracts subcorpus, 91.24 on the BioScope Full Papers subcorpus,
90.95 on the SFU Review Corpus, outperforming the previous state-of-the-art
systems by a significant margin. We also analyze the model's generalizability
to datasets on which it is not trained. | 2019-11-11T12:28:29Z | The 12th Language Resources and Evaluation Conference (LREC 2020) | null | null | null | null | null | null | null | null | null |
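The token-level F1 quoted above is the standard harmonic mean of precision and recall computed over per-token scope labels. A minimal illustration of that metric (not the authors' evaluation code; `gold` and `pred` are hypothetical binary label sequences, 1 = token inside the negation scope):

```python
def token_f1(gold, pred):
    """Token-level F1 for binary negation-scope labels.

    `gold` and `pred` are equal-length sequences of 0/1 labels, one per token.
    """
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```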
1911.04252 | Self-training with Noisy Student improves ImageNet classification | ['Qizhe Xie', 'Minh-Thang Luong', 'Eduard Hovy', 'Quoc V. Le'] | ['cs.LG', 'cs.CV', 'stat.ML'] | We present Noisy Student Training, a semi-supervised learning approach that
works well even when labeled data is abundant. Noisy Student Training achieves
88.4% top-1 accuracy on ImageNet, which is 2.0% better than the
state-of-the-art model that requires 3.5B weakly labeled Instagram images. On
robustness test sets, it improves ImageNet-A top-1 accuracy from 61.0% to
83.7%, reduces ImageNet-C mean corruption error from 45.7 to 28.3, and reduces
ImageNet-P mean flip rate from 27.8 to 12.2.
Noisy Student Training extends the idea of self-training and distillation
with the use of equal-or-larger student models and noise added to the student
during learning. On ImageNet, we first train an EfficientNet model on labeled
images and use it as a teacher to generate pseudo labels for 300M unlabeled
images. We then train a larger EfficientNet as a student model on the
combination of labeled and pseudo labeled images. We iterate this process by
putting back the student as the teacher. During the learning of the student, we
inject noise such as dropout, stochastic depth, and data augmentation via
RandAugment to the student so that the student generalizes better than the
teacher. Models are available at
https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet.
Code is available at https://github.com/google-research/noisystudent. | 2019-11-11T18:59:27Z | CVPR 2020 | null | null | Self-Training With Noisy Student Improves ImageNet Classification | ['Qizhe Xie', 'E. Hovy', 'Minh-Thang Luong', 'Quoc V. Le'] | 2019 | Computer Vision and Pattern Recognition | 2398 | 110 | ['Computer Science', 'Mathematics'] |
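The iterative teacher/student loop described in this abstract can be sketched with a toy 1-D threshold "learner" standing in for EfficientNet (the learner and data are illustrative assumptions; the real method also injects dropout, stochastic depth, and RandAugment noise into the student, which a scalar threshold cannot express):

```python
def fit_threshold(xs, ys):
    """Toy stand-in for a classifier: pick the decision threshold on a 1-D
    feature that best separates binary labels."""
    best_t, best_acc = xs[0], -1.0
    for t in xs:
        acc = sum((x >= t) == bool(y) for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t


def noisy_student(labeled, unlabeled, rounds=3):
    xs, ys = zip(*labeled)
    teacher = fit_threshold(list(xs), list(ys))      # 1. train teacher on labels
    for _ in range(rounds):
        pseudo = [(x, int(x >= teacher)) for x in unlabeled]  # 2. pseudo-label
        all_x, all_y = zip(*(list(labeled) + pseudo))
        # 3. train a student on labeled + pseudo-labeled data (the paper also
        #    noises the student here), then 4. put the student back as teacher
        teacher = fit_threshold(list(all_x), list(all_y))
    return teacher
```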
1911.04944 | CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB | ['Holger Schwenk', 'Guillaume Wenzek', 'Sergey Edunov', 'Edouard Grave', 'Armand Joulin'] | ['cs.CL'] | We show that margin-based bitext mining in a multilingual sentence space can
be applied to monolingual corpora of billions of sentences. We are using ten
snapshots of a curated common crawl corpus (Wenzek et al., 2019) totalling 32.7
billion unique sentences. Using one unified approach for 38 languages, we were
able to mine 4.5 billion parallel sentences, out of which 661 million are
aligned with English. 20 language pairs have more than 30 million parallel
sentences, 112 more than 10 million, and most more than one million, including
direct alignments between many European or Asian languages.
To evaluate the quality of the mined bitexts, we train NMT systems for most
of the language pairs and evaluate them on TED, WMT and WAT test sets. Using
our mined bitexts only and no human translated parallel data, we achieve a new
state-of-the-art for a single system on the WMT'19 test set for translation
between English and German, Russian and Chinese, as well as German/French. In
particular, our English/German system outperforms the best single one by close
to 4 BLEU points and is almost on par with the best WMT'19 evaluation system which
uses system combination and back-translation. We also achieve excellent results
for distant language pairs like Russian/Japanese, outperforming the best
submission at the 2019 workshop on Asian Translation (WAT). | 2019-11-10T12:09:46Z | 13 pages, 4 figures. arXiv admin note: text overlap with
arXiv:1907.05791 | null | null | CCMatrix: Mining Billions of High-Quality Parallel Sentences on the Web | ['Holger Schwenk', 'Guillaume Wenzek', 'Sergey Edunov', 'Edouard Grave', 'Armand Joulin'] | 2019 | Annual Meeting of the Association for Computational Linguistics | 263 | 62 | ['Computer Science'] |
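Margin-based mining of this kind scores a candidate pair by the ratio of its cosine similarity to the average similarity of each side's k nearest neighbours (the ratio-margin criterion of Artetxe and Schwenk that this line of work builds on). A pure-Python sketch of the scoring function only, not the paper's billion-scale mining pipeline:

```python
import math


def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def margin_score(x, y, xs, ys, k=2):
    """Ratio margin: cos(x, y) normalised by the mean similarity of x and y
    to their k nearest neighbours in the other language's corpus."""
    knn_x = sorted((cos(x, c) for c in ys), reverse=True)[:k]
    knn_y = sorted((cos(y, c) for c in xs), reverse=True)[:k]
    denom = (sum(knn_x) / k + sum(knn_y) / k) / 2
    return cos(x, y) / denom
```

A pair whose similarity stands well above its neighbourhood average scores above 1 and is kept; generic near-duplicates score close to 1 and are filtered.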
1911.05405 | Identification of Rhetorical Roles of Sentences in Indian Legal
Judgments | ['Paheli Bhattacharya', 'Shounak Paul', 'Kripabandhu Ghosh', 'Saptarshi Ghosh', 'Adam Wyner'] | ['cs.IR'] | Automatically understanding the rhetorical roles of sentences in a legal case
judgement is an important problem to solve, since it can help in several
downstream tasks like summarization of legal judgments, legal search, and so
on. The task is challenging since legal case documents are usually not
well-structured, and these rhetorical roles may be subjective (as evident from
variation of opinions between legal experts). In this paper, we address this
task for judgments from the Supreme Court of India. We label sentences in 50
documents using multiple human annotators, and perform an extensive analysis of
the human-assigned labels. We also attempt automatic identification of the
rhetorical roles of sentences. While prior approaches towards this task used
Conditional Random Fields over manually handcrafted features, we explore the
use of deep neural models which do not require hand-crafting of features.
Experiments show that neural models perform much better in this task than
baseline methods which use handcrafted features. | 2019-11-13T11:21:20Z | Accepted at the 32nd International Conference on Legal Knowledge and
Information Systems (JURIX) 2019 | null | null | null | null | null | null | null | null | null |
1911.05507 | Compressive Transformers for Long-Range Sequence Modelling | ['Jack W. Rae', 'Anna Potapenko', 'Siddhant M. Jayakumar', 'Timothy P. Lillicrap'] | ['cs.LG', 'stat.ML'] | We present the Compressive Transformer, an attentive sequence model which
compresses past memories for long-range sequence learning. We find the
Compressive Transformer obtains state-of-the-art language modelling results in
the WikiText-103 and Enwik8 benchmarks, achieving 17.1 ppl and 0.97 bpc
respectively. We also find it can model high-frequency speech effectively and
can be used as a memory mechanism for RL, demonstrated on an object matching
task. To promote the domain of long-range sequence learning, we propose a new
open-vocabulary language modelling benchmark derived from books, PG-19. | 2019-11-13T14:36:01Z | 19 pages, 6 figures, 10 tables | null | null | null | null | null | null | null | null | null |
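The central mechanism, compressing old memories instead of discarding them, can be sketched as a FIFO memory whose overflow is pooled down by a compression rate (mean-pooling is one of several compression functions the paper studies; the bookkeeping here is an illustrative simplification, with activations as plain lists of floats):

```python
def compress(old_memories, rate=2):
    """Mean-pool consecutive groups of `rate` memory vectors into one slot."""
    return [
        [sum(dims) / rate for dims in zip(*old_memories[i:i + rate])]
        for i in range(0, len(old_memories) - rate + 1, rate)
    ]


def update_memory(memory, compressed, new_acts, mem_size=4, rate=2):
    """Push new activations into a FIFO memory; activations that fall off the
    end are compressed and appended to a secondary memory, not dropped."""
    memory = memory + new_acts
    overflow = memory[:-mem_size] if len(memory) > mem_size else []
    memory = memory[-mem_size:]
    if overflow:
        compressed = compressed + compress(overflow, rate)
    return memory, compressed
```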
1911.05722 | Momentum Contrast for Unsupervised Visual Representation Learning | ['Kaiming He', 'Haoqi Fan', 'Yuxin Wu', 'Saining Xie', 'Ross Girshick'] | ['cs.CV'] | We present Momentum Contrast (MoCo) for unsupervised visual representation
learning. From a perspective on contrastive learning as dictionary look-up, we
build a dynamic dictionary with a queue and a moving-averaged encoder. This
enables building a large and consistent dictionary on-the-fly that facilitates
contrastive unsupervised learning. MoCo provides competitive results under the
common linear protocol on ImageNet classification. More importantly, the
representations learned by MoCo transfer well to downstream tasks. MoCo can
outperform its supervised pre-training counterpart in 7 detection/segmentation
tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large
margins. This suggests that the gap between unsupervised and supervised
representation learning has been largely closed in many vision tasks. | 2019-11-13T18:53:26Z | CVPR 2020 camera-ready. Code:
https://github.com/facebookresearch/moco | null | null | Momentum Contrast for Unsupervised Visual Representation Learning | ['Kaiming He', 'Haoqi Fan', 'Yuxin Wu', 'Saining Xie', 'Ross B. Girshick'] | 2019 | Computer Vision and Pattern Recognition | 12184 | 66 | ['Computer Science'] |
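The two mechanisms named in this abstract, a moving-averaged key encoder and a queue-based dictionary, reduce to a few lines when each encoder is represented as a flat list of weights (a sketch of the update rules only; real MoCo applies the momentum update to every parameter tensor and stores encoded keys in the queue):

```python
def momentum_update(query_weights, key_weights, m=0.999):
    """The key encoder is a slow exponential moving average of the query
    encoder, which keeps the dictionary keys consistent over time."""
    return [m * k + (1.0 - m) * q for q, k in zip(query_weights, key_weights)]


def enqueue(queue, new_keys, max_size=4):
    """FIFO dictionary: the newest mini-batch of keys goes in, the oldest
    keys fall out, decoupling dictionary size from mini-batch size."""
    queue = queue + new_keys
    return queue[-max_size:]
```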
1911.06667 | CenterMask : Real-Time Anchor-Free Instance Segmentation | ['Youngwan Lee', 'Jongyoul Park'] | ['cs.CV'] | We propose a simple yet efficient anchor-free instance segmentation, called
CenterMask, that adds a novel spatial attention-guided mask (SAG-Mask) branch
to anchor-free one stage object detector (FCOS) in the same vein with Mask
R-CNN. Plugged into the FCOS object detector, the SAG-Mask branch predicts a
segmentation mask on each box with the spatial attention map that helps to
focus on informative pixels and suppress noise. We also present an improved
backbone network, VoVNetV2, with two effective strategies: (1) residual
connection for alleviating the optimization problem of larger VoVNet
\cite{lee2019energy} and (2) effective Squeeze-Excitation (eSE) dealing with
the channel information loss problem of original SE. With SAG-Mask and
VoVNetV2, we design CenterMask and CenterMask-Lite that are targeted to large
and small models, respectively. Using the same ResNet-101-FPN backbone,
CenterMask achieves 38.3%, surpassing all previous state-of-the-art methods
while at a much faster speed. CenterMask-Lite also outperforms the
state-of-the-art by large margins at over 35fps on Titan Xp. We hope that
CenterMask and VoVNetV2 can serve as a solid baseline of real-time instance
segmentation and backbone network for various vision tasks, respectively. The
Code is available at https://github.com/youngwanLEE/CenterMask. | 2019-11-15T14:38:12Z | CVPR 2020 | null | null | null | null | null | null | null | null | null |
1911.07023 | Effectively Unbiased FID and Inception Score and where to find them | ['Min Jin Chong', 'David Forsyth'] | ['cs.CV', 'cs.LG'] | This paper shows that two commonly used evaluation metrics for generative
models, the Fr\'echet Inception Distance (FID) and the Inception Score (IS),
are biased -- the expected value of the score computed for a finite sample set
is not the true value of the score. Worse, the paper shows that the bias term
depends on the particular model being evaluated, so model A may get a better
score than model B simply because model A's bias term is smaller. This effect
cannot be fixed by evaluating at a fixed number of samples. This means all
comparisons using FID or IS as currently computed are unreliable.
We then show how to extrapolate the score to obtain an effectively bias-free
estimate of scores computed with an infinite number of samples, which we term
$\overline{\textrm{FID}}_\infty$ and $\overline{\textrm{IS}}_\infty$. In turn,
this effectively bias-free estimate requires good estimates of scores with a
finite number of samples. We show that using Quasi-Monte Carlo integration
notably improves estimates of FID and IS for finite sample sets. Our
extrapolated scores are simple, drop-in replacements for the finite sample
scores. Additionally, we show that using low discrepancy sequence in GAN
training offers small improvements in the resulting generator. | 2019-11-16T12:54:05Z | CVPR 2020 | null | null | null | null | null | null | null | null | null |
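The extrapolation rests on the paper's observation that the expected score at sample size N is approximately linear in 1/N, so fitting a line to scores measured at several N and reading off the intercept estimates the infinite-sample score. A sketch under that linear model, using an ordinary least-squares fit (illustrative; the actual FID values would come from a real evaluation run):

```python
def fid_infinity(sample_sizes, fid_scores):
    """Fit FID(N) ~ a * (1/N) + b and return b, i.e. the score extrapolated
    to an infinite number of samples."""
    xs = [1.0 / n for n in sample_sizes]
    ys = list(fid_scores)
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return mean_y - slope * mean_x  # intercept = value at 1/N -> 0
```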
1911.07067 | ResUNet++: An Advanced Architecture for Medical Image Segmentation | ['Debesh Jha', 'Pia H. Smedsrud', 'Michael A. Riegler', 'Dag Johansen', 'Thomas de Lange', 'Pal Halvorsen', 'Havard D. Johansen'] | ['eess.IV', 'cs.CV'] | Accurate computer-aided polyp detection and segmentation during colonoscopy
examinations can help endoscopists resect abnormal tissue and thereby decrease
chances of polyps growing into cancer. Towards developing a fully automated
model for pixel-wise polyp segmentation, we propose ResUNet++, which is an
improved ResUNet architecture for colonoscopic image segmentation. Our
experimental evaluations show that the suggested architecture produces good
segmentation results on publicly available datasets. Furthermore, ResUNet++
significantly outperforms U-Net and ResUNet, two key state-of-the-art deep
learning architectures, by achieving high evaluation scores with a dice
coefficient of 81.33%, and a mean Intersection over Union (mIoU) of 79.27% for
the Kvasir-SEG dataset and a dice coefficient of 79.55%, and a mIoU of 79.62%
with CVC-612 dataset. | 2019-11-16T18:04:17Z | 7 pages, 3 figures, 21st IEEE International Symposium on Multimedia | null | null | null | null | null | null | null | null | null |
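The dice coefficient and mIoU reported above are standard overlap metrics between a predicted and a ground-truth mask; for a single binary mask they reduce to the following (illustrative, not the authors' evaluation code; masks are flattened 0/1 pixel lists):

```python
def dice(pred, gold):
    """Dice = 2|P ∩ G| / (|P| + |G|) over binary pixel masks."""
    inter = sum(p and g for p, g in zip(pred, gold))
    return 2.0 * inter / (sum(pred) + sum(gold))


def iou(pred, gold):
    """IoU = |P ∩ G| / |P ∪ G|; mIoU averages this over classes/images."""
    inter = sum(p and g for p, g in zip(pred, gold))
    union = sum(p or g for p, g in zip(pred, gold))
    return inter / union
```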
1911.09070 | EfficientDet: Scalable and Efficient Object Detection | ['Mingxing Tan', 'Ruoming Pang', 'Quoc V. Le'] | ['cs.CV', 'cs.LG', 'eess.IV'] | Model efficiency has become increasingly important in computer vision. In
this paper, we systematically study neural network architecture design choices
for object detection and propose several key optimizations to improve
efficiency. First, we propose a weighted bi-directional feature pyramid network
(BiFPN), which allows easy and fast multiscale feature fusion; Second, we
propose a compound scaling method that uniformly scales the resolution, depth,
and width for all backbone, feature network, and box/class prediction networks
at the same time. Based on these optimizations and better backbones, we have
developed a new family of object detectors, called EfficientDet, which
consistently achieve much better efficiency than prior art across a wide
spectrum of resource constraints. In particular, with single model and
single-scale, our EfficientDet-D7 achieves state-of-the-art 55.1 AP on COCO
test-dev with 77M parameters and 410B FLOPs, being 4x - 9x smaller and using
13x - 42x fewer FLOPs than previous detectors. Code is available at
https://github.com/google/automl/tree/master/efficientdet. | 2019-11-20T18:16:09Z | CVPR 2020 | Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition (2020) | null | EfficientDet: Scalable and Efficient Object Detection | ['Mingxing Tan', 'Ruoming Pang', 'Quoc V. Le'] | 2019 | Computer Vision and Pattern Recognition | 5136 | 45 | ['Computer Science', 'Engineering'] |
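Compound scaling means one coefficient grows resolution, depth, and width together rather than tuning each axis independently. A sketch of the idea with illustrative base values and growth factors (placeholders, not the paper's exact configuration table):

```python
def compound_scale(phi, base_resolution=512, base_depth=3, base_width=64,
                   res_step=128, width_growth=1.35):
    """Derive a model configuration from a single scaling coefficient `phi`.
    The base values and growth factors are illustrative assumptions."""
    return {
        "resolution": base_resolution + res_step * phi,
        "depth": base_depth + phi,
        "width": int(round(base_width * width_growth ** phi)),
    }
```

Sweeping `phi` from 0 upward then yields the family D0, D1, ... of successively larger detectors from one recipe.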
1911.09099 | SINet: Extreme Lightweight Portrait Segmentation Networks with Spatial
Squeeze Modules and Information Blocking Decoder | ['Hyojin Park', 'Lars Lowe Sjösund', 'YoungJoon Yoo', 'Nicolas Monet', 'Jihwan Bang', 'Nojun Kwak'] | ['cs.CV'] | Designing a lightweight and robust portrait segmentation algorithm is an
important task for a wide range of face applications. However, the problem has
been considered as a subset of the object segmentation problem and less handled
in the semantic segmentation field. Obviously, portrait segmentation has its
unique requirements. First, because the portrait segmentation is performed in
the middle of a whole process of many real-world applications, it requires
extremely lightweight models. Second, there have not been any public datasets in
this domain that contain a sufficient number of images with unbiased
statistics. To solve the first problem, we introduce the new extremely
lightweight portrait segmentation model SINet, containing an information
blocking decoder and spatial squeeze modules. The information blocking decoder
uses confidence estimates to recover local spatial information without spoiling
global consistency. The spatial squeeze module uses multiple receptive fields
to cope with various sizes of consistency in the image. To tackle the second
problem, we propose a simple method to create additional portrait segmentation
data which can improve accuracy on the EG1800 dataset. In our qualitative and
quantitative analysis on the EG1800 dataset, we show that our method
outperforms various existing lightweight segmentation models. Our method
reduces the number of parameters from 2.1M to 86.9K (around 95.9% reduction),
while keeping accuracy within a 1% margin of the state-of-the-art
portrait segmentation method. We also show that our model runs successfully
on a real mobile device at 100.6 FPS. In addition, we demonstrate that our
method can be used for general semantic segmentation on the Cityscapes dataset.
The code and dataset are available in
https://github.com/HYOJINPARK/ExtPortraitSeg . | 2019-11-20T15:39:24Z | https://github.com/HYOJINPARK/ExtPortraitSeg. arXiv admin note: text
overlap with arXiv:1908.03093 | null | null | null | null | null | null | null | null | null |
1911.09665 | Adversarial Examples Improve Image Recognition | ['Cihang Xie', 'Mingxing Tan', 'Boqing Gong', 'Jiang Wang', 'Alan Yuille', 'Quoc V. Le'] | ['cs.CV'] | Adversarial examples are commonly viewed as a threat to ConvNets. Here we
present an opposite perspective: adversarial examples can be used to improve
image recognition models if harnessed in the right manner. We propose AdvProp,
an enhanced adversarial training scheme which treats adversarial examples as
additional examples, to prevent overfitting. Key to our method is the usage of
a separate auxiliary batch norm for adversarial examples, as they have
different underlying distributions to normal examples.
We show that AdvProp improves a wide range of models on various image
recognition tasks and performs better when the models are bigger. For instance,
by applying AdvProp to the latest EfficientNet-B7 [28] on ImageNet, we achieve
significant improvements on ImageNet (+0.7%), ImageNet-C (+6.5%), ImageNet-A
(+7.0%), Stylized-ImageNet (+4.8%). With an enhanced EfficientNet-B8, our
method achieves the state-of-the-art 85.5% ImageNet top-1 accuracy without
extra data. This result even surpasses the best model in [20] which is trained
with 3.5B Instagram images (~3000X more than ImageNet) and ~9.4X more
parameters. Models are available at
https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet. | 2019-11-21T18:53:23Z | CVPR 2020, models are available at
https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet | null | null | null | null | null | null | null | null | null |
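The key mechanism, a separate auxiliary batch norm for adversarial examples, amounts to keeping per-branch normalisation statistics and routing each batch to the branch matching its distribution. A pure-Python stand-in (simplified: per-batch statistics only, no running averages or learned affine parameters, 1-D features instead of tensors):

```python
import math


class DualBatchNorm:
    """Keeps separate mean/variance statistics for 'clean' and 'adv' batches,
    since the two example distributions differ."""

    def __init__(self):
        self.stats = {"clean": (0.0, 1.0), "adv": (0.0, 1.0)}

    def __call__(self, batch, branch):
        mean = sum(batch) / len(batch)
        var = sum((x - mean) ** 2 for x in batch) / len(batch)
        self.stats[branch] = (mean, var)  # each branch sees only its own data
        return [(x - mean) / math.sqrt(var + 1e-5) for x in batch]
```

At test time only the main ("clean") branch is used, so the adversarial statistics never pollute inference.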
1911.09709 | Automatically Neutralizing Subjective Bias in Text | ['Reid Pryzant', 'Richard Diehl Martinez', 'Nathan Dass', 'Sadao Kurohashi', 'Dan Jurafsky', 'Diyi Yang'] | ['cs.CL', 'cs.AI'] | Texts like news, encyclopedias, and some social media strive for objectivity.
Yet bias in the form of inappropriate subjectivity - introducing attitudes via
framing, presupposing truth, and casting doubt - remains ubiquitous. This kind
of bias erodes our collective trust and fuels social conflict. To address this
issue, we introduce a novel testbed for natural language generation:
automatically bringing inappropriately subjective text into a neutral point of
view ("neutralizing" biased text). We also offer the first parallel corpus of
biased language. The corpus contains 180,000 sentence pairs and originates from
Wikipedia edits that removed various framings, presuppositions, and attitudes
from biased sentences. Last, we propose two strong encoder-decoder baselines
for the task. A straightforward yet opaque CONCURRENT system uses a BERT
encoder to identify subjective words as part of the generation process. An
interpretable and controllable MODULAR algorithm separates these steps, using
(1) a BERT-based classifier to identify problematic words and (2) a novel join
embedding through which the classifier can edit the hidden states of the
encoder. Large-scale human evaluation across four domains (encyclopedias, news
headlines, books, and political speeches) suggests that these algorithms are a
first step towards the automatic identification and reduction of bias. | 2019-11-21T19:15:03Z | To appear at AAAI 2020 | null | null | null | null | null | null | null | null | null |
1911.10436 | ScienceExamCER: A High-Density Fine-Grained Science-Domain Corpus for
Common Entity Recognition | ['Hannah Smith', 'Zeyu Zhang', 'John Culnan', 'Peter Jansen'] | ['cs.CL'] | Named entity recognition identifies common classes of entities in text, but
these entity labels are generally sparse, limiting utility to downstream tasks.
In this work we present ScienceExamCER, a densely-labeled semantic
classification corpus of 133k mentions in the science exam domain where nearly
all (96%) of content words have been annotated with one or more fine-grained
semantic class labels including taxonomic groups, meronym groups, verb/action
groups, properties and values, and synonyms. Semantic class labels are drawn
from a manually-constructed fine-grained typology of 601 classes generated
through a data-driven analysis of 4,239 science exam questions. We show an
off-the-shelf BERT-based named entity recognition model modified for
multi-label classification achieves an accuracy of 0.85 F1 on this task,
suggesting strong utility for downstream tasks in science domain question
answering requiring densely-labeled semantic classification. | 2019-11-24T00:08:09Z | null | null | null | null | null | null | null | null | null | null |
1911.10683 | Image-based table recognition: data, model, and evaluation | ['Xu Zhong', 'Elaheh ShafieiBavani', 'Antonio Jimeno Yepes'] | ['cs.CV'] | Important information that relates to a specific topic in a document is often
organized in tabular format to assist readers with information retrieval and
comparison, which may be difficult to provide in natural language. However,
tabular data in unstructured digital documents, e.g., Portable Document Format
(PDF) and images, are difficult to parse into structured machine-readable
format, due to complexity and diversity in their structure and style. To
facilitate image-based table recognition with deep learning, we develop the
largest publicly available table recognition dataset PubTabNet
(https://github.com/ibm-aur-nlp/PubTabNet), containing 568k table images with
corresponding structured HTML representation. PubTabNet is automatically
generated by matching the XML and PDF representations of the scientific
articles in PubMed Central Open Access Subset (PMCOA). We also propose a novel
attention-based encoder-dual-decoder (EDD) architecture that converts images of
tables into HTML code. The model has a structure decoder which reconstructs the
table structure and helps the cell decoder to recognize cell content. In
addition, we propose a new Tree-Edit-Distance-based Similarity (TEDS) metric
for table recognition, which more appropriately captures multi-hop cell
misalignment and OCR errors than the pre-established metric. The experiments
demonstrate that the EDD model can accurately recognize complex tables solely
relying on the image representation, outperforming the state-of-the-art by 9.7%
absolute TEDS score. | 2019-11-25T03:25:03Z | null | null | null | Image-based table recognition: data, model, and evaluation | ['Xu Zhong', 'Elaheh Shafieibavani', 'Antonio Jimeno-Yepes'] | 2019 | European Conference on Computer Vision | 223 | 42 | ['Computer Science'] |
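TEDS turns a tree edit distance between the HTML trees of two tables into a similarity in [0, 1]. A minimal sketch of the similarity formula, taking a precomputed edit distance and node counts as inputs (computing the tree edit distance itself, e.g. with the APTED algorithm over tagged, content-bearing HTML nodes, is outside this sketch):

```python
def teds(edit_distance, n_nodes_a, n_nodes_b):
    """Tree-Edit-Distance-based Similarity: 1 - dist / max(#nodes).

    Identical trees score 1.0; a distance as large as the bigger tree
    scores 0.0.
    """
    return 1.0 - edit_distance / max(n_nodes_a, n_nodes_b)
```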
1911.11641 | PIQA: Reasoning about Physical Commonsense in Natural Language | ['Yonatan Bisk', 'Rowan Zellers', 'Ronan Le Bras', 'Jianfeng Gao', 'Yejin Choi'] | ['cs.CL', 'cs.AI', 'cs.LG'] | To apply eyeshadow without a brush, should I use a cotton swab or a
toothpick? Questions requiring this kind of physical commonsense pose a
challenge to today's natural language understanding systems. While recent
pretrained models (such as BERT) have made progress on question answering over
more abstract domains - such as news articles and encyclopedia entries, where
text is plentiful - in more physical domains, text is inherently limited due to
reporting bias. Can AI systems learn to reliably answer physical common-sense
questions without experiencing the physical world? In this paper, we introduce
the task of physical commonsense reasoning and a corresponding benchmark
dataset Physical Interaction: Question Answering or PIQA. Though humans find
the dataset easy (95% accuracy), large pretrained models struggle (77%). We
provide analysis about the dimensions of knowledge that existing models lack,
which offers significant opportunities for future research. | 2019-11-26T15:31:46Z | AAAI 2020 | null | null | null | null | null | null | null | null | null |
1911.11763 | SuperGlue: Learning Feature Matching with Graph Neural Networks | ['Paul-Edouard Sarlin', 'Daniel DeTone', 'Tomasz Malisiewicz', 'Andrew Rabinovich'] | ['cs.CV'] | This paper introduces SuperGlue, a neural network that matches two sets of
local features by jointly finding correspondences and rejecting non-matchable
points. Assignments are estimated by solving a differentiable optimal transport
problem, whose costs are predicted by a graph neural network. We introduce a
flexible context aggregation mechanism based on attention, enabling SuperGlue
to reason about the underlying 3D scene and feature assignments jointly.
Compared to traditional, hand-designed heuristics, our technique learns priors
over geometric transformations and regularities of the 3D world through
end-to-end training from image pairs. SuperGlue outperforms other learned
approaches and achieves state-of-the-art results on the task of pose estimation
in challenging real-world indoor and outdoor environments. The proposed method
performs matching in real-time on a modern GPU and can be readily integrated
into modern SfM or SLAM systems. The code and trained weights are publicly
available at https://github.com/magicleap/SuperGluePretrainedNetwork. | 2019-11-26T18:57:21Z | Oral at CVPR 2020, with appendix and link to publicly available code | null | null | null | null | null | null | null | null | null |
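Differentiable optimal transport of this kind is commonly solved with Sinkhorn iterations, which alternately normalise the rows and columns of a positive score matrix until it becomes a doubly-stochastic soft assignment. A sketch of the classic iteration (SuperGlue additionally augments the matrix with a learnable "dustbin" row and column for non-matchable points, omitted here):

```python
def sinkhorn(scores, iters=50):
    """Alternate row/column normalisation of a positive score matrix,
    converging to a doubly-stochastic soft assignment between two point sets."""
    P = [[v for v in row] for row in scores]
    for _ in range(iters):
        P = [[v / sum(row) for v in row] for row in P]  # rows sum to 1
        col_sums = [sum(P[i][j] for i in range(len(P))) for j in range(len(P[0]))]
        P = [[P[i][j] / col_sums[j] for j in range(len(P[0]))]
             for i in range(len(P))]                    # columns sum to 1
    return P
```

Because every step is differentiable, gradients flow through the assignment back into the graph neural network that predicts the scores.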
1911.11907 | GhostNet: More Features from Cheap Operations | ['Kai Han', 'Yunhe Wang', 'Qi Tian', 'Jianyuan Guo', 'Chunjing Xu', 'Chang Xu'] | ['cs.CV'] | Deploying convolutional neural networks (CNNs) on embedded devices is
difficult due to the limited memory and computation resources. The redundancy
in feature maps is an important characteristic of those successful CNNs, but
has rarely been investigated in neural architecture design. This paper proposes
a novel Ghost module to generate more feature maps from cheap operations. Based
on a set of intrinsic feature maps, we apply a series of linear transformations
with cheap cost to generate many ghost feature maps that could fully reveal
information underlying intrinsic features. The proposed Ghost module can be
taken as a plug-and-play component to upgrade existing convolutional neural
networks. Ghost bottlenecks are designed to stack Ghost modules, and then the
lightweight GhostNet can be easily established. Experiments conducted on
benchmarks demonstrate that the proposed Ghost module is an impressive
alternative to convolution layers in baseline models, and our GhostNet can
achieve higher recognition performance (e.g. $75.7\%$ top-1 accuracy) than
MobileNetV3 with similar computational cost on the ImageNet ILSVRC-2012
classification dataset. Code is available at
https://github.com/huawei-noah/ghostnet | 2019-11-27T01:36:42Z | CVPR 2020. Code is available at
https://github.com/huawei-noah/ghostnet | null | null | GhostNet: More Features From Cheap Operations | ['Kai Han', 'Yunhe Wang', 'Qi Tian', 'Jianyuan Guo', 'Chunjing Xu', 'Chang Xu'] | 2019 | Computer Vision and Pattern Recognition | 2724 | 72 | ['Computer Science'] |
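The parameter saving of a Ghost module over an ordinary convolution can be estimated with back-of-envelope arithmetic from the construction described above: a primary convolution produces only 1/s of the output maps, and cheap depthwise transforms generate the rest (kernel sizes and the ratio s below are assumed example values, and biases are ignored):

```python
def conv_params(c_in, c_out, k):
    """Parameter count of an ordinary k x k convolution."""
    return c_in * c_out * k * k


def ghost_params(c_in, c_out, k, s, d=3):
    """Ghost module: a primary k x k conv producing c_out/s intrinsic maps,
    then cheap d x d depthwise ops generating the other (s-1)/s of the maps."""
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k
    cheap = intrinsic * (s - 1) * d * d
    return primary + cheap
```

With c_in = c_out = 64, k = d = 3 and s = 2, the module needs roughly half the parameters of the plain convolution, matching the roughly s-fold saving the construction suggests.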
1911.11929 | CSPNet: A New Backbone that can Enhance Learning Capability of CNN | ['Chien-Yao Wang', 'Hong-Yuan Mark Liao', 'I-Hau Yeh', 'Yueh-Hua Wu', 'Ping-Yang Chen', 'Jun-Wei Hsieh'] | ['cs.CV'] | Neural networks have enabled state-of-the-art approaches to achieve
incredible results on computer vision tasks such as object detection. However,
such success greatly relies on costly computation resources, which hinders
people with cheap devices from appreciating the advanced technology. In this
paper, we propose Cross Stage Partial Network (CSPNet) to mitigate the problem
that previous works require heavy inference computations from the network
architecture perspective. We attribute the problem to the duplicate gradient
information within network optimization. The proposed networks respect the
variability of the gradients by integrating feature maps from the beginning and
the end of a network stage, which, in our experiments, reduces computations by
20% with equivalent or even superior accuracy on the ImageNet dataset, and
significantly outperforms state-of-the-art approaches in terms of AP50 on the
MS COCO object detection dataset. The CSPNet is easy to implement and general
enough to cope with architectures based on ResNet, ResNeXt, and DenseNet.
Source code is at https://github.com/WongKinYiu/CrossStagePartialNetworks. | 2019-11-27T03:15:27Z | null | null | null | null | null | null | null | null | null | null |
1911.12146 | NorNE: Annotating Named Entities for Norwegian | ['Fredrik Jørgensen', 'Tobias Aasmoe', 'Anne-Stine Ruud Husevåg', 'Lilja Øvrelid', 'Erik Velldal'] | ['cs.CL'] | This paper presents NorNE, a manually annotated corpus of named entities
which extends the annotation of the existing Norwegian Dependency Treebank.
Comprising both of the official standards of written Norwegian (Bokm{\aa}l and
Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of
entity types including persons, organizations, locations, geo-political
entities, products, and events, in addition to a class corresponding to
nominals derived from names. We here present details on the annotation effort,
guidelines, inter-annotator agreement and an experimental analysis of the
corpus using a neural sequence labeling architecture. | 2019-11-27T13:30:36Z | Accepted for LREC 2020 | null | null | NorNE: Annotating Named Entities for Norwegian | ['Fredrik Jørgensen', 'Tobias Aasmoe', 'Anne-Stine Ruud Husevaag', 'Lilja Ovrelid', 'Erik Velldal'] | 2019 | International Conference on Language Resources and Evaluation | 32 | 36 | ['Computer Science']
1911.12559 | KPTimes: A Large-Scale Dataset for Keyphrase Generation on News
Documents | ['Ygor Gallina', 'Florian Boudin', 'Béatrice Daille'] | ['cs.IR', 'cs.CL'] | Keyphrase generation is the task of predicting a set of lexical units that
conveys the main content of a source text. Existing datasets for keyphrase
generation are only readily available for the scholarly domain and include
non-expert annotations. In this paper we present KPTimes, a large-scale dataset
of news texts paired with editor-curated keyphrases. Exploring the dataset, we
show how editors tag documents, and how their annotations differ from those
found in existing datasets. We also train and evaluate state-of-the-art neural
keyphrase generation models on KPTimes to gain insights on how well they
perform on the news domain. The dataset is available online at
https://github.com/ygorg/KPTimes . | 2019-11-28T07:12:30Z | Accepted at the International Conference on Natural Language
Generation (INLG), 2019 | null | null | null | null | null | null | null | null | null |
1912.00690 | EduBERT: Pretrained Deep Language Models for Learning Analytics | ['Benjamin Clavié', 'Kobi Gal'] | ['cs.CY', 'cs.AI', 'cs.CL', 'cs.LG'] | The use of large pretrained neural networks to create contextualized word
embeddings has drastically improved performance on several natural language
processing (NLP) tasks. These computationally expensive models have begun to be
applied to domain-specific NLP tasks such as re-hospitalization prediction from
clinical notes. This paper demonstrates that using large pretrained models
produces excellent results on common learning analytics tasks. Pre-training
deep language models using student forum data from a wide array of online
courses improves performance beyond the state of the art on three text
classification tasks. We also show that a smaller, distilled version of our
model produces the best results on two of the three tasks while limiting
computational cost. We make both models available to the research community at
large. | 2019-12-02T11:32:53Z | Accepted for poster presentation at the 10th International Learning
Analytics and Knowledge (LAK20) Conference | null | null | EduBERT: Pretrained Deep Language Models for Learning Analytics | ['Benjamin Clavié', 'K. Gal'] | 2019 | arXiv.org | 16 | 10 | ['Computer Science']
1912.01603 | Dream to Control: Learning Behaviors by Latent Imagination | ['Danijar Hafner', 'Timothy Lillicrap', 'Jimmy Ba', 'Mohammad Norouzi'] | ['cs.LG', 'cs.AI', 'cs.RO'] | Learned world models summarize an agent's experience to facilitate learning
complex behaviors. While learning world models from high-dimensional sensory
inputs is becoming feasible through deep learning, there are many potential
ways for deriving behaviors from them. We present Dreamer, a reinforcement
learning agent that solves long-horizon tasks from images purely by latent
imagination. We efficiently learn behaviors by propagating analytic gradients
of learned state values back through trajectories imagined in the compact state
space of a learned world model. On 20 challenging visual control tasks, Dreamer
exceeds existing approaches in data-efficiency, computation time, and final
performance. | 2019-12-03T18:57:16Z | 9 pages, 12 figures | null | null | Dream to Control: Learning Behaviors by Latent Imagination | ['Danijar Hafner', 'T. Lillicrap', 'Jimmy Ba', 'Mohammad Norouzi'] | 2019 | International Conference on Learning Representations | 1378 | 71 | ['Computer Science']
1912.01865 | StarGAN v2: Diverse Image Synthesis for Multiple Domains | ['Yunjey Choi', 'Youngjung Uh', 'Jaejun Yoo', 'Jung-Woo Ha'] | ['cs.CV', 'cs.LG'] | A good image-to-image translation model should learn a mapping between
different visual domains while satisfying the following properties: 1)
diversity of generated images and 2) scalability over multiple domains.
Existing methods address either of the issues, having limited diversity or
multiple models for all domains. We propose StarGAN v2, a single framework that
tackles both and shows significantly improved results over the baselines.
Experiments on CelebA-HQ and a new animal faces dataset (AFHQ) validate our
superiority in terms of visual quality, diversity, and scalability. To better
assess image-to-image translation models, we release AFHQ, high-quality animal
faces with large inter- and intra-domain differences. The code, pretrained
models, and dataset can be found at https://github.com/clovaai/stargan-v2. | 2019-12-04T09:42:22Z | Accepted to CVPR 2020 | null | null | null | null | null | null | null | null | null |
1912.02424 | Bridging the Gap Between Anchor-based and Anchor-free Detection via
Adaptive Training Sample Selection | ['Shifeng Zhang', 'Cheng Chi', 'Yongqiang Yao', 'Zhen Lei', 'Stan Z. Li'] | ['cs.CV'] | Object detection has been dominated by anchor-based detectors for several
years. Recently, anchor-free detectors have become popular due to the proposal
of FPN and Focal Loss. In this paper, we first point out that the essential
difference between anchor-based and anchor-free detection is actually how to
define positive and negative training samples, which leads to the performance
gap between them. If they adopt the same definition of positive and negative
samples during training, there is no obvious difference in the final
performance, no matter regressing from a box or a point. This shows that how to
select positive and negative training samples is important for current object
detectors. Then, we propose an Adaptive Training Sample Selection (ATSS) to
automatically select positive and negative samples according to statistical
characteristics of object. It significantly improves the performance of
anchor-based and anchor-free detectors and bridges the gap between them.
Finally, we discuss the necessity of tiling multiple anchors per location on
the image to detect objects. Extensive experiments conducted on MS COCO support
our aforementioned analysis and conclusions. With the newly introduced ATSS, we
improve state-of-the-art detectors by a large margin to $50.7\%$ AP without
introducing any overhead. The code is available at
https://github.com/sfzhang15/ATSS | 2019-12-05T07:49:56Z | Accepted by CVPR 2020 as Oral; Best Paper Nomination | null | null | Bridging the Gap Between Anchor-Based and Anchor-Free Detection via Adaptive Training Sample Selection | ['Shifeng Zhang', 'Cheng Chi', 'Yongqiang Yao', 'Zhen Lei', 'Stan Z. Li'] | 2019 | Computer Vision and Pattern Recognition | 1566 | 74 | ['Computer Science']
1912.04958 | Analyzing and Improving the Image Quality of StyleGAN | ['Tero Karras', 'Samuli Laine', 'Miika Aittala', 'Janne Hellsten', 'Jaakko Lehtinen', 'Timo Aila'] | ['cs.CV', 'cs.LG', 'cs.NE', 'eess.IV', 'stat.ML'] | The style-based GAN architecture (StyleGAN) yields state-of-the-art results
in data-driven unconditional generative image modeling. We expose and analyze
several of its characteristic artifacts, and propose changes in both model
architecture and training methods to address them. In particular, we redesign
the generator normalization, revisit progressive growing, and regularize the
generator to encourage good conditioning in the mapping from latent codes to
images. In addition to improving image quality, this path length regularizer
yields the additional benefit that the generator becomes significantly easier
to invert. This makes it possible to reliably attribute a generated image to a
particular network. We furthermore visualize how well the generator utilizes
its output resolution, and identify a capacity problem, motivating us to train
larger models for additional quality improvements. Overall, our improved model
redefines the state of the art in unconditional image modeling, both in terms
of existing distribution quality metrics as well as perceived image quality. | 2019-12-03T11:44:01Z | null | null | null | null | null | null | null | null | null | null |
1912.05007 | Oktoberfest Food Dataset | ['Alexander Ziller', 'Julius Hansjakob', 'Vitalii Rusinov', 'Daniel Zügner', 'Peter Vogel', 'Stephan Günnemann'] | ['cs.CV', 'cs.LG', 'stat.ML'] | We release a realistic, diverse, and challenging dataset for object detection
on images. The data was recorded at a beer tent in Germany and consists of 15
different categories of food and drink items. We created more than 2,500 object
annotations by hand for 1,110 images captured by a video camera above the
checkout. We further make available the remaining 600GB of (unlabeled) data
containing days of footage. Additionally, we provide our trained models as a
benchmark. Possible applications include automated checkout systems which could
significantly speed up the process. | 2019-11-22T09:28:59Z | Dataset publication of Oktoberfest Food Dataset. 4 pages, 6 figures | null | null | Oktoberfest Food Dataset | ['Alexander Ziller', 'Julius Hansjakob', 'Vitalii Rusinov', 'Daniel Zügner', 'P. Vogel', 'Stephan Günnemann'] | 2019 | arXiv.org | 7 | 10 | ['Computer Science', 'Mathematics']
1912.05027 | SpineNet: Learning Scale-Permuted Backbone for Recognition and
Localization | ['Xianzhi Du', 'Tsung-Yi Lin', 'Pengchong Jin', 'Golnaz Ghiasi', 'Mingxing Tan', 'Yin Cui', 'Quoc V. Le', 'Xiaodan Song'] | ['cs.CV', 'cs.LG', 'eess.IV'] | Convolutional neural networks typically encode an input image into a series
of intermediate features with decreasing resolutions. While this structure is
suited to classification tasks, it does not perform well for tasks requiring
simultaneous recognition and localization (e.g., object detection). The
encoder-decoder architectures are proposed to resolve this by applying a
decoder network onto a backbone model designed for classification tasks. In
this paper, we argue encoder-decoder architecture is ineffective in generating
strong multi-scale features because of the scale-decreased backbone. We propose
SpineNet, a backbone with scale-permuted intermediate features and cross-scale
connections that is learned on an object detection task by Neural Architecture
Search. Using similar building blocks, SpineNet models outperform ResNet-FPN
models by ~3% AP at various scales while using 10-20% fewer FLOPs. In
particular, SpineNet-190 achieves 52.5% AP with a MaskR-CNN detector and
achieves 52.1% AP with a RetinaNet detector on COCO for a single model without
test-time augmentation, significantly outperforms prior art of detectors.
SpineNet can transfer to classification tasks, achieving 5% top-1 accuracy
improvement on a challenging iNaturalist fine-grained dataset. Code is at:
https://github.com/tensorflow/tpu/tree/master/models/official/detection. | 2019-12-10T22:13:42Z | CVPR 2020 | null | null | null | null | null | null | null | null | null |
1912.06670 | Common Voice: A Massively-Multilingual Speech Corpus | ['Rosana Ardila', 'Megan Branson', 'Kelly Davis', 'Michael Henretty', 'Michael Kohler', 'Josh Meyer', 'Reuben Morais', 'Lindsay Saunders', 'Francis M. Tyers', 'Gregor Weber'] | ['cs.CL', 'cs.LG'] | The Common Voice corpus is a massively-multilingual collection of transcribed
speech intended for speech technology research and development. Common Voice is
designed for Automatic Speech Recognition purposes but can be useful in other
domains (e.g. language identification). To achieve scale and sustainability,
the Common Voice project employs crowdsourcing for both data collection and
data validation. The most recent release includes 29 languages, and as of
November 2019 there are a total of 38 languages collecting data. Over 50,000
individuals have participated so far, resulting in 2,500 hours of collected
audio. To our knowledge this is the largest audio corpus in the public domain
for speech recognition, both in terms of number of hours and number of
languages. As an example use case for Common Voice, we present speech
recognition experiments using Mozilla's DeepSpeech Speech-to-Text toolkit. By
applying transfer learning from a source English model, we find an average
Character Error Rate improvement of 5.99 +/- 5.48 for twelve target languages
(German, French, Italian, Turkish, Catalan, Slovenian, Welsh, Irish, Breton,
Tatar, Chuvash, and Kabyle). For most of these languages, these are the first
ever published results on end-to-end Automatic Speech Recognition. | 2019-12-13T19:22:44Z | Accepted to LREC 2020 | null | null | null | null | null | null | null | null | null |
1912.07076 | Multilingual is not enough: BERT for Finnish | ['Antti Virtanen', 'Jenna Kanerva', 'Rami Ilo', 'Jouni Luoma', 'Juhani Luotolahti', 'Tapio Salakoski', 'Filip Ginter', 'Sampo Pyysalo'] | ['cs.CL'] | Deep learning-based language models pretrained on large unannotated text
corpora have been demonstrated to allow efficient transfer learning for natural
language processing, with recent approaches such as the transformer-based BERT
model advancing the state of the art across a variety of tasks. While most work
on these models has focused on high-resource languages, in particular English,
a number of recent efforts have introduced multilingual models that can be
fine-tuned to address tasks in a large number of different languages. However,
we still lack a thorough understanding of the capabilities of these models, in
particular for lower-resourced languages. In this paper, we focus on Finnish
and thoroughly evaluate the multilingual BERT model on a range of tasks,
comparing it with a new Finnish BERT model trained from scratch. The new
language-specific model is shown to systematically and clearly outperform the
multilingual. While the multilingual model largely fails to reach the
performance of previously proposed methods, the custom Finnish BERT model
establishes new state-of-the-art results on all corpora for all reference
tasks: part-of-speech tagging, named entity recognition, and dependency
parsing. We release the model and all related resources created for this study
with open licenses at https://turkunlp.org/finbert . | 2019-12-15T17:50:56Z | null | null | null | null | null | null | null | null | null | null |
1912.07726 | Towards Fairer Datasets: Filtering and Balancing the Distribution of the
People Subtree in the ImageNet Hierarchy | ['Kaiyu Yang', 'Klint Qinami', 'Li Fei-Fei', 'Jia Deng', 'Olga Russakovsky'] | ['cs.CV'] | Computer vision technology is being used by many but remains representative
of only a few. People have reported misbehavior of computer vision models,
including offensive prediction results and lower performance for
underrepresented groups. Current computer vision models are typically developed
using datasets consisting of manually annotated images or videos; the data and
label distributions in these datasets are critical to the models' behavior. In
this paper, we examine ImageNet, a large-scale ontology of images that has
spurred the development of many modern computer vision methods. We consider
three key factors within the "person" subtree of ImageNet that may lead to
problematic behavior in downstream computer vision technology: (1) the stagnant
concept vocabulary of WordNet, (2) the attempt at exhaustive illustration of
all categories with images, and (3) the inequality of representation in the
images within concepts. We seek to illuminate the root causes of these concerns
and take the first steps to mitigate them constructively. | 2019-12-16T22:03:05Z | Accepted to FAT* 2020 | null | 10.1145/3351095.3375709 | Towards fairer datasets: filtering and balancing the distribution of the people subtree in the ImageNet hierarchy | ['Kaiyu Yang', 'Klint Qinami', 'Li Fei-Fei', 'Jia Deng', 'Olga Russakovsky'] | 2019 | FAT* | 325 | 87 | ['Computer Science']
1912.07875 | Libri-Light: A Benchmark for ASR with Limited or No Supervision | ['Jacob Kahn', 'Morgane Rivière', 'Weiyi Zheng', 'Evgeny Kharitonov', 'Qiantong Xu', 'Pierre-Emmanuel Mazaré', 'Julien Karadayi', 'Vitaliy Liptchinsky', 'Ronan Collobert', 'Christian Fuegen', 'Tatiana Likhomanenko', 'Gabriel Synnaeve', 'Armand Joulin', 'Abdelrahman Mohamed', 'Emmanuel Dupoux'] | ['cs.CL', 'cs.SD', 'eess.AS'] | We introduce a new collection of spoken English audio suitable for training
speech recognition systems under limited or no supervision. It is derived from
open-source audio books from the LibriVox project. It contains over 60K hours
of audio, which is, to our knowledge, the largest freely-available corpus of
speech. The audio has been segmented using voice activity detection and is
tagged with SNR, speaker ID and genre descriptions. Additionally, we provide
baseline systems and evaluation metrics working under three settings: (1) the
zero resource/unsupervised setting (ABX), (2) the semi-supervised setting (PER,
CER) and (3) the distant supervision setting (WER). Settings (2) and (3) use
limited textual resources (10 minutes to 10 hours) aligned with the speech.
Setting (3) uses large amounts of unaligned text. They are evaluated on the
standard LibriSpeech dev and test sets for comparison with the supervised
state-of-the-art. | 2019-12-17T08:47:30Z | null | null | 10.1109/ICASSP40776.2020.9052942 | null | null | null | null | null | null | null |
1912.08777 | PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive
Summarization | ['Jingqing Zhang', 'Yao Zhao', 'Mohammad Saleh', 'Peter J. Liu'] | ['cs.CL'] | Recent work pre-training Transformers with self-supervised objectives on
large text corpora has shown great success when fine-tuned on downstream NLP
tasks including text summarization. However, pre-training objectives tailored
for abstractive text summarization have not been explored. Furthermore there is
a lack of systematic evaluation across diverse domains. In this work, we
propose pre-training large Transformer-based encoder-decoder models on massive
text corpora with a new self-supervised objective. In PEGASUS, important
sentences are removed/masked from an input document and are generated together
as one output sequence from the remaining sentences, similar to an extractive
summary. We evaluated our best PEGASUS model on 12 downstream summarization
tasks spanning news, science, stories, instructions, emails, patents, and
legislative bills. Experiments demonstrate it achieves state-of-the-art
performance on all 12 downstream datasets measured by ROUGE scores. Our model
also shows surprising performance on low-resource summarization, surpassing
previous state-of-the-art results on 6 datasets with only 1000 examples.
Finally we validated our results using human evaluation and show that our model
summaries achieve human performance on multiple datasets. | 2019-12-18T18:16:20Z | Added results from mixed+stochastic model, test-set overlapping
analysis; Code link added; Accepted for ICML 2020. arXiv admin note: text
overlap with arXiv:1605.06560, arXiv:1205.2395, arXiv:0902.4351,
arXiv:1610.09932, arXiv:nucl-ex/0512029 by other authors | null | null | PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization | ['Jingqing Zhang', 'Yao Zhao', 'Mohammad Saleh', 'Peter J. Liu'] | 2019 | International Conference on Machine Learning | 2059 | 58 | ['Computer Science']
1912.09363 | Temporal Fusion Transformers for Interpretable Multi-horizon Time Series
Forecasting | ['Bryan Lim', 'Sercan O. Arik', 'Nicolas Loeff', 'Tomas Pfister'] | ['stat.ML', 'cs.LG'] | Multi-horizon forecasting problems often contain a complex mix of inputs --
including static (i.e. time-invariant) covariates, known future inputs, and
other exogenous time series that are only observed historically -- without any
prior information on how they interact with the target. While several deep
learning models have been proposed for multi-step prediction, they typically
comprise black-box models which do not account for the full range of inputs
present in common scenarios. In this paper, we introduce the Temporal Fusion
Transformer (TFT) -- a novel attention-based architecture which combines
high-performance multi-horizon forecasting with interpretable insights into
temporal dynamics. To learn temporal relationships at different scales, the TFT
utilizes recurrent layers for local processing and interpretable self-attention
layers for learning long-term dependencies. The TFT also uses specialized
components for the judicious selection of relevant features and a series of
gating layers to suppress unnecessary components, enabling high performance in
a wide range of regimes. On a variety of real-world datasets, we demonstrate
significant performance improvements over existing benchmarks, and showcase
three practical interpretability use-cases of TFT. | 2019-12-19T16:45:40Z | null | null | null | null | null | null | null | null | null | null |
1912.09582 | BERTje: A Dutch BERT Model | ['Wietse de Vries', 'Andreas van Cranenburgh', 'Arianna Bisazza', 'Tommaso Caselli', 'Gertjan van Noord', 'Malvina Nissim'] | ['cs.CL'] | The transformer-based pre-trained language model BERT has helped to improve
state-of-the-art performance on many natural language processing (NLP) tasks.
Using the same architecture and parameters, we developed and evaluated a
monolingual Dutch BERT model called BERTje. Compared to the multilingual BERT
model, which includes Dutch but is only based on Wikipedia text, BERTje is
based on a large and diverse dataset of 2.4 billion tokens. BERTje consistently
outperforms the equally-sized multilingual BERT model on downstream NLP tasks
(part-of-speech tagging, named-entity recognition, semantic role labeling, and
sentiment analysis). Our pre-trained Dutch BERT model is made available at
https://github.com/wietsedv/bertje. | 2019-12-19T22:59:26Z | null | null | null | null | null | null | null | null | null | null |
1912.09723 | SberQuAD -- Russian Reading Comprehension Dataset: Description and
Analysis | ['Pavel Efimov', 'Andrey Chertok', 'Leonid Boytsov', 'Pavel Braslavski'] | ['cs.CL'] | SberQuAD -- a large scale analog of Stanford SQuAD in the Russian language -
is a valuable resource that has not been properly presented to the scientific
community. We fill this gap by providing a description, a thorough analysis,
and baseline experimental results. | 2019-12-20T09:44:42Z | null | null | 10.1007/978-3-030-58219-7_1 | SberQuAD - Russian Reading Comprehension Dataset: Description and Analysis | ['Pavel Efimov', 'Andrey Chertok', 'Leonid Boytsov', 'Pavel Braslavski'] | 2019 | Conference and Labs of the Evaluation Forum | 61 | 41 | ['Computer Science']
1912.10205 | Decoupled Attention Network for Text Recognition | ['Tianwei Wang', 'Yuanzhi Zhu', 'Lianwen Jin', 'Canjie Luo', 'Xiaoxue Chen', 'Yaqiang Wu', 'Qianying Wang', 'Mingxiang Cai'] | ['cs.CV'] | Text recognition has attracted considerable research interests because of its
various applications. The cutting-edge text recognition methods are based on
attention mechanisms. However, most of attention methods usually suffer from
serious alignment problem due to its recurrency alignment operation, where the
alignment relies on historical decoding results. To remedy this issue, we
propose a decoupled attention network (DAN), which decouples the alignment
operation from using historical decoding results. DAN is an effective, flexible
and robust end-to-end text recognizer, which consists of three components: 1) a
feature encoder that extracts visual features from the input image; 2) a
convolutional alignment module that performs the alignment operation based on
visual features from the encoder; and 3) a decoupled text decoder that makes
final prediction by jointly using the feature map and attention maps.
Experimental results show that DAN achieves state-of-the-art performance on
multiple text recognition tasks, including offline handwritten text recognition
and regular/irregular scene text recognition. | 2019-12-21T05:51:58Z | 9 pages, 8 figures, 6 tables, accepted by AAAI-2020 | null | null | Decoupled Attention Network for Text Recognition | ['Tianwei Wang', 'Yuanzhi Zhu', 'Lianwen Jin', 'Canjie Luo', 'Xiaoxue Chen', 'Y. Wu', 'Qianying Wang', 'Mingxiang Cai'] | 2019 | AAAI Conference on Artificial Intelligence | 255 | 49 | ['Computer Science']
1912.10211 | PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern
Recognition | ['Qiuqiang Kong', 'Yin Cao', 'Turab Iqbal', 'Yuxuan Wang', 'Wenwu Wang', 'Mark D. Plumbley'] | ['cs.SD', 'eess.AS'] | Audio pattern recognition is an important research topic in the machine
learning area, and includes several tasks such as audio tagging, acoustic scene
classification, music classification, speech emotion classification and sound
event detection. Recently, neural networks have been applied to tackle audio
pattern recognition problems. However, previous systems are built on specific
datasets with limited durations. Recently, in computer vision and natural
language processing, systems pretrained on large-scale datasets have
generalized well to several tasks. However, there is limited research on
pretraining systems on large-scale datasets for audio pattern recognition. In
this paper, we propose pretrained audio neural networks (PANNs) trained on the
large-scale AudioSet dataset. These PANNs are transferred to other audio
related tasks. We investigate the performance and computational complexity of
PANNs modeled by a variety of convolutional neural networks. We propose an
architecture called Wavegram-Logmel-CNN using both log-mel spectrogram and
waveform as input feature. Our best PANN system achieves a state-of-the-art
mean average precision (mAP) of 0.439 on AudioSet tagging, outperforming the
best previous system of 0.392. We transfer PANNs to six audio pattern
recognition tasks, and demonstrate state-of-the-art performance in several of
those tasks. We have released the source code and pretrained models of PANNs:
https://github.com/qiuqiangkong/audioset_tagging_cnn. | 2019-12-21T06:53:14Z | 14 pages | null | null | null | null | null | null | null | null | null |
1912.10389 | Lessons from Archives: Strategies for Collecting Sociocultural Data in
Machine Learning | ['Eun Seo Jo', 'Timnit Gebru'] | ['cs.LG', 'cs.AI', 'cs.CY', 'I.2.0'] | A growing body of work shows that many problems in fairness, accountability,
transparency, and ethics in machine learning systems are rooted in decisions
surrounding the data collection and annotation process. In spite of its
fundamental nature however, data collection remains an overlooked part of the
machine learning (ML) pipeline. In this paper, we argue that a new
specialization should be formed within ML that is focused on methodologies for
data collection and annotation: efforts that require institutional frameworks
and procedures. Specifically for sociocultural data, parallels can be drawn
from archives and libraries. Archives are the longest standing communal effort
to gather human information and archive scholars have already developed the
language and procedures to address and discuss many challenges pertaining to
data collection such as consent, power, inclusivity, transparency, and ethics &
privacy. We discuss these five key approaches in document collection practices
in archives that can inform data collection in sociocultural ML. By showing
data collection practices from another field, we encourage ML research to be
more cognizant and systematic in data collection and draw from
interdisciplinary expertise. | 2019-12-22T05:56:55Z | To be published in Conference on Fairness, Accountability, and
Transparency FAT* '20, January 27-30, 2020, Barcelona, Spain. ACM, New York,
NY, USA, 11 pages | null | 10.1145/3351095.3372829 | Lessons from archives: strategies for collecting sociocultural data in machine learning | ['Eun Seo Jo', 'Timnit Gebru'] | 2019 | FAT* | 317 | 66 | ['Computer Science']
1912.10458 | Emotion Recognition from Speech | ['Kannan Venkataramanan', 'Haresh Rengaraj Rajamohan'] | ['cs.SD', 'cs.CL', 'eess.AS'] | In this work, we conduct an extensive comparison of various approaches to
speech based emotion recognition systems. The analyses were carried out on
audio recordings from Ryerson Audio-Visual Database of Emotional Speech and
Song (RAVDESS). After pre-processing the raw audio files, features such as
Log-Mel Spectrogram, Mel-Frequency Cepstral Coefficients (MFCCs), pitch and
energy were considered. The significance of these features for emotion
classification was compared by applying methods such as Long Short Term Memory
(LSTM), Convolutional Neural Networks (CNNs), Hidden Markov Models (HMMs) and
Deep Neural Networks (DNNs). On the 14-class (2 genders x 7 emotions)
classification task, an accuracy of 68% was achieved with a 4-layer 2
dimensional CNN using the Log-Mel Spectrogram features. We also observe that,
in emotion recognition, the choice of audio features impacts the results much
more than the model complexity. | 2019-12-22T14:43:14Z | null | null | null | Emotion Recognition from Speech | ['Kannan Venkataramanan', 'H. Rajamohan'] | 2019 | arXiv.org | 15 | 26 | ['Computer Science', 'Engineering']
1912.11370 | Big Transfer (BiT): General Visual Representation Learning | ['Alexander Kolesnikov', 'Lucas Beyer', 'Xiaohua Zhai', 'Joan Puigcerver', 'Jessica Yung', 'Sylvain Gelly', 'Neil Houlsby'] | ['cs.CV', 'cs.LG'] | Transfer of pre-trained representations improves sample efficiency and
simplifies hyperparameter tuning when training deep neural networks for vision.
We revisit the paradigm of pre-training on large supervised datasets and
fine-tuning the model on a target task. We scale up pre-training, and propose a
simple recipe that we call Big Transfer (BiT). By combining a few carefully
selected components, and transferring using a simple heuristic, we achieve
strong performance on over 20 datasets. BiT performs well across a surprisingly
wide range of data regimes -- from 1 example per class to 1M total examples.
BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3%
on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT
attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10
with 10 examples per class. We conduct detailed analysis of the main components
that lead to high transfer performance. | 2019-12-24T14:04:11Z | The first three authors contributed equally. Results on ObjectNet are
reported in v3 | null | null | null | null | null | null | null | null | null |
1912.12142 | Lung and Colon Cancer Histopathological Image Dataset (LC25000) | ['Andrew A. Borkowski', 'Marilyn M. Bui', 'L. Brannon Thomas', 'Catherine P. Wilson', 'Lauren A. DeLand', 'Stephen M. Mastorides'] | ['eess.IV', 'cs.CV', 'q-bio.QM'] | The field of Machine Learning, a subset of Artificial Intelligence, has led
to remarkable advancements in many areas, including medicine. Machine Learning
algorithms require large datasets to train computer models successfully.
Although there are medical image datasets available, more image datasets are
needed from a variety of medical entities, especially cancer pathology. Even
more scarce are ML-ready image datasets. To address this need, we created an
image dataset (LC25000) with 25,000 color images in 5 classes. Each class
contains 5,000 images of the following histologic entities: colon
adenocarcinoma, benign colonic tissue, lung adenocarcinoma, lung squamous cell
carcinoma, and benign lung tissue. All images are de-identified, HIPAA
compliant, validated, and freely available for download to AI researchers. | 2019-12-16T16:28:00Z | 2 pages | null | null | null | null | null | null | null | null | null |
1912.12180 | Axial Attention in Multidimensional Transformers | ['Jonathan Ho', 'Nal Kalchbrenner', 'Dirk Weissenborn', 'Tim Salimans'] | ['cs.CV'] | We propose Axial Transformers, a self-attention-based autoregressive model
for images and other data organized as high dimensional tensors. Existing
autoregressive models either suffer from excessively large computational
resource requirements for high dimensional data, or make compromises in terms
of distribution expressiveness or ease of implementation in order to decrease
resource requirements. Our architecture, by contrast, maintains both full
expressiveness over joint distributions over data and ease of implementation
with standard deep learning frameworks, while requiring reasonable memory and
computation and achieving state-of-the-art results on standard generative
modeling benchmarks. Our models are based on axial attention, a simple
generalization of self-attention that naturally aligns with the multiple
dimensions of the tensors in both the encoding and the decoding settings.
Notably the proposed structure of the layers allows for the vast majority of
the context to be computed in parallel during decoding without introducing any
independence assumptions. This semi-parallel structure goes a long way to
making decoding from even a very large Axial Transformer broadly applicable. We
demonstrate state-of-the-art results for the Axial Transformer on the
ImageNet-32 and ImageNet-64 image benchmarks as well as on the BAIR Robotic
Pushing video benchmark. We open source the implementation of Axial
Transformers. | 2019-12-20T13:27:27Z | 10 pages | null | null | null | null | null | null | null | null | null |
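The axial-attention idea in the abstract above is cheap to demonstrate: instead of attending over all H·W positions at once (cost quadratic in H·W), attend along each axis in turn (cost roughly H·W·(H+W)). A minimal numpy sketch of that idea — identity projections, no masking — not the authors' implementation, which also handles autoregressive decoding:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x):
    # Scaled dot-product self-attention along the second-to-last axis,
    # with identity Q/K/V projections for brevity.
    d = x.shape[-1]
    w = softmax(x @ np.swapaxes(x, -1, -2) / np.sqrt(d))
    return w @ x

def axial_attention(x):
    # x: (H, W, D). Attend within each row, then within each column.
    rows = attention(x)                                        # mixes along W
    cols = np.swapaxes(attention(np.swapaxes(x, 0, 1)), 0, 1)  # mixes along H
    return rows + cols

x = np.random.default_rng(0).normal(size=(8, 8, 4))
y = axial_attention(x)
print(y.shape)  # (8, 8, 4)
```

Each position still receives information from its whole row and whole column in one layer, and from the full grid after two layers.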
1,912.13318 | LayoutLM: Pre-training of Text and Layout for Document Image
Understanding | ['Yiheng Xu', 'Minghao Li', 'Lei Cui', 'Shaohan Huang', 'Furu Wei', 'Ming Zhou'] | ['cs.CL'] | Pre-training techniques have been verified successfully in a variety of NLP
tasks in recent years. Despite the widespread use of pre-training models for
NLP applications, they almost exclusively focus on text-level manipulation,
while neglecting layout and style information that is vital for document image
understanding. In this paper, we propose the \textbf{LayoutLM} to jointly model
interactions between text and layout information across scanned document
images, which is beneficial for a great number of real-world document image
understanding tasks such as information extraction from scanned documents.
Furthermore, we also leverage image features to incorporate words' visual
information into LayoutLM. To the best of our knowledge, this is the first time
that text and layout are jointly learned in a single framework for
document-level pre-training. It achieves new state-of-the-art results in
several downstream tasks, including form understanding (from 70.72 to 79.27),
receipt understanding (from 94.02 to 95.24) and document image classification
(from 93.07 to 94.42). The code and pre-trained LayoutLM models are publicly
available at \url{https://aka.ms/layoutlm}. | 2019-12-31T14:31:29Z | KDD 2020 | null | 10.1145/3394486.3403172 | null | null | null | null | null | null | null |
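The joint text-and-layout input described above amounts to summing a token embedding with learned embeddings of the word's bounding-box coordinates. A toy numpy illustration; the table sizes are arbitrary, and the real model additionally adds 1-D position and segment embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
vocab_emb = rng.normal(size=(100, d))   # token embedding table (toy size)
x_emb = rng.normal(size=(1001, d))      # embeddings for x-coordinates 0..1000
y_emb = rng.normal(size=(1001, d))      # embeddings for y-coordinates 0..1000

def layout_embedding(token_id, bbox):
    """LayoutLM-style input: token embedding plus 2-D position embeddings
    for the word's bounding box (x0, y0, x1, y1), normalized to 0..1000."""
    x0, y0, x1, y1 = bbox
    return (vocab_emb[token_id]
            + x_emb[x0] + y_emb[y0] + x_emb[x1] + y_emb[y1])

vec = layout_embedding(42, (100, 200, 250, 230))
print(vec.shape)  # (8,)
```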
1912.13440 | Approximate Inference for Fully Bayesian Gaussian Process Regression | ['Vidhi Lalchand', 'Carl Edward Rasmussen'] | ['stat.ML', 'cs.LG'] | Learning in Gaussian Process models occurs through the adaptation of
hyperparameters of the mean and the covariance function. The classical approach
entails maximizing the marginal likelihood yielding fixed point estimates (an
approach called \textit{Type II maximum likelihood} or ML-II). An alternative
learning procedure is to infer the posterior over hyperparameters in a
hierarchical specification of GPs we call \textit{Fully Bayesian Gaussian
Process Regression} (GPR). This work considers two approximation schemes for
the intractable hyperparameter posterior: 1) Hamiltonian Monte Carlo (HMC)
yielding a sampling-based approximation and 2) Variational Inference (VI) where
the posterior over hyperparameters is approximated by a factorized Gaussian
(mean-field) or a full-rank Gaussian accounting for correlations between
hyperparameters. We analyze the predictive performance for fully Bayesian GPR
on a range of benchmark data sets. | 2019-12-31T17:18:48Z | Presented at 2nd Symposium on Advances in Approximate Bayesian
Inference 2019 | Proceedings of Machine Learning Research, Volume 118 (2019) 1-12 | null | null | null | null | null | null | null | null |
2,001.02943 | Binary and Multitask Classification Model for Dutch Anaphora Resolution:
Die/Dat Prediction | ['Liesbeth Allein', 'Artuur Leeuwenberg', 'Marie-Francine Moens'] | ['cs.CL'] | The correct use of Dutch pronouns 'die' and 'dat' is a stumbling block for
both native and non-native speakers of Dutch due to the multiplicity of
syntactic functions and the dependency on the antecedent's gender and number.
Drawing on previous research conducted on neural context-dependent dt-mistake
correction models (Heyman et al. 2018), this study constructs the first neural
network model for Dutch demonstrative and relative pronoun resolution that
specifically focuses on the correction and part-of-speech prediction of these
two pronouns. Two separate datasets are built with sentences obtained from,
respectively, the Dutch Europarl corpus (Koehn 2015) - which contains the
proceedings of the European Parliament from 1996 to the present - and the SoNaR
corpus (Oostdijk et al. 2013) - which contains Dutch texts from a variety of
domains such as newspapers, blogs and legal texts. Firstly, a binary
classification model solely predicts the correct 'die' or 'dat'. The classifier
with a bidirectional long short-term memory architecture achieves 84.56%
accuracy. Secondly, a multitask classification model simultaneously predicts
the correct 'die' or 'dat' and its part-of-speech tag. The model containing a
combination of a sentence and context encoder with both a bidirectional long
short-term memory architecture results in 88.63% accuracy for die/dat
prediction and 87.73% accuracy for part-of-speech prediction. More
evenly-balanced data, larger word embeddings, an extra bidirectional long
short-term memory layer and integrated part-of-speech knowledge positively
affect die/dat prediction performance, while a context encoder architecture
raises part-of-speech prediction performance. This study shows promising
results and can serve as a starting point for future research on machine
learning models for Dutch anaphora resolution. | 2020-01-09T12:34:01Z | null | Computational Linguistics in the Netherlands Journal, 10, 19-36
(2020) | null | null | null | null | null | null | null | null |
2,001.03653 | Towards GAN Benchmarks Which Require Generalization | ['Ishaan Gulrajani', 'Colin Raffel', 'Luke Metz'] | ['cs.LG', 'stat.ML'] | For many evaluation metrics commonly used as benchmarks for unconditional
image generation, trivially memorizing the training set attains a better score
than models which are considered state-of-the-art; we consider this
problematic. We clarify a necessary condition for an evaluation metric not to
behave this way: estimating the function must require a large sample from the
model. In search of such a metric, we turn to neural network divergences
(NNDs), which are defined in terms of a neural network trained to distinguish
between distributions. The resulting benchmarks cannot be "won" by training set
memorization, while still being perceptually correlated and computable only
from samples. We survey past work on using NNDs for evaluation and implement an
example black-box metric based on these ideas. Through experimental validation
we show that it can effectively measure diversity, sample quality, and
generalization. | 2020-01-10T20:18:47Z | ICLR 2019 conference paper | null | null | null | null | null | null | null | null | null |
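A neural network divergence in its simplest instantiation: train a classifier to tell real samples from model samples, then report its accuracy on held-out halves. The sketch below uses logistic regression on 2-D Gaussians as a stand-in for the paper's black-box metric (names and constants are illustrative); a model close to the data is hard to distinguish, so it scores a lower divergence:

```python
import numpy as np

rng = np.random.default_rng(0)

def nnd(real, fake, steps=500, lr=0.1):
    """Train logistic regression to separate real from fake on the first
    halves; return its accuracy on the held-out second halves."""
    X = np.vstack([real[:len(real)//2], fake[:len(fake)//2]])
    y = np.array([1] * (len(real)//2) + [0] * (len(fake)//2))
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted P(real)
        g = p - y                            # gradient of binary cross-entropy
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    Xt = np.vstack([real[len(real)//2:], fake[len(fake)//2:]])
    yt = np.array([1] * (len(real) - len(real)//2)
                  + [0] * (len(fake) - len(fake)//2))
    return (((Xt @ w + b) > 0) == yt).mean()

real = rng.normal(0.0, 1.0, (400, 2))
good_model = rng.normal(0.1, 1.0, (400, 2))   # close to the data distribution
bad_model = rng.normal(3.0, 1.0, (400, 2))    # far from the data distribution
print(nnd(real, good_model) < nnd(real, bad_model))  # True
```

Because the score is computed only from samples, memorizing the training set buys nothing: memorized samples are still distinguishable from the held-out real data unless the model generalizes.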
2,001.04063 | ProphetNet: Predicting Future N-gram for Sequence-to-Sequence
Pre-training | ['Weizhen Qi', 'Yu Yan', 'Yeyun Gong', 'Dayiheng Liu', 'Nan Duan', 'Jiusheng Chen', 'Ruofei Zhang', 'Ming Zhou'] | ['cs.CL'] | This paper presents a new sequence-to-sequence pre-training model called
ProphetNet, which introduces a novel self-supervised objective named future
n-gram prediction and the proposed n-stream self-attention mechanism. Instead
of optimizing one-step-ahead prediction in the traditional sequence-to-sequence
model, the ProphetNet is optimized by n-step ahead prediction that predicts the
next n tokens simultaneously based on previous context tokens at each time
step. The future n-gram prediction explicitly encourages the model to plan for
the future tokens and prevent overfitting on strong local correlations. We
pre-train ProphetNet using a base scale dataset (16GB) and a large-scale
dataset (160GB), respectively. Then we conduct experiments on CNN/DailyMail,
Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question
generation tasks. Experimental results show that ProphetNet achieves new
state-of-the-art results on all these datasets compared to the models using the
same scale pre-training corpus. | 2020-01-13T05:12:38Z | Accepted to EMNLP 2020 Findings. Project page:
https://github.com/microsoft/ProphetNet | null | null | ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training | ['Yu Yan', 'Weizhen Qi', 'Yeyun Gong', 'Dayiheng Liu', 'Nan Duan', 'Jiusheng Chen', 'Ruofei Zhang', 'Ming Zhou'] | 2,020 | Findings | 450 | 50 | ['Computer Science'] |
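The future n-gram objective above asks the model to predict the next n tokens at every step instead of one. Constructing those n-step-ahead targets from a token sequence is straightforward; this is only a sketch of the targets, not of the paper's n-stream self-attention mechanism:

```python
def future_ngram_targets(tokens, n):
    """At each position t, the model must predict tokens t+1 .. t+n
    (fewer near the end of the sequence)."""
    return [tuple(tokens[t + 1 : t + 1 + n]) for t in range(len(tokens) - 1)]

toks = ["I", "like", "natural", "language", "processing"]
for t, tgt in zip(toks, future_ngram_targets(toks, 2)):
    print(t, "->", tgt)
# I -> ('like', 'natural')
# like -> ('natural', 'language')
# ...
```

Training against these overlapping targets is what "explicitly encourages the model to plan for the future tokens" rather than exploiting only the strongest local correlation.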
2,001.04351 | CLUENER2020: Fine-grained Named Entity Recognition Dataset and Benchmark
for Chinese | ['Liang Xu', 'Yu tong', 'Qianqian Dong', 'Yixuan Liao', 'Cong Yu', 'Yin Tian', 'Weitang Liu', 'Lu Li', 'Caiquan Liu', 'Xuanwei Zhang'] | ['cs.CL', 'cs.IR', 'cs.LG'] | In this paper, we introduce the NER dataset from CLUE organization
(CLUENER2020), a well-defined fine-grained dataset for named entity recognition
in Chinese. CLUENER2020 contains 10 categories. Apart from common labels like
person, organization, and location, it contains more diverse categories. It is
more challenging than current other Chinese NER datasets and could better
reflect real-world applications. For comparison, we implement several
state-of-the-art baselines as sequence labeling tasks and report human
performance, as well as its analysis. To facilitate future work on fine-grained
NER for Chinese, we release our dataset, baselines, and leader-board. | 2020-01-13T15:39:56Z | 6 pages, 5 tables, 1 figure | null | null | null | null | null | null | null | null | null |
2,001.04643 | DDSP: Differentiable Digital Signal Processing | ['Jesse Engel', 'Lamtharn Hantrakul', 'Chenjie Gu', 'Adam Roberts'] | ['cs.LG', 'cs.SD', 'eess.AS', 'eess.SP', 'stat.ML'] | Most generative models of audio directly generate samples in one of two
domains: time or frequency. While sufficient to express any signal, these
representations are inefficient, as they do not utilize existing knowledge of
how sound is generated and perceived. A third approach (vocoders/synthesizers)
successfully incorporates strong domain knowledge of signal processing and
perception, but has been less actively researched due to limited expressivity
and difficulty integrating with modern auto-differentiation-based machine
learning methods. In this paper, we introduce the Differentiable Digital Signal
Processing (DDSP) library, which enables direct integration of classic signal
processing elements with deep learning methods. Focusing on audio synthesis, we
achieve high-fidelity generation without the need for large autoregressive
models or adversarial losses, demonstrating that DDSP enables utilizing strong
inductive biases without losing the expressive power of neural networks.
Further, we show that combining interpretable modules permits manipulation of
each separate model component, with applications such as independent control of
pitch and loudness, realistic extrapolation to pitches not seen during
training, blind dereverberation of room acoustics, transfer of extracted room
acoustics to new environments, and transformation of timbre between disparate
sources. In short, DDSP enables an interpretable and modular approach to
generative modeling, without sacrificing the benefits of deep learning. The
library is publicly available at https://github.com/magenta/ddsp and we welcome
further contributions from the community and domain experts. | 2020-01-14T06:49:37Z | null | null | null | DDSP: Differentiable Digital Signal Processing | ['Jesse Engel', 'Lamtharn Hantrakul', 'Chenjie Gu', 'Adam Roberts'] | 2,020 | International Conference on Learning Representations | 381 | 41 | ['Computer Science', 'Engineering', 'Mathematics'] |
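The core DDSP element — an oscillator whose controls a network would predict — can be sketched in plain numpy. Here a harmonic synthesizer integrates a per-sample fundamental frequency into phase and sums weighted harmonics; the real library implements this differentiably in TensorFlow with time-varying harmonic amplitudes, and the constants below are illustrative:

```python
import numpy as np

def harmonic_synth(f0, amps, sr=16000):
    """Additive synthesis: a sum of harmonics of a time-varying fundamental.

    f0:   (T,) fundamental frequency in Hz at each sample
    amps: (K,) amplitude per harmonic (what a DDSP model would predict)
    """
    phase = 2 * np.pi * np.cumsum(f0) / sr        # integrate frequency
    k = np.arange(1, len(amps) + 1)[:, None]      # harmonic numbers 1..K
    return (amps[:, None] * np.sin(k * phase)).sum(axis=0)

sr = 16000
f0 = np.full(sr, 220.0)                  # one second of a steady A3
amps = np.array([0.5, 0.25, 0.125])      # three harmonics
audio = harmonic_synth(f0, amps, sr)
print(audio.shape)  # (16000,)
```

Because every operation here is differentiable, gradients can flow from an audio loss back into the network that produced `f0` and `amps` — the strong inductive bias the abstract describes.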
2,001.06286 | RobBERT: a Dutch RoBERTa-based Language Model | ['Pieter Delobelle', 'Thomas Winters', 'Bettina Berendt'] | ['cs.CL', 'cs.LG'] | Pre-trained language models have been dominating the field of natural
language processing in recent years, and have led to significant performance
gains for various complex natural language tasks. One of the most prominent
pre-trained language models is BERT, which was released as an English as well
as a multilingual version. Although multilingual BERT performs well on many
tasks, recent studies show that BERT models trained on a single language
significantly outperform the multilingual version. Training a Dutch BERT model
thus has a lot of potential for a wide range of Dutch NLP tasks. While previous
approaches have used earlier implementations of BERT to train a Dutch version
of BERT, we used RoBERTa, a robustly optimized BERT approach, to train a Dutch
language model called RobBERT. We measured its performance on various tasks as
well as the importance of the fine-tuning dataset size. We also evaluated the
importance of language-specific tokenizers and the model's fairness. We found
that RobBERT improves state-of-the-art results for various tasks, and
especially significantly outperforms other models when dealing with smaller
datasets. These results indicate that it is a powerful pre-trained model for a
large variety of Dutch language tasks. The pre-trained and fine-tuned models
are publicly available to support further downstream Dutch NLP applications. | 2020-01-17T13:25:44Z | 11 pages, 4 tables, 3 figures. Accepted in EMNLP Findings | null | null | null | null | null | null | null | null | null |
2,001.07487 | Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the
Politically Incorrect Board | ['Antonis Papasavva', 'Savvas Zannettou', 'Emiliano De Cristofaro', 'Gianluca Stringhini', 'Jeremy Blackburn'] | ['cs.CY', 'cs.SI'] | This paper presents a dataset with over 3.3M threads and 134.5M posts from
the Politically Incorrect board (/pol/) of the imageboard forum 4chan, posted
over a period of almost 3.5 years (June 2016-November 2019). To the best of our
knowledge, this represents the largest publicly available 4chan dataset,
providing the community with an archive of posts that have been permanently
deleted from 4chan and are otherwise inaccessible. We augment the data with a
set of additional labels, including toxicity scores and the named entities
mentioned in each post. We also present a statistical analysis of the dataset,
providing an overview of what researchers interested in using it can expect, as
well as a simple content analysis, shedding light on the most prominent
discussion topics, the most popular entities mentioned, and the toxicity level
of each post. Overall, we are confident that our work will motivate and assist
researchers in studying and understanding 4chan, as well as its role on the
greater Web. For instance, we hope this dataset may be used for cross-platform
studies of social media, as well as being useful for other types of research
like natural language processing. Finally, our dataset can assist qualitative
work focusing on in-depth case studies of specific narratives, events, or
social theories. | 2020-01-21T12:52:24Z | null | Published at the 14th International AAAI Conference on Web and
Social Media (ICWSM 2020). Please cite the ICWSM version | null | Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board | ['Antonis Papasavva', 'Savvas Zannettou', 'Emiliano De Cristofaro', 'G. Stringhini', 'Jeremy Blackburn'] | 2,020 | International Conference on Web and Social Media | 94 | 46 | ['Computer Science'] |
2001.08210 | Multilingual Denoising Pre-training for Neural Machine Translation | ['Yinhan Liu', 'Jiatao Gu', 'Naman Goyal', 'Xian Li', 'Sergey Edunov', 'Marjan Ghazvininejad', 'Mike Lewis', 'Luke Zettlemoyer'] | ['cs.CL'] | This paper demonstrates that multilingual denoising pre-training produces
significant performance gains across a wide variety of machine translation (MT)
tasks. We present mBART -- a sequence-to-sequence denoising auto-encoder
pre-trained on large-scale monolingual corpora in many languages using the BART
objective. mBART is one of the first methods for pre-training a complete
sequence-to-sequence model by denoising full texts in multiple languages, while
previous approaches have focused only on the encoder, decoder, or
reconstructing parts of the text. Pre-training a complete model allows it to be
directly fine tuned for supervised (both sentence-level and document-level) and
unsupervised machine translation, with no task-specific modifications. We
demonstrate that adding mBART initialization produces performance gains in all
but the highest-resource settings, including up to 12 BLEU points for low
resource MT and over 5 BLEU points for many document-level and unsupervised
models. We also show that it enables new types of transfer to language pairs
with no bi-text or that were not in the pre-training corpus, and present
extensive analysis of which factors contribute the most to effective
pre-training. | 2020-01-22T18:59:17Z | Work in progress | null | null | null | null | null | null | null | null | null |
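The BART-style denoising objective mBART reuses combines two corruptions: sentence permutation and text infilling (spans replaced by a single mask token), with the model trained to reconstruct the original text. A deliberately simplified word-level sketch — the real objective operates on subword tokens and masks many Poisson-length spans covering roughly 35% of the words, whereas this masks one span:

```python
import random

MASK = "<mask>"

def noise(sentences, rng, mask_frac=0.35):
    """Simplified BART-style corruption: permute sentence order, then
    replace one contiguous token span (~mask_frac of the text) with a
    single <mask> token."""
    sents = sentences[:]
    rng.shuffle(sents)                       # sentence permutation
    tokens = " ".join(sents).split()
    span = max(1, round(len(tokens) * mask_frac))
    start = rng.randrange(len(tokens) - span + 1)
    return " ".join(tokens[:start] + [MASK] + tokens[start + span:])

src = ["the cat sat on the mat .", "it was warm ."]
print(noise(src, random.Random(0)))
```

The sequence-to-sequence model sees the corrupted text as encoder input and the original text as decoder target, which is what makes the whole encoder-decoder stack pre-trainable at once.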
2,001.08361 | Scaling Laws for Neural Language Models | ['Jared Kaplan', 'Sam McCandlish', 'Tom Henighan', 'Tom B. Brown', 'Benjamin Chess', 'Rewon Child', 'Scott Gray', 'Alec Radford', 'Jeffrey Wu', 'Dario Amodei'] | ['cs.LG', 'stat.ML'] | We study empirical scaling laws for language model performance on the
cross-entropy loss. The loss scales as a power-law with model size, dataset
size, and the amount of compute used for training, with some trends spanning
more than seven orders of magnitude. Other architectural details such as
network width or depth have minimal effects within a wide range. Simple
equations govern the dependence of overfitting on model/dataset size and the
dependence of training speed on model size. These relationships allow us to
determine the optimal allocation of a fixed compute budget. Larger models are
significantly more sample-efficient, such that optimally compute-efficient
training involves training very large models on a relatively modest amount of
data and stopping significantly before convergence. | 2020-01-23T03:59:20Z | 19 pages, 15 figures | null | null | Scaling Laws for Neural Language Models | ['J. Kaplan', 'Sam McCandlish', 'T. Henighan', 'Tom B. Brown', 'Benjamin Chess', 'R. Child', 'Scott Gray', 'Alec Radford', 'Jeff Wu', 'Dario Amodei'] | 2,020 | arXiv.org | 4,948 | 59 | ['Computer Science', 'Mathematics'] |
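A power law of the kind described above, L(N) = (Nc/N)^α, is linear in log-log space, so its exponent can be recovered by ordinary linear regression. A toy fit on synthetic data; the constants are merely chosen to match the magnitude of the paper's reported model-size fit, not taken as its results:

```python
import numpy as np

# Synthetic losses following L(N) = (Nc / N)**alpha (illustrative constants).
alpha_true, Nc = 0.076, 8.8e13
N = np.logspace(6, 11, 20)               # model sizes in parameters
L = (Nc / N) ** alpha_true

# In log-log space: log L = alpha*log Nc - alpha*log N, i.e. a straight line.
slope, intercept = np.polyfit(np.log(N), np.log(L), 1)
alpha_hat, Nc_hat = -slope, np.exp(-intercept / slope)
print(round(alpha_hat, 3))  # 0.076
```

The same log-log regression, applied to measured losses at several model sizes, is how a "trend spanning seven orders of magnitude" becomes a two-parameter summary.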
2,001.09694 | Retrospective Reader for Machine Reading Comprehension | ['Zhuosheng Zhang', 'Junjie Yang', 'Hai Zhao'] | ['cs.CL', 'cs.AI', 'cs.IR'] | Machine reading comprehension (MRC) is an AI challenge that requires machine
to determine the correct answers to questions based on a given passage. MRC
systems must not only answer question when necessary but also distinguish when
no answer is available according to the given passage and then tactfully
abstain from answering. When unanswerable questions are involved in the MRC
task, an essential verification module called verifier is especially required
in addition to the encoder, though the latest practice on MRC modeling still
most benefits from adopting well pre-trained language models as the encoder
block by only focusing on the "reading". This paper devotes itself to exploring
better verifier design for the MRC task with unanswerable questions. Inspired
by how humans solve reading comprehension questions, we proposed a
retrospective reader (Retro-Reader) that integrates two stages of reading and
verification strategies: 1) sketchy reading that briefly investigates the
overall interactions of passage and question, and yields an initial judgment; 2)
intensive reading that verifies the answer and gives the final prediction. The
proposed reader is evaluated on two benchmark MRC challenge datasets SQuAD2.0
and NewsQA, achieving new state-of-the-art results. Significance tests show
that our model is significantly better than the strong ELECTRA and ALBERT
baselines. A series of analyses is also conducted to interpret the
effectiveness of the proposed reader. | 2020-01-27T11:14:34Z | Accepted by AAAI 2021 | null | null | Retrospective Reader for Machine Reading Comprehension | ['Zhuosheng Zhang', 'Junjie Yang', 'Hai Zhao'] | 2,020 | AAAI Conference on Artificial Intelligence | 227 | 57 | ['Computer Science'] |
2,001.09977 | Towards a Human-like Open-Domain Chatbot | ['Daniel Adiwardana', 'Minh-Thang Luong', 'David R. So', 'Jamie Hall', 'Noah Fiedel', 'Romal Thoppilan', 'Zi Yang', 'Apoorv Kulshreshtha', 'Gaurav Nemade', 'Yifeng Lu', 'Quoc V. Le'] | ['cs.CL', 'cs.LG', 'cs.NE', 'stat.ML'] | We present Meena, a multi-turn open-domain chatbot trained end-to-end on data
mined and filtered from public domain social media conversations. This 2.6B
parameter neural network is simply trained to minimize perplexity of the next
token. We also propose a human evaluation metric called Sensibleness and
Specificity Average (SSA), which captures key elements of a human-like
multi-turn conversation. Our experiments show strong correlation between
perplexity and SSA. The fact that the best perplexity end-to-end trained Meena
scores high on SSA (72% on multi-turn evaluation) suggests that a human-level
SSA of 86% is potentially within reach if we can better optimize perplexity.
Additionally, the full version of Meena (with a filtering mechanism and tuned
decoding) scores 79% SSA, 23% higher in absolute SSA than the existing chatbots
we evaluated. | 2020-01-27T18:53:15Z | 38 pages, 12 figures | null | null | null | null | null | null | null | null | null |
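Meena is "simply trained to minimize perplexity of the next token," and perplexity is just the exponentiated average negative log-likelihood of the observed tokens. A small self-contained computation (the probability values are made up for illustration):

```python
import math

def perplexity(token_probs):
    """exp of the mean negative log-probability the model assigned to
    each observed next token; lower is better."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# Probabilities a hypothetical model assigned to the tokens it actually saw.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # 4.0 — like guessing among 4
print(perplexity([0.5, 0.5]))                # 2.0 — like guessing among 2
```

Intuitively, a perplexity of k means the model is as uncertain as a uniform choice among k tokens, which is why the paper can correlate this single number with the human-judged SSA metric.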
2001.11190 | 2018 Robotic Scene Segmentation Challenge | ['Max Allan', 'Satoshi Kondo', 'Sebastian Bodenstedt', 'Stefan Leger', 'Rahim Kadkhodamohammadi', 'Imanol Luengo', 'Felix Fuentes', 'Evangello Flouty', 'Ahmed Mohammed', 'Marius Pedersen', 'Avinash Kori', 'Varghese Alex', 'Ganapathy Krishnamurthi', 'David Rauber', 'Robert Mendel', 'Christoph Palm', 'Sophia Bano', 'Guinther Saibro', 'Chi-Sheng Shih', 'Hsun-An Chiang', 'Juntang Zhuang', 'Junlin Yang', 'Vladimir Iglovikov', 'Anton Dobrenkii', 'Madhu Reddiboina', 'Anubhav Reddy', 'Xingtong Liu', 'Cong Gao', 'Mathias Unberath', 'Myeonghyeon Kim', 'Chanho Kim', 'Chaewon Kim', 'Hyejin Kim', 'Gyeongmin Lee', 'Ihsan Ullah', 'Miguel Luna', 'Sang Hyun Park', 'Mahdi Azizian', 'Danail Stoyanov', 'Lena Maier-Hein', 'Stefanie Speidel'] | ['cs.CV', 'cs.RO'] | In 2015 we began a sub-challenge at the EndoVis workshop at MICCAI in Munich
using endoscope images of ex-vivo tissue with automatically generated
annotations from robot forward kinematics and instrument CAD models. However,
the limited background variation and simple motion rendered the dataset
uninformative in learning about which techniques would be suitable for
segmentation in real surgery. In 2017, at the same workshop in Quebec we
introduced the robotic instrument segmentation dataset with 10 teams
participating in the challenge to perform binary, articulating parts and type
segmentation of da Vinci instruments. This challenge included realistic
instrument motion and more complex porcine tissue as background and was widely
addressed with modifications on U-Nets and other popular CNN architectures. In
2018 we added to the complexity by introducing a set of anatomical objects and
medical devices to the segmented classes. To avoid over-complicating the
challenge, we continued with porcine data which is dramatically simpler than
human tissue due to the lack of fatty tissue occluding many organs. | 2020-01-30T06:37:07Z | null | null | null | null | null | null | null | null | null | null |
2,001.11314 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework
for Natural Language Generation | ['Dongling Xiao', 'Han Zhang', 'Yukun Li', 'Yu Sun', 'Hao Tian', 'Hua Wu', 'Haifeng Wang'] | ['cs.CL', 'cs.LG'] | Current pre-training works in natural language generation pay little
attention to the problem of exposure bias on downstream tasks. To address this
issue, we propose an enhanced multi-flow sequence to sequence pre-training and
fine-tuning framework named ERNIE-GEN, which bridges the discrepancy between
training and inference with an infilling generation mechanism and a noise-aware
generation method. To make generation closer to human writing patterns, this
framework introduces a span-by-span generation flow that trains the model to
predict semantically-complete spans consecutively rather than predicting word
by word. Unlike existing pre-training methods, ERNIE-GEN incorporates
multi-granularity target sampling to construct pre-training data, which
enhances the correlation between encoder and decoder. Experimental results
demonstrate that ERNIE-GEN achieves state-of-the-art results with a much
smaller amount of pre-training data and parameters on a range of language
generation tasks, including abstractive summarization (Gigaword and
CNN/DailyMail), question generation (SQuAD), dialogue generation (Persona-Chat)
and generative question answering (CoQA). | 2020-01-26T02:54:49Z | The source codes and pre-trained models have been released at
https://github.com/PaddlePaddle/ERNIE. We have also updated the performances
of ERNIE-GEN under a larger scaled pre-training corpora in appendix A | null | null | null | null | null | null | null | null | null |
2,002.00212 | Pop Music Transformer: Beat-based Modeling and Generation of Expressive
Pop Piano Compositions | ['Yu-Siang Huang', 'Yi-Hsuan Yang'] | ['cs.SD', 'cs.AI', 'eess.AS', 'stat.ML'] | A great number of deep learning based models have been recently proposed for
automatic music composition. Among these models, the Transformer stands out as
a prominent approach for generating expressive classical piano performance with
a coherent structure of up to one minute. The model is powerful in that it
learns abstractions of data on its own, without much human-imposed domain
knowledge or constraints. In contrast with this general approach, this paper
shows that Transformers can do even better for music modeling, when we improve
the way a musical score is converted into the data fed to a Transformer model.
In particular, we seek to impose a metrical structure in the input data, so
that Transformers can be more easily aware of the beat-bar-phrase hierarchical
structure in music. The new data representation maintains the flexibility of
local tempo changes, and provides hurdles to control the rhythmic and harmonic
structure of music. With this approach, we build a Pop Music Transformer that
composes Pop piano music with better rhythmic structure than existing
Transformer models. | 2020-02-01T14:12:35Z | Accepted at ACM Multimedia 2020 | null | null | Pop Music Transformer: Generating Music with Rhythm and Harmony | ['Yu-Siang Huang', 'Yi-Hsuan Yang'] | 2,020 | arXiv.org | 39 | 32 | ['Computer Science', 'Engineering', 'Mathematics'] |
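The beat-based representation above can be sketched by quantizing note onsets to a bar/position grid. This is a simplification of the paper's REMI-style tokens (token names here are hypothetical, and the real vocabulary also includes tempo, chord, duration, and velocity events):

```python
def beat_tokens(onsets_in_beats, beats_per_bar=4, positions_per_beat=4):
    """Map each note onset (in beats from the start) to Bar/Position tokens,
    imposing the beat-bar hierarchy on the event sequence."""
    tokens, current_bar = [], -1
    for onset in onsets_in_beats:
        pos = round(onset * positions_per_beat)
        bar, slot = divmod(pos, beats_per_bar * positions_per_beat)
        if bar != current_bar:
            tokens.append(f"Bar_{bar}")      # explicit bar boundary
            current_bar = bar
        tokens.append(f"Position_{slot}/16")  # slot within the bar
        tokens.append("Note-On")
    return tokens

print(beat_tokens([0.0, 1.0, 4.5]))
```

Because bar lines and in-bar positions are explicit tokens rather than implicit in absolute timing, the Transformer can model metrical structure directly while tempo remains free to vary locally.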
2,002.00293 | Beat the AI: Investigating Adversarial Human Annotation for Reading
Comprehension | ['Max Bartolo', 'Alastair Roberts', 'Johannes Welbl', 'Sebastian Riedel', 'Pontus Stenetorp'] | ['cs.CL'] | Innovations in annotation methodology have been a catalyst for Reading
Comprehension (RC) datasets and models. One recent trend to challenge current
RC models is to involve a model in the annotation process: humans create
questions adversarially, such that the model fails to answer them correctly. In
this work we investigate this annotation methodology and apply it in three
different settings, collecting a total of 36,000 samples with progressively
stronger models in the annotation loop. This allows us to explore questions
such as the reproducibility of the adversarial effect, transfer from data
collected with varying model-in-the-loop strengths, and generalisation to data
collected without a model. We find that training on adversarially collected
samples leads to strong generalisation to non-adversarially collected datasets,
yet with progressive performance deterioration with increasingly stronger
models-in-the-loop. Furthermore, we find that stronger models can still learn
from datasets collected with substantially weaker models-in-the-loop. When
trained on data collected with a BiDAF model in the loop, RoBERTa achieves
39.9F1 on questions that it cannot answer when trained on SQuAD - only
marginally lower than when trained on data collected using RoBERTa itself
(41.0F1). | 2020-02-02T00:22:55Z | null | Transactions of the Association for Computational Linguistics,
Volume 8, 2020 p.662-678 | 10.1162/tacl_a_00338 | Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension | ['Max Bartolo', 'A. Roberts', 'Johannes Welbl', 'Sebastian Riedel', 'Pontus Stenetorp'] | 2,020 | Transactions of the Association for Computational Linguistics | 175 | 58 | ['Computer Science'] |
2,002.01322 | Training Keyword Spotters with Limited and Synthesized Speech Data | ['James Lin', 'Kevin Kilgour', 'Dominik Roblek', 'Matthew Sharifi'] | ['eess.AS', 'cs.LG', 'cs.SD', 'stat.ML'] | With the rise of low power speech-enabled devices, there is a growing demand
to quickly produce models for recognizing arbitrary sets of keywords. As with
many machine learning tasks, one of the most challenging parts in the model
creation process is obtaining a sufficient amount of training data. In this
paper, we explore the effectiveness of synthesized speech data in training
small, spoken term detection models of around 400k parameters. Instead of
training such models directly on the audio or low level features such as MFCCs,
we use a pre-trained speech embedding model trained to extract useful features
for keyword spotting models. Using this speech embedding, we show that a model
which detects 10 keywords when trained on only synthetic speech is equivalent
to a model trained on over 500 real examples. We also show that a model without
our speech embeddings would need to be trained on over 4000 real examples to
reach the same accuracy. | 2020-01-31T07:50:42Z | null | null | null | null | null | null | null | null | null | null |
2,002.01808 | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters | ['Ruize Wang', 'Duyu Tang', 'Nan Duan', 'Zhongyu Wei', 'Xuanjing Huang', 'Jianshu ji', 'Guihong Cao', 'Daxin Jiang', 'Ming Zhou'] | ['cs.CL', 'cs.LG'] | We study the problem of injecting knowledge into large pre-trained models
like BERT and RoBERTa. Existing methods typically update the original
parameters of pre-trained models when injecting knowledge. However, when
multiple kinds of knowledge are injected, the historically injected knowledge
would be flushed away. To address this, we propose K-Adapter, a framework that
retains the original parameters of the pre-trained model fixed and supports the
development of versatile knowledge-infused models. Taking RoBERTa as the
backbone model, K-Adapter has a neural adapter for each kind of infused
knowledge, like a plug-in connected to RoBERTa. There is no information flow
between different adapters, thus multiple adapters can be efficiently trained
in a distributed way. As a case study, we inject two kinds of knowledge in this
work, including (1) factual knowledge obtained from automatically aligned
text-triplets on Wikipedia and Wikidata and (2) linguistic knowledge obtained
via dependency parsing. Results on three knowledge-driven tasks, including
relation classification, entity typing, and question answering, demonstrate
that each adapter improves the performance and the combination of both adapters
brings further improvements. Further analysis indicates that K-Adapter captures
more versatile knowledge than RoBERTa. | 2020-02-05T14:30:49Z | null | null | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters | ['Ruize Wang', 'Duyu Tang', 'Nan Duan', 'Zhongyu Wei', 'Xuanjing Huang', 'Jianshu Ji', 'Guihong Cao', 'Daxin Jiang', 'Ming Zhou'] | 2020 | Findings | 557 | 53 | ['Computer Science'] |
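K-Adapter attaches small adapter networks outside a frozen RoBERTa, one per knowledge source. The sketch below illustrates the general bottleneck-adapter idea (down-project, nonlinearity, up-project, residual) rather than the paper's exact layer layout; all weight shapes here are illustrative assumptions.

```python
def bottleneck_adapter(h, w_down, w_up):
    """Generic bottleneck adapter: down-project the hidden vector,
    apply ReLU, up-project, and add the residual. With all-zero
    weights the adapter is the identity, so a frozen backbone's
    behavior is preserved at initialization."""
    z = [max(0.0, sum(hi * wi for hi, wi in zip(h, col))) for col in w_down]
    out = [sum(zi * wi for zi, wi in zip(z, col)) for col in w_up]
    return [a + b for a, b in zip(h, out)]
```

Because each adapter only reads the frozen backbone's hidden states, independently parameterized adapters can be trained in parallel, which is the property the abstract highlights.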
2002.02497 | On the limits of cross-domain generalization in automated X-ray
prediction | ['Joseph Paul Cohen', 'Mohammad Hashir', 'Rupert Brooks', 'Hadrien Bertrand'] | ['eess.IV', 'cs.LG', 'q-bio.QM', 'stat.ML'] | This large-scale study focuses on quantifying which X-ray diagnostic
prediction tasks generalize well across multiple different datasets. We present
evidence that the issue of generalization is not due to a shift in the images
but instead a shift in the labels. We study the cross-domain performance,
agreement between models, and model representations. We find interesting
discrepancies between performance and agreement: models that both achieve
good performance may still disagree in their predictions, while models that
agree may both achieve poor performance. We also test for concept similarity by
regularizing a network to group tasks across multiple datasets together and
observe variation across the tasks. All code is made available online and data
is publicly available: https://github.com/mlmed/torchxrayvision | 2020-02-06T20:07:54Z | Full paper at MIDL2020 | null | null | On the limits of cross-domain generalization in automated X-ray prediction | ['Joseph Paul Cohen', 'Mohammad Hashir', 'Rupert Brooks', 'H. Bertrand'] | 2020 | International Conference on Medical Imaging with Deep Learning | 130 | 39 | ['Computer Science', 'Physics', 'Engineering', 'Biology', 'Mathematics'] |
2002.02925 | BERT-of-Theseus: Compressing BERT by Progressive Module Replacing | ['Canwen Xu', 'Wangchunshu Zhou', 'Tao Ge', 'Furu Wei', 'Ming Zhou'] | ['cs.CL', 'cs.LG'] | In this paper, we propose a novel model compression approach to effectively
compress BERT by progressive module replacing. Our approach first divides the
original BERT into several modules and builds their compact substitutes. Then,
we randomly replace the original modules with their substitutes to train the
compact modules to mimic the behavior of the original modules. We progressively
increase the probability of replacement through the training. In this way, our
approach brings a deeper level of interaction between the original and compact
models. Compared to the previous knowledge distillation approaches for BERT
compression, our approach does not introduce any additional loss function. Our
approach outperforms existing knowledge distillation approaches on GLUE
benchmark, showing a new perspective of model compression. | 2020-02-07T17:52:16Z | EMNLP 2020 | null | null | null | null | null | null | null | null | null |
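The BERT-of-Theseus abstract describes stochastic module replacement with a replacement probability that grows during training. A minimal sketch of that mechanism (function names and the linear schedule are illustrative assumptions; the paper leaves the exact schedule configurable):

```python
import random

def replacement_probability(step, total_steps, p0=0.5):
    # linearly increase the chance of using the compact substitute,
    # reaching 1.0 (substitutes only) by the end of training
    return min(1.0, p0 + (1.0 - p0) * step / total_steps)

def theseus_forward(x, originals, substitutes, p_replace, rng=random):
    # per module, randomly route through the original or its compact
    # substitute; no extra distillation loss term is introduced
    for orig, sub in zip(originals, substitutes):
        x = sub(x) if rng.random() < p_replace else orig(x)
    return x
```

Once the probability reaches 1.0, only the compact substitutes are active, yielding the compressed model.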
2002.04745 | On Layer Normalization in the Transformer Architecture | ['Ruibin Xiong', 'Yunchang Yang', 'Di He', 'Kai Zheng', 'Shuxin Zheng', 'Chen Xing', 'Huishuai Zhang', 'Yanyan Lan', 'Liwei Wang', 'Tie-Yan Liu'] | ['cs.LG', 'cs.CL', 'stat.ML'] | The Transformer is widely used in natural language processing tasks. To train
a Transformer however, one usually needs a carefully designed learning rate
warm-up stage, which is shown to be crucial to the final performance but will
slow down the optimization and bring more hyper-parameter tunings. In this
paper, we first study theoretically why the learning rate warm-up stage is
essential and show that the location of layer normalization matters.
Specifically, we prove with mean field theory that at initialization, for the
original-designed Post-LN Transformer, which places the layer normalization
between the residual blocks, the expected gradients of the parameters near the
output layer are large. Therefore, using a large learning rate on those
gradients makes the training unstable. The warm-up stage is practically helpful
for avoiding this problem. On the other hand, our theory also shows that if the
layer normalization is put inside the residual blocks (recently proposed as
Pre-LN Transformer), the gradients are well-behaved at initialization. This
motivates us to remove the warm-up stage for the training of Pre-LN
Transformers. We show in our experiments that Pre-LN Transformers without the
warm-up stage can reach comparable results with baselines while requiring
significantly less training time and hyper-parameter tuning on a wide range of
applications. | 2020-02-12T00:33:03Z | null | Published on ICML 2020 | null | null | null | null | null | null | null | null |
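The Post-LN versus Pre-LN distinction in this abstract comes down to where LayerNorm sits relative to the residual connection. A minimal sketch (unparameterized LayerNorm, generic sublayer; not the paper's implementation):

```python
def layer_norm(x, eps=1e-5):
    # per-vector normalization to zero mean, unit variance (no learned scale/shift)
    m = sum(x) / len(x)
    var = sum((xi - m) ** 2 for xi in x) / len(x)
    return [(xi - m) / (var + eps) ** 0.5 for xi in x]

def post_ln_block(x, sublayer):
    # original Transformer: LayerNorm applied AFTER the residual addition
    return layer_norm([a + b for a, b in zip(x, sublayer(x))])

def pre_ln_block(x, sublayer):
    # Pre-LN: LayerNorm applied to the sublayer input; the residual path
    # stays untouched, which keeps gradients well-behaved at initialization
    return [a + b for a, b in zip(x, sublayer(layer_norm(x)))]
```

Note that with an all-zero sublayer, the Pre-LN block is exactly the identity on its input while the Post-LN block still normalizes it, which is the structural difference behind the paper's gradient analysis.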
2002.04815 | Utilizing BERT Intermediate Layers for Aspect Based Sentiment Analysis
and Natural Language Inference | ['Youwei Song', 'Jiahai Wang', 'Zhiwei Liang', 'Zhiyue Liu', 'Tao Jiang'] | ['cs.CL', 'cs.LG'] | Aspect based sentiment analysis aims to identify the sentimental tendency
towards a given aspect in text. Fine-tuning of pretrained BERT performs
excellent on this task and achieves state-of-the-art performances. Existing
BERT-based works only utilize the last output layer of BERT and ignore the
semantic knowledge in the intermediate layers. This paper explores the
potential of utilizing BERT intermediate layers to enhance the performance of
fine-tuning of BERT. To the best of our knowledge, no existing work has been
done on this research. To show the generality, we also apply this approach to a
natural language inference task. Experimental results demonstrate the
effectiveness and generality of the proposed approach. | 2020-02-12T06:11:48Z | 5 pages, 2 figures | null | null | null | null | null | null | null | null | null |
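The core idea in this abstract is pooling representations from all of BERT's intermediate layers instead of only the final one. As a stand-in for the paper's learned pooling (it uses trainable aggregation; a plain weighted average shown here is an illustrative assumption):

```python
def pool_layer_cls(layer_cls_states, weights=None):
    # combine the [CLS] vector from every intermediate layer rather than
    # only the final one; here a (optionally weighted) per-dimension average
    n = len(layer_cls_states)
    if weights is None:
        weights = [1.0 / n] * n
    dim = len(layer_cls_states[0])
    return [sum(w * layer[d] for w, layer in zip(weights, layer_cls_states))
            for d in range(dim)]
```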
2002.05202 | GLU Variants Improve Transformer | ['Noam Shazeer'] | ['cs.LG', 'cs.NE', 'stat.ML'] | Gated Linear Units (arXiv:1612.08083) consist of the component-wise product
of two linear projections, one of which is first passed through a sigmoid
function. Variations on GLU are possible, using different nonlinear (or even
linear) functions in place of sigmoid. We test these variants in the
feed-forward sublayers of the Transformer (arXiv:1706.03762)
sequence-to-sequence model, and find that some of them yield quality
improvements over the typically-used ReLU or GELU activations. | 2020-02-12T19:57:13Z | null | null | null | null | null | null | null | null | null | null |
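The GLU family described in this abstract is compact enough to sketch directly: two linear projections of the same input, one gated by an activation. A one-output-unit illustration (the paper applies this inside the Transformer feed-forward sublayer, with full weight matrices):

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def glu_variant(x, w, v, act):
    # component-wise product of two linear projections of x,
    # with only the first passed through the activation `act`
    xw = sum(xi * wi for xi, wi in zip(x, w))
    xv = sum(xi * vi for xi, vi in zip(x, v))
    return act(xw) * xv

# GLU uses sigmoid; ReGLU swaps in ReLU; the "bilinear" variant
# uses the identity, i.e. no nonlinearity at all.
relu = lambda u: max(0.0, u)
identity = lambda u: u
```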
2002.05709 | A Simple Framework for Contrastive Learning of Visual Representations | ['Ting Chen', 'Simon Kornblith', 'Mohammad Norouzi', 'Geoffrey Hinton'] | ['cs.LG', 'cs.CV', 'stat.ML'] | This paper presents SimCLR: a simple framework for contrastive learning of
visual representations. We simplify recently proposed contrastive
self-supervised learning algorithms without requiring specialized architectures
or a memory bank. In order to understand what enables the contrastive
prediction tasks to learn useful representations, we systematically study the
major components of our framework. We show that (1) composition of data
augmentations plays a critical role in defining effective predictive tasks, (2)
introducing a learnable nonlinear transformation between the representation and
the contrastive loss substantially improves the quality of the learned
representations, and (3) contrastive learning benefits from larger batch sizes
and more training steps compared to supervised learning. By combining these
findings, we are able to considerably outperform previous methods for
self-supervised and semi-supervised learning on ImageNet. A linear classifier
trained on self-supervised representations learned by SimCLR achieves 76.5%
top-1 accuracy, which is a 7% relative improvement over previous
state-of-the-art, matching the performance of a supervised ResNet-50. When
fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy,
outperforming AlexNet with 100X fewer labels. | 2020-02-13T18:50:45Z | ICML'2020. Code and pretrained models at
https://github.com/google-research/simclr | null | null | null | null | null | null | null | null | null |
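SimCLR's contrastive objective (NT-Xent) scores each positive pair of augmented views against every other view in the batch. A minimal per-pair sketch, assuming a precomputed similarity matrix over the augmented views (the real implementation computes cosine similarities of projected embeddings):

```python
import math

def nt_xent_loss(sim, i, j, tau=0.5):
    """Normalized temperature-scaled cross entropy for one positive pair.
    `sim` is the similarity matrix over all augmented views in the batch;
    the loss is -log softmax of the positive similarity over every
    candidate except the anchor itself."""
    denom = sum(math.exp(sim[i][k] / tau) for k in range(len(sim)) if k != i)
    return -math.log(math.exp(sim[i][j] / tau) / denom)
```

Larger batches add more negatives to the denominator, which is one reason the abstract reports that contrastive learning benefits from bigger batch sizes.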
2002.05810 | RNA Secondary Structure Prediction By Learning Unrolled Algorithms | ['Xinshi Chen', 'Yu Li', 'Ramzan Umarov', 'Xin Gao', 'Le Song'] | ['cs.LG', 'stat.ML'] | In this paper, we propose an end-to-end deep learning model, called E2Efold,
for RNA secondary structure prediction which can effectively take into account
the inherent constraints in the problem. The key idea of E2Efold is to directly
predict the RNA base-pairing matrix, and use an unrolled algorithm for
constrained programming as the template for deep architectures to enforce
constraints. With comprehensive experiments on benchmark datasets, we
demonstrate the superior performance of E2Efold: it predicts significantly
better structures compared to previous SOTA (especially for pseudoknotted
structures), while being as efficient as the fastest algorithms in terms of
inference time. | 2020-02-13T23:21:25Z | International Conference on Learning Representations 2020 | International Conference on Learning Representations 2020,
https://openreview.net/forum?id=S1eALyrYDH | null | RNA Secondary Structure Prediction By Learning Unrolled Algorithms | ['Xinshi Chen', 'Yu Li', 'Ramzan Umarov', 'Xin Gao', 'Le Song'] | 2020 | International Conference on Learning Representations | 119 | 39 | ['Computer Science', 'Mathematics'] |
2002.06071 | FQuAD: French Question Answering Dataset | ["Martin d'Hoffschmidt", 'Wacim Belblidia', 'Tom Brendlé', 'Quentin Heinrich', 'Maxime Vidal'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Recent advances in the field of language modeling have improved
state-of-the-art results on many Natural Language Processing tasks. Among them,
Reading Comprehension has made significant progress over the past few years.
However, most results are reported in English since labeled resources available
in other languages, such as French, remain scarce. In the present work, we
introduce the French Question Answering Dataset (FQuAD). FQuAD is a French
Native Reading Comprehension dataset of questions and answers on a set of
Wikipedia articles that consists of 25,000+ samples for the 1.0 version and
60,000+ samples for the 1.1 version. We train a baseline model which achieves
an F1 score of 92.2 and an exact match ratio of 82.1 on the test set. In order
to track the progress of French Question Answering models we propose a
leader-board and we have made the 1.0 version of our dataset freely available
at https://illuin-tech.github.io/FQuAD-explorer/. | 2020-02-14T15:23:38Z | 15 pages, 5 figures | null | null | FQuAD: French Question Answering Dataset | ["Martin d'Hoffschmidt", 'Maxime Vidal', 'Wacim Belblidia', 'Quentin Heinrich', 'Tom Brendlé'] | 2020 | Findings | 100 | 39 | ['Computer Science'] |
2002.07651 | Listwise Learning to Rank with Deep Q-Networks | ['Abhishek Sharma'] | ['cs.LG', 'cs.IR'] | Learning to Rank is the problem of ranking a sequence of documents
based on their relevance to a given query. Deep Q-Learning has been shown to be
a useful method for training an agent in sequential decision making. In this
paper, we show that DeepQRank, our deep q-learning to rank agent, demonstrates
performance that can be considered state-of-the-art. Though less
computationally efficient than a supervised learning approach such as linear
regression, our agent has fewer limitations in terms of which format of data it
can use for training and evaluation. We run our algorithm against Microsoft's
LETOR listwise dataset and achieve an NDCG@1 (ranking accuracy in the range
[0,1]) of 0.5075, narrowly beating out the leading supervised learning model,
SVMRank (0.4958). | 2020-02-13T22:45:56Z | null | null | null | Listwise Learning to Rank with Deep Q-Networks | ['Abhishek Sharma'] | 2020 | arXiv.org | 1 | 10 | ['Computer Science'] |
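The NDCG@1 metric this abstract reports (ranking accuracy in [0, 1]) is standard and worth making concrete: DCG discounts each relevance by log2 of its rank, and NDCG divides by the DCG of the ideal ordering.

```python
import math

def dcg_at_k(rels, k):
    # discounted cumulative gain over the top-k ranked relevances
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))

def ndcg_at_k(ranked_rels, k):
    # normalize by the DCG of the ideal (sorted) ordering, giving [0, 1]
    ideal = dcg_at_k(sorted(ranked_rels, reverse=True), k)
    return dcg_at_k(ranked_rels, k) / ideal if ideal > 0 else 0.0
```

At k=1 this reduces to the relevance of the top-ranked document divided by the best available relevance.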
2002.08155 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | ['Zhangyin Feng', 'Daya Guo', 'Duyu Tang', 'Nan Duan', 'Xiaocheng Feng', 'Ming Gong', 'Linjun Shou', 'Bing Qin', 'Ting Liu', 'Daxin Jiang', 'Ming Zhou'] | ['cs.CL', 'cs.PL'] | We present CodeBERT, a bimodal pre-trained model for programming language
(PL) and natural language (NL). CodeBERT learns general-purpose
representations that support downstream NL-PL applications such as natural
language code search, code documentation generation, etc. We develop CodeBERT
with Transformer-based neural architecture, and train it with a hybrid
objective function that incorporates the pre-training task of replaced token
detection, which is to detect plausible alternatives sampled from generators.
This enables us to utilize both bimodal data of NL-PL pairs and unimodal data,
where the former provides input tokens for model training while the latter
helps to learn better generators. We evaluate CodeBERT on two NL-PL
applications by fine-tuning model parameters. Results show that CodeBERT
achieves state-of-the-art performance on both natural language code search and
code documentation generation tasks. Furthermore, to investigate what type of
knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and
evaluate in a zero-shot setting where parameters of pre-trained models are
fixed. Results show that CodeBERT performs better than previous pre-trained
models on NL-PL probing. | 2020-02-19T13:09:07Z | Accepted to Findings of EMNLP 2020. 12 pages | null | null | null | null | null | null | null | null | null |
2002.08258 | Knapsack Pruning with Inner Distillation | ['Yonathan Aflalo', 'Asaf Noy', 'Ming Lin', 'Itamar Friedman', 'Lihi Zelnik'] | ['cs.LG', 'stat.ML'] | Neural network pruning reduces the computational cost of an
over-parameterized network to improve its efficiency. Popular methods vary from
$\ell_1$-norm sparsification to Neural Architecture Search (NAS). In this work,
we propose a novel pruning method that optimizes the final accuracy of the
pruned network and distills knowledge from the over-parameterized parent
network's inner layers. To enable this approach, we formulate the network
pruning as a Knapsack Problem which optimizes the trade-off between the
importance of neurons and their associated computational cost. Then we prune
the network channels while maintaining the high-level structure of the network.
The pruned network is fine-tuned under the supervision of the parent network
using its inner network knowledge, a technique we refer to as the Inner
Knowledge Distillation. Our method leads to state-of-the-art pruning results on
ImageNet, CIFAR-10 and CIFAR-100 using ResNet backbones. To prune complex
network structures such as convolutions with skip-links and depth-wise
convolutions, we propose a block grouping approach to cope with these
structures. Through this we produce compact architectures with the same FLOPs
as EfficientNet-B0 and MobileNetV3 but with higher accuracy, by $1\%$ and
$0.3\%$ respectively on ImageNet, and faster runtime on GPU. | 2020-02-19T16:04:48Z | null | null | null | Knapsack Pruning with Inner Distillation | ['Y. Aflalo', 'Asaf Noy', 'Ming Lin', 'Itamar Friedman', 'Lihi Zelnik-Manor'] | 2020 | arXiv.org | 34 | 57 | ['Computer Science', 'Mathematics'] |
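The Knapsack Problem formulation in this abstract — trade neuron importance against computational cost under a budget — can be sketched with the textbook 0/1 knapsack DP (integer costs; the paper's actual importance and cost estimates are more involved):

```python
def knapsack_prune(importance, cost, budget):
    """0/1 knapsack over channels: keep the subset maximizing total
    importance subject to an integer compute budget (e.g. FLOPs).
    Returns (kept channel indices, total kept importance)."""
    n = len(importance)
    dp = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            dp[i][b] = dp[i - 1][b]  # option: drop channel i-1
            if cost[i - 1] <= b:     # option: keep it if it fits
                cand = dp[i - 1][b - cost[i - 1]] + importance[i - 1]
                if cand > dp[i][b]:
                    dp[i][b] = cand
    kept, b = [], budget
    for i in range(n, 0, -1):        # backtrack the kept set
        if dp[i][b] != dp[i - 1][b]:
            kept.append(i - 1)
            b -= cost[i - 1]
    return sorted(kept), dp[n][budget]
```

The channels outside the kept set are pruned, and the resulting network is then fine-tuned under the parent's supervision.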
2002.08653 | Detecting Code Clones with Graph Neural Network and Flow-Augmented
Abstract Syntax Tree | ['Wenhan Wang', 'Ge Li', 'Bo Ma', 'Xin Xia', 'Zhi Jin'] | ['cs.SE', 'cs.AI'] | Code clones are semantically similar code fragments pairs that are
syntactically similar or different. Detection of code clones can help to reduce
the cost of software maintenance and prevent bugs. Numerous approaches of
detecting code clones have been proposed previously, but most of them focus on
detecting syntactic clones and do not work well on semantic clones with
different syntactic features. To detect semantic clones, researchers have tried
to adopt deep learning for code clone detection to automatically learn latent
semantic features from data. Especially, to leverage grammar information,
several approaches used abstract syntax trees (AST) as input and achieved
significant progress on code clone benchmarks in various programming languages.
However, these AST-based approaches still can not fully leverage the structural
information of code fragments, especially semantic information such as control
flow and data flow. To leverage control and data flow information, in this
paper, we build a graph representation of programs called flow-augmented
abstract syntax tree (FA-AST). We construct FA-AST by augmenting original ASTs
with explicit control and data flow edges. Then we apply two different types of
graph neural networks (GNN) on FA-AST to measure the similarity of code pairs.
To the best of our knowledge, we are the first to apply graph neural networks
to the domain of code clone detection.
We apply our FA-AST and graph neural networks on two Java datasets: Google
Code Jam and BigCloneBench. Our approach outperforms the state-of-the-art
approaches on both Google Code Jam and BigCloneBench tasks. | 2020-02-20T10:18:37Z | Accepted by SANER 2020 | null | null | null | null | null | null | null | null | null |
2002.08909 | REALM: Retrieval-Augmented Language Model Pre-Training | ['Kelvin Guu', 'Kenton Lee', 'Zora Tung', 'Panupong Pasupat', 'Ming-Wei Chang'] | ['cs.CL', 'cs.LG'] | Language model pre-training has been shown to capture a surprising amount of
world knowledge, crucial for NLP tasks such as question answering. However,
this knowledge is stored implicitly in the parameters of a neural network,
requiring ever-larger networks to cover more facts.
To capture knowledge in a more modular and interpretable way, we augment
language model pre-training with a latent knowledge retriever, which allows the
model to retrieve and attend over documents from a large corpus such as
Wikipedia, used during pre-training, fine-tuning and inference. For the first
time, we show how to pre-train such a knowledge retriever in an unsupervised
manner, using masked language modeling as the learning signal and
backpropagating through a retrieval step that considers millions of documents.
We demonstrate the effectiveness of Retrieval-Augmented Language Model
pre-training (REALM) by fine-tuning on the challenging task of Open-domain
Question Answering (Open-QA). We compare against state-of-the-art models for
both explicit and implicit knowledge storage on three popular Open-QA
benchmarks, and find that we outperform all previous methods by a significant
margin (4-16% absolute accuracy), while also providing qualitative benefits
such as interpretability and modularity. | 2020-02-10T18:40:59Z | null | null | null | REALM: Retrieval-Augmented Language Model Pre-Training | ['Kelvin Guu', 'Kenton Lee', 'Zora Tung', 'Panupong Pasupat', 'Ming-Wei Chang'] | 2020 | International Conference on Machine Learning | 2133 | 43 | ['Computer Science'] |
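The "retrieve and attend over documents" step in REALM is trained by marginalizing the answer probability over retrieved documents, p(y|x) = Σ_z p(y|x, z) p(z|x). A toy sketch of that marginalization (retriever scores and answer probabilities are assumed given; REALM itself computes them with learned encoders over millions of documents):

```python
import math

def softmax(scores):
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def marginal_answer_prob(retrieval_scores, answer_prob_given_doc):
    # p(y|x) = sum_z p(y|x, z) p(z|x): weight each document's answer
    # probability by the retriever's softmaxed relevance score
    doc_probs = softmax(retrieval_scores)
    return sum(pz * py for pz, py in zip(doc_probs, answer_prob_given_doc))
```

Because the marginal is differentiable in the retrieval scores, the gradient of the language-modeling loss can flow back into the retriever, which is what makes the unsupervised pre-training of the retriever possible.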
2002.08910 | How Much Knowledge Can You Pack Into the Parameters of a Language Model? | ['Adam Roberts', 'Colin Raffel', 'Noam Shazeer'] | ['cs.CL', 'cs.LG', 'stat.ML'] | It has recently been observed that neural language models trained on
unstructured text can implicitly store and retrieve knowledge using natural
language queries. In this short paper, we measure the practical utility of this
approach by fine-tuning pre-trained models to answer questions without access
to any external context or knowledge. We show that this approach scales with
model size and performs competitively with open-domain systems that explicitly
retrieve answers from an external knowledge source when answering questions. To
facilitate reproducibility and future work, we release our code and trained
models at https://goo.gle/t5-cbqa. | 2020-02-10T18:55:58Z | Camera-ready version for EMNLP | null | null | How Much Knowledge Can You Pack into the Parameters of a Language Model? | ['Adam Roberts', 'Colin Raffel', 'Noam M. Shazeer'] | 2020 | Conference on Empirical Methods in Natural Language Processing | 898 | 40 | ['Computer Science', 'Mathematics'] |
2002.09018 | Scalable Second Order Optimization for Deep Learning | ['Rohan Anil', 'Vineet Gupta', 'Tomer Koren', 'Kevin Regan', 'Yoram Singer'] | ['cs.LG', 'math.OC', 'stat.ML'] | Optimization in machine learning, both theoretical and applied, is presently
dominated by first-order gradient methods such as stochastic gradient descent.
Second-order optimization methods, that involve second derivatives and/or
second order statistics of the data, are far less prevalent despite strong
theoretical properties, due to their prohibitive computation, memory and
communication costs. In an attempt to bridge this gap between theoretical and
practical optimization, we present a scalable implementation of a second-order
preconditioned method (concretely, a variant of full-matrix Adagrad), that
along with several critical algorithmic and numerical improvements, provides
significant convergence and wall-clock time improvements compared to
conventional first-order methods on state-of-the-art deep models. Our novel
design effectively utilizes the prevalent heterogeneous hardware architecture
for training deep models, consisting of a multicore CPU coupled with multiple
accelerator units. We demonstrate superior performance compared to
state-of-the-art on very large learning tasks such as machine translation with
Transformers, language modeling with BERT, click-through rate prediction on
Criteo, and image classification on ImageNet with ResNet-50. | 2020-02-20T20:51:33Z | 24 pages, Code available here: https://bit.ly/3uXXtKy | null | null | null | null | null | null | null | null | null |
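The "variant of full-matrix Adagrad" this abstract builds on can be sketched in its unfactored form: accumulate the outer product of gradients and precondition the update by the inverse square root of that matrix. This toy version (via eigendecomposition, with an assumed damping `eps`) is exactly the expensive baseline the paper's scalable implementation approximates, not the paper's method itself:

```python
import numpy as np

def full_matrix_adagrad_step(w, grad, G, lr=0.1, eps=1e-6):
    """One step of plain full-matrix Adagrad: accumulate G += g g^T and
    update w -= lr * G^{-1/2} g. The O(d^3) eigendecomposition is the
    cost that motivates the paper's factored, hardware-aware variant."""
    G = G + np.outer(grad, grad)
    vals, vecs = np.linalg.eigh(G + eps * np.eye(len(w)))
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return w - lr * (inv_sqrt @ grad), G
```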
2002.09219 | Stochastic Latent Residual Video Prediction | ['Jean-Yves Franceschi', 'Edouard Delasalles', 'Mickaël Chen', 'Sylvain Lamprier', 'Patrick Gallinari'] | ['cs.CV', 'cs.LG', 'stat.ML'] | Designing video prediction models that account for the inherent uncertainty
of the future is challenging. Most works in the literature are based on
stochastic image-autoregressive recurrent networks, which raises several
performance and applicability issues. An alternative is to use fully latent
temporal models which untie frame synthesis and temporal dynamics. However, no
such model for stochastic video prediction has been proposed in the literature
yet, due to design and training difficulties. In this paper, we overcome these
difficulties by introducing a novel stochastic temporal model whose dynamics
are governed in a latent space by a residual update rule. This first-order
scheme is motivated by discretization schemes of differential equations. It
naturally models video dynamics as it allows our simpler, more interpretable,
latent model to outperform prior state-of-the-art methods on challenging
datasets. | 2020-02-21T10:44:01Z | null | Thirty-seventh International Conference on Machine Learning,
International Machine Learning Society, Jul 2020, Vienna, Austria. pp.89--102 | null | null | null | null | null | null | null | null |
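The "residual update rule" motivated by discretized differential equations in this abstract amounts to a first-order (Euler-style) latent recurrence y_{t+1} = y_t + f(y_t, z_t). A toy rollout sketch (`f` and the noise variables z_t are placeholders for the paper's learned networks and stochastic latents; frames are decoded from each y_t separately):

```python
def latent_rollout(y0, noises, f):
    """Roll latent dynamics forward with the residual update
    y_{t+1} = y_t + f(y_t, z_t), where z_t is a per-step stochastic
    variable. Returns the full latent trajectory [y_0, ..., y_T]."""
    ys = [y0]
    for z in noises:
        y = ys[-1]
        ys.append([yi + di for yi, di in zip(y, f(y, z))])
    return ys
```

Untying the dynamics (on y) from frame synthesis (a decoder applied to each y_t) is what makes this a fully latent temporal model rather than an image-autoregressive one.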