| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1906.03741 | BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent
Summarization | ['Eva Sharma', 'Chen Li', 'Lu Wang'] | ['cs.CL', 'cs.LG'] | Most existing text summarization datasets are compiled from the news domain,
where summaries have a flattened discourse structure. In such datasets,
summary-worthy content often appears in the beginning of input articles.
Moreover, large segments from input articles are present verbatim in their
respective summaries. These issues impede the learning and evaluation of
systems that can understand an article's global content structure as well as
produce abstractive summaries with high compression ratio. In this work, we
present a novel dataset, BIGPATENT, consisting of 1.3 million records of U.S.
patent documents along with human written abstractive summaries. Compared to
existing summarization datasets, BIGPATENT has the following properties: i)
summaries contain a richer discourse structure with more recurring entities,
ii) salient content is evenly distributed in the input, and iii) fewer and
shorter extractive fragments are present in the summaries. Finally, we train
and evaluate baselines and popular learning models on BIGPATENT to shed light
on new challenges and motivate future directions for summarization research. | 2019-06-10T00:24:26Z | Proceedings of the 57th Annual Meeting of the Association for
Computational Linguistics. ACL 2019 (10 pages) | null | null | BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization | ['Eva Sharma', 'Chen Li', 'Lu Wang'] | 2019 | Annual Meeting of the Association for Computational Linguistics | 224 | 40 | ['Computer Science'] |
1906.04032 | Neural Spline Flows | ['Conor Durkan', 'Artur Bekasov', 'Iain Murray', 'George Papamakarios'] | ['stat.ML', 'cs.LG'] | A normalizing flow models a complex probability density as an invertible
transformation of a simple base density. Flows based on either coupling or
autoregressive transforms both offer exact density evaluation and sampling, but
rely on the parameterization of an easily invertible elementwise
transformation, whose choice determines the flexibility of these models.
Building upon recent work, we propose a fully-differentiable module based on
monotonic rational-quadratic splines, which enhances the flexibility of both
coupling and autoregressive transforms while retaining analytic invertibility.
We demonstrate that neural spline flows improve density estimation, variational
inference, and generative modeling of images. | 2019-06-10T14:43:23Z | Published at the 33rd Conference on Neural Information Processing
Systems (NeurIPS 2019), Vancouver, Canada | null | null | null | null | null | null | null | null | null |
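The monotonic rational-quadratic spline at the heart of the Neural Spline Flows abstract has a compact closed form (the Gregory–Delbourgo parameterization). Below is a minimal NumPy sketch with fixed illustrative knots standing in for the network-predicted bin widths, heights, and derivatives; it is not the authors' implementation.

```python
import numpy as np

def rq_spline(x, xk, yk, dk):
    """Monotonic rational-quadratic spline (Gregory & Delbourgo form).

    xk, yk: increasing knot coordinates; dk: positive derivatives at knots.
    Maps x in [xk[0], xk[-1]] to [yk[0], yk[-1]] monotonically.
    """
    # Locate the bin each input falls into.
    k = np.clip(np.searchsorted(xk, x, side="right") - 1, 0, len(xk) - 2)
    w = xk[k + 1] - xk[k]                     # bin width
    s = (yk[k + 1] - yk[k]) / w               # bin slope
    xi = (x - xk[k]) / w                      # position inside bin, in [0, 1]
    num = (yk[k + 1] - yk[k]) * (s * xi**2 + dk[k] * xi * (1 - xi))
    den = s + (dk[k + 1] + dk[k] - 2 * s) * xi * (1 - xi)
    return yk[k] + num / den
```

With positive knot derivatives the map is strictly increasing on each bin, and inverting it only requires solving a quadratic, which is what makes the transform usable inside coupling and autoregressive flows.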
1906.04571 | Counterfactual Data Augmentation for Mitigating Gender Stereotypes in
Languages with Rich Morphology | ['Ran Zmigrod', 'Sabrina J. Mielke', 'Hanna Wallach', 'Ryan Cotterell'] | ['cs.CL'] | Gender stereotypes are manifest in most of the world's languages and are
consequently propagated or amplified by NLP systems. Although research has
focused on mitigating gender stereotypes in English, the approaches that are
commonly employed produce ungrammatical sentences in morphologically rich
languages. We present a novel approach for converting between
masculine-inflected and feminine-inflected sentences in such languages. For
Spanish and Hebrew, our approach achieves F1 scores of 82% and 73% at the level
of tags and accuracies of 90% and 87% at the level of forms. By evaluating our
approach using four different languages, we show that, on average, it reduces
gender stereotyping by a factor of 2.5 without any sacrifice to grammaticality. | 2019-06-11T13:22:24Z | ACL 2019 | null | null | null | null | null | null | null | null | null |
1906.05317 | COMET: Commonsense Transformers for Automatic Knowledge Graph
Construction | ['Antoine Bosselut', 'Hannah Rashkin', 'Maarten Sap', 'Chaitanya Malaviya', 'Asli Celikyilmaz', 'Yejin Choi'] | ['cs.CL', 'cs.AI'] | We present the first comprehensive study on automatic knowledge base
construction for two prevalent commonsense knowledge graphs: ATOMIC (Sap et
al., 2019) and ConceptNet (Speer et al., 2017). Contrary to many conventional
KBs that store knowledge with canonical templates, commonsense KBs only store
loosely structured open-text descriptions of knowledge. We posit that an
important step toward automatic commonsense completion is the development of
generative models of commonsense knowledge, and propose COMmonsEnse
Transformers (COMET) that learn to generate rich and diverse commonsense
descriptions in natural language. Despite the challenges of commonsense
modeling, our investigation reveals promising results when implicit knowledge
from deep pre-trained language models is transferred to generate explicit
knowledge in commonsense knowledge graphs. Empirical results demonstrate that
COMET is able to generate novel knowledge that humans rate as high quality,
with up to 77.5% (ATOMIC) and 91.7% (ConceptNet) precision at top 1, which
approaches human performance for these resources. Our findings suggest that
using generative commonsense models for automatic commonsense KB completion
could soon be a plausible alternative to extractive methods. | 2019-06-12T18:11:20Z | Accepted to ACL 2019 | null | null | null | null | null | null | null | null | null |
1906.05474 | Transfer Learning in Biomedical Natural Language Processing: An
Evaluation of BERT and ELMo on Ten Benchmarking Datasets | ['Yifan Peng', 'Shankai Yan', 'Zhiyong Lu'] | ['cs.CL'] | Inspired by the success of the General Language Understanding Evaluation
benchmark, we introduce the Biomedical Language Understanding Evaluation (BLUE)
benchmark to facilitate research in the development of pre-training language
representations in the biomedicine domain. The benchmark consists of five tasks
with ten datasets that cover both biomedical and clinical texts with different
dataset sizes and difficulties. We also evaluate several baselines based on
BERT and ELMo and find that the BERT model pre-trained on PubMed abstracts and
MIMIC-III clinical notes achieves the best results. We make the datasets,
pre-trained models, and codes publicly available at
https://github.com/ncbi-nlp/BLUE_Benchmark. | 2019-06-13T04:07:12Z | Accepted by BioNLP 2019 | null | null | Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets | ['Yifan Peng', 'Shankai Yan', 'Zhiyong Lu'] | 2019 | BioNLP@ACL | 847 | 44 | ['Computer Science'] |
1906.05856 | Detecting Photoshopped Faces by Scripting Photoshop | ['Sheng-Yu Wang', 'Oliver Wang', 'Andrew Owens', 'Richard Zhang', 'Alexei A. Efros'] | ['cs.CV'] | Most malicious photo manipulations are created using standard image editing
tools, such as Adobe Photoshop. We present a method for detecting one very
popular Photoshop manipulation -- image warping applied to human faces -- using
a model trained entirely using fake images that were automatically generated by
scripting Photoshop itself. We show that our model outperforms humans at the
task of recognizing manipulated images, can predict the specific location of
edits, and in some cases can be used to "undo" a manipulation to reconstruct
the original, unedited image. We demonstrate that the system can be
successfully applied to real, artist-created image manipulations. | 2019-06-13T17:59:02Z | null | null | null | null | null | null | null | null | null | null |
1906.05963 | Image Captioning: Transforming Objects into Words | ['Simao Herdade', 'Armin Kappeler', 'Kofi Boakye', 'Joao Soares'] | ['cs.CV', 'cs.CL'] | Image captioning models typically follow an encoder-decoder architecture
which uses abstract image feature vectors as input to the encoder. One of the
most successful algorithms uses feature vectors extracted from the region
proposals obtained from an object detector. In this work we introduce the
Object Relation Transformer, that builds upon this approach by explicitly
incorporating information about the spatial relationship between input detected
objects through geometric attention. Quantitative and qualitative results
demonstrate the importance of such geometric attention for image captioning,
leading to improvements on all common captioning metrics on the MS-COCO
dataset. | 2019-06-14T00:00:29Z | 10 pages | null | null | Image Captioning: Transforming Objects into Words | ['Simão Herdade', 'Armin Kappeler', 'K. Boakye', 'Joao Soares'] | 2019 | Neural Information Processing Systems | 476 | 31 | ['Computer Science'] |
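The "geometric attention" in the Object Relation Transformer abstract biases attention weights by pairwise box geometry. The abstract does not give the exact parameterization, so the sketch below uses relative log-scale box features with a hand-rolled scalar bias as a stand-in for the learned projection; everything beyond "combine geometry with appearance attention" is an illustrative assumption.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def geometric_attention(app_logits, boxes):
    """Combine appearance attention logits with pairwise box geometry.

    boxes: (N, 4) rows of (cx, cy, w, h). The geometry bias is built from
    the usual relative log features; the quadratic reduction to a scalar
    is a toy stand-in for a learned projection.
    """
    cx, cy, w, h = boxes.T
    dx = np.log(np.abs(cx[:, None] - cx[None, :]) / w[:, None] + 1e-3)
    dy = np.log(np.abs(cy[:, None] - cy[None, :]) / h[:, None] + 1e-3)
    dw = np.log(w[None, :] / w[:, None])
    dh = np.log(h[None, :] / h[:, None])
    geo = -(dx**2 + dy**2 + dw**2 + dh**2)   # toy scalar geometry bias
    return softmax(app_logits + geo, axis=-1)
```

The effect is that spatially related detections reinforce each other's attention, which the abstract credits for the captioning gains.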
1906.06972 | EnlightenGAN: Deep Light Enhancement without Paired Supervision | ['Yifan Jiang', 'Xinyu Gong', 'Ding Liu', 'Yu Cheng', 'Chen Fang', 'Xiaohui Shen', 'Jianchao Yang', 'Pan Zhou', 'Zhangyang Wang'] | ['cs.CV', 'eess.IV'] | Deep learning-based methods have achieved remarkable success in image
restoration and enhancement, but are they still competitive when there is a
lack of paired training data? As one such example, this paper explores the
low-light image enhancement problem, where in practice it is extremely
challenging to simultaneously take a low-light and a normal-light photo of the
same visual scene. We propose a highly effective unsupervised generative
adversarial network, dubbed EnlightenGAN, that can be trained without
low/normal-light image pairs, yet proves to generalize very well on various
real-world test images. Instead of supervising the learning using ground truth
data, we propose to regularize the unpaired training using the information
extracted from the input itself, and benchmark a series of innovations for the
low-light image enhancement problem, including a global-local discriminator
structure, a self-regularized perceptual loss fusion, and attention mechanism.
Through extensive experiments, our proposed approach outperforms recent methods
under a variety of metrics in terms of visual quality and subjective user
study. Thanks to the great flexibility brought by unpaired training,
EnlightenGAN is demonstrated to be easily adaptable to enhancing real-world
images from various domains. The code is available at
\url{https://github.com/yueruchen/EnlightenGAN} | 2019-06-17T11:54:20Z | null | null | null | EnlightenGAN: Deep Light Enhancement Without Paired Supervision | ['Yifan Jiang', 'Xinyu Gong', 'Ding Liu', 'Yu Cheng', 'Chen Fang', 'Xiaohui Shen', 'Jianchao Yang', 'Pan Zhou', 'Zhangyang Wang'] | 2019 | IEEE Transactions on Image Processing | 1602 | 60 | ['Computer Science', 'Medicine', 'Engineering'] |
1906.07348 | Zero-Shot Entity Linking by Reading Entity Descriptions | ['Lajanugen Logeswaran', 'Ming-Wei Chang', 'Kenton Lee', 'Kristina Toutanova', 'Jacob Devlin', 'Honglak Lee'] | ['cs.CL', 'cs.LG'] | We present the zero-shot entity linking task, where mentions must be linked
to unseen entities without in-domain labeled data. The goal is to enable robust
transfer to highly specialized domains, and so no metadata or alias tables are
assumed. In this setting, entities are only identified by text descriptions,
and models must rely strictly on language understanding to resolve the new
entities. First, we show that strong reading comprehension models pre-trained
on large unlabeled data can be used to generalize to unseen entities. Second,
we propose a simple and effective adaptive pre-training strategy, which we term
domain-adaptive pre-training (DAP), to address the domain shift problem
associated with linking unseen entities in a new domain. We present experiments
on a new dataset that we construct for this task and show that DAP improves
over strong pre-training baselines, including BERT. The data and code are
available at https://github.com/lajanugen/zeshel. | 2019-06-18T02:36:39Z | ACL 2019 | null | null | null | null | null | null | null | null | null |
1906.08101 | Pre-Training with Whole Word Masking for Chinese BERT | ['Yiming Cui', 'Wanxiang Che', 'Ting Liu', 'Bing Qin', 'Ziqing Yang'] | ['cs.CL', 'cs.LG'] | Bidirectional Encoder Representations from Transformers (BERT) has shown
marvelous improvements across various NLP tasks, and its consecutive variants
have been proposed to further improve the performance of the pre-trained
language models. In this paper, we aim to first introduce the whole word
masking (wwm) strategy for Chinese BERT, along with a series of Chinese
pre-trained language models. Then we also propose a simple but effective model
called MacBERT, which improves upon RoBERTa in several ways. In particular, we
propose a new masking strategy called MLM as correction (Mac). To demonstrate
the effectiveness of these models, we create a series of Chinese pre-trained
language models as our baselines, including BERT, RoBERTa, ELECTRA, RBT, etc.
We carried out extensive experiments on ten Chinese NLP tasks to evaluate the
created Chinese pre-trained language models as well as the proposed MacBERT.
Experimental results show that MacBERT could achieve state-of-the-art
performances on many NLP tasks, and we also ablate details with several
findings that may help future research. We open-source our pre-trained language
models for further facilitating our research community. Resources are
available: https://github.com/ymcui/Chinese-BERT-wwm | 2019-06-19T13:54:25Z | 11 pages. Journal extension to arXiv:2004.13922 | IEEE/ACM Transactions on Audio, Speech, and Language Processing
(2021) | 10.1109/TASLP.2021.3124365 | Pre-Training With Whole Word Masking for Chinese BERT | ['Yiming Cui', 'Wanxiang Che', 'Ting Liu', 'Bing Qin', 'Ziqing Yang'] | 2019 | IEEE/ACM Transactions on Audio Speech and Language Processing | 186 | 46 | ['Computer Science'] |
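Whole word masking, as introduced for Chinese BERT above, masks all WordPiece sub-tokens of a word together rather than sampling sub-tokens independently. A minimal sketch follows; the 15% rate and the "##" continuation convention follow standard BERT, while the 80/10/10 mask/random/keep split and the word-level sampling details are simplified away.

```python
import random

def whole_word_mask(tokens, mask_prob=0.15, seed=0):
    """Mask at whole-word granularity: if a word is chosen, every one of
    its sub-tokens is replaced by [MASK]. WordPiece continuation pieces
    carry a '##' prefix."""
    rng = random.Random(seed)
    # Group token indices into words: a new word starts at each non-'##' piece.
    words, out = [], list(tokens)
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])
    for word in words:
        if rng.random() < mask_prob:
            for i in word:
                out[i] = "[MASK]"
    return out
```

The key property is all-or-none masking within a word, which prevents the model from trivially reconstructing a masked sub-token from its unmasked siblings.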
1906.08237 | XLNet: Generalized Autoregressive Pretraining for Language Understanding | ['Zhilin Yang', 'Zihang Dai', 'Yiming Yang', 'Jaime Carbonell', 'Ruslan Salakhutdinov', 'Quoc V. Le'] | ['cs.CL', 'cs.LG'] | With the capability of modeling bidirectional contexts, denoising
autoencoding based pretraining like BERT achieves better performance than
pretraining approaches based on autoregressive language modeling. However,
relying on corrupting the input with masks, BERT neglects dependency between
the masked positions and suffers from a pretrain-finetune discrepancy. In light
of these pros and cons, we propose XLNet, a generalized autoregressive
pretraining method that (1) enables learning bidirectional contexts by
maximizing the expected likelihood over all permutations of the factorization
order and (2) overcomes the limitations of BERT thanks to its autoregressive
formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the
state-of-the-art autoregressive model, into pretraining. Empirically, under
comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a
large margin, including question answering, natural language inference,
sentiment analysis, and document ranking. | 2019-06-19T17:35:48Z | Pretrained models and code are available at
https://github.com/zihangdai/xlnet | null | null | null | null | null | null | null | null | null |
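XLNet's objective of "maximizing the expected likelihood over all permutations of the factorization order" is realized in practice with an attention mask per sampled permutation: a position may attend only to positions that precede it in the sampled order. A small sketch of building that mask (the two-stream attention that makes this trainable is omitted):

```python
import numpy as np

def permutation_mask(order):
    """Attention mask for one sampled factorization order.

    order[t] is the position predicted at step t. mask[i, j] = 1 means
    position i may attend to position j, i.e. j comes earlier than i in
    the sampled order."""
    n = len(order)
    rank = np.empty(n, dtype=int)
    rank[np.asarray(order)] = np.arange(n)   # rank[p] = step at which p is predicted
    return (rank[None, :] < rank[:, None]).astype(int)
```

Averaging the training signal over many sampled orders is what lets each position learn from bidirectional context while keeping a proper autoregressive factorization.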
1906.12021 | Densely Residual Laplacian Super-Resolution | ['Saeed Anwar', 'Nick Barnes'] | ['eess.IV', 'cs.CV'] | Super-Resolution convolutional neural networks have recently demonstrated
high-quality restoration for single images. However, existing algorithms often
require very deep architectures and long training times. Furthermore, current
convolutional neural networks for super-resolution are unable to exploit
features at multiple scales and weigh them equally, limiting their learning
capability. In this exposition, we present a compact and accurate
super-resolution algorithm namely, Densely Residual Laplacian Network (DRLN).
The proposed network employs cascading residual on the residual structure to
allow the flow of low-frequency information to focus on learning high and
mid-level features. In addition, deep supervision is achieved via the densely
concatenated residual blocks settings, which also helps in learning from
high-level complex features. Moreover, we propose Laplacian attention to model
the crucial features to learn the inter and intra-level dependencies between
the feature maps. Furthermore, comprehensive quantitative and qualitative
evaluations on low-resolution, noisy low-resolution, and real historical image
benchmark datasets illustrate that our DRLN algorithm performs favorably
against the state-of-the-art methods visually and accurately. | 2019-06-28T02:32:44Z | null | null | null | Densely Residual Laplacian Super-Resolution | ['Saeed Anwar', 'Nick Barnes'] | 2019 | IEEE Transactions on Pattern Analysis and Machine Intelligence | 230 | 57 | ['Computer Science', 'Engineering', 'Medicine'] |
1907.00409 | Evaluating Language Model Finetuning Techniques for Low-resource
Languages | ['Jan Christian Blaise Cruz', 'Charibeth Cheng'] | ['cs.CL'] | Unlike mainstream languages (such as English and French), low-resource
languages often suffer from a lack of expert-annotated corpora and benchmark
resources that make it hard to apply state-of-the-art techniques directly. In
this paper, we alleviate this scarcity problem for the low-resourced Filipino
language in two ways. First, we introduce a new benchmark language modeling
dataset in Filipino which we call WikiText-TL-39. Second, we show that language
model finetuning techniques such as BERT and ULMFiT can be used to consistently
train robust classifiers in low-resource settings, experiencing at most a
0.0782 increase in validation error when the number of training examples is
decreased from 10K to 1K while finetuning using a privately-held sentiment
dataset. | 2019-06-30T16:32:28Z | Pretrained models and datasets available at
https://github.com/jcblaisecruz02/Tagalog-BERT | null | 10.13140/RG.2.2.23028.40322 | null | null | null | null | null | null | null |
1907.00837 | XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera | ['Dushyant Mehta', 'Oleksandr Sotnychenko', 'Franziska Mueller', 'Weipeng Xu', 'Mohamed Elgharib', 'Pascal Fua', 'Hans-Peter Seidel', 'Helge Rhodin', 'Gerard Pons-Moll', 'Christian Theobalt'] | ['cs.CV', 'cs.GR'] | We present a real-time approach for multi-person 3D motion capture at over 30
fps using a single RGB camera. It operates successfully in generic scenes which
may contain occlusions by objects and by other people. Our method operates in
subsequent stages. The first stage is a convolutional neural network (CNN) that
estimates 2D and 3D pose features along with identity assignments for all
visible joints of all individuals. We contribute a new architecture for this
CNN, called SelecSLS Net, that uses novel selective long and short range skip
connections to improve the information flow allowing for a drastically faster
network without compromising accuracy. In the second stage, a fully connected
neural network turns the possibly partial (on account of occlusion) 2D pose and
3D pose features for each subject into a complete 3D pose estimate per
individual. The third stage applies space-time skeletal model fitting to the
predicted 2D and 3D pose per subject to further reconcile the 2D and 3D pose,
and enforce temporal coherence. Our method returns the full skeletal pose in
joint angles for each subject. This is a further key distinction from previous
work, which does not produce joint angle results for a coherent skeleton in real
time for multi-person scenes. The proposed system runs on consumer hardware at
a previously unseen speed of more than 30 fps given 512x320 images as input
while achieving state-of-the-art accuracy, which we will demonstrate on a range
of challenging real-world scenes. | 2019-07-01T14:59:02Z | To appear in ACM Transactions on Graphics (SIGGRAPH) 2020 | null | 10.1145/3386569.3392410 | null | null | null | null | null | null | null |
1907.01341 | Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot
Cross-dataset Transfer | ['René Ranftl', 'Katrin Lasinger', 'David Hafner', 'Konrad Schindler', 'Vladlen Koltun'] | ['cs.CV'] | The success of monocular depth estimation relies on large and diverse
training sets. Due to the challenges associated with acquiring dense
ground-truth depth across different environments at scale, a number of datasets
with distinct characteristics and biases have emerged. We develop tools that
enable mixing multiple datasets during training, even if their annotations are
incompatible. In particular, we propose a robust training objective that is
invariant to changes in depth range and scale, advocate the use of principled
multi-objective learning to combine data from different sources, and highlight
the importance of pretraining encoders on auxiliary tasks. Armed with these
tools, we experiment with five diverse training datasets, including a new,
massive data source: 3D films. To demonstrate the generalization power of our
approach, we use zero-shot cross-dataset transfer, i.e., we evaluate on datasets
that were not seen during training. The experiments confirm that mixing data
from complementary sources greatly improves monocular depth estimation. Our
approach clearly outperforms competing methods across diverse datasets, setting
a new state of the art for monocular depth estimation. Some results are shown
in the supplementary video at https://youtu.be/D46FzVyL9I8 | 2019-07-02T13:16:52Z | To appear in TPAMI (accepted August 2020) | null | null | Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer | ['René Ranftl', 'Katrin Lasinger', 'David Hafner', 'K. Schindler', 'V. Koltun'] | 2019 | IEEE Transactions on Pattern Analysis and Machine Intelligence | 1814 | 67 | ['Computer Science', 'Medicine'] |
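The "robust training objective that is invariant to changes in depth range and scale" can be illustrated by aligning the prediction to the ground truth in scale and shift via least squares before measuring error. The paper's actual losses add trimming and gradient-matching terms; this is a minimal sketch of only the invariance idea.

```python
import numpy as np

def align_and_mse(pred, target):
    """Least-squares align the prediction to the target in scale and
    shift, then measure MSE. The resulting loss cannot penalize a
    prediction merely for living in a different depth range."""
    A = np.stack([pred, np.ones_like(pred)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.mean((s * pred + t - target) ** 2)
```

This invariance is what allows datasets with incompatible depth annotations (metric, relative, disparity from films) to be mixed in one training run.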
1907.01470 | Augmenting Self-attention with Persistent Memory | ['Sainbayar Sukhbaatar', 'Edouard Grave', 'Guillaume Lample', 'Herve Jegou', 'Armand Joulin'] | ['cs.LG', 'cs.CL', 'stat.ML'] | Transformer networks have led to important progress in language modeling and
machine translation. These models include two consecutive modules, a
feed-forward layer and a self-attention layer. The latter allows the network to
capture long-term dependencies and is often regarded as the key ingredient in
the success of Transformers. Building upon this intuition, we propose a new
model that solely consists of attention layers. More precisely, we augment the
self-attention layers with persistent memory vectors that play a similar role
as the feed-forward layer. Thanks to these vectors, we can remove the
feed-forward layer without degrading the performance of a transformer. Our
evaluation shows the benefits brought by our model on standard character and
word level language modeling benchmarks. | 2019-07-02T15:56:20Z | null | null | null | null | null | null | null | null | null | null |
1907.04307 | Multilingual Universal Sentence Encoder for Semantic Retrieval | ['Yinfei Yang', 'Daniel Cer', 'Amin Ahmad', 'Mandy Guo', 'Jax Law', 'Noah Constant', 'Gustavo Hernandez Abrego', 'Steve Yuan', 'Chris Tar', 'Yun-Hsuan Sung', 'Brian Strope', 'Ray Kurzweil'] | ['cs.CL'] | We introduce two pre-trained retrieval focused multilingual sentence encoding
models, respectively based on the Transformer and CNN model architectures. The
models embed text from 16 languages into a single semantic space using a
multi-task trained dual-encoder that learns tied representations using
translation based bridge tasks (Chidambaram et al., 2018). The models provide
performance that is competitive with the state-of-the-art on: semantic
retrieval (SR), translation pair bitext retrieval (BR) and retrieval question
answering (ReQA). On English transfer learning tasks, our sentence-level
embeddings approach, and in some cases exceed, the performance of monolingual,
English only, sentence embedding models. Our models are made available for
download on TensorFlow Hub. | 2019-07-09T17:46:17Z | 6 pages, 6 tables, 2 listings, and 1 figure | null | null | null | null | null | null | null | null | null |
1907.05047 | BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs | ['Valentin Bazarevsky', 'Yury Kartynnik', 'Andrey Vakunov', 'Karthik Raveendran', 'Matthias Grundmann'] | ['cs.CV'] | We present BlazeFace, a lightweight and well-performing face detector
tailored for mobile GPU inference. It runs at a speed of 200-1000+ FPS on
flagship devices. This super-realtime performance enables it to be applied to
any augmented reality pipeline that requires an accurate facial region of
interest as an input for task-specific models, such as 2D/3D facial keypoint or
geometry estimation, facial features or expression classification, and face
region segmentation. Our contributions include a lightweight feature extraction
network inspired by, but distinct from MobileNetV1/V2, a GPU-friendly anchor
scheme modified from Single Shot MultiBox Detector (SSD), and an improved tie
resolution strategy alternative to non-maximum suppression. | 2019-07-11T08:40:08Z | 4 pages, 3 figures; CVPR Workshop on Computer Vision for Augmented
and Virtual Reality, Long Beach, CA, USA, 2019 | null | null | null | null | null | null | null | null | null |
1907.05791 | WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from
Wikipedia | ['Holger Schwenk', 'Vishrav Chaudhary', 'Shuo Sun', 'Hongyu Gong', 'Francisco Guzmán'] | ['cs.CL'] | We present an approach based on multilingual sentence embeddings to
automatically extract parallel sentences from the content of Wikipedia articles
in 85 languages, including several dialects or low-resource languages. We do
not limit the extraction process to alignments with English, but
systematically consider all possible language pairs. In total, we are able to
extract 135M parallel sentences for 1620 different language pairs, out of which
only 34M are aligned with English. This corpus of parallel sentences is freely
available at
https://github.com/facebookresearch/LASER/tree/master/tasks/WikiMatrix. To get
an indication on the quality of the extracted bitexts, we train neural MT
baseline systems on the mined data only for 1886 language pairs, and evaluate
them on the TED corpus, achieving strong BLEU scores for many language pairs.
The WikiMatrix bitexts seem to be particularly interesting to train MT systems
between distant languages without the need to pivot through English. | 2019-07-10T23:57:30Z | 13 pages, 3 figures, 6 tables | null | null | null | null | null | null | null | null | null |
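WikiMatrix-style mining scores candidate pairs over multilingual sentence embeddings with a margin criterion: raw cosine similarity is normalized by the average similarity to each side's nearest neighbours, which suppresses "hub" sentences that are close to everything. The ratio-margin form below is an assumption based on LASER-style mining; k and the acceptance threshold are tuning knobs, not values from the paper.

```python
import numpy as np

def margin_scores(src, tgt, k=2):
    """Ratio-margin scores for bitext mining.

    src: (N, d), tgt: (M, d) sentence embeddings. Returns an (N, M)
    matrix of cosine similarity divided by the mean of each side's
    top-k neighbour similarities."""
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    sim = src @ tgt.T
    knn_s = np.sort(sim, axis=1)[:, -k:].mean(axis=1, keepdims=True)  # per source row
    knn_t = np.sort(sim, axis=0)[-k:, :].mean(axis=0, keepdims=True)  # per target column
    return sim / ((knn_s + knn_t) / 2)
```

Pairs are then typically kept when the margin score exceeds a threshold and the match is a mutual best match in both directions.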
1907.06292 | TWEETQA: A Social Media Focused Question Answering Dataset | ['Wenhan Xiong', 'Jiawei Wu', 'Hong Wang', 'Vivek Kulkarni', 'Mo Yu', 'Shiyu Chang', 'Xiaoxiao Guo', 'William Yang Wang'] | ['cs.CL'] | With social media becoming an increasingly popular venue where news and
real-time events are reported, developing automated question answering systems
is critical to the effectiveness of many applications that rely on real-time
knowledge. While previous datasets have concentrated on question answering (QA)
for formal text like news and Wikipedia, we present the first large-scale
dataset for QA over social media data. To ensure that the tweets we collected
are useful, we only gather tweets used by journalists to write news articles.
We then ask human annotators to write questions and answers upon these tweets.
Unlike other QA datasets like SQuAD in which the answers are extractive, we
allow the answers to be abstractive. We show that two recently proposed neural
models that perform well on formal texts are limited in their performance when
applied to our dataset. In addition, even the fine-tuned BERT model still lags
behind human performance by a large margin. Our results thus point to the need
for improved QA systems targeting social media text. | 2019-07-14T22:20:59Z | ACL 2019 | null | null | null | null | null | null | null | null | null |
1907.06616 | Facebook FAIR's WMT19 News Translation Task Submission | ['Nathan Ng', 'Kyra Yee', 'Alexei Baevski', 'Myle Ott', 'Michael Auli', 'Sergey Edunov'] | ['cs.CL'] | This paper describes Facebook FAIR's submission to the WMT19 shared news
translation task. We participate in two language pairs and four language
directions, English <-> German and English <-> Russian. Following our
submission from last year, our baseline systems are large BPE-based transformer
models trained with the Fairseq sequence modeling toolkit which rely on sampled
back-translations. This year we experiment with different bitext data filtering
schemes, as well as with adding filtered back-translated data. We also ensemble
and fine-tune our models on domain-specific data, then decode using noisy
channel model reranking. Our submissions are ranked first in all four
directions of the human evaluation campaign. On En->De, our system
significantly outperforms other systems as well as human translations. This
system improves upon our WMT'18 submission by 4.5 BLEU points. | 2019-07-15T17:22:54Z | 7 pages; WMT | null | null | Facebook FAIR’s WMT19 News Translation Task Submission | ['Nathan Ng', 'Kyra Yee', 'Alexei Baevski', 'Myle Ott', 'Michael Auli', 'Sergey Edunov'] | 2019 | Conference on Machine Translation | 397 | 12 | ['Computer Science'] |
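The "noisy channel model reranking" mentioned in the FAIR WMT19 abstract combines three scores per candidate translation: the direct model p(y|x), a channel model p(x|y), and a language model p(y). A hedged sketch of the combination; the interpolation weights and length normalization are placeholders, not the submission's tuned values.

```python
def noisy_channel_score(direct, channel, lm, length, l1=0.5, l2=0.5):
    """Length-normalized noisy-channel rerank score:
    log p(y|x) + l1 * log p(x|y) + l2 * log p(y), divided by |y|.
    l1 and l2 are tuned on dev data in practice."""
    return (direct + l1 * channel + l2 * lm) / length

def rerank(candidates):
    """candidates: list of (text, direct, channel, lm, length) tuples;
    returns the text of the highest-scoring candidate."""
    return max(candidates, key=lambda c: noisy_channel_score(*c[1:]))[0]
```

The channel and LM terms reward candidates that both explain the source well and read fluently, which the direct model alone can under-weight.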
1907.09006 | Forward-Backward Decoding for Regularizing End-to-End TTS | ['Yibin Zheng', 'Xi Wang', 'Lei He', 'Shifeng Pan', 'Frank K. Soong', 'Zhengqi Wen', 'Jianhua Tao'] | ['eess.AS', 'cs.CL', 'cs.SD'] | Neural end-to-end TTS can generate very high-quality synthesized speech, and
even close to human recordings on in-domain text. However, it performs
unsatisfactorily when scaled to challenging test sets. One concern is that
the encoder-decoder with attention-based network adopts autoregressive
generative sequence model with the limitation of "exposure bias". To address
this issue, we propose two novel methods, which learn to predict future by
improving agreement between forward and backward decoding sequence. The first
one is achieved by introducing divergence regularization terms into model
training objective to reduce the mismatch between two directional models,
namely L2R and R2L (which generates targets from left-to-right and
right-to-left, respectively). While the second one operates on decoder-level
and exploits the future information during decoding. In addition, we employ a
joint training strategy to allow forward and backward decoding to improve each
other in an interactive process. Experimental results show that our proposed
methods, especially the second one (bidirectional decoder regularization), lead
to a significant improvement in both robustness and overall naturalness,
outperforming the baseline (a revised version of Tacotron2) by a MOS gap of
0.14 in a challenging test, and achieving close to human quality (4.42 vs. 4.49
in MOS) on general test. | 2019-07-18T12:24:30Z | Accepted by INTERSPEECH2019. arXiv admin note: text overlap with
arXiv:1808.04064, arXiv:1804.05374 by other authors | null | null | null | null | null | null | null | null | null |
1907.09595 | MixConv: Mixed Depthwise Convolutional Kernels | ['Mingxing Tan', 'Quoc V. Le'] | ['cs.CV', 'cs.LG'] | Depthwise convolution is becoming increasingly popular in modern efficient
ConvNets, but its kernel size is often overlooked. In this paper, we
systematically study the impact of different kernel sizes, and observe that
combining the benefits of multiple kernel sizes can lead to better accuracy and
efficiency. Based on this observation, we propose a new mixed depthwise
convolution (MixConv), which naturally mixes up multiple kernel sizes in a
single convolution. As a simple drop-in replacement of vanilla depthwise
convolution, our MixConv improves the accuracy and efficiency for existing
MobileNets on both ImageNet classification and COCO object detection. To
demonstrate the effectiveness of MixConv, we integrate it into AutoML search
space and develop a new family of models, named MixNets, which outperform
previous mobile models including MobileNetV2 [20] (ImageNet top-1 accuracy
+4.2%), ShuffleNetV2 [16] (+3.5%), MnasNet [26] (+1.3%), ProxylessNAS [2]
(+2.2%), and FBNet [27] (+2.0%). In particular, our MixNet-L achieves a new
state-of-the-art 78.9% ImageNet top-1 accuracy under typical mobile settings
(<600M FLOPS). Code is at
https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet/mixnet | 2019-07-22T21:49:25Z | BMVC 2019 | BMVC 2019 | null | null | null | null | null | null | null | null |
1907.10529 | SpanBERT: Improving Pre-training by Representing and Predicting Spans | ['Mandar Joshi', 'Danqi Chen', 'Yinhan Liu', 'Daniel S. Weld', 'Luke Zettlemoyer', 'Omer Levy'] | ['cs.CL', 'cs.LG'] | We present SpanBERT, a pre-training method that is designed to better
represent and predict spans of text. Our approach extends BERT by (1) masking
contiguous random spans, rather than random tokens, and (2) training the span
boundary representations to predict the entire content of the masked span,
without relying on the individual token representations within it. SpanBERT
consistently outperforms BERT and our better-tuned baselines, with substantial
gains on span selection tasks such as question answering and coreference
resolution. In particular, with the same training data and model size as
BERT-large, our single model obtains 94.6% and 88.7% F1 on SQuAD 1.1 and 2.0,
respectively. We also achieve a new state of the art on the OntoNotes
coreference resolution task (79.6% F1), strong performance on the TACRED
relation extraction benchmark, and even show gains on GLUE. | 2019-07-24T15:43:40Z | Accepted at TACL | null | null | SpanBERT: Improving Pre-training by Representing and Predicting Spans | ['Mandar Joshi', 'Danqi Chen', 'Yinhan Liu', 'Daniel S. Weld', 'Luke Zettlemoyer', 'Omer Levy'] | 2019 | Transactions of the Association for Computational Linguistics | 1,974 | 58 | ['Computer Science'] |
1907.10641 | WinoGrande: An Adversarial Winograd Schema Challenge at Scale | ['Keisuke Sakaguchi', 'Ronan Le Bras', 'Chandra Bhagavatula', 'Yejin Choi'] | ['cs.CL'] | The Winograd Schema Challenge (WSC) (Levesque, Davis, and Morgenstern 2011),
a benchmark for commonsense reasoning, is a set of 273 expert-crafted pronoun
resolution problems originally designed to be unsolvable for statistical models
that rely on selectional preferences or word associations. However, recent
advances in neural language models have already reached around 90% accuracy on
variants of WSC. This raises an important question of whether these models have
truly acquired robust commonsense capabilities or whether they rely on spurious
biases in the datasets that lead to an overestimation of the true capabilities
of machine commonsense. To investigate this question, we introduce WinoGrande,
a large-scale dataset of 44k problems, inspired by the original WSC design, but
adjusted to improve both the scale and the hardness of the dataset. The key
steps of the dataset construction consist of (1) a carefully designed
crowdsourcing procedure, followed by (2) systematic bias reduction using a
novel AfLite algorithm that generalizes human-detectable word associations to
machine-detectable embedding associations. The best state-of-the-art methods on
WinoGrande achieve 59.4-79.1%, which are 15-35% below human performance of
94.0%, depending on the amount of the training data allowed. Furthermore, we
establish new state-of-the-art results on five related benchmarks - WSC
(90.1%), DPR (93.1%), COPA (90.6%), KnowRef (85.6%), and Winogender (97.1%).
These results have dual implications: on one hand, they demonstrate the
effectiveness of WinoGrande when used as a resource for transfer learning. On
the other hand, they raise a concern that we are likely to be overestimating
the true capabilities of machine commonsense across all these benchmarks. We
emphasize the importance of algorithmic bias reduction in existing and future
benchmarks to mitigate such overestimation. | 2019-07-24T18:11:59Z | null | null | null | null | null | null | null | null | null | null |
1907.11692 | RoBERTa: A Robustly Optimized BERT Pretraining Approach | ['Yinhan Liu', 'Myle Ott', 'Naman Goyal', 'Jingfei Du', 'Mandar Joshi', 'Danqi Chen', 'Omer Levy', 'Mike Lewis', 'Luke Zettlemoyer', 'Veselin Stoyanov'] | ['cs.CL'] | Language model pretraining has led to significant performance gains but
careful comparison between different approaches is challenging. Training is
computationally expensive, often done on private datasets of different sizes,
and, as we will show, hyperparameter choices have significant impact on the
final results. We present a replication study of BERT pretraining (Devlin et
al., 2019) that carefully measures the impact of many key hyperparameters and
training data size. We find that BERT was significantly undertrained, and can
match or exceed the performance of every model published after it. Our best
model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results
highlight the importance of previously overlooked design choices, and raise
questions about the source of recently reported improvements. We release our
models and code. | 2019-07-26T17:48:29Z | null | null | null | null | null | null | null | null | null | null |
1907.12237 | KNEEL: Knee Anatomical Landmark Localization Using Hourglass Networks | ['Aleksei Tiulpin', 'Iaroslav Melekhov', 'Simo Saarakkala'] | ['cs.CV'] | This paper addresses the challenge of localization of anatomical landmarks in
knee X-ray images at different stages of osteoarthritis (OA). Landmark
localization can be viewed as a regression problem, where the landmark position
is directly predicted by using the region of interest or even full-size images
leading to large memory footprint, especially in case of high resolution
medical images. In this work, we propose an efficient deep neural network
framework with an hourglass architecture utilizing a soft-argmax layer to
directly predict normalized coordinates of the landmark points. We provide an
extensive evaluation of different regularization techniques and various loss
functions to understand their influence on the localization performance.
Furthermore, we introduce the concept of transfer learning from low-budget
annotations, and experimentally demonstrate that such an approach improves the
accuracy of landmark localization. Compared to the prior methods, we validate
our model on two datasets that are independent from the train data and assess
the performance of the method for different stages of OA severity. The proposed
approach demonstrates better generalization performance compared to the current
state-of-the-art. | 2019-07-29T07:18:54Z | Accepted for Publication at ICCV 2019 VRMI Workshop | null | null | KNEEL: Knee Anatomical Landmark Localization Using Hourglass Networks | ['A. Tiulpin', 'Iaroslav Melekhov', 'S. Saarakkala'] | 2019 | 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) | 45 | 47 | ['Computer Science'] |
1907.12412 | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | ['Yu Sun', 'Shuohuan Wang', 'Yukun Li', 'Shikun Feng', 'Hao Tian', 'Hua Wu', 'Haifeng Wang'] | ['cs.CL'] | Recently, pre-trained models have achieved state-of-the-art results in
various language understanding tasks, which indicates that pre-training on
large-scale corpora may play a crucial role in natural language processing.
Current pre-training procedures usually focus on training the model with
several simple tasks to grasp the co-occurrence of words or sentences. However,
besides co-occurrence, there exists other valuable lexical, syntactic and
semantic information in training corpora, such as named entities, semantic
closeness and discourse relations. In order to extract, to the fullest extent,
the lexical, syntactic and semantic information from training corpora, we
propose a continual pre-training framework named ERNIE 2.0 which incrementally
builds and learns pre-training tasks through continual multi-task learning.
Experimental results demonstrate that ERNIE 2.0 outperforms BERT and XLNet on
16 tasks including English tasks on GLUE benchmarks and several common tasks in
Chinese. The source codes and pre-trained models have been released at
https://github.com/PaddlePaddle/ERNIE. | 2019-07-29T13:25:37Z | 11 pages, 3 figures and 7 tables; Accepted by AAAI 2020 | null | null | null | null | null | null | null | null | null |
1907.12461 | Leveraging Pre-trained Checkpoints for Sequence Generation Tasks | ['Sascha Rothe', 'Shashi Narayan', 'Aliaksei Severyn'] | ['cs.CL'] | Unsupervised pre-training of large neural models has recently revolutionized
Natural Language Processing. By warm-starting from the publicly released
checkpoints, NLP practitioners have pushed the state-of-the-art on multiple
benchmarks while saving significant amounts of compute time. So far the focus
has been mainly on the Natural Language Understanding tasks. In this paper, we
demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We
developed a Transformer-based sequence-to-sequence model that is compatible
with publicly available pre-trained BERT, GPT-2 and RoBERTa checkpoints and
conducted an extensive empirical study on the utility of initializing our
model, both encoder and decoder, with these checkpoints. Our models result in
new state-of-the-art results on Machine Translation, Text Summarization,
Sentence Splitting, and Sentence Fusion. | 2019-07-29T14:42:30Z | To be published in Transactions of the Association for Computational
Linguistics (TACL) | null | 10.1162/tacl_a_00313 | Leveraging Pre-trained Checkpoints for Sequence Generation Tasks | ['S. Rothe', 'Shashi Narayan', 'A. Severyn'] | 2019 | Transactions of the Association for Computational Linguistics | 438 | 67 | ['Computer Science'] |
1908.02660 | SpatialSense: An Adversarially Crowdsourced Benchmark for Spatial
Relation Recognition | ['Kaiyu Yang', 'Olga Russakovsky', 'Jia Deng'] | ['cs.CV'] | Understanding the spatial relations between objects in images is a
surprisingly challenging task. A chair may be "behind" a person even if it
appears to the left of the person in the image (depending on which way the
person is facing). Two students that appear close to each other in the image
may not in fact be "next to" each other if there is a third student between
them.
We introduce SpatialSense, a dataset specializing in spatial relation
recognition which captures a broad spectrum of such challenges, allowing for
proper benchmarking of computer vision techniques. SpatialSense is constructed
through adversarial crowdsourcing, in which human annotators are tasked with
finding spatial relations that are difficult to predict using simple cues such
as 2D spatial configuration or language priors. Adversarial crowdsourcing
significantly reduces dataset bias and samples more interesting relations in
the long tail compared to existing datasets. On SpatialSense, state-of-the-art
recognition models perform comparably to simple baselines, suggesting that they
rely on straightforward cues instead of fully reasoning about this complex
task. The SpatialSense benchmark provides a path forward to advancing the
spatial reasoning capabilities of computer vision systems. The dataset and code
are available at https://github.com/princeton-vl/SpatialSense. | 2019-08-07T14:41:30Z | Accepted to ICCV 2019 | null | null | null | null | null | null | null | null | null |
1908.03557 | VisualBERT: A Simple and Performant Baseline for Vision and Language | ['Liunian Harold Li', 'Mark Yatskar', 'Da Yin', 'Cho-Jui Hsieh', 'Kai-Wei Chang'] | ['cs.CV', 'cs.CL', 'cs.LG'] | We propose VisualBERT, a simple and flexible framework for modeling a broad
range of vision-and-language tasks. VisualBERT consists of a stack of
Transformer layers that implicitly align elements of an input text and regions
in an associated input image with self-attention. We further propose two
visually-grounded language model objectives for pre-training VisualBERT on
image caption data. Experiments on four vision-and-language tasks including
VQA, VCR, NLVR2, and Flickr30K show that VisualBERT outperforms or rivals
state-of-the-art models while being significantly simpler. Further analysis
demonstrates that VisualBERT can ground elements of language to image regions
without any explicit supervision and is even sensitive to syntactic
relationships, tracking, for example, associations between verbs and image
regions corresponding to their arguments. | 2019-08-09T17:57:13Z | Work in Progress | null | null | VisualBERT: A Simple and Performant Baseline for Vision and Language | ['Liunian Harold Li', 'Mark Yatskar', 'Da Yin', 'Cho-Jui Hsieh', 'Kai-Wei Chang'] | 2019 | arXiv.org | 1,975 | 42 | ['Computer Science'] |
1908.03636 | Star-convex Polyhedra for 3D Object Detection and Segmentation in
Microscopy | ['Martin Weigert', 'Uwe Schmidt', 'Robert Haase', 'Ko Sugawara', 'Gene Myers'] | ['cs.CV'] | Accurate detection and segmentation of cell nuclei in volumetric (3D)
fluorescence microscopy datasets is an important step in many biomedical
research projects. Although many automated methods for these tasks exist, they
often struggle for images with low signal-to-noise ratios and/or dense packing
of nuclei. It was recently shown for 2D microscopy images that these issues can
be alleviated by training a neural network to directly predict a suitable shape
representation (star-convex polygon) for cell nuclei. In this paper, we adopt
and extend this approach to 3D volumes by using star-convex polyhedra to
represent cell nuclei and similar shapes. To that end, we overcome the
challenges of 1) finding parameter-efficient star-convex polyhedra
representations that can faithfully describe cell nuclei shapes, 2) adapting to
anisotropic voxel sizes often found in fluorescence microscopy datasets, and 3)
efficiently computing intersections between pairs of star-convex polyhedra
(required for non-maximum suppression). Although our approach is quite general,
since star-convex polyhedra include common shapes like bounding boxes and
spheres as special cases, our focus is on accurate detection and segmentation
of cell nuclei. Finally, we demonstrate on two challenging datasets that our
approach (StarDist-3D) leads to superior results when compared to classical and
deep learning based methods. | 2019-08-09T21:22:29Z | Conference paper at WACV 2020 | null | 10.1109/WACV45572.2020.9093435 | null | null | null | null | null | null | null |
1908.04212 | A Finnish News Corpus for Named Entity Recognition | ['Teemu Ruokolainen', 'Pekka Kauppinen', 'Miikka Silfverberg', 'Krister Lindén'] | ['cs.CL'] | We present a corpus of Finnish news articles with a manually prepared named
entity annotation. The corpus consists of 953 articles (193,742 word tokens)
with six named entity classes (organization, location, person, product, event,
and date). The articles are extracted from the archives of Digitoday, a Finnish
online technology news source. The corpus is available for research purposes.
We present baseline experiments on the corpus using a rule-based and two deep
learning systems on two, in-domain and out-of-domain, test sets. | 2019-08-12T15:49:57Z | null | null | 10.1007/s10579-019-09471-7 | A Finnish news corpus for named entity recognition | ['T. Ruokolainen', 'Pekka Kauppinen', 'Miikka Silfverberg', 'Krister Lindén'] | 2019 | Language Resources and Evaluation | 67 | 65 | ['Computer Science'] |
1908.04577 | StructBERT: Incorporating Language Structures into Pre-training for Deep
Language Understanding | ['Wei Wang', 'Bin Bi', 'Ming Yan', 'Chen Wu', 'Zuyi Bao', 'Jiangnan Xia', 'Liwei Peng', 'Luo Si'] | ['cs.CL'] | Recently, the pre-trained language model, BERT (and its robustly optimized
version RoBERTa), has attracted a lot of attention in natural language
understanding (NLU), and achieved state-of-the-art accuracy in various NLU
tasks, such as sentiment classification, natural language inference, semantic
textual similarity and question answering. Inspired by the linearization
exploration work of Elman [8], we extend BERT to a new model, StructBERT, by
incorporating language structures into pre-training. Specifically, we pre-train
StructBERT with two auxiliary tasks to make the most of the sequential order of
words and sentences, which leverage language structures at the word and
sentence levels, respectively. As a result, the new model is adapted to
different levels of language understanding required by downstream tasks. The
StructBERT with structural pre-training gives surprisingly good empirical
results on a variety of downstream tasks, including pushing the
state-of-the-art on the GLUE benchmark to 89.0 (outperforming all published
models), the F1 score on SQuAD v1.1 question answering to 93.0, the accuracy on
SNLI to 91.7. | 2019-08-13T11:12:58Z | 10 Pages | null | null | null | null | null | null | null | null | null |
1908.04913 | FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age | ['Kimmo Kärkkäinen', 'Jungseock Joo'] | ['cs.CV', 'cs.LG'] | Existing public face datasets are strongly biased toward Caucasian faces, and
other races (e.g., Latino) are significantly underrepresented. This can lead to
inconsistent model accuracy, limit the applicability of face analytic systems
to non-White race groups, and adversely affect research findings based on such
skewed data. To mitigate the race bias in these datasets, we construct a novel
face image dataset, containing 108,501 images, with an emphasis on balanced
race composition in the dataset. We define 7 race groups: White, Black, Indian,
East Asian, Southeast Asian, Middle East, and Latino. Images were collected
from the YFCC-100M Flickr dataset and labeled with race, gender, and age
groups. Evaluations were performed on existing face attribute datasets as well
as novel image datasets to measure generalization performance. We find that the
model trained from our dataset is substantially more accurate on novel datasets
and the accuracy is consistent between race and gender groups. | 2019-08-14T01:42:41Z | null | null | null | null | null | null | null | null | null | null |
1908.06760 | Self-Attention Based Molecule Representation for Predicting Drug-Target
Interaction | ['Bonggun Shin', 'Sungsoo Park', 'Keunsoo Kang', 'Joyce C. Ho'] | ['cs.LG', 'stat.ML'] | Predicting drug-target interactions (DTI) is an essential part of the drug
discovery process, which is an expensive process in terms of time and cost.
Therefore, reducing DTI cost could lead to reduced healthcare costs for a
patient. In addition, a precisely learned molecule representation in a DTI
model could contribute to developing personalized medicine, which will help
many patient cohorts. In this paper, we propose a new molecule representation
based on the self-attention mechanism, and a new DTI model using our molecule
representation. The experiments show that our DTI model outperforms the state
of the art by up to 4.9% points in terms of area under the precision-recall
curve. Moreover, a study using the DrugBank database proves that our model
effectively lists all known drugs targeting a specific cancer biomarker in the
top-30 candidate list. | 2019-08-15T21:39:15Z | 18 pages, Proceedings of Machine Learning for Healthcare, 2019
(MLHC'19) | null | null | Self-Attention Based Molecule Representation for Predicting Drug-Target Interaction | ['Bonggun Shin', 'Sungsoo Park', 'Keunsoo Kang', 'Joyce Ho'] | 2019 | Machine Learning in Health Care | 140 | 44 | ['Computer Science', 'Mathematics'] |
1908.07245 | GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge | ['Luyao Huang', 'Chi Sun', 'Xipeng Qiu', 'Xuanjing Huang'] | ['cs.CL'] | Word Sense Disambiguation (WSD) aims to find the exact sense of an ambiguous
word in a particular context. Traditional supervised methods rarely take into
consideration the lexical resources like WordNet, which are widely utilized in
knowledge-based methods. Recent studies have shown the effectiveness of
incorporating gloss (sense definition) into neural networks for WSD. However,
compared with traditional word expert supervised methods, they have not
achieved much improvement. In this paper, we focus on how to better leverage
gloss knowledge in a supervised neural WSD system. We construct context-gloss
pairs and propose three BERT-based models for WSD. We fine-tune the pre-trained
BERT model on SemCor3.0 training corpus and the experimental results on several
English all-words WSD benchmark datasets show that our approach outperforms the
state-of-the-art systems. | 2019-08-20T09:37:42Z | EMNLP-IJCNLP 2019 | null | null | GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge | ['Luyao Huang', 'Chi Sun', 'Xipeng Qiu', 'Xuanjing Huang'] | 2019 | Conference on Empirical Methods in Natural Language Processing | 244 | 20 | ['Computer Science'] |
1908.07490 | LXMERT: Learning Cross-Modality Encoder Representations from
Transformers | ['Hao Tan', 'Mohit Bansal'] | ['cs.CL', 'cs.CV', 'cs.LG'] | Vision-and-language reasoning requires an understanding of visual concepts,
language semantics, and, most importantly, the alignment and relationships
between these two modalities. We thus propose the LXMERT (Learning
Cross-Modality Encoder Representations from Transformers) framework to learn
these vision-and-language connections. In LXMERT, we build a large-scale
Transformer model that consists of three encoders: an object relationship
encoder, a language encoder, and a cross-modality encoder. Next, to endow our
model with the capability of connecting vision and language semantics, we
pre-train the model with large amounts of image-and-sentence pairs, via five
diverse representative pre-training tasks: masked language modeling, masked
object prediction (feature regression and label classification), cross-modality
matching, and image question answering. These tasks help in learning both
intra-modality and cross-modality relationships. After fine-tuning from our
pre-trained parameters, our model achieves the state-of-the-art results on two
visual question answering datasets (i.e., VQA and GQA). We also show the
generalizability of our pre-trained cross-modality model by adapting it to a
challenging visual-reasoning task, NLVR2, and improve the previous best result
by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies
to prove that both our novel model components and pre-training strategies
significantly contribute to our strong results; and also present several
attention visualizations for the different encoders. Code and pre-trained
models publicly available at: https://github.com/airsplay/lxmert | 2019-08-20T17:05:18Z | EMNLP 2019 (14 pages; with new attention visualizations) | null | null | null | null | null | null | null | null | null |
1908.07836 | PubLayNet: largest dataset ever for document layout analysis | ['Xu Zhong', 'Jianbin Tang', 'Antonio Jimeno Yepes'] | ['cs.CL'] | Recognizing the layout of unstructured digital documents is an important step
when parsing the documents into structured machine-readable format for
downstream applications. Deep neural networks that are developed for computer
vision have been proven to be an effective method to analyze layout of document
images. However, document layout datasets that are currently publicly available
are several orders of magnitude smaller than established computer vision datasets.
Models have to be trained by transfer learning from a base model that is
pre-trained on a traditional computer vision dataset. In this paper, we develop
the PubLayNet dataset for document layout analysis by automatically matching
the XML representations and the content of over 1 million PDF articles that are
publicly available on PubMed Central. The size of the dataset is comparable to
established computer vision datasets, containing over 360 thousand document
images, where typical document layout elements are annotated. The experiments
demonstrate that deep neural networks trained on PubLayNet accurately recognize
the layout of scientific articles. The pre-trained models are also a more
effective base model for transfer learning on a different document domain. We
release the dataset (https://github.com/ibm-aur-nlp/PubLayNet) to support
development and evaluation of more advanced models for document layout
analysis. | 2019-08-16T00:40:08Z | null | null | null | PubLayNet: Largest Dataset Ever for Document Layout Analysis | ['Xu Zhong', 'Jianbin Tang', 'Antonio Jimeno-Yepes'] | 2019 | IEEE International Conference on Document Analysis and Recognition | 465 | 22 | ['Computer Science'] |
1908.07919 | Deep High-Resolution Representation Learning for Visual Recognition | ['Jingdong Wang', 'Ke Sun', 'Tianheng Cheng', 'Borui Jiang', 'Chaorui Deng', 'Yang Zhao', 'Dong Liu', 'Yadong Mu', 'Mingkui Tan', 'Xinggang Wang', 'Wenyu Liu', 'Bin Xiao'] | ['cs.CV'] | High-resolution representations are essential for position-sensitive vision
problems, such as human pose estimation, semantic segmentation, and object
detection. Existing state-of-the-art frameworks first encode the input image as
a low-resolution representation through a subnetwork that is formed by
connecting high-to-low resolution convolutions in series (e.g., ResNet,
VGGNet), and then recover the high-resolution representation from the encoded
low-resolution representation. Instead, our proposed network, named
High-Resolution Network (HRNet), maintains high-resolution representations
through the whole process. There are two key characteristics: (i) Connect the
high-to-low resolution convolution streams in parallel; (ii) Repeatedly
exchange the information across resolutions. The benefit is that the resulting
representation is semantically richer and spatially more precise. We show the
superiority of the proposed HRNet in a wide range of applications, including
human pose estimation, semantic segmentation, and object detection, suggesting
that the HRNet is a stronger backbone for computer vision problems. All the
codes are available at https://github.com/HRNet. | 2019-08-20T10:47:46Z | To appear in TPAMI. State-of-the-art performance on human pose
estimation, semantic segmentation, object detection, instance segmentation,
and face alignment. Full version of arXiv:1904.04514. (arXiv admin note: text
overlap with arXiv:1904.04514) | null | null | null | null | null | null | null | null | null |
1908.08962 | Well-Read Students Learn Better: On the Importance of Pre-training
Compact Models | ['Iulia Turc', 'Ming-Wei Chang', 'Kenton Lee', 'Kristina Toutanova'] | ['cs.CL'] | Recent developments in natural language representations have been accompanied
by large and expensive models that leverage vast amounts of general-domain text
through self-supervised pre-training. Due to the cost of applying such models
to down-stream tasks, several model compression techniques on pre-trained
language representations have been proposed (Sun et al., 2019; Sanh, 2019).
However, surprisingly, the simple baseline of just pre-training and fine-tuning
compact models has been overlooked. In this paper, we first show that
pre-training remains important in the context of smaller architectures, and
fine-tuning pre-trained compact models can be competitive to more elaborate
methods proposed in concurrent work. Starting with pre-trained compact models,
we then explore transferring task knowledge from large fine-tuned models
through standard knowledge distillation. The resulting simple, yet effective
and general algorithm, Pre-trained Distillation, brings further improvements.
Through extensive experiments, we more generally explore the interaction
between pre-training and distillation under two variables that have been
under-studied: model size and properties of unlabeled task data. One surprising
observation is that they have a compound effect even when sequentially applied
on the same data. To accelerate future research, we will make our 24
pre-trained miniature BERT models publicly available. | 2019-08-23T18:02:05Z | Added comparison to concurrent work | null | null | Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation | ['Iulia Turc', 'Ming-Wei Chang', 'Kenton Lee', 'Kristina Toutanova'] | 2019 | arXiv.org | 225 | 41 | ['Computer Science'] |
1908.09203 | Release Strategies and the Social Impacts of Language Models | ['Irene Solaiman', 'Miles Brundage', 'Jack Clark', 'Amanda Askell', 'Ariel Herbert-Voss', 'Jeff Wu', 'Alec Radford', 'Gretchen Krueger', 'Jong Wook Kim', 'Sarah Kreps', 'Miles McCain', 'Alex Newhouse', 'Jason Blazakis', 'Kris McGuffie', 'Jasmine Wang'] | ['cs.CL', 'cs.AI', 'cs.CY', 'I.2; I.2.7; K.4'] | Large language models have a range of beneficial uses: they can assist in
prose, poetry, and programming; analyze dataset biases; and more. However,
their flexibility and generative capabilities also raise misuse concerns. This
report discusses OpenAI's work related to the release of its GPT-2 language
model. It discusses staged release, which allows time between model releases to
conduct risk and benefit analyses as model sizes increased. It also discusses
ongoing partnership-based research and provides recommendations for better
coordination and responsible publication in AI. | 2019-08-24T20:41:40Z | 71 pages, report | null | null | Release Strategies and the Social Impacts of Language Models | ['Irene Solaiman', 'Miles Brundage', 'Jack Clark', 'Amanda Askell', 'Ariel Herbert-Voss', 'Jeff Wu', 'Alec Radford', 'Jasmine Wang'] | 2019 | arXiv.org | 635 | 98 | ['Computer Science'] |
1908.10063 | FinBERT: Financial Sentiment Analysis with Pre-trained Language Models | ['Dogu Araci'] | ['cs.CL', 'cs.LG'] | Financial sentiment analysis is a challenging task due to the specialized
language and lack of labeled data in that domain. General-purpose models are
not effective enough because of the specialized language used in a financial
context. We hypothesize that pre-trained language models can help with this
problem because they require fewer labeled examples and they can be further
trained on domain-specific corpora. We introduce FinBERT, a language model
based on BERT, to tackle NLP tasks in the financial domain. Our results show
improvement in every measured metric on current state-of-the-art results for
two financial sentiment analysis datasets. We find that even with a smaller
training set and fine-tuning only a part of the model, FinBERT outperforms
state-of-the-art machine learning methods. | 2019-08-27T07:40:48Z | This thesis is submitted in partial fulfillment for the degree of
Master of Science in Information Studies: Data Science, University of
Amsterdam. June 25, 2019 | null | null | null | null | null | null | null | null | null |
1908.10084 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | ['Nils Reimers', 'Iryna Gurevych'] | ['cs.CL'] | BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) have set a new
state-of-the-art performance on sentence-pair regression tasks like semantic
textual similarity (STS). However, it requires that both sentences are fed into
the network, which causes a massive computational overhead: Finding the most
similar pair in a collection of 10,000 sentences requires about 50 million
inference computations (~65 hours) with BERT. The construction of BERT makes it
unsuitable for semantic similarity search as well as for unsupervised tasks
like clustering.
In this publication, we present Sentence-BERT (SBERT), a modification of the
pretrained BERT network that uses siamese and triplet network structures to
derive semantically meaningful sentence embeddings that can be compared using
cosine-similarity. This reduces the effort for finding the most similar pair
from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while
maintaining the accuracy from BERT.
We evaluate SBERT and SRoBERTa on common STS tasks and transfer learning
tasks, where it outperforms other state-of-the-art sentence embedding methods. | 2019-08-27T08:50:17Z | Published at EMNLP 2019 | null | null | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | ['Nils Reimers', 'Iryna Gurevych'] | 2,019 | Conference on Empirical Methods in Natural Language Processing | 12,366 | 38 | ['Computer Science'] |
1,908.11828 | PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase
Identification | ['Yinfei Yang', 'Yuan Zhang', 'Chris Tar', 'Jason Baldridge'] | ['cs.CL'] | Most existing work on adversarial data generation focuses on English. For
example, PAWS (Paraphrase Adversaries from Word Scrambling) consists of
challenging English paraphrase identification pairs from Wikipedia and Quora.
We remedy this gap with PAWS-X, a new dataset of 23,659 human translated PAWS
evaluation pairs in six typologically distinct languages: French, Spanish,
German, Chinese, Japanese, and Korean. We provide baseline numbers for three
models with different capacity to capture non-local context and sentence
structure, and using different multilingual training and evaluation regimes.
Multilingual BERT fine-tuned on PAWS English plus machine-translated data
performs the best, with a range of 83.1-90.8 accuracy across the non-English
languages and an average accuracy gain of 23% over the next best model. PAWS-X
shows the effectiveness of deep, multilingual pre-training while also leaving
considerable headroom as a new challenge to drive multilingual research that
better captures structure and contextual information. | 2019-08-30T16:40:00Z | Accepted by EMNLP2019 | null | null | null | null | null | null | null | null | null |
1,909.00161 | Benchmarking Zero-shot Text Classification: Datasets, Evaluation and
Entailment Approach | ['Wenpeng Yin', 'Jamaal Hay', 'Dan Roth'] | ['cs.CL'] | Zero-shot text classification (0Shot-TC) is a challenging NLU problem to
which little attention has been paid by the research community. 0Shot-TC aims
to associate an appropriate label with a piece of text, irrespective of the
text domain and the aspect (e.g., topic, emotion, event, etc.) described by the
label. Only a few articles have studied 0Shot-TC, all focusing on topical
categorization, which, we argue, is just the tip of the iceberg in 0Shot-TC. In
addition, experiments in the literature lack a uniform basis for comparison,
which obscures the field's progress.
This work benchmarks the 0Shot-TC problem by providing unified datasets,
standardized evaluations, and state-of-the-art baselines. Our contributions
include: i) The datasets we provide facilitate studying 0Shot-TC relative to
conceptually different and diverse aspects: the ``topic'' aspect includes
``sports'' and ``politics'' as labels; the ``emotion'' aspect includes ``joy''
and ``anger''; the ``situation'' aspect includes ``medical assistance'' and
``water shortage''. ii) We extend the existing evaluation setup
(label-partially-unseen) -- given a dataset, train on some labels, test on all
labels -- to include a more challenging yet realistic evaluation
label-fully-unseen 0Shot-TC (Chang et al., 2008), aiming at classifying text
snippets without seeing task specific training data at all. iii) We unify the
0Shot-TC of diverse aspects within a textual entailment formulation and study
it this way.
Code & Data: https://github.com/yinwenpeng/BenchmarkingZeroShot | 2019-08-31T07:42:11Z | EMNLP2019 camera-ready, 10 pages | null | null | Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach | ['Wenpeng Yin', 'Jamaal Hay', 'D. Roth'] | 2,019 | Conference on Empirical Methods in Natural Language Processing | 553 | 29 | ['Computer Science'] |
1,909.00204 | NEZHA: Neural Contextualized Representation for Chinese Language
Understanding | ['Junqiu Wei', 'Xiaozhe Ren', 'Xiaoguang Li', 'Wenyong Huang', 'Yi Liao', 'Yasheng Wang', 'Jiashu Lin', 'Xin Jiang', 'Xiao Chen', 'Qun Liu'] | ['cs.CL'] | The pre-trained language models have achieved great successes in various
natural language understanding (NLU) tasks due to their capacity to capture the
deep contextualized information in text by pre-training on large-scale corpora.
In this technical report, we present our practice of pre-training language
models named NEZHA (NEural contextualiZed representation for CHinese lAnguage
understanding) on Chinese corpora and finetuning for the Chinese NLU tasks. The
current version of NEZHA is based on BERT with a collection of proven
improvements, which include Functional Relative Positional Encoding as an
effective positional encoding scheme, Whole Word Masking strategy, Mixed
Precision Training and the LAMB Optimizer in training the models. The
experimental results show that NEZHA achieves state-of-the-art performance
when finetuned on several representative Chinese tasks, including named entity
recognition (People's Daily NER), sentence matching (LCQMC), Chinese sentiment
classification (ChnSenti) and natural language inference (XNLI). | 2019-08-31T12:08:53Z | null | null | null | null | null | null | null | null | null | null |
1,909.00277 | Cosmos QA: Machine Reading Comprehension with Contextual Commonsense
Reasoning | ['Lifu Huang', 'Ronan Le Bras', 'Chandra Bhagavatula', 'Yejin Choi'] | ['cs.CL', 'cs.AI'] | Understanding narratives requires reading between the lines, which in turn,
requires interpreting the likely causes and effects of events, even when they
are not mentioned explicitly. In this paper, we introduce Cosmos QA, a
large-scale dataset of 35,600 problems that require commonsense-based reading
comprehension, formulated as multiple-choice questions. In stark contrast to
most existing reading comprehension datasets where the questions focus on
factual and literal understanding of the context paragraph, our dataset focuses
on reading between the lines over a diverse collection of people's everyday
narratives, asking such questions as "what might be the possible reason of
...?", or "what would have happened if ..." that require reasoning beyond the
exact text spans in the context. To establish baseline performances on Cosmos
QA, we experiment with several state-of-the-art neural architectures for
reading comprehension, and also propose a new architecture that improves over
the competitive baselines. Experimental results demonstrate a significant gap
between machine (68.4%) and human performance (94%), pointing to avenues for
future research on commonsense machine comprehension. Dataset, code and
leaderboard are publicly available at https://wilburone.github.io/cosmos. | 2019-08-31T19:55:44Z | EMNLP'2019 | null | null | null | null | null | null | null | null | null |
1,909.01247 | Introducing RONEC -- the Romanian Named Entity Corpus | ['Stefan Daniel Dumitrescu', 'Andrei-Marius Avram'] | ['cs.CL'] | We present RONEC - the Named Entity Corpus for the Romanian language. The
corpus contains over 26000 entities in ~5000 annotated sentences, belonging to
16 distinct classes. The sentences have been extracted from a copyright-free
newspaper, covering several styles. This corpus represents the first initiative
in the Romanian language space specifically targeted for named entity
recognition. It is available in BRAT and CoNLL-U Plus formats, and it is free
to use and extend at github.com/dumitrescustefan/ronec . | 2019-09-03T15:20:44Z | 8 pages + annex, accepted to LREC2020 in the main conference | null | null | Introducing RONEC - the Romanian Named Entity Corpus | ['Stefan Daniel Dumitrescu', 'Andrei-Marius Avram'] | 2,019 | International Conference on Language Resources and Evaluation | 23 | 17 | ['Computer Science'] |
1,909.01326 | The Woman Worked as a Babysitter: On Biases in Language Generation | ['Emily Sheng', 'Kai-Wei Chang', 'Premkumar Natarajan', 'Nanyun Peng'] | ['cs.CL', 'cs.AI'] | We present a systematic study of biases in natural language generation (NLG)
by analyzing text generated from prompts that contain mentions of different
demographic groups. In this work, we introduce the notion of the regard towards
a demographic, use the varying levels of regard towards different demographics
as a defining metric for bias in NLG, and analyze the extent to which sentiment
scores are a relevant proxy metric for regard. To this end, we collect
strategically-generated text from language models and manually annotate the
text with both sentiment and regard scores. Additionally, we build an automatic
regard classifier through transfer learning, so that we can analyze biases in
unseen text. Together, these methods reveal the extent of the biased nature of
language model generations. Our analysis provides a study of biases in NLG,
bias metrics and correlated human judgments, and empirical evidence on the
usefulness of our annotated dataset. | 2019-09-03T17:50:44Z | EMNLP 2019 short paper (5 pages); Updated references and examples,
changed figure 2 & 3 order, fixed grammar, results unmodified | null | null | null | null | null | null | null | null | null |
1,909.02027 | An Evaluation Dataset for Intent Classification and Out-of-Scope
Prediction | ['Stefan Larson', 'Anish Mahendran', 'Joseph J. Peper', 'Christopher Clarke', 'Andrew Lee', 'Parker Hill', 'Jonathan K. Kummerfeld', 'Kevin Leach', 'Michael A. Laurenzano', 'Lingjia Tang', 'Jason Mars'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Task-oriented dialog systems need to know when a query falls outside their
range of supported intents, but current text classification corpora only define
label sets that cover every example. We introduce a new dataset that includes
queries that are out-of-scope---i.e., queries that do not fall into any of the
system's supported intents. This poses a new challenge because models cannot
assume that every query at inference time belongs to a system-supported intent
class. Our dataset also covers 150 intent classes over 10 domains, capturing
the breadth that a production task-oriented agent must handle. We evaluate a
range of benchmark classifiers on our dataset along with several different
out-of-scope identification schemes. We find that while the classifiers perform
well on in-scope intent classification, they struggle to identify out-of-scope
queries. Our dataset and evaluation fill an important gap in the field,
offering a way of more rigorously and realistically benchmarking text
classification in task-driven dialog systems. | 2019-09-04T18:04:56Z | Accepted to EMNLP-IJCNLP 2019 | null | null | null | null | null | null | null | null | null |
1,909.03601 | Unbiased Recommender Learning from Missing-Not-At-Random Implicit
Feedback | ['Yuta Saito', 'Suguru Yaginuma', 'Yuta Nishino', 'Hayato Sakata', 'Kazuhide Nakata'] | ['stat.ML', 'cs.IR', 'cs.LG'] | Recommender systems widely use implicit feedback such as click data because
of its general availability. Although the presence of clicks signals the users'
preference to some extent, the lack of such clicks does not necessarily
indicate a negative response from the users, as it is possible that the users
were not exposed to the items (positive-unlabeled problem). This leads to a
difficulty in predicting the users' preferences from implicit feedback.
Previous studies addressed the positive-unlabeled problem by uniformly
upweighting the loss for the positive feedback data or estimating the
confidence of each data having relevance information via the EM-algorithm.
However, these methods failed to address the missing-not-at-random problem in
which popular or frequently recommended items are more likely to be clicked
than other items even if a user does not have a considerable interest in them.
To overcome these limitations, we first define an ideal loss function to be
optimized to realize recommendations that maximize the relevance and propose an
unbiased estimator for the ideal loss. Subsequently, we analyze the variance of
the proposed unbiased estimator and further propose a clipped estimator that
includes the unbiased estimator as a special case. We demonstrate that the
clipped estimator is expected to improve the performance of the recommender
system, by considering the bias-variance trade-off. We conduct semi-synthetic
and real-world experiments and demonstrate that the proposed method largely
outperforms the baselines. In particular, the proposed method works better for
rare items that are less frequently observed in the training data. The findings
indicate that the proposed method can better achieve the objective of
recommending items with the highest relevance. | 2019-09-09T02:54:20Z | accepted at WSDM'20 | null | null | Unbiased Recommender Learning from Missing-Not-At-Random Implicit Feedback | ['Yuta Saito', 'Suguru Yaginuma', 'Yuta Nishino', 'Hayato Sakata', 'K. Nakata'] | 2,019 | Web Search and Data Mining | 268 | 39 | ['Computer Science', 'Mathematics'] |
1,909.05645 | Learning Alignment for Multimodal Emotion Recognition from Speech | ['Haiyang Xu', 'Hui Zhang', 'Kun Han', 'Yun Wang', 'Yiping Peng', 'Xiangang Li'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Speech emotion recognition is a challenging problem because humans convey
emotions in subtle and complex ways. For emotion recognition on human speech,
one can either extract emotion related features from audio signals or employ
speech recognition techniques to generate text from speech and then apply
natural language processing to analyze the sentiment. Further, although emotion
recognition can benefit from audio-textual multimodal information, it is not
trivial to build a system that learns from both modalities. One can build
models for the two input sources separately and combine them at the decision level,
but this method ignores the interaction between speech and text in the temporal
domain. In this paper, we propose to use an attention mechanism to learn the
alignment between speech frames and text words, aiming to produce more accurate
multimodal feature representations. The aligned multimodal features are fed
into a sequential model for emotion recognition. We evaluate the approach on
the IEMOCAP dataset and the experimental results show the proposed approach
achieves the state-of-the-art performance on the dataset. | 2019-09-06T03:06:38Z | InterSpeech 2019 | null | null | null | null | null | null | null | null | null |
1,909.05658 | UER: An Open-Source Toolkit for Pre-training Models | ['Zhe Zhao', 'Hui Chen', 'Jinbin Zhang', 'Xin Zhao', 'Tao Liu', 'Wei Lu', 'Xi Chen', 'Haotang Deng', 'Qi Ju', 'Xiaoyong Du'] | ['cs.CL', 'cs.LG'] | Existing works, including ELMO and BERT, have revealed the importance of
pre-training for NLP tasks. While there does not exist a single pre-training
model that works best in all cases, it is necessary to develop a framework
that is able to deploy various pre-training models efficiently. For this
purpose, we propose an assemble-on-demand pre-training toolkit, namely
Universal Encoder Representations (UER). UER is loosely coupled, and
encapsulated with rich modules. By assembling modules on demand, users can
either reproduce a state-of-the-art pre-training model or develop a
previously unexplored pre-training model. With UER, we have built a model
zoo, which contains pre-trained models based on different corpora, encoders,
and targets (objectives). With proper pre-trained models, we could achieve new
state-of-the-art results on a range of downstream datasets. | 2019-09-12T13:46:58Z | null | null | null | null | null | null | null | null | null | null |
1,909.05858 | CTRL: A Conditional Transformer Language Model for Controllable
Generation | ['Nitish Shirish Keskar', 'Bryan McCann', 'Lav R. Varshney', 'Caiming Xiong', 'Richard Socher'] | ['cs.CL'] | Large-scale language models show promising text generation capabilities, but
users cannot easily control particular aspects of the generated text. We
release CTRL, a 1.63 billion-parameter conditional transformer language model,
trained to condition on control codes that govern style, content, and
task-specific behavior. Control codes were derived from structure that
naturally co-occurs with raw text, preserving the advantages of unsupervised
learning while providing more explicit control over text generation. These
codes also allow CTRL to predict which parts of the training data are most
likely given a sequence. This provides a potential method for analyzing large
amounts of data via model-based source attribution. We have released multiple
full-sized, pretrained versions of CTRL at https://github.com/salesforce/ctrl. | 2019-09-11T17:57:18Z | null | null | null | null | null | null | null | null | null | null |
1,909.06146 | PubMedQA: A Dataset for Biomedical Research Question Answering | ['Qiao Jin', 'Bhuwan Dhingra', 'Zhengping Liu', 'William W. Cohen', 'Xinghua Lu'] | ['cs.CL', 'cs.LG', 'q-bio.QM'] | We introduce PubMedQA, a novel biomedical question answering (QA) dataset
collected from PubMed abstracts. The task of PubMedQA is to answer research
questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial
fibrillation after coronary artery bypass grafting?) using the corresponding
abstracts. PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k
artificially generated QA instances. Each PubMedQA instance is composed of (1)
a question which is either an existing research article title or derived from
one, (2) a context which is the corresponding abstract without its conclusion,
(3) a long answer, which is the conclusion of the abstract and, presumably,
answers the research question, and (4) a yes/no/maybe answer which summarizes
the conclusion. PubMedQA is the first QA dataset where reasoning over
biomedical research texts, especially their quantitative contents, is required
to answer the questions. Our best performing model, multi-phase fine-tuning of
BioBERT with long answer bag-of-word statistics as additional supervision,
achieves 68.1% accuracy, compared to single human performance of 78.0% accuracy
and majority-baseline of 55.2% accuracy, leaving much room for improvement.
PubMedQA is publicly available at https://pubmedqa.github.io. | 2019-09-13T11:18:20Z | EMNLP 2019 | null | null | PubMedQA: A Dataset for Biomedical Research Question Answering | ['Qiao Jin', 'Bhuwan Dhingra', 'Zhengping Liu', 'William W. Cohen', 'Xinghua Lu'] | 2,019 | Conference on Empirical Methods in Natural Language Processing | 918 | 23 | ['Computer Science', 'Biology'] |
1,909.07005 | KorQuAD1.0: Korean QA Dataset for Machine Reading Comprehension | ['Seungyoung Lim', 'Myungji Kim', 'Jooyoul Lee'] | ['cs.CL'] | Machine Reading Comprehension (MRC) is a task that requires a machine to
understand natural language and answer questions by reading a document. It is
the core of automatic response technology such as chatbots and automatized
customer supporting systems. We present Korean Question Answering
Dataset(KorQuAD), a large-scale Korean dataset for extractive machine reading
comprehension task. It consists of 70,000+ human generated question-answer
pairs on Korean Wikipedia articles. We release KorQuAD1.0 and launch a
challenge at https://KorQuAD.github.io to encourage the development of
multilingual natural language processing research. | 2019-09-16T06:15:27Z | null | null | null | null | null | null | null | null | null | null |
1,909.07528 | Emergent Tool Use From Multi-Agent Autocurricula | ['Bowen Baker', 'Ingmar Kanitscheider', 'Todor Markov', 'Yi Wu', 'Glenn Powell', 'Bob McGrew', 'Igor Mordatch'] | ['cs.LG', 'cs.AI', 'cs.MA', 'stat.ML'] | Through multi-agent competition, the simple objective of hide-and-seek, and
standard reinforcement learning algorithms at scale, we find that agents create
a self-supervised autocurriculum inducing multiple distinct rounds of emergent
strategy, many of which require sophisticated tool use and coordination. We
find clear evidence of six emergent phases in agent strategy in our
environment, each of which creates a new pressure for the opposing team to
adapt; for instance, agents learn to build multi-object shelters using moveable
boxes which in turn leads to agents discovering that they can overcome
obstacles using ramps. We further provide evidence that multi-agent competition
may scale better with increasing environment complexity and leads to behavior
that centers around far more human-relevant skills than other self-supervised
reinforcement learning methods such as intrinsic motivation. Finally, we
propose transfer and fine-tuning as a way to quantitatively evaluate targeted
capabilities, and we compare hide-and-seek agents to both intrinsic motivation
and random initialization baselines in a suite of domain-specific intelligence
tests. | 2019-09-17T00:17:02Z | null | null | null | null | null | null | null | null | null | null |
1,909.07846 | Multimodal Multitask Representation Learning for Pathology Biobank
Metadata Prediction | ['Wei-Hung Weng', 'Yuannan Cai', 'Angela Lin', 'Fraser Tan', 'Po-Hsuan Cameron Chen'] | ['cs.CV', 'cs.LG'] | Metadata are general characteristics of the data in a well-curated and
condensed format, and have been proven to be useful for decision making,
knowledge discovery, and also heterogeneous data organization of the biobank. Among
all data types in the biobank, pathology is the key component of the biobank
and also serves as the gold standard of diagnosis. To maximize the utility of
the biobank and allow the rapid progress of biomedical science, it is essential to
organize the data with well-populated pathology metadata. However, manual
annotation of such information is tedious and time-consuming. In the study, we
develop a multimodal multitask learning framework to predict four major
slide-level metadata of pathology images. The framework learns generalizable
representations across tissue slides, pathology reports, and case-level
structured data. We demonstrate improved performance across all four tasks with
the proposed method compared to a single modal single task baseline on two test
sets, one external test set from a distinct data source (TCGA) and one internal
held-out test set (TTH). In the test sets, the performance improvements on the
averaged area under receiver operating characteristic curve across the four
tasks are 16.48% and 9.05% on TCGA and TTH, respectively. Such pathology
metadata prediction system may be adopted to mitigate the effort of expert
annotation and ultimately accelerate the data-driven research by better
utilization of the pathology biobank. | 2019-09-17T14:34:37Z | preprint version | null | null | null | null | null | null | null | null | null |
1,909.0793 | Ludwig: a type-based declarative deep learning toolbox | ['Piero Molino', 'Yaroslav Dudin', 'Sai Sumanth Miryala'] | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV', 'cs.SE', 'stat.ML'] | In this work we present Ludwig, a flexible, extensible and easy to use
toolbox which allows users to train deep learning models and use them for
obtaining predictions without writing code. Ludwig implements a novel approach
to deep learning model building based on two main abstractions: data types and
declarative configuration files. The data type abstraction allows for easier
code and sub-model reuse, and the standardized interfaces imposed by this
abstraction allow for encapsulation and make the code easy to extend.
Declarative model definition configuration files enable inexperienced users to
obtain effective models and increase the productivity of expert users.
Alongside these two innovations, Ludwig introduces a general modularized deep
learning architecture called Encoder-Combiner-Decoder that can be instantiated
to perform a vast amount of machine learning tasks. These innovations make it
possible for engineers, scientists from other fields and, in general, a much
broader audience to adopt deep learning models for their tasks, concretely
helping in its democratization. | 2019-09-17T16:54:29Z | null | null | null | null | null | null | null | null | null | null |
1,909.08053 | Megatron-LM: Training Multi-Billion Parameter Language Models Using
Model Parallelism | ['Mohammad Shoeybi', 'Mostofa Patwary', 'Raul Puri', 'Patrick LeGresley', 'Jared Casper', 'Bryan Catanzaro'] | ['cs.CL'] | Recent work in language modeling demonstrates that training large transformer
models advances the state of the art in Natural Language Processing
applications. However, very large models can be quite difficult to train due to
memory constraints. In this work, we present our techniques for training very
large transformer models and implement a simple, efficient intra-layer model
parallel approach that enables training transformer models with billions of
parameters. Our approach does not require a new compiler or library changes, is
orthogonal and complementary to pipeline model parallelism, and can be fully
implemented with the insertion of a few communication operations in native
PyTorch. We illustrate this approach by converging transformer based models up
to 8.3 billion parameters using 512 GPUs. We sustain 15.1 PetaFLOPs across the
entire application with 76% scaling efficiency when compared to a strong single
GPU baseline that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To
demonstrate that large language models can further advance the state of the art
(SOTA), we train an 8.3 billion parameter transformer language model similar to
GPT-2 and a 3.9 billion parameter model similar to BERT. We show that careful
attention to the placement of layer normalization in BERT-like models is
critical to achieving increased performance as the model size grows. Using the
GPT-2 model we achieve SOTA results on the WikiText103 (10.8 compared to SOTA
perplexity of 15.8) and LAMBADA (66.5% compared to SOTA accuracy of 63.2%)
datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9%
compared to SOTA accuracy of 89.4%). | 2019-09-17T19:42:54Z | null | null | null | Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism | ['M. Shoeybi', 'M. Patwary', 'Raul Puri', 'P. LeGresley', 'J. Casper', 'Bryan Catanzaro'] | 2,019 | arXiv.org | 1,926 | 62 | ['Computer Science'] |
1,909.08072 | Adversarial Attacks and Defenses in Images, Graphs and Text: A Review | ['Han Xu', 'Yao Ma', 'Haochen Liu', 'Debayan Deb', 'Hui Liu', 'Jiliang Tang', 'Anil K. Jain'] | ['cs.LG', 'cs.CR', 'stat.ML'] | Deep neural networks (DNN) have achieved unprecedented success in numerous
machine learning tasks in various domains. However, the existence of
adversarial examples has raised concerns about applying deep learning to
safety-critical applications. As a result, we have witnessed increasing
interest in studying attack and defense mechanisms for DNN models on different
data types, such as images, graphs and text. Thus, it is necessary to provide a
systematic and comprehensive overview of the main threats of attacks and the
success of corresponding countermeasures. In this survey, we review the state
of the art algorithms for generating adversarial examples and the
countermeasures against adversarial examples, for the three popular data types,
i.e., images, graphs and text. | 2019-09-17T20:07:23Z | survey, adversarial attacks, defenses | null | null | null | null | null | null | null | null | null |
1,909.08593 | Fine-Tuning Language Models from Human Preferences | ['Daniel M. Ziegler', 'Nisan Stiennon', 'Jeffrey Wu', 'Tom B. Brown', 'Alec Radford', 'Dario Amodei', 'Paul Christiano', 'Geoffrey Irving'] | ['cs.CL', 'cs.LG', 'stat.ML'] | Reward learning enables the application of reinforcement learning (RL) to
tasks where reward is defined by human judgment, building a model of reward by
asking humans questions. Most work on reward learning has used simulated
environments, but complex information about values is often expressed in
natural language, and we believe reward learning for language is a key to
making RL practical and safe for real-world tasks. In this paper, we build on
advances in generative pretraining of language models to apply reward learning
to four natural language tasks: continuing text with positive sentiment or
physically descriptive language, and summarization tasks on the TL;DR and
CNN/Daily Mail datasets. For stylistic continuation we achieve good results
with only 5,000 comparisons evaluated by humans. For summarization, models
trained with 60,000 comparisons copy whole sentences from the input but skip
irrelevant preamble; this leads to reasonable ROUGE scores and very good
performance according to our human labelers, but may be exploiting the fact
that labelers rely on simple heuristics. | 2019-09-18T17:33:39Z | null | null | null | Fine-Tuning Language Models from Human Preferences | ['Daniel M. Ziegler', 'Nisan Stiennon', 'Jeff Wu', 'Tom B. Brown', 'Alec Radford', 'Dario Amodei', 'Paul Christiano', 'G. Irving'] | 2,019 | arXiv.org | 1,776 | 53 | ['Computer Science', 'Mathematics'] |
1,909.09436 | CodeSearchNet Challenge: Evaluating the State of Semantic Code Search | ['Hamel Husain', 'Ho-Hsiang Wu', 'Tiferet Gazit', 'Miltiadis Allamanis', 'Marc Brockschmidt'] | ['cs.LG', 'cs.IR', 'cs.SE', 'stat.ML'] | Semantic code search is the task of retrieving relevant code given a natural
language query. While related to other information retrieval tasks, it requires
bridging the gap between the language used in code (often abbreviated and
highly technical) and natural language more suitable to describe vague concepts
and ideas.
To enable evaluation of progress on code search, we are releasing the
CodeSearchNet Corpus and are presenting the CodeSearchNet Challenge, which
consists of 99 natural language queries with about 4k expert relevance
annotations of likely results from CodeSearchNet Corpus. The corpus contains
about 6 million functions from open-source code spanning six programming
languages (Go, Java, JavaScript, PHP, Python, and Ruby). The CodeSearchNet
Corpus also contains automatically generated query-like natural language for 2
million functions, obtained from mechanically scraping and preprocessing
associated function documentation. In this article, we describe the methodology
used to obtain the corpus and expert labels, as well as a number of simple
baseline solutions for the task.
We hope that CodeSearchNet Challenge encourages researchers and practitioners
to study this interesting task further and will host a competition and
leaderboard to track the progress on the challenge. We are also keen on
extending CodeSearchNet Challenge to more queries and programming languages in
the future. | 2019-09-20T11:52:45Z | Updated evaluation numbers after fixing indexing bug | null | null | null | null | null | null | null | null | null |
1,909.09577 | NeMo: a toolkit for building AI applications using Neural Modules | ['Oleksii Kuchaiev', 'Jason Li', 'Huyen Nguyen', 'Oleksii Hrinchuk', 'Ryan Leary', 'Boris Ginsburg', 'Samuel Kriman', 'Stanislav Beliaev', 'Vitaly Lavrukhin', 'Jack Cook', 'Patrice Castonguay', 'Mariya Popova', 'Jocelyn Huang', 'Jonathan M. Cohen'] | ['cs.LG', 'cs.CL', 'cs.SD', 'eess.AS'] | NeMo (Neural Modules) is a Python framework-agnostic toolkit for creating AI
applications through re-usability, abstraction, and composition. NeMo is built
around neural modules, conceptual blocks of neural networks that take typed
inputs and produce typed outputs. Such modules typically represent data layers,
encoders, decoders, language models, loss functions, or methods of combining
activations. NeMo makes it easy to combine and re-use these building blocks
while providing a level of semantic correctness checking via its neural type
system. The toolkit comes with extendable collections of pre-built modules for
automatic speech recognition and natural language processing. Furthermore, NeMo
provides built-in support for distributed training and mixed precision on the
latest NVIDIA GPUs. NeMo is open-source https://github.com/NVIDIA/NeMo | 2019-09-14T03:51:46Z | 6 pages plus references | null | null | NeMo: a toolkit for building AI applications using Neural Modules | ['Oleksii Kuchaiev', 'Jason Li', 'Huyen Nguyen', 'Oleksii Hrinchuk', 'Ryan Leary', 'Boris Ginsburg', 'Samuel Kriman', 'Stanislav Beliaev', 'Vitaly Lavrukhin', 'Jack Cook', 'P. Castonguay', 'Mariya Popova', 'Jocelyn Huang', 'Jonathan M. Cohen'] | 2,019 | arXiv.org | 308 | 18 | ['Mathematics', 'Computer Science', 'Engineering'] |
1,909.10351 | TinyBERT: Distilling BERT for Natural Language Understanding | ['Xiaoqi Jiao', 'Yichun Yin', 'Lifeng Shang', 'Xin Jiang', 'Xiao Chen', 'Linlin Li', 'Fang Wang', 'Qun Liu'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Language model pre-training, such as BERT, has significantly improved the
performances of many natural language processing tasks. However, pre-trained
language models are usually computationally expensive, so it is difficult to
efficiently execute them on resource-restricted devices. To accelerate
inference and reduce model size while maintaining accuracy, we first propose a
novel Transformer distillation method that is specially designed for knowledge
distillation (KD) of the Transformer-based models. By leveraging this new KD
method, the abundant knowledge encoded in a large teacher BERT can be
effectively transferred to a small student TinyBERT. Then, we introduce a new
two-stage learning framework for TinyBERT, which performs Transformer
distillation at both the pretraining and task-specific learning stages. This
framework ensures that TinyBERT can capture the general-domain as well as the
task-specific knowledge in BERT.
TinyBERT with 4 layers is empirically effective and achieves more than 96.8% of
the performance of its teacher BERT-Base on the GLUE benchmark, while being 7.5x
smaller and 9.4x faster on inference. TinyBERT with 4 layers is also
significantly better than 4-layer state-of-the-art baselines on BERT
distillation, with only about 28% of the parameters and about 31% of the
inference time. Moreover, TinyBERT with 6 layers performs on par with its teacher
BERT-Base. | 2019-09-23T13:05:35Z | Findings of EMNLP 2020; results have been updated; code and model:
https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT | null | null | null | null | null | null | null | null | null |
1,909.10649 | Portuguese Named Entity Recognition using BERT-CRF | ['Fábio Souza', 'Rodrigo Nogueira', 'Roberto Lotufo'] | ['cs.CL', 'cs.IR', 'cs.LG'] | Recent advances in language representation using neural networks have made it
viable to transfer the learned internal states of a trained model to downstream
natural language processing tasks, such as named entity recognition (NER) and
question answering. It has been shown that the leverage of pre-trained language
models improves the overall performance on many tasks and is highly beneficial
when labeled data is scarce. In this work, we train Portuguese BERT models and
employ a BERT-CRF architecture to the NER task on the Portuguese language,
combining the transfer capabilities of BERT with the structured predictions of
CRF. We explore feature-based and fine-tuning training strategies for the BERT
model. Our fine-tuning approach obtains new state-of-the-art results on the
HAREM I dataset, improving the F1-score by 1 point on the selective scenario (5
NE classes) and by 4 points on the total scenario (10 NE classes). | 2019-09-23T23:21:42Z | null | null | null | null | null | null | null | null | null | null |
1,909.11065 | Segmentation Transformer: Object-Contextual Representations for Semantic
Segmentation | ['Yuhui Yuan', 'Xiaokang Chen', 'Xilin Chen', 'Jingdong Wang'] | ['cs.CV'] | In this paper, we address the semantic segmentation problem with a focus on
the context aggregation strategy. Our motivation is that the label of a pixel
is the category of the object that the pixel belongs to. We present a simple
yet effective approach, object-contextual representations, characterizing a
pixel by exploiting the representation of the corresponding object class.
First, we learn object regions under the supervision of ground-truth
segmentation. Second, we compute the object region representation by
aggregating the representations of the pixels lying in the object region. Last,
we compute the relation between each pixel and
each object region and augment the representation of each pixel with the
object-contextual representation which is a weighted aggregation of all the
object region representations according to their relations with the pixel. We
empirically demonstrate that the proposed approach achieves competitive
performance on various challenging semantic segmentation benchmarks:
Cityscapes, ADE20K, LIP, PASCAL-Context, and COCO-Stuff. Our submission "HRNet
+ OCR + SegFix" achieves 1st place on the Cityscapes leaderboard at the time of
submission.
Code is available at: https://git.io/openseg and https://git.io/HRNet.OCR. We
rephrase the object-contextual representation scheme using the Transformer
encoder-decoder framework. The details are presented in Section 3.3. | 2019-09-24T17:39:23Z | We rephrase the
Transformer encoder-decoder framework. ECCV 2020 Spotlight. Project Page:
https://github.com/openseg-group/openseg.pytorch
https://github.com/HRNet/HRNet-Semantic-Segmentation/tree/HRNet-OCR | ECCV 2020 | null | null | null | null | null | null | null | null |
1,909.11229 | Pretraining boosts out-of-domain robustness for pose estimation | ['Alexander Mathis', 'Thomas Biasi', 'Steffen Schneider', 'Mert Yüksekgönül', 'Byron Rogers', 'Matthias Bethge', 'Mackenzie W. Mathis'] | ['cs.CV', 'cs.LG'] | Neural networks are highly effective tools for pose estimation. However, as
in other computer vision tasks, robustness to out-of-domain data remains a
challenge, especially for small training sets that are common for real-world
applications. Here, we probe the generalization ability with three architecture
classes (MobileNetV2s, ResNets, and EfficientNets) for pose estimation. We
developed a dataset of 30 horses that allowed for both "within-domain" and
"out-of-domain" (unseen horse) benchmarking - this is a crucial test for
robustness that current human pose estimation benchmarks do not directly
address. We show that better ImageNet-performing architectures perform better
on both within- and out-of-domain data if they are first pretrained on
ImageNet. We additionally show that better ImageNet models generalize better
across animal species. Furthermore, we introduce Horse-C, a new benchmark for
common corruptions for pose estimation, and confirm that pretraining increases
performance in this domain shift context as well. Overall, our results
demonstrate that transfer learning is beneficial for out-of-domain robustness. | 2019-09-24T23:40:39Z | A.M. and T.B. co-first authors. Dataset available at
http://horse10.deeplabcut.org. WACV 2021 conference | https://openaccess.thecvf.com/content/WACV2021/html/Mathis_Pretraining_Boosts_Out-of-Domain_Robustness_for_Pose_Estimation_WACV_2021_paper.html | null | null | null | null | null | null | null | null
1,909.11646 | High Fidelity Speech Synthesis with Adversarial Networks | ['Mikołaj Bińkowski', 'Jeff Donahue', 'Sander Dieleman', 'Aidan Clark', 'Erich Elsen', 'Norman Casagrande', 'Luis C. Cobo', 'Karen Simonyan'] | ['cs.SD', 'cs.LG', 'eess.AS'] | Generative adversarial networks have seen rapid development in recent years
and have led to remarkable improvements in generative modelling of images.
However, their application in the audio domain has received limited attention,
and autoregressive models, such as WaveNet, remain the state of the art in
generative modelling of audio signals such as human speech. To address this
paucity, we introduce GAN-TTS, a Generative Adversarial Network for
Text-to-Speech. Our architecture is composed of a conditional feed-forward
generator producing raw speech audio, and an ensemble of discriminators which
operate on random windows of different sizes. The discriminators analyse the
audio both in terms of general realism, as well as how well the audio
corresponds to the utterance that should be pronounced. To measure the
performance of GAN-TTS, we employ both subjective human evaluation (MOS - Mean
Opinion Score), as well as novel quantitative metrics (Fréchet DeepSpeech
Distance and Kernel DeepSpeech Distance), which we find to be well correlated
with MOS. We show that GAN-TTS is capable of generating high-fidelity speech
with naturalness comparable to the state-of-the-art models, and unlike
autoregressive models, it is highly parallelisable thanks to an efficient
feed-forward generator. Listen to GAN-TTS reading this abstract at
https://storage.googleapis.com/deepmind-media/research/abstract.wav. | 2019-09-25T17:47:49Z | null | null | null | null | null | null | null | null | null | null |
1,909.11687 | Extremely Small BERT Models from Mixed-Vocabulary Training | ['Sanqiang Zhao', 'Raghav Gupta', 'Yang Song', 'Denny Zhou'] | ['cs.CL'] | Pretrained language models like BERT have achieved good results on NLP tasks,
but are impractical on resource-limited devices due to memory footprint. A
large fraction of this footprint comes from the input embeddings with large
input vocabulary and embedding dimensions. Existing knowledge distillation
methods used for model compression cannot be directly applied to train student
models with reduced vocabulary sizes. To this end, we propose a distillation
method to align the teacher and student embeddings via mixed-vocabulary
training. Our method compresses BERT-LARGE to a task-agnostic model with
smaller vocabulary and hidden dimensions, which is an order of magnitude
smaller than other distilled BERT models and offers a better size-accuracy
trade-off on language understanding benchmarks as well as a practical dialogue
task. | 2019-09-25T18:07:35Z | To appear at EACL 2021 | null | null | Extreme Language Model Compression with Optimal Subwords and Shared Projections | ['Sanqiang Zhao', 'Raghav Gupta', 'Yang Song', 'Denny Zhou'] | 2,019 | arXiv.org | 53 | 46 | ['Computer Science'] |
1,909.11942 | ALBERT: A Lite BERT for Self-supervised Learning of Language
Representations | ['Zhenzhong Lan', 'Mingda Chen', 'Sebastian Goodman', 'Kevin Gimpel', 'Piyush Sharma', 'Radu Soricut'] | ['cs.CL', 'cs.AI'] | Increasing model size when pretraining natural language representations often
results in improved performance on downstream tasks. However, at some point
further increases in model size become harder due to GPU/TPU memory limitations and
longer training times. To address these problems, we present two
parameter-reduction techniques to lower memory consumption and increase the
training speed of BERT. Comprehensive empirical evidence shows that our
proposed methods lead to models that scale much better compared to the original
BERT. We also use a self-supervised loss that focuses on modeling
inter-sentence coherence, and show it consistently helps downstream tasks with
multi-sentence inputs. As a result, our best model establishes new
state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having
fewer parameters compared to BERT-large. The code and the pretrained models are
available at https://github.com/google-research/ALBERT. | 2019-09-26T07:06:13Z | null | null | null | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | ['Zhenzhong Lan', 'Mingda Chen', 'Sebastian Goodman', 'Kevin Gimpel', 'Piyush Sharma', 'Radu Soricut'] | 2,019 | International Conference on Learning Representations | 6,488 | 72 | ['Computer Science'] |
1,909.12475 | Hidden Stratification Causes Clinically Meaningful Failures in Machine
Learning for Medical Imaging | ['Luke Oakden-Rayner', 'Jared Dunnmon', 'Gustavo Carneiro', 'Christopher Ré'] | ['cs.LG', 'stat.ML'] | Machine learning models for medical image analysis often suffer from poor
performance on important subsets of a population that are not identified during
training or testing. For example, overall performance of a cancer detection
model may be high, but the model still consistently misses a rare but
aggressive cancer subtype. We refer to this problem as hidden stratification,
and observe that it results from incompletely describing the meaningful
variation in a dataset. While hidden stratification can substantially reduce
the clinical efficacy of machine learning models, its effects remain difficult
to measure. In this work, we assess the utility of several possible techniques
for measuring and describing hidden stratification effects, and characterize
these effects on multiple medical imaging datasets. We find evidence that
hidden stratification can occur in unidentified imaging subsets with low
prevalence, low label quality, subtle distinguishing features, or spurious
correlates, and that it can result in relative performance differences of over
20% on clinically important subsets. Finally, we explore the clinical
implications of our findings, and suggest that evaluation of hidden
stratification should be a critical component of any machine learning
deployment in medical imaging. | 2019-09-27T02:42:58Z | Machine Learning for Health (ML4H) at NeurIPS 2019 - Extended
Abstract | null | null | Hidden stratification causes clinically meaningful failures in machine learning for medical imaging | ['Luke Oakden-Rayner', 'Jared A. Dunnmon', 'G. Carneiro', 'Christopher Ré'] | 2,019 | ACM Conference on Health, Inference, and Learning | 385 | 44 | ['Computer Science', 'Mathematics', 'Medicine'] |
1,909.13447 | DiPCo -- Dinner Party Corpus | ['Maarten Van Segbroeck', 'Ahmed Zaid', 'Ksenia Kutsenko', 'Cirenia Huerta', 'Tinh Nguyen', 'Xuewen Luo', 'Björn Hoffmeister', 'Jan Trmal', 'Maurizio Omologo', 'Roland Maas'] | ['eess.AS', 'cs.CL', 'cs.SD'] | We present a speech data corpus that simulates a "dinner party" scenario
taking place in an everyday home environment. The corpus was created by
recording multiple groups of four Amazon employee volunteers having a natural
conversation in English around a dining table. The participants were recorded
by a single-channel close-talk microphone and by five far-field 7-microphone
array devices positioned at different locations in the recording room. The
dataset contains the audio recordings and human labeled transcripts of a total
of 10 sessions with a duration between 15 and 45 minutes. The corpus was
created to advance in the field of noise robust and distant speech processing
and is intended to serve as a public research and benchmarking data set. | 2019-09-30T04:15:59Z | null | null | null | null | null | null | null | null | null | null |
1,909.13719 | RandAugment: Practical automated data augmentation with a reduced search
space | ['Ekin D. Cubuk', 'Barret Zoph', 'Jonathon Shlens', 'Quoc V. Le'] | ['cs.CV'] | Recent work has shown that data augmentation has the potential to
significantly improve the generalization of deep learning models. Recently,
automated augmentation strategies have led to state-of-the-art results in image
classification and object detection. While these strategies were optimized for
improving validation accuracy, they also led to state-of-the-art results in
semi-supervised learning and improved robustness to common corruptions of
images. An obstacle to a large-scale adoption of these methods is a separate
search phase which increases the training complexity and may substantially
increase the computational cost. Additionally, due to the separate search
phase, these approaches are unable to adjust the regularization strength based
on model or dataset size. Automated augmentation policies are often found by
training small models on small datasets and subsequently applied to train
larger models. In this work, we remove both of these obstacles. RandAugment has
a significantly reduced search space which allows it to be trained on the
target task with no need for a separate proxy task. Furthermore, due to the
parameterization, the regularization strength may be tailored to different
model and dataset sizes. RandAugment can be used uniformly across different
tasks and datasets and works out of the box, matching or surpassing all
previous automated augmentation approaches on CIFAR-10/100, SVHN, and ImageNet.
On the ImageNet dataset we achieve 85.0% accuracy, a 0.6% increase over the
previous state-of-the-art and 1.0% increase over baseline augmentation. On
object detection, RandAugment leads to 1.0-1.3% improvement over baseline
augmentation, and is within 0.3% mAP of AutoAugment on COCO. Finally, due to
its interpretable hyperparameter, RandAugment may be used to investigate the
role of data augmentation with varying model and dataset size. Code is
available online. | 2019-09-30T14:05:14Z | Added ablation experiments | null | null | Randaugment: Practical automated data augmentation with a reduced search space | ['E. D. Cubuk', 'Barret Zoph', 'Jonathon Shlens', 'Quoc V. Le'] | 2,019 | 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) | 3,522 | 61 | ['Computer Science'] |
1,910.00523 | BillSum: A Corpus for Automatic Summarization of US Legislation | ['Anastassia Kornilova', 'Vlad Eidelman'] | ['cs.CL'] | Automatic summarization methods have been studied on a variety of domains,
including news and scientific articles. Yet, legislation has not previously
been considered for this task, despite US Congress and state governments
releasing tens of thousands of bills every year. In this paper, we introduce
BillSum, the first dataset for summarization of US Congressional and California
state bills (https://github.com/FiscalNote/BillSum). We explain the properties
of the dataset that make it more challenging to process than other domains.
Then, we benchmark extractive methods that consider neural sentence
representations and traditional contextual features. Finally, we demonstrate
that models built on Congressional bills can be used to summarize California
bills, thus, showing that methods developed on this dataset can transfer to
states without human-written summaries. | 2019-10-01T16:25:12Z | null | null | 10.18653/v1/D19-5406 | null | null | null | null | null | null | null |
1,910.01108 | DistilBERT, a distilled version of BERT: smaller, faster, cheaper and
lighter | ['Victor Sanh', 'Lysandre Debut', 'Julien Chaumond', 'Thomas Wolf'] | ['cs.CL'] | As Transfer Learning from large-scale pre-trained models becomes more
prevalent in Natural Language Processing (NLP), operating these large models in
on-the-edge and/or under constrained computational training or inference
budgets remains challenging. In this work, we propose a method to pre-train a
smaller general-purpose language representation model, called DistilBERT, which
can then be fine-tuned with good performances on a wide range of tasks like its
larger counterparts. While most prior work investigated the use of distillation
for building task-specific models, we leverage knowledge distillation during
the pre-training phase and show that it is possible to reduce the size of a
BERT model by 40%, while retaining 97% of its language understanding
capabilities and being 60% faster. To leverage the inductive biases learned by
larger models during pre-training, we introduce a triple loss combining
language modeling, distillation and cosine-distance losses. Our smaller, faster
and lighter model is cheaper to pre-train and we demonstrate its capabilities
for on-device computations in a proof-of-concept experiment and a comparative
on-device study. | 2019-10-02T17:56:28Z | February 2020 - Revision: fix bug in evaluation metrics, updated
metrics, argumentation unchanged. 5 pages, 1 figure, 4 tables. Accepted at
the 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing
- NeurIPS 2019 | null | null | null | null | null | null | null | null | null |
1,910.02054 | ZeRO: Memory Optimizations Toward Training Trillion Parameter Models | ['Samyam Rajbhandari', 'Jeff Rasley', 'Olatunji Ruwase', 'Yuxiong He'] | ['cs.LG', 'cs.DC', 'stat.ML'] | Large deep learning models offer significant accuracy gains, but training
billions to trillions of parameters is challenging. Existing solutions such as
data and model parallelisms exhibit fundamental limitations to fit these models
into limited device memory, while obtaining computation, communication and
development efficiency. We develop a novel solution, Zero Redundancy Optimizer
(ZeRO), to optimize memory, vastly improving training speed while increasing
the model size that can be efficiently trained. ZeRO eliminates memory
redundancies in data- and model-parallel training while retaining low
communication volume and high computational granularity, allowing us to scale
the model size proportional to the number of devices with sustained high
efficiency. Our analysis on memory requirements and communication volume
demonstrates: ZeRO has the potential to scale beyond 1 Trillion parameters
using today's hardware.
We implement and evaluate ZeRO: it trains large models of over 100B parameters
with super-linear speedup on 400 GPUs, achieving throughput of 15 Petaflops.
This represents an 8x increase in model size and 10x increase in achievable
performance over state-of-the-art. In terms of usability, ZeRO can train large
models of up to 13B parameters (e.g., larger than Megatron GPT 8.3B and T5 11B)
without requiring model parallelism which is harder for scientists to apply.
Last but not least, researchers have used the system breakthroughs of ZeRO
to create the world's largest language model (Turing-NLG, 17B parameters) with
record breaking accuracy. | 2019-10-04T17:29:39Z | null | null | null | null | null | null | null | null | null | null |
1,910.02677 | Controllable Sentence Simplification | ['Louis Martin', 'Benoît Sagot', 'Éric de la Clergerie', 'Antoine Bordes'] | ['cs.CL'] | Text simplification aims at making a text easier to read and understand by
simplifying grammar and structure while keeping the underlying information
identical. It is often considered an all-purpose generic task where the same
simplification is suitable for all; however, multiple audiences can benefit from
simplified text in different ways. We adapt a discrete parametrization
mechanism that provides explicit control on simplification systems based on
Sequence-to-Sequence models. As a result, users can condition the
simplifications returned by a model on attributes such as length, amount of
paraphrasing, lexical complexity and syntactic complexity. We also show that
carefully chosen values of these attributes allow out-of-the-box
Sequence-to-Sequence models to outperform their standard counterparts on
simplification benchmarks. Our model, which we call ACCESS (as shorthand for
AudienCe-CEntric Sentence Simplification), establishes the state of the art at
41.87 SARI on the WikiLarge test set, a +1.42 improvement over the best
previously reported score. | 2019-10-07T09:00:26Z | Code and models: https://github.com/facebookresearch/access | null | null | Controllable Sentence Simplification | ['Louis Martin', 'Benoît Sagot', 'Eric Villemonte de la Clergerie', 'Antoine Bordes'] | 2,019 | International Conference on Language Resources and Evaluation | 147 | 63 | ['Computer Science'] |
1,910.03151 | ECA-Net: Efficient Channel Attention for Deep Convolutional Neural
Networks | ['Qilong Wang', 'Banggu Wu', 'Pengfei Zhu', 'Peihua Li', 'Wangmeng Zuo', 'Qinghua Hu'] | ['cs.CV'] | Recently, channel attention mechanism has demonstrated to offer great
potential in improving the performance of deep convolutional neural networks
(CNNs). However, most existing methods dedicate to developing more
sophisticated attention modules for achieving better performance, which
inevitably increase model complexity. To overcome the paradox of performance
and complexity trade-off, this paper proposes an Efficient Channel Attention
(ECA) module, which only involves a handful of parameters while bringing clear
performance gain. By dissecting the channel attention module in SENet, we
empirically show avoiding dimensionality reduction is important for learning
channel attention, and appropriate cross-channel interaction can preserve
performance while significantly decreasing model complexity. Therefore, we
propose a local cross-channel interaction strategy without dimensionality
reduction, which can be efficiently implemented via 1D convolution.
Furthermore, we develop a method to adaptively select the kernel size of the 1D
convolution, determining the coverage of local cross-channel interaction. The
proposed ECA module is efficient yet effective, e.g., the parameters and
computations of our module against the ResNet50 backbone are 80 vs. 24.37M and
4.7e-4 GFLOPs vs. 3.86 GFLOPs, respectively, and the performance boost is more
than 2% in terms of Top-1 accuracy. We extensively evaluate our ECA module on
image classification, object detection and instance segmentation with backbones
of ResNets and MobileNetV2. The experimental results show our module is more
efficient while performing favorably against its counterparts. | 2019-10-08T01:14:26Z | Accepted to CVPR 2020; Project Page:
https://github.com/BangguWu/ECANet | null | null | null | null | null | null | null | null | null |
1,910.03771 | HuggingFace's Transformers: State-of-the-art Natural Language Processing | ['Thomas Wolf', 'Lysandre Debut', 'Victor Sanh', 'Julien Chaumond', 'Clement Delangue', 'Anthony Moi', 'Pierric Cistac', 'Tim Rault', 'Rémi Louf', 'Morgan Funtowicz', 'Joe Davison', 'Sam Shleifer', 'Patrick von Platen', 'Clara Ma', 'Yacine Jernite', 'Julien Plu', 'Canwen Xu', 'Teven Le Scao', 'Sylvain Gugger', 'Mariama Drame', 'Quentin Lhoest', 'Alexander M. Rush'] | ['cs.CL'] | Recent progress in natural language processing has been driven by advances in
both model architecture and model pretraining. Transformer architectures have
facilitated building higher-capacity models and pretraining has made it
possible to effectively utilize this capacity for a wide variety of tasks.
Transformers is an open-source library with the goal of opening up
these advances to the wider machine learning community. The library consists of
carefully engineered state-of-the-art Transformer architectures under a unified
API. Backing this library is a curated collection of pretrained models made by
and available for the community. Transformers is designed to be
extensible by researchers, simple for practitioners, and fast and robust in
industrial deployments. The library is available at
https://github.com/huggingface/transformers. | 2019-10-09T03:23:22Z | 8 pages, 4 figures, more details at
https://github.com/huggingface/transformers | null | null | HuggingFace's Transformers: State-of-the-art Natural Language Processing | ['Thomas Wolf', 'Lysandre Debut', 'Victor Sanh', 'Julien Chaumond', 'Clement Delangue', 'Anthony Moi', 'Pierric Cistac', 'Tim Rault', 'Rémi Louf', 'Morgan Funtowicz', 'Joe Davison', 'Sam Shleifer', 'Patrick von Platen', 'Clara Ma', 'Yacine Jernite', 'J. Plu', 'Canwen Xu', 'Teven Le Scao', 'Sylvain Gugger', 'Mariama Drame', 'Quentin Lhoest', 'Alexander M. Rush'] | 2,019 | arXiv.org | 1,981 | 65 | ['Computer Science'] |
1,910.04073 | BHAAV- A Text Corpus for Emotion Analysis from Hindi Stories | ['Yaman Kumar', 'Debanjan Mahata', 'Sagar Aggarwal', 'Anmol Chugh', 'Rajat Maheshwari', 'Rajiv Ratn Shah'] | ['cs.CL'] | In this paper, we introduce the first and largest Hindi text corpus, named
BHAAV, which means emotions in Hindi, for analyzing emotions that a writer
expresses through his characters in a story, as perceived by a narrator/reader.
The corpus consists of 20,304 sentences collected from 230 different short
stories spanning across 18 genres such as Inspirational and Mystery. Each
sentence has been annotated into one of the five emotion categories - anger,
joy, suspense, sad, and neutral, by three native Hindi speakers with at least
ten years of formal education in Hindi. We also discuss challenges in the
annotation of low resource languages such as Hindi, and discuss the scope of
the proposed corpus along with its possible uses. We also provide a detailed
analysis of the dataset and train strong baseline classifiers reporting their
performances. | 2019-10-09T15:42:25Z | null | null | 10.5281/zenodo.3457467 | BHAAV- A Text Corpus for Emotion Analysis from Hindi Stories | ['Yaman Kumar Singla', 'Debanjan Mahata', 'Sagar Aggarwal', 'Anmol Chugh', 'Rajat Maheshwari', 'R. Shah'] | 2,019 | arXiv.org | 23 | 53 | ['Computer Science'] |
1,910.04396 | On Recognizing Texts of Arbitrary Shapes with 2D Self-Attention | ['Junyeop Lee', 'Sungrae Park', 'Jeonghun Baek', 'Seong Joon Oh', 'Seonghyeon Kim', 'Hwalsuk Lee'] | ['cs.CV'] | Scene text recognition (STR) is the task of recognizing character sequences
in natural scenes. While there have been great advances in STR methods, current
methods still fail to recognize texts in arbitrary shapes, such as heavily
curved or rotated texts, which are abundant in daily life (e.g. restaurant
signs, product labels, company logos, etc). This paper introduces a novel
architecture for recognizing texts of arbitrary shapes, named Self-Attention
Text Recognition Network (SATRN), which is inspired by the Transformer. SATRN
utilizes the self-attention mechanism to describe two-dimensional (2D) spatial
dependencies of characters in a scene text image. Exploiting the full-graph
propagation of self-attention, SATRN can recognize texts with arbitrary
arrangements and large inter-character spacing. As a result, SATRN outperforms
existing STR models by a large margin of 5.7 pp on average in "irregular text"
benchmarks. We provide empirical analyses that illustrate the inner mechanisms
and the extent to which the model is applicable (e.g. rotated and multi-line
text). We will open-source the code. | 2019-10-10T07:20:54Z | null | null | null | null | null | null | null | null | null | null |
1,910.04867 | A Large-scale Study of Representation Learning with the Visual Task
Adaptation Benchmark | ['Xiaohua Zhai', 'Joan Puigcerver', 'Alexander Kolesnikov', 'Pierre Ruyssen', 'Carlos Riquelme', 'Mario Lucic', 'Josip Djolonga', 'Andre Susano Pinto', 'Maxim Neumann', 'Alexey Dosovitskiy', 'Lucas Beyer', 'Olivier Bachem', 'Michael Tschannen', 'Marcin Michalski', 'Olivier Bousquet', 'Sylvain Gelly', 'Neil Houlsby'] | ['cs.CV', 'cs.LG', 'stat.ML'] | Representation learning promises to unlock deep learning for the long tail of
vision tasks without expensive labelled datasets. Yet, the absence of a unified
evaluation for general visual representations hinders progress. Popular
protocols are often too constrained (linear classification), limited in
diversity (ImageNet, CIFAR, Pascal-VOC), or only weakly related to
representation quality (ELBO, reconstruction error). We present the Visual Task
Adaptation Benchmark (VTAB), which defines good representations as those that
adapt to diverse, unseen tasks with few examples. With VTAB, we conduct a
large-scale study of many popular publicly-available representation learning
algorithms. We carefully control confounders such as architecture and tuning
budget. We address questions like: How effective are ImageNet representations
beyond standard natural datasets? How do representations trained via generative
and discriminative models compare? To what extent can self-supervision replace
labels? And, how close are we to general visual representations? | 2019-10-01T17:06:29Z | null | null | null | null | null | null | null | null | null | null |
1910.06180 | KonIQ-10k: An ecologically valid database for deep learning of blind
image quality assessment | ['Vlad Hosu', 'Hanhe Lin', 'Tamas Sziranyi', 'Dietmar Saupe'] | ['cs.CV', 'cs.MM', 'I.4.9; I.4.m'] | Deep learning methods for image quality assessment (IQA) are limited due to
the small size of existing datasets. Extensive datasets require substantial
resources both for generating publishable content and annotating it accurately.
We present a systematic and scalable approach to creating KonIQ-10k, the
largest IQA dataset to date, consisting of 10,073 quality scored images. It is
the first in-the-wild database aiming for ecological validity, concerning the
authenticity of distortions, the diversity of content, and quality-related
indicators. Through the use of crowdsourcing, we obtained 1.2 million reliable
quality ratings from 1,459 crowd workers, paving the way for more general IQA
models. We propose a novel deep learning model (KonCept512) that shows
excellent generalization beyond the test set (0.921 SROCC) to the current
state-of-the-art database, LIVE-in-the-Wild (0.825 SROCC). The model derives its
core performance from the InceptionResNet architecture, being trained at a
higher resolution than previous models (512x384). Correlation analysis shows
that KonCept512 performs similarly to having 9 subjective scores for each test
image. | 2019-10-14T14:38:48Z | Published | Trans. Image Proc. 29 (2020) 4041-4056 | 10.1109/TIP.2020.2967829 | null | null | null | null | null | null | null |
1910.06711 | MelGAN: Generative Adversarial Networks for Conditional Waveform
Synthesis | ['Kundan Kumar', 'Rithesh Kumar', 'Thibault de Boissiere', 'Lucas Gestin', 'Wei Zhen Teoh', 'Jose Sotelo', 'Alexandre de Brebisson', 'Yoshua Bengio', 'Aaron Courville'] | ['eess.AS', 'cs.CL', 'cs.LG', 'cs.SD'] | Previous works (Donahue et al., 2018a; Engel et al., 2019a) have found that
generating coherent raw audio waveforms with GANs is challenging. In this
paper, we show that it is possible to train GANs reliably to generate high
quality coherent waveforms by introducing a set of architectural changes and
simple training techniques. Subjective evaluation metric (Mean Opinion Score,
or MOS) shows the effectiveness of the proposed approach for high quality
mel-spectrogram inversion. To establish the generality of the proposed
techniques, we show qualitative results of our model in speech synthesis, music
domain translation and unconditional music synthesis. We evaluate the various
components of the model through ablation studies and suggest a set of
guidelines to design general purpose discriminators and generators for
conditional sequence synthesis tasks. Our model is non-autoregressive, fully
convolutional, with significantly fewer parameters than competing models and
generalizes to unseen speakers for mel-spectrogram inversion. Our PyTorch
implementation runs more than 100x faster than real-time on a GTX 1080Ti GPU
and more than 2x faster than real-time on CPU, without any hardware-specific
optimization tricks. | 2019-10-08T15:03:08Z | null | null | null | null | null | null | null | null | null | null |
1910.06764 | Stabilizing Transformers for Reinforcement Learning | ['Emilio Parisotto', 'H. Francis Song', 'Jack W. Rae', 'Razvan Pascanu', 'Caglar Gulcehre', 'Siddhant M. Jayakumar', 'Max Jaderberg', 'Raphael Lopez Kaufman', 'Aidan Clark', 'Seb Noury', 'Matthew M. Botvinick', 'Nicolas Heess', 'Raia Hadsell'] | ['cs.LG', 'cs.AI', 'stat.ML'] | Owing to their ability to both effectively integrate information over long
time horizons and scale to massive amounts of data, self-attention
architectures have recently shown breakthrough success in natural language
processing (NLP), achieving state-of-the-art results in domains such as
language modeling and machine translation. Harnessing the transformer's ability
to process long time horizons of information could provide a similar
performance boost in partially observable reinforcement learning (RL) domains,
but the large-scale transformers used in NLP have yet to be successfully
applied to the RL setting. In this work we demonstrate that the standard
transformer architecture is difficult to optimize, which was previously
observed in the supervised learning setting but becomes especially pronounced
with RL objectives. We propose architectural modifications that substantially
improve the stability and learning speed of the original Transformer and XL
variant. The proposed architecture, the Gated Transformer-XL (GTrXL), surpasses
LSTMs on challenging memory environments and achieves state-of-the-art results
on the multi-task DMLab-30 benchmark suite, exceeding the performance of an
external memory architecture. We show that the GTrXL, trained using the same
losses, has stability and performance that consistently matches or exceeds a
competitive LSTM baseline, including on more reactive tasks where memory is
less critical. GTrXL offers an easy-to-train, simple-to-implement but
substantially more expressive architectural alternative to the standard
multi-layer LSTM ubiquitously used for RL agents in partially observable
environments. | 2019-10-13T20:02:15Z | null | null | null | null | null | null | null | null | null | null |
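The stabilizing idea in the GTrXL row above is replacing plain residual connections with gating that starts close to the identity mapping. The toy scalar sketch below illustrates only that initialization idea; the paper's gating is a learned, GRU-style mechanism, and the scalar sigmoid gate and bias value here are simplifying assumptions:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def gated_residual(x, h, gate_bias=-2.0):
    # Toy gated skip connection: with a negative gate bias the gate starts
    # nearly closed, so the block initially passes x through almost unchanged
    # (near-identity), which is the stabilizing intuition behind GTrXL.
    z = sigmoid(gate_bias)
    return [(1.0 - z) * xi + z * hi for xi, hi in zip(x, h)]

print(gated_residual([1.0, 2.0], [10.0, 10.0]))  # weighted mostly toward x
```

With a very negative bias the output collapses to the skip path, mimicking an untrained block behaving like the identity.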
1910.06827 | Learning Generalisable Omni-Scale Representations for Person
Re-Identification | ['Kaiyang Zhou', 'Yongxin Yang', 'Andrea Cavallaro', 'Tao Xiang'] | ['cs.CV'] | An effective person re-identification (re-ID) model should learn feature
representations that are both discriminative, for distinguishing
similar-looking people, and generalisable, for deployment across datasets
without any adaptation. In this paper, we develop novel CNN architectures to
address both challenges. First, we present a re-ID CNN termed omni-scale
network (OSNet) to learn features that not only capture different spatial
scales but also encapsulate a synergistic combination of multiple scales,
namely omni-scale features. The basic building block consists of multiple
convolutional streams, each detecting features at a certain scale. For
omni-scale feature learning, a unified aggregation gate is introduced to
dynamically fuse multi-scale features with channel-wise weights. OSNet is
lightweight as its building blocks comprise factorised convolutions. Second, to
improve generalisable feature learning, we introduce instance normalisation
(IN) layers into OSNet to cope with cross-dataset discrepancies. Further, to
determine the optimal placements of these IN layers in the architecture, we
formulate an efficient differentiable architecture search algorithm. Extensive
experiments show that, in the conventional same-dataset setting, OSNet achieves
state-of-the-art performance, despite being much smaller than existing re-ID
models. In the more challenging yet practical cross-dataset setting, OSNet
beats most recent unsupervised domain adaptation methods without using any
target data. Our code and models are released at
\texttt{https://github.com/KaiyangZhou/deep-person-reid}. | 2019-10-15T14:44:16Z | TPAMI 2021. Journal extension of arXiv:1905.00953. Updates: added
appendix. arXiv admin note: text overlap with arXiv:1905.00953 | null | null | null | null | null | null | null | null | null |
1910.07467 | Root Mean Square Layer Normalization | ['Biao Zhang', 'Rico Sennrich'] | ['cs.LG', 'cs.CL', 'stat.ML'] | Layer normalization (LayerNorm) has been successfully applied to various deep
neural networks to help stabilize training and boost model convergence because
of its capability in handling re-centering and re-scaling of both inputs and
weight matrix. However, the computational overhead introduced by LayerNorm
makes these improvements expensive and significantly slows the underlying
network, RNNs in particular. In this paper, we hypothesize that
re-centering invariance in LayerNorm is dispensable and propose root mean
square layer normalization, or RMSNorm. RMSNorm regularizes the summed inputs
to a neuron in one layer according to root mean square (RMS), giving the model
re-scaling invariance property and implicit learning rate adaptation ability.
RMSNorm is computationally simpler and thus more efficient than LayerNorm. We
also present partial RMSNorm, or pRMSNorm where the RMS is estimated from p% of
the summed inputs without breaking the above properties. Extensive experiments
on several tasks using diverse network architectures show that RMSNorm achieves
comparable performance against LayerNorm but reduces the running time by 7%~64%
on different models. Source code is available at
https://github.com/bzhangGo/rmsnorm. | 2019-10-16T16:44:22Z | NeurIPS 2019 | null | null | null | null | null | null | null | null | null |
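The RMSNorm row above is concrete enough to sketch: the summed inputs are rescaled by their root mean square (no mean subtraction, i.e. no re-centering) and then multiplied by a learned gain. A minimal pure-Python illustration of that formula (not the authors' released implementation; the `eps` stabilizer is an assumption):

```python
import math

def rms_norm(x, gain, eps=1e-8):
    # Divide each component by the root mean square of the vector
    # (no re-centering), then apply a learned per-component gain.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms * g for v, g in zip(x, gain)]

x = [3.0, -4.0]                 # RMS = sqrt((9 + 16) / 2) ~ 3.536
print(rms_norm(x, [1.0, 1.0]))  # ~ [0.849, -1.131]
```

Dropping the mean subtraction is exactly where the claimed speedup over LayerNorm comes from: one fewer reduction pass per normalized vector.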
1910.07475 | MLQA: Evaluating Cross-lingual Extractive Question Answering | ['Patrick Lewis', 'Barlas Oğuz', 'Ruty Rinott', 'Sebastian Riedel', 'Holger Schwenk'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Question answering (QA) models have shown rapid progress enabled by the
availability of large, high-quality benchmark datasets. Such annotated datasets
are difficult and costly to collect, and rarely exist in languages other than
English, making training QA systems in other languages challenging. An
alternative to building large monolingual training datasets is to develop
cross-lingual systems which can transfer to a target language without requiring
training data in that language. In order to develop such systems, it is crucial
to invest in high quality multilingual evaluation benchmarks to measure
progress. We present MLQA, a multi-way aligned extractive QA evaluation
benchmark intended to spur research in this area. MLQA contains QA instances in
7 languages, namely English, Arabic, German, Spanish, Hindi, Vietnamese and
Simplified Chinese. It consists of over 12K QA instances in English and 5K in
each other language, with each QA instance being parallel between 4 languages
on average. MLQA is built using a novel alignment context strategy on Wikipedia
articles, and serves as a cross-lingual extension to existing extractive QA
datasets. We evaluate current state-of-the-art cross-lingual representations on
MLQA, and also provide machine-translation-based baselines. In all cases,
transfer results are shown to be significantly behind training-language
performance. | 2019-10-16T17:05:21Z | To appear in ACL 2020 | null | null | null | null | null | null | null | null | null |
1910.09700 | Quantifying the Carbon Emissions of Machine Learning | ['Alexandre Lacoste', 'Alexandra Luccioni', 'Victor Schmidt', 'Thomas Dandres'] | ['cs.CY', 'cs.LG'] | From an environmental standpoint, there are a few crucial aspects of training
a neural network that have a major impact on the quantity of carbon that it
emits. These factors include: the location of the server used for training and
the energy grid that it uses, the length of the training procedure, and even
the make and model of hardware on which the training takes place. In order to
approximate these emissions, we present our Machine Learning Emissions
Calculator, a tool for our community to better understand the environmental
impact of training ML models. We accompany this tool with an explanation of the
factors cited above, as well as concrete actions that individual practitioners
and organizations can take to mitigate their carbon emissions. | 2019-10-21T23:57:32Z | Machine Learning Emissions Calculator:
https://mlco2.github.io/impact/ | null | null | Quantifying the Carbon Emissions of Machine Learning | ['Alexandre Lacoste', 'A. Luccioni', 'Victor Schmidt', 'Thomas Dandres'] | 2,019 | arXiv.org | 715 | 26 | ['Computer Science'] |
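The factors the carbon-emissions abstract above names (grid carbon intensity, training duration, hardware power draw) combine into a simple back-of-envelope estimate. The sketch below is an illustrative approximation, not the calculator's exact methodology, and the example numbers are made up:

```python
def training_co2_kg(power_kw, hours, grid_kg_co2_per_kwh):
    # Energy used (kWh = kW * h) times the carbon intensity of the
    # local electricity grid (kg CO2 per kWh).
    return power_kw * hours * grid_kg_co2_per_kwh

# Hypothetical run: a 0.3 kW accelerator for 100 h on a 0.4 kgCO2/kWh grid.
print(training_co2_kg(0.3, 100, 0.4))  # 12.0 (kg CO2)
```

The same formula makes the abstract's point quantitative: moving the identical job to a low-carbon grid (say 0.02 kgCO2/kWh) cuts the estimate twentyfold without changing the model at all.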
1910.10093 | Torchreid: A Library for Deep Learning Person Re-Identification in
Pytorch | ['Kaiyang Zhou', 'Tao Xiang'] | ['cs.CV'] | Person re-identification (re-ID), which aims to re-identify people across
different camera views, has been significantly advanced by deep learning in
recent years, particularly with convolutional neural networks (CNNs). In this
paper, we present Torchreid, a software library built on PyTorch that allows
fast development and end-to-end training and evaluation of deep re-ID models.
As a general-purpose framework for person re-ID research, Torchreid provides
(1) unified data loaders that support 15 commonly used re-ID benchmark datasets
covering both image and video domains, (2) streamlined pipelines for quick
development and benchmarking of deep re-ID models, and (3) implementations of
the latest re-ID CNN architectures along with their pre-trained models to
facilitate reproducibility as well as future research. With a high-level
modularity in its design, Torchreid offers a great flexibility to allow easy
extension to new datasets, CNN models and loss functions. | 2019-10-22T16:33:05Z | Tech report | null | null | null | null | null | null | null | null | null |
1910.10261 | QuartzNet: Deep Automatic Speech Recognition with 1D Time-Channel
Separable Convolutions | ['Samuel Kriman', 'Stanislav Beliaev', 'Boris Ginsburg', 'Jocelyn Huang', 'Oleksii Kuchaiev', 'Vitaly Lavrukhin', 'Ryan Leary', 'Jason Li', 'Yang Zhang'] | ['eess.AS'] | We propose a new end-to-end neural acoustic model for automatic speech
recognition. The model is composed of multiple blocks with residual connections
between them. Each block consists of one or more modules with 1D time-channel
separable convolutional layers, batch normalization, and ReLU layers. It is
trained with CTC loss. The proposed network achieves near state-of-the-art
accuracy on LibriSpeech and Wall Street Journal, while having fewer parameters
than all competing models. We also demonstrate that this model can be
effectively fine-tuned on new datasets. | 2019-10-22T22:34:04Z | Submitted to ICASSP 2020 | null | null | null | null | null | null | null | null | null |
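The parameter savings behind QuartzNet's 1D time-channel separable convolutions can be made concrete: a depthwise convolution over time plus a pointwise channel mixer replaces one dense convolution. A small parameter-count sketch (bias terms omitted; the kernel and channel sizes are illustrative, not the paper's configuration):

```python
def conv1d_params(k, c_in, c_out):
    # Standard 1D conv: every output channel mixes all input channels
    # at all k time taps.
    return k * c_in * c_out

def separable_conv1d_params(k, c_in, c_out):
    # Time-channel separable: a depthwise conv over time (k weights per
    # input channel) followed by a 1x1 pointwise conv that mixes channels.
    return k * c_in + c_in * c_out

k, c = 33, 256
print(conv1d_params(k, c, c))            # 2162688
print(separable_conv1d_params(k, c, c))  # 73984
```

At these illustrative sizes the separable block uses roughly 3% of the dense block's weights, which is how such models stay small while keeping wide kernels.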
1910.10288 | Location-Relative Attention Mechanisms For Robust Long-Form Speech
Synthesis | ['Eric Battenberg', 'RJ Skerry-Ryan', 'Soroosh Mariooryad', 'Daisy Stanton', 'David Kao', 'Matt Shannon', 'Tom Bagby'] | ['cs.CL', 'cs.LG', 'cs.SD', 'eess.AS'] | Despite the ability to produce human-level speech for in-domain text,
attention-based end-to-end text-to-speech (TTS) systems suffer from text
alignment failures that increase in frequency for out-of-domain text. We show
that these failures can be addressed using simple location-relative attention
mechanisms that do away with content-based query/key comparisons. We compare
two families of attention mechanisms: location-relative GMM-based mechanisms
and additive energy-based mechanisms. We suggest simple modifications to
GMM-based attention that allow it to align quickly and consistently during
training, and introduce a new location-relative attention mechanism to the
additive energy-based family, called Dynamic Convolution Attention (DCA). We
compare the various mechanisms in terms of alignment speed and consistency
during training, naturalness, and ability to generalize to long utterances, and
conclude that GMM attention and DCA can generalize to very long utterances,
while preserving naturalness for shorter, in-domain utterances. | 2019-10-23T00:21:33Z | Accepted to ICASSP 2020 | null | null | Location-Relative Attention Mechanisms for Robust Long-Form Speech Synthesis | ['Eric Battenberg', 'R. Skerry-Ryan', 'Soroosh Mariooryad', 'Daisy Stanton', 'David Kao', 'Matt Shannon', 'Tom Bagby'] | 2,019 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 114 | 16 | ['Computer Science', 'Engineering'] |
1910.10655 | End-to-end Domain-Adversarial Voice Activity Detection | ['Marvin Lavechin', 'Marie-Philippe Gill', 'Ruben Bousbib', 'Hervé Bredin', 'Leibny Paola Garcia-Perera'] | ['eess.AS', 'I.2.7'] | Voice activity detection is the task of detecting speech regions in a given
audio stream or recording. First, we design a neural network combining
trainable filters and recurrent layers to tackle voice activity detection
directly from the waveform. Experiments on the challenging DIHARD dataset show
that the proposed end-to-end model reaches state-of-the-art performance and
outperforms a variant where trainable filters are replaced by standard cepstral
coefficients. Our second contribution aims at making the proposed voice
activity detection model robust to domain mismatch. To that end, a domain
classification branch is added to the network and trained in an adversarial
manner. The same DIHARD dataset, drawn from 11 different domains is used for
evaluation under two scenarios. In the in-domain scenario where the training
and test sets cover the exact same domains, we show that the domain-adversarial
approach does not degrade performance of the proposed end-to-end model. In the
out-domain scenario where the test domain is different from training domains,
it brings a relative improvement of more than 10%. Finally, our last
contribution is the provision of a fully reproducible open-source pipeline that
can be easily adapted to other datasets. | 2019-10-23T16:24:40Z | submitted to Interspeech 2020 | null | null | null | null | null | null | null | null | null |
1910.10683 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer | ['Colin Raffel', 'Noam Shazeer', 'Adam Roberts', 'Katherine Lee', 'Sharan Narang', 'Michael Matena', 'Yanqi Zhou', 'Wei Li', 'Peter J. Liu'] | ['cs.LG', 'cs.CL', 'stat.ML'] | Transfer learning, where a model is first pre-trained on a data-rich task
before being fine-tuned on a downstream task, has emerged as a powerful
technique in natural language processing (NLP). The effectiveness of transfer
learning has given rise to a diversity of approaches, methodology, and
practice. In this paper, we explore the landscape of transfer learning
techniques for NLP by introducing a unified framework that converts all
text-based language problems into a text-to-text format. Our systematic study
compares pre-training objectives, architectures, unlabeled data sets, transfer
approaches, and other factors on dozens of language understanding tasks. By
combining the insights from our exploration with scale and our new "Colossal
Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks
covering summarization, question answering, text classification, and more. To
facilitate future work on transfer learning for NLP, we release our data set,
pre-trained models, and code. | 2019-10-23T17:37:36Z | null | null | null | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | ['Colin Raffel', 'Noam M. Shazeer', 'Adam Roberts', 'Katherine Lee', 'Sharan Narang', 'Michael Matena', 'Yanqi Zhou', 'Wei Li', 'Peter J. Liu'] | 2,019 | Journal of machine learning research | 20,462 | 134 | ['Mathematics', 'Computer Science'] |
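The T5 row above hinges on casting every NLP problem into one text-to-text format, typically by prepending a task prefix to the plain-text input so a single sequence-to-sequence model handles all tasks with one loss. A toy sketch of that framing (the prefix strings are illustrative of the paper's convention, not an exhaustive or exact list):

```python
# Map heterogeneous tasks onto plain-text inputs via task prefixes, so one
# text-to-text model can serve translation, summarization, classification, etc.
PREFIXES = {
    "translate_en_de": "translate English to German: ",
    "summarize": "summarize: ",
    "classify_cola": "cola sentence: ",
}

def to_text_to_text(task, text):
    return PREFIXES[task] + text

print(to_text_to_text("summarize", "state authorities dispatched ..."))
```

Targets are plain text too (a translation, a summary, or a label word), which is what lets one pre-training objective transfer across all of them.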
1910.10687 | Context-Aware Sentence/Passage Term Importance Estimation For First
Stage Retrieval | ['Zhuyun Dai', 'Jamie Callan'] | ['cs.IR'] | Term frequency is a common method for identifying the importance of a term in
a query or document. But it is a weak signal, especially when the frequency
distribution is flat, such as in long queries or short documents where the text
is of sentence/passage-length. This paper proposes a Deep Contextualized Term
Weighting framework that learns to map BERT's contextualized text
representations to context-aware term weights for sentences and passages. When
applied to passages, DeepCT-Index produces term weights that can be stored in
an ordinary inverted index for passage retrieval. When applied to query text,
DeepCT-Query generates a weighted bag-of-words query. Both types of term weight
can be used directly by typical first-stage retrieval algorithms. This is novel
because most deep neural network based ranking models have higher computational
costs, and thus are restricted to later-stage rankers. Experiments on four
datasets demonstrate that DeepCT's deep contextualized text understanding
greatly improves the accuracy of first-stage retrieval algorithms. | 2019-10-23T17:42:35Z | null | null | null | Context-Aware Sentence/Passage Term Importance Estimation For First Stage Retrieval | ['Zhuyun Dai', 'Jamie Callan'] | 2,019 | arXiv.org | 192 | 38 | ['Computer Science'] |
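DeepCT-Index's trick in the row above, storing learned context-aware term weights in an ordinary inverted index, can be sketched by quantizing each weight into an integer pseudo term frequency. The scaling scheme and data layout below are assumptions for illustration, not the paper's exact recipe:

```python
def index_passage(doc_id, term_weights, index, scale=100):
    # Quantize each learned weight (assumed in [0, 1]) into an integer
    # pseudo-frequency that a standard TF/BM25-based first-stage
    # retriever can consume from a plain inverted index.
    for term, weight in term_weights.items():
        pseudo_tf = round(weight * scale)
        if pseudo_tf > 0:
            index.setdefault(term, {})[doc_id] = pseudo_tf

inverted_index = {}
index_passage("p1", {"bert": 0.83, "the": 0.01}, inverted_index)
print(inverted_index)  # {'bert': {'p1': 83}, 'the': {'p1': 1}}
```

Because the expensive contextual model runs once at indexing time, query-time retrieval stays as cheap as ordinary term-frequency search.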
1910.11480 | Parallel WaveGAN: A fast waveform generation model based on generative
adversarial networks with multi-resolution spectrogram | ['Ryuichi Yamamoto', 'Eunwoo Song', 'Jae-Min Kim'] | ['eess.AS', 'cs.LG', 'cs.SD', 'eess.SP'] | We propose Parallel WaveGAN, a distillation-free, fast, and small-footprint
waveform generation method using a generative adversarial network. In the
proposed method, a non-autoregressive WaveNet is trained by jointly optimizing
multi-resolution spectrogram and adversarial loss functions, which can
effectively capture the time-frequency distribution of the realistic speech
waveform. As our method does not require density distillation used in the
conventional teacher-student framework, the entire model can be easily trained.
Furthermore, our model is able to generate high-fidelity speech even with its
compact architecture. In particular, the proposed Parallel WaveGAN has only
1.44 M parameters and can generate 24 kHz speech waveform 28.68 times faster
than real-time on a single GPU environment. Perceptual listening test results
verify that our proposed method achieves 4.16 mean opinion score within a
Transformer-based text-to-speech framework, which is comparable to the best
distillation-based Parallel WaveNet system. | 2019-10-25T01:16:38Z | Accepted to the conference of ICASSP 2020 | null | null | null | null | null | null | null | null | null |
1910.11769 | DENS: A Dataset for Multi-class Emotion Analysis | ['Chen Liu', 'Muhammad Osama', 'Anderson de Andrade'] | ['cs.CL'] | We introduce a new dataset for multi-class emotion analysis from long-form
narratives in English. The Dataset for Emotions of Narrative Sequences (DENS)
was collected from both classic literature available on Project Gutenberg and
modern online narratives available on Wattpad, annotated using Amazon
Mechanical Turk. A number of statistics and baseline benchmarks are provided
for the dataset. Of the tested techniques, we find that the fine-tuning of a
pre-trained BERT model achieves the best results, with an average micro-F1
score of 60.4%. Our results show that the dataset provides a novel opportunity
in emotion analysis that requires moving beyond existing sentence-level
techniques. | 2019-10-25T14:40:14Z | Accepted to EMNLP 2019 | null | null | DENS: A Dataset for Multi-class Emotion Analysis | ['Chen Cecilia Liu', 'Muhammad Osama', 'Anderson de Andrade'] | 2,019 | Conference on Empirical Methods in Natural Language Processing | 37 | 23 | ['Computer Science'] |
1910.11856 | On the Cross-lingual Transferability of Monolingual Representations | ['Mikel Artetxe', 'Sebastian Ruder', 'Dani Yogatama'] | ['cs.CL', 'cs.AI', 'cs.LG'] | State-of-the-art unsupervised multilingual models (e.g., multilingual BERT)
have been shown to generalize in a zero-shot cross-lingual setting. This
generalization ability has been attributed to the use of a shared subword
vocabulary and joint training across multiple languages giving rise to deep
multilingual abstractions. We evaluate this hypothesis by designing an
alternative approach that transfers a monolingual model to new languages at the
lexical level. More concretely, we first train a transformer-based masked
language model on one language, and transfer it to a new language by learning a
new embedding matrix with the same masked language modeling objective, freezing
parameters of all other layers. This approach does not rely on a shared
vocabulary or joint training. However, we show that it is competitive with
multilingual BERT on standard cross-lingual classification benchmarks and on a
new Cross-lingual Question Answering Dataset (XQuAD). Our results contradict
common beliefs about the basis of the generalization ability of multilingual
models and suggest that deep monolingual models learn some abstractions that
generalize across languages. We also release XQuAD as a more comprehensive
cross-lingual benchmark, which comprises 240 paragraphs and 1190
question-answer pairs from SQuAD v1.1 translated into ten languages by
professional translators. | 2019-10-25T17:30:20Z | ACL 2020 | null | 10.18653/v1/2020.acl-main.421 | null | null | null | null | null | null | null |