arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1901.02860 | Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context | ['Zihang Dai', 'Zhilin Yang', 'Yiming Yang', 'Jaime Carbonell', 'Quoc V. Le', 'Ruslan Salakhutdinov'] | ['cs.LG', 'cs.CL', 'stat.ML'] | Transformers have the potential to learn longer-term dependencies, but are
limited by a fixed-length context in the setting of language modeling. We
propose a novel neural architecture Transformer-XL that enables learning
dependency beyond a fixed length without disrupting temporal coherence. It
consists of a segment-level recurrence mechanism and a novel positional
encoding scheme. Our method not only enables capturing longer-term dependency,
but also resolves the context fragmentation problem. As a result,
Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer
than vanilla Transformers, achieves better performance on both short and long
sequences, and is up to 1,800+ times faster than vanilla Transformers during
evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity
to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion
Word, and 54.5 on Penn Treebank (without finetuning). When trained only on
WikiText-103, Transformer-XL manages to generate reasonably coherent, novel
text articles with thousands of tokens. Our code, pretrained models, and
hyperparameters are available in both Tensorflow and PyTorch. | 2019-01-09T18:28:19Z | ACL 2019 long paper. Code and pretrained models are available at
https://github.com/kimiyoung/transformer-xl | null | null | Transformer-XL: Attentive Language Models beyond a Fixed-Length Context | ['Zihang Dai', 'Zhilin Yang', 'Yiming Yang', 'J. Carbonell', 'Quoc V. Le', 'R. Salakhutdinov'] | 2019 | Annual Meeting of the Association for Computational Linguistics | 3761 | 71 | ['Mathematics', 'Computer Science'] |
1901.04085 | Passage Re-ranking with BERT | ['Rodrigo Nogueira', 'Kyunghyun Cho'] | ['cs.IR', 'cs.CL', 'cs.LG'] | Recently, neural models pretrained on a language modeling task, such as ELMo
(Peters et al., 2017), OpenAI GPT (Radford et al., 2018), and BERT (Devlin et
al., 2018), have achieved impressive results on various natural language
processing tasks such as question-answering and natural language inference. In
this paper, we describe a simple re-implementation of BERT for query-based
passage re-ranking. Our system is the state of the art on the TREC-CAR dataset
and the top entry in the leaderboard of the MS MARCO passage retrieval task,
outperforming the previous state of the art by 27% (relative) in MRR@10. The
code to reproduce our results is available at
https://github.com/nyu-dl/dl4marco-bert | 2019-01-13T23:27:58Z | null | null | null | Passage Re-ranking with BERT | ['Rodrigo Nogueira', 'Kyunghyun Cho'] | 2019 | arXiv.org | 1099 | 24 | ['Computer Science'] |
1901.04780 | DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion | ['Chen Wang', 'Danfei Xu', 'Yuke Zhu', 'Roberto Martín-Martín', 'Cewu Lu', 'Li Fei-Fei', 'Silvio Savarese'] | ['cs.CV', 'cs.RO'] | A key technical challenge in performing 6D object pose estimation from RGB-D
images is to fully leverage the two complementary data sources. Prior works
either extract information from the RGB image and depth separately or use
costly post-processing steps, limiting their performance in highly cluttered
scenes and real-time applications. In this work, we present DenseFusion, a
generic framework for estimating 6D pose of a set of known objects from RGB-D
images. DenseFusion is a heterogeneous architecture that processes the two data
sources individually and uses a novel dense fusion network to extract
pixel-wise dense feature embedding, from which the pose is estimated.
Furthermore, we integrate an end-to-end iterative pose refinement procedure
that further improves the pose estimation while achieving near real-time
inference. Our experiments show that our method outperforms state-of-the-art
approaches in two datasets, YCB-Video and LineMOD. We also deploy our proposed
method to a real robot to grasp and manipulate objects based on the estimated
pose. | 2019-01-15T11:58:04Z | null | null | null | DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion | ['Chen Wang', 'Danfei Xu', 'Yuke Zhu', 'Roberto Martín-Martín', 'Cewu Lu', 'Li Fei-Fei', 'S. Savarese'] | 2019 | Computer Vision and Pattern Recognition | 965 | 45 | ['Computer Science'] |
1901.04856 | Sharing emotions at scale: The Vent dataset | ['Nikolaos Lykousas', 'Costantinos Patsakis', 'Andreas Kaltenbrunner', 'Vicenç Gómez'] | ['cs.SI', 'cs.HC'] | The continuous and increasing use of social media has enabled the expression
of human thoughts, opinions, and everyday actions publicly at an unprecedented
scale. We present the Vent dataset, the largest annotated dataset of text,
emotions, and social connections to date. It comprises more than 33 million
posts by nearly a million users together with their social connections. Each
post has an associated emotion. There are 705 different emotions, organized in
63 "emotion categories", forming a two-level taxonomy of affects. Our initial
statistical analysis describes the global patterns of activity in the Vent
platform, revealing large heterogeneities and certain remarkable regularities
regarding the use of the different emotions. We focus on the aggregated use of
emotions, the temporal activity, and the social network of users, and outline
possible methods to infer emotion networks based on the user activity. We also
analyze the text and describe the affective landscape of Vent, finding
agreements with existing (small scale) annotated corpora in terms of emotion
categories and positive/negative valences. Finally, we discuss possible
research questions that can be addressed from this unique dataset. | 2019-01-15T14:39:34Z | 9 pages, 12 figures, 2 tables. Accepted at the 13th International
AAAI Conference on Web and Social Media (ICWSM 2019) | null | null | null | null | null | null | null | null | null |
1901.06081 | DeepOtsu: Document Enhancement and Binarization using Iterative Deep
Learning | ['Sheng He', 'Lambert Schomaker'] | ['cs.CV'] | This paper presents a novel iterative deep learning framework and applies it
for document enhancement and binarization. Unlike the traditional methods which
predict the binary label of each pixel on the input image, we train the neural
network to learn the degradations in document images and produce the uniform
images of the degraded input images, which allows the network to refine the
output iteratively. Two different iterative methods have been studied in this
paper: recurrent refinement (RR) which uses the same trained neural network in
each iteration for document enhancement and stacked refinement (SR) which uses
a stack of different neural networks for iterative output refinement. Given the
learned uniform and enhanced image, the binarization map can be easily obtained
by a global or local threshold. The experimental results on several public
benchmark data sets show that our proposed methods provide a new clean version
of the degraded image which is suitable for visualization and promising results
of binarization using the global Otsu's threshold based on the enhanced images
learned iteratively by the neural network. | 2019-01-18T04:23:51Z | Accepted by Pattern Recognition | null | 10.1016/j.patcog.2019.01.025 | null | null | null | null | null | null | null |
1901.07042 | MIMIC-CXR-JPG, a large publicly available database of labeled chest
radiographs | ['Alistair E. W. Johnson', 'Tom J. Pollard', 'Nathaniel R. Greenbaum', 'Matthew P. Lungren', 'Chih-ying Deng', 'Yifan Peng', 'Zhiyong Lu', 'Roger G. Mark', 'Seth J. Berkowitz', 'Steven Horng'] | ['cs.CV', 'cs.LG', 'eess.IV'] | Chest radiography is an extremely powerful imaging modality, allowing for a
detailed inspection of a patient's thorax, but requiring specialized training
for proper interpretation. With the advent of high performance general purpose
computer vision algorithms, the accurate automated analysis of chest
radiographs is becoming increasingly of interest to researchers. However, a key
challenge in the development of these techniques is the lack of sufficient
data. Here we describe MIMIC-CXR-JPG v2.0.0, a large dataset of 377,110 chest
x-rays associated with 227,827 imaging studies sourced from the Beth Israel
Deaconess Medical Center between 2011 and 2016. Images are provided with 14
labels derived from two natural language processing tools applied to the
corresponding free-text radiology reports. MIMIC-CXR-JPG is derived entirely
from the MIMIC-CXR database, and aims to provide a convenient processed version
of MIMIC-CXR, as well as to provide a standard reference for data splits and
image labels. All images have been de-identified to protect patient privacy.
The dataset is made freely available to facilitate and encourage a wide range
of research in medical computer vision. | 2019-01-21T19:01:00Z | null | null | null | MIMIC-CXR-JPG, a large publicly available database of labeled chest radiographs | ['Alistair E. W. Johnson', 'T. Pollard', 'Nathaniel R. Greenbaum', 'M. Lungren', 'Chih-ying Deng', 'Yifan Peng', 'Zhiyong Lu', 'R. Mark', 'S. Berkowitz', 'S. Horng'] | 2019 | null | 825 | 20 | ['Computer Science', 'Engineering'] |
1901.07291 | Cross-lingual Language Model Pretraining | ['Guillaume Lample', 'Alexis Conneau'] | ['cs.CL'] | Recent studies have demonstrated the efficiency of generative pretraining for
English natural language understanding. In this work, we extend this approach
to multiple languages and show the effectiveness of cross-lingual pretraining.
We propose two methods to learn cross-lingual language models (XLMs): one
unsupervised that only relies on monolingual data, and one supervised that
leverages parallel data with a new cross-lingual language model objective. We
obtain state-of-the-art results on cross-lingual classification, unsupervised
and supervised machine translation. On XNLI, our approach pushes the state of
the art by an absolute gain of 4.9% accuracy. On unsupervised machine
translation, we obtain 34.3 BLEU on WMT'16 German-English, improving the
previous state of the art by more than 9 BLEU. On supervised machine
translation, we obtain a new state of the art of 38.5 BLEU on WMT'16
Romanian-English, outperforming the previous best approach by more than 4 BLEU.
Our code and pretrained models will be made publicly available. | 2019-01-22T13:22:34Z | null | null | null | Cross-lingual Language Model Pretraining | ['Guillaume Lample', 'Alexis Conneau'] | 2019 | Neural Information Processing Systems | 2753 | 52 | ['Computer Science'] |
1901.07441 | PadChest: A large chest x-ray image dataset with multi-label annotated
reports | ['Aurelia Bustos', 'Antonio Pertusa', 'Jose-Maria Salinas', 'Maria de la Iglesia-Vayá'] | ['eess.IV', 'cs.CV', '92B20, 92C50, 68T50, 92B10'] | We present a labeled large-scale, high resolution chest x-ray dataset for the
automated exploration of medical images along with their associated reports.
This dataset includes more than 160,000 images obtained from 67,000 patients
that were interpreted and reported by radiologists at Hospital San Juan
(Spain) from 2009 to 2017, covering six different position views and
additional information on image acquisition and patient demography. The reports
were labeled with 174 different radiographic findings, 19 differential
diagnoses and 104 anatomic locations organized as a hierarchical taxonomy and
mapped onto standard Unified Medical Language System (UMLS) terminology. Of
these reports, 27% were manually annotated by trained physicians and the
remaining set was labeled using a supervised method based on a recurrent neural
network with attention mechanisms. The labels generated were then validated in
an independent test set achieving a 0.93 Micro-F1 score. To the best of our
knowledge, this is one of the largest public chest x-ray databases suitable for
training supervised models concerning radiographs, and the first to contain
radiographic reports in Spanish. The PadChest dataset can be downloaded from
http://bimcv.cipf.es/bimcv-projects/padchest/. | 2019-01-22T16:04:27Z | null | Med. Image Anal., 66 (2020), 101797 | 10.1016/j.media.2020.101797 | null | null | null | null | null | null | null |
1901.08149 | TransferTransfo: A Transfer Learning Approach for Neural Network Based
Conversational Agents | ['Thomas Wolf', 'Victor Sanh', 'Julien Chaumond', 'Clement Delangue'] | ['cs.CL'] | We introduce a new approach to generative data-driven dialogue systems (e.g.
chatbots) called TransferTransfo which is a combination of a Transfer learning
based training scheme and a high-capacity Transformer model. Fine-tuning is
performed by using a multi-task objective which combines several unsupervised
prediction tasks. The resulting fine-tuned model shows strong improvements over
the current state-of-the-art end-to-end conversational models like memory
augmented seq2seq and information-retrieval models. On the privately held
PERSONA-CHAT dataset of the Conversational Intelligence Challenge 2, this
approach obtains a new state-of-the-art, with respective perplexity, Hits@1 and
F1 metrics of 16.28 (45% absolute improvement), 80.7 (46% absolute
improvement) and 19.5 (20% absolute improvement). | 2019-01-23T22:08:01Z | 6 pages, 2 figures, 2 tables, NeurIPS 2018 CAI Workshop | null | null | TransferTransfo: A Transfer Learning Approach for Neural Network Based Conversational Agents | ['Thomas Wolf', 'Victor Sanh', 'Julien Chaumond', 'Clement Delangue'] | 2019 | arXiv.org | 500 | 18 | ['Computer Science'] |
1901.08746 | BioBERT: a pre-trained biomedical language representation model for
biomedical text mining | ['Jinhyuk Lee', 'Wonjin Yoon', 'Sungdong Kim', 'Donghyeon Kim', 'Sunkyu Kim', 'Chan Ho So', 'Jaewoo Kang'] | ['cs.CL'] | Biomedical text mining is becoming increasingly important as the number of
biomedical documents rapidly grows. With the progress in natural language
processing (NLP), extracting valuable information from biomedical literature
has gained popularity among researchers, and deep learning has boosted the
development of effective biomedical text mining models. However, directly
applying the advancements in NLP to biomedical text mining often yields
unsatisfactory results due to a word distribution shift from general domain
corpora to biomedical corpora. In this article, we investigate how the recently
introduced pre-trained language model BERT can be adapted for biomedical
corpora. We introduce BioBERT (Bidirectional Encoder Representations from
Transformers for Biomedical Text Mining), which is a domain-specific language
representation model pre-trained on large-scale biomedical corpora. With almost
the same architecture across tasks, BioBERT largely outperforms BERT and
previous state-of-the-art models in a variety of biomedical text mining tasks
when pre-trained on biomedical corpora. While BERT obtains performance
comparable to that of previous state-of-the-art models, BioBERT significantly
outperforms them on the following three representative biomedical text mining
tasks: biomedical named entity recognition (0.62% F1 score improvement),
biomedical relation extraction (2.80% F1 score improvement) and biomedical
question answering (12.24% MRR improvement). Our analysis results show that
pre-training BERT on biomedical corpora helps it to understand complex
biomedical texts. We make the pre-trained weights of BioBERT freely available
at https://github.com/naver/biobert-pretrained, and the source code for
fine-tuning BioBERT available at https://github.com/dmis-lab/biobert. | 2019-01-25T05:57:24Z | Bioinformatics | null | 10.1093/bioinformatics/btz682 | null | null | null | null | null | null | null |
1901.10995 | Go-Explore: a New Approach for Hard-Exploration Problems | ['Adrien Ecoffet', 'Joost Huizinga', 'Joel Lehman', 'Kenneth O. Stanley', 'Jeff Clune'] | ['cs.LG', 'cs.AI', 'stat.ML'] | A grand challenge in reinforcement learning is intelligent exploration,
especially when rewards are sparse or deceptive. Two Atari games serve as
benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall.
On both games, current RL algorithms perform poorly, even those with intrinsic
motivation, which is the dominant method to improve performance on
hard-exploration domains. To address this shortfall, we introduce a new
algorithm called Go-Explore. It exploits the following principles: (1) remember
previously visited states, (2) first return to a promising state (without
exploration), then explore from it, and (3) solve simulated environments
through any available means (including by introducing determinism), then
robustify via imitation learning. The combined effect of these principles is a
dramatic performance improvement on hard-exploration problems. On Montezuma's
Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the
previous state of the art. Go-Explore can also harness human-provided domain
knowledge and, when augmented with it, scores a mean of over 650k points on
Montezuma's Revenge. Its max performance of nearly 18 million surpasses the
human world record, meeting even the strictest definition of "superhuman"
performance. On Pitfall, Go-Explore with domain knowledge is the first
algorithm to score above zero. Its mean score of almost 60k points exceeds
expert human performance. Because Go-Explore produces high-performing
demonstrations automatically and cheaply, it also outperforms imitation
learning work where humans provide solution demonstrations. Go-Explore opens up
many new research directions into improving it and weaving its insights into
current RL algorithms. It may also enable progress on previously unsolvable
hard-exploration problems in many domains, especially those that harness a
simulator during training (e.g. robotics). | 2019-01-30T18:40:37Z | 37 pages, 14 figures; added references to Goyal et al. and Oh et al.,
updated reference to Colas et al; updated author emails; point readers to
updated paper | null | null | null | null | null | null | null | null | null |
1902.00751 | Parameter-Efficient Transfer Learning for NLP | ['Neil Houlsby', 'Andrei Giurgiu', 'Stanislaw Jastrzebski', 'Bruna Morrone', 'Quentin de Laroussilhe', 'Andrea Gesmundo', 'Mona Attariyan', 'Sylvain Gelly'] | ['cs.LG', 'cs.CL', 'stat.ML'] | Fine-tuning large pre-trained models is an effective transfer mechanism in
NLP. However, in the presence of many downstream tasks, fine-tuning is
parameter inefficient: an entire new model is required for every task. As an
alternative, we propose transfer with adapter modules. Adapter modules yield a
compact and extensible model; they add only a few trainable parameters per
task, and new tasks can be added without revisiting previous ones. The
parameters of the original network remain fixed, yielding a high degree of
parameter sharing. To demonstrate the adapters' effectiveness, we transfer the
recently proposed BERT Transformer model to 26 diverse text classification
tasks, including the GLUE benchmark. Adapters attain near state-of-the-art
performance, whilst adding only a few parameters per task. On GLUE, we attain
within 0.4% of the performance of full fine-tuning, adding only 3.6% parameters
per task. By contrast, fine-tuning trains 100% of the parameters per task. | 2019-02-02T16:29:47Z | null | null | null | null | null | null | null | null | null | null |
1902.06426 | 2017 Robotic Instrument Segmentation Challenge | ['Max Allan', 'Alex Shvets', 'Thomas Kurmann', 'Zichen Zhang', 'Rahul Duggal', 'Yun-Hsuan Su', 'Nicola Rieke', 'Iro Laina', 'Niveditha Kalavakonda', 'Sebastian Bodenstedt', 'Luis Herrera', 'Wenqi Li', 'Vladimir Iglovikov', 'Huoling Luo', 'Jian Yang', 'Danail Stoyanov', 'Lena Maier-Hein', 'Stefanie Speidel', 'Mahdi Azizian'] | ['cs.CV'] | In mainstream computer vision and machine learning, public datasets such as
ImageNet, COCO and KITTI have helped drive enormous improvements by enabling
researchers to understand the strengths and limitations of different algorithms
via performance comparison. However, this type of approach has had limited
translation to problems in robotic assisted surgery as this field has never
established the same level of common datasets and benchmarking methods. In 2015
a sub-challenge was introduced at the EndoVis workshop where a set of robotic
images were provided with automatically generated annotations from robot
forward kinematics. However, there were issues with this dataset due to the
limited background variation, lack of complex motion and inaccuracies in the
annotation. In this work we present the results of the 2017 challenge on
robotic instrument segmentation which involved 10 teams participating in
binary, parts and type based segmentation of articulated da Vinci robotic
instruments. | 2019-02-18T07:08:36Z | null | null | null | null | null | null | null | null | null | null |
1902.06634 | Contextual Encoder-Decoder Network for Visual Saliency Prediction | ['Alexander Kroner', 'Mario Senden', 'Kurt Driessens', 'Rainer Goebel'] | ['cs.CV'] | Predicting salient regions in natural images requires the detection of
objects that are present in a scene. To develop robust representations for this
challenging task, high-level visual features at multiple spatial scales must be
extracted and augmented with contextual information. However, existing models
aimed at explaining human fixation maps do not incorporate such a mechanism
explicitly. Here we propose an approach based on a convolutional neural network
pre-trained on a large-scale image classification task. The architecture forms
an encoder-decoder structure and includes a module with multiple convolutional
layers at different dilation rates to capture multi-scale features in parallel.
Moreover, we combine the resulting representations with global scene
information for accurately predicting visual saliency. Our model achieves
competitive and consistent results across multiple evaluation metrics on two
public saliency benchmarks and we demonstrate the effectiveness of the
suggested approach on five datasets and selected examples. Compared to
state-of-the-art approaches, the network is based on a lightweight image classification
backbone and hence presents a suitable choice for applications with limited
computational resources, such as (virtual) robotic systems, to estimate human
fixations across complex natural scenes. | 2019-02-18T16:15:25Z | Updated contact information | Neural Networks, 2020, Volume 129, Pages 261-270, ISSN 0893-6080 | 10.1016/j.neunet.2020.05.004 | null | null | null | null | null | null | null |
1902.09212 | Deep High-Resolution Representation Learning for Human Pose Estimation | ['Ke Sun', 'Bin Xiao', 'Dong Liu', 'Jingdong Wang'] | ['cs.CV'] | This is an official pytorch implementation of Deep High-Resolution
Representation Learning for Human Pose Estimation. In this work, we are
interested in the human pose estimation problem with a focus on learning
reliable high-resolution representations. Most existing methods recover
high-resolution representations from low-resolution representations produced by
a high-to-low resolution network. Instead, our proposed network maintains
high-resolution representations through the whole process. We start from a
high-resolution subnetwork as the first stage, gradually add high-to-low
resolution subnetworks one by one to form more stages, and connect the
multi-resolution subnetworks in parallel. We conduct repeated multi-scale
fusions such that each of the high-to-low resolution representations receives
information from other parallel representations over and over, leading to rich
high-resolution representations. As a result, the predicted keypoint heatmap is
potentially more accurate and spatially more precise. We empirically
demonstrate the effectiveness of our network through the superior pose
estimation results over two benchmark datasets: the COCO keypoint detection
dataset and the MPII Human Pose dataset. The code and models have been publicly
available at
\url{https://github.com/leoxiaobin/deep-high-resolution-net.pytorch}. | 2019-02-25T11:55:28Z | accepted by CVPR2019 | null | null | null | null | null | null | null | null | null |
1902.09476 | MedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts | ['Sunil Mohan', 'Donghui Li'] | ['cs.CL', 'cs.LG'] | This paper presents the formal release of MedMentions, a new manually
annotated resource for the recognition of biomedical concepts. What
distinguishes MedMentions from other annotated biomedical corpora is its size
(over 4,000 abstracts and over 350,000 linked mentions), as well as the size of
the concept ontology (over 3 million concepts from UMLS 2017) and its broad
coverage of biomedical disciplines. In addition to the full corpus, a
sub-corpus of MedMentions is also presented, comprising annotations for a
subset of UMLS 2017 targeted towards document retrieval. To encourage research
in Biomedical Named Entity Recognition and Linking, data splits for training
and testing are included in the release, and a baseline model and its metrics
for entity linking are also described. | 2019-02-25T17:53:20Z | To appear in AKBC 2019 | null | null | null | null | null | null | null | null | null |
1902.09811 | LaSO: Label-Set Operations networks for multi-label few-shot learning | ['Amit Alfassy', 'Leonid Karlinsky', 'Amit Aides', 'Joseph Shtok', 'Sivan Harary', 'Rogerio Feris', 'Raja Giryes', 'Alex M. Bronstein'] | ['cs.CV'] | Example synthesis is one of the leading methods to tackle the problem of
few-shot learning, where only a small number of samples per class are
available. However, current synthesis approaches only address the scenario of a
single category label per image. In this work, we propose a novel technique for
synthesizing samples with multiple labels for the (yet unhandled) multi-label
few-shot classification scenario. We propose to combine pairs of given examples
in feature space, so that the resulting synthesized feature vectors will
correspond to examples whose label sets are obtained through certain set
operations on the label sets of the corresponding input pairs. Thus, our method
is capable of producing a sample containing the intersection, union or
set-difference of labels present in two input samples. As we show, these set
operations generalize to labels unseen during training. This enables performing
augmentation on examples of novel categories, thus, facilitating multi-label
few-shot classifier learning. We conduct numerous experiments showing promising
results for the label-set manipulation capabilities of the proposed approach,
both directly (using the classification and retrieval metrics), and in the
context of performing data augmentation for multi-label few-shot learning. We
propose a benchmark for this new and challenging task and show that our method
compares favorably to all the common baselines. | 2019-02-26T09:12:09Z | null | null | null | null | null | null | null | null | null | null |
1902.10191 | EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs | ['Aldo Pareja', 'Giacomo Domeniconi', 'Jie Chen', 'Tengfei Ma', 'Toyotaro Suzumura', 'Hiroki Kanezashi', 'Tim Kaler', 'Tao B. Schardl', 'Charles E. Leiserson'] | ['cs.LG', 'cs.SI', 'stat.ML'] | Graph representation learning resurges as a trending research subject owing
to the widespread use of deep learning for Euclidean data, which inspires
various creative designs of neural networks in the non-Euclidean domain,
particularly graphs. With the success of these graph neural networks (GNN) in
the static setting, we approach further practical scenarios where the graph
dynamically evolves. Existing approaches typically resort to node embeddings
and use a recurrent neural network (RNN, broadly speaking) to regulate the
embeddings and learn the temporal dynamics. These methods require the knowledge
of a node in the full time span (including both training and testing) and are
less applicable to the frequent change of the node set. In some extreme
scenarios, the node sets at different time steps may completely differ. To
resolve this challenge, we propose EvolveGCN, which adapts the graph
convolutional network (GCN) model along the temporal dimension without
resorting to node embeddings. The proposed approach captures the dynamism of
the graph sequence through using an RNN to evolve the GCN parameters. Two
architectures are considered for the parameter evolution. We evaluate the
proposed approach on tasks including link prediction, edge classification, and
node classification. The experimental results indicate a generally higher
performance of EvolveGCN compared with related approaches. The code is
available at \url{https://github.com/IBM/EvolveGCN}. | 2019-02-26T20:07:34Z | AAAI 2020. The code is available at https://github.com/IBM/EvolveGCN | null | null | null | null | null | null | null | null | null |
1902.10909 | BERT for Joint Intent Classification and Slot Filling | ['Qian Chen', 'Zhu Zhuo', 'Wen Wang'] | ['cs.CL'] | Intent classification and slot filling are two essential tasks for natural
language understanding. They often suffer from small-scale human-labeled
training data, resulting in poor generalization capability, especially for rare
words. Recently a new language representation model, BERT (Bidirectional
Encoder Representations from Transformers), facilitates pre-training deep
bidirectional representations on large-scale unlabeled corpora, and has created
state-of-the-art models for a wide variety of natural language processing tasks
after simple fine-tuning. However, there has not been much effort on exploring
BERT for natural language understanding. In this work, we propose a joint
intent classification and slot filling model based on BERT. Experimental
results demonstrate that our proposed model achieves significant improvement on
intent classification accuracy, slot filling F1, and sentence-level semantic
frame accuracy on several public benchmark datasets, compared to the
attention-based recurrent neural network models and slot-gated models. | 2019-02-28T05:54:16Z | 4 pages, 1 figure | null | null | BERT for Joint Intent Classification and Slot Filling | ['Qian Chen', 'Zhu Zhuo', 'Wen Wang'] | 2019 | arXiv.org | 558 | 26 | ['Computer Science'] |
1903.00161 | DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning
Over Paragraphs | ['Dheeru Dua', 'Yizhong Wang', 'Pradeep Dasigi', 'Gabriel Stanovsky', 'Sameer Singh', 'Matt Gardner'] | ['cs.CL'] | Reading comprehension has recently seen rapid progress, with systems matching
humans on the most popular datasets for the task. However, a large body of work
has highlighted the brittleness of these systems, showing that there is much
work left to be done. We introduce a new English reading comprehension
benchmark, DROP, which requires Discrete Reasoning Over the content of
Paragraphs. In this crowdsourced, adversarially-created, 96k-question
benchmark, a system must resolve references in a question, perhaps to multiple
input positions, and perform discrete operations over them (such as addition,
counting, or sorting). These operations require a much more comprehensive
understanding of the content of paragraphs than what was necessary for prior
datasets. We apply state-of-the-art methods from both the reading comprehension
and semantic parsing literature on this dataset and show that the best systems
only achieve 32.7% F1 on our generalized accuracy metric, while expert human
performance is 96.0%. We additionally present a new model that combines reading
comprehension methods with simple numerical reasoning to achieve 47.0% F1. | 2019-03-01T05:32:01Z | null | null | null | null | null | null | null | null | null | null |
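The discrete operations DROP requires (counting, addition, sorting) can be illustrated with a toy sketch: numbers are extracted from a passage and a symbolic operation is applied. This is only an illustration of the question type, not the paper's model, and the example passage is invented.

```python
import re

def extract_numbers(passage):
    """Pull all integers mentioned in a passage (toy stand-in for a reader model)."""
    return [int(tok) for tok in re.findall(r"\d+", passage)]

def discrete_answer(passage, operation):
    """Answer a DROP-style question with a discrete operation over extracted numbers."""
    nums = extract_numbers(passage)
    if operation == "count":
        return len(nums)
    if operation == "sum":
        return sum(nums)
    if operation == "max":
        return max(nums)
    raise ValueError(f"unsupported operation: {operation}")

passage = "Smith kicked field goals of 25, 40 and 33 yards in the game."
print(discrete_answer(passage, "count"))  # 3
print(discrete_answer(passage, "sum"))    # 98
print(discrete_answer(passage, "max"))    # 40
```

The benchmark's difficulty lies in learning when and over which spans to apply such operations; the extraction and operation selection here are hard-coded.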
1903.01435 | An Optimistic Acceleration of AMSGrad for Nonconvex Optimization | ['Jun-Kun Wang', 'Xiaoyun Li', 'Belhal Karimi', 'Ping Li'] | ['stat.ML', 'cs.LG'] | We propose a new variant of AMSGrad, a popular adaptive gradient based
optimization algorithm widely used for training deep neural networks. Our
algorithm adds prior knowledge about the sequence of consecutive mini-batch
gradients and leverages its underlying structure making the gradients
sequentially predictable. By exploiting the predictability and ideas from
optimistic online learning, the proposed algorithm can accelerate the
convergence and increase sample efficiency. After establishing a tighter upper
bound under some convexity conditions on the regret, we offer a complementary
view of our algorithm, which generalizes the offline and stochastic versions of
nonconvex optimization. In the nonconvex case, we establish a non-asymptotic
convergence bound independently of the initialization. We illustrate the
practical speedup on several deep learning models via numerical experiments. | 2019-03-04T18:56:40Z | null | null | null | null | null | null | null | null | null | null |
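For reference, the base AMSGrad update that this paper builds on keeps a running maximum of the second-moment estimate so the effective step size never grows. A minimal NumPy sketch (without bias correction, and without the paper's optimistic gradient-prediction step) on a 1-D quadratic:

```python
import numpy as np

def amsgrad_step(w, g, state, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AMSGrad update: like Adam, but with a running max of the second moment."""
    state["m"] = beta1 * state["m"] + (1 - beta1) * g
    state["v"] = beta2 * state["v"] + (1 - beta2) * g ** 2
    state["v_hat"] = np.maximum(state["v_hat"], state["v"])  # the AMSGrad max step
    return w - lr * state["m"] / (np.sqrt(state["v_hat"]) + eps)

w = np.array([0.0])
state = {"m": np.zeros(1), "v": np.zeros(1), "v_hat": np.zeros(1)}
for _ in range(500):
    g = 2 * (w - 3.0)          # gradient of f(w) = (w - 3)^2
    w = amsgrad_step(w, g, state)
print(w)  # close to 3.0, the minimizer
```

The proposed variant additionally predicts the next mini-batch gradient from recent ones and takes an optimistic step against that prediction; the hyperparameters above are illustrative defaults.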
1903.02428 | Fast Graph Representation Learning with PyTorch Geometric | ['Matthias Fey', 'Jan Eric Lenssen'] | ['cs.LG', 'stat.ML'] | We introduce PyTorch Geometric, a library for deep learning on irregularly
structured input data such as graphs, point clouds and manifolds, built upon
PyTorch. In addition to general graph data structures and processing methods,
it contains a variety of recently published methods from the domains of
relational learning and 3D data processing. PyTorch Geometric achieves high
data throughput by leveraging sparse GPU acceleration, by providing dedicated
CUDA kernels and by introducing efficient mini-batch handling for input
examples of different size. In this work, we present the library in detail and
perform a comprehensive comparative study of the implemented methods in
homogeneous evaluation scenarios. | 2019-03-06T14:50:02Z | ICLR 2019 (RLGM Workshop) | null | null | Fast Graph Representation Learning with PyTorch Geometric | ['Matthias Fey', 'J. E. Lenssen'] | 2019 | arXiv.org | 4381 | 51 | ['Computer Science', 'Mathematics'] |
1903.05566 | Benchmarking Natural Language Understanding Services for building
Conversational Agents | ['Xingkun Liu', 'Arash Eshghi', 'Pawel Swietojanski', 'Verena Rieser'] | ['cs.CL', 'cs.LG'] | We have recently seen the emergence of several publicly available Natural
Language Understanding (NLU) toolkits, which map user utterances to structured,
but more abstract, Dialogue Act (DA) or Intent specifications, while making
this process accessible to the lay developer. In this paper, we present the
first wide coverage evaluation and comparison of some of the most popular NLU
services, on a large, multi-domain (21 domains) dataset of 25K user utterances
that we have collected and annotated with Intent and Entity Type specifications
and which will be released as part of this submission. The results show that on
Intent classification Watson significantly outperforms the other platforms,
namely, Dialogflow, LUIS and Rasa; though these also perform well.
Interestingly, on Entity Type recognition, Watson performs significantly worse
due to its low Precision. Again, Dialogflow, LUIS and Rasa perform well on this
task. | 2019-03-13T16:08:46Z | Accepted by IWSDS2019 | null | null | null | null | null | null | null | null | null |
1903.06586 | Selective Kernel Networks | ['Xiang Li', 'Wenhai Wang', 'Xiaolin Hu', 'Jian Yang'] | ['cs.CV'] | In standard Convolutional Neural Networks (CNNs), the receptive fields of
artificial neurons in each layer are designed to share the same size. It is
well-known in the neuroscience community that the receptive field sizes of
visual cortical neurons are modulated by the stimulus, which has rarely been
considered in constructing CNNs. We propose a dynamic selection mechanism in
CNNs that allows each neuron to adaptively adjust its receptive field size
based on multiple scales of input information. A building block called
Selective Kernel (SK) unit is designed, in which multiple branches with
different kernel sizes are fused using softmax attention that is guided by the
information in these branches. Different attentions on these branches yield
different sizes of the effective receptive fields of neurons in the fusion
layer. Multiple SK units are stacked to a deep network termed Selective Kernel
Networks (SKNets). On the ImageNet and CIFAR benchmarks, we empirically show
that SKNet outperforms the existing state-of-the-art architectures with lower
model complexity. Detailed analyses show that the neurons in SKNet can capture
target objects with different scales, which verifies the capability of neurons
for adaptively adjusting their receptive field sizes according to the input.
The code and models are available at https://github.com/implus/SKNet. | 2019-03-15T15:04:22Z | CVPR 2019 | null | null | Selective Kernel Networks | ['Xiang Li', 'Wenhai Wang', 'Xiaolin Hu', 'Jian Yang'] | 2019 | Computer Vision and Pattern Recognition | 2066 | 63 | ['Computer Science'] |
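The core of the SK unit is the fusion step: feature maps from branches with different kernel sizes are mixed with per-channel softmax attention. A NumPy sketch of just that fusion, assuming the attention logits have already been computed (the pooling and FC layers that produce them are omitted):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sk_fuse(branches, z_logits):
    """Fuse multi-kernel branches with per-channel softmax attention (SK-unit sketch).

    branches: (B, C, H, W) feature maps from B kernel branches.
    z_logits: (B, C) attention logits derived from the pooled branch summary.
    """
    attn = softmax(z_logits, axis=0)           # sums to 1 over branches, per channel
    return np.einsum("bc,bchw->chw", attn, branches)

branches = np.stack([np.ones((4, 8, 8)), 3 * np.ones((4, 8, 8))])  # e.g. 3x3 vs 5x5 branch
logits = np.zeros((2, 4))                      # equal logits -> equal mixing
fused = sk_fuse(branches, logits)
print(fused[0, 0, 0])  # 2.0, the average of the two branches
```

Because the attention depends on the input, each channel's effective receptive field shifts toward whichever kernel branch the stimulus favors.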
1903.07291 | Semantic Image Synthesis with Spatially-Adaptive Normalization | ['Taesung Park', 'Ming-Yu Liu', 'Ting-Chun Wang', 'Jun-Yan Zhu'] | ['cs.CV', 'cs.AI', 'cs.GR', 'cs.LG', 'I.5; I.5.4; I.3.3'] | We propose spatially-adaptive normalization, a simple but effective layer for
synthesizing photorealistic images given an input semantic layout. Previous
methods directly feed the semantic layout as input to the deep network, which
is then processed through stacks of convolution, normalization, and
nonlinearity layers. We show that this is suboptimal as the normalization
layers tend to ``wash away'' semantic information. To address the issue, we
propose using the input layout for modulating the activations in normalization
layers through a spatially-adaptive, learned transformation. Experiments on
several challenging datasets demonstrate the advantage of the proposed method
over existing approaches, regarding both visual fidelity and alignment with
input layouts. Finally, our model allows user control over both semantics and
style. Code is available at https://github.com/NVlabs/SPADE . | 2019-03-18T08:12:23Z | Accepted as a CVPR 2019 oral paper | CVPR 2019 | null | null | null | null | null | null | null | null |
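The key mechanic can be sketched in NumPy: activations are normalized with batch statistics as usual, but the scale and shift are spatial maps predicted from the semantic layout rather than per-channel scalars, so layout information is not "washed away". The small conv net that predicts the maps is omitted here; identity maps are passed in for the check.

```python
import numpy as np

def spade_norm(x, gamma_map, beta_map, eps=1e-5):
    """Spatially-adaptive normalization sketch.

    x:                   (N, C, H, W) activations.
    gamma_map, beta_map: (N, C, H, W) modulation maps predicted from the
                         semantic layout. Unlike BN's scalar gamma/beta,
                         the modulation varies per spatial position.
    """
    mean = x.mean(axis=(0, 2, 3), keepdims=True)   # per-channel batch statistics
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma_map * x_hat + beta_map

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 3, 4, 4))
gamma = np.ones_like(x)
beta = np.zeros_like(x)
out = spade_norm(x, gamma, beta)
print(out.mean())  # ~0: identity modulation reduces to plain normalization
```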
1903.07785 | Cloze-driven Pretraining of Self-attention Networks | ['Alexei Baevski', 'Sergey Edunov', 'Yinhan Liu', 'Luke Zettlemoyer', 'Michael Auli'] | ['cs.CL'] | We present a new approach for pretraining a bi-directional transformer model
that provides significant performance gains across a variety of language
understanding problems. Our model solves a cloze-style word reconstruction
task, where each word is ablated and must be predicted given the rest of the
text. Experiments demonstrate large performance gains on GLUE and new state of
the art results on NER as well as constituency parsing benchmarks, consistent
with the concurrently introduced BERT model. We also present a detailed
analysis of a number of factors that contribute to effective pretraining,
including data domain and size, model capacity, and variations on the cloze
objective. | 2019-03-19T01:19:06Z | null | null | Cloze-driven Pretraining of Self-attention Networks | ['Alexei Baevski', 'Sergey Edunov', 'Yinhan Liu', 'Luke Zettlemoyer', 'Michael Auli'] | 2019 | Conference on Empirical Methods in Natural Language Processing | 198 | 41 | ['Computer Science'] |
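The cloze objective itself is simple to state as data generation: every word is ablated in turn and must be predicted from the remaining text. A toy sketch of the example construction (the model, tokenization, and mask token are assumptions for illustration):

```python
def cloze_examples(sentence, mask_token="<mask>"):
    """Turn a sentence into cloze-style (input, target) pairs: each word in
    turn is ablated and must be predicted from the rest of the text."""
    words = sentence.split()
    examples = []
    for i, target in enumerate(words):
        masked = words[:i] + [mask_token] + words[i + 1:]
        examples.append((" ".join(masked), target))
    return examples

for inp, tgt in cloze_examples("the cat sat down"):
    print(f"{inp!r} -> {tgt!r}")
# '<mask> cat sat down' -> 'the'
# 'the <mask> sat down' -> 'cat'  ... and so on
```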
1903.08205 | Interactive segmentation of medical images through fully convolutional
neural networks | ['Tomas Sakinis', 'Fausto Milletari', 'Holger Roth', 'Panagiotis Korfiatis', 'Petro Kostandy', 'Kenneth Philbrick', 'Zeynettin Akkus', 'Ziyue Xu', 'Daguang Xu', 'Bradley J. Erickson'] | ['cs.CV'] | Image segmentation plays an essential role in medicine for both diagnostic
and interventional tasks. Segmentation approaches are either manual,
semi-automated or fully-automated. Manual segmentation offers full control over
the quality of the results, but is tedious, time consuming and prone to
operator bias. Fully automated methods require no human effort, but often
deliver sub-optimal results without providing users with the means to make
corrections. Semi-automated approaches keep users in control of the results by
providing means for interaction, but the main challenge is to offer a good
trade-off between precision and required interaction. In this paper we present
a deep learning (DL) based semi-automated segmentation approach that aims to be
a "smart" interactive tool for region of interest delineation in medical
images. We demonstrate its use for segmenting multiple organs on computed
tomography (CT) of the abdomen. Our approach solves some of the most pressing
clinical challenges: (i) it requires only one to a few user clicks to deliver
excellent 2D segmentations in a fast and reliable fashion; (ii) it can
generalize to previously unseen structures and "corner cases"; (iii) it
delivers results that can be corrected quickly in a smart and intuitive way up
to an arbitrary degree of precision chosen by the user and (iv) ensures high
accuracy. We present our approach and compare it to other techniques and
previous work to show the advantages brought by our method. | 2019-03-19T18:28:49Z | null | null | null | null | null | null | null | null | null | null |
1903.10520 | Micro-Batch Training with Batch-Channel Normalization and Weight
Standardization | ['Siyuan Qiao', 'Huiyu Wang', 'Chenxi Liu', 'Wei Shen', 'Alan Yuille'] | ['cs.CV', 'cs.LG'] | Batch Normalization (BN) has become an out-of-box technique to improve deep
network training. However, its effectiveness is limited for micro-batch
training, i.e., each GPU typically has only 1-2 images for training, which is
inevitable for many computer vision tasks, e.g., object detection and semantic
segmentation, constrained by memory consumption. To address this issue, we
propose Weight Standardization (WS) and Batch-Channel Normalization (BCN) to
bring two success factors of BN into micro-batch training: 1) the smoothing
effects on the loss landscape and 2) the ability to avoid harmful elimination
singularities along the training trajectory. WS standardizes the weights in
convolutional layers to smooth the loss landscape by reducing the Lipschitz
constants of the loss and the gradients; BCN combines batch and channel
normalizations and leverages estimated statistics of the activations in
convolutional layers to keep networks away from elimination singularities. We
validate WS and BCN on comprehensive computer vision tasks, including image
classification, object detection, instance segmentation, video recognition and
semantic segmentation. All experimental results consistently show that WS and
BCN improve micro-batch training significantly. Moreover, using WS and BCN with
micro-batch training is even able to match or outperform the performances of BN
with large-batch training. | 2019-03-25T18:00:05Z | null | null | null | null | null | null | null | null | null | null |
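Weight Standardization is a small re-parameterization and can be sketched directly: each output channel's kernel is shifted to zero mean and scaled to unit variance before it is used in the convolution. A NumPy sketch (the eps placement is an illustrative choice):

```python
import numpy as np

def weight_standardize(w, eps=1e-5):
    """Weight Standardization: normalize conv weights to zero mean and unit
    variance per output channel before they enter the convolution.

    w: (C_out, C_in, kH, kW) convolution kernel.
    """
    mean = w.mean(axis=(1, 2, 3), keepdims=True)
    std = w.std(axis=(1, 2, 3), keepdims=True)
    return (w - mean) / (std + eps)

rng = np.random.default_rng(0)
w = rng.normal(loc=2.0, scale=5.0, size=(8, 3, 3, 3))
ws = weight_standardize(w)
print(ws.mean(axis=(1, 2, 3)))  # ~0 for every output channel
```

Since it operates on weights rather than activations, the smoothing effect is independent of the (micro-)batch size, which is the point of the paper.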
1903.10676 | SciBERT: A Pretrained Language Model for Scientific Text | ['Iz Beltagy', 'Kyle Lo', 'Arman Cohan'] | ['cs.CL'] | Obtaining large-scale annotated data for NLP tasks in the scientific domain
is challenging and expensive. We release SciBERT, a pretrained language model
based on BERT (Devlin et al., 2018) to address the lack of high-quality,
large-scale labeled scientific data. SciBERT leverages unsupervised pretraining
on a large multi-domain corpus of scientific publications to improve
performance on downstream scientific NLP tasks. We evaluate on a suite of tasks
including sequence tagging, sentence classification and dependency parsing,
with datasets from a variety of scientific domains. We demonstrate
statistically significant improvements over BERT and achieve new
state-of-the-art results on several of these tasks. The code and pretrained
models are available at https://github.com/allenai/scibert/. | 2019-03-26T05:11:46Z | https://github.com/allenai/scibert | EMNLP 2019 | null | null | null | null | null | null | null | null |
1903.12261 | Benchmarking Neural Network Robustness to Common Corruptions and
Perturbations | ['Dan Hendrycks', 'Thomas Dietterich'] | ['cs.LG', 'cs.CV', 'stat.ML'] | In this paper we establish rigorous benchmarks for image classifier
robustness. Our first benchmark, ImageNet-C, standardizes and expands the
corruption robustness topic, while showing which classifiers are preferable in
safety-critical applications. Then we propose a new dataset called ImageNet-P
which enables researchers to benchmark a classifier's robustness to common
perturbations. Unlike recent robustness research, this benchmark evaluates
performance on common corruptions and perturbations not worst-case adversarial
perturbations. We find that there are negligible changes in relative corruption
robustness from AlexNet classifiers to ResNet classifiers. Afterward we
discover ways to enhance corruption and perturbation robustness. We even find
that a bypassed adversarial defense provides substantial common perturbation
robustness. Together our benchmarks may aid future work toward networks that
robustly generalize. | 2019-03-28T20:56:37Z | ICLR 2019 camera-ready; datasets available at
https://github.com/hendrycks/robustness ; this article supersedes
arXiv:1807.01697 | null | null | null | null | null | null | null | null | null |
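ImageNet-C applies each corruption at five severity levels. A sketch of one such corruption, Gaussian noise, on images in [0, 1]; the severity constants here are illustrative placeholders, not the benchmark's official values:

```python
import numpy as np

def gaussian_noise(image, severity=1, seed=0):
    """Apply Gaussian noise at increasing severity, in the spirit of the
    ImageNet-C corruptions (sigma values are illustrative, not official)."""
    sigma = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(scale=sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

image = np.full((32, 32, 3), 0.5)                 # a flat gray test image
err1 = np.abs(gaussian_noise(image, 1) - image).mean()
err5 = np.abs(gaussian_noise(image, 5) - image).mean()
print(err1 < err5)  # True: higher severity, larger perturbation
```

Averaging a classifier's error over corruption types and severities yields the benchmark's corruption-robustness score.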
1903.12519 | A Provable Defense for Deep Residual Networks | ['Matthew Mirman', 'Gagandeep Singh', 'Martin Vechev'] | ['cs.LG', 'cs.AI', 'cs.CR', 'cs.PL', 'stat.ML'] | We present a training system, which can provably defend significantly larger
neural networks than previously possible, including ResNet-34 and DenseNet-100.
Our approach is based on differentiable abstract interpretation and introduces
two novel concepts: (i) abstract layers for fine-tuning the precision and
scalability of the abstraction, (ii) a flexible domain specific language (DSL)
for describing training objectives that combine abstract and concrete losses
with arbitrary specifications. Our training method is implemented in the DiffAI
system. | 2019-03-29T13:35:31Z | null | null | null | null | null | null | null | null | null | null |
1904.00625 | Med3D: Transfer Learning for 3D Medical Image Analysis | ['Sihong Chen', 'Kai Ma', 'Yefeng Zheng'] | ['cs.CV'] | The performance of deep learning is significantly affected by the volume of
training data. Models pre-trained on massive datasets such as ImageNet become
a powerful weapon for speeding up training convergence and improving accuracy.
Similarly, models based on large datasets are important for the development of
deep learning in 3D medical images. However, it is extremely challenging to
build a sufficiently large dataset due to difficulty of data acquisition and
annotation in 3D medical imaging. We aggregate the dataset from several medical
challenges to build 3DSeg-8 dataset with diverse modalities, target organs, and
pathologies. To extract general medical three-dimensional (3D) features, we
design a heterogeneous 3D network called Med3D to co-train multi-domain 3DSeg-8
so as to make a series of pre-trained models. We transfer Med3D pre-trained
models to lung segmentation in LIDC dataset, pulmonary nodule classification in
LIDC dataset and liver segmentation on LiTS challenge. Experiments show that
the Med3D can accelerate the training convergence speed of target 3D medical
tasks 2 times compared with model pre-trained on Kinetics dataset, and 10 times
compared with training from scratch as well as improve accuracy ranging from 3%
to 20%. Transferring our Med3D model to the state-of-the-art DenseASPP segmentation
network, in the single-model case we achieve a 94.6% Dice coefficient, which
approaches the result of top-ranged algorithms on the LiTS challenge. | 2019-04-01T08:14:29Z | null | null | null | null | null | null | null | null | null | null |
1904.00962 | Large Batch Optimization for Deep Learning: Training BERT in 76 minutes | ['Yang You', 'Jing Li', 'Sashank Reddi', 'Jonathan Hseu', 'Sanjiv Kumar', 'Srinadh Bhojanapalli', 'Xiaodan Song', 'James Demmel', 'Kurt Keutzer', 'Cho-Jui Hsieh'] | ['cs.LG', 'cs.AI', 'cs.CL', 'stat.ML'] | Training large deep neural networks on massive datasets is computationally
very challenging. There has been a recent surge of interest in using large batch
stochastic optimization methods to tackle this issue. The most prominent
algorithm in this line of research is LARS, which by employing layerwise
adaptive learning rates trains ResNet on ImageNet in a few minutes. However,
LARS performs poorly for attention models like BERT, indicating that its
performance gains are not consistent across tasks. In this paper, we first
study a principled layerwise adaptation strategy to accelerate training of deep
neural networks using large mini-batches. Using this strategy, we develop a new
layerwise adaptive large batch optimization technique called LAMB; we then
provide convergence analysis of LAMB as well as LARS, showing convergence to a
stationary point in general nonconvex settings. Our empirical results
demonstrate the superior performance of LAMB across various tasks such as BERT
and ResNet-50 training with very little hyperparameter tuning. In particular,
for BERT training, our optimizer enables use of very large batch sizes of 32868
without any degradation of performance. By increasing the batch size to the
memory limit of a TPUv3 Pod, BERT training time can be reduced from 3 days to
just 76 minutes (Table 1). The LAMB implementation is available at
https://github.com/tensorflow/addons/blob/master/tensorflow_addons/optimizers/lamb.py | 2019-04-01T16:53:35Z | Published as a conference paper at ICLR 2020 | null | null | null | null | null | null | null | null | null |
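The layerwise adaptation at the heart of LAMB rescales an Adam-style direction by a per-layer trust ratio ||w|| / ||update||. A simplified NumPy sketch of one step for a single layer, without bias correction or the clipping function applied to ||w|| in the full algorithm:

```python
import numpy as np

def lamb_update(w, g, state, lr=0.01, beta1=0.9, beta2=0.999,
                eps=1e-6, weight_decay=0.01):
    """One simplified LAMB step for one layer: an Adam-style direction
    rescaled by the layerwise trust ratio ||w|| / ||update||."""
    state["m"] = beta1 * state["m"] + (1 - beta1) * g
    state["v"] = beta2 * state["v"] + (1 - beta2) * g ** 2
    update = state["m"] / (np.sqrt(state["v"]) + eps) + weight_decay * w
    trust_ratio = np.linalg.norm(w) / (np.linalg.norm(update) + eps)
    return w - lr * trust_ratio * update

rng = np.random.default_rng(0)
w = rng.normal(size=16)
state = {"m": np.zeros(16), "v": np.zeros(16)}
w_new = lamb_update(w, rng.normal(size=16), state)
print(w_new.shape)  # (16,)
```

The trust ratio keeps each layer's step proportional to the scale of its weights, which is what allows the very large batch sizes reported in the abstract.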
1904.01130 | PAWS: Paraphrase Adversaries from Word Scrambling | ['Yuan Zhang', 'Jason Baldridge', 'Luheng He'] | ['cs.CL'] | Existing paraphrase identification datasets lack sentence pairs that have
high lexical overlap without being paraphrases. Models trained on such data
fail to distinguish pairs like flights from New York to Florida and flights
from Florida to New York. This paper introduces PAWS (Paraphrase Adversaries
from Word Scrambling), a new dataset with 108,463 well-formed paraphrase and
non-paraphrase pairs with high lexical overlap. Challenging pairs are generated
by controlled word swapping and back translation, followed by fluency and
paraphrase judgments by human raters. State-of-the-art models trained on
existing datasets have dismal performance on PAWS (<40% accuracy); however,
including PAWS training data for these models improves their accuracy to 85%
while maintaining performance on existing tasks. In contrast, models that do
not capture non-local contextual information fail even with PAWS training
examples. As such, PAWS provides an effective instrument for driving further
progress on models that better exploit structure, context, and pairwise
comparisons. | 2019-04-01T22:21:14Z | NAACL 2019 | null | null | PAWS: Paraphrase Adversaries from Word Scrambling | ['Yuan Zhang', 'Jason Baldridge', 'Luheng He'] | 2019 | North American Chapter of the Association for Computational Linguistics | 545 | 36 | ['Computer Science'] |
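The controlled word swapping that seeds PAWS candidates can be sketched in a few lines: swapping word pairs yields sentences with perfect lexical overlap but possibly different meaning. This toy version uses single-word city names so one swap suffices (the real pipeline adds back translation plus human fluency and paraphrase judgments):

```python
import itertools

def swap_pairs(sentence):
    """Generate high-lexical-overlap candidates by swapping two words,
    in the spirit of PAWS's controlled word scrambling."""
    words = sentence.split()
    out = []
    for i, j in itertools.combinations(range(len(words)), 2):
        swapped = words[:]
        swapped[i], swapped[j] = swapped[j], swapped[i]
        out.append(" ".join(swapped))
    return out

cands = swap_pairs("flights from Boston to Denver")
print("flights from Denver to Boston" in cands)  # True: same words, flipped meaning
```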
1904.01169 | Res2Net: A New Multi-scale Backbone Architecture | ['Shang-Hua Gao', 'Ming-Ming Cheng', 'Kai Zhao', 'Xin-Yu Zhang', 'Ming-Hsuan Yang', 'Philip Torr'] | ['cs.CV'] | Representing features at multiple scales is of great importance for numerous
vision tasks. Recent advances in backbone convolutional neural networks (CNNs)
continually demonstrate stronger multi-scale representation ability, leading to
consistent performance gains on a wide range of applications. However, most
existing methods represent the multi-scale features in a layer-wise manner. In
this paper, we propose a novel building block for CNNs, namely Res2Net, by
constructing hierarchical residual-like connections within one single residual
block. The Res2Net represents multi-scale features at a granular level and
increases the range of receptive fields for each network layer. The proposed
Res2Net block can be plugged into the state-of-the-art backbone CNN models,
e.g., ResNet, ResNeXt, and DLA. We evaluate the Res2Net block on all these
models and demonstrate consistent performance gains over baseline models on
widely-used datasets, e.g., CIFAR-100 and ImageNet. Further ablation studies
and experimental results on representative computer vision tasks, i.e., object
detection, class activation mapping, and salient object detection, further
verify the superiority of the Res2Net over the state-of-the-art baseline
methods. The source code and trained models are available on
https://mmcheng.net/res2net/. | 2019-04-02T01:56:34Z | 11 pages, 7 figures | IEEE TPAMI 2021 | 10.1109/TPAMI.2019.2938758 | Res2Net: A New Multi-Scale Backbone Architecture | ['Shanghua Gao', 'Ming-Ming Cheng', 'Kai Zhao', 'Xinyu Zhang', 'Ming-Hsuan Yang', 'Philip H. S. Torr'] | 2019 | IEEE Transactions on Pattern Analysis and Machine Intelligence | 2429 | 83 | ['Computer Science', 'Medicine'] |
1904.01355 | FCOS: Fully Convolutional One-Stage Object Detection | ['Zhi Tian', 'Chunhua Shen', 'Hao Chen', 'Tong He'] | ['cs.CV'] | We propose a fully convolutional one-stage object detector (FCOS) to solve
object detection in a per-pixel prediction fashion, analogous to semantic
segmentation. Almost all state-of-the-art object detectors such as RetinaNet,
SSD, YOLOv3, and Faster R-CNN rely on pre-defined anchor boxes. In contrast,
our proposed detector FCOS is anchor box free, as well as proposal free. By
eliminating the predefined set of anchor boxes, FCOS completely avoids the
complicated computation related to anchor boxes such as calculating overlapping
during training. More importantly, we also avoid all hyper-parameters related
to anchor boxes, which are often very sensitive to the final detection
performance. With the only post-processing non-maximum suppression (NMS), FCOS
with ResNeXt-64x4d-101 achieves 44.7% in AP with single-model and single-scale
testing, surpassing previous one-stage detectors with the advantage of being
much simpler. For the first time, we demonstrate a much simpler and flexible
detection framework achieving improved detection accuracy. We hope that the
proposed FCOS framework can serve as a simple and strong alternative for many
other instance-level tasks. Code is available at:
https://tinyurl.com/FCOSv1 | 2019-04-02T11:56:36Z | Accepted to Proc. Int. Conf. Computer Vision 2019. 13 pages. Code is
available at: https://github.com/tianzhi0549/FCOS/ | null | null | FCOS: Fully Convolutional One-Stage Object Detection | ['Zhi Tian', 'Chunhua Shen', 'Hao Chen', 'Tong He'] | 2019 | IEEE International Conference on Computer Vision | 5038 | 37 | ['Computer Science'] |
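Being anchor-free, FCOS regresses at each location the distances (l, t, r, b) to the four sides of the ground-truth box, plus a centerness score that down-weights predictions far from a box's center. A sketch of the target computation for one location and one box:

```python
import numpy as np

def fcos_targets(location, box):
    """Per-location FCOS regression targets: distances (l, t, r, b) to the
    box sides, and the centerness score used to suppress off-center boxes."""
    x, y = location
    x0, y0, x1, y1 = box
    l, t, r, b = x - x0, y - y0, x1 - x, y1 - y
    if min(l, t, r, b) <= 0:
        return None  # location falls outside the box: not a positive sample
    centerness = np.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))
    return (l, t, r, b), centerness

targets, ctr = fcos_targets((50, 40), (20, 10, 80, 70))
print(targets)  # (30, 30, 30, 30)
print(ctr)      # 1.0: the exact box center
```

Because targets exist for every in-box location, no anchor matching, IoU computation, or anchor hyperparameters are needed.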
1904.01557 | Analysing Mathematical Reasoning Abilities of Neural Models | ['David Saxton', 'Edward Grefenstette', 'Felix Hill', 'Pushmeet Kohli'] | ['cs.LG', 'stat.ML'] | Mathematical reasoning---a core ability within human intelligence---presents
some unique challenges as a domain: we do not come to understand and solve
mathematical problems primarily on the back of experience and evidence, but on
the basis of inferring, learning, and exploiting laws, axioms, and symbol
manipulation rules. In this paper, we present a new challenge for the
evaluation (and eventually the design) of neural architectures and similar
systems, developing a task suite of mathematics problems involving sequential
questions and answers in a free-form textual input/output format. The
structured nature of the mathematics domain, covering arithmetic, algebra,
probability and calculus, enables the construction of training and test splits
designed to clearly illuminate the capabilities and failure-modes of different
architectures, as well as evaluate their ability to compose and relate
knowledge and learned processes. Having described the data generation process
and its potential future expansions, we conduct a comprehensive analysis of
models from two broad classes of the most powerful sequence-to-sequence
architectures and find notable differences in their ability to resolve
mathematical problems and generalize their knowledge. | 2019-04-02T17:26:41Z | null | null | null | null | null | null | null | null | null | null |
1904.01941 | Character Region Awareness for Text Detection | ['Youngmin Baek', 'Bado Lee', 'Dongyoon Han', 'Sangdoo Yun', 'Hwalsuk Lee'] | ['cs.CV'] | Scene text detection methods based on neural networks have emerged recently
and have shown promising results. Previous methods trained with rigid
word-level bounding boxes exhibit limitations in representing the text region
in an arbitrary shape. In this paper, we propose a new scene text detection
method to effectively detect text area by exploring each character and affinity
between characters. To overcome the lack of individual character level
annotations, our proposed framework exploits both the given character-level
annotations for synthetic images and the estimated character-level
ground-truths for real images acquired by the learned interim model. In order
to estimate affinity between characters, the network is trained with the newly
proposed representation for affinity. Extensive experiments on six benchmarks,
including the TotalText and CTW-1500 datasets which contain highly curved texts
in natural images, demonstrate that our character-level text detection
significantly outperforms the state-of-the-art detectors. According to the
results, our proposed method guarantees high flexibility in detecting
complicated scene text images, such as arbitrarily-oriented, curved, or
deformed texts. | 2019-04-03T12:00:33Z | 12 pages, 11 figures, Accepted by CVPR 2019 | null | null | null | null | null | null | null | null | null |
1904.02099 | 75 Languages, 1 Model: Parsing Universal Dependencies Universally | ['Dan Kondratyuk', 'Milan Straka'] | ['cs.CL', 'cs.LG'] | We present UDify, a multilingual multi-task model capable of accurately
predicting universal part-of-speech, morphological features, lemmas, and
dependency trees simultaneously for all 124 Universal Dependencies treebanks
across 75 languages. By leveraging a multilingual BERT self-attention model
pretrained on 104 languages, we found that fine-tuning it on all datasets
concatenated together with simple softmax classifiers for each UD task can
result in state-of-the-art UPOS, UFeats, Lemmas, UAS, and LAS scores, without
requiring any recurrent or language-specific components. We evaluate UDify for
multilingual learning, showing that low-resource languages benefit the most
from cross-linguistic annotations. We also evaluate for zero-shot learning,
with results suggesting that multilingual training provides strong UD
predictions even for languages that neither UDify nor BERT have ever been
trained on. Code for UDify is available at
https://github.com/hyperparticle/udify. | 2019-04-03T16:52:55Z | Accepted for publication at EMNLP 2019. 17 pages, 6 figures | null | null | 75 Languages, 1 Model: Parsing Universal Dependencies Universally | ['D. Kondratyuk'] | 2019 | Conference on Empirical Methods in Natural Language Processing | 264 | 54 | ['Computer Science'] |
1904.02285 | HoloDetect: Few-Shot Learning for Error Detection | ['Alireza Heidari', 'Joshua McGrath', 'Ihab F. Ilyas', 'Theodoros Rekatsinas'] | ['cs.DB'] | We introduce a few-shot learning framework for error detection. We show that
data augmentation (a form of weak supervision) is key to training high-quality,
ML-based error detection models that require minimal human involvement. Our
framework consists of two parts: (1) an expressive model to learn rich
representations that capture the inherent syntactic and semantic heterogeneity
of errors; and (2) a data augmentation model that, given a small seed of clean
records, uses dataset-specific transformations to automatically generate
additional training data. Our key insight is to learn data augmentation
policies from the noisy input dataset in a weakly supervised manner. We show
that our framework detects errors with an average precision of ~94% and an
average recall of ~93% across a diverse array of datasets that exhibit
different types and amounts of errors. We compare our approach to a
comprehensive collection of error detection methods, ranging from traditional
rule-based methods to ensemble-based and active learning approaches. We show
that data augmentation yields an average improvement of 20 F1 points while it
requires access to 3x fewer labeled examples compared to other ML approaches. | 2019-04-04T00:38:59Z | 18 pages, | ACM SIGMOD 2019 | 10.1145/3299869.3319888 | null | null | null | null | null | null | null |
1904.02358 | Lightweight Image Super-Resolution with Adaptive Weighted Learning
Network | ['Chaofeng Wang', 'Zheng Li', 'Jun Shi'] | ['cs.CV', 'I.2.10; I.4'] | Deep learning has been successfully applied to the single-image
super-resolution (SISR) task with great performance in recent years. However,
most convolutional neural network based SR models require heavy computation,
which limits their real-world applications. In this work, a lightweight SR
network, named Adaptive Weighted Super-Resolution Network (AWSRN), is proposed
for SISR to address this issue. A novel local fusion block (LFB) is designed in
AWSRN for efficient residual learning, which consists of stacked adaptive
weighted residual units (AWRU) and a local residual fusion unit (LRFU).
Moreover, an adaptive weighted multi-scale (AWMS) module is proposed to make
full use of features in reconstruction layer. AWMS consists of several
different scale convolutions, and redundant scale branches can be removed
according to the contribution of the adaptive weights in AWMS, yielding a lightweight
network. The experimental results on the commonly used datasets show that the
proposed lightweight AWSRN achieves superior performance on x2, x3, x4, and x8
scale factors to state-of-the-art methods with similar parameters and
computational overhead. Code is available at:
https://github.com/ChaofWang/AWSRN | 2019-04-04T05:44:32Z | 9 pages, 6 figures | null | null | Lightweight Image Super-Resolution with Adaptive Weighted Learning Network | ['Chaofeng Wang', 'Zheng Li', 'Jun Shi'] | 2019 | arXiv.org | 102 | 40 | ['Computer Science'] |
1904.02701 | Libra R-CNN: Towards Balanced Learning for Object Detection | ['Jiangmiao Pang', 'Kai Chen', 'Jianping Shi', 'Huajun Feng', 'Wanli Ouyang', 'Dahua Lin'] | ['cs.CV'] | Compared with model architectures, the training process, which is also
crucial to the success of detectors, has received relatively less attention in
object detection. In this work, we carefully revisit the standard training
practice of detectors, and find that the detection performance is often limited
by the imbalance during the training process, which generally exists at three
levels - sample level, feature level, and objective level. To mitigate the
adverse effects caused thereby, we propose Libra R-CNN, a simple but effective
framework towards balanced learning for object detection. It integrates three
novel components: IoU-balanced sampling, balanced feature pyramid, and balanced
L1 loss, respectively for reducing the imbalance at sample, feature, and
objective level. Benefiting from the overall balanced design, Libra R-CNN
significantly improves the detection performance. Without bells and whistles,
it achieves 2.5 points and 2.0 points higher Average Precision (AP) than FPN
Faster R-CNN and RetinaNet respectively on MSCOCO. | 2019-04-04T17:58:22Z | To appear at CVPR 2019 | null | null | Libra R-CNN: Towards Balanced Learning for Object Detection | ['Jiangmiao Pang', 'Kai Chen', 'Jianping Shi', 'H. Feng', 'Wanli Ouyang', 'Dahua Lin'] | 2,019 | Computer Vision and Pattern Recognition | 1,297 | 38 | ['Computer Science'] |
1,904.02877 | Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4
Hours | ['Dimitrios Stamoulis', 'Ruizhou Ding', 'Di Wang', 'Dimitrios Lymberopoulos', 'Bodhi Priyantha', 'Jie Liu', 'Diana Marculescu'] | ['cs.LG', 'cs.CV', 'stat.ML'] | Can we automatically design a Convolutional Network (ConvNet) with the
highest image classification accuracy under the runtime constraint of a mobile
device? Neural architecture search (NAS) has revolutionized the design of
hardware-efficient ConvNets by automating this process. However, the NAS
problem remains challenging due to the combinatorially large design space,
which results in significant search time (at least 200 GPU-hours). To alleviate
this complexity, we propose Single-Path NAS, a novel differentiable NAS method
for designing hardware-efficient ConvNets in less than 4 hours. Our
contributions are as follows: 1. Single-path search space: Compared to previous
differentiable NAS methods, Single-Path NAS uses one single-path
over-parameterized ConvNet to encode all architectural decisions with shared
convolutional kernel parameters, hence drastically decreasing the number of
trainable parameters and the search cost down to a few epochs. 2.
Hardware-efficient ImageNet classification: Single-Path NAS achieves 74.96%
top-1 accuracy on ImageNet with 79ms latency on a Pixel 1 phone, which is
state-of-the-art accuracy compared to NAS methods with similar constraints
(<80ms). 3. NAS efficiency: Single-Path NAS search cost is only 8 epochs (30
TPU-hours), which is up to 5,000x faster compared to prior work. 4.
Reproducibility: Unlike all recent mobile-efficient NAS methods which only
release pretrained models, we open-source our entire codebase at:
https://github.com/dstamoulis/single-path-nas. | 2019-04-05T05:49:41Z | null | null | null | null | null | null | null | null | null | null |
1,904.02882 | LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech | ['Heiga Zen', 'Viet Dang', 'Rob Clark', 'Yu Zhang', 'Ron J. Weiss', 'Ye Jia', 'Zhifeng Chen', 'Yonghui Wu'] | ['cs.SD', 'eess.AS'] | This paper introduces a new speech corpus called "LibriTTS" designed for
text-to-speech use. It is derived from the original audio and text materials of
the LibriSpeech corpus, which has been used for training and evaluating
automatic speech recognition systems. The new corpus inherits desired
properties of the LibriSpeech corpus while addressing a number of issues which
make LibriSpeech less than ideal for text-to-speech work. The released corpus
consists of 585 hours of speech data at 24kHz sampling rate from 2,456 speakers
and the corresponding texts. Experimental results show that neural end-to-end
TTS models trained on the LibriTTS corpus achieved mean opinion scores above
4.0 for naturalness for five out of six evaluation speakers. The corpus is
freely available for download from http://www.openslr.org/60/. | 2019-04-05T06:05:00Z | Submitted for Interspeech 2019, 7 pages | null | null | null | null | null | null | null | null | null |
1,904.03323 | Publicly Available Clinical BERT Embeddings | ['Emily Alsentzer', 'John R. Murphy', 'Willie Boag', 'Wei-Hung Weng', 'Di Jin', 'Tristan Naumann', 'Matthew B. A. McDermott'] | ['cs.CL'] | Contextual word embedding models such as ELMo (Peters et al., 2018) and BERT
(Devlin et al., 2018) have dramatically improved performance for many natural
language processing (NLP) tasks in recent months. However, these models have
been minimally explored on specialty corpora, such as clinical text; moreover,
in the clinical domain, no publicly-available pre-trained BERT models yet
exist. In this work, we address this need by exploring and releasing BERT
models for clinical text: one for generic clinical text and another for
discharge summaries specifically. We demonstrate that using a domain-specific
model yields performance improvements on three common clinical NLP tasks as
compared to nonspecific embeddings. These domain-specific models are not as
performant on two clinical de-identification tasks, and we argue that this is
a natural consequence of the differences between de-identified source text and
synthetically non-de-identified task text.
2019 | null | null | null | null | null | null | null | null | null |
1,904.03493 | VATEX: A Large-Scale, High-Quality Multilingual Dataset for
Video-and-Language Research | ['Xin Wang', 'Jiawei Wu', 'Junkun Chen', 'Lei Li', 'Yuan-Fang Wang', 'William Yang Wang'] | ['cs.CV', 'cs.CL', 'cs.LG'] | We present a new large-scale multilingual video description dataset, VATEX,
which contains over 41,250 videos and 825,000 captions in both English and
Chinese. Among the captions, there are over 206,000 English-Chinese parallel
translation pairs. Compared to the widely-used MSR-VTT dataset, VATEX is
multilingual, larger, linguistically complex, and more diverse in terms of both
video and natural language descriptions. We also introduce two tasks for
video-and-language research based on VATEX: (1) Multilingual Video Captioning,
aimed at describing a video in various languages with a compact unified
captioning model, and (2) Video-guided Machine Translation, to translate a
source language description into the target language using the video
information as additional spatiotemporal context. Extensive experiments on the
VATEX dataset show that, first, the unified multilingual model can not only
produce both English and Chinese descriptions for a video more efficiently, but
also offer improved performance over the monolingual models. Furthermore, we
demonstrate that the spatiotemporal video context can be effectively utilized
to align source and target languages and thus assist machine translation. In
the end, we discuss the potential of using VATEX for other video-and-language
research. | 2019-04-06T16:50:31Z | ICCV 2019 Oral. 17 pages, 14 figures, 6 tables (updated the VATEX
website link: vatex-challenge.org) | null | null | null | null | null | null | null | null | null |
1,904.0367 | Speech Model Pre-training for End-to-End Spoken Language Understanding | ['Loren Lugosch', 'Mirco Ravanelli', 'Patrick Ignoto', 'Vikrant Singh Tomar', 'Yoshua Bengio'] | ['eess.AS', 'cs.CL', 'cs.LG', 'cs.SD'] | Whereas conventional spoken language understanding (SLU) systems map speech
to text, and then text to intent, end-to-end SLU systems map speech directly to
intent through a single trainable model. Achieving high accuracy with these
end-to-end models without a large amount of training data is difficult. We
propose a method to reduce the data requirements of end-to-end SLU in which the
model is first pre-trained to predict words and phonemes, thus learning good
features for SLU. We introduce a new SLU dataset, Fluent Speech Commands, and
show that our method improves performance both when the full dataset is used
for training and when only a small subset is used. We also describe preliminary
experiments to gauge the model's ability to generalize to new phrases not heard
during training. | 2019-04-07T15:24:32Z | Accepted to Interspeech 2019 | null | null | Speech Model Pre-training for End-to-End Spoken Language Understanding | ['Loren Lugosch', 'M. Ravanelli', 'Patrick Ignoto', 'Vikrant Singh Tomar', 'Yoshua Bengio'] | 2,019 | Interspeech | 356 | 43 | ['Computer Science', 'Engineering'] |
1,904.03969 | Issue Framing in Online Discussion Fora | ['Mareike Hartmann', 'Tallulah Jansen', 'Isabelle Augenstein', 'Anders Søgaard'] | ['cs.CL', 'cs.LG'] | In online discussion fora, speakers often make arguments for or against
something, say birth control, by highlighting certain aspects of the topic. In
social science, this is referred to as issue framing. In this paper, we
introduce a new issue frame annotated corpus of online discussions. We explore
to what extent models trained to detect issue frames in newswire and social
media can be transferred to the domain of discussion fora, using a combination
of multi-task and adversarial training, assuming only unlabeled training data
in the target domain. | 2019-04-08T11:36:53Z | To appear in NAACL-HLT 2019 | null | null | null | null | null | null | null | null | null |
1,904.04971 | CondConv: Conditionally Parameterized Convolutions for Efficient
Inference | ['Brandon Yang', 'Gabriel Bender', 'Quoc V. Le', 'Jiquan Ngiam'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Convolutional layers are one of the basic building blocks of modern deep
neural networks. One fundamental assumption is that convolutional kernels
should be shared for all examples in a dataset. We propose conditionally
parameterized convolutions (CondConv), which learn specialized convolutional
kernels for each example. Replacing normal convolutions with CondConv enables
us to increase the size and capacity of a network, while maintaining efficient
inference. We demonstrate that scaling networks with CondConv improves the
performance and inference cost trade-off of several existing convolutional
neural network architectures on both classification and detection tasks. On
ImageNet classification, our CondConv approach applied to EfficientNet-B0
achieves state-of-the-art performance of 78.3% accuracy with only 413M
multiply-adds. Code and checkpoints for the CondConv Tensorflow layer and
CondConv-EfficientNet models are available at:
https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet/condconv. | 2019-04-10T01:46:48Z | null | NeurIPS 2019 | null | null | null | null | null | null | null | null |
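CondConv's key efficiency trick, per the abstract above, is combining the expert kernels with per-example routing weights *before* a single convolution, rather than running one convolution per expert. A minimal 1-D sketch (toy shapes and plain lists; this is an illustration of the idea, not the released TensorFlow layer):

```python
def condconv1d(x, kernels, routing_weights):
    """CondConv-style conditional convolution in 1-D: mix the expert
    kernels with this example's routing weights first, then run a
    single 'valid' convolution with the mixed kernel."""
    k = len(kernels[0])
    mixed = [sum(w * kern[i] for w, kern in zip(routing_weights, kernels))
             for i in range(k)]
    return [sum(mixed[j] * x[i + j] for j in range(k))
            for i in range(len(x) - k + 1)]
```

Because convolution is linear in the kernel, mixing first gives the same result as convolving with each expert and mixing the outputs, at roughly the cost of one convolution.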
1,904.06472 | A Repository of Conversational Datasets | ['Matthew Henderson', 'Paweł Budzianowski', 'Iñigo Casanueva', 'Sam Coope', 'Daniela Gerz', 'Girish Kumar', 'Nikola Mrkšić', 'Georgios Spithourakis', 'Pei-Hao Su', 'Ivan Vulić', 'Tsung-Hsien Wen'] | ['cs.CL'] | Progress in Machine Learning is often driven by the availability of large
datasets, and consistent evaluation metrics for comparing modeling approaches.
To this end, we present a repository of conversational datasets consisting of
hundreds of millions of examples, and a standardised evaluation procedure for
conversational response selection models using '1-of-100 accuracy'. The
repository contains scripts that allow researchers to reproduce the standard
datasets, or to adapt the pre-processing and data filtering steps to their
needs. We introduce and evaluate several competitive baselines for
conversational response selection, whose implementations are shared in the
repository, as well as a neural encoder model that is trained on the entire
training set. | 2019-04-13T02:59:48Z | null | Proceedings of the Workshop on NLP for Conversational AI (2019) | null | null | null | null | null | null | null | null |
1,904.07396 | Real Image Denoising with Feature Attention | ['Saeed Anwar', 'Nick Barnes'] | ['cs.CV', 'cs.LG'] | Deep convolutional neural networks perform better on images containing
spatially invariant noise (synthetic noise); however, their performance is
limited on real-noisy photographs and requires multiple stage network modeling.
To advance the practicability of denoising algorithms, this paper proposes a
novel single-stage blind real image denoising network (RIDNet) by employing a
modular architecture. We use a residual on the residual structure to ease the
flow of low-frequency information and apply feature attention to exploit the
channel dependencies. Furthermore, the evaluation in terms of quantitative
metrics and visual quality on three synthetic and four real noisy datasets
against 19 state-of-the-art algorithms demonstrate the superiority of our
RIDNet. | 2019-04-16T01:55:08Z | Accepted in ICCV (Oral), 2019 | null | null | null | null | null | null | null | null | null |
1,904.07733 | Subjective Assessment of Text Complexity: A Dataset for German Language | ['Babak Naderi', 'Salar Mohtaj', 'Kaspar Ensikat', 'Sebastian Möller'] | ['cs.CL'] | This paper presents TextComplexityDE, a dataset consisting of 1,000 sentences
in German taken from 23 Wikipedia articles in 3 different article genres, to
be used for developing text-complexity prediction models and automatic text
simplification in German. The dataset includes
subjective assessments of different text-complexity aspects provided by German
learners at levels A and B. In addition, it contains manual simplifications of
250 of those sentences provided by native speakers and subjective assessment of
the simplified sentences by participants from the target group. The subjective
ratings were collected using both laboratory studies and a crowdsourcing
approach. | 2019-04-16T14:39:21Z | null | null | null | null | null | null | null | null | null | null |
1,904.0785 | Objects as Points | ['Xingyi Zhou', 'Dequan Wang', 'Philipp Krähenbühl'] | ['cs.CV'] | Detection identifies objects as axis-aligned boxes in an image. Most
successful object detectors enumerate a nearly exhaustive list of potential
object locations and classify each. This is wasteful, inefficient, and requires
additional post-processing. In this paper, we take a different approach. We
model an object as a single point --- the center point of its bounding box. Our
detector uses keypoint estimation to find center points and regresses to all
other object properties, such as size, 3D location, orientation, and even pose.
Our center point based approach, CenterNet, is end-to-end differentiable,
simpler, faster, and more accurate than corresponding bounding box based
detectors. CenterNet achieves the best speed-accuracy trade-off on the MS COCO
dataset, with 28.1% AP at 142 FPS, 37.4% AP at 52 FPS, and 45.1% AP with
multi-scale testing at 1.4 FPS. We use the same approach to estimate 3D
bounding box in the KITTI benchmark and human pose on the COCO keypoint
dataset. Our method performs competitively with sophisticated multi-stage
methods and runs in real-time. | 2019-04-16T17:54:26Z | 12 pages, 5 figures | null | null | Objects as Points | ['Xingyi Zhou', 'Dequan Wang', 'Philipp Krähenbühl'] | 2,019 | arXiv.org | 3,266 | 64 | ['Computer Science'] |
1,904.08375 | Document Expansion by Query Prediction | ['Rodrigo Nogueira', 'Wei Yang', 'Jimmy Lin', 'Kyunghyun Cho'] | ['cs.IR', 'cs.LG'] | One technique to improve the retrieval effectiveness of a search engine is to
expand documents with terms that are related or representative of the
documents' content. From the perspective of a question answering system, this
might comprise questions the document can potentially answer. Following this
observation, we propose a simple method that predicts which queries will be
issued for a given document and then expands it with those predictions, using
a vanilla sequence-to-sequence model trained on datasets consisting of pairs
of queries and relevant documents. By combining our method with a
highly-effective re-ranking component, we achieve the state of the art in two
retrieval tasks. In a latency-critical regime, retrieval results alone (without
re-ranking) approach the effectiveness of more computationally expensive neural
re-rankers but are much faster. | 2019-04-17T17:20:14Z | null | null | null | null | null | null | null | null | null | null |
1,904.08779 | SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition | ['Daniel S. Park', 'William Chan', 'Yu Zhang', 'Chung-Cheng Chiu', 'Barret Zoph', 'Ekin D. Cubuk', 'Quoc V. Le'] | ['eess.AS', 'cs.CL', 'cs.LG', 'cs.SD', 'stat.ML'] | We present SpecAugment, a simple data augmentation method for speech
recognition. SpecAugment is applied directly to the feature inputs of a neural
network (i.e., filter bank coefficients). The augmentation policy consists of
warping the features, masking blocks of frequency channels, and masking blocks
of time steps. We apply SpecAugment on Listen, Attend and Spell networks for
end-to-end speech recognition tasks. We achieve state-of-the-art performance on
the LibriSpeech 960h and Switchboard 300h tasks, outperforming all prior work.
On LibriSpeech, we achieve 6.8% WER on test-other without the use of a language
model, and 5.8% WER with shallow fusion with a language model. This compares to
the previous state-of-the-art hybrid system of 7.5% WER. For Switchboard, we
achieve 7.2%/14.6% on the Switchboard/CallHome portion of the Hub5'00 test set
without the use of a language model, and 6.8%/14.1% with shallow fusion, which
compares to the previous state-of-the-art hybrid system at 8.3%/17.3% WER. | 2019-04-18T17:53:38Z | 5 pages, 3 figures, 6 tables; v3: references added | Proc. Interspeech 2019, 2613-2617 | 10.21437/Interspeech.2019-2680 | null | null | null | null | null | null | null |
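The frequency- and time-masking parts of the SpecAugment policy described above can be sketched in a few lines (a toy version operating on plain lists; time warping is omitted, and masking to 0.0 and the default mask widths are assumptions of this sketch, not the paper's exact settings):

```python
import random

def spec_augment(spec, F=2, T=2, num_freq_masks=1, num_time_masks=1, rng=None):
    """Mask a block of frequency channels and a block of time steps in a
    spectrogram (a list of frames, each a list of filterbank values).
    Masked values are set to 0.0; mask widths are drawn from [0, F] / [0, T]."""
    rng = rng or random.Random(0)
    n_frames, n_bins = len(spec), len(spec[0])
    out = [row[:] for row in spec]          # do not mutate the input
    for _ in range(num_freq_masks):
        f = rng.randint(0, F)               # width of the frequency mask
        f0 = rng.randint(0, max(0, n_bins - f))
        for t in range(n_frames):
            for k in range(f0, f0 + f):
                out[t][k] = 0.0
    for _ in range(num_time_masks):
        w = rng.randint(0, T)               # width of the time mask
        t0 = rng.randint(0, max(0, n_frames - w))
        for t in range(t0, t0 + w):
            out[t] = [0.0] * n_bins
    return out
```

Because the augmentation acts directly on the features, it costs almost nothing at training time and needs no extra audio data.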
1,904.09077 | Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT | ['Shijie Wu', 'Mark Dredze'] | ['cs.CL'] | Pretrained contextual representation models (Peters et al., 2018; Devlin et
al., 2018) have pushed forward the state-of-the-art on many NLP tasks. A new
release of BERT (Devlin, 2018) includes a model simultaneously pretrained on
104 languages with impressive performance for zero-shot cross-lingual transfer
on a natural language inference task. This paper explores the broader
cross-lingual potential of mBERT (multilingual) as a zero shot language
transfer model on 5 NLP tasks covering a total of 39 languages from various
language families: NLI, document classification, NER, POS tagging, and
dependency parsing. We compare mBERT with the best-published methods for
zero-shot cross-lingual transfer and find mBERT competitive on each task.
Additionally, we investigate the most effective strategy for utilizing mBERT in
this manner, determine to what extent mBERT generalizes away from language
specific features, and measure factors that influence cross-lingual transfer. | 2019-04-19T04:45:44Z | EMNLP 2019 Camera Ready | null | null | Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT | ['Shijie Wu', 'Mark Dredze'] | 2,019 | Conference on Empirical Methods in Natural Language Processing | 681 | 46 | ['Computer Science'] |
1,904.09223 | ERNIE: Enhanced Representation through Knowledge Integration | ['Yu Sun', 'Shuohuan Wang', 'Yukun Li', 'Shikun Feng', 'Xuyi Chen', 'Han Zhang', 'Xin Tian', 'Danxiang Zhu', 'Hao Tian', 'Hua Wu'] | ['cs.CL'] | We present a novel language representation model enhanced by knowledge called
ERNIE (Enhanced Representation through kNowledge IntEgration). Inspired by the
masking strategy of BERT, ERNIE is designed to learn language representation
enhanced by knowledge masking strategies, which includes entity-level masking
and phrase-level masking. Entity-level strategy masks entities which are
usually composed of multiple words. Phrase-level strategy masks the whole
phrase, which is composed of several words standing together as a conceptual
unit. Experimental results show that ERNIE outperforms other baseline methods,
achieving new state-of-the-art results on five Chinese natural language
processing tasks including natural language inference, semantic similarity,
named entity recognition, sentiment analysis and question answering. We also
demonstrate that ERNIE has more powerful knowledge inference capacity on a
cloze test. | 2019-04-19T15:10:56Z | 8 pages | null | null | ERNIE: Enhanced Representation through Knowledge Integration | ['Yu Sun', 'Shuohuan Wang', 'Yukun Li', 'Shikun Feng', 'Xuyi Chen', 'Han Zhang', 'Xin Tian', 'Danxiang Zhu', 'Hao Tian', 'Hua Wu'] | 2,019 | arXiv.org | 907 | 23 | ['Computer Science'] |
1,904.09675 | BERTScore: Evaluating Text Generation with BERT | ['Tianyi Zhang', 'Varsha Kishore', 'Felix Wu', 'Kilian Q. Weinberger', 'Yoav Artzi'] | ['cs.CL'] | We propose BERTScore, an automatic evaluation metric for text generation.
Analogously to common metrics, BERTScore computes a similarity score for each
token in the candidate sentence with each token in the reference sentence.
However, instead of exact matches, we compute token similarity using contextual
embeddings. We evaluate using the outputs of 363 machine translation and image
captioning systems. BERTScore correlates better with human judgments and
provides stronger model selection performance than existing metrics. Finally,
we use an adversarial paraphrase detection task to show that BERTScore is more
robust to challenging examples when compared to existing metrics. | 2019-04-21T23:08:53Z | Code available at https://github.com/Tiiiger/bert_score; To appear in
ICLR2020 | null | null | null | null | null | null | null | null | null |
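The greedy token matching at the core of BERTScore, as described above, is easy to sketch: each reference token is matched to its most similar candidate token (recall) and vice versa (precision), with similarity taken as cosine over embeddings. The toy vectors below stand in for contextual embeddings; this is an illustrative sketch, not the released `bert_score` package (which also supports IDF weighting and baseline rescaling).

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def bertscore_f1(cand, ref):
    """cand / ref: lists of per-token embedding vectors. Greedily match
    each reference token to its best candidate token (recall) and each
    candidate token to its best reference token (precision)."""
    recall = sum(max(cosine(r, c) for c in cand) for r in ref) / len(ref)
    precision = sum(max(cosine(c, r) for r in ref) for c in cand) / len(cand)
    return 2 * precision * recall / (precision + recall)
```

Replacing exact n-gram matches with embedding similarity is what lets the metric reward paraphrases that share no surface words with the reference.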
1,904.09728 | SocialIQA: Commonsense Reasoning about Social Interactions | ['Maarten Sap', 'Hannah Rashkin', 'Derek Chen', 'Ronan LeBras', 'Yejin Choi'] | ['cs.CL'] | We introduce Social IQa, the first large-scale benchmark for commonsense
reasoning about social situations. Social IQa contains 38,000 multiple-choice
questions for probing emotional and social intelligence in a variety of
everyday situations (e.g., Q: "Jordan wanted to tell Tracy a secret, so Jordan
leaned towards Tracy. Why did Jordan do this?" A: "Make sure no one else could
hear"). Through crowdsourcing, we collect commonsense questions along with
correct and incorrect answers about social interactions, using a new framework
that mitigates stylistic artifacts in incorrect answers by asking workers to
provide the right answer to a different but related question. Empirical results
show that our benchmark is challenging for existing question-answering models
based on pretrained language models, compared to human performance (>20% gap).
Notably, we further establish Social IQa as a resource for transfer learning of
commonsense knowledge, achieving state-of-the-art performance on multiple
commonsense reasoning tasks (Winograd Schemas, COPA). | 2019-04-22T05:36:37Z | the first two authors contributed equally; accepted to EMNLP 2019;
camera ready version | null | null | null | null | null | null | null | null | null |
1,904.0973 | An Energy and GPU-Computation Efficient Backbone Network for Real-Time
Object Detection | ['Youngwan Lee', 'Joong-won Hwang', 'Sangrok Lee', 'Yuseok Bae', 'Jongyoul Park'] | ['cs.CV'] | As DenseNet conserves intermediate features with diverse receptive fields by
aggregating them with dense connection, it shows good performance on the object
detection task. Although feature reuse enables DenseNet to produce strong
features with a small number of model parameters and FLOPs, the detector with
DenseNet backbone shows rather slow speed and low energy efficiency. We find
that the input channels, which increase linearly with dense connections, lead
to heavy memory access cost, which in turn causes computation overhead and
more energy consumption. To
solve the inefficiency of DenseNet, we propose an energy and computation
efficient architecture called VoVNet comprised of One-Shot Aggregation (OSA).
The OSA not only adopts the strength of DenseNet that represents diversified
features with multi receptive fields but also overcomes the inefficiency of
dense connection by aggregating all features only once in the last feature
maps. To validate the effectiveness of VoVNet as a backbone network, we design
both lightweight and large-scale VoVNet and apply them to one-stage and
two-stage object detectors. Our VoVNet based detectors outperform DenseNet
based ones with 2x faster speed and the energy consumptions are reduced by 1.6x
- 4.1x. In addition to DenseNet, VoVNet also outperforms widely used ResNet
backbone with faster speed and better energy efficiency. In particular, the
small object detection performance has been significantly improved over
DenseNet and ResNet. | 2019-04-22T05:45:57Z | CVPR2019 CEFRL Workshop | null | null | An Energy and GPU-Computation Efficient Backbone Network for Real-Time Object Detection | ['Youngwan Lee', 'Joong-won Hwang', 'Sangrok Lee', 'Yuseok Bae', 'Jongyoul Park'] | 2,019 | 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) | 374 | 33 | ['Computer Science'] |
1,904.09751 | The Curious Case of Neural Text Degeneration | ['Ari Holtzman', 'Jan Buys', 'Li Du', 'Maxwell Forbes', 'Yejin Choi'] | ['cs.CL'] | Despite considerable advancements with deep neural language models, the
enigma of neural text degeneration persists when these models are tested as
text generators. The counter-intuitive empirical observation is that even
though the use of likelihood as training objective leads to high quality models
for a broad range of language understanding tasks, using likelihood as a
decoding objective leads to text that is bland and strangely repetitive.
In this paper, we reveal surprising distributional differences between human
text and machine text. In addition, we find that decoding strategies alone can
dramatically affect the quality of machine text, even when generated from
exactly the same neural language model. Our findings motivate Nucleus Sampling,
a simple but effective method to draw the best out of neural generation. By
sampling text from the dynamic nucleus of the probability distribution, which
allows for diversity while effectively truncating the less reliable tail of the
distribution, the resulting text better demonstrates the quality of human text,
yielding enhanced diversity without sacrificing fluency and coherence. | 2019-04-22T07:17:18Z | Published in ICLR 2020 | null | null | null | null | null | null | null | null | null |
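Nucleus (top-p) sampling as described in the abstract above reduces to a short procedure: sort tokens by probability, keep the smallest prefix whose cumulative mass exceeds p, renormalize, and sample. A minimal self-contained sketch (illustrative only; a real decoder would operate on model logits over a large vocabulary):

```python
import math
import random

def nucleus_sample(logits, p=0.9, rng=None):
    """Sample a token index from the smallest set of tokens (the 'nucleus')
    whose cumulative probability exceeds p; the unreliable tail of the
    distribution is truncated and the nucleus is renormalized."""
    rng = rng or random.Random(0)
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]          # numerically stable softmax
    z = sum(exps)
    probs = [e / z for e in exps]
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    nucleus, cum = [], 0.0
    for i in order:
        nucleus.append(i)
        cum += probs[i]
        if cum >= p:
            break
    r = rng.random() * cum                            # sample within the nucleus
    acc = 0.0
    for i in nucleus:
        acc += probs[i]
        if r <= acc:
            return i
    return nucleus[-1]
```

Unlike top-k, the cutoff adapts to the shape of the distribution: a peaked distribution yields a tiny nucleus, a flat one a large nucleus.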
1,904.10635 | Better Automatic Evaluation of Open-Domain Dialogue Systems with
Contextualized Embeddings | ['Sarik Ghazarian', 'Johnny Tian-Zheng Wei', 'Aram Galstyan', 'Nanyun Peng'] | ['cs.CL'] | Despite advances in open-domain dialogue systems, automatic evaluation of
such systems is still a challenging problem. Traditional reference-based
metrics such as BLEU are ineffective because there could be many valid
responses for a given context that share no common words with reference
responses. A recent work proposed Referenced metric and Unreferenced metric
Blended Evaluation Routine (RUBER) to combine a learning-based metric, which
predicts relatedness between a generated response and a given query, with
reference-based metric; it showed high correlation with human judgments. In
this paper, we explore using contextualized word embeddings to compute more
accurate relatedness scores, thus better evaluation metrics. Experiments show
that our evaluation metrics outperform RUBER, which is trained on static
embeddings. | 2019-04-24T04:16:44Z | 8 pages, 2 figures, NAACL 2019 Methods for Optimizing and Evaluating
Neural Language Generation (NeuralGen workshop) | null | null | null | null | null | null | null | null | null |
1,904.11486 | Making Convolutional Networks Shift-Invariant Again | ['Richard Zhang'] | ['cs.CV', 'cs.LG'] | Modern convolutional networks are not shift-invariant, as small input shifts
or translations can cause drastic changes in the output. Commonly used
downsampling methods, such as max-pooling, strided-convolution, and
average-pooling, ignore the sampling theorem. The well-known signal processing
fix is anti-aliasing by low-pass filtering before downsampling. However, simply
inserting this module into deep networks degrades performance; as a result, it
is seldom used today. We show that when integrated correctly, it is
compatible with existing architectural components, such as max-pooling and
strided-convolution. We observe \textit{increased accuracy} in ImageNet
classification, across several commonly-used architectures, such as ResNet,
DenseNet, and MobileNet, indicating effective regularization. Furthermore, we
observe \textit{better generalization}, in terms of stability and robustness to
input corruptions. Our results demonstrate that this classical signal
processing technique has been undeservingly overlooked in modern deep networks.
Code and anti-aliased versions of popular networks are available at
https://richzhang.github.io/antialiased-cnns/ . | 2019-04-25T17:56:21Z | Accepted to ICML 2019 | null | null | Making Convolutional Networks Shift-Invariant Again | ['Richard Zhang'] | 2,019 | International Conference on Machine Learning | 801 | 80 | ['Computer Science'] |
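The blur-before-subsample fix referenced above can be sketched in one dimension. The [1, 2, 1]/4 binomial kernel and reflect padding below are one common anti-aliasing choice assumed for this toy sketch, not necessarily the paper's exact configuration:

```python
def blurpool1d(x, stride=2):
    """Anti-aliased downsampling in 1-D: low-pass filter with a
    [1, 2, 1]/4 binomial kernel (reflect padding), then subsample
    by `stride` -- blur first, decimate second."""
    n = len(x)
    padded = [x[1]] + list(x) + [x[-2]]     # reflect-pad one sample each side
    blurred = [(padded[i] + 2 * padded[i + 1] + padded[i + 2]) / 4.0
               for i in range(n)]
    return blurred[::stride]
```

Naive stride-2 subsampling of an alternating signal returns either all +1s or all -1s depending on a one-sample shift; blurring first removes that above-Nyquist component, so both phases map to the same output.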
1,904.11491 | Local Relation Networks for Image Recognition | ['Han Hu', 'Zheng Zhang', 'Zhenda Xie', 'Stephen Lin'] | ['cs.CV', 'cs.AI', 'cs.LG'] | The convolution layer has been the dominant feature extractor in computer
vision for years. However, the spatial aggregation in convolution is basically
a pattern matching process that applies fixed filters which are inefficient at
modeling visual elements with varying spatial distributions. This paper
presents a new image feature extractor, called the local relation layer, that
adaptively determines aggregation weights based on the compositional
relationship of local pixel pairs. With this relational approach, it can
composite visual elements into higher-level entities in a more efficient manner
that benefits semantic inference. A network built with local relation layers,
called the Local Relation Network (LR-Net), is found to provide greater
modeling capacity than its counterpart built with regular convolution on
large-scale recognition tasks such as ImageNet classification. | 2019-04-25T17:59:35Z | null | null | null | Local Relation Networks for Image Recognition | ['Han Hu', 'Zheng Zhang', 'Zhenda Xie', 'Stephen Lin'] | 2,019 | IEEE International Conference on Computer Vision | 503 | 39 | ['Computer Science'] |
1,904.11492 | GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond | ['Yue Cao', 'Jiarui Xu', 'Stephen Lin', 'Fangyun Wei', 'Han Hu'] | ['cs.CV', 'cs.AI', 'cs.LG'] | The Non-Local Network (NLNet) presents a pioneering approach for capturing
long-range dependencies, via aggregating query-specific global context to each
query position. However, through a rigorous empirical analysis, we have found
that the global contexts modeled by non-local network are almost the same for
different query positions within an image. In this paper, we take advantage of
this finding to create a simplified network based on a query-independent
formulation, which maintains the accuracy of NLNet but with significantly less
computation. We further observe that this simplified design shares similar
structure with Squeeze-Excitation Network (SENet). Hence we unify them into a
three-step general framework for global context modeling. Within the general
framework, we design a better instantiation, called the global context (GC)
block, which is lightweight and can effectively model the global context. The
lightweight property allows us to apply it for multiple layers in a backbone
network to construct a global context network (GCNet), which generally
outperforms both simplified NLNet and SENet on major benchmarks for various
recognition tasks. The code and configurations are released at
https://github.com/xvjiarui/GCNet. | 2019-04-25T17:59:42Z | null | null | null | null | null | null | null | null | null | null |
1,904.12848 | Unsupervised Data Augmentation for Consistency Training | ['Qizhe Xie', 'Zihang Dai', 'Eduard Hovy', 'Minh-Thang Luong', 'Quoc V. Le'] | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV', 'stat.ML'] | Semi-supervised learning lately has shown much promise in improving deep
learning models when labeled data is scarce. Common among recent approaches is
the use of consistency training on a large amount of unlabeled data to
constrain model predictions to be invariant to input noise. In this work, we
present a new perspective on how to effectively noise unlabeled examples and
argue that the quality of noising, specifically those produced by advanced data
augmentation methods, plays a crucial role in semi-supervised learning. By
substituting simple noising operations with advanced data augmentation methods
such as RandAugment and back-translation, our method brings substantial
improvements across six language and three vision tasks under the same
consistency training framework. On the IMDb text classification dataset, with
only 20 labeled examples, our method achieves an error rate of 4.20,
outperforming the state-of-the-art model trained on 25,000 labeled examples. On
a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms
all previous approaches and achieves an error rate of 5.43 with only 250
examples. Our method also combines well with transfer learning, e.g., when
finetuning from BERT, and yields improvements in high-data regime, such as
ImageNet, whether when there is only 10% labeled data or when a full labeled
set with 1.3M extra unlabeled examples is used. Code is available at
https://github.com/google-research/uda. | 2019-04-29T17:56:59Z | NeurIPS 2020 | null | null | null | null | null | null | null | null | null |
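The UDA record above describes consistency training: predictions on an unlabeled example and on its augmented version are constrained to agree. A minimal NumPy sketch of such a consistency term is below; the function name, batch layout, and use of KL divergence as the agreement measure are illustrative assumptions, not the paper's released code.

```python
import numpy as np

def consistency_loss(p_clean, p_aug, eps=1e-12):
    """Average KL(p_clean || p_aug) over a batch of unlabeled examples.

    p_clean: model predictions on the original inputs, shape (batch, classes)
    p_aug:   predictions on augmented versions of the same inputs
    (illustrative sketch -- names and layout are assumptions)
    """
    p = np.asarray(p_clean, dtype=float)
    q = np.asarray(p_aug, dtype=float)
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)
    return float(np.mean(kl))
```

Minimizing this term pushes the model to be invariant to the augmentation; the abstract's point is that using advanced augmentations (RandAugment, back-translation) as the "noise" here works far better than simple perturbations.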
1905.00537 | SuperGLUE: A Stickier Benchmark for General-Purpose Language
Understanding Systems | ['Alex Wang', 'Yada Pruksachatkun', 'Nikita Nangia', 'Amanpreet Singh', 'Julian Michael', 'Felix Hill', 'Omer Levy', 'Samuel R. Bowman'] | ['cs.CL', 'cs.AI'] | In the last year, new models and methods for pretraining and transfer
learning have driven striking performance improvements across a range of
language understanding tasks. The GLUE benchmark, introduced a little over one
year ago, offers a single-number metric that summarizes progress on a diverse
set of such tasks, but performance on the benchmark has recently surpassed the
level of non-expert humans, suggesting limited headroom for further research.
In this paper we present SuperGLUE, a new benchmark styled after GLUE with a
new set of more difficult language understanding tasks, a software toolkit, and
a public leaderboard. SuperGLUE is available at super.gluebenchmark.com. | 2019-05-02T00:41:50Z | NeurIPS 2019, super.gluebenchmark.com updating acknowledegments | null | null | SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems | ['Alex Wang', 'Yada Pruksachatkun', 'Nikita Nangia', 'Amanpreet Singh', 'Julian Michael', 'Felix Hill', 'Omer Levy', 'Samuel R. Bowman'] | 2,019 | Neural Information Processing Systems | 2,331 | 86 | ['Computer Science'] |
1905.00546 | Billion-scale semi-supervised learning for image classification | ['I. Zeki Yalniz', 'Hervé Jégou', 'Kan Chen', 'Manohar Paluri', 'Dhruv Mahajan'] | ['cs.CV'] | This paper presents a study of semi-supervised learning with large
convolutional networks. We propose a pipeline, based on a teacher/student
paradigm, that leverages a large collection of unlabelled images (up to 1
billion). Our main goal is to improve the performance for a given target
architecture, like ResNet-50 or ResNext. We provide an extensive analysis of
the success factors of our approach, which leads us to formulate some
recommendations to produce high-accuracy models for image classification with
semi-supervised learning. As a result, our approach brings important gains to
standard architectures for image, video and fine-grained classification. For
instance, by leveraging one billion unlabelled images, our learned vanilla
ResNet-50 achieves 81.2% top-1 accuracy on the ImageNet benchmark. | 2019-05-02T02:08:18Z | null | null | null | null | null | null | null | null | null | null |
1905.00641 | RetinaFace: Single-stage Dense Face Localisation in the Wild | ['Jiankang Deng', 'Jia Guo', 'Yuxiang Zhou', 'Jinke Yu', 'Irene Kotsia', 'Stefanos Zafeiriou'] | ['cs.CV'] | Though tremendous strides have been made in uncontrolled face detection,
accurate and efficient face localisation in the wild remains an open challenge.
This paper presents a robust single-stage face detector, named RetinaFace,
which performs pixel-wise face localisation on various scales of faces by
taking advantages of joint extra-supervised and self-supervised multi-task
learning. Specifically, We make contributions in the following five aspects:
(1) We manually annotate five facial landmarks on the WIDER FACE dataset and
observe significant improvement in hard face detection with the assistance of
this extra supervision signal. (2) We further add a self-supervised mesh
decoder branch for predicting a pixel-wise 3D shape face information in
parallel with the existing supervised branches. (3) On the WIDER FACE hard test
set, RetinaFace outperforms the state of the art average precision (AP) by 1.1%
(achieving AP equal to 91.4%). (4) On the IJB-C test set, RetinaFace enables
state of the art methods (ArcFace) to improve their results in face
verification (TAR=89.59% for FAR=1e-6). (5) By employing light-weight backbone
networks, RetinaFace can run real-time on a single CPU core for a
VGA-resolution image. Extra annotations and code have been made available at:
https://github.com/deepinsight/insightface/tree/master/RetinaFace. | 2019-05-02T09:45:23Z | null | null | null | null | null | null | null | null | null | null |
1905.00953 | Omni-Scale Feature Learning for Person Re-Identification | ['Kaiyang Zhou', 'Yongxin Yang', 'Andrea Cavallaro', 'Tao Xiang'] | ['cs.CV'] | As an instance-level recognition problem, person re-identification (ReID)
relies on discriminative features, which not only capture different spatial
scales but also encapsulate an arbitrary combination of multiple scales. We
call features of both homogeneous and heterogeneous scales omni-scale features.
In this paper, a novel deep ReID CNN is designed, termed Omni-Scale Network
(OSNet), for omni-scale feature learning. This is achieved by designing a
residual block composed of multiple convolutional streams, each detecting
features at a certain scale. Importantly, a novel unified aggregation gate is
introduced to dynamically fuse multi-scale features with input-dependent
channel-wise weights. To efficiently learn spatial-channel correlations and
avoid overfitting, the building block uses pointwise and depthwise
convolutions. By stacking such block layer-by-layer, our OSNet is extremely
lightweight and can be trained from scratch on existing ReID benchmarks.
Despite its small model size, OSNet achieves state-of-the-art performance on
six person ReID datasets, outperforming most large-sized models, often by a
clear margin. Code and models are available at:
\url{https://github.com/KaiyangZhou/deep-person-reid}. | 2019-05-02T20:42:26Z | ICCV 2019; This version adds additional training recipes for
practitioners | null | null | Omni-Scale Feature Learning for Person Re-Identification | ['Kaiyang Zhou', 'Yongxin Yang', 'A. Cavallaro', 'T. Xiang'] | 2,019 | IEEE International Conference on Computer Vision | 839 | 93 | ['Computer Science'] |
1905.01969 | Poly-encoders: Transformer Architectures and Pre-training Strategies for
Fast and Accurate Multi-sentence Scoring | ['Samuel Humeau', 'Kurt Shuster', 'Marie-Anne Lachaux', 'Jason Weston'] | ['cs.CL', 'cs.AI'] | The use of deep pre-trained bidirectional transformers has led to remarkable
progress in a number of applications (Devlin et al., 2018). For tasks that make
pairwise comparisons between sequences, matching a given input with a
corresponding label, two approaches are common: Cross-encoders performing full
self-attention over the pair and Bi-encoders encoding the pair separately. The
former often performs better, but is too slow for practical use. In this work,
we develop a new transformer architecture, the Poly-encoder, that learns global
rather than token level self-attention features. We perform a detailed
comparison of all three approaches, including what pre-training and fine-tuning
strategies work best. We show our models achieve state-of-the-art results on
three existing tasks; that Poly-encoders are faster than Cross-encoders and
more accurate than Bi-encoders; and that the best results are obtained by
pre-training on large datasets similar to the downstream tasks. | 2019-04-22T02:18:00Z | ICLR 2020 | null | null | Poly-encoders: Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring | ['Samuel Humeau', 'Kurt Shuster', 'M. Lachaux', 'J. Weston'] | 2,019 | International Conference on Learning Representations | 289 | 34 | ['Computer Science'] |
1905.02244 | Searching for MobileNetV3 | ['Andrew Howard', 'Mark Sandler', 'Grace Chu', 'Liang-Chieh Chen', 'Bo Chen', 'Mingxing Tan', 'Weijun Wang', 'Yukun Zhu', 'Ruoming Pang', 'Vijay Vasudevan', 'Quoc V. Le', 'Hartwig Adam'] | ['cs.CV'] | We present the next generation of MobileNets based on a combination of
complementary search techniques as well as a novel architecture design.
MobileNetV3 is tuned to mobile phone CPUs through a combination of
hardware-aware network architecture search (NAS) complemented by the NetAdapt
algorithm and then subsequently improved through novel architecture advances.
This paper starts the exploration of how automated search algorithms and
network design can work together to harness complementary approaches improving
the overall state of the art. Through this process we create two new MobileNet
models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted
for high and low resource use cases. These models are then adapted and applied
to the tasks of object detection and semantic segmentation. For the task of
semantic segmentation (or any dense pixel prediction), we propose a new
efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling
(LR-ASPP). We achieve new state of the art results for mobile classification,
detection and segmentation. MobileNetV3-Large is 3.2\% more accurate on
ImageNet classification while reducing latency by 15\% compared to MobileNetV2.
MobileNetV3-Small is 4.6\% more accurate while reducing latency by 5\% compared
to MobileNetV2. MobileNetV3-Large detection is 25\% faster at roughly the same
accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\%
faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation. | 2019-05-06T19:38:31Z | ICCV 2019 | null | null | null | null | null | null | null | null | null |
1905.02450 | MASS: Masked Sequence to Sequence Pre-training for Language Generation | ['Kaitao Song', 'Xu Tan', 'Tao Qin', 'Jianfeng Lu', 'Tie-Yan Liu'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Pre-training and fine-tuning, e.g., BERT, have achieved great success in
language understanding by transferring knowledge from rich-resource
pre-training task to the low/zero-resource downstream tasks. Inspired by the
success of BERT, we propose MAsked Sequence to Sequence pre-training (MASS) for
the encoder-decoder based language generation tasks. MASS adopts the
encoder-decoder framework to reconstruct a sentence fragment given the
remaining part of the sentence: its encoder takes a sentence with randomly
masked fragment (several consecutive tokens) as input, and its decoder tries to
predict this masked fragment. In this way, MASS can jointly train the encoder
and decoder to develop the capability of representation extraction and language
modeling. By further fine-tuning on a variety of zero/low-resource language
generation tasks, including neural machine translation, text summarization and
conversational response generation (3 tasks and totally 8 datasets), MASS
achieves significant improvements over the baselines without pre-training or
with other pre-training methods. Specially, we achieve the state-of-the-art
accuracy (37.5 in terms of BLEU score) on the unsupervised English-French
translation, even beating the early attention-based supervised model. | 2019-05-07T10:13:04Z | Accepted by ICML 2019 | null | null | MASS: Masked Sequence to Sequence Pre-training for Language Generation | ['Kaitao Song', 'Xu Tan', 'Tao Qin', 'Jianfeng Lu', 'Tie-Yan Liu'] | 2,019 | International Conference on Machine Learning | 967 | 60 | ['Computer Science'] |
1905.04899 | CutMix: Regularization Strategy to Train Strong Classifiers with
Localizable Features | ['Sangdoo Yun', 'Dongyoon Han', 'Seong Joon Oh', 'Sanghyuk Chun', 'Junsuk Choe', 'Youngjoon Yoo'] | ['cs.CV', 'cs.LG'] | Regional dropout strategies have been proposed to enhance the performance of
convolutional neural network classifiers. They have proved to be effective for
guiding the model to attend on less discriminative parts of objects (e.g. leg
as opposed to head of a person), thereby letting the network generalize better
and have better object localization capabilities. On the other hand, current
methods for regional dropout remove informative pixels on training images by
overlaying a patch of either black pixels or random noise. Such removal is not
desirable because it leads to information loss and inefficiency during
training. We therefore propose the CutMix augmentation strategy: patches are
cut and pasted among training images where the ground truth labels are also
mixed proportionally to the area of the patches. By making efficient use of
training pixels and retaining the regularization effect of regional dropout,
CutMix consistently outperforms the state-of-the-art augmentation strategies on
CIFAR and ImageNet classification tasks, as well as on the ImageNet
weakly-supervised localization task. Moreover, unlike previous augmentation
methods, our CutMix-trained ImageNet classifier, when used as a pretrained
model, results in consistent performance gains in Pascal detection and MS-COCO
image captioning benchmarks. We also show that CutMix improves the model
robustness against input corruptions and its out-of-distribution detection
performances. Source code and pretrained models are available at
https://github.com/clovaai/CutMix-PyTorch . | 2019-05-13T08:10:22Z | Accepted at ICCV 2019 (oral talk). 14 pages, 5 figures | null | null | null | null | null | null | null | null | null |
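The CutMix record above mixes training images by cutting a patch from one image into another and blending the labels in proportion to patch area. A minimal NumPy sketch of that mixing rule follows; the function and argument names are illustrative assumptions, not the released PyTorch implementation.

```python
import numpy as np

def cutmix(img_a, lab_a, img_b, lab_b, lam, rng=None):
    """Paste a random patch from img_b into img_a; mix labels by patch area.

    Illustrative sketch of the CutMix rule -- names are assumptions.
    lam is the target fraction of img_a kept (patch area fraction = 1 - lam).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = img_a.shape[:2]
    # Patch side lengths chosen so the intended patch area fraction is 1 - lam.
    cut_h = int(round(h * np.sqrt(1 - lam)))
    cut_w = int(round(w * np.sqrt(1 - lam)))
    cy, cx = int(rng.integers(h)), int(rng.integers(w))
    y0, y1 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x0, x1 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    mixed = img_a.copy()
    mixed[y0:y1, x0:x1] = img_b[y0:y1, x0:x1]
    # Re-derive lambda from the actual (possibly clipped) patch area,
    # so the mixed label matches the pixels actually kept.
    lam_adj = 1.0 - (y1 - y0) * (x1 - x0) / (h * w)
    return mixed, lam_adj * lab_a + (1.0 - lam_adj) * lab_b
```

Unlike regional dropout, no pixels are wasted: the removed region is filled with informative content from another image, and the soft label accounts for the substitution.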
1905.05583 | How to Fine-Tune BERT for Text Classification? | ['Chi Sun', 'Xipeng Qiu', 'Yige Xu', 'Xuanjing Huang'] | ['cs.CL'] | Language model pre-training has proven to be useful in learning universal
language representations. As a state-of-the-art language model pre-training
model, BERT (Bidirectional Encoder Representations from Transformers) has
achieved amazing results in many language understanding tasks. In this paper,
we conduct exhaustive experiments to investigate different fine-tuning methods
of BERT on text classification task and provide a general solution for BERT
fine-tuning. Finally, the proposed solution obtains new state-of-the-art
results on eight widely-studied text classification datasets. | 2019-05-14T13:17:26Z | null | null | null | null | null | null | null | null | null | null |
1905.05700 | Learning meters of Arabic and English poems with Recurrent Neural
Networks: a step forward for language understanding and synthesis | ['Waleed A. Yousef', 'Omar M. Ibrahime', 'Taha M. Madbouly', 'Moustafa A. Mahmoud'] | ['cs.CL', 'cs.AI', 'cs.LG', 'stat.ML'] | Recognizing a piece of writing as a poem or prose is usually easy for the
majority of people; however, only specialists can determine which meter a poem
belongs to. In this paper, we build Recurrent Neural Network (RNN) models that
can classify poems according to their meters from plain text. The input text is
encoded at the character level and directly fed to the models without feature
handcrafting. This is a step forward for machine understanding and synthesis of
languages in general, and Arabic language in particular. Among the 16 poem
meters of Arabic and the 4 meters of English the networks were able to
correctly classify poem with an overall accuracy of 96.38\% and 82.31\%
respectively. The poem datasets used to conduct this research were massive,
over 1.5 million of verses, and were crawled from different nontechnical
sources, almost Arabic and English literature sites, and in different
heterogeneous and unstructured formats. These datasets are now made publicly
available in clean, structured, and documented format for other future
research. To the best of the authors' knowledge, this research is the first to
address classifying poem meters in a machine learning approach, in general, and
in RNN featureless based approach, in particular. In addition, the dataset is
the first publicly available dataset ready for the purpose of future
computational research. | 2019-05-07T21:14:03Z | null | null | null | null | null | null | null | null | null | null |
1905.05879 | AUTOVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss | ['Kaizhi Qian', 'Yang Zhang', 'Shiyu Chang', 'Xuesong Yang', 'Mark Hasegawa-Johnson'] | ['eess.AS', 'cs.AI', 'cs.LG', 'cs.SD', 'stat.ML'] | Non-parallel many-to-many voice conversion, as well as zero-shot voice
conversion, remain under-explored areas. Deep style transfer algorithms, such
as generative adversarial networks (GAN) and conditional variational
autoencoder (CVAE), are being applied as new solutions in this field. However,
GAN training is sophisticated and difficult, and there is no strong evidence
that its generated speech is of good perceptual quality. On the other hand,
CVAE training is simple but does not come with the distribution-matching
property of a GAN. In this paper, we propose a new style transfer scheme that
involves only an autoencoder with a carefully designed bottleneck. We formally
show that this scheme can achieve distribution-matching style transfer by
training only on a self-reconstruction loss. Based on this scheme, we proposed
AUTOVC, which achieves state-of-the-art results in many-to-many voice
conversion with non-parallel data, and which is the first to perform zero-shot
voice conversion. | 2019-05-14T23:19:04Z | To Appear in Thirty-sixth International Conference on Machine
Learning (ICML 2019) | null | null | null | null | null | null | null | null | null |
1905.06290 | A Surprisingly Robust Trick for Winograd Schema Challenge | ['Vid Kocijan', 'Ana-Maria Cretu', 'Oana-Maria Camburu', 'Yordan Yordanov', 'Thomas Lukasiewicz'] | ['cs.CL'] | The Winograd Schema Challenge (WSC) dataset WSC273 and its inference
counterpart WNLI are popular benchmarks for natural language understanding and
commonsense reasoning. In this paper, we show that the performance of three
language models on WSC273 strongly improves when fine-tuned on a similar
pronoun disambiguation problem dataset (denoted WSCR). We additionally generate
a large unsupervised WSC-like dataset. By fine-tuning the BERT language model
both on the introduced and on the WSCR dataset, we achieve overall accuracies
of 72.5% and 74.7% on WSC273 and WNLI, improving the previous state-of-the-art
solutions by 8.8% and 9.6%, respectively. Furthermore, our fine-tuned models
are also consistently more robust on the "complex" subsets of WSC273,
introduced by Trichelair et al. (2018). | 2019-05-15T16:47:11Z | Appeared as part of the ACL 2019 conference | null | 10.18653/v1/P19-1478 | A Surprisingly Robust Trick for the Winograd Schema Challenge | ['Vid Kocijan', 'Ana-Maria Cretu', 'Oana-Maria Camburu', 'Yordan Yordanov', 'Thomas Lukasiewicz'] | 2,019 | Annual Meeting of the Association for Computational Linguistics | 101 | 22 | ['Computer Science'] |
1905.07213 | Adaptation of Deep Bidirectional Multilingual Transformers for Russian
Language | ['Yuri Kuratov', 'Mikhail Arkhipov'] | ['cs.CL'] | The paper introduces methods of adaptation of multilingual masked language
models for a specific language. Pre-trained bidirectional language models show
state-of-the-art performance on a wide range of tasks including reading
comprehension, natural language inference, and sentiment analysis. At the
moment there are two alternative approaches to train such models: monolingual
and multilingual. While language specific models show superior performance,
multilingual models allow to perform a transfer from one language to another
and solve tasks for different languages simultaneously. This work shows that
transfer learning from a multilingual model to monolingual model results in
significant growth of performance on such tasks as reading comprehension,
paraphrase detection, and sentiment analysis. Furthermore, multilingual
initialization of monolingual model substantially reduces training time.
Pre-trained models for the Russian language are open sourced. | 2019-05-17T11:39:21Z | null | null | null | Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language | ['Yuri Kuratov', 'M. Arkhipov'] | 2,019 | arXiv.org | 275 | 18 | ['Computer Science'] |
1905.07830 | HellaSwag: Can a Machine Really Finish Your Sentence? | ['Rowan Zellers', 'Ari Holtzman', 'Yonatan Bisk', 'Ali Farhadi', 'Yejin Choi'] | ['cs.CL'] | Recent work by Zellers et al. (2018) introduced a new task of commonsense
natural language inference: given an event description such as "A woman sits at
a piano," a machine must select the most likely followup: "She sets her fingers
on the keys." With the introduction of BERT, near human-level performance was
reached. Does this mean that machines can perform human level commonsense
inference?
In this paper, we show that commonsense inference still proves difficult for
even state-of-the-art models, by presenting HellaSwag, a new challenge dataset.
Though its questions are trivial for humans (>95% accuracy), state-of-the-art
models struggle (<48%). We achieve this via Adversarial Filtering (AF), a data
collection paradigm wherein a series of discriminators iteratively select an
adversarial set of machine-generated wrong answers. AF proves to be
surprisingly robust. The key insight is to scale up the length and complexity
of the dataset examples towards a critical 'Goldilocks' zone wherein generated
text is ridiculous to humans, yet often misclassified by state-of-the-art
models.
Our construction of HellaSwag, and its resulting difficulty, sheds light on
the inner workings of deep pretrained models. More broadly, it suggests a new
path forward for NLP research, in which benchmarks co-evolve with the evolving
state-of-the-art in an adversarial way, so as to present ever-harder
challenges. | 2019-05-19T23:57:23Z | ACL 2019. Project page at https://rowanzellers.com/hellaswag | null | null | HellaSwag: Can a Machine Really Finish Your Sentence? | ['Rowan Zellers', 'Ari Holtzman', 'Yonatan Bisk', 'Ali Farhadi', 'Yejin Choi'] | 2,019 | Annual Meeting of the Association for Computational Linguistics | 2,538 | 22 | ['Computer Science'] |
1905.09263 | FastSpeech: Fast, Robust and Controllable Text to Speech | ['Yi Ren', 'Yangjun Ruan', 'Xu Tan', 'Tao Qin', 'Sheng Zhao', 'Zhou Zhao', 'Tie-Yan Liu'] | ['cs.CL', 'cs.LG', 'cs.SD', 'eess.AS'] | Neural network based end-to-end text to speech (TTS) has significantly
improved the quality of synthesized speech. Prominent methods (e.g., Tacotron
2) usually first generate mel-spectrogram from text, and then synthesize speech
from the mel-spectrogram using vocoder such as WaveNet. Compared with
traditional concatenative and statistical parametric approaches, neural network
based end-to-end models suffer from slow inference speed, and the synthesized
speech is usually not robust (i.e., some words are skipped or repeated) and
lack of controllability (voice speed or prosody control). In this work, we
propose a novel feed-forward network based on Transformer to generate
mel-spectrogram in parallel for TTS. Specifically, we extract attention
alignments from an encoder-decoder based teacher model for phoneme duration
prediction, which is used by a length regulator to expand the source phoneme
sequence to match the length of the target mel-spectrogram sequence for
parallel mel-spectrogram generation. Experiments on the LJSpeech dataset show
that our parallel model matches autoregressive models in terms of speech
quality, nearly eliminates the problem of word skipping and repeating in
particularly hard cases, and can adjust voice speed smoothly. Most importantly,
compared with autoregressive Transformer TTS, our model speeds up
mel-spectrogram generation by 270x and the end-to-end speech synthesis by 38x.
Therefore, we call our model FastSpeech. | 2019-05-22T17:50:21Z | Accepted by NeurIPS2019 | null | null | null | null | null | null | null | null | null |
1905.09381 | Learning to Prove Theorems via Interacting with Proof Assistants | ['Kaiyu Yang', 'Jia Deng'] | ['cs.LO', 'cs.AI', 'cs.LG', 'stat.ML'] | Humans prove theorems by relying on substantial high-level reasoning and
problem-specific insights. Proof assistants offer a formalism that resembles
human mathematical reasoning, representing theorems in higher-order logic and
proofs as high-level tactics. However, human experts have to construct proofs
manually by entering tactics into the proof assistant. In this paper, we study
the problem of using machine learning to automate the interaction with proof
assistants. We construct CoqGym, a large-scale dataset and learning environment
containing 71K human-written proofs from 123 projects developed with the Coq
proof assistant. We develop ASTactic, a deep learning-based model that
generates tactics as programs in the form of abstract syntax trees (ASTs).
Experiments show that ASTactic trained on CoqGym can generate effective tactics
and can be used to prove new theorems not previously provable by automated
methods. Code is available at https://github.com/princeton-vl/CoqGym. | 2019-05-21T17:56:02Z | Accepted to ICML 2019 | null | null | null | null | null | null | null | null | null |
1905.10044 | BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions | ['Christopher Clark', 'Kenton Lee', 'Ming-Wei Chang', 'Tom Kwiatkowski', 'Michael Collins', 'Kristina Toutanova'] | ['cs.CL'] | In this paper we study yes/no questions that are naturally occurring ---
meaning that they are generated in unprompted and unconstrained settings. We
build a reading comprehension dataset, BoolQ, of such questions, and show that
they are unexpectedly challenging. They often query for complex, non-factoid
information, and require difficult entailment-like inference to solve. We also
explore the effectiveness of a range of transfer learning baselines. We find
that transferring from entailment data is more effective than transferring from
paraphrase or extractive QA data, and that it, surprisingly, continues to be
very beneficial even when starting from massive pre-trained language models
such as BERT. Our best method trains BERT on MultiNLI and then re-trains it on
our train set. It achieves 80.4% accuracy compared to 90% accuracy of human
annotators (and 62% majority-baseline), leaving a significant gap for future
work. | 2019-05-24T05:48:49Z | In NAACL 2019 | null | null | BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions | ['Christopher Clark', 'Kenton Lee', 'Ming-Wei Chang', 'T. Kwiatkowski', 'Michael Collins', 'Kristina Toutanova'] | 2,019 | North American Chapter of the Association for Computational Linguistics | 1,565 | 50 | ['Computer Science'] |
1905.10892 | Extreme Multi-Label Legal Text Classification: A case study in EU
Legislation | ['Ilias Chalkidis', 'Manos Fergadiotis', 'Prodromos Malakasiotis', 'Nikolaos Aletras', 'Ion Androutsopoulos'] | ['cs.CL'] | We consider the task of Extreme Multi-Label Text Classification (XMTC) in the
legal domain. We release a new dataset of 57k legislative documents from
EURLEX, the European Union's public document database, annotated with concepts
from EUROVOC, a multidisciplinary thesaurus. The dataset is substantially
larger than previous EURLEX datasets and suitable for XMTC, few-shot and
zero-shot learning. Experimenting with several neural classifiers, we show that
BIGRUs with self-attention outperform the current multi-label state-of-the-art
methods, which employ label-wise attention. Replacing CNNs with BIGRUs in
label-wise attention networks leads to the best overall performance. | 2019-05-26T21:50:15Z | 10 pages, long paper at NLLP Workshop of NAACL-HLT 2019 | null | null | null | null | null | null | null | null | null |
1905.11901 | Revisiting Low-Resource Neural Machine Translation: A Case Study | ['Rico Sennrich', 'Biao Zhang'] | ['cs.CL'] | It has been shown that the performance of neural machine translation (NMT)
drops starkly in low-resource conditions, underperforming phrase-based
statistical machine translation (PBSMT) and requiring large amounts of
auxiliary data to achieve competitive results. In this paper, we re-assess the
validity of these results, arguing that they are the result of lack of system
adaptation to low-resource settings. We discuss some pitfalls to be aware of
when training low-resource NMT systems, and recent techniques that have shown
to be especially helpful in low-resource settings, resulting in a set of best
practices for low-resource NMT. In our experiments on German--English with
different amounts of IWSLT14 training data, we show that, without the use of
any auxiliary monolingual or multilingual data, an optimized NMT system can
outperform PBSMT with far less data than previously claimed. We also apply
these techniques to a low-resource Korean-English dataset, surpassing
previously reported results by 4 BLEU. | 2019-05-28T15:59:21Z | to appear at ACL 2019 | null | null | Revisiting Low-Resource Neural Machine Translation: A Case Study | ['Rico Sennrich', 'Biao Zhang'] | 2,019 | Annual Meeting of the Association for Computational Linguistics | 223 | 56 | ['Computer Science'] |
1905.11946 | EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks | ['Mingxing Tan', 'Quoc V. Le'] | ['cs.LG', 'cs.CV', 'stat.ML'] | Convolutional Neural Networks (ConvNets) are commonly developed at a fixed
resource budget, and then scaled up for better accuracy if more resources are
available. In this paper, we systematically study model scaling and identify
that carefully balancing network depth, width, and resolution can lead to
better performance. Based on this observation, we propose a new scaling method
that uniformly scales all dimensions of depth/width/resolution using a simple
yet highly effective compound coefficient. We demonstrate the effectiveness of
this method on scaling up MobileNets and ResNet.
To go even further, we use neural architecture search to design a new
baseline network and scale it up to obtain a family of models, called
EfficientNets, which achieve much better accuracy and efficiency than previous
ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3%
top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on
inference than the best existing ConvNet. Our EfficientNets also transfer well
and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%),
and 3 other transfer learning datasets, with an order of magnitude fewer
parameters. Source code is at
https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet. | 2019-05-28T17:05:32Z | ICML 2019 | International Conference on Machine Learning, 2019 | null | null | null | null | null | null | null | null |
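The EfficientNet record above scales depth, width, and input resolution jointly through a single compound coefficient. A minimal sketch of that rule follows, using the base coefficients reported in the paper (alpha=1.2, beta=1.1, gamma=1.15, found by grid search); the function name is an assumption.

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    """EfficientNet-style compound scaling. For a global coefficient phi:
        depth multiplier      = alpha ** phi
        width multiplier      = beta  ** phi
        resolution multiplier = gamma ** phi
    The base coefficients satisfy alpha * beta**2 * gamma**2 ~= 2, so total
    FLOPs roughly double each time phi increases by one.
    (Sketch -- the function name is an assumption; coefficients are the
    paper's reported values.)
    """
    return alpha ** phi, beta ** phi, gamma ** phi
```

For example, phi=0 recovers the searched baseline (EfficientNet-B0), and increasing phi produces the larger B1-B7 variants along the same balanced-scaling curve.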
1905.12516 | Racial Bias in Hate Speech and Abusive Language Detection Datasets | ['Thomas Davidson', 'Debasmita Bhattacharya', 'Ingmar Weber'] | ['cs.CL', 'cs.LG'] | Technologies for abusive language detection are being developed and applied
with little consideration of their potential biases. We examine racial bias in
five different sets of Twitter data annotated for hate speech and abusive
language. We train classifiers on these datasets and compare the predictions of
these classifiers on tweets written in African-American English with those
written in Standard American English. The results show evidence of systematic
racial bias in all datasets, as classifiers trained on them tend to predict
that tweets written in African-American English are abusive at substantially
higher rates. If these abusive language detection systems are used in the field
they will therefore have a disproportionate negative impact on African-American
social media users. Consequently, these systems may discriminate against the
groups who are often the targets of the abuse we are trying to detect. | 2019-05-29T15:12:58Z | To appear in the proceedings of the Third Abusive Language Workshop
(https://sites.google.com/view/alw3/) at the Annual Meeting for the
Association for Computational Linguistics 2019. Please cite the published
version | null | null | null | null | null | null | null | null | null |
1905.13648 | Scene Text Visual Question Answering | ['Ali Furkan Biten', 'Ruben Tito', 'Andres Mafla', 'Lluis Gomez', 'Marçal Rusiñol', 'Ernest Valveny', 'C. V. Jawahar', 'Dimosthenis Karatzas'] | ['cs.CV'] | Current visual question answering datasets do not consider the rich semantic
information conveyed by text within an image. In this work, we present a new
dataset, ST-VQA, that aims to highlight the importance of exploiting high-level
semantic information present in images as textual cues in the VQA process. We
use this dataset to define a series of tasks of increasing difficulty for which
reading the scene text in the context provided by the visual information is
necessary to reason and generate an appropriate answer. We propose a new
evaluation metric for these tasks to account both for reasoning errors as well
as shortcomings of the text recognition module. In addition we put forward a
series of baseline methods, which provide further insight to the newly released
dataset, and set the scene for further research. | 2019-05-31T14:47:55Z | International Conference on Computer Vision (ICCV 2019) | null | null | Scene Text Visual Question Answering | ['Ali Furkan Biten', 'Rubèn Pérez Tito', 'Andrés Mafla', 'Lluís Gómez', 'Marçal Rusiñol', 'Ernest Valveny', 'C. V. Jawahar', 'Dimosthenis Karatzas'] | 2,019 | IEEE International Conference on Computer Vision | 361 | 68 | ['Computer Science'] |
1906.01502 | How multilingual is Multilingual BERT? | ['Telmo Pires', 'Eva Schlinger', 'Dan Garrette'] | ['cs.CL', 'cs.AI', 'cs.LG'] | In this paper, we show that Multilingual BERT (M-BERT), released by Devlin et
al. (2018) as a single language model pre-trained from monolingual corpora in
104 languages, is surprisingly good at zero-shot cross-lingual model transfer,
in which task-specific annotations in one language are used to fine-tune the
model for evaluation in another language. To understand why, we present a large
number of probing experiments, showing that transfer is possible even to
languages in different scripts, that transfer works best between typologically
similar languages, that monolingual corpora can train models for
code-switching, and that the model can find translation pairs. From these
results, we can conclude that M-BERT does create multilingual representations,
but that these representations exhibit systematic deficiencies affecting
certain language pairs. | 2019-06-04T15:12:47Z | null | null | null | How Multilingual is Multilingual BERT? | ['Telmo Pires', 'Eva Schlinger', 'Dan Garrette'] | 2,019 | Annual Meeting of the Association for Computational Linguistics | 1,418 | 19 | ['Computer Science'] |
1906.01569 | Sequence Tagging with Contextual and Non-Contextual Subword
Representations: A Multilingual Evaluation | ['Benjamin Heinzerling', 'Michael Strube'] | ['cs.CL'] | Pretrained contextual and non-contextual subword embeddings have become
available in over 250 languages, allowing massively multilingual NLP. However,
while there is no dearth of pretrained embeddings, the distinct lack of
systematic evaluations makes it difficult for practitioners to choose between
them. In this work, we conduct an extensive evaluation comparing non-contextual
subword embeddings, namely FastText and BPEmb, and a contextual representation
method, namely BERT, on multilingual named entity recognition and
part-of-speech tagging. We find that overall, a combination of BERT, BPEmb, and
character representations works best across languages and tasks. A more
detailed analysis reveals different strengths and weaknesses: Multilingual BERT
performs well in medium- to high-resource languages, but is outperformed by
non-contextual subword embeddings in a low-resource setting. | 2019-06-04T16:36:53Z | ACL 2019 | null | null | Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation | ['Benjamin Heinzerling', 'M. Strube'] | 2,019 | Annual Meeting of the Association for Computational Linguistics | 36 | 45 | ['Computer Science'] |
1906.01591 | Pair State Transfer | ['Qiuting Chen', 'Chris Godsil'] | ['math.CO', 'math-ph', 'math.MP', 'quant-ph'] | Let $L$ denote the Laplacian matrix of a graph $G$. We study continuous
quantum walks on $G$ defined by the transition matrix
$U(t)=\exp\left(itL\right)$. The initial state is of the pair state form,
$e_a-e_b$ with $a,b$ being any two vertices of $G$. We provide two ways to
construct infinite families of graphs that have perfect pair transfer. We study
a "transitivity" phenomenon which cannot occur in vertex state transfer. We
characterize perfect pair state transfer on paths and cycles. We also study the
case when quantum walks are generated by the unsigned Laplacians of underlying
graphs and the initial states are of the plus state form, $e_a+e_b$. When the
underlying graphs are bipartite, plus state transfer is equivalent to pair
state transfer. | 2019-06-04T17:09:10Z | null | null | null | null | null | null | null | null | null | null |
1906.01749 | Multi-News: a Large-Scale Multi-Document Summarization Dataset and
Abstractive Hierarchical Model | ['Alexander R. Fabbri', 'Irene Li', 'Tianwei She', 'Suyi Li', 'Dragomir R. Radev'] | ['cs.CL'] | Automatic generation of summaries from multiple news articles is a valuable
tool as the number of online publications grows rapidly. Single document
summarization (SDS) systems have benefited from advances in neural
encoder-decoder models thanks to the availability of large datasets. However,
multi-document summarization (MDS) of news articles has been limited to
datasets of a couple of hundred examples. In this paper, we introduce
Multi-News, the first large-scale MDS news dataset. Additionally, we propose an
end-to-end model which incorporates a traditional extractive summarization
model with a standard SDS model and achieves competitive results on MDS
datasets. We benchmark several methods on Multi-News and release our data and
code in hope that this work will promote advances in summarization in the
multi-document setting. | 2019-06-04T23:00:43Z | ACL 2019, 57th Annual Meeting of the Association for Computational
Linguistics, Florence, Italy, 2019 | null | null | Multi-News: A Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model | ['Alexander R. Fabbri', 'Irene Li', 'Tianwei She', 'Suyi Li', 'Dragomir R. Radev'] | 2,019 | Annual Meeting of the Association for Computational Linguistics | 590 | 46 | ['Computer Science'] |
1906.02045 | The FRENK Datasets of Socially Unacceptable Discourse in Slovene and
English | ['Nikola Ljubešić', 'Darja Fišer', 'Tomaž Erjavec'] | ['cs.CL'] | In this paper we present datasets of Facebook comment threads to mainstream
media posts in Slovene and English developed inside the Slovene national
project FRENK which cover two topics, migrants and LGBT, and are manually
annotated for different types of socially unacceptable discourse (SUD). The
main advantages of these datasets compared to the existing ones are identical
sampling procedures, producing comparable data across languages, and an
annotation schema that takes into account six types of SUD and five targets at
which SUD is directed. We describe the sampling and annotation procedures, and
analyze the annotation distributions and inter-annotator agreements. We
consider this dataset to be an important milestone in understanding and
combating SUD for both languages. | 2019-06-05T14:23:01Z | null | null | null | null | null | null | null | null | null | null |
1906.02192 | Large-Scale Multi-Label Text Classification on EU Legislation | ['Ilias Chalkidis', 'Manos Fergadiotis', 'Prodromos Malakasiotis', 'Ion Androutsopoulos'] | ['cs.CL'] | We consider Large-Scale Multi-Label Text Classification (LMTC) in the legal
domain. We release a new dataset of 57k legislative documents from EURLEX,
annotated with ~4.3k EUROVOC labels, which is suitable for LMTC, few- and
zero-shot learning. Experimenting with several neural classifiers, we show that
BIGRUs with label-wise attention perform better than other current
state-of-the-art methods. Domain-specific WORD2VEC and context-sensitive ELMO embeddings
further improve performance. We also find that considering only particular
zones of the documents is sufficient. This allows us to bypass BERT's maximum
text length limit and fine-tune BERT, obtaining the best results in all but
zero-shot learning cases. | 2019-06-05T14:41:01Z | 9 pages, short paper at ACL 2019. arXiv admin note: text overlap with
arXiv:1905.10892 | null | null | null | null | null | null | null | null | null |
1906.02243 | Energy and Policy Considerations for Deep Learning in NLP | ['Emma Strubell', 'Ananya Ganesh', 'Andrew McCallum'] | ['cs.CL'] | Recent progress in hardware and methodology for training neural networks has
ushered in a new generation of large networks trained on abundant data. These
models have obtained notable gains in accuracy across many NLP tasks. However,
these accuracy improvements depend on the availability of exceptionally large
computational resources that necessitate similarly substantial energy
consumption. As a result these models are costly to train and develop, both
financially, due to the cost of hardware and electricity or cloud compute time,
and environmentally, due to the carbon footprint required to fuel modern tensor
processing hardware. In this paper we bring this issue to the attention of NLP
researchers by quantifying the approximate financial and environmental costs of
training a variety of recently successful neural network models for NLP. Based
on these findings, we propose actionable recommendations to reduce costs and
improve equity in NLP research and practice. | 2019-06-05T18:40:53Z | In the 57th Annual Meeting of the Association for Computational
Linguistics (ACL). Florence, Italy. July 2019 | null | null | null | null | null | null | null | null | null |
1906.02467 | ActivityNet-QA: A Dataset for Understanding Complex Web Videos via
Question Answering | ['Zhou Yu', 'Dejing Xu', 'Jun Yu', 'Ting Yu', 'Zhou Zhao', 'Yueting Zhuang', 'Dacheng Tao'] | ['cs.CV'] | Recent developments in modeling language and vision have been successfully
applied to image question answering. It is both crucial and natural to extend
this research direction to the video domain for video question answering
(VideoQA). Compared to the image domain, where large-scale and fully annotated
benchmark datasets exist, VideoQA datasets tend to be small in scale and
automatically generated. These limitations restrict their applicability in
practice. Here we introduce ActivityNet-QA, a fully annotated and large-scale
VideoQA dataset. The dataset consists of 58,000 QA pairs on 5,800 complex web
videos derived from the popular ActivityNet dataset. We present a statistical
analysis of our ActivityNet-QA dataset and conduct extensive experiments on it
by comparing existing VideoQA baselines. Moreover, we explore various video
representation strategies to improve VideoQA performance, especially for long
videos. The dataset is available at https://github.com/MILVLG/activitynet-qa | 2019-06-06T08:08:14Z | Accepted at AAAI 2019 | null | null | ActivityNet-QA: A Dataset for Understanding Complex Web Videos via Question Answering | ['Zhou Yu', 'D. Xu', 'Jun Yu', 'Ting Yu', 'Zhou Zhao', 'Yueting Zhuang', 'D. Tao'] | 2,019 | AAAI Conference on Artificial Intelligence | 478 | 43 | ['Computer Science'] |
1906.02569 | Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild | ['Abubakar Abid', 'Ali Abdalla', 'Ali Abid', 'Dawood Khan', 'Abdulrahman Alfozan', 'James Zou'] | ['cs.LG', 'cs.HC', 'stat.ML'] | Accessibility is a major challenge of machine learning (ML). Typical ML
models are built by specialists and require specialized hardware/software as
well as ML experience to validate. This makes it challenging for non-technical
collaborators and endpoint users (e.g. physicians) to easily provide feedback
on model development and to gain trust in ML. The accessibility challenge also
makes collaboration more difficult and limits the ML researcher's exposure to
realistic data and scenarios that occur in the wild. To improve accessibility
and facilitate collaboration, we developed an open-source Python package,
Gradio, which allows researchers to rapidly generate a visual interface for
their ML models. Gradio makes accessing any ML model as easy as sharing a URL.
Our development of Gradio is informed by interviews with a number of machine
learning researchers who participate in interdisciplinary collaborations. Their
feedback identified that Gradio should support a variety of interfaces and
frameworks, allow for easy sharing of the interface, allow for input
manipulation and interactive inference by the domain expert, as well as allow
embedding the interface in iPython notebooks. We developed these features and
carried out a case study to understand Gradio's usefulness and usability in the
setting of a machine learning collaboration between a researcher and a
cardiologist. | 2019-06-06T13:18:47Z | Presented at 2019 ICML Workshop on Human in the Loop Learning (HILL
2019), Long Beach, USA | null | null | Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild | ['Abubakar Abid', 'Ali Abdalla', 'Ali Abid', 'Dawood Khan', 'Abdulrahman Alfozan', 'James Y. Zou'] | 2,019 | arXiv.org | 213 | 10 | ['Computer Science', 'Mathematics'] |
1906.02659 | Does Object Recognition Work for Everyone? | ['Terrance DeVries', 'Ishan Misra', 'Changhan Wang', 'Laurens van der Maaten'] | ['cs.CV', 'cs.LG'] | The paper analyzes the accuracy of publicly available object-recognition
systems on a geographically diverse dataset. This dataset contains household
items and was designed to have a more representative geographical coverage than
commonly used image datasets in object recognition. We find that the systems
perform relatively poorly on household items that commonly occur in countries
with a low household income. Qualitative analyses suggest the drop in
performance is primarily due to appearance differences within an object class
(e.g., dish soap) and due to items appearing in a different context (e.g.,
toothbrushes appearing outside of bathrooms). The results of our study suggest
that further work is needed to make object-recognition systems work equally
well for people across different countries and income levels. | 2019-06-06T16:00:18Z | null | null | null | Does Object Recognition Work for Everyone? | ['Terrance Devries', 'Ishan Misra', 'Changhan Wang', 'L. Maaten'] | 2,019 | CVPR Workshops | 265 | 43 | ['Computer Science'] |
1906.02762 | Understanding and Improving Transformer From a Multi-Particle Dynamic
System Point of View | ['Yiping Lu', 'Zhuohan Li', 'Di He', 'Zhiqing Sun', 'Bin Dong', 'Tao Qin', 'Liwei Wang', 'Tie-Yan Liu'] | ['cs.LG', 'cs.CL', 'stat.ML'] | The Transformer architecture is widely used in natural language processing.
Despite its success, the design principle of the Transformer remains elusive.
In this paper, we provide a novel perspective towards understanding the
architecture: we show that the Transformer can be mathematically interpreted as
a numerical Ordinary Differential Equation (ODE) solver for a
convection-diffusion equation in a multi-particle dynamic system. In
particular, how words in a sentence are abstracted into contexts by passing
through the layers of the Transformer can be interpreted as approximating
multiple particles' movement in the space using the Lie-Trotter splitting
scheme and Euler's method. Given this ODE perspective, the rich
literature of numerical analysis can be brought to guide us in designing
effective structures beyond the Transformer. As an example, we propose to
replace the Lie-Trotter splitting scheme by the Strang-Marchuk splitting
scheme, a scheme that is more commonly used and with much lower local
truncation errors. The Strang-Marchuk splitting scheme suggests that the
self-attention and position-wise feed-forward network (FFN) sub-layers should
not be treated equally. Instead, in each layer, two position-wise FFN
sub-layers should be used, and the self-attention sub-layer is placed in
between. This leads to a brand new architecture. Such an FFN-attention-FFN
layer is "Macaron-like", and thus we call the network with this new
architecture the Macaron Net. Through extensive experiments, we show that the
Macaron Net is superior to the Transformer on both supervised and unsupervised
learning tasks. The reproducible codes and pretrained models can be found at
https://github.com/zhuohan123/macaron-net | 2019-06-06T18:10:08Z | null | null | null | null | null | null | null | null | null | null |
1906.03402 | Effective Use of Variational Embedding Capacity in Expressive End-to-End
Speech Synthesis | ['Eric Battenberg', 'Soroosh Mariooryad', 'Daisy Stanton', 'RJ Skerry-Ryan', 'Matt Shannon', 'David Kao', 'Tom Bagby'] | ['cs.CL', 'cs.LG', 'cs.SD', 'eess.AS'] | Recent work has explored sequence-to-sequence latent variable models for
expressive speech synthesis (supporting control and transfer of prosody and
style), but has not presented a coherent framework for understanding the
trade-offs between the competing methods. In this paper, we propose embedding
capacity (the amount of information the embedding contains about the data) as a
unified method of analyzing the behavior of latent variable models of speech,
comparing existing heuristic (non-variational) methods to variational methods
that are able to explicitly constrain capacity using an upper bound on
representational mutual information. In our proposed model (Capacitron), we
show that by adding conditional dependencies to the variational posterior such
that it matches the form of the true posterior, the same model can be used for
high-precision prosody transfer, text-agnostic style transfer, and generation
of natural-sounding prior samples. For multi-speaker models, Capacitron is able
to preserve target speaker identity during inter-speaker prosody transfer and
when drawing samples from the latent prior. Lastly, we introduce a method for
decomposing embedding capacity hierarchically across two sets of latents,
allowing a portion of the latent variability to be specified and the remaining
variability sampled from a learned prior. Audio examples are available on the
web. | 2019-06-08T06:59:56Z | Submitted to ICLR 2020 | null | null | null | null | null | null | null | null | null |