| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2005.05957 | Flowtron: an Autoregressive Flow-based Generative Network for Text-to-Speech Synthesis | ['Rafael Valle', 'Kevin Shih', 'Ryan Prenger', 'Bryan Catanzaro'] | ['cs.SD', 'cs.CL', 'cs.LG', 'eess.AS'] | In this paper we propose Flowtron: an autoregressive flow-based generative network for text-to-speech synthesis with control over speech variation and style transfer. Flowtron borrows insights from IAF and revamps Tacotron in order to provide high-quality and expressive mel-spectrogram synthesis. Flowtron is optimized by maximizing the likelihood of the training data, which makes training simple and stable. Flowtron learns an invertible mapping of data to a latent space that can be manipulated to control many aspects of speech synthesis (pitch, tone, speech rate, cadence, accent). Our mean opinion scores (MOS) show that Flowtron matches state-of-the-art TTS models in terms of speech quality. In addition, we provide results on control of speech variation, interpolation between samples and style transfer between speakers seen and unseen during training. Code and pre-trained models will be made publicly available at https://github.com/NVIDIA/flowtron | 2020-05-12T17:57:17Z | 10 pages, 7 pictures | null | null | null | null | null | null | null | null | null |
| 2005.06149 | DeepRobust: A PyTorch Library for Adversarial Attacks and Defenses | ['Yaxin Li', 'Wei Jin', 'Han Xu', 'Jiliang Tang'] | ['cs.LG', 'cs.CR', 'stat.ML'] | DeepRobust is a PyTorch adversarial learning library which aims to build a comprehensive and easy-to-use platform to foster this research field. It currently contains more than 10 attack algorithms and 8 defense algorithms in image domain and 9 attack algorithms and 4 defense algorithms in graph domain, under a variety of deep learning architectures. In this manual, we introduce the main contents of DeepRobust with detailed instructions. The library is kept updated and can be found at https://github.com/DSE-MSU/DeepRobust. | 2020-05-13T04:43:46Z | Adversarial attacks and defenses, Pytorch library | null | null | null | null | null | null | null | null | null |
| 2005.07143 | ECAPA-TDNN: Emphasized Channel Attention, Propagation and Aggregation in TDNN Based Speaker Verification | ['Brecht Desplanques', 'Jenthe Thienpondt', 'Kris Demuynck'] | ['eess.AS', 'cs.SD'] | Current speaker verification techniques rely on a neural network to extract speaker representations. The successful x-vector architecture is a Time Delay Neural Network (TDNN) that applies statistics pooling to project variable-length utterances into fixed-length speaker characterizing embeddings. In this paper, we propose multiple enhancements to this architecture based on recent trends in the related fields of face verification and computer vision. Firstly, the initial frame layers can be restructured into 1-dimensional Res2Net modules with impactful skip connections. Similarly to SE-ResNet, we introduce Squeeze-and-Excitation blocks in these modules to explicitly model channel interdependencies. The SE block expands the temporal context of the frame layer by rescaling the channels according to global properties of the recording. Secondly, neural networks are known to learn hierarchical features, with each layer operating on a different level of complexity. To leverage this complementary information, we aggregate and propagate features of different hierarchical levels. Finally, we improve the statistics pooling module with channel-dependent frame attention. This enables the network to focus on different subsets of frames during each of the channel's statistics estimation. The proposed ECAPA-TDNN architecture significantly outperforms state-of-the-art TDNN based systems on the VoxCeleb test sets and the 2019 VoxCeleb Speaker Recognition Challenge. | 2020-05-14T17:02:15Z | proceedings of INTERSPEECH 2020 | null | 10.21437/Interspeech.2020-2650 | ECAPA-TDNN: Emphasized Channel Attention, Propagation and Aggregation in TDNN Based Speaker Verification | ['Brecht Desplanques', 'Jenthe Thienpondt', 'Kris Demuynck'] | 2020 | Interspeech | 1350 | 31 | ['Computer Science', 'Engineering'] |
| 2005.07202 | Pre-training technique to localize medical BERT and enhance biomedical BERT | ['Shoya Wada', 'Toshihiro Takeda', 'Shiro Manabe', 'Shozo Konishi', 'Jun Kamohara', 'Yasushi Matsumura'] | ['cs.CL'] | Pre-training large-scale neural language models on raw texts has made a significant contribution to improving transfer learning in natural language processing (NLP). With the introduction of transformer-based language models, such as bidirectional encoder representations from transformers (BERT), the performance of information extraction from a free text by NLP has significantly improved for both the general domain and medical domain; however, it is difficult to train specific BERT models that perform well for domains in which there are few publicly available databases of high quality and large size. We hypothesized that this problem can be addressed by up-sampling a domain-specific corpus and using it for pre-training with a larger corpus in a balanced manner. Our proposed method consists of a single intervention with one option: simultaneous pre-training after up-sampling and amplified vocabulary. We conducted three experiments and evaluated the resulting products. We confirmed that our Japanese medical BERT outperformed conventional baselines and the other BERT models in terms of the medical document classification task and that our English BERT pre-trained using both the general and medical-domain corpora performed sufficiently well for practical use in terms of the biomedical language understanding evaluation (BLUE) benchmark. Moreover, our enhanced biomedical BERT model, in which clinical notes were not used during pre-training, showed that both the clinical and biomedical scores of the BLUE benchmark were 0.3 points above that of the ablation model trained without our proposed method. Well-balanced pre-training by up-sampling instances derived from a corpus appropriate for the target task allows us to construct a high-performance BERT model. | 2020-05-14T18:00:01Z | We made the pre-trained weights of ouBioBERT and the source code for fine-tuning freely available at https://github.com/sy-wada/blue_benchmark_with_transformers | null | 10.1016/j.artmed.2024.102889 | Oversampling effect in pretraining for bidirectional encoder representations from transformers (BERT) to localize medical BERT and enhance biomedical BERT | ['Shoya Wada', 'Toshihiro Takeda', 'Katsuki Okada', 'S. Manabe', 'Shozo Konishi', 'Jun Kamohara', 'Y. Matsumura'] | 2020 | Artif. Intell. Medicine | 12 | 38 | ['Medicine', 'Computer Science'] |
| 2005.07421 | Spelling Error Correction with Soft-Masked BERT | ['Shaohua Zhang', 'Haoran Huang', 'Jicong Liu', 'Hang Li'] | ['cs.CL', 'cs.LG'] | Spelling error correction is an important yet challenging task because a satisfactory solution of it essentially needs human-level language understanding ability. Without loss of generality we consider Chinese spelling error correction (CSC) in this paper. A state-of-the-art method for the task selects a character from a list of candidates for correction (including non-correction) at each position of the sentence on the basis of BERT, the language representation model. The accuracy of the method can be sub-optimal, however, because BERT does not have sufficient capability to detect whether there is an error at each position, apparently due to the way of pre-training it using mask language modeling. In this work, we propose a novel neural architecture to address the aforementioned issue, which consists of a network for error detection and a network for error correction based on BERT, with the former being connected to the latter with what we call soft-masking technique. Our method of using `Soft-Masked BERT' is general, and it may be employed in other language detection-correction problems. Experimental results on two datasets demonstrate that the performance of our proposed method is significantly better than the baselines including the one solely based on BERT. | 2020-05-15T09:02:38Z | To be published at ACL 2020 | null | null | Spelling Error Correction with Soft-Masked BERT | ['Shaohua Zhang', 'Haoran Huang', 'Jicong Liu', 'Hang Li'] | 2020 | Annual Meeting of the Association for Computational Linguistics | 214 | 17 | ['Computer Science'] |
| 2005.07503 | COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter | ['Martin Müller', 'Marcel Salathé', 'Per E Kummervold'] | ['cs.CL', 'cs.LG', 'cs.SI'] | In this work, we release COVID-Twitter-BERT (CT-BERT), a transformer-based model, pretrained on a large corpus of Twitter messages on the topic of COVID-19. Our model shows a 10-30% marginal improvement compared to its base model, BERT-Large, on five different classification datasets. The largest improvements are on the target domain. Pretrained transformer models, such as CT-BERT, are trained on a specific target domain and can be used for a wide variety of natural language processing tasks, including classification, question-answering and chatbots. CT-BERT is optimised to be used on COVID-19 content, in particular social media posts from Twitter. | 2020-05-15T12:40:46Z | null | null | null | null | null | null | null | null | null | null |
| 2005.07683 | Movement Pruning: Adaptive Sparsity by Fine-Tuning | ['Victor Sanh', 'Thomas Wolf', 'Alexander M. Rush'] | ['cs.CL', 'cs.LG'] | Magnitude pruning is a widely used strategy for reducing model size in pure supervised learning; however, it is less effective in the transfer learning regime that has become standard for state-of-the-art natural language processing applications. We propose the use of movement pruning, a simple, deterministic first-order weight pruning method that is more adaptive to pretrained model fine-tuning. We give mathematical foundations to the method and compare it to existing zeroth- and first-order pruning methods. Experiments show that when pruning large pretrained language models, movement pruning shows significant improvements in high-sparsity regimes. When combined with distillation, the approach achieves minimal accuracy loss with down to only 3% of the model parameters. | 2020-05-15T17:54:15Z | 14 pages, 6 figures, 3 tables. Published at NeurIPS2020. Code: \url{huggingface.co/mvp} | null | null | null | null | null | null | null | null | null |
| 2005.08072 | Speech Recognition and Multi-Speaker Diarization of Long Conversations | ['Huanru Henry Mao', 'Shuyang Li', 'Julian McAuley', 'Garrison Cottrell'] | ['eess.AS', 'cs.LG', 'cs.SD'] | Speech recognition (ASR) and speaker diarization (SD) models have traditionally been trained separately to produce rich conversation transcripts with speaker labels. Recent advances have shown that joint ASR and SD models can learn to leverage audio-lexical inter-dependencies to improve word diarization performance. We introduce a new benchmark of hour-long podcasts collected from the weekly This American Life radio program to better compare these approaches when applied to extended multi-speaker conversations. We find that training separate ASR and SD models perform better when utterance boundaries are known but otherwise joint models can perform better. To handle long conversations with unknown utterance boundaries, we introduce a striding attention decoding algorithm and data augmentation techniques which, combined with model pre-training, improves ASR and SD. | 2020-05-16T19:29:33Z | null | null | null | null | null | null | null | null | null | null |
| 2005.08100 | Conformer: Convolution-augmented Transformer for Speech Recognition | ['Anmol Gulati', 'James Qin', 'Chung-Cheng Chiu', 'Niki Parmar', 'Yu Zhang', 'Jiahui Yu', 'Wei Han', 'Shibo Wang', 'Zhengdong Zhang', 'Yonghui Wu', 'Ruoming Pang'] | ['eess.AS', 'cs.LG', 'cs.SD'] | Recently Transformer and Convolution neural network (CNN) based models have shown promising results in Automatic Speech Recognition (ASR), outperforming Recurrent neural networks (RNNs). Transformer models are good at capturing content-based global interactions, while CNNs exploit local features effectively. In this work, we achieve the best of both worlds by studying how to combine convolution neural networks and transformers to model both local and global dependencies of an audio sequence in a parameter-efficient way. To this regard, we propose the convolution-augmented transformer for speech recognition, named Conformer. Conformer significantly outperforms the previous Transformer and CNN based models achieving state-of-the-art accuracies. On the widely used LibriSpeech benchmark, our model achieves WER of 2.1%/4.3% without using a language model and 1.9%/3.9% with an external language model on test/testother. We also observe competitive performance of 2.7%/6.3% with a small model of only 10M parameters. | 2020-05-16T20:56:25Z | Submitted to Interspeech 2020 | null | null | null | null | null | null | null | null | null |
| 2005.09007 | U$^2$-Net: Going Deeper with Nested U-Structure for Salient Object Detection | ['Xuebin Qin', 'Zichen Zhang', 'Chenyang Huang', 'Masood Dehghan', 'Osmar R. Zaiane', 'Martin Jagersand'] | ['cs.CV'] | In this paper, we design a simple yet powerful deep network architecture, U$^2$-Net, for salient object detection (SOD). The architecture of our U$^2$-Net is a two-level nested U-structure. The design has the following advantages: (1) it is able to capture more contextual information from different scales thanks to the mixture of receptive fields of different sizes in our proposed ReSidual U-blocks (RSU), (2) it increases the depth of the whole architecture without significantly increasing the computational cost because of the pooling operations used in these RSU blocks. This architecture enables us to train a deep network from scratch without using backbones from image classification tasks. We instantiate two models of the proposed architecture, U$^2$-Net (176.3 MB, 30 FPS on GTX 1080Ti GPU) and U$^2$-Net$^{\dagger}$ (4.7 MB, 40 FPS), to facilitate the usage in different environments. Both models achieve competitive performance on six SOD datasets. The code is available: https://github.com/NathanUA/U-2-Net. | 2020-05-18T18:08:26Z | Accepted in Pattern Recognition 2020 | null | 10.1016/j.patcog.2020.107404 | U2-Net: Going Deeper with Nested U-Structure for Salient Object Detection | ['Xuebin Qin', 'Zichen Zhang', 'Chenyang Huang', 'Masood Dehghan', 'Osmar R Zaiane', 'Martin Jägersand'] | 2020 | Pattern Recognition | 1683 | 59 | ['Computer Science'] |
| 2005.11129 | Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search | ['Jaehyeon Kim', 'Sungwon Kim', 'Jungil Kong', 'Sungroh Yoon'] | ['eess.AS', 'cs.SD'] | Recently, text-to-speech (TTS) models such as FastSpeech and ParaNet have been proposed to generate mel-spectrograms from text in parallel. Despite the advantage, the parallel TTS models cannot be trained without guidance from autoregressive TTS models as their external aligners. In this work, we propose Glow-TTS, a flow-based generative model for parallel TTS that does not require any external aligner. By combining the properties of flows and dynamic programming, the proposed model searches for the most probable monotonic alignment between text and the latent representation of speech on its own. We demonstrate that enforcing hard monotonic alignments enables robust TTS, which generalizes to long utterances, and employing generative flows enables fast, diverse, and controllable speech synthesis. Glow-TTS obtains an order-of-magnitude speed-up over the autoregressive model, Tacotron 2, at synthesis with comparable speech quality. We further show that our model can be easily extended to a multi-speaker setting. | 2020-05-22T12:06:46Z | Accepted by NeurIPS2020 | null | null | null | null | null | null | null | null | null |
| 2005.11401 | Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks | ['Patrick Lewis', 'Ethan Perez', 'Aleksandra Piktus', 'Fabio Petroni', 'Vladimir Karpukhin', 'Naman Goyal', 'Heinrich Küttler', 'Mike Lewis', 'Wen-tau Yih', 'Tim Rocktäschel', 'Sebastian Riedel', 'Douwe Kiela'] | ['cs.CL', 'cs.LG'] | Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit non-parametric memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) -- models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline. | 2020-05-22T21:34:34Z | Accepted at NeurIPS 2020 | null | null | Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks | ['Patrick Lewis', 'Ethan Perez', 'Aleksandara Piktus', 'F. Petroni', 'Vladimir Karpukhin', 'Naman Goyal', 'Heinrich Kuttler', 'M. Lewis', 'Wen-tau Yih', 'Tim Rocktäschel', 'Sebastian Riedel', 'Douwe Kiela'] | 2020 | Neural Information Processing Systems | 6575 | 67 | ['Computer Science'] |
| 2005.11723 | Query Resolution for Conversational Search with Limited Supervision | ['Nikos Voskarides', 'Dan Li', 'Pengjie Ren', 'Evangelos Kanoulas', 'Maarten de Rijke'] | ['cs.IR', 'cs.CL'] | In this work we focus on multi-turn passage retrieval as a crucial component of conversational search. One of the key challenges in multi-turn passage retrieval comes from the fact that the current turn query is often underspecified due to zero anaphora, topic change, or topic return. Context from the conversational history can be used to arrive at a better expression of the current turn query, defined as the task of query resolution. In this paper, we model the query resolution task as a binary term classification problem: for each term appearing in the previous turns of the conversation decide whether to add it to the current turn query or not. We propose QuReTeC (Query Resolution by Term Classification), a neural query resolution model based on bidirectional transformers. We propose a distant supervision method to automatically generate training data by using query-passage relevance labels. Such labels are often readily available in a collection either as human annotations or inferred from user interactions. We show that QuReTeC outperforms state-of-the-art models, and furthermore, that our distant supervision method can be used to substantially reduce the amount of human-curated data required to train QuReTeC. We incorporate QuReTeC in a multi-turn, multi-stage passage retrieval architecture and demonstrate its effectiveness on the TREC CAsT dataset. | 2020-05-24T11:37:22Z | SIGIR 2020 full conference paper | null | 10.1145/3397271.3401130 | null | null | null | null | null | null | null |
| 2005.12320 | SCAN: Learning to Classify Images without Labels | ['Wouter Van Gansbeke', 'Simon Vandenhende', 'Stamatios Georgoulis', 'Marc Proesmans', 'Luc Van Gool'] | ['cs.CV', 'cs.LG'] | Can we automatically group images into semantically meaningful clusters when ground-truth annotations are absent? The task of unsupervised image classification remains an important, and open challenge in computer vision. Several recent approaches have tried to tackle this problem in an end-to-end fashion. In this paper, we deviate from recent works, and advocate a two-step approach where feature learning and clustering are decoupled. First, a self-supervised task from representation learning is employed to obtain semantically meaningful features. Second, we use the obtained features as a prior in a learnable clustering approach. In doing so, we remove the ability for cluster learning to depend on low-level features, which is present in current end-to-end learning approaches. Experimental evaluation shows that we outperform state-of-the-art methods by large margins, in particular +26.6% on CIFAR10, +25.0% on CIFAR100-20 and +21.3% on STL10 in terms of classification accuracy. Furthermore, our method is the first to perform well on a large-scale dataset for image classification. In particular, we obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime without the use of any ground-truth annotations. The code is made publicly available at https://github.com/wvangansbeke/Unsupervised-Classification. | 2020-05-25T18:12:33Z | Accepted at ECCV 2020. Includes supplementary. Code and pretrained models at https://github.com/wvangansbeke/Unsupervised-Classification | null | null | null | null | null | null | null | null | null |
| 2005.12515 | ParsBERT: Transformer-based Model for Persian Language Understanding | ['Mehrdad Farahani', 'Mohammad Gharachorloo', 'Marzieh Farahani', 'Mohammad Manthouri'] | ['cs.CL'] | The surge of pre-trained language models has begun a new era in the field of Natural Language Processing (NLP) by allowing us to build powerful language models. Among these models, Transformer-based models such as BERT have become increasingly popular due to their state-of-the-art performance. However, these models are usually focused on English, leaving other languages to multilingual models with limited resources. This paper proposes a monolingual BERT for the Persian language (ParsBERT), which shows its state-of-the-art performance compared to other architectures and multilingual models. Also, since the amount of data available for NLP tasks in Persian is very restricted, a massive dataset for different NLP tasks as well as pre-training the model is composed. ParsBERT obtains higher scores in all datasets, including existing ones as well as composed ones and improves the state-of-the-art performance by outperforming both multilingual BERT and other prior works in Sentiment Analysis, Text Classification and Named Entity Recognition tasks. | 2020-05-26T05:05:32Z | 10 pages, 5 figures, 7 tables, table 7 corrected and some refs related to table 7 | null | 10.1007/s11063-021-10528-4 | null | null | null | null | null | null | null |
| 2005.12872 | End-to-End Object Detection with Transformers | ['Nicolas Carion', 'Francisco Massa', 'Gabriel Synnaeve', 'Nicolas Usunier', 'Alexander Kirillov', 'Sergey Zagoruyko'] | ['cs.CV'] | We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. The new model is conceptually simple and does not require a specialized library, unlike many other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive baselines. Training code and pretrained models are available at https://github.com/facebookresearch/detr. | 2020-05-26T17:06:38Z | null | null | null | End-to-End Object Detection with Transformers | ['Nicolas Carion', 'Francisco Massa', 'Gabriel Synnaeve', 'Nicolas Usunier', 'Alexander Kirillov', 'Sergey Zagoruyko'] | 2020 | European Conference on Computer Vision | 13239 | 53 | ['Computer Science'] |
| 2005.14147 | IMDb data from Two Generations, from 1979 to 2019; Part one, Dataset Introduction and Preliminary Analysis | ['M. Bahraminasr', 'A. Vafaei Sadr'] | ['cs.CY'] | "IMDb" as a user-regulating and one the most-visited portal has provided an opportunity to create an enormous database. Analysis of the information on Internet Movie Database - IMDb, either those related to the movie or provided by users would help to reveal the determinative factors in the route of success for each movie. As the lack of a comprehensive dataset was felt, we determined to do create a compendious dataset for the later analysis using the statistical methods and machine learning models; It comprises of various information provided on IMDb such as rating data, genre, cast and crew, MPAA rating certificate, parental guide details, related movie information, posters, etc, for over 79k titles which is the largest dataset by this date. The present paper is the first paper in a series of papers aiming at the mentioned goals, by a description of the created dataset and a preliminary analysis including some trend in data, demographic analysis of IMDb scores and their relation of genre MPAA rating certificate has been investigated. | 2020-05-28T17:01:06Z | 12 pages, 9 figures | null | null | null | null | null | null | null | null | null |
| 2005.14165 | Language Models are Few-Shot Learners | ['Tom B. Brown', 'Benjamin Mann', 'Nick Ryder', 'Melanie Subbiah', 'Jared Kaplan', 'Prafulla Dhariwal', 'Arvind Neelakantan', 'Pranav Shyam', 'Girish Sastry', 'Amanda Askell', 'Sandhini Agarwal', 'Ariel Herbert-Voss', 'Gretchen Krueger', 'Tom Henighan', 'Rewon Child', 'Aditya Ramesh', 'Daniel M. Ziegler', 'Jeffrey Wu', 'Clemens Winter', 'Christopher Hesse', 'Mark Chen', 'Eric Sigler', 'Mateusz Litwin', 'Scott Gray', 'Benjamin Chess', 'Jack Clark', 'Christopher Berner', 'Sam McCandlish', 'Alec Radford', 'Ilya Sutskever', 'Dario Amodei'] | ['cs.CL'] | Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general. | 2020-05-28T17:29:03Z | 40+32 pages | null | null | null | null | null | null | null | null | null |
| 2005.14511 | NuClick: A Deep Learning Framework for Interactive Segmentation of Microscopy Images | ['Navid Alemi Koohbanani', 'Mostafa Jahanifar', 'Neda Zamani Tajadin', 'Nasir Rajpoot'] | ['cs.CV', 'stat.AP'] | Object segmentation is an important step in the workflow of computational pathology. Deep learning based models generally require large amount of labeled data for precise and reliable prediction. However, collecting labeled data is expensive because it often requires expert knowledge, particularly in medical imaging domain where labels are the result of a time-consuming analysis made by one or more human experts. As nuclei, cells and glands are fundamental objects for downstream analysis in computational pathology/cytology, in this paper we propose a simple CNN-based approach to speed up collecting annotations for these objects which requires minimum interaction from the annotator. We show that for nuclei and cells in histology and cytology images, one click inside each object is enough for NuClick to yield a precise annotation. For multicellular structures such as glands, we propose a novel approach to provide the NuClick with a squiggle as a guiding signal, enabling it to segment the glandular boundaries. These supervisory signals are fed to the network as auxiliary inputs along with RGB channels. With detailed experiments, we show that NuClick is adaptable to the object scale, robust against variations in the user input, adaptable to new domains, and delivers reliable annotations. An instance segmentation model trained on masks generated by NuClick achieved the first rank in LYON19 challenge. As exemplar outputs of our framework, we are releasing two datasets: 1) a dataset of lymphocyte annotations within IHC images, and 2) a dataset of segmented WBCs in blood smear images. | 2020-05-29T11:51:27Z | null | null | null | NuClick: A Deep Learning Framework for Interactive Segmentation of Microscopy Images | ['Navid Alemi Koohbanani', 'M. Jahanifar', 'Neda Zamani Tajadin', 'N. Rajpoot'] | 2020 | Medical Image Anal. | 125 | 97 | ['Computer Science', 'Medicine', 'Mathematics'] |
2,006.00885 | CoAID: COVID-19 Healthcare Misinformation Dataset | ['Limeng Cui', 'Dongwon Lee'] | ['cs.SI', 'cs.CL'] | As the COVID-19 virus quickly spreads around the world, unfortunately,
misinformation related to COVID-19 also gets created and spreads like wild
fire. Such misinformation has caused confusion among people, disruptions in
society, and even deadly consequences in health problems. Being able to
understand, detect, and mitigate such COVID-19 misinformation therefore has
not only deep intellectual value but also huge societal impact. To help
researchers combat COVID-19 health misinformation, we present CoAID
(Covid-19 heAlthcare mIsinformation Dataset), with diverse COVID-19 healthcare
misinformation, including fake news on websites and social platforms, along
with users' social engagement about such news. CoAID includes 4,251 news,
296,000 related user engagements, 926 social platform posts about COVID-19, and
ground truth labels. The dataset is available at:
https://github.com/cuilimeng/CoAID. | 2020-05-22T19:08:14Z | null | null | null | null | null | null | null | null | null | null |
2006.02049 | FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining | ['Xiaoliang Dai', 'Alvin Wan', 'Peizhao Zhang', 'Bichen Wu', 'Zijian He', 'Zhen Wei', 'Kan Chen', 'Yuandong Tian', 'Matthew Yu', 'Peter Vajda', 'Joseph E. Gonzalez'] | ['cs.CV', 'cs.LG', 'cs.NE'] | Neural Architecture Search (NAS) yields state-of-the-art neural networks that
outperform their best manually-designed counterparts. However, previous NAS
methods search for architectures under one set of training hyper-parameters
(i.e., a training recipe), overlooking superior architecture-recipe
combinations. To address this, we present Neural Architecture-Recipe Search
(NARS) to search both (a) architectures and (b) their corresponding training
recipes, simultaneously. NARS utilizes an accuracy predictor that scores
architecture and training recipes jointly, guiding both sample selection and
ranking. Furthermore, to compensate for the enlarged search space, we leverage
"free" architecture statistics (e.g., FLOP count) to pretrain the predictor,
significantly improving its sample efficiency and prediction reliability. After
training the predictor via constrained iterative optimization, we run fast
evolutionary searches in just CPU minutes to generate architecture-recipe pairs
for a variety of resource constraints, called FBNetV3. FBNetV3 makes up a
family of state-of-the-art compact neural networks that outperform both
automatically and manually-designed competitors. For example, FBNetV3 matches
both EfficientNet and ResNeSt accuracy on ImageNet with up to 2.0x and 7.1x
fewer FLOPs, respectively. Furthermore, FBNetV3 yields significant performance
gains for downstream object detection tasks, improving mAP despite 18% fewer
FLOPs and 34% fewer parameters than EfficientNet-based equivalents. | 2020-06-03T05:20:21Z | null | null | null | FBNetV3: Joint Architecture-Recipe Search using Neural Acquisition Function | ['Xiaoliang Dai', 'Alvin Wan', 'Peizhao Zhang', 'Bichen Wu', 'Zijian He', 'Zhen Wei', 'Kan Chen', 'Yuandong Tian', 'Matthew Yu', 'Péter Vajda', 'Joseph E. Gonzalez'] | 2020 | arXiv.org | 73 | 54 | ['Computer Science']
2006.03236 | Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing | ['Zihang Dai', 'Guokun Lai', 'Yiming Yang', 'Quoc V. Le'] | ['cs.LG', 'cs.CL', 'stat.ML'] | With the success of language pretraining, it is highly desirable to develop
more efficient architectures of good scalability that can exploit the abundant
unlabeled data at a lower cost. To improve the efficiency, we examine the
much-overlooked redundancy in maintaining a full-length token-level
representation, especially for tasks that only require a single-vector
representation of the sequence. With this intuition, we propose
Funnel-Transformer which gradually compresses the sequence of hidden states to
a shorter one and hence reduces the computation cost. More importantly, by
re-investing the saved FLOPs from length reduction in constructing a deeper or
wider model, we further improve the model capacity. In addition, to perform
token-level predictions as required by common pretraining objectives,
Funnel-Transformer is able to recover a deep representation for each token from
the reduced hidden sequence via a decoder. Empirically, with comparable or
fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on a wide
variety of sequence-level prediction tasks, including text classification,
language understanding, and reading comprehension. The code and pretrained
checkpoints are available at https://github.com/laiguokun/Funnel-Transformer. | 2020-06-05T05:16:23Z | null | null | null | Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing | ['Zihang Dai', 'Guokun Lai', 'Yiming Yang', 'Quoc V. Le'] | 2020 | Neural Information Processing Systems | 236 | 44 | ['Computer Science', 'Mathematics']
2006.03654 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention | ['Pengcheng He', 'Xiaodong Liu', 'Jianfeng Gao', 'Weizhu Chen'] | ['cs.CL', 'cs.LG', 'cs.CL, cs.GL', 'I.2; I.7'] | Recent progress in pre-trained neural language models has significantly
improved the performance of many natural language processing (NLP) tasks. In
this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT
with disentangled attention) that improves the BERT and RoBERTa models using
two novel techniques. The first is the disentangled attention mechanism, where
each word is represented using two vectors that encode its content and
position, respectively, and the attention weights among words are computed
using disentangled matrices on their contents and relative positions,
respectively. Second, an enhanced mask decoder is used to incorporate absolute
positions in the decoding layer to predict the masked tokens in model
pre-training. In addition, a new virtual adversarial training method is used
for fine-tuning to improve models' generalization. We show that these
techniques significantly improve the efficiency of model pre-training and the
performance of both natural language understanding (NLU) and natural language
generation (NLG) downstream tasks. Compared to RoBERTa-Large, a DeBERTa model
trained on half of the training data performs consistently better on a wide
range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%),
on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%).
Notably, we scale up DeBERTa by training a larger version that consists of 48
Transformer layers with 1.5 billion parameters. The significant performance boost
makes the single DeBERTa model surpass the human performance on the SuperGLUE
benchmark (Wang et al., 2019a) for the first time in terms of macro-average
score (89.9 versus 89.8), and the ensemble DeBERTa model sits atop the
SuperGLUE leaderboard as of January 6, 2021, outperforming the human baseline
by a decent margin (90.3 versus 89.8). | 2020-06-05T19:54:34Z | 20 pages,5 figures, 13 tables. In v2, we scale up DeBERTa to 1.5B
parameters and it surpasses the human performance on SuperGLUE leaderboard
for the first time as of December 29, 2020. In v3, we replace MLM with RTD
objective which significantly improves the model performance | null | null | null | null | null | null | null | null | null |
2006.03659 | DeCLUTR: Deep Contrastive Learning for Unsupervised Textual
Representations | ['John Giorgi', 'Osvald Nitski', 'Bo Wang', 'Gary Bader'] | ['cs.CL', 'cs.LG'] | Sentence embeddings are an important component of many natural language
processing (NLP) systems. Like word embeddings, sentence embeddings are
typically learned on large text corpora and then transferred to various
downstream tasks, such as clustering and retrieval. Unlike word embeddings, the
highest performing solutions for learning sentence embeddings require labelled
data, limiting their usefulness to languages and domains where labelled data is
abundant. In this paper, we present DeCLUTR: Deep Contrastive Learning for
Unsupervised Textual Representations. Inspired by recent advances in deep
metric learning (DML), we carefully design a self-supervised objective for
learning universal sentence embeddings that does not require labelled training
data. When used to extend the pretraining of transformer-based language models,
our approach closes the performance gap between unsupervised and supervised
pretraining for universal sentence encoders. Importantly, our experiments
suggest that the quality of the learned embeddings scale with both the number
of trainable parameters and the amount of unlabelled training data. Our code
and pretrained models are publicly available and can be easily adapted to new
domains or used to embed unseen text. | 2020-06-05T20:00:28Z | ACL2021 Camera Ready V2 | null | null | DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations | ['John Giorgi', 'O. Nitski', 'Gary D Bader', 'Bo Wang'] | 2020 | Annual Meeting of the Association for Computational Linguistics | 499 | 97 | ['Computer Science']
2006.03677 | Visual Transformers: Token-based Image Representation and Processing for
Computer Vision | ['Bichen Wu', 'Chenfeng Xu', 'Xiaoliang Dai', 'Alvin Wan', 'Peizhao Zhang', 'Zhicheng Yan', 'Masayoshi Tomizuka', 'Joseph Gonzalez', 'Kurt Keutzer', 'Peter Vajda'] | ['cs.CV', 'cs.LG', 'eess.IV'] | Computer vision has achieved remarkable success by (a) representing images as
uniformly-arranged pixel arrays and (b) convolving highly-localized features.
However, convolutions treat all image pixels equally regardless of importance;
explicitly model all concepts across all images, regardless of content; and
struggle to relate spatially-distant concepts. In this work, we challenge this
paradigm by (a) representing images as semantic visual tokens and (b) running
transformers to densely model token relationships. Critically, our Visual
Transformer operates in a semantic token space, judiciously attending to
different image parts based on context. This is in sharp contrast to
pixel-space transformers that require orders-of-magnitude more compute. Using
an advanced training recipe, our VTs significantly outperform their
convolutional counterparts, raising ResNet accuracy on ImageNet top-1 by 4.6 to
7 points while using fewer FLOPs and parameters. For semantic segmentation on
LIP and COCO-stuff, VT-based feature pyramid networks (FPN) achieve 0.35 points
higher mIoU while reducing the FPN module's FLOPs by 6.5x. | 2020-06-05T20:49:49Z | null | null | null | null | null | null | null | null | null | null |
2006.04045 | A Generic First-Order Algorithmic Framework for Bi-Level Programming
Beyond Lower-Level Singleton | ['Risheng Liu', 'Pan Mu', 'Xiaoming Yuan', 'Shangzhi Zeng', 'Jin Zhang'] | ['cs.LG', 'cs.CV', 'math.DS', 'math.OC', 'stat.ML'] | In recent years, a variety of gradient-based first-order methods have been
developed to solve bi-level optimization problems for learning applications.
However, theoretical guarantees of these existing approaches heavily rely on
the simplification that for each fixed upper-level variable, the lower-level
solution must be a singleton (a.k.a., Lower-Level Singleton, LLS). In this
work, we first design a counter-example to illustrate the invalidation of such
LLS condition. Then, by formulating BLPs from the viewpoint of optimistic
bi-level and aggregating hierarchical objective information, we establish
Bi-level Descent Aggregation (BDA), a flexible and modularized algorithmic
framework for generic bi-level optimization. Theoretically, we derive a new
methodology to prove the convergence of BDA without the LLS condition. Our
investigations also demonstrate that BDA is indeed compatible with a variety of
particular first-order computation modules. Additionally, as an interesting
byproduct, we also improve these conventional first-order bi-level schemes
(under the LLS simplification). Particularly, we establish their convergences
with weaker assumptions. Extensive experiments justify our theoretical results
and demonstrate the superiority of the proposed BDA for different tasks,
including hyper-parameter optimization and meta learning. | 2020-06-07T05:18:50Z | Accepted at ICML 2020 | null | null | null | null | null | null | null | null | null |
2006.04558 | FastSpeech 2: Fast and High-Quality End-to-End Text to Speech | ['Yi Ren', 'Chenxu Hu', 'Xu Tan', 'Tao Qin', 'Sheng Zhao', 'Zhou Zhao', 'Tie-Yan Liu'] | ['eess.AS', 'cs.CL', 'cs.LG', 'cs.SD'] | Non-autoregressive text to speech (TTS) models such as FastSpeech can
synthesize speech significantly faster than previous autoregressive models with
comparable quality. The training of the FastSpeech model relies on an
autoregressive teacher model for duration prediction (to provide more
information as input) and knowledge distillation (to simplify the data
distribution in output), which can ease the one-to-many mapping problem (i.e.,
multiple speech variations correspond to the same text) in TTS. However,
FastSpeech has several disadvantages: 1) the teacher-student distillation
pipeline is complicated and time-consuming, 2) the duration extracted from the
teacher model is not accurate enough, and the target mel-spectrograms distilled
from the teacher model suffer from information loss due to data simplification,
both of which limit the voice quality. In this paper, we propose FastSpeech 2,
which addresses the issues in FastSpeech and better solves the one-to-many
mapping problem in TTS by 1) directly training the model with ground-truth
target instead of the simplified output from the teacher, and 2) introducing more
variation information of speech (e.g., pitch, energy and more accurate
duration) as conditional inputs. Specifically, we extract duration, pitch and
energy from speech waveform and directly take them as conditional inputs in
training and use predicted values in inference. We further design FastSpeech
2s, which is the first attempt to directly generate speech waveform from text
in parallel, enjoying the benefit of fully end-to-end inference. Experimental
results show that 1) FastSpeech 2 achieves a 3x training speed-up over
FastSpeech, and FastSpeech 2s enjoys even faster inference speed; 2) FastSpeech
2 and 2s outperform FastSpeech in voice quality, and FastSpeech 2 can even
surpass autoregressive models. Audio samples are available at
https://speechresearch.github.io/fastspeech2/. | 2020-06-08T13:05:40Z | Accepted by ICLR 2021 | null | null | null | null | null | null | null | null | null |
2006.06676 | Training Generative Adversarial Networks with Limited Data | ['Tero Karras', 'Miika Aittala', 'Janne Hellsten', 'Samuli Laine', 'Jaakko Lehtinen', 'Timo Aila'] | ['cs.CV', 'cs.LG', 'cs.NE', 'stat.ML'] | Training generative adversarial networks (GAN) using too little data
typically leads to discriminator overfitting, causing training to diverge. We
propose an adaptive discriminator augmentation mechanism that significantly
stabilizes training in limited data regimes. The approach does not require
changes to loss functions or network architectures, and is applicable both when
training from scratch and when fine-tuning an existing GAN on another dataset.
We demonstrate, on several datasets, that good results are now possible using
only a few thousand training images, often matching StyleGAN2 results with an
order of magnitude fewer images. We expect this to open up new application
domains for GANs. We also find that the widely used CIFAR-10 is, in fact, a
limited data benchmark, and improve the record FID from 5.59 to 2.42. | 2020-06-11T17:06:34Z | null | null | null | Training Generative Adversarial Networks with Limited Data | ['Tero Karras', 'M. Aittala', 'Janne Hellsten', 'S. Laine', 'J. Lehtinen', 'Timo Aila'] | 2020 | Neural Information Processing Systems | 1897 | 56 | ['Computer Science', 'Mathematics']
2006.06687 | On the asymptotics of wide networks with polynomial activations | ['Kyle Aitken', 'Guy Gur-Ari'] | ['cs.LG', 'hep-th', 'stat.ML'] | We consider an existing conjecture addressing the asymptotic behavior of
neural networks in the large width limit. The results that follow from this
conjecture include tight bounds on the behavior of wide networks during
stochastic gradient descent, and a derivation of their finite-width dynamics.
We prove the conjecture for deep networks with polynomial activation functions,
greatly extending the validity of these results. Finally, we point out a
difference in the asymptotic behavior of networks with analytic (and
non-linear) activation functions and those with piecewise-linear activations
such as ReLU. | 2020-06-11T18:00:01Z | 8+12 pages, 6 figures, 2 tables | null | null | On the asymptotics of wide networks with polynomial activations | ['Kyle Aitken', 'Guy Gur-Ari'] | 2020 | arXiv.org | 23 | 17 | ['Computer Science', 'Physics', 'Mathematics']
2006.06873 | FastPitch: Parallel Text-to-speech with Pitch Prediction | ['Adrian Łańcucki'] | ['eess.AS', 'cs.CL', 'cs.LG', 'cs.SD'] | We present FastPitch, a fully-parallel text-to-speech model based on
FastSpeech, conditioned on fundamental frequency contours. The model predicts
pitch contours during inference. By altering these predictions, the generated
speech can be more expressive, better match the semantics of the utterance, and
in the end be more engaging to the listener. Uniformly increasing or decreasing
pitch with FastPitch generates speech that resembles the voluntary modulation
of voice. Conditioning on frequency contours improves the overall quality of
synthesized speech, making it comparable to state-of-the-art. It does not
introduce an overhead, and FastPitch retains the favorable, fully-parallel
Transformer architecture, with over 900x real-time factor for mel-spectrogram
synthesis of a typical utterance. | 2020-06-11T23:23:58Z | Accepted to ICASSP 2021 | null | null | null | null | null | null | null | null | null |
2006.07164 | ESAD: Endoscopic Surgeon Action Detection Dataset | ['Vivek Singh Bawa', 'Gurkirt Singh', 'Francis KapingA', 'Inna Skarga-Bandurova', 'Alice Leporini', 'Carmela Landolfo', 'Armando Stabile', 'Francesco Setti', 'Riccardo Muradore', 'Elettra Oleari', 'Fabio Cuzzolin'] | ['cs.CV', 'cs.RO'] | In this work, we take aim towards increasing the effectiveness of surgical
assistant robots. We intend to make assistant robots safer by making them
aware of the actions of the surgeon, so that they can take appropriate assisting
actions. In other words, we aim to solve the problem of surgeon action
detection in endoscopic videos. To this end, we introduce a challenging dataset
for surgeon action detection in real-world endoscopic videos. Action classes are
picked based on the feedback of surgeons and annotated by a medical
professional. Given a video frame, we draw a bounding box around the surgical
tool which is performing an action and label it with an action label. Finally,
we present a frame-level action detection baseline model based on recent
advances in object detection. Results on our new dataset show that it provides
enough interesting challenges for future methods and can serve as a strong
benchmark for corresponding research in surgeon action detection in endoscopic
videos. | 2020-06-12T13:22:41Z | In context of SARAS ESAD Challenge at MIDL | null | null | ESAD: Endoscopic Surgeon Action Detection Dataset | ['V. Bawa', 'Gurkirt Singh', 'Francis KapingA', 'Inna Skarga-Bandurova', 'A. Leporini', 'Carmela Landolfo', 'Armando Stabile', 'Francesco Setti', 'R. Muradore', 'Elettra Oleari', 'Fabio Cuzzolin'] | 2020 | arXiv.org | 15 | 30 | ['Computer Science']
2006.07698 | Transferring Monolingual Model to Low-Resource Language: The Case of
Tigrinya | ['Abrhalei Tela', 'Abraham Woubie', 'Ville Hautamaki'] | ['cs.CL', 'cs.LG'] | In recent years, transformer models have achieved great success in natural
language processing (NLP) tasks. Most of the current state-of-the-art NLP
results are achieved by using monolingual transformer models, where the model
is pre-trained using a single language unlabelled text corpus. Then, the model
is fine-tuned to the specific downstream task. However, the cost of
pre-training a new transformer model is high for most languages. In this work,
we propose a cost-effective transfer learning method to adapt a strong source
language model, trained from a large monolingual corpus to a low-resource
language. Thus, using XLNet language model, we demonstrate competitive
performance with mBERT and a pre-trained target language model on the
cross-lingual sentiment (CLS) dataset and on a new sentiment analysis dataset
for low-resourced language Tigrinya. With only 10k examples of the given
Tigrinya sentiment analysis dataset, English XLNet has achieved 78.88% F1-Score
outperforming BERT and mBERT by 10% and 7%, respectively. More interestingly,
fine-tuning (English) XLNet model on the CLS dataset has promising results
compared to mBERT and even outperformed mBERT for one dataset of the Japanese
language. | 2020-06-13T18:53:22Z | null | null | null | null | null | null | null | null | null | null |
2006.07733 | Bootstrap your own latent: A new approach to self-supervised Learning | ['Jean-Bastien Grill', 'Florian Strub', 'Florent Altché', 'Corentin Tallec', 'Pierre H. Richemond', 'Elena Buchatskaya', 'Carl Doersch', 'Bernardo Avila Pires', 'Zhaohan Daniel Guo', 'Mohammad Gheshlaghi Azar', 'Bilal Piot', 'Koray Kavukcuoglu', 'Rémi Munos', 'Michal Valko'] | ['cs.LG', 'cs.CV', 'stat.ML'] | We introduce Bootstrap Your Own Latent (BYOL), a new approach to
self-supervised image representation learning. BYOL relies on two neural
networks, referred to as online and target networks, that interact and learn
from each other. From an augmented view of an image, we train the online
network to predict the target network representation of the same image under a
different augmented view. At the same time, we update the target network with a
slow-moving average of the online network. While state-of-the-art methods rely
on negative pairs, BYOL achieves a new state of the art without them. BYOL
reaches $74.3\%$ top-1 classification accuracy on ImageNet using a linear
evaluation with a ResNet-50 architecture and $79.6\%$ with a larger ResNet. We
show that BYOL performs on par or better than the current state of the art on
both transfer and semi-supervised benchmarks. Our implementation and pretrained
models are given on GitHub. | 2020-06-13T22:35:21Z | null | null | null | null | null | null | null | null | null | null |
2006.07890 | FinEst BERT and CroSloEngual BERT: less is more in multilingual models | ['Matej Ulčar', 'Marko Robnik-Šikonja'] | ['cs.CL'] | Large pretrained masked language models have become state-of-the-art
solutions for many NLP problems. The research has been mostly focused on
the English language, though. While massively multilingual models exist, studies
have shown that monolingual models produce much better results. We train two
trilingual BERT-like models, one for Finnish, Estonian, and English, the other
for Croatian, Slovenian, and English. We evaluate their performance on several
downstream tasks, NER, POS-tagging, and dependency parsing, using the
multilingual BERT and XLM-R as baselines. The newly created FinEst BERT and
CroSloEngual BERT improve the results on all tasks in most monolingual and
cross-lingual situations. | 2020-06-14T12:54:01Z | 10 pages, accepted at TSD 2020 conference | Proceedings of the 23rd International Conference on Text, Speech,
and Dialogue (TSD 2020), pages 104-111 | null | null | null | null | null | null | null | null |
2006.08097 | FinBERT: A Pretrained Language Model for Financial Communications | ['Yi Yang', 'Mark Christopher Siy UY', 'Allen Huang'] | ['cs.CL'] | Contextual pretrained language models, such as BERT (Devlin et al., 2019),
have made significant breakthroughs in various NLP tasks by training on
large-scale unlabeled text resources. The financial sector also accumulates a
large amount of financial communication text. However, there are no pretrained
finance-specific language models available. In this work, we address this need
by pretraining a financial domain-specific BERT model, FinBERT, using
large-scale financial communication corpora. Experiments on three financial
sentiment classification tasks confirm the advantage of FinBERT over the
generic-domain BERT model. The code and pretrained models are available at
https://github.com/yya518/FinBERT. We hope this will be useful for
practitioners and researchers working on financial NLP tasks. | 2020-06-15T02:51:06Z | https://github.com/yya518/FinBERT | null | null | null | null | null | null | null | null | null |
2006.09092 | Learning Rates as a Function of Batch Size: A Random Matrix Theory
Approach to Neural Network Training | ['Diego Granziol', 'Stefan Zohren', 'Stephen Roberts'] | ['stat.ML', 'cs.LG'] | We study the effect of mini-batching on the loss landscape of deep neural
networks using spiked, field-dependent random matrix theory. We demonstrate
that the magnitudes of the extremal values of the batch Hessian are larger than
those of the empirical Hessian. We also derive similar results for the
Generalised Gauss-Newton matrix approximation of the Hessian. As a consequence
of our theorems we derive analytical expressions for the maximal learning
rates as a function of batch size, informing practical training regimens for
both stochastic gradient descent (linear scaling) and adaptive algorithms, such
as Adam (square root scaling), for smooth, non-convex deep neural networks.
Whilst the linear scaling for stochastic gradient descent has been derived
under more restrictive conditions, which we generalise, the square root scaling
rule for adaptive optimisers is, to our knowledge, completely novel. For
stochastic second-order methods and adaptive methods, we derive that the
minimal damping coefficient is proportional to the ratio of the learning rate
to batch size. We validate our claims on the VGG/WideResNet architectures on
the CIFAR-$100$ and ImageNet datasets. Based on our investigations of the
sub-sampled Hessian we develop a stochastic Lanczos quadrature based on-the-fly
learning rate and momentum learner, which avoids the need for expensive
multiple evaluations for these key hyper-parameters and shows good preliminary
results on the Pre-Residual Architecture for CIFAR-$100$. | 2020-06-16T11:55:45Z | null | null | null | null | null | null | null | null | null | null
2006.09158 | G1020: A Benchmark Retinal Fundus Image Dataset for Computer-Aided
Glaucoma Detection | ['Muhammad Naseer Bajwa', 'Gur Amrit Pal Singh', 'Wolfgang Neumeier', 'Muhammad Imran Malik', 'Andreas Dengel', 'Sheraz Ahmed'] | ['eess.IV', 'cs.CV', 'cs.LG'] | Scarcity of large publicly available retinal fundus image datasets for
automated glaucoma detection has been the bottleneck for successful application
of artificial intelligence towards practical Computer-Aided Diagnosis (CAD). A
few small datasets that are available for the research community usually suffer
from impractical image capturing conditions and stringent inclusion criteria.
These shortcomings in the already limited choice of existing datasets make it
challenging to mature a CAD system so that it can perform in a real-world
environment. In this paper we present a large publicly available retinal fundus
image dataset for glaucoma classification called G1020. The dataset is curated
by conforming to standard practices in routine ophthalmology and it is expected
to serve as a standard benchmark dataset for glaucoma detection. This database
consists of 1020 high resolution colour fundus images and provides ground truth
annotations for glaucoma diagnosis, optic disc and optic cup segmentation,
vertical cup-to-disc ratio, size of neuroretinal rim in inferior, superior,
nasal and temporal quadrants, and bounding box location for optic disc. We also
report baseline results by conducting extensive experiments for automated
glaucoma diagnosis and segmentation of optic disc and optic cup. | 2020-05-28T14:29:03Z | Accepted in IJCNN-2020, 7 pages, 5 figures | null | null | G1020: A Benchmark Retinal Fundus Image Dataset for Computer-Aided Glaucoma Detection | ['Muhammad Naseer Bajwa', 'Gurbinder Singh', 'Wolfgang Neumeier', 'M. I. Malik', 'A. Dengel', 'Sheraz Ahmed'] | 2020 | IEEE International Joint Conference on Neural Network | 80 | 34 | ['Computer Science', 'Engineering']
2006.09882 | Unsupervised Learning of Visual Features by Contrasting Cluster
Assignments | ['Mathilde Caron', 'Ishan Misra', 'Julien Mairal', 'Priya Goyal', 'Piotr Bojanowski', 'Armand Joulin'] | ['cs.CV'] | Unsupervised image representations have significantly reduced the gap with
supervised pretraining, notably with the recent achievements of contrastive
learning methods. These contrastive methods typically work online and rely on a
large number of explicit pairwise feature comparisons, which is computationally
challenging. In this paper, we propose an online algorithm, SwAV, that takes
advantage of contrastive methods without requiring the computation of pairwise
comparisons. Specifically, our method simultaneously clusters the data while
enforcing consistency between cluster assignments produced for different
augmentations (or views) of the same image, instead of comparing features
directly as in contrastive learning. Simply put, we use a swapped prediction
mechanism where we predict the cluster assignment of a view from the
representation of another view. Our method can be trained with large and small
batches and can scale to unlimited amounts of data. Compared to previous
contrastive methods, our method is more memory efficient since it does not
require a large memory bank or a special momentum network. In addition, we also
propose a new data augmentation strategy, multi-crop, that uses a mix of views
with different resolutions in place of two full-resolution views, without
increasing the memory or compute requirements much. We validate our findings by
achieving 75.3% top-1 accuracy on ImageNet with ResNet-50, as well as
surpassing supervised pretraining on all the considered transfer tasks. | 2020-06-17T14:00:42Z | NeurIPS 2020 | null | null | Unsupervised Learning of Visual Features by Contrasting Cluster Assignments | ['Mathilde Caron', 'Ishan Misra', 'J. Mairal', 'Priya Goyal', 'Piotr Bojanowski', 'Armand Joulin'] | 2020 | Neural Information Processing Systems | 4115 | 69 | ['Computer Science']
2006.10029 | Big Self-Supervised Models are Strong Semi-Supervised Learners | ['Ting Chen', 'Simon Kornblith', 'Kevin Swersky', 'Mohammad Norouzi', 'Geoffrey Hinton'] | ['cs.LG', 'cs.CV', 'stat.ML'] | One paradigm for learning from few labeled examples while making best use of
a large amount of unlabeled data is unsupervised pretraining followed by
supervised fine-tuning. Although this paradigm uses unlabeled data in a
task-agnostic way, in contrast to common approaches to semi-supervised learning
for computer vision, we show that it is surprisingly effective for
semi-supervised learning on ImageNet. A key ingredient of our approach is the
use of big (deep and wide) networks during pretraining and fine-tuning. We find
that the fewer the labels, the more this approach (task-agnostic use of
unlabeled data) benefits from a bigger network. After fine-tuning, the big
network can be further improved and distilled into a much smaller one with
little loss in classification accuracy by using the unlabeled examples for a
second time, but in a task-specific way. The proposed semi-supervised learning
algorithm can be summarized in three steps: unsupervised pretraining of a big
ResNet model using SimCLRv2, supervised fine-tuning on a few labeled examples,
and distillation with unlabeled examples for refining and transferring the
task-specific knowledge. This procedure achieves 73.9% ImageNet top-1 accuracy
with just 1% of the labels ($\le$13 labeled images per class) using ResNet-50,
a $10\times$ improvement in label efficiency over the previous
state-of-the-art. With 10% of labels, ResNet-50 trained with our method
achieves 77.5% top-1 accuracy, outperforming standard supervised training with
all of the labels. | 2020-06-17T17:48:22Z | NeurIPS'2020. Code and pretrained models at
https://github.com/google-research/simclr | null | null | Big Self-Supervised Models are Strong Semi-Supervised Learners | ['Ting Chen', 'Simon Kornblith', 'Kevin Swersky', 'Mohammad Norouzi', 'Geoffrey E. Hinton'] | 2,020 | Neural Information Processing Systems | 2,258 | 75 | ['Computer Science', 'Mathematics'] |
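The third step of this recipe, distilling the big fine-tuned network into a smaller one using unlabeled examples, reduces to a cross-entropy between softened teacher and student outputs. A minimal NumPy sketch under stated assumptions: the toy batch, logit shapes, and temperature are illustrative, and the real recipe uses fine-tuned ResNets rather than raw logit arrays:

```python
import numpy as np

def softmax(z, t=1.0):
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between softened teacher and student distributions,
    computed on UNLABELED examples (step 3 of the semi-supervised recipe)."""
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature))
    return -(p_teacher * log_p_student).sum(axis=-1).mean()

# toy batch of 8 examples, 10 classes
rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 10))
loss_match = distill_loss(teacher, teacher)            # student mimics teacher
loss_rand = distill_loss(teacher, rng.normal(size=(8, 10)))
assert loss_match < loss_rand   # matching the teacher minimizes the loss
```

Because cross-entropy is lower-bounded by the teacher's entropy, a student that reproduces the teacher's distribution always scores better than a mismatched one, which is what makes the unlabeled data useful a second time.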
2,006.10204 | BlazePose: On-device Real-time Body Pose tracking | ['Valentin Bazarevsky', 'Ivan Grishchenko', 'Karthik Raveendran', 'Tyler Zhu', 'Fan Zhang', 'Matthias Grundmann'] | ['cs.CV'] | We present BlazePose, a lightweight convolutional neural network architecture
for human pose estimation that is tailored for real-time inference on mobile
devices. During inference, the network produces 33 body keypoints for a single
person and runs at over 30 frames per second on a Pixel 2 phone. This makes it
particularly suited to real-time use cases like fitness tracking and sign
language recognition. Our main contributions include a novel body pose tracking
solution and a lightweight body pose estimation neural network that uses both
heatmaps and regression to keypoint coordinates. | 2020-06-17T23:52:46Z | 4 pages, 6 figures; CVPR Workshop on Computer Vision for Augmented
and Virtual Reality, Seattle, WA, USA, 2020 | null | null | BlazePose: On-device Real-time Body Pose tracking | ['Valentin Bazarevsky', 'Ivan Grishchenko', 'Karthik Raveendran', 'Tyler Lixuan Zhu', 'Fan Zhang', 'Matthias Grundmann'] | 2,020 | arXiv.org | 592 | 11 | ['Computer Science'] |
2,006.10214 | MediaPipe Hands: On-device Real-time Hand Tracking | ['Fan Zhang', 'Valentin Bazarevsky', 'Andrey Vakunov', 'Andrei Tkachenka', 'George Sung', 'Chuo-Ling Chang', 'Matthias Grundmann'] | ['cs.CV'] | We present a real-time on-device hand tracking pipeline that predicts hand
skeleton from a single RGB camera for AR/VR applications. The pipeline consists
of two models: 1) a palm detector, and 2) a hand landmark model. It is implemented
via MediaPipe, a framework for building cross-platform ML solutions. The
proposed model and pipeline architecture demonstrate real-time inference speed
on mobile GPUs and high prediction quality. MediaPipe Hands is open sourced at
https://mediapipe.dev. | 2020-06-18T00:19:13Z | 5 pages, 7 figures; CVPR Workshop on Computer Vision for Augmented
and Virtual Reality, Seattle, WA, USA, 2020 | null | null | null | null | null | null | null | null | null |
2,006.10369 | Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine
Translation | ['Jungo Kasai', 'Nikolaos Pappas', 'Hao Peng', 'James Cross', 'Noah A. Smith'] | ['cs.CL'] | Much recent effort has been invested in non-autoregressive neural machine
translation, which appears to be an efficient alternative to state-of-the-art
autoregressive machine translation on modern GPUs. In contrast to the latter,
where generation is sequential, the former allows generation to be parallelized
across target token positions. Some of the latest non-autoregressive models
have achieved impressive translation quality-speed tradeoffs compared to
autoregressive baselines. In this work, we reexamine this tradeoff and argue
that autoregressive baselines can be substantially sped up without loss in
accuracy. Specifically, we study autoregressive models with encoders and
decoders of varied depths. Our extensive experiments show that given a
sufficiently deep encoder, a single-layer autoregressive decoder can
substantially outperform strong non-autoregressive models with comparable
inference speed. We show that the speed disadvantage for autoregressive
baselines compared to non-autoregressive methods has been overestimated in
three aspects: suboptimal layer allocation, insufficient speed measurement, and
lack of knowledge distillation. Our results establish a new protocol for future
research toward fast, accurate machine translation. Our code is available at
https://github.com/jungokasai/deep-shallow. | 2020-06-18T09:06:49Z | ICLR 2021 Final Version | null | null | null | null | null | null | null | null | null |
2,006.10518 | Improving Post Training Neural Quantization: Layer-wise Calibration and
Integer Programming | ['Itay Hubara', 'Yury Nahshan', 'Yair Hanani', 'Ron Banner', 'Daniel Soudry'] | ['cs.LG', 'stat.ML'] | Lately, post-training quantization methods have gained considerable
attention, as they are simple to use, and require only a small unlabeled
calibration set. This small dataset cannot be used to fine-tune the model
without significant over-fitting. Instead, these methods only use the
calibration set to set the activations' dynamic ranges. However, such methods
always resulted in significant accuracy degradation when used below 8 bits
(except on small datasets). Here we aim to break the 8-bit barrier. To this
end, we minimize the quantization errors of each layer separately by optimizing
its parameters over the calibration set. We empirically demonstrate that this
approach is: (1) much less susceptible to over-fitting than the standard
fine-tuning approaches, and can be used even on a very small calibration set;
and (2) more powerful than previous methods, which only set the activations'
dynamic ranges. Furthermore, we demonstrate how to optimally allocate the
bit-widths for each layer, while constraining accuracy degradation or model
compression by proposing a novel integer programming formulation. Finally, we
suggest model global statistics tuning, to correct biases introduced during
quantization. Together, these methods yield state-of-the-art results for both
vision and text models. For instance, on ResNet50, we obtain less than 1%
accuracy degradation with 4-bit weights and activations in all layers but
the smallest two. We open-sourced our code.
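The core idea of minimizing each layer's quantization error over the calibration set, rather than only matching the weights' dynamic range, can be sketched as follows. This is a simplified stand-in, not the paper's optimizer: the grid search over scales, the 4-bit setting, and the random calibration batch are all illustrative assumptions:

```python
import numpy as np

def quantize(w, scale, bits=4):
    """Uniform symmetric quantization of a weight matrix at a given scale."""
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

def calibrate_layer(w, x, bits=4, n_grid=100):
    """Pick the per-layer scale that minimizes the error of the layer OUTPUT
    (w @ x) on a small calibration batch, not just the weight range."""
    best_scale, best_err = None, np.inf
    max_abs = np.abs(w).max()
    for frac in np.linspace(0.1, 1.0, n_grid):
        scale = frac * max_abs / (2 ** (bits - 1) - 1)
        err = np.mean((w @ x - quantize(w, scale, bits) @ x) ** 2)
        if err < best_err:
            best_scale, best_err = scale, err
    return best_scale, best_err

rng = np.random.default_rng(1)
w = rng.normal(size=(16, 32))              # a toy layer
x = rng.normal(size=(32, 64))              # small calibration batch
naive_scale = np.abs(w).max() / (2 ** 3 - 1)   # pure range matching
naive_err = np.mean((w @ x - quantize(w, naive_scale) @ x) ** 2)
scale, err = calibrate_layer(w, x)
assert err <= naive_err   # output-aware calibration is never worse
```

Since the range-matching scale is itself one point on the search grid, the calibrated error can only improve on it, which mirrors why this approach resists over-fitting on tiny calibration sets: only one scalar per layer is tuned.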
2,006.10802 | DS6, Deformation-aware Semi-supervised Learning: Application to Small
Vessel Segmentation with Noisy Training Data | ['Soumick Chatterjee', 'Kartik Prabhu', 'Mahantesh Pattadkal', 'Gerda Bortsova', 'Chompunuch Sarasaen', 'Florian Dubost', 'Hendrik Mattern', 'Marleen de Bruijne', 'Oliver Speck', 'Andreas Nürnberger'] | ['eess.IV', 'cs.CV', 'cs.LG', '68T07 (Primary) 68T45 (Secondary)', 'I.2.6; I.4.6'] | Blood vessels of the brain provide the human brain with the required
nutrients and oxygen. As a vulnerable part of the cerebral blood supply,
pathology of small vessels can cause serious problems such as Cerebral Small
Vessel Diseases (CSVD). It has also been shown that CSVD is related to
neurodegeneration, such as Alzheimer's disease. With the advancement of 7 Tesla
MRI systems, higher spatial image resolution can be achieved, enabling the
depiction of very small vessels in the brain. Non-Deep Learning-based
approaches for vessel segmentation, e.g., Frangi's vessel enhancement with
subsequent thresholding, are capable of segmenting medium to large vessels but
often fail to segment small vessels. The sensitivity of these methods to small
vessels can be increased by extensive parameter tuning or by manual
corrections, albeit making them time-consuming, laborious, and not feasible for
larger datasets. This paper proposes a deep learning architecture to
automatically segment small vessels in 7 Tesla 3D Time-of-Flight (ToF) Magnetic
Resonance Angiography (MRA) data. The algorithm was trained and evaluated on a
small imperfect semi-automatically segmented dataset of only 11 subjects; using
six for training, two for validation, and three for testing. The deep learning
model based on U-Net Multi-Scale Supervision was trained using the training
subset and was made equivariant to elastic deformations in a self-supervised
manner using deformation-aware learning to improve the generalisation
performance. The proposed technique was evaluated quantitatively and
qualitatively against the test set and achieved a Dice score of 80.44 ± 0.83.
Furthermore, the result of the proposed method was compared against a
selected manually segmented region (resultant Dice of 62.07) and showed a
considerable improvement (18.98%) with deformation-aware learning. | 2020-06-18T18:42:57Z | null | Journal of Imaging. 2022; 8(10):259 | 10.3390/jimaging8100259 | null | null | null | null | null | null | null
2,006.10962 | Attention Mesh: High-fidelity Face Mesh Prediction in Real-time | ['Ivan Grishchenko', 'Artsiom Ablavatski', 'Yury Kartynnik', 'Karthik Raveendran', 'Matthias Grundmann'] | ['cs.CV'] | We present Attention Mesh, a lightweight architecture for 3D face mesh
prediction that uses attention to semantically meaningful regions. Our neural
network is designed for real-time on-device inference and runs at over 50 FPS
on a Pixel 2 phone. Our solution enables applications like AR makeup, eye
tracking and AR puppeteering that rely on highly accurate landmarks for eye and
lips regions. Our main contribution is a unified network architecture that
achieves the same accuracy on facial landmarks as a multi-stage cascaded
approach, while being 30 percent faster. | 2020-06-19T05:07:38Z | 4 pages, 5 figures; CVPR Workshop on Computer Vision for Augmented
and Virtual Reality, Seattle, WA, USA, 2020 | null | null | null | null | null | null | null | null | null |
2,006.11063 | Dataset for Automatic Summarization of Russian News | ['Ilya Gusev'] | ['cs.CL'] | Automatic text summarization has been studied in a variety of domains and
languages. However, this does not hold for the Russian language. To overcome
this issue, we present Gazeta, the first dataset for summarization of Russian
news. We describe the properties of this dataset and benchmark several
extractive and abstractive models. We demonstrate that the dataset is a valid
task for methods of text summarization for Russian. Additionally, we prove the
pretrained mBART model to be useful for Russian text summarization. | 2020-06-19T10:44:06Z | Version 4, October 2021, corrected BLEU scores | In: AINL 2020. Communications in Computer and Information Science,
vol 1292. Springer, Cham (2020) | 10.1007/978-3-030-59082-6_9 | null | null | null | null | null | null | null |
2,006.11239 | Denoising Diffusion Probabilistic Models | ['Jonathan Ho', 'Ajay Jain', 'Pieter Abbeel'] | ['cs.LG', 'stat.ML'] | We present high quality image synthesis results using diffusion probabilistic
models, a class of latent variable models inspired by considerations from
nonequilibrium thermodynamics. Our best results are obtained by training on a
weighted variational bound designed according to a novel connection between
diffusion probabilistic models and denoising score matching with Langevin
dynamics, and our models naturally admit a progressive lossy decompression
scheme that can be interpreted as a generalization of autoregressive decoding.
On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and
a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality
similar to ProgressiveGAN. Our implementation is available at
https://github.com/hojonathanho/diffusion | 2020-06-19T17:24:44Z | null | null | null | Denoising Diffusion Probabilistic Models | ['Jonathan Ho', 'Ajay Jain', 'P. Abbeel'] | 2,020 | Neural Information Processing Systems | 18,550 | 73 | ['Computer Science', 'Mathematics'] |
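The weighted variational bound in this abstract reduces, in its simplified form, to predicting the added noise with an MSE loss. A sketch under stated assumptions: the linear beta schedule is common DDPM practice, and the lambda "model" is a placeholder for the U-Net noise predictor:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)            # linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)          # cumulative signal retention

def q_sample(x0, t, eps):
    """Forward diffusion: noise a clean sample x0 to timestep t in one shot."""
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

def training_loss(model, x0):
    """Simplified DDPM objective: predict the injected noise via MSE."""
    t = int(rng.integers(0, T))
    eps = rng.normal(size=x0.shape)
    x_t = q_sample(x0, t, eps)
    return np.mean((eps - model(x_t, t)) ** 2)

x0 = rng.normal(size=(4, 8))
# toy 'model' that always predicts zero noise; a real DDPM uses a U-Net here
loss = training_loss(lambda x, t: np.zeros_like(x), x0)
assert loss > 0
# by the final timestep the sample is essentially pure noise
assert np.sqrt(alphas_bar[-1]) < 0.01
```

The connection to denoising score matching mentioned in the abstract is that this noise-prediction MSE is, up to weighting, score matching at each noise level.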
2,006.11316 | SqueezeBERT: What can computer vision teach NLP about efficient neural
networks? | ['Forrest N. Iandola', 'Albert E. Shaw', 'Ravi Krishna', 'Kurt W. Keutzer'] | ['cs.CL', 'cs.CV', 'cs.LG'] | Humans read and write hundreds of billions of messages every day. Further,
due to the availability of large datasets, large computing systems, and better
neural network models, natural language processing (NLP) technology has made
significant strides in understanding, proofreading, and organizing these
messages. Thus, there is a significant opportunity to deploy NLP in myriad
applications to help web users, social networks, and businesses. In particular,
we consider smartphones and other mobile devices as crucial platforms for
deploying NLP models at scale. However, today's highly-accurate NLP neural
network models such as BERT and RoBERTa are extremely computationally
expensive, with BERT-base taking 1.7 seconds to classify a text snippet on a
Pixel 3 smartphone. In this work, we observe that methods such as grouped
convolutions have yielded significant speedups for computer vision networks,
but many of these techniques have not been adopted by NLP neural network
designers. We demonstrate how to replace several operations in self-attention
layers with grouped convolutions, and we use this technique in a novel network
architecture called SqueezeBERT, which runs 4.3x faster than BERT-base on the
Pixel 3 while achieving competitive accuracy on the GLUE test set. The
SqueezeBERT code will be released. | 2020-06-19T18:40:29Z | 9 pages + appendix | null | null | null | null | null | null | null | null | null |
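The parameter saving from replacing dense projections with grouped convolutions, the core move described above, is easy to count. A sketch with assumed dimensions (768 channels as in BERT-base, 4 groups chosen for illustration):

```python
def conv1d_params(c_in, c_out, k=1, groups=1):
    """Weight count of a 1D convolution: each group connects c_in/groups
    input channels to c_out/groups output channels; bias adds c_out."""
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * (c_out // groups) * k * groups + c_out

dense = conv1d_params(768, 768, k=1, groups=1)    # BERT-style dense projection
grouped = conv1d_params(768, 768, k=1, groups=4)  # grouped-convolution variant
assert dense > 3.9 * grouped   # roughly groups-fold fewer weights and FLOPs
```

Cutting the per-position projection cost by the group count is the same trick that made grouped convolutions effective in efficient vision networks, here transplanted into the self-attention layers.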
2,006.11477 | wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations | ['Alexei Baevski', 'Henry Zhou', 'Abdelrahman Mohamed', 'Michael Auli'] | ['cs.CL', 'cs.LG', 'cs.SD', 'eess.AS'] | We show for the first time that learning powerful representations from speech
audio alone followed by fine-tuning on transcribed speech can outperform the
best semi-supervised methods while being conceptually simpler. wav2vec 2.0
masks the speech input in the latent space and solves a contrastive task
defined over a quantization of the latent representations which are jointly
learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER
on the clean/other test sets. When lowering the amount of labeled data to one
hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour
subset while using 100 times less labeled data. Using just ten minutes of
labeled data and pre-training on 53k hours of unlabeled data still achieves
4.8/8.2 WER. This demonstrates the feasibility of speech recognition with
limited amounts of labeled data. | 2020-06-20T02:35:02Z | null | null | null | wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations | ['Alexei Baevski', 'Henry Zhou', 'Abdel-rahman Mohamed', 'Michael Auli'] | 2,020 | Neural Information Processing Systems | 5,880 | 61 | ['Computer Science', 'Engineering'] |
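The contrastive task described above, identifying the true quantized latent for a masked timestep among distractors, can be sketched with toy vectors. The dimensions, temperature, and distractor count here are illustrative assumptions; the real model learns the context vectors and quantized targets jointly:

```python
import numpy as np

def contrastive_step(context, true_q, distractors, temp=0.1):
    """InfoNCE over one masked timestep: score the true quantized latent
    against distractors sampled from other masked positions."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    candidates = [true_q] + list(distractors)
    logits = np.array([cos(context, q) / temp for q in candidates])
    logits -= logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])           # true target sits at index 0

rng = np.random.default_rng(0)
q_true = rng.normal(size=32)                       # quantized target latent
context = q_true + 0.1 * rng.normal(size=32)       # well-aligned context vector
distractors = [rng.normal(size=32) for _ in range(5)]
loss = contrastive_step(context, q_true, distractors)
assert loss < 0.5   # an aligned context easily picks out the true latent
```

Minimizing this loss pushes the transformer's context representations toward the quantized latents of the masked audio, which is the self-supervised signal that later fine-tuning builds on.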
2,006.13979 | Unsupervised Cross-lingual Representation Learning for Speech
Recognition | ['Alexis Conneau', 'Alexei Baevski', 'Ronan Collobert', 'Abdelrahman Mohamed', 'Michael Auli'] | ['cs.CL', 'cs.LG', 'cs.SD', 'eess.AS'] | This paper presents XLSR which learns cross-lingual speech representations by
pretraining a single model from the raw waveform of speech in multiple
languages. We build on wav2vec 2.0 which is trained by solving a contrastive
task over masked latent speech representations and jointly learns a
quantization of the latents shared across languages. The resulting model is
fine-tuned on labeled data and experiments show that cross-lingual pretraining
significantly outperforms monolingual pretraining. On the CommonVoice
benchmark, XLSR shows a relative phoneme error rate reduction of 72% compared
to the best known results. On BABEL, our approach improves word error rate by
16% relative compared to a comparable system. Our approach enables a single
multilingual speech recognition model which is competitive to strong individual
models. Analysis shows that the latent discrete speech representations are
shared across languages with increased sharing for related languages. We hope
to catalyze research in low-resource speech understanding by releasing XLSR-53,
a large model pretrained in 53 languages. | 2020-06-24T18:25:05Z | null | null | null | null | null | null | null | null | null | null |
2,006.1409 | Neural Architecture Design for GPU-Efficient Networks | ['Ming Lin', 'Hesen Chen', 'Xiuyu Sun', 'Qi Qian', 'Hao Li', 'Rong Jin'] | ['cs.CV'] | Many mission-critical systems rely on GPUs for inference. They require not
only high recognition accuracy but also low response latency.
Although many studies are devoted to optimizing the structure of deep models
for efficient inference, most of them do not leverage the architecture of
\textbf{modern GPU} for fast inference, leading to suboptimal performance. To
address this issue, we propose a general principle for designing GPU-efficient
networks based on extensive empirical studies. This design principle enables us
to search for GPU-efficient network structures effectively by a simple and
lightweight method as opposed to most Neural Architecture Search (NAS) methods
that are complicated and computationally expensive. Based on the proposed
framework, we design a family of GPU-Efficient Networks, or GENets in short. We
did extensive evaluations on multiple GPU platforms and inference engines.
While achieving at least 81.3% top-1 accuracy on ImageNet, GENet is up to 6.4
times faster than EfficientNet on GPU. It also outperforms most state-of-the-art
models that are more efficient than EfficientNet in high-precision regimes. Our
source code and pre-trained models are available from
https://github.com/idstcv/GPU-Efficient-Networks. | 2020-06-24T22:42:18Z | update training setting | null | null | null | null | null | null | null | null | null
2,006.14147 | FastSpec: Scalable Generation and Detection of Spectre Gadgets Using
Neural Embeddings | ['M. Caner Tol', 'Berk Gulmezoglu', 'Koray Yurtseven', 'Berk Sunar'] | ['cs.CR', 'cs.LG'] | Several techniques have been proposed to detect vulnerable Spectre gadgets in
widely deployed commercial software. Unfortunately, detection techniques
proposed so far rely on hand-written rules which fall short in covering subtle
variations of known Spectre gadgets as well as demand a huge amount of time to
analyze each conditional branch in software. Moreover, detection tool
evaluations are based only on a handful of these gadgets, as it requires
arduous effort to craft new gadgets manually.
In this work, we employ both fuzzing and deep learning techniques to automate
the generation and detection of Spectre gadgets. We first create a diverse set
of Spectre-V1 gadgets by introducing perturbations to the known gadgets. Using
mutational fuzzing, we produce a data set with more than 1 million Spectre-V1
gadgets, which is the largest Spectre gadget data set built to date. Next, we
conduct the first empirical usability study of Generative Adversarial Networks
(GANs) in the context of assembly code generation without any human
interaction. We introduce SpectreGAN which leverages masking implementation of
GANs for both learning the gadget structures and generating new gadgets. This
provides the first scalable solution to extend the variety of Spectre gadgets.
Finally, we propose FastSpec which builds a classifier with the generated
Spectre gadgets based on a novel high dimensional Neural Embeddings technique
(BERT). For the case studies, we demonstrate that FastSpec discovers potential
gadgets with a high success rate in OpenSSL libraries and Phoronix benchmarks.
Further, FastSpec offers much greater flexibility and time-related performance
gain compared to the existing tools and therefore can be used for gadget
detection in large-scale software. | 2020-06-25T03:08:20Z | IEEE European Symposium on Security and Privacy 2021 | null | null | FastSpec: Scalable Generation and Detection of Spectre Gadgets Using Neural Embeddings | ['M. Caner Tol', 'Koray Yurtseven', 'Berk Gülmezoglu', 'B. Sunar'] | 2,020 | European Symposium on Security and Privacy | 16 | 82 | ['Computer Science'] |
2,006.15418 | Counting Out Time: Class Agnostic Video Repetition Counting in the Wild | ['Debidatta Dwibedi', 'Yusuf Aytar', 'Jonathan Tompson', 'Pierre Sermanet', 'Andrew Zisserman'] | ['cs.CV'] | We present an approach for estimating the period with which an action is
repeated in a video. The crux of the approach lies in constraining the period
prediction module to use temporal self-similarity as an intermediate
representation bottleneck that allows generalization to unseen repetitions in
videos in the wild. We train this model, called RepNet, with a synthetic
dataset that is generated from a large unlabeled video collection by sampling
short clips of varying lengths and repeating them with different periods and
counts. This combination of synthetic data and a powerful yet constrained
model, allows us to predict periods in a class-agnostic fashion. Our model
substantially exceeds the state of the art performance on existing periodicity
(PERTUBE) and repetition counting (QUVA) benchmarks. We also collect a new
challenging dataset called Countix (~90 times larger than existing datasets)
which captures the challenges of repetition counting in real-world videos.
Project webpage: https://sites.google.com/view/repnet . | 2020-06-27T18:00:42Z | Accepted at CVPR 2020. Project webpage:
https://sites.google.com/view/repnet | null | null | Counting Out Time: Class Agnostic Video Repetition Counting in the Wild | ['Debidatta Dwibedi', 'Y. Aytar', 'Jonathan Tompson', 'P. Sermanet', 'Andrew Zisserman'] | 2,020 | Computer Vision and Pattern Recognition | 114 | 56 | ['Computer Science'] |
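The temporal self-similarity bottleneck at the heart of this approach is just a matrix of pairwise frame-embedding similarities; repetition shows up as a periodic stripe pattern regardless of what is repeating. A sketch with made-up embeddings (the real model produces them with a learned encoder):

```python
import numpy as np

def self_similarity(frames):
    """Temporal self-similarity matrix: negative squared distances between
    per-frame embeddings. A clip repeated with period p produces diagonal
    stripes spaced p apart, independent of the action class."""
    d = ((frames[:, None, :] - frames[None, :, :]) ** 2).sum(-1)
    return -d

rng = np.random.default_rng(0)
clip = rng.normal(size=(5, 16))       # a 5-frame clip of 16-d embeddings
video = np.tile(clip, (3, 1))         # repeat it 3 times -> period 5
S = self_similarity(video)
# frames exactly one period apart are identical: maximal similarity (0)
assert np.allclose(np.diag(S, k=5), 0.0)
# adjacent distinct frames are strictly less similar
assert np.all(np.diag(S, k=1) < 0)
```

Because the period predictor only ever sees this matrix, it cannot memorize appearance, which is what lets synthetic repetitions of arbitrary clips transfer to unseen real-world videos.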
2,006.15994 | Improving Sequence Tagging for Vietnamese Text Using Transformer-based
Neural Models | ['Viet Bui The', 'Oanh Tran Thi', 'Phuong Le-Hong'] | ['cs.CL'] | This paper describes our study on using multilingual BERT embeddings and some
new neural models for improving sequence tagging tasks for the Vietnamese
language. We propose new model architectures and evaluate them extensively on
two named entity recognition datasets of VLSP 2016 and VLSP 2018, and on two
part-of-speech tagging datasets of VLSP 2010 and VLSP 2013. Our proposed models
outperform existing methods and achieve new state-of-the-art results. In
particular, we have pushed the accuracy of part-of-speech tagging to 95.40% on
the VLSP 2010 corpus and to 96.77% on the VLSP 2013 corpus, and the F1 score of
named entity recognition to 94.07% on the VLSP 2016 corpus and to 90.31% on the
VLSP 2018 corpus. Our code and pre-trained models viBERT and vELECTRA are
released as open source to facilitate adoption and further research. | 2020-06-29T12:39:44Z | Accepted at the Conference PACLIC 2020 | null | null | Improving Sequence Tagging for Vietnamese Text using Transformer-based Neural Models | ['Viet The Bui', 'Oanh T. K. Tran', 'Hong Phuong Le'] | 2,020 | Pacific Asia Conference on Language, Information and Computation | 40 | 27 | ['Computer Science'] |
2,006.16668 | GShard: Scaling Giant Models with Conditional Computation and Automatic
Sharding | ['Dmitry Lepikhin', 'HyoukJoong Lee', 'Yuanzhong Xu', 'Dehao Chen', 'Orhan Firat', 'Yanping Huang', 'Maxim Krikun', 'Noam Shazeer', 'Zhifeng Chen'] | ['cs.CL', 'cs.LG', 'stat.ML'] | Neural network scaling has been critical for improving the model quality in
many real-world machine learning applications with vast amounts of training
data and compute. Although this trend of scaling is affirmed to be a sure-fire
approach for better model quality, there are challenges on the path such as the
computation cost, ease of programming, and efficient implementation on parallel
devices. GShard is a module composed of a set of lightweight annotation APIs
and an extension to the XLA compiler. It provides an elegant way to express a
wide range of parallel computation patterns with minimal changes to the
existing model code. GShard enabled us to scale up multilingual neural machine
translation Transformer model with Sparsely-Gated Mixture-of-Experts beyond 600
billion parameters using automatic sharding. We demonstrate that such a giant
model can efficiently be trained on 2048 TPU v3 accelerators in 4 days to
achieve far superior quality for translation from 100 languages to English
compared to the prior art. | 2020-06-30T10:42:02Z | null | null | null | null | null | null | null | null | null | null |
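The Sparsely-Gated Mixture-of-Experts routing that lets GShard grow parameters without growing per-token compute can be sketched as top-2 gating. This omits GShard's capacity limits and auxiliary load-balancing loss, and the token/expert counts are illustrative:

```python
import numpy as np

def top2_gate(logits):
    """Sparse MoE dispatch: route each token to its top-2 experts, with gate
    weights renormalized over the chosen pair."""
    order = np.argsort(logits, axis=-1)[:, ::-1]       # experts by score, desc
    top2 = order[:, :2]
    w = np.take_along_axis(np.exp(logits), top2, axis=-1)
    w = w / w.sum(axis=-1, keepdims=True)
    return top2, w

rng = np.random.default_rng(0)
tokens, experts = 6, 4
logits = rng.normal(size=(tokens, experts))            # router scores
idx, w = top2_gate(logits)
assert idx.shape == (tokens, 2)
assert np.allclose(w.sum(axis=-1), 1.0)
# only 2 of the 4 experts run per token: adding experts grows parameters
# (what GShard shards across devices) but not per-token FLOPs
```

Automatic sharding then places each expert's weights on a different accelerator, which is how the 600B-parameter model stays trainable on 2048 TPUs.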
2,007.00224 | Debiased Contrastive Learning | ['Ching-Yao Chuang', 'Joshua Robinson', 'Lin Yen-Chen', 'Antonio Torralba', 'Stefanie Jegelka'] | ['cs.LG', 'stat.ML'] | A prominent technique for self-supervised representation learning has been to
contrast semantically similar and dissimilar pairs of samples. Without access
to labels, dissimilar (negative) points are typically taken to be randomly
sampled datapoints, implicitly accepting that these points may, in reality,
actually have the same label. Perhaps unsurprisingly, we observe that sampling
negative examples from truly different labels improves performance, in a
synthetic setting where labels are available. Motivated by this observation, we
develop a debiased contrastive objective that corrects for the sampling of
same-label datapoints, even without knowledge of the true labels. Empirically,
the proposed objective consistently outperforms the state-of-the-art for
representation learning in vision, language, and reinforcement learning
benchmarks. Theoretically, we establish generalization bounds for the
downstream classification task. | 2020-07-01T04:25:24Z | null | Advances in Neural Information Processing Systems (2020) | null | null | null | null | null | null | null | null |
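The debiasing correction amounts to subtracting, from the negative term of an NCE-style loss, the expected contribution of false negatives that secretly share the anchor's class. A sketch following the paper's clamped estimator; the similarity values, temperature, and tau_plus (the class prior) below are made up for illustration:

```python
import numpy as np

def contrastive_loss(pos_sim, neg_sims, tau_plus=0.0, t=0.5):
    """NCE-style loss; tau_plus > 0 applies the debiasing correction for the
    probability that a sampled 'negative' actually has the anchor's label."""
    N = len(neg_sims)
    pos = np.exp(pos_sim / t)
    mean_neg = np.exp(np.asarray(neg_sims) / t).mean()
    # debiased (and clamped) estimate of the true-negative term
    g = max((mean_neg - tau_plus * pos) / (1.0 - tau_plus), np.exp(-1.0 / t))
    return -np.log(pos / (pos + N * g))

biased = contrastive_loss(0.9, [0.1, 0.0, 0.2], tau_plus=0.0)
debiased = contrastive_loss(0.9, [0.1, 0.0, 0.2], tau_plus=0.1)
assert debiased < biased   # the correction shrinks the false-negative penalty
```

With tau_plus = 0 the formula reduces to the standard contrastive objective, so the correction is a strict generalization that needs no labels at training time.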
2,007.00398 | DocVQA: A Dataset for VQA on Document Images | ['Minesh Mathew', 'Dimosthenis Karatzas', 'C. V. Jawahar'] | ['cs.CV', 'cs.IR'] | We present a new dataset for Visual Question Answering (VQA) on document
images called DocVQA. The dataset consists of 50,000 questions defined on
12,000+ document images. Detailed analysis of the dataset in comparison with
similar datasets for VQA and reading comprehension is presented. We report
several baseline results by adopting existing VQA and reading comprehension
models. Although the existing models perform reasonably well on certain types
of questions, there is a large performance gap compared to human performance
(94.36% accuracy). The models especially need to improve on questions where
understanding the structure of the document is crucial. The dataset, code and
leaderboard are available at docvqa.org | 2020-07-01T11:37:40Z | accepted at WACV 2021 | null | null | null | null | null | null | null | null | null |
2,007.00808 | Approximate Nearest Neighbor Negative Contrastive Learning for Dense
Text Retrieval | ['Lee Xiong', 'Chenyan Xiong', 'Ye Li', 'Kwok-Fung Tang', 'Jialin Liu', 'Paul Bennett', 'Junaid Ahmed', 'Arnold Overwijk'] | ['cs.IR', 'cs.CL', 'cs.LG'] | Conducting text retrieval in a dense learned representation space has many
intriguing advantages over sparse retrieval. Yet the effectiveness of dense
retrieval (DR) often requires combination with sparse retrieval. In this paper,
we identify that the main bottleneck is in the training mechanisms, where the
negative instances used in training are not representative of the irrelevant
documents in testing. This paper presents Approximate nearest neighbor Negative
Contrastive Estimation (ANCE), a training mechanism that constructs negatives
from an Approximate Nearest Neighbor (ANN) index of the corpus, which is
parallelly updated with the learning process to select more realistic negative
training instances. This fundamentally resolves the discrepancy between the
data distribution used in the training and testing of DR. In our experiments,
ANCE boosts the BERT-Siamese DR model to outperform all competitive dense and
sparse retrieval baselines. It nearly matches the accuracy of
sparse-retrieval-and-BERT-reranking using dot-product in the ANCE-learned
representation space and provides almost 100x speed-up. | 2020-07-01T23:15:56Z | null | null | null | null | null | null | null | null | null | null |
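The training mechanism described above boils down to a loop that periodically re-encodes the corpus with the current model and mines the top-scoring non-gold documents as negatives. A sketch under loud assumptions: the linear "encoder" and exhaustive scoring stand in for BERT-Siamese and a real ANN index, and the update step is elided:

```python
import numpy as np

def encode(x, W):
    """Stand-in encoder: a linear projection; ANCE uses a BERT-Siamese model."""
    return x @ W

def mine_negatives(query_vec, doc_vecs, gold_id, k=2):
    """Hard negatives: the top-scoring non-gold documents under the CURRENT
    encoder, instead of random documents."""
    scores = doc_vecs @ query_vec
    ranked = np.argsort(-scores)
    return [int(i) for i in ranked if i != gold_id][:k]

rng = np.random.default_rng(0)
docs = rng.normal(size=(50, 8))
W = rng.normal(size=(8, 8))
query = docs[7] + 0.1 * rng.normal(size=8)    # query relevant to doc 7
for step in range(3):                         # training loop (sketch)
    index = encode(docs, W)                   # refresh index with current encoder
    negs = mine_negatives(encode(query, W), index, gold_id=7)
    # ... contrastive loss over (query, docs[7], docs[negs]) would update W ...
assert 7 not in negs and len(negs) == 2
```

Rebuilding the index in parallel with training is what keeps the mined negatives representative of what the model actually confuses at test time, the discrepancy the paper identifies as the bottleneck.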
2,007.00814 | Relevance-guided Supervision for OpenQA with ColBERT | ['Omar Khattab', 'Christopher Potts', 'Matei Zaharia'] | ['cs.CL', 'cs.IR'] | Systems for Open-Domain Question Answering (OpenQA) generally depend on a
retriever for finding candidate passages in a large corpus and a reader for
extracting answers from those passages. In much recent work, the retriever is a
learned component that uses coarse-grained vector representations of questions
and passages. We argue that this modeling choice is insufficiently expressive
for dealing with the complexity of natural language questions. To address this,
we define ColBERT-QA, which adapts the scalable neural retrieval model ColBERT
to OpenQA. ColBERT creates fine-grained interactions between questions and
passages. We propose an efficient weak supervision strategy that iteratively
uses ColBERT to create its own training data. This greatly improves OpenQA
retrieval on Natural Questions, SQuAD, and TriviaQA, and the resulting system
attains state-of-the-art extractive OpenQA performance on all three datasets. | 2020-07-01T23:50:58Z | Accepted for publication in Transactions of the Association for
Computational Linguistics (TACL), 2021. Author's final version. Oral
presentation at ACL'21 | null | null | Relevance-guided Supervision for OpenQA with ColBERT | ['O. Khattab', 'Christopher Potts', 'M. Zaharia'] | 2,020 | Transactions of the Association for Computational Linguistics | 100 | 46 | ['Computer Science'] |
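ColBERT's fine-grained interaction between questions and passages is the MaxSim operator: each query-token embedding takes its maximum similarity over passage-token embeddings, and these maxima are summed. A sketch with made-up embeddings (real ColBERT embeds tokens with BERT and normalizes them):

```python
import numpy as np

def maxsim_score(q_embs, d_embs):
    """ColBERT late interaction: per query token, take the max similarity over
    document tokens, then sum over query tokens."""
    sims = q_embs @ d_embs.T          # (n_query_tokens, n_doc_tokens)
    return sims.max(axis=1).sum()

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 16))                       # 4 query-token embeddings
d_relevant = np.vstack([q + 0.01 * rng.normal(size=q.shape),  # echoes the query
                        rng.normal(size=(6, 16))])
d_random = rng.normal(size=(10, 16))
assert maxsim_score(q, d_relevant) > maxsim_score(q, d_random)
```

Because document-token embeddings can be precomputed and indexed, this token-level matching stays scalable while being far more expressive than a single coarse-grained vector per passage, which is the modeling argument the paper makes.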
2,007.00992 | Rethinking Channel Dimensions for Efficient Model Design | ['Dongyoon Han', 'Sangdoo Yun', 'Byeongho Heo', 'YoungJoon Yoo'] | ['cs.CV'] | Designing an efficient model within the limited computational cost is
challenging. We argue the accuracy of a lightweight model has been further
limited by the design convention: a stage-wise configuration of the channel
dimensions, which looks like a piecewise linear function of the network stage.
In this paper, we study an effective channel dimension configuration towards
better performance than the convention. To this end, we empirically study how
to design a single layer properly by analyzing the rank of the output feature.
We then investigate the channel configuration of a model by searching network
architectures concerning the channel configuration under the computational cost
restriction. Based on the investigation, we propose a simple yet effective
channel configuration that can be parameterized by the layer index. As a
result, our proposed model following the channel parameterization achieves
remarkable performance on ImageNet classification and transfer learning tasks
including COCO object detection, COCO instance segmentation, and fine-grained
classifications. Code and ImageNet pretrained models are available at
https://github.com/clovaai/rexnet. | 2020-07-02T10:01:12Z | 13 pages, 8 figures, CVPR 2021 | null | null | Rethinking Channel Dimensions for Efficient Model Design | ['Dongyoon Han', 'Sangdoo Yun', 'Byeongho Heo', 'Y. Yoo'] | 2,020 | Computer Vision and Pattern Recognition | 86 | 66 | ['Computer Science'] |
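The proposed channel configuration, parameterized by the layer index rather than by stage, can be sketched as a linear ramp of widths. The endpoint values and layer count below are illustrative assumptions, not the paper's searched configuration:

```python
def linear_channels(n_layers, c_first=16, c_last=180):
    """Channel widths that grow linearly with the layer index, replacing the
    conventional stage-wise plateaus (a piecewise-constant function of stage).
    c_first and c_last are made-up endpoints for illustration."""
    step = (c_last - c_first) / (n_layers - 1)
    return [round(c_first + step * i) for i in range(n_layers)]

cfg = linear_channels(10)
assert cfg[0] == 16 and cfg[-1] == 180
assert all(b >= a for a, b in zip(cfg, cfg[1:]))   # monotonically increasing
```

A single slope and intercept describe the whole network's widths, which is what makes this configuration cheap to search compared to per-stage channel choices.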
2,007.01282 | Leveraging Passage Retrieval with Generative Models for Open Domain
Question Answering | ['Gautier Izacard', 'Edouard Grave'] | ['cs.CL', 'cs.LG'] | Generative models for open domain question answering have proven to be
competitive, without resorting to external knowledge. While promising, this
approach requires using models with billions of parameters, which are
expensive to train and query. In this paper, we investigate how much these
models can benefit from retrieving text passages, potentially containing
evidence. We obtain state-of-the-art results on the Natural Questions and
TriviaQA open benchmarks. Interestingly, we observe that the performance of
this method significantly improves when increasing the number of retrieved
passages. This is evidence that generative models are good at aggregating and
combining evidence from multiple passages. | 2020-07-02T17:44:57Z | null | null | null | null | null | null | null | null | null | null |
2,007.01658 | Playing with Words at the National Library of Sweden -- Making a Swedish
BERT | ['Martin Malmsten', 'Love Börjeson', 'Chris Haffenden'] | ['cs.CL'] | This paper introduces the Swedish BERT ("KB-BERT") developed by the KBLab for
data-driven research at the National Library of Sweden (KB). Building on recent
efforts to create transformer-based BERT models for languages other than
English, we explain how we used KB's collections to create and train a new
language-specific BERT model for Swedish. We also present the results of our
model in comparison with existing models - chiefly that produced by the Swedish
Public Employment Service, Arbetsf\"ormedlingen, and Google's multilingual
M-BERT - where we demonstrate that KB-BERT outperforms these in a range of NLP
tasks from named entity recognition (NER) to part-of-speech tagging (POS). Our
discussion highlights the difficulties that continue to exist given the lack of
training data and testbeds for smaller languages like Swedish. We release our
model for further exploration and research here:
https://github.com/Kungbib/swedish-bert-models . | 2020-07-03T12:53:39Z | null | null | null | Playing with Words at the National Library of Sweden - Making a Swedish BERT | ['Martin Malmsten', 'Love Börjeson', 'Chris Haffenden'] | 2020 | arXiv.org | 126 | 18 | ['Computer Science'] |
2007.01852 | Language-agnostic BERT Sentence Embedding | ['Fangxiaoyu Feng', 'Yinfei Yang', 'Daniel Cer', 'Naveen Arivazhagan', 'Wei Wang'] | ['cs.CL'] | While BERT is an effective method for learning monolingual sentence
embeddings for semantic similarity and embedding based transfer learning
(Reimers and Gurevych, 2019), BERT based cross-lingual sentence embeddings have
yet to be explored. We systematically investigate methods for learning
multilingual sentence embeddings by combining the best methods for learning
monolingual and cross-lingual representations including: masked language
modeling (MLM), translation language modeling (TLM) (Conneau and Lample, 2019),
dual encoder translation ranking (Guo et al., 2018), and additive margin
softmax (Yang et al., 2019a). We show that introducing a pre-trained
multilingual language model dramatically reduces the amount of parallel
training data required to achieve good performance by 80%. Composing the best
of these methods produces a model that achieves 83.7% bi-text retrieval
accuracy over 112 languages on Tatoeba, well above the 65.5% achieved by
Artetxe and Schwenk (2019b), while still performing competitively on
monolingual transfer learning benchmarks (Conneau and Kiela, 2018). Parallel
data mined from CommonCrawl using our best model is shown to train competitive
NMT models for en-zh and en-de. We publicly release our best multilingual
sentence embedding model for 109+ languages at https://tfhub.dev/google/LaBSE. | 2020-07-03T17:58:42Z | To be presented at ACL 2022 | null | null | Language-agnostic BERT Sentence Embedding | ['Fangxiaoyu Feng', 'Yinfei Yang', 'Daniel Matthew Cer', 'N. Arivazhagan', 'Wei Wang'] | 2020 | Annual Meeting of the Association for Computational Linguistics | 921 | 51 | ['Computer Science'] |
2007.02713 | Bifurcated backbone strategy for RGB-D salient object detection | ['Yingjie Zhai', 'Deng-Ping Fan', 'Jufeng Yang', 'Ali Borji', 'Ling Shao', 'Junwei Han', 'Liang Wang'] | ['cs.CV'] | Multi-level feature fusion is a fundamental topic in computer vision. It has
been exploited to detect, segment and classify objects at various scales. When
multi-level features meet multi-modal cues, the optimal feature aggregation and
multi-modal learning strategy become a hot potato. In this paper, we leverage
the inherent multi-modal and multi-level nature of RGB-D salient object
detection to devise a novel cascaded refinement network. In particular, first,
we propose to regroup the multi-level features into teacher and student
features using a bifurcated backbone strategy (BBS). Second, we introduce a
depth-enhanced module (DEM) to excavate informative depth cues from the channel
and spatial views. Then, RGB and depth modalities are fused in a complementary
way. Our architecture, named Bifurcated Backbone Strategy Network (BBS-Net), is
simple, efficient, and backbone-independent. Extensive experiments show that
BBS-Net significantly outperforms eighteen SOTA models on eight challenging
datasets under five evaluation measures, demonstrating the superiority of our
approach ($\sim 4 \%$ improvement in S-measure $vs.$ the top-ranked model:
DMRA-iccv2019). In addition, we provide a comprehensive analysis on the
generalization ability of different RGB-D datasets and provide a powerful
training set for future research. | 2020-07-06T13:01:30Z | A preliminary version of this work has been accepted in ECCV 2020 | IEEE Transactions on Image Processing, 2021, 30: 8727-8742 | 10.1109/TIP.2021.3116793 | null | null | null | null | null | null | null |
2007.05194 | What Can We Learn From Almost a Decade of Food Tweets | ['Uga Sproģis', 'Matīss Rikters'] | ['cs.CL'] | We present the Latvian Twitter Eater Corpus - a set of tweets in the narrow
domain related to food, drinks, eating and drinking. The corpus has been
collected over a time span of more than 8 years and includes over 2 million tweets
entailed with additional useful data. We also separate two sub-corpora of
question and answer tweets and sentiment annotated tweets. We analyse contents
of the corpus and demonstrate use-cases for the sub-corpora by training
domain-specific question-answering and sentiment-analysis models using data
from the corpus. | 2020-07-10T06:36:13Z | null | In Proceedings of the 9th Conference Human Language Technologies -
The Baltic Perspective (Baltic HLT 2020) | null | null | null | null | null | null | null | null |
2007.05612 | Multi-Dialect Arabic BERT for Country-Level Dialect Identification | ['Bashar Talafha', 'Mohammad Ali', "Muhy Eddin Za'ter", 'Haitham Seelawi', 'Ibraheem Tuffaha', 'Mostafa Samir', 'Wael Farhan', 'Hussein T. Al-Natsheh'] | ['cs.CL', 'cs.LG'] | Arabic dialect identification is a complex problem for a number of inherent
properties of the language itself. In this paper, we present the experiments
conducted, and the models developed by our competing team, Mawdoo3 AI, along
the way to achieving our winning solution to subtask 1 of the Nuanced Arabic
Dialect Identification (NADI) shared task. The dialect identification subtask
provides 21,000 country-level labeled tweets covering all 21 Arab countries. An
unlabeled corpus of 10M tweets from the same domain is also presented by the
competition organizers for optional use. Our winning solution itself came in
the form of an ensemble of different training iterations of our pre-trained
BERT model, which achieved a micro-averaged F1-score of 26.78% on the subtask
at hand. We publicly release the pre-trained language model component of our
winning solution under the name of Multi-dialect-Arabic-BERT model, for any
interested researcher out there. | 2020-07-10T21:11:46Z | Accepted at the Fifth Arabic Natural Language Processing Workshop
(WANLP2020) co-located with the 28th International Conference on
Computational Linguistics (COLING'2020), Barcelona, Spain, 12 Dec. 2020 | null | null | Multi-dialect Arabic BERT for Country-level Dialect Identification | ['Bashar Talafha', 'Mohammad Ali', "Muhy Eddin Za'ter", 'Haitham Seelawi', 'Ibraheem Tuffaha', 'Mostafa Samir', 'Wael Farhan', 'Hussein T. Al-Natsheh'] | 2020 | Workshop on Arabic Natural Language Processing | 53 | 30 | ['Computer Science'] |
2007.06346 | Whitening for Self-Supervised Representation Learning | ['Aleksandr Ermolov', 'Aliaksandr Siarohin', 'Enver Sangineto', 'Nicu Sebe'] | ['cs.LG', 'cs.CV', 'stat.ML'] | Most of the current self-supervised representation learning (SSL) methods are
based on the contrastive loss and the instance-discrimination task, where
augmented versions of the same image instance ("positives") are contrasted with
instances extracted from other images ("negatives"). For the learning to be
effective, many negatives should be compared with a positive pair, which is
computationally demanding. In this paper, we propose a different direction and
a new loss function for SSL, which is based on the whitening of the
latent-space features. The whitening operation has a "scattering" effect on the
batch samples, avoiding degenerate solutions where all the sample
representations collapse to a single point. Our solution does not require
asymmetric networks and it is conceptually simple. Moreover, since negatives
are not needed, we can extract multiple positive pairs from the same image
instance. The source code of the method and of all the experiments is available
at: https://github.com/htdt/self-supervised. | 2020-07-13T12:33:25Z | ICML 2021 | null | null | Whitening for Self-Supervised Representation Learning | ['Aleksandr Ermolov', 'Aliaksandr Siarohin', 'E. Sangineto', 'N. Sebe'] | 2020 | International Conference on Machine Learning | 316 | 65 | ['Computer Science', 'Mathematics'] |
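The "scattering" effect described in the whitening record above is easy to see concretely: after whitening, a batch of features has zero mean and (approximately) identity covariance, so the representations cannot all collapse to a single point. A minimal ZCA-whitening sketch in NumPy, purely illustrative and not the paper's released implementation (function and variable names are assumptions):

```python
import numpy as np

def zca_whiten(z: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Whiten a batch of features z (N, D): zero mean, ~identity covariance."""
    z = z - z.mean(axis=0, keepdims=True)
    cov = (z.T @ z) / (len(z) - 1)               # (D, D) sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # symmetric eigendecomposition
    w = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T  # ZCA matrix
    return z @ w

rng = np.random.default_rng(0)
feats = rng.normal(size=(256, 8)) @ rng.normal(size=(8, 8))  # correlated batch
white = zca_whiten(feats)
cov_after = (white.T @ white) / (len(white) - 1)
# cov_after is close to the identity: every dimension has unit variance
# and the dimensions are decorrelated.
```

In the SSL setting, a loss such as an MSE between whitened positive pairs then avoids degenerate solutions without needing negatives.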
2007.07779 | AdapterHub: A Framework for Adapting Transformers | ['Jonas Pfeiffer', 'Andreas Rücklé', 'Clifton Poth', 'Aishwarya Kamath', 'Ivan Vulić', 'Sebastian Ruder', 'Kyunghyun Cho', 'Iryna Gurevych'] | ['cs.CL'] | The current modus operandi in NLP involves downloading and fine-tuning
pre-trained models consisting of millions or billions of parameters. Storing
and sharing such large trained models is expensive, slow, and time-consuming,
which impedes progress towards more general and versatile NLP methods that
learn from and for many tasks. Adapters -- small learnt bottleneck layers
inserted within each layer of a pre-trained model -- ameliorate this issue by
avoiding full fine-tuning of the entire model. However, sharing and integrating
adapter layers is not straightforward. We propose AdapterHub, a framework that
allows dynamic "stitching-in" of pre-trained adapters for different tasks and
languages. The framework, built on top of the popular HuggingFace Transformers
library, enables extremely easy and quick adaptations of state-of-the-art
pre-trained models (e.g., BERT, RoBERTa, XLM-R) across tasks and languages.
Downloading, sharing, and training adapters is as seamless as possible using
minimal changes to the training scripts and a specialized infrastructure. Our
framework enables scalable and easy access to sharing of task-specific models,
particularly in low-resource scenarios. AdapterHub includes all recent adapter
architectures and can be found at https://AdapterHub.ml. | 2020-07-15T15:56:05Z | EMNLP 2020: Systems Demonstrations | null | null | null | null | null | null | null | null | null |
2007.07834 | InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language
Model Pre-Training | ['Zewen Chi', 'Li Dong', 'Furu Wei', 'Nan Yang', 'Saksham Singhal', 'Wenhui Wang', 'Xia Song', 'Xian-Ling Mao', 'Heyan Huang', 'Ming Zhou'] | ['cs.CL'] | In this work, we present an information-theoretic framework that formulates
cross-lingual language model pre-training as maximizing mutual information
between multilingual-multi-granularity texts. The unified view helps us to
better understand the existing methods for learning cross-lingual
representations. More importantly, inspired by the framework, we propose a new
pre-training task based on contrastive learning. Specifically, we regard a
bilingual sentence pair as two views of the same meaning and encourage their
encoded representations to be more similar than the negative examples. By
leveraging both monolingual and parallel corpora, we jointly train the pretext
tasks to improve the cross-lingual transferability of pre-trained models.
Experimental results on several benchmarks show that our approach achieves
considerably better performance. The code and pre-trained models are available
at https://aka.ms/infoxlm. | 2020-07-15T16:58:01Z | NAACL 2021 | null | null | null | null | null | null | null | null | null |
2007.08489 | Do Adversarially Robust ImageNet Models Transfer Better? | ['Hadi Salman', 'Andrew Ilyas', 'Logan Engstrom', 'Ashish Kapoor', 'Aleksander Madry'] | ['cs.CV', 'cs.LG', 'stat.ML'] | Transfer learning is a widely-used paradigm in deep learning, where models
pre-trained on standard datasets can be efficiently adapted to downstream
tasks. Typically, better pre-trained models yield better transfer results,
suggesting that initial accuracy is a key aspect of transfer learning
performance. In this work, we identify another such aspect: we find that
adversarially robust models, while less accurate, often perform better than
their standard-trained counterparts when used for transfer learning.
Specifically, we focus on adversarially robust ImageNet classifiers, and show
that they yield improved accuracy on a standard suite of downstream
classification tasks. Further analysis uncovers more differences between robust
and standard models in the context of transfer learning. Our results are
consistent with (and in fact, add to) recent hypotheses stating that robustness
leads to improved feature representations. Our code and models are available at
https://github.com/Microsoft/robust-models-transfer . | 2020-07-16T17:42:40Z | NeurIPS 2020 | null | null | Do Adversarially Robust ImageNet Models Transfer Better? | ['Hadi Salman', 'Andrew Ilyas', 'Logan Engstrom', 'Ashish Kapoor', 'A. Ma̧dry'] | 2020 | Neural Information Processing Systems | 428 | 103 | ['Computer Science', 'Mathematics'] |
2007.09127 | CTC-Segmentation of Large Corpora for German End-to-end Speech
Recognition | ['Ludwig Kürzinger', 'Dominik Winkelbauer', 'Lujun Li', 'Tobias Watzel', 'Gerhard Rigoll'] | ['eess.AS'] | Recent end-to-end Automatic Speech Recognition (ASR) systems demonstrated the
ability to outperform conventional hybrid DNN/ HMM ASR. Aside from
architectural improvements in those systems, those models grew in terms of
depth, parameters and model capacity. However, these models also require more
training data to achieve comparable performance.
In this work, we combine freely available corpora for German speech
recognition, including yet unlabeled speech data, to a big dataset of over
$1700$h of speech data. For data preparation, we propose a two-stage approach
that uses an ASR model pre-trained with Connectionist Temporal Classification
(CTC) to bootstrap more training data from unsegmented or unlabeled training
data. Utterances are then extracted from label probabilities obtained from the
network trained with CTC to determine segment alignments. With this training
data, we trained a hybrid CTC/attention Transformer model that achieves
$12.8\%$ WER on the Tuda-DE test set, surpassing the previous baseline of
$14.4\%$ of conventional hybrid DNN/HMM ASR. | 2020-07-17T17:38:08Z | Published at SPECOM 2020 | Speech and Computer (2020) | 10.1007/978-3-030-60276-5_27 | null | null | null | null | null | null | null |
2007.14062 | Big Bird: Transformers for Longer Sequences | ['Manzil Zaheer', 'Guru Guruganesh', 'Avinava Dubey', 'Joshua Ainslie', 'Chris Alberti', 'Santiago Ontanon', 'Philip Pham', 'Anirudh Ravula', 'Qifan Wang', 'Li Yang', 'Amr Ahmed'] | ['cs.LG', 'cs.CL', 'stat.ML'] | Transformers-based models, such as BERT, have been one of the most successful
deep learning models for NLP. Unfortunately, one of their core limitations is
the quadratic dependency (mainly in terms of memory) on the sequence length due
to their full attention mechanism. To remedy this, we propose, BigBird, a
sparse attention mechanism that reduces this quadratic dependency to linear. We
show that BigBird is a universal approximator of sequence functions and is
Turing complete, thereby preserving these properties of the quadratic, full
attention model. Along the way, our theoretical analysis reveals some of the
benefits of having $O(1)$ global tokens (such as CLS), that attend to the
entire sequence as part of the sparse attention mechanism. The proposed sparse
attention can handle sequences of length up to 8x of what was previously
possible using similar hardware. As a consequence of the capability to handle
longer context, BigBird drastically improves performance on various NLP tasks
such as question answering and summarization. We also propose novel
applications to genomics data. | 2020-07-28T08:34:04Z | null | Neural Information Processing Systems (NeurIPS) 2020 | null | Big Bird: Transformers for Longer Sequences | ['M. Zaheer', 'Guru Guruganesh', 'Kumar Avinava Dubey', 'J. Ainslie', 'Chris Alberti', 'Santiago Ontañón', 'Philip Pham', 'Anirudh Ravula', 'Qifan Wang', 'Li Yang', 'Amr Ahmed'] | 2020 | Neural Information Processing Systems | 2111 | 118 | ['Computer Science', 'Mathematics', 'Geography'] |
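The sparse pattern described in the BigBird record above (global tokens such as CLS, a sliding window, plus random connections) can be sketched as a boolean attention mask; with fixed window, global, and random budgets, the number of allowed query-key pairs grows linearly in sequence length rather than quadratically. A rough illustrative sketch, not the paper's implementation (parameter names and budgets are assumptions):

```python
import numpy as np

def bigbird_style_mask(n: int, window: int = 3, n_global: int = 2,
                       n_random: int = 2, seed: int = 0) -> np.ndarray:
    """Boolean (n, n) mask: True where query i may attend to key j."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        mask[i, lo:hi] = True                         # local sliding window
        mask[i, rng.choice(n, size=n_random)] = True  # random links
    mask[:n_global, :] = True   # global tokens attend everywhere
    mask[:, :n_global] = True   # and every token attends to them
    return mask

m = bigbird_style_mask(64)
# Per non-global row, at most (2*window + 1) + n_random + n_global entries are
# allowed, so the total count is O(n) instead of the O(n**2) of full attention.
```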
2007.14271 | Declarative Experimentation in Information Retrieval using PyTerrier | ['Craig Macdonald', 'Nicola Tonellotto'] | ['cs.IR'] | The advent of deep machine learning platforms such as Tensorflow and Pytorch,
developed in expressive high-level languages such as Python, have allowed more
expressive representations of deep neural network architectures. We argue that
such a powerful formalism is missing in information retrieval (IR), and propose
a framework called PyTerrier that allows advanced retrieval pipelines to be
expressed, and evaluated, in a declarative manner close to their conceptual
design. Like the aforementioned frameworks that compile deep learning
experiments into primitive GPU operations, our framework targets IR platforms
as backends in order to execute and evaluate retrieval pipelines. Further, we
can automatically optimise the retrieval pipelines to increase their efficiency
to suit a particular IR platform backend. Our experiments, conducted on TREC
Robust and ClueWeb09 test collections, demonstrate the efficiency benefits of
these optimisations for retrieval pipelines involving both the Anserini and
Terrier IR platforms. | 2020-07-28T14:36:29Z | null | 2020 ACM SIGIR International Conference on the Theory of
Information Retrieval (ICTIR '20) | 10.1145/3409256.3409829 | Declarative Experimentation in Information Retrieval using PyTerrier | ['Craig Macdonald', 'N. Tonellotto'] | 2020 | International Conference on the Theory of Information Retrieval | 147 | 30 | ['Computer Science'] |
2007.14937 | Learning Video Representations from Textual Web Supervision | ['Jonathan C. Stroud', 'Zhichao Lu', 'Chen Sun', 'Jia Deng', 'Rahul Sukthankar', 'Cordelia Schmid', 'David A. Ross'] | ['cs.CV'] | Videos on the Internet are paired with pieces of text, such as titles and
descriptions. This text typically describes the most important content in the
video, such as the objects in the scene and the actions being performed. Based
on this observation, we propose to use text as a method for learning video
representations. To accomplish this, we propose a data collection process and
use it to collect 70M video clips shared publicly on the Internet, and we then
train a model to pair each video with its associated text. We evaluate the
model on several down-stream action recognition tasks, including Kinetics,
HMDB-51, and UCF-101. We find that this approach is an effective method of
pre-training video representations. Specifically, it outperforms all existing
methods for self-supervised and cross-modal video representation learning. | 2020-07-29T16:19:50Z | null | null | null | null | null | null | null | null | null | null |
2007.14966 | Mirostat: A Neural Text Decoding Algorithm that Directly Controls
Perplexity | ['Sourya Basu', 'Govardana Sachitanandam Ramachandran', 'Nitish Shirish Keskar', 'Lav R. Varshney'] | ['cs.CL', 'cs.IT', 'math.IT'] | Neural text decoding is important for generating high-quality texts using
language models. To generate high-quality text, popular decoding algorithms
like top-k, top-p (nucleus), and temperature-based sampling truncate or distort
the unreliable low probability tail of the language model. Though these methods
generate high-quality text after parameter tuning, they are ad hoc. Not much is
known about the control they provide over the statistics of the output, which
is important since recent reports show text quality is highest for a specific
range of likelihoods. Here, first we provide a theoretical analysis of
perplexity in top-k, top-p, and temperature sampling, finding that
cross-entropy behaves approximately linearly as a function of p in top-p
sampling whereas it is a nonlinear function of k in top-k sampling, under
Zipfian statistics. We use this analysis to design a feedback-based adaptive
top-k text decoding algorithm called mirostat that generates text (of any
length) with a predetermined value of perplexity, and thereby high-quality text
without any tuning. Experiments show that for low values of k and p in top-k
and top-p sampling, perplexity drops significantly with generated text length,
which is also correlated with excessive repetitions in the text (the boredom
trap). On the other hand, for large values of k and p, we find that perplexity
increases with generated text length, which is correlated with incoherence in
the text (confusion trap). Mirostat avoids both traps: experiments show that
cross-entropy has a near-linear relation with repetition in generated text.
This relation is almost independent of the sampling method but slightly
dependent on the model used. Hence, for a given language model, control over
perplexity also gives control over repetitions. Experiments with human raters
for fluency, coherence, and quality further verify our findings. | 2020-07-29T17:22:26Z | 25 pages, 12 figures | null | null | null | null | null | null | null | null | null |
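The truncation samplers analyzed in the mirostat record above are easy to state concretely. A minimal NumPy sketch of top-k and top-p (nucleus) filtering over a next-token distribution; mirostat's adaptive feedback loop over k is omitted, and function names are illustrative rather than taken from any library:

```python
import numpy as np

def top_k_filter(probs: np.ndarray, k: int) -> np.ndarray:
    """Keep the k most probable tokens; zero the rest and renormalize."""
    out = np.zeros_like(probs)
    keep = np.argsort(probs)[-k:]
    out[keep] = probs[keep]
    return out / out.sum()

def top_p_filter(probs: np.ndarray, p: float) -> np.ndarray:
    """Keep the smallest set of tokens whose cumulative mass reaches p."""
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1  # number of tokens needed to reach p
    out = np.zeros_like(probs)
    out[order[:cutoff]] = probs[order[:cutoff]]
    return out / out.sum()

vocab_probs = np.array([0.5, 0.25, 0.15, 0.07, 0.03])
k2 = top_k_filter(vocab_probs, 2)     # mass only on the two largest tokens
p90 = top_p_filter(vocab_probs, 0.9)  # smallest prefix reaching 90% mass
```

Both filters truncate the unreliable low-probability tail; the paper's point is that the perplexity of the resulting text then depends on k or p in ways the user does not directly control, which motivates tuning k by feedback instead.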
2007.15207 | MKQA: A Linguistically Diverse Benchmark for Multilingual Open Domain
Question Answering | ['Shayne Longpre', 'Yi Lu', 'Joachim Daiber'] | ['cs.CL'] | Progress in cross-lingual modeling depends on challenging, realistic, and
diverse evaluation sets. We introduce Multilingual Knowledge Questions and
Answers (MKQA), an open-domain question answering evaluation set comprising 10k
question-answer pairs aligned across 26 typologically diverse languages (260k
question-answer pairs in total). Answers are based on a heavily curated,
language-independent data representation, making results comparable across
languages and independent of language-specific passages. With 26 languages,
this dataset supplies the widest range of languages to-date for evaluating
question answering. We benchmark a variety of state-of-the-art methods and
baselines for generative and extractive question answering, trained on Natural
Questions, in zero shot and translation settings. Results indicate this dataset
is challenging even in English, but especially in low-resource languages | 2020-07-30T03:33:46Z | null | null | null | null | null | null | null | null | null | null |
2007.15651 | Contrastive Learning for Unpaired Image-to-Image Translation | ['Taesung Park', 'Alexei A. Efros', 'Richard Zhang', 'Jun-Yan Zhu'] | ['cs.CV', 'cs.LG'] | In image-to-image translation, each patch in the output should reflect the
content of the corresponding patch in the input, independent of domain. We
propose a straightforward method for doing so -- maximizing mutual information
between the two, using a framework based on contrastive learning. The method
encourages two elements (corresponding patches) to map to a similar point in a
learned feature space, relative to other elements (other patches) in the
dataset, referred to as negatives. We explore several critical design choices
for making contrastive learning effective in the image synthesis setting.
Notably, we use a multilayer, patch-based approach, rather than operate on
entire images. Furthermore, we draw negatives from within the input image
itself, rather than from the rest of the dataset. We demonstrate that our
framework enables one-sided translation in the unpaired image-to-image
translation setting, while improving quality and reducing training time. In
addition, our method can even be extended to the training setting where each
"domain" is only a single image. | 2020-07-30T17:59:58Z | ECCV 2020. Please visit
https://taesungp.github.io/ContrastiveUnpairedTranslation/ for introduction
videos and more. v3 contains typo fixes and citation update | null | null | null | null | null | null | null | null | null |
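The patch-wise mutual-information objective in the record above rests on a contrastive (InfoNCE-style) loss: a query embedding should score higher against its corresponding "positive" than against negatives, here drawn from the same image. A toy InfoNCE sketch on plain vectors, purely illustrative and not the paper's code (names and the temperature value are assumptions):

```python
import numpy as np

def info_nce(query, positive, negatives, tau=0.07):
    """Cross-entropy of identifying the positive among positive + negatives."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cos(query, positive)] +
                      [cos(query, n) for n in negatives]) / tau
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))             # positive sits at index 0

q = np.array([1.0, 0.0])
negs = np.array([[0.0, 1.0], [-1.0, 0.0]])
loss_aligned = info_nce(q, np.array([1.0, 0.1]), negs)   # near-aligned positive
loss_orthogonal = info_nce(q, np.array([0.0, 1.0]), negs)
# A positive close to the query yields a much lower loss than an
# orthogonal one, which is what pulls corresponding patches together.
```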
2007.15779 | Domain-Specific Language Model Pretraining for Biomedical Natural
Language Processing | ['Yu Gu', 'Robert Tinn', 'Hao Cheng', 'Michael Lucas', 'Naoto Usuyama', 'Xiaodong Liu', 'Tristan Naumann', 'Jianfeng Gao', 'Hoifung Poon'] | ['cs.CL', 'cs.LG'] | Pretraining large neural language models, such as BERT, has led to impressive
gains on many natural language processing (NLP) tasks. However, most
pretraining efforts focus on general domain corpora, such as newswire and Web.
A prevailing assumption is that even domain-specific pretraining can benefit by
starting from general-domain language models. In this paper, we challenge this
assumption by showing that for domains with abundant unlabeled text, such as
biomedicine, pretraining language models from scratch results in substantial
gains over continual pretraining of general-domain language models. To
facilitate this investigation, we compile a comprehensive biomedical NLP
benchmark from publicly-available datasets. Our experiments show that
domain-specific pretraining serves as a solid foundation for a wide range of
biomedical NLP tasks, leading to new state-of-the-art results across the board.
Further, in conducting a thorough evaluation of modeling choices, both for
pretraining and task-specific fine-tuning, we discover that some common
practices are unnecessary with BERT models, such as using complex tagging
schemes in named entity recognition (NER). To help accelerate research in
biomedical NLP, we have released our state-of-the-art pretrained and
task-specific models for the community, and created a leaderboard featuring our
BLURB benchmark (short for Biomedical Language Understanding & Reasoning
Benchmark) at https://aka.ms/BLURB. | 2020-07-31T00:04:15Z | ACM Transactions on Computing for Healthcare (HEALTH) | null | 10.1145/3458754 | null | null | null | null | null | null | null |
2008.00401 | Multilingual Translation with Extensible Multilingual Pretraining and
Finetuning | ['Yuqing Tang', 'Chau Tran', 'Xian Li', 'Peng-Jen Chen', 'Naman Goyal', 'Vishrav Chaudhary', 'Jiatao Gu', 'Angela Fan'] | ['cs.CL'] | Recent work demonstrates the potential of multilingual pretraining of
creating one model that can be used for various tasks in different languages.
Previous work in multilingual pretraining has demonstrated that machine
translation systems can be created by finetuning on bitext. In this work, we
show that multilingual translation models can be created through multilingual
finetuning. Instead of finetuning on one direction, a pretrained model is
finetuned on many directions at the same time. Compared to multilingual models
trained from scratch, starting from pretrained models incorporates the benefits
of large quantities of unlabeled monolingual data, which is particularly
important for low resource languages where bitext is not available. We
demonstrate that pretrained models can be extended to incorporate additional
languages without loss of performance. We double the number of languages in
mBART to support multilingual machine translation models of 50 languages.
Finally, we create the ML50 benchmark, covering low, mid, and high resource
languages, to facilitate reproducible research by standardizing training and
evaluation data. On ML50, we demonstrate that multilingual finetuning improves
on average 1 BLEU over the strongest baselines (being either multilingual from
scratch or bilingual finetuning) while improving 9.3 BLEU on average over
bilingual baselines from scratch. | 2020-08-02T05:36:55Z | 10 pages (main) + 5 pages (appendices). 9 tables and 2 figures | null | null | null | null | null | null | null | null | null |
2008.02275 | Aligning AI With Shared Human Values | ['Dan Hendrycks', 'Collin Burns', 'Steven Basart', 'Andrew Critch', 'Jerry Li', 'Dawn Song', 'Jacob Steinhardt'] | ['cs.CY', 'cs.AI', 'cs.CL', 'cs.LG'] | We show how to assess a language model's knowledge of basic concepts of
morality. We introduce the ETHICS dataset, a new benchmark that spans concepts
in justice, well-being, duties, virtues, and commonsense morality. Models
predict widespread moral judgments about diverse text scenarios. This requires
connecting physical and social world knowledge to value judgements, a
capability that may enable us to steer chatbot outputs or eventually regularize
open-ended reinforcement learning agents. With the ETHICS dataset, we find that
current language models have a promising but incomplete ability to predict
basic human ethical judgements. Our work shows that progress can be made on
machine ethics today, and it provides a steppingstone toward AI that is aligned
with human values. | 2020-08-05T17:59:16Z | ICLR 2021; the ETHICS dataset is available at
https://github.com/hendrycks/ethics/ | null | null | null | null | null | null | null | null | null |
2008.02496 | ConvBERT: Improving BERT with Span-based Dynamic Convolution | ['Zihang Jiang', 'Weihao Yu', 'Daquan Zhou', 'Yunpeng Chen', 'Jiashi Feng', 'Shuicheng Yan'] | ['cs.CL'] | Pre-trained language models like BERT and its variants have recently achieved
impressive performance in various natural language understanding tasks.
However, BERT heavily relies on the global self-attention block and thus
suffers large memory footprint and computation cost. Although all its attention
heads query on the whole input sequence for generating the attention map from a
global perspective, we observe some heads only need to learn local
dependencies, which means the existence of computation redundancy. We therefore
propose a novel span-based dynamic convolution to replace these self-attention
heads to directly model local dependencies. The novel convolution heads,
together with the rest self-attention heads, form a new mixed attention block
that is more efficient at both global and local context learning. We equip BERT
with this mixed attention design and build a ConvBERT model. Experiments have
shown that ConvBERT significantly outperforms BERT and its variants in various
downstream tasks, with lower training cost and fewer model parameters.
Remarkably, ConvBERTbase model achieves 86.4 GLUE score, 0.7 higher than
ELECTRAbase, while using less than 1/4 training cost. Code and pre-trained
models will be released. | 2020-08-06T07:43:19Z | 17 pages | null | null | ConvBERT: Improving BERT with Span-based Dynamic Convolution | ['Zihang Jiang', 'Weihao Yu', 'Daquan Zhou', 'Yunpeng Chen', 'Jiashi Feng', 'Shuicheng Yan'] | 2020 | Neural Information Processing Systems | 163 | 81 | ['Computer Science'] |
2008.03415 | Assessing Demographic Bias in Named Entity Recognition | ['Shubhanshu Mishra', 'Sijun He', 'Luca Belli'] | ['cs.CL', 'cs.CY', 'cs.IR', 'cs.LG', '68T50 (Primary), 68T30 (Secondary), 68U15', 'I.2.7; I.2.1; I.2.6; H.3.1; H.3.3; H.1.2; K.4.2'] | Named Entity Recognition (NER) is often the first step towards automated
Knowledge Base (KB) generation from raw text. In this work, we assess the bias
in various Named Entity Recognition (NER) systems for English across different
demographic groups with synthetically generated corpora. Our analysis reveals
that models perform better at identifying names from specific demographic
groups across two datasets. We also identify that debiased embeddings do not
help in resolving this issue. Finally, we observe that character-based
contextualized word representation models such as ELMo results in the least
bias across demographics. Our work can shed light on potential biases in
automated KB generation due to systematic exclusion of named entities belonging
to certain demographics. | 2020-08-08T02:01:25Z | Presented at the AKBC Workshop on Bias in Automatic Knowledge Graph
Construction, 2020 (arXiv:2007.11659) | null | null | Assessing Demographic Bias in Named Entity Recognition | ['Shubhanshu Mishra', 'Sijun He', 'Luca Belli'] | 2020 | arXiv.org | 47 | 34 | ['Computer Science'] |
2008.03802 | SpeedySpeech: Efficient Neural Speech Synthesis | ['Jan Vainer', 'Ondřej Dušek'] | ['eess.AS', 'cs.CL', 'cs.LG', 'cs.SD'] | While recent neural sequence-to-sequence models have greatly improved the
quality of speech synthesis, there has not been a system capable of fast
training, fast inference and high-quality audio synthesis at the same time. We
propose a student-teacher network capable of high-quality faster-than-real-time
spectrogram synthesis, with low requirements on computational resources and
fast training time. We show that self-attention layers are not necessary for
generation of high quality audio. We utilize simple convolutional blocks with
residual connections in both student and teacher networks and use only a single
attention layer in the teacher model. Coupled with a MelGAN vocoder, our
model's voice quality was rated significantly higher than Tacotron 2. Our model
can be efficiently trained on a single GPU and can run in real time even on a
CPU. We provide both our source code and audio samples in our GitHub
repository. | 2020-08-09T20:00:57Z | 5 pages, 3 figures, Interspeech 2020 | null | null | null | null | null | null | null | null | null |
2008.03946 | A Large-Scale Chinese Short-Text Conversation Dataset | ['Yida Wang', 'Pei Ke', 'Yinhe Zheng', 'Kaili Huang', 'Yong Jiang', 'Xiaoyan Zhu', 'Minlie Huang'] | ['cs.CL'] | The advancements of neural dialogue generation models show promising results
on modeling short-text conversations. However, training such models usually
needs a large-scale high-quality dialogue corpus, which is hard to access. In
this paper, we present a large-scale cleaned Chinese conversation dataset,
LCCC, which contains a base version (6.8 million dialogues) and a large version
(12.0 million dialogues). The quality of our dataset is ensured by a rigorous
data cleaning pipeline, which is built based on a set of rules and a classifier
that is trained on manually annotated 110K dialogue pairs. We also release
pre-training dialogue models which are trained on LCCC-base and LCCC-large
respectively. The cleaned dataset and the pre-training models will facilitate
the research of short-text conversation modeling. All the models and datasets
are available at https://github.com/thu-coai/CDial-GPT. | 2020-08-10T08:12:49Z | Accepted by NLPCC 2020 (Best Student Paper) | null | null | null | null | null | null | null | null | null |
2008.03979 | KR-BERT: A Small-Scale Korean-Specific Language Model | ['Sangah Lee', 'Hansol Jang', 'Yunmee Baik', 'Suzi Park', 'Hyopil Shin'] | ['cs.CL'] | Since the appearance of BERT, recent works including XLNet and RoBERTa
utilize sentence embedding models pre-trained by large corpora and a large
number of parameters. Because such models require large hardware resources and a huge amount of data, they take a long time to pre-train. Therefore it is important to
attempt to make smaller models that perform comparatively. In this paper, we
trained a Korean-specific model KR-BERT, utilizing a smaller vocabulary and
dataset. Since Korean is a morphologically rich language with relatively poor resources that uses a non-Latin alphabet, it is also important to capture
language-specific linguistic phenomena that the Multilingual BERT model missed.
We tested several tokenizers including our BidirectionalWordPiece Tokenizer and
adjusted the minimal span of tokens for tokenization ranging from sub-character
level to character-level to construct a better vocabulary for our model. With
those adjustments, our KR-BERT model performed comparably and even better than
other existing pre-trained models using a corpus about 1/10 of the size. | 2020-08-10T09:26:00Z | 7 pages | null | null | KR-BERT: A Small-Scale Korean-Specific Language Model | ['Sangah Lee', 'Hansol Jang', 'Yunmee Baik', 'Suzi Park', 'Hyopil Shin'] | 2020 | arXiv.org | 52 | 19 | ['Computer Science'] |
2008.04162 | Navigating Human Language Models with Synthetic Agents | ['Philip Feldman', 'Antonio Bucchiarone'] | ['cs.AI', 'cs.CL', 'cs.MA', 'I.2; I.6; J.4'] | Modern natural language models such as the GPT-2/GPT-3 contain tremendous
amounts of information about human belief in a consistently testable form. If
these models could be shown to accurately reflect the underlying beliefs of the
human beings that produced the data used to train these models, then such
models become a powerful sociological tool in ways that are distinct from
traditional methods, such as interviews and surveys. In this study, we train a version of GPT-2 on a corpus of historical chess games, and then "launch"
clusters of synthetic agents into the model, using text strings to create
context and orientation. We compare the trajectories contained in the text generated by the agents/model to the known ground truth of the chess board, move legality, and historical patterns of play. We find that the percentages of moves by piece produced by the model are substantially similar to human patterns. We further find that the model creates an accurate latent
representation of the chessboard, and that it is possible to plot trajectories
of legal moves across the board using this knowledge. | 2020-08-10T14:39:53Z | 8 pages, 6 figures, 2 tables, 1 algorithm | null | null | Navigating Language Models with Synthetic Agents | ['Philip G. Feldman'] | 2020 | arXiv.org | 4 | 24 | ['Computer Science'] |
2008.05671 | Large-scale Transfer Learning for Low-resource Spoken Language
Understanding | ['Xueli Jia', 'Jianzong Wang', 'Zhiyong Zhang', 'Ning Cheng', 'Jing Xiao'] | ['eess.AS', 'cs.CL', 'cs.SD'] | End-to-end Spoken Language Understanding (SLU) models are made increasingly
large and complex to achieve state-of-the-art accuracy. However, the
increased complexity of a model can also introduce high risk of over-fitting,
which is a major challenge in SLU tasks due to the limitation of available
data. In this paper, we propose an attention-based SLU model together with
three encoder enhancement strategies to overcome data sparsity challenge. The
first strategy focuses on the transfer-learning approach to improve feature
extraction capability of the encoder. It is implemented by pre-training the
encoder component with a quantity of Automatic Speech Recognition annotated
data relying on the standard Transformer architecture and then fine-tuning the
SLU model with a small amount of target labelled data. The second strategy
adopts a multi-task learning strategy: the SLU model integrates the speech recognition model by sharing the same underlying encoder, thereby improving robustness and generalization ability. The third strategy, learning from
Component Fusion (CF) idea, involves a Bidirectional Encoder Representation
from Transformer (BERT) model and aims to boost the capability of the decoder
with an auxiliary network. It hence reduces the risk of over-fitting and
augments the ability of the underlying encoder, indirectly. Experiments on the
FluentAI dataset show that the cross-language transfer learning and multi-task strategies improve performance by up to 4.52% and 3.89% respectively, compared to the baseline. | 2020-08-13T03:43:05Z | will be presented in INTERSPEECH 2020 | null | null | null | null | null | null | null | null | null |
2008.06048 | MMM : Exploring Conditional Multi-Track Music Generation with the
Transformer | ['Jeff Ens', 'Philippe Pasquier'] | ['cs.SD', 'cs.LG', 'cs.MM'] | We propose the Multi-Track Music Machine (MMM), a generative system based on
the Transformer architecture that is capable of generating multi-track music.
In contrast to previous work, which represents musical material as a single
time-ordered sequence, where the musical events corresponding to different
tracks are interleaved, we create a time-ordered sequence of musical events for
each track and concatenate several tracks into a single sequence. This takes
advantage of the Transformer's attention-mechanism, which can adeptly handle
long-term dependencies. We explore how various representations can offer the
user a high degree of control at generation time, providing an interactive demo
that accommodates track-level and bar-level inpainting, and offers control over
track instrumentation and note density. | 2020-08-13T02:36:34Z | null | null | null | null | null | null | null | null | null | null |
2008.07905 | Glancing Transformer for Non-Autoregressive Neural Machine Translation | ['Lihua Qian', 'Hao Zhou', 'Yu Bao', 'Mingxuan Wang', 'Lin Qiu', 'Weinan Zhang', 'Yong Yu', 'Lei Li'] | ['cs.CL'] | Recent work on non-autoregressive neural machine translation (NAT) aims at
improving the efficiency by parallel decoding without sacrificing the quality.
However, existing NAT methods are either inferior to Transformer or require
multiple decoding passes, leading to reduced speedup. We propose the Glancing
Language Model (GLM), a method to learn word interdependency for single-pass
parallel generation models. With GLM, we develop Glancing Transformer (GLAT)
for machine translation. With only single-pass parallel decoding, GLAT is able
to generate high-quality translation with 8-15 times speedup. Experiments on
multiple WMT language directions show that GLAT outperforms all previous single
pass non-autoregressive methods, and is nearly comparable to Transformer,
reducing the gap to 0.25-0.9 BLEU points. | 2020-08-18T13:04:03Z | 9 pages, 7 figures, ACL2021 | null | null | null | null | null | null | null | null | null |
2008.08767 | Single Image Super-Resolution via a Holistic Attention Network | ['Ben Niu', 'Weilei Wen', 'Wenqi Ren', 'Xiangde Zhang', 'Lianping Yang', 'Shuzhen Wang', 'Kaihao Zhang', 'Xiaochun Cao', 'Haifeng Shen'] | ['eess.IV', 'cs.CV'] | Informative features play a crucial role in the single image super-resolution
task. Channel attention has been demonstrated to be effective for preserving
information-rich features in each layer. However, channel attention treats each
convolution layer as a separate process that misses the correlation among
different layers. To address this problem, we propose a new holistic attention
network (HAN), which consists of a layer attention module (LAM) and a
channel-spatial attention module (CSAM), to model the holistic
interdependencies among layers, channels, and positions. Specifically, the
proposed LAM adaptively emphasizes hierarchical features by considering
correlations among layers. Meanwhile, CSAM learns the confidence at all the
positions of each channel to selectively capture more informative features.
Extensive experiments demonstrate that the proposed HAN performs favorably
against the state-of-the-art single image super-resolution approaches. | 2020-08-20T04:13:15Z | 16 pages, 6 figures, IEEE International Conference on Computer Vision | null | null | null | null | null | null | null | null | null |
2008.09093 | PARADE: Passage Representation Aggregation for Document Reranking | ['Canjia Li', 'Andrew Yates', 'Sean MacAvaney', 'Ben He', 'Yingfei Sun'] | ['cs.IR'] | Pretrained transformer models, such as BERT and T5, have been shown to be highly
effective at ad-hoc passage and document ranking. Due to inherent sequence
length limits of these models, they need to be run over a document's passages,
rather than processing the entire document sequence at once. Although several
approaches for aggregating passage-level signals have been proposed, there has
yet to be an extensive comparison of these techniques. In this work, we explore
strategies for aggregating relevance signals from a document's passages into a
final ranking score. We find that passage representation aggregation techniques
can significantly improve over techniques proposed in prior work, such as
taking the maximum passage score. We call this new approach PARADE. In
particular, PARADE can significantly improve results on collections with broad
information needs where relevance signals can be spread throughout the document
(such as TREC Robust04 and GOV2). Meanwhile, less complex aggregation
techniques may work better on collections with an information need that can
often be pinpointed to a single passage (such as TREC DL and TREC Genomics). We
also conduct efficiency analyses, and highlight several strategies for
improving transformer-based aggregation. | 2020-08-20T17:32:30Z | null | null | null | null | null | null | null | null | null | null |
2008.09144 | PTT5: Pretraining and validating the T5 model on Brazilian Portuguese
data | ['Diedre Carmo', 'Marcos Piau', 'Israel Campiotti', 'Rodrigo Nogueira', 'Roberto Lotufo'] | ['cs.CL'] | In natural language processing (NLP), there is a need for more resources in
Portuguese, since much of the data used in the state-of-the-art research is in
other languages. In this paper, we pretrain a T5 model on the BrWac corpus, an
extensive collection of web pages in Portuguese, and evaluate its performance
against other Portuguese pretrained models and multilingual models on three
different tasks. We show that our Portuguese pretrained models have
significantly better performance over the original T5 models. Moreover, we
demonstrate the positive impact of using a Portuguese vocabulary. Our code and
models are available at https://github.com/unicamp-dl/PTT5. | 2020-08-20T18:10:13Z | null | null | null | PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data | ['D. Carmo', 'Marcos Piau', 'Israel Campiotti', 'Rodrigo Nogueira', 'R. Lotufo'] | 2020 | arXiv.org | 52 | 28 | ['Computer Science'] |
2008.10010 | A Lip Sync Expert Is All You Need for Speech to Lip Generation In The
Wild | ['K R Prajwal', 'Rudrabha Mukhopadhyay', 'Vinay Namboodiri', 'C V Jawahar'] | ['cs.CV', 'cs.LG', 'cs.SD', 'eess.AS'] | In this work, we investigate the problem of lip-syncing a talking face video
of an arbitrary identity to match a target speech segment. Current works excel
at producing accurate lip movements on a static image or videos of specific
people seen during the training phase. However, they fail to accurately morph
the lip movements of arbitrary identities in dynamic, unconstrained talking
face videos, resulting in significant parts of the video being out-of-sync with
the new audio. We identify key reasons pertaining to this and hence resolve
them by learning from a powerful lip-sync discriminator. Next, we propose new,
rigorous evaluation benchmarks and metrics to accurately measure lip
synchronization in unconstrained videos. Extensive quantitative evaluations on
our challenging benchmarks show that the lip-sync accuracy of the videos
generated by our Wav2Lip model is almost as good as real synced videos. We
provide a demo video clearly showing the substantial impact of our Wav2Lip
model and evaluation benchmarks on our website:
\url{cvit.iiit.ac.in/research/projects/cvit-projects/a-lip-sync-expert-is-all-you-need-for-speech-to-lip-generation-in-the-wild}.
The code and models are released at this GitHub repository:
\url{github.com/Rudrabha/Wav2Lip}. You can also try out the interactive demo at
this link: \url{bhaasha.iiit.ac.in/lipsync}. | 2020-08-23T11:01:25Z | 9 pages (including references), 3 figures, Accepted in ACM
Multimedia, 2020 | null | 10.1145/3394171.3413532 | null | null | null | null | null | null | null |
2008.10570 | Example-Based Named Entity Recognition | ['Morteza Ziyadi', 'Yuting Sun', 'Abhishek Goswami', 'Jade Huang', 'Weizhu Chen'] | ['cs.CL', 'cs.IR'] | We present a novel approach to named entity recognition (NER) in the presence
of scarce data that we call example-based NER. Our train-free few-shot learning
approach takes inspiration from question-answering to identify entity spans in
a new and unseen domain. In comparison with the current state-of-the-art, the
proposed method performs significantly better, especially when using a low
number of support examples. | 2020-08-24T17:18:24Z | 15 pages, 6 figures, 5 tables with appendix | null | null | null | null | null | null | null | null | null |
2008.10831 | CDeC-Net: Composite Deformable Cascade Network for Table Detection in
Document Images | ['Madhav Agarwal', 'Ajoy Mondal', 'C. V. Jawahar'] | ['cs.CV'] | Localizing page elements/objects such as tables, figures, equations, etc. is
the primary step in extracting information from document images. We propose a
novel end-to-end trainable deep network, (CDeC-Net) for detecting tables
present in the documents. The proposed network consists of a multistage
extension of Mask R-CNN with a dual backbone having deformable convolution for
detecting tables varying in scale with high detection accuracy at higher IoU
threshold. We empirically evaluate CDeC-Net on all the publicly available
benchmark datasets - ICDAR-2013, ICDAR-2017, ICDAR-2019,UNLV, Marmot,
PubLayNet, and TableBank - with extensive experiments.
Our solution has three important properties: (i) a single trained model
CDeC-Net{\ddag} performs well across all the popular benchmark datasets; (ii)
we report excellent performances across multiple, including higher, thresholds
of IoU; (iii) by following the same protocol of the recent papers for each of
the benchmarks, we consistently demonstrate the superior quantitative
performance. Our code and models will be publicly released for enabling the
reproducibility of the results. | 2020-08-25T05:53:59Z | 12 | null | null | CDeC-Net: Composite Deformable Cascade Network for Table Detection in Document Images | ['Madhav Agarwal', 'Ajoy Mondal', 'C. V. Jawahar'] | 2020 | International Conference on Pattern Recognition | 63 | 56 | ['Computer Science'] |
2008.12014 | GREEK-BERT: The Greeks visiting Sesame Street | ['John Koutsikakis', 'Ilias Chalkidis', 'Prodromos Malakasiotis', 'Ion Androutsopoulos'] | ['cs.CL'] | Transformer-based language models, such as BERT and its variants, have
achieved state-of-the-art performance in several downstream natural language
processing (NLP) tasks on generic benchmark datasets (e.g., GLUE, SQUAD, RACE).
However, these models have mostly been applied to the resource-rich English
language. In this paper, we present GREEK-BERT, a monolingual BERT-based
language model for modern Greek. We evaluate its performance in three NLP
tasks, i.e., part-of-speech tagging, named entity recognition, and natural
language inference, obtaining state-of-the-art performance. Interestingly, in
two of the benchmarks GREEK-BERT outperforms two multilingual Transformer-based
models (M-BERT, XLM-R), as well as shallower neural baselines operating on
pre-trained word embeddings, by a large margin (5%-10%). Most importantly, we
make both GREEK-BERT and our training code publicly available, along with code
illustrating how GREEK-BERT can be fine-tuned for downstream NLP tasks. We
expect these resources to boost NLP research and applications for modern Greek. | 2020-08-27T09:36:14Z | 8 pages, 1 figure, 11th Hellenic Conference on Artificial
Intelligence (SETN 2020) | null | 10.1145/3411408.3411440 | null | null | null | null | null | null | null |
2008.12272 | Monocular, One-stage, Regression of Multiple 3D People | ['Yu Sun', 'Qian Bao', 'Wu Liu', 'Yili Fu', 'Michael J. Black', 'Tao Mei'] | ['cs.CV'] | This paper focuses on the regression of multiple 3D people from a single RGB
image. Existing approaches predominantly follow a multi-stage pipeline that
first detects people in bounding boxes and then independently regresses their
3D body meshes. In contrast, we propose to Regress all meshes in a One-stage
fashion for Multiple 3D People (termed ROMP). The approach is conceptually
simple, bounding box-free, and able to learn a per-pixel representation in an
end-to-end manner. Our method simultaneously predicts a Body Center heatmap and
a Mesh Parameter map, which can jointly describe the 3D body mesh on the pixel
level. Through a body-center-guided sampling process, the body mesh parameters
of all people in the image are easily extracted from the Mesh Parameter map.
Equipped with such a fine-grained representation, our one-stage framework is
free of the complex multi-stage process and more robust to occlusion. Compared
with state-of-the-art methods, ROMP achieves superior performance on the
challenging multi-person benchmarks, including 3DPW and CMU Panoptic.
Experiments on crowded/occluded datasets demonstrate the robustness under
various types of occlusion. The released code is the first real-time
implementation of monocular multi-person 3D mesh regression. | 2020-08-27T17:21:47Z | ICCV 2021, Code https://github.com/Arthur151/ROMP | null | null | Monocular, One-stage, Regression of Multiple 3D People | ['Yu Sun', 'Qian Bao', 'Wu Liu', 'Yili Fu', 'Michael J. Black', 'Tao Mei'] | 2020 | IEEE International Conference on Computer Vision | 278 | 64 | ['Computer Science'] |
2009.00590 | Summary-Source Proposition-level Alignment: Task, Datasets and
Supervised Baseline | ['Ori Ernst', 'Ori Shapira', 'Ramakanth Pasunuru', 'Michael Lepioshkin', 'Jacob Goldberger', 'Mohit Bansal', 'Ido Dagan'] | ['cs.CL'] | Aligning sentences in a reference summary with their counterparts in source
documents was shown as a useful auxiliary summarization task, notably for
generating training data for salience detection. Despite its assessed utility,
the alignment step was mostly approached with heuristic unsupervised methods,
typically ROUGE-based, and was never independently optimized or evaluated. In
this paper, we propose establishing summary-source alignment as an explicit
task, while introducing two major novelties: (1) applying it at the more
accurate proposition span level, and (2) approaching it as a supervised
classification task. To that end, we created a novel training dataset for
proposition-level alignment, derived automatically from available summarization
evaluation data. In addition, we crowdsourced dev and test datasets, enabling
model development and proper evaluation. Utilizing these data, we present a
supervised proposition alignment baseline model, showing improved
alignment-quality over the unsupervised approach. | 2020-09-01T17:27:12Z | CoNLL 2021 | null | null | Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline | ['Ori Ernst', 'Ori Shapira', 'Ramakanth Pasunuru', 'Michael Lepioshkin', 'J. Goldberger', 'Mohit Bansal', 'Ido Dagan'] | 2020 | Conference on Computational Natural Language Learning | 28 | 30 | ['Computer Science'] |
2009.00713 | WaveGrad: Estimating Gradients for Waveform Generation | ['Nanxin Chen', 'Yu Zhang', 'Heiga Zen', 'Ron J. Weiss', 'Mohammad Norouzi', 'William Chan'] | ['eess.AS', 'cs.LG', 'cs.SD', 'stat.ML'] | This paper introduces WaveGrad, a conditional model for waveform generation
which estimates gradients of the data density. The model is built on prior work
on score matching and diffusion probabilistic models. It starts from a Gaussian
white noise signal and iteratively refines the signal via a gradient-based
sampler conditioned on the mel-spectrogram. WaveGrad offers a natural way to
trade inference speed for sample quality by adjusting the number of refinement
steps, and bridges the gap between non-autoregressive and autoregressive models
in terms of audio quality. We find that it can generate high fidelity audio
samples using as few as six iterations. Experiments reveal WaveGrad to generate
high fidelity audio, outperforming adversarial non-autoregressive baselines and
matching a strong likelihood-based autoregressive baseline using fewer
sequential operations. Audio samples are available at
https://wavegrad.github.io/. | 2020-09-02T17:44:10Z | null | null | null | WaveGrad: Estimating Gradients for Waveform Generation | ['Nanxin Chen', 'Yu Zhang', 'H. Zen', 'Ron J. Weiss', 'Mohammad Norouzi', 'William Chan'] | 2020 | International Conference on Learning Representations | 795 | 66 | ['Computer Science', 'Engineering', 'Mathematics'] |
2009.01325 | Learning to summarize from human feedback | ['Nisan Stiennon', 'Long Ouyang', 'Jeff Wu', 'Daniel M. Ziegler', 'Ryan Lowe', 'Chelsea Voss', 'Alec Radford', 'Dario Amodei', 'Paul Christiano'] | ['cs.CL', 'cs.AI', 'cs.LG'] | As language models become more powerful, training and evaluation are
increasingly bottlenecked by the data and metrics used for a particular task.
For example, summarization models are often trained to predict human reference
summaries and evaluated using ROUGE, but both of these metrics are rough
proxies for what we really care about -- summary quality. In this work, we show
that it is possible to significantly improve summary quality by training a
model to optimize for human preferences. We collect a large, high-quality
dataset of human comparisons between summaries, train a model to predict the
human-preferred summary, and use that model as a reward function to fine-tune a
summarization policy using reinforcement learning. We apply our method to a
version of the TL;DR dataset of Reddit posts and find that our models
significantly outperform both human reference summaries and much larger models
fine-tuned with supervised learning alone. Our models also transfer to CNN/DM
news articles, producing summaries nearly as good as the human reference
without any news-specific fine-tuning. We conduct extensive analyses to
understand our human feedback dataset and fine-tuned models. We establish that
our reward model generalizes to new datasets, and that optimizing our reward
model results in better summaries than optimizing ROUGE according to humans. We
hope the evidence from our paper motivates machine learning researchers to pay
closer attention to how their training loss affects the model behavior they
actually want. | 2020-09-02T19:54:41Z | NeurIPS 2020 | null | null | Learning to summarize from human feedback | ['Nisan Stiennon', 'Long Ouyang', 'Jeff Wu', 'Daniel M. Ziegler', 'Ryan J. Lowe', 'Chelsea Voss', 'Alec Radford', 'Dario Amodei', 'Paul Christiano'] | 2020 | Neural Information Processing Systems | 2195 | 84 | ['Computer Science'] |