arxiv_id float64 1.5k 2.51k | title stringlengths 9 178 ⌀ | authors stringlengths 2 22.8k | categories stringlengths 4 146 | summary stringlengths 103 1.92k ⌀ | published stringdate 2015-02-06 10:44:00 2025-07-10 17:59:58 ⌀ | comments stringlengths 2 417 ⌀ | journal_ref stringclasses 321
values | doi stringclasses 398
values | ss_title stringlengths 8 159 ⌀ | ss_authors stringlengths 11 8.38k ⌀ | ss_year float64 2.02k 2.03k ⌀ | ss_venue stringclasses 281
values | ss_citationCount float64 0 134k ⌀ | ss_referenceCount float64 0 429 ⌀ | ss_fieldsOfStudy stringclasses 47
values |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,012.00857 | StructFormer: Joint Unsupervised Induction of Dependency and
Constituency Structure from Masked Language Modeling | ['Yikang Shen', 'Yi Tay', 'Che Zheng', 'Dara Bahri', 'Donald Metzler', 'Aaron Courville'] | ['cs.CL', 'cs.AI', 'cs.LG'] | There are two major classes of natural language grammar -- the dependency
grammar that models one-to-one correspondences between words and the
constituency grammar that models the assembly of one or several corresponded
words. While previous unsupervised parsing methods mostly focus on only
inducing one class of grammars, we introduce a novel model, StructFormer, that
can simultaneously induce dependency and constituency structure. To achieve
this, we propose a new parsing framework that can jointly generate a
constituency tree and dependency graph. Then we integrate the induced
dependency relations into the transformer, in a differentiable manner, through
a novel dependency-constrained self-attention mechanism. Experimental results
show that our model can achieve strong results on unsupervised constituency
parsing, unsupervised dependency parsing, and masked language modeling at the
same time. | 2020-12-01T21:54:51Z | Published as a conference paper at ACL 2021 | null | null | StructFormer: Joint Unsupervised Induction of Dependency and Constituency Structure from Masked Language Modeling | ['Yikang Shen', 'Yi Tay', 'Che Zheng', 'Dara Bahri', 'Donald Metzler', 'Aaron C. Courville'] | 2,020 | Annual Meeting of the Association for Computational Linguistics | 41 | 44 | ['Computer Science'] |
2,012.01477 | The Third DIHARD Diarization Challenge | ['Neville Ryant', 'Prachi Singh', 'Venkat Krishnamohan', 'Rajat Varma', 'Kenneth Church', 'Christopher Cieri', 'Jun Du', 'Sriram Ganapathy', 'Mark Liberman'] | ['eess.AS', 'cs.SD'] | DIHARD III was the third in a series of speaker diarization challenges
intended to improve the robustness of diarization systems to variability in
recording equipment, noise conditions, and conversational domain. Speaker
diarization was evaluated under two speech activity conditions (diarization
from a reference speech activity vs. diarization from scratch) and 11 diverse
domains. The domains span a range of recording conditions and interaction
types, including read audio-books, meeting speech, clinical interviews, web
videos, and, for the first time, conversational telephone speech. A total of 30
organizations (forming 21teams) from industry and academia submitted 499 valid
system outputs. The evaluation results indicate that speaker diarization has
improved markedly since DIHARD I, particularly for two-party interactions, but
that for many domains (e.g., web video) the problem remains far from solved. | 2020-12-02T19:33:44Z | arXiv admin note: text overlap with arXiv:1906.07839 | null | null | The Third DIHARD Diarization Challenge | ['Neville Ryant', 'Prachi Singh', 'Venkat Krishnamohan', 'Rajat Varma', 'Kenneth Ward Church', 'C. Cieri', 'Jun Du', 'Sriram Ganapathy', 'M. Liberman'] | 2,020 | Interspeech | 135 | 43 | ['Engineering', 'Computer Science'] |
2,012.01873 | Saying No is An Art: Contextualized Fallback Responses for Unanswerable
Dialogue Queries | ['Ashish Shrivastava', 'Kaustubh Dhole', 'Abhinav Bhatt', 'Sharvani Raghunath'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Despite end-to-end neural systems making significant progress in the last
decade for task-oriented as well as chit-chat based dialogue systems, most
dialogue systems rely on hybrid approaches which use a combination of
rule-based, retrieval and generative approaches for generating a set of ranked
responses. Such dialogue systems need to rely on a fallback mechanism to
respond to out-of-domain or novel user queries which are not answerable within
the scope of the dialog system. While, dialog systems today rely on static and
unnatural responses like "I don't know the answer to that question" or "I'm not
sure about that", we design a neural approach which generates responses which
are contextually aware with the user query as well as say no to the user. Such
customized responses provide paraphrasing ability and contextualization as well
as improve the interaction with the user and reduce dialogue monotonicity. Our
simple approach makes use of rules over dependency parses and a text-to-text
transformer fine-tuned on synthetic data of question-response pairs generating
highly relevant, grammatical as well as diverse questions. We perform automatic
and manual evaluations to demonstrate the efficacy of the system. | 2020-12-03T12:34:22Z | ACL-IJCNLP 2021 | null | null | null | null | null | null | null | null | null |
2,012.0211 | GottBERT: a pure German Language Model | ['Raphael Scheible', 'Fabian Thomczyk', 'Patric Tippmann', 'Victor Jaravine', 'Martin Boeker'] | ['cs.CL', 'cs.LG'] | Lately, pre-trained language models advanced the field of natural language
processing (NLP). The introduction of Bidirectional Encoders for Transformers
(BERT) and its optimized version RoBERTa have had significant impact and
increased the relevance of pre-trained models. First, research in this field
mainly started on English data followed by models trained with multilingual
text corpora. However, current research shows that multilingual models are
inferior to monolingual models. Currently, no German single language RoBERTa
model is yet published, which we introduce in this work (GottBERT). The German
portion of the OSCAR data set was used as text corpus. In an evaluation we
compare its performance on the two Named Entity Recognition (NER) tasks Conll
2003 and GermEval 2014 as well as on the text classification tasks GermEval
2018 (fine and coarse) and GNAD with existing German single language BERT
models and two multilingual ones. GottBERT was pre-trained related to the
original RoBERTa model using fairseq. All downstream tasks were trained using
hyperparameter presets taken from the benchmark of German BERT. The experiments
were setup utilizing FARM. Performance was measured by the $F_{1}$ score.
GottBERT was successfully pre-trained on a 256 core TPU pod using the RoBERTa
BASE architecture. Even without extensive hyper-parameter optimization, in all
NER and one text classification task, GottBERT already outperformed all other
tested German and multilingual models. In order to support the German NLP
field, we publish GottBERT under the AGPLv3 license. | 2020-12-03T17:45:03Z | null | null | 10.18653/v1/2024.emnlp-main.1183 | GottBERT: a pure German Language Model | ['Raphael Scheible', 'Fabian Thomczyk', 'P. Tippmann', 'V. Jaravine', 'M. Boeker'] | 2,020 | Conference on Empirical Methods in Natural Language Processing | 81 | 35 | ['Computer Science'] |
2,012.02613 | FinnSentiment -- A Finnish Social Media Corpus for Sentiment Polarity
Annotation | ['Krister Lindén', 'Tommi Jauhiainen', 'Sam Hardwick'] | ['cs.CL'] | Sentiment analysis and opinion mining is an important task with obvious
application areas in social media, e.g. when indicating hate speech and fake
news. In our survey of previous work, we note that there is no large-scale
social media data set with sentiment polarity annotations for Finnish. This
publications aims to remedy this shortcoming by introducing a 27,000 sentence
data set annotated independently with sentiment polarity by three native
annotators. We had the same three annotators for the whole data set, which
provides a unique opportunity for further studies of annotator behaviour over
time. We analyse their inter-annotator agreement and provide two baselines to
validate the usefulness of the data set. | 2020-12-04T14:17:46Z | null | null | null | null | null | null | null | null | null | null |
2,012.02951 | FloodNet: A High Resolution Aerial Imagery Dataset for Post Flood Scene
Understanding | ['Maryam Rahnemoonfar', 'Tashnim Chowdhury', 'Argho Sarkar', 'Debvrat Varshney', 'Masoud Yari', 'Robin Murphy'] | ['cs.CV', '68T45', 'I.4.6'] | Visual scene understanding is the core task in making any crucial decision in
any computer vision system. Although popular computer vision datasets like
Cityscapes, MS-COCO, PASCAL provide good benchmarks for several tasks (e.g.
image classification, segmentation, object detection), these datasets are
hardly suitable for post disaster damage assessments. On the other hand,
existing natural disaster datasets include mainly satellite imagery which have
low spatial resolution and a high revisit period. Therefore, they do not have a
scope to provide quick and efficient damage assessment tasks. Unmanned Aerial
Vehicle(UAV) can effortlessly access difficult places during any disaster and
collect high resolution imagery that is required for aforementioned tasks of
computer vision. To address these issues we present a high resolution UAV
imagery, FloodNet, captured after the hurricane Harvey. This dataset
demonstrates the post flooded damages of the affected areas. The images are
labeled pixel-wise for semantic segmentation task and questions are produced
for the task of visual question answering. FloodNet poses several challenges
including detection of flooded roads and buildings and distinguishing between
natural water and flooded water. With the advancement of deep learning
algorithms, we can analyze the impact of any disaster which can make a precise
understanding of the affected areas. In this paper, we compare and contrast the
performances of baseline methods for image classification, semantic
segmentation, and visual question answering on our dataset. | 2020-12-05T05:15:36Z | 11 pages | null | null | null | null | null | null | null | null | null |
2,012.03308 | TediGAN: Text-Guided Diverse Face Image Generation and Manipulation | ['Weihao Xia', 'Yujiu Yang', 'Jing-Hao Xue', 'Baoyuan Wu'] | ['cs.CV', 'cs.AI', 'cs.MM'] | In this work, we propose TediGAN, a novel framework for multi-modal image
generation and manipulation with textual descriptions. The proposed method
consists of three components: StyleGAN inversion module, visual-linguistic
similarity learning, and instance-level optimization. The inversion module maps
real images to the latent space of a well-trained StyleGAN. The
visual-linguistic similarity learns the text-image matching by mapping the
image and text into a common embedding space. The instance-level optimization
is for identity preservation in manipulation. Our model can produce diverse and
high-quality images with an unprecedented resolution at 1024. Using a control
mechanism based on style-mixing, our TediGAN inherently supports image
synthesis with multi-modal inputs, such as sketches or semantic labels, with or
without instance guidance. To facilitate text-guided multi-modal synthesis, we
propose the Multi-Modal CelebA-HQ, a large-scale dataset consisting of real
face images and corresponding semantic segmentation map, sketch, and textual
descriptions. Extensive experiments on the introduced dataset demonstrate the
superior performance of our proposed method. Code and data are available at
https://github.com/weihaox/TediGAN. | 2020-12-06T16:20:19Z | CVPR 2021. Code: https://github.com/weihaox/TediGAN Data:
https://github.com/weihaox/Multi-Modal-CelebA-HQ Video:
https://youtu.be/L8Na2f5viAM | null | null | TediGAN: Text-Guided Diverse Image Generation and Manipulation | ['Weihao Xia', 'Yujiu Yang', 'Jing-Hao Xue', 'Baoyuan Wu'] | 2,020 | arXiv.org | 23 | 64 | ['Computer Science'] |
2,012.03411 | MLS: A Large-Scale Multilingual Dataset for Speech Research | ['Vineel Pratap', 'Qiantong Xu', 'Anuroop Sriram', 'Gabriel Synnaeve', 'Ronan Collobert'] | ['eess.AS', 'cs.CL', 'cs.SD'] | This paper introduces Multilingual LibriSpeech (MLS) dataset, a large
multilingual corpus suitable for speech research. The dataset is derived from
read audiobooks from LibriVox and consists of 8 languages, including about
44.5K hours of English and a total of about 6K hours for other languages.
Additionally, we provide Language Models (LM) and baseline Automatic Speech
Recognition (ASR) models and for all the languages in our dataset. We believe
such a large transcribed dataset will open new avenues in ASR and
Text-To-Speech (TTS) research. The dataset will be made freely available for
anyone at http://www.openslr.org. | 2020-12-07T01:53:45Z | null | Interspeech 2020 | 10.21437/Interspeech.2020-2826 | null | null | null | null | null | null | null |
2,012.03619 | Structural Text Segmentation of Legal Documents | ['Dennis Aumiller', 'Satya Almasian', 'Sebastian Lackner', 'Michael Gertz'] | ['cs.CL'] | The growing complexity of legal cases has lead to an increasing interest in
legal information retrieval systems that can effectively satisfy user-specific
information needs. However, such downstream systems typically require documents
to be properly formatted and segmented, which is often done with relatively
simple pre-processing steps, disregarding topical coherence of segments.
Systems generally rely on representations of individual sentences or
paragraphs, which may lack crucial context, or document-level representations,
which are too long for meaningful search results. To address this issue, we
propose a segmentation system that can predict topical coherence of sequential
text segments spanning several paragraphs, effectively segmenting a document
and providing a more balanced representation for downstream applications. We
build our model on top of popular transformer networks and formulate structural
text segmentation as topical change detection, by performing a series of
independent classifications that allow for efficient fine-tuning on
task-specific data. We crawl a novel dataset consisting of roughly $74,000$
online Terms-of-Service documents, including hierarchical topic annotations,
which we use for training. Results show that our proposed system significantly
outperforms baselines, and adapts well to structural peculiarities of legal
documents. We release both data and trained models to the research community
for future work.https://github.com/dennlinger/TopicalChange | 2020-12-07T12:09:37Z | null | null | 10.1145/3462757.3466085 | null | null | null | null | null | null | null |
2,012.04584 | Distilling Knowledge from Reader to Retriever for Question Answering | ['Gautier Izacard', 'Edouard Grave'] | ['cs.CL', 'cs.LG'] | The task of information retrieval is an important component of many natural
language processing systems, such as open domain question answering. While
traditional methods were based on hand-crafted features, continuous
representations based on neural networks recently obtained competitive results.
A challenge of using such methods is to obtain supervised data to train the
retriever model, corresponding to pairs of query and support documents. In this
paper, we propose a technique to learn retriever models for downstream tasks,
inspired by knowledge distillation, and which does not require annotated pairs
of query and documents. Our approach leverages attention scores of a reader
model, used to solve the task based on retrieved documents, to obtain synthetic
labels for the retriever. We evaluate our method on question answering,
obtaining state-of-the-art results. | 2020-12-08T17:36:34Z | null | null | null | Distilling Knowledge from Reader to Retriever for Question Answering | ['Gautier Izacard', 'Edouard Grave'] | 2,020 | International Conference on Learning Representations | 267 | 41 | ['Computer Science'] |
2,012.05483 | Specialization maps for Scholze's category of diamonds | ['Ian Gleason'] | ['math.AG', 'math.NT'] | We introduce the specialization map in Scholzes theory of diamonds. We
consider v-sheaves that behave like formal schemes and call them kimberlites.
We attach to them: a reduced special fiber, an analytic locus, a specialization
map, a Zariski site, and an etale site. When the kimberlite comes from a formal
scheme, our sites recover the classical ones. We prove that unramified p-adic
Beilinson--Drinfeld Grassmannians are kimberlites with finiteness and normality
properties. | 2020-12-10T07:00:21Z | The material of specialization maps for moduli spaces of p-adic
shtukas can now be found in arXiv:2107.03579 | null | null | null | null | null | null | null | null | null |
2,012.05628 | As Good as New. How to Successfully Recycle English GPT-2 to Make Models
for Other Languages | ['Wietse de Vries', 'Malvina Nissim'] | ['cs.CL'] | Large generative language models have been very successful for English, but
other languages lag behind, in part due to data and computational limitations.
We propose a method that may overcome these problems by adapting existing
pre-trained models to new languages. Specifically, we describe the adaptation
of English GPT-2 to Italian and Dutch by retraining lexical embeddings without
tuning the Transformer layers. As a result, we obtain lexical embeddings for
Italian and Dutch that are aligned with the original English lexical
embeddings. Additionally, we scale up complexity by transforming relearned
lexical embeddings of GPT-2 small to the GPT-2 medium embedding space. This
method minimises the amount of training and prevents losing information during
adaptation that was learned by GPT-2. English GPT-2 models with relearned
lexical embeddings can generate realistic sentences in Italian and Dutch.
Though on average these sentences are still identifiable as artificial by
humans, they are assessed on par with sentences generated by a GPT-2 model
fully trained from scratch. | 2020-12-10T12:27:16Z | Findings of ACL 2021 Camera Ready | Findings of the Association for Computational Linguistics:
ACL-IJCNLP 2021 | 10.18653/v1/2021.findings-acl.74 | As Good as New. How to Successfully Recycle English GPT-2 to Make Models for Other Languages | ['Wietse de Vries', 'M. Nissim'] | 2,020 | Findings | 78 | 42 | ['Computer Science'] |
2,012.06785 | DETR for Crowd Pedestrian Detection | ['Matthieu Lin', 'Chuming Li', 'Xingyuan Bu', 'Ming Sun', 'Chen Lin', 'Junjie Yan', 'Wanli Ouyang', 'Zhidong Deng'] | ['cs.CV'] | Pedestrian detection in crowd scenes poses a challenging problem due to the
heuristic defined mapping from anchors to pedestrians and the conflict between
NMS and highly overlapped pedestrians. The recently proposed end-to-end
detectors(ED), DETR and deformable DETR, replace hand designed components such
as NMS and anchors using the transformer architecture, which gets rid of
duplicate predictions by computing all pairwise interactions between queries.
Inspired by these works, we explore their performance on crowd pedestrian
detection. Surprisingly, compared to Faster-RCNN with FPN, the results are
opposite to those obtained on COCO. Furthermore, the bipartite match of ED
harms the training efficiency due to the large ground truth number in crowd
scenes. In this work, we identify the underlying motives driving ED's poor
performance and propose a new decoder to address them. Moreover, we design a
mechanism to leverage the less occluded visible parts of pedestrian
specifically for ED, and achieve further improvements. A faster bipartite match
algorithm is also introduced to make ED training on crowd dataset more
practical. The proposed detector PED(Pedestrian End-to-end Detector)
outperforms both previous EDs and the baseline Faster-RCNN on CityPersons and
CrowdHuman. It also achieves comparable performance with state-of-the-art
pedestrian detection methods. Code will be released soon. | 2020-12-12T11:02:05Z | null | null | null | null | null | null | null | null | null | null |
2,012.07436 | Informer: Beyond Efficient Transformer for Long Sequence Time-Series
Forecasting | ['Haoyi Zhou', 'Shanghang Zhang', 'Jieqi Peng', 'Shuai Zhang', 'Jianxin Li', 'Hui Xiong', 'Wancai Zhang'] | ['cs.LG', 'cs.AI', 'cs.IR'] | Many real-world applications require the prediction of long sequence
time-series, such as electricity consumption planning. Long sequence
time-series forecasting (LSTF) demands a high prediction capacity of the model,
which is the ability to capture precise long-range dependency coupling between
output and input efficiently. Recent studies have shown the potential of
Transformer to increase the prediction capacity. However, there are several
severe issues with Transformer that prevent it from being directly applicable
to LSTF, including quadratic time complexity, high memory usage, and inherent
limitation of the encoder-decoder architecture. To address these issues, we
design an efficient transformer-based model for LSTF, named Informer, with
three distinctive characteristics: (i) a $ProbSparse$ self-attention mechanism,
which achieves $O(L \log L)$ in time complexity and memory usage, and has
comparable performance on sequences' dependency alignment. (ii) the
self-attention distilling highlights dominating attention by halving cascading
layer input, and efficiently handles extreme long input sequences. (iii) the
generative style decoder, while conceptually simple, predicts the long
time-series sequences at one forward operation rather than a step-by-step way,
which drastically improves the inference speed of long-sequence predictions.
Extensive experiments on four large-scale datasets demonstrate that Informer
significantly outperforms existing methods and provides a new solution to the
LSTF problem. | 2020-12-14T11:43:09Z | 8 pages (main), 5 pages (appendix) and to be appeared in AAAI2021 | null | null | Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting | ['Haoyi Zhou', 'Shanghang Zhang', 'Jieqi Peng', 'Shuai Zhang', 'Jianxin Li', 'Hui Xiong', 'Wan Zhang'] | 2,020 | AAAI Conference on Artificial Intelligence | 4,298 | 57 | ['Computer Science'] |
2,012.07791 | img2pose: Face Alignment and Detection via 6DoF, Face Pose Estimation | ['Vítor Albiero', 'Xingyu Chen', 'Xi Yin', 'Guan Pang', 'Tal Hassner'] | ['cs.CV'] | We propose real-time, six degrees of freedom (6DoF), 3D face pose estimation
without face detection or landmark localization. We observe that estimating the
6DoF rigid transformation of a face is a simpler problem than facial landmark
detection, often used for 3D face alignment. In addition, 6DoF offers more
information than face bounding box labels. We leverage these observations to
make multiple contributions: (a) We describe an easily trained, efficient,
Faster R-CNN--based model which regresses 6DoF pose for all faces in the photo,
without preliminary face detection. (b) We explain how pose is converted and
kept consistent between the input photo and arbitrary crops created while
training and evaluating our model. (c) Finally, we show how face poses can
replace detection bounding box training labels. Tests on AFLW2000-3D and BIWI
show that our method runs at real-time and outperforms state of the art (SotA)
face pose estimators. Remarkably, our method also surpasses SotA models of
comparable complexity on the WIDER FACE detection benchmark, despite not been
optimized on bounding box labels. | 2020-12-14T18:26:20Z | To appear in CVPR 2021. Joint first authorship: V\'itor Albiero and
Xingyu Chen | null | null | null | null | null | null | null | null | null |
2,012.09841 | Taming Transformers for High-Resolution Image Synthesis | ['Patrick Esser', 'Robin Rombach', 'Björn Ommer'] | ['cs.CV'] | Designed to learn long-range interactions on sequential data, transformers
continue to show state-of-the-art results on a wide variety of tasks. In
contrast to CNNs, they contain no inductive bias that prioritizes local
interactions. This makes them expressive, but also computationally infeasible
for long sequences, such as high-resolution images. We demonstrate how
combining the effectiveness of the inductive bias of CNNs with the expressivity
of transformers enables them to model and thereby synthesize high-resolution
images. We show how to (i) use CNNs to learn a context-rich vocabulary of image
constituents, and in turn (ii) utilize transformers to efficiently model their
composition within high-resolution images. Our approach is readily applied to
conditional synthesis tasks, where both non-spatial information, such as object
classes, and spatial information, such as segmentations, can control the
generated image. In particular, we present the first results on
semantically-guided synthesis of megapixel images with transformers and obtain
the state of the art among autoregressive models on class-conditional ImageNet.
Code and pretrained models can be found at
https://github.com/CompVis/taming-transformers . | 2020-12-17T18:57:28Z | Changelog can be found in the supplementary | null | null | Taming Transformers for High-Resolution Image Synthesis | ['Patrick Esser', 'Robin Rombach', 'B. Ommer'] | 2,020 | Computer Vision and Pattern Recognition | 3,016 | 82 | ['Computer Science'] |
2,012.10289 | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection | ['Binny Mathew', 'Punyajoy Saha', 'Seid Muhie Yimam', 'Chris Biemann', 'Pawan Goyal', 'Animesh Mukherjee'] | ['cs.CL', 'cs.AI', 'cs.SI'] | Hate speech is a challenging issue plaguing the online social media. While
better models for hate speech detection are continuously being developed, there
is little research on the bias and interpretability aspects of hate speech. In
this paper, we introduce HateXplain, the first benchmark hate speech dataset
covering multiple aspects of the issue. Each post in our dataset is annotated
from three different perspectives: the basic, commonly used 3-class
classification (i.e., hate, offensive or normal), the target community (i.e.,
the community that has been the victim of hate speech/offensive speech in the
post), and the rationales, i.e., the portions of the post on which their
labelling decision (as hate, offensive or normal) is based. We utilize existing
state-of-the-art models and observe that even models that perform very well in
classification do not score high on explainability metrics like model
plausibility and faithfulness. We also observe that models, which utilize the
human rationales for training, perform better in reducing unintended bias
towards target communities. We have made our code and dataset public at
https://github.com/punyajoy/HateXplain | 2020-12-18T15:12:14Z | 12 pages, 7 figues, 8 tables. Accepted at AAAI 2021 | null | null | HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection | ['Binny Mathew', 'Punyajoy Saha', 'Seid Muhie Yimam', 'Chris Biemann', 'Pawan Goyal', 'Animesh Mukherjee'] | 2,020 | AAAI Conference on Artificial Intelligence | 582 | 60 | ['Computer Science'] |
2,012.10309 | Learning Contextual Representations for Semantic Parsing with
Generation-Augmented Pre-Training | ['Peng Shi', 'Patrick Ng', 'Zhiguo Wang', 'Henghui Zhu', 'Alexander Hanbo Li', 'Jun Wang', 'Cicero Nogueira dos Santos', 'Bing Xiang'] | ['cs.CL'] | Most recently, there has been significant interest in learning contextual
representations for various NLP tasks, by leveraging large scale text corpora
to train large neural language models with self-supervised learning objectives,
such as Masked Language Model (MLM). However, based on a pilot study, we
observe three issues of existing general-purpose language models when they are
applied to text-to-SQL semantic parsers: fail to detect column mentions in the
utterances, fail to infer column mentions from cell values, and fail to compose
complex SQL queries. To mitigate these issues, we present a model pre-training
framework, Generation-Augmented Pre-training (GAP), that jointly learns
representations of natural language utterances and table schemas by leveraging
generation models to generate pre-train data. GAP MODEL is trained on 2M
utterance-schema pairs and 30K utterance-schema-SQL triples, whose utterances
are produced by generative models. Based on experimental results, neural
semantic parsers that leverage GAP MODEL as a representation encoder obtain new
state-of-the-art results on both SPIDER and CRITERIA-TO-SQL benchmarks. | 2020-12-18T15:53:50Z | Accepted to AAAI 2021 | null | null | Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training | ['Peng Shi', 'Patrick Ng', 'Zhiguo Wang', 'Henghui Zhu', 'Alexander Hanbo Li', 'Jun Wang', 'C. D. Santos', 'Bing Xiang'] | 2,020 | AAAI Conference on Artificial Intelligence | 117 | 48 | ['Computer Science'] |
2,012.1182 | Recognizing Emotion Cause in Conversations | ['Soujanya Poria', 'Navonil Majumder', 'Devamanyu Hazarika', 'Deepanway Ghosal', 'Rishabh Bhardwaj', 'Samson Yu Bai Jian', 'Pengfei Hong', 'Romila Ghosh', 'Abhinaba Roy', 'Niyati Chhaya', 'Alexander Gelbukh', 'Rada Mihalcea'] | ['cs.CL'] | We address the problem of recognizing emotion cause in conversations, define
two novel sub-tasks of this problem, and provide a corresponding dialogue-level
dataset, along with strong Transformer-based baselines. The dataset is
available at https://github.com/declare-lab/RECCON.
Introduction: Recognizing the cause behind emotions in text is a fundamental
yet under-explored area of research in NLP. Advances in this area hold the
potential to improve interpretability and performance in affect-based models.
Identifying emotion causes at the utterance level in conversations is
particularly challenging due to the intermingling dynamics among the
interlocutors.
Method: We introduce the task of Recognizing Emotion Cause in CONversations
with an accompanying dataset named RECCON, containing over 1,000 dialogues and
10,000 utterance cause-effect pairs. Furthermore, we define different cause
types based on the source of the causes, and establish strong Transformer-based
baselines to address two different sub-tasks on this dataset: causal span
extraction and causal emotion entailment.
Result: Our Transformer-based baselines, which leverage contextual
pre-trained embeddings, such as RoBERTa, outperform the state-of-the-art
emotion cause extraction approaches
Conclusion: We introduce a new task highly relevant for (explainable)
emotion-aware artificial intelligence: recognizing emotion cause in
conversations, provide a new highly challenging publicly available
dialogue-level dataset for this task, and give strong baseline results on this
dataset. | 2020-12-22T03:51:35Z | https://github.com/declare-lab/RECCON, Accepted at Cognitive
Computation | null | null | Recognizing Emotion Cause in Conversations | ['Soujanya Poria', 'Navonil Majumder', 'Devamanyu Hazarika', 'Deepanway Ghosal', 'Rishabh Bhardwaj', 'Samson Yu', 'Pengfei Hong', 'Romila Ghosh', 'Niyati Chhaya', 'A. Gelbukh', 'Rada Mihalcea'] | 2,020 | Cognitive Computation | 129 | 45 | ['Computer Science'] |
2,012.12624 | Learning Dense Representations of Phrases at Scale | ['Jinhyuk Lee', 'Mujeen Sung', 'Jaewoo Kang', 'Danqi Chen'] | ['cs.CL'] | Open-domain question answering can be reformulated as a phrase retrieval
problem, without the need for processing documents on-demand during inference
(Seo et al., 2019). However, current phrase retrieval models heavily depend on
sparse representations and still underperform retriever-reader approaches. In
this work, we show for the first time that we can learn dense representations
of phrases alone that achieve much stronger performance in open-domain QA. We
present an effective method to learn phrase representations from the
supervision of reading comprehension tasks, coupled with novel negative
sampling methods. We also propose a query-side fine-tuning strategy, which can
support transfer learning and reduce the discrepancy between training and
inference. On five popular open-domain QA datasets, our model DensePhrases
improves over previous phrase retrieval models by 15%-25% absolute accuracy and
matches the performance of state-of-the-art retriever-reader models. Our model
is easy to parallelize due to pure dense representations and processes more
than 10 questions per second on CPUs. Finally, we directly use our pre-indexed
dense phrase representations for two slot filling tasks, showing the promise of
utilizing DensePhrases as a dense knowledge base for downstream tasks. | 2020-12-23T12:28:17Z | ACL 2021. Code available at
https://github.com/princeton-nlp/DensePhrases | null | null | Learning Dense Representations of Phrases at Scale | ['Jinhyuk Lee', 'Mujeen Sung', 'Jaewoo Kang', 'Danqi Chen'] | 2,020 | Annual Meeting of the Association for Computational Linguistics | 122 | 52 | ['Computer Science'] |
2,012.12877 | Training data-efficient image transformers & distillation through
attention | ['Hugo Touvron', 'Matthieu Cord', 'Matthijs Douze', 'Francisco Massa', 'Alexandre Sablayrolles', 'Hervé Jégou'] | ['cs.CV'] | Recently, neural networks purely based on attention were shown to address
image understanding tasks such as image classification. However, these visual
transformers are pre-trained with hundreds of millions of images using an
expensive infrastructure, thereby limiting their adoption.
In this work, we produce a competitive convolution-free transformer by
training on Imagenet only. We train them on a single computer in less than 3
days. Our reference vision transformer (86M parameters) achieves top-1 accuracy
of 83.1% (single-crop evaluation) on ImageNet with no external data.
More importantly, we introduce a teacher-student strategy specific to
transformers. It relies on a distillation token ensuring that the student
learns from the teacher through attention. We show the interest of this
token-based distillation, especially when using a convnet as a teacher. This
leads us to report results competitive with convnets for both Imagenet (where
we obtain up to 85.2% accuracy) and when transferring to other tasks. We share
our code and models. | 2020-12-23T18:42:10Z | null | null | null | null | null | null | null | null | null | null |
2,012.13577 | LOREN: Logic-Regularized Reasoning for Interpretable Fact Verification | ['Jiangjie Chen', 'Qiaoben Bao', 'Changzhi Sun', 'Xinbo Zhang', 'Jiaze Chen', 'Hao Zhou', 'Yanghua Xiao', 'Lei Li'] | ['cs.CL', 'cs.AI'] | Given a natural language statement, how to verify its veracity against a
large-scale textual knowledge source like Wikipedia? Most existing neural
models make predictions without giving clues about which part of a false claim
goes wrong. In this paper, we propose LOREN, an approach for interpretable fact
verification. We decompose the verification of the whole claim at phrase-level,
where the veracity of the phrases serves as explanations and can be aggregated
into the final verdict according to logical rules. The key insight of LOREN is
to represent claim phrase veracity as three-valued latent variables, which are
regularized by aggregation logical rules. The final claim verification is based
on all latent variables. Thus, LOREN enjoys the additional benefit of
interpretability -- it is easy to explain how it reaches certain results with
claim phrase veracity. Experiments on a public fact verification benchmark show
that LOREN is competitive against previous approaches while enjoying the merit
of faithful and accurate interpretability. The resources of LOREN are available
at: https://github.com/jiangjiechen/LOREN. | 2020-12-25T13:57:04Z | Accepted to AAAI 2022 | null | 10.1609/aaai.v36i10.21291 | null | null | null | null | null | null | null |
2,012.1421 | The Curse of Dense Low-Dimensional Information Retrieval for Large Index
Sizes | ['Nils Reimers', 'Iryna Gurevych'] | ['cs.IR', 'cs.CL'] | Information Retrieval using dense low-dimensional representations recently
became popular and showed out-performance to traditional sparse-representations
like BM25. However, no previous work investigated how dense representations
perform with large index sizes. We show theoretically and empirically that the
performance for dense representations decreases quicker than sparse
representations for increasing index sizes. In extreme cases, this can even
lead to a tipping point where at a certain index size sparse representations
outperform dense representations. We show that this behavior is tightly
connected to the number of dimensions of the representations: The lower the
dimension, the higher the chance for false positives, i.e. returning irrelevant
documents. | 2020-12-28T12:25:25Z | Published at ACL 2021 | null | null | null | null | null | null | null | null | null |
2,012.14353 | DeepHateExplainer: Explainable Hate Speech Detection in Under-resourced
Bengali Language | ['Md. Rezaul Karim', 'Sumon Kanti Dey', 'Tanhim Islam', 'Sagor Sarker', 'Mehadi Hasan Menon', 'Kabir Hossain', 'Bharathi Raja Chakravarthi', 'Md. Azam Hossain', 'Stefan Decker'] | ['cs.CL', 'cs.LG'] | The exponential growths of social media and micro-blogging sites not only
provide platforms for empowering freedom of expressions and individual voices,
but also enables people to express anti-social behaviour like online
harassment, cyberbullying, and hate speech. Numerous works have been proposed
to utilize textual data for social and anti-social behaviour analysis, by
predicting the contexts mostly for highly-resourced languages like English.
However, some languages are under-resourced, e.g., South Asian languages like
Bengali, that lack computational resources for accurate natural language
processing (NLP). In this paper, we propose an explainable approach for hate
speech detection from the under-resourced Bengali language, which we called
DeepHateExplainer. Bengali texts are first comprehensively preprocessed, before
classifying them into political, personal, geopolitical, and religious hates
using a neural ensemble method of transformer-based neural architectures (i.e.,
monolingual Bangla BERT-base, multilingual BERT-cased/uncased, and
XLM-RoBERTa). Important(most and least) terms are then identified using
sensitivity analysis and layer-wise relevance propagation(LRP), before
providing human-interpretable explanations. Finally, we compute
comprehensiveness and sufficiency scores to measure the quality of explanations
w.r.t faithfulness. Evaluations against machine learning~(linear and tree-based
models) and neural networks (i.e., CNN, Bi-LSTM, and Conv-LSTM with word
embeddings) baselines yield F1-scores of 78%, 91%, 89%, and 84%, for political,
personal, geopolitical, and religious hates, respectively, outperforming both
ML and DNN baselines. | 2020-12-28T16:46:03Z | Proceeding of IEEE International Conference on Data Science and
Advanced Analytics (DSAA'2021), October 6-9, 2021, Porto, Portugal | null | null | DeepHateExplainer: Explainable Hate Speech Detection in Under-resourced Bengali Language | ['Md. Rezaul Karim', 'Sumon Dey', 'Bharathi Raja Chakravarthi'] | 2,020 | International Conference on Data Science and Advanced Analytics | 85 | 36 | ['Computer Science'] |
2,012.1474 | LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document
Understanding | ['Yang Xu', 'Yiheng Xu', 'Tengchao Lv', 'Lei Cui', 'Furu Wei', 'Guoxin Wang', 'Yijuan Lu', 'Dinei Florencio', 'Cha Zhang', 'Wanxiang Che', 'Min Zhang', 'Lidong Zhou'] | ['cs.CL'] | Pre-training of text and layout has proved effective in a variety of
visually-rich document understanding tasks due to its effective model
architecture and the advantage of large-scale unlabeled scanned/digital-born
documents. We propose LayoutLMv2 architecture with new pre-training tasks to
model the interaction among text, layout, and image in a single multi-modal
framework. Specifically, with a two-stream multi-modal Transformer encoder,
LayoutLMv2 uses not only the existing masked visual-language modeling task but
also the new text-image alignment and text-image matching tasks, which make it
better capture the cross-modality interaction in the pre-training stage.
Meanwhile, it also integrates a spatial-aware self-attention mechanism into the
Transformer architecture so that the model can fully understand the relative
positional relationship among different text blocks. Experiment results show
that LayoutLMv2 outperforms LayoutLM by a large margin and achieves new
state-of-the-art results on a wide variety of downstream visually-rich document
understanding tasks, including FUNSD (0.7895 $\to$ 0.8420), CORD (0.9493 $\to$
0.9601), SROIE (0.9524 $\to$ 0.9781), Kleister-NDA (0.8340 $\to$ 0.8520),
RVL-CDIP (0.9443 $\to$ 0.9564), and DocVQA (0.7295 $\to$ 0.8672). We made our
model and code publicly available at \url{https://aka.ms/layoutlmv2}. | 2020-12-29T13:01:52Z | ACL 2021 main conference | null | null | null | null | null | null | null | null | null |
2,012.15349 | DynaSent: A Dynamic Benchmark for Sentiment Analysis | ['Christopher Potts', 'Zhengxuan Wu', 'Atticus Geiger', 'Douwe Kiela'] | ['cs.CL'] | We introduce DynaSent ('Dynamic Sentiment'), a new English-language benchmark
task for ternary (positive/negative/neutral) sentiment analysis. DynaSent
combines naturally occurring sentences with sentences created using the
open-source Dynabench Platform, which facilities human-and-model-in-the-loop
dataset creation. DynaSent has a total of 121,634 sentences, each validated by
five crowdworkers, and its development and test splits are designed to produce
chance performance for even the best models we have been able to develop; when
future models solve this task, we will use them to create DynaSent version 2,
continuing the dynamic evolution of this benchmark. Here, we report on the
dataset creation effort, focusing on the steps we took to increase quality and
reduce artifacts. We also present evidence that DynaSent's Neutral category is
more coherent than the comparable category in other benchmarks, and we motivate
training models from scratch for each round over successive fine-tuning. | 2020-12-30T22:38:21Z | null | null | null | null | null | null | null | null | null | null |
2,012.15516 | AraELECTRA: Pre-Training Text Discriminators for Arabic Language
Understanding | ['Wissam Antoun', 'Fady Baly', 'Hazem Hajj'] | ['cs.CL'] | Advances in English language representation enabled a more sample-efficient
pre-training task by Efficiently Learning an Encoder that Classifies Token
Replacements Accurately (ELECTRA). Which, instead of training a model to
recover masked tokens, it trains a discriminator model to distinguish true
input tokens from corrupted tokens that were replaced by a generator network.
On the other hand, current Arabic language representation approaches rely only
on pretraining via masked language modeling. In this paper, we develop an
Arabic language representation model, which we name AraELECTRA. Our model is
pretrained using the replaced token detection objective on large Arabic text
corpora. We evaluate our model on multiple Arabic NLP tasks, including reading
comprehension, sentiment analysis, and named-entity recognition and we show
that AraELECTRA outperforms current state-of-the-art Arabic language
representation models, given the same pretraining data and with even a smaller
model size. | 2020-12-31T09:35:39Z | null | null | null | null | null | null | null | null | null | null |
2,012.1552 | AraGPT2: Pre-Trained Transformer for Arabic Language Generation | ['Wissam Antoun', 'Fady Baly', 'Hazem Hajj'] | ['cs.CL'] | Recently, pre-trained transformer-based architectures have proven to be very
efficient at language modeling and understanding, given that they are trained
on a large enough corpus. Applications in language generation for Arabic are
still lagging in comparison to other NLP advances primarily due to the lack of
advanced Arabic language generation models. In this paper, we develop the first
advanced Arabic language generation model, AraGPT2, trained from scratch on a
large Arabic corpus of internet text and news articles. Our largest model,
AraGPT2-mega, has 1.46 billion parameters, which makes it the largest Arabic
language model available. The Mega model was evaluated and showed success on
different tasks including synthetic news generation, and zero-shot question
answering. For text generation, our best model achieves a perplexity of 29.8 on
held-out Wikipedia articles. A study conducted with human evaluators showed the
significant success of AraGPT2-mega in generating news articles that are
difficult to distinguish from articles written by humans. We thus develop and
release an automatic discriminator model with a 98% percent accuracy in
detecting model-generated text. The models are also publicly available, hoping
to encourage new research directions and applications for Arabic NLP. | 2020-12-31T09:48:05Z | null | null | null | null | null | null | null | null | null | null |
2,012.15562 | UNKs Everywhere: Adapting Multilingual Language Models to New Scripts | ['Jonas Pfeiffer', 'Ivan Vulić', 'Iryna Gurevych', 'Sebastian Ruder'] | ['cs.CL'] | Massively multilingual language models such as multilingual BERT offer
state-of-the-art cross-lingual transfer performance on a range of NLP tasks.
However, due to limited capacity and large differences in pretraining data
sizes, there is a profound performance gap between resource-rich and
resource-poor target languages. The ultimate challenge is dealing with
under-resourced languages not covered at all by the models and written in
scripts unseen during pretraining. In this work, we propose a series of novel
data-efficient methods that enable quick and effective adaptation of pretrained
multilingual models to such low-resource languages and unseen scripts. Relying
on matrix factorization, our methods capitalize on the existing latent
knowledge about multiple languages already available in the pretrained model's
embedding matrix. Furthermore, we show that learning of the new dedicated
embedding matrix in the target language can be improved by leveraging a small
number of vocabulary items (i.e., the so-called lexically overlapping tokens)
shared between mBERT's and target language vocabulary. Our adaptation
techniques offer substantial performance gains for languages with unseen
scripts. We also demonstrate that they can yield improvements for low-resource
languages written in scripts covered by the pretrained model. | 2020-12-31T11:37:28Z | EMNLP 2021 | null | null | null | null | null | null | null | null | null |
2,012.15613 | How Good is Your Tokenizer? On the Monolingual Performance of
Multilingual Language Models | ['Phillip Rust', 'Jonas Pfeiffer', 'Ivan Vulić', 'Sebastian Ruder', 'Iryna Gurevych'] | ['cs.CL'] | In this work, we provide a systematic and comprehensive empirical comparison
of pretrained multilingual language models versus their monolingual
counterparts with regard to their monolingual task performance. We study a set
of nine typologically diverse languages with readily available pretrained
monolingual models on a set of five diverse monolingual downstream tasks. We
first aim to establish, via fair and controlled comparisons, if a gap between
the multilingual and the corresponding monolingual representation of that
language exists, and subsequently investigate the reason for any performance
difference. To disentangle conflating factors, we train new monolingual models
on the same data, with monolingually and multilingually trained tokenizers. We
find that while the pretraining data size is an important factor, a designated
monolingual tokenizer plays an equally important role in the downstream
performance. Our results show that languages that are adequately represented in
the multilingual model's vocabulary exhibit negligible performance decreases
over their monolingual counterparts. We further find that replacing the
original multilingual tokenizer with the specialized monolingual tokenizer
improves the downstream performance of the multilingual model for almost every
task and language. | 2020-12-31T14:11:00Z | ACL 2021 | null | null | null | null | null | null | null | null | null |
2,012.15674 | ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual
Semantics with Monolingual Corpora | ['Xuan Ouyang', 'Shuohuan Wang', 'Chao Pang', 'Yu Sun', 'Hao Tian', 'Hua Wu', 'Haifeng Wang'] | ['cs.CL'] | Recent studies have demonstrated that pre-trained cross-lingual models
achieve impressive performance in downstream cross-lingual tasks. This
improvement benefits from learning a large amount of monolingual and parallel
corpora. Although it is generally acknowledged that parallel corpora are
critical for improving the model performance, existing methods are often
constrained by the size of parallel corpora, especially for low-resource
languages. In this paper, we propose ERNIE-M, a new training method that
encourages the model to align the representation of multiple languages with
monolingual corpora, to overcome the constraint that the parallel corpus size
places on the model performance. Our key insight is to integrate
back-translation into the pre-training process. We generate pseudo-parallel
sentence pairs on a monolingual corpus to enable the learning of semantic
alignments between different languages, thereby enhancing the semantic modeling
of cross-lingual models. Experimental results show that ERNIE-M outperforms
existing cross-lingual models and delivers new state-of-the-art results in
various cross-lingual downstream tasks. | 2020-12-31T15:52:27Z | Accepted by EMNLP 2021 (main conference, long paper) | null | null | null | null | null | null | null | null | null |
2,012.15761 | Learning from the Worst: Dynamically Generated Datasets to Improve
Online Hate Detection | ['Bertie Vidgen', 'Tristan Thrush', 'Zeerak Waseem', 'Douwe Kiela'] | ['cs.CL', 'cs.LG'] | We present a human-and-model-in-the-loop process for dynamically generating
datasets and training better performing and more robust hate detection models.
We provide a new dataset of ~40,000 entries, generated and labelled by trained
annotators over four rounds of dynamic data creation. It includes ~15,000
challenging perturbations and each hateful entry has fine-grained labels for
the type and target of hate. Hateful entries make up 54% of the dataset, which
is substantially higher than comparable datasets. We show that model
performance is substantially improved using this approach. Models trained on
later rounds of data collection perform better on test sets and are harder for
annotators to trick. They also perform better on HateCheck, a suite of
functional tests for online hate detection. We provide the code, dataset and
annotation guidelines for other researchers to use. Accepted at ACL 2021. | 2020-12-31T17:36:48Z | null | null | null | null | null | null | null | null | null | null |
2,012.15828 | MiniLMv2: Multi-Head Self-Attention Relation Distillation for
Compressing Pretrained Transformers | ['Wenhui Wang', 'Hangbo Bao', 'Shaohan Huang', 'Li Dong', 'Furu Wei'] | ['cs.CL'] | We generalize deep self-attention distillation in MiniLM (Wang et al., 2020)
by only using self-attention relation distillation for task-agnostic
compression of pretrained Transformers. In particular, we define multi-head
self-attention relations as scaled dot-product between the pairs of query, key,
and value vectors within each self-attention module. Then we employ the above
relational knowledge to train the student model. Besides its simplicity and
unified principle, more favorably, there is no restriction in terms of the
number of student's attention heads, while most previous work has to guarantee
the same head number between teacher and student. Moreover, the fine-grained
self-attention relations tend to fully exploit the interaction knowledge
learned by Transformer. In addition, we thoroughly examine the layer selection
strategy for teacher models, rather than just relying on the last layer as in
MiniLM. We conduct extensive experiments on compressing both monolingual and
multilingual pretrained models. Experimental results demonstrate that our
models distilled from base-size and large-size teachers (BERT, RoBERTa and
XLM-R) outperform the state-of-the-art. | 2020-12-31T18:51:26Z | Monolingual and multilingual distilled models:
https://github.com/microsoft/unilm/tree/master/minilm | null | null | null | null | null | null | null | null | null |
2,012.1584 | Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective
with Transformers | ['Sixiao Zheng', 'Jiachen Lu', 'Hengshuang Zhao', 'Xiatian Zhu', 'Zekun Luo', 'Yabiao Wang', 'Yanwei Fu', 'Jianfeng Feng', 'Tao Xiang', 'Philip H. S. Torr', 'Li Zhang'] | ['cs.CV'] | Most recent semantic segmentation methods adopt a fully-convolutional network
(FCN) with an encoder-decoder architecture. The encoder progressively reduces
the spatial resolution and learns more abstract/semantic visual concepts with
larger receptive fields. Since context modeling is critical for segmentation,
the latest efforts have been focused on increasing the receptive field, through
either dilated/atrous convolutions or inserting attention modules. However, the
encoder-decoder based FCN architecture remains unchanged. In this paper, we aim
to provide an alternative perspective by treating semantic segmentation as a
sequence-to-sequence prediction task. Specifically, we deploy a pure
transformer (ie, without convolution and resolution reduction) to encode an
image as a sequence of patches. With the global context modeled in every layer
of the transformer, this encoder can be combined with a simple decoder to
provide a powerful segmentation model, termed SEgmentation TRansformer (SETR).
Extensive experiments show that SETR achieves new state of the art on ADE20K
(50.28% mIoU), Pascal Context (55.83% mIoU) and competitive results on
Cityscapes. Particularly, we achieve the first position in the highly
competitive ADE20K test server leaderboard on the day of submission. | 2020-12-31T18:55:57Z | CVPR 2021. Project page at https://fudan-zvg.github.io/SETR/ | null | null | Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers | ['Sixiao Zheng', 'Jiachen Lu', 'Hengshuang Zhao', 'Xiatian Zhu', 'Zekun Luo', 'Yabiao Wang', 'Yanwei Fu', 'Jianfeng Feng', 'T. Xiang', 'Philip H. S. Torr', 'Li Zhang'] | 2,020 | Computer Vision and Pattern Recognition | 2,928 | 63 | ['Computer Science'] |
2,023.12345 | null | [] | [''] | null | null | null | null | null | null | null | null | null | null | null | null |
2,101.00027 | The Pile: An 800GB Dataset of Diverse Text for Language Modeling | ['Leo Gao', 'Stella Biderman', 'Sid Black', 'Laurence Golding', 'Travis Hoppe', 'Charles Foster', 'Jason Phang', 'Horace He', 'Anish Thite', 'Noa Nabeshima', 'Shawn Presser', 'Connor Leahy'] | ['cs.CL'] | Recent work has demonstrated that increased training dataset diversity
improves general cross-domain knowledge and downstream generalization
capability for large-scale language models. With this in mind, we present
\textit{the Pile}: an 825 GiB English text corpus targeted at training
large-scale language models. The Pile is constructed from 22 diverse
high-quality subsets -- both existing and newly constructed -- many of which
derive from academic or professional sources. Our evaluation of the untuned
performance of GPT-2 and GPT-3 on the Pile shows that these models struggle on
many of its components, such as academic writing. Conversely, models trained on
the Pile improve significantly over both Raw CC and CC-100 on all components of
the Pile, while improving performance on downstream evaluations. Through an
in-depth exploratory analysis, we document potentially concerning aspects of
the data for prospective users. We make publicly available the code used in its
construction. | 2020-12-31T19:00:10Z | null | null | null | null | null | null | null | null | null | null |
2,101.0019 | Prefix-Tuning: Optimizing Continuous Prompts for Generation | ['Xiang Lisa Li', 'Percy Liang'] | ['cs.CL'] | Fine-tuning is the de facto way to leverage large pretrained language models
to perform downstream tasks. However, it modifies all the language model
parameters and therefore necessitates storing a full copy for each task. In
this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning
for natural language generation tasks, which keeps language model parameters
frozen, but optimizes a small continuous task-specific vector (called the
prefix). Prefix-tuning draws inspiration from prompting, allowing subsequent
tokens to attend to this prefix as if it were "virtual tokens". We apply
prefix-tuning to GPT-2 for table-to-text generation and to BART for
summarization. We find that by learning only 0.1\% of the parameters,
prefix-tuning obtains comparable performance in the full data setting,
outperforms fine-tuning in low-data settings, and extrapolates better to
examples with topics unseen during training. | 2021-01-01T08:00:36Z | null | null | null | Prefix-Tuning: Optimizing Continuous Prompts for Generation | ['Xiang Lisa Li', 'Percy Liang'] | 2,021 | Annual Meeting of the Association for Computational Linguistics | 4,340 | 55 | ['Computer Science'] |
2,101.00204 | BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource
Language Understanding Evaluation in Bangla | ['Abhik Bhattacharjee', 'Tahmid Hasan', 'Wasi Uddin Ahmad', 'Kazi Samin', 'Md Saiful Islam', 'Anindya Iqbal', 'M. Sohel Rahman', 'Rifat Shahriyar'] | ['cs.CL'] | In this work, we introduce BanglaBERT, a BERT-based Natural Language
Understanding (NLU) model pretrained in Bangla, a widely spoken yet
low-resource language in the NLP literature. To pretrain BanglaBERT, we collect
27.5 GB of Bangla pretraining data (dubbed `Bangla2B+') by crawling 110 popular
Bangla sites. We introduce two downstream task datasets on natural language
inference and question answering and benchmark on four diverse NLU tasks
covering text classification, sequence labeling, and span prediction. In the
process, we bring them under the first-ever Bangla Language Understanding
Benchmark (BLUB). BanglaBERT achieves state-of-the-art results outperforming
multilingual and monolingual models. We are making the models, datasets, and a
leaderboard publicly available at https://github.com/csebuetnlp/banglabert to
advance Bangla NLP. | 2021-01-01T09:28:45Z | Findings of North American Chapter of the Association for
Computational Linguistics, NAACL 2022 (camera-ready) | null | null | BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in Bangla | ['Abhik Bhattacharjee', 'Tahmid Hasan', 'Kazi Samin Mubasshir', 'Md. Saiful Islam', 'Wasi Uddin Ahmad', 'Anindya Iqbal', 'M. Rahman', 'Rifat Shahriyar'] | 2,021 | NAACL-HLT | 180 | 58 | ['Computer Science'] |
2,101.0039 | VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation | ['Changhan Wang', 'Morgane Rivière', 'Ann Lee', 'Anne Wu', 'Chaitanya Talnikar', 'Daniel Haziza', 'Mary Williamson', 'Juan Pino', 'Emmanuel Dupoux'] | ['cs.CL', 'eess.AS'] | We introduce VoxPopuli, a large-scale multilingual corpus providing 100K
hours of unlabelled speech data in 23 languages. It is the largest open data to
date for unsupervised representation learning as well as semi-supervised
learning. VoxPopuli also contains 1.8K hours of transcribed speeches in 16
languages and their aligned oral interpretations into 5 other languages
totaling 5.1K hours. We provide speech recognition baselines and validate the
versatility of VoxPopuli unlabelled data in semi-supervised learning under
challenging out-of-domain settings. We will release the corpus at
https://github.com/facebookresearch/voxpopuli under an open license. | 2021-01-02T07:24:21Z | Accepted to ACL 2021 (long paper) | null | null | null | null | null | null | null | null | null |
2,101.00406 | CDLM: Cross-Document Language Modeling | ['Avi Caciularu', 'Arman Cohan', 'Iz Beltagy', 'Matthew E. Peters', 'Arie Cattan', 'Ido Dagan'] | ['cs.CL'] | We introduce a new pretraining approach geared for multi-document language
modeling, incorporating two key ideas into the masked language modeling
self-supervised objective. First, instead of considering documents in
isolation, we pretrain over sets of multiple related documents, encouraging the
model to learn cross-document relationships. Second, we improve over recent
long-range transformers by introducing dynamic global attention that has access
to the entire input to predict masked tokens. We release CDLM (Cross-Document
Language Model), a new general language model for multi-document setting that
can be easily applied to downstream tasks. Our extensive analysis shows that
both ideas are essential for the success of CDLM, and work in synergy to set
new state-of-the-art results for several multi-text tasks. Code and models are
available at https://github.com/aviclu/CDLM. | 2021-01-02T09:01:39Z | EMNLP 2021, findings | null | null | null | null | null | null | null | null | null |
2,101.00416 | Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting | ['Wangchunshu Zhou', 'Tao Ge', 'Canwen Xu', 'Ke Xu', 'Furu Wei'] | ['cs.CL'] | In this paper, we generalize text infilling (e.g., masked language models) by
proposing Sequence Span Rewriting (SSR) as a self-supervised
sequence-to-sequence (seq2seq) pre-training objective. SSR provides more
fine-grained learning signals for text representations by supervising the model
to rewrite imperfect spans to ground truth, and it is more consistent than text
infilling with many downstream seq2seq tasks that rewrite a source sentences
into a target sentence. Our experiments with T5 models on various seq2seq tasks
show that SSR can substantially improve seq2seq pre-training. Moreover, we
observe SSR is especially helpful to improve pre-training a small-size seq2seq
model with a powerful imperfect span generator, which indicates a new
perspective of transferring knowledge from a large model to a smaller model for
seq2seq pre-training. | 2021-01-02T10:27:11Z | null | null | null | null | null | null | null | null | null | null |
2,101.00434 | Coreference Resolution without Span Representations | ['Yuval Kirstain', 'Ori Ram', 'Omer Levy'] | ['cs.CL'] | The introduction of pretrained language models has reduced many complex
task-specific NLP models to simple lightweight layers. An exception to this
trend is coreference resolution, where a sophisticated task-specific model is
appended to a pretrained transformer encoder. While highly effective, the model
has a very large memory footprint -- primarily due to dynamically-constructed
span and span-pair representations -- which hinders the processing of complete
documents and the ability to train on multiple instances in a single batch. We
introduce a lightweight end-to-end coreference model that removes the
dependency on span representations, handcrafted features, and heuristics. Our
model performs competitively with the current standard model, while being
simpler and more efficient. | 2021-01-02T11:46:51Z | Accepted to ACL 2021 | null | null | Coreference Resolution without Span Representations | ['Yuval Kirstain', 'Ori Ram', 'Omer Levy'] | 2,021 | Annual Meeting of the Association for Computational Linguistics | 72 | 18 | ['Computer Science'] |
2,101.00436 | Baleen: Robust Multi-Hop Reasoning at Scale via Condensed Retrieval | ['Omar Khattab', 'Christopher Potts', 'Matei Zaharia'] | ['cs.CL', 'cs.IR'] | Multi-hop reasoning (i.e., reasoning across two or more documents) is a key
ingredient for NLP models that leverage large corpora to exhibit broad
knowledge. To retrieve evidence passages, multi-hop models must contend with a
fast-growing search space across the hops, represent complex queries that
combine multiple information needs, and resolve ambiguity about the best order
in which to hop between training passages. We tackle these problems via Baleen,
a system that improves the accuracy of multi-hop retrieval while learning
robustly from weak training signals in the many-hop setting. To tame the search
space, we propose condensed retrieval, a pipeline that summarizes the retrieved
passages after each hop into a single compact context. To model complex
queries, we introduce a focused late interaction retriever that allows
different parts of the same query representation to match disparate relevant
passages. Lastly, to infer the hopping dependencies among unordered training
passages, we devise latent hop ordering, a weak-supervision strategy in which
the trained retriever itself selects the sequence of hops. We evaluate Baleen
on retrieval for two-hop question answering and many-hop claim verification,
establishing state-of-the-art performance. | 2021-01-02T11:52:20Z | NeurIPS 2021 (Spotlight) | null | null | Baleen: Robust Multi-Hop Reasoning at Scale via Condensed Retrieval | ['O. Khattab', 'Christopher Potts', 'M. Zaharia'] | 2,021 | Neural Information Processing Systems | 58 | 32 | ['Computer Science'] |
2,101.00438 | Few-Shot Question Answering by Pretraining Span Selection | ['Ori Ram', 'Yuval Kirstain', 'Jonathan Berant', 'Amir Globerson', 'Omer Levy'] | ['cs.CL'] | In several question answering benchmarks, pretrained models have reached
human parity through fine-tuning on an order of 100,000 annotated questions and
answers. We explore the more realistic few-shot setting, where only a few
hundred training examples are available, and observe that standard models
perform poorly, highlighting the discrepancy between current pretraining
objectives and question answering. We propose a new pretraining scheme tailored
for question answering: recurring span selection. Given a passage with multiple
sets of recurring spans, we mask in each set all recurring spans but one, and
ask the model to select the correct span in the passage for each masked span.
Masked spans are replaced with a special token, viewed as a question
representation, that is later used during fine-tuning to select the answer
span. The resulting model obtains surprisingly good results on multiple
benchmarks (e.g., 72.7 F1 on SQuAD with only 128 training examples), while
maintaining competitive performance in the high-resource setting. | 2021-01-02T11:58:44Z | Accepted to ACL 2021 | null | null | null | null | null | null | null | null | null |
2,101.01039 | Improving reference mining in patents with BERT | ['Ken Voskuil', 'Suzan Verberne'] | ['cs.IR', 'cs.CL', 'H.3.1; I.2.7'] | In this paper we address the challenge of extracting scientific references
from patents. We approach the problem as a sequence labelling task and
investigate the merits of BERT models to the extraction of these long
sequences. References in patents to scientific literature are relevant to study
the connection between science and industry. Most prior work only uses the
front-page citations for this analysis, which are provided in the metadata of
patent archives. In this paper we build on prior work using Conditional Random
Fields (CRF) and Flair for reference extraction. We improve the quality of the
training data and train three BERT-based models on the labelled data (BERT,
bioBERT, sciBERT). We find that the improved training data leads to a large
improvement in the quality of the trained models. In addition, the BERT models
beat CRF and Flair, with recall scores around 97% obtained with cross
validation. With the best model we label a large collection of 33 thousand
patents, extract the citations, and match them to publications in the Web of
Science database. We extract 50% more references than with the old training
data and methods: 735 thousand references in total. With these
patent-publication links, follow-up research will further analyze which types
of scientific work lead to inventions. | 2021-01-04T15:56:21Z | 10 pages, 3 figures | Published in the 11th International Workshop on
Bibliometric-enhanced Information Retrieval (BIR 2021) | null | null | null | null | null | null | null | null |
2,101.01213 | Improving Portuguese Semantic Role Labeling with Transformers and
Transfer Learning | ['Sofia Oliveira', 'Daniel Loureiro', 'Alípio Jorge'] | ['cs.CL'] | The Natural Language Processing task of determining "Who did what to whom" is
called Semantic Role Labeling. For English, recent methods based on Transformer
models have allowed for major improvements in this task over the previous state
of the art. However, for low resource languages, like Portuguese, currently
available semantic role labeling models are hindered by scarce training data.
In this paper, we explore a model architecture with only a pre-trained
Transformer-based model, a linear layer, softmax and Viterbi decoding. We
substantially improve the state-of-the-art performance in Portuguese by over 15
F1. Additionally, we improve semantic role labeling results in Portuguese
corpora by exploiting cross-lingual transfer learning using multilingual
pre-trained models, and transfer learning from dependency parsing in
Portuguese, evaluating the various proposed approaches empirically. | 2021-01-04T19:56:01Z | 30 pages, 3 figures; Fixed broken links in References | 2021 IEEE 8th International Conference on Data Science and
Advanced Analytics (DSAA), 2021, pp. 1-9 | 10.1109/DSAA53316.2021.9564238 | null | null | null | null | null | null | null |
2,101.01321 | I-BERT: Integer-only BERT Quantization | ['Sehoon Kim', 'Amir Gholami', 'Zhewei Yao', 'Michael W. Mahoney', 'Kurt Keutzer'] | ['cs.CL'] | Transformer based models, like BERT and RoBERTa, have achieved
state-of-the-art results in many Natural Language Processing tasks. However,
their memory footprint, inference latency, and power consumption are
prohibitive efficient inference at the edge, and even at the data center. While
quantization can be a viable solution for this, previous work on quantizing
Transformer based models use floating-point arithmetic during inference, which
cannot efficiently utilize integer-only logical units such as the recent Turing
Tensor Cores, or traditional integer-only ARM processors. In this work, we
propose I-BERT, a novel quantization scheme for Transformer based models that
quantizes the entire inference with integer-only arithmetic. Based on
lightweight integer-only approximation methods for nonlinear operations, e.g.,
GELU, Softmax, and Layer Normalization, I-BERT performs an end-to-end
integer-only BERT inference without any floating point calculation. We evaluate
our approach on GLUE downstream tasks using RoBERTa-Base/Large. We show that
for both cases, I-BERT achieves similar (and slightly higher) accuracy as
compared to the full-precision baseline. Furthermore, our preliminary
implementation of I-BERT shows a speedup of 2.4-4.0x for INT8 inference on a T4
GPU system as compared to FP32 inference. The framework has been developed in
PyTorch and has been open-sourced. | 2021-01-05T02:42:58Z | null | ICML 2021 (Oral) | null | null | null | null | null | null | null | null |
2,101.02235 | Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit
Reasoning Strategies | ['Mor Geva', 'Daniel Khashabi', 'Elad Segal', 'Tushar Khot', 'Dan Roth', 'Jonathan Berant'] | ['cs.CL'] | A key limitation in current datasets for multi-hop reasoning is that the
required steps for answering the question are mentioned in it explicitly. In
this work, we introduce StrategyQA, a question answering (QA) benchmark where
the required reasoning steps are implicit in the question, and should be
inferred using a strategy. A fundamental challenge in this setup is how to
elicit such creative questions from crowdsourcing workers, while covering a
broad range of potential strategies. We propose a data collection procedure
that combines term-based priming to inspire annotators, careful control over
the annotator population, and adversarial filtering for eliminating reasoning
shortcuts. Moreover, we annotate each question with (1) a decomposition into
reasoning steps for answering it, and (2) Wikipedia paragraphs that contain the
answers to each step. Overall, StrategyQA includes 2,780 examples, each
consisting of a strategy question, its decomposition, and evidence paragraphs.
Analysis shows that questions in StrategyQA are short, topic-diverse, and cover
a wide range of strategies. Empirically, we show that humans perform well (87%)
on this task, while our best baseline reaches an accuracy of $\sim$66%. | 2021-01-06T19:14:23Z | Accepted for publication in Transactions of the Association for
Computational Linguistics (TACL), 2021. Author's final version | null | null | null | null | null | null | null | null | null |
2,101.02477 | GAN-Control: Explicitly Controllable GANs | ['Alon Shoshan', 'Nadav Bhonker', 'Igor Kviatkovsky', 'Gerard Medioni'] | ['cs.CV'] | We present a framework for training GANs with explicit control over generated
images. We are able to control the generated image by settings exact attributes
such as age, pose, expression, etc. Most approaches for editing GAN-generated
images achieve partial control by leveraging the latent space disentanglement
properties, obtained implicitly after standard GAN training. Such methods are
able to change the relative intensity of certain attributes, but not explicitly
set their values. Recently proposed methods, designed for explicit control over
human faces, harness morphable 3D face models to allow fine-grained control
capabilities in GANs. Unlike these methods, our control is not constrained to
morphable 3D face model parameters and is extendable beyond the domain of human
faces. Using contrastive learning, we obtain GANs with an explicitly
disentangled latent space. This disentanglement is utilized to train
control-encoders mapping human-interpretable inputs to suitable latent vectors,
thus allowing explicit control. In the domain of human faces we demonstrate
control over identity, age, pose, expression, hair color and illumination. We
also demonstrate control capabilities of our framework in the domains of
painted portraits and dog image generation. We demonstrate that our approach
achieves state-of-the-art performance both qualitatively and quantitatively. | 2021-01-07T10:54:17Z | null | null | null | null | null | null | null | null | null | null |
2,101.03697 | RepVGG: Making VGG-style ConvNets Great Again | ['Xiaohan Ding', 'Xiangyu Zhang', 'Ningning Ma', 'Jungong Han', 'Guiguang Ding', 'Jian Sun'] | ['cs.CV', 'cs.AI', 'cs.LG'] | We present a simple but powerful architecture of convolutional neural
network, which has a VGG-like inference-time body composed of nothing but a
stack of 3x3 convolution and ReLU, while the training-time model has a
multi-branch topology. Such decoupling of the training-time and inference-time
architecture is realized by a structural re-parameterization technique so that
the model is named RepVGG. On ImageNet, RepVGG reaches over 80% top-1 accuracy,
which is the first time for a plain model, to the best of our knowledge. On
NVIDIA 1080Ti GPU, RepVGG models run 83% faster than ResNet-50 or 101% faster
than ResNet-101 with higher accuracy and show favorable accuracy-speed
trade-off compared to the state-of-the-art models like EfficientNet and RegNet.
The code and trained models are available at
https://github.com/megvii-model/RepVGG. | 2021-01-11T04:46:11Z | CVPR 2021 | null | null | null | null | null | null | null | null | null |
2,101.03961 | Switch Transformers: Scaling to Trillion Parameter Models with Simple
and Efficient Sparsity | ['William Fedus', 'Barret Zoph', 'Noam Shazeer'] | ['cs.LG', 'cs.AI'] | In deep learning, models typically reuse the same parameters for all inputs.
Mixture of Experts (MoE) defies this and instead selects different parameters
for each incoming example. The result is a sparsely-activated model -- with
outrageous numbers of parameters -- but a constant computational cost. However,
despite several notable successes of MoE, widespread adoption has been hindered
by complexity, communication costs and training instability -- we address these
with the Switch Transformer. We simplify the MoE routing algorithm and design
intuitive improved models with reduced communication and computational costs.
Our proposed training techniques help wrangle the instabilities and we show
large sparse models may be trained, for the first time, with lower precision
(bfloat16) formats. We design models based off T5-Base and T5-Large to obtain
up to 7x increases in pre-training speed with the same computational resources.
These improvements extend into multilingual settings where we measure gains
over the mT5-Base version across all 101 languages. Finally, we advance the
current scale of language models by pre-training up to trillion parameter
models on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over the
T5-XXL model. | 2021-01-11T16:11:52Z | JMLR | null | null | Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity | ['W. Fedus', 'Barret Zoph', 'Noam M. Shazeer'] | 2,021 | Journal of machine learning research | 2,249 | 65 | ['Computer Science'] |
2,101.04061 | Towards Real-World Blind Face Restoration with Generative Facial Prior | ['Xintao Wang', 'Yu Li', 'Honglun Zhang', 'Ying Shan'] | ['cs.CV'] | Blind face restoration usually relies on facial priors, such as facial
geometry prior or reference prior, to restore realistic and faithful details.
However, very low-quality inputs cannot offer accurate geometric prior while
high-quality references are inaccessible, limiting the applicability in
real-world scenarios. In this work, we propose GFP-GAN that leverages rich and
diverse priors encapsulated in a pretrained face GAN for blind face
restoration. This Generative Facial Prior (GFP) is incorporated into the face
restoration process via novel channel-split spatial feature transform layers,
which allow our method to achieve a good balance of realness and fidelity.
Thanks to the powerful generative facial prior and delicate designs, our
GFP-GAN could jointly restore facial details and enhance colors with just a
single forward pass, while GAN inversion methods require expensive
image-specific optimization at inference. Extensive experiments show that our
method achieves superior performance to prior art on both synthetic and
real-world datasets. | 2021-01-11T17:54:38Z | CVPR 2021. Codes: https://github.com/TencentARC/GFPGAN | null | null | null | null | null | null | null | null | null |
2,101.04615 | Toward Effective Automated Content Analysis via Crowdsourcing | ['Jiele Wu', 'Chau-Wai Wong', 'Xinyan Zhao', 'Xianpeng Liu'] | ['cs.CL', 'cs.IR', 'cs.LG'] | Many computer scientists use the aggregated answers of online workers to
represent ground truth. Prior work has shown that aggregation methods such as
majority voting are effective for measuring relatively objective features. For
subjective features such as semantic connotation, online workers, known for
optimizing their hourly earnings, tend to deteriorate in the quality of their
responses as they work longer. In this paper, we aim to address this issue by
proposing a quality-aware semantic data annotation system. We observe that with
timely feedback on workers' performance quantified by quality scores, better
informed online workers can maintain the quality of their labeling throughout
an extended period of time. We validate the effectiveness of the proposed
annotation system through i) evaluating performance based on an expert-labeled
dataset, and ii) demonstrating machine learning tasks that can lead to
consistent learning behavior with 70%-80% accuracy. Our results suggest that
with our system, researchers can collect high-quality answers of subjective
semantic features at a large scale. | 2021-01-12T17:14:18Z | Corrected minor typos. Camera-ready version for the 2021 IEEE
International Conference on Multimedia and Expo (ICME) | null | null | Toward Effective Automated Content Analysis via Crowdsourcing | ['Jiele Wu', 'Chau-Wai Wong', 'Xinyan Zhao', 'Xianpeng Liu'] | 2,021 | IEEE International Conference on Multimedia and Expo | 4 | 19 | ['Computer Science'] |
2,101.04704 | Boundary-Aware Segmentation Network for Mobile and Web Applications | ['Xuebin Qin', 'Deng-Ping Fan', 'Chenyang Huang', 'Cyril Diagne', 'Zichen Zhang', "Adrià Cabeza Sant'Anna", 'Albert Suàrez', 'Martin Jagersand', 'Ling Shao'] | ['cs.CV'] | Although deep models have greatly improved the accuracy and robustness of
image segmentation, obtaining segmentation results with highly accurate
boundaries and fine structures is still a challenging problem. In this paper,
we propose a simple yet powerful Boundary-Aware Segmentation Network (BASNet),
which comprises a predict-refine architecture and a hybrid loss, for highly
accurate image segmentation. The predict-refine architecture consists of a
densely supervised encoder-decoder network and a residual refinement module,
which are respectively used to predict and refine a segmentation probability
map. The hybrid loss is a combination of the binary cross entropy, structural
similarity and intersection-over-union losses, which guide the network to learn
three-level (ie, pixel-, patch- and map- level) hierarchy representations. We
evaluate our BASNet on two reverse tasks including salient object segmentation,
camouflaged object segmentation, showing that it achieves very competitive
performance with sharp segmentation boundaries. Importantly, BASNet runs at
over 70 fps on a single GPU which benefits many potential real applications.
Based on BASNet, we further developed two (close to) commercial applications:
AR COPY & PASTE, in which BASNet is integrated with augmented reality for
"COPYING" and "PASTING" real-world objects, and OBJECT CUT, which is a
web-based tool for automatic object background removal. Both applications have
already drawn huge amount of attention and have important real-world impacts.
The code and two applications will be publicly available at:
https://github.com/NathanUA/BASNet. | 2021-01-12T19:20:26Z | 18 pages, 16 figures, submitted to TPAMI | null | null | Boundary-Aware Segmentation Network for Mobile and Web Applications | ['Xuebin Qin', 'Deng-Ping Fan', 'Chenyang Huang', 'Cyril Diagne', 'Zichen Zhang', "Adria Cabeza Sant'Anna", 'Albert Suàrez', 'Martin Jägersand', 'Ling Shao'] | 2,021 | arXiv.org | 81 | 149 | ['Computer Science'] |
2,101.04775 | Towards Faster and Stabilized GAN Training for High-fidelity Few-shot
Image Synthesis | ['Bingchen Liu', 'Yizhe Zhu', 'Kunpeng Song', 'Ahmed Elgammal'] | ['cs.CV', 'cs.AI'] | Training Generative Adversarial Networks (GAN) on high-fidelity images
usually requires large-scale GPU-clusters and a vast number of training images.
In this paper, we study the few-shot image synthesis task for GAN with minimum
computing cost. We propose a light-weight GAN structure that gains superior
quality on 1024*1024 resolution. Notably, the model converges from scratch with
just a few hours of training on a single RTX-2080 GPU, and has a consistent
performance, even with less than 100 training samples. Two technique designs
constitute our work, a skip-layer channel-wise excitation module and a
self-supervised discriminator trained as a feature-encoder. With thirteen
datasets covering a wide variety of image domains (The datasets and code are
available at: https://github.com/odegeasslbc/FastGAN-pytorch), we show our
model's superior performance compared to the state-of-the-art StyleGAN2, when
data and computing budget are limited. | 2021-01-12T22:02:54Z | ICLR-2021 | null | null | null | null | null | null | null | null | null |
2,101.05667 | The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained
Sequence-to-Sequence Models | ['Ronak Pradeep', 'Rodrigo Nogueira', 'Jimmy Lin'] | ['cs.IR', 'cs.CL'] | We propose a design pattern for tackling text ranking problems, dubbed
"Expando-Mono-Duo", that has been empirically validated for a number of ad hoc
retrieval tasks in different domains. At the core, our design relies on
pretrained sequence-to-sequence models within a standard multi-stage ranking
architecture. "Expando" refers to the use of document expansion techniques to
enrich keyword representations of texts prior to inverted indexing. "Mono" and
"Duo" refer to components in a reranking pipeline based on a pointwise model
and a pairwise model that rerank initial candidates retrieved using keyword
search. We present experimental results from the MS MARCO passage and document
ranking tasks, the TREC 2020 Deep Learning Track, and the TREC-COVID challenge
that validate our design. In all these tasks, we achieve effectiveness that is
at or near the state of the art, in some cases using a zero-shot approach that
does not exploit any training data from the target task. To support
replicability, implementations of our design pattern are open-sourced in the
Pyserini IR toolkit and PyGaggle neural reranking library. | 2021-01-14T15:29:54Z | null | null | null | null | null | null | null | null | null | null |
2,101.05716 | SICKNL: A Dataset for Dutch Natural Language Inference | ['Gijs Wijnholds', 'Michael Moortgat'] | ['cs.CL'] | We present SICK-NL (read: signal), a dataset targeting Natural Language
Inference in Dutch. SICK-NL is obtained by translating the SICK dataset of
Marelli et al. (2014)from English into Dutch. Having a parallel inference
dataset allows us to compare both monolingual and multilingual NLP models for
English and Dutch on the two tasks. In the paper, we motivate and detail the
translation process, perform a baseline evaluation on both the original SICK
dataset and its Dutch incarnation SICK-NL, taking inspiration from Dutch
skipgram embeddings and contextualised embedding models. In addition, we
encapsulate two phenomena encountered in the translation to formulate stress
tests and verify how well the Dutch models capture syntactic restructurings
that do not affect semantics. Our main finding is all models perform worse on
SICK-NL than on SICK, indicating that the Dutch dataset is more challenging
than the English original. Results on the stress tests show that models don't
fully capture word order freedom in Dutch, warranting future systematic
studies. | 2021-01-14T16:42:57Z | To appear at EACL 2021 | null | null | SICK-NL: A Dataset for Dutch Natural Language Inference | ['G. Wijnholds', 'M. Moortgat'] | 2,021 | Conference of the European Chapter of the Association for Computational Linguistics | 26 | 21 | ['Computer Science'] |
2,101.06085 | Deep Dual-resolution Networks for Real-time and Accurate Semantic
Segmentation of Road Scenes | ['Yuanduo Hong', 'Huihui Pan', 'Weichao Sun', 'Yisong Jia'] | ['cs.CV'] | Semantic segmentation is a key technology for autonomous vehicles to
understand the surrounding scenes. The appealing performances of contemporary
models usually come at the expense of heavy computations and lengthy inference
time, which is intolerable for self-driving. Using light-weight architectures
(encoder-decoder or two-pathway) or reasoning on low-resolution images, recent
methods realize very fast scene parsing, even running at more than 100 FPS on a
single 1080Ti GPU. However, there is still a significant gap in performance
between these real-time methods and the models based on dilation backbones. To
tackle this problem, we proposed a family of efficient backbones specially
designed for real-time semantic segmentation. The proposed deep dual-resolution
networks (DDRNets) are composed of two deep branches between which multiple
bilateral fusions are performed. Additionally, we design a new contextual
information extractor named Deep Aggregation Pyramid Pooling Module (DAPPM) to
enlarge effective receptive fields and fuse multi-scale context based on
low-resolution feature maps. Our method achieves a new state-of-the-art
trade-off between accuracy and speed on both Cityscapes and CamVid dataset. In
particular, on a single 2080Ti GPU, DDRNet-23-slim yields 77.4% mIoU at 102 FPS
on Cityscapes test set and 74.7% mIoU at 230 FPS on CamVid test set. With
widely used test augmentation, our method is superior to most state-of-the-art
models and requires much less computation. Codes and trained models are
available online. | 2021-01-15T12:56:18Z | 12 pages, 7 figures. This work has been submitted to the IEEE for
possible publication | null | null | null | null | null | null | null | null | null |
2,101.0684 | ZeRO-Offload: Democratizing Billion-Scale Model Training | ['Jie Ren', 'Samyam Rajbhandari', 'Reza Yazdani Aminabadi', 'Olatunji Ruwase', 'Shuangyan Yang', 'Minjia Zhang', 'Dong Li', 'Yuxiong He'] | ['cs.DC', 'cs.LG'] | Large-scale model training has been a playing ground for a limited few
requiring complex model refactoring and access to prohibitively expensive GPU
clusters. ZeRO-Offload changes the large model training landscape by making
large model training accessible to nearly everyone. It can train models with
over 13 billion parameters on a single GPU, a 10x increase in size compared to
popular framework such as PyTorch, and it does so without requiring any model
change from the data scientists or sacrificing computational efficiency.
ZeRO-Offload enables large model training by offloading data and compute to
CPU. To preserve compute efficiency, it is designed to minimize the data
movement to/from GPU, and reduce CPU compute time while maximizing memory
savings on GPU. As a result, ZeRO-Offload can achieve 40 TFlops/GPU on a single
NVIDIA V100 GPU for 10B parameter model compared to 30TF using PyTorch alone
for a 1.4B parameter model, the largest that can be trained without running out
of memory. ZeRO-Offload is also designed to scale on multiple-GPUs when
available, offering near linear speedup on up to 128 GPUs. Additionally, it can
work together with model parallelism to train models with over 70 billion
parameters on a single DGX-2 box, a 4.5x increase in model size compared to
using model parallelism alone. By combining compute and memory efficiency with
ease-of-use, ZeRO-Offload democratizes large-scale model training making it
accessible to even data scientists with access to just a single GPU. | 2021-01-18T02:11:25Z | null | null | null | null | null | null | null | null | null | null |
2,101.06983 | Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup | ['Luyu Gao', 'Yunyi Zhang', 'Jiawei Han', 'Jamie Callan'] | ['cs.LG', 'cs.CL', 'cs.IR'] | Contrastive learning has been applied successfully to learn vector
representations of text. Previous research demonstrated that learning
high-quality representations benefits from batch-wise contrastive loss with a
large number of negatives. In practice, the technique of in-batch negative is
used, where for each example in a batch, other batch examples' positives will
be taken as its negatives, avoiding encoding extra negatives. This, however,
still conditions each example's loss on all batch examples and requires fitting
the entire large batch into GPU memory. This paper introduces a gradient
caching technique that decouples backpropagation between contrastive loss and
the encoder, removing encoder backward pass data dependency along the batch
dimension. As a result, gradients can be computed for one subset of the batch
at a time, leading to almost constant memory usage. | 2021-01-18T10:42:34Z | RepL4NLP 2021 | null | null | null | null | null | null | null | null | null |
2,101.07138 | Teach me how to Label: Labeling Functions from Natural Language with
Text-to-text Transformers | ['Yannis Papanikolaou'] | ['cs.CL', 'cs.LG'] | Annotated data has become the most important bottleneck in training accurate
machine learning models, especially for areas that require domain expertise. A
recent approach to deal with the above issue proposes using natural language
explanations instead of labeling individual data points, thereby increasing
human annotators' efficiency as well as decreasing costs substantially. This
paper focuses on the task of turning these natural language descriptions into
Python labeling functions by following a novel approach to semantic parsing
with pre-trained text-to-text Transformers. In a series of experiments our
approach achieves a new state of the art on the semantic parsing benchmark
CoNaLa, surpassing the previous best approach by 3.7 BLEU points. Furthermore,
on a manually constructed dataset of natural language descriptions-labeling
functions pairs we achieve a BLEU of 0.39. Our approach can be regarded as a
stepping stone towards models that are taught how to label in natural language,
instead of being provided specific labeled samples. Our code, constructed
dataset and models are available at
https://github.com/ypapanik/t5-for-code-generation. | 2021-01-18T16:04:15Z | null | null | null | null | null | null | null | null | null | null |
2,101.07597 | UniSpeech: Unified Speech Representation Learning with Labeled and
Unlabeled Data | ['Chengyi Wang', 'Yu Wu', 'Yao Qian', 'Kenichi Kumatani', 'Shujie Liu', 'Furu Wei', 'Michael Zeng', 'Xuedong Huang'] | ['cs.CL', 'cs.LG', 'cs.SD', 'eess.AS'] | In this paper, we propose a unified pre-training approach called UniSpeech to
learn speech representations with both unlabeled and labeled data, in which
supervised phonetic CTC learning and phonetically-aware contrastive
self-supervised learning are conducted in a multi-task learning manner. The
resultant representations can capture information more correlated with phonetic
structures and improve the generalization across languages and domains. We
evaluate the effectiveness of UniSpeech for cross-lingual representation
learning on public CommonVoice corpus. The results show that UniSpeech
outperforms self-supervised pretraining and supervised transfer learning for
speech recognition by a maximum of 13.4% and 17.8% relative phone error rate
reductions respectively (averaged over all testing languages). The
transferability of UniSpeech is also demonstrated on a domain-shift speech
recognition task, i.e., a relative word error rate reduction of 6% against the
previous approach. | 2021-01-19T12:53:43Z | accepted by ICML2021 | null | null | UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data | ['Chengyi Wang', 'Yuehua Wu', 'Yu Wu', 'Yao Qian', 'K. Kumatani', 'Shujie Liu', 'Furu Wei', 'Michael Zeng', 'Xuedong Huang'] | 2,021 | International Conference on Machine Learning | 115 | 44 | ['Computer Science', 'Engineering'] |
2,101.08231 | Word Alignment by Fine-tuning Embeddings on Parallel Corpora | ['Zi-Yi Dou', 'Graham Neubig'] | ['cs.CL'] | Word alignment over parallel corpora has a wide variety of applications,
including learning translation lexicons, cross-lingual transfer of language
processing tools, and automatic evaluation or analysis of translation outputs.
The great majority of past work on word alignment has worked by performing
unsupervised learning on parallel texts. Recently, however, other work has
demonstrated that pre-trained contextualized word embeddings derived from
multilingually trained language models (LMs) prove an attractive alternative,
achieving competitive results on the word alignment task even in the absence of
explicit training on parallel data. In this paper, we examine methods to marry
the two approaches: leveraging pre-trained LMs but fine-tuning them on parallel
text with objectives designed to improve alignment quality, and proposing
methods to effectively extract alignments from these fine-tuned models. We
perform experiments on five language pairs and demonstrate that our model can
consistently outperform previous state-of-the-art models of all varieties. In
addition, we demonstrate that we are able to train multilingual word aligners
that can obtain robust performance on different language pairs. Our aligner,
AWESOME (Aligning Word Embedding Spaces of Multilingual Encoders), with
pre-trained models is available at https://github.com/neulab/awesome-align | 2021-01-20T17:54:47Z | EACL 2021 | null | null | null | null | null | null | null | null | null |
2,101.08674 | DAF:re: A Challenging, Crowd-Sourced, Large-Scale, Long-Tailed Dataset
For Anime Character Recognition | ['Edwin Arkel Rios', 'Wen-Huang Cheng', 'Bo-Cheng Lai'] | ['cs.CV', 'I.2; I.4'] | In this work we tackle the challenging problem of anime character
recognition. Anime, referring to animation produced within Japan and work
derived or inspired from it. For this purpose we present DAF:re
(DanbooruAnimeFaces:revamped), a large-scale, crowd-sourced, long-tailed
dataset with almost 500 K images spread across more than 3000 classes.
Additionally, we conduct experiments on DAF:re and similar datasets using a
variety of classification models, including CNN based ResNets and
self-attention based Vision Transformer (ViT). Our results give new insights
into the generalization and transfer learning properties of ViT models on
substantially different domain datasets from those used for the upstream
pre-training, including the influence of batch and image size in their
training. Additionally, we share our dataset, source-code, pre-trained
checkpoints and results, as Animesion, the first end-to-end framework for
large-scale anime character recognition: https://github.com/arkel23/animesion | 2021-01-21T15:40:45Z | 5 pages, 3 figures, 4 tables | null | null | DAF: re: A Challenging, Crowd-Sourced, Large-Scale, Long-Tailed Dataset For Anime Character Recognition | ['Edwin Arkel Rios', 'Wen-Huang Cheng', 'B. Lai'] | 2,021 | arXiv.org | 12 | 22 | ['Computer Science'] |
2,101.08692 | Characterizing signal propagation to close the performance gap in
unnormalized ResNets | ['Andrew Brock', 'Soham De', 'Samuel L. Smith'] | ['cs.LG', 'cs.CV', 'stat.ML'] | Batch Normalization is a key component in almost all state-of-the-art image
classifiers, but it also introduces practical challenges: it breaks the
independence between training examples within a batch, can incur compute and
memory overhead, and often results in unexpected bugs. Building on recent
theoretical analyses of deep ResNets at initialization, we propose a simple set
of analysis tools to characterize signal propagation on the forward pass, and
leverage these tools to design highly performant ResNets without activation
normalization layers. Crucial to our success is an adapted version of the
recently proposed Weight Standardization. Our analysis tools show how this
technique preserves the signal in networks with ReLU or Swish activation
functions by ensuring that the per-channel activation means do not grow with
depth. Across a range of FLOP budgets, our networks attain performance
competitive with the state-of-the-art EfficientNets on ImageNet. | 2021-01-21T16:07:06Z | Published as a conference paper at ICLR 2021 | null | null | Characterizing signal propagation to close the performance gap in unnormalized ResNets | ['Andrew Brock', 'Soham De', 'Samuel L. Smith'] | 2,021 | International Conference on Learning Representations | 124 | 81 | ['Computer Science', 'Mathematics'] |
2,101.09635 | WangchanBERTa: Pretraining transformer-based Thai Language Models | ['Lalita Lowphansirikul', 'Charin Polpanumas', 'Nawat Jantrakulchai', 'Sarana Nutanong'] | ['cs.CL'] | Transformer-based language models, more specifically BERT-based architectures
have achieved state-of-the-art performance in many downstream tasks. However,
for a relatively low-resource language such as Thai, the choices of models are
limited to training a BERT-based model based on a much smaller dataset or
finetuning multi-lingual models, both of which yield suboptimal downstream
performance. Moreover, large-scale multi-lingual pretraining does not take into
account language-specific features for Thai. To overcome these limitations, we
pretrain a language model based on RoBERTa-base architecture on a large,
deduplicated, cleaned training set (78GB in total size), curated from diverse
domains of social media posts, news articles and other publicly available
datasets. We apply text processing rules that are specific to Thai most
importantly preserving spaces, which are important chunk and sentence
boundaries in Thai before subword tokenization. We also experiment with
word-level, syllable-level and SentencePiece tokenization with a smaller
dataset to explore the effects on tokenization on downstream performance. Our
model wangchanberta-base-att-spm-uncased trained on the 78.5GB dataset
outperforms strong baselines (NBSVM, CRF and ULMFit) and multi-lingual models
(XLMR and mBERT) on both sequence classification and token classification tasks
in human-annotated, mono-lingual contexts. | 2021-01-24T03:06:34Z | 24 pages, edited the citation of the syllable-level tokenizer from
[Chormai et al., 2020] to [Phatthiyaphaibun et al., 2020] as the authors used
the syllable-level tokenizer from PyThaiNLP [Phatthiyaphaibun et al., 2020]
in the experiments | null | null | WangchanBERTa: Pretraining transformer-based Thai Language Models | ['Lalita Lowphansirikul', 'Charin Polpanumas', 'Nawat Jantrakulchai', 'Sarana Nutanong'] | 2,021 | arXiv.org | 77 | 42 | ['Computer Science'] |
2,101.10804 | CPTR: Full Transformer Network for Image Captioning | ['Wei Liu', 'Sihan Chen', 'Longteng Guo', 'Xinxin Zhu', 'Jing Liu'] | ['cs.CV'] | In this paper, we consider the image captioning task from a new
sequence-to-sequence prediction perspective and propose CaPtion TransformeR
(CPTR) which takes the sequentialized raw images as the input to Transformer.
Compared to the "CNN+Transformer" design paradigm, our model can model global
context at every encoder layer from the beginning and is totally
convolution-free. Extensive experiments demonstrate the effectiveness of the
proposed model and we surpass the conventional "CNN+Transformer" methods on the
MSCOCO dataset. Besides, we provide detailed visualizations of the
self-attention between patches in the encoder and the "words-to-patches"
attention in the decoder thanks to the full Transformer architecture. | 2021-01-26T14:29:52Z | null | null | null | null | null | null | null | null | null | null |
2,101.11038 | Muppet: Massive Multi-task Representations with Pre-Finetuning | ['Armen Aghajanyan', 'Anchit Gupta', 'Akshat Shrivastava', 'Xilun Chen', 'Luke Zettlemoyer', 'Sonal Gupta'] | ['cs.CL', 'cs.LG'] | We propose pre-finetuning, an additional large-scale learning stage between
language model pre-training and fine-tuning. Pre-finetuning is massively
multi-task learning (around 50 datasets, over 4.8 million total labeled
examples), and is designed to encourage learning of representations that
generalize better to many different tasks. We show that pre-finetuning
consistently improves performance for pretrained discriminators (e.g.~RoBERTa)
and generation models (e.g.~BART) on a wide range of tasks (sentence
prediction, commonsense reasoning, MRC, etc.), while also significantly
improving sample efficiency during fine-tuning. We also show that large-scale
multi-tasking is crucial; pre-finetuning can hurt performance when few tasks
are used up until a critical point (usually above 15) after which performance
improves linearly in the number of tasks. | 2021-01-26T19:18:27Z | null | null | null | Muppet: Massive Multi-task Representations with Pre-Finetuning | ['Armen Aghajanyan', 'Anchit Gupta', 'Akshat Shrivastava', 'Xilun Chen', 'Luke Zettlemoyer', 'Sonal Gupta'] | 2,021 | Conference on Empirical Methods in Natural Language Processing | 270 | 75 | ['Computer Science'] |
2,101.11075 | Adaptivity without Compromise: A Momentumized, Adaptive, Dual Averaged
Gradient Method for Stochastic Optimization | ['Aaron Defazio', 'Samy Jelassi'] | ['cs.LG', 'cs.AI', 'math.OC'] | We introduce MADGRAD, a novel optimization method in the family of AdaGrad
adaptive gradient methods. MADGRAD shows excellent performance on deep learning
optimization problems from multiple fields, including classification and
image-to-image tasks in vision, and recurrent and bidirectionally-masked models
in natural language processing. For each of these tasks, MADGRAD matches or
outperforms both SGD and ADAM in test set performance, even on problems for
which adaptive methods normally perform poorly. | 2021-01-26T20:38:26Z | null | null | null | Adaptivity without Compromise: A Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic Optimization | ['Aaron Defazio', 'Samy Jelassi'] | 2,021 | arXiv.org | 70 | 38 | ['Computer Science', 'Mathematics'] |
2,101.11605 | Bottleneck Transformers for Visual Recognition | ['Aravind Srinivas', 'Tsung-Yi Lin', 'Niki Parmar', 'Jonathon Shlens', 'Pieter Abbeel', 'Ashish Vaswani'] | ['cs.CV', 'cs.AI', 'cs.LG'] | We present BoTNet, a conceptually simple yet powerful backbone architecture
that incorporates self-attention for multiple computer vision tasks including
image classification, object detection and instance segmentation. By just
replacing the spatial convolutions with global self-attention in the final
three bottleneck blocks of a ResNet and no other changes, our approach improves
upon the baselines significantly on instance segmentation and object detection
while also reducing the parameters, with minimal overhead in latency. Through
the design of BoTNet, we also point out how ResNet bottleneck blocks with
self-attention can be viewed as Transformer blocks. Without any bells and
whistles, BoTNet achieves 44.4% Mask AP and 49.7% Box AP on the COCO Instance
Segmentation benchmark using the Mask R-CNN framework; surpassing the previous
best published single model and single scale results of ResNeSt evaluated on
the COCO validation set. Finally, we present a simple adaptation of the BoTNet
design for image classification, resulting in models that achieve a strong
performance of 84.7% top-1 accuracy on the ImageNet benchmark while being up to
1.64x faster in compute time than the popular EfficientNet models on TPU-v3
hardware. We hope our simple and effective approach will serve as a strong
baseline for future research in self-attention models for vision | 2021-01-27T18:55:27Z | Technical Report, 20 pages, 13 figures, 19 tables | null | null | null | null | null | null | null | null | null |
2,101.11718 | BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language
Generation | ['Jwala Dhamala', 'Tony Sun', 'Varun Kumar', 'Satyapriya Krishna', 'Yada Pruksachatkun', 'Kai-Wei Chang', 'Rahul Gupta'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Recent advances in deep learning techniques have enabled machines to generate
cohesive open-ended text when prompted with a sequence of words as context.
While these models now empower many downstream applications from conversation
bots to automatic storytelling, they have been shown to generate texts that
exhibit social biases. To systematically study and benchmark social biases in
open-ended language generation, we introduce the Bias in Open-Ended Language
Generation Dataset (BOLD), a large-scale dataset that consists of 23,679
English text generation prompts for bias benchmarking across five domains:
profession, gender, race, religion, and political ideology. We also propose new
automated metrics for toxicity, psycholinguistic norms, and text gender
polarity to measure social biases in open-ended text generation from multiple
angles. An examination of text generated from three popular language models
reveals that the majority of these models exhibit a larger social bias than
human-written Wikipedia text across all domains. With these results we
highlight the need to benchmark biases in open-ended language generation and
caution users of language generation models on downstream tasks to be cognizant
of these embedded prejudices. | 2021-01-27T22:07:03Z | null | null | 10.1145/3442188.3445924 | BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation | ['J. Dhamala', 'Tony Sun', 'Varun Kumar', 'Satyapriya Krishna', 'Yada Pruksachatkun', 'Kai-Wei Chang', 'Rahul Gupta'] | 2,021 | Conference on Fairness, Accountability and Transparency | 403 | 47 | ['Computer Science'] |
2,102.00086 | Challenges in Automated Debiasing for Toxic Language Detection | ['Xuhui Zhou', 'Maarten Sap', 'Swabha Swayamdipta', 'Noah A. Smith', 'Yejin Choi'] | ['cs.CL'] | Biased associations have been a challenge in the development of classifiers
for detecting toxic language, hindering both fairness and accuracy. As
potential solutions, we investigate recently introduced debiasing methods for
text classification datasets and models, as applied to toxic language
detection. Our focus is on lexical (e.g., swear words, slurs, identity
mentions) and dialectal markers (specifically African American English). Our
comprehensive experiments establish that existing methods are limited in their
ability to prevent biased behavior in current toxicity detectors. We then
propose an automatic, dialect-aware data correction method, as a
proof-of-concept. Despite the use of synthetic labels, this method reduces
dialectal associations with toxicity. Overall, our findings show that debiasing
a model trained on biased toxic language data is not as effective as simply
relabeling the data to remove existing biases. | 2021-01-29T22:03:17Z | EACL 2021 | null | null | Challenges in Automated Debiasing for Toxic Language Detection | ['Xuhui Zhou', 'Maarten Sap', 'Swabha Swayamdipta', 'Noah A. Smith', 'Yejin Choi'] | 2,021 | Conference of the European Chapter of the Association for Computational Linguistics | 142 | 49 | ['Computer Science'] |
2,102.01192 | Generative Spoken Language Modeling from Raw Audio | ['Kushal Lakhotia', 'Evgeny Kharitonov', 'Wei-Ning Hsu', 'Yossi Adi', 'Adam Polyak', 'Benjamin Bolte', 'Tu-Anh Nguyen', 'Jade Copet', 'Alexei Baevski', 'Adelrahman Mohamed', 'Emmanuel Dupoux'] | ['cs.CL'] | We introduce Generative Spoken Language Modeling, the task of learning the
acoustic and linguistic characteristics of a language from raw audio (no text,
no labels), and a set of metrics to automatically evaluate the learned
representations at acoustic and linguistic levels for both encoding and
generation. We set up baseline systems consisting of a discrete speech encoder
(returning pseudo-text units), a generative language model (trained on
pseudo-text), and a speech decoder (generating a waveform from pseudo-text) all
trained without supervision and validate the proposed metrics with human
evaluation. Across 3 speech encoders (CPC, wav2vec 2.0, HuBERT), we find that
the number of discrete units (50, 100, or 200) matters in a task-dependent and
encoder-dependent way, and that some combinations approach text-based systems. | 2021-02-01T21:41:40Z | null | null | null | On Generative Spoken Language Modeling from Raw Audio | ['Kushal Lakhotia', 'Evgeny Kharitonov', 'Wei-Ning Hsu', 'Yossi Adi', 'Adam Polyak', 'Benjamin Bolte', 'Tu Nguyen', 'Jade Copet', 'Alexei Baevski', 'A. Mohamed', 'Emmanuel Dupoux'] | 2,021 | Transactions of the Association for Computational Linguistics | 366 | 80 | ['Computer Science'] |
2,102.01547 | WeNet: Production oriented Streaming and Non-streaming End-to-End Speech
Recognition Toolkit | ['Zhuoyuan Yao', 'Di Wu', 'Xiong Wang', 'Binbin Zhang', 'Fan Yu', 'Chao Yang', 'Zhendong Peng', 'Xiaoyu Chen', 'Lei Xie', 'Xin Lei'] | ['cs.SD', 'cs.CL', 'eess.AS'] | In this paper, we propose an open source, production first, and production
ready speech recognition toolkit called WeNet in which a new two-pass approach
is implemented to unify streaming and non-streaming end-to-end (E2E) speech
recognition in a single model. The main motivation of WeNet is to close the gap
between the research and the production of E2E speechrecognition models. WeNet
provides an efficient way to ship ASR applications in several real-world
scenarios, which is the main difference and advantage to other open source E2E
speech recognition toolkits. In our toolkit, a new two-pass method is
implemented. Our method propose a dynamic chunk-based attention strategy of the
the transformer layers to allow arbitrary right context length modifies in
hybrid CTC/attention architecture. The inference latency could be easily
controlled by only changing the chunk size. The CTC hypotheses are then
rescored by the attention decoder to get the final result. Our experiments on
the AISHELL-1 dataset using WeNet show that, our model achieves 5.03\% relative
character error rate (CER) reduction in non-streaming ASR compared to a
standard non-streaming transformer. After model quantification, our model
perform reasonable RTF and latency. | 2021-02-02T15:19:41Z | 5 pages, 2 figures, 4 tables | null | null | WeNet: Production Oriented Streaming and Non-Streaming End-to-End Speech Recognition Toolkit | ['Zhuoyuan Yao', 'Di Wu', 'Xiong Wang', 'Binbin Zhang', 'Fan Yu', 'Chao Yang', 'Zhendong Peng', 'Xiaoyu Chen', 'Lei Xie', 'X. Lei'] | 2,021 | Interspeech | 268 | 32 | ['Computer Science', 'Engineering'] |
2,102.01909 | HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis
and Emotion Recognition | ['Avihay Chriqui', 'Inbal Yahav'] | ['cs.CL'] | This paper introduces HeBERT and HebEMO. HeBERT is a Transformer-based model
for modern Hebrew text, which relies on a BERT (Bidirectional Encoder
Representations for Transformers) architecture. BERT has been shown to
outperform alternative architectures in sentiment analysis, and is suggested to
be particularly appropriate for MRLs. Analyzing multiple BERT specifications,
we find that while model complexity correlates with high performance on
language tasks that aim to understand terms in a sentence, a more-parsimonious
model better captures the sentiment of entire sentence. Either way, out
BERT-based language model outperforms all existing Hebrew alternatives on all
common language tasks. HebEMO is a tool that uses HeBERT to detect polarity and
extract emotions from Hebrew UGC. HebEMO is trained on a unique
Covid-19-related UGC dataset that we collected and annotated for this study.
Data collection and annotation followed an active learning procedure that aimed
to maximize predictability. We show that HebEMO yields a high F1-score of 0.96
for polarity classification. Emotion detection reaches F1-scores of 0.78-0.97
for various target emotions, with the exception of surprise, which the model
failed to capture (F1 = 0.41). These results are better than the best-reported
performance, even among English-language models of emotion detection. | 2021-02-03T06:59:59Z | null | null | 10.1287/ijds.2022.0016 | HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition | ['Avihay Chriqui', 'I. Yahav'] | 2,021 | INFORMS Journal on Data Science | 37 | 82 | ['Computer Science'] |
2,102.02611 | CKConv: Continuous Kernel Convolution For Sequential Data | ['David W. Romero', 'Anna Kuzina', 'Erik J. Bekkers', 'Jakub M. Tomczak', 'Mark Hoogendoorn'] | ['cs.LG'] | Conventional neural architectures for sequential data present important
limitations. Recurrent networks suffer from exploding and vanishing gradients,
small effective memory horizons, and must be trained sequentially.
Convolutional networks are unable to handle sequences of unknown size and their
memory horizon must be defined a priori. In this work, we show that all these
problems can be solved by formulating convolutional kernels in CNNs as
continuous functions. The resulting Continuous Kernel Convolution (CKConv)
allows us to model arbitrarily long sequences in a parallel manner, within a
single operation, and without relying on any form of recurrence. We show that
Continuous Kernel Convolutional Networks (CKCNNs) obtain state-of-the-art
results in multiple datasets, e.g., permuted MNIST, and, thanks to their
continuous nature, are able to handle non-uniformly sampled datasets and
irregularly-sampled data natively. CKCNNs match or perform better than neural
ODEs designed for these purposes in a faster and simpler manner. | 2021-02-04T13:51:19Z | null | null | null | null | null | null | null | null | null | null |
2,102.02766 | Designing an Encoder for StyleGAN Image Manipulation | ['Omer Tov', 'Yuval Alaluf', 'Yotam Nitzan', 'Or Patashnik', 'Daniel Cohen-Or'] | ['cs.CV'] | Recently, there has been a surge of diverse methods for performing image
editing by employing pre-trained unconditional generators. Applying these
methods on real images, however, remains a challenge, as it necessarily
requires the inversion of the images into their latent space. To successfully
invert a real image, one needs to find a latent code that reconstructs the
input image accurately, and more importantly, allows for its meaningful
manipulation. In this paper, we carefully study the latent space of StyleGAN,
the state-of-the-art unconditional generator. We identify and analyze the
existence of a distortion-editability tradeoff and a distortion-perception
tradeoff within the StyleGAN latent space. We then suggest two principles for
designing encoders in a manner that allows one to control the proximity of the
inversions to regions that StyleGAN was originally trained on. We present an
encoder based on our two principles that is specifically designed for
facilitating editing on real images by balancing these tradeoffs. By evaluating
its performance qualitatively and quantitatively on numerous challenging
domains, including cars and horses, we show that our inversion method, followed
by common editing techniques, achieves superior real-image editing quality,
with only a small reconstruction accuracy drop. | 2021-02-04T17:52:38Z | null | null | null | null | null | null | null | null | null | null |
2,102.02779 | Unifying Vision-and-Language Tasks via Text Generation | ['Jaemin Cho', 'Jie Lei', 'Hao Tan', 'Mohit Bansal'] | ['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG'] | Existing methods for vision-and-language learning typically require designing
task-specific architectures and objectives for each task. For example, a
multi-label answer classifier for visual question answering, a region scorer
for referring expression comprehension, and a language decoder for image
captioning, etc. To alleviate these hassles, in this work, we propose a unified
framework that learns different tasks in a single architecture with the same
language modeling objective, i.e., multimodal conditional text generation,
where our models learn to generate labels in text based on the visual and
textual inputs. On 7 popular vision-and-language benchmarks, including visual
question answering, referring expression comprehension, visual commonsense
reasoning, most of which have been previously modeled as discriminative tasks,
our generative approach (with a single unified architecture) reaches comparable
performance to recent task-specific state-of-the-art vision-and-language
models. Moreover, our generative approach shows better generalization ability
on questions that have rare answers. Also, we show that our framework allows
multi-task learning in a single architecture with a single set of parameters,
achieving similar performance to separately optimized single-task models. Our
code is publicly available at: https://github.com/j-min/VL-T5 | 2021-02-04T17:59:30Z | ICML 2021 (15 pages, 4 figures, 14 tables) | null | null | Unifying Vision-and-Language Tasks via Text Generation | ['Jaemin Cho', 'Jie Lei', 'Hao Tan', 'Mohit Bansal'] | 2,021 | International Conference on Machine Learning | 547 | 86 | ['Computer Science'] |
2,102.03334 | ViLT: Vision-and-Language Transformer Without Convolution or Region
Supervision | ['Wonjae Kim', 'Bokyung Son', 'Ildoo Kim'] | ['stat.ML', 'cs.LG'] | Vision-and-Language Pre-training (VLP) has improved performance on various
joint vision-and-language downstream tasks. Current approaches to VLP heavily
rely on image feature extraction processes, most of which involve region
supervision (e.g., object detection) and the convolutional architecture (e.g.,
ResNet). Although disregarded in the literature, we find it problematic in
terms of both (1) efficiency/speed, that simply extracting input features
requires much more computation than the multimodal interaction steps; and (2)
expressive power, as it is upper bounded to the expressive power of the visual
embedder and its predefined visual vocabulary. In this paper, we present a
minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the
sense that the processing of visual inputs is drastically simplified to just
the same convolution-free manner that we process textual inputs. We show that
ViLT is up to tens of times faster than previous VLP models, yet with
competitive or better downstream task performance. Our code and pre-trained
weights are available at https://github.com/dandelin/vilt. | 2021-02-05T18:36:11Z | ICML 2021 Long Presentation | null | null | ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision | ['Wonjae Kim', 'Bokyung Son', 'Ildoo Kim'] | 2,021 | International Conference on Machine Learning | 1,775 | 65 | ['Mathematics', 'Computer Science'] |
2,102.03902 | Nyströmformer: A Nyström-Based Algorithm for Approximating
Self-Attention | ['Yunyang Xiong', 'Zhanpeng Zeng', 'Rudrasis Chakraborty', 'Mingxing Tan', 'Glenn Fung', 'Yin Li', 'Vikas Singh'] | ['cs.CL', 'cs.LG'] | Transformers have emerged as a powerful tool for a broad range of natural
language processing tasks. A key component that drives the impressive
performance of Transformers is the self-attention mechanism that encodes the
influence or dependence of other tokens on each specific token. While
beneficial, the quadratic complexity of self-attention on the input sequence
length has limited its application to longer sequences -- a topic being
actively studied in the community. To address this limitation, we propose
Nystr\"{o}mformer -- a model that exhibits favorable scalability as a function
of sequence length. Our idea is based on adapting the Nystr\"{o}m method to
approximate standard self-attention with $O(n)$ complexity. The scalability of
Nystr\"{o}mformer enables application to longer sequences with thousands of
tokens. We perform evaluations on multiple downstream tasks on the GLUE
benchmark and IMDB reviews with standard sequence length, and find that our
Nystr\"{o}mformer performs comparably, or in a few cases, even slightly better,
than standard self-attention. On longer sequence tasks in the Long Range Arena
(LRA) benchmark, Nystr\"{o}mformer performs favorably relative to other
efficient self-attention methods. Our code is available at
https://github.com/mlpen/Nystromformer. | 2021-02-07T20:06:59Z | AAAI 2021; Code and supplement available at
https://github.com/mlpen/Nystromformer | null | null | Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention | ['Yunyang Xiong', 'Zhanpeng Zeng', 'Rudrasis Chakraborty', 'Mingxing Tan', 'G. Fung', 'Yin Li', 'Vikas Singh'] | 2,021 | AAAI Conference on Artificial Intelligence | 526 | 65 | ['Computer Science', 'Medicine'] |
2,102.0404 | LightSpeech: Lightweight and Fast Text to Speech with Neural
Architecture Search | ['Renqian Luo', 'Xu Tan', 'Rui Wang', 'Tao Qin', 'Jinzhu Li', 'Sheng Zhao', 'Enhong Chen', 'Tie-Yan Liu'] | ['cs.SD', 'cs.AI', 'cs.LG', 'eess.AS'] | Text to speech (TTS) has been broadly used to synthesize natural and
intelligible speech in different scenarios. Deploying TTS in various end
devices such as mobile phones or embedded devices requires extremely small
memory usage and inference latency. While non-autoregressive TTS models such as
FastSpeech have achieved significantly faster inference speed than
autoregressive models, their model size and inference latency are still large
for the deployment in resource constrained devices. In this paper, we propose
LightSpeech, which leverages neural architecture search~(NAS) to automatically
design more lightweight and efficient models based on FastSpeech. We first
profile the components of current FastSpeech model and carefully design a novel
search space containing various lightweight and potentially effective
architectures. Then NAS is utilized to automatically discover well performing
architectures within the search space. Experiments show that the model
discovered by our method achieves 15x model compression ratio and 6.5x
inference speedup on CPU with on par voice quality. Audio demos are provided at
https://speechresearch.github.io/lightspeech. | 2021-02-08T07:45:06Z | Accepted to ICASSP 21 | null | null | null | null | null | null | null | null | null |
2,102.04411 | Traceability Transformed: Generating more Accurate Links with
Pre-Trained BERT Models | ['Jinfeng Lin', 'Yalin Liu', 'Qingkai Zeng', 'Meng Jiang', 'Jane Cleland-Huang'] | ['cs.SE'] | Software traceability establishes and leverages associations between diverse
development artifacts. Researchers have proposed the use of deep learning trace
models to link natural language artifacts, such as requirements and issue
descriptions, to source code; however, their effectiveness has been restricted
by availability of labeled data and efficiency at runtime. In this study, we
propose a novel framework called Trace BERT (T-BERT) to generate trace links
between source code and natural language artifacts. To address data sparsity,
we leverage a three-step training strategy to enable trace models to transfer
knowledge from a closely related Software Engineering challenge, which has a
rich dataset, to produce trace links with much higher accuracy than has
previously been achieved. We then apply the T-BERT framework to recover links
between issues and commits in Open Source Projects. We comparatively evaluated
accuracy and efficiency of three BERT architectures. Results show that a
Single-BERT architecture generated the most accurate links, while a
Siamese-BERT architecture produced comparable results with significantly less
execution time. Furthermore, by learning and transferring knowledge, all three
models in the framework outperform classical IR trace models. On the three
evaluated real-word OSS projects, the best T-BERT stably outperformed the VSM
model with average improvements of 60.31% measured using Mean Average Precision
(MAP). RNN severely underperformed on these projects due to insufficient
training data, while T-BERT overcame this problem by using pretrained language
models and transfer learning. | 2021-02-08T18:18:07Z | null | null | null | Traceability Transformed: Generating More Accurate Links with Pre-Trained BERT Models | ['Jinfeng Lin', 'Yalin Liu', 'Qingkai Zeng', 'Meng Jiang', 'J. Cleland-Huang'] | 2,021 | International Conference on Software Engineering | 117 | 43 | ['Computer Science'] |
2,102.04664 | CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding
and Generation | ['Shuai Lu', 'Daya Guo', 'Shuo Ren', 'Junjie Huang', 'Alexey Svyatkovskiy', 'Ambrosio Blanco', 'Colin Clement', 'Dawn Drain', 'Daxin Jiang', 'Duyu Tang', 'Ge Li', 'Lidong Zhou', 'Linjun Shou', 'Long Zhou', 'Michele Tufano', 'Ming Gong', 'Ming Zhou', 'Nan Duan', 'Neel Sundaresan', 'Shao Kun Deng', 'Shengyu Fu', 'Shujie Liu'] | ['cs.SE', 'cs.CL'] | Benchmark datasets have a significant impact on accelerating research in
programming language tasks. In this paper, we introduce CodeXGLUE, a benchmark
dataset to foster machine learning research for program understanding and
generation. CodeXGLUE includes a collection of 10 tasks across 14 datasets and
a platform for model evaluation and comparison. CodeXGLUE also features three
baseline systems, including the BERT-style, GPT-style, and Encoder-Decoder
models, to make it easy for researchers to use the platform. The availability
of such data and baselines can help the development and validation of new
methods that can be applied to various program understanding and generation
problems. | 2021-02-09T06:16:25Z | 14 pages; Revise CodeBLEU scores for all models on text-to-code task | null | null | CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation | ['Shuai Lu', 'Daya Guo', 'Shuo Ren', 'Junjie Huang', 'Alexey Svyatkovskiy', 'Ambrosio Blanco', 'Colin B. Clement', 'Dawn Drain', 'Daxin Jiang', 'Duyu Tang', 'Ge Li', 'Lidong Zhou', 'Linjun Shou', 'Long Zhou', 'Michele Tufano', 'Ming Gong', 'Ming Zhou', 'Nan Duan', 'Neel Sundaresan', 'Shao Kun Deng', 'Shengyu Fu', 'Shujie Liu'] | 2,021 | NeurIPS Datasets and Benchmarks | 1,166 | 117 | ['Computer Science'] |
2,102.05095 | Is Space-Time Attention All You Need for Video Understanding? | ['Gedas Bertasius', 'Heng Wang', 'Lorenzo Torresani'] | ['cs.CV'] | We present a convolution-free approach to video classification built
exclusively on self-attention over space and time. Our method, named
"TimeSformer," adapts the standard Transformer architecture to video by
enabling spatiotemporal feature learning directly from a sequence of
frame-level patches. Our experimental study compares different self-attention
schemes and suggests that "divided attention," where temporal attention and
spatial attention are separately applied within each block, leads to the best
video classification accuracy among the design choices considered. Despite the
radically new design, TimeSformer achieves state-of-the-art results on several
action recognition benchmarks, including the best reported accuracy on
Kinetics-400 and Kinetics-600. Finally, compared to 3D convolutional networks,
our model is faster to train, it can achieve dramatically higher test
efficiency (at a small drop in accuracy), and it can also be applied to much
longer video clips (over one minute long). Code and models are available at:
https://github.com/facebookresearch/TimeSformer. | 2021-02-09T19:49:33Z | Accepted to ICML 2021 | null | null | Is Space-Time Attention All You Need for Video Understanding? | ['Gedas Bertasius', 'Heng Wang', 'L. Torresani'] | 2,021 | International Conference on Machine Learning | 2,080 | 73 | ['Computer Science'] |
2,102.05918 | Scaling Up Visual and Vision-Language Representation Learning With Noisy
Text Supervision | ['Chao Jia', 'Yinfei Yang', 'Ye Xia', 'Yi-Ting Chen', 'Zarana Parekh', 'Hieu Pham', 'Quoc V. Le', 'Yunhsuan Sung', 'Zhen Li', 'Tom Duerig'] | ['cs.CV', 'cs.CL', 'cs.LG'] | Pre-trained representations are becoming crucial for many NLP and perception
tasks. While representation learning in NLP has transitioned to training on raw
text without human annotations, visual and vision-language representations
still rely heavily on curated training datasets that are expensive or require
expert knowledge. For vision applications, representations are mostly learned
using datasets with explicit class labels such as ImageNet or OpenImages. For
vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all
involve a non-trivial data collection (and cleaning) process. This costly
curation process limits the size of datasets and hence hinders the scaling of
trained models. In this paper, we leverage a noisy dataset of over one billion
image alt-text pairs, obtained without expensive filtering or post-processing
steps in the Conceptual Captions dataset. A simple dual-encoder architecture
learns to align visual and language representations of the image and text pairs
using a contrastive loss. We show that the scale of our corpus can make up for
its noise and leads to state-of-the-art representations even with such a simple
learning scheme. Our visual representation achieves strong performance when
transferred to classification tasks such as ImageNet and VTAB. The aligned
visual and language representations enables zero-shot image classification and
also set new state-of-the-art results on Flickr30K and MSCOCO image-text
retrieval benchmarks, even when compared with more sophisticated
cross-attention models. The representations also enable cross-modality search
with complex text and text + image queries. | 2021-02-11T10:08:12Z | ICML 2021 | International Conference on Machine Learning 2021 | null | null | null | null | null | null | null | null |
2,102.06171 | High-Performance Large-Scale Image Recognition Without Normalization | ['Andrew Brock', 'Soham De', 'Samuel L. Smith', 'Karen Simonyan'] | ['cs.CV', 'cs.LG', 'stat.ML'] | Batch normalization is a key component of most image classification models,
but it has many undesirable properties stemming from its dependence on the
batch size and interactions between examples. Although recent work has
succeeded in training deep ResNets without normalization layers, these models
do not match the test accuracies of the best batch-normalized networks, and are
often unstable for large learning rates or strong data augmentations. In this
work, we develop an adaptive gradient clipping technique which overcomes these
instabilities, and design a significantly improved class of Normalizer-Free
ResNets. Our smaller models match the test accuracy of an EfficientNet-B7 on
ImageNet while being up to 8.7x faster to train, and our largest models attain
a new state-of-the-art top-1 accuracy of 86.5%. In addition, Normalizer-Free
models attain significantly better performance than their batch-normalized
counterparts when finetuning on ImageNet after large-scale pre-training on a
dataset of 300 million labeled images, with our best models obtaining an
accuracy of 89.2%. Our code is available at https://github.com/deepmind/
deepmind-research/tree/master/nfnets | 2021-02-11T18:23:20Z | null | null | null | null | null | null | null | null | null | null |
2,102.06203 | Proof Artifact Co-training for Theorem Proving with Language Models | ['Jesse Michael Han', 'Jason Rute', 'Yuhuai Wu', 'Edward W. Ayers', 'Stanislas Polu'] | ['cs.AI', 'cs.LG', 'cs.LO'] | Labeled data for imitation learning of theorem proving in large libraries of
formalized mathematics is scarce as such libraries require years of
concentrated effort by human specialists to be built. This is particularly
challenging when applying large Transformer language models to tactic
prediction, because the scaling of performance with respect to model size is
quickly disrupted in the data-scarce, easily-overfitted regime. We propose PACT
({\bf P}roof {\bf A}rtifact {\bf C}o-{\bf T}raining), a general methodology for
extracting abundant self-supervised data from kernel-level proof terms for
co-training alongside the usual tactic prediction objective. We apply this
methodology to Lean, an interactive proof assistant which hosts some of the
most sophisticated formalized mathematics to date. We instrument Lean with a
neural theorem prover driven by a Transformer language model and show that PACT
improves theorem proving success rate on a held-out suite of test theorems from
32\% to 48\%. | 2021-02-11T18:59:24Z | null | null | null | Proof Artifact Co-training for Theorem Proving with Language Models | ['Jesse Michael Han', 'Jason M. Rute', 'Yuhuai Wu', 'Edward W. Ayers', 'Stanislas Polu'] | 2,021 | International Conference on Learning Representations | 127 | 94 | ['Computer Science'] |
2,102.06867 | CPP-Net: Context-aware Polygon Proposal Network for Nucleus Segmentation | ['Shengcong Chen', 'Changxing Ding', 'Minfeng Liu', 'Jun Cheng', 'Dacheng Tao'] | ['cs.CV'] | Nucleus segmentation is a challenging task due to the crowded distribution
and blurry boundaries of nuclei. Recent approaches represent nuclei by means of
polygons to differentiate between touching and overlapping nuclei and have
accordingly achieved promising performance. Each polygon is represented by a
set of centroid-to-boundary distances, which are in turn predicted by features
of the centroid pixel for a single nucleus. However, using the centroid pixel
alone does not provide sufficient contextual information for robust prediction
and thus degrades the segmentation accuracy. To handle this problem, we propose
a Context-aware Polygon Proposal Network (CPP-Net) for nucleus segmentation.
First, we sample a point set rather than one single pixel within each cell for
distance prediction. This strategy substantially enhances contextual
information and thereby improves the robustness of the prediction. Second, we
propose a Confidence-based Weighting Module, which adaptively fuses the
predictions from the sampled point set. Third, we introduce a novel Shape-Aware
Perceptual (SAP) loss that constrains the shape of the predicted polygons.
Here, the SAP loss is based on an additional network that is pre-trained by
means of mapping the centroid probability map and the pixel-to-boundary
distance maps to a different nucleus representation. Extensive experiments
justify the effectiveness of each component in the proposed CPP-Net. Finally,
CPP-Net is found to achieve state-of-the-art performance on three publicly
available databases, namely DSB2018, BBBC06, and PanNuke. Code of this paper is
available at \url{https://github.com/csccsccsccsc/cpp-net | 2021-02-13T05:59:52Z | Accepted Version to IEEE Transactions on Image Processing | null | 10.1109/TIP.2023.3237013 | null | null | null | null | null | null | null |
2,102.07033 | PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them | ['Patrick Lewis', 'Yuxiang Wu', 'Linqing Liu', 'Pasquale Minervini', 'Heinrich Küttler', 'Aleksandra Piktus', 'Pontus Stenetorp', 'Sebastian Riedel'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Open-domain Question Answering models which directly leverage question-answer
(QA) pairs, such as closed-book QA (CBQA) models and QA-pair retrievers, show
promise in terms of speed and memory compared to conventional models which
retrieve and read from text corpora. QA-pair retrievers also offer
interpretable answers, a high degree of control, and are trivial to update at
test time with new knowledge. However, these models lack the accuracy of
retrieve-and-read systems, as substantially less knowledge is covered by the
available QA-pairs relative to text corpora like Wikipedia. To facilitate
improved QA-pair models, we introduce Probably Asked Questions (PAQ), a very
large resource of 65M automatically-generated QA-pairs. We introduce a new
QA-pair retriever, RePAQ, to complement PAQ. We find that PAQ preempts and
caches test questions, enabling RePAQ to match the accuracy of recent
retrieve-and-read models, whilst being significantly faster. Using PAQ, we
train CBQA models which outperform comparable baselines by 5%, but trail RePAQ
by over 15%, indicating the effectiveness of explicit retrieval. RePAQ can be
configured for size (under 500MB) or speed (over 1K questions per second)
whilst retaining high accuracy. Lastly, we demonstrate RePAQ's strength at
selective QA, abstaining from answering when it is likely to be incorrect. This
enables RePAQ to ``back-off" to a more expensive state-of-the-art model,
leading to a combined system which is both more accurate and 2x faster than the
state-of-the-art model alone. | 2021-02-13T23:43:45Z | null | null | null | null | null | null | null | null | null | null |
2,102.08473 | COCO-LM: Correcting and Contrasting Text Sequences for Language Model
Pretraining | ['Yu Meng', 'Chenyan Xiong', 'Payal Bajaj', 'Saurabh Tiwary', 'Paul Bennett', 'Jiawei Han', 'Xia Song'] | ['cs.CL', 'cs.LG'] | We present a self-supervised learning framework, COCO-LM, that pretrains
Language Models by COrrecting and COntrasting corrupted text sequences.
Following ELECTRA-style pretraining, COCO-LM employs an auxiliary language
model to corrupt text sequences, upon which it constructs two new tasks for
pretraining the main model. The first token-level task, Corrective Language
Modeling, is to detect and correct tokens replaced by the auxiliary model, in
order to better capture token-level semantics. The second sequence-level task,
Sequence Contrastive Learning, is to align text sequences originated from the
same source input while ensuring uniformity in the representation space.
Experiments on GLUE and SQuAD demonstrate that COCO-LM not only outperforms
recent state-of-the-art pretrained models in accuracy, but also improves
pretraining efficiency. It achieves the MNLI accuracy of ELECTRA with 50% of
its pretraining GPU hours. With the same pretraining steps of standard
base/large-sized models, COCO-LM outperforms the previous best models by 1+
GLUE average points. | 2021-02-16T22:24:29Z | NeurIPS 2021. (Code and Models: https://github.com/microsoft/COCO-LM) | null | null | null | null | null | null | null | null | null |
2,102.08602 | LambdaNetworks: Modeling Long-Range Interactions Without Attention | ['Irwan Bello'] | ['cs.CV', 'cs.LG'] | We present lambda layers -- an alternative framework to self-attention -- for
capturing long-range interactions between an input and structured contextual
information (e.g. a pixel surrounded by other pixels). Lambda layers capture
such interactions by transforming available contexts into linear functions,
termed lambdas, and applying these linear functions to each input separately.
Similar to linear attention, lambda layers bypass expensive attention maps, but
in contrast, they model both content and position-based interactions which
enables their application to large structured inputs such as images. The
resulting neural network architectures, LambdaNetworks, significantly
outperform their convolutional and attentional counterparts on ImageNet
classification, COCO object detection and COCO instance segmentation, while
being more computationally efficient. Additionally, we design LambdaResNets, a
family of hybrid architectures across different scales, that considerably
improves the speed-accuracy tradeoff of image classification models.
LambdaResNets reach excellent accuracies on ImageNet while being 3.2 - 4.4x
faster than the popular EfficientNets on modern machine learning accelerators.
When training with an additional 130M pseudo-labeled images, LambdaResNets
achieve up to a 9.5x speed-up over the corresponding EfficientNet checkpoints. | 2021-02-17T06:33:47Z | Accepted for publication at the International Conference in Learning
Representations 2021 (Spotlight) | null | null | LambdaNetworks: Modeling Long-Range Interactions Without Attention | ['Irwan Bello'] | 2,021 | International Conference on Learning Representations | 181 | 88 | ['Computer Science'] |
2,102.08981 | Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize
Long-Tail Visual Concepts | ['Soravit Changpinyo', 'Piyush Sharma', 'Nan Ding', 'Radu Soricut'] | ['cs.CV', 'cs.CL'] | The availability of large-scale image captioning and visual question
answering datasets has contributed significantly to recent successes in
vision-and-language pre-training. However, these datasets are often collected
with overrestrictive requirements inherited from their original target tasks
(e.g., image caption generation), which limit the resulting dataset scale and
diversity. We take a step further in pushing the limits of vision-and-language
pre-training data by relaxing the data collection pipeline used in Conceptual
Captions 3M (CC3M) [Sharma et al. 2018] and introduce the Conceptual 12M
(CC12M), a dataset with 12 million image-text pairs specifically meant to be
used for vision-and-language pre-training. We perform an analysis of this
dataset and benchmark its effectiveness against CC3M on multiple downstream
tasks with an emphasis on long-tail visual recognition. Our results clearly
illustrate the benefit of scaling up pre-training data for vision-and-language
tasks, as indicated by the new state-of-the-art results on both the nocaps and
Conceptual Captions benchmarks. | 2021-02-17T19:15:53Z | IEEE Conference on Computer Vision and Pattern Recognition (CVPR
2021). Our dataset is available at
https://github.com/google-research-datasets/conceptual-12m | null | null | Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts | ['Soravit Changpinyo', 'P. Sharma', 'Nan Ding', 'Radu Soricut'] | 2,021 | Computer Vision and Pattern Recognition | 1,143 | 100 | ['Computer Science'] |
2,102.09206 | Less is More: Pre-train a Strong Text Encoder for Dense Retrieval Using
a Weak Decoder | ['Shuqi Lu', 'Di He', 'Chenyan Xiong', 'Guolin Ke', 'Waleed Malik', 'Zhicheng Dou', 'Paul Bennett', 'Tieyan Liu', 'Arnold Overwijk'] | ['cs.LG'] | Dense retrieval requires high-quality text sequence embeddings to support
effective search in the representation space. Autoencoder-based language models
are appealing in dense retrieval as they train the encoder to output
high-quality embedding that can reconstruct the input texts. However, in this
paper, we provide theoretical analyses and show empirically that an autoencoder
language model with a low reconstruction loss may not provide good sequence
representations because the decoder may take shortcuts by exploiting language
patterns. To address this, we propose a new self-learning method that
pre-trains the autoencoder using a \textit{weak} decoder, with restricted
capacity and attention flexibility to push the encoder to provide better text
representations. Our experiments on web search, news recommendation, and open
domain question answering show that our pre-trained model significantly boosts
the effectiveness and few-shot ability of dense retrieval models. Our code is
available at https://github.com/microsoft/SEED-Encoder/. | 2021-02-18T08:08:17Z | null | null | null | null | null | null | null | null | null | null |
2,102.09542 | SLAKE: A Semantically-Labeled Knowledge-Enhanced Dataset for Medical
Visual Question Answering | ['Bo Liu', 'Li-Ming Zhan', 'Li Xu', 'Lin Ma', 'Yan Yang', 'Xiao-Ming Wu'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Medical visual question answering (Med-VQA) has tremendous potential in
healthcare. However, the development of this technology is hindered by the
lacking of publicly-available and high-quality labeled datasets for training
and evaluation. In this paper, we present a large bilingual dataset, SLAKE,
with comprehensive semantic labels annotated by experienced physicians and a
new structural medical knowledge base for Med-VQA. Besides, SLAKE includes
richer modalities and covers more human body parts than the currently available
dataset. We show that SLAKE can be used to facilitate the development and
evaluation of Med-VQA systems. The dataset can be downloaded from
http://www.med-vqa.com/slake. | 2021-02-18T18:44:50Z | ISBI 2021 | null | null | Slake: A Semantically-Labeled Knowledge-Enhanced Dataset For Medical Visual Question Answering | ['Bo Liu', 'Li-Ming Zhan', 'Li Xu', 'Lin Ma', 'Y. Yang', 'Xiao-Ming Wu'] | 2,021 | IEEE International Symposium on Biomedical Imaging | 274 | 15 | ['Computer Science'] |
2,102.09665 | MUDES: Multilingual Detection of Offensive Spans | ['Tharindu Ranasinghe', 'Marcos Zampieri'] | ['cs.CL', 'cs.AI', 'cs.LG'] | The interest in offensive content identification in social media has grown
substantially in recent years. Previous work has dealt mostly with post level
annotations. However, identifying offensive spans is useful in many ways. To
help coping with this important challenge, we present MUDES, a multilingual
system to detect offensive spans in texts. MUDES features pre-trained models, a
Python API for developers, and a user-friendly web-based interface. A detailed
description of MUDES' components is presented in this paper. | 2021-02-18T23:19:00Z | Accepted to NAACL-HLT 2021 | null | null | MUDES: Multilingual Detection of Offensive Spans | ['Tharindu Ranasinghe', 'Marcos Zampieri'] | 2,021 | North American Chapter of the Association for Computational Linguistics | 41 | 51 | ['Computer Science'] |
2,102.09672 | Improved Denoising Diffusion Probabilistic Models | ['Alex Nichol', 'Prafulla Dhariwal'] | ['cs.LG', 'cs.AI', 'stat.ML'] | Denoising diffusion probabilistic models (DDPM) are a class of generative
models which have recently been shown to produce excellent samples. We show
that with a few simple modifications, DDPMs can also achieve competitive
log-likelihoods while maintaining high sample quality. Additionally, we find
that learning variances of the reverse diffusion process allows sampling with
an order of magnitude fewer forward passes with a negligible difference in
sample quality, which is important for the practical deployment of these
models. We additionally use precision and recall to compare how well DDPMs and
GANs cover the target distribution. Finally, we show that the sample quality
and likelihood of these models scale smoothly with model capacity and training
compute, making them easily scalable. We release our code at
https://github.com/openai/improved-diffusion | 2021-02-18T23:44:17Z | null | null | null | null | null | null | null | null | null | null |
2,102.10684 | Pre-Training BERT on Arabic Tweets: Practical Considerations | ['Ahmed Abdelali', 'Sabit Hassan', 'Hamdy Mubarak', 'Kareem Darwish', 'Younes Samih'] | ['cs.CL', 'cs.AI'] | Pretraining Bidirectional Encoder Representations from Transformers (BERT)
for downstream NLP tasks is a non-trival task. We pretrained 5 BERT models that
differ in the size of their training sets, mixture of formal and informal
Arabic, and linguistic preprocessing. All are intended to support Arabic
dialects and social media. The experiments highlight the centrality of data
diversity and the efficacy of linguistically aware segmentation. They also
highlight that more data or more training step do not necessitate better
models. Our new models achieve new state-of-the-art results on several
downstream tasks. The resulting models are released to the community under the
name QARiB. | 2021-02-21T20:51:33Z | 6 pages, 5 figures | null | null | Pre-Training BERT on Arabic Tweets: Practical Considerations | ['Ahmed Abdelali', 'Sabit Hassan', 'Hamdy Mubarak', 'Kareem Darwish', 'Younes Samih'] | 2,021 | arXiv.org | 102 | 30 | ['Computer Science'] |
2,102.11646 | HardCoRe-NAS: Hard Constrained diffeRentiable Neural Architecture Search | ['Niv Nayman', 'Yonathan Aflalo', 'Asaf Noy', 'Lihi Zelnik-Manor'] | ['cs.LG', 'cs.AI', 'cs.CV', 'math.OC', 'stat.ML', '68T09, 68T45', 'G.1.6; G.3; I.2.8; I.2.10; I.5.1'] | Realistic use of neural networks often requires adhering to multiple
constraints on latency, energy and memory among others. A popular approach to
find fitting networks is through constrained Neural Architecture Search (NAS),
however, previous methods enforce the constraint only softly. Therefore, the
resulting networks do not exactly adhere to the resource constraint and their
accuracy is harmed. In this work we resolve this by introducing Hard
Constrained diffeRentiable NAS (HardCoRe-NAS), that is based on an accurate
formulation of the expected resource requirement and a scalable search method
that satisfies the hard constraint throughout the search. Our experiments show
that HardCoRe-NAS generates state-of-the-art architectures, surpassing other
NAS methods, while strictly satisfying the hard resource constraints without
any tuning required. | 2021-02-23T11:56:30Z | Niv Nayman and Yonathan Aflalo contributed equally. An implementation
of HardCoRe-NAS is available at: https://github.com/Alibaba-MIIL/HardCoReNAS | null | null | null | null | null | null | null | null | null |
2,102.11972 | Do Transformer Modifications Transfer Across Implementations and
Applications? | ['Sharan Narang', 'Hyung Won Chung', 'Yi Tay', 'William Fedus', 'Thibault Fevry', 'Michael Matena', 'Karishma Malkan', 'Noah Fiedel', 'Noam Shazeer', 'Zhenzhong Lan', 'Yanqi Zhou', 'Wei Li', 'Nan Ding', 'Jake Marcus', 'Adam Roberts', 'Colin Raffel'] | ['cs.LG', 'cs.CL'] | The research community has proposed copious modifications to the Transformer
architecture since it was introduced over three years ago, relatively few of
which have seen widespread adoption. In this paper, we comprehensively evaluate
many of these modifications in a shared experimental setting that covers most
of the common uses of the Transformer in natural language processing.
Surprisingly, we find that most modifications do not meaningfully improve
performance. Furthermore, most of the Transformer variants we found beneficial
were either developed in the same codebase that we used or are relatively minor
changes. We conjecture that performance improvements may strongly depend on
implementation details and correspondingly make some recommendations for
improving the generality of experimental results. | 2021-02-23T22:44:54Z | To appear at EMNLP 2021 as a conference paper | null | null | null | null | null | null | null | null | null |
2,102.12092 | Zero-Shot Text-to-Image Generation | ['Aditya Ramesh', 'Mikhail Pavlov', 'Gabriel Goh', 'Scott Gray', 'Chelsea Voss', 'Alec Radford', 'Mark Chen', 'Ilya Sutskever'] | ['cs.CV', 'cs.LG'] | Text-to-image generation has traditionally focused on finding better modeling
assumptions for training on a fixed dataset. These assumptions might involve
complex architectures, auxiliary losses, or side information such as object
part labels or segmentation masks supplied during training. We describe a
simple approach for this task based on a transformer that autoregressively
models the text and image tokens as a single stream of data. With sufficient
data and scale, our approach is competitive with previous domain-specific
models when evaluated in a zero-shot fashion. | 2021-02-24T06:42:31Z | null | null | null | null | null | null | null | null | null | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.