arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2104.07091 | SummScreen: A Dataset for Abstractive Screenplay Summarization | ['Mingda Chen', 'Zewei Chu', 'Sam Wiseman', 'Kevin Gimpel'] | ['cs.CL'] | We introduce SummScreen, a summarization dataset comprised of pairs of TV series transcripts and human written recaps. The dataset provides a challenging testbed for abstractive summarization for several reasons. Plot details are often expressed indirectly in character dialogues and may be scattered across the entirety... | 2021-04-14T19:37:40Z | ACL 2022 | null | null | SummScreen: A Dataset for Abstractive Screenplay Summarization | ['Mingda Chen', 'Zewei Chu', 'Sam Wiseman', 'Kevin Gimpel'] | 2021 | Annual Meeting of the Association for Computational Linguistics | 96 | 64 | ['Computer Science'] |
2104.07179 | Does Putting a Linguist in the Loop Improve NLU Data Collection? | ['Alicia Parrish', 'William Huang', 'Omar Agha', 'Soo-Hwan Lee', 'Nikita Nangia', 'Alex Warstadt', 'Karmanya Aggarwal', 'Emily Allaway', 'Tal Linzen', 'Samuel R. Bowman'] | ['cs.CL'] | Many crowdsourced NLP datasets contain systematic gaps and biases that are identified only after data collection is complete. Identifying these issues from early data samples during crowdsourcing should make mitigation more efficient, especially when done iteratively. We take natural language inference as a test case a... | 2021-04-15T00:31:10Z | 14 pages, 10 figures | null | null | null | null | null | null | null | null | null |
2104.07307 | NT5?! Training T5 to Perform Numerical Reasoning | ['Peng-Jian Yang', 'Ying Ting Chen', 'Yuechan Chen', 'Daniel Cer'] | ['cs.CL'] | Numerical reasoning over text (NRoT) presents unique challenges that are not well addressed by existing pre-training objectives. We explore five sequential training schedules that adapt a pre-trained T5 model for NRoT. Our final model is adapted from T5, but further pre-trained on three datasets designed to strengthen ... | 2021-04-15T08:34:44Z | 5 pages, 1 figure | null | null | NT5?! Training T5 to Perform Numerical Reasoning | ['Peng Yang', 'Ying Chen', 'Yuechan Chen', 'Daniel Matthew Cer'] | 2021 | arXiv.org | 15 | 9 | ['Computer Science'] |
2104.07555 | Data-QuestEval: A Referenceless Metric for Data-to-Text Semantic Evaluation | ['Clément Rebuffel', 'Thomas Scialom', 'Laure Soulier', 'Benjamin Piwowarski', 'Sylvain Lamprier', 'Jacopo Staiano', 'Geoffrey Scoutheeten', 'Patrick Gallinari'] | ['cs.CL'] | QuestEval is a reference-less metric used in text-to-text tasks, that compares the generated summaries directly to the source text, by automatically asking and answering questions. Its adaptation to Data-to-Text tasks is not straightforward, as it requires multimodal Question Generation and Answering systems on the con... | 2021-04-15T16:10:46Z | Accepted at EMNLP 2021 | null | null | Data-QuestEval: A Referenceless Metric for Data-to-Text Semantic Evaluation | ['Clément Rebuffel', 'Thomas Scialom', 'L. Soulier', 'Benjamin Piwowarski', 'S. Lamprier', 'Jacopo Staiano', 'Geoffrey Scoutheeten', 'P. Gallinari'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 32 | 39 | ['Computer Science'] |
2104.07566 | BAM: A Balanced Attention Mechanism for Single Image Super Resolution | ['Fanyi Wang', 'Haotian Hu', 'Cheng Shen'] | ['eess.IV', 'cs.CV', 'cs.LG'] | Recovering texture information from the aliasing regions has always been a major challenge for Single Image Super Resolution (SISR) task. These regions are often submerged in noise so that we have to restore texture details while suppressing noise. To address this issue, we propose a Balanced Attention Mechanism (BAM),... | 2021-04-15T16:22:16Z | 8 pages, 6 figures | null | null | null | null | null | null | null | null | null |
2104.07613 | SINA-BERT: A pre-trained Language Model for Analysis of Medical Texts in Persian | ['Nasrin Taghizadeh', 'Ehsan Doostmohammadi', 'Elham Seifossadat', 'Hamid R. Rabiee', 'Maedeh S. Tahaei'] | ['cs.CL'] | We have released Sina-BERT, a language model pre-trained on BERT (Devlin et al., 2018) to address the lack of a high-quality Persian language model in the medical domain. SINA-BERT utilizes pre-training on a large-scale corpus of medical contents including formal and informal texts collected from a variety of online re... | 2021-04-15T17:22:27Z | null | null | null | null | null | null | null | null | null | null |
2104.07857 | ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning | ['Samyam Rajbhandari', 'Olatunji Ruwase', 'Jeff Rasley', 'Shaden Smith', 'Yuxiong He'] | ['cs.DC', 'cs.AI', 'cs.LG', 'cs.PF'] | In the last three years, the largest dense deep learning models have grown over 1000x to reach hundreds of billions of parameters, while the GPU memory has only grown by 5x (16 GB to 80 GB). Therefore, the growth in model scale has been supported primarily though system innovations that allow large models to fit in the... | 2021-04-16T02:22:12Z | null | null | null | null | null | null | null | null | null | null |
2104.07972 | Language Models are Few-Shot Butlers | ['Vincent Micheli', 'François Fleuret'] | ['cs.CL', 'cs.LG'] | Pretrained language models demonstrate strong performance in most NLP tasks when fine-tuned on small task-specific datasets. Hence, these autoregressive models constitute ideal agents to operate in text-based environments where language understanding and generative capabilities are essential. Nonetheless, collecting ex... | 2021-04-16T08:47:07Z | EMNLP 2021 | null | null | Language Models are Few-Shot Butlers | ['Vincent Micheli', 'Franccois Fleuret'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 33 | 29 | ['Computer Science'] |
2104.08027 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | ['Fangyu Liu', 'Ivan Vulić', 'Anna Korhonen', 'Nigel Collier'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Pretrained Masked Language Models (MLMs) have revolutionised NLP in recent years. However, previous work has indicated that off-the-shelf MLMs are not effective as universal lexical or sentence encoders without further task-specific fine-tuning on NLI, sentence similarity, or paraphrasing tasks using annotated task dat... | 2021-04-16T10:49:56Z | EMNLP 2021 camera-ready version | null | null | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | ['Fangyu Liu', 'Ivan Vulic', 'A. Korhonen', 'Nigel Collier'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 121 | 82 | ['Computer Science'] |
2104.08200 | IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation | ['Samuel Cahyawijaya', 'Genta Indra Winata', 'Bryan Wilie', 'Karissa Vincentio', 'Xiaohong Li', 'Adhiguna Kuncoro', 'Sebastian Ruder', 'Zhi Yuan Lim', 'Syafri Bahar', 'Masayu Leylia Khodra', 'Ayu Purwarianti', 'Pascale Fung'] | ['cs.CL'] | Natural language generation (NLG) benchmarks provide an important avenue to measure progress and develop better NLG systems. Unfortunately, the lack of publicly available NLG benchmarks for low-resource languages poses a challenging barrier for building NLG systems that work well for languages with limited amounts of d... | 2021-04-16T16:16:44Z | Accepted in EMNLP 2021, 10 pages | null | null | null | null | null | null | null | null | null |
2104.08247 | What to Pre-Train on? Efficient Intermediate Task Selection | ['Clifton Poth', 'Jonas Pfeiffer', 'Andreas Rücklé', 'Iryna Gurevych'] | ['cs.CL'] | Intermediate task fine-tuning has been shown to culminate in large transfer gains across many NLP tasks. With an abundance of candidate datasets as well as pre-trained language models, it has become infeasible to run the cross-product of all combinations to find the best transfer setting. In this work we first establis... | 2021-04-16T17:31:18Z | EMNLP 2021 | null | null | null | null | null | null | null | null | null |
2104.08613 | Emotion Classification in a Resource Constrained Language Using Transformer-based Approach | ['Avishek Das', 'Omar Sharif', 'Mohammed Moshiul Hoque', 'Iqbal H. Sarker'] | ['cs.CL'] | Although research on emotion classification has significantly progressed in high-resource languages, it is still infancy for resource-constrained languages like Bengali. However, unavailability of necessary language processing tools and deficiency of benchmark corpora makes the emotion classification task in Bengali mo... | 2021-04-17T18:28:39Z | Accepted in NAACL-SRW 2021 | null | null | Emotion Classification in a Resource Constrained Language Using Transformer-based Approach | ['Avishek Das', 'Omar Sharif', 'M. M. Hoque', 'Iqbal H. Sarker'] | 2021 | North American Chapter of the Association for Computational Linguistics | 41 | 28 | ['Computer Science'] |
2104.08635 | UPB at SemEval-2021 Task 5: Virtual Adversarial Training for Toxic Spans Detection | ['Andrei Paraschiv', 'Dumitru-Clementin Cercel', 'Mihai Dascalu'] | ['cs.CL'] | The real-world impact of polarization and toxicity in the online sphere marked the end of 2020 and the beginning of this year in a negative way. Semeval-2021, Task 5 - Toxic Spans Detection is based on a novel annotation of a subset of the Jigsaw Unintended Bias dataset and is the first language toxicity detection task... | 2021-04-17T19:42:12Z | null | null | null | UPB at SemEval-2021 Task 5: Virtual Adversarial Training for Toxic Spans Detection | ['Andrei Paraschiv', 'Dumitru-Clementin Cercel', 'M. Dascalu'] | 2021 | International Workshop on Semantic Evaluation | 1 | 47 | ['Computer Science'] |
2104.08663 | BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models | ['Nandan Thakur', 'Nils Reimers', 'Andreas Rücklé', 'Abhishek Srivastava', 'Iryna Gurevych'] | ['cs.IR', 'cs.AI', 'cs.CL'] | Existing neural information retrieval (IR) models have often been studied in homogeneous and narrow settings, which has considerably limited insights into their out-of-distribution (OOD) generalization capabilities. To address this, and to facilitate researchers to broadly evaluate the effectiveness of their models, we... | 2021-04-17T23:29:55Z | Accepted at NeurIPS 2021 Dataset and Benchmark Track | null | null | BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models | ['Nandan Thakur', 'Nils Reimers', "Andreas Ruckl'e", 'Abhishek Srivastava', 'Iryna Gurevych'] | 2021 | NeurIPS Datasets and Benchmarks | 1064 | 103 | ['Computer Science'] |
2104.08671 | When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset | ['Lucia Zheng', 'Neel Guha', 'Brandon R. Anderson', 'Peter Henderson', 'Daniel E. Ho'] | ['cs.CL'] | While self-supervised learning has made rapid advances in natural language processing, it remains unclear when researchers should engage in resource-intensive domain-specific pretraining (domain pretraining). The law, puzzlingly, has yielded few documented instances of substantial gains to domain pretraining in spite o... | 2021-04-18T00:57:16Z | ICAIL 2021. Code & data available at https://github.com/reglab/casehold | null | null | null | null | null | null | null | null | null |
2104.08678 | Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation | ['Max Bartolo', 'Tristan Thrush', 'Robin Jia', 'Sebastian Riedel', 'Pontus Stenetorp', 'Douwe Kiela'] | ['cs.CL', 'cs.LG'] | Despite recent progress, state-of-the-art question answering models remain vulnerable to a variety of adversarial attacks. While dynamic adversarial data collection, in which a human annotator tries to write examples that fool a model-in-the-loop, can improve model robustness, this process is expensive which limits the... | 2021-04-18T02:00:06Z | EMNLP 2021 | Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, p.8830-8848. Association for Computational Linguistics | 10.18653/v1/2021.emnlp-main.696 | null | null | null | null | null | null | null |
2104.08691 | The Power of Scale for Parameter-Efficient Prompt Tuning | ['Brian Lester', 'Rami Al-Rfou', 'Noah Constant'] | ['cs.CL'] | In this work, we explore "prompt tuning", a simple yet effective mechanism for learning "soft prompts" to condition frozen language models to perform specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft prompts are learned through backpropagation and can be tuned to incorporate signal from an... | 2021-04-18T03:19:26Z | Accepted to EMNLP 2021 | null | null | The Power of Scale for Parameter-Efficient Prompt Tuning | ['Brian Lester', 'Rami Al-Rfou', 'Noah Constant'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 4129 | 60 | ['Computer Science'] |
2104.08718 | CLIPScore: A Reference-free Evaluation Metric for Image Captioning | ['Jack Hessel', 'Ari Holtzman', 'Maxwell Forbes', 'Ronan Le Bras', 'Yejin Choi'] | ['cs.CV', 'cs.CL'] | Image captioning has conventionally relied on reference-based automatic evaluations, where machine captions are compared against captions written by humans. This is in contrast to the reference-free manner in which humans assess caption quality. In this paper, we report the surprising empirical finding that CLIP (Rad... | 2021-04-18T05:00:29Z | null | EMNLP 2021 | null | CLIPScore: A Reference-free Evaluation Metric for Image Captioning | ['Jack Hessel', 'Ari Holtzman', 'Maxwell Forbes', 'Ronan Le Bras', 'Yejin Choi'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 1597 | 75 | ['Computer Science'] |
2104.08727 | GooAQ: Open Question Answering with Diverse Answer Types | ['Daniel Khashabi', 'Amos Ng', 'Tushar Khot', 'Ashish Sabharwal', 'Hannaneh Hajishirzi', 'Chris Callison-Burch'] | ['cs.CL', 'cs.AI'] | While day-to-day questions come with a variety of answer types, the current question-answering (QA) literature has failed to adequately address the answer diversity of questions. To this end, we present GooAQ, a large-scale dataset with a variety of answer types. This dataset contains over 5 million questions and 3 mil... | 2021-04-18T05:40:39Z | EMNLP-Findings 2021 | null | null | GooAQ: Open Question Answering with Diverse Answer Types | ['Daniel Khashabi', 'Amos Ng', 'Tushar Khot', 'Ashish Sabharwal', 'Hannaneh Hajishirzi', 'Chris Callison-Burch'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 54 | 24 | ['Computer Science'] |
2104.08801 | Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval | ['Devang Kulshreshtha', 'Robert Belfer', 'Iulian Vlad Serban', 'Siva Reddy'] | ['cs.CL', 'cs.AI', 'cs.LG'] | In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA) from source to target domain. While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. ... | 2021-04-18T10:20:07Z | EMNLP 2021 | null | null | null | null | null | null | null | null | null |
2104.08821 | SimCSE: Simple Contrastive Learning of Sentence Embeddings | ['Tianyu Gao', 'Xingcheng Yao', 'Danqi Chen'] | ['cs.CL', 'cs.LG'] | This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works sur... | 2021-04-18T11:27:08Z | Accepted to EMNLP 2021. The code and pre-trained models are available at https://github.com/princeton-nlp/simcse | null | null | null | null | null | null | null | null | null |
2104.08836 | LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding | ['Yiheng Xu', 'Tengchao Lv', 'Lei Cui', 'Guoxin Wang', 'Yijuan Lu', 'Dinei Florencio', 'Cha Zhang', 'Furu Wei'] | ['cs.CL'] | Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually-rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. In this paper, we present LayoutXLM, a multimodal pre-trained model for multilingual document... | 2021-04-18T12:16:00Z | Work in progress | null | null | null | null | null | null | null | null | null |
2104.08860 | CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval | ['Huaishao Luo', 'Lei Ji', 'Ming Zhong', 'Yang Chen', 'Wen Lei', 'Nan Duan', 'Tianrui Li'] | ['cs.CV'] | Video-text retrieval plays an essential role in multi-modal research and has been widely used in many real-world web applications. The CLIP (Contrastive Language-Image Pre-training), an image-language pre-training model, has demonstrated the power of visual concepts learning from web collected image-text datasets. In t... | 2021-04-18T13:59:50Z | null | null | null | CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval | ['Huaishao Luo', 'Lei Ji', 'Ming Zhong', 'Yang Chen', 'Wen Lei', 'Nan Duan', 'Tianrui Li'] | 2021 | Neurocomputing | 816 | 52 | ['Computer Science'] |
2104.09497 | Attention in Attention Network for Image Super-Resolution | ['Haoyu Chen', 'Jinjin Gu', 'Zhi Zhang'] | ['cs.CV'] | Convolutional neural networks have allowed remarkable advances in single image super-resolution (SISR) over the last decade. Among recent advances in SISR, attention mechanisms are crucial for high-performance SR models. However, the attention mechanism remains unclear on why and how it works in SISR. In this work, we ... | 2021-04-19T17:59:06Z | 11 pages, 10 figures. Codes are available at https://github.com/haoyuc/A2N | null | null | Attention in Attention Network for Image Super-Resolution | ['Haoyu Chen', 'Jinjin Gu', 'Zhi Zhang'] | 2021 | arXiv.org | 70 | 47 | ['Computer Science'] |
2104.09617 | Operationalizing a National Digital Library: The Case for a Norwegian Transformer Model | ['Per E Kummervold', 'Javier de la Rosa', 'Freddy Wetjen', 'Svein Arne Brygfjeld'] | ['cs.CL', 'cs.DL'] | In this work, we show the process of building a large-scale training set from digital and digitized collections at a national library. The resulting Bidirectional Encoder Representations from Transformers (BERT)-based language model for Norwegian outperforms multilingual BERT (mBERT) models in several token and sequenc... | 2021-04-19T20:36:24Z | Accepted to NoDaLiDa 2021 | null | null | null | null | null | null | null | null | null |
2104.09864 | RoFormer: Enhanced Transformer with Rotary Position Embedding | ['Jianlin Su', 'Yu Lu', 'Shengfeng Pan', 'Ahmed Murtadha', 'Bo Wen', 'Yunfeng Liu'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Position encoding recently has shown effective in the transformer architecture. It enables valuable supervision for dependency modeling between elements at different positions of the sequence. In this paper, we first investigate various methods to integrate positional information into the learning process of transforme... | 2021-04-20T09:54:06Z | fixed some typos | null | null | null | null | null | null | null | null | null |
2104.09947 | Measuring Shifts in Attitudes Towards COVID-19 Measures in Belgium Using Multilingual BERT | ['Kristen Scott', 'Pieter Delobelle', 'Bettina Berendt'] | ['cs.CL', 'cs.SI'] | We classify seven months' worth of Belgian COVID-related Tweets using multilingual BERT and relate them to their governments' COVID measures. We classify Tweets by their stated opinion on Belgian government curfew measures (too strict, ok, too loose). We examine the change in topics discussed and views expressed over t... | 2021-04-20T13:17:56Z | 5 pages, 2 figures | null | null | null | null | null | null | null | null | null |
2104.10157 | VideoGPT: Video Generation using VQ-VAE and Transformers | ['Wilson Yan', 'Yunzhi Zhang', 'Pieter Abbeel', 'Aravind Srinivas'] | ['cs.CV', 'cs.LG'] | We present VideoGPT: a conceptually simple architecture for scaling likelihood based generative modeling to natural videos. VideoGPT uses VQ-VAE that learns downsampled discrete latent representations of a raw video by employing 3D convolutions and axial self-attention. A simple GPT-like architecture is then used to au... | 2021-04-20T17:58:03Z | Project website: https://wilson1yan.github.io/videogpt/index.html | null | null | null | null | null | null | null | null | null |
2104.10972 | ImageNet-21K Pretraining for the Masses | ['Tal Ridnik', 'Emanuel Ben-Baruch', 'Asaf Noy', 'Lihi Zelnik-Manor'] | ['cs.CV', 'cs.LG'] | ImageNet-1K serves as the primary dataset for pretraining deep learning models for computer vision tasks. ImageNet-21K dataset, which is bigger and more diverse, is used less frequently for pretraining, mainly due to its complexity, low accessibility, and underestimation of its added value. This paper aims to close thi... | 2021-04-22T10:10:14Z | Accepted to NeurIPS 2021 (Datasets and Benchmarks) | null | null | null | null | null | null | null | null | null |
2104.11227 | Multiscale Vision Transformers | ['Haoqi Fan', 'Bo Xiong', 'Karttikeya Mangalam', 'Yanghao Li', 'Zhicheng Yan', 'Jitendra Malik', 'Christoph Feichtenhofer'] | ['cs.CV', 'cs.AI', 'cs.LG'] | We present Multiscale Vision Transformers (MViT) for video and image recognition, by connecting the seminal idea of multiscale feature hierarchies with transformer models. Multiscale Transformers have several channel-resolution scale stages. Starting from the input resolution and a small channel dimension, the stages h... | 2021-04-22T17:59:45Z | Technical report | null | null | Multiscale Vision Transformers | ['Haoqi Fan', 'Bo Xiong', 'K. Mangalam', 'Yanghao Li', 'Zhicheng Yan', 'J. Malik', 'Christoph Feichtenhofer'] | 2021 | IEEE International Conference on Computer Vision | 1274 | 119 | ['Computer Science'] |
2104.11280 | Motion Representations for Articulated Animation | ['Aliaksandr Siarohin', 'Oliver J. Woodford', 'Jian Ren', 'Menglei Chai', 'Sergey Tulyakov'] | ['cs.CV'] | We propose novel motion representations for animating articulated objects consisting of distinct parts. In a completely unsupervised manner, our method identifies object parts, tracks them in a driving video, and infers their motions by considering their principal axes. In contrast to the previous keypoint-based works,... | 2021-04-22T18:53:56Z | null | CVPR 2021 | null | null | null | null | null | null | null | null |
2104.11394 | BERT-CoQAC: BERT-based Conversational Question Answering in Context | ['Munazza Zaib', 'Dai Hoang Tran', 'Subhash Sagar', 'Adnan Mahmood', 'Wei E. Zhang', 'Quan Z. Sheng'] | ['cs.CL'] | As one promising way to inquire about any particular information through a dialog with the bot, question answering dialog systems have gained increasing research interests recently. Designing interactive QA systems has always been a challenging task in natural language processing and used as a benchmark to evaluate a m... | 2021-04-23T03:05:17Z | null | null | null | BERT-CoQAC: BERT-Based Conversational Question Answering in Context | ['Munazza Zaib', 'Dai Hoang Tran', 'S. Sagar', 'A. Mahmood', 'Wei Emma Zhang', 'Quan Z. Sheng'] | 2021 | International Symposium on Parallel Architectures, Algorithms and Programming | 18 | 14 | ['Computer Science'] |
2104.12250 | XLM-T: Multilingual Language Models in Twitter for Sentiment Analysis and Beyond | ['Francesco Barbieri', 'Luis Espinosa Anke', 'Jose Camacho-Collados'] | ['cs.CL'] | Language models are ubiquitous in current NLP, and their multilingual capacity has recently attracted considerable attention. However, current analyses have almost exclusively focused on (multilingual variants of) standard benchmarks, and have relied on clean pre-training and task-specific corpora as multilingual signa... | 2021-04-25T20:28:53Z | LREC 2022. Code and data available at https://github.com/cardiffnlp/xlm-t | null | null | XLM-T: Multilingual Language Models in Twitter for Sentiment Analysis and Beyond | ['Francesco Barbieri', 'Luis Espinosa Anke', 'José Camacho-Collados'] | 2021 | International Conference on Language Resources and Evaluation | 228 | 45 | ['Computer Science'] |
2104.12533 | Visformer: The Vision-friendly Transformer | ['Zhengsu Chen', 'Lingxi Xie', 'Jianwei Niu', 'Xuefeng Liu', 'Longhui Wei', 'Qi Tian'] | ['cs.CV'] | The past year has witnessed the rapid development of applying the Transformer module to vision problems. While some researchers have demonstrated that Transformer-based models enjoy a favorable ability of fitting data, there are still growing number of evidences showing that these models suffer over-fitting especially ... | 2021-04-26T13:13:03Z | null | null | null | Visformer: The Vision-friendly Transformer | ['Zhengsu Chen', 'Lingxi Xie', 'Jianwei Niu', 'Xuefeng Liu', 'Longhui Wei', 'Qi Tian'] | 2021 | IEEE International Conference on Computer Vision | 223 | 67 | ['Computer Science'] |
2104.12741 | GermanQuAD and GermanDPR: Improving Non-English Question Answering and Passage Retrieval | ['Timo Möller', 'Julian Risch', 'Malte Pietsch'] | ['cs.CL', 'cs.LG'] | A major challenge of research on non-English machine reading for question answering (QA) is the lack of annotated datasets. In this paper, we present GermanQuAD, a dataset of 13,722 extractive question/answer pairs. To improve the reproducibility of the dataset creation approach and foster QA research on other language... | 2021-04-26T17:34:31Z | See https://deepset.ai/germanquad for downloading the datasets and models | null | null | null | null | null | null | null | null | null |
2104.12756 | InfographicVQA | ['Minesh Mathew', 'Viraj Bagal', 'Rubèn Pérez Tito', 'Dimosthenis Karatzas', 'Ernest Valveny', 'C. V Jawahar'] | ['cs.CV', 'cs.CL'] | Infographics are documents designed to effectively communicate information using a combination of textual, graphical and visual elements. In this work, we explore the automatic understanding of infographic images by using Visual Question Answering technique. To this end, we present InfographicVQA, a new dataset that com... | 2021-04-26T17:45:54Z | null | null | null | null | null | null | null | null | null | null |
2104.13395 | ACDC: The Adverse Conditions Dataset with Correspondences for Robust Semantic Driving Scene Perception | ['Christos Sakaridis', 'Haoran Wang', 'Ke Li', 'René Zurbrügg', 'Arpit Jadon', 'Wim Abbeloos', 'Daniel Olmeda Reino', 'Luc Van Gool', 'Dengxin Dai'] | ['cs.CV'] | Level-5 driving automation requires a robust visual perception system that can parse input images under any condition. However, existing driving datasets for dense semantic perception are either dominated by images captured under normal conditions or are small in scale. To address this, we introduce ACDC, the Adverse C... | 2021-04-27T18:00:05Z | Submitted for review to IEEE T-PAMI. Extended version of original conference paper published in ICCV 2021 | null | null | null | null | null | null | null | null | null |
2104.13840 | Twins: Revisiting the Design of Spatial Attention in Vision Transformers | ['Xiangxiang Chu', 'Zhi Tian', 'Yuqing Wang', 'Bo Zhang', 'Haibing Ren', 'Xiaolin Wei', 'Huaxia Xia', 'Chunhua Shen'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully-devised yet simple spat... | 2021-04-28T15:42:31Z | Accepted to NeurIPS2021 | null | null | Twins: Revisiting the Design of Spatial Attention in Vision Transformers | ['Xiangxiang Chu', 'Zhi Tian', 'Yuqing Wang', 'Bo Zhang', 'Haibing Ren', 'Xiaolin Wei', 'Huaxia Xia', 'Chunhua Shen'] | 2021 | Neural Information Processing Systems | 1034 | 52 | ['Computer Science'] |
2104.14294 | Emerging Properties in Self-Supervised Vision Transformers | ['Mathilde Caron', 'Hugo Touvron', 'Ishan Misra', 'Hervé Jégou', 'Julien Mairal', 'Piotr Bojanowski', 'Armand Joulin'] | ['cs.CV'] | In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-sup... | 2021-04-29T12:28:51Z | 21 pages | null | null | null | null | null | null | null | null | null |
2104.14690 | Entailment as Few-Shot Learner | ['Sinong Wang', 'Han Fang', 'Madian Khabsa', 'Hanzi Mao', 'Hao Ma'] | ['cs.CL', 'cs.AI'] | Large pre-trained language models (LMs) have demonstrated remarkable ability as few-shot learners. However, their success hinges largely on scaling model parameters to a degree that makes it challenging to train and serve. In this paper, we propose a new approach, named as EFL, that can turn small LMs into better few-s... | 2021-04-29T22:52:26Z | null | null | null | Entailment as Few-Shot Learner | ['Sinong Wang', 'Han Fang', 'Madian Khabsa', 'Hanzi Mao', 'Hao Ma'] | 2021 | arXiv.org | 185 | 51 | ['Computer Science'] |
2105.00059 | An analysis of full-size Russian complexly NER labelled corpus of Internet user reviews on the drugs based on deep learning and language neural nets | ['Alexander Sboev', 'Sanna Sboeva', 'Ivan Moloshnikov', 'Artem Gryaznov', 'Roman Rybka', 'Alexander Naumov', 'Anton Selivanov', 'Gleb Rylkov', 'Viacheslav Ilyin'] | ['cs.CL', 'cs.AI', 'cs.LG'] | We present the full-size Russian complexly NER-labeled corpus of Internet user reviews, along with an evaluation of accuracy levels reached on this corpus by a set of advanced deep learning neural networks to extract the pharmacologically meaningful entities from Russian texts. The corpus annotation includes mentions o... | 2021-04-30T19:46:24Z | null | null | null | null | null | null | null | null | null | null |
2105.00572 | Larger-Scale Transformers for Multilingual Masked Language Modeling | ['Naman Goyal', 'Jingfei Du', 'Myle Ott', 'Giri Anantharaman', 'Alexis Conneau'] | ['cs.CL'] | Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models dubbed XLM-R XL and XLM-R XXL outperform XLM-R by 1.8% a... | 2021-05-02T23:15:02Z | 4 pages | null | null | null | null | null | null | null | null | null |
2105.01051 | SUPERB: Speech processing Universal PERformance Benchmark | ['Shu-wen Yang', 'Po-Han Chi', 'Yung-Sung Chuang', 'Cheng-I Jeff Lai', 'Kushal Lakhotia', 'Yist Y. Lin', 'Andy T. Liu', 'Jiatong Shi', 'Xuankai Chang', 'Guan-Ting Lin', 'Tzu-Hsien Huang', 'Wei-Cheng Tseng', 'Ko-tik Lee', 'Da-Rong Liu', 'Zili Huang', 'Shuyan Dong', 'Shang-Wen Li', 'Shinji Watanabe', 'Abdelrahman Mohamed... | ['cs.CL', 'cs.SD', 'eess.AS'] | Self-supervised learning (SSL) has proven vital for advancing research in natural language processing (NLP) and computer vision (CV). The paradigm pretrains a shared model on large volumes of unlabeled data and achieves state-of-the-art (SOTA) for various tasks with minimal adaptation. However, the speech processing co... | 2021-05-03T17:51:09Z | To appear in Interspeech 2021 | null | null | null | null | null | null | null | null | null |
2105.01279 | ZEN 2.0: Continue Training and Adaption for N-gram Enhanced Text
Encoders | ['Yan Song', 'Tong Zhang', 'Yonggang Wang', 'Kai-Fu Lee'] | ['cs.CL', 'cs.AI'] | Pre-trained text encoders have drawn sustaining attention in natural language
processing (NLP) and shown their capability in obtaining promising results in
different tasks. Recent studies illustrated that external self-supervised
signals (or knowledge extracted by unsupervised learning, such as n-grams) are
beneficial ... | 2021-05-04T04:08:58Z | 13 pages, 7 figures | null | null | ZEN 2.0: Continue Training and Adaption for N-gram Enhanced Text Encoders | ['Yan Song', 'Tong Zhang', 'Yonggang Wang', 'Kai-Fu Lee'] | 2021 | arXiv.org | 45 | 42 | ['Computer Science']
2105.01601 | MLP-Mixer: An all-MLP Architecture for Vision | ['Ilya Tolstikhin', 'Neil Houlsby', 'Alexander Kolesnikov', 'Lucas Beyer', 'Xiaohua Zhai', 'Thomas Unterthiner', 'Jessica Yung', 'Andreas Steiner', 'Daniel Keysers', 'Jakob Uszkoreit', 'Mario Lucic', 'Alexey Dosovitskiy'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Convolutional Neural Networks (CNNs) are the go-to model for computer vision.
Recently, attention-based networks, such as the Vision Transformer, have also
become popular. In this paper we show that while convolutions and attention are
both sufficient for good performance, neither of them are necessary. We present
MLP-... | 2021-05-04T16:17:21Z | v2: Fixed parameter counts in Table 1. v3: Added results on JFT-3B in
Figure 2(right); Added Section 3.4 on the input permutations. v4: Updated the
x label in Figure 2(right) | null | null | null | null | null | null | null | null | null |
2105.02446 | DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism | ['Jinglin Liu', 'Chengxi Li', 'Yi Ren', 'Feiyang Chen', 'Zhou Zhao'] | ['eess.AS', 'cs.LG', 'cs.SD'] | Singing voice synthesis (SVS) systems are built to synthesize high-quality
and expressive singing voice, in which the acoustic model generates the
acoustic features (e.g., mel-spectrogram) given a music score. Previous singing
acoustic models adopt a simple loss (e.g., L1 and L2) or generative adversarial
network (GAN)... | 2021-05-06T05:21:42Z | SVS (DiffSinger), TTS (DiffSpeech), Shallow Diffusion Mechanism;
Submitted to arxiv on 6 May 2021; Accepted by AAAI 2022 | null | null | null | null | null | null | null | null | null |
2105.02855 | Adapting Monolingual Models: Data can be Scarce when Language Similarity
is High | ['Wietse de Vries', 'Martijn Bartelds', 'Malvina Nissim', 'Martijn Wieling'] | ['cs.CL'] | For many (minority) languages, the resources needed to train large models are
not available. We investigate the performance of zero-shot transfer learning
with as little data as possible, and the influence of language similarity in
this process. We retrain the lexical layers of four BERT-based models using
data from tw... | 2021-05-06T17:43:40Z | Findings of ACL 2021 Camera Ready | Findings of the Association for Computational Linguistics:
ACL-IJCNLP 2021 | 10.18653/v1/2021.findings-acl.433 | null | null | null | null | null | null | null |
2105.03011 | A Dataset of Information-Seeking Questions and Answers Anchored in
Research Papers | ['Pradeep Dasigi', 'Kyle Lo', 'Iz Beltagy', 'Arman Cohan', 'Noah A. Smith', 'Matt Gardner'] | ['cs.CL'] | Readers of academic research papers often read with the goal of answering
specific questions. Question Answering systems that can answer those questions
can make consumption of the content much more efficient. However, building such
tools requires data that reflect the difficulty of the task arising from
complex reason... | 2021-05-07T00:12:34Z | Accepted at NAACL 2021; Project page:
https://allenai.org/project/qasper | null | null | null | null | null | null | null | null | null |
2105.03143 | AraCOVID19-MFH: Arabic COVID-19 Multi-label Fake News and Hate Speech
Detection Dataset | ['Mohamed Seghir Hadj Ameur', 'Hassina Aliane'] | ['cs.CL', 'cs.AI', '68T50', 'I.2.7'] | Along with the COVID-19 pandemic, an "infodemic" of false and misleading
information has emerged and has complicated the COVID-19 response efforts.
Social networking sites such as Facebook and Twitter have contributed largely
to the spread of rumors, conspiracy theories, hate, xenophobia, racism, and
prejudice. To comb... | 2021-05-07T09:52:44Z | null | null | null | AraCOVID19-MFH: Arabic COVID-19 Multi-label Fake News and Hate Speech Detection Dataset | ['Mohamed Seghir Hadj Ameur', 'H. Aliane'] | 2021 | arXiv.org | 18 | 30 | ['Computer Science']
2105.03404 | ResMLP: Feedforward networks for image classification with
data-efficient training | ['Hugo Touvron', 'Piotr Bojanowski', 'Mathilde Caron', 'Matthieu Cord', 'Alaaeldin El-Nouby', 'Edouard Grave', 'Gautier Izacard', 'Armand Joulin', 'Gabriel Synnaeve', 'Jakob Verbeek', 'Hervé Jégou'] | ['cs.CV'] | We present ResMLP, an architecture built entirely upon multi-layer
perceptrons for image classification. It is a simple residual network that
alternates (i) a linear layer in which image patches interact, independently
and identically across channels, and (ii) a two-layer feed-forward network in
which channels interact... | 2021-05-07T17:31:44Z | null | null | null | null | null | null | null | null | null | null |
2105.03536 | Pareto-Optimal Quantized ResNet Is Mostly 4-bit | ['AmirAli Abdolrashidi', 'Lisa Wang', 'Shivani Agrawal', 'Jonathan Malmaud', 'Oleg Rybakov', 'Chas Leichner', 'Lukasz Lew'] | ['cs.LG', 'cs.CV'] | Quantization has become a popular technique to compress neural networks and
reduce compute cost, but most prior work focuses on studying quantization
without changing the network size. Many real-world applications of neural
networks have compute cost and memory budgets, which can be traded off with
model quality by cha... | 2021-05-07T23:28:37Z | 8 pages. Accepted at the Efficient Deep Learning for Computer Vision
Workshop at CVPR 2021 | null | 10.1109/CVPRW53098.2021.00345 | null | null | null | null | null | null | null |
2105.03824 | FNet: Mixing Tokens with Fourier Transforms | ['James Lee-Thorp', 'Joshua Ainslie', 'Ilya Eckstein', 'Santiago Ontanon'] | ['cs.CL', 'cs.LG'] | We show that Transformer encoder architectures can be sped up, with limited
accuracy costs, by replacing the self-attention sublayers with simple linear
transformations that "mix" input tokens. These linear mixers, along with
standard nonlinearities in feed-forward layers, prove competent at modeling
semantic relations... | 2021-05-09T03:32:48Z | To appear at NAACL 2022 | null | null | FNet: Mixing Tokens with Fourier Transforms | ['J. Lee-Thorp', 'J. Ainslie', 'Ilya Eckstein', 'Santiago Ontañón'] | 2021 | North American Chapter of the Association for Computational Linguistics | 537 | 78 | ['Computer Science']
2105.04206 | You Only Learn One Representation: Unified Network for Multiple Tasks | ['Chien-Yao Wang', 'I-Hau Yeh', 'Hong-Yuan Mark Liao'] | ['cs.CV'] | People ``understand'' the world via vision, hearing, tactile, and also the
past experience. Human experience can be learned through normal learning (we
call it explicit knowledge), or subconsciously (we call it implicit knowledge).
These experiences learned through normal learning or subconsciously will be
encoded and ... | 2021-05-10T09:03:11Z | null | null | null | null | null | null | null | null | null | null |
2105.04906 | VICReg: Variance-Invariance-Covariance Regularization for
Self-Supervised Learning | ['Adrien Bardes', 'Jean Ponce', 'Yann LeCun'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Recent self-supervised methods for image representation learning are based on
maximizing the agreement between embedding vectors from different views of the
same image. A trivial solution is obtained when the encoder outputs constant
vectors. This collapse problem is often avoided through implicit biases in the
learnin... | 2021-05-11T09:53:21Z | Accepted at ICLR 2022 | null | null | VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning | ['Adrien Bardes', 'J. Ponce', 'Yann LeCun'] | 2021 | International Conference on Learning Representations | 947 | 66 | ['Computer Science']
2105.05209 | Restoring Hebrew Diacritics Without a Dictionary | ['Elazar Gershuni', 'Yuval Pinter'] | ['cs.CL'] | We demonstrate that it is feasible to diacritize Hebrew script without any
human-curated resources other than plain diacritized text. We present NAKDIMON,
a two-layer character level LSTM, that performs on par with much more
complicated curation-dependent systems, across a diverse array of modern Hebrew
sources. | 2021-05-11T17:23:29Z | Findings of NAACL 2022 (in press). 6 pages, 1 figure | null | null | Restoring Hebrew Diacritics Without a Dictionary | ['Elazar Gershuni', 'Yuval Pinter'] | 2021 | NAACL-HLT | 8 | 25 | ['Computer Science']
2105.05233 | Diffusion Models Beat GANs on Image Synthesis | ['Prafulla Dhariwal', 'Alex Nichol'] | ['cs.LG', 'cs.AI', 'cs.CV', 'stat.ML'] | We show that diffusion models can achieve image sample quality superior to
the current state-of-the-art generative models. We achieve this on
unconditional image synthesis by finding a better architecture through a series
of ablations. For conditional image synthesis, we further improve sample
quality with classifier g... | 2021-05-11T17:50:24Z | Added compute requirements, ImageNet 256$\times$256 upsampling FID
and samples, DDIM guided sampler, fixed typos | null | null | null | null | null | null | null | null | null |
2105.05409 | A Large-Scale Benchmark for Food Image Segmentation | ['Xiongwei Wu', 'Xin Fu', 'Ying Liu', 'Ee-Peng Lim', 'Steven C. H. Hoi', 'Qianru Sun'] | ['cs.CV', 'cs.MM'] | Food image segmentation is a critical and indispensable task for developing
health-related applications such as estimating food calories and nutrients.
Existing food image segmentation models are underperforming due to two reasons:
(1) there is a lack of high quality food image datasets with fine-grained
ingredient lab... | 2021-05-12T03:00:07Z | 16 pages | null | null | null | null | null | null | null | null | null |
2105.05521 | SauvolaNet: Learning Adaptive Sauvola Network for Degraded Document
Binarization | ['Deng Li', 'Yue Wu', 'Yicong Zhou'] | ['cs.CV'] | Inspired by the classic Sauvola local image thresholding approach, we
systematically study it from the deep neural network (DNN) perspective and
propose a new solution called SauvolaNet for degraded document binarization
(DDB). It is composed of three explainable modules, namely, Multi-Window
Sauvola (MWS), Pixelwise W... | 2021-05-12T08:56:04Z | Submitted to 16th International Conference on Document Analysis and
Recognition | null | null | null | null | null | null | null | null | null |
2105.06337 | Grad-TTS: A Diffusion Probabilistic Model for Text-to-Speech | ['Vadim Popov', 'Ivan Vovk', 'Vladimir Gogoryan', 'Tasnima Sadekova', 'Mikhail Kudinov'] | ['cs.LG', 'cs.CL', 'stat.ML'] | Recently, denoising diffusion probabilistic models and generative score
matching have shown high potential in modelling complex data distributions
while stochastic calculus has provided a unified point of view on these
techniques allowing for flexible inference schemes. In this paper we introduce
Grad-TTS, a novel text... | 2021-05-13T14:47:44Z | null | null | null | Grad-TTS: A Diffusion Probabilistic Model for Text-to-Speech | ['Vadim Popov', 'Ivan Vovk', 'Vladimir Gogoryan', 'Tasnima Sadekova', 'Mikhail Kudinov'] | 2021 | International Conference on Machine Learning | 544 | 36 | ['Computer Science', 'Mathematics']
2105.06597 | RetGen: A Joint framework for Retrieval and Grounded Text Generation
Modeling | ['Yizhe Zhang', 'Siqi Sun', 'Xiang Gao', 'Yuwei Fang', 'Chris Brockett', 'Michel Galley', 'Jianfeng Gao', 'Bill Dolan'] | ['cs.CL', 'cs.AI'] | Recent advances in large-scale pre-training such as GPT-3 allow seemingly
high quality text to be generated from a given prompt. However, such generation
systems often suffer from problems of hallucinated facts, and are not
inherently designed to incorporate useful external information. Grounded
generation models appea... | 2021-05-14T00:11:38Z | accepted by AAAI-22, camera ready version | null | null | null | null | null | null | null | null | null |
2105.07464 | Few-NERD: A Few-Shot Named Entity Recognition Dataset | ['Ning Ding', 'Guangwei Xu', 'Yulin Chen', 'Xiaobin Wang', 'Xu Han', 'Pengjun Xie', 'Hai-Tao Zheng', 'Zhiyuan Liu'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Recently, considerable literature has grown up around the theme of few-shot
named entity recognition (NER), but little published benchmark data
specifically focused on the practical and challenging task. Current approaches
collect existing supervised NER datasets and re-organize them to the few-shot
setting for empiric... | 2021-05-16T15:53:17Z | Accepted by ACL-IJCNLP 2021 (long paper), update | null | null | Few-NERD: A Few-shot Named Entity Recognition Dataset | ['Ning Ding', 'Guangwei Xu', 'Yulin Chen', 'Xiaobin Wang', 'Xu Han', 'Pengjun Xie', 'Haitao Zheng', 'Zhiyuan Liu'] | 2021 | Annual Meeting of the Association for Computational Linguistics | 239 | 42 | ['Computer Science']
2105.08050 | Pay Attention to MLPs | ['Hanxiao Liu', 'Zihang Dai', 'David R. So', 'Quoc V. Le'] | ['cs.LG', 'cs.CL', 'cs.CV'] | Transformers have become one of the most important architectural innovations
in deep learning and have enabled many breakthroughs over the past few years.
Here we propose a simple network architecture, gMLP, based on MLPs with gating,
and show that it can perform as well as Transformers in key language and vision
appli... | 2021-05-17T17:55:04Z | null | null | null | null | null | null | null | null | null | null |
2105.08209 | BookSum: A Collection of Datasets for Long-form Narrative Summarization | ['Wojciech Kryściński', 'Nazneen Rajani', 'Divyansh Agarwal', 'Caiming Xiong', 'Dragomir Radev'] | ['cs.CL'] | The majority of available text summarization datasets include short-form
source documents that lack long-range causal and temporal dependencies, and
often contain strong layout and stylistic biases. While relevant, such datasets
will offer limited challenges for future generations of text summarization
systems. We addr... | 2021-05-18T00:22:46Z | 19 pages, 12 tables, 3 figures | null | null | BookSum: A Collection of Datasets for Long-form Narrative Summarization | ['Wojciech Kryscinski', 'Nazneen Rajani', 'Divyansh Agarwal', 'Caiming Xiong', 'Dragomir R. Radev'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 154 | 43 | ['Computer Science']
2105.08276 | NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions | ['Junbin Xiao', 'Xindi Shang', 'Angela Yao', 'Tat-Seng Chua'] | ['cs.CV', 'cs.AI'] | We introduce NExT-QA, a rigorously designed video question answering
(VideoQA) benchmark to advance video understanding from describing to
explaining the temporal actions. Based on the dataset, we set up multi-choice
and open-ended QA tasks targeting causal action reasoning, temporal action
reasoning, and common scene ... | 2021-05-18T04:56:46Z | To appear at CVPR2021.(Receive one 'Strong Accept'. Review comments:
The benchmark will be beneficial for an important video understanding
problem. The analysis is comprehensive and provides meaningful insights.) | null | null | NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions | ['Junbin Xiao', 'Xindi Shang', 'Angela Yao', 'Tat-seng Chua'] | 2021 | Computer Vision and Pattern Recognition | 507 | 69 | ['Computer Science']
2105.08645 | CoTexT: Multi-task Learning with Code-Text Transformer | ['Long Phan', 'Hieu Tran', 'Daniel Le', 'Hieu Nguyen', 'James Anibal', 'Alec Peltekian', 'Yanfang Ye'] | ['cs.AI', 'cs.PL'] | We present CoTexT, a pre-trained, transformer-based encoder-decoder model
that learns the representative context between natural language (NL) and
programming language (PL). Using self-supervision, CoTexT is pre-trained on
large programming language corpora to learn a general understanding of language
and code. CoTexT ... | 2021-05-18T16:22:05Z | null | null | null | null | null | null | null | null | null | null |
2105.08935 | DeepFaceEditing: Deep Face Generation and Editing with Disentangled
Geometry and Appearance Control | ['Shu-Yu Chen', 'Feng-Lin Liu', 'Yu-Kun Lai', 'Paul L. Rosin', 'Chunpeng Li', 'Hongbo Fu', 'Lin Gao'] | ['cs.GR'] | Recent facial image synthesis methods have been mainly based on conditional
generative models. Sketch-based conditions can effectively describe the
geometry of faces, including the contours of facial components, hair
structures, as well as salient edges (e.g., wrinkles) on face surfaces but lack
effective control of ap... | 2021-05-19T05:35:44Z | null | null | null | null | null | null | null | null | null | null |
2105.09501 | Contrastive Learning for Many-to-many Multilingual Neural Machine
Translation | ['Xiao Pan', 'Mingxuan Wang', 'Liwei Wu', 'Lei Li'] | ['cs.CL', 'cs.LG'] | Existing multilingual machine translation approaches mainly focus on
English-centric directions, while the non-English directions still lag behind.
In this work, we aim to build a many-to-many translation system with an
emphasis on the quality of non-English language directions. Our intuition is
based on the hypothesis... | 2021-05-20T03:59:45Z | accepted as long paper in ACL2021 | null | null | null | null | null | null | null | null | null |
2105.09680 | KLUE: Korean Language Understanding Evaluation | ['Sungjoon Park', 'Jihyung Moon', 'Sungdong Kim', 'Won Ik Cho', 'Jiyoon Han', 'Jangwon Park', 'Chisung Song', 'Junseong Kim', 'Yongsook Song', 'Taehwan Oh', 'Joohong Lee', 'Juhyun Oh', 'Sungwon Lyu', 'Younghoon Jeong', 'Inkwon Lee', 'Sangwoo Seo', 'Dongjun Lee', 'Hyunwoo Kim', 'Myeonghwa Lee', 'Seongbo Jang', 'Seungwon... | ['cs.CL'] | We introduce Korean Language Understanding Evaluation (KLUE) benchmark. KLUE
is a collection of 8 Korean natural language understanding (NLU) tasks,
including Topic Classification, SemanticTextual Similarity, Natural Language
Inference, Named Entity Recognition, Relation Extraction, Dependency Parsing,
Machine Reading ... | 2021-05-20T11:40:30Z | 76 pages, 10 figures, 36 tables | null | null | KLUE: Korean Language Understanding Evaluation | ['Sungjoon Park', 'Jihyung Moon', 'Sungdong Kim', 'Won Ik Cho', 'Jiyoon Han', 'Jangwon Park', 'Chisung Song', 'Junseong Kim', 'Yongsook Song', 'Tae Hwan Oh', 'Joohong Lee', 'Juhyun Oh', 'Sungwon Lyu', 'Young-kuk Jeong', 'I. Lee', 'Sang-gyu Seo', 'Dongjun Lee', 'Hyunwoo Kim', 'Myeonghwa Lee', 'Seongbo Jang', 'Seungwon D... | 2021 | NeurIPS Datasets and Benchmarks | 198 | 168 | ['Computer Science']
2105.09816 | Intra-Document Cascading: Learning to Select Passages for Neural
Document Ranking | ['Sebastian Hofstätter', 'Bhaskar Mitra', 'Hamed Zamani', 'Nick Craswell', 'Allan Hanbury'] | ['cs.IR', 'cs.CL'] | An emerging recipe for achieving state-of-the-art effectiveness in neural
document re-ranking involves utilizing large pre-trained language models -
e.g., BERT - to evaluate all individual passages in the document and then
aggregating the outputs by pooling or additional Transformer layers. A major
drawback of this app... | 2021-05-20T15:10:13Z | Accepted at SIGIR 2021 (Full Paper Track) | null | null | Intra-Document Cascading: Learning to Select Passages for Neural Document Ranking | ['Sebastian Hofstätter', 'Bhaskar Mitra', 'Hamed Zamani', 'Nick Craswell', 'A. Hanbury'] | 2021 | Annual International ACM SIGIR Conference on Research and Development in Information Retrieval | 44 | 41 | ['Computer Science']
2105.10288 | Extremely Lightweight Quantization Robust Real-Time Single-Image Super
Resolution for Mobile Devices | ['Mustafa Ayazoglu'] | ['cs.CV', 'cs.AI', 'cs.LG', 'eess.IV'] | Single-Image Super Resolution (SISR) is a classical computer vision problem
and it has been studied for over decades. With the recent success of deep
learning methods, recent work on SISR focuses solutions with deep learning
methodologies and achieves state-of-the-art results. However most of the
state-of-the-art SISR ... | 2021-05-21T11:29:48Z | null | IEEE Computer Vision Pattern Recognition Workshops (Mobile AI 2021
Workshop) | null | null | null | null | null | null | null | null |
2105.11314 | RobeCzech: Czech RoBERTa, a monolingual contextualized language
representation model | ['Milan Straka', 'Jakub Náplava', 'Jana Straková', 'David Samuel'] | ['cs.CL'] | We present RobeCzech, a monolingual RoBERTa language representation model
trained on Czech data. RoBERTa is a robustly optimized Transformer-based
pretraining approach. We show that RobeCzech considerably outperforms
equally-sized multilingual and Czech-trained contextualized language
representation models, surpasses c... | 2021-05-24T14:50:04Z | Published in TSD 2021 | null | 10.1007/978-3-030-83527-9_17 | RobeCzech: Czech RoBERTa, a monolingual contextualized language representation model | ['Milan Straka', 'Jakub Náplava', 'Jana Straková', 'David Samuel'] | 2021 | Workshop on Time-Delay Systems | 47 | 47 | ['Computer Science']
2105.12306 | Read, Listen, and See: Leveraging Multimodal Information Helps Chinese
Spell Checking | ['Heng-Da Xu', 'Zhongli Li', 'Qingyu Zhou', 'Chao Li', 'Zizhen Wang', 'Yunbo Cao', 'Heyan Huang', 'Xian-Ling Mao'] | ['cs.CL'] | Chinese Spell Checking (CSC) aims to detect and correct erroneous characters
for user-generated text in the Chinese language. Most of the Chinese spelling
errors are misused semantically, phonetically or graphically similar
characters. Previous attempts noticed this phenomenon and try to use the
similarity for this tas... | 2021-05-26T02:38:11Z | In ACL Findings 2021 | null | null | Read, Listen, and See: Leveraging Multimodal Information Helps Chinese Spell Checking | ['Heng-Da Xu', 'Zhongli Li', 'Qingyu Zhou', 'Chao Li', 'Zizhen Wang', 'Yunbo Cao', 'Heyan Huang', 'Xian-Ling Mao'] | 2021 | Findings | 97 | 53 | ['Computer Science']
2105.12723 | Nested Hierarchical Transformer: Towards Accurate, Data-Efficient and
Interpretable Visual Understanding | ['Zizhao Zhang', 'Han Zhang', 'Long Zhao', 'Ting Chen', 'Sercan O. Arik', 'Tomas Pfister'] | ['cs.CV'] | Hierarchical structures are popular in recent vision transformers, however,
they require sophisticated designs and massive datasets to work well. In this
paper, we explore the idea of nesting basic local transformers on
non-overlapping image blocks and aggregating them in a hierarchical way. We
find that the block aggr... | 2021-05-26T17:56:48Z | AAAI2022 | null | null | Nested Hierarchical Transformer: Towards Accurate, Data-Efficient and Interpretable Visual Understanding | ['Zizhao Zhang', 'Han Zhang', 'Long Zhao', 'Ting Chen', 'Sercan Ö. Arik', 'Tomas Pfister'] | 2021 | AAAI Conference on Artificial Intelligence | 174 | 71 | ['Computer Science']
2105.12995 | ProtAugment: Unsupervised diverse short-texts paraphrasing for intent
detection meta-learning | ['Thomas Dopierre', 'Christophe Gravier', 'Wilfried Logerais'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Recent research considers few-shot intent detection as a meta-learning
problem: the model is learning to learn from a consecutive set of small tasks
named episodes. In this work, we propose ProtAugment, a meta-learning algorithm
for short texts classification (the intent detection task). ProtAugment is a
novel extensio... | 2021-05-27T08:31:27Z | Accepted at the 59th Annual Meeting of the Association for
Computational Linguistics (ACL2021) | null | null | null | null | null | null | null | null | null |
2105.13290 | CogView: Mastering Text-to-Image Generation via Transformers | ['Ming Ding', 'Zhuoyi Yang', 'Wenyi Hong', 'Wendi Zheng', 'Chang Zhou', 'Da Yin', 'Junyang Lin', 'Xu Zou', 'Zhou Shao', 'Hongxia Yang', 'Jie Tang'] | ['cs.CV', 'cs.LG'] | Text-to-Image generation in the general domain has long been an open problem,
which requires both a powerful generative model and cross-modal understanding.
We propose CogView, a 4-billion-parameter Transformer with VQ-VAE tokenizer to
advance this problem. We also demonstrate the finetuning strategies for various
down... | 2021-05-26T16:52:53Z | to appear in NeurIPS 2021 | null | null | null | null | null | null | null | null | null |
2105.13562 | ILDC for CJPE: Indian Legal Documents Corpus for Court Judgment
Prediction and Explanation | ['Vijit Malik', 'Rishabh Sanjay', 'Shubham Kumar Nigam', 'Kripa Ghosh', 'Shouvik Kumar Guha', 'Arnab Bhattacharya', 'Ashutosh Modi'] | ['cs.CL', 'cs.AI'] | An automated system that could assist a judge in predicting the outcome of a
case would help expedite the judicial process. For such a system to be
practically useful, predictions by the system should be explainable. To promote
research in developing such a system, we introduce ILDC (Indian Legal Documents
Corpus). ILD... | 2021-05-28T03:07:32Z | Accepted at ACL 2021, 17 Pages (9 Pages main paper, 4 pages
references, 4 pages appendix) | null | null | null | null | null | null | null | null | null |
2105.13626 | ByT5: Towards a token-free future with pre-trained byte-to-byte models | ['Linting Xue', 'Aditya Barua', 'Noah Constant', 'Rami Al-Rfou', 'Sharan Narang', 'Mihir Kale', 'Adam Roberts', 'Colin Raffel'] | ['cs.CL'] | Most widely-used pre-trained language models operate on sequences of tokens
corresponding to word or subword units. By comparison, token-free models that
operate directly on raw text (bytes or characters) have many benefits: they can
process text in any language out of the box, they are more robust to noise, and
they m... | 2021-05-28T07:03:22Z | To be published in TACL 2022 | null | null | ByT5: Towards a Token-Free Future with Pre-trained Byte-to-Byte Models | ['Linting Xue', 'Aditya Barua', 'Noah Constant', 'Rami Al-Rfou', 'Sharan Narang', 'Mihir Kale', 'Adam Roberts', 'Colin Raffel'] | 2021 | Transactions of the Association for Computational Linguistics | 509 | 68 | ['Computer Science']
2105.14039 | Towards mental time travel: a hierarchical memory for reinforcement
learning agents | ['Andrew Kyle Lampinen', 'Stephanie C. Y. Chan', 'Andrea Banino', 'Felix Hill'] | ['cs.LG', 'cs.AI', 'cs.NE', 'I.2.6'] | Reinforcement learning agents often forget details of the past, especially
after delays or distractor tasks. Agents with common memory architectures
struggle to recall and integrate across multiple timesteps of a past event, or
even to recall the details of a single timestep that is followed by distractor
tasks. To add... | 2021-05-28T18:12:28Z | NeurIPS 2021; 10 pages main text; 29 pages total | Advances in Neural Information Processing Systems, 2021 | null | Towards mental time travel: a hierarchical memory for reinforcement learning agents | ['Andrew Kyle Lampinen', 'Stephanie C. Y. Chan', 'Andrea Banino', 'Felix Hill'] | 2021 | Neural Information Processing Systems | 47 | 63 | ['Computer Science']
2105.14103 | An Attention Free Transformer | ['Shuangfei Zhai', 'Walter Talbott', 'Nitish Srivastava', 'Chen Huang', 'Hanlin Goh', 'Ruixiang Zhang', 'Josh Susskind'] | ['cs.LG', 'cs.CL', 'cs.CV'] | We introduce Attention Free Transformer (AFT), an efficient variant of
Transformers that eliminates the need for dot product self attention. In an AFT
layer, the key and value are first combined with a set of learned position
biases, the result of which is multiplied with the query in an element-wise
fashion. This new ... | 2021-05-28T20:45:30Z | null | null | null | An Attention Free Transformer | ['Shuangfei Zhai', 'Walter A. Talbott', 'Nitish Srivastava', 'Chen Huang', 'Hanlin Goh', 'Ruixiang Zhang', 'J. Susskind'] | 2021 | arXiv.org | 132 | 37 | ['Computer Science']
2105.14491 | How Attentive are Graph Attention Networks? | ['Shaked Brody', 'Uri Alon', 'Eran Yahav'] | ['cs.LG'] | Graph Attention Networks (GATs) are one of the most popular GNN architectures
and are considered as the state-of-the-art architecture for representation
learning with graphs. In GAT, every node attends to its neighbors given its own
representation as the query. However, in this paper we show that GAT computes a
very li... | 2021-05-30T10:17:58Z | Published in ICLR 2022 | null | null | null | null | null | null | null | null | null |
2105.15203 | SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers | ['Enze Xie', 'Wenhai Wang', 'Zhiding Yu', 'Anima Anandkumar', 'Jose M. Alvarez', 'Ping Luo'] | ['cs.CV', 'cs.LG'] | We present SegFormer, a simple, efficient yet powerful semantic segmentation
framework which unifies Transformers with lightweight multilayer perception
(MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a
novel hierarchically structured Transformer encoder which outputs multiscale
features. I... | 2021-05-31T17:59:51Z | Accepted by NeurIPS 2021 | null | null | SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers | ['Enze Xie', 'Wenhai Wang', 'Zhiding Yu', 'Anima Anandkumar', 'J. Álvarez', 'Ping Luo'] | 2021 | Neural Information Processing Systems | 5179 | 87 | ['Computer Science']
2106.00186 | Towards Light-weight and Real-time Line Segment Detection | ['Geonmo Gu', 'Byungsoo Ko', 'SeoungHyun Go', 'Sung-Hyun Lee', 'Jingeun Lee', 'Minchul Shin'] | ['cs.CV', 'cs.LG'] | Previous deep learning-based line segment detection (LSD) suffers from the
immense model size and high computational cost for line prediction. This
constrains them from real-time inference on computationally restricted
environments. In this paper, we propose a real-time and light-weight line
segment detector for resour... | 2021-06-01T02:28:08Z | Accepted by AAAI2022 | null | null | Towards Light-Weight and Real-Time Line Segment Detection | ['Geonmo Gu', 'ByungSoo Ko', 'SeoungHyun Go', 'Sung-Hyun Lee', 'Jingeun Lee', 'Minchul Shin'] | 2021 | AAAI Conference on Artificial Intelligence | 66 | 45 | ['Computer Science']
2106.00666 | You Only Look at One Sequence: Rethinking Transformer in Vision through
Object Detection | ['Yuxin Fang', 'Bencheng Liao', 'Xinggang Wang', 'Jiemin Fang', 'Jiyang Qi', 'Rui Wu', 'Jianwei Niu', 'Wenyu Liu'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Can Transformer perform 2D object- and region-level recognition from a pure
sequence-to-sequence perspective with minimal knowledge about the 2D spatial
structure? To answer this question, we present You Only Look at One Sequence
(YOLOS), a series of object detection models based on the vanilla Vision
Transformer with ... | 2021-06-01T17:54:09Z | NeurIPS 2021 Camera Ready | null | null | null | null | null | null | null | null | null |
2106.00753 | Is good old GRAPPA dead? | ['Zaccharie Ramzi', 'Alexandre Vignaud', 'Jean-Luc Starck', 'Philippe Ciuciu'] | ['eess.IV', 'cs.LG', 'physics.med-ph'] | We perform a qualitative analysis of performance of XPDNet, a
state-of-the-art deep learning approach for MRI reconstruction, compared to
GRAPPA, a classical approach. We do this in multiple settings, in particular
testing the robustness of the XPDNet to unseen settings, and show that the
XPDNet can to some degree gene... | 2021-06-01T19:59:21Z | Presented at ISMRM 2021 | null | null | null | null | null | null | null | null | null |
2106.00817 | nnDetection: A Self-configuring Method for Medical Object Detection | ['Michael Baumgartner', 'Paul F. Jaeger', 'Fabian Isensee', 'Klaus H. Maier-Hein'] | ['eess.IV', 'cs.CV'] | Simultaneous localisation and categorization of objects in medical images,
also referred to as medical object detection, is of high clinical relevance
because diagnostic decisions often depend on rating of objects rather than e.g.
pixels. For this task, the cumbersome and iterative process of method
configuration const... | 2021-06-01T21:55:03Z | MICCAI 2021 (splitted LN data set for camera-ready version); *Michael
Baumgartner and Paul F. J\"ager contributed equally | null | 10.1007/978-3-030-87240-3_51 | null | null | null | null | null | null | null |
2,106.00882 | Efficient Passage Retrieval with Hashing for Open-domain Question
Answering | ['Ikuya Yamada', 'Akari Asai', 'Hannaneh Hajishirzi'] | ['cs.CL', 'cs.IR'] | Most state-of-the-art open-domain question answering systems use a neural
retrieval model to encode passages into continuous vectors and extract them
from a knowledge source. However, such retrieval models often require large
memory to run because of the massive size of their passage index. In this
paper, we introduce ... | 2021-06-02T01:34:42Z | ACL 2021 | null | null | null | null | null | null | null | null | null |
2,106.01144 | Towards Emotional Support Dialog Systems | ['Siyang Liu', 'Chujie Zheng', 'Orianna Demasi', 'Sahand Sabour', 'Yu Li', 'Zhou Yu', 'Yong Jiang', 'Minlie Huang'] | ['cs.CL'] | Emotional support is a crucial ability for many conversation scenarios,
including social interactions, mental health support, and customer service
chats. Following reasonable procedures and using various support skills can
help to effectively provide support. However, due to the lack of a
well-designed task and corpora... | 2021-06-02T13:30:43Z | Accepted to ACL 2021 (Long Paper) | null | null | Towards Emotional Support Dialog Systems | ['Siyang Liu', 'Chujie Zheng', 'O. Demasi', 'Sahand Sabour', 'Yu Li', 'Zhou Yu', 'Yong Jiang', 'Minlie Huang'] | 2,021 | Annual Meeting of the Association for Computational Linguistics | 257 | 42 | ['Computer Science'] |
2,106.01345 | Decision Transformer: Reinforcement Learning via Sequence Modeling | ['Lili Chen', 'Kevin Lu', 'Aravind Rajeswaran', 'Kimin Lee', 'Aditya Grover', 'Michael Laskin', 'Pieter Abbeel', 'Aravind Srinivas', 'Igor Mordatch'] | ['cs.LG', 'cs.AI'] | We introduce a framework that abstracts Reinforcement Learning (RL) as a
sequence modeling problem. This allows us to draw upon the simplicity and
scalability of the Transformer architecture, and associated advances in
language modeling such as GPT-x and BERT. In particular, we present Decision
Transformer, an architec... | 2021-06-02T17:53:39Z | First two authors contributed equally. Last two authors advised
equally | null | null | Decision Transformer: Reinforcement Learning via Sequence Modeling | ['Lili Chen', 'Kevin Lu', 'A. Rajeswaran', 'Kimin Lee', 'Aditya Grover', 'M. Laskin', 'P. Abbeel', 'A. Srinivas', 'Igor Mordatch'] | 2,021 | Neural Information Processing Systems | 1,671 | 91 | ['Computer Science'] |
2,106.01548 | When Vision Transformers Outperform ResNets without Pre-training or
Strong Data Augmentations | ['Xiangning Chen', 'Cho-Jui Hsieh', 'Boqing Gong'] | ['cs.CV', 'cs.LG'] | Vision Transformers (ViTs) and MLPs signal further efforts on replacing
hand-wired features or inductive biases with general-purpose neural
architectures. Existing works empower the models by massive data, such as
large-scale pre-training and/or repeated strong data augmentations, and still
report optimization-related ... | 2021-06-03T02:08:03Z | ICLR 2022 (spotlight) | null | null | null | null | null | null | null | null | null |
2,106.0189 | SimCLS: A Simple Framework for Contrastive Learning of Abstractive
Summarization | ['Yixin Liu', 'Pengfei Liu'] | ['cs.CL'] | In this paper, we present a conceptually simple while empirically powerful
framework for abstractive summarization, SimCLS, which can bridge the gap
between the learning objective and evaluation metrics resulting from the
currently dominated sequence-to-sequence learning framework by formulating text
generation as a re... | 2021-06-03T14:34:17Z | Published as a short paper at ACL 2021 | null | null | null | null | null | null | null | null | null |
2,106.02241 | ERNIE-Tiny : A Progressive Distillation Framework for Pretrained
Transformer Compression | ['Weiyue Su', 'Xuyi Chen', 'Shikun Feng', 'Jiaxiang Liu', 'Weixin Liu', 'Yu Sun', 'Hao Tian', 'Hua Wu', 'Haifeng Wang'] | ['cs.CL'] | Pretrained language models (PLMs) such as BERT adopt a training paradigm
which first pretrain the model in general data and then finetune the model on
task-specific data, and have recently achieved great success. However, PLMs are
notorious for their enormous parameters and hard to be deployed on real-life
applications... | 2021-06-04T04:00:16Z | null | null | null | null | null | null | null | null | null | null |
2,106.02636 | MERLOT: Multimodal Neural Script Knowledge Models | ['Rowan Zellers', 'Ximing Lu', 'Jack Hessel', 'Youngjae Yu', 'Jae Sung Park', 'Jize Cao', 'Ali Farhadi', 'Yejin Choi'] | ['cs.CV', 'cs.CL', 'cs.LG'] | As humans, we understand events in the visual world contextually, performing
multimodal reasoning across time to make inferences about the past, present,
and future. We introduce MERLOT, a model that learns multimodal script
knowledge by watching millions of YouTube videos with transcribed speech -- in
an entirely labe... | 2021-06-04T17:57:39Z | project page at https://rowanzellers.com/merlot; NeurIPS 2021 camera
ready | null | null | MERLOT: Multimodal Neural Script Knowledge Models | ['Rowan Zellers', 'Ximing Lu', 'Jack Hessel', 'Youngjae Yu', 'J. S. Park', 'Jize Cao', 'Ali Farhadi', 'Yejin Choi'] | 2,021 | Neural Information Processing Systems | 384 | 132 | ['Computer Science'] |
2,106.02637 | Aligning Pretraining for Detection via Object-Level Contrastive Learning | ['Fangyun Wei', 'Yue Gao', 'Zhirong Wu', 'Han Hu', 'Stephen Lin'] | ['cs.CV'] | Image-level contrastive representation learning has proven to be highly
effective as a generic model for transfer learning. Such generality for
transfer learning, however, sacrifices specificity if we are interested in a
certain downstream task. We argue that this could be sub-optimal and thus
advocate a design princip... | 2021-06-04T17:59:52Z | Accepted by NeurIPS 2021 (spotlight), code is availabel at
https://github.com/hologerry/SoCo | null | null | Aligning Pretraining for Detection via Object-Level Contrastive Learning | ['Fangyun Wei', 'Yue Gao', 'Zhirong Wu', 'Han Hu', 'Stephen Lin'] | 2,021 | Neural Information Processing Systems | 148 | 48 | ['Computer Science'] |
2,106.03106 | Uformer: A General U-Shaped Transformer for Image Restoration | ['Zhendong Wang', 'Xiaodong Cun', 'Jianmin Bao', 'Wengang Zhou', 'Jianzhuang Liu', 'Houqiang Li'] | ['cs.CV'] | In this paper, we present Uformer, an effective and efficient
Transformer-based architecture for image restoration, in which we build a
hierarchical encoder-decoder network using the Transformer block. In Uformer,
there are two core designs. First, we introduce a novel locally-enhanced window
(LeWin) Transformer block,... | 2021-06-06T12:33:22Z | 17 pages, 13 figures | null | null | null | null | null | null | null | null | null |
2,106.03193 | The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual
Machine Translation | ['Naman Goyal', 'Cynthia Gao', 'Vishrav Chaudhary', 'Peng-Jen Chen', 'Guillaume Wenzek', 'Da Ju', 'Sanjana Krishnan', "Marc'Aurelio Ranzato", 'Francisco Guzman', 'Angela Fan'] | ['cs.CL', 'cs.AI'] | One of the biggest challenges hindering progress in low-resource and
multilingual machine translation is the lack of good evaluation benchmarks.
Current evaluation benchmarks either lack good coverage of low-resource
languages, consider only restricted domains, or are low quality because they
are constructed using semi... | 2021-06-06T17:58:12Z | null | null | null | null | null | null | null | null | null | null |
2,106.03598 | SciFive: a text-to-text transformer model for biomedical literature | ['Long N. Phan', 'James T. Anibal', 'Hieu Tran', 'Shaurya Chanana', 'Erol Bahadroglu', 'Alec Peltekian', 'Grégoire Altan-Bonnet'] | ['cs.CL', 'cs.AI', 'cs.LG'] | In this report, we introduce SciFive, a domain-specific T5 model that has
been pre-trained on large biomedical corpora. Our model outperforms the current
SOTA methods (i.e. BERT, BioBERT, Base T5) on tasks in named entity relation,
relation extraction, natural language inference, and question-answering. We
show that te... | 2021-05-28T06:09:23Z | null | null | null | null | null | null | null | null | null | null |
2,106.0383 | A Simple Recipe for Multilingual Grammatical Error Correction | ['Sascha Rothe', 'Jonathan Mallinson', 'Eric Malmi', 'Sebastian Krause', 'Aliaksei Severyn'] | ['cs.CL'] | This paper presents a simple recipe to train state-of-the-art multilingual
Grammatical Error Correction (GEC) models. We achieve this by first proposing a
language-agnostic method to generate a large number of synthetic examples. The
second ingredient is to use large-scale multilingual language models (up to 11B
parame... | 2021-06-07T17:47:04Z | null | null | null | A Simple Recipe for Multilingual Grammatical Error Correction | ['S. Rothe', 'Jonathan Mallinson', 'Eric Malmi', 'Sebastian Krause', 'A. Severyn'] | 2,021 | Annual Meeting of the Association for Computational Linguistics | 169 | 23 | ['Computer Science'] |
2,106.04563 | XtremeDistilTransformers: Task Transfer for Task-agnostic Distillation | ['Subhabrata Mukherjee', 'Ahmed Hassan Awadallah', 'Jianfeng Gao'] | ['cs.CL', 'cs.AI', 'cs.LG'] | While deep and large pre-trained models are the state-of-the-art for various
natural language processing tasks, their huge size poses significant challenges
for practical uses in resource constrained settings. Recent works in knowledge
distillation propose task-agnostic as well as task-specific methods to compress
thes... | 2021-06-08T17:49:33Z | Code and checkpoints released (links in draft) | null | null | null | null | null | null | null | null | null |
2,106.04624 | SpeechBrain: A General-Purpose Speech Toolkit | ['Mirco Ravanelli', 'Titouan Parcollet', 'Peter Plantinga', 'Aku Rouhe', 'Samuele Cornell', 'Loren Lugosch', 'Cem Subakan', 'Nauman Dawalatabad', 'Abdelwahab Heba', 'Jianyuan Zhong', 'Ju-Chieh Chou', 'Sung-Lin Yeh', 'Szu-Wei Fu', 'Chien-Feng Liao', 'Elena Rastorgueva', 'François Grondin', 'William Aris', 'Hwidong Na', ... | ['eess.AS', 'cs.AI', 'cs.LG', 'cs.SD'] | SpeechBrain is an open-source and all-in-one speech toolkit. It is designed
to facilitate the research and development of neural speech processing
technologies by being simple, flexible, user-friendly, and well-documented.
This paper describes the core architecture designed to support several tasks of
common interest, ... | 2021-06-08T18:22:56Z | Preprint | null | null | SpeechBrain: A General-Purpose Speech Toolkit | ['M. Ravanelli', 'Titouan Parcollet', 'Peter William VanHarn Plantinga', 'Aku Rouhe', 'Samuele Cornell', 'Loren Lugosch', 'Cem Subakan', 'Nauman Dawalatabad', 'A. Heba', 'Jianyuan Zhong', 'Ju-Chieh Chou', 'Sung-Lin Yeh', 'Szu-Wei Fu', 'Chien-Feng Liao', 'Elena Rastorgueva', 'Franccois Grondin', 'William Aris', 'Hwidong... | 2,021 | arXiv.org | 770 | 113 | ['Engineering', 'Computer Science'] |
2,106.04647 | Compacter: Efficient Low-Rank Hypercomplex Adapter Layers | ['Rabeeh Karimi Mahabadi', 'James Henderson', 'Sebastian Ruder'] | ['cs.CL'] | Adapting large-scale pretrained language models to downstream tasks via
fine-tuning is the standard method for achieving state-of-the-art performance
on NLP benchmarks. However, fine-tuning all weights of models with millions or
billions of parameters is sample-inefficient, unstable in low-resource
settings, and wastef... | 2021-06-08T19:17:04Z | accepted in NeurIPS, 2021 | null | null | null | null | null | null | null | null | null |