| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2102.12122 | Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions | ['Wenhai Wang', 'Enze Xie', 'Xiang Li', 'Deng-Ping Fan', 'Kaitao Song', 'Ding Liang', 'Tong Lu', 'Ping Luo', 'Ling Shao'] | ['cs.CV'] | Although using convolutional neural networks (CNNs) as backbones achieves great successes in computer vision, this work investigates a simple backbone network useful for many dense prediction tasks without convolutions. Unlike the recently-proposed Transformer model (e.g., ViT) that is specially designed for image clas... | 2021-02-24T08:33:55Z | Accepted to ICCV 2021 | null | null | Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions | ['Wenhai Wang', 'Enze Xie', 'Xiang Li', 'Deng-Ping Fan', 'Kaitao Song', 'Ding Liang', 'Tong Lu', 'P. Luo', 'Ling Shao'] | 2021 | IEEE International Conference on Computer Vision | 3,761 | 87 | ['Computer Science'] |
2103.00020 | Learning Transferable Visual Models From Natural Language Supervision | ['Alec Radford', 'Jong Wook Kim', 'Chris Hallacy', 'Aditya Ramesh', 'Gabriel Goh', 'Sandhini Agarwal', 'Girish Sastry', 'Amanda Askell', 'Pamela Mishkin', 'Jack Clark', 'Gretchen Krueger', 'Ilya Sutskever'] | ['cs.CV', 'cs.LG'] | State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promisi... | 2021-02-26T19:04:58Z | null | null | null | null | null | null | null | null | null | null |
2103.00112 | Transformer in Transformer | ['Kai Han', 'An Xiao', 'Enhua Wu', 'Jianyuan Guo', 'Chunjing Xu', 'Yunhe Wang'] | ['cs.CV', 'cs.AI'] | Transformer is a new kind of neural architecture which encodes the input data as powerful features via the attention mechanism. Basically, the visual transformers first divide the input images into several local patches and then calculate both representations and their relationship. Since natural images are of high com... | 2021-02-27T03:12:16Z | Accepted by NeurIPS 2021 | null | null | null | null | null | null | null | null | null |
2103.00993 | AdaSpeech: Adaptive Text to Speech for Custom Voice | ['Mingjian Chen', 'Xu Tan', 'Bohan Li', 'Yanqing Liu', 'Tao Qin', 'Sheng Zhao', 'Tie-Yan Liu'] | ['eess.AS', 'cs.AI', 'cs.CL', 'cs.SD'] | Custom voice, a specific text to speech (TTS) service in commercial speech platforms, aims to adapt a source TTS model to synthesize personal voice for a target speaker using few speech data. Custom voice presents two unique challenges for TTS adaptation: 1) to support diverse customers, the adaptation model needs to h... | 2021-03-01T13:28:59Z | Accepted by ICLR 2021 | null | null | null | null | null | null | null | null | null |
2103.01306 | Scalable Scene Flow from Point Clouds in the Real World | ['Philipp Jund', 'Chris Sweeney', 'Nichola Abdo', 'Zhifeng Chen', 'Jonathon Shlens'] | ['cs.CV', 'cs.LG'] | Autonomous vehicles operate in highly dynamic environments necessitating an accurate assessment of which aspects of a scene are moving and where they are moving to. A popular approach to 3D motion estimation, termed scene flow, is to employ 3D point cloud data from consecutive LiDAR scans, although such approaches have... | 2021-03-01T20:56:05Z | null | null | null | null | null | null | null | null | null | null |
2103.01458 | Diffusion Probabilistic Models for 3D Point Cloud Generation | ['Shitong Luo', 'Wei Hu'] | ['cs.CV'] | We present a probabilistic model for point cloud generation, which is fundamental for various 3D vision tasks such as shape completion, upsampling, synthesis and data augmentation. Inspired by the diffusion process in non-equilibrium thermodynamics, we view points in point clouds as particles in a thermodynamic system ... | 2021-03-02T03:56:02Z | Accepted to CVPR 2021 | null | null | null | null | null | null | null | null | null |
2103.01913 | WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning | ['Krishna Srinivasan', 'Karthik Raman', 'Jiecao Chen', 'Michael Bendersky', 'Marc Najork'] | ['cs.CV', 'cs.CL', 'cs.IR'] | The milestone improvements brought about by deep representation learning and pre-training techniques have led to large performance gains across downstream NLP, IR and Vision tasks. Multimodal modeling techniques aim to leverage large high-quality visio-linguistic datasets for learning complementary information (across ... | 2021-03-02T18:13:54Z | null | null | 10.1145/3404835.3463257 | null | null | null | null | null | null | null |
2103.01988 | Self-supervised Pretraining of Visual Features in the Wild | ['Priya Goyal', 'Mathilde Caron', 'Benjamin Lefaudeux', 'Min Xu', 'Pengchao Wang', 'Vivek Pai', 'Mannat Singh', 'Vitaliy Liptchinsky', 'Ishan Misra', 'Armand Joulin', 'Piotr Bojanowski'] | ['cs.CV', 'cs.AI'] | Recently, self-supervised learning methods like MoCo, SimCLR, BYOL and SwAV have reduced the gap with supervised methods. These results have been achieved in a control environment, that is the highly curated ImageNet dataset. However, the premise of self-supervised learning is that it can learn from any random image an... | 2021-03-02T19:12:29Z | null | null | null | null | null | null | null | null | null | null |
2103.03206 | Perceiver: General Perception with Iterative Attention | ['Andrew Jaegle', 'Felix Gimeno', 'Andrew Brock', 'Andrew Zisserman', 'Oriol Vinyals', 'Joao Carreira'] | ['cs.CV', 'cs.AI', 'cs.LG', 'cs.SD', 'eess.AS'] | Biological systems perceive the world by simultaneously processing high-dimensional inputs from modalities as diverse as vision, audition, touch, proprioception, etc. The perception models used in deep learning on the other hand are designed for individual modalities, often relying on domain-specific assumptions such a... | 2021-03-04T18:20:50Z | ICML 2021 | null | null | null | null | null | null | null | null | null |
2103.03230 | Barlow Twins: Self-Supervised Learning via Redundancy Reduction | ['Jure Zbontar', 'Li Jing', 'Ishan Misra', 'Yann LeCun', 'Stéphane Deny'] | ['cs.CV', 'cs.AI', 'cs.LG', 'q-bio.NC'] | Self-supervised learning (SSL) is rapidly closing the gap with supervised methods on large computer vision benchmarks. A successful approach to SSL is to learn embeddings which are invariant to distortions of the input sample. However, a recurring issue with this approach is the existence of trivial constant solutions.... | 2021-03-04T18:55:09Z | 13 pages, 6 figures, to appear at ICML 2021 | null | null | null | null | null | null | null | null | null |
2103.03874 | Measuring Mathematical Problem Solving With the MATH Dataset | ['Dan Hendrycks', 'Collin Burns', 'Saurav Kadavath', 'Akul Arora', 'Steven Basart', 'Eric Tang', 'Dawn Song', 'Jacob Steinhardt'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solutio... | 2021-03-05T18:59:39Z | NeurIPS 2021. Code and the MATH dataset is available at https://github.com/hendrycks/math/ | null | null | null | null | null | null | null | null | null |
2103.05345 | Detecting Inappropriate Messages on Sensitive Topics that Could Harm a Company's Reputation | ['Nikolay Babakov', 'Varvara Logacheva', 'Olga Kozlova', 'Nikita Semenov', 'Alexander Panchenko'] | ['cs.CL'] | Not all topics are equally "flammable" in terms of toxicity: a calm discussion of turtles or fishing less often fuels inappropriate toxic dialogues than a discussion of politics or sexual minorities. We define a set of sensitive topics that can yield inappropriate and toxic messages and describe the methodology of coll... | 2021-03-09T10:50:30Z | Accepted to the Balto-Slavic NLP workshop 2021 co-located with EACL-2021 | null | null | null | null | null | null | null | null | null |
2103.05959 | Beyond Self-Supervision: A Simple Yet Effective Network Distillation Alternative to Improve Backbones | ['Cheng Cui', 'Ruoyu Guo', 'Yuning Du', 'Dongliang He', 'Fu Li', 'Zewu Wu', 'Qiwen Liu', 'Shilei Wen', 'Jizhou Huang', 'Xiaoguang Hu', 'Dianhai Yu', 'Errui Ding', 'Yanjun Ma'] | ['cs.CV'] | Recently, research efforts have been concentrated on revealing how pre-trained model makes a difference in neural network performance. Self-supervision and semi-supervised learning technologies have been extensively explored by the community and are proven to be of great potential in obtaining a powerful pre-trained mo... | 2021-03-10T09:32:44Z | 10 pages, 3 figures, 9 tables | null | null | null | null | null | null | null | null | null |
2103.06255 | Involution: Inverting the Inherence of Convolution for Visual Recognition | ['Duo Li', 'Jie Hu', 'Changhu Wang', 'Xiangtai Li', 'Qi She', 'Lei Zhu', 'Tong Zhang', 'Qifeng Chen'] | ['cs.CV'] | Convolution has been the core ingredient of modern neural networks, triggering the surge of deep learning in vision. In this work, we rethink the inherent principles of standard convolution for vision tasks, specifically spatial-agnostic and channel-specific. Instead, we present a novel atomic operation for deep neural... | 2021-03-10T18:40:46Z | Accepted to CVPR 2021. Code and models are available at https://github.com/d-li14/involution | null | null | null | null | null | null | null | null | null |
2103.06268 | CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review | ['Dan Hendrycks', 'Collin Burns', 'Anya Chen', 'Spencer Ball'] | ['cs.CL', 'cs.LG'] | Many specialized domains remain untouched by deep learning, as large labeled datasets require expensive expert annotators. We address this bottleneck within the legal domain by introducing the Contract Understanding Atticus Dataset (CUAD), a new dataset for legal contract review. CUAD was created with dozens of legal e... | 2021-03-10T18:59:34Z | NeurIPS 2021. Code and the CUAD dataset are available at https://github.com/TheAtticusProject/cuad/ | null | null | CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review | ['Dan Hendrycks', 'Collin Burns', 'Anya Chen', 'Spencer Ball'] | 2021 | NeurIPS Datasets and Benchmarks | 195 | 28 | ['Computer Science'] |
2103.06418 | LightMBERT: A Simple Yet Effective Method for Multilingual BERT Distillation | ['Xiaoqi Jiao', 'Yichun Yin', 'Lifeng Shang', 'Xin Jiang', 'Xiao Chen', 'Linlin Li', 'Fang Wang', 'Qun Liu'] | ['cs.CL'] | The multilingual pre-trained language models (e.g, mBERT, XLM and XLM-R) have shown impressive performance on cross-lingual natural language understanding tasks. However, these models are computationally intensive and difficult to be deployed on resource-restricted devices. In this paper, we propose a simple yet effect... | 2021-03-11T02:24:41Z | null | null | null | LightMBERT: A Simple Yet Effective Method for Multilingual BERT Distillation | ['Xiaoqi Jiao', 'Yichun Yin', 'Lifeng Shang', 'Xin Jiang', 'Xiao Chen', 'Linlin Li', 'Fang Wang', 'Qun Liu'] | 2021 | arXiv.org | 9 | 22 | ['Computer Science'] |
2103.06561 | WenLan: Bridging Vision and Language by Large-Scale Multi-Modal Pre-Training | ['Yuqi Huo', 'Manli Zhang', 'Guangzhen Liu', 'Haoyu Lu', 'Yizhao Gao', 'Guoxing Yang', 'Jingyuan Wen', 'Heng Zhang', 'Baogui Xu', 'Weihao Zheng', 'Zongzheng Xi', 'Yueqian Yang', 'Anwen Hu', 'Jinming Zhao', 'Ruichen Li', 'Yida Zhao', 'Liang Zhang', 'Yuqing Song', 'Xin Hong', 'Wanqing Cui', 'Danyang Hou', 'Yingyan Li', '... | ['cs.CV', 'cs.IR'] | Multi-modal pre-training models have been intensively explored to bridge vision and language in recent years. However, most of them explicitly model the cross-modal interaction between image-text pairs, by assuming that there exists strong semantic correlation between the text and image modalities. Since this strong as... | 2021-03-11T09:39:49Z | This paper is the outcome of the Chinese multi-modal pre-training project called 'WenLan' | null | null | null | null | null | null | null | null | null |
2103.06583 | Preprint: Norm Loss: An efficient yet effective regularization method for deep neural networks | ['Theodoros Georgiou', 'Sebastian Schmitt', 'Thomas Bäck', 'Wei Chen', 'Michael Lew'] | ['cs.CV'] | Convolutional neural network training can suffer from diverse issues like exploding or vanishing gradients, scaling-based weight space symmetry and covariant-shift. In order to address these issues, researchers develop weight regularization methods and activation normalization methods. In this work we propose a weight ... | 2021-03-11T10:24:49Z | null | Proceedings of the International Conference on Pattern Recognition (ICPR) 2020 | null | null | null | null | null | null | null | null |
2103.06678 | The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models | ['Go Inoue', 'Bashar Alhafni', 'Nurpeiis Baimukan', 'Houda Bouamor', 'Nizar Habash'] | ['cs.CL'] | In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth la... | 2021-03-11T14:11:43Z | Accepted to WANLP 2021 | null | null | The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models | ['Go Inoue', 'Bashar Alhafni', 'Nurpeiis Baimukan', 'Houda Bouamor', 'Nizar Habash'] | 2021 | Workshop on Arabic Natural Language Processing | 237 | 63 | ['Computer Science'] |
2103.06874 | CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation | ['Jonathan H. Clark', 'Dan Garrette', 'Iulia Turc', 'John Wieting'] | ['cs.CL', 'cs.LG'] | Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword lexicons are less brittle than manually engineered tokenizers, these techniques are not eq... | 2021-03-11T18:57:44Z | TACL Final Version | Transactions of the Association for Computational Linguistics (2022) 10: 73--91 | 10.1162/tacl_a_00448 | null | null | null | null | null | null | null |
2103.06877 | Fast and Accurate Model Scaling | ['Piotr Dollár', 'Mannat Singh', 'Ross Girshick'] | ['cs.CV', 'cs.LG'] | In this work we analyze strategies for convolutional neural network scaling; that is, the process of scaling a base convolutional network to endow it with greater computational complexity and consequently representational power. Example scaling strategies may include increasing model width, depth, resolution, etc. Whil... | 2021-03-11T18:59:14Z | CVPR 2021 | null | null | null | null | null | null | null | null | null |
2103.07579 | Revisiting ResNets: Improved Training and Scaling Strategies | ['Irwan Bello', 'William Fedus', 'Xianzhi Du', 'Ekin D. Cubuk', 'Aravind Srinivas', 'Tsung-Yi Lin', 'Jonathon Shlens', 'Barret Zoph'] | ['cs.CV'] | Novel computer vision architectures monopolize the spotlight, but the impact of the model architecture is often conflated with simultaneous changes to training methodology and scaling strategies. Our work revisits the canonical ResNet (He et al., 2015) and studies these three aspects in an effort to disentangle them. P... | 2021-03-13T00:18:19Z | null | null | null | Revisiting ResNets: Improved Training and Scaling Strategies | ['Irwan Bello', 'W. Fedus', 'Xianzhi Du', 'E. D. Cubuk', 'A. Srinivas', 'Tsung-Yi Lin', 'Jonathon Shlens', 'Barret Zoph'] | 2021 | Neural Information Processing Systems | 303 | 78 | ['Computer Science'] |
2103.07762 | OkwuGbé: End-to-End Speech Recognition for Fon and Igbo | ['Bonaventure F. P. Dossou', 'Chris C. Emezue'] | ['cs.CL', 'cs.AI', 'cs.CY'] | Language is inherent and compulsory for human communication. Whether expressed in a written or spoken way, it ensures understanding between people of the same and different regions. With the growing awareness and effort to include more low-resourced languages in NLP research, African languages have recently been a majo... | 2021-03-13T18:02:44Z | null | African NLP, EACL 2021 | null | OkwuGbé: End-to-End Speech Recognition for Fon and Igbo | ['Bonaventure F. P. Dossou', 'Chris C. Emezue'] | 2021 | WINLP | 14 | 69 | ['Computer Science'] |
2103.08541 | Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence | ['Tal Schuster', 'Adam Fisch', 'Regina Barzilay'] | ['cs.CL', 'cs.IR', 'cs.LG'] | Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challen... | 2021-03-15T17:05:13Z | NAACL 2021 | null | null | Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence | ['Tal Schuster', 'Adam Fisch', 'R. Barzilay'] | 2021 | North American Chapter of the Association for Computational Linguistics | 239 | 82 | ['Computer Science'] |
2103.08647 | The Effect of Domain and Diacritics in Yorùbá-English Neural Machine Translation | ['David I. Adelani', 'Dana Ruiter', 'Jesujoba O. Alabi', 'Damilola Adebonojo', 'Adesina Ayeni', 'Mofe Adeyemi', 'Ayodele Awokoya', 'Cristina España-Bonet'] | ['cs.CL'] | Massively multilingual machine translation (MT) has shown impressive capabilities, including zero and few-shot translation between low-resource language pairs. However, these models are often evaluated on high-resource languages with the assumption that they generalize to low-resource ones. The difficulty of evaluating... | 2021-03-15T18:52:32Z | Accepted to MT Summit 2021 (Research Track) | null | null | The Effect of Domain and Diacritics in Yoruba–English Neural Machine Translation | ['David Ifeoluwa Adelani', 'Dana Ruiter', 'Jesujoba Oluwadara Alabi', 'Damilola Adebonojo', 'Adesina Ayeni', 'Mofetoluwa Adeyemi', 'Ayodele Awokoya', 'C. España-Bonet'] | 2021 | Machine Translation Summit | 42 | 47 | ['Computer Science'] |
2103.09404 | Collapsible Linear Blocks for Super-Efficient Super Resolution | ['Kartikeya Bhardwaj', 'Milos Milosavljevic', "Liam O'Neil", 'Dibakar Gope', 'Ramon Matas', 'Alex Chalfin', 'Naveen Suda', 'Lingchuan Meng', 'Danny Loh'] | ['eess.IV', 'cs.CV', 'cs.LG'] | With the advent of smart devices that support 4K and 8K resolution, Single Image Super Resolution (SISR) has become an important computer vision problem. However, most super resolution deep networks are computationally very expensive. In this paper, we propose Super-Efficient Super Resolution (SESR) networks that estab... | 2021-03-17T02:16:31Z | Accepted at MLSys 2022 conference | null | null | null | null | null | null | null | null | null |
2103.09815 | TeachMyAgent: a Benchmark for Automatic Curriculum Learning in Deep RL | ['Clément Romac', 'Rémy Portelas', 'Katja Hofmann', 'Pierre-Yves Oudeyer'] | ['cs.LG'] | Training autonomous agents able to generalize to multiple tasks is a key target of Deep Reinforcement Learning (DRL) research. In parallel to improving DRL algorithms themselves, Automatic Curriculum Learning (ACL) study how teacher algorithms can train DRL agents more efficiently by adapting task selection to their ev... | 2021-03-17T17:59:22Z | null | null | null | TeachMyAgent: a Benchmark for Automatic Curriculum Learning in Deep RL | ['Clément Romac', 'Rémy Portelas', 'Katja Hofmann', 'Pierre-Yves Oudeyer'] | 2021 | International Conference on Machine Learning | 23 | 80 | ['Computer Science'] |
2103.10360 | GLM: General Language Model Pretraining with Autoregressive Blank Infilling | ['Zhengxiao Du', 'Yujie Qian', 'Xiao Liu', 'Ming Ding', 'Jiezhong Qiu', 'Zhilin Yang', 'Jie Tang'] | ['cs.CL', 'cs.AI', 'cs.LG'] | There have been various types of pretraining architectures including autoencoding models (e.g., BERT), autoregressive models (e.g., GPT), and encoder-decoder models (e.g., T5). However, none of the pretraining frameworks performs the best for all tasks of three main categories including natural language understanding (... | 2021-03-18T16:30:26Z | to be published in ACL 2022. 16 pages, 4 figures | null | null | GLM: General Language Model Pretraining with Autoregressive Blank Infilling | ['Zhengxiao Du', 'Yujie Qian', 'Xiao Liu', 'Ming Ding', 'J. Qiu', 'Zhilin Yang', 'Jie Tang'] | 2021 | Annual Meeting of the Association for Computational Linguistics | 1,568 | 64 | ['Computer Science'] |
2103.10697 | ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases | ["Stéphane d'Ascoli", 'Hugo Touvron', 'Matthew Leavitt', 'Ari Morcos', 'Giulio Biroli', 'Levent Sagun'] | ['cs.CV', 'cs.LG', 'stat.ML'] | Convolutional architectures have proven extremely successful for vision tasks. Their hard inductive biases enable sample-efficient learning, but come at the cost of a potentially lower performance ceiling. Vision Transformers (ViTs) rely on more flexible self-attention layers, and have recently outperformed CNNs for im... | 2021-03-19T09:11:20Z | null | null | 10.1088/1742-5468/ac9830 | null | null | null | null | null | null | null |
2103.10730 | MuRIL: Multilingual Representations for Indian Languages | ['Simran Khanuja', 'Diksha Bansal', 'Sarvesh Mehtani', 'Savya Khosla', 'Atreyee Dey', 'Balaji Gopalan', 'Dilip Kumar Margam', 'Pooja Aggarwal', 'Rajiv Teja Nagipogu', 'Shachi Dave', 'Shruti Gupta', 'Subhash Chandra Bose Gali', 'Vish Subramanian', 'Partha Talukdar'] | ['cs.CL'] | India is a multilingual society with 1369 rationalized languages and dialects being spoken across the country (INDIA, 2011). Of these, the 22 scheduled languages have a staggering total of 1.17 billion speakers and 121 languages have more than 10,000 speakers (INDIA, 2011). India also has the second largest (and an eve... | 2021-03-19T11:06:37Z | null | null | null | null | null | null | null | null | null | null |
2103.10957 | Efficient Visual Pretraining with Contrastive Detection | ['Olivier J. Hénaff', 'Skanda Koppula', 'Jean-Baptiste Alayrac', 'Aaron van den Oord', 'Oriol Vinyals', 'João Carreira'] | ['cs.CV'] | Self-supervised pretraining has been shown to yield powerful representations for transfer learning. These performance gains come at a large computational cost however, with state-of-the-art methods requiring an order of magnitude more computation than supervised pretraining. We tackle this computational bottleneck by i... | 2021-03-19T14:05:12Z | Technical report | null | null | Efficient Visual Pretraining with Contrastive Detection | ["Olivier J. H'enaff", 'Skanda Koppula', 'Jean-Baptiste Alayrac', 'Aäron van den Oord', 'O. Vinyals', 'João Carreira'] | 2021 | IEEE International Conference on Computer Vision | 166 | 70 | ['Computer Science'] |
2103.11401 | SwissDial: Parallel Multidialectal Corpus of Spoken Swiss German | ['Pelin Dogan-Schönberger', 'Julian Mäder', 'Thomas Hofmann'] | ['cs.CL'] | Swiss German is a dialect continuum whose natively acquired dialects significantly differ from the formal variety of the language. These dialects are mostly used for verbal communication and do not have standard orthography. This has led to a lack of annotated datasets, rendering the use of many NLP methods infeasible.... | 2021-03-21T14:00:09Z | null | null | null | null | null | null | null | null | null | null |
2103.11408 | L3CubeMahaSent: A Marathi Tweet-based Sentiment Analysis Dataset | ['Atharva Kulkarni', 'Meet Mandhane', 'Manali Likhitkar', 'Gayatri Kshirsagar', 'Raviraj Joshi'] | ['cs.CL', 'cs.LG'] | Sentiment analysis is one of the most fundamental tasks in Natural Language Processing. Popular languages like English, Arabic, Russian, Mandarin, and also Indian languages such as Hindi, Bengali, Tamil have seen a significant amount of work in this area. However, the Marathi language which is the third most popular la... | 2021-03-21T14:22:13Z | Accepted at WASSA@EACL 2021 | https://www.aclweb.org/anthology/2021.wassa-1.23/ | null | L3CubeMahaSent: A Marathi Tweet-based Sentiment Analysis Dataset | ['Atharva Kulkarni', 'Meet Mandhane', 'Manali Likhitkar', 'G. Kshirsagar', 'Raviraj Joshi'] | 2021 | Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis | 56 | 34 | ['Computer Science'] |
2103.11811 | MasakhaNER: Named Entity Recognition for African Languages | ['David Ifeoluwa Adelani', 'Jade Abbott', 'Graham Neubig', "Daniel D'souza", 'Julia Kreutzer', 'Constantine Lignos', 'Chester Palen-Michel', 'Happy Buzaaba', 'Shruti Rijhwani', 'Sebastian Ruder', 'Stephen Mayhew', 'Israel Abebe Azime', 'Shamsuddeen Muhammad', 'Chris Chinenye Emezue', 'Joyce Nakatumba-Nabende', 'Perez O... | ['cs.CL', 'cs.AI'] | We take a step towards addressing the under-representation of the African continent in NLP research by creating the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages, bringing together a variety of stakeholders. We detail characteristics of the languages to ... | 2021-03-22T13:12:44Z | Accepted to TACL 2021, pre-MIT Press publication version | null | null | MasakhaNER: Named Entity Recognition for African Languages | ['David Ifeoluwa Adelani', 'Jade Z. Abbott', 'Graham Neubig', "Daniel D'souza", 'Julia Kreutzer', 'Constantine Lignos', 'Chester Palen-Michel', 'Happy Buzaaba', 'Shruti Rijhwani', 'Sebastian Ruder', 'Stephen Mayhew', 'Israel Abebe Azime', 'Shamsuddeen Hassan Muhammad', 'Chris C. Emezue', 'J. Nakatumba‐Nabende', 'Perez ... | 2021 | Transactions of the Association for Computational Linguistics | 195 | 76 | ['Computer Science'] |
2103.11909 | Identifying Machine-Paraphrased Plagiarism | ['Jan Philip Wahle', 'Terry Ruas', 'Tomáš Foltýnek', 'Norman Meuschke', 'Bela Gipp'] | ['cs.CL', 'cs.AI', 'cs.DL'] | Employing paraphrasing tools to conceal plagiarized text is a severe threat to academic integrity. To enable the detection of machine-paraphrased text, we evaluate the effectiveness of five pre-trained word embedding models combined with machine-learning classifiers and eight state-of-the-art neural language models. We... | 2021-03-22T14:54:54Z | null | iConference 2022 | 10.1007/978-3-030-96957-8_34 | null | null | null | null | null | null | null |
2103.11933 | PatentSBERTa: A Deep NLP based Hybrid Model for Patent Distance and Classification using Augmented SBERT | ['Hamid Bekamiri', 'Daniel S. Hain', 'Roman Jurowetzki'] | ['cs.LG', 'econ.EM', 'H.0'] | This study provides an efficient approach for using text data to calculate patent-to-patent (p2p) technological similarity, and presents a hybrid framework for leveraging the resulting p2p similarity for applications such as semantic search and automated patent classification. We create embeddings using Sentence-BERT (... | 2021-03-22T15:23:19Z | 18 pages, 7 figures and 4 Tables | null | null | PatentSBERTa: A deep NLP based hybrid model for patent distance and classification using augmented SBERT | ['Hamid Bekamiri', 'D. Hain', 'Roman Jurowetzki'] | 2021 | Technological forecasting & social change | 42 | 55 | ['Computer Science', 'Economics'] |
2103.12157 | Tiny Transformers for Environmental Sound Classification at the Edge | ['David Elliott', 'Carlos E. Otero', 'Steven Wyatt', 'Evan Martino'] | ['cs.SD', 'cs.LG', 'eess.AS'] | With the growth of the Internet of Things and the rise of Big Data, data processing and machine learning applications are being moved to cheap and low size, weight, and power (SWaP) devices at the edge, often in the form of mobile phones, embedded systems, or microcontrollers. The field of Cyber-Physical Measurements a... | 2021-03-22T20:12:15Z | 12 pages, submitted to IEEE Journal of Internet of Things | null | null | null | null | null | null | null | null | null |
2103.12528 | Multilingual Autoregressive Entity Linking | ['Nicola De Cao', 'Ledell Wu', 'Kashyap Popat', 'Mikel Artetxe', 'Naman Goyal', 'Mikhail Plekhanov', 'Luke Zettlemoyer', 'Nicola Cancedda', 'Sebastian Riedel', 'Fabio Petroni'] | ['cs.CL', 'cs.AI', 'stat.ML'] | We present mGENRE, a sequence-to-sequence system for the Multilingual Entity Linking (MEL) problem -- the task of resolving language-specific mentions to a multilingual Knowledge Base (KB). For a mention in a given language, mGENRE predicts the name of the target entity left-to-right, token-by-token in an autoregressiv... | 2021-03-23T13:25:55Z | 20 pages, 8 figures, and 11 tables | null | null | null | null | null | null | null | null | null |
2103.12693 | QuestEval: Summarization Asks for Fact-based Evaluation | ['Thomas Scialom', 'Paul-Alexis Dray', 'Patrick Gallinari', 'Sylvain Lamprier', 'Benjamin Piwowarski', 'Jacopo Staiano', 'Alex Wang'] | ['cs.CL'] | Summarization evaluation remains an open research problem: current metrics such as ROUGE are known to be limited and to correlate poorly with human judgments. To alleviate this issue, recent work has proposed evaluation metrics which rely on question answering models to assess whether a summary contains all the relevan... | 2021-03-23T17:16:09Z | project page: https://github.com/recitalAI/QuestEval | null | null | null | null | null | null | null | null | null |
2103.12731 | Scaling Local Self-Attention for Parameter Efficient Visual Backbones | ['Ashish Vaswani', 'Prajit Ramachandran', 'Aravind Srinivas', 'Niki Parmar', 'Blake Hechtman', 'Jonathon Shlens'] | ['cs.CV'] | Self-attention has the promise of improving computer vision systems due to parameter-independent scaling of receptive fields and content-dependent interactions, in contrast to parameter-dependent scaling and content-independent interactions of convolutions. Self-attention models have recently been shown to have encoura... | 2021-03-23T17:56:06Z | CVPR 2021 Oral | null | null | Scaling Local Self-Attention for Parameter Efficient Visual Backbones | ['Ashish Vaswani', 'Prajit Ramachandran', 'A. Srinivas', 'Niki Parmar', 'Blake A. Hechtman', 'Jonathon Shlens'] | 2021 | Computer Vision and Pattern Recognition | 404 | 70 | ['Computer Science'] |
2103.12864 | Learned complex masks for multi-instrument source separation | ['Andreas Jansson', 'Rachel M. Bittner', 'Nicola Montecchio', 'Tillman Weyde'] | ['cs.SD', 'eess.AS'] | Music source separation in the time-frequency domain is commonly achieved by applying a soft or binary mask to the magnitude component of (complex) spectrograms. The phase component is usually not estimated, but instead copied from the mixture and applied to the magnitudes of the estimated isolated sources. While this ... | 2021-03-23T21:56:28Z | null | null | null | null | null | null | null | null | null | null |
2103.13031 | Czert -- Czech BERT-like Model for Language Representation | ['Jakub Sido', 'Ondřej Pražák', 'Pavel Přibáň', 'Jan Pašek', 'Michal Seják', 'Miloslav Konopík'] | ['cs.CL'] | This paper describes the training process of the first Czech monolingual language representation models based on BERT and ALBERT architectures. We pre-train our models on more than 340K of sentences, which is 50 times more than multilingual models that include Czech data. We outperform the multilingual models on 9 out ... | 2021-03-24T07:27:28Z | 13 pages | null | null | null | null | null | null | null | null | null |
2,103.13282 | AcinoSet: A 3D Pose Estimation Dataset and Baseline Models for Cheetahs
in the Wild | ['Daniel Joska', 'Liam Clark', 'Naoya Muramatsu', 'Ricardo Jericevich', 'Fred Nicolls', 'Alexander Mathis', 'Mackenzie W. Mathis', 'Amir Patel'] | ['cs.CV', 'cs.SY', 'eess.SY', 'q-bio.QM'] | Animals are capable of extreme agility, yet understanding their complex
dynamics, which have ecological, biomechanical and evolutionary implications,
remains challenging. Being able to study this incredible agility will be
critical for the development of next-generation autonomous legged robots. In
particular, the chee... | 2021-03-24T15:54:11Z | Code and data can be found at:
https://github.com/African-Robotics-Unit/AcinoSet | 2021 IEEE International Conference on Robotics and Automation
(ICRA), 2021, pp. 13901-13908 | 10.1109/ICRA48506.2021.9561338 | null | null | null | null | null | null | null |
2,103.13413 | Vision Transformers for Dense Prediction | ['René Ranftl', 'Alexey Bochkovskiy', 'Vladlen Koltun'] | ['cs.CV'] | We introduce dense vision transformers, an architecture that leverages vision
transformers in place of convolutional networks as a backbone for dense
prediction tasks. We assemble tokens from various stages of the vision
transformer into image-like representations at various resolutions and
progressively combine them i... | 2021-03-24T18:01:17Z | 15 pages | null | null | null | null | null | null | null | null | null |
2,103.13581 | EfficientTDNN: Efficient Architecture Search for Speaker Recognition | ['Rui Wang', 'Zhihua Wei', 'Haoran Duan', 'Shouling Ji', 'Yang Long', 'Zhen Hong'] | ['eess.AS', 'cs.AI'] | Convolutional neural networks (CNNs), such as the time-delay neural network
(TDNN), have shown their remarkable capability in learning speaker embedding.
However, they also incur a huge computational cost in storage size,
processing, and memory. Discovering the specialized CNN that meets a specific
constraint requ... | 2021-03-25T03:28:07Z | 13 pages, 12 figures, accepted to TASLP | null | 10.1109/TASLP.2022.3182856 | EfficientTDNN: Efficient Architecture Search for Speaker Recognition | ['Rui Wang', 'Zhihua Wei', 'Haoran Duan', 'S. Ji', 'Yang Long', 'Zhenhou Hong'] | 2,021 | IEEE/ACM Transactions on Audio Speech and Language Processing | 18 | 55 | ['Computer Science', 'Engineering'] |
2,103.13799 | Bertinho: Galician BERT Representations | ['David Vilares', 'Marcos Garcia', 'Carlos Gómez-Rodríguez'] | ['cs.CL'] | This paper presents a monolingual BERT model for Galician. We follow the
recent trend that shows that it is feasible to build robust monolingual BERT
models even for relatively low-resource languages, while performing better than
the well-known official multilingual BERT (mBERT). More particularly, we
release two monol... | 2021-03-25T12:51:34Z | Accepted in the journal Procesamiento del Lenguaje Natural | Procesamiento del Lenguaje Natural. 66 (2021) 13-26 | 10.26342/2021-66-1 | Bertinho: Galician BERT Representations | ['David Vilares', 'Marcos Garcia', 'Carlos Gómez-Rodríguez'] | 2,021 | Proces. del Leng. Natural | 22 | 58 | ['Computer Science'] |
2,103.14006 | Designing a Practical Degradation Model for Deep Blind Image
Super-Resolution | ['Kai Zhang', 'Jingyun Liang', 'Luc Van Gool', 'Radu Timofte'] | ['eess.IV', 'cs.CV'] | It is widely acknowledged that single image super-resolution (SISR) methods
would not perform well if the assumed degradation model deviates from those in
real images. Although several degradation models take additional factors into
consideration, such as blur, they are still not effective enough to cover the
diverse d... | 2021-03-25T17:40:53Z | ICCV 2021. Code: https://github.com/cszn/BSRGAN | null | null | null | null | null | null | null | null | null |
2,103.1403 | Swin Transformer: Hierarchical Vision Transformer using Shifted Windows | ['Ze Liu', 'Yutong Lin', 'Yue Cao', 'Han Hu', 'Yixuan Wei', 'Zheng Zhang', 'Stephen Lin', 'Baining Guo'] | ['cs.CV', 'cs.LG'] | This paper presents a new vision Transformer, called Swin Transformer, that
capably serves as a general-purpose backbone for computer vision. Challenges in
adapting Transformer from language to vision arise from differences between the
two domains, such as large variations in the scale of visual entities and the
high r... | 2021-03-25T17:59:31Z | null | null | null | Swin Transformer: Hierarchical Vision Transformer using Shifted Windows | ['Ze Liu', 'Yutong Lin', 'Yue Cao', 'Han Hu', 'Yixuan Wei', 'Zheng Zhang', 'Stephen Lin', 'B. Guo'] | 2,021 | IEEE International Conference on Computer Vision | 21,855 | 86 | ['Computer Science'] |
2,103.14899 | CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image
Classification | ['Chun-Fu Chen', 'Quanfu Fan', 'Rameswar Panda'] | ['cs.CV'] | The recently developed vision transformer (ViT) has achieved promising
results on image classification compared to convolutional neural networks.
Inspired by this, in this paper, we study how to learn multi-scale feature
representations in transformer models for image classification. To this end, we
propose a dual-bran... | 2021-03-27T13:03:17Z | Accepted by ICCV 2021 | null | null | null | null | null | null | null | null | null |
2,103.151 | The General Theory of General Intelligence: A Pragmatic Patternist
Perspective | ['Ben Goertzel'] | ['cs.AI'] | A multi-decade exploration into the theoretical foundations of artificial and
natural general intelligence, which has been expressed in a series of books and
papers and used to guide a series of practical and research-prototype software
systems, is reviewed at a moderate level of detail. The review covers
underlying ph... | 2021-03-28T10:11:25Z | null | null | null | null | null | null | null | null | null | null |
2,103.15691 | ViViT: A Video Vision Transformer | ['Anurag Arnab', 'Mostafa Dehghani', 'Georg Heigold', 'Chen Sun', 'Mario Lučić', 'Cordelia Schmid'] | ['cs.CV'] | We present pure-transformer based models for video classification, drawing
upon the recent success of such models in image classification. Our model
extracts spatio-temporal tokens from the input video, which are then encoded by
a series of transformer layers. In order to handle the long sequences of tokens
encountered... | 2021-03-29T15:27:17Z | ICCV 2021. Code at
https://github.com/google-research/scenic/tree/main/scenic/projects/vivit | null | null | null | null | null | null | null | null | null |
2,103.15808 | CvT: Introducing Convolutions to Vision Transformers | ['Haiping Wu', 'Bin Xiao', 'Noel Codella', 'Mengchen Liu', 'Xiyang Dai', 'Lu Yuan', 'Lei Zhang'] | ['cs.CV'] | We present in this paper a new architecture, named Convolutional vision
Transformer (CvT), that improves Vision Transformer (ViT) in performance and
efficiency by introducing convolutions into ViT to yield the best of both
designs. This is accomplished through two primary modifications: a hierarchy of
Transformers cont... | 2021-03-29T17:58:22Z | null | null | null | null | null | null | null | null | null | null |
2,103.16219 | SPatchGAN: A Statistical Feature Based Discriminator for Unsupervised
Image-to-Image Translation | ['Xuning Shao', 'Weidong Zhang'] | ['cs.CV', 'cs.AI', 'cs.LG', 'eess.IV'] | For unsupervised image-to-image translation, we propose a discriminator
architecture which focuses on the statistical features instead of individual
patches. The network is stabilized by distribution matching of key statistical
features at multiple scales. Unlike the existing methods which impose more and
more constrai... | 2021-03-30T10:03:07Z | null | null | null | null | null | null | null | null | null | null |
2,103.16302 | Rethinking Spatial Dimensions of Vision Transformers | ['Byeongho Heo', 'Sangdoo Yun', 'Dongyoon Han', 'Sanghyuk Chun', 'Junsuk Choe', 'Seong Joon Oh'] | ['cs.CV'] | Vision Transformer (ViT) extends the application range of transformers from
language processing to computer vision tasks as an alternative
architecture to existing convolutional neural networks (CNNs). Since
the transformer-based architecture has been innovative for computer vision
modeling, the design co... | 2021-03-30T12:51:28Z | ICCV 2021 camera-ready version | null | null | null | null | null | null | null | null | null |
2,103.16554 | Pre-training strategies and datasets for facial representation learning | ['Adrian Bulat', 'Shiyang Cheng', 'Jing Yang', 'Andrew Garbett', 'Enrique Sanchez', 'Georgios Tzimiropoulos'] | ['cs.CV', 'cs.LG'] | What is the best way to learn a universal face representation? Recent work on
Deep Learning in the area of face analysis has focused on supervised learning
for specific tasks of interest (e.g. face recognition, facial landmark
localization etc.) but has overlooked the overarching question of how to find a
facial repres... | 2021-03-30T17:57:25Z | Accepted at ECCV 2022 | null | null | null | null | null | null | null | null | null |
2,103.16801 | Joint Khmer Word Segmentation and Part-of-Speech Tagging Using Deep
Learning | ['Rina Buoy', 'Nguonly Taing', 'Sokchea Kor'] | ['cs.CL', 'cs.LG'] | Khmer text is written from left to right with optional space. Space does not
serve as a word boundary; instead, it is used for readability or other
functional purposes. Word segmentation is a prior step for downstream tasks
such as part-of-speech (POS) tagging and thus, the robustness of POS tagging
highly depends on... | 2021-03-31T04:26:54Z | 12 pages, 6 tables, and 6 figures | null | null | null | null | null | null | null | null | null |
2,103.16874 | VITON-HD: High-Resolution Virtual Try-On via Misalignment-Aware
Normalization | ['Seunghwan Choi', 'Sunghyun Park', 'Minsoo Lee', 'Jaegul Choo'] | ['cs.CV'] | The task of image-based virtual try-on aims to transfer a target clothing
item onto the corresponding region of a person, which is commonly tackled by
fitting the item to the desired body part and fusing the warped item with the
person. While an increasing number of studies have been conducted, the
resolution of synthe... | 2021-03-31T07:52:41Z | 21 pages; project page: https://psh01087.github.io/VITON-HD; accepted
to CVPR 2021; code URL added, references formatted | null | null | VITON-HD: High-Resolution Virtual Try-On via Misalignment-Aware Normalization | ['Seunghwan Choi', 'Sunghyun Park', 'M. Lee', 'J. Choo'] | 2,021 | Computer Vision and Pattern Recognition | 264 | 42 | ['Computer Science'] |
2,103.16997 | UA-GEC: Grammatical Error Correction and Fluency Corpus for the
Ukrainian Language | ['Oleksiy Syvokon', 'Olena Nahorna'] | ['cs.CL'] | We present a corpus professionally annotated for grammatical error correction
(GEC) and fluency edits in the Ukrainian language. To the best of our
knowledge, this is the first GEC corpus for the Ukrainian language. We
collected texts with errors (20,715 sentences) from a diverse pool of
contributors, including both na... | 2021-03-31T11:18:36Z | See https://github.com/grammarly/ua-gec for the dataset. Version 2 of
the data is in progress | null | null | UA-GEC: Grammatical Error Correction and Fluency Corpus for the Ukrainian Language | ['Oleksiy Syvokon', 'Olena Nahorna', 'Pavlo Kuchmiichuk'] | 2,021 | UNLP | 33 | 25 | ['Computer Science'] |
2,103.17239 | Going deeper with Image Transformers | ['Hugo Touvron', 'Matthieu Cord', 'Alexandre Sablayrolles', 'Gabriel Synnaeve', 'Hervé Jégou'] | ['cs.CV'] | Transformers have been recently adapted for large scale image classification,
achieving high scores that shake up the long supremacy of convolutional neural
networks. However, the optimization of image transformers has been little
studied so far. In this work, we build and optimize deeper transformer networks
for image cla... | 2021-03-31T17:37:32Z | null | null | null | null | null | null | null | null | null | null |
2,103.17263 | Rethinking Self-supervised Correspondence Learning: A Video Frame-level
Similarity Perspective | ['Jiarui Xu', 'Xiaolong Wang'] | ['cs.CV'] | Learning a good representation for space-time correspondence is the key for
various computer vision tasks, including tracking object bounding boxes and
performing video object pixel segmentation. To learn generalizable
representations for correspondence at large scale, a variety of self-supervised
pretext tasks are prop... | 2021-03-31T17:56:35Z | ICCV 2021 (oral). Project page and code: https://jerryxu.net/VFS | null | null | Rethinking Self-supervised Correspondence Learning: A Video Frame-level Similarity Perspective | ['Jiarui Xu', 'Xiaolong Wang'] | 2,021 | IEEE International Conference on Computer Vision | 95 | 85 | ['Computer Science'] |
2,104.00298 | EfficientNetV2: Smaller Models and Faster Training | ['Mingxing Tan', 'Quoc V. Le'] | ['cs.CV'] | This paper introduces EfficientNetV2, a new family of convolutional networks
that have faster training speed and better parameter efficiency than previous
models. To develop this family of models, we use a combination of
training-aware neural architecture search and scaling, to jointly optimize
training speed and param... | 2021-04-01T07:08:36Z | ICML 2021 | International Conference on Machine Learning, 2021 | null | EfficientNetV2: Smaller Models and Faster Training | ['Mingxing Tan', 'Quoc V. Le'] | 2,021 | International Conference on Machine Learning | 2,768 | 50 | ['Computer Science'] |
2,104.00355 | Speech Resynthesis from Discrete Disentangled Self-Supervised
Representations | ['Adam Polyak', 'Yossi Adi', 'Jade Copet', 'Eugene Kharitonov', 'Kushal Lakhotia', 'Wei-Ning Hsu', 'Abdelrahman Mohamed', 'Emmanuel Dupoux'] | ['cs.SD', 'cs.LG', 'eess.AS'] | We propose using self-supervised discrete representations for the task of
speech resynthesis. To generate disentangled representations, we separately
extract low-bitrate representations for speech content, prosodic information,
and speaker identity. This allows us to synthesize speech in a controllable
manner. We analyze v... | 2021-04-01T09:20:33Z | In Proceedings of Interspeech 2021 | null | null | null | null | null | null | null | null | null |
2,104.00677 | Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis | ['Ajay Jain', 'Matthew Tancik', 'Pieter Abbeel'] | ['cs.CV', 'cs.AI', 'cs.GR', 'cs.LG'] | We present DietNeRF, a 3D neural scene representation estimated from a few
images. Neural Radiance Fields (NeRF) learn a continuous volumetric
representation of a scene through multi-view consistency, and can be rendered
from novel viewpoints by ray casting. While NeRF has an impressive ability to
reconstruct geometry ... | 2021-04-01T17:59:31Z | Project website: https://www.ajayj.com/dietnerf | null | null | null | null | null | null | null | null | null |
2,104.01027 | Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised
Pre-Training | ['Wei-Ning Hsu', 'Anuroop Sriram', 'Alexei Baevski', 'Tatiana Likhomanenko', 'Qiantong Xu', 'Vineel Pratap', 'Jacob Kahn', 'Ann Lee', 'Ronan Collobert', 'Gabriel Synnaeve', 'Michael Auli'] | ['cs.SD', 'cs.CL', 'cs.LG', 'eess.AS'] | Self-supervised learning of speech representations has been a very active
research area but most work is focused on a single domain such as read audio
books for which there exist large quantities of labeled and unlabeled data. In
this paper, we explore more general setups where the domain of the unlabeled
data for pre-... | 2021-04-02T12:53:15Z | null | null | null | Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training | ['Wei-Ning Hsu', 'Anuroop Sriram', 'Alexei Baevski', 'Tatiana Likhomanenko', 'Qiantong Xu', 'Vineel Pratap', 'Jacob Kahn', 'Ann Lee', 'R. Collobert', 'Gabriel Synnaeve', 'Michael Auli'] | 2,021 | Interspeech | 241 | 47 | ['Computer Science', 'Engineering'] |
2,104.01136 | LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference | ['Ben Graham', 'Alaaeldin El-Nouby', 'Hugo Touvron', 'Pierre Stock', 'Armand Joulin', 'Hervé Jégou', 'Matthijs Douze'] | ['cs.CV'] | We design a family of image classification architectures that optimize the
trade-off between accuracy and efficiency in a high-speed regime. Our work
exploits recent findings in attention-based architectures, which are
competitive on highly parallel processing hardware. We revisit principles from
the extensive literatu... | 2021-04-02T16:29:57Z | null | null | null | null | null | null | null | null | null | null |
2,104.01431 | Aggregated Contextual Transformations for High-Resolution Image
Inpainting | ['Yanhong Zeng', 'Jianlong Fu', 'Hongyang Chao', 'Baining Guo'] | ['cs.CV'] | State-of-the-art image inpainting approaches can suffer from generating
distorted structures and blurry textures in high-resolution images (e.g.,
512x512). The challenges mainly derive from (1) image content reasoning from
distant contexts, and (2) fine-grained texture synthesis for a large missing
region. To overcome t... | 2021-04-03T15:50:17Z | This work has been submitted to the IEEE for possible publication | null | null | null | null | null | null | null | null | null |
2,104.01497 | Hi-Fi Multi-Speaker English TTS Dataset | ['Evelina Bakhturina', 'Vitaly Lavrukhin', 'Boris Ginsburg', 'Yang Zhang'] | ['eess.AS'] | This paper introduces a new multi-speaker English dataset for training
text-to-speech models. The dataset is based on LibriVox audiobooks and Project
Gutenberg texts, both in the public domain. The new dataset contains about 292
hours of speech from 10 speakers with at least 17 hours per speaker sampled at
44.1 kHz. To... | 2021-04-03T23:19:50Z | null | null | null | null | null | null | null | null | null | null |
2,104.01604 | Timers and Such: A Practical Benchmark for Spoken Language Understanding
with Numbers | ['Loren Lugosch', 'Piyush Papreja', 'Mirco Ravanelli', 'Abdelwahab Heba', 'Titouan Parcollet'] | ['cs.CL', 'eess.AS'] | This paper introduces Timers and Such, a new open source dataset of spoken
English commands for common voice control use cases involving numbers. We
describe the gap in existing spoken language understanding datasets that Timers
and Such fills, the design and creation of the dataset, and experiments with a
number of AS... | 2021-04-04T12:52:09Z | Accepted to NeurIPS 2021 - Datasets and Benchmarks Track | null | null | Timers and Such: A Practical Benchmark for Spoken Language Understanding with Numbers | ['Loren Lugosch', 'Piyush Papreja', 'M. Ravanelli', 'A. Heba', 'Titouan Parcollet'] | 2,021 | NeurIPS Datasets and Benchmarks | 14 | 43 | ['Computer Science', 'Engineering'] |
2,104.01721 | Citrinet: Closing the Gap between Non-Autoregressive and Autoregressive
End-to-End Models for Automatic Speech Recognition | ['Somshubra Majumdar', 'Jagadeesh Balam', 'Oleksii Hrinchuk', 'Vitaly Lavrukhin', 'Vahid Noroozi', 'Boris Ginsburg'] | ['eess.AS'] | We propose Citrinet - a new end-to-end convolutional Connectionist Temporal
Classification (CTC) based automatic speech recognition (ASR) model. Citrinet
is a deep residual neural model that uses 1D time-channel separable convolutions
combined with sub-word encoding and squeeze-and-excitation. The resulting
architecture... | 2021-04-05T00:16:27Z | null | null | null | Citrinet: Closing the Gap between Non-Autoregressive and Autoregressive End-to-End Models for Automatic Speech Recognition | ['Somshubra Majumdar', 'Jagadeesh Balam', 'Oleksii Hrinchuk', 'Vitaly Lavrukhin', 'V. Noroozi', 'Boris Ginsburg'] | 2,021 | null | 66 | 30 | ['Engineering'] |
2,104.01778 | AST: Audio Spectrogram Transformer | ['Yuan Gong', 'Yu-An Chung', 'James Glass'] | ['cs.SD', 'cs.AI'] | In the past decade, convolutional neural networks (CNNs) have been widely
adopted as the main building block for end-to-end audio classification models,
which aim to learn a direct mapping from audio spectrograms to corresponding
labels. To better capture long-range global context, a recent trend is to add a
self-atten... | 2021-04-05T05:26:29Z | Accepted at Interspeech 2021. Code at
https://github.com/YuanGongND/ast | null | null | null | null | null | null | null | null | null |
2,104.02014 | SPGISpeech: 5,000 hours of transcribed financial audio for fully
formatted end-to-end speech recognition | ["Patrick K. O'Neill", 'Vitaly Lavrukhin', 'Somshubra Majumdar', 'Vahid Noroozi', 'Yuekai Zhang', 'Oleksii Kuchaiev', 'Jagadeesh Balam', 'Yuliya Dovzhenko', 'Keenan Freyberg', 'Michael D. Shulman', 'Boris Ginsburg', 'Shinji Watanabe', 'Georg Kucsko'] | ['cs.CL', 'eess.AS'] | In the English speech-to-text (STT) machine learning task, acoustic models
are conventionally trained on uncased Latin characters, and any necessary
orthography (such as capitalization, punctuation, and denormalization of
non-standard words) is imputed by separate post-processing models. This adds
complexity and limits... | 2021-04-05T17:05:28Z | 5 pages, 1 figure. Submitted to INTERSPEECH 2021 | null | null | null | null | null | null | null | null | null |
2,104.02057 | An Empirical Study of Training Self-Supervised Vision Transformers | ['Xinlei Chen', 'Saining Xie', 'Kaiming He'] | ['cs.CV', 'cs.LG'] | This paper does not describe a novel method. Instead, it studies a
straightforward, incremental, yet must-know baseline given the recent progress
in computer vision: self-supervised learning for Vision Transformers (ViT).
While the training recipes for standard convolutional networks have been highly
mature and robust,... | 2021-04-05T17:59:40Z | Camera-ready, ICCV 2021, Oral. Code:
https://github.com/facebookresearch/moco-v3 | null | null | null | null | null | null | null | null | null |
2,104.02112 | Efficient Attentions for Long Document Summarization | ['Luyang Huang', 'Shuyang Cao', 'Nikolaus Parulian', 'Heng Ji', 'Lu Wang'] | ['cs.CL'] | The quadratic computational and memory complexities of large Transformers
have limited their scalability for long document summarization. In this paper,
we propose Hepos, a novel efficient encoder-decoder attention with head-wise
positional strides to effectively pinpoint salient information from the source.
We further... | 2021-04-05T18:45:13Z | Accepted at NAACL 2021 as a long paper | null | null | null | null | null | null | null | null | null |
2,104.02125 | SpeakerStew: Scaling to Many Languages with a Triaged Multilingual
Text-Dependent and Text-Independent Speaker Verification System | ['Roza Chojnacka', 'Jason Pelecanos', 'Quan Wang', 'Ignacio Lopez Moreno'] | ['eess.AS', 'cs.LG', 'cs.SD', 'stat.ML'] | In this paper, we describe SpeakerStew - a hybrid system to perform speaker
verification on 46 languages. Two core ideas were explored in this system: (1)
Pooling training data of different languages together for multilingual
generalization and reducing development cycles; (2) A novel triage mechanism
between text-depe... | 2021-04-05T19:48:16Z | null | null | null | null | null | null | null | null | null | null |
2,104.02321 | NU-Wave: A Diffusion Probabilistic Model for Neural Audio Upsampling | ['Junhyeok Lee', 'Seungu Han'] | ['eess.AS', 'cs.AI', 'cs.LG'] | In this work, we introduce NU-Wave, the first neural audio upsampling model
to produce waveforms of sampling rate 48kHz from coarse 16kHz or 24kHz inputs,
while prior works could generate only up to 16kHz. NU-Wave is the first
diffusion probabilistic model for audio super-resolution which is engineered
based on neural ... | 2021-04-06T06:52:53Z | Accepted to Interspeech 2021 | null | 10.21437/Interspeech.2021-36 | null | null | null | null | null | null | null |
2,104.02443 | CodeTrans: Towards Cracking the Language of Silicon's Code Through
Self-Supervised Deep Learning and High Performance Computing | ['Ahmed Elnaggar', 'Wei Ding', 'Llion Jones', 'Tom Gibbs', 'Tamas Feher', 'Christoph Angerer', 'Silvia Severini', 'Florian Matthes', 'Burkhard Rost'] | ['cs.SE', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.PL'] | Currently, a growing number of mature natural language processing
applications make people's lives more convenient. Such applications are built from
source code - the language in software engineering. However, the applications
for understanding source code language to ease the software engineering process
are under-resear... | 2021-04-06T11:57:12Z | 28 pages, 6 tables and 1 figure | null | null | CodeTrans: Towards Cracking the Language of Silicone's Code Through Self-Supervised Deep Learning and High Performance Computing | ['Ahmed Elnaggar', 'Wei Ding', 'Llion Jones', 'Tom Gibbs', 'Tamas B. Fehér', 'Christoph Angerer', 'Silvia Severini', 'F. Matthes', 'B. Rost'] | 2,021 | arXiv.org | 72 | 40 | ['Computer Science'] |
2,104.02821 | Towards Measuring Fairness in AI: the Casual Conversations Dataset | ['Caner Hazirbas', 'Joanna Bitton', 'Brian Dolhansky', 'Jacqueline Pan', 'Albert Gordo', 'Cristian Canton Ferrer'] | ['cs.CV', 'cs.AI', 'cs.LG'] | This paper introduces a novel dataset to help researchers evaluate their
computer vision and audio models for accuracy across a diverse set of ages,
genders, apparent skin tones and ambient lighting conditions. Our dataset is
composed of 3,011 subjects and contains over 45,000 videos, with an average of
15 videos per pe... | 2021-04-06T22:48:22Z | null | null | null | null | null | null | null | null | null | null |
2,104.03538 | MetricGAN+: An Improved Version of MetricGAN for Speech Enhancement | ['Szu-Wei Fu', 'Cheng Yu', 'Tsun-An Hsieh', 'Peter Plantinga', 'Mirco Ravanelli', 'Xugang Lu', 'Yu Tsao'] | ['cs.SD', 'cs.AI', 'eess.AS'] | The discrepancy between the cost function used for training a speech
enhancement model and human auditory perception usually makes the quality of
enhanced speech unsatisfactory. Objective evaluation metrics which consider
human perception can hence serve as a bridge to reduce the gap. Our previously
proposed MetricGAN ... | 2021-04-08T06:46:35Z | Accepted by Interspeech 2021 | null | null | MetricGAN+: An Improved Version of MetricGAN for Speech Enhancement | ['Szu-Wei Fu', 'Cheng Yu', 'Tsun-An Hsieh', 'Peter William VanHarn Plantinga', 'M. Ravanelli', 'Xugang Lu', 'Yu Tsao'] | 2,021 | Interspeech | 218 | 44 | ['Computer Science', 'Engineering'] |
2,104.03602 | SiT: Self-supervised vIsion Transformer | ['Sara Atito', 'Muhammad Awais', 'Josef Kittler'] | ['cs.CV', 'cs.LG'] | Self-supervised learning methods are gaining increasing traction in computer
vision due to their recent success in reducing the gap with supervised
learning. In natural language processing (NLP) self-supervised learning and
transformers are already the methods of choice. The recent literature suggests
that the transfor... | 2021-04-08T08:34:04Z | null | null | null | SiT: Self-supervised vIsion Transformer | ['Sara Atito Ali Ahmed', 'Muhammad Awais', 'J. Kittler'] | 2,021 | arXiv.org | 139 | 96 | ['Computer Science'] |
2,104.04045 | End-to-end speaker segmentation for overlap-aware resegmentation | ['Hervé Bredin', 'Antoine Laurent'] | ['eess.AS', 'cs.SD'] | Speaker segmentation consists in partitioning a conversation between one or
more speakers into speaker turns. Usually addressed as the late combination of
three sub-tasks (voice activity detection, speaker change detection, and
overlapped speech detection), we propose to train an end-to-end segmentation
model that does... | 2021-04-08T20:38:17Z | Camera-ready version for Interspeech 2021 with significantly better
voice activity detection, overlapped speech detection, and speaker
diarization results. The code used for results reported in v1 contained a
small bug that has now been fixed | null | null | null | null | null | null | null | null | null |
2,104.04052 | AlephBERT:A Hebrew Large Pre-Trained Language Model to Start-off your
Hebrew NLP Application With | ['Amit Seker', 'Elron Bandel', 'Dan Bareket', 'Idan Brusilovsky', 'Refael Shaked Greenfeld', 'Reut Tsarfaty'] | ['cs.CL'] | Large Pre-trained Language Models (PLMs) have become ubiquitous in the
development of language understanding technology and lie at the heart of many
artificial intelligence advances. While advances reported for English using
PLMs are unprecedented, reported advances using PLMs in Hebrew are few and far
between. The pro... | 2021-04-08T20:51:29Z | null | null | null | AlephBERT: A Hebrew Large Pre-Trained Language Model to Start-off your Hebrew NLP Application With | ['Amit Seker', 'Elron Bandel', 'Dan Bareket', 'Idan Brusilovsky', 'R. Greenfeld', 'Reut Tsarfaty'] | 2,021 | arXiv.org | 49 | 22 | ['Computer Science'] |
2,104.04108 | XFORMAL: A Benchmark for Multilingual Formality Style Transfer | ['Eleftheria Briakou', 'Di Lu', 'Ke Zhang', 'Joel Tetreault'] | ['cs.CL', 'cs.AI'] | We take the first step towards multilingual style transfer by creating and
releasing XFORMAL, a benchmark of multiple formal reformulations of informal
text in Brazilian Portuguese, French, and Italian. Results on XFORMAL suggest
that state-of-the-art style transfer approaches perform close to simple
baselines, indicat... | 2021-04-08T23:01:17Z | NAACL 2021 | null | null | null | null | null | null | null | null | null |
2,104.04302 | Annotating and Modeling Fine-grained Factuality in Summarization | ['Tanya Goyal', 'Greg Durrett'] | ['cs.CL'] | Recent pre-trained abstractive summarization systems have started to achieve
credible performance, but a major barrier to their use in practice is their
propensity to output summaries that are not faithful to the input and that
contain factual errors. While a number of annotated datasets and statistical
models for asse... | 2021-04-09T11:20:44Z | NAACL 2021 | null | null | Annotating and Modeling Fine-grained Factuality in Summarization | ['Tanya Goyal', 'Greg Durrett'] | 2,021 | North American Chapter of the Association for Computational Linguistics | 153 | 41 | ['Computer Science'] |
2,104.04473 | Efficient Large-Scale Language Model Training on GPU Clusters Using
Megatron-LM | ['Deepak Narayanan', 'Mohammad Shoeybi', 'Jared Casper', 'Patrick LeGresley', 'Mostofa Patwary', 'Vijay Anand Korthikanti', 'Dmitri Vainbrand', 'Prethvi Kashinkunti', 'Julie Bernauer', 'Bryan Catanzaro', 'Amar Phanishayee', 'Matei Zaharia'] | ['cs.CL', 'cs.DC'] | Large language models have led to state-of-the-art accuracies across a range
of tasks. However, training these models efficiently is challenging for two
reasons: a) GPU memory capacity is limited, making it impossible to fit large
models on even a multi-GPU server, and b) the number of compute operations
required to tr... | 2021-04-09T16:43:11Z | Accepted to SC 2021 | null | null | null | null | null | null | null | null | null |
2,104.0463 | WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for
Detecting Toxic Spans | ['Tharindu Ranasinghe', 'Diptanu Sarkar', 'Marcos Zampieri', 'Alexander Ororbia'] | ['cs.CL', 'cs.AI', 'cs.LG'] | In recent years, the widespread use of social media has led to an increase in
the generation of toxic and offensive content on online platforms. In response,
social media platforms have worked on developing automatic detection methods
and employing human moderators to cope with this deluge of offensive content.
While v... | 2021-04-09T22:52:26Z | Accepted to SemEval-2021 | null | null | null | null | null | null | null | null | null |
2,104.04767 | MobileStyleGAN: A Lightweight Convolutional Neural Network for
High-Fidelity Image Synthesis | ['Sergei Belousov'] | ['cs.CV', 'eess.IV'] | In recent years, the use of Generative Adversarial Networks (GANs) has become
very popular in generative image modeling. While style-based GAN architectures
yield state-of-the-art results in high-fidelity image synthesis,
computationally, they are highly complex. In our work, we focus on the
performance optimization of... | 2021-04-10T13:46:49Z | null | null | null | MobileStyleGAN: A Lightweight Convolutional Neural Network for High-Fidelity Image Synthesis | ['Sergei Belousov'] | 2,021 | arXiv.org | 20 | 34 | ['Computer Science', 'Engineering'] |
2104.05557 | SC-GlowTTS: an Efficient Zero-Shot Multi-Speaker Text-To-Speech Model | ['Edresson Casanova', 'Christopher Shulby', 'Eren Gölge', 'Nicolas Michael Müller', 'Frederico Santos de Oliveira', 'Arnaldo Candido Junior', 'Anderson da Silva Soares', 'Sandra Maria Aluisio', 'Moacir Antonelli Ponti'] | ['eess.AS', 'cs.SD'] | In this paper, we propose SC-GlowTTS: an efficient zero-shot multi-speaker
text-to-speech model that improves similarity for speakers unseen during
training. We propose a speaker-conditional architecture that explores a
flow-based decoder that works in a zero-shot scenario. As text encoders, we
explore a dilated residu... | 2021-04-02T22:31:45Z | Accepted on Interspeech 2021 | null | null | SC-GlowTTS: an Efficient Zero-Shot Multi-Speaker Text-To-Speech Model | ['Edresson Casanova', 'C. Shulby', 'Eren Gölge', 'N. Müller', 'F. S. Oliveira', 'Arnaldo Cândido Júnior', 'A. S. Soares', 'S. Aluísio', 'M. Ponti'] | 2021 | Interspeech | 100 | 36 | ['Engineering', 'Computer Science']
2104.05561 | Evidence for an MHD disk wind via optical forbidden line
spectro-astrometry | ['E. T Whelan', 'I. Pascucci', 'U. Gorti', 'S. Edwards', 'R. D. Alexander', 'M. F. Sterzik', 'C. Melo'] | ['astro-ph.SR'] | Spectro-astrometry is used to investigate the low velocity component (LVC) of
the optical forbidden emission from the T Tauri stars RU Lupi and AS 205 N.
Both stars also have high velocity forbidden emission (HVC) which is tracing a
jet. For AS 205 N, analysis reveals a complicated outflow system. For RU Lupi,
the [O I... | 2021-04-12T15:29:55Z | Accepted by ApJ, 16 pages, 11 figures | null | 10.3847/1538-4357/abf55e | null | null | null | null | null | null | null |
2104.05704 | Escaping the Big Data Paradigm with Compact Transformers | ['Ali Hassani', 'Steven Walton', 'Nikhil Shah', 'Abulikemu Abuduweili', 'Jiachen Li', 'Humphrey Shi'] | ['cs.CV', 'cs.LG'] | With the rise of Transformers as the standard for language processing, and
their advancements in computer vision, there has been a corresponding growth in
parameter size and amounts of training data. Many have come to believe that
because of this, transformers are not suitable for small sets of data. This
trend leads t... | 2021-04-12T17:58:56Z | Added new results on Flowers-102, distillation | null | null | Escaping the Big Data Paradigm with Compact Transformers | ['Ali Hassani', 'Steven Walton', 'Nikhil Shah', 'Abulikemu Abuduweili', 'Jiachen Li', 'Humphrey Shi'] | 2021 | arXiv.org | 465 | 64 | ['Computer Science']
2104.05832 | SpartQA: A Textual Question Answering Benchmark for Spatial Reasoning | ['Roshanak Mirzaee', 'Hossein Rajaby Faghihi', 'Qiang Ning', 'Parisa Kordjamshidi'] | ['cs.CL', 'cs.AI'] | This paper proposes a question-answering (QA) benchmark for spatial reasoning
on natural language text which contains more realistic spatial phenomena not
covered by prior work and is challenging for state-of-the-art language models
(LM). We propose a distant supervision method to improve on this task.
Specifically, we... | 2021-04-12T21:37:18Z | NAACL 2021 | null | null | null | null | null | null | null | null | null |
2104.05938 | QMSum: A New Benchmark for Query-based Multi-domain Meeting
Summarization | ['Ming Zhong', 'Da Yin', 'Tao Yu', 'Ahmad Zaidi', 'Mutethia Mutuma', 'Rahul Jha', 'Ahmed Hassan Awadallah', 'Asli Celikyilmaz', 'Yang Liu', 'Xipeng Qiu', 'Dragomir Radev'] | ['cs.CL'] | Meetings are a key component of human collaboration. As increasing numbers of
meetings are recorded and transcribed, meeting summaries have become essential
to remind those who may or may not have attended the meetings about the key
decisions made and the tasks to be completed. However, it is hard to create a
single sh... | 2021-04-13T05:00:35Z | Accepted by NAACL 2021 | null | null | null | null | null | null | null | null | null |
2104.06378 | QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question
Answering | ['Michihiro Yasunaga', 'Hongyu Ren', 'Antoine Bosselut', 'Percy Liang', 'Jure Leskovec'] | ['cs.CL', 'cs.LG'] | The problem of answering questions using knowledge from pre-trained language
models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA
context (question and answer choice), methods need to (i) identify relevant
knowledge from large KGs, and (ii) perform joint reasoning over the QA context
and KG. In t... | 2021-04-13T17:32:51Z | NAACL 2021. Code & data available at
https://github.com/michiyasunaga/qagnn | null | null | null | null | null | null | null | null | null |
2104.06399 | Co-Scale Conv-Attentional Image Transformers | ['Weijian Xu', 'Yifan Xu', 'Tyler Chang', 'Zhuowen Tu'] | ['cs.CV', 'cs.LG', 'cs.NE'] | In this paper, we present Co-scale conv-attentional image Transformers
(CoaT), a Transformer-based image classifier equipped with co-scale and
conv-attentional mechanisms. First, the co-scale mechanism maintains the
integrity of Transformers' encoder branches at individual scales, while
allowing representations learned... | 2021-04-13T17:58:29Z | Accepted to ICCV 2021 (Oral) | null | null | null | null | null | null | null | null | null |
2104.06403 | Lite-HRNet: A Lightweight High-Resolution Network | ['Changqian Yu', 'Bin Xiao', 'Changxin Gao', 'Lu Yuan', 'Lei Zhang', 'Nong Sang', 'Jingdong Wang'] | ['cs.CV'] | We present an efficient high-resolution network, Lite-HRNet, for human pose
estimation. We start by simply applying the efficient shuffle block in
ShuffleNet to HRNet (high-resolution network), yielding stronger performance
over popular lightweight networks, such as MobileNet, ShuffleNet, and Small
HRNet.
We find tha... | 2021-04-13T17:59:31Z | Accepted to CVPR 2021 | null | null | null | null | null | null | null | null | null |
2104.06486 | MS2: Multi-Document Summarization of Medical Studies | ['Jay DeYoung', 'Iz Beltagy', 'Madeleine van Zuylen', 'Bailey Kuehl', 'Lucy Lu Wang'] | ['cs.CL', 'cs.AI', 'cs.LG'] | To assess the effectiveness of any medical intervention, researchers must
conduct a time-intensive and highly manual literature review. NLP systems can
help to automate or assist in parts of this expensive process. In support of
this goal, we release MS^2 (Multi-Document Summarization of Medical Studies), a
dataset of ... | 2021-04-13T19:59:34Z | 8 pages of content, 20 pages including references and appendix. See
https://github.com/allenai/ms2/ for code,
https://ai2-s2-ms2.s3-us-west-2.amazonaws.com/ms_data_2021-04-12.zip for data
(1.8G, zipped) Published in EMNLP 2021 @
https://aclanthology.org/2021.emnlp-main.594/ | null | null | null | null | null | null | null | null | null |
2104.06546 | Large-Scale Contextualised Language Modelling for Norwegian | ['Andrey Kutuzov', 'Jeremy Barnes', 'Erik Velldal', 'Lilja Øvrelid', 'Stephan Oepen'] | ['cs.CL'] | We present the ongoing NorLM initiative to support the creation and use of
very large contextualised language models for Norwegian (and in principle other
Nordic languages), including a ready-to-use software environment, as well as an
experience report for data preparation and training. This paper introduces the
first ... | 2021-04-13T23:18:04Z | Accepted to NoDaLiDa'2021 | null | null | Large-Scale Contextualised Language Modelling for Norwegian | ['Andrey Kutuzov', 'Jeremy Barnes', 'Erik Velldal', 'Lilja Ovrelid', 'S. Oepen'] | 2021 | Nordic Conference of Computational Linguistics | 38 | 30 | ['Computer Science']
2104.06678 | Large-Scale Self- and Semi-Supervised Learning for Speech Translation | ['Changhan Wang', 'Anne Wu', 'Juan Pino', 'Alexei Baevski', 'Michael Auli', 'Alexis Conneau'] | ['cs.CL'] | In this paper, we improve speech translation (ST) through effectively
leveraging large quantities of unlabeled speech and text data in different and
complementary ways. We explore both pretraining and self-training by using the
large Libri-Light speech audio corpus and language modeling with CommonCrawl.
Our experiment... | 2021-04-14T07:44:52Z | null | null | null | Large-Scale Self- and Semi-Supervised Learning for Speech Translation | ['Changhan Wang', 'Anne Wu', 'J. Pino', 'Alexei Baevski', 'Michael Auli', 'Alexis Conneau'] | 2021 | Interspeech | 46 | 47 | ['Computer Science']
2104.06967 | Efficiently Teaching an Effective Dense Retriever with Balanced Topic
Aware Sampling | ['Sebastian Hofstätter', 'Sheng-Chieh Lin', 'Jheng-Hong Yang', 'Jimmy Lin', 'Allan Hanbury'] | ['cs.IR', 'cs.CL'] | A vital step towards the widespread adoption of neural retrieval models is
their resource efficiency throughout the training, indexing and query
workflows. The neural IR community made great advancements in training
effective dual-encoder dense retrieval (DR) models recently. A dense text
retrieval model uses a single ... | 2021-04-14T16:49:18Z | Accepted at SIGIR 2021 (Full Paper track) | null | null | Efficiently Teaching an Effective Dense Retriever with Balanced Topic Aware Sampling | ['Sebastian Hofstätter', 'Sheng-Chieh Lin', 'Jheng-Hong Yang', 'Jimmy J. Lin', 'A. Hanbury'] | 2021 | Annual International ACM SIGIR Conference on Research and Development in Information Retrieval | 406 | 48 | ['Computer Science']
2104.06979 | TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for
Unsupervised Sentence Embedding Learning | ['Kexin Wang', 'Nils Reimers', 'Iryna Gurevych'] | ['cs.CL'] | Learning sentence embeddings often requires a large amount of labeled data.
However, for most tasks and domains, labeled data is seldom available and
creating it is expensive. In this work, we present a new state-of-the-art
unsupervised method based on pre-trained Transformers and Sequential Denoising
Auto-Encoder (TSD... | 2021-04-14T17:02:18Z | Accepted at EMNLP 2021 Findings | null | null | TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning | ['Kexin Wang', 'Nils Reimers', 'Iryna Gurevych'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 189 | 43 | ['Computer Science']
2104.07081 | TWEAC: Transformer with Extendable QA Agent Classifiers | ['Gregor Geigle', 'Nils Reimers', 'Andreas Rücklé', 'Iryna Gurevych'] | ['cs.CL'] | Question answering systems should help users to access knowledge on a broad
range of topics and to answer a wide array of different questions. Most systems
fall short of this expectation as they are only specialized in one particular
setting, e.g., answering factual questions with Wikipedia data. To overcome
this limit... | 2021-04-14T19:06:11Z | null | null | null | null | null | null | null | null | null | null |