arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2106.04732 | AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation | ['David Berthelot', 'Rebecca Roelofs', 'Kihyuk Sohn', 'Nicholas Carlini', 'Alex Kurakin'] | ['cs.LG', 'cs.AI', 'cs.CV'] | We extend semi-supervised learning to the problem of domain adaptation to learn significantly higher-accuracy models that train on one data distribution and test on a different one. With the goal of generality, we introduce AdaMatch, a method that unifies the tasks of unsupervised domain adaptation (UDA), semi-supervis... | 2021-06-08T23:39:12Z | Accepted to ICLR 2022 | null | null | AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation | ['David Berthelot', 'R. Roelofs', 'Kihyuk Sohn', 'Nicholas Carlini', 'Alexey Kurakin'] | 2021 | International Conference on Learning Representations | 145 | 63 | ['Computer Science'] |
2106.04803 | CoAtNet: Marrying Convolution and Attention for All Data Sizes | ['Zihang Dai', 'Hanxiao Liu', 'Quoc V. Le', 'Mingxing Tan'] | ['cs.CV', 'cs.LG'] | Transformers have attracted increasing interests in computer vision, but they still fall behind state-of-the-art convolutional networks. In this work, we show that while Transformers tend to have larger model capacity, their generalization can be worse than convolutional networks due to the lack of the right inductive ... | 2021-06-09T04:35:31Z | null | null | null | CoAtNet: Marrying Convolution and Attention for All Data Sizes | ['Zihang Dai', 'Hanxiao Liu', 'Quoc V. Le', 'Mingxing Tan'] | 2021 | Neural Information Processing Systems | 1,223 | 55 | ['Computer Science'] |
2106.05234 | Do Transformers Really Perform Bad for Graph Representation? | ['Chengxuan Ying', 'Tianle Cai', 'Shengjie Luo', 'Shuxin Zheng', 'Guolin Ke', 'Di He', 'Yanming Shen', 'Tie-Yan Liu'] | ['cs.LG', 'cs.AI'] | The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet, it has not achieved competitive performance on popular leaderboards of graph-level prediction compared to mainstream GNN variants. Therefore, it remains a mystery how Transformers cou... | 2021-06-09T17:18:52Z | null | NeurIPS 2021 | null | null | null | null | null | null | null | null |
2106.05237 | Knowledge distillation: A good teacher is patient and consistent | ['Lucas Beyer', 'Xiaohua Zhai', 'Amélie Royer', 'Larisa Markeeva', 'Rohan Anil', 'Alexander Kolesnikov'] | ['cs.CV', 'cs.AI', 'cs.LG'] | There is a growing discrepancy in computer vision between large-scale models that achieve state-of-the-art performance and models that are affordable in practical applications. In this paper we address this issue and significantly bridge the gap between these two types of models. Throughout our empirical investigation ... | 2021-06-09T17:20:40Z | Lucas, Xiaohua, Am\'elie, Larisa, and Alex contributed equally; CVPR 2022 | null | null | Knowledge distillation: A good teacher is patient and consistent | ['Lucas Beyer', 'Xiaohua Zhai', 'Amélie Royer', 'L. Markeeva', 'Rohan Anil', 'Alexander Kolesnikov'] | 2021 | Computer Vision and Pattern Recognition | 304 | 53 | ['Computer Science'] |
2106.05779 | Deep Implicit Surface Point Prediction Networks | ['Rahul Venkatesh', 'Tejan Karmali', 'Sarthak Sharma', 'Aurobrata Ghosh', 'R. Venkatesh Babu', 'László A. Jeni', 'Maneesh Singh'] | ['cs.CV', 'cs.GR'] | Deep neural representations of 3D shapes as implicit functions have been shown to produce high fidelity models surpassing the resolution-memory trade-off faced by the explicit representations using meshes and point clouds. However, most such approaches focus on representing closed shapes. Unsigned distance function (UD... | 2021-06-10T14:31:54Z | 22 pages, 17 figures | null | null | Deep Implicit Surface Point Prediction Networks | ['R. Venkatesh', 'Tejan Karmali', 'Sarthak Sharma', 'Aurobrata Ghosh', 'László A. Jeni', 'R. Venkatesh Babu', 'M. Singh'] | 2021 | IEEE International Conference on Computer Vision | 47 | 47 | ['Computer Science'] |
2106.05784 | Programming Puzzles | ['Tal Schuster', 'Ashwin Kalyan', 'Oleksandr Polozov', 'Adam Tauman Kalai'] | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.PL', 'cs.SE'] | We introduce a new type of programming challenge called programming puzzles, as an objective and comprehensive evaluation of program synthesis, and release an open-source dataset of Python Programming Puzzles (P3). Each puzzle is defined by a short Python program $f$, and the goal is to find an input which makes $f$ re... | 2021-06-10T14:37:28Z | NeurIPS 2021 (Datasets and Benchmarks Track). Puzzles repository: https://github.com/microsoft/PythonProgrammingPuzzles | null | null | null | null | null | null | null | null | null |
2106.05822 | GroupBERT: Enhanced Transformer Architecture with Efficient Grouped Structures | ['Ivan Chelombiev', 'Daniel Justus', 'Douglas Orr', 'Anastasia Dietrich', 'Frithjof Gressmann', 'Alexandros Koliousis', 'Carlo Luschi'] | ['cs.CL', 'cs.LG'] | Attention based language models have become a critical component in state-of-the-art natural language processing systems. However, these models have significant computational requirements, due to long training times, dense operations and large parameter count. In this work we demonstrate a set of modifications to the s... | 2021-06-10T15:41:53Z | null | null | null | null | null | null | null | null | null | null |
2106.06103 | Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech | ['Jaehyeon Kim', 'Jungil Kong', 'Juhee Son'] | ['cs.SD', 'eess.AS'] | Several recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. In this work, we present a parallel end-to-end TTS method that generates more natural sounding audio than current two-stag... | 2021-06-11T01:07:12Z | ICML 2021 | null | null | Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech | ['Jaehyeon Kim', 'Jungil Kong', 'Juhee Son'] | 2021 | International Conference on Machine Learning | 903 | 45 | ['Computer Science', 'Engineering'] |
2106.06381 | Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment | ['Zewen Chi', 'Li Dong', 'Bo Zheng', 'Shaohan Huang', 'Xian-Ling Mao', 'Heyan Huang', 'Furu Wei'] | ['cs.CL'] | The cross-lingual language models are typically pretrained with masked language modeling on multilingual text or parallel sentences. In this paper, we introduce denoising word alignment as a new cross-lingual pre-training task. Specifically, the model first self-labels word alignments for parallel sentences. Then we ra... | 2021-06-11T13:36:01Z | ACL 2021 | null | null | Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment | ['Zewen Chi', 'Li Dong', 'Bo Zheng', 'Shaohan Huang', 'Xian-Ling Mao', 'Heyan Huang', 'Furu Wei'] | 2021 | Annual Meeting of the Association for Computational Linguistics | 70 | 53 | ['Computer Science'] |
2106.06909 | GigaSpeech: An Evolving, Multi-domain ASR Corpus with 10,000 Hours of Transcribed Audio | ['Guoguo Chen', 'Shuzhou Chai', 'Guanbo Wang', 'Jiayu Du', 'Wei-Qiang Zhang', 'Chao Weng', 'Dan Su', 'Daniel Povey', 'Jan Trmal', 'Junbo Zhang', 'Mingjie Jin', 'Sanjeev Khudanpur', 'Shinji Watanabe', 'Shuaijiang Zhao', 'Wei Zou', 'Xiangang Li', 'Xuchen Yao', 'Yongqing Wang', 'Yujun Wang', 'Zhao You', 'Zhiyong Yan'] | ['cs.SD', 'cs.CL', 'eess.AS'] | This paper introduces GigaSpeech, an evolving, multi-domain English speech recognition corpus with 10,000 hours of high quality labeled audio suitable for supervised training, and 40,000 hours of total audio suitable for semi-supervised and unsupervised training. Around 40,000 hours of transcribed audio is first collec... | 2021-06-13T04:09:16Z | null | INTERSPEECH (2021) 3670-3674 | 10.21437/Interspeech.2021-1965 | null | null | null | null | null | null | null |
2106.07447 | HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units | ['Wei-Ning Hsu', 'Benjamin Bolte', 'Yao-Hung Hubert Tsai', 'Kushal Lakhotia', 'Ruslan Salakhutdinov', 'Abdelrahman Mohamed'] | ['cs.CL', 'cs.AI', 'cs.LG', 'eess.AS'] | Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal ... | 2021-06-14T14:14:28Z | null | null | null | null | null | null | null | null | null | null |
2106.07499 | An Empirical Survey of Data Augmentation for Limited Data Learning in NLP | ['Jiaao Chen', 'Derek Tam', 'Colin Raffel', 'Mohit Bansal', 'Diyi Yang'] | ['cs.CL', 'cs.AI'] | NLP has achieved great progress in the past decade through the use of neural models and large labeled datasets. The dependence on abundant data prevents NLP models from being applied to low-resource settings or novel tasks where significant time, money, or expertise is required to label massive amounts of textual data.... | 2021-06-14T15:27:22Z | null | null | null | An Empirical Survey of Data Augmentation for Limited Data Learning in NLP | ['Jiaao Chen', 'Derek Tam', 'Colin Raffel', 'Mohit Bansal', 'Diyi Yang'] | 2021 | Transactions of the Association for Computational Linguistics | 178 | 170 | ['Computer Science'] |
2106.07889 | UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation | ['Won Jang', 'Dan Lim', 'Jaesam Yoon', 'Bongwan Kim', 'Juntae Kim'] | ['eess.AS', 'cs.SD'] | Most neural vocoders employ band-limited mel-spectrograms to generate waveforms. If full-band spectral features are used as the input, the vocoder can be provided with as much acoustic information as possible. However, in some models employing full-band mel-spectrograms, an over-smoothing problem occurs as part of whic... | 2021-06-15T05:35:34Z | Accepted to INTERSPEECH 2021 | null | null | UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation | ['Won Jang', 'D. Lim', 'Jaesam Yoon', 'Bongwan Kim', 'Juntae Kim'] | 2021 | Interspeech | 132 | 32 | ['Engineering', 'Computer Science'] |
2106.07967 | Incorporating Word Sense Disambiguation in Neural Language Models | ['Jan Philip Wahle', 'Terry Ruas', 'Norman Meuschke', 'Bela Gipp'] | ['cs.CL', 'cs.AI'] | We present two supervised (pre-)training methods to incorporate gloss definitions from lexical resources into neural language models (LMs). The training improves our models' performance for Word Sense Disambiguation (WSD) but also benefits general language understanding tasks while adding almost no parameters. We evalu... | 2021-06-15T08:44:08Z | null | null | null | null | null | null | null | null | null | null |
2106.08017 | Color2Embed: Fast Exemplar-Based Image Colorization using Color Embeddings | ['Hengyuan Zhao', 'Wenhao Wu', 'Yihao Liu', 'Dongliang He'] | ['cs.CV', 'cs.MM'] | In this paper, we present a fast exemplar-based image colorization approach using color embeddings named Color2Embed. Generally, due to the difficulty of obtaining input and ground truth image pairs, it is hard to train a exemplar-based colorization model with unsupervised and unpaired training manner. Current algorith... | 2021-06-15T10:05:58Z | 10 pages, 10 figures | null | null | Color2Embed: Fast Exemplar-Based Image Colorization using Color Embeddings | ['Hengyuan Zhao', 'Wenhao Wu', 'Yihao Liu', 'Dongliang He'] | 2021 | null | 16 | 51 | ['Computer Science'] |
2106.08254 | BEiT: BERT Pre-Training of Image Transformers | ['Hangbo Bao', 'Li Dong', 'Songhao Piao', 'Furu Wei'] | ['cs.CV', 'cs.LG'] | We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation from Image Transformers. Following BERT developed in the natural language processing area, we propose a masked image modeling task to pretrain vision Transformers. Specifically, each image has two view... | 2021-06-15T16:02:37Z | A Path to the BERT Moment of CV | null | null | null | null | null | null | null | null | null |
2106.08322 | Dynamic Head: Unifying Object Detection Heads with Attentions | ['Xiyang Dai', 'Yinpeng Chen', 'Bin Xiao', 'Dongdong Chen', 'Mengchen Liu', 'Lu Yuan', 'Lei Zhang'] | ['cs.CV'] | The complex nature of combining localization and classification in object detection has resulted in the flourished development of methods. Previous works tried to improve the performance in various object detection heads but failed to present a unified view. In this paper, we present a novel dynamic head framework to u... | 2021-06-15T17:55:22Z | CVPR 2021 camera ready with extensions | null | null | null | null | null | null | null | null | null |
2106.09018 | End-to-End Semi-Supervised Object Detection with Soft Teacher | ['Mengde Xu', 'Zheng Zhang', 'Han Hu', 'Jianfeng Wang', 'Lijuan Wang', 'Fangyun Wei', 'Xiang Bai', 'Zicheng Liu'] | ['cs.CV', 'cs.AI'] | This paper presents an end-to-end semi-supervised object detection approach, in contrast to previous more complex multi-stage methods. The end-to-end training gradually improves pseudo label qualities during the curriculum, and the more and more accurate pseudo labels in turn benefit object detection training. We also ... | 2021-06-16T17:59:30Z | Accepted by ICCV2021 | null | null | End-to-End Semi-Supervised Object Detection with Soft Teacher | ['Mengde Xu', 'Zheng Zhang', 'Han Hu', 'Jianfeng Wang', 'Lijuan Wang', 'Fangyun Wei', 'X. Bai', 'Zicheng Liu'] | 2021 | IEEE International Conference on Computer Vision | 501 | 36 | ['Computer Science'] |
2106.09449 | DocNLI: A Large-scale Dataset for Document-level Natural Language Inference | ['Wenpeng Yin', 'Dragomir Radev', 'Caiming Xiong'] | ['cs.CL'] | Natural language inference (NLI) is formulated as a unified framework for solving various NLP problems such as relation extraction, question answering, summarization, etc. It has been studied intensively in the past few years thanks to the availability of large-scale labeled datasets. However, most existing studies foc... | 2021-06-17T13:02:26Z | ACL'21 Findings Camera-ready | null | null | DocNLI: A Large-scale Dataset for Document-level Natural Language Inference | ['Wenpeng Yin', 'Dragomir R. Radev', 'Caiming Xiong'] | 2021 | Findings | 98 | 27 | ['Computer Science'] |
2106.09462 | pysentimiento: A Python Toolkit for Opinion Mining and Social NLP tasks | ['Juan Manuel Pérez', 'Mariela Rajngewerc', 'Juan Carlos Giudici', 'Damián A. Furman', 'Franco Luque', 'Laura Alonso Alemany', 'María Vanina Martínez'] | ['cs.CL'] | In recent years, the extraction of opinions and information from user-generated text has attracted a lot of interest, largely due to the unprecedented volume of content in Social Media. However, social researchers face some issues in adopting cutting-edge tools for these tasks, as they are usually behind commercial API... | 2021-06-17T13:15:07Z | null | null | null | pysentimiento: A Python Toolkit for Opinion Mining and Social NLP tasks | ["Juan Manuel P'erez", 'Mariela Rajngewerc', 'Juan Carlos Giudici', 'D. Furman', 'F. Luque', 'Laura Alonso Alemany', "Mar'ia Vanina Mart'inez"] | 2021 | null | 33 | 79 | ['Computer Science'] |
2106.09553 | Large-Scale Chemical Language Representations Capture Molecular Structure and Properties | ['Jerret Ross', 'Brian Belgodere', 'Vijil Chenthamarakshan', 'Inkit Padhi', 'Youssef Mroueh', 'Payel Das'] | ['cs.LG', 'cs.CL', 'q-bio.BM'] | Models based on machine learning can enable accurate and fast molecular property predictions, which is of interest in drug discovery and material design. Various supervised machine learning models have demonstrated promising performance, but the vast chemical space and the limited availability of property labels make s... | 2021-06-17T14:33:55Z | NMI 2022 | null | null | null | null | null | null | null | null | null |
2106.09681 | XCiT: Cross-Covariance Image Transformers | ['Alaaeldin El-Nouby', 'Hugo Touvron', 'Mathilde Caron', 'Piotr Bojanowski', 'Matthijs Douze', 'Armand Joulin', 'Ivan Laptev', 'Natalia Neverova', 'Gabriel Synnaeve', 'Jakob Verbeek', 'Hervé Jegou'] | ['cs.CV', 'cs.LG'] | Following their success in natural language processing, transformers have recently shown much promise for computer vision. The self-attention operation underlying transformers yields global interactions between all tokens ,i.e. words or image patches, and enables flexible modelling of image data beyond the local intera... | 2021-06-17T17:33:35Z | null | null | null | null | null | null | null | null | null | null |
2106.09685 | LoRA: Low-Rank Adaptation of Large Language Models | ['Edward J. Hu', 'Yelong Shen', 'Phillip Wallis', 'Zeyuan Allen-Zhu', 'Yuanzhi Li', 'Shean Wang', 'Lu Wang', 'Weizhu Chen'] | ['cs.CL', 'cs.AI', 'cs.LG'] | An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying indepen... | 2021-06-17T17:37:18Z | Draft V2 includes better baselines, experiments on GLUE, and more on adapter latency | null | null | null | null | null | null | null | null | null |
2106.09997 | SPBERT: An Efficient Pre-training BERT on SPARQL Queries for Question Answering over Knowledge Graphs | ['Hieu Tran', 'Long Phan', 'James Anibal', 'Binh T. Nguyen', 'Truong-Son Nguyen'] | ['cs.CL'] | In this paper, we propose SPBERT, a transformer-based language model pre-trained on massive SPARQL query logs. By incorporating masked language modeling objectives and the word structural objective, SPBERT can learn general-purpose representations in both natural language and SPARQL query language. We investigate how S... | 2021-06-18T08:39:26Z | null | null | null | null | null | null | null | null | null | null |
2106.10161 | Golos: Russian Dataset for Speech Research | ['Nikolay Karpov', 'Alexander Denisenko', 'Fedor Minkin'] | ['eess.AS', 'E.m; I.5.1'] | This paper introduces a novel Russian speech dataset called Golos, a large corpus suitable for speech research. The dataset mainly consists of recorded audio files manually annotated on the crowd-sourcing platform. The total duration of the audio is about 1240 hours. We have made the corpus freely available to download... | 2021-06-18T14:55:02Z | 5 pages, 3 figures, accepted to Interspeech2021 | null | null | null | null | null | null | null | null | null |
2106.10270 | How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers | ['Andreas Steiner', 'Alexander Kolesnikov', 'Xiaohua Zhai', 'Ross Wightman', 'Jakob Uszkoreit', 'Lucas Beyer'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Vision Transformers (ViT) have been shown to attain highly competitive performance for a wide range of vision applications, such as image classification, object detection and semantic image segmentation. In comparison to convolutional neural networks, the Vision Transformer's weaker inductive bias is generally found to... | 2021-06-18T17:58:20Z | Andreas, Alex, Xiaohua and Lucas contributed equally. We release more than 50'000 ViT models trained under diverse settings on various datasets. Available at https://github.com/google-research/big_vision, https://github.com/google-research/vision_transformer and https://github.com/rwightman/pytorch-image-models... | Transactions on Machine Learning Research (05/2022) | null | null | null | null | null | null | null | null |
2106.11520 | BARTScore: Evaluating Generated Text as Text Generation | ['Weizhe Yuan', 'Graham Neubig', 'Pengfei Liu'] | ['cs.CL'] | A wide variety of NLP applications, such as machine translation, summarization, and dialog, involve text generation. One major challenge for these applications is how to evaluate whether such generated texts are actually fluent, accurate, or effective. In this work, we conceptualize the evaluation of generated text as ... | 2021-06-22T03:20:53Z | NeurIPS 2021 | null | null | BARTScore: Evaluating Generated Text as Text Generation | ['Weizhe Yuan', 'Graham Neubig', 'Pengfei Liu'] | 2021 | Neural Information Processing Systems | 851 | 76 | ['Computer Science'] |
2106.12672 | Charformer: Fast Character Transformers via Gradient-based Subword Tokenization | ['Yi Tay', 'Vinh Q. Tran', 'Sebastian Ruder', 'Jai Gupta', 'Hyung Won Chung', 'Dara Bahri', 'Zhen Qin', 'Simon Baumgartner', 'Cong Yu', 'Donald Metzler'] | ['cs.CL', 'cs.AI', 'cs.LG'] | State-of-the-art models in natural language processing rely on separate rigid subword tokenization algorithms, which limit their generalization ability and adaptation to new settings. In this paper, we propose a new model inductive bias that learns a subword tokenization end-to-end as part of the model. To this end, we... | 2021-06-23T22:24:14Z | ICLR 2022 Camera Ready | null | null | null | null | null | null | null | null | null |
2106.13000 | QASR: QCRI Aljazeera Speech Resource -- A Large Scale Annotated Arabic Speech Corpus | ['Hamdy Mubarak', 'Amir Hussein', 'Shammur Absar Chowdhury', 'Ahmed Ali'] | ['cs.CL', 'cs.SD', 'eess.AS'] | We introduce the largest transcribed Arabic speech corpus, QASR, collected from the broadcast domain. This multi-dialect speech dataset contains 2,000 hours of speech sampled at 16kHz crawled from Aljazeera news channel. The dataset is released with lightly supervised transcriptions, aligned with the audio segments. Un... | 2021-06-24T13:20:40Z | Speech Corpus, Spoken Conversation, ASR, Dialect Identification, Punctuation Restoration, Speaker Verification, NER, Named Entity, Arabic, Speaker gender, Turn-taking Accepted in ACL 2021 | null | null | QASR: QCRI Aljazeera Speech Resource A Large Scale Annotated Arabic Speech Corpus | ['Hamdy Mubarak', 'A. Hussein', 'S. A. Chowdhury', 'Ahmed M. Ali'] | 2021 | Annual Meeting of the Association for Computational Linguistics | 49 | 53 | ['Computer Science', 'Engineering'] |
2106.13008 | Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting | ['Haixu Wu', 'Jiehui Xu', 'Jianmin Wang', 'Mingsheng Long'] | ['cs.LG', 'cs.AI'] | Extending the forecasting time is a critical demand for real applications, such as extreme weather early warning and long-term energy consumption planning. This paper studies the long-term forecasting problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover the long-ran... | 2021-06-24T13:43:43Z | null | null | null | null | null | null | null | null | null | null |
2106.13112 | VOLO: Vision Outlooker for Visual Recognition | ['Li Yuan', 'Qibin Hou', 'Zihang Jiang', 'Jiashi Feng', 'Shuicheng Yan'] | ['cs.CV'] | Visual recognition has been dominated by convolutional neural networks (CNNs) for years. Though recently the prevailing vision transformers (ViTs) have shown great potential of self-attention based models in ImageNet classification, their performance is still inferior to that of the latest SOTA CNNs if no extra data ar... | 2021-06-24T15:46:54Z | code: https://github.com/sail-sg/volo | null | null | null | null | null | null | null | null | null |
2106.13230 | Video Swin Transformer | ['Ze Liu', 'Jia Ning', 'Yue Cao', 'Yixuan Wei', 'Zheng Zhang', 'Stephen Lin', 'Han Hu'] | ['cs.CV', 'cs.AI', 'cs.LG'] | The vision community is witnessing a modeling shift from CNNs to Transformers, where pure Transformer architectures have attained top accuracy on the major video recognition benchmarks. These video models are all built on Transformer layers that globally connect patches across the spatial and temporal dimensions. In th... | 2021-06-24T17:59:46Z | null | null | null | null | null | null | null | null | null | null |
2106.13553 | Exploring the Representation of Word Meanings in Context: A Case Study on Homonymy and Synonymy | ['Marcos Garcia'] | ['cs.CL'] | This paper presents a multilingual study of word meaning representations in context. We assess the ability of both static and contextualized models to adequately represent different lexical-semantic relations, such as homonymy and synonymy. To do so, we created a new multilingual dataset that allows us to perform a con... | 2021-06-25T10:54:23Z | 16 pages, 4 figures | ACL-IJCNLP 2021 | null | null | null | null | null | null | null | null |
2106.13687 | panda-gym: Open-source goal-conditioned environments for robotic learning | ['Quentin Gallouédec', 'Nicolas Cazin', 'Emmanuel Dellandréa', 'Liming Chen'] | ['cs.LG'] | This paper presents panda-gym, a set of Reinforcement Learning (RL) environments for the Franka Emika Panda robot integrated with OpenAI Gym. Five tasks are included: reach, push, slide, pick & place and stack. They all follow a Multi-Goal RL framework, allowing to use goal-oriented RL algorithms. To foster open-resear... | 2021-06-25T15:13:36Z | NeurIPS 2021 Workshop on Robot Learning: Self-Supervised and Lifelong Learning | null | null | null | null | null | null | null | null | null |
2106.13731 | Ranger21: a synergistic deep learning optimizer | ['Less Wright', 'Nestor Demeure'] | ['cs.LG', 'I.2.6'] | As optimizers are critical to the performances of neural networks, every year a large number of papers innovating on the subject are published. However, while most of these publications provide incremental improvements to existing algorithms, they tend to be presented as new optimizers rather than composable algorithms... | 2021-06-25T16:07:59Z | for associated code, see https://github.com/lessw2020/Ranger21 | null | null | Ranger21: a synergistic deep learning optimizer | ['Less Wright', 'Nestor Demeure'] | 2021 | arXiv.org | 88 | 27 | ['Computer Science'] |
2106.13736 | DeltaLM: Encoder-Decoder Pre-training for Language Generation and Translation by Augmenting Pretrained Multilingual Encoders | ['Shuming Ma', 'Li Dong', 'Shaohan Huang', 'Dongdong Zhang', 'Alexandre Muzio', 'Saksham Singhal', 'Hany Hassan Awadalla', 'Xia Song', 'Furu Wei'] | ['cs.CL'] | While pretrained encoders have achieved success in various natural language understanding (NLU) tasks, there is a gap between these pretrained encoders and natural language generation (NLG). NLG tasks are often based on the encoder-decoder framework, where the pretrained encoders can only benefit part of it. To reduce ... | 2021-06-25T16:12:10Z | Work in progress | null | null | null | null | null | null | null | null | null |
2106.13797 | PVT v2: Improved Baselines with Pyramid Vision Transformer | ['Wenhai Wang', 'Enze Xie', 'Xiang Li', 'Deng-Ping Fan', 'Kaitao Song', 'Ding Liang', 'Tong Lu', 'Ping Luo', 'Ling Shao'] | ['cs.CV'] | Transformer recently has presented encouraging progress in computer vision. In this work, we present new baselines by improving the original Pyramid Vision Transformer (PVT v1) by adding three designs, including (1) linear complexity attention layer, (2) overlapping patch embedding, and (3) convolutional feed-forward n... | 2021-06-25T17:51:09Z | Accepted to CVMJ 2022 | Computational Visual Media, 2022, Vol. 8, No. 3, Pages: 415-424 | 10.1007/s41095-022-0274-8 | null | null | null | null | null | null | null |
2106.14463 | RadGraph: Extracting Clinical Entities and Relations from Radiology Reports | ['Saahil Jain', 'Ashwin Agrawal', 'Adriel Saporta', 'Steven QH Truong', 'Du Nguyen Duong', 'Tan Bui', 'Pierre Chambon', 'Yuhao Zhang', 'Matthew P. Lungren', 'Andrew Y. Ng', 'Curtis P. Langlotz', 'Pranav Rajpurkar'] | ['cs.CL', 'cs.AI', 'cs.IR', 'cs.LG'] | Extracting structured clinical information from free-text radiology reports can enable the use of radiology report information for a variety of critical healthcare applications. In our work, we present RadGraph, a dataset of entities and relations in full-text chest X-ray radiology reports based on a novel information ... | 2021-06-28T08:24:23Z | Accepted to the 35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks | null | null | null | null | null | null | null | null | null |
2106.14807 | A Few Brief Notes on DeepImpact, COIL, and a Conceptual Framework for Information Retrieval Techniques | ['Jimmy Lin', 'Xueguang Ma'] | ['cs.IR', 'cs.CL'] | Recent developments in representational learning for information retrieval can be organized in a conceptual framework that establishes two pairs of contrasts: sparse vs. dense representations and unsupervised vs. learned representations. Sparse learned representations can further be decomposed into expansion and term w... | 2021-06-28T15:30:42Z | null | null | null | A Few Brief Notes on DeepImpact, COIL, and a Conceptual Framework for Information Retrieval Techniques | ['Jimmy J. Lin', 'Xueguang Ma'] | 2021 | arXiv.org | 148 | 25 | ['Computer Science'] |
2106.15941 | Augmented Shortcuts for Vision Transformers | ['Yehui Tang', 'Kai Han', 'Chang Xu', 'An Xiao', 'Yiping Deng', 'Chao Xu', 'Yunhe Wang'] | ['cs.CV', 'cs.LG'] | Transformer models have achieved great progress on computer vision tasks recently. The rapid development of vision transformers is mainly contributed by their high representation ability for extracting informative features from input images. However, the mainstream transformer models are designed with deep architecture... | 2021-06-30T09:48:30Z | null | null | null | null | null | null | null | null | null | null |
2106.16038 | ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information | ['Zijun Sun', 'Xiaoya Li', 'Xiaofei Sun', 'Yuxian Meng', 'Xiang Ao', 'Qing He', 'Fei Wu', 'Jiwei Li'] | ['cs.CL'] | Recent pretraining models in Chinese neglect two important aspects specific to the Chinese language: glyph and pinyin, which carry significant syntax and semantic information for language understanding. In this work, we propose ChineseBERT, which incorporates both the {\it glyph} and {\it pinyin} information of Chinese... | 2021-06-30T13:06:00Z | To appear at ACL2021 | null | null | null | null | null | null | null | null | null |
2106.16163 | The MultiBERTs: BERT Reproductions for Robustness Analysis | ['Thibault Sellam', 'Steve Yadlowsky', 'Jason Wei', 'Naomi Saphra', "Alexander D'Amour", 'Tal Linzen', 'Jasmijn Bastings', 'Iulia Turc', 'Jacob Eisenstein', 'Dipanjan Das', 'Ian Tenney', 'Ellie Pavlick'] | ['cs.CL'] | Experiments with pre-trained models such as BERT are often based on a single checkpoint. While the conclusions drawn apply to the artifact tested in the experiment (i.e., the particular instance of the model), it is not always clear whether they hold for the more general procedure which includes the architecture, train... | 2021-06-30T15:56:44Z | Accepted at ICLR'22. Checkpoints and example analyses: http://goo.gle/multiberts | null | null | The MultiBERTs: BERT Reproductions for Robustness Analysis | ['Thibault Sellam', 'Steve Yadlowsky', 'Jason Wei', 'Naomi Saphra', "A. D'Amour", 'Tal Linzen', 'Jasmijn Bastings', 'Iulia Turc', 'Jacob Eisenstein', 'Dipanjan Das', 'Ian Tenney', 'Ellie Pavlick'] | 2021 | International Conference on Learning Representations | 95 | 74 | ['Computer Science'] |
2,107.01091 | CrowdSpeech and VoxDIY: Benchmark Datasets for Crowdsourced Audio
Transcription | ['Nikita Pavlichenko', 'Ivan Stelmakh', 'Dmitry Ustalov'] | ['cs.SD', 'cs.HC', 'cs.LG', 'eess.AS'] | Domain-specific data is the crux of the successful transfer of machine
learning systems from benchmarks to real life. In simple problems such as image
classification, crowdsourcing has become one of the standard tools for cheap
and time-efficient data collection: thanks in large part to advances in
research on aggregat... | 2021-07-02T14:05:28Z | null | null | null | null | null | null | null | null | null | null |
2,107.02027 | Efficient Sequence Packing without Cross-contamination: Accelerating
Large Language Models without Impacting Performance | ['Mario Michael Krell', 'Matej Kosec', 'Sergio P. Perez', 'Andrew Fitzgibbon'] | ['cs.CL', 'cs.CC', 'cs.IT', 'cs.LG', 'math.IT', '05-08', 'I.2.7; G.2.1'] | Effective training of today's large language models (LLMs) depends on large
batches and long sequences for throughput and accuracy. To handle
variable-length sequences on hardware accelerators, it is common practice to
introduce padding tokens, so that all sequences in a batch have the same
length. We show in this pape... | 2021-06-29T04:37:23Z | Significantly new version with different authors and much more
content. Much larger variety in experiments and exhaustive SOTA analysis | null | null | null | null | null | null | null | null | null |
2,107.02137 | ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language
Understanding and Generation | ['Yu Sun', 'Shuohuan Wang', 'Shikun Feng', 'Siyu Ding', 'Chao Pang', 'Junyuan Shang', 'Jiaxiang Liu', 'Xuyi Chen', 'Yanbin Zhao', 'Yuxiang Lu', 'Weixin Liu', 'Zhihua Wu', 'Weibao Gong', 'Jianzhong Liang', 'Zhizhou Shang', 'Peng Sun', 'Wei Liu', 'Xuan Ouyang', 'Dianhai Yu', 'Hao Tian', 'Hua Wu', 'Haifeng Wang'] | ['cs.CL'] | Pre-trained models have achieved state-of-the-art results in various Natural
Language Processing (NLP) tasks. Recent works such as T5 and GPT-3 have shown
that scaling up pre-trained language models can improve their generalization
abilities. Particularly, the GPT-3 model with 175 billion parameters shows its
strong ta... | 2021-07-05T16:54:59Z | null | null | null | ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation | ['Yu Sun', 'Shuohuan Wang', 'Shikun Feng', 'Siyu Ding', 'Chao Pang', 'Junyuan Shang', 'Jiaxiang Liu', 'Xuyi Chen', 'Yanbin Zhao', 'Yuxiang Lu', 'Weixin Liu', 'Zhihua Wu', 'Weibao Gong', 'Jianzhong Liang', 'Zhizhou Shang', 'Peng Sun', 'Wei Liu', 'Ouyang Xuan', 'Dianhai Yu', 'Hao Tian', 'Hua Wu', 'Haifeng Wang'] | 2,021 | arXiv.org | 475 | 102 | ['Computer Science'] |
2,107.02612 | Combining EfficientNet and Vision Transformers for Video Deepfake
Detection | ['Davide Coccomini', 'Nicola Messina', 'Claudio Gennaro', 'Fabrizio Falchi'] | ['cs.CV'] | Deepfakes are the result of digital manipulation to forge realistic yet fake
imagery. With the astonishing advances in deep generative models, fake images
or videos are nowadays obtained using variational autoencoders (VAEs) or
Generative Adversarial Networks (GANs). These technologies are becoming more
accessible and ... | 2021-07-06T13:35:11Z | null | null | 10.1007/978-3-031-06433-3_19 | Combining EfficientNet and Vision Transformers for Video Deepfake Detection | ['D. Coccomini', 'Nicola Messina', 'C. Gennaro', 'F. Falchi'] | 2,021 | International Conference on Image Analysis and Processing | 176 | 42 | ['Computer Science'] |
2,107.03312 | SoundStream: An End-to-End Neural Audio Codec | ['Neil Zeghidour', 'Alejandro Luebs', 'Ahmed Omran', 'Jan Skoglund', 'Marco Tagliasacchi'] | ['cs.SD', 'cs.LG', 'eess.AS'] | We present SoundStream, a novel neural audio codec that can efficiently
compress speech, music and general audio at bitrates normally targeted by
speech-tailored codecs. SoundStream relies on a model architecture composed by
a fully convolutional encoder/decoder network and a residual vector quantizer,
which are traine... | 2021-07-07T15:45:42Z | null | null | null | null | null | null | null | null | null | null |
2,107.03356 | M-FAC: Efficient Matrix-Free Approximations of Second-Order Information | ['Elias Frantar', 'Eldar Kurtic', 'Dan Alistarh'] | ['cs.LG'] | Efficiently approximating local curvature information of the loss function is
a key tool for optimization and compression of deep neural networks. Yet, most
existing methods to approximate second-order information have high
computational or storage costs, which can limit their practicality. In this
work, we investigate... | 2021-07-07T17:01:34Z | Accepted to NeurIPS 2021 | null | null | M-FAC: Efficient Matrix-Free Approximations of Second-Order Information | ['Elias Frantar', 'Eldar Kurtic', 'Dan Alistarh'] | 2,021 | Neural Information Processing Systems | 59 | 59 | ['Computer Science'] |
2,107.03374 | Evaluating Large Language Models Trained on Code | ['Mark Chen', 'Jerry Tworek', 'Heewoo Jun', 'Qiming Yuan', 'Henrique Ponde de Oliveira Pinto', 'Jared Kaplan', 'Harri Edwards', 'Yuri Burda', 'Nicholas Joseph', 'Greg Brockman', 'Alex Ray', 'Raul Puri', 'Gretchen Krueger', 'Michael Petrov', 'Heidy Khlaaf', 'Girish Sastry', 'Pamela Mishkin', 'Brooke Chan', 'Scott Gray',... | ['cs.LG'] | We introduce Codex, a GPT language model fine-tuned on publicly available
code from GitHub, and study its Python code-writing capabilities. A distinct
production version of Codex powers GitHub Copilot. On HumanEval, a new
evaluation set we release to measure functional correctness for synthesizing
programs from docstri... | 2021-07-07T17:41:24Z | corrected typos, added references, added authors, added
acknowledgements | null | null | null | null | null | null | null | null | null |
2,107.03644 | ComFormer: Code Comment Generation via Transformer and Fusion
Method-based Hybrid Code Representation | ['Guang Yang', 'Xiang Chen', 'Jinxin Cao', 'Shuyuan Xu', 'Zhanqi Cui', 'Chi Yu', 'Ke Liu'] | ['cs.SE'] | Developers often write low-quality code comments due to the lack of
programming experience, which can reduce the efficiency of developers program
comprehension. Therefore, developers hope that code comment generation tools
can be developed to illustrate the functionality and purpose of the code.
Recently, researchers m... | 2021-07-08T07:26:37Z | DSA2021 | null | null | null | null | null | null | null | null | null |
2,107.03844 | A Review of Bangla Natural Language Processing Tasks and the Utility of
Transformer Models | ['Firoj Alam', 'Arid Hasan', 'Tanvirul Alam', 'Akib Khan', 'Janntatul Tajrin', 'Naira Khan', 'Shammur Absar Chowdhury'] | ['cs.CL', 'cs.AI', 'cs.IR', 'cs.LG', '68T50', 'I.2.7'] | Bangla -- ranked as the 6th most widely spoken language across the world
(https://www.ethnologue.com/guides/ethnologue200), with 230 million native
speakers -- is still considered as a low-resource language in the natural
language processing (NLP) community. With three decades of research, Bangla NLP
(BNLP) is still la... | 2021-07-08T13:49:46Z | Under Review, Bangla language processing, text classification,
sequence tagging, datasets, benchmarks, transformer models | null | null | A Review of Bangla Natural Language Processing Tasks and the Utility of Transformer Models | ['Firoj Alam', 'Md Arid Hasan', 'Tanvirul Alam', 'A. Khan', 'Janntatul Tajrin', 'Naira Khan', 'S. A. Chowdhury'] | 2,021 | arXiv.org | 27 | 207 | ['Computer Science'] |
2,107.04197 | REX: Revisiting Budgeted Training with an Improved Schedule | ['John Chen', 'Cameron Wolfe', 'Anastasios Kyrillidis'] | ['cs.LG'] | Deep learning practitioners often operate on a computational and monetary
budget. Thus, it is critical to design optimization algorithms that perform
well under any budget. The linear learning rate schedule is considered the best
budget-aware schedule, as it outperforms most other schedules in the low budget
regime. On... | 2021-07-09T04:17:35Z | null | null | null | null | null | null | null | null | null | null |
2,107.04771 | Similar Cases Recommendation using Legal Knowledge Graphs | ['Jaspreet Singh Dhani', 'Ruchika Bhatt', 'Balaji Ganesan', 'Parikshet Sirohi', 'Vasudha Bhatnagar'] | ['cs.AI'] | A legal knowledge graph constructed from court cases, judgments, laws and
other legal documents can enable a number of applications like question
answering, document similarity, and search. While the use of knowledge graphs
for distant supervision in NLP tasks is well researched, using knowledge graphs
for applications... | 2021-07-10T06:37:36Z | 10 pages. 6 figures. 3rd Symposium on Artificial Intelligence and
Law. SAIL 2023 | null | null | null | null | null | null | null | null | null |
2,107.05720 | SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking | ['Thibault Formal', 'Benjamin Piwowarski', 'Stéphane Clinchant'] | ['cs.IR'] | In neural Information Retrieval, ongoing research is directed towards
improving the first retriever in ranking pipelines. Learning dense embeddings
to conduct retrieval using efficient approximate nearest neighbors methods has
proven to work well. Meanwhile, there has been a growing interest in learning
sparse represen... | 2021-07-12T20:17:44Z | 5 pages, SIGIR'21 short paper | null | null | SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking | ['Thibault Formal', 'Benjamin Piwowarski', 'S. Clinchant'] | 2,021 | Annual International ACM SIGIR Conference on Research and Development in Information Retrieval | 328 | 29 | ['Computer Science'] |
2,107.06278 | Per-Pixel Classification is Not All You Need for Semantic Segmentation | ['Bowen Cheng', 'Alexander G. Schwing', 'Alexander Kirillov'] | ['cs.CV'] | Modern approaches typically formulate semantic segmentation as a per-pixel
classification task, while instance-level segmentation is handled with an
alternative mask classification. Our key insight: mask classification is
sufficiently general to solve both semantic- and instance-level segmentation
tasks in a unified ma... | 2021-07-13T17:59:50Z | NeurIPS 2021, Spotlight. Project page:
https://bowenc0221.github.io/maskformer | null | null | Per-Pixel Classification is Not All You Need for Semantic Segmentation | ['Bowen Cheng', 'A. Schwing', 'Alexander Kirillov'] | 2,021 | Neural Information Processing Systems | 1,559 | 55 | ['Computer Science'] |
2,107.06499 | Deduplicating Training Data Makes Language Models Better | ['Katherine Lee', 'Daphne Ippolito', 'Andrew Nystrom', 'Chiyuan Zhang', 'Douglas Eck', 'Chris Callison-Burch', 'Nicholas Carlini'] | ['cs.CL', 'cs.LG'] | We find that existing language modeling datasets contain many near-duplicate
examples and long repetitive substrings. As a result, over 1% of the unprompted
output of language models trained on these datasets is copied verbatim from the
training data. We develop two tools that allow us to deduplicate training
datasets ... | 2021-07-14T06:06:52Z | Accepted to ACL 2022 | null | null | null | null | null | null | null | null | null |
2,107.06751 | Tortured phrases: A dubious writing style emerging in science. Evidence
of critical issues affecting established journals | ['Guillaume Cabanac', 'Cyril Labbé', 'Alexander Magazinov'] | ['cs.DL', 'cs.CL', 'cs.CY', 'cs.IR'] | Probabilistic text generators have been used to produce fake scientific
papers for more than a decade. Such nonsensical papers are easily detected by
both human and machine. Now more complex AI-powered generation techniques
produce texts indistinguishable from that of humans and the generation of
scientific texts from ... | 2021-07-12T20:47:08Z | null | null | null | null | null | null | null | null | null | null |
2,107.06955 | HTLM: Hyper-Text Pre-Training and Prompting of Language Models | ['Armen Aghajanyan', 'Dmytro Okhonko', 'Mike Lewis', 'Mandar Joshi', 'Hu Xu', 'Gargi Ghosh', 'Luke Zettlemoyer'] | ['cs.CL', 'cs.LG'] | We introduce HTLM, a hyper-text language model trained on a large-scale web
crawl. Modeling hyper-text has a number of advantages: (1) it is easily
gathered at scale, (2) it provides rich document-level and end-task-adjacent
supervision (e.g. class and id attributes often encode document category
information), and (3) ... | 2021-07-14T19:39:31Z | null | null | null | null | null | null | null | null | null | null |
2,107.07150 | Tailor: Generating and Perturbing Text with Semantic Controls | ['Alexis Ross', 'Tongshuang Wu', 'Hao Peng', 'Matthew E. Peters', 'Matt Gardner'] | ['cs.CL'] | Controlled text perturbation is useful for evaluating and improving model
generalizability. However, current techniques rely on training a model for
every target perturbation, which is expensive and hard to generalize. We
present Tailor, a semantically-controlled text generation system. Tailor builds
on a pretrained se... | 2021-07-15T06:38:59Z | null | null | null | null | null | null | null | null | null | null |
2,107.07253 | MarIA: Spanish Language Models | ['Asier Gutiérrez-Fandiño', 'Jordi Armengol-Estapé', 'Marc Pàmies', 'Joan Llop-Palao', 'Joaquín Silveira-Ocampo', 'Casimiro Pio Carrino', 'Aitor Gonzalez-Agirre', 'Carme Armentano-Oller', 'Carlos Rodriguez-Penagos', 'Marta Villegas'] | ['cs.CL', 'cs.AI'] | This work presents MarIA, a family of Spanish language models and associated
resources made available to the industry and the research community. Currently,
MarIA includes RoBERTa-base, RoBERTa-large, GPT2 and GPT2-large Spanish
language models, which can arguably be presented as the largest and most
proficient languag... | 2021-07-15T11:23:05Z | null | Procesamiento del Lenguaje Natural, v. 68, p. 39-60, mar. 2022.
ISSN 1989-7553 | 10.26342/2022-68-3 | null | null | null | null | null | null | null |
2,107.07402 | CLSRIL-23: Cross Lingual Speech Representations for Indic Languages | ['Anirudh Gupta', 'Harveen Singh Chadha', 'Priyanshi Shah', 'Neeraj Chhimwal', 'Ankur Dhuriya', 'Rishabh Gaur', 'Vivek Raghavan'] | ['cs.CL', 'cs.LG', 'cs.SD', 'eess.AS'] | We present a CLSRIL-23, a self supervised learning based audio pre-trained
model which learns cross lingual speech representations from raw audio across
23 Indic languages. It is built on top of wav2vec 2.0 which is solved by
training a contrastive task over masked latent speech representations and
jointly learns the q... | 2021-07-15T15:42:43Z | 7 pages, 2 figures | null | null | null | null | null | null | null | null | null |
2,107.07498 | FewCLUE: A Chinese Few-shot Learning Evaluation Benchmark | ['Liang Xu', 'Xiaojing Lu', 'Chenyang Yuan', 'Xuanwei Zhang', 'Huilin Xu', 'Hu Yuan', 'Guoao Wei', 'Xiang Pan', 'Xin Tian', 'Libo Qin', 'Hu Hai'] | ['cs.CL', 'cs.AI'] | Pretrained Language Models (PLMs) have achieved tremendous success in natural
language understanding tasks. While different learning schemes -- fine-tuning,
zero-shot, and few-shot learning -- have been widely explored and compared for
languages such as English, there is comparatively little work in Chinese to
fairly a... | 2021-07-15T17:51:25Z | 10 pages, 3 tables | null | null | FewCLUE: A Chinese Few-shot Learning Evaluation Benchmark | ['Liang Xu', 'Xiaojing Lu', 'Chenyang Yuan', 'Xuanwei Zhang', 'Huining Yuan', 'Huilin Xu', 'Guoao Wei', 'X. Pan', 'Hai Hu'] | 2,021 | arXiv.org | 57 | 33 | ['Computer Science'] |
2,107.07653 | TAPEX: Table Pre-training via Learning a Neural SQL Executor | ['Qian Liu', 'Bei Chen', 'Jiaqi Guo', 'Morteza Ziyadi', 'Zeqi Lin', 'Weizhu Chen', 'Jian-Guang Lou'] | ['cs.CL', 'cs.AI'] | Recent progress in language model pre-training has achieved a great success
via leveraging large-scale unstructured textual data. However, it is still a
challenge to apply pre-training on structured tabular data due to the absence
of large-scale high-quality tabular data. In this paper, we propose TAPEX to
show that ta... | 2021-07-16T00:40:11Z | ICLR 2022 camera ready version | null | null | null | null | null | null | null | null | null |
2,107.08430 | YOLOX: Exceeding YOLO Series in 2021 | ['Zheng Ge', 'Songtao Liu', 'Feng Wang', 'Zeming Li', 'Jian Sun'] | ['cs.CV'] | In this report, we present some experienced improvements to YOLO series,
forming a new high-performance detector -- YOLOX. We switch the YOLO detector
to an anchor-free manner and conduct other advanced detection techniques, i.e.,
a decoupled head and the leading label assignment strategy SimOTA to achieve
state-of-the... | 2021-07-18T12:55:11Z | null | null | null | YOLOX: Exceeding YOLO Series in 2021 | ['Zheng Ge', 'Songtao Liu', 'Feng Wang', 'Zeming Li', 'Jian Sun'] | 2,021 | arXiv.org | 4,137 | 40 | ['Computer Science'] |
2,107.10042 | Comparison of Czech Transformers on Text Classification Tasks | ['Jan Lehečka', 'Jan Švec'] | ['cs.CL'] | In this paper, we present our progress in pre-training monolingual
Transformers for Czech and contribute to the research community by releasing
our models for public. The need for such models emerged from our effort to
employ Transformers in our language-specific tasks, but we found the
performance of the published mul... | 2021-07-21T12:22:34Z | https://huggingface.co/fav-kky | Statistical Language and Speech Processing, SLSP 2021. Cham:
Springer, 2021. pages 27-37. ISBN: 978-3-030-89578-5 , ISSN: 0302-9743 | 10.1007/978-3-030-89579-2_3 | null | null | null | null | null | null | null |
2,107.10161 | Evidential Deep Learning for Open Set Action Recognition | ['Wentao Bao', 'Qi Yu', 'Yu Kong'] | ['cs.CV'] | In a real-world scenario, human actions are typically out of the distribution
from training data, which requires a model to both recognize the known actions
and reject the unknown. Different from image data, video actions are more
challenging to be recognized in an open-set setting due to the uncertain
temporal dynamic... | 2021-07-21T15:45:37Z | ICCV 2021 Oral | null | null | null | null | null | null | null | null | null |
2,107.10833 | Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure
Synthetic Data | ['Xintao Wang', 'Liangbin Xie', 'Chao Dong', 'Ying Shan'] | ['eess.IV', 'cs.CV'] | Though many attempts have been made in blind super-resolution to restore
low-resolution images with unknown and complex degradations, they are still far
from addressing general real-world degraded images. In this work, we extend the
powerful ESRGAN to a practical restoration application (namely, Real-ESRGAN),
which is ... | 2021-07-22T17:43:24Z | Tech Report. Training/testing codes and executable files are in
https://github.com/xinntao/Real-ESRGAN | null | null | Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data | ['Xintao Wang', 'Liangbin Xie', 'Chao Dong', 'Ying Shan'] | 2,021 | 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | 1,190 | 55 | ['Engineering', 'Computer Science'] |
2,107.11414 | Brazilian Portuguese Speech Recognition Using Wav2vec 2.0 | ['Lucas Rafael Stefanel Gris', 'Edresson Casanova', 'Frederico Santos de Oliveira', 'Anderson da Silva Soares', 'Arnaldo Candido Junior'] | ['cs.CL'] | Deep learning techniques have been shown to be efficient in various tasks,
especially in the development of speech recognition systems, that is, systems
that aim to transcribe an audio sentence in a sequence of written words.
Despite the progress in the area, speech recognition can still be considered
difficult, especi... | 2021-07-23T18:54:39Z | null | null | null | null | null | null | null | null | null | null |
2,107.12604 | Image Scene Graph Generation (SGG) Benchmark | ['Xiaotian Han', 'Jianwei Yang', 'Houdong Hu', 'Lei Zhang', 'Jianfeng Gao', 'Pengchuan Zhang'] | ['cs.CV'] | There is a surge of interest in image scene graph generation (object,
attribute and relationship detection) due to the need of building fine-grained
image understanding models that go beyond object detection. Due to the lack of
a good benchmark, the reported results of different scene graph generation
models are not di... | 2021-07-27T05:10:09Z | null | null | null | Image Scene Graph Generation (SGG) Benchmark | ['Xiao Han', 'Jianwei Yang', 'Houdong Hu', 'Lei Zhang', 'Jianfeng Gao', 'Pengchuan Zhang'] | 2,021 | arXiv.org | 38 | 23 | ['Computer Science'] |
2,107.14795 | Perceiver IO: A General Architecture for Structured Inputs & Outputs | ['Andrew Jaegle', 'Sebastian Borgeaud', 'Jean-Baptiste Alayrac', 'Carl Doersch', 'Catalin Ionescu', 'David Ding', 'Skanda Koppula', 'Daniel Zoran', 'Andrew Brock', 'Evan Shelhamer', 'Olivier Hénaff', 'Matthew M. Botvinick', 'Andrew Zisserman', 'Oriol Vinyals', 'Joāo Carreira'] | ['cs.LG', 'cs.CL', 'cs.CV', 'cs.SD', 'eess.AS'] | A central goal of machine learning is the development of systems that can
solve many problems in as many data domains as possible. Current architectures,
however, cannot be applied beyond a small set of stereotyped settings, as they
bake in domain & task assumptions or scale poorly to large inputs or outputs.
In this w... | 2021-07-30T17:53:34Z | ICLR 2022 camera ready. Code: https://dpmd.ai/perceiver-code | null | null | Perceiver IO: A General Architecture for Structured Inputs & Outputs | ['Andrew Jaegle', 'Sebastian Borgeaud', 'Jean-Baptiste Alayrac', 'Carl Doersch', 'Catalin Ionescu', 'David Ding', 'Skanda Koppula', 'Andrew Brock', 'Evan Shelhamer', "Olivier J. H'enaff", 'M. Botvinick', 'Andrew Zisserman', 'O. Vinyals', 'João Carreira'] | 2,021 | International Conference on Learning Representations | 585 | 105 | ['Computer Science', 'Engineering'] |
2,108.00154 | CrossFormer: A Versatile Vision Transformer Hinging on Cross-scale
Attention | ['Wenxiao Wang', 'Lu Yao', 'Long Chen', 'Binbin Lin', 'Deng Cai', 'Xiaofei He', 'Wei Liu'] | ['cs.CV', 'cs.LG'] | Transformers have made great progress in dealing with computer vision tasks.
However, existing vision transformers do not yet possess the ability of
building the interactions among features of different scales, which is
perceptually important to visual inputs. The reasons are two-fold: (1) Input
embeddings of each laye... | 2021-07-31T05:52:21Z | 15 pages, 4 figures, and 9 tables | null | null | null | null | null | null | null | null | null |
2,108.01073 | SDEdit: Guided Image Synthesis and Editing with Stochastic Differential
Equations | ['Chenlin Meng', 'Yutong He', 'Yang Song', 'Jiaming Song', 'Jiajun Wu', 'Jun-Yan Zhu', 'Stefano Ermon'] | ['cs.CV', 'cs.AI'] | Guided image synthesis enables everyday users to create and edit
photo-realistic images with minimum effort. The key challenge is balancing
faithfulness to the user input (e.g., hand-drawn colored strokes) and realism
of the synthesized image. Existing GAN-based methods attempt to achieve such
balance using either cond... | 2021-08-02T17:59:47Z | https://sde-image-editing.github.io/ | null | null | null | null | null | null | null | null | null |
2,108.01139 | PyEuroVoc: A Tool for Multilingual Legal Document Classification with
EuroVoc Descriptors | ['Andrei-Marius Avram', 'Vasile Pais', 'Dan Tufis'] | ['cs.CL', 'cs.AI', 'cs.LG'] | EuroVoc is a multilingual thesaurus that was built for organizing the
legislative documentary of the European Union institutions. It contains
thousands of categories at different levels of specificity and its descriptors
are targeted by legal texts in almost thirty languages. In this work we propose
a unified framework... | 2021-08-02T19:46:21Z | RANLP2021 | null | null | null | null | null | null | null | null | null |
2,108.01200 | Multispectral Vineyard Segmentation: A Deep Learning approach | ['T. Barros', 'P. Conde', 'G. Gonçalves', 'C. Premebida', 'M. Monteiro', 'C. S. S. Ferreira', 'U. J. Nunes'] | ['cs.CV', 'cs.RO'] | Digital agriculture has evolved significantly over the last few years due to
the technological developments in automation and computational intelligence
applied to the agricultural sector, including vineyards which are a relevant
crop in the Mediterranean region. In this work, a study is presented of
semantic segmentat... | 2021-08-02T22:36:07Z | Accepted in Computer and Electronics in Agriculture journal | null | 10.1016/j.compag.2022.106782 | Multispectral vineyard segmentation: A deep learning comparison study | ['T. Barros', 'P. Conde', 'G. Gonçalves', 'C. Premebida', 'M. Monteiro', 'C. Ferreira', 'U. Nunes'] | 2,021 | Computers and Electronics in Agriculture | 26 | 34 | ['Computer Science'] |
2,108.01547 | EVA: An Open-Domain Chinese Dialogue System with Large-Scale Generative
Pre-Training | ['Hao Zhou', 'Pei Ke', 'Zheng Zhang', 'Yuxian Gu', 'Yinhe Zheng', 'Chujie Zheng', 'Yida Wang', 'Chen Henry Wu', 'Hao Sun', 'Xiaocong Yang', 'Bosi Wen', 'Xiaoyan Zhu', 'Minlie Huang', 'Jie Tang'] | ['cs.CL', 'cs.AI'] | Although pre-trained language models have remarkably enhanced the generation
ability of dialogue systems, open-domain Chinese dialogue systems are still
limited by the dialogue data and the model size compared with English ones. In
this paper, we propose EVA, a Chinese dialogue system that contains the largest
Chinese ... | 2021-08-03T14:55:24Z | 8 pages, 4 figures | null | null | null | null | null | null | null | null | null |
2,108.02927 | DOLG: Single-Stage Image Retrieval with Deep Orthogonal Fusion of Local
and Global Features | ['Min Yang', 'Dongliang He', 'Miao Fan', 'Baorong Shi', 'Xuetong Xue', 'Fu Li', 'Errui Ding', 'Jizhou Huang'] | ['cs.CV'] | Image Retrieval is a fundamental task of obtaining images similar to the
query one from a database. A common image retrieval practice is to firstly
retrieve candidate images via similarity search using global image features and
then re-rank the candidates by leveraging their local features. Previous
learning-based stud... | 2021-08-06T03:14:09Z | ICCV2021 | null | null | null | null | null | null | null | null | null |
2,108.03265 | Facebook AI WMT21 News Translation Task Submission | ['Chau Tran', 'Shruti Bhosale', 'James Cross', 'Philipp Koehn', 'Sergey Edunov', 'Angela Fan'] | ['cs.CL'] | We describe Facebook's multilingual model submission to the WMT2021 shared
task on news translation. We participate in 14 language directions: English to
and from Czech, German, Hausa, Icelandic, Japanese, Russian, and Chinese. To
develop systems covering all these directions, we focus on multilingual models.
We utiliz... | 2021-08-06T18:26:38Z | null | null | null | null | null | null | null | null | null | null |
2,108.03353 | Screen2Words: Automatic Mobile UI Summarization with Multimodal Learning | ['Bryan Wang', 'Gang Li', 'Xin Zhou', 'Zhourong Chen', 'Tovi Grossman', 'Yang Li'] | ['cs.HC', 'cs.AI', 'cs.LG'] | Mobile User Interface Summarization generates succinct language descriptions
of mobile screens for conveying important contents and functionalities of the
screen, which can be useful for many language-based application scenarios. We
present Screen2Words, a novel screen summarization approach that automatically
encapsul... | 2021-08-07T03:01:23Z | UIST'21 | null | null | null | null | null | null | null | null | null |
2,108.04539 | BROS: A Pre-trained Language Model Focusing on Text and Layout for
Better Key Information Extraction from Documents | ['Teakgyu Hong', 'Donghyun Kim', 'Mingi Ji', 'Wonseok Hwang', 'Daehyun Nam', 'Sungrae Park'] | ['cs.CL'] | Key information extraction (KIE) from document images requires understanding
the contextual and spatial semantics of texts in two-dimensional (2D) space.
Many recent studies try to solve the task by developing pre-trained language
models focusing on combining visual features from document images with texts
and their la... | 2021-08-10T09:30:23Z | AAAI 2022 - Main Technical Track | null | null | BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents | ['Teakgyu Hong', 'Donghyun Kim', 'Mingi Ji', 'Wonseok Hwang', 'Daehyun Nam', 'Sungrae Park'] | 2,021 | AAAI Conference on Artificial Intelligence | 154 | 27 | ['Computer Science'] |
2,108.05198 | Natural Language-Guided Programming | ['Geert Heyman', 'Rafael Huysegems', 'Pascal Justen', 'Tom Van Cutsem'] | ['cs.SE', 'cs.LG', 'cs.PL'] | In today's software world with its cornucopia of reusable software libraries,
when a programmer is faced with a programming task that they suspect can be
completed through the use of a library, they often look for code examples using
a search engine and then manually adapt found examples to their specific
context of us... | 2021-08-11T13:06:33Z | null | null | null | Natural language-guided programming | ['Geert Heyman', 'Rafael Huysegems', 'P. Justen', 'Tom Van Cutsem'] | 2,021 | SIGPLAN symposium on New ideas, new paradigms, and reflections on programming and software | 12 | 56 | ['Computer Science'] |
2,108.05540 | Unsupervised Corpus Aware Language Model Pre-training for Dense Passage
Retrieval | ['Luyu Gao', 'Jamie Callan'] | ['cs.IR', 'cs.CL'] | Recent research demonstrates the effectiveness of using fine-tuned language
models~(LM) for dense retrieval. However, dense retrievers are hard to train,
typically requiring heavily engineered fine-tuning pipelines to realize their
full potential. In this paper, we identify and address two underlying problems
of dense ... | 2021-08-12T05:20:27Z | null | null | null | null | null | null | null | null | null | null |
2,108.05857 | How Optimal is Greedy Decoding for Extractive Question Answering? | ['Or Castel', 'Ori Ram', 'Avia Efrat', 'Omer Levy'] | ['cs.CL'] | Fine-tuned language models use greedy decoding to answer reading
comprehension questions with relative success. However, this approach does not
ensure that the answer is a span in the given passage, nor does it guarantee
that it is the most probable one. Does greedy decoding actually perform worse
than an algorithm tha... | 2021-08-12T17:07:31Z | AKBC 2022 12 pages, 3 figures | null | null | How Optimal is Greedy Decoding for Extractive Question Answering? | ['Or Castel', 'Ori Ram', 'Avia Efrat', 'Omer Levy'] | 2,021 | Conference on Automated Knowledge Base Construction | 4 | 36 | ['Computer Science'] |
2,108.05921 | Hatemoji: A Test Suite and Adversarially-Generated Dataset for
Benchmarking and Detecting Emoji-based Hate | ['Hannah Rose Kirk', 'Bertram Vidgen', 'Paul Röttger', 'Tristan Thrush', 'Scott A. Hale'] | ['cs.CL', 'cs.CY'] | Detecting online hate is a complex task, and low-performing models have
harmful consequences when used for sensitive applications such as content
moderation. Emoji-based hate is an emerging challenge for automated detection.
We present HatemojiCheck, a test suite of 3,930 short-form statements that
allows us to evaluat... | 2021-08-12T18:42:06Z | null | 2022 Annual Conference of the North American Chapter of the
Association for Computational Linguistics (NAACL 2022) | null | Hatemoji: A Test Suite and Adversarially-Generated Dataset for Benchmarking and Detecting Emoji-Based Hate | ['Hannah Rose Kirk', 'B. Vidgen', 'Paul Röttger', 'Tristan Thrush', 'Scott A. Hale'] | 2,021 | North American Chapter of the Association for Computational Linguistics | 63 | 59 | ['Computer Science'] |
2,108.06098 | FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated
Learning | ['Nam Hyeon-Woo', 'Moon Ye-Bin', 'Tae-Hyun Oh'] | ['cs.LG', 'cs.CV'] | In this work, we propose a communication-efficient parameterization, FedPara,
for federated learning (FL) to overcome the burdens on frequent model uploads
and downloads. Our method re-parameterizes weight parameters of layers using
low-rank weights followed by the Hadamard product. Compared to the conventional
low-ran... | 2021-08-13T07:16:40Z | Accepted at ICLR 2022 | null | null | null | null | null | null | null | null | null |
2108.06152 | Conditional DETR for Fast Training Convergence | ['Depu Meng', 'Xiaokang Chen', 'Zejia Fan', 'Gang Zeng', 'Houqiang Li', 'Yuhui Yuan', 'Lei Sun', 'Jingdong Wang'] | ['cs.CV'] | The recently-developed DETR approach applies the transformer encoder and decoder architecture to object detection and achieves promising performance. In this paper, we handle the critical issue, slow training convergence, and present a conditional cross-attention mechanism for fast DETR training. Our approach is motiva... | 2021-08-13T10:07:46Z | Accepted by ICCV 2021. The first two authors share first authorship, and the order was determined by rolling dice | null | null | null | null | null | null | null | null | null |
2108.06209 | W2v-BERT: Combining Contrastive Learning and Masked Language Modeling for Self-Supervised Speech Pre-Training | ['Yu-An Chung', 'Yu Zhang', 'Wei Han', 'Chung-Cheng Chiu', 'James Qin', 'Ruoming Pang', 'Yonghui Wu'] | ['cs.LG', 'cs.SD', 'eess.AS'] | Motivated by the success of masked language modeling (MLM) in pre-training natural language processing models, we propose w2v-BERT that explores MLM for self-supervised speech representation learning. w2v-BERT is a framework that combines contrastive learning and MLM, where the former trains the model to discretize inp... | 2021-08-07T06:29:36Z | null | null | null | null | null | null | null | null | null | null |
2108.06897 | AutoChart: A Dataset for Chart-to-Text Generation Task | ['Jiawen Zhu', 'Jinye Ran', 'Roy Ka-wei Lee', 'Kenny Choo', 'Zhi Li'] | ['cs.CL', 'cs.AI', 'cs.MM'] | The analytical description of charts is an exciting and important research area with many applications in academia and industry. Yet, this challenging task has received limited attention from the computational linguistics research community. This paper proposes AutoChart, a large dataset for the analytical des... | 2021-08-16T05:01:46Z | null | null | null | null | null | null | null | null | null | null |
2108.07258 | On the Opportunities and Risks of Foundation Models | ['Rishi Bommasani', 'Drew A. Hudson', 'Ehsan Adeli', 'Russ Altman', 'Simran Arora', 'Sydney von Arx', 'Michael S. Bernstein', 'Jeannette Bohg', 'Antoine Bosselut', 'Emma Brunskill', 'Erik Brynjolfsson', 'Shyamal Buch', 'Dallas Card', 'Rodrigo Castellon', 'Niladri Chatterji', 'Annie Chen', 'Kathleen Creel', 'Jared Quinc... | ['cs.LG', 'cs.AI', 'cs.CY'] | AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough acc... | 2021-08-16T17:50:08Z | Authored by the Center for Research on Foundation Models (CRFM) at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). Report page with citation guidelines: https://crfm.stanford.edu/report.html | null | null | On the Opportunities and Risks of Foundation Models | ['Rishi Bommasani', 'Drew A. Hudson', 'E. Adeli', 'R. Altman', 'Simran Arora', 'Sydney von Arx', 'Michael S. Bernstein', 'Jeannette Bohg', 'Antoine Bosselut', 'E. Brunskill', 'Erik Brynjolfsson', 'S. Buch', 'Dallas Card', 'Rodrigo Castellon', 'Niladri S. Chatterji', 'Annie S. Chen', 'Kathleen A. Creel', 'Jared Davis', ... | 2021 | arXiv.org | 4,519 | 0 | ['Computer Science'] |
2108.07337 | Generative Relation Linking for Question Answering over Knowledge Bases | ['Gaetano Rossiello', 'Nandana Mihindukulasooriya', 'Ibrahim Abdelaziz', 'Mihaela Bornea', 'Alfio Gliozzo', 'Tahira Naseem', 'Pavan Kapanipathi'] | ['cs.CL', 'cs.AI'] | Relation linking is essential to enable question answering over knowledge bases. Although there are various efforts to improve relation linking performance, the current state-of-the-art methods do not achieve optimal results, therefore, negatively impacting the overall end-to-end question answering performance. In this... | 2021-08-16T20:33:43Z | Accepted at the 20th International Semantic Web Conference (ISWC 2021) | null | null | null | null | null | null | null | null | null |
2108.07732 | Program Synthesis with Large Language Models | ['Jacob Austin', 'Augustus Odena', 'Maxwell Nye', 'Maarten Bosma', 'Henryk Michalewski', 'David Dohan', 'Ellen Jiang', 'Carrie Cai', 'Michael Terry', 'Quoc Le', 'Charles Sutton'] | ['cs.PL', 'cs.LG'] | This paper explores the limits of the current generation of large language models for program synthesis in general purpose programming languages. We evaluate a collection of such models (with between 244M and 137B parameters) on two new benchmarks, MBPP and MathQA-Python, in both the few-shot and fine-tuning regimes. O... | 2021-08-16T03:57:30Z | Jacob and Augustus contributed equally | null | null | null | null | null | null | null | null | null |
2108.08688 | Contrastive Language-Image Pre-training for the Italian Language | ['Federico Bianchi', 'Giuseppe Attanasio', 'Raphael Pisoni', 'Silvia Terragni', 'Gabriele Sarti', 'Sri Lakshmi'] | ['cs.CL', 'cs.CV'] | CLIP (Contrastive Language-Image Pre-training) is a very recent multi-modal model that jointly learns representations of images and texts. The model is trained on a massive amount of English data and shows impressive performance on zero-shot classification tasks. Training the same model on a different language is not t... | 2021-08-19T13:53:47Z | null | null | null | Contrastive Language–Image Pre-training for the Italian Language | ['Federico Bianchi', 'Giuseppe Attanasio', 'Raphael Pisoni', 'Silvia Terragni', 'Gabriele Sarti', 'S. Lakshmi'] | 2021 | CLICIT | 30 | 34 | ['Computer Science'] |
2108.08787 | Mr. TyDi: A Multi-lingual Benchmark for Dense Retrieval | ['Xinyu Zhang', 'Xueguang Ma', 'Peng Shi', 'Jimmy Lin'] | ['cs.CL', 'cs.IR'] | We present Mr. TyDi, a multi-lingual benchmark dataset for mono-lingual retrieval in eleven typologically diverse languages, designed to evaluate ranking with learned dense representations. The goal of this resource is to spur research in dense retrieval techniques in non-English languages, motivated by recent observat... | 2021-08-19T16:53:43Z | Workshop on Multilingual Representation Learning at EMNLP 2021 | null | null | null | null | null | null | null | null | null |
2108.08877 | Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models | ['Jianmo Ni', 'Gustavo Hernández Ábrego', 'Noah Constant', 'Ji Ma', 'Keith B. Hall', 'Daniel Cer', 'Yinfei Yang'] | ['cs.CL'] | We provide the first exploration of sentence embeddings from text-to-text transformers (T5). Sentence embeddings are broadly useful for language processing tasks. While T5 achieves impressive performance on language tasks cast as sequence-to-sequence mapping problems, it is unclear how to produce sentence embeddings fr... | 2021-08-19T18:58:02Z | null | null | null | null | null | null | null | null | null | null |
2108.09485 | Yseop at FinSim-3 Shared Task 2021: Specializing Financial Domain Learning with Phrase Representations | ['Hanna Abi Akl', 'Dominique Mariko', 'Hugues de Mazancourt'] | ['cs.CL'] | In this paper, we present our approaches for the FinSim-3 Shared Task 2021: Learning Semantic Similarities for the Financial Domain. The aim of this shared task is to correctly classify a list of given terms from the financial domain into the most relevant hypernym (or top-level) concept in an external ontology. For ou... | 2021-08-21T10:53:12Z | To be published in ACL Anthology | null | null | null | null | null | null | null | null | null |
2108.09814 | UzBERT: pretraining a BERT model for Uzbek | ['B. Mansurov', 'A. Mansurov'] | ['cs.CL'] | Pretrained language models based on the Transformer architecture have achieved state-of-the-art results in various natural language processing tasks such as part-of-speech tagging, named entity recognition, and question answering. However, no such monolingual model for the Uzbek language is publicly available. In this ... | 2021-08-22T18:28:22Z | 9 pages, 1 table | null | null | UzBERT: pretraining a BERT model for Uzbek | ['B. Mansurov', 'A. Mansurov'] | 2021 | arXiv.org | 13 | 24 | ['Computer Science'] |
2108.10257 | SwinIR: Image Restoration Using Swin Transformer | ['Jingyun Liang', 'Jiezhang Cao', 'Guolei Sun', 'Kai Zhang', 'Luc Van Gool', 'Radu Timofte'] | ['eess.IV', 'cs.CV'] | Image restoration is a long-standing low-level vision problem that aims to restore high-quality images from low-quality images (e.g., downscaled, noisy and compressed images). While state-of-the-art image restoration methods are based on convolutional neural networks, few attempts have been made with Transformers which... | 2021-08-23T15:55:32Z | Sota results on classical/lightweight/real-world image SR, image denoising and JPEG compression artifact reduction. Code: https://github.com/JingyunLiang/SwinIR | null | null | SwinIR: Image Restoration Using Swin Transformer | ['Jingyun Liang', 'Jie Cao', 'Guolei Sun', 'K. Zhang', 'L. Gool', 'R. Timofte'] | 2021 | 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | 2,989 | 98 | ['Engineering', 'Computer Science'] |
2108.10307 | C5T5: Controllable Generation of Organic Molecules with Transformers | ['Daniel Rothchild', 'Alex Tamkin', 'Julie Yu', 'Ujval Misra', 'Joseph Gonzalez'] | ['cs.LG'] | Methods for designing organic materials with desired properties have high potential impact across fields such as medicine, renewable energy, petrochemical engineering, and agriculture. However, using generative modeling to design substances with desired properties is difficult because candidate compounds must satisfy m... | 2021-08-23T17:53:07Z | null | null | null | null | null | null | null | null | null | null |
2108.10447 | One TTS Alignment To Rule Them All | ['Rohan Badlani', 'Adrian Łancucki', 'Kevin J. Shih', 'Rafael Valle', 'Wei Ping', 'Bryan Catanzaro'] | ['cs.SD', 'cs.CL', 'cs.LG', 'eess.AS'] | Speech-to-text alignment is a critical component of neural text-to-speech (TTS) models. Autoregressive TTS models typically use an attention mechanism to learn these alignments on-line. However, these alignments tend to be brittle and often fail to generalize to long utterances and out-of-domain text, leading to missing... | 2021-08-23T23:45:48Z | null | null | null | One TTS Alignment to Rule Them All | ['Rohan Badlani', 'A. Lancucki', 'Kevin J. Shih', 'Rafael Valle', 'Wei Ping', 'Bryan Catanzaro'] | 2021 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 85 | 20 | ['Computer Science', 'Engineering'] |
2108.10724 | How Hateful are Movies? A Study and Prediction on Movie Subtitles | ['Niklas von Boguszewski', 'Sana Moin', 'Anirban Bhowmick', 'Seid Muhie Yimam', 'Chris Biemann'] | ['cs.CL'] | In this research, we investigate techniques to detect hate speech in movies. We introduce a new dataset collected from the subtitles of six movies, where each utterance is annotated either as hate, offensive or normal. We apply transfer learning techniques of domain adaptation and fine-tuning on existing social media d... | 2021-08-19T16:07:08Z | null | null | null | null | null | null | null | null | null | null |
2108.10904 | SimVLM: Simple Visual Language Model Pretraining with Weak Supervision | ['Zirui Wang', 'Jiahui Yu', 'Adams Wei Yu', 'Zihang Dai', 'Yulia Tsvetkov', 'Yuan Cao'] | ['cs.CV', 'cs.CL', 'cs.LG'] | With recent progress in joint modeling of visual and textual representations, Vision-Language Pretraining (VLP) has achieved impressive performance on many multimodal downstream tasks. However, the requirement for expensive annotations including clean image captions and regional labels limits the scalability of existin... | 2021-08-24T18:14:00Z | Published at ICLR 2022 | null | null | SimVLM: Simple Visual Language Model Pretraining with Weak Supervision | ['Zirui Wang', 'Jiahui Yu', 'Adams Wei Yu', 'Zihang Dai', 'Yulia Tsvetkov', 'Yuan Cao'] | 2021 | International Conference on Learning Representations | 801 | 66 | ['Computer Science'] |