Dataset schema (column, dtype, observed range):

- arxiv_id — float64, values 1.5k to 2.51k
- title — string, lengths 9 to 178
- authors — string, lengths 2 to 22.8k
- categories — string, lengths 4 to 146
- summary — string, lengths 103 to 1.92k
- published — date string, 2015-02-06 10:44:00 to 2025-07-10 17:59:58
- comments — string, lengths 2 to 417
- journal_ref — string, 321 classes
- doi — string, 398 classes
- ss_title — string, lengths 8 to 159
- ss_authors — string, lengths 11 to 8.38k
- ss_year — float64, values 2.02k to 2.03k
- ss_venue — string, 281 classes
- ss_citationCount — float64, values 0 to 134k
- ss_referenceCount — float64, values 0 to 429
- ss_fieldsOfStudy — string, 47 classes
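Two quirks of this dump need handling before the records below are usable: arxiv_id is stored as float64, which drops trailing zeros (2205.0234 really means arXiv 2205.02340), and authors/categories are Python-list literals serialized as strings. A minimal sketch, with hypothetical helper names:

```python
import ast

def restore_arxiv_id(value: float) -> str:
    # Hypothetical helper: modern arXiv IDs are YYMM.NNNNN, so format the
    # float back to five digits after the dot to recover lost trailing zeros.
    yymm, _, number = f"{value:.5f}".partition(".")
    return f"{yymm}.{number}"

def parse_list_field(raw: str) -> list:
    # authors/categories are stringified Python lists; ast.literal_eval
    # parses the literal without executing arbitrary code.
    return ast.literal_eval(raw)

print(restore_arxiv_id(2205.0234))             # -> 2205.02340
print(parse_list_field("['cs.CV', 'cs.AI']"))  # -> ['cs.CV', 'cs.AI']
```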
arxiv_id: 2205.01782
title: Learning Multi-dimensional Edge Feature-based AU Relation Graph for Facial Action Unit Recognition
authors: ['Cheng Luo', 'Siyang Song', 'Weicheng Xie', 'Linlin Shen', 'Hatice Gunes']
categories: ['cs.CV', 'cs.AI']
summary: The activations of Facial Action Units (AUs) mutually influence one another. While the relationship between a pair of AUs can be complex and unique, existing approaches fail to specifically and explicitly represent such cues for each pair of AUs in each facial display. This paper proposes an AU relationship modelling a...
published: 2022-05-02T03:38:00Z
comments: IJCAI 2022 conference (accepted)
doi: 10.24963/ijcai.2022/173

arxiv_id: 2205.01917
title: CoCa: Contrastive Captioners are Image-Text Foundation Models
authors: ['Jiahui Yu', 'Zirui Wang', 'Vijay Vasudevan', 'Legg Yeung', 'Mojtaba Seyedhosseini', 'Yonghui Wu']
categories: ['cs.CV', 'cs.LG', 'cs.MM']
summary: Exploring large-scale pretrained foundation models is of significant interest in computer vision because these models can be quickly transferred to many downstream tasks. This paper presents Contrastive Captioner (CoCa), a minimalist design to pretrain an image-text encoder-decoder foundation model jointly with contras...
published: 2022-05-04T07:01:14Z
comments: Preprint

arxiv_id: 2205.01972
title: Sequencer: Deep LSTM for Image Classification
authors: ['Yuki Tatsunami', 'Masato Taki']
categories: ['cs.CV', 'cs.AI', 'cs.LG']
summary: In recent computer vision research, the advent of the Vision Transformer (ViT) has rapidly revolutionized various architectural design efforts: ViT achieved state-of-the-art image classification performance using self-attention found in natural language processing, and MLP-Mixer achieved competitive performance using s...
published: 2022-05-04T09:47:46Z
comments: Accepted in NeurIPS 2022; camera ready edition
ss_title: Sequencer: Deep LSTM for Image Classification
ss_authors: ['Yuki Tatsunami', 'M. Taki']
ss_year: 2022
ss_venue: Neural Information Processing Systems
ss_citationCount: 82
ss_referenceCount: 91
ss_fieldsOfStudy: ['Computer Science']
arxiv_id: 2205.02289
title: A Dataset for N-ary Relation Extraction of Drug Combinations
authors: ['Aryeh Tiktinsky', 'Vijay Viswanathan', 'Danna Niezni', 'Dana Meron Azagury', 'Yosi Shamay', 'Hillel Taub-Tabib', 'Tom Hope', 'Yoav Goldberg']
categories: ['cs.CL', 'cs.IR']
summary: Combination therapies have become the standard of care for diseases such as cancer, tuberculosis, malaria and HIV. However, the combinatorial set of available multi-drug treatments creates a challenge in identifying effective combination therapies available in a situation. To assist medical professionals in identifying...
published: 2022-05-04T19:01:16Z
comments: To appear in NAACL 2022

arxiv_id: 2205.02340
title: Knowledge Distillation of Russian Language Models with Reduction of Vocabulary
authors: ['Alina Kolesnikova', 'Yuri Kuratov', 'Vasily Konovalov', 'Mikhail Burtsev']
categories: ['cs.CL', 'cs.LG']
summary: Today, transformer language models serve as a core component for majority of natural language processing tasks. Industrial application of such models requires minimization of computation time and memory footprint. Knowledge distillation is one of approaches to address this goal. Existing methods in this field are mainl...
published: 2022-05-04T21:56:57Z

arxiv_id: 2205.02455
title: COGMEN: COntextualized GNN based Multimodal Emotion recognitioN
authors: ['Abhinav Joshi', 'Ashwani Bhat', 'Ayush Jain', 'Atin Vikram Singh', 'Ashutosh Modi']
categories: ['cs.CL', 'cs.AI', 'cs.LG']
summary: Emotions are an inherent part of human interactions, and consequently, it is imperative to develop AI systems that understand and recognize human emotions. During a conversation involving various people, a person's emotions are influenced by the other speaker's utterances and their own emotional state over the utteranc...
published: 2022-05-05T05:54:24Z
comments: 17 pages (9 main + 8 appendix). Accepted at NAACL 2022
ss_title: COGMEN: COntextualized GNN based Multimodal Emotion recognitioN
ss_authors: ['Abhinav Joshi', 'A. Bhat', 'Ayush Jain', 'Atinesh Singh', 'Ashutosh Modi']
ss_year: 2022
ss_venue: North American Chapter of the Association for Computational Linguistics
ss_citationCount: 80
ss_referenceCount: 62
ss_fieldsOfStudy: ['Computer Science']
arxiv_id: 2205.02545
title: Introducing the Welsh Text Summarisation Dataset and Baseline Systems
authors: ['Ignatius Ezeani', 'Mahmoud El-Haj', 'Jonathan Morris', 'Dawn Knight']
categories: ['cs.CL', 'cs.IR']
summary: Welsh is an official language in Wales and is spoken by an estimated 884,300 people (29.2% of the population of Wales). Despite this status and estimated increase in speaker numbers since the last (2011) census, Welsh remains a minority language undergoing revitalization and promotion by Welsh Government and relevant s...
published: 2022-05-05T10:12:45Z

arxiv_id: 2205.02728
title: CATs are Fuzzy PETs: A Corpus and Analysis of Potentially Euphemistic Terms
authors: ['Martha Gavidia', 'Patrick Lee', 'Anna Feldman', 'Jing Peng']
categories: ['cs.CL']
summary: Euphemisms have not received much attention in natural language processing, despite being an important element of polite and figurative language. Euphemisms prove to be a difficult topic, not only because they are subject to language change, but also because humans may not agree on what is a euphemism and what is not. ...
published: 2022-05-05T16:01:39Z
comments: Proceedings of LREC 2022
ss_title: CATs are Fuzzy PETs: A Corpus and Analysis of Potentially Euphemistic Terms
ss_authors: ['M. Gavidia', 'Patrick Lee', 'Anna Feldman', 'Jing Peng']
ss_year: 2022
ss_venue: International Conference on Language Resources and Evaluation
ss_citationCount: 24
ss_referenceCount: 41
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2205.03026
title: Hearing voices at the National Library -- a speech corpus and acoustic model for the Swedish language
authors: ['Martin Malmsten', 'Chris Haffenden', 'Love Börjeson']
categories: ['cs.CL']
summary: This paper explains our work in developing new acoustic models for automated speech recognition (ASR) at KBLab, the infrastructure for data-driven research at the National Library of Sweden (KB). We evaluate different approaches for a viable speech-to-text pipeline for audiovisual resources in Swedish, using the wav2ve...
published: 2022-05-06T06:06:00Z
ss_title: Hearing voices at the National Library - a speech corpus and acoustic model for the Swedish language
ss_authors: ['Martin Malmsten', 'Chris Haffenden', 'Love Borjeson']
ss_year: 2022
ss_venue: arXiv.org
ss_citationCount: 10
ss_referenceCount: 20
ss_fieldsOfStudy: ['Computer Science']
arxiv_id: 2205.04733
title: From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective
authors: ['Thibault Formal', 'Carlos Lassance', 'Benjamin Piwowarski', 'Stéphane Clinchant']
categories: ['cs.IR', 'cs.CL']
summary: Neural retrievers based on dense representations combined with Approximate Nearest Neighbors search have recently received a lot of attention, owing their success to distillation and/or better sampling of examples for training -- while still relying on the same backbone architecture. In the meantime, sparse representat...
published: 2022-05-10T08:08:43Z
comments: Accepted at SIGIR22 as a short paper (this work is the extension of SPLADE v2)

arxiv_id: 2205.05131
title: UL2: Unifying Language Learning Paradigms
authors: ['Yi Tay', 'Mostafa Dehghani', 'Vinh Q. Tran', 'Xavier Garcia', 'Jason Wei', 'Xuezhi Wang', 'Hyung Won Chung', 'Siamak Shakeri', 'Dara Bahri', 'Tal Schuster', 'Huaixiu Steven Zheng', 'Denny Zhou', 'Neil Houlsby', 'Donald Metzler']
categories: ['cs.CL']
summary: Existing pre-trained models are generally geared towards a particular class of problems. To date, there seems to be still no consensus on what the right architecture and pre-training setup should be. This paper presents a unified framework for pre-training models that are universally effective across datasets and setup...
published: 2022-05-10T19:32:20Z
comments: Updated Q1 2023 with Flan-UL2 20B release! :)
ss_title: UL2: Unifying Language Learning Paradigms
ss_authors: ['Yi Tay', 'Mostafa Dehghani', 'Vinh Q. Tran', 'Xavier García', 'Jason Wei', 'Xuezhi Wang', 'Hyung Won Chung', 'Dara Bahri', 'Tal Schuster', 'H. Zheng', 'Denny Zhou', 'N. Houlsby', 'Donald Metzler']
ss_year: 2022
ss_venue: International Conference on Learning Representations
ss_citationCount: 313
ss_referenceCount: 144
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2205.05789
title: RITA: a Study on Scaling Up Generative Protein Sequence Models
authors: ['Daniel Hesslow', 'Niccoló Zanichelli', 'Pascal Notin', 'Iacopo Poli', 'Debora Marks']
categories: ['q-bio.QM', 'cs.LG']
summary: In this work we introduce RITA: a suite of autoregressive generative models for protein sequences, with up to 1.2 billion parameters, trained on over 280 million protein sequences belonging to the UniRef-100 database. Such generative models hold the promise of greatly accelerating protein design. We conduct the first s...
published: 2022-05-11T22:06:03Z
ss_title: RITA: a Study on Scaling Up Generative Protein Sequence Models
ss_authors: ['Daniel Hesslow', 'Niccoló Zanichelli', 'Pascal Notin', 'Iacopo Poli', 'D. Marks']
ss_year: 2022
ss_venue: arXiv.org
ss_citationCount: 99
ss_referenceCount: 45
ss_fieldsOfStudy: ['Computer Science', 'Biology']
arxiv_id: 2205.05862
title: AdaVAE: Exploring Adaptive GPT-2s in Variational Auto-Encoders for Language Modeling
authors: ['Haoqin Tu', 'Zhongliang Yang', 'Jinshuai Yang', 'Yongfeng Huang']
categories: ['cs.CL']
summary: Variational Auto-Encoder (VAE) has become the de-facto learning paradigm in achieving representation learning and generation for natural language at the same time. Nevertheless, existing VAE-based language models either employ elementary RNNs, which is not powerful to handle complex works in the multi-task situation, o...
published: 2022-05-12T03:22:07Z
ss_title: AdaVAE: Exploring Adaptive GPT-2s in Variational Auto-Encoders for Language Modeling
ss_authors: ['Haoqin Tu', 'Zhongliang Yang', 'Jinshuai Yang', 'Siyu Zhang', 'Yong Huang']
ss_year: 2022
ss_venue: arXiv.org
ss_citationCount: 12
ss_referenceCount: 44
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2205.06207
title: CiteSum: Citation Text-guided Scientific Extreme Summarization and Domain Adaptation with Limited Supervision
authors: ['Yuning Mao', 'Ming Zhong', 'Jiawei Han']
categories: ['cs.CL']
summary: Scientific extreme summarization (TLDR) aims to form ultra-short summaries of scientific papers. Previous efforts on curating scientific TLDR datasets failed to scale up due to the heavy human annotation and domain expertise required. In this paper, we propose a simple yet effective approach to automatically extracting...
published: 2022-05-12T16:44:19Z
comments: EMNLP 2022. TLDR: By pretraining on (automatically extracted) citation sentences in scientific papers, we achieve SOTA on SciTLDR, XSum, and Gigaword in zero-shot and (or) few-shot settings
ss_title: CiteSum: Citation Text-guided Scientific Extreme Summarization and Domain Adaptation with Limited Supervision
ss_authors: ['Yuning Mao', 'Ming Zhong', 'Jiawei Han']
ss_year: 2022
ss_venue: Conference on Empirical Methods in Natural Language Processing
ss_citationCount: 15
ss_referenceCount: 49
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2205.06230
title: Simple Open-Vocabulary Object Detection with Vision Transformers
authors: ['Matthias Minderer', 'Alexey Gritsenko', 'Austin Stone', 'Maxim Neumann', 'Dirk Weissenborn', 'Alexey Dosovitskiy', 'Aravindh Mahendran', 'Anurag Arnab', 'Mostafa Dehghani', 'Zhuoran Shen', 'Xiao Wang', 'Xiaohua Zhai', 'Thomas Kipf', 'Neil Houlsby']
categories: ['cs.CV']
summary: Combining simple architectures with large-scale pre-training has led to massive improvements in image classification. For object detection, pre-training and scaling approaches are less well established, especially in the long-tailed and open-vocabulary setting, where training data is relatively scarce. In this paper, w...
published: 2022-05-12T17:20:36Z
comments: ECCV 2022 camera-ready version
ss_title: Simple Open-Vocabulary Object Detection with Vision Transformers
ss_authors: ['Matthias Minderer', 'A. Gritsenko', 'Austin Stone', 'Maxim Neumann', 'Dirk Weissenborn', 'Alexey Dosovitskiy', 'Aravindh Mahendran', 'Anurag Arnab', 'Mostafa Dehghani', 'Zhuoran Shen', 'Xiao Wang', 'Xiaohua Zhai', 'Thomas Kipf', 'N. Houlsby']
ss_year: 2022
ss_venue: arXiv.org
ss_citationCount: 314
ss_referenceCount: 49
ss_fieldsOfStudy: ['Computer Science']
arxiv_id: 2205.06421
title: Talking Face Generation with Multilingual TTS
authors: ['Hyoung-Kyu Song', 'Sang Hoon Woo', 'Junhyeok Lee', 'Seungmin Yang', 'Hyunjae Cho', 'Youseong Lee', 'Dongho Choi', 'Kang-wook Kim']
categories: ['cs.CV', 'cs.AI']
summary: In this work, we propose a joint system combining a talking face generation system with a text-to-speech system that can generate multilingual talking face videos from only the text input. Our system can synthesize natural multilingual speeches while maintaining the vocal identity of the speaker, as well as lip movemen...
published: 2022-05-13T02:08:35Z
comments: Accepted to CVPR Demo Track (2022)

arxiv_id: 2205.06457
title: ViT5: Pretrained Text-to-Text Transformer for Vietnamese Language Generation
authors: ['Long Phan', 'Hieu Tran', 'Hieu Nguyen', 'Trieu H. Trinh']
categories: ['cs.CL', 'cs.AI']
summary: We present ViT5, a pretrained Transformer-based encoder-decoder model for the Vietnamese language. With T5-style self-supervised pretraining, ViT5 is trained on a large corpus of high-quality and diverse Vietnamese texts. We benchmark ViT5 on two downstream text generation tasks, Abstractive Text Summarization and Name...
published: 2022-05-13T06:08:35Z
comments: NAACL SRW 2022. arXiv admin note: text overlap with arXiv:2110.04257
ss_title: ViT5: Pretrained Text-to-Text Transformer for Vietnamese Language Generation
ss_authors: ['Long Phan', 'H. Tran', 'H. Nguyen', 'Trieu H. Trinh']
ss_year: 2022
ss_venue: North American Chapter of the Association for Computational Linguistics
ss_citationCount: 72
ss_referenceCount: 27
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2205.06885
title: PathologyBERT -- Pre-trained Vs. A New Transformer Language Model for Pathology Domain
authors: ['Thiago Santos', 'Amara Tariq', 'Susmita Das', 'Kavyasree Vayalpati', 'Geoffrey H. Smith', 'Hari Trivedi', 'Imon Banerjee']
categories: ['cs.CL']
summary: Pathology text mining is a challenging task given the reporting variability and constant new findings in cancer sub-type definitions. However, successful text mining of a large pathology database can play a critical role to advance 'big data' cancer research like similarity-based treatment selection, case identificatio...
published: 2022-05-13T20:42:07Z
comments: submitted to "American Medical Informatics Association (AMIA)" 2022 Annual Symposium
ss_title: PathologyBERT - Pre-trained Vs. A New Transformer Language Model for Pathology Domain
ss_authors: ['Thiago Santos', 'Amara Tariq', 'Susmita Das', 'Kavyasree Vayalpati', 'Geoffrey H. Smith', 'H. Trivedi', 'I. Banerjee']
ss_year: 2022
ss_venue: American Medical Informatics Association Annual Symposium
ss_citationCount: 18
ss_referenceCount: 21
ss_fieldsOfStudy: ['Computer Science', 'Medicine']
arxiv_id: 2205.07390
title: Learning Representations for New Sound Classes With Continual Self-Supervised Learning
authors: ['Zhepei Wang', 'Cem Subakan', 'Xilin Jiang', 'Junkai Wu', 'Efthymios Tzinis', 'Mirco Ravanelli', 'Paris Smaragdis']
categories: ['eess.AS', 'cs.LG', 'cs.SD', 'eess.SP']
summary: In this paper, we work on a sound recognition system that continually incorporates new sound classes. Our main goal is to develop a framework where the model can be updated without relying on labeled data. For this purpose, we propose adopting representation learning, where an encoder is trained using unlabeled data. T...
published: 2022-05-15T22:15:21Z
comments: Accepted to IEEE Signal Processing Letters
doi: 10.1109/LSP.2022.3229643

arxiv_id: 2205.08794
title: LogiGAN: Learning Logical Reasoning via Adversarial Pre-training
authors: ['Xinyu Pi', 'Wanjun Zhong', 'Yan Gao', 'Nan Duan', 'Jian-Guang Lou']
categories: ['cs.CL']
summary: We present LogiGAN, an unsupervised adversarial pre-training framework for improving logical reasoning abilities of language models. Upon automatic identifying logical reasoning phenomena in massive text corpus via detection heuristics, we train language models to predict the masked-out logical statements. Inspired by ...
published: 2022-05-18T08:46:49Z
comments: Accepted by NeurIPS 2022

arxiv_id: 2205.08808
title: Evaluation of Transfer Learning for Polish with a Text-to-Text Model
authors: ['Aleksandra Chrabrowa', 'Łukasz Dragan', 'Karol Grzegorczyk', 'Dariusz Kajtoch', 'Mikołaj Koszowski', 'Robert Mroczkowski', 'Piotr Rybak']
categories: ['cs.CL', 'cs.LG']
summary: We introduce a new benchmark for assessing the quality of text-to-text models for Polish. The benchmark consists of diverse tasks and datasets: KLEJ benchmark adapted for text-to-text, en-pl translation, summarization, and question answering. In particular, since summarization and question answering lack benchmark data...
published: 2022-05-18T09:17:14Z
comments: Accepted at LREC 2022
arxiv_id: 2205.09651
title: Wojood: Nested Arabic Named Entity Corpus and Recognition using BERT
authors: ['Mustafa Jarrar', 'Mohammed Khalilia', 'Sana Ghanem']
categories: ['cs.CL', 'cs.AI', 'cs.IR', 'cs.LG']
summary: This paper presents Wojood, a corpus for Arabic nested Named Entity Recognition (NER). Nested entities occur when one entity mention is embedded inside another entity mention. Wojood consists of about 550K Modern Standard Arabic (MSA) and dialect tokens that are manually annotated with 21 entity types including person,...
published: 2022-05-19T16:06:49Z
journal_ref: In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2022), Marseille, France. 2022

arxiv_id: 2205.09685
title: ArabGlossBERT: Fine-Tuning BERT on Context-Gloss Pairs for WSD
authors: ['Moustafa Al-Hajj', 'Mustafa Jarrar']
categories: ['cs.CL', 'cs.AI', 'cs.IR', 'cs.LG']
summary: Using pre-trained transformer models such as BERT has proven to be effective in many NLP tasks. This paper presents our work to fine-tune BERT models for Arabic Word Sense Disambiguation (WSD). We treated the WSD task as a sentence-pair binary classification task. First, we constructed a dataset of labeled Arabic conte...
published: 2022-05-19T16:47:18Z
journal_ref: In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), PP 40--48. (2021)
doi: 10.26615/978-954-452-072-4_005

arxiv_id: 2205.09707
title: PLAID: An Efficient Engine for Late Interaction Retrieval
authors: ['Keshav Santhanam', 'Omar Khattab', 'Christopher Potts', 'Matei Zaharia']
categories: ['cs.IR', 'cs.CL']
summary: Pre-trained language models are increasingly important components across multiple information retrieval (IR) paradigms. Late interaction, introduced with the ColBERT model and recently refined in ColBERTv2, is a popular paradigm that holds state-of-the-art status across many benchmarks. To dramatically speed up the sea...
published: 2022-05-19T17:19:31Z
comments: Preprint. Omar and Keshav contributed equally to this work
ss_title: PLAID: An Efficient Engine for Late Interaction Retrieval
ss_authors: ['Keshav Santhanam', 'O. Khattab', 'Christopher Potts', 'M. Zaharia']
ss_year: 2022
ss_venue: International Conference on Information and Knowledge Management
ss_citationCount: 76
ss_referenceCount: 56
ss_fieldsOfStudy: ['Computer Science']
arxiv_id: 2205.09853
title: MCVD: Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation
authors: ['Vikram Voleti', 'Alexia Jolicoeur-Martineau', 'Christopher Pal']
categories: ['cs.CV', 'cs.AI', 'cs.LG']
summary: Video prediction is a challenging task. The quality of video frames from current state-of-the-art (SOTA) generative models tends to be poor and generalization beyond the training data is difficult. Furthermore, existing prediction frameworks are typically not capable of simultaneously handling other video-related tasks...
published: 2022-05-19T20:58:05Z
comments: NeurIPS 2022 ; 10 pages, 4 figures, 7 tables

arxiv_id: 2205.09911
title: Can Foundation Models Wrangle Your Data?
authors: ['Avanika Narayan', 'Ines Chami', 'Laurel Orr', 'Simran Arora', 'Christopher Ré']
categories: ['cs.LG', 'cs.AI', 'cs.DB']
summary: Foundation Models (FMs) are models trained on large corpora of data that, at very large scale, can generalize to new tasks without any task-specific finetuning. As these models continue to grow in size, innovations continue to push the boundaries of what these models can do on language and image tasks. This paper aims ...
published: 2022-05-20T00:53:43Z
comments: 12 pages, 5 figures; additional experiments, typo corrections, modifications to Section 5 (Research Agenda)
ss_title: Can Foundation Models Wrangle Your Data?
ss_authors: ['A. Narayan', 'Ines Chami', 'Laurel J. Orr', 'Christopher Ré']
ss_year: 2022
ss_venue: Proceedings of the VLDB Endowment
ss_citationCount: 231
ss_referenceCount: 100
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2205.09921
title: KERPLE: Kernelized Relative Positional Embedding for Length Extrapolation
authors: ['Ta-Chung Chi', 'Ting-Han Fan', 'Peter J. Ramadge', 'Alexander I. Rudnicky']
categories: ['cs.CL', 'cs.LG']
summary: Relative positional embeddings (RPE) have received considerable attention since RPEs effectively model the relative distance among tokens and enable length extrapolation. We propose KERPLE, a framework that generalizes relative position embedding for extrapolation by kernelizing positional differences. We achieve this ...
published: 2022-05-20T01:25:57Z
comments: Accepted at the 36th Conference on Neural Information Processing Systems (NeurIPS 2022). The first two authors contributed equally to this work
arxiv_id: 2205.10450
title: Temporally Precise Action Spotting in Soccer Videos Using Dense Detection Anchors
authors: ['João V. B. Soares', 'Avijit Shah', 'Topojoy Biswas']
categories: ['cs.CV']
summary: We present a model for temporally precise action spotting in videos, which uses a dense set of detection anchors, predicting a detection confidence and corresponding fine-grained temporal displacement for each anchor. We experiment with two trunk architectures, both of which are able to incorporate large temporal conte...
published: 2022-05-20T22:14:02Z
comments: Accepted in International Conference on Image Processing (ICIP), 2022
ss_title: Temporally Precise Action Spotting in Soccer Videos Using Dense Detection Anchors
ss_authors: ['Joao V. B. Soares', 'Avijit Shah', 'Topojoy Biswas']
ss_year: 2022
ss_venue: International Conference on Information Photonics
ss_citationCount: 32
ss_referenceCount: 22
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2205.10687
title: Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding
authors: ['Abbas Ghaddar', 'Yimeng Wu', 'Sunyam Bagga', 'Ahmad Rashid', 'Khalil Bibi', 'Mehdi Rezagholizadeh', 'Chao Xing', 'Yasheng Wang', 'Duan Xinyu', 'Zhefeng Wang', 'Baoxing Huai', 'Xin Jiang', 'Qun Liu', 'Philippe Langlais']
categories: ['cs.CL']
summary: There is a growing body of work in recent years to develop pre-trained language models (PLMs) for the Arabic language. This work concerns addressing two major problems in existing Arabic PLMs which constraint progress of the Arabic NLU and NLG fields. First, existing Arabic PLMs are not well-explored and their pre-train...
published: 2022-05-21T22:38:19Z

arxiv_id: 2205.10726
title: TWEET-FID: An Annotated Dataset for Multiple Foodborne Illness Detection Tasks
authors: ['Ruofan Hu', 'Dongyu Zhang', 'Dandan Tao', 'Thomas Hartvigsen', 'Hao Feng', 'Elke Rundensteiner']
categories: ['cs.CL', 'cs.AI', 'cs.LG']
summary: Foodborne illness is a serious but preventable public health problem -- with delays in detecting the associated outbreaks resulting in productivity loss, expensive recalls, public safety hazards, and even loss of life. While social media is a promising source for identifying unreported foodborne illnesses, there is a d...
published: 2022-05-22T03:47:18Z
comments: LREC 2022
ss_title: TWEET-FID: An Annotated Dataset for Multiple Foodborne Illness Detection Tasks
ss_authors: ['Ruofan Hu', 'Dongyu Zhang', 'Dandan Tao', 'Thomas Hartvigsen', 'Hao Feng', 'Elke A. Rundensteiner']
ss_year: 2022
ss_venue: International Conference on Language Resources and Evaluation
ss_citationCount: 7
ss_referenceCount: 21
ss_fieldsOfStudy: ['Computer Science']
arxiv_id: 2205.11081
title: BanglaNLG and BanglaT5: Benchmarks and Resources for Evaluating Low-Resource Natural Language Generation in Bangla
authors: ['Abhik Bhattacharjee', 'Tahmid Hasan', 'Wasi Uddin Ahmad', 'Rifat Shahriyar']
categories: ['cs.CL']
summary: This work presents BanglaNLG, a comprehensive benchmark for evaluating natural language generation (NLG) models in Bangla, a widely spoken yet low-resource language. We aggregate six challenging conditional text generation tasks under the BanglaNLG benchmark, introducing a new dataset on dialogue generation in the proc...
published: 2022-05-23T06:54:56Z
comments: Findings of EACL 2023 (camera-ready)

arxiv_id: 2205.11111
title: DistilCamemBERT: a distillation of the French model CamemBERT
authors: ['Cyrile Delestre', 'Abibatou Amar']
categories: ['cs.CL', 'cs.LG']
summary: Modern Natural Language Processing (NLP) models based on Transformer structures represent the state of the art in terms of performance on very diverse tasks. However, these models are complex and represent several hundred million parameters for the smallest of them. This may hinder their adoption at the industrial leve...
published: 2022-05-23T08:04:58Z
comments: in French language. CAp (Conférence sur l'Apprentissage automatique), Jul 2022, Vannes, France

arxiv_id: 2205.11342
title: The Diminishing Returns of Masked Language Models to Science
authors: ['Zhi Hong', 'Aswathy Ajith', 'Gregory Pauloski', 'Eamon Duede', 'Kyle Chard', 'Ian Foster']
categories: ['cs.CL', 'cs.LG', 'I.2.7']
summary: Transformer-based masked language models such as BERT, trained on general corpora, have shown impressive performance on downstream tasks. It has also been demonstrated that the downstream task performance of such models can be improved by pretraining larger models for longer on more data. In this work, we empirically e...
published: 2022-05-23T14:35:08Z
comments: 12 pages. 3 figures. 5 tables. Accepted to the Findings of ACL 2023
arxiv_id: 2205.11487
title: Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding
authors: ['Chitwan Saharia', 'William Chan', 'Saurabh Saxena', 'Lala Li', 'Jay Whang', 'Emily Denton', 'Seyed Kamyar Seyed Ghasemipour', 'Burcu Karagol Ayan', 'S. Sara Mahdavi', 'Rapha Gontijo Lopes', 'Tim Salimans', 'Jonathan Ho', 'David J Fleet', 'Mohammad Norouzi']
categories: ['cs.CV', 'cs.LG']
summary: We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key disc...
published: 2022-05-23T17:42:53Z

arxiv_id: 2205.11656
title: FlexiBERT: Are Current Transformer Architectures too Homogeneous and Rigid?
authors: ['Shikhar Tuli', 'Bhishma Dedhia', 'Shreshth Tuli', 'Niraj K. Jha']
categories: ['cs.LG', 'cs.CL']
summary: The existence of a plethora of language models makes the problem of selecting the best one for a custom task challenging. Most state-of-the-art methods leverage transformer-based models (e.g., BERT) or their variants. Training such models and exploring their hyperparameter space, however, is computationally expensive. ...
published: 2022-05-23T22:44:34Z
comments: Preprint. In review

arxiv_id: 2205.11916
title: Large Language Models are Zero-Shot Reasoners
authors: ['Takeshi Kojima', 'Shixiang Shane Gu', 'Machel Reid', 'Yutaka Matsuo', 'Yusuke Iwasawa']
categories: ['cs.CL', 'cs.AI', 'cs.LG']
summary: Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars. Notably, chain of thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step a...
published: 2022-05-24T09:22:26Z
comments: Accepted to NeurIPS2022. Our code is available at https://github.com/kojima-takeshi188/zero_shot_cot
arxiv_id: 2205.11966
title: Benchmark Data and Evaluation Framework for Intent Discovery Around COVID-19 Vaccine Hesitancy
authors: ['Shai Gretz', 'Assaf Toledo', 'Roni Friedman', 'Dan Lahav', 'Rose Weeks', 'Naor Bar-Zeev', 'João Sedoc', 'Pooja Sangha', 'Yoav Katz', 'Noam Slonim']
categories: ['cs.CL']
summary: The COVID-19 pandemic has made a huge global impact and cost millions of lives. As COVID-19 vaccines were rolled out, they were quickly met with widespread hesitancy. To address the concerns of hesitant people, we launched VIRA, a public dialogue system aimed at addressing questions and concerns surrounding the COVID-1...
published: 2022-05-24T10:58:11Z
ss_title: Benchmark Data and Evaluation Framework for Intent Discovery Around COVID-19 Vaccine Hesitancy
ss_authors: ['Shai Gretz', 'Assaf Toledo', 'Roni Friedman', 'Dan Lahav', 'Rose Weeks', 'N. Bar-Zeev', 'João Sedoc', 'P. Sangha', 'Yoav Katz', 'N. Slonim']
ss_year: 2022
ss_venue: Findings
ss_citationCount: 8
ss_referenceCount: 27
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2205.12005
title: mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections
authors: ['Chenliang Li', 'Haiyang Xu', 'Junfeng Tian', 'Wei Wang', 'Ming Yan', 'Bin Bi', 'Jiabo Ye', 'Hehong Chen', 'Guohai Xu', 'Zheng Cao', 'Ji Zhang', 'Songfang Huang', 'Fei Huang', 'Jingren Zhou', 'Luo Si']
categories: ['cs.CL', 'cs.CV']
summary: Large-scale pretrained foundation models have been an emerging paradigm for building artificial intelligence (AI) systems, which can be quickly adapted to a wide range of downstream tasks. This paper presents mPLUG, a new vision-language foundation model for both cross-modal understanding and generation. Most existing ...
published: 2022-05-24T11:52:06Z
journal_ref: EMNLP2022
ss_title: mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections
ss_authors: ['Chenliang Li', 'Haiyang Xu', 'Junfeng Tian', 'Wei Wang', 'Ming Yan', 'Bin Bi', 'Jiabo Ye', 'Hehong Chen', 'Guohai Xu', 'Zheng-da Cao', 'Ji Zhang', 'Songfang Huang', 'Feiran Huang', 'Jingren Zhou', 'Luo Si']
ss_year: 2022
ss_venue: Conference on Empirical Methods in Natural Language Processing
ss_citationCount: 224
ss_referenceCount: 85
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2205.12010
title: SFace: Sigmoid-Constrained Hypersphere Loss for Robust Face Recognition
authors: ['Yaoyao Zhong', 'Weihong Deng', 'Jiani Hu', 'Dongyue Zhao', 'Xian Li', 'Dongchao Wen']
categories: ['cs.CV']
summary: Deep face recognition has achieved great success due to large-scale training databases and rapidly developing loss functions. The existing algorithms devote to realizing an ideal idea: minimizing the intra-class distance and maximizing the inter-class distance. However, they may neglect that there are also low quality ...
published: 2022-05-24T11:54:15Z
comments: 12 pages, 9 figures
journal_ref: IEEE Transactions on Image Processing, 2021
doi: 10.1109/TIP.2020.3048632
arxiv_id: 2205.12035
title: RetroMAE: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder
authors: ['Shitao Xiao', 'Zheng Liu', 'Yingxia Shao', 'Zhao Cao']
categories: ['cs.CL']
summary: Despite pre-training's progress in many important NLP tasks, it remains to explore effective pre-training strategies for dense retrieval. In this paper, we propose RetroMAE, a new retrieval oriented pre-training paradigm based on Masked Auto-Encoder (MAE). RetroMAE is highlighted by three critical designs. 1) A novel M...
published: 2022-05-24T12:43:04Z
comments: Accepted to EMNLP 2022
ss_title: RetroMAE: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder
ss_authors: ['Shitao Xiao', 'Zheng Liu', 'Yingxia Shao', 'Zhao Cao']
ss_year: 2022
ss_venue: Conference on Empirical Methods in Natural Language Processing
ss_citationCount: 126
ss_referenceCount: 44
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2205.12335
title: K-12BERT: BERT for K-12 education
authors: ['Vasu Goel', 'Dhruv Sahnan', 'Venktesh V', 'Gaurav Sharma', 'Deep Dwivedi', 'Mukesh Mohania']
categories: ['cs.CL', 'cs.LG']
summary: Online education platforms are powered by various NLP pipelines, which utilize models like BERT to aid in content curation. Since the inception of the pre-trained language models like BERT, there have also been many efforts toward adapting these pre-trained models to specific domains. However, there has not been a mode...
published: 2022-05-24T19:35:41Z
comments: 4 pages

arxiv_id: 2205.12393
title: Fine-tuned Language Models are Continual Learners
authors: ['Thomas Scialom', 'Tuhin Chakrabarty', 'Smaranda Muresan']
categories: ['cs.CL']
summary: Recent work on large language models relies on the intuition that most natural language processing tasks can be described via natural language instructions. Language models trained on these instructions show strong zero-shot performance on several standard datasets. However, these models even though impressive still pe...
published: 2022-05-24T22:53:34Z
ss_title: Fine-tuned Language Models are Continual Learners
ss_authors: ['Thomas Scialom', 'Tuhin Chakrabarty', 'S. Muresan']
ss_year: 2022
ss_venue: Conference on Empirical Methods in Natural Language Processing
ss_citationCount: 123
ss_referenceCount: 48
ss_fieldsOfStudy: ['Computer Science']
2,205.12446
FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech
['Alexis Conneau', 'Min Ma', 'Simran Khanuja', 'Yu Zhang', 'Vera Axelrod', 'Siddharth Dalmia', 'Jason Riesa', 'Clara Rivera', 'Ankur Bapna']
['cs.CL', 'cs.LG', 'cs.SD', 'eess.AS']
We introduce FLEURS, the Few-shot Learning Evaluation of Universal Representations of Speech benchmark. FLEURS is an n-way parallel speech dataset in 102 languages built on top of the machine translation FLoRes-101 benchmark, with approximately 12 hours of speech supervision per language. FLEURS can be used for a varie...
2022-05-25T02:29:03Z
null
null
null
null
null
null
null
null
null
null
2205.12496
Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts
['Harsh Trivedi', 'Niranjan Balasubramanian', 'Tushar Khot', 'Ashish Sabharwal']
['cs.CL', 'cs.AI']
Question-answering datasets require a broad set of reasoning skills. We show how to use question decompositions to teach language models these broad reasoning skills in a robust fashion. Specifically, we use widely available QDMR representations to programmatically create hard-to-cheat synthetic contexts for real quest...
2022-05-25T05:13:21Z
Accepted at EMNLP'22
null
null
null
null
null
null
null
null
null
2205.12522
Crossmodal-3600: A Massively Multilingual Multimodal Evaluation Dataset
['Ashish V. Thapliyal', 'Jordi Pont-Tuset', 'Xi Chen', 'Radu Soricut']
['cs.CV', 'cs.CL']
Research in massively multilingual image captioning has been severely hampered by a lack of high-quality evaluation datasets. In this paper we present the Crossmodal-3600 dataset (XM3600 in short), a geographically diverse set of 3600 images annotated with human-generated reference captions in 36 languages. The images ...
2022-05-25T06:30:19Z
EMNLP 2022
null
null
null
null
null
null
null
null
null
2205.12644
LingMess: Linguistically Informed Multi Expert Scorers for Coreference Resolution
['Shon Otmazgin', 'Arie Cattan', 'Yoav Goldberg']
['cs.CL']
While coreference resolution typically involves various linguistic challenges, recent models are based on a single pairwise scorer for all types of pairs. We present LingMess, a new coreference model that defines different categories of coreference cases and optimize multiple pairwise scorers, where each scorer learns ...
2022-05-25T10:39:46Z
EACL 2023
null
null
null
null
null
null
null
null
null
2205.12647
Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation
['Tu Vu', 'Aditya Barua', 'Brian Lester', 'Daniel Cer', 'Mohit Iyyer', 'Noah Constant']
['cs.CL']
In this paper, we explore the challenging problem of performing a generative task in a target language when labeled data is only available in English, using summarization as a case study. We assume a strict setting with no access to parallel data or machine translation and find that common transfer learning approaches ...
2022-05-25T10:41:34Z
Accepted as a main conference paper at EMNLP 2022, 22 pages, 8 figures, 11 tables
null
null
null
null
null
null
null
null
null
2205.12673
InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning
['Prakhar Gupta', 'Cathy Jiao', 'Yi-Ting Yeh', 'Shikib Mehri', 'Maxine Eskenazi', 'Jeffrey P. Bigham']
['cs.CL']
Instruction tuning is an emergent paradigm in NLP wherein natural language instructions are leveraged with language models to induce zero-shot performance on unseen tasks. Instructions have been shown to enable good performance on unseen tasks and datasets in both large and small language models. Dialogue is an especia...
2022-05-25T11:37:06Z
EMNLP 2022
null
null
null
null
null
null
null
null
null
2205.12854
Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors
['Liyan Tang', 'Tanya Goyal', 'Alexander R. Fabbri', 'Philippe Laban', 'Jiacheng Xu', 'Semih Yavuz', 'Wojciech Kryściński', 'Justin F. Rousseau', 'Greg Durrett']
['cs.CL', 'cs.AI']
The propensity of abstractive summarization models to make factual errors has been studied extensively, including design of metrics to detect factual errors and annotation of errors in current systems' outputs. However, the ever-evolving nature of summarization systems, metrics, and annotated benchmarks makes factualit...
2022-05-25T15:26:48Z
Accepted to ACL 2023
null
null
null
null
null
null
null
null
null
2205.12934
Amortized Inference for Causal Structure Learning
['Lars Lorch', 'Scott Sussex', 'Jonas Rothfuss', 'Andreas Krause', 'Bernhard Schölkopf']
['cs.LG', 'stat.ML']
Inferring causal structure poses a combinatorial search problem that typically involves evaluating structures with a score or independence test. The resulting search is costly, and designing suitable scores or tests that capture prior knowledge is difficult. In this work, we propose to amortize causal structure learnin...
2022-05-25T17:37:08Z
NeurIPS 2022, fixed formatting of Figure 5
null
null
null
null
null
null
null
null
null
2205.12952
Pretraining is All You Need for Image-to-Image Translation
['Tengfei Wang', 'Ting Zhang', 'Bo Zhang', 'Hao Ouyang', 'Dong Chen', 'Qifeng Chen', 'Fang Wen']
['cs.CV']
We propose to use pretraining to boost general image-to-image translation. Prior image-to-image translation methods usually need dedicated architectural design and train individual translation models from scratch, struggling for high-quality generation of complex scenes, especially when paired training data are not abu...
2022-05-25T17:58:26Z
Project Page: https://tengfei-wang.github.io/PITI/index.html
null
null
null
null
null
null
null
null
null
2205.12956
Inception Transformer
['Chenyang Si', 'Weihao Yu', 'Pan Zhou', 'Yichen Zhou', 'Xinchao Wang', 'Shuicheng Yan']
['cs.CV', 'cs.AI', 'cs.LG']
Recent studies show that Transformer has strong capability of building long-range dependencies, yet is incompetent in capturing high frequencies that predominantly convey local information. To tackle this issue, we present a novel and general-purpose Inception Transformer, or iFormer for short, that effectively learns ...
2022-05-25T17:59:54Z
Code and models will be released at https://github.com/sail-sg/iFormer
null
null
null
null
null
null
null
null
null
2205.13115
Fine-grained Image Captioning with CLIP Reward
['Jaemin Cho', 'Seunghyun Yoon', 'Ajinkya Kale', 'Franck Dernoncourt', 'Trung Bui', 'Mohit Bansal']
['cs.CL', 'cs.AI', 'cs.CV']
Modern image captioning models are usually trained with text similarity objectives. However, since reference captions in public datasets often describe the most salient common objects, models trained with text similarity objectives tend to ignore specific and detailed aspects of an image that distinguish it from others...
2022-05-26T02:46:09Z
NAACL Findings 2022
null
null
Fine-grained Image Captioning with CLIP Reward
['Jaemin Cho', 'Seunghyun Yoon', 'Ajinkya Kale', 'Franck Dernoncourt', 'Trung Bui', 'Mohit Bansal']
2022
NAACL-HLT
79
41
['Computer Science']
2205.13147
Matryoshka Representation Learning
['Aditya Kusupati', 'Gantavya Bhatt', 'Aniket Rege', 'Matthew Wallingford', 'Aditya Sinha', 'Vivek Ramanujan', 'William Howard-Snyder', 'Kaifeng Chen', 'Sham Kakade', 'Prateek Jain', 'Ali Farhadi']
['cs.LG', 'cs.CV']
Learned representations are a central component in modern ML systems, serving a multitude of downstream tasks. When training such representations, it is often the case that computational and statistical constraints for each downstream task are unknown. In this context rigid, fixed capacity representations can be either...
2022-05-26T04:33:56Z
Edited related work to include intrinsic dimensionality works
null
null
null
null
null
null
null
null
null
2205.13636
Quark: Controllable Text Generation with Reinforced Unlearning
['Ximing Lu', 'Sean Welleck', 'Jack Hessel', 'Liwei Jiang', 'Lianhui Qin', 'Peter West', 'Prithviraj Ammanabrolu', 'Yejin Choi']
['cs.CL', 'cs.LG']
Large-scale language models often learn behaviors that are misaligned with user expectations. Generated text may contain offensive or toxic language, contain significant repetition, or be of a different sentiment than desired by the user. We consider the task of unlearning these misalignments by fine-tuning the languag...
2022-05-26T21:11:51Z
null
NeurIPS 2022 (Oral Selection)
null
null
null
null
null
null
null
null
2205.13760
Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval
['Pascal Notin', 'Mafalda Dias', 'Jonathan Frazer', 'Javier Marchena-Hurtado', 'Aidan Gomez', 'Debora S. Marks', 'Yarin Gal']
['cs.LG']
The ability to accurately model the fitness landscape of protein sequences is critical to a wide range of applications, from quantifying the effects of human variants on disease likelihood, to predicting immune-escape mutations in viruses and designing novel biotherapeutic proteins. Deep generative models of protein se...
2022-05-27T04:51:15Z
ICML 2022
null
null
null
null
null
null
null
null
null
2205.14100
GIT: A Generative Image-to-text Transformer for Vision and Language
['Jianfeng Wang', 'Zhengyuan Yang', 'Xiaowei Hu', 'Linjie Li', 'Kevin Lin', 'Zhe Gan', 'Zicheng Liu', 'Ce Liu', 'Lijuan Wang']
['cs.CV']
In this paper, we design and train a Generative Image-to-text Transformer, GIT, to unify vision-language tasks such as image/video captioning and question answering. While generative models provide a consistent network architecture between pre-training and fine-tuning, existing work typically contains complex structure...
2022-05-27T17:03:38Z
null
null
null
GIT: A Generative Image-to-text Transformer for Vision and Language
['Jianfeng Wang', 'Zhengyuan Yang', 'Xiaowei Hu', 'Linjie Li', 'Kevin Lin', 'Zhe Gan', 'Zicheng Liu', 'Ce Liu', 'Lijuan Wang']
2022
Trans. Mach. Learn. Res.
564
149
['Computer Science']
2205.14135
FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
['Tri Dao', 'Daniel Y. Fu', 'Stefano Ermon', 'Atri Rudra', 'Christopher Ré']
['cs.LG']
Transformers are slow and memory-hungry on long sequences, since the time and memory complexity of self-attention are quadratic in sequence length. Approximate attention methods have attempted to address this problem by trading off model quality to reduce the compute complexity, but often do not achieve wall-clock spee...
2022-05-27T17:53:09Z
null
null
null
FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
['Tri Dao', 'Daniel Y. Fu', 'Stefano Ermon', 'A. Rudra', 'Christopher Ré']
2022
Neural Information Processing Systems
2299
111
['Computer Science']
2205.14304
Multimodal Fake News Detection via CLIP-Guided Learning
['Yangming Zhou', 'Qichao Ying', 'Zhenxing Qian', 'Sheng Li', 'Xinpeng Zhang']
['cs.CV']
Multimodal fake news detection has attracted many research interests in social forensics. Many existing approaches introduce tailored attention mechanisms to guide the fusion of unimodal features. However, how the similarity of these features is calculated and how it will affect the decision-making process in FND are s...
2022-05-28T02:43:18Z
Submitted to CIKM 2022
null
null
null
null
null
null
null
null
null
2205.14375
WaveMix: A Resource-efficient Neural Network for Image Analysis
['Pranav Jeevan', 'Kavitha Viswanathan', 'Anandu A S', 'Amit Sethi']
['cs.CV', 'cs.AI', 'cs.LG', 'I.2.10; I.4.0; I.4.1; I.4.2; I.4.6; I.4.7; I.4.8; I.4.9; I.4.10;\n I.2.10; I.5.1; I.5.2; I.5.4; J.2']
We propose a novel neural architecture for computer vision -- WaveMix -- that is resource-efficient and yet generalizable and scalable. While using fewer trainable parameters, GPU RAM, and computations, WaveMix networks achieve comparable or better accuracy than the state-of-the-art convolutional neural networks, visio...
2022-05-28T09:08:50Z
20 pages, 5 figures
null
null
WaveMix: A Resource-efficient Neural Network for Image Analysis
['Pranav Jeevan', 'Kavitha Viswanathan', 'Anandu A S', 'A. Sethi']
2022
null
21
102
['Computer Science']
2205.14728
L3Cube-MahaNLP: Marathi Natural Language Processing Datasets, Models, and Library
['Raviraj Joshi']
['cs.CL', 'cs.LG']
Despite being the third most popular language in India, the Marathi language lacks useful NLP resources. Moreover, popular NLP libraries do not have support for the Marathi language. With L3Cube-MahaNLP, we aim to build resources and a library for Marathi natural language processing. We present datasets and transformer...
2022-05-29T17:51:00Z
null
null
null
null
null
null
null
null
null
null
2205.14756
EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction
['Han Cai', 'Junyan Li', 'Muyan Hu', 'Chuang Gan', 'Song Han']
['cs.CV']
High-resolution dense prediction enables many appealing real-world applications, such as computational photography, autonomous driving, etc. However, the vast computational cost makes deploying state-of-the-art high-resolution dense prediction models on hardware devices difficult. This work presents EfficientViT, a new...
2022-05-29T20:07:23Z
ICCV 2023; Update EfficientViT-SAM results
null
null
null
null
null
null
null
null
null
2205.14879
Easter2.0: Improving convolutional models for handwritten text recognition
['Kartik Chaudhary', 'Raghav Bali']
['cs.CV', 'cs.AI']
Convolutional Neural Networks (CNN) have shown promising results for the task of Handwritten Text Recognition (HTR) but they still fall behind Recurrent Neural Networks (RNNs)/Transformer based models in terms of performance. In this paper, we propose a CNN based architecture that bridges this gap. Our work, Easter2.0,...
2022-05-30T06:33:15Z
12 pages, 8 figures
null
null
Easter2.0: Improving convolutional models for handwritten text recognition
['Kartik Chaudhary', 'Raghav Bali']
2022
arXiv.org
10
30
['Computer Science']
2205.14986
GMML is All you Need
['Sara Atito', 'Muhammad Awais', 'Josef Kittler']
['cs.CV']
Vision transformers have generated significant interest in the computer vision community because of their flexibility in exploiting contextual information, whether it is sharply confined local, or long range global. However, they are known to be data hungry. This has motivated the research in self-supervised transforme...
2022-05-30T10:36:55Z
null
null
null
null
null
null
null
null
null
null
2205.15575
hmBERT: Historical Multilingual Language Models for Named Entity Recognition
['Stefan Schweter', 'Luisa März', 'Katharina Schmid', 'Erion Çano']
['cs.CL']
Compared to standard Named Entity Recognition (NER), identifying persons, locations, and organizations in historical texts constitutes a big challenge. To obtain machine-readable corpora, the historical text is usually scanned and Optical Character Recognition (OCR) needs to be performed. As a result, the historical co...
2022-05-31T07:30:33Z
Camera-ready HIPE-2022 Working Note Paper for CLEF 2022 (Conference and Labs of the Evaluation Forum (CLEF 2022))
null
null
hmBERT: Historical Multilingual Language Models for Named Entity Recognition
['Stefan Schweter', 'Luisa März', 'Katharina Schmid', 'Erion Çano']
2022
Conference and Labs of the Evaluation Forum
18
31
['Computer Science']
2205.15868
CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers
['Wenyi Hong', 'Ming Ding', 'Wendi Zheng', 'Xinghan Liu', 'Jie Tang']
['cs.CV', 'cs.CL', 'cs.LG']
Large-scale pretrained transformers have created milestones in text (GPT-3) and text-to-image (DALL-E and CogView) generation. Its application to video generation is still facing many challenges: The potential huge computation cost makes the training from scratch unaffordable; The scarcity and weak relevance of text-vi...
2022-05-29T19:02:15Z
null
null
null
CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers
['Wenyi Hong', 'Ming Ding', 'Wendi Zheng', 'Xinghan Liu', 'Jie Tang']
2022
International Conference on Learning Representations
633
45
['Computer Science']
2205.15997
TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving
['Kashyap Chitta', 'Aditya Prakash', 'Bernhard Jaeger', 'Zehao Yu', 'Katrin Renz', 'Andreas Geiger']
['cs.CV', 'cs.AI', 'cs.LG', 'cs.RO']
How should we integrate representations from complementary sensors for autonomous driving? Geometry-based fusion has shown promise for perception (e.g. object detection, motion forecasting). However, in the context of end-to-end driving, we find that imitation learning based on existing sensor fusion methods underperfo...
2022-05-31T17:57:19Z
arXiv admin note: text overlap with arXiv:2104.09224
null
null
TransFuser: Imitation With Transformer-Based Sensor Fusion for Autonomous Driving
['Kashyap Chitta', 'Aditya Prakash', 'Bernhard Jaeger', 'Zehao Yu', 'Katrin Renz', 'Andreas Geiger']
2022
IEEE Transactions on Pattern Analysis and Machine Intelligence
335
134
['Computer Science', 'Medicine']
2205.16007
Improved Vector Quantized Diffusion Models
['Zhicong Tang', 'Shuyang Gu', 'Jianmin Bao', 'Dong Chen', 'Fang Wen']
['cs.CV']
Vector quantized diffusion (VQ-Diffusion) is a powerful generative model for text-to-image synthesis, but sometimes can still generate low-quality samples or weakly correlated images with text input. We find these issues are mainly due to the flawed sampling strategy. In this paper, we propose two important techniques ...
2022-05-31T17:59:53Z
update reference
null
null
null
null
null
null
null
null
null
2206.00364
Elucidating the Design Space of Diffusion-Based Generative Models
['Tero Karras', 'Miika Aittala', 'Timo Aila', 'Samuli Laine']
['cs.CV', 'cs.AI', 'cs.LG', 'cs.NE', 'stat.ML']
We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well a...
2022-06-01T10:03:24Z
NeurIPS 2022
null
null
null
null
null
null
null
null
null
2206.00888
Squeezeformer: An Efficient Transformer for Automatic Speech Recognition
['Sehoon Kim', 'Amir Gholami', 'Albert Shaw', 'Nicholas Lee', 'Karttikeya Mangalam', 'Jitendra Malik', 'Michael W. Mahoney', 'Kurt Keutzer']
['eess.AS', 'cs.CL', 'cs.SD']
The recently proposed Conformer model has become the de facto backbone model for various downstream speech tasks based on its hybrid attention-convolution architecture that captures both local and global features. However, through a series of systematic studies, we find that the Conformer architecture's design choices ...
2022-06-02T06:06:29Z
NeurIPS 2022
null
null
null
null
null
null
null
null
null
2206.00927
DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps
['Cheng Lu', 'Yuhao Zhou', 'Fan Bao', 'Jianfei Chen', 'Chongxuan Li', 'Jun Zhu']
['cs.LG', 'stat.ML']
Diffusion probabilistic models (DPMs) are emerging powerful generative models. Despite their high-quality generation performance, DPMs still suffer from their slow sampling as they generally need hundreds or thousands of sequential function evaluations (steps) of large neural networks to draw a sample. Sampling from DP...
2022-06-02T08:43:16Z
Accepted in Neurips 2022
null
null
null
null
null
null
null
null
null
2206.00929
The ParlaSent-BCS dataset of sentiment-annotated parliamentary debates from Bosnia-Herzegovina, Croatia, and Serbia
['Michal Mochtak', 'Peter Rupnik', 'Nikola Ljubešič']
['cs.CL']
Expression of sentiment in parliamentary debates is deemed to be significantly different from that on social media or in product reviews. This paper adds to an emerging body of research on parliamentary debates with a dataset of sentences annotated for detection sentiment polarity in political discourse. We sample the ...
2022-06-02T08:45:14Z
8 pages, submitted to JT-DH 2022 (Language Technologies and Digital Humanities 2022) conference, number 4293
null
null
null
null
null
null
null
null
null
2206.01062
DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis
['Birgit Pfitzmann', 'Christoph Auer', 'Michele Dolfi', 'Ahmed S Nassar', 'Peter W J Staar']
['cs.CV', 'cs.LG']
Accurate document layout analysis is a key requirement for high-quality PDF document conversion. With the recent availability of public, large ground-truth datasets such as PubLayNet and DocBank, deep-learning models have proven to be very effective at layout detection and segmentation. While these datasets are of adeq...
2022-06-02T14:25:12Z
9 pages, 6 figures, 5 tables. Accepted paper at SIGKDD 2022 conference
null
10.1145/3534678.3539043
null
null
null
null
null
null
null
2206.01191
EfficientFormer: Vision Transformers at MobileNet Speed
['Yanyu Li', 'Geng Yuan', 'Yang Wen', 'Ju Hu', 'Georgios Evangelidis', 'Sergey Tulyakov', 'Yanzhi Wang', 'Jian Ren']
['cs.CV']
Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks. However, due to the massive number of parameters and model design, \textit{e.g.}, attention mechanism, ViT-based models are generally times slower than lightweight convolutional networks. The...
2022-06-02T17:51:03Z
null
null
null
null
null
null
null
null
null
null
2206.01718
A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge
['Dustin Schwenk', 'Apoorv Khandelwal', 'Christopher Clark', 'Kenneth Marino', 'Roozbeh Mottaghi']
['cs.CV', 'cs.CL']
The Visual Question Answering (VQA) task aspires to provide a meaningful testbed for the development of AI models that can jointly reason over visual and natural language inputs. Despite a proliferation of VQA datasets, this goal is hindered by a set of common limitations. These include a reliance on relatively simplis...
2022-06-03T17:52:27Z
null
null
null
null
null
null
null
null
null
null
2206.02066
PIDNet: A Real-time Semantic Segmentation Network Inspired by PID Controllers
['Jiacong Xu', 'Zixiang Xiong', 'Shankar P. Bhattacharyya']
['cs.CV', 'cs.AI']
Two-branch network architecture has shown its efficiency and effectiveness in real-time semantic segmentation tasks. However, direct fusion of high-resolution details and low-frequency context has the drawback of detailed features being easily overwhelmed by surrounding contextual information. This overshoot phenomenon...
2022-06-04T23:16:52Z
11 pages, 9 figures; This paper will be published by CVPR2023 soon, please refer to the official version then
null
null
null
null
null
null
null
null
null
2206.02262
Diffusion-GAN: Training GANs with Diffusion
['Zhendong Wang', 'Huangjie Zheng', 'Pengcheng He', 'Weizhu Chen', 'Mingyuan Zhou']
['cs.LG', 'stat.ML']
Generative adversarial networks (GANs) are challenging to train stably, and a promising remedy of injecting instance noise into the discriminator input has not been very effective in practice. In this paper, we propose Diffusion-GAN, a novel GAN framework that leverages a forward diffusion chain to generate Gaussian-mi...
2022-06-05T20:45:01Z
Project homepage: https://github.com/Zhendong-Wang/Diffusion-GAN; ICLR 2023 camera ready version
null
null
null
null
null
null
null
null
null
2206.02369
Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation
['Jin Xu', 'Xiaojiang Liu', 'Jianhao Yan', 'Deng Cai', 'Huayang Li', 'Jian Li']
['cs.CL']
While large-scale neural language models, such as GPT2 and BART, have achieved impressive results on various text generation tasks, they tend to get stuck in undesirable sentence-level loops with maximization-based decoding algorithms (\textit{e.g.}, greedy search). This phenomenon is counter-intuitive since there are ...
2022-06-06T05:51:12Z
Accepted by NeurIPS 2022. Code is released at https://github.com/Jxu-Thu/DITTO
null
null
null
null
null
null
null
null
null
2206.02680
Separable Self-attention for Mobile Vision Transformers
['Sachin Mehta', 'Mohammad Rastegari']
['cs.CV', 'cs.AI', 'cs.LG']
Mobile vision transformers (MobileViT) can achieve state-of-the-art performance across several mobile vision tasks, including classification and detection. Though these models have fewer parameters, they have high latency as compared to convolutional neural network-based models. The main efficiency bottleneck in Mobile...
2022-06-06T15:31:35Z
Technical report
null
null
null
null
null
null
null
null
null
2206.02873
No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval
['Guilherme Moraes Rosa', 'Luiz Bonifacio', 'Vitor Jeronymo', 'Hugo Abonizio', 'Marzieh Fadaee', 'Roberto Lotufo', 'Rodrigo Nogueira']
['cs.IR', 'cs.CL', 'cs.PF']
Recent work has shown that small distilled language models are strong competitors to models that are orders of magnitude larger and slower in a wide range of information retrieval tasks. This has made distilled and dense models, due to latency constraints, the go-to choice for deployment in real-world retrieval applica...
2022-06-06T19:56:14Z
null
null
null
null
null
null
null
null
null
null
2206.03001
PP-OCRv3: More Attempts for the Improvement of Ultra Lightweight OCR System
['Chenxia Li', 'Weiwei Liu', 'Ruoyu Guo', 'Xiaoting Yin', 'Kaitao Jiang', 'Yongkun Du', 'Yuning Du', 'Lingfeng Zhu', 'Baohua Lai', 'Xiaoguang Hu', 'Dianhai Yu', 'Yanjun Ma']
['cs.CV']
Optical character recognition (OCR) technology has been widely used in various scenes, as shown in Figure 1. Designing a practical OCR system is still a meaningful but challenging task. In previous work, considering the efficiency and accuracy, we proposed a practical ultra lightweight OCR system (PP-OCR), and an optim...
2022-06-07T04:33:50Z
arXiv admin note: text overlap with arXiv:2109.03144
null
null
null
null
null
null
null
null
null
2206.03065
Universal Speech Enhancement with Score-based Diffusion
['Joan Serrà', 'Santiago Pascual', 'Jordi Pons', 'R. Oguz Araz', 'Davide Scaini']
['cs.SD', 'cs.LG', 'eess.AS']
Removing background noise from speech audio has been the subject of considerable effort, especially in recent years due to the rise of virtual communication and amateur recordings. Yet background noise is not the only unpleasant disturbance that can prevent intelligibility: reverb, clipping, codec artifacts, problemati...
2022-06-07T07:32:32Z
24 pages, 6 figures; includes appendix; examples in https://serrjoa.github.io/projects/universe/
null
null
null
null
null
null
null
null
null
2206.03382
Tutel: Adaptive Mixture-of-Experts at Scale
['Changho Hwang', 'Wei Cui', 'Yifan Xiong', 'Ziyue Yang', 'Ze Liu', 'Han Hu', 'Zilong Wang', 'Rafael Salas', 'Jithin Jose', 'Prabhat Ram', 'Joe Chau', 'Peng Cheng', 'Fan Yang', 'Mao Yang', 'Yongqiang Xiong']
['cs.DC', 'cs.CL', 'cs.CV']
Sparsely-gated mixture-of-experts (MoE) has been widely adopted to scale deep learning models to trillion-plus parameters with fixed computational cost. The algorithmic performance of MoE relies on its token routing mechanism that forwards each input token to the right sub-models or experts. While token routing dynamic...
2022-06-07T15:20:20Z
null
null
null
null
null
null
null
null
null
null
2206.03933
TURJUMAN: A Public Toolkit for Neural Arabic Machine Translation
['El Moatez Billah Nagoudi', 'AbdelRahim Elmadany', 'Muhammad Abdul-Mageed']
['cs.CL', 'cs.AI', 'cs.LG']
We present TURJUMAN, a neural toolkit for translating from 20 languages into Modern Standard Arabic (MSA). TURJUMAN exploits the recently-introduced text-to-text Transformer AraT5 model, endowing it with a powerful ability to decode into Arabic. The toolkit offers the possibility of employing a number of diverse decodi...
2022-05-27T18:05:50Z
All authors contributed equally
Proceedings of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT5), 2022
null
null
null
null
null
null
null
null
2206.04040
MobileOne: An Improved One millisecond Mobile Backbone
['Pavan Kumar Anasosalu Vasu', 'James Gabriel', 'Jeff Zhu', 'Oncel Tuzel', 'Anurag Ranjan']
['cs.CV']
Efficient neural network backbones for mobile devices are often optimized for metrics such as FLOPs or parameter count. However, these metrics may not correlate well with latency of the network when deployed on a mobile device. Therefore, we perform extensive analysis of different metrics by deploying several mobile-fr...
2022-06-08T17:55:11Z
Accepted at CVPR 2023
null
null
null
null
null
null
null
null
null
2206.04514
SAR Despeckling using a Denoising Diffusion Probabilistic Model
['Malsha V. Perera', 'Nithin Gopalakrishnan Nair', 'Wele Gedara Chaminda Bandara', 'Vishal M. Patel']
['eess.IV', 'cs.CV']
Speckle is a multiplicative noise which affects all coherent imaging modalities including Synthetic Aperture Radar (SAR) images. The presence of speckle degrades the image quality and adversely affects the performance of SAR image understanding applications such as automatic target recognition and change detection. Thu...
2022-06-09T14:00:26Z
Our code is available at https://github.com/malshaV/SAR_DDPM
null
10.1109/LGRS.2023.3270799
null
null
null
null
null
null
null
2206.04615
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
['Aarohi Srivastava', 'Abhinav Rastogi', 'Abhishek Rao', 'Abu Awal Md Shoeb', 'Abubakar Abid', 'Adam Fisch', 'Adam R. Brown', 'Adam Santoro', 'Aditya Gupta', 'Adrià Garriga-Alonso', 'Agnieszka Kluska', 'Aitor Lewkowycz', 'Akshat Agarwal', 'Alethea Power', 'Alex Ray', 'Alex Warstadt', 'Alexander W. Kocurek', 'Ali Safaya...
['cs.CL', 'cs.AI', 'cs.CY', 'cs.LG', 'stat.ML']
Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate soc...
2022-06-09T17:05:34Z
27 pages, 17 figures + references and appendices, repo: https://github.com/google/BIG-bench
Transactions on Machine Learning Research, May/2022, https://openreview.net/forum?id=uyTL5Bvosj
null
null
null
null
null
null
null
null
2206.04658
BigVGAN: A Universal Neural Vocoder with Large-Scale Training
['Sang-gil Lee', 'Wei Ping', 'Boris Ginsburg', 'Bryan Catanzaro', 'Sungroh Yoon']
['cs.SD', 'cs.CL', 'cs.LG', 'eess.AS']
Despite recent progress in generative adversarial network (GAN)-based vocoders, where the model generates raw waveform conditioned on acoustic features, it is challenging to synthesize high-fidelity audio for numerous speakers across various recording environments. In this work, we present BigVGAN, a universal vocoder ...
2022-06-09T17:56:10Z
To appear at ICLR 2023. Listen to audio samples from BigVGAN at: https://bigvgan-demo.github.io/
null
null
null
null
null
null
null
null
null
2206.04664
On Data Scaling in Masked Image Modeling
['Zhenda Xie', 'Zheng Zhang', 'Yue Cao', 'Yutong Lin', 'Yixuan Wei', 'Qi Dai', 'Han Hu']
['cs.CV']
An important goal of self-supervised learning is to enable model pre-training to benefit from almost unlimited data. However, one method that has recently become popular, namely masked image modeling (MIM), is suspected to be unable to benefit from larger data. In this work, we break this misconception through extensiv...
2022-06-09T17:58:24Z
null
null
null
null
null
null
null
null
null
null
2206.04674
Uni-Perceiver-MoE: Learning Sparse Generalist Models with Conditional MoEs
['Jinguo Zhu', 'Xizhou Zhu', 'Wenhai Wang', 'Xiaohua Wang', 'Hongsheng Li', 'Xiaogang Wang', 'Jifeng Dai']
['cs.CV']
To build an artificial neural network like the biological intelligence system, recent works have unified numerous tasks into a generalist model, which can process various tasks with shared parameters and do not have any task-specific modules. While generalist models achieve promising results on various benchmarks, they...
2022-06-09T17:59:59Z
Code shall be released at https://github.com/fundamentalvision/Uni-Perceiver
null
null
Uni-Perceiver-MoE: Learning Sparse Generalist Models with Conditional MoEs
['Jinguo Zhu', 'Xizhou Zhu', 'Wenhai Wang', 'Xiaohua Wang', 'Hongsheng Li', 'Xiaogang Wang', 'Jifeng Dai']
2022
Neural Information Processing Systems
70
102
['Computer Science']
2206.05408
Multi-instrument Music Synthesis with Spectrogram Diffusion
['Curtis Hawthorne', 'Ian Simon', 'Adam Roberts', 'Neil Zeghidour', 'Josh Gardner', 'Ethan Manilow', 'Jesse Engel']
['cs.SD', 'cs.LG', 'eess.AS']
An ideal music synthesizer should be both interactive and expressive, generating high-fidelity audio in realtime for arbitrary combinations of instruments and notes. Recent neural synthesizers have exhibited a tradeoff between domain-specific models that offer detailed control of only specific instruments, or raw wavef...
2022-06-11T03:26:15Z
null
null
null
null
null
null
null
null
null
null
2206.06588
Shopping Queries Dataset: A Large-Scale ESCI Benchmark for Improving Product Search
['Chandan K. Reddy', 'Lluís Màrquez', 'Fran Valero', 'Nikhil Rao', 'Hugo Zaragoza', 'Sambaran Bandyopadhyay', 'Arnab Biswas', 'Anlu Xing', 'Karthik Subbian']
['cs.IR', 'cs.LG']
Improving the quality of search results can significantly enhance users' experience and engagement with search engines. In spite of several recent advancements in the fields of machine learning and data mining, correctly classifying items for a particular user search query has been a long-standing challenge, which still...
2022-06-14T04:25:26Z
null
null
null
null
null
null
null
null
null
null
2206.07038
AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos
['Yanze Wu', 'Xintao Wang', 'Gen Li', 'Ying Shan']
['cs.CV', 'cs.AI']
This paper studies the problem of real-world video super-resolution (VSR) for animation videos, and reveals three key improvements for practical animation VSR. First, recent real-world super-resolution approaches typically rely on degradation simulation using basic operators without any learning capability, such as blu...
2022-06-14T17:57:11Z
NeurIPS 2022. Codes and models are available at https://github.com/TencentARC/AnimeSR
null
null
AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos
['Yanze Wu', 'Xintao Wang', 'Gengyan Li', 'Ying Shan']
2022
Neural Information Processing Systems
23
57
['Computer Science']
2206.07293
FRCRN: Boosting Feature Representation using Frequency Recurrence for Monaural Speech Enhancement
['Shengkui Zhao', 'Bin Ma', 'Karn N. Watcharasupat', 'Woon-Seng Gan']
['cs.SD', 'eess.AS']
Convolutional recurrent networks (CRN) integrating a convolutional encoder-decoder (CED) structure and a recurrent structure have achieved promising performance for monaural speech enhancement. However, feature representation across frequency context is highly constrained due to limited receptive fields in the convolut...
2022-06-15T04:29:10Z
The paper has been accepted by ICASSP 2022. 5 pages, 2 figures, 5 tables
null
null
FRCRN: Boosting Feature Representation Using Frequency Recurrence for Monaural Speech Enhancement
['Shengkui Zhao', 'Bin Ma', 'Karn N. Watcharasupat', 'W. Gan']
2022
IEEE International Conference on Acoustics, Speech, and Signal Processing
88
29
['Computer Science', 'Engineering']
2206.07557
How to Reduce Change Detection to Semantic Segmentation
['Guo-Hua Wang', 'Bin-Bin Gao', 'Chengjie Wang']
['cs.CV', 'cs.AI']
Change detection (CD) aims to identify changes that occur in an image pair taken at different times. Prior methods devise specific networks from scratch to predict change masks at the pixel level, and struggle with general segmentation problems. In this paper, we propose a new paradigm that reduces CD to semantic segmentation...
2022-06-15T14:16:30Z
Accepted by Pattern Recognition. Code is at https://github.com/DoctorKey/C-3PO
null
null
How to Reduce Change Detection to Semantic Segmentation
['G. Wang', 'Bin-Bin Gao', 'Chengjie Wang']
2022
Pattern Recognition
26
40
['Computer Science']
2206.07627
Exploring Capabilities of Monolingual Audio Transformers using Large Datasets in Automatic Speech Recognition of Czech
['Jan Lehečka', 'Jan Švec', 'Aleš Pražák', 'Josef V. Psutka']
['cs.CL', 'cs.SD', 'eess.AS']
In this paper, we present our progress in pretraining Czech monolingual audio transformers from a large dataset containing more than 80 thousand hours of unlabeled speech, and subsequently fine-tuning the model on automatic speech recognition tasks using a combination of in-domain data and almost 6 thousand hours of ou...
2022-06-15T16:14:37Z
to be published in Proceedings of INTERSPEECH 2022
Interspeech 2022, 1831-1835
10.21437/Interspeech.2022-10439
null
null
null
null
null
null
null
2206.07666
Transformer-based Automatic Speech Recognition of Formal and Colloquial Czech in MALACH Project
['Jan Lehečka', 'Josef V. Psutka', 'Josef Psutka']
['cs.CL']
Czech is a very specific language due to the large differences between its formal and colloquial forms of speech. While the formal (written) form is used mainly in official documents, literature, and public speeches, the colloquial (spoken) form is used widely among people in casual speech. This gap introduces ser...
2022-06-15T17:01:20Z
to be published in Proceedings of TSD 2022
TSD 2022. Lecture Notes in Computer Science, vol 13502. Springer, Cham
10.1007/978-3-031-16270-1_25
null
null
null
null
null
null
null
2206.07697
MACE: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields
['Ilyes Batatia', 'Dávid Péter Kovács', 'Gregor N. C. Simm', 'Christoph Ortner', 'Gábor Csányi']
['stat.ML', 'cond-mat.mtrl-sci', 'cs.LG', 'physics.chem-ph']
Creating fast and accurate force fields is a long-standing challenge in computational chemistry and materials science. Recently, several equivariant message passing neural networks (MPNNs) have been shown to outperform models built using other approaches in terms of accuracy. However, most MPNNs suffer from high comput...
2022-06-15T17:46:05Z
Advances in Neural Information Processing Systems, 2022
null
null
null
null
null
null
null
null
null
2206.07846
Action Spotting using Dense Detection Anchors Revisited: Submission to the SoccerNet Challenge 2022
['João V. B. Soares', 'Avijit Shah']
['cs.CV']
This brief technical report describes our submission to the Action Spotting SoccerNet Challenge 2022. The challenge was part of the CVPR 2022 ActivityNet Workshop. Our submission was based on a recently proposed method which focuses on increasing temporal precision via a densely sampled set of detection anchors. Due to...
2022-06-15T23:22:36Z
v2: a few more experiments, more detailed method description
null
null
null
null
null
null
null
null
null
2206.07959
Simple-BEV: What Really Matters for Multi-Sensor BEV Perception?
['Adam W. Harley', 'Zhaoyuan Fang', 'Jie Li', 'Rares Ambrus', 'Katerina Fragkiadaki']
['cs.CV']
Building 3D perception systems for autonomous vehicles that do not rely on high-density LiDAR is a critical research problem because of the expense of LiDAR systems compared to cameras and other sensors. Recent research has developed a variety of camera-only methods, where features are differentiably "lifted" from the ...
2022-06-16T06:57:32Z
null
null
null
Simple-BEV: What Really Matters for Multi-Sensor BEV Perception?
['Adam W. Harley', 'Zhaoyuan Fang', 'Jie Li', 'Rares Ambrus', 'Katerina Fragkiadaki']
2022
IEEE International Conference on Robotics and Automation
131
47
['Computer Science']