| arxiv_id (float64) | title (string) | authors (string) | categories (string) | summary (string) | published (date string) | comments (string) | journal_ref (string) | doi (string) | ss_title (string) | ss_authors (string) | ss_year (float64) | ss_venue (string) | ss_citationCount (float64) | ss_referenceCount (float64) | ss_fieldsOfStudy (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2108.11250 | YOLOP: You Only Look Once for Panoptic Driving Perception | ['Dong Wu', 'Manwen Liao', 'Weitian Zhang', 'Xinggang Wang', 'Xiang Bai', 'Wenqing Cheng', 'Wenyu Liu'] | ['cs.CV'] | A panoptic driving perception system is an essential part of autonomous
driving. A high-precision and real-time perception system can assist the
vehicle in making the reasonable decision while driving. We present a panoptic
driving perception network (YOLOP) to perform traffic object detection,
drivable area segmentati... | 2021-08-25T14:19:42Z | null | [J]. Machine Intelligence Research, 2022: 1-13 | 10.1007/s11633-022-1339-y | null | null | null | null | null | null | null |
2108.12009 | EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa | ['Taewoon Kim', 'Piek Vossen'] | ['cs.CL'] | We present EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with
RoBERTa, a simple yet expressive scheme of solving the ERC (emotion recognition
in conversation) task. By simply prepending speaker names to utterances and
inserting separation tokens between the utterances in a dialogue, EmoBERTa can
learn int... | 2021-08-26T19:34:26Z | 4 pages, not including references and appendix | null | null | EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa | ['Taewoon Kim', 'Piek Vossen'] | 2021 | arXiv.org | 102 | 31 | ['Computer Science'] |
2108.12409 | Train Short, Test Long: Attention with Linear Biases Enables Input
Length Extrapolation | ['Ofir Press', 'Noah A. Smith', 'Mike Lewis'] | ['cs.CL'] | Since the introduction of the transformer model by Vaswani et al. (2017), a
fundamental question has yet to be answered: how does a model achieve
extrapolation at inference time for sequences that are longer than it saw
during training? We first show that extrapolation can be enabled by simply
changing the position rep... | 2021-08-27T17:35:06Z | null | null | null | null | null | null | null | null | null | null |
2108.12626 | HeadlineCause: A Dataset of News Headlines for Detecting Causalities | ['Ilya Gusev', 'Alexey Tikhonov'] | ['cs.CL', 'cs.LG'] | Detecting implicit causal relations in texts is a task that requires both
common sense and world knowledge. Existing datasets are focused either on
commonsense causal reasoning or explicit causal relations. In this work, we
present HeadlineCause, a dataset for detecting implicit causal relations
between pairs of news h... | 2021-08-28T11:12:49Z | null | null | null | null | null | null | null | null | null | null |
2108.12960 | LOT: A Story-Centric Benchmark for Evaluating Chinese Long Text
Understanding and Generation | ['Jian Guan', 'Zhuoer Feng', 'Yamei Chen', 'Ruilin He', 'Xiaoxi Mao', 'Changjie Fan', 'Minlie Huang'] | ['cs.CL'] | Standard multi-task benchmarks are essential for developing pretraining
models that can generalize to various downstream tasks. Existing benchmarks for
natural language processing (NLP) usually focus only on understanding or
generating short texts. However, long text modeling requires many distinct
abilities in contras... | 2021-08-30T02:38:32Z | Accepted by TACL 2022. Benchmark datasets, pretraining models,
appendix url: https://github.com/thu-coai/LOT-LongLM | null | null | null | null | null | null | null | null | null |
2108.13320 | Neural HMMs are all you need (for high-quality attention-free TTS) | ['Shivam Mehta', 'Éva Székely', 'Jonas Beskow', 'Gustav Eje Henter'] | ['eess.AS', 'cs.HC', 'cs.LG', 'cs.SD', '68T07', 'I.2.7; I.2.6; G.3; H.5.5'] | Neural sequence-to-sequence TTS has achieved significantly better output
quality than statistical speech synthesis using HMMs. However, neural TTS is
generally not probabilistic and uses non-monotonic attention. Attention
failures increase training time and can make synthesis babble incoherently.
This paper describes h... | 2021-08-30T15:38:00Z | 5 pages, 2 figures; final version for ICASSP 2022 | null | 10.1109/ICASSP43922.2022.9746686 | Neural HMMS Are All You Need (For High-Quality Attention-Free TTS) | ['Shivam Mehta', 'Éva Székely', 'J. Beskow', 'G. Henter'] | 2021 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 18 | 50 | ['Computer Science', 'Engineering'] |
2108.13493 | Semi-Supervised Exaggeration Detection of Health Science Press Releases | ['Dustin Wright', 'Isabelle Augenstein'] | ['cs.CL', 'cs.LG'] | Public trust in science depends on honest and factual communication of
scientific papers. However, recent studies have demonstrated a tendency of news
media to misrepresent scientific papers by exaggerating their findings. Given
this, we present a formalization of and study into the problem of exaggeration
detection in... | 2021-08-30T19:32:20Z | Accepted to EMNLP 2021; 13 pages, 6 figures, 9 tables | null | null | Semi-Supervised Exaggeration Detection of Health Science Press Releases | ['Dustin Wright', 'Isabelle Augenstein'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 13 | 38 | ['Computer Science'] |
2108.13751 | A Search Engine for Discovery of Scientific Challenges and Directions | ['Dan Lahav', 'Jon Saad Falcon', 'Bailey Kuehl', 'Sophie Johnson', 'Sravanthi Parasa', 'Noam Shomron', 'Duen Horng Chau', 'Diyi Yang', 'Eric Horvitz', 'Daniel S. Weld', 'Tom Hope'] | ['cs.CL', 'cs.HC', 'cs.IR'] | Keeping track of scientific challenges, advances and emerging directions is a
fundamental part of research. However, researchers face a flood of papers that
hinders discovery of important knowledge. In biomedicine, this directly impacts
human lives. To address this problem, we present a novel task of extraction and
sea... | 2021-08-31T11:08:20Z | AAAI 2022 | AAAI 2022 | null | null | null | null | null | null | null | null |
2108.13897 | mMARCO: A Multilingual Version of the MS MARCO Passage Ranking Dataset | ['Luiz Bonifacio', 'Vitor Jeronymo', 'Hugo Queiroz Abonizio', 'Israel Campiotti', 'Marzieh Fadaee', 'Roberto Lotufo', 'Rodrigo Nogueira'] | ['cs.CL', 'cs.AI'] | The MS MARCO ranking dataset has been widely used for training deep learning
models for IR tasks, achieving considerable effectiveness on diverse zero-shot
scenarios. However, this type of resource is scarce in languages other than
English. In this work, we present mMARCO, a multilingual version of the MS
MARCO passage... | 2021-08-31T14:53:37Z | null | null | null | mMARCO: A Multilingual Version of the MS MARCO Passage Ranking Dataset | ['L. Bonifacio', 'Israel Campiotti', 'R. Lotufo', 'Rodrigo Nogueira'] | 2021 | null | 114 | 52 | ['Computer Science'] |
2109.00122 | FinQA: A Dataset of Numerical Reasoning over Financial Data | ['Zhiyu Chen', 'Wenhu Chen', 'Charese Smiley', 'Sameena Shah', 'Iana Borova', 'Dylan Langdon', 'Reema Moussa', 'Matt Beane', 'Ting-Hao Huang', 'Bryan Routledge', 'William Yang Wang'] | ['cs.CL'] | The sheer volume of financial statements makes it difficult for humans to
access and analyze a business's financials. Robust numerical reasoning likewise
faces unique challenges in this domain. In this work, we focus on answering
deep questions over financial data, aiming to automate the analysis of a large
corpus of f... | 2021-09-01T00:08:14Z | EMNLP 2021 | null | null | FinQA: A Dataset of Numerical Reasoning over Financial Data | ['Zhiyu Chen', 'Wenhu Chen', 'Charese Smiley', 'Sameena Shah', 'Iana Borova', 'Dylan Langdon', 'Reema Moussa', 'Matthew I. Beane', "Ting-Hao 'Kenneth' Huang", 'Bryan R. Routledge', 'W. Wang'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 356 | 44 | ['Computer Science'] |
2109.00859 | CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for
Code Understanding and Generation | ['Yue Wang', 'Weishi Wang', 'Shafiq Joty', 'Steven C. H. Hoi'] | ['cs.CL', 'cs.PL'] | Pre-trained models for Natural Languages (NL) like BERT and GPT have been
recently shown to transfer well to Programming Languages (PL) and largely
benefit a broad set of code-related tasks. Despite their success, most current
methods either rely on an encoder-only (or decoder-only) pre-training that is
suboptimal for ... | 2021-09-02T12:21:06Z | Accepted to EMNLP 2021. 13 pages | null | null | null | null | null | null | null | null | null |
2109.00904 | MultiEURLEX -- A multi-lingual and multi-label legal document
classification dataset for zero-shot cross-lingual transfer | ['Ilias Chalkidis', 'Manos Fergadiotis', 'Ion Androutsopoulos'] | ['cs.CL'] | We introduce MULTI-EURLEX, a new multilingual dataset for topic
classification of legal documents. The dataset comprises 65k European Union
(EU) laws, officially translated in 23 languages, annotated with multiple
labels from the EUROVOC taxonomy. We highlight the effect of temporal concept
drift and the importance of ... | 2021-09-02T12:52:55Z | 9 pages, long paper at EMNLP 2021 proceedings | null | null | null | null | null | null | null | null | null |
2109.01078 | Skim-Attention: Learning to Focus via Document Layout | ['Laura Nguyen', 'Thomas Scialom', 'Jacopo Staiano', 'Benjamin Piwowarski'] | ['cs.CL'] | Transformer-based pre-training techniques of text and layout have proven
effective in a number of document understanding tasks. Despite this success,
multimodal pre-training models suffer from very high computational and memory
costs. Motivated by human reading strategies, this paper presents
Skim-Attention, a new atte... | 2021-09-02T16:44:22Z | 15 pages, 6 figures, to be published in EMNLP 2021 Findings | null | null | null | null | null | null | null | null | null |
2109.01134 | Learning to Prompt for Vision-Language Models | ['Kaiyang Zhou', 'Jingkang Yang', 'Chen Change Loy', 'Ziwei Liu'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Large pre-trained vision-language models like CLIP have shown great potential
in learning representations that are transferable across a wide range of
downstream tasks. Different from the traditional representation learning that
is based mostly on discretized labels, vision-language pre-training aligns
images and texts... | 2021-09-02T17:57:31Z | International Journal of Computer Vision (IJCV), 2022. Update: Adds
results on the DOSCO (DOmain Shift in COntext) benchmark | null | 10.1007/s11263-022-01653-1 | null | null | null | null | null | null | null |
2109.01163 | Efficient conformer: Progressive downsampling and grouped attention for
automatic speech recognition | ['Maxime Burchi', 'Valentin Vielzeuf'] | ['eess.AS', 'cs.AI', 'cs.CL', 'cs.SD'] | The recently proposed Conformer architecture has shown state-of-the-art
performances in Automatic Speech Recognition by combining convolution with
attention to model both local and global dependencies. In this paper, we study
how to reduce the Conformer architecture complexity with a limited computing
budget, leading t... | 2021-08-31T07:48:06Z | null | ASRU 2021, Dec 2021, Cartagena, Colombia | null | null | null | null | null | null | null | null |
2109.01652 | Finetuned Language Models Are Zero-Shot Learners | ['Jason Wei', 'Maarten Bosma', 'Vincent Y. Zhao', 'Kelvin Guu', 'Adams Wei Yu', 'Brian Lester', 'Nan Du', 'Andrew M. Dai', 'Quoc V. Le'] | ['cs.CL'] | This paper explores a simple method for improving the zero-shot learning
abilities of language models. We show that instruction tuning -- finetuning
language models on a collection of tasks described via instructions --
substantially improves zero-shot performance on unseen tasks.
We take a 137B parameter pretrained ... | 2021-09-03T17:55:52Z | Version 5. Find list of changes in Appendix F (page 35) | null | null | Finetuned Language Models Are Zero-Shot Learners | ['Jason Wei', 'Maarten Bosma', 'Vincent Zhao', 'Kelvin Guu', 'Adams Wei Yu', 'Brian Lester', 'Nan Du', 'Andrew M. Dai', 'Quoc V. Le'] | 2021 | International Conference on Learning Representations | 3814 | 169 | ['Computer Science'] |
2109.01653 | CREAK: A Dataset for Commonsense Reasoning over Entity Knowledge | ['Yasumasa Onoe', 'Michael J. Q. Zhang', 'Eunsol Choi', 'Greg Durrett'] | ['cs.CL', 'cs.AI'] | Most benchmark datasets targeting commonsense reasoning focus on everyday
scenarios: physical knowledge like knowing that you could fill a cup under a
waterfall [Talmor et al., 2019], social knowledge like bumping into someone is
awkward [Sap et al., 2019], and other generic situations. However, there is a
rich space o... | 2021-09-03T17:56:40Z | null | null | null | CREAK: A Dataset for Commonsense Reasoning over Entity Knowledge | ['Yasumasa Onoe', 'Michael J.Q. Zhang', 'Eunsol Choi', 'Greg Durrett'] | 2021 | NeurIPS Datasets and Benchmarks | 87 | 44 | ['Computer Science'] |
2109.01903 | Robust fine-tuning of zero-shot models | ['Mitchell Wortsman', 'Gabriel Ilharco', 'Jong Wook Kim', 'Mike Li', 'Simon Kornblith', 'Rebecca Roelofs', 'Raphael Gontijo-Lopes', 'Hannaneh Hajishirzi', 'Ali Farhadi', 'Hongseok Namkoong', 'Ludwig Schmidt'] | ['cs.CV', 'cs.LG'] | Large pre-trained models such as CLIP or ALIGN offer consistent accuracy
across a range of data distributions when performing zero-shot inference (i.e.,
without fine-tuning on a specific dataset). Although existing fine-tuning
methods substantially improve accuracy on a given target distribution, they
often reduce robu... | 2021-09-04T17:11:28Z | CVPR 2022 | null | null | null | null | null | null | null | null | null |
2109.02492 | DialogLM: Pre-trained Model for Long Dialogue Understanding and
Summarization | ['Ming Zhong', 'Yang Liu', 'Yichong Xu', 'Chenguang Zhu', 'Michael Zeng'] | ['cs.CL'] | Dialogue is an essential part of human communication and cooperation.
Existing research mainly focuses on short dialogue scenarios in a one-on-one
fashion. However, multi-person interactions in the real world, such as meetings
or interviews, are frequently over a few thousand words. There is still a lack
of correspondi... | 2021-09-06T13:55:03Z | Accepted by AAAI 2022 | null | null | null | null | null | null | null | null | null |
2109.02844 | Quasinormal modes in two-photon autocorrelation and the geometric-optics
approximation | ['Wei-Liang Qian', 'Kai Lin', 'Xiao-Mei Kuang', 'Bin Wang', 'Rui-Hong Yue'] | ['gr-qc'] | In this work, we study the black hole light echoes in terms of the two-photon
autocorrelation and explore their connection with the quasinormal modes. It is
shown that the above time-domain phenomenon can be analyzed by utilizing the
well-known frequency-domain relations between the quasinormal modes and
characteristic... | 2021-09-07T03:55:51Z | 12 pages, 3 figures | null | 10.1140/epjc/s10052-022-10155-w | null | null | null | null | null | null | null |
2109.02903 | IndicBART: A Pre-trained Model for Indic Natural Language Generation | ['Raj Dabre', 'Himani Shrotriya', 'Anoop Kunchukuttan', 'Ratish Puduppully', 'Mitesh M. Khapra', 'Pratyush Kumar'] | ['cs.CL', 'cs.AI'] | In this paper, we study pre-trained sequence-to-sequence models for a group
of related languages, with a focus on Indic languages. We present IndicBART, a
multilingual, sequence-to-sequence pre-trained model focusing on 11 Indic
languages and English. IndicBART utilizes the orthographic similarity between
Indic scripts... | 2021-09-07T07:08:33Z | Published at ACL 2022, 15 pages | null | 10.18653/v1/2022.findings-acl.145 | IndicBART: A Pre-trained Model for Indic Natural Language Generation | ['Raj Dabre', 'Himani Shrotriya', 'Anoop Kunchukuttan', 'Ratish Puduppully', 'Mitesh M. Khapra', 'Pratyush Kumar'] | 2021 | Findings | 74 | 55 | ['Computer Science'] |
2109.03564 | NSP-BERT: A Prompt-based Few-Shot Learner Through an Original
Pre-training Task--Next Sentence Prediction | ['Yi Sun', 'Yu Zheng', 'Chao Hao', 'Hangping Qiu'] | ['cs.CL', 'cs.AI'] | Using prompts to utilize language models to perform various downstream tasks,
also known as prompt-based learning or prompt-learning, has lately gained
significant success in comparison to the pre-train and fine-tune paradigm.
Nonetheless, virtually all prompt-based methods are token-level, meaning they
all utilize GPT... | 2021-09-08T11:57:08Z | Published at COLING2022, long paper | null | null | null | null | null | null | null | null | null |
2109.03570 | Biomedical and Clinical Language Models for Spanish: On the Benefits of
Domain-Specific Pretraining in a Mid-Resource Scenario | ['Casimiro Pio Carrino', 'Jordi Armengol-Estapé', 'Asier Gutiérrez-Fandiño', 'Joan Llop-Palao', 'Marc Pàmies', 'Aitor Gonzalez-Agirre', 'Marta Villegas'] | ['cs.CL'] | This work presents biomedical and clinical language models for Spanish by
experimenting with different pretraining choices, such as masking at word and
subword level, varying the vocabulary size and testing with domain data,
looking for better language representations. Interestingly, in the absence of
enough clinical d... | 2021-09-08T12:12:07Z | 9 pages | null | null | Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario | ['C. Carrino', 'Jordi Armengol-Estapé', 'Asier Gutiérrez-Fandiño', 'Joan Llop-Palao', 'Marc Pàmies', 'Aitor Gonzalez-Agirre', 'Marta Villegas'] | 2021 | arXiv.org | 44 | 34 | ['Computer Science'] |
2109.03814 | Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with
Transformers | ['Zhiqi Li', 'Wenhai Wang', 'Enze Xie', 'Zhiding Yu', 'Anima Anandkumar', 'Jose M. Alvarez', 'Ping Luo', 'Tong Lu'] | ['cs.CV'] | Panoptic segmentation involves a combination of joint semantic segmentation
and instance segmentation, where image contents are divided into two types:
things and stuff. We present Panoptic SegFormer, a general framework for
panoptic segmentation with transformers. It contains three innovative
components: an efficient ... | 2021-09-08T17:59:12Z | Accepted to CVPR 2022 | null | null | null | null | null | null | null | null | null |
2109.03955 | NU:BRIEF -- A Privacy-aware Newsletter Personalization Engine for
Publishers | ['Ernesto Diaz-Aviles', 'Claudia Orellana-Rodriguez', 'Igor Brigadir', 'Reshma Narayanan Kutty'] | ['cs.DL', 'cs.CY', 'cs.HC', 'cs.IR', 'cs.LG'] | Newsletters have (re-) emerged as a powerful tool for publishers to engage
with their readers directly and more effectively. Despite the diversity in
their audiences, publishers' newsletters remain largely a one-size-fits-all
offering, which is suboptimal. In this paper, we present NU:BRIEF, a web
application for publi... | 2021-09-08T22:36:05Z | Fifteenth ACM Conference on Recommender Systems (RecSys '21),
September 27-October 1, 2021, Amsterdam, Netherlands | null | 10.1145/3460231.3478884 | null | null | null | null | null | null | null |
2109.04127 | Word-Level Coreference Resolution | ['Vladimir Dobrovolskii'] | ['cs.CL', 'I.2.7'] | Recent coreference resolution models rely heavily on span representations to
find coreference links between word spans. As the number of spans is $O(n^2)$
in the length of text and the number of potential links is $O(n^4)$, various
pruning techniques are necessary to make this approach computationally
feasible. We prop... | 2021-09-09T09:26:02Z | Accepted to EMNLP-2021 | In Proceedings of the 2021 Conference on Empirical Methods in
Natural Language Processing (pp. 7670-7675). Association for Computational
Linguistics 2021 | 10.18653/v1/2021.emnlp-main.605 | Word-Level Coreference Resolution | ['V. Dobrovolskii'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 74 | 22 | ['Computer Science'] |
2109.04263 | Complex chiral columns made of achiral quinoxaline derivatives with
semi-flexible cores | ['Paulina Rybak', 'Adam Krowczynski', 'Jadwiga Szydlowska', 'Damian Pociecha', 'Ewa Gorecka'] | ['cond-mat.mtrl-sci', 'cond-mat.soft'] | Mesogenic materials, quinoxaline derivatives with semi-flexible cores, are
reported to form new type of 3D columnar structure with large crystallographic
unit cell and Fddd symmetry below columnar hexagonal phase. The 3D columnar
structure is a result of frustration imposed by arrangement of helical columns
of opposite... | 2021-09-09T13:33:22Z | null | null | null | null | null | null | null | null | null | null |
2109.04607 | IndoBERTweet: A Pretrained Language Model for Indonesian Twitter with
Effective Domain-Specific Vocabulary Initialization | ['Fajri Koto', 'Jey Han Lau', 'Timothy Baldwin'] | ['cs.CL'] | We present IndoBERTweet, the first large-scale pretrained model for
Indonesian Twitter that is trained by extending a monolingually-trained
Indonesian BERT model with additive domain-specific vocabulary. We focus in
particular on efficient model adaptation under vocabulary mismatch, and
benchmark different ways of init... | 2021-09-10T01:27:51Z | Accepted at EMNLP 2021 | null | null | IndoBERTweet: A Pretrained Language Model for Indonesian Twitter with Effective Domain-Specific Vocabulary Initialization | ['Fajri Koto', 'Jey Han Lau', 'Timothy Baldwin'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 85 | 38 | ['Computer Science'] |
2109.04650 | What Changes Can Large-scale Language Models Bring? Intensive Study on
HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers | ['Boseop Kim', 'HyoungSeok Kim', 'Sang-Woo Lee', 'Gichang Lee', 'Donghyun Kwak', 'Dong Hyeon Jeon', 'Sunghyun Park', 'Sungju Kim', 'Seonhoon Kim', 'Dongpil Seo', 'Heungsub Lee', 'Minyoung Jeong', 'Sungjae Lee', 'Minsub Kim', 'Suk Hyun Ko', 'Seokhun Kim', 'Taeyong Park', 'Jinuk Kim', 'Soyoung Kang', 'Na-Hyeon Ryu', 'Kan... | ['cs.CL'] | GPT-3 shows remarkable in-context learning ability of large-scale language
models (LMs) trained on hundreds of billion scale data. Here we address some
remaining issues less reported by the GPT-3 paper, such as a non-English LM,
the performances of different sized models, and the effect of recently
introduced prompt op... | 2021-09-10T03:32:19Z | Accepted to EMNLP2021 as a long paper. Fixed some typos | null | null | What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers | ['Boseop Kim', 'Hyoungseok Kim', 'Sang-Woo Lee', 'Gichang Lee', 'Donghyun Kwak', 'D. Jeon', 'Sunghyun Park', 'Sungju Kim', 'Seonhoon Kim', 'D. Seo', 'Heungsub Lee', 'Minyoung Jeong', 'Sungjae Lee', 'Minsub Kim', 'SukHyun Ko', 'Seokhun Kim', 'Taeyong Park', 'Jinuk Kim', 'Soyoung Kang', 'Nahyeon Ryu', 'Kang Min Yoo', 'Mi... | 2021 | Conference on Empirical Methods in Natural Language Processing | 124 | 56 | ['Computer Science'] |
2109.04655 | Zero-Shot Dialogue State Tracking via Cross-Task Transfer | ['Zhaojiang Lin', 'Bing Liu', 'Andrea Madotto', 'Seungwhan Moon', 'Paul Crook', 'Zhenpeng Zhou', 'Zhiguang Wang', 'Zhou Yu', 'Eunjoon Cho', 'Rajen Subba', 'Pascale Fung'] | ['cs.CL'] | Zero-shot transfer learning for dialogue state tracking (DST) enables us to
handle a variety of task-oriented dialogue domains without the expense of
collecting in-domain data. In this work, we propose to transfer the
\textit{cross-task} knowledge from general question answering (QA) corpora for
the zero-shot DST task.... | 2021-09-10T03:57:56Z | EMNLP 2021 | null | null | null | null | null | null | null | null | null |
2109.04689 | Generating Self-Contained and Summary-Centric Question Answer Pairs via
Differentiable Reward Imitation Learning | ['Li Zhou', 'Kevin Small', 'Yong Zhang', 'Sandeep Atluri'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Motivated by suggested question generation in conversational news
recommendation systems, we propose a model for generating question-answer pairs
(QA pairs) with self-contained, summary-centric questions and
length-constrained, article-summarizing answers. We begin by collecting a new
dataset of news articles with ques... | 2021-09-10T06:34:55Z | To appear in Proceedings of EMNLP 2021 | null | null | null | null | null | null | null | null | null |
2109.04711 | Pre-train or Annotate? Domain Adaptation with a Constrained Budget | ['Fan Bai', 'Alan Ritter', 'Wei Xu'] | ['cs.CL'] | Recent work has demonstrated that pre-training in-domain language models can
boost performance when adapting to a new domain. However, the costs associated
with pre-training raise an important question: given a fixed budget, what steps
should an NLP practitioner take to maximize performance? In this paper, we view
doma... | 2021-09-10T07:28:26Z | Accepted to EMNLP 2021 | null | null | null | null | null | null | null | null | null |
2109.04838 | Block Pruning For Faster Transformers | ['François Lagunas', 'Ella Charlaix', 'Victor Sanh', 'Alexander M. Rush'] | ['cs.LG', 'cs.CL', 'I.2.6; I.2.7'] | Pre-training has improved model accuracy for both classification and
generation tasks at the cost of introducing much larger and slower models.
Pruning methods have proven to be an effective way of reducing model size,
whereas distillation methods are proven for speeding up inference. We introduce
a block pruning appro... | 2021-09-10T12:46:32Z | EMNLP 2021. Code, hyper-parameters, evaluation results and
checkpoints available at https://github.com/huggingface/nn_pruning | null | null | null | null | null | null | null | null | null |
2109.05014 | An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA | ['Zhengyuan Yang', 'Zhe Gan', 'Jianfeng Wang', 'Xiaowei Hu', 'Yumao Lu', 'Zicheng Liu', 'Lijuan Wang'] | ['cs.CV'] | Knowledge-based visual question answering (VQA) involves answering questions
that require external knowledge not present in the image. Existing methods
first retrieve knowledge from external resources, then reason over the selected
knowledge, the input image, and question for answer prediction. However, this
two-step a... | 2021-09-10T17:51:06Z | AAAI 2022 (Oral Presentation) | null | null | null | null | null | null | null | null | null |
2109.05070 | Instance-Conditioned GAN | ['Arantxa Casanova', 'Marlène Careil', 'Jakob Verbeek', 'Michal Drozdzal', 'Adriana Romero-Soriano'] | ['cs.CV', 'cs.LG'] | Generative Adversarial Networks (GANs) can generate near photo realistic
images in narrow domains such as human faces. Yet, modeling complex
distributions of datasets such as ImageNet and COCO-Stuff remains challenging
in unconditional settings. In this paper, we take inspiration from kernel
density estimation techniqu... | 2021-09-10T19:08:45Z | Accepted at NeurIPS2021 | null | null | Instance-Conditioned GAN | ['Arantxa Casanova', 'Marlene Careil', 'Jakob Verbeek', 'M. Drozdzal', 'Adriana Romero-Soriano'] | 2021 | Neural Information Processing Systems | 138 | 59 | ['Computer Science'] |
2109.05093 | PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding
from Language Models | ['Torsten Scholak', 'Nathan Schucher', 'Dzmitry Bahdanau'] | ['cs.CL', 'cs.PL'] | Large pre-trained language models for textual data have an unconstrained
output space; at each decoding step, they can produce any of 10,000s of
sub-word tokens. When fine-tuned to target constrained formal languages like
SQL, these models often generate invalid code, rendering it unusable. We
propose PICARD (code and ... | 2021-09-10T20:14:08Z | Accepted to EMNLP 2021. 7 pages | null | null | PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models | ['Torsten Scholak', 'Nathan Schucher', 'Dzmitry Bahdanau'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 397 | 20 | ['Computer Science'] |
2109.05153 | Natural SQL: Making SQL Easier to Infer from Natural Language
Specifications | ['Yujian Gan', 'Xinyun Chen', 'Jinxia Xie', 'Matthew Purver', 'John R. Woodward', 'John Drake', 'Qiaofu Zhang'] | ['cs.CL'] | Addressing the mismatch between natural language descriptions and the
corresponding SQL queries is a key challenge for text-to-SQL translation. To
bridge this gap, we propose an SQL intermediate representation (IR) called
Natural SQL (NatSQL). Specifically, NatSQL preserves the core functionalities
of SQL, while it sim... | 2021-09-11T01:53:55Z | To appear in EMNLP Findings 2021 | null | null | Natural SQL: Making SQL Easier to Infer from Natural Language Specifications | ['Yujian Gan', 'Xinyun Chen', 'Jinxia Xie', 'Matthew Purver', 'J. Woodward', 'J. Drake', 'Qiaofu Zhang'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 95 | 32 | ['Computer Science'] |
2109.05217 | Empirical Analysis of Training Strategies of Transformer-based Japanese
Chit-chat Systems | ['Hiroaki Sugiyama', 'Masahiro Mizukami', 'Tsunehiro Arimoto', 'Hiromi Narimatsu', 'Yuya Chiba', 'Hideharu Nakajima', 'Toyomi Meguro'] | ['cs.CL', 'cs.AI'] | In recent years, several high-performance conversational systems have been
proposed based on the Transformer encoder-decoder model. Although previous
studies analyzed the effects of the model parameters and the decoding method on
subjective dialogue evaluations with overall metrics, they did not analyze how
the differe... | 2021-09-11T08:24:23Z | 11 pages, 2 figures | null | null | null | null | null | null | null | null | null |
2109.05460 | End-to-End Conversational Search for Online Shopping with Utterance
Transfer | ['Liqiang Xiao', 'Jun Ma', 'Xin Luna Dong', 'Pascual Martinez-Gomez', 'Nasser Zalmout', 'Wei Chen', 'Tong Zhao', 'Hao He', 'Yaohui Jin'] | ['cs.CL', 'cs.AI'] | Successful conversational search systems can present natural, adaptive and
interactive shopping experience for online shopping customers. However,
building such systems from scratch faces real word challenges from both
imperfect product schema/knowledge and lack of training dialog data. In this
work we first propose Con... | 2021-09-12T08:33:44Z | null | null | null | End-to-End Conversational Search for Online Shopping with Utterance Transfer | ['Liqiang Xiao', 'Jun Ma', 'Xin Dong', 'Pascual Martínez-Gómez', 'Nasser Zalmout', 'Wei Chen', 'Tong Zhao', 'Hao He', 'Yaohui Jin'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 12 | 29 | ['Computer Science'] |
2109.05729 | CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language
Understanding and Generation | ['Yunfan Shao', 'Zhichao Geng', 'Yitao Liu', 'Junqi Dai', 'Hang Yan', 'Fei Yang', 'Li Zhe', 'Hujun Bao', 'Xipeng Qiu'] | ['cs.CL'] | In this paper, we take the advantage of previous pre-trained models (PTMs)
and propose a novel Chinese Pre-trained Unbalanced Transformer (CPT). Different
from previous Chinese PTMs, CPT is designed to utilize the shared knowledge
between natural language understanding (NLU) and natural language generation
(NLG) to boo... | 2021-09-13T06:25:45Z | Code is available at https://github.com/fastnlp/CPT | null | 10.1007/s11432-021-3536-5 | CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation | ['Yunfan Shao', 'Zhichao Geng', 'Yitao Liu', 'Junqi Dai', 'Fei Yang', 'Li Zhe', 'H. Bao', 'Xipeng Qiu'] | 2021 | Science China Information Sciences | 152 | 46 | ['Computer Science'] |
2109.06304 | Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to
Corpus Exploration | ['Shufan Wang', 'Laure Thompson', 'Mohit Iyyer'] | ['cs.CL'] | Phrase representations derived from BERT often do not exhibit complex phrasal
compositionality, as the model relies instead on lexical similarity to
determine semantic relatedness. In this paper, we propose a contrastive
fine-tuning objective that enables BERT to produce more powerful phrase
embeddings. Our approach (P... | 2021-09-13T20:31:57Z | EMNLP 2021 Conference Camera Ready | null | null | Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to Corpus Exploration | ['Shufan Wang', 'Laure Thompson', 'Mohit Iyyer'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 68 | 49 | ['Computer Science'] |
2109.06379 | Compression, Transduction, and Creation: A Unified Framework for
Evaluating Natural Language Generation | ['Mingkai Deng', 'Bowen Tan', 'Zhengzhong Liu', 'Eric P. Xing', 'Zhiting Hu'] | ['cs.CL', 'cs.LG'] | Natural language generation (NLG) spans a broad range of tasks, each of which
serves for specific objectives and desires different properties of generated
text. The complexity makes automatic evaluation of NLG particularly
challenging. Previous work has typically focused on a single task and developed
individual evalua... | 2021-09-14T01:00:42Z | EMNLP 2021, Code available at
https://github.com/tanyuqian/ctc-gen-eval | null | null | null | null | null | null | null | null | null |
2109.06402 | Exploring Personality and Online Social Engagement: An Investigation of
MBTI Users on Twitter | ['Partha Kadambi'] | ['cs.CL', 'cs.LG'] | Text-based personality prediction by computational models is an emerging
field with the potential to significantly improve on key weaknesses of
survey-based personality assessment. We investigate 3848 profiles from Twitter
with self-labeled Myers-Briggs personality traits (MBTI) - a framework closely
related to the Fiv... | 2021-09-14T02:26:30Z | null | null | null | null | null | null | null | null | null | null |
2109.06870 | Performance-Efficiency Trade-offs in Unsupervised Pre-training for
Speech Recognition | ['Felix Wu', 'Kwangyoun Kim', 'Jing Pan', 'Kyu Han', 'Kilian Q. Weinberger', 'Yoav Artzi'] | ['cs.CL', 'cs.LG', 'cs.SD', 'eess.AS'] | This paper is a study of performance-efficiency trade-offs in pre-trained
models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and
formalize several architecture designs that influence both the model
performance and its efficiency. Putting together all our observations, we
introduce SEW (Squeezed and... | 2021-09-14T17:58:09Z | Code available at https://github.com/asappresearch/sew | null | null | Performance-Efficiency Trade-Offs in Unsupervised Pre-Training for Speech Recognition | ['Felix Wu', 'Kwangyoun Kim', 'Jing Pan', 'Kyu J. Han', 'Kilian Q. Weinberger', 'Yoav Artzi'] | 2021 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 75 | 73 | ['Computer Science', 'Engineering'] |
2109.06912 | fairseq S^2: A Scalable and Integrable Speech Synthesis Toolkit | ['Changhan Wang', 'Wei-Ning Hsu', 'Yossi Adi', 'Adam Polyak', 'Ann Lee', 'Peng-Jen Chen', 'Jiatao Gu', 'Juan Pino'] | ['eess.AS', 'cs.CL', 'cs.SD'] | This paper presents fairseq S^2, a fairseq extension for speech synthesis. We
implement a number of autoregressive (AR) and non-AR text-to-speech models, and
their multi-speaker variants. To enable training speech synthesis models with
less curated data, a number of preprocessing tools are built and their
importance is... | 2021-09-14T18:20:28Z | Accepted to EMNLP 2021 Demo | null | null | null | null | null | null | null | null | null |
2109.07161 | Resolution-robust Large Mask Inpainting with Fourier Convolutions | ['Roman Suvorov', 'Elizaveta Logacheva', 'Anton Mashikhin', 'Anastasia Remizova', 'Arsenii Ashukha', 'Aleksei Silvestrov', 'Naejin Kong', 'Harshith Goka', 'Kiwoong Park', 'Victor Lempitsky'] | ['cs.CV', 'eess.IV'] | Modern image inpainting systems, despite the significant progress, often
struggle with large missing areas, complex geometric structures, and
high-resolution images. We find that one of the main reasons for that is the
lack of an effective receptive field in both the inpainting network and the
loss function. To allevia... | 2021-09-15T08:54:29Z | Winter Conference on Applications of Computer Vision (WACV 2022) | null | null | null | null | null | null | null | null | null |
2109.07306 | Allocating Large Vocabulary Capacity for Cross-lingual Language Model
Pre-training | ['Bo Zheng', 'Li Dong', 'Shaohan Huang', 'Saksham Singhal', 'Wanxiang Che', 'Ting Liu', 'Xia Song', 'Furu Wei'] | ['cs.CL'] | Compared to monolingual models, cross-lingual models usually require a more
expressive vocabulary to represent all languages adequately. We find that many
languages are under-represented in recent cross-lingual language models due to
the limited vocabulary capacity. To this end, we propose an algorithm VoCap to
determi... | 2021-09-15T14:04:16Z | EMNLP 2021 | null | null | Allocating Large Vocabulary Capacity for Cross-Lingual Language Model Pre-Training | ['Bo Zheng', 'Li Dong', 'Shaohan Huang', 'Saksham Singhal', 'Wanxiang Che', 'Ting Liu', 'Xia Song', 'Furu Wei'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 22 | 43 | ['Computer Science'] |
2109.07765 | Spanish Biomedical Crawled Corpus: A Large, Diverse Dataset for Spanish
Biomedical Language Models | ['Casimiro Pio Carrino', 'Jordi Armengol-Estapé', 'Ona de Gibert Bonet', 'Asier Gutiérrez-Fandiño', 'Aitor Gonzalez-Agirre', 'Martin Krallinger', 'Marta Villegas'] | ['cs.CL'] | We introduce CoWeSe (the Corpus Web Salud Espa\~nol), the largest Spanish
biomedical corpus to date, consisting of 4.5GB (about 750M tokens) of clean
plain text. CoWeSe is the result of a massive crawler on 3000 Spanish domains
executed in 2020. The corpus is openly available and already preprocessed.
CoWeSe is an impo... | 2021-09-16T07:22:28Z | null | null | null | null | null | null | null | null | null | null |
2109.07958 | TruthfulQA: Measuring How Models Mimic Human Falsehoods | ['Stephanie Lin', 'Jacob Hilton', 'Owain Evans'] | ['cs.CL', 'cs.AI', 'cs.CY', 'cs.LG'] | We propose a benchmark to measure whether a language model is truthful in
generating answers to questions. The benchmark comprises 817 questions that
span 38 categories, including health, law, finance and politics. We crafted
questions that some humans would answer falsely due to a false belief or
misconception. To per... | 2021-09-08T17:15:27Z | ACL 2022 (main conference); the TruthfulQA benchmark and evaluation
code is available at https://github.com/sylinrl/TruthfulQA | null | null | null | null | null | null | null | null | null |
2109.08079 | Context-NER : Contextual Phrase Generation at Scale | ['Himanshu Gupta', 'Shreyas Verma', 'Santosh Mashetty', 'Swaroop Mishra'] | ['cs.IR', 'cs.CL', 'cs.LG'] | Named Entity Recognition (NER) has seen significant progress in recent years,
with numerous state-of-the-art (SOTA) models achieving high performance.
However, very few studies have focused on the generation of entities' context.
In this paper, we introduce CONTEXT-NER, a task that aims to generate the
relevant context... | 2021-09-16T16:10:05Z | 29 pages, 5 Figures, 2 AlgorithmS, 17 Tables. Accepted in NeurIPS
2022 - Efficient Natural Language and Speech Processing (ENLSP) Workshop | null | null | null | null | null | null | null | null | null |
2109.08203 | Torch.manual_seed(3407) is all you need: On the influence of random
seeds in deep learning architectures for computer vision | ['David Picard'] | ['cs.CV'] | In this paper I investigate the effect of random seed selection on the
accuracy when using popular deep learning architectures for computer vision. I
scan a large amount of seeds (up to $10^4$) on CIFAR 10 and I also scan fewer
seeds on Imagenet using pre-trained models to investigate large scale datasets.
The conclusi... | 2021-09-16T20:10:12Z | fixed typos | null | null | null | null | null | null | null | null | null |
2109.08238 | Habitat-Matterport 3D Dataset (HM3D): 1000 Large-scale 3D Environments
for Embodied AI | ['Santhosh K. Ramakrishnan', 'Aaron Gokaslan', 'Erik Wijmans', 'Oleksandr Maksymets', 'Alex Clegg', 'John Turner', 'Eric Undersander', 'Wojciech Galuba', 'Andrew Westbury', 'Angel X. Chang', 'Manolis Savva', 'Yili Zhao', 'Dhruv Batra'] | ['cs.CV', 'cs.AI'] | We present the Habitat-Matterport 3D (HM3D) dataset. HM3D is a large-scale
dataset of 1,000 building-scale 3D reconstructions from a diverse set of
real-world locations. Each scene in the dataset consists of a textured 3D mesh
reconstruction of interiors such as multi-floor residences, stores, and other
private indoor ... | 2021-09-16T22:01:24Z | 21 pages, 14 figures | null | null | null | null | null | null | null | null | null |
2109.08564 | Slot Filling for Biomedical Information Extraction | ['Yannis Papanikolaou', 'Marlene Staib', 'Justin Grace', 'Francine Bennett'] | ['cs.CL', 'cs.IR', 'cs.LG'] | Information Extraction (IE) from text refers to the task of extracting
structured knowledge from unstructured text. The task typically consists of a
series of sub-tasks such as Named Entity Recognition and Relation Extraction.
Sourcing entity and relation type specific training data is a major bottleneck
in domains wit... | 2021-09-17T14:16:00Z | null | null | null | null | null | null | null | null | null | null |
2109.08914 | Text Detoxification using Large Pre-trained Neural Models | ['David Dale', 'Anton Voronov', 'Daryna Dementieva', 'Varvara Logacheva', 'Olga Kozlova', 'Nikita Semenov', 'Alexander Panchenko'] | ['cs.CL', 'cs.LG'] | We present two novel unsupervised methods for eliminating toxicity in text.
Our first method combines two recent ideas: (1) guidance of the generation
process with small style-conditional language models and (2) use of
paraphrasing models to perform style transfer. We use a well-performing
paraphraser guided by style-t... | 2021-09-18T11:55:32Z | Accepted to the EMNLP 2021 conference | null | null | null | null | null | null | null | null | null |
2109.09209 | CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in
Abstractive Summarization | ['Shuyang Cao', 'Lu Wang'] | ['cs.CL'] | We study generating abstractive summaries that are faithful and factually
consistent with the given articles. A novel contrastive learning formulation is
presented, which leverages both reference summaries, as positive training data,
and automatically generated erroneous summaries, as negative training data, to
train s... | 2021-09-19T20:05:21Z | EMNLP 2021 | null | null | CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization | ['Shuyang Cao', 'Lu Wang'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 182 | 58 | ['Computer Science'] |
2109.09667 | On Generalization in Coreference Resolution | ['Shubham Toshniwal', 'Patrick Xia', 'Sam Wiseman', 'Karen Livescu', 'Kevin Gimpel'] | ['cs.CL'] | While coreference resolution is defined independently of dataset domain, most
models for performing coreference resolution do not transfer well to unseen
domains. We consolidate a set of 8 coreference resolution datasets targeting
different domains to evaluate the off-the-shelf performance of models. We then
mix three ... | 2021-09-20T16:33:22Z | CRAC 2021 | null | null | null | null | null | null | null | null | null |
2109.09701 | BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese | ['Nguyen Luong Tran', 'Duong Minh Le', 'Dat Quoc Nguyen'] | ['cs.CL'] | We present BARTpho with two versions, BARTpho-syllable and BARTpho-word,
which are the first public large-scale monolingual sequence-to-sequence models
pre-trained for Vietnamese. BARTpho uses the "large" architecture and the
pre-training scheme of the sequence-to-sequence denoising autoencoder BART,
thus it is especia... | 2021-09-20T17:14:22Z | In Proceedings of INTERSPEECH 2022 (to appear) | null | null | null | null | null | null | null | null | null |
2109.10086 | SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval | ['Thibault Formal', 'Carlos Lassance', 'Benjamin Piwowarski', 'Stéphane Clinchant'] | ['cs.IR', 'cs.AI', 'cs.CL'] | In neural Information Retrieval (IR), ongoing research is directed towards
improving the first retriever in ranking pipelines. Learning dense embeddings
to conduct retrieval using efficient approximate nearest neighbors methods has
proven to work well. Meanwhile, there has been a growing interest in learning
\emph{spar... | 2021-09-21T10:43:42Z | 5 pages. arXiv admin note: substantial text overlap with
arXiv:2107.05720 | null | null | null | null | null | null | null | null | null |
2109.10282 | TrOCR: Transformer-based Optical Character Recognition with Pre-trained
Models | ['Minghao Li', 'Tengchao Lv', 'Jingye Chen', 'Lei Cui', 'Yijuan Lu', 'Dinei Florencio', 'Cha Zhang', 'Zhoujun Li', 'Furu Wei'] | ['cs.CL', 'cs.CV'] | Text recognition is a long-standing research problem for document
digitalization. Existing approaches are usually built based on CNN for image
understanding and RNN for char-level text generation. In addition, another
language model is usually needed to improve the overall accuracy as a
post-processing step. In this pa... | 2021-09-21T16:01:56Z | Work in Progress | null | null | null | null | null | null | null | null | null |
2109.10686 | Scale Efficiently: Insights from Pre-training and Fine-tuning
Transformers | ['Yi Tay', 'Mostafa Dehghani', 'Jinfeng Rao', 'William Fedus', 'Samira Abnar', 'Hyung Won Chung', 'Sharan Narang', 'Dani Yogatama', 'Ashish Vaswani', 'Donald Metzler'] | ['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG'] | There remain many open questions pertaining to the scaling behaviour of
Transformer architectures. These scaling decisions and findings can be
critical, as training runs often come with an associated computational cost
which have both financial and/or environmental impact. The goal of this paper
is to present scaling i... | 2021-09-22T12:29:15Z | ICLR 2022 + Updated Checkpoint Release | null | null | null | null | null | null | null | null | null |
2109.11314 | ParaShoot: A Hebrew Question Answering Dataset | ['Omri Keren', 'Omer Levy'] | ['cs.CL'] | NLP research in Hebrew has largely focused on morphology and syntax, where
rich annotated datasets in the spirit of Universal Dependencies are available.
Semantic datasets, however, are in short supply, hindering crucial advances in
the development of NLP technology in Hebrew. In this work, we present
ParaShoot, the fi... | 2021-09-23T11:59:38Z | null | null | null | ParaShoot: A Hebrew Question Answering Dataset | ['Omri Keren', 'Omer Levy'] | 2021 | Workshop on Machine Reading for Question Answering | 17 | 20 | ['Computer Science'] |
2109.11680 | Simple and Effective Zero-shot Cross-lingual Phoneme Recognition | ['Qiantong Xu', 'Alexei Baevski', 'Michael Auli'] | ['cs.CL', 'cs.LG', 'cs.SD'] | Recent progress in self-training, self-supervised pretraining and
unsupervised learning enabled well performing speech recognition systems
without any labeled data. However, in many cases there is labeled data
available for related languages which is not utilized by these methods. This
paper extends previous work on ze... | 2021-09-23T22:50:32Z | null | null | null | null | null | null | null | null | null | null |
2109.12068 | AraT5: Text-to-Text Transformers for Arabic Language Generation | ['El Moatez Billah Nagoudi', 'AbdelRahim Elmadany', 'Muhammad Abdul-Mageed'] | ['cs.CL'] | Transfer learning with a unified Transformer framework (T5) that converts all
language problems into a text-to-text format was recently proposed as a simple
and effective transfer learning approach. Although a multilingual version of
the T5 model (mT5) was also introduced, it is not clear how well it can fare on
non-En... | 2021-08-31T02:02:10Z | Proceedings of the 60th Annual Meeting of the Association for
Computational Linguistics (ACL 2022). All authors contributed equally | null | null | null | null | null | null | null | null | null |
2109.12346 | DziriBERT: a Pre-trained Language Model for the Algerian Dialect | ['Amine Abdaoui', 'Mohamed Berrimi', 'Mourad Oussalah', 'Abdelouahab Moussaoui'] | ['cs.CL', 'cs.LG'] | Pre-trained transformers are now the de facto models in Natural Language
Processing given their state-of-the-art results in many tasks and languages.
However, most of the current models have been trained on languages for which
large text resources are already available (such as English, French, Arabic,
etc.). Therefore... | 2021-09-25T11:51:35Z | 4 Pages | null | null | DziriBERT: a Pre-trained Language Model for the Algerian Dialect | ['Amine Abdaoui', 'Mohamed Berrimi', 'M. Oussalah', 'A. Moussaoui'] | 2021 | arXiv.org | 45 | 27 | ['Computer Science'] |
2109.12848 | A General Gaussian Heatmap Label Assignment for Arbitrary-Oriented
Object Detection | ['Zhanchao Huang', 'Wei Li', 'Xiang-Gen Xia', 'Ran Tao'] | ['cs.CV'] | Recently, many arbitrary-oriented object detection (AOOD) methods have been
proposed and attracted widespread attention in many fields. However, most of
them are based on anchor-boxes or standard Gaussian heatmaps. Such label
assignment strategy may not only fail to reflect the shape and direction
characteristics of ar... | 2021-09-27T07:46:09Z | 16 pages, 13 figures | IEEE Transactions on Image Processing 2022 | 10.1109/TIP.2022.3148874 | A General Gaussian Heatmap Label Assignment for Arbitrary-Oriented Object Detection | ['Zhanchao Huang', 'Wei Li', 'X. Xia', 'R. Tao'] | 2021 | IEEE Transactions on Image Processing | 99 | 48 | ['Medicine', 'Computer Science'] |
2109.12870 | MFAQ: a Multilingual FAQ Dataset | ['Maxime De Bruyn', 'Ehsan Lotfi', 'Jeska Buhmann', 'Walter Daelemans'] | ['cs.CL'] | In this paper, we present the first multilingual FAQ dataset publicly
available. We collected around 6M FAQ pairs from the web, in 21 different
languages. Although this is significantly larger than existing FAQ retrieval
datasets, it comes with its own challenges: duplication of content and uneven
distribution of topic... | 2021-09-27T08:43:25Z | Accepted at MRQA workshop (EMNLP 2021) | null | null | null | null | null | null | null | null | null |
2109.13059 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and
mutual-distillations | ['Fangyu Liu', 'Yunlong Jiao', 'Jordan Massiah', 'Emine Yilmaz', 'Serhii Havrylov'] | ['cs.CL', 'cs.AI', 'cs.LG'] | In NLP, a large volume of tasks involve pairwise comparison between two
sequences (e.g. sentence similarity and paraphrase identification).
Predominantly, two formulations are used for sentence-pair tasks: bi-encoders
and cross-encoders. Bi-encoders produce fixed-dimensional sentence
representations and are computation... | 2021-09-27T14:06:47Z | ICLR 2022; code and models are released at
https://github.com/amzn/trans-encoder | null | null | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | ['Fangyu Liu', 'Serhii Havrylov', 'Yunlong Jiao', 'Jordan Massiah', 'Emine Yilmaz'] | 2021 | International Conference on Learning Representations | 29 | 39 | ['Computer Science'] |
2109.13228 | PASS: An ImageNet replacement for self-supervised pretraining without
humans | ['Yuki M. Asano', 'Christian Rupprecht', 'Andrew Zisserman', 'Andrea Vedaldi'] | ['cs.CV', 'cs.CY'] | Computer vision has long relied on ImageNet and other large datasets of
images sampled from the Internet for pretraining models. However, these
datasets have ethical and technical shortcomings, such as containing personal
information taken without consent, unclear license usage, biases, and, in some
cases, even problem... | 2021-09-27T17:59:39Z | Accepted to NeurIPS Track on Datasets and Benchmarks 2021. Webpage:
https://www.robots.ox.ac.uk/~vgg/research/pass/ | null | null | PASS: An ImageNet replacement for self-supervised pretraining without humans | ['Yuki M. Asano', 'C. Rupprecht', 'Andrew Zisserman', 'A. Vedaldi'] | 2021 | NeurIPS Datasets and Benchmarks | 58 | 112 | ['Computer Science'] |
2109.13821 | Diffusion-Based Voice Conversion with Fast Maximum Likelihood Sampling
Scheme | ['Vadim Popov', 'Ivan Vovk', 'Vladimir Gogoryan', 'Tasnima Sadekova', 'Mikhail Kudinov', 'Jiansheng Wei'] | ['cs.SD', 'cs.LG', 'stat.ML'] | Voice conversion is a common speech synthesis task which can be solved in
different ways depending on a particular real-world scenario. The most
challenging one often referred to as one-shot many-to-many voice conversion
consists in copying the target voice from only one reference utterance in the
most general case whe... | 2021-09-28T15:48:22Z | null | null | null | Diffusion-Based Voice Conversion with Fast Maximum Likelihood Sampling Scheme | ['Vadim Popov', 'Ivan Vovk', 'Vladimir Gogoryan', 'Tasnima Sadekova', 'Mikhail Kudinov', 'Jiansheng Wei'] | 2021 | International Conference on Learning Representations | 136 | 46 | ['Computer Science', 'Mathematics'] |
2109.15099 | PP-LCNet: A Lightweight CPU Convolutional Neural Network | ['Cheng Cui', 'Tingquan Gao', 'Shengyu Wei', 'Yuning Du', 'Ruoyu Guo', 'Shuilong Dong', 'Bin Lu', 'Ying Zhou', 'Xueying Lv', 'Qiwen Liu', 'Xiaoguang Hu', 'Dianhai Yu', 'Yanjun Ma'] | ['cs.CV'] | We propose a lightweight CPU network based on the MKLDNN acceleration
strategy, named PP-LCNet, which improves the performance of lightweight models
on multiple tasks. This paper lists technologies which can improve network
accuracy while the latency is almost constant. With these improvements, the
accuracy of PP-LCNet... | 2021-09-17T11:35:32Z | 8 pages, 2 figures, 9 tables | null | null | PP-LCNet: A Lightweight CPU Convolutional Neural Network | ['Cheng Cui', 'Tingquan Gao', 'Shengyun Wei', 'Yuning Du', 'Ruoyu Guo', 'Shuilong Dong', 'Bin Lu', 'Ying Zhou', 'X. Lv', 'Qiwen Liu', 'Xiaoguang Hu', 'Dianhai Yu', 'Yanjun Ma'] | 2021 | arXiv.org | 127 | 31 | ['Computer Science'] |
2109.15107 | CrossAug: A Contrastive Data Augmentation Method for Debiasing Fact
Verification Models | ['Minwoo Lee', 'Seungpil Won', 'Juae Kim', 'Hwanhee Lee', 'Cheoneum Park', 'Kyomin Jung'] | ['cs.CL', 'cs.AI'] | Fact verification datasets are typically constructed using crowdsourcing
techniques due to the lack of text sources with veracity labels. However, the
crowdsourcing process often produces undesired biases in data that cause models
to learn spurious patterns. In this paper, we propose CrossAug, a contrastive
data augmen... | 2021-09-30T13:19:19Z | 5 pages, accepted as a short paper at CIKM 2021 | null | 10.1145/3459637.3482078 | null | null | null | null | null | null | null |
2109.15254 | SlovakBERT: Slovak Masked Language Model | ['Matúš Pikuliak', 'Štefan Grivalský', 'Martin Konôpka', 'Miroslav Blšták', 'Martin Tamajka', 'Viktor Bachratý', 'Marián Šimko', 'Pavol Balážik', 'Michal Trnka', 'Filip Uhlárik'] | ['cs.CL'] | We introduce a new Slovak masked language model called SlovakBERT. This is to
our best knowledge the first paper discussing Slovak transformers-based
language models. We evaluate our model on several NLP tasks and achieve
state-of-the-art results. This evaluation is likewise the first attempt to
establish a benchmark f... | 2021-09-30T16:36:49Z | 12 pages, 2 figures | null | null | SlovakBERT: Slovak Masked Language Model | ['Matúš Pikuliak', 'Stefan Grivalsky', 'Martin Konopka', 'Miroslav Blšták', 'Martin Tamajka', "Viktor Bachrat'y", 'Marián Simko', 'Pavol Balázik', 'Michal Trnka', "Filip Uhl'arik"] | 2021 | Conference on Empirical Methods in Natural Language Processing | 27 | 49 | ['Computer Science'] |
2110.00061 | PubTables-1M: Towards comprehensive table extraction from unstructured
documents | ['Brandon Smock', 'Rohith Pesala', 'Robin Abraham'] | ['cs.LG', 'cs.CV'] | Recently, significant progress has been made applying machine learning to the
problem of table structure inference and extraction from unstructured
documents. However, one of the greatest challenges remains the creation of
datasets with complete, unambiguous ground truth at scale. To address this, we
develop a new, mor... | 2021-09-30T19:42:07Z | null | null | null | PubTables-1M: Towards comprehensive table extraction from unstructured documents | ['B. Smock', 'Rohith Pesala', 'Robin Abraham'] | 2021 | Computer Vision and Pattern Recognition | 103 | 28 | ['Computer Science'] |
2110.00075 | Noise2Recon: Enabling Joint MRI Reconstruction and Denoising with
Semi-Supervised and Self-Supervised Learning | ['Arjun D Desai', 'Batu M Ozturkler', 'Christopher M Sandino', 'Robert Boutin', 'Marc Willis', 'Shreyas Vasanawala', 'Brian A Hargreaves', 'Christopher M Ré', 'John M Pauly', 'Akshay S Chaudhari'] | ['eess.IV', 'cs.CV'] | Deep learning (DL) has shown promise for faster, high quality accelerated MRI
reconstruction. However, supervised DL methods depend on extensive amounts of
fully-sampled (labeled) data and are sensitive to out-of-distribution (OOD)
shifts, particularly low signal-to-noise ratio (SNR) acquisitions. To alleviate
this cha... | 2021-09-30T20:06:43Z | null | null | null | Noise2Recon: Enabling Joint MRI Reconstruction and Denoising with Semi-Supervised and Self-Supervised Learning | ['Arjun D Desai', 'Batu Mehmet Ozturkler', 'Christopher M. Sandino', 'R. Boutin', 'M. Willis', 'S. Vasanawala', 'B. Hargreaves', 'Christopher Ré', 'J. Pauly', 'A. Chaudhari'] | 2021 | null | 3 | 72 | ['Engineering', 'Computer Science'] |
2110.00476 | ResNet strikes back: An improved training procedure in timm | ['Ross Wightman', 'Hugo Touvron', 'Hervé Jégou'] | ['cs.CV', 'cs.LG'] | The influential Residual Networks designed by He et al. remain the
gold-standard architecture in numerous scientific publications. They typically
serve as the default architecture in studies, or as baselines when new
architectures are proposed. Yet there has been significant progress on best
practices for training neur... | 2021-10-01T15:09:22Z | null | null | null | null | null | null | null | null | null | null |
2110.00976 | LexGLUE: A Benchmark Dataset for Legal Language Understanding in English | ['Ilias Chalkidis', 'Abhik Jana', 'Dirk Hartung', 'Michael Bommarito', 'Ion Androutsopoulos', 'Daniel Martin Katz', 'Nikolaos Aletras'] | ['cs.CL'] | Laws and their interpretations, legal arguments and agreements are typically
expressed in writing, leading to the production of vast corpora of legal text.
Their analysis, which is at the center of legal practice, becomes increasingly
elaborate as these collections grow in size. Natural language understanding
(NLU) te... | 2021-10-03T10:50:51Z | 9 pages, long paper at ACL 2022 proceedings. LexGLUE benchmark is
available at: https://huggingface.co/datasets/lex_glue. Code is available at:
https://github.com/coastalcph/lex-glue. Update TFIDF-SVM scores in the last
version | null | null | null | null | null | null | null | null | null |
2110.01485 | JuriBERT: A Masked-Language Model Adaptation for French Legal Text | ['Stella Douka', 'Hadi Abdine', 'Michalis Vazirgiannis', 'Rajaa El Hamdani', 'David Restrepo Amariles'] | ['cs.CL'] | Language models have proven to be very useful when adapted to specific
domains. Nonetheless, little research has been done on the adaptation of
domain-specific BERT models in the French language. In this paper, we focus on
creating a language model adapted to French legal text with the goal of helping
law professionals... | 2021-10-04T14:51:24Z | 7 pages | null | null | null | null | null | null | null | null | null |
2110.01509 | DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained
Neural Text2Text Language Models | ['Gregor Betz', 'Kyle Richardson'] | ['cs.CL', 'cs.AI'] | In this paper, we present and implement a multi-dimensional, modular
framework for performing deep argument analysis (DeepA2) using current
pre-trained language models (PTLMs). ArgumentAnalyst -- a T5 model (Raffel et
al. 2020) set up and trained within DeepA2 -- reconstructs argumentative texts,
which advance an infor... | 2021-10-04T15:24:07Z | A Demo is available at
https://huggingface.co/spaces/debatelab/deepa2-demo , the model can be
downloaded from https://huggingface.co/debatelab/argument-analyst , and the
datasets can be accessed at https://huggingface.co/datasets/debatelab/aaac | *SEM 2022 | null | DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models | ['Gregor Betz', 'Kyle Richardson'] | 2021 | STARSEM | 8 | 73 | ['Computer Science'] |
2110.01518 | Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics | ['Prajjwal Bhargava', 'Aleksandr Drozd', 'Anna Rogers'] | ['cs.CL'] | Much of recent progress in NLU was shown to be due to models' learning
dataset-specific heuristics. We conduct a case study of generalization in NLI
(from MNLI to the adversarially constructed HANS dataset) in a range of
BERT-based architectures (adapters, Siamese Transformers, HEX debiasing), as
well as with subsampli... | 2021-10-04T15:37:07Z | Workshop on Insights from Negative Results (EMNLP 2021) | null | null | null | null | null | null | null | null | null |
2110.01710 | PyTorrent: A Python Library Corpus for Large-scale Language Models | ['Mehdi Bahrami', 'N. C. Shrikanth', 'Shade Ruangwan', 'Lei Liu', 'Yuji Mizobuchi', 'Masahiro Fukuyori', 'Wei-Peng Chen', 'Kazuki Munakata', 'Tim Menzies'] | ['cs.SE'] | A large scale collection of both semantic and natural language resources is
essential to leverage active Software Engineering research areas such as code
reuse and code comprehensibility. Existing machine learning models ingest data
from Open Source repositories (like GitHub projects) and forum discussions
(like Stacko... | 2021-10-04T20:48:31Z | 10 pages, 2 figures, 5 tables | null | null | PyTorrent: A Python Library Corpus for Large-scale Language Models | ['M. Bahrami', 'Shrikanth N. C.', 'Shade Ruangwan', 'Lei Liu', 'Yuji Mizobuchi', 'M. Fukuyori', 'Wei-Peng Chen', 'Kazuki Munakata', 'T. Menzies'] | 2021 | arXiv.org | 12 | 38 | ['Computer Science'] |
2110.01786 | MoEfication: Transformer Feed-forward Layers are Mixtures of Experts | ['Zhengyan Zhang', 'Yankai Lin', 'Zhiyuan Liu', 'Peng Li', 'Maosong Sun', 'Jie Zhou'] | ['cs.CL'] | Recent work has shown that feed-forward networks (FFNs) in pre-trained
Transformers are a key component, storing various linguistic and factual
knowledge. However, the computational patterns of FFNs are still unclear. In
this work, we study the computational patterns of FFNs and observe that most
inputs only activate a... | 2021-10-05T02:14:38Z | Accepted to ACL Findings 2022 | null | null | MoEfication: Transformer Feed-forward Layers are Mixtures of Experts | ['Zhengyan Zhang', 'Yankai Lin', 'Zhiyuan Liu', 'Peng Li', 'Maosong Sun', 'Jie Zhou'] | 2021 | Findings | 129 | 67 | ['Computer Science'] |
2110.01799 | ContractNLI: A Dataset for Document-level Natural Language Inference for
Contracts | ['Yuta Koreeda', 'Christopher D. Manning'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Reviewing contracts is a time-consuming procedure that incurs large expenses
to companies and social inequality to those who cannot afford it. In this work,
we propose "document-level natural language inference (NLI) for contracts", a
novel, real-world application of NLI that addresses such problems. In this
task, a sy... | 2021-10-05T03:22:31Z | Accepted at the Findings of the Association for Computational
Linguistics: EMNLP 2021 | null | null | ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts | ['Yuta Koreeda', 'Christopher D. Manning'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 106 | 24 | ['Computer Science'] |
2110.01900 | DistilHuBERT: Speech Representation Learning by Layer-wise Distillation
of Hidden-unit BERT | ['Heng-Jui Chang', 'Shu-wen Yang', 'Hung-yi Lee'] | ['cs.CL', 'eess.AS'] | Self-supervised speech representation learning methods like wav2vec 2.0 and
Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and
offer good representations for numerous speech processing tasks. Despite the
success of these methods, they require large memory and high pre-training
costs, making t... | 2021-10-05T09:34:44Z | Accepted to ICASSP 2022 | null | null | Distilhubert: Speech Representation Learning by Layer-Wise Distillation of Hidden-Unit Bert | ['Heng-Jui Chang', 'Shu-Wen Yang', 'Hung-yi Lee'] | 2021 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 175 | 36 | ['Computer Science', 'Engineering'] |
2110.01938 | Sicilian Translator: A Recipe for Low-Resource NMT | ['Eryk Wdowiak'] | ['cs.CL', 'I.2.7'] | With 17,000 pairs of Sicilian-English translated sentences, Arba Sicula
developed the first neural machine translator for the Sicilian language. Using
small subword vocabularies, we trained small Transformer models with high
dropout parameters and achieved BLEU scores in the upper 20s. Then we
supplemented our dataset ... | 2021-10-05T11:04:13Z | 7 pages, 2 tables | null | null | null | null | null | null | null | null | null |
2110.02030 | Exploiting Twitter as Source of Large Corpora of Weakly Similar Pairs for Semantic Sentence Embeddings | ['Marco Di Giovanni', 'Marco Brambilla'] | ['cs.CL'] | Semantic sentence embeddings are usually supervisedly built minimizing distances between pairs of embeddings of sentences labelled as semantically similar by annotators. Since big labelled datasets are rare, in particular for non-English languages, and expensive, recent studies focus on unsupervised approaches that req... | 2021-10-05T13:21:40Z | 9 pages, 3 figures, accepted at EMNLP2021 | null | null | null | null | null | null | null | null | null |
2110.02178 | MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer | ['Sachin Mehta', 'Mohammad Rastegari'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Light-weight convolutional neural networks (CNNs) are the de-facto for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision tr... | 2021-10-05T17:07:53Z | Accepted at ICLR'22 | null | null | MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer | ['Sachin Mehta', 'Mohammad Rastegari'] | 2021 | International Conference on Learning Representations | 1306 | 65 | ['Computer Science'] |
2110.02442 | PoNet: Pooling Network for Efficient Token Mixing in Long Sequences | ['Chao-Hong Tan', 'Qian Chen', 'Wen Wang', 'Qinglin Zhang', 'Siqi Zheng', 'Zhen-Hua Ling'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Transformer-based models have achieved great success in various NLP, vision, and speech tasks. However, the core of Transformer, the self-attention mechanism, has a quadratic time and memory complexity with respect to the sequence length, which hinders applications of Transformer-based models to long sequences. Many ap... | 2021-10-06T01:07:54Z | Accepted by ICLR 2022. Codes and checkpoints are also available on huggingface hub: https://huggingface.co/chtan/ponet-base-uncased | null | null | null | null | null | null | null | null | null |
2110.02711 | DiffusionCLIP: Text-Guided Diffusion Models for Robust Image Manipulation | ['Gwanghyun Kim', 'Taesung Kwon', 'Jong Chul Ye'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Recently, GAN inversion methods combined with Contrastive Language-Image Pretraining (CLIP) enables zero-shot image manipulation guided by text prompts. However, their applications to diverse real images are still difficult due to the limited GAN inversion capability. Specifically, these approaches often have difficult... | 2021-10-06T12:59:39Z | Accepted to CVPR 2022 | null | null | DiffusionCLIP: Text-Guided Diffusion Models for Robust Image Manipulation | ['Gwanghyun Kim', 'Taesung Kwon', 'Jong-Chul Ye'] | 2021 | Computer Vision and Pattern Recognition | 657 | 69 | ['Computer Science'] |
2110.02861 | 8-bit Optimizers via Block-wise Quantization | ['Tim Dettmers', 'Mike Lewis', 'Sam Shleifer', 'Luke Zettlemoyer'] | ['cs.LG'] | Stateful optimizers maintain gradient statistics over time, e.g., the exponentially smoothed sum (SGD with momentum) or squared sum (Adam) of past gradient values. This state can be used to accelerate optimization compared to plain stochastic gradient descent but uses memory that might otherwise be allocated to model p... | 2021-10-06T15:43:20Z | ICLR2022 spotlight version | null | null | null | null | null | null | null | null | null |
2110.03546 | mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer | ['Marcelo Archanjo José', 'Fabio Gagliardi Cozman'] | ['cs.CL', 'cs.AI', '68T07, 68T50', 'I.2.7; H.3.3'] | The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are gi... | 2021-10-07T15:08:24Z | Published in: Intelligent Systems. BRACIS 2021. Lecture Notes in Computer Science | vol 13074, 2021, pp 511-525 | 10.1007/978-3-030-91699-2_35 | null | null | null | null | null | null | null |
2110.03584 | Mixer-TTS: non-autoregressive, fast and compact text-to-speech model conditioned on language model embeddings | ['Oktai Tatanov', 'Stanislav Beliaev', 'Boris Ginsburg'] | ['eess.AS'] | This paper describes Mixer-TTS, a non-autoregressive model for mel-spectrogram generation. The model is based on the MLP-Mixer architecture adapted for speech synthesis. The basic Mixer-TTS contains pitch and duration predictors, with the latter being trained with an unsupervised TTS alignment framework. Alongside the ... | 2021-10-07T16:07:58Z | Preprint. Submitted to ICASSP-22 | null | null | Mixer-TTS: Non-Autoregressive, Fast and Compact Text-to-Speech Model Conditioned on Language Model Embeddings | ['Oktai Tatanov', 'Stanislav Beliaev', 'Boris Ginsburg'] | 2021 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 16 | 23 | ['Engineering', 'Computer Science'] |
2110.03895 | ALL-IN-ONE: Multi-Task Learning BERT models for Evaluating Peer Assessments | ['Qinjin Jia', 'Jialin Cui', 'Yunkai Xiao', 'Chengyuan Liu', 'Parvez Rashid', 'Edward F. Gehringer'] | ['cs.CL', 'cs.AI'] | Peer assessment has been widely applied across diverse academic fields over the last few decades and has demonstrated its effectiveness. However, the advantages of peer assessment can only be achieved with high-quality peer reviews. Previous studies have found that high-quality review comments usually comprise several ... | 2021-10-08T05:13:41Z | null | null | null | null | null | null | null | null | null | null |
2110.04057 | FAST-RIR: Fast neural diffuse room impulse response generator | ['Anton Ratnarajah', 'Shi-Xiong Zhang', 'Meng Yu', 'Zhenyu Tang', 'Dinesh Manocha', 'Dong Yu'] | ['cs.SD', 'cs.AI', 'cs.LG', 'eess.AS'] | We present a neural-network-based fast diffuse room impulse response generator (FAST-RIR) for generating room impulse responses (RIRs) for a given acoustic environment. Our FAST-RIR takes rectangular room dimensions, listener and speaker positions, and reverberation time as inputs and generates specular and diffuse ref... | 2021-10-07T05:21:01Z | Accepted to ICASSP 2022. More results and source code is available at https://anton-jeran.github.io/FRIR/ | null | null | null | null | null | null | null | null | null |
2110.04410 | TitaNet: Neural Model for speaker representation with 1D Depth-wise separable convolutions and global context | ['Nithin Rao Koluguri', 'Taejin Park', 'Boris Ginsburg'] | ['eess.AS', 'cs.SD'] | In this paper, we propose TitaNet, a novel neural network architecture for extracting speaker representations. We employ 1D depth-wise separable convolutions with Squeeze-and-Excitation (SE) layers with global context followed by channel attention based statistics pooling layer to map variable-length utterances to a fi... | 2021-10-08T23:49:42Z | preprint. Submitted to ICASSP 2022 | null | null | TitaNet: Neural Model for Speaker Representation with 1D Depth-Wise Separable Convolutions and Global Context | ['N. Koluguri', 'Taejin Park', 'Boris Ginsburg'] | 2021 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 104 | 32 | ['Computer Science', 'Engineering'] |
2110.04725 | Yuan 1.0: Large-Scale Pre-trained Language Model in Zero-Shot and Few-Shot Learning | ['Shaohua Wu', 'Xudong Zhao', 'Tong Yu', 'Rongguo Zhang', 'Chong Shen', 'Hongli Liu', 'Feng Li', 'Hong Zhu', 'Jiangang Luo', 'Liang Xu', 'Xuanwei Zhang'] | ['cs.CL', 'cs.AI'] | Recent work like GPT-3 has demonstrated excellent performance of Zero-Shot and Few-Shot learning on many natural language processing (NLP) tasks by scaling up model size, dataset size and the amount of computation. However, training a model like GPT-3 requires huge amount of computational resources which makes it chall... | 2021-10-10T07:40:22Z | null | null | null | null | null | null | null | null | null | null |
2110.04994 | Omnidata: A Scalable Pipeline for Making Multi-Task Mid-Level Vision Datasets from 3D Scans | ['Ainaz Eftekhar', 'Alexander Sax', 'Roman Bachmann', 'Jitendra Malik', 'Amir Zamir'] | ['cs.CV', 'cs.AI', 'cs.GR', 'cs.RO'] | This paper introduces a pipeline to parametrically sample and render multi-task vision datasets from comprehensive 3D scans from the real world. Changing the sampling parameters allows one to "steer" the generated datasets to emphasize specific information. In addition to enabling interesting lines of research, we show... | 2021-10-11T04:21:46Z | ICCV 2021: See project website https://omnidata.vision | null | null | null | null | null | null | null | null | null |
2110.05069 | Efficient Training of Audio Transformers with Patchout | ['Khaled Koutini', 'Jan Schlüter', 'Hamid Eghbal-zadeh', 'Gerhard Widmer'] | ['cs.SD', 'cs.LG', 'eess.AS'] | The great success of transformer-based models in natural language processing (NLP) has led to various attempts at adapting these architectures to other domains such as vision and audio. Recent work has shown that transformers can outperform Convolutional Neural Networks (CNNs) on vision and audio tasks. However, one of... | 2021-10-11T08:07:50Z | Submitted to Interspeech 2022. Source code: https://github.com/kkoutini/PaSST | null | 10.21437/Interspeech.2022-227 | Efficient Training of Audio Transformers with Patchout | ['Khaled Koutini', 'Jan Schlüter', 'Hamid Eghbalzadeh', 'G. Widmer'] | 2021 | Interspeech | 263 | 30 | ['Computer Science', 'Engineering'] |
2110.05679 | Large Language Models Can Be Strong Differentially Private Learners | ['Xuechen Li', 'Florian Tramèr', 'Percy Liang', 'Tatsunori Hashimoto'] | ['cs.LG', 'cs.CL'] | Differentially Private (DP) learning has seen limited success for building large deep learning models of text, and straightforward attempts at applying Differentially Private Stochastic Gradient Descent (DP-SGD) to NLP tasks have resulted in large performance drops and high computational overhead. We show that this per... | 2021-10-12T01:45:27Z | 31 pages; update ethics statement to clarify benefits and potential long-term harms | null | null | null | null | null | null | null | null | null |
2110.05752 | UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training | ['Sanyuan Chen', 'Yu Wu', 'Chengyi Wang', 'Zhengyang Chen', 'Zhuo Chen', 'Shujie Liu', 'Jian Wu', 'Yao Qian', 'Furu Wei', 'Jinyu Li', 'Xiangzhan Yu'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for mod... | 2021-10-12T05:43:30Z | ICASSP 2022 Submission | null | null | null | null | null | null | null | null | null |
2110.05781 | BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications | ['Juan Zuluaga-Gomez', 'Seyyed Saeed Sarfjoo', 'Amrutha Prasad', 'Iuliia Nigmatulina', 'Petr Motlicek', 'Karel Ondrej', 'Oliver Ohneiser', 'Hartmut Helmke'] | ['eess.AS', 'cs.CL', 'cs.LG'] | Automatic speech recognition (ASR) allows transcribing the communications between air traffic controllers (ATCOs) and aircraft pilots. The transcriptions are used later to extract ATC named entities, e.g., aircraft callsigns. One common challenge is speech activity detection (SAD) and speaker diarization (SD). In the f... | 2021-10-12T07:25:12Z | To be published in the 2022 IEEE Spoken Language Technology Workshop (SLT) (SLT 2022) | null | null | null | null | null | null | null | null | null |