| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2110.05896 | LaoPLM: Pre-trained Language Models for Lao | ['Nankai Lin', 'Yingwen Fu', 'Chuwei Chen', 'Ziyu Yang', 'Shengyi Jiang'] | ['cs.CL'] | Trained on the large corpus, pre-trained language models (PLMs) can capture different levels of concepts in context and hence generate universal language representations. They can benefit multiple downstream natural language processing (NLP) tasks. Although PTMs have been widely used in most NLP applications, especiall... | 2021-10-12T11:13:07Z | null | null | null | LaoPLM: Pre-trained Language Models for Lao | ['Nankai Lin', 'Yingwen Fu', 'Chuwei Chen', 'Ziyu Yang', 'Shengyi Jiang'] | 2021 | International Conference on Language Resources and Evaluation | 3 | 32 | ['Computer Science'] |
| 2110.06128 | Regionalized models for Spanish language variations based on Twitter | ['Eric S. Tellez', 'Daniela Moctezuma', 'Sabino Miranda', 'Mario Graff', 'Guillermo Ruiz'] | ['cs.CL', 'cs.CY', 'cs.SI'] | Spanish is one of the most spoken languages in the globe, but not necessarily Spanish is written and spoken in the same way in different countries. Understanding local language variations can help to improve model performances on regional tasks, both understanding local structures and also improving the message's conte... | 2021-10-12T16:21:03Z | null | null | null | null | null | null | null | null | null | null |
| 2110.06263 | Speech Summarization using Restricted Self-Attention | ['Roshan Sharma', 'Shruti Palaskar', 'Alan W Black', 'Florian Metze'] | ['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS'] | Speech summarization is typically performed by using a cascade of speech recognition and text summarization models. End-to-end modeling of speech summarization models is challenging due to memory and compute constraints arising from long input audio sequences. Recent work in document summarization has inspired methods ... | 2021-10-12T18:21:23Z | Accepted at ICASSP 2022 | null | null | End-to-End Speech Summarization Using Restricted Self-Attention | ['Roshan Sharma', 'Shruti Palaskar', 'A. Black', 'Florian Metze'] | 2021 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 34 | 29 | ['Computer Science', 'Engineering'] |
| 2110.06273 | Småprat: DialoGPT for Natural Language Generation of Swedish Dialogue by Transfer Learning | ['Tosin Adewumi', 'Rickard Brännvall', 'Nosheen Abid', 'Maryam Pahlavan', 'Sana Sabah Sabry', 'Foteini Liwicki', 'Marcus Liwicki'] | ['cs.CL', 'cs.LG'] | Building open-domain conversational systems (or chatbots) that produce convincing responses is a recognized challenge. Recent state-of-the-art (SoTA) transformer-based models for the generation of natural language dialogue have demonstrated impressive performance in simulating human-like, single-turn conversations in E... | 2021-10-12T18:46:43Z | Presented at Northern Lights Deep Learning Conference (NLDL) 2022, Tromso, Norway | null | null | null | null | null | null | null | null | null |
| 2110.06609 | MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators | ['Zhixing Tan', 'Xiangwen Zhang', 'Shuo Wang', 'Yang Liu'] | ['cs.CL'] | Prompting has recently been shown as a promising approach for applying pre-trained language models to perform downstream tasks. We present Multi-Stage Prompting (MSP), a simple and automatic approach for leveraging pre-trained language models to translation tasks. To better mitigate the discrepancy between pre-training... | 2021-10-13T10:06:21Z | ACL 2022 | null | null | null | null | null | null | null | null | null |
| 2110.06696 | Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese | ['Zhuosheng Zhang', 'Hanqing Zhang', 'Keming Chen', 'Yuhang Guo', 'Jingyun Hua', 'Yulong Wang', 'Ming Zhou'] | ['cs.CL', 'cs.AI'] | Although pre-trained models (PLMs) have achieved remarkable improvements in a wide range of NLP tasks, they are expensive in terms of time and resources. This calls for the study of training more efficient models with less computation but still ensures impressive performance. Instead of pursuing a larger scale, we are ... | 2021-10-13T13:14:32Z | null | null | null | Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese | ['Zhuosheng Zhang', 'Hanqing Zhang', 'Keming Chen', 'Yuhang Guo', 'Jingyun Hua', 'Yulong Wang', 'Ming Zhou'] | 2021 | arXiv.org | 72 | 44 | ['Computer Science'] |
| 2110.06848 | Decoupled Contrastive Learning | ['Chun-Hsiao Yeh', 'Cheng-Yao Hong', 'Yen-Chi Hsu', 'Tyng-Luh Liu', 'Yubei Chen', 'Yann LeCun'] | ['cs.LG', 'cs.CV'] | Contrastive learning (CL) is one of the most successful paradigms for self-supervised learning (SSL). In a principled way, it considers two augmented "views" of the same image as positive to be pulled closer, and all other images as negative to be pushed further apart. However, behind the impressive success of CL-based... | 2021-10-13T16:38:43Z | Accepted by ECCV2022 | null | null | Decoupled Contrastive Learning | ['Chun-Hsiao Yeh', 'Cheng-Yao Hong', 'Yen-Chi Hsu', 'Tyng-Luh Liu', 'Yubei Chen', 'Yann LeCun'] | 2021 | European Conference on Computer Vision | 192 | 51 | ['Computer Science'] |
| 2110.06864 | ByteTrack: Multi-Object Tracking by Associating Every Detection Box | ['Yifu Zhang', 'Peize Sun', 'Yi Jiang', 'Dongdong Yu', 'Fucheng Weng', 'Zehuan Yuan', 'Ping Luo', 'Wenyu Liu', 'Xinggang Wang'] | ['cs.CV'] | Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects in videos. Most methods obtain identities by associating detection boxes whose scores are higher than a threshold. The objects with low detection scores, e.g. occluded objects, are simply thrown away, which brings non-negligible tru... | 2021-10-13T17:01:26Z | null | null | null | null | null | null | null | null | null | null |
| 2110.06918 | Salient Phrase Aware Dense Retrieval: Can a Dense Retriever Imitate a Sparse One? | ['Xilun Chen', 'Kushal Lakhotia', 'Barlas Oğuz', 'Anchit Gupta', 'Patrick Lewis', 'Stan Peshterliev', 'Yashar Mehdad', 'Sonal Gupta', 'Wen-tau Yih'] | ['cs.CL', 'cs.IR', 'cs.LG'] | Despite their recent popularity and well-known advantages, dense retrievers still lag behind sparse methods such as BM25 in their ability to reliably match salient phrases and rare entities in the query and to generalize to out-of-domain data. It has been argued that this is an inherent limitation of dense models. We r... | 2021-10-13T17:56:19Z | null | null | null | null | null | null | null | null | null | null |
| 2110.07038 | Towards Efficient NLP: A Standard Evaluation and A Strong Baseline | ['Xiangyang Liu', 'Tianxiang Sun', 'Junliang He', 'Jiawen Wu', 'Lingling Wu', 'Xinyu Zhang', 'Hao Jiang', 'Zhao Cao', 'Xuanjing Huang', 'Xipeng Qiu'] | ['cs.CL', 'cs.AI'] | Supersized pre-trained language models have pushed the accuracy of various natural language processing (NLP) tasks to a new state-of-the-art (SOTA). Rather than pursuing the reachless SOTA accuracy, more and more researchers start paying attention on model efficiency and usability. Different from accuracy, the metric f... | 2021-10-13T21:17:15Z | Accepted to the main conference of NAACL-2022 | null | null | null | null | null | null | null | null | null |
| 2110.07058 | Ego4D: Around the World in 3,000 Hours of Egocentric Video | ['Kristen Grauman', 'Andrew Westbury', 'Eugene Byrne', 'Zachary Chavis', 'Antonino Furnari', 'Rohit Girdhar', 'Jackson Hamburger', 'Hao Jiang', 'Miao Liu', 'Xingyu Liu', 'Miguel Martin', 'Tushar Nagarajan', 'Ilija Radosavovic', 'Santhosh Kumar Ramakrishnan', 'Fiona Ryan', 'Jayant Sharma', 'Michael Wray', 'Mengmeng Xu',... | ['cs.CV', 'cs.AI'] | We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite. It offers 3,670 hours of daily-life activity video spanning hundreds of scenarios (household, outdoor, workplace, leisure, etc.) captured by 931 unique camera wearers from 74 worldwide locations and 9 different countries. The approach to ... | 2021-10-13T22:19:32Z | To appear in the Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. This version updates the baseline result numbers for the Hands and Objects benchmark (appendix) | null | null | null | null | null | null | null | null | null |
| 2110.07166 | CaPE: Contrastive Parameter Ensembling for Reducing Hallucination in Abstractive Summarization | ['Prafulla Kumar Choubey', 'Alexander R. Fabbri', 'Jesse Vig', 'Chien-Sheng Wu', 'Wenhao Liu', 'Nazneen Fatema Rajani'] | ['cs.CL'] | Hallucination is a known issue for neural abstractive summarization models. Recent work suggests that the degree of hallucination may depend on errors in the training data. In this work, we propose a new method called Contrastive Parameter Ensembling (CaPE) to use training data more effectively, utilizing variations in... | 2021-10-14T06:02:54Z | null | null | null | null | null | null | null | null | null | null |
| 2110.07205 | SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing | ['Junyi Ao', 'Rui Wang', 'Long Zhou', 'Chengyi Wang', 'Shuo Ren', 'Yu Wu', 'Shujie Liu', 'Tom Ko', 'Qing Li', 'Yu Zhang', 'Zhihua Wei', 'Yao Qian', 'Jinyu Li', 'Furu Wei'] | ['eess.AS', 'cs.CL', 'cs.LG', 'cs.SD'] | Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-... | 2021-10-14T07:59:27Z | Accepted by ACL 2022 main conference | null | null | null | null | null | null | null | null | null |
| 2110.07244 | Building Chinese Biomedical Language Models via Multi-Level Text Discrimination | ['Quan Wang', 'Songtai Dai', 'Benfeng Xu', 'Yajuan Lyu', 'Yong Zhu', 'Hua Wu', 'Haifeng Wang'] | ['cs.CL', 'cs.AI'] | Pre-trained language models (PLMs), such as BERT and GPT, have revolutionized the field of NLP, not only in the general domain but also in the biomedical domain. Most prior efforts in building biomedical PLMs have resorted simply to domain adaptation and focused mainly on English. In this work we introduce eHealth, a C... | 2021-10-14T10:43:28Z | null | null | null | Building Chinese Biomedical Language Models via Multi-Level Text Discrimination | ['Quan Wang', 'Songtai Dai', 'Benfeng Xu', 'Yajuan Lyu', 'Yong Zhu', 'Hua Wu', 'Haifeng Wang'] | 2021 | arXiv.org | 15 | 45 | ['Computer Science'] |
| 2110.07602 | P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks | ['Xiao Liu', 'Kaixuan Ji', 'Yicheng Fu', 'Weng Lam Tam', 'Zhengxiao Du', 'Zhilin Yang', 'Jie Tang'] | ['cs.CL'] | Prompt tuning, which only tunes continuous prompts with a frozen language model, substantially reduces per-task storage and memory usage at training. However, in the context of NLU, prior work reveals that prompt tuning does not perform well for normal-sized pretrained models. We also find that existing methods of prom... | 2021-10-14T17:58:47Z | Proceedings of the 60th Annual Meeting of the Association of Computational Linguistics, 2022 | null | null | P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks | ['Xiao Liu', 'Kaixuan Ji', 'Yicheng Fu', 'Zhengxiao Du', 'Zhilin Yang', 'Jie Tang'] | 2021 | arXiv.org | 867 | 57 | ['Computer Science'] |
| 2110.07827 | DirectQuote: A Dataset for Direct Quotation Extraction and Attribution in News Articles | ['Yuanchi Zhang', 'Yang Liu'] | ['cs.CL'] | Quotation extraction and attribution are challenging tasks, aiming at determining the spans containing quotations and attributing each quotation to the original speaker. Applying this task to news data is highly related to fact-checking, media monitoring and news tracking. Direct quotations are more traceable and infor... | 2021-10-15T02:50:09Z | null | null | null | DirectQuote: A Dataset for Direct Quotation Extraction and Attribution in News Articles | ['Yuan Zhang', 'Yang Liu'] | 2021 | International Conference on Language Resources and Evaluation | 12 | 27 | ['Computer Science'] |
| 2110.08175 | MixQG: Neural Question Generation with Mixed Answer Types | ["Lidiya Murakhovs'ka", 'Chien-Sheng Wu', 'Philippe Laban', 'Tong Niu', 'Wenhao Liu', 'Caiming Xiong'] | ['cs.CL'] | Asking good questions is an essential ability for both human and machine intelligence. However, existing neural question generation approaches mainly focus on the short factoid type of answers. In this paper, we propose a neural question generator, MixQG, to bridge this gap. We combine 9 question answering datasets wit... | 2021-10-15T16:03:40Z | camera-ready version | null | null | null | null | null | null | null | null | null |
| 2110.08193 | BBQ: A Hand-Built Bias Benchmark for Question Answering | ['Alicia Parrish', 'Angelica Chen', 'Nikita Nangia', 'Vishakh Padmakumar', 'Jason Phang', 'Jana Thompson', 'Phu Mon Htut', 'Samuel R. Bowman'] | ['cs.CL'] | It is well documented that NLP models learn social biases, but little work has been done on how these biases manifest in model outputs for applied tasks like question answering (QA). We introduce the Bias Benchmark for QA (BBQ), a dataset of question sets constructed by the authors that highlight attested social biases... | 2021-10-15T16:43:46Z | Accepted to ACL 2022 Findings. 20 pages, 10 figures | null | null | null | null | null | null | null | null | null |
| 2110.08207 | Multitask Prompted Training Enables Zero-Shot Task Generalization | ['Victor Sanh', 'Albert Webson', 'Colin Raffel', 'Stephen H. Bach', 'Lintang Sutawika', 'Zaid Alyafeai', 'Antoine Chaffin', 'Arnaud Stiegler', 'Teven Le Scao', 'Arun Raja', 'Manan Dey', 'M Saiful Bari', 'Canwen Xu', 'Urmish Thakker', 'Shanya Sharma Sharma', 'Eliza Szczechla', 'Taewoon Kim', 'Gunjan Chhablani', 'Nihal N... | ['cs.LG', 'cs.CL'] | Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks (Brown et al., 2020). It has been hypothesized that this is a consequence of implicit multitask learning in language models' pretraining (Radford et al., 2019). Can zero-shot generalization instead be ... | 2021-10-15T17:08:57Z | ICLR 2022 Spotlight (with extended discussion) | null | null | null | null | null | null | null | null | null |
| 2110.08426 | EncT5: A Framework for Fine-tuning T5 as Non-autoregressive Models | ['Frederick Liu', 'Terry Huang', 'Shihang Lyu', 'Siamak Shakeri', 'Hongkun Yu', 'Jing Li'] | ['cs.CL'] | Pre-trained encoder-decoder transformer architectures have become increasingly popular recently with the advent of T5 models. T5 has also become more favorable over other architectures like BERT due to the amount of data that it is pre-trained on, increased scale of model parameter sizes and easy applicability to a div... | 2021-10-16T00:50:08Z | Update multi-label and structured prediction results | null | null | null | null | null | null | null | null | null |
| 2110.08518 | MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding | ['Junlong Li', 'Yiheng Xu', 'Lei Cui', 'Furu Wei'] | ['cs.CL'] | Multimodal pre-training with text, layout, and image has made significant progress for Visually Rich Document Understanding (VRDU), especially the fixed-layout documents such as scanned document images. While, there are still a large number of digital documents where the layout information is not fixed and needs to be ... | 2021-10-16T09:17:28Z | ACL 2022 | null | null | null | null | null | null | null | null | null |
| 2110.08527 | An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models | ['Nicholas Meade', 'Elinor Poole-Dayan', 'Siva Reddy'] | ['cs.CL', 'cs.LG'] | Recent work has shown pre-trained language models capture social biases from the large amounts of text they are trained on. This has attracted attention to developing techniques that mitigate such biases. In this work, we perform an empirical survey of five recently proposed bias mitigation techniques: Counterfactual D... | 2021-10-16T09:40:30Z | ACL 2022 | null | null | null | null | null | null | null | null | null |
| 2110.08554 | PAGnol: An Extra-Large French Generative Model | ['Julien Launay', 'Elena Tommasone', 'Baptiste Pannier', 'François Boniface', 'Amélie Chatelain', 'Alessandro Cappelli', 'Iacopo Poli', 'Djamé Seddah'] | ['cs.CL'] | Access to large pre-trained models of varied architectures, in many different languages, is central to the democratization of NLP. We introduce PAGnol, a collection of French GPT models. Using scaling laws, we efficiently train PAGnol-XL (1.5B parameters) with the same computational budget as CamemBERT, a model 13 time... | 2021-10-16T11:44:23Z | null | null | null | null | null | null | null | null | null | null |
| 2110.08559 | FrugalScore: Learning Cheaper, Lighter and Faster Evaluation Metrics for Automatic Text Generation | ['Moussa Kamal Eddine', 'Guokan Shang', 'Antoine J. -P. Tixier', 'Michalis Vazirgiannis'] | ['cs.CL'] | Fast and reliable evaluation metrics are key to R&D progress. While traditional natural language generation metrics are fast, they are not very reliable. Conversely, new metrics based on large pretrained language models are much more reliable, but require significant computational resources. In this paper, we propose F... | 2021-10-16T11:59:48Z | null | null | null | null | null | null | null | null | null | null |
| 2110.08604 | LSA: Modeling Aspect Sentiment Coherency via Local Sentiment Aggregation | ['Heng Yang', 'Ke Li'] | ['cs.CL'] | Aspect sentiment coherency is an intriguing yet underexplored topic in the field of aspect-based sentiment classification. This concept reflects the common pattern where adjacent aspects often share similar sentiments. Despite its prevalence, current studies have not fully recognized the potential of modeling aspect se... | 2021-10-16T16:22:43Z | Accepted to EACL 2024 | null | null | null | null | null | null | null | null | null |
| 2110.09456 | NormFormer: Improved Transformer Pretraining with Extra Normalization | ['Sam Shleifer', 'Jason Weston', 'Myle Ott'] | ['cs.CL', 'cs.AI'] | During pretraining, the Pre-LayerNorm transformer suffers from a gradient magnitude mismatch: gradients at early layers are much larger than at later layers. These issues can be alleviated by our proposed NormFormer architecture, which adds three normalization operations to each layer: a Layer Norm after self attention... | 2021-10-18T16:47:45Z | null | null | null | null | null | null | null | null | null | null |
| 2110.09772 | Synergy between 3DMM and 3D Landmarks for Accurate 3D Facial Geometry | ['Cho-Ying Wu', 'Qiangeng Xu', 'Ulrich Neumann'] | ['cs.CV', 'cs.GR'] | This work studies learning from a synergy process of 3D Morphable Models (3DMM) and 3D facial landmarks to predict complete 3D facial geometry, including 3D alignment, face orientation, and 3D face modeling. Our synergy process leverages a representation cycle for 3DMM parameters and 3D landmarks. 3D landmarks can be e... | 2021-10-19T07:29:14Z | Accepted at 3DV 2021. This conference version supersedes arXiv:2104.08403 | null | null | Synergy between 3DMM and 3D Landmarks for Accurate 3D Facial Geometry | ['Cho-Ying Wu', 'Qiangeng Xu', 'U. Neumann'] | 2021 | International Conference on 3D Vision | 60 | 78 | ['Computer Science'] |
| 2110.09784 | SSAST: Self-Supervised Audio Spectrogram Transformer | ['Yuan Gong', 'Cheng-I Jeff Lai', 'Yu-An Chung', 'James Glass'] | ['cs.SD', 'cs.AI', 'eess.AS'] | Recently, neural networks based purely on self-attention, such as the Vision Transformer (ViT), have been shown to outperform deep learning models constructed with convolutional neural networks (CNNs) on various vision tasks, thus extending the success of Transformers, which were originally developed for language proce... | 2021-10-19T07:58:28Z | Accepted at AAAI2022. Code at https://github.com/YuanGongND/ssast | null | null | SSAST: Self-Supervised Audio Spectrogram Transformer | ['Yuan Gong', 'Cheng-I Lai', 'Yu-An Chung', 'James R. Glass'] | 2021 | AAAI Conference on Artificial Intelligence | 277 | 37 | ['Computer Science', 'Engineering'] |
| 2110.10404 | JavaBERT: Training a transformer-based model for the Java programming language | ['Nelson Tavares de Sousa', 'Wilhelm Hasselbring'] | ['cs.SE', 'cs.LG', 'D.2.5'] | Code quality is and will be a crucial factor while developing new software code, requiring appropriate tools to ensure functional and reliable code. Machine learning techniques are still rarely used for software engineering tools, missing out the potential benefits of its application. Natural language processing has sh... | 2021-10-20T06:49:41Z | 6 pages, to appear in the Proceedings of the 9th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE'2021) | null | null | null | null | null | null | null | null | null |
| 2110.10812 | REAL-M: Towards Speech Separation on Real Mixtures | ['Cem Subakan', 'Mirco Ravanelli', 'Samuele Cornell', 'François Grondin'] | ['eess.AS', 'cs.LG', 'cs.SD', 'eess.SP'] | In recent years, deep learning based source separation has achieved impressive results. Most studies, however, still evaluate separation models on synthetic datasets, while the performance of state-of-the-art techniques on in-the-wild speech data remains an open question. This paper contributes to fill this gap in two ... | 2021-10-20T22:39:35Z | Submitted to ICASSP 2022 | null | null | null | null | null | null | null | null | null |
| 2110.11316 | CLOOB: Modern Hopfield Networks with InfoLOOB Outperform CLIP | ['Andreas Fürst', 'Elisabeth Rumetshofer', 'Johannes Lehner', 'Viet Tran', 'Fei Tang', 'Hubert Ramsauer', 'David Kreil', 'Michael Kopp', 'Günter Klambauer', 'Angela Bitto-Nemling', 'Sepp Hochreiter'] | ['cs.LG', 'cs.CV'] | CLIP yielded impressive results on zero-shot transfer learning tasks and is considered as a foundation model like BERT or GPT3. CLIP vision models that have a rich representation are pre-trained using the InfoNCE objective and natural language supervision before they are fine-tuned on particular tasks. Though CLIP exce... | 2021-10-21T17:50:48Z | Published at NeurIPS 2022; Blog: https://ml-jku.github.io/cloob; GitHub: https://github.com/ml-jku/cloob | null | null | null | null | null | null | null | null | null |
| 2110.11624 | SciCap: Generating Captions for Scientific Figures | ['Ting-Yao Hsu', 'C. Lee Giles', "Ting-Hao 'Kenneth' Huang"] | ['cs.CL', 'cs.AI', 'cs.CV'] | Researchers use figures to communicate rich, complex information in scientific papers. The captions of these figures are critical to conveying effective messages. However, low-quality figure captions commonly occur in scientific articles and may decrease understanding. In this paper, we propose an end-to-end neural fra... | 2021-10-22T07:10:41Z | To Appear in EMNLP 2021 Findings. The dataset is available at: https://github.com/tingyaohsu/SciCap | null | null | null | null | null | null | null | null | null |
| 2110.11773 | Sinkformers: Transformers with Doubly Stochastic Attention | ['Michael E. Sander', 'Pierre Ablin', 'Mathieu Blondel', 'Gabriel Peyré'] | ['cs.LG', 'stat.ML'] | Attention based models such as Transformers involve pairwise interactions between data points, modeled with a learnable attention matrix. Importantly, this attention matrix is normalized with the SoftMax operator, which makes it row-wise stochastic. In this paper, we propose instead to use Sinkhorn's algorithm to make ... | 2021-10-22T13:25:01Z | Accepted at AISTATS | null | null | null | null | null | null | null | null | null |
| 2110.12010 | ClimateBert: A Pretrained Language Model for Climate-Related Text | ['Nicolas Webersinke', 'Mathias Kraus', 'Julia Anna Bingler', 'Markus Leippold'] | ['cs.CL'] | Over the recent years, large pretrained language models (LM) have revolutionized the field of natural language processing (NLP). However, while pretraining on general language has been shown to work very well for common language, it has been observed that niche language poses problems. In particular, climate-related te... | 2021-10-22T18:47:34Z | null | null | null | ClimateBert: A Pretrained Language Model for Climate-Related Text | ['Nicolas Webersinke', 'Mathias Kraus', 'J. Bingler', 'Markus Leippold'] | 2021 | Social Science Research Network | 145 | 32 | ['Computer Science'] |
| 2110.12200 | Hate and Offensive Speech Detection in Hindi and Marathi | ['Abhishek Velankar', 'Hrushikesh Patil', 'Amol Gore', 'Shubham Salunke', 'Raviraj Joshi'] | ['cs.CL', 'cs.LG'] | Sentiment analysis is the most basic NLP task to determine the polarity of text data. There has been a significant amount of work in the area of multilingual text as well. Still hate and offensive speech detection faces a challenge due to inadequate availability of data, especially for Indian languages like Hindi and M... | 2021-10-23T11:57:36Z | Accepted at HASOC @Forum for Information Retrieval Evaluation(FIRE) 2021 | null | null | null | null | null | null | null | null | null |
| 2110.12201 | Spanish Legalese Language Model and Corpora | ['Asier Gutiérrez-Fandiño', 'Jordi Armengol-Estapé', 'Aitor Gonzalez-Agirre', 'Marta Villegas'] | ['cs.CL', 'cs.AI'] | There are many Language Models for the English language according to its worldwide relevance. However, for the Spanish language, even if it is a widely spoken language, there are very few Spanish Language Models which result to be small and too general. Legal slang could be think of a Spanish variant on its own as it i... | 2021-10-23T12:06:51Z | null | null | null | null | null | null | null | null | null | null |
| 2110.12555 | hSDB-instrument: Instrument Localization Database for Laparoscopic and Robotic Surgeries | ['Jihun Yoon', 'Jiwon Lee', 'Sunghwan Heo', 'Hayeong Yu', 'Jayeon Lim', 'Chi Hyun Song', 'SeulGi Hong', 'Seungbum Hong', 'Bokyung Park', 'SungHyun Park', 'Woo Jin Hyung', 'Min-Kook Choi'] | ['cs.CV'] | Automated surgical instrument localization is an important technology to understand the surgical process and in order to analyze them to provide meaningful guidance during surgery or surgical index after surgery to the surgeon. We introduce a new dataset that reflects the kinematic characteristics of surgical instrumen... | 2021-10-24T23:35:37Z | https://hsdb-instrument.github.io | MICCAI 2021 pp 393-402 | 10.1007/978-3-030-87202-1_38 | hSDB-instrument: Instrument Localization Database for Laparoscopic and Robotic Surgeries | ['Jihun Yoon', 'Jiwon Lee', 'Sung-Woo Heo', 'Hayeong Yu', 'Jayeon Lim', 'C. Song', 'SeulGi Hong', 'Seungbum Hong', 'Bokyung Park', 'Sunghyun Park', 'W. Hyung', 'Min-Kook Choi'] | 2021 | International Conference on Medical Image Computing and Computer-Assisted Intervention | 4 | 29 | ['Computer Science'] |
| 2110.12612 | DelightfulTTS: The Microsoft Speech Synthesis System for Blizzard Challenge 2021 | ['Yanqing Liu', 'Zhihang Xu', 'Gang Wang', 'Kuan Chen', 'Bohan Li', 'Xu Tan', 'Jinzhu Li', 'Lei He', 'Sheng Zhao'] | ['cs.SD', 'cs.LG', 'eess.AS'] | This paper describes the Microsoft end-to-end neural text to speech (TTS) system: DelightfulTTS for Blizzard Challenge 2021. The goal of this challenge is to synthesize natural and high-quality speech from text, and we approach this goal in two perspectives: The first is to directly model and generate waveform in 48 kH... | 2021-10-25T02:47:59Z | null | null | null | DelightfulTTS: The Microsoft Speech Synthesis System for Blizzard Challenge 2021 | ['Yanqing Liu', 'Zhihang Xu', 'G. Wang', 'Kuan-Hen Chen', 'Bohan Li', 'Xu Tan', 'Jinzhu Li', 'Lei He', 'Sheng Zhao'] | 2021 | Blizzard Challenge | 55 | 30 | ['Computer Science', 'Engineering'] |
| 2110.12628 | Recurrent Off-policy Baselines for Memory-based Continuous Control | ['Zhihan Yang', 'Hai Nguyen'] | ['cs.LG', 'cs.AI', 'cs.RO'] | When the environment is partially observable (PO), a deep reinforcement learning (RL) agent must learn a suitable temporal representation of the entire history in addition to a strategy to control. This problem is not novel, and there have been model-free and model-based algorithms proposed for this problem. However, i... | 2021-10-25T04:08:57Z | null | null | null | Recurrent Off-policy Baselines for Memory-based Continuous Control | ['Zhihan Yang', 'Hai V. Nguyen'] | 2021 | arXiv.org | 24 | 33 | ['Computer Science'] |
| 2110.13900 | WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing | ['Sanyuan Chen', 'Chengyi Wang', 'Zhengyang Chen', 'Yu Wu', 'Shujie Liu', 'Zhuo Chen', 'Jinyu Li', 'Naoyuki Kanda', 'Takuya Yoshioka', 'Xiong Xiao', 'Jian Wu', 'Long Zhou', 'Shuo Ren', 'Yanmin Qian', 'Yao Qian', 'Jian Wu', 'Michael Zeng', 'Xiangzhan Yu', 'Furu Wei'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all s... | 2021-10-26T17:55:19Z | Submitted to the Journal of Selected Topics in Signal Processing (JSTSP) | null | 10.1109/JSTSP.2022.3188113 | null | null | null | null | null | null | null |
| 2110.14038 | Robustness of Graph Neural Networks at Scale | ['Simon Geisler', 'Tobias Schmidt', 'Hakan Şirin', 'Daniel Zügner', 'Aleksandar Bojchevski', 'Stephan Günnemann'] | ['cs.LG', 'stat.ML'] | Graph Neural Networks (GNNs) are increasingly important given their popularity and the diversity of applications. Yet, existing studies of their vulnerability to adversarial attacks rely on relatively small graphs. We address this gap and study how to attack and defend GNNs at scale. We propose two sparsity-aware first... | 2021-10-26T21:31:17Z | 39 pages, 22 figures, 17 tables NeurIPS 2021 | null | null | Robustness of Graph Neural Networks at Scale | ['Simon Geisler', 'Tobias Schmidt', 'Hakan Şirin', 'Daniel Zügner', 'Aleksandar Bojchevski', 'Stephan Günnemann'] | 2021 | Neural Information Processing Systems | 135 | 52 | ['Computer Science', 'Mathematics'] |
| 2110.14168 | Training Verifiers to Solve Math Word Problems | ['Karl Cobbe', 'Vineet Kosaraju', 'Mohammad Bavarian', 'Mark Chen', 'Heewoo Jun', 'Lukasz Kaiser', 'Matthias Plappert', 'Jerry Tworek', 'Jacob Hilton', 'Reiichiro Nakano', 'Christopher Hesse', 'John Schulman'] | ['cs.LG', 'cs.CL'] | State-of-the-art language models can match human performance on many tasks, but they still struggle to robustly perform multi-step mathematical reasoning. To diagnose the failures of current models and support research, we introduce GSM8K, a dataset of 8.5K high quality linguistically diverse grade school math word pro... | 2021-10-27T04:49:45Z | null | null | null | null | null | null | null | null | null | null |
| 2110.14566 | IndoNLI: A Natural Language Inference Dataset for Indonesian | ['Rahmad Mahendra', 'Alham Fikri Aji', 'Samuel Louvan', 'Fahrurrozi Rahman', 'Clara Vania'] | ['cs.CL'] | We present IndoNLI, the first human-elicited NLI dataset for Indonesian. We adapt the data collection protocol for MNLI and collect nearly 18K sentence pairs annotated by crowd workers and experts. The expert-annotated data is used exclusively as a test set. It is designed to provide a challenging test-bed for Indonesi... | 2021-10-27T16:37:13Z | Accepted at EMNLP 2021 main conference | https://aclanthology.org/2021.emnlp-main.821/ | 10.18653/v1/2021.emnlp-main.821 | null | null | null | null | null | null | null |
2110.14883 | Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel
Training | ['Shenggui Li', 'Hongxin Liu', 'Zhengda Bian', 'Jiarui Fang', 'Haichen Huang', 'Yuliang Liu', 'Boxiang Wang', 'Yang You'] | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV', 'cs.DC'] | The success of Transformer models has pushed the deep learning model scale to
billions of parameters. However, due to the limited memory resource of a
single GPU, the best practice for choosing the optimal parallel strategy is still
lacking, since it requires domain expertise in both deep learning and parallel
computin... | 2021-10-28T04:45:55Z | null | null | null | null | null | null | null | null | null | null |
2110.15621 | MentalBERT: Publicly Available Pretrained Language Models for Mental
Healthcare | ['Shaoxiong Ji', 'Tianlin Zhang', 'Luna Ansari', 'Jie Fu', 'Prayag Tiwari', 'Erik Cambria'] | ['cs.CL'] | Mental health is a critical issue in modern society, and mental disorders
could sometimes turn to suicidal ideation without adequate treatment. Early
detection of mental disorders and suicidal ideation from social content
provides a potential way for effective social intervention. Recent advances in
pretrained contextu... | 2021-10-29T08:36:47Z | null | Proceedings of the Language Resources and Evaluation Conference
(LREC), 2022 | null | MentalBERT: Publicly Available Pretrained Language Models for Mental Healthcare | ['Shaoxiong Ji', 'Tianlin Zhang', 'Luna Ansari', 'Jie Fu', 'P. Tiwari', 'E. Cambria'] | 2021 | International Conference on Language Resources and Evaluation | 236 | 50 | ['Computer Science']
2110.15709 | LegalNLP -- Natural Language Processing methods for the Brazilian Legal
Language | ['Felipe Maia Polo', 'Gabriel Caiaffa Floriano Mendonça', 'Kauê Capellato J. Parreira', 'Lucka Gianvechio', 'Peterson Cordeiro', 'Jonathan Batista Ferreira', 'Leticia Maria Paz de Lima', 'Antônio Carlos do Amaral Maia', 'Renato Vicente'] | ['cs.CL', 'cs.LG'] | We present and make available pre-trained language models (Phraser, Word2Vec,
Doc2Vec, FastText, and BERT) for the Brazilian legal language, a Python package
with functions to facilitate their use, and a set of demonstrations/tutorials
containing some applications involving them. Given that our material is built
upon l... | 2021-10-05T04:44:37Z | null | null | LegalNLP - Natural Language Processing methods for the Brazilian Legal Language | ['Felipe Maia Polo', 'Gabriel Caiaffa Floriano Mendonça', 'K. C. J. Parreira', 'L. Gianvechio', 'Peterson Cordeiro', 'Jonathan Batista Ferreira', 'Leticia Maria Paz de Lima', 'Antonio Carlos do Amaral Maia', 'R. Vicente'] | 2021 | Anais do XVIII Encontro Nacional de Inteligência Artificial e Computacional (ENIAC 2021) | 14 | 15 | ['Computer Science']
2110.15731 | CORAA: a large corpus of spontaneous and prepared speech manually
validated for speech recognition in Brazilian Portuguese | ['Arnaldo Candido Junior', 'Edresson Casanova', 'Anderson Soares', 'Frederico Santos de Oliveira', 'Lucas Oliveira', 'Ricardo Corso Fernandes Junior', 'Daniel Peixoto Pinto da Silva', 'Fernando Gorgulho Fayet', 'Bruno Baldissera Carlotto', 'Lucas Rafael Stefanel Gris', 'Sandra Maria Aluísio'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Automatic Speech recognition (ASR) is a complex and challenging task. In
recent years, there have been significant advances in the area. In particular,
for the Brazilian Portuguese (BP) language, there were about 376 hours public
available for ASR task until the second half of 2020. With the release of new
datasets in ... | 2021-10-14T13:50:52Z | This paper is under consideration at Language Resources and
Evaluation (LREV) | null | null | CORAA: a large corpus of spontaneous and prepared speech manually validated for speech recognition in Brazilian Portuguese | ['Arnaldo Cândido Júnior', 'Edresson Casanova', 'A. Soares', 'F. S. Oliveira', 'L. Oliveira', 'Ricardo Corso Fernandes Junior', 'Daniel Peixoto Pinto da Silva', 'Fernando Gorgulho Fayet', 'B. Carlotto', 'L. Gris', 'S. Aluísio'] | 2021 | arXiv.org | 15 | 42 | ['Computer Science', 'Engineering']
2111.00161 | Pseudo-Labeling for Massively Multilingual Speech Recognition | ['Loren Lugosch', 'Tatiana Likhomanenko', 'Gabriel Synnaeve', 'Ronan Collobert'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Semi-supervised learning through pseudo-labeling has become a staple of
state-of-the-art monolingual speech recognition systems. In this work, we
extend pseudo-labeling to massively multilingual speech recognition with 60
languages. We propose a simple pseudo-labeling recipe that works well even with
low-resource langu... | 2021-10-30T03:30:17Z | Accepted to ICASSP 2022. New version has links to code/models + more
training curves for larger model. (Fixed code link.) | null | null | null | null | null | null | null | null | null |
2111.00210 | Mastering Atari Games with Limited Data | ['Weirui Ye', 'Shaohuai Liu', 'Thanard Kurutach', 'Pieter Abbeel', 'Yang Gao'] | ['cs.LG', 'cs.AI', 'cs.CV', 'cs.RO'] | Reinforcement learning has achieved great success in many applications.
However, sample efficiency remains a key challenge, with prominent methods
requiring millions (or even billions) of environment steps to train. Recently,
there has been significant progress in sample efficient image-based RL
algorithms; however, co... | 2021-10-30T09:13:39Z | Published at NeurIPS 2021; Homepage:
https://yewr.github.io/projects/efficientzero/ | null | null | Mastering Atari Games with Limited Data | ['Weirui Ye', 'Shao-Wei Liu', 'Thanard Kurutach', 'P. Abbeel', 'Yang Gao'] | 2021 | Neural Information Processing Systems | 242 | 49 | ['Computer Science']
2111.00396 | Efficiently Modeling Long Sequences with Structured State Spaces | ['Albert Gu', 'Karan Goel', 'Christopher Ré'] | ['cs.LG'] | A central goal of sequence modeling is designing a single principled model
that can address sequence data across a range of modalities and tasks,
particularly on long-range dependencies. Although conventional models including
RNNs, CNNs, and Transformers have specialized variants for capturing long
dependencies, they s... | 2021-10-31T03:32:18Z | ICLR 2022 (Outstanding Paper HM) | null | null | null | null | null | null | null | null | null |
2111.00526 | FinEAS: Financial Embedding Analysis of Sentiment | ['Asier Gutiérrez-Fandiño', 'Miquel Noguer i Alonso', 'Petter Kolm', 'Jordi Armengol-Estapé'] | ['cs.CL', 'q-fin.CP', 'q-fin.PM'] | We introduce a new language representation model in finance called Financial
Embedding Analysis of Sentiment (FinEAS). In financial markets, news and
investor sentiment are significant drivers of security prices. Thus, leveraging
the capabilities of modern NLP approaches for financial sentiment analysis is a
crucial co... | 2021-10-31T15:41:56Z | null | null | null | FinEAS: Financial Embedding Analysis of Sentiment | ['Asier Gutiérrez-Fandiño', 'M. N. Alonso', 'P. Kolm', 'Jordi Armengol-Estapé'] | 2021 | Social Science Research Network | 6 | 16 | ['Computer Science', 'Economics']
2111.00595 | TorchXRayVision: A library of chest X-ray datasets and models | ['Joseph Paul Cohen', 'Joseph D. Viviano', 'Paul Bertin', 'Paul Morrison', 'Parsa Torabian', 'Matteo Guarrera', 'Matthew P Lungren', 'Akshay Chaudhari', 'Rupert Brooks', 'Mohammad Hashir', 'Hadrien Bertrand'] | ['eess.IV', 'cs.AI', 'cs.CV'] | TorchXRayVision is an open source software library for working with chest
X-ray datasets and deep learning models. It provides a common interface and
common pre-processing chain for a wide set of publicly available chest X-ray
datasets. In addition, a number of classification and representation learning
models with dif... | 2021-10-31T21:19:08Z | Library source code: https://github.com/mlmed/torchxrayvision | null | null | null | null | null | null | null | null | null |
2111.00899 | Equivariant Contrastive Learning | ['Rumen Dangovski', 'Li Jing', 'Charlotte Loh', 'Seungwook Han', 'Akash Srivastava', 'Brian Cheung', 'Pulkit Agrawal', 'Marin Soljačić'] | ['cs.CV', 'cs.LG', 'eess.IV', 'physics.app-ph'] | In state-of-the-art self-supervised learning (SSL) pre-training produces
semantically good representations by encouraging them to be invariant under
meaningful transformations prescribed from human knowledge. In fact, the
property of invariance is a trivial instance of a broader class called
equivariance, which can be ... | 2021-10-28T17:21:33Z | Camera Ready Revision. ICLR 2022. Discussion:
https://openreview.net/forum?id=gKLAAfiytI Code:
https://github.com/rdangovs/essl | null | null | null | null | null | null | null | null | null |
2111.01007 | Projected GANs Converge Faster | ['Axel Sauer', 'Kashyap Chitta', 'Jens Müller', 'Andreas Geiger'] | ['cs.CV', 'cs.LG'] | Generative Adversarial Networks (GANs) produce high-quality images but are
challenging to train. They need careful regularization, vast amounts of
compute, and expensive hyper-parameter sweeps. We make significant headway on
these issues by projecting generated and real samples into a fixed, pretrained
feature space. M... | 2021-11-01T15:11:01Z | To appear in NeurIPS 2021. Project Page:
https://sites.google.com/view/projected-gan/ | null | null | Projected GANs Converge Faster | ['Axel Sauer', 'Kashyap Chitta', 'Jens Müller', 'Andreas Geiger'] | 2021 | Neural Information Processing Systems | 237 | 95 | ['Computer Science']
2111.01253 | Neural Scene Flow Prior | ['Xueqian Li', 'Jhony Kaesemodel Pontes', 'Simon Lucey'] | ['cs.CV'] | Before the deep learning revolution, many perception algorithms were based on
runtime optimization in conjunction with a strong prior/regularization penalty.
A prime example of this in computer vision is optical and scene flow.
Supervised learning has largely displaced the need for explicit regularization.
Instead, the... | 2021-11-01T20:44:12Z | accepted by NeurIPS 2021 as "spotlight" | null | null | Neural Scene Flow Prior | ['Xueqian Li', 'J. K. Pontes', 'S. Lucey'] | 2021 | Neural Information Processing Systems | 95 | 82 | ['Computer Science']
2111.01722 | Predicting the Location of Bicycle-sharing Stations using OpenStreetMap
Data | ['Kamil Raczycki'] | ['cs.LG', 'cs.AI', 'cs.CY'] | Planning the layout of bicycle-sharing stations is a complex process,
especially in cities where bicycle sharing systems are just being implemented.
Urban planners often have to make a lot of estimates based on both publicly
available data and privately provided data from the administration and then use
the Location-Al... | 2021-11-02T16:44:00Z | Codebase and interactive website available at
https://pwr-inf.github.io/Transfer-learning-approach-to-bicycle-sharing-systems-station-location-planning-using-OpenStreetMap.
arXiv admin note: text overlap with arXiv:2111.00990 | null | null | Predicting the Location of Bicycle-sharing Stations using OpenStreetMap Data | ['Kamil Raczycki'] | 2021 | arXiv.org | 0 | 0 | ['Computer Science']
2111.02114 | LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs | ['Christoph Schuhmann', 'Richard Vencu', 'Romain Beaumont', 'Robert Kaczmarczyk', 'Clayton Mullis', 'Aarush Katta', 'Theo Coombes', 'Jenia Jitsev', 'Aran Komatsuzaki'] | ['cs.CV', 'cs.CL', 'cs.LG'] | Multi-modal language-vision models trained on hundreds of millions of
image-text pairs (e.g. CLIP, DALL-E) gained a recent surge, showing remarkable
capability to perform zero- or few-shot learning and transfer even in absence
of per-sample labels on target image data. Despite this trend, to date there
has been no publ... | 2021-11-03T10:16:39Z | Short version. Accepted at Data Centric AI NeurIPS Workshop 2021 | null | null | LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs | ['Christoph Schuhmann', 'R. Vencu', 'R. Beaumont', 'R. Kaczmarczyk', 'Clayton Mullis', 'Aarush Katta', 'Theo Coombes', 'J. Jitsev', 'Aran Komatsuzaki'] | 2021 | arXiv.org | 1446 | 12 | ['Computer Science']
2111.02392 | A Comparison of Discrete and Soft Speech Units for Improved Voice
Conversion | ['Benjamin van Niekerk', 'Marc-André Carbonneau', 'Julian Zaïdi', 'Mathew Baas', 'Hugo Seuté', 'Herman Kamper'] | ['eess.AS', 'cs.SD'] | The goal of voice conversion is to transform source speech into a target
voice, keeping the content unchanged. In this paper, we focus on
self-supervised representation learning for voice conversion. Specifically, we
compare discrete and soft speech units as input features. We find that discrete
representations effecti... | 2021-11-03T17:58:03Z | 5 pages, 2 figures, 2 tables. Accepted at ICASSP 2022 | null | 10.1109/ICASSP43922.2022.9746484 | A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion | ['B. V. Niekerk', 'M. Carbonneau', 'Julian Zaïdi', 'Matthew Baas', 'Hugo Seuté', 'H. Kamper'] | 2021 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 123 | 32 | ['Computer Science', 'Engineering']
2111.02394 | FAST: Faster Arbitrarily-Shaped Text Detector with Minimalist Kernel
Representation | ['Zhe Chen', 'Jiahao Wang', 'Wenhai Wang', 'Guo Chen', 'Enze Xie', 'Ping Luo', 'Tong Lu'] | ['cs.CV'] | We propose an accurate and efficient scene text detection framework, termed
FAST (i.e., faster arbitrarily-shaped text detector). Different from recent
advanced text detectors that used complicated post-processing and hand-crafted
network architectures, resulting in low inference speed, FAST has two new
designs. (1) We... | 2021-11-03T17:58:47Z | null | null | null | null | null | null | null | null | null | null |
2111.02549 | VORTEX: Physics-Driven Data Augmentations Using Consistency Training for
Robust Accelerated MRI Reconstruction | ['Arjun D Desai', 'Beliz Gunel', 'Batu M Ozturkler', 'Harris Beg', 'Shreyas Vasanawala', 'Brian A Hargreaves', 'Christopher Ré', 'John M Pauly', 'Akshay S Chaudhari'] | ['eess.IV', 'physics.med-ph'] | Deep neural networks have enabled improved image quality and fast inference
times for various inverse problems, including accelerated magnetic resonance
imaging (MRI) reconstruction. However, such models require a large number of
fully-sampled ground truth datasets, which are difficult to curate, and are
sensitive to d... | 2021-11-03T22:34:16Z | Accepted to MIDL 2022 | null | null | VORTEX: Physics-Driven Data Augmentations Using Consistency Training for Robust Accelerated MRI Reconstruction | ['Arjun D Desai', 'Beliz Gunel', 'Batu Mehmet Ozturkler', 'Harris Beg', 'S. Vasanawala', 'B. Hargreaves', 'Christopher Ré', 'J. Pauly', 'A. Chaudhari'] | 2021 | International Conference on Medical Imaging with Deep Learning | 25 | 66 | ['Engineering', 'Physics', 'Computer Science']
2111.02813 | WaveFake: A Data Set to Facilitate Audio Deepfake Detection | ['Joel Frank', 'Lea Schönherr'] | ['cs.LG', 'cs.CR', 'cs.SD', 'eess.AS'] | Deep generative modeling has the potential to cause significant harm to
society. Recognizing this threat, a magnitude of research into detecting
so-called "Deepfakes" has emerged. This research most often focuses on the
image domain, while studies exploring generated audio signals have, so-far,
been neglected. In this ... | 2021-11-04T12:26:34Z | Accepted to NeurIPS 2021 (Benchmark and Dataset Track); Code:
https://github.com/RUB-SysSec/WaveFake; Data:
https://zenodo.org/record/5642694 | null | null | null | null | null | null | null | null | null |
2111.03452 | Generalized Radiograph Representation Learning via Cross-supervision
between Images and Free-text Radiology Reports | ['Hong-Yu Zhou', 'Xiaoyu Chen', 'Yinghao Zhang', 'Ruibang Luo', 'Liansheng Wang', 'Yizhou Yu'] | ['eess.IV', 'cs.CV', 'cs.LG'] | Pre-training lays the foundation for recent successes in radiograph analysis
supported by deep learning. It learns transferable image representations by
conducting large-scale fully-supervised or self-supervised learning on a source
domain. However, supervised pre-training requires a complex and labor intensive
two-sta... | 2021-11-04T14:28:22Z | Accepted by Nature Machine Intelligence. The official version is at
https://www.nature.com/articles/s42256-021-00425-9. Codes are available at
https://github.com/funnyzhou/REFERS | null | 10.1038/s42256-021-00425-9 | null | null | null | null | null | null | null |
2111.04551 | Sexism Prediction in Spanish and English Tweets Using Monolingual and
Multilingual BERT and Ensemble Models | ['Angel Felipe Magnossão de Paula', 'Roberto Fray da Silva', 'Ipek Baris Schlicht'] | ['cs.CL', 'cs.AI', 'cs.CY', 'cs.LG'] | The popularity of social media has created problems such as hate speech and
sexism. The identification and classification of sexism in social media are
very relevant tasks, as they would allow building a healthier social
environment. Nevertheless, these tasks are considerably challenging. This work
proposes a system to... | 2021-11-08T15:01:06Z | 18 pages, presented at IberLEF:
http://ceur-ws.org/Vol-2943/exist_paper2.pdf, the best scoring system at
EXIST | null | null | null | null | null | null | null | null | null |
2111.05011 | RAVE: A variational autoencoder for fast and high-quality neural audio
synthesis | ['Antoine Caillon', 'Philippe Esling'] | ['cs.LG', 'cs.SD', 'eess.AS'] | Deep generative models applied to audio have improved by a large margin the
state-of-the-art in many speech and music related tasks. However, as raw
waveform modelling remains an inherently difficult task, audio generative
models are either computationally intensive, rely on low sampling rates, are
complicated to contr... | 2021-11-09T09:07:30Z | null | null | null | null | null | null | null | null | null | null |
2111.05754 | Prune Once for All: Sparse Pre-Trained Language Models | ['Ofir Zafrir', 'Ariel Larey', 'Guy Boudoukh', 'Haihao Shen', 'Moshe Wasserblat'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Transformer-based language models are applied to a wide range of applications
in natural language processing. However, they are inefficient and difficult to
deploy. In recent years, many compression algorithms have been proposed to
increase the implementation efficiency of large Transformer-based models on
target hardw... | 2021-11-10T15:52:40Z | ENLSP NeurIPS Workshop 2021, 12 pages | null | null | null | null | null | null | null | null | null |
2111.06053 | Improving Large-scale Language Models and Resources for Filipino | ['Jan Christian Blaise Cruz', 'Charibeth Cheng'] | ['cs.CL'] | In this paper, we improve on existing language resources for the low-resource
Filipino language in two ways. First, we outline the construction of the
TLUnified dataset, a large-scale pretraining corpus that serves as an
improvement over smaller existing pretraining datasets for the language in
terms of scale and topic... | 2021-11-11T05:00:58Z | Resources are available at blaisecruz.com/resources | null | null | null | null | null | null | null | null | null |
2111.06377 | Masked Autoencoders Are Scalable Vision Learners | ['Kaiming He', 'Xinlei Chen', 'Saining Xie', 'Yanghao Li', 'Piotr Dollár', 'Ross Girshick'] | ['cs.CV'] | This paper shows that masked autoencoders (MAE) are scalable self-supervised
learners for computer vision. Our MAE approach is simple: we mask random
patches of the input image and reconstruct the missing pixels. It is based on
two core designs. First, we develop an asymmetric encoder-decoder architecture,
with an enco... | 2021-11-11T18:46:40Z | Tech report. arXiv v2: add more transfer learning results; v3: add
robustness evaluation | null | null | null | null | null | null | null | null | null |
2111.06476 | Automated question generation and question answering from Turkish texts | ['Fatih Cagatay Akyon', 'Devrim Cavusoglu', 'Cemil Cengiz', 'Sinan Onur Altinuc', 'Alptekin Temizel'] | ['cs.LG'] | While exam-style questions are a fundamental educational tool serving a
variety of purposes, manual construction of questions is a complex process that
requires training, experience and resources. Automatic question generation (QG)
techniques can be utilized to satisfy the need for a continuous supply of new
questions ... | 2021-11-11T22:00:45Z | 14 pages, 1 figure, 13 tables | null | null | null | null | null | null | null | null | null |
2111.06693 | Deep-learning in the bioimaging wild: Handling ambiguous data with
deepflash2 | ['Matthias Griebel', 'Dennis Segebarth', 'Nikolai Stein', 'Nina Schukraft', 'Philip Tovote', 'Robert Blum', 'Christoph M. Flath'] | ['q-bio.QM', 'cs.CV'] | We present deepflash2, a deep learning solution that facilitates the
objective and reliable segmentation of ambiguous bioimages through multi-expert
annotations and integrated quality assurance. Thereby, deepflash2 addresses
typical challenges that arise during training, evaluation, and application of
deep learning mod... | 2021-11-12T12:35:26Z | null | null | null | Deep-learning in the bioimaging wild: Handling ambiguous data with deepflash2 | ['M. Griebel', 'Dennis Segebarth', 'N. Stein', 'Nina Schukraft', 'P. Tovote', 'R. Blum', 'C. Flath'] | 2021 | arXiv.org | 2 | 56 | ['Computer Science', 'Biology']
2111.07047 | Facial Landmark Points Detection Using Knowledge Distillation-Based
Neural Networks | ['Ali Pourramezan Fard', 'Mohammad H. Mahoor'] | ['cs.CV'] | Facial landmark detection is a vital step for numerous facial image analysis
applications. Although some deep learning-based methods have achieved good
performances in this task, they are often not suitable for running on mobile
devices. Such methods rely on networks with many parameters, which makes the
training and i... | 2021-11-13T05:45:14Z | Accepted in Computer Vision and Image Understanding Journal | null | null | Facial Landmark Points Detection Using Knowledge Distillation-Based Neural Networks | ['A. P. Fard', 'M. Mahoor'] | 2021 | Computer Vision and Image Understanding | 28 | 72 | ['Computer Science']
2111.07991 | LiT: Zero-Shot Transfer with Locked-image text Tuning | ['Xiaohua Zhai', 'Xiao Wang', 'Basil Mustafa', 'Andreas Steiner', 'Daniel Keysers', 'Alexander Kolesnikov', 'Lucas Beyer'] | ['cs.CV', 'cs.CL', 'cs.LG'] | This paper presents contrastive-tuning, a simple method employing contrastive
training to align image and text models while still taking advantage of their
pre-training. In our empirical study we find that locked pre-trained image
models with unlocked text models work best. We call this instance of
contrastive-tuning "... | 2021-11-15T18:53:48Z | Xiaohua, Xiao, Basil, Andreas and Lucas contributed equally; CVPR
2022 | null | null | null | null | null | null | null | null | null |
2111.08276 | Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual
Concepts | ['Yan Zeng', 'Xinsong Zhang', 'Hang Li'] | ['cs.CL', 'cs.CV'] | Most existing methods in vision language pre-training rely on object-centric
features extracted through object detection and make fine-grained alignments
between the extracted features and texts. It is challenging for these methods
to learn relations among multiple objects. To this end, we propose a new method
called X... | 2021-11-16T07:55:26Z | ICML 2022 | null | null | null | null | null | null | null | null | null |
2111.08366 | Multi-Vector Models with Textual Guidance for Fine-Grained Scientific
Document Similarity | ['Sheshera Mysore', 'Arman Cohan', 'Tom Hope'] | ['cs.CL', 'cs.IR'] | We present a new scientific document similarity model based on matching
fine-grained aspects of texts. To train our model, we exploit a
naturally-occurring source of supervision: sentences in the full-text of papers
that cite multiple papers together (co-citations). Such co-citations not only
reflect close paper relate... | 2021-11-16T11:12:30Z | NAACL 2022 camera-ready | null | null | null | null | null | null | null | null | null |
2111.09296 | XLS-R: Self-supervised Cross-lingual Speech Representation Learning at
Scale | ['Arun Babu', 'Changhan Wang', 'Andros Tjandra', 'Kushal Lakhotia', 'Qiantong Xu', 'Naman Goyal', 'Kritika Singh', 'Patrick von Platen', 'Yatharth Saraf', 'Juan Pino', 'Alexei Baevski', 'Alexis Conneau', 'Michael Auli'] | ['cs.CL', 'cs.SD', 'eess.AS'] | This paper presents XLS-R, a large-scale model for cross-lingual speech
representation learning based on wav2vec 2.0. We train models with up to 2B
parameters on nearly half a million hours of publicly available speech audio in
128 languages, an order of magnitude more public data than the largest known
prior work. Our... | 2021-11-17T18:49:42Z | null | null | null | XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale | ['Arun Babu', 'Changhan Wang', 'Andros Tjandra', 'Kushal Lakhotia', 'Qiantong Xu', 'Naman Goyal', 'Kritika Singh', 'Patrick von Platen', 'Yatharth Saraf', 'J. Pino', 'Alexei Baevski', 'Alexis Conneau', 'Michael Auli'] | 2021 | Interspeech | 713 | 69 | ['Computer Science', 'Engineering']
2111.09453 | RoBERTuito: a pre-trained language model for social media text in
Spanish | ['Juan Manuel Pérez', 'Damián A. Furman', 'Laura Alonso Alemany', 'Franco Luque'] | ['cs.CL', 'cs.AI'] | Since BERT appeared, Transformer language models and transfer learning have
become state-of-the-art for Natural Language Understanding tasks. Recently,
some works geared towards pre-training specially-crafted models for particular
domains, such as scientific papers, medical documents, user-generated texts,
among others... | 2021-11-18T00:10:25Z | LREC 2022 | null | null | RoBERTuito: a pre-trained language model for social media text in Spanish | ['Juan Manuel Pérez', 'D. Furman', 'L. A. Alemany', 'F. Luque'] | 2021 | International Conference on Language Resources and Evaluation | 100 | 38 | ['Computer Science']
2111.09525 | SummaC: Re-Visiting NLI-based Models for Inconsistency Detection in
Summarization | ['Philippe Laban', 'Tobias Schnabel', 'Paul N. Bennett', 'Marti A. Hearst'] | ['cs.CL'] | In the summarization domain, a key requirement for summaries is to be
factually consistent with the input document. Previous work has found that
natural language inference (NLI) models do not perform competitively when
applied to inconsistency detection. In this work, we revisit the use of NLI for
inconsistency detecti... | 2021-11-18T05:02:31Z | TACL pre-MIT Press publication version; 11 pages, 2 figures, 5 tables | null | null | null | null | null | null | null | null | null |
2111.09543 | DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with
Gradient-Disentangled Embedding Sharing | ['Pengcheng He', 'Jianfeng Gao', 'Weizhu Chen'] | ['cs.CL', 'cs.LG', 'cs.CL, cs.GL', 'I.2; I.7'] | This paper presents a new pre-trained language model, DeBERTaV3, which
improves the original DeBERTa model by replacing mask language modeling (MLM)
with replaced token detection (RTD), a more sample-efficient pre-training task.
Our analysis shows that vanilla embedding sharing in ELECTRA hurts training
efficiency and ... | 2021-11-18T06:48:00Z | 16 pages, 10 tables, 2 Figures. The DeBERTaV3 model significantly
improves performance of the downstream NLU tasks over models with a similar
structure, e.g. DeBERTaV3 large achieves 91.37% average GLUE score which is
1.37% over DeBERTa large. XSmall has only 22M backbone parameters, but
significantly outperfor... | null | null | null | null | null | null | null | null | null |
2111.09645 | Dynamic-TinyBERT: Boost TinyBERT's Inference Efficiency by Dynamic
Sequence Length | ['Shira Guskin', 'Moshe Wasserblat', 'Ke Ding', 'Gyuwan Kim'] | ['cs.CL', 'cs.LG'] | Limited computational budgets often prevent transformers from being used in
production and from having their high accuracy utilized. TinyBERT addresses the
computational efficiency by self-distilling BERT into a smaller transformer
representation having fewer layers and smaller internal embedding. However,
TinyBERT's p... | 2021-11-18T11:58:19Z | ENLSP NeurIPS Workshop 2021, 7 pages | null | null | null | null | null | null | null | null | null |
2111.09714 | You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli
Sampling | ['Zhanpeng Zeng', 'Yunyang Xiong', 'Sathya N. Ravi', 'Shailesh Acharya', 'Glenn Fung', 'Vikas Singh'] | ['cs.LG', 'cs.CL'] | Transformer-based models are widely used in natural language processing
(NLP). Central to the transformer model is the self-attention mechanism, which
captures the interactions of token pairs in the input sequences and depends
quadratically on the sequence length. Training such models on longer sequences
is expensive. ... | 2021-11-18T14:24:34Z | Proceedings of the 38th ICML (2021) | null | null | null | null | null | null | null | null | null |
2111.09734 | ClipCap: CLIP Prefix for Image Captioning | ['Ron Mokady', 'Amir Hertz', 'Amit H. Bermano'] | ['cs.CV'] | Image captioning is a fundamental task in vision-language understanding,
where the model predicts a textual informative caption to a given input image.
In this paper, we present a simple approach to address this task. We use CLIP
encoding as a prefix to the caption, by employing a simple mapping network, and
then fine-... | 2021-11-18T14:49:15Z | null | null | null | null | null | null | null | null | null | null |
2111.09832 | Merging Models with Fisher-Weighted Averaging | ['Michael Matena', 'Colin Raffel'] | ['cs.LG'] | Averaging the parameters of models that have the same architecture and
initialization can provide a means of combining their respective capabilities.
In this paper, we take the perspective that this "merging" operation can be
seen as choosing parameters that approximately maximize the joint likelihood of
the posteriors... | 2021-11-18T17:59:35Z | null | null | null | Merging Models with Fisher-Weighted Averaging | ['Michael Matena', 'Colin Raffel'] | 2021 | Neural Information Processing Systems | 403 | 72 | ['Computer Science']
2111.09883 | Swin Transformer V2: Scaling Up Capacity and Resolution | ['Ze Liu', 'Han Hu', 'Yutong Lin', 'Zhuliang Yao', 'Zhenda Xie', 'Yixuan Wei', 'Jia Ning', 'Yue Cao', 'Zheng Zhang', 'Li Dong', 'Furu Wei', 'Baining Guo'] | ['cs.CV'] | Large-scale NLP models have been shown to significantly improve the
performance on language tasks with no signs of saturation. They also
demonstrate amazing few-shot capabilities like that of human beings. This paper
aims to explore large-scale models in computer vision. We tackle three major
issues in training and app... | 2021-11-18T18:59:33Z | null | CVPR2022 | null | null | null | null | null | null | null | null |
2111.09886 | SimMIM: A Simple Framework for Masked Image Modeling | ['Zhenda Xie', 'Zheng Zhang', 'Yue Cao', 'Yutong Lin', 'Jianmin Bao', 'Zhuliang Yao', 'Qi Dai', 'Han Hu'] | ['cs.CV'] | This paper presents SimMIM, a simple framework for masked image modeling. We
simplify recently proposed related approaches without special designs such as
block-wise masking and tokenization via discrete VAE or clustering. To study
what let the masked image modeling task learn good representations, we
systematically st... | 2021-11-18T18:59:45Z | null | null | null | null | null | null | null | null | null | null |
2111.10050 | Combined Scaling for Zero-shot Transfer Learning | ['Hieu Pham', 'Zihang Dai', 'Golnaz Ghiasi', 'Kenji Kawaguchi', 'Hanxiao Liu', 'Adams Wei Yu', 'Jiahui Yu', 'Yi-Ting Chen', 'Minh-Thang Luong', 'Yonghui Wu', 'Mingxing Tan', 'Quoc V. Le'] | ['cs.LG', 'cs.CL', 'cs.CV'] | We present a combined scaling method - named BASIC - that achieves 85.7%
top-1 accuracy on the ImageNet ILSVRC-2012 validation set without learning from
any labeled ImageNet example. This accuracy surpasses best published similar
models - CLIP and ALIGN - by 9.3%. Our BASIC model also shows significant
improvements in ... | 2021-11-19T05:25:46Z | null | null | null | Combined Scaling for Zero-shot Transfer Learning | ['Hieu Pham', 'Zihang Dai', 'Golnaz Ghiasi', 'Hanxiao Liu', 'Adams Wei Yu', 'Minh-Thang Luong', 'Mingxing Tan', 'Quoc V. Le'] | 2021 | Neurocomputing | 202 | 121 | ['Computer Science']
2111.10142 | Between welcome culture and border fence. A dataset on the European
refugee crisis in German newspaper reports | ['Nico Blokker', 'André Blessing', 'Erenay Dayanik', 'Jonas Kuhn', 'Sebastian Padó', 'Gabriella Lapesa'] | ['cs.CL'] | Newspaper reports provide a rich source of information on the unfolding of
public debate on specific policy fields that can serve as basis for inquiry in
political science. Such debates are often triggered by critical events, which
attract public attention and incite the reactions of political actors: crisis
sparks the... | 2021-11-19T10:34:23Z | Submitted to Language Resources and Evaluation. This manuscript is an
extended version of https://aclanthology.org/2020.lrec-1.115 | null | null | null | null | null | null | null | null | null |
2111.10952 | ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning | ['Vamsi Aribandi', 'Yi Tay', 'Tal Schuster', 'Jinfeng Rao', 'Huaixiu Steven Zheng', 'Sanket Vaibhav Mehta', 'Honglei Zhuang', 'Vinh Q. Tran', 'Dara Bahri', 'Jianmo Ni', 'Jai Gupta', 'Kai Hui', 'Sebastian Ruder', 'Donald Metzler'] | ['cs.CL', 'cs.LG'] | Despite the recent success of multi-task learning and transfer learning for
natural language processing (NLP), few works have systematically studied the
effect of scaling up the number of tasks during pre-training. Towards this
goal, this paper introduces ExMix (Extreme Mixture): a massive collection of
107 supervised ... | 2021-11-22T02:34:46Z | ICLR 2022; see https://youtu.be/FbRcbM4T-50 for a video overview of
the paper | null | null | null | null | null | null | null | null | null |
2111.11090 | Optimistic Temporal Difference Learning for 2048 | ['Hung Guei', 'Lung-Pin Chen', 'I-Chen Wu'] | ['cs.AI', 'cs.LG', 'I.2.6; I.2.8'] | Temporal difference (TD) learning and its variants, such as multistage TD
(MS-TD) learning and temporal coherence (TC) learning, have been successfully
applied to 2048. These methods rely on the stochasticity of the environment of
2048 for exploration. In this paper, we propose to employ optimistic
initialization (OI) ... | 2021-11-22T10:09:36Z | Accepted by the IEEE Transactions on Games, September 3, 2021 | null | 10.1109/TG.2021.3109887 | Optimistic Temporal Difference Learning for 2048 | ['Hung Guei', 'Lung-Pin Chen', 'I-Chen Wu'] | 2021 | IEEE Transactions on Games | 7 | 38 | ['Computer Science', 'Psychology'] |
2111.11418 | MetaFormer Is Actually What You Need for Vision | ['Weihao Yu', 'Mi Luo', 'Pan Zhou', 'Chenyang Si', 'Yichen Zhou', 'Xinchao Wang', 'Jiashi Feng', 'Shuicheng Yan'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Transformers have shown great potential in computer vision tasks. A common
belief is their attention-based token mixer module contributes most to their
competence. However, recent works show the attention-based module in
Transformers can be replaced by spatial MLPs and the resulting models still
perform quite well. Base... | 2021-11-22T18:52:03Z | CVPR 2022 (Oral). Code: https://github.com/sail-sg/poolformer | null | null | MetaFormer is Actually What You Need for Vision | ['Weihao Yu', 'Mi Luo', 'Pan Zhou', 'Chenyang Si', 'Yichen Zhou', 'Xinchao Wang', 'Jiashi Feng', 'Shuicheng Yan'] | 2021 | Computer Vision and Pattern Recognition | 928 | 70 | ['Computer Science'] |
2111.12085 | UniTAB: Unifying Text and Box Outputs for Grounded Vision-Language
Modeling | ['Zhengyuan Yang', 'Zhe Gan', 'Jianfeng Wang', 'Xiaowei Hu', 'Faisal Ahmed', 'Zicheng Liu', 'Yumao Lu', 'Lijuan Wang'] | ['cs.CV'] | We propose UniTAB that Unifies Text And Box outputs for grounded
vision-language (VL) modeling. Grounded VL tasks such as grounded captioning
require the model to generate a text description and align predicted words with
object regions. To achieve this, models must generate desired text and box
outputs together, and m... | 2021-11-23T18:59:14Z | ECCV 2022 (Oral Presentation) | null | null | UniTAB: Unifying Text and Box Outputs for Grounded Vision-Language Modeling | ['Zhengyuan Yang', 'Zhe Gan', 'Jianfeng Wang', 'Xiaowei Hu', 'Faisal Ahmed', 'Zicheng Liu', 'Yumao Lu', 'Lijuan Wang'] | 2021 | European Conference on Computer Vision | 117 | 88 | ['Computer Science'] |
2111.14448 | AVA-AVD: Audio-Visual Speaker Diarization in the Wild | ['Eric Zhongcong Xu', 'Zeyang Song', 'Satoshi Tsutsui', 'Chao Feng', 'Mang Ye', 'Mike Zheng Shou'] | ['cs.CV', 'cs.MM', 'eess.AS'] | Audio-visual speaker diarization aims at detecting "who spoke when" using
both auditory and visual signals. Existing audio-visual diarization datasets
are mainly focused on indoor environments like meeting rooms or news studios,
which are quite different from in-the-wild videos in many scenarios such as
movies, documen... | 2021-11-29T11:02:41Z | ACMMM 2022 | null | 10.1145/3503161.3548027 | AVA-AVD: Audio-visual Speaker Diarization in the Wild | ['Eric Z. Xu', 'Zeyang Song', 'C. Feng', 'Mang Ye', 'Mike Zheng Shou'] | 2021 | ACM Multimedia | 43 | 79 | ['Computer Science', 'Engineering'] |
2111.14706 | ESPnet-SLU: Advancing Spoken Language Understanding through ESPnet | ['Siddhant Arora', 'Siddharth Dalmia', 'Pavel Denisov', 'Xuankai Chang', 'Yushi Ueda', 'Yifan Peng', 'Yuekai Zhang', 'Sujay Kumar', 'Karthik Ganesan', 'Brian Yan', 'Ngoc Thang Vu', 'Alan W Black', 'Shinji Watanabe'] | ['cs.CL', 'cs.SD', 'eess.AS'] | As Automatic Speech Recognition (ASR) systems are getting better, there is an
increasing interest in using the ASR output to do downstream Natural Language
Processing (NLP) tasks. However, there are few open source toolkits that can be
used to generate reproducible results on different Spoken Language
Understanding (SLU... | 2021-11-29T17:05:49Z | Accepted at ICASSP 2022 (5 pages) | null | null | null | null | null | null | null | null | null |
2111.14725 | Searching the Search Space of Vision Transformer | ['Minghao Chen', 'Kan Wu', 'Bolin Ni', 'Houwen Peng', 'Bei Liu', 'Jianlong Fu', 'Hongyang Chao', 'Haibin Ling'] | ['cs.CV'] | Vision Transformer has shown great visual representation power in substantial
vision tasks such as recognition and detection, and thus been attracting
fast-growing efforts on manually designing more effective architectures. In
this paper, we propose to use neural architecture search to automate this
process, by searchi... | 2021-11-29T17:26:07Z | Accepted to NIPS 2021 | null | null | null | null | null | null | null | null | null |
2111.14791 | Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image
Analysis | ['Yucheng Tang', 'Dong Yang', 'Wenqi Li', 'Holger Roth', 'Bennett Landman', 'Daguang Xu', 'Vishwesh Nath', 'Ali Hatamizadeh'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Vision Transformers (ViT)s have shown great performance in self-supervised
learning of global and local representations that can be transferred to
downstream applications. Inspired by these results, we introduce a novel
self-supervised learning framework with tailored proxy tasks for medical image
analysis. Specificall... | 2021-11-29T18:45:20Z | CVPR'22 Accepted Paper | null | null | Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image Analysis | ['Yucheng Tang', 'Dong Yang', 'Wenqi Li', 'H. Roth', 'B. Landman', 'Daguang Xu', 'V. Nath', 'Ali Hatamizadeh'] | 2021 | Computer Vision and Pattern Recognition | 538 | 62 | ['Computer Science'] |
2111.15557 | Low-light Image Enhancement via Breaking Down the Darkness | ['Qiming Hu', 'Xiaojie Guo'] | ['cs.CV'] | Images captured in low-light environment often suffer from complex
degradation. Simply adjusting light would inevitably result in burst of hidden
noise and color distortion. To seek results with satisfied lighting,
cleanliness, and realism from degraded inputs, this paper presents a novel
framework inspired by the divi... | 2021-11-30T16:50:59Z | 9 pages, 9 figures | null | null | Low-light Image Enhancement via Breaking Down the Darkness | ['Qiming Hu', 'Xiaojie Guo'] | 2021 | International Journal of Computer Vision | 128 | 54 | ['Computer Science'] |
2111.15592 | MapReader: A Computer Vision Pipeline for the Semantic Exploration of
Maps at Scale | ['Kasra Hosseini', 'Daniel C. S. Wilson', 'Kaspar Beelen', 'Katherine McDonough'] | ['cs.CV', 'cs.LG'] | We present MapReader, a free, open-source software library written in Python
for analyzing large map collections (scanned or born-digital). This library
transforms the way historians can use maps by turning extensive, homogeneous
map sets into searchable primary sources. MapReader allows users with little or
no compute... | 2021-11-30T17:37:01Z | 13 pages, 9 figures | null | null | null | null | null | null | null | null | null |
2111.15664 | OCR-free Document Understanding Transformer | ['Geewook Kim', 'Teakgyu Hong', 'Moonbin Yim', 'Jeongyeon Nam', 'Jinyoung Park', 'Jinyeong Yim', 'Wonseok Hwang', 'Sangdoo Yun', 'Dongyoon Han', 'Seunghyun Park'] | ['cs.LG', 'cs.AI'] | Understanding document images (e.g., invoices) is a core but challenging task
since it requires complex functions such as reading text and a holistic
understanding of the document. Current Visual Document Understanding (VDU)
methods outsource the task of reading text to off-the-shelf Optical Character
Recognition (OCR)... | 2021-11-30T18:55:19Z | ECCV 2022. (v5) update table 2 and figures; add LayoutLM and update
scores with the latest test script at https://github.com/clovaai/donut | null | null | OCR-Free Document Understanding Transformer | ['Geewook Kim', 'Teakgyu Hong', 'Moonbin Yim', 'JeongYeon Nam', 'Jinyoung Park', 'Jinyeong Yim', 'Wonseok Hwang', 'Sangdoo Yun', 'Dongyoon Han', 'Seunghyun Park'] | 2021 | European Conference on Computer Vision | 279 | 72 | ['Computer Science'] |
2112.00590 | Building astroBERT, a language model for Astronomy & Astrophysics | ['Felix Grezes', 'Sergi Blanco-Cuaresma', 'Alberto Accomazzi', 'Michael J. Kurtz', 'Golnaz Shapurian', 'Edwin Henneken', 'Carolyn S. Grant', 'Donna M. Thompson', 'Roman Chyla', 'Stephen McDonald', 'Timothy W. Hostetler', 'Matthew R. Templeton', 'Kelly E. Lockhart', 'Nemanja Martinovic', 'Shinyi Chen', 'Chris Tanner', '... | ['cs.CL', 'astro-ph.IM'] | The existing search tools for exploring the NASA Astrophysics Data System
(ADS) can be quite rich and empowering (e.g., similar and trending operators),
but researchers are not yet allowed to fully leverage semantic search. For
example, a query for "results from the Planck mission" should be able to
distinguish between... | 2021-12-01T16:01:46Z | null | null | null | null | null | null | null | null | null | null |
2112.00861 | A General Language Assistant as a Laboratory for Alignment | ['Amanda Askell', 'Yuntao Bai', 'Anna Chen', 'Dawn Drain', 'Deep Ganguli', 'Tom Henighan', 'Andy Jones', 'Nicholas Joseph', 'Ben Mann', 'Nova DasSarma', 'Nelson Elhage', 'Zac Hatfield-Dodds', 'Danny Hernandez', 'Jackson Kernion', 'Kamal Ndousse', 'Catherine Olsson', 'Dario Amodei', 'Tom Brown', 'Jack Clark', 'Sam McCan... | ['cs.CL', 'cs.LG'] | Given the broad capabilities of large language models, it should be possible
to work towards a general-purpose, text-based assistant that is aligned with
human values, meaning that it is helpful, honest, and harmless. As an initial
foray in this direction we study simple baseline techniques and evaluations,
such as pro... | 2021-12-01T22:24:34Z | 26+19 pages; v2 typos fixed, refs added, figure scale / colors fixed;
v3 correct very non-standard TruthfulQA formatting and metric, alignment
implications slightly improved | null | null | A General Language Assistant as a Laboratory for Alignment | ['Amanda Askell', 'Yuntao Bai', 'Anna Chen', 'Dawn Drain', 'Deep Ganguli', 'T. Henighan', 'Andy Jones', 'Nicholas Joseph', 'Benjamin Mann', 'Nova Dassarma', 'Nelson Elhage', 'Zac Hatfield-Dodds', 'Danny Hernandez', 'John Kernion', 'Kamal Ndousse', 'Catherine Olsson', 'Dario Amodei', 'Tom B. Brown', 'Jack Clark', 'Sam M... | 2021 | arXiv.org | 791 | 60 | ['Computer Science'] |
2112.01047 | DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for
Natural Language Understanding | ['Taolin Zhang', 'Chengyu Wang', 'Nan Hu', 'Minghui Qiu', 'Chengguang Tang', 'Xiaofeng He', 'Jun Huang'] | ['cs.CL'] | Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained
models with relation triples injecting from knowledge graphs to improve
language understanding abilities. To guarantee effective knowledge injection,
previous studies integrate models with knowledge encoders for representing
knowledge retrieved fro... | 2021-12-02T08:19:42Z | Accepted by AAAI22 | null | null | null | null | null | null | null | null | null |
2112.01488 | ColBERTv2: Effective and Efficient Retrieval via Lightweight Late
Interaction | ['Keshav Santhanam', 'Omar Khattab', 'Jon Saad-Falcon', 'Christopher Potts', 'Matei Zaharia'] | ['cs.IR', 'cs.CL'] | Neural information retrieval (IR) has greatly advanced search and other
knowledge-intensive language tasks. While many neural IR methods encode queries
and documents into single-vector representations, late interaction models
produce multi-vector representations at the granularity of each token and
decompose relevance ... | 2021-12-02T18:38:50Z | NAACL 2022. Omar and Keshav contributed equally to this work | null | null | null | null | null | null | null | null | null |