| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2112.01522 | Uni-Perceiver: Pre-training Unified Architecture for Generic Perception for Zero-shot and Few-shot Tasks | ['Xizhou Zhu', 'Jinguo Zhu', 'Hao Li', 'Xiaoshi Wu', 'Xiaogang Wang', 'Hongsheng Li', 'Xiaohua Wang', 'Jifeng Dai'] | ['cs.CV'] | Biological intelligence systems of animals perceive the world by integrating information in different modalities and processing simultaneously for various tasks. In contrast, current machine learning research follows a task-specific paradigm, leading to inefficient collaboration between tasks and high marginal costs of... | 2021-12-02T18:59:50Z | null | null | null | null | null | null | null | null | null | null |
2112.01526 | MViTv2: Improved Multiscale Vision Transformers for Classification and Detection | ['Yanghao Li', 'Chao-Yuan Wu', 'Haoqi Fan', 'Karttikeya Mangalam', 'Bo Xiong', 'Jitendra Malik', 'Christoph Feichtenhofer'] | ['cs.CV'] | In this paper, we study Multiscale Vision Transformers (MViTv2) as a unified architecture for image and video classification, as well as object detection. We present an improved version of MViT that incorporates decomposed relative positional embeddings and residual pooling connections. We instantiate this architecture... | 2021-12-02T18:59:57Z | CVPR 2022 Camera Ready | null | null | null | null | null | null | null | null | null |
2112.01527 | Masked-attention Mask Transformer for Universal Image Segmentation | ['Bowen Cheng', 'Ishan Misra', 'Alexander G. Schwing', 'Alexander Kirillov', 'Rohit Girdhar'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Image segmentation is about grouping pixels with different semantics, e.g., category or instance membership, where each choice of semantics defines a task. While only the semantics of each task differ, current research focuses on designing specialized architectures for each task. We present Masked-attention Mask Transf... | 2021-12-02T18:59:58Z | CVPR 2022. Project page/code/models: https://bowenc0221.github.io/mask2former | null | null | Masked-attention Mask Transformer for Universal Image Segmentation | ['Bowen Cheng', 'Ishan Misra', 'A. Schwing', 'Alexander Kirillov', 'Rohit Girdhar'] | 2021 | Computer Vision and Pattern Recognition | 2407 | 65 | ['Computer Science'] |
2112.01640 | MultiVerS: Improving scientific claim verification with weak supervision and full-document context | ['David Wadden', 'Kyle Lo', 'Lucy Lu Wang', 'Arman Cohan', 'Iz Beltagy', 'Hannaneh Hajishirzi'] | ['cs.CL', 'cs.AI'] | The scientific claim verification task requires an NLP system to label scientific documents which Support or Refute an input claim, and to select evidentiary sentences (or rationales) justifying each predicted label. In this work, we present MultiVerS, which predicts a fact-checking label and identifies rationales in a... | 2021-12-02T23:37:16Z | NAACL Findings 2022. Github: https://github.com/dwadden/multivers | null | null | null | null | null | null | null | null | null |
2112.01810 | Siamese BERT-based Model for Web Search Relevance Ranking Evaluated on a New Czech Dataset | ['Matěj Kocián', 'Jakub Náplava', 'Daniel Štancl', 'Vladimír Kadlec'] | ['cs.IR', 'cs.CL'] | Web search engines focus on serving highly relevant results within hundreds of milliseconds. Pre-trained language transformer models such as BERT are therefore hard to use in this scenario due to their high computational demands. We present our real-time approach to the document ranking problem leveraging a BERT-based ... | 2021-12-03T09:45:18Z | Accepted at the Thirty-Fourth Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-22). IAAI Innovative Application Award. 9 pages, 3 figures, 8 tables | null | null | Siamese BERT-based Model for Web Search Relevance Ranking Evaluated on a New Czech Dataset | ['Matej Kocián', "Jakub N'aplava", 'Daniel Stancl', 'V. Kadlec'] | 2021 | AAAI Conference on Artificial Intelligence | 18 | 29 | ['Computer Science'] |
2112.01922 | MetaQA: Combining Expert Agents for Multi-Skill Question Answering | ['Haritz Puerto', 'Gözde Gül Şahin', 'Iryna Gurevych'] | ['cs.CL', 'cs.LG'] | The recent explosion of question answering (QA) datasets and models has increased the interest in the generalization of models across multiple domains and formats by either training on multiple datasets or by combining multiple models. Despite the promising results of multi-dataset models, some domains or QA formats ma... | 2021-12-03T14:05:52Z | Accepted at EACL 2023 | null | null | null | null | null | null | null | null | null |
2112.02418 | YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for everyone | ['Edresson Casanova', 'Julian Weber', 'Christopher Shulby', 'Arnaldo Candido Junior', 'Eren Gölge', 'Moacir Antonelli Ponti'] | ['cs.SD', 'cs.CL', 'eess.AS'] | YourTTS brings the power of a multilingual approach to the task of zero-shot multi-speaker TTS. Our method builds upon the VITS model and adds several novel modifications for zero-shot multi-speaker and multilingual training. We achieved state-of-the-art (SOTA) results in zero-shot multi-speaker TTS and results compara... | 2021-12-04T19:50:29Z | An Erratum was added on the last page of this paper | Proceedings of the 39th International Conference on Machine Learning, PMLR 162:2709-2720, 2022 | null | null | null | null | null | null | null | null |
2112.02749 | One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning | ['Suzhen Wang', 'Lincheng Li', 'Yu Ding', 'Xin Yu'] | ['cs.CV'] | Audio-driven one-shot talking face generation methods are usually trained on video resources of various persons. However, their created videos often suffer unnatural mouth shapes and asynchronous lips because those methods struggle to learn a consistent speech style from different speakers. We observe that it would be ... | 2021-12-06T02:53:51Z | Accepted by AAAI 2022 | AAAI 2022 | null | null | null | null | null | null | null | null |
2112.03849 | Natural Answer Generation: From Factoid Answer to Full-length Answer using Grammar Correction | ['Manas Jain', 'Sriparna Saha', 'Pushpak Bhattacharyya', 'Gladvin Chinnadurai', 'Manish Kumar Vatsa'] | ['cs.CL', 'cs.AI'] | Question Answering systems these days typically use template-based language generation. Though adequate for a domain-specific task, these systems are too restrictive and predefined for domain-independent systems. This paper proposes a system that outputs a full-length answer given a question and the extracted factoid a... | 2021-12-07T17:39:21Z | null | null | null | null | null | null | null | null | null | null |
2112.03868 | EmTract: Extracting Emotions from Social Media | ['Domonkos F. Vamossy', 'Rolf Skog'] | ['q-fin.PR', 'cs.CL'] | We develop an open-source tool (EmTract) that extracts emotions from social media text tailored for financial context. To do so, we annotate ten thousand short messages from a financial social media platform (StockTwits) and combine it with open-source emotion data. We then use a pre-tuned NLP model, DistilBERT, augment ... | 2021-12-07T18:01:35Z | Substantial changes to the project | null | null | EmTract: Extracting emotions from social media | ['Domonkos F. Vamossy', 'Rolf Skog'] | 2021 | International Review of Financial Analysis | 10 | 44 | ['Economics', 'Computer Science'] |
2112.03877 | Is Complexity Important for Philosophy of Mind? | ['Kristina Šekrst', 'Sandro Skansi'] | ['cs.LO', 'cs.AI'] | Computational complexity has often been ignored in philosophy of mind, in philosophical artificial intelligence studies. The purpose of this paper is threefold. First and foremost, to show the importance of complexity rather than computability in philosophical and AI problems. Second, to rephrase the notion of computab... | 2021-11-02T09:35:30Z | null | null | null | null | null | null | null | null | null | null |
2112.04212 | Do Pedestrians Pay Attention? Eye Contact Detection in the Wild | ['Younes Belkada', 'Lorenzo Bertoni', 'Romain Caristan', 'Taylor Mordan', 'Alexandre Alahi'] | ['cs.CV'] | In urban or crowded environments, humans rely on eye contact for fast and efficient communication with nearby people. Autonomous agents also need to detect eye contact to interact with pedestrians and safely navigate around them. In this paper, we focus on eye contact detection in the wild, i.e., real-world scenarios f... | 2021-12-08T10:21:28Z | Project website: https://looking-vita-epfl.github.io | null | null | Do Pedestrians Pay Attention? Eye Contact Detection in the Wild | ['Younes Belkada', 'Lorenzo Bertoni', 'Romain Caristan', 'Taylor Mordan', 'Alexandre Alahi'] | 2021 | arXiv.org | 12 | 39 | ['Computer Science'] |
2112.04213 | Convergence Results For Q-Learning With Experience Replay | ['Liran Szlak', 'Ohad Shamir'] | ['cs.LG', 'cs.AI'] | A commonly used heuristic in RL is experience replay (e.g.~\citet{lin1993reinforcement, mnih2015human}), in which a learner stores and re-uses past trajectories as if they were sampled online. In this work, we initiate a rigorous study of this heuristic in the setting of tabular Q-learning. We provide a convergence rat... | 2021-12-08T10:22:49Z | null | null | null | null | null | null | null | null | null | null |
2112.04283 | Adverse Weather Image Translation with Asymmetric and Uncertainty-aware GAN | ['Jeong-gi Kwak', 'Youngsaeng Jin', 'Yuanming Li', 'Dongsik Yoon', 'Donghyeon Kim', 'Hanseok Ko'] | ['cs.CV', 'cs.GR'] | Adverse weather image translation belongs to the unsupervised image-to-image (I2I) translation task which aims to transfer adverse condition domain (eg, rainy night) to standard domain (eg, day). It is a challenging task because images from adverse domains have some artifacts and insufficient information. Recently, man... | 2021-12-08T13:41:24Z | BMVC 2021, codes are available in here: https://github.com/jgkwak95/AU-GAN | null | null | Adverse Weather Image Translation with Asymmetric and Uncertainty-aware GAN | ['Jeong-gi Kwak', 'Youngsaeng Jin', 'Yuanming Li', 'Dongsik Yoon', 'Donghyeon Kim', 'Hanseok Ko'] | 2021 | British Machine Vision Conference | 15 | 42 | ['Computer Science'] |
2112.04329 | JABER and SABER: Junior and Senior Arabic BERt | ['Abbas Ghaddar', 'Yimeng Wu', 'Ahmad Rashid', 'Khalil Bibi', 'Mehdi Rezagholizadeh', 'Chao Xing', 'Yasheng Wang', 'Duan Xinyu', 'Zhefeng Wang', 'Baoxing Huai', 'Xin Jiang', 'Qun Liu', 'Philippe Langlais'] | ['cs.CL'] | Language-specific pre-trained models have proven to be more accurate than multilingual ones in a monolingual evaluation setting, Arabic is no exception. However, we found that previously released Arabic BERT models were significantly under-trained. In this technical report, we present JABER and SABER, Junior and Senior... | 2021-12-08T15:19:24Z | Technical Report; v2: add SABER and CAMeLBERT evaluation; v3: fix minor typos and grammatical errors | null | null | JABER and SABER: Junior and Senior Arabic BERt | ['Abbas Ghaddar', 'Yimeng Wu', 'Ahmad Rashid', 'Khalil Bibi', 'Mehdi Rezagholizadeh', 'Chao Xing', 'Yasheng Wang', 'Duan Xinyu', 'Zhefeng Wang', 'Baoxing Huai', 'Xin Jiang', 'Qun Liu', 'P. Langlais'] | 2021 | null | 5 | 57 | ['Computer Science'] |
2112.04426 | Improving language models by retrieving from trillions of tokens | ['Sebastian Borgeaud', 'Arthur Mensch', 'Jordan Hoffmann', 'Trevor Cai', 'Eliza Rutherford', 'Katie Millican', 'George van den Driessche', 'Jean-Baptiste Lespiau', 'Bogdan Damoc', 'Aidan Clark', 'Diego de Las Casas', 'Aurelia Guy', 'Jacob Menick', 'Roman Ring', 'Tom Hennigan', 'Saffron Huang', 'Loren Maggiore', 'Chris ... | ['cs.CL', 'cs.LG'] | We enhance auto-regressive language models by conditioning on document chunks retrieved from a large corpus, based on local similarity with preceding tokens. With a $2$ trillion token database, our Retrieval-Enhanced Transformer (RETRO) obtains comparable performance to GPT-3 and Jurassic-1 on the Pile, despite using 2... | 2021-12-08T17:32:34Z | Fix incorrect reported numbers in Table 14 | null | null | null | null | null | null | null | null | null |
2112.04482 | FLAVA: A Foundational Language And Vision Alignment Model | ['Amanpreet Singh', 'Ronghang Hu', 'Vedanuj Goswami', 'Guillaume Couairon', 'Wojciech Galuba', 'Marcus Rohrbach', 'Douwe Kiela'] | ['cs.CV', 'cs.CL'] | State-of-the-art vision and vision-and-language models rely on large-scale visio-linguistic pretraining for obtaining good performance on a variety of downstream tasks. Generally, such models are often either cross-modal (contrastive) or multi-modal (with earlier fusion) but not both; and they often only target specifi... | 2021-12-08T18:59:16Z | CVPR 2022 | null | null | null | null | null | null | null | null | null |
2112.04666 | Densifying Sparse Representations for Passage Retrieval by Representational Slicing | ['Sheng-Chieh Lin', 'Jimmy Lin'] | ['cs.IR'] | Learned sparse and dense representations capture different successful approaches to text retrieval and the fusion of their results has proven to be more effective and robust. Prior work combines dense and sparse retrievers by fusing their model scores. As an alternative, this paper presents a simple approach to densify... | 2021-12-09T02:51:15Z | null | null | null | Densifying Sparse Representations for Passage Retrieval by Representational Slicing | ['Sheng-Chieh Lin', 'Jimmy J. Lin'] | 2021 | arXiv.org | 13 | 31 | ['Computer Science'] |
2112.05142 | HairCLIP: Design Your Hair by Text and Reference Image | ['Tianyi Wei', 'Dongdong Chen', 'Wenbo Zhou', 'Jing Liao', 'Zhentao Tan', 'Lu Yuan', 'Weiming Zhang', 'Nenghai Yu'] | ['cs.CV', 'cs.GR'] | Hair editing is an interesting and challenging problem in computer vision and graphics. Many existing methods require well-drawn sketches or masks as conditional inputs for editing, however these interactions are neither straightforward nor efficient. In order to free users from the tedious interaction process, this pa... | 2021-12-09T18:59:58Z | To Appear at CVPR 2022 | null | null | HairCLIP: Design Your Hair by Text and Reference Image | ['Tianyi Wei', 'Dongdong Chen', 'Wenbo Zhou', 'Jing Liao', 'Zhentao Tan', 'Lu Yuan', 'Weiming Zhang', 'Nenghai Yu'] | 2021 | Computer Vision and Pattern Recognition | 111 | 51 | ['Computer Science'] |
2112.05224 | Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures | ['Eugene Bagdasaryan', 'Vitaly Shmatikov'] | ['cs.CR', 'cs.CL', 'cs.LG'] | We investigate a new threat to neural sequence-to-sequence (seq2seq) models: training-time attacks that cause models to "spin" their outputs so as to support an adversary-chosen sentiment or point of view -- but only when the input contains adversary-chosen trigger words. For example, a spinned summarization model outp... | 2021-12-09T21:48:29Z | IEEE S&P 2022. arXiv admin note: text overlap with arXiv:2107.10443 | null | 10.1109/SP46214.2022.9833572 | null | null | null | null | null | null | null |
2112.05253 | MAGMA -- Multimodal Augmentation of Generative Models through Adapter-based Finetuning | ['Constantin Eichenberg', 'Sidney Black', 'Samuel Weinbach', 'Letitia Parcalabescu', 'Anette Frank'] | ['cs.CV', 'cs.CL', 'I.2.7; I.4.8; I.5.1'] | Large-scale pretraining is fast becoming the norm in Vision-Language (VL) modeling. However, prevailing VL approaches are limited by the requirement for labeled data and the use of complex multi-step pretraining objectives. We present MAGMA - a simple method for augmenting generative language models with additional mod... | 2021-12-09T23:58:45Z | 13 pages, 6 figures, 2 tables. Minor improvements. Accepted at EMNLP 2022 | null | null | null | null | null | null | null | null | null |
2112.05682 | Self-attention Does Not Need $O(n^2)$ Memory | ['Markus N. Rabe', 'Charles Staats'] | ['cs.LG'] | We present a very simple algorithm for attention that requires $O(1)$ memory with respect to sequence length and an extension to self-attention that requires $O(\log n)$ memory. This is in contrast with the frequently stated belief that self-attention requires $O(n^2)$ memory. While the time complexity is still $O(n^2)... | 2021-12-10T17:25:07Z | null | null | null | null | null | null | null | null | null | null |
2112.05787 | Representation Learning for Conversational Data using Discourse Mutual Information Maximization | ['Bishal Santra', 'Sumegh Roychowdhury', 'Aishik Mandal', 'Vasu Gurram', 'Atharva Naik', 'Manish Gupta', 'Pawan Goyal'] | ['cs.CL'] | Although many pretrained models exist for text or images, there have been relatively fewer attempts to train representations specifically for dialog understanding. Prior works usually relied on finetuned representations based on generic text representation models like BERT or GPT-2. But such language modeling pretraini... | 2021-12-04T13:17:07Z | Preprint, 15 pages, To appear in NAACL 2022 (Main) | null | null | null | null | null | null | null | null | null |
2112.06598 | WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models | ['Benjamin Minixhofer', 'Fabian Paischer', 'Navid Rekabsaz'] | ['cs.CL'] | Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate th... | 2021-12-13T12:26:02Z | NAACL 2022 | null | 10.18653/v1/2022.naacl-main.293 | null | null | null | null | null | null | null |
2112.06905 | GLaM: Efficient Scaling of Language Models with Mixture-of-Experts | ['Nan Du', 'Yanping Huang', 'Andrew M. Dai', 'Simon Tong', 'Dmitry Lepikhin', 'Yuanzhong Xu', 'Maxim Krikun', 'Yanqi Zhou', 'Adams Wei Yu', 'Orhan Firat', 'Barret Zoph', 'Liam Fedus', 'Maarten Bosma', 'Zongwei Zhou', 'Tao Wang', 'Yu Emma Wang', 'Kellie Webster', 'Marie Pellat', 'Kevin Robinson', 'Kathleen Meier-Hellste... | ['cs.CL'] | Scaling language models with more data, compute and parameters has driven significant progress in natural language processing. For example, thanks to scaling, GPT-3 was able to achieve strong results on in-context learning tasks. However, training these large dense models requires significant amounts of computing resou... | 2021-12-13T18:58:19Z | Accepted to ICML 2022 | null | null | null | null | null | null | null | null | null |
2112.07577 | GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval | ['Kexin Wang', 'Nandan Thakur', 'Nils Reimers', 'Iryna Gurevych'] | ['cs.CL', 'cs.IR'] | Dense retrieval approaches can overcome the lexical gap and lead to significantly improved search results. However, they require large amounts of training data which is not available for most domains. As shown in previous work (Thakur et al., 2021b), the performance of dense retrievers severely degrades under a domain ... | 2021-12-14T17:34:43Z | Accepted at NAACL 2022 | null | null | null | null | null | null | null | null | null |
2112.07708 | Learning to Retrieve Passages without Supervision | ['Ori Ram', 'Gal Shachaf', 'Omer Levy', 'Jonathan Berant', 'Amir Globerson'] | ['cs.CL', 'cs.IR'] | Dense retrievers for open-domain question answering (ODQA) have been shown to achieve impressive performance by training on large datasets of question-passage pairs. In this work we ask whether this dependence on labeled data can be reduced via unsupervised pretraining that is geared towards ODQA. We show this is in fa... | 2021-12-14T19:18:08Z | NAACL 2022 | null | null | null | null | null | null | null | null | null |
2112.07772 | Do Answers to Boolean Questions Need Explanations? Yes | ['Sara Rosenthal', 'Mihaela Bornea', 'Avirup Sil', 'Radu Florian', 'Scott McCarley'] | ['cs.CL'] | Existing datasets that contain boolean questions, such as BoolQ and TYDI QA, provide the user with a YES/NO response to the question. However, a one word response is not sufficient for an explainable system. We promote explainability by releasing a new set of annotations marking the evidence in existing TyDi QA and Bo... | 2021-12-14T22:40:28Z | 9 pages | null | null | Do Answers to Boolean Questions Need Explanations? Yes | ['Sara Rosenthal', 'Mihaela A. Bornea', 'Avirup Sil', 'Radu Florian', 'S. McCarley'] | 2021 | arXiv.org | 4 | 30 | ['Computer Science'] |
2112.07869 | Fine-Tuning Large Neural Language Models for Biomedical Natural Language Processing | ['Robert Tinn', 'Hao Cheng', 'Yu Gu', 'Naoto Usuyama', 'Xiaodong Liu', 'Tristan Naumann', 'Jianfeng Gao', 'Hoifung Poon'] | ['cs.CL', 'cs.LG'] | Motivation: A perennial challenge for biomedical researchers and clinical practitioners is to stay abreast with the rapid growth of publications and medical notes. Natural language processing (NLP) has emerged as a promising direction for taming information overload. In particular, large neural language models facilita... | 2021-12-15T04:20:35Z | null | null | null | Fine-tuning large neural language models for biomedical natural language processing | ['Robert Tinn', 'Hao Cheng', 'Yu Gu', 'N. Usuyama', 'Xiaodong Liu', 'Tristan Naumann', 'Jianfeng Gao', 'Hoifung Poon'] | 2021 | Patterns | 117 | 54 | ['Computer Science', 'Medicine'] |
2112.07887 | Knowledge-Rich Self-Supervision for Biomedical Entity Linking | ['Sheng Zhang', 'Hao Cheng', 'Shikhar Vashishth', 'Cliff Wong', 'Jinfeng Xiao', 'Xiaodong Liu', 'Tristan Naumann', 'Jianfeng Gao', 'Hoifung Poon'] | ['cs.CL'] | Entity linking faces significant challenges such as prolific variations and prevalent ambiguities, especially in high-value domains with myriad entities. Standard classification approaches suffer from the annotation bottleneck and cannot effectively handle unseen entities. Zero-shot entity linking has emerged as a prom... | 2021-12-15T05:05:12Z | null | null | null | Knowledge-Rich Self-Supervision for Biomedical Entity Linking | ['Sheng Zhang', 'Hao Cheng', 'Shikhar Vashishth', 'Cliff Wong', 'Jinfeng Xiao', 'Xiaodong Liu', 'Tristan Naumann', 'Jianfeng Gao', 'Hoifung Poon'] | 2021 | Conference on Empirical Methods in Natural Language Processing | 42 | 64 | ['Computer Science'] |
2112.07899 | Large Dual Encoders Are Generalizable Retrievers | ['Jianmo Ni', 'Chen Qu', 'Jing Lu', 'Zhuyun Dai', 'Gustavo Hernández Ábrego', 'Ji Ma', 'Vincent Y. Zhao', 'Yi Luan', 'Keith B. Hall', 'Ming-Wei Chang', 'Yinfei Yang'] | ['cs.IR', 'cs.CL'] | It has been shown that dual encoders trained on one domain often fail to generalize to other domains for retrieval tasks. One widespread belief is that the bottleneck layer of a dual encoder, where the final score is simply a dot-product between a query vector and a passage vector, is too limited to make dual encoders ... | 2021-12-15T05:33:27Z | null | null | null | null | null | null | null | null | null | null |
2112.07916 | LongT5: Efficient Text-To-Text Transformer for Long Sequences | ['Mandy Guo', 'Joshua Ainslie', 'David Uthus', 'Santiago Ontanon', 'Jianmo Ni', 'Yun-Hsuan Sung', 'Yinfei Yang'] | ['cs.CL'] | Recent work has shown that either (1) increasing the input length or (2) increasing model size can improve the performance of Transformer-based neural models. In this paper, we present a new model, called LongT5, with which we explore the effects of scaling both the input length and model size at the same time. Specifi... | 2021-12-15T06:35:29Z | Accepted in NAACL 2022 | null | null | null | null | null | null | null | null | null |
2112.08185 | Learning Cross-Lingual IR from an English Retriever | ['Yulong Li', 'Martin Franz', 'Md Arafat Sultan', 'Bhavani Iyer', 'Young-Suk Lee', 'Avirup Sil'] | ['cs.CL', 'cs.AI'] | We present DR.DECR (Dense Retrieval with Distillation-Enhanced Cross-Lingual Representation), a new cross-lingual information retrieval (CLIR) system trained using multi-stage knowledge distillation (KD). The teacher of DR.DECR relies on a highly effective but computationally expensive two-stage inference process consi... | 2021-12-15T15:07:54Z | Presented at NAACL 2022 main conference. Code can be found at: https://github.com/primeqa/primeqa | null | null | null | null | null | null | null | null | null |
2112.08352 | Textless Speech-to-Speech Translation on Real Data | ['Ann Lee', 'Hongyu Gong', 'Paul-Ambroise Duquenne', 'Holger Schwenk', 'Peng-Jen Chen', 'Changhan Wang', 'Sravya Popuri', 'Yossi Adi', 'Juan Pino', 'Jiatao Gu', 'Wei-Ning Hsu'] | ['cs.CL', 'cs.AI', 'cs.LG', 'eess.AS'] | We present a textless speech-to-speech translation (S2ST) system that can translate speech from one language into another language and can be built without the need of any text data. Different from existing work in the literature, we tackle the challenge in modeling multi-speaker target speech and train the systems wit... | 2021-12-15T18:56:35Z | Accepted to NAACL 2022 (long paper) | null | null | null | null | null | null | null | null | null |
2112.08542 | QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization | ['Alexander R. Fabbri', 'Chien-Sheng Wu', 'Wenhao Liu', 'Caiming Xiong'] | ['cs.CL'] | Factual consistency is an essential quality of text summarization models in practical settings. Existing work in evaluating this dimension can be broadly categorized into two lines of research, entailment-based and question answering (QA)-based metrics, and different experimental setups often lead to contrasting conclu... | 2021-12-16T00:38:35Z | NAACL 2022 | null | null | QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization | ['Alexander R. Fabbri', 'C. Wu', 'Wenhao Liu', 'Caiming Xiong'] | 2021 | North American Chapter of the Association for Computational Linguistics | 219 | 60 | ['Computer Science'] |
2112.08547 | Learning Rich Representation of Keyphrases from Text | ['Mayank Kulkarni', 'Debanjan Mahata', 'Ravneet Arora', 'Rajarshi Bhowmik'] | ['cs.CL', 'cs.IR', 'cs.LG'] | In this work, we explore how to train task-specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative set... | 2021-12-16T01:09:51Z | null | null | null | Learning Rich Representation of Keyphrases from Text | ['Mayank Kulkarni', 'Debanjan Mahata', 'Ravneet Arora', 'Rajarshi Bhowmik'] | 2021 | NAACL-HLT | 68 | 74 | ['Computer Science'] |
2112.08634 | FRUIT: Faithfully Reflecting Updated Information in Text | ['Robert L. Logan IV', 'Alexandre Passos', 'Sameer Singh', 'Ming-Wei Chang'] | ['cs.CL'] | Textual knowledge bases such as Wikipedia require considerable effort to keep up to date and consistent. While automated writing assistants could potentially ease this burden, the problem of suggesting edits grounded in external knowledge has been under-explored. In this paper, we introduce the novel generation task of... | 2021-12-16T05:21:24Z | v2.0, NAACL 2022 | null | null | null | null | null | null | null | null | null |
2112.08656 | DREAM: Improving Situational QA by First Elaborating the Situation | ['Yuling Gu', 'Bhavana Dalvi Mishra', 'Peter Clark'] | ['cs.CL', 'cs.AI'] | When people answer questions about a specific situation, e.g., "I cheated on my mid-term exam last week. Was that wrong?", cognitive science suggests that they form a mental picture of that situation before answering. While we do not know how language models (LMs) answer such questions, we conjecture that they may answ... | 2021-12-16T06:22:47Z | to be published in NAACL 2022 | null | null | DREAM: Improving Situational QA by First Elaborating the Situation | ['Yuling Gu', 'Bhavana Dalvi', 'Peter Clark'] | 2021 | North American Chapter of the Association for Computational Linguistics | 18 | 44 | ['Computer Science'] |
2112.08754 | CLIN-X: pre-trained language models and a study on cross-task transfer for concept extraction in the clinical domain | ['Lukas Lange', 'Heike Adel', 'Jannik Strötgen', 'Dietrich Klakow'] | ['cs.CL', 'cs.LG'] | The field of natural language processing (NLP) has recently seen a large change towards using pre-trained language models for solving almost any task. Despite showing great improvements in benchmark datasets for various tasks, these models often perform sub-optimal in non-standard domains like the clinical domain where... | 2021-12-16T10:07:39Z | This article has been accepted for publication in Bioinformatics © 2022 The Author(s). Published by Oxford University Press. All rights reserved. The published manuscript can be found here: https://doi.org/10.1093/bioinformatics/btac297 | null | 10.1093/bioinformatics/btac297 | null | null | null | null | null | null | null |
2112.08804 | CrossSum: Beyond English-Centric Cross-Lingual Summarization for 1,500+ Language Pairs | ['Abhik Bhattacharjee', 'Tahmid Hasan', 'Wasi Uddin Ahmad', 'Yuan-Fang Li', 'Yong-Bin Kang', 'Rifat Shahriyar'] | ['cs.CL'] | We present CrossSum, a large-scale cross-lingual summarization dataset comprising 1.68 million article-summary samples in 1,500+ language pairs. We create CrossSum by aligning parallel articles written in different languages via cross-lingual retrieval from a multilingual abstractive summarization dataset and perform a... | 2021-12-16T11:40:36Z | ACL 2023 (camera-ready) | null | null | null | null | null | null | null | null | null |
2112.09106 | RegionCLIP: Region-based Language-Image Pretraining | ['Yiwu Zhong', 'Jianwei Yang', 'Pengchuan Zhang', 'Chunyuan Li', 'Noel Codella', 'Liunian Harold Li', 'Luowei Zhou', 'Xiyang Dai', 'Lu Yuan', 'Yin Li', 'Jianfeng Gao'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Contrastive language-image pretraining (CLIP) using image-text pairs has achieved impressive results on image classification in both zero-shot and transfer learning settings. However, we show that directly applying such models to recognize image regions for object detection leads to poor performance due to a domain shi... | 2021-12-16T18:39:36Z | Technical report | null | null | null | null | null | null | null | null | null |
2112.09118 | Unsupervised Dense Information Retrieval with Contrastive Learning | ['Gautier Izacard', 'Mathilde Caron', 'Lucas Hosseini', 'Sebastian Riedel', 'Piotr Bojanowski', 'Armand Joulin', 'Edouard Grave'] | ['cs.IR', 'cs.AI', 'cs.CL'] | Recently, information retrieval has seen the emergence of dense retrievers, using neural networks, as an alternative to classical sparse methods based on term-frequency. These models have obtained state-of-the-art results on datasets and tasks where large training sets are available. However, they do not transfer well ... | 2021-12-16T18:57:37Z | null | null | null | null | null | null | null | null | null | null |
2112.09127 | ICON: Implicit Clothed humans Obtained from Normals | ['Yuliang Xiu', 'Jinlong Yang', 'Dimitrios Tzionas', 'Michael J. Black'] | ['cs.CV', 'cs.AI', 'cs.GR'] | Current methods for learning realistic and animatable 3D clothed avatars need either posed 3D scans or 2D images with carefully controlled user poses. In contrast, our goal is to learn an avatar from only 2D images of people in unconstrained poses. Given a set of images, our method estimates a detailed 3D surface from ... | 2021-12-16T18:59:41Z | Project page: https://icon.is.tue.mpg.de/. Accepted by CVPR 2022 | null | null | null | null | null | null | null | null | null |
2112.09290 | PeopleSansPeople: A Synthetic Data Generator for Human-Centric Computer
Vision | ['Salehe Erfanian Ebadi', 'You-Cyuan Jhang', 'Alex Zook', 'Saurav Dhakad', 'Adam Crespi', 'Pete Parisi', 'Steven Borkman', 'Jonathan Hogins', 'Sujoy Ganguly'] | ['cs.CV', 'cs.AI', 'cs.DB', 'cs.GR', 'cs.LG'] | In recent years, person detection and human pose estimation have made great
strides, helped by large-scale labeled datasets. However, these datasets had no
guarantees or analysis of human activities, poses, or context diversity.
Additionally, privacy, legal, safety, and ethical concerns may limit the
ability to collect... | 2021-12-17T02:33:31Z | PeopleSansPeople template Unity environment, benchmark binaries, and
source code is available at:
https://github.com/Unity-Technologies/PeopleSansPeople | null | null | PeopleSansPeople: A Synthetic Data Generator for Human-Centric Computer Vision | ['Salehe Erfanian Ebadi', 'Y. Jhang', 'Alexander Zook', 'S. Dhakad', 'A. Crespi', 'Pete Parisi', 'S. Borkman', 'Jonathan Hogins', 'Sujoy Ganguly'] | 2,021 | arXiv.org | 21 | 50 | ['Computer Science'] |
2112.09331 | Contrastive Vision-Language Pre-training with Limited Resources | ['Quan Cui', 'Boyan Zhou', 'Yu Guo', 'Weidong Yin', 'Hao Wu', 'Osamu Yoshie', 'Yubo Chen'] | ['cs.CV', 'cs.MM'] | Pioneering dual-encoder pre-training works (e.g., CLIP and ALIGN) have
revealed the potential of aligning multi-modal representations with contrastive
learning. However, these works require a tremendous amount of data and
computational resources (e.g., billion-level web data and hundreds of GPUs),
which prevent researc... | 2021-12-17T05:40:28Z | Accepted to ECCV2022 | null | null | null | null | null | null | null | null | null |
2112.09332 | WebGPT: Browser-assisted question-answering with human feedback | ['Reiichiro Nakano', 'Jacob Hilton', 'Suchir Balaji', 'Jeff Wu', 'Long Ouyang', 'Christina Kim', 'Christopher Hesse', 'Shantanu Jain', 'Vineet Kosaraju', 'William Saunders', 'Xu Jiang', 'Karl Cobbe', 'Tyna Eloundou', 'Gretchen Krueger', 'Kevin Button', 'Matthew Knight', 'Benjamin Chess', 'John Schulman'] | ['cs.CL', 'cs.AI', 'cs.LG'] | We fine-tune GPT-3 to answer long-form questions using a text-based
web-browsing environment, which allows the model to search and navigate the
web. By setting up the task so that it can be performed by humans, we are able
to train models on the task using imitation learning, and then optimize answer
quality with human... | 2021-12-17T05:43:43Z | 32 pages | null | null | WebGPT: Browser-assisted question-answering with human feedback | ['Reiichiro Nakano', 'Jacob Hilton', 'S. Balaji', 'Jeff Wu', 'Ouyang Long', 'Christina Kim', 'Christopher Hesse', 'Shantanu Jain', 'Vineet Kosaraju', 'W. Saunders', 'Xu Jiang', 'K. Cobbe', 'Tyna Eloundou', 'Gretchen Krueger', 'Kevin Button', 'Matthew Knight', 'Benjamin Chess', 'John Schulman'] | 2,021 | arXiv.org | 1,299 | 44 | ['Computer Science'] |
2112.09866 | Cascading Adaptors to Leverage English Data to Improve Performance of
Question Answering for Low-Resource Languages | ['Hariom A. Pandya', 'Bhavik Ardeshna', 'Brijesh S. Bhatt'] | ['cs.CL', 'cs.AI', 'cs.HC', 'cs.IR', 'cs.LG'] | Transformer-based architectures have shown notable results on many downstream
tasks, including question answering. The availability of data, on the
other hand, impedes obtaining legitimate performance for low-resource
languages. In this paper, we investigate the applicability of pre-trained
multilingual models to im... | 2021-12-18T07:40:37Z | null | null | null | Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages | ['Hariom A. Pandya', 'Bhavik Ardeshna', 'Brijesh S. Bhatt'] | 2,021 | ICON | 6 | 23 | ['Computer Science'] |
2112.10003 | Image Segmentation Using Text and Image Prompts | ['Timo Lüddecke', 'Alexander S. Ecker'] | ['cs.CV'] | Image segmentation is usually addressed by training a model for a fixed set
of object classes. Incorporating additional classes or more complex queries
later is expensive as it requires re-training the model on a dataset that
encompasses these expressions. Here we propose a system that can generate image
segmentations ... | 2021-12-18T21:27:19Z | CVPR 2022 | null | null | Image Segmentation Using Text and Image Prompts | ['Timo Lüddecke', 'Alexander S. Ecker'] | 2,021 | Computer Vision and Pattern Recognition | 477 | 59 | ['Computer Science'] |
2112.10668 | Few-shot Learning with Multilingual Language Models | ['Xi Victoria Lin', 'Todor Mihaylov', 'Mikel Artetxe', 'Tianlu Wang', 'Shuohui Chen', 'Daniel Simig', 'Myle Ott', 'Naman Goyal', 'Shruti Bhosale', 'Jingfei Du', 'Ramakanth Pasunuru', 'Sam Shleifer', 'Punit Singh Koura', 'Vishrav Chaudhary', "Brian O'Horo", 'Jeff Wang', 'Luke Zettlemoyer', 'Zornitsa Kozareva', 'Mona Dia... | ['cs.CL', 'cs.AI'] | Large-scale generative language models such as GPT-3 are competitive few-shot
learners. While these models are known to be able to jointly represent many
different languages, their training data is dominated by English, potentially
limiting their cross-lingual generalization. In this work, we train
multilingual generat... | 2021-12-20T16:52:35Z | Accepted to EMNLP 2022; 34 pages | null | null | null | null | null | null | null | null | null |
2112.10684 | Efficient Large Scale Language Modeling with Mixtures of Experts | ['Mikel Artetxe', 'Shruti Bhosale', 'Naman Goyal', 'Todor Mihaylov', 'Myle Ott', 'Sam Shleifer', 'Xi Victoria Lin', 'Jingfei Du', 'Srinivasan Iyer', 'Ramakanth Pasunuru', 'Giri Anantharaman', 'Xian Li', 'Shuohui Chen', 'Halil Akin', 'Mandeep Baines', 'Louis Martin', 'Xing Zhou', 'Punit Singh Koura', "Brian O'Horo", 'Je... | ['cs.CL', 'cs.AI', 'cs.LG'] | Mixture of Experts layers (MoEs) enable efficient scaling of language models
through conditional computation. This paper presents a detailed empirical study
of how autoregressive MoE language models scale in comparison with dense models
in a wide range of settings: in- and out-of-domain language modeling, zero- and
few... | 2021-12-20T17:05:11Z | EMNLP 2022 | null | null | null | null | null | null | null | null | null |
2112.10741 | GLIDE: Towards Photorealistic Image Generation and Editing with
Text-Guided Diffusion Models | ['Alex Nichol', 'Prafulla Dhariwal', 'Aditya Ramesh', 'Pranav Shyam', 'Pamela Mishkin', 'Bob McGrew', 'Ilya Sutskever', 'Mark Chen'] | ['cs.CV', 'cs.GR', 'cs.LG'] | Diffusion models have recently been shown to generate high-quality synthetic
images, especially when paired with a guidance technique to trade off diversity
for fidelity. We explore diffusion models for the problem of text-conditional
image synthesis and compare two different guidance strategies: CLIP guidance
and clas... | 2021-12-20T18:42:55Z | 20 pages, 18 figures | null | null | GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models | ['Alex Nichol', 'Prafulla Dhariwal', 'A. Ramesh', 'Pranav Shyam', 'Pamela Mishkin', 'Bob McGrew', 'I. Sutskever', 'Mark Chen'] | 2,021 | International Conference on Machine Learning | 3,641 | 51 | ['Computer Science'] |
2112.10752 | High-Resolution Image Synthesis with Latent Diffusion Models | ['Robin Rombach', 'Andreas Blattmann', 'Dominik Lorenz', 'Patrick Esser', 'Björn Ommer'] | ['cs.CV'] | By decomposing the image formation process into a sequential application of
denoising autoencoders, diffusion models (DMs) achieve state-of-the-art
synthesis results on image data and beyond. Additionally, their formulation
allows for a guiding mechanism to control the image generation process without
retraining. Howev... | 2021-12-20T18:55:25Z | CVPR 2022 | null | null | null | null | null | null | null | null | null |
2112.10764 | Mask2Former for Video Instance Segmentation | ['Bowen Cheng', 'Anwesa Choudhuri', 'Ishan Misra', 'Alexander Kirillov', 'Rohit Girdhar', 'Alexander G. Schwing'] | ['cs.CV', 'cs.AI', 'cs.LG'] | We find Mask2Former also achieves state-of-the-art performance on video
instance segmentation without modifying the architecture, the loss or even the
training pipeline. In this report, we show universal image segmentation
architectures trivially generalize to video segmentation by directly predicting
3D segmentation v... | 2021-12-20T18:59:59Z | Code and models: https://github.com/facebookresearch/Mask2Former | null | null | null | null | null | null | null | null | null |
2112.11446 | Scaling Language Models: Methods, Analysis & Insights from Training
Gopher | ['Jack W. Rae', 'Sebastian Borgeaud', 'Trevor Cai', 'Katie Millican', 'Jordan Hoffmann', 'Francis Song', 'John Aslanides', 'Sarah Henderson', 'Roman Ring', 'Susannah Young', 'Eliza Rutherford', 'Tom Hennigan', 'Jacob Menick', 'Albin Cassirer', 'Richard Powell', 'George van den Driessche', 'Lisa Anne Hendricks', 'Maribe... | ['cs.CL', 'cs.AI'] | Language modelling provides a step towards intelligent communication systems
by harnessing large repositories of written human knowledge to better predict
and understand the world. In this paper, we present an analysis of
Transformer-based language model performance across a wide range of model
scales -- from models wi... | 2021-12-08T19:41:47Z | 120 pages | null | null | null | null | null | null | null | null | null |
2112.12650 | Distilling the Knowledge of Romanian BERTs Using Multiple Teachers | ['Andrei-Marius Avram', 'Darius Catrina', 'Dumitru-Clementin Cercel', 'Mihai Dascălu', 'Traian Rebedea', 'Vasile Păiş', 'Dan Tufiş'] | ['cs.CL', 'cs.LG'] | Running large-scale pre-trained language models in computationally
constrained environments remains a challenging problem yet to be addressed,
while transfer learning from these models has become prevalent in Natural
Language Processing tasks. Several solutions, including knowledge distillation,
network quantization, o... | 2021-12-23T15:37:58Z | 10 pages, accepted to LREC2022 in the main conference | null | null | Distilling the Knowledge of Romanian BERTs Using Multiple Teachers | ['Andrei-Marius Avram', 'Darius Catrina', 'Dumitru-Clementin Cercel', 'Mihai Dascualu', 'Traian Rebedea', 'Vasile Puaics', 'Dan Tufics'] | 2,021 | International Conference on Language Resources and Evaluation | 12 | 48 | ['Computer Science'] |
2112.12731 | ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training
for Language Understanding and Generation | ['Shuohuan Wang', 'Yu Sun', 'Yang Xiang', 'Zhihua Wu', 'Siyu Ding', 'Weibao Gong', 'Shikun Feng', 'Junyuan Shang', 'Yanbin Zhao', 'Chao Pang', 'Jiaxiang Liu', 'Xuyi Chen', 'Yuxiang Lu', 'Weixin Liu', 'Xi Wang', 'Yangfan Bai', 'Qiuliang Chen', 'Li Zhao', 'Shiyong Li', 'Peng Sun', 'Dianhai Yu', 'Yanjun Ma', 'Hao Tian', '... | ['cs.CL'] | Pre-trained language models have achieved state-of-the-art results in various
Natural Language Processing (NLP) tasks. GPT-3 has shown that scaling up
pre-trained language models can further exploit their enormous potential. A
unified framework named ERNIE 3.0 was recently proposed for pre-training
large-scale knowledg... | 2021-12-23T17:35:48Z | arXiv admin note: text overlap with arXiv:2107.02137 | null | null | ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation | ['Shuohuan Wang', 'Yu Sun', 'Yang Xiang', 'Zhihua Wu', 'Siyu Ding', 'Weibao Gong', 'Shi Feng', 'Junyuan Shang', 'Yanbin Zhao', 'Chao Pang', 'Jiaxiang Liu', 'Xuyi Chen', 'Yuxiang Lu', 'Weixin Liu', 'Xi Wang', 'Yangfan Bai', 'Qiuliang Chen', 'Li Zhao', 'Shiyong Li', 'Peng Sun', 'Dianhai Yu', 'Yanjun Ma', 'Hao Tian', 'Hua... | 2,021 | arXiv.org | 78 | 104 | ['Computer Science'] |
2112.13492 | Vision Transformer for Small-Size Datasets | ['Seung Hoon Lee', 'Seunghyun Lee', 'Byung Cheol Song'] | ['cs.CV'] | Recently, the Vision Transformer (ViT), which applied the transformer
structure to the image classification task, has outperformed convolutional
neural networks. However, the high performance of the ViT results from
pre-training using a large-size dataset such as JFT-300M, and its dependence on
a large dataset is inter... | 2021-12-27T03:24:03Z | null | null | null | null | null | null | null | null | null | null |
2112.13906 | Does CLIP Benefit Visual Question Answering in the Medical Domain as
Much as it Does in the General Domain? | ['Sedigheh Eslami', 'Gerard de Melo', 'Christoph Meinel'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG'] | Contrastive Language--Image Pre-training (CLIP) has shown remarkable success
in learning with cross-modal supervision from extensive amounts of image--text
pairs collected online. Thus far, the effectiveness of CLIP has been
investigated primarily in general-domain multimodal problems. This work
evaluates the effective... | 2021-12-27T21:19:23Z | null | null | null | Does CLIP Benefit Visual Question Answering in the Medical Domain as Much as it Does in the General Domain? | ['Sedigheh Eslami', 'Gerard de Melo', 'C. Meinel'] | 2,021 | arXiv.org | 121 | 32 | ['Computer Science'] |
2112.14569 | Fine-Tuning Transformers: Vocabulary Transfer | ['Vladislav Mosin', 'Igor Samenko', 'Alexey Tikhonov', 'Borislav Kozlovskii', 'Ivan P. Yamshchikov'] | ['cs.CL', 'cs.AI', 'cs.LG', '68T50, 91F20', 'I.2.7'] | Transformers are responsible for the vast majority of recent advances in
natural language processing. The majority of practical natural language
processing applications of these models are typically enabled through transfer
learning. This paper studies if corpus-specific tokenization used for
fine-tuning improves the r... | 2021-12-29T14:22:42Z | null | null | 10.1016/j.artint.2023.103860 | null | null | null | null | null | null | null |
2112.14731 | LeSICiN: A Heterogeneous Graph-based Approach for Automatic Legal
Statute Identification from Indian Legal Documents | ['Shounak Paul', 'Pawan Goyal', 'Saptarshi Ghosh'] | ['cs.CL', 'I.2.1; I.2.7'] | The task of Legal Statute Identification (LSI) aims to identify the legal
statutes that are relevant to a given description of Facts or evidence of a
legal case. Existing methods only utilize the textual content of Facts and
legal articles to guide such a task. However, the citation network among case
documents and leg... | 2021-12-29T18:39:35Z | This paper has been accepted at the Main Track of the AAAI Conference
on Artificial Intelligence (AAAI) 2022. Dataset and codes are available at
https://github.com/Law-AI/LeSICiN | null | null | null | null | null | null | null | null | null |
2112.15272 | ViNMT: Neural Machine Translation Toolkit | ['Nguyen Hoang Quan', 'Nguyen Thanh Dat', 'Nguyen Hoang Minh Cong', 'Nguyen Van Vinh', 'Ngo Thi Vinh', 'Nguyen Phuong Thai', 'Tran Hong Viet'] | ['cs.CL', 'cs.LG'] | We present an open-source toolkit for neural machine translation (NMT). The
new toolkit is mainly based on the vaunted Transformer (Vaswani et al., 2017) along
with many other improvements detailed below, in order to create a
self-contained, simple to use, consistent and comprehensive framework for
Machine Translation task... | 2021-12-31T02:42:39Z | null | null | null | ViNMT: Neural Machine Translation Toolkit | ['Nguyen Hoang Quan', 'N. T. Dat', 'Nguyen Hoang Minh Cong', 'Nguyen Van Vinh', 'Ngo Thi Vinh', 'N. Thai', 'T. Viet'] | 2,021 | null | 2 | 16 | ['Computer Science'] |
2112.15417 | Hypers at ComMA@ICON: Modelling Aggressiveness, Gender Bias and Communal
Bias Identification | ['Sean Benhur', 'Roshan Nayak', 'Kanchana Sivanraju', 'Adeep Hande', 'Subalalitha Chinnaudayar Navaneethakrishnan', 'Ruba Priyadharshini', 'Bharathi Raja Chakravarthi'] | ['cs.CL'] | Due to the exponentially increasing reach of social media, it is essential to
focus on its negative aspects as it can potentially divide society and incite
people into violence. In this paper, we present our system description of work
on the shared task ComMA@ICON, where we have to classify how aggressive the
sentence ... | 2021-12-31T12:50:38Z | 5 pages | null | null | null | null | null | null | null | null | null |
2201.00487 | Language as Queries for Referring Video Object Segmentation | ['Jiannan Wu', 'Yi Jiang', 'Peize Sun', 'Zehuan Yuan', 'Ping Luo'] | ['cs.CV'] | Referring video object segmentation (R-VOS) is an emerging cross-modal task
that aims to segment the target object referred by a language expression in all
video frames. In this work, we propose a simple and unified framework built
upon Transformer, termed ReferFormer. It views the language as queries and
directly atte... | 2022-01-03T05:54:00Z | 14 pages, accepted by CVPR2022 | null | null | Language as Queries for Referring Video Object Segmentation | ['Jiannan Wu', 'Yi Jiang', 'Pei Sun', 'Zehuan Yuan', 'Ping Luo'] | 2,022 | Computer Vision and Pattern Recognition | 155 | 62 | ['Computer Science'] |
2201.01266 | Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors
in MRI Images | ['Ali Hatamizadeh', 'Vishwesh Nath', 'Yucheng Tang', 'Dong Yang', 'Holger Roth', 'Daguang Xu'] | ['eess.IV', 'cs.CV', 'cs.LG'] | Semantic segmentation of brain tumors is a fundamental medical image analysis
task involving multiple MRI imaging modalities that can assist clinicians in
diagnosing the patient and successively studying the progression of the
malignant entity. In recent years, Fully Convolutional Neural Networks (FCNNs)
approaches hav... | 2022-01-04T18:01:34Z | 13 pages, 3 figures | null | null | Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images | ['Ali Hatamizadeh', 'V. Nath', 'Yucheng Tang', 'Dong Yang', 'H. Roth', 'Daguang Xu'] | 2,022 | BrainLes@MICCAI | 1,165 | 41 | ['Computer Science', 'Engineering'] |
2201.02026 | Fortunately, Discourse Markers Can Enhance Language Models for Sentiment
Analysis | ['Liat Ein-Dor', 'Ilya Shnayderman', 'Artem Spector', 'Lena Dankin', 'Ranit Aharonov', 'Noam Slonim'] | ['cs.CL'] | In recent years, pretrained language models have revolutionized the NLP
world, while achieving state of the art performance in various downstream
tasks. However, in many cases, these models do not perform well when labeled
data is scarce and the model is expected to perform in the zero or few shot
setting. Recently, se... | 2022-01-06T12:33:47Z | Published in AAAI 2022 | null | null | null | null | null | null | null | null | null |
2201.02184 | Learning Audio-Visual Speech Representation by Masked Multimodal Cluster
Prediction | ['Bowen Shi', 'Wei-Ning Hsu', 'Kushal Lakhotia', 'Abdelrahman Mohamed'] | ['eess.AS', 'cs.CV', 'cs.SD'] | Video recordings of speech contain correlated audio and visual information,
providing a strong signal for speech representation learning from the speaker's
lip movements and the produced sound. We introduce Audio-Visual Hidden Unit
BERT (AV-HuBERT), a self-supervised representation learning framework for
audio-visual s... | 2022-01-05T17:40:45Z | ICLR 2022 | null | null | null | null | null | null | null | null | null |
2201.02419 | Automatic Speech Recognition Datasets in Cantonese: A Survey and New
Dataset | ['Tiezheng Yu', 'Rita Frieske', 'Peng Xu', 'Samuel Cahyawijaya', 'Cheuk Tung Shadow Yiu', 'Holy Lovenia', 'Wenliang Dai', 'Elham J. Barezi', 'Qifeng Chen', 'Xiaojuan Ma', 'Bertram E. Shi', 'Pascale Fung'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Automatic speech recognition (ASR) on low resource languages improves the
access of linguistic minorities to technological advantages provided by
artificial intelligence (AI). In this paper, we address the problem of data
scarcity for the Hong Kong Cantonese language by creating a new Cantonese
dataset. Our dataset, Mu... | 2022-01-07T12:09:15Z | null | null | null | null | null | null | null | null | null | null |
2201.02605 | Detecting Twenty-thousand Classes using Image-level Supervision | ['Xingyi Zhou', 'Rohit Girdhar', 'Armand Joulin', 'Philipp Krähenbühl', 'Ishan Misra'] | ['cs.CV'] | Current object detectors are limited in vocabulary size due to the small
scale of detection datasets. Image classifiers, on the other hand, reason about
much larger vocabularies, as their datasets are larger and easier to collect.
We propose Detic, which simply trains the classifiers of a detector on image
classificati... | 2022-01-07T18:57:19Z | ECCV 2022 camera ready. Code is available at
https://github.com/facebookresearch/Detic | null | null | Detecting Twenty-thousand Classes using Image-level Supervision | ['Xingyi Zhou', 'Rohit Girdhar', 'Armand Joulin', 'Phillip Krahenbuhl', 'Ishan Misra'] | 2,022 | European Conference on Computer Vision | 621 | 81 | ['Computer Science'] |
2201.02729 | Bitcoin Price Predictive Modeling Using Expert Correction | ['Bohdan M. Pavlyshenko'] | ['q-fin.ST', 'cs.LG'] | The paper studies the linear model for Bitcoin price which includes
regression features based on Bitcoin currency statistics, mining processes,
Google search trends, Wikipedia pages visits. The pattern of deviation of
regression model prediction from real prices is simpler compared to price time
series. It is assumed ... | 2022-01-06T15:11:51Z | null | null | null | null | null | null | null | null | null | null |
2201.02973 | MAXIM: Multi-Axis MLP for Image Processing | ['Zhengzhong Tu', 'Hossein Talebi', 'Han Zhang', 'Feng Yang', 'Peyman Milanfar', 'Alan Bovik', 'Yinxiao Li'] | ['eess.IV', 'cs.CV'] | Recent progress on Transformers and multi-layer perceptron (MLP) models
provide new network architectural designs for computer vision tasks. Although
these models proved to be effective in many vision tasks such as image
recognition, there remain challenges in adapting them for low-level vision. The
inflexibility to su... | 2022-01-09T09:59:32Z | CVPR 2022 Oral; Code: \url{https://github.com/google-research/maxim} | null | null | MAXIM: Multi-Axis MLP for Image Processing | ['Zhengzhong Tu', 'Hossein Talebi', 'Han Zhang', 'Feng Yang', 'P. Milanfar', 'A. Bovik', 'Yinxiao Li'] | 2,022 | Computer Vision and Pattern Recognition | 484 | 123 | ['Engineering', 'Computer Science'] |
2201.03545 | A ConvNet for the 2020s | ['Zhuang Liu', 'Hanzi Mao', 'Chao-Yuan Wu', 'Christoph Feichtenhofer', 'Trevor Darrell', 'Saining Xie'] | ['cs.CV'] | The "Roaring 20s" of visual recognition began with the introduction of Vision
Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art
image classification model. A vanilla ViT, on the other hand, faces
difficulties when applied to general computer vision tasks such as object
detection and semanti... | 2022-01-10T18:59:10Z | CVPR 2022; Code: https://github.com/facebookresearch/ConvNeXt | null | null | null | null | null | null | null | null | null |
2201.03713 | CVSS Corpus and Massively Multilingual Speech-to-Speech Translation | ['Ye Jia', 'Michelle Tadmor Ramanovich', 'Quan Wang', 'Heiga Zen'] | ['cs.CL', 'cs.SD', 'eess.AS'] | We introduce CVSS, a massively multilingual-to-English speech-to-speech
translation (S2ST) corpus, covering sentence-level parallel S2ST pairs from 21
languages into English. CVSS is derived from the Common Voice speech corpus and
the CoVoST 2 speech-to-text translation (ST) corpus, by synthesizing the
translation text... | 2022-01-11T00:27:08Z | LREC 2022 | null | null | null | null | null | null | null | null | null |
2201.04676 | UniFormer: Unified Transformer for Efficient Spatiotemporal
Representation Learning | ['Kunchang Li', 'Yali Wang', 'Peng Gao', 'Guanglu Song', 'Yu Liu', 'Hongsheng Li', 'Yu Qiao'] | ['cs.CV'] | It is a challenging task to learn rich and multi-scale spatiotemporal
semantics from high-dimensional videos, due to large local redundancy and
complex global dependency between video frames. The recent advances in this
research have been mainly driven by 3D convolutional neural networks and vision
transformers. Althou... | 2022-01-12T20:02:32Z | Published as a conference paper at ICLR 2022; 19pages, 7 figures | null | null | null | null | null | null | null | null | null |
2201.05051 | Speech Resources in the Tamasheq Language | ['Marcely Zanon Boito', 'Fethi Bougares', 'Florentin Barbier', 'Souhir Gahbiche', 'Loïc Barrault', 'Mickael Rouvier', 'Yannick Estève'] | ['cs.CL'] | In this paper we present two datasets for Tamasheq, a developing language
mainly spoken in Mali and Niger. These two datasets were made available for the
IWSLT 2022 low-resource speech translation track, and they consist of
collections of radio recordings from daily broadcast news in Niger (Studio
Kalangou) and Mali (S... | 2022-01-13T16:24:06Z | Accepted to LREC 2022 | null | null | Speech Resources in the Tamasheq Language | ['Marcely Zanon Boito', 'Fethi Bougares', 'Florentin Barbier', 'Souhir Gahbiche', 'Loïc Barrault', 'Mickael Rouvier', 'Y. Estève'] | 2,022 | International Conference on Language Resources and Evaluation | 16 | 27 | ['Computer Science'] |
2201.05601 | A Warm Start and a Clean Crawled Corpus -- A Recipe for Good Language
Models | ['Vésteinn Snæbjarnarson', 'Haukur Barri Símonarson', 'Pétur Orri Ragnarsson', 'Svanhvít Lilja Ingólfsdóttir', 'Haukur Páll Jónsson', 'Vilhjálmur Þorsteinsson', 'Hafsteinn Einarsson'] | ['cs.CL'] | We train several language models for Icelandic, including IceBERT, that
achieve state-of-the-art performance in a variety of downstream tasks,
including part-of-speech tagging, named entity recognition, grammatical error
detection and constituency parsing. To train the models we introduce a new
corpus of Icelandic text... | 2022-01-14T18:45:31Z | null | null | null | A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models | ['Vésteinn Snæbjarnarson', 'Haukur Barri Símonarson', 'Pétur Orri Ragnarsson', 'Svanhvít Lilja Ingólfsdóttir', 'H. Jónsson', 'Vilhjálmur Þorsteinsson', 'H. Einarsson'] | 2,022 | International Conference on Language Resources and Evaluation | 26 | 52 | ['Computer Science'] |
2201.06025 | COLD: A Benchmark for Chinese Offensive Language Detection | ['Jiawen Deng', 'Jingyan Zhou', 'Hao Sun', 'Chujie Zheng', 'Fei Mi', 'Helen Meng', 'Minlie Huang'] | ['cs.CL', 'cs.AI'] | Offensive language detection is increasingly crucial for maintaining a
civilized social media platform and deploying pre-trained language models.
However, this task in Chinese is still under exploration due to the scarcity of
reliable datasets. To this end, we propose a benchmark --COLD for Chinese
offensive language a... | 2022-01-16T11:47:23Z | 19 pages | null | null | null | null | null | null | null | null | null |
2201.06910 | ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves
Zero-Shot Generalization | ['Hanwei Xu', 'Yujun Chen', 'Yulun Du', 'Nan Shao', 'Yanggang Wang', 'Haiyu Li', 'Zhilin Yang'] | ['cs.LG', 'cs.CL'] | We propose a multitask pretraining approach ZeroPrompt for zero-shot
generalization, focusing on task scaling and zero-shot prompting. While
previous models are trained on only a few dozen tasks, we scale to 1,000 tasks
for the first time using real-world data. This leads to a crucial discovery
that task scaling can be... | 2022-01-18T12:30:17Z | 18 pages | null | null | ZeroPrompt: Scaling Prompt-Based Pretraining to 1, 000 Tasks Improves Zero-Shot Generalization | ['Hanwei Xu', 'Yujun Chen', 'Yulun Du', 'Nan Shao', 'Yanggang Wang', 'Haiyu Li', 'Zhilin Yang'] | 2,022 | Conference on Empirical Methods in Natural Language Processing | 69 | 43 | ['Computer Science'] |
2201.07281 | Annotating the Tweebank Corpus on Named Entity Recognition and Building
NLP Models for Social Media Analysis | ['Hang Jiang', 'Yining Hua', 'Doug Beeferman', 'Deb Roy'] | ['cs.CL'] | Social media data such as Twitter messages ("tweets") pose a particular
challenge to NLP systems because of their short, noisy, and colloquial nature.
Tasks such as Named Entity Recognition (NER) and syntactic parsing require
highly domain-matched training data for good performance. To date, there is no
complete traini... | 2022-01-18T19:34:23Z | Accepted at LREC 2022 (Long Papers) | null | null | Annotating the Tweebank Corpus on Named Entity Recognition and Building NLP Models for Social Media Analysis | ['Hang Jiang', 'Y. Hua', 'Doug Beeferman', 'Dwaipayan Roy'] | 2,022 | International Conference on Language Resources and Evaluation | 23 | 41 | ['Computer Science'] |
2201.07311 | Datasheet for the Pile | ['Stella Biderman', 'Kieran Bicheno', 'Leo Gao'] | ['cs.CL'] | This datasheet describes the Pile, an 825 GiB dataset of human-authored text
compiled by EleutherAI for use in large-scale language modeling. The Pile is
comprised of 22 different text sources, ranging from original scrapes done for
this project, to text data made available by the data owners, to third-party
scrapes ava... | 2022-01-13T23:45:24Z | Accompanies "The Pile: An 800GB Dataset of Diverse Text for Language
Modeling" arXiv:2101.00027 | null | null | Datasheet for the Pile | ['Stella Biderman', 'Kieran Bicheno', 'Leo Gao'] | 2,022 | arXiv.org | 36 | 85 | ['Computer Science'] |
2201.07436 | Global-Local Path Networks for Monocular Depth Estimation with Vertical
CutDepth | ['Doyeon Kim', 'Woonghyun Ka', 'Pyungwhan Ahn', 'Donggyu Joo', 'Sehwan Chun', 'Junmo Kim'] | ['cs.CV'] | Depth estimation from a single image is an important task that can be applied
to various fields in computer vision, and has grown rapidly with the
development of convolutional neural networks. In this paper, we propose a novel
structure and training strategy for monocular depth estimation to further
improve the predict... | 2022-01-19T06:37:21Z | 11pages, 5 figures | null | null | null | null | null | null | null | null | null |
2201.08277 | NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual
Sentiment Analysis | ['Shamsuddeen Hassan Muhammad', 'David Ifeoluwa Adelani', 'Sebastian Ruder', 'Ibrahim Said Ahmad', 'Idris Abdulmumin', 'Bello Shehu Bello', 'Monojit Choudhury', 'Chris Chinenye Emezue', 'Saheed Salahudeen Abdullahi', 'Anuoluwapo Aremu', 'Alipio Jeorge', 'Pavel Brazdil'] | ['cs.CL', 'cs.AI'] | Sentiment analysis is one of the most widely studied applications in NLP, but
most work focuses on languages with large amounts of data. We introduce the
first large-scale human-annotated Twitter sentiment dataset for the four most
widely spoken languages in Nigeria (Hausa, Igbo, Nigerian-Pidgin, and
Yorùbá) consi... | 2022-01-20T16:28:06Z | Submitted to LREC 2022, 13 pages, 2 figures | null | null | null | null | null | null | null | null | null
2201.08371 | Revisiting Weakly Supervised Pre-Training of Visual Perception Models | ['Mannat Singh', 'Laura Gustafson', 'Aaron Adcock', 'Vinicius de Freitas Reis', 'Bugra Gedik', 'Raj Prateek Kosaraju', 'Dhruv Mahajan', 'Ross Girshick', 'Piotr Dollár', 'Laurens van der Maaten'] | ['cs.CV'] | Model pre-training is a cornerstone of modern visual recognition systems.
Although fully supervised pre-training on datasets like ImageNet is still the
de-facto standard, recent studies suggest that large-scale weakly supervised
pre-training can outperform fully supervised approaches. This paper revisits
weakly-supervi... | 2022-01-20T18:55:06Z | CVPR 2022 | null | null | Revisiting Weakly Supervised Pre-Training of Visual Perception Models | ['Mannat Singh', 'Laura Gustafson', 'Aaron B. Adcock', 'Vinicius de Freitas Reis', 'B. Gedik', 'Raj Prateek Kosaraju', 'D. Mahajan', 'Ross B. Girshick', "Piotr Doll'ar", 'L. Maaten'] | 2,022 | Computer Vision and Pattern Recognition | 130 | 80 | ['Computer Science'] |
2201.08471 | Transfer Learning Approaches for Building Cross-Language Dense Retrieval
Models | ['Suraj Nair', 'Eugene Yang', 'Dawn Lawrie', 'Kevin Duh', 'Paul McNamee', 'Kenton Murray', 'James Mayfield', 'Douglas W. Oard'] | ['cs.IR', 'cs.CL'] | The advent of transformer-based models such as BERT has led to the rise of
neural ranking models. These models have improved the effectiveness of
retrieval systems well beyond that of lexical term matching models such as
BM25. While monolingual retrieval tasks have benefited from large-scale
training collections such a... | 2022-01-20T22:11:38Z | Accepted at ECIR 2022 (Full paper) | null | null | null | null | null | null | null | null | null |
2201.08542 | Can Model Compression Improve NLP Fairness | ['Guangxuan Xu', 'Qingyuan Hu'] | ['cs.CL'] | Model compression techniques are receiving increasing attention; however, the
effect of compression on model fairness is still under explored. This is the
first paper to examine the effect of distillation and pruning on the toxicity
and bias of generative language models. We test Knowledge Distillation and
Pruning meth... | 2022-01-21T05:14:51Z | null | null | Can Model Compression Improve NLP Fairness | ['Guangxuan Xu', 'Qingyuan Hu'] | 2022 | arXiv.org | 28 | 34 | ['Computer Science']
2201.08698 | Natural Attack for Pre-trained Models of Code | ['Zhou Yang', 'Jieke Shi', 'Junda He', 'David Lo'] | ['cs.SE'] | Pre-trained models of code have achieved success in many important software
engineering tasks. However, these powerful models are vulnerable to adversarial
attacks that slightly perturb model inputs to make a victim model produce wrong
outputs. Current works mainly attack models of code with examples that preserve
oper... | 2022-01-21T13:50:51Z | To appear in the Technical Track of ICSE 2022 | null | 10.1145/3510003.3510146 | null | null | null | null | null | null | null |
2201.08860 | GreaseLM: Graph REASoning Enhanced Language Models for Question
Answering | ['Xikun Zhang', 'Antoine Bosselut', 'Michihiro Yasunaga', 'Hongyu Ren', 'Percy Liang', 'Christopher D. Manning', 'Jure Leskovec'] | ['cs.CL', 'cs.LG'] | Answering complex questions about textual narratives requires reasoning over
both stated context and the world knowledge that underlies it. However,
pretrained language models (LM), the foundation of most modern QA systems, do
not robustly represent latent relationships between concepts, which is
necessary for reasonin... | 2022-01-21T19:00:05Z | Published at ICLR 2022. All code, data, and pretrained models are
available at https://github.com/snap-stanford/GreaseLM | null | null | GreaseLM: Graph REASoning Enhanced Language Models for Question Answering | ['Xikun Zhang', 'Antoine Bosselut', 'Michihiro Yasunaga', 'Hongyu Ren', 'Percy Liang', 'Christopher D. Manning', 'J. Leskovec'] | 2022 | International Conference on Learning Representations | 231 | 49 | ['Computer Science']
2201.09061 | Explore the Expression: Facial Expression Generation using Auxiliary
Classifier Generative Adversarial Network | ['J. Rafid Siddiqui'] | ['cs.CV', 'cs.GR', 'cs.LG'] | Facial expressions are a form of non-verbal communication that humans perform
seamlessly for meaningful transfer of information. Most of the literature
addresses the facial expression recognition aspect however, with the advent of
Generative Models, it has become possible to explore the affect space in
addition to mere... | 2022-01-22T14:37:13Z | null | null | null | null | null | null | null | null | null | null |
2201.09450 | UniFormer: Unifying Convolution and Self-attention for Visual
Recognition | ['Kunchang Li', 'Yali Wang', 'Junhao Zhang', 'Peng Gao', 'Guanglu Song', 'Yu Liu', 'Hongsheng Li', 'Yu Qiao'] | ['cs.CV'] | It is a challenging task to learn discriminative representation from images
and videos, due to large local redundancy and complex global dependency in
these visual data. Convolution neural networks (CNNs) and vision transformers
(ViTs) have been two dominant frameworks in the past few years. Though CNNs can
efficiently... | 2022-01-24T04:39:39Z | 18 pages, 10 figures, 23 tables. This work has been submitted to the
IEEE for possible publication | null | null | null | null | null | null | null | null | null |
2201.09792 | Patches Are All You Need? | ['Asher Trockman', 'J. Zico Kolter'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Although convolutional networks have been the dominant architecture for
vision tasks for many years, recent experiments have shown that
Transformer-based models, most notably the Vision Transformer (ViT), may exceed
their performance in some settings. However, due to the quadratic runtime of
the self-attention layers i... | 2022-01-24T16:42:56Z | null | null | Patches Are All You Need? | ['Asher Trockman', 'J. Z. Kolter'] | 2022 | Trans. Mach. Learn. Res. | 416 | 35 | ['Computer Science']
2201.10005 | Text and Code Embeddings by Contrastive Pre-Training | ['Arvind Neelakantan', 'Tao Xu', 'Raul Puri', 'Alec Radford', 'Jesse Michael Han', 'Jerry Tworek', 'Qiming Yuan', 'Nikolas Tezak', 'Jong Wook Kim', 'Chris Hallacy', 'Johannes Heidecke', 'Pranav Shyam', 'Boris Power', 'Tyna Eloundou Nekoul', 'Girish Sastry', 'Gretchen Krueger', 'David Schnurr', 'Felipe Petroski Such', '... | ['cs.CL', 'cs.LG'] | Text embeddings are useful features in many applications such as semantic
search and computing text similarity. Previous work typically trains models
customized for different use cases, varying in dataset choice, training
objective and model architecture. In this work, we show that contrastive
pre-training on unsupervi... | 2022-01-24T23:36:20Z | null | null | null | null | null | null | null | null | null | null |
2201.10800 | SkiM: Skipping Memory LSTM for Low-Latency Real-Time Continuous Speech
Separation | ['Chenda Li', 'Lei Yang', 'Weiqin Wang', 'Yanmin Qian'] | ['eess.AS', 'cs.SD'] | Continuous speech separation for meeting pre-processing has recently become a
focused research topic. Compared to the data in utterance-level speech
separation, the meeting-style audio stream lasts longer, has an uncertain
number of speakers. We adopt the time-domain speech separation method and the
recently proposed G... | 2022-01-26T08:16:56Z | Accepted by ICASSP 2022 | null | null | null | null | null | null | null | null | null |
2201.10801 | When Shift Operation Meets Vision Transformer: An Extremely Simple
Alternative to Attention Mechanism | ['Guangting Wang', 'Yucheng Zhao', 'Chuanxin Tang', 'Chong Luo', 'Wenjun Zeng'] | ['cs.CV'] | Attention mechanism has been widely believed as the key to success of vision
transformers (ViTs), since it provides a flexible and powerful way to model
spatial relationships. However, is the attention mechanism truly an
indispensable part of ViT? Can it be replaced by some other alternatives? To
demystify the role of ... | 2022-01-26T08:17:06Z | accepted by AAAI-22 | null | null | null | null | null | null | null | null | null |
2201.11115 | CsFEVER and CTKFacts: Acquiring Czech data for fact verification | ['Herbert Ullrich', 'Jan Drchal', 'Martin Rýpar', 'Hana Vincourová', 'Václav Moravec'] | ['cs.CL', 'cs.LG'] | In this paper, we examine several methods of acquiring Czech data for
automated fact-checking, which is a task commonly modeled as a classification
of textual claim veracity w.r.t. a corpus of trusted ground truths. We attempt
to collect sets of data in form of a factual claim, evidence within the ground
truth corpus, ... | 2022-01-26T18:48:42Z | submitted to LREV journal for review, resubmission, changed title
according to reviewer suggestion | null | 10.1007/s10579-023-09654-3 | CsFEVER and CTKFacts: acquiring Czech data for fact verification | ['Herbert Ullrich', 'Jan Drchal', 'Martin Rýpar', 'Hana Vincourová', 'Václav Moravec'] | 2022 | Language Resources and Evaluation | 9 | 65 | ['Computer Science', 'Medicine']
2201.11838 | Clinical-Longformer and Clinical-BigBird: Transformers for long clinical
sequences | ['Yikuan Li', 'Ramsey M. Wehbe', 'Faraz S. Ahmad', 'Hanyin Wang', 'Yuan Luo'] | ['cs.CL', 'cs.AI'] | Transformers-based models, such as BERT, have dramatically improved the
performance for various natural language processing tasks. The clinical
knowledge enriched model, namely ClinicalBERT, also achieved state-of-the-art
results when performed on clinical named entity recognition and natural
language inference tasks. ... | 2022-01-27T22:51:58Z | null | null | null | null | null | null | null | null | null | null |
2201.11903 | Chain-of-Thought Prompting Elicits Reasoning in Large Language Models | ['Jason Wei', 'Xuezhi Wang', 'Dale Schuurmans', 'Maarten Bosma', 'Brian Ichter', 'Fei Xia', 'Ed Chi', 'Quoc Le', 'Denny Zhou'] | ['cs.CL', 'cs.AI'] | We explore how generating a chain of thought -- a series of intermediate
reasoning steps -- significantly improves the ability of large language models
to perform complex reasoning. In particular, we show how such reasoning
abilities emerge naturally in sufficiently large language models via a simple
method called chai... | 2022-01-28T02:33:07Z | null | null | null | null | null | null | null | null | null | null |
2201.11990 | Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A
Large-Scale Generative Language Model | ['Shaden Smith', 'Mostofa Patwary', 'Brandon Norick', 'Patrick LeGresley', 'Samyam Rajbhandari', 'Jared Casper', 'Zhun Liu', 'Shrimai Prabhumoye', 'George Zerveas', 'Vijay Korthikanti', 'Elton Zhang', 'Rewon Child', 'Reza Yazdani Aminabadi', 'Julie Bernauer', 'Xia Song', 'Mohammad Shoeybi', 'Yuxiong He', 'Michael Houst... | ['cs.CL'] | Pretrained general-purpose language models can achieve state-of-the-art
accuracies in various natural language processing domains by adapting to
downstream tasks via zero-shot, few-shot and fine-tuning techniques. Because of
their success, the size of these models has increased rapidly, requiring
high-performance hardw... | 2022-01-28T08:59:57Z | Shaden Smith and Mostofa Patwary contributed equally | null | null | Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model | ['Shaden Smith', 'M. Patwary', 'Brandon Norick', 'P. LeGresley', 'Samyam Rajbhandari', 'J. Casper', 'Zhun Liu', 'Shrimai Prabhumoye', 'George Zerveas', 'V. Korthikanti', 'Elton Zhang', 'R. Child', 'Reza Yazdani Aminabadi', 'J. Bernauer', 'Xia Song', 'M. Shoeybi', 'Yuxiong He', 'Michael Houston', 'Saurabh Tiwary', 'Brya... | 2022 | arXiv.org | 745 | 78 | ['Computer Science']
2201.12086 | BLIP: Bootstrapping Language-Image Pre-training for Unified
Vision-Language Understanding and Generation | ['Junnan Li', 'Dongxu Li', 'Caiming Xiong', 'Steven Hoi'] | ['cs.CV'] | Vision-Language Pre-training (VLP) has advanced the performance for many
vision-language tasks. However, most existing pre-trained models only excel in
either understanding-based tasks or generation-based tasks. Furthermore,
performance improvement has been largely achieved by scaling up the dataset
with noisy image-te... | null | null | null | BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation | ['Junnan Li', 'Dongxu Li', 'Caiming Xiong', 'S. Hoi'] | 2022 | International Conference on Machine Learning | 4444 | 60 | ['Computer Science']
2201.12091 | Linear Adversarial Concept Erasure | ['Shauli Ravfogel', 'Michael Twiton', 'Yoav Goldberg', 'Ryan Cotterell'] | ['cs.LG', 'cs.CL'] | Modern neural models trained on textual data rely on pre-trained
representations that emerge without direct supervision. As these
representations are increasingly being used in real-world applications, the
inability to \emph{control} their content becomes an increasingly important
problem. We formulate the problem of i... | 2022-01-28T13:00:17Z | Accepted in ICML 2022; a revised version | null | null | null | null | null | null | null | null | null |
2201.12329 | DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR | ['Shilong Liu', 'Feng Li', 'Hao Zhang', 'Xiao Yang', 'Xianbiao Qi', 'Hang Su', 'Jun Zhu', 'Lei Zhang'] | ['cs.CV'] | We present in this paper a novel query formulation using dynamic anchor boxes
for DETR (DEtection TRansformer) and offer a deeper understanding of the role
of queries in DETR. This new formulation directly uses box coordinates as
queries in Transformer decoders and dynamically updates them layer-by-layer.
Using box coo... | Accepted to ICLR 2022 | null | null | DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR | ['Shilong Liu', 'Feng Li', 'Hao Zhang', 'X. Yang', 'Xianbiao Qi', 'Hang Su', 'Jun Zhu', 'Lei Zhang'] | 2022 | International Conference on Learning Representations | 772 | 23 | ['Computer Science']
2201.12431 | Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval | ['Uri Alon', 'Frank F. Xu', 'Junxian He', 'Sudipta Sengupta', 'Dan Roth', 'Graham Neubig'] | ['cs.CL', 'cs.LG'] | Retrieval-based language models (R-LM) model the probability of natural
language text by combining a standard language model (LM) with examples
retrieved from an external datastore at test time. While effective, a major
bottleneck of using these models in practice is the computationally costly
datastore search, which c... | 2022-01-28T21:38:56Z | Accepted to ICML'2022. Code and models are available at
https://github.com/neulab/retomaton | null | null | Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval | ['Uri Alon', 'Frank F. Xu', 'Junxian He', 'Sudipta Sengupta', 'D. Roth', 'Graham Neubig'] | 2022 | International Conference on Machine Learning | 64 | 47 | ['Computer Science']