Dataset schema (column, dtype, range or cardinality):

arxiv_id           float64    values 1.5k – 2.51k
title              string     lengths 9 – 178
authors            string     lengths 2 – 22.8k
categories         string     lengths 4 – 146
summary            string     lengths 103 – 1.92k
published          date       2015-02-06 10:44:00 – 2025-07-10 17:59:58
comments           string     lengths 2 – 417
journal_ref        string     321 classes
doi                string     398 classes
ss_title           string     lengths 8 – 159
ss_authors         string     lengths 11 – 8.38k
ss_year            float64    values 2.02k – 2.03k
ss_venue           string     281 classes
ss_citationCount   float64    values 0 – 134k
ss_referenceCount  float64    values 0 – 429
ss_fieldsOfStudy   string     47 classes
2211.08233
Temporal Modeling Matters: A Novel Temporal Emotional Modeling Approach for Speech Emotion Recognition
['Jiaxin Ye', 'Xin-cheng Wen', 'Yujie Wei', 'Yong Xu', 'Kunhong Liu', 'Hongming Shan']
['cs.SD', 'cs.CL', 'eess.AS']
Speech emotion recognition (SER) plays a vital role in improving the interactions between humans and machines by inferring human emotion and affective states from speech signals. Whereas recent works primarily focus on mining spatiotemporal information from hand-crafted features, we explore how to model the temporal pa...
2022-11-14T13:35:01Z
ICASSP 2023
IEEE ICASSP 2023
10.1109/ICASSP49357.2023.10096370
null
null
null
null
null
null
null
2211.08332
Versatile Diffusion: Text, Images and Variations All in One Diffusion Model
['Xingqian Xu', 'Zhangyang Wang', 'Eric Zhang', 'Kai Wang', 'Humphrey Shi']
['cs.CV']
Recent advances in diffusion models have set an impressive milestone in many generation tasks, and trending works such as DALL-E2, Imagen, and Stable Diffusion have attracted great interest. Despite the rapid landscape changes, recent new approaches focus on extensions and performance rather than capacity, thus requiri...
2022-11-15T17:44:05Z
ICCV 2023; Github link: https://github.com/SHI-Labs/Versatile-Diffusion
null
null
Versatile Diffusion: Text, Images and Variations All in One Diffusion Model
['Xingqian Xu', 'Zhangyang Wang', 'Eric Zhang', 'Kai Wang', 'Humphrey Shi']
2022
IEEE International Conference on Computer Vision
198
117
['Computer Science']
2211.08609
R-Pred: Two-Stage Motion Prediction Via Tube-Query Attention-Based Trajectory Refinement
['Sehwan Choi', 'Jungho Kim', 'Junyong Yun', 'Jun Won Choi']
['cs.CV']
Predicting the future motion of dynamic agents is of paramount importance to ensuring safety and assessing risks in motion planning for autonomous robots. In this study, we propose a two-stage motion prediction method, called R-Pred, designed to effectively utilize both scene and interaction context using a cascade of ...
2022-11-16T01:43:39Z
null
null
null
null
null
null
null
null
null
null
2211.08769
RetroMAE v2: Duplex Masked Auto-Encoder For Pre-Training Retrieval-Oriented Language Models
['Shitao Xiao', 'Zheng Liu']
['cs.CL', 'cs.IR']
To better support retrieval applications such as web search and question answering, growing effort is made to develop retrieval-oriented language models. Most of the existing works focus on improving the semantic representation capability for the contextualized embedding of [CLS] token. However, recent study shows that...
2022-11-16T08:57:55Z
null
null
null
RetroMAE v2: Duplex Masked Auto-Encoder For Pre-Training Retrieval-Oriented Language Models
['Shitao Xiao', 'Zheng Liu']
2022
arXiv.org
2
35
['Computer Science']
2211.09085
Galactica: A Large Language Model for Science
['Ross Taylor', 'Marcin Kardas', 'Guillem Cucurull', 'Thomas Scialom', 'Anthony Hartshorn', 'Elvis Saravia', 'Andrew Poulton', 'Viktor Kerkez', 'Robert Stojnic']
['cs.CL', 'stat.ML']
Information overload is a major obstacle to scientific progress. The explosive growth in scientific literature and data has made it ever harder to discover useful insights in a large mass of information. Today scientific knowledge is accessed through search engines, but they are unable to organize scientific knowledge ...
2022-11-16T18:06:33Z
null
null
null
null
null
null
null
null
null
null
2211.09110
Holistic Evaluation of Language Models
['Percy Liang', 'Rishi Bommasani', 'Tony Lee', 'Dimitris Tsipras', 'Dilara Soylu', 'Michihiro Yasunaga', 'Yian Zhang', 'Deepak Narayanan', 'Yuhuai Wu', 'Ananya Kumar', 'Benjamin Newman', 'Binhang Yuan', 'Bobby Yan', 'Ce Zhang', 'Christian Cosgrove', 'Christopher D. Manning', 'Christopher Ré', 'Diana Acosta-Navas', 'Dre...
['cs.CL', 'cs.AI', 'cs.LG']
Language models (LMs) are becoming the foundation for almost all major language technologies, but their capabilities, limitations, and risks are not well understood. We present Holistic Evaluation of Language Models (HELM) to improve the transparency of language models. First, we taxonomize the vast space of potential ...
2022-11-16T18:51:34Z
Authored by the Center for Research on Foundation Models (CRFM) at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). Project page: https://crfm.stanford.edu/helm/v1.0
Published in Transactions on Machine Learning Research (TMLR), 2023
null
null
null
null
null
null
null
null
2211.09260
Task-aware Retrieval with Instructions
['Akari Asai', 'Timo Schick', 'Patrick Lewis', 'Xilun Chen', 'Gautier Izacard', 'Sebastian Riedel', 'Hannaneh Hajishirzi', 'Wen-tau Yih']
['cs.CL']
We study the problem of retrieval with instructions, where users of a retrieval system explicitly describe their intent along with their queries. We aim to develop a general-purpose task-aware retrieval system using multi-task instruction tuning, which can follow human-written instructions to find the best documents fo...
2022-11-16T23:13:22Z
Code, data and pretrained model checkpoints are available at https://github.com/facebookresearch/tart
null
null
null
null
null
null
null
null
null
2211.09552
UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer
['Kunchang Li', 'Yali Wang', 'Yinan He', 'Yizhuo Li', 'Yi Wang', 'Limin Wang', 'Yu Qiao']
['cs.CV']
Learning discriminative spatiotemporal representation is the key problem of video understanding. Recently, Vision Transformers (ViTs) have shown their power in learning long-term video dependency with self-attention. Unfortunately, they exhibit limitations in tackling local video redundancy, due to the blind global com...
2022-11-17T14:17:40Z
24 pages, 4 figures, 20 tables
null
null
UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer
['Kunchang Li', 'Yali Wang', 'Yinan He', 'Yizhuo Li', 'Yi Wang', 'Limin Wang', 'Y. Qiao']
2022
arXiv.org
113
93
['Computer Science']
2211.09699
PromptCap: Prompt-Guided Task-Aware Image Captioning
['Yushi Hu', 'Hang Hua', 'Zhengyuan Yang', 'Weijia Shi', 'Noah A Smith', 'Jiebo Luo']
['cs.CV', 'cs.CL']
Knowledge-based visual question answering (VQA) involves questions that require world knowledge beyond the image to yield the correct answer. Large language models (LMs) like GPT-3 are particularly helpful for this task because of their strong knowledge retrieval and reasoning capabilities. To enable LM to understand i...
2022-11-15T19:07:53Z
Accepted to ICCV 2023
null
null
PromptCap: Prompt-Guided Task-Aware Image Captioning
['Yushi Hu', 'Hang Hua', 'Zhengyuan Yang', 'Weijia Shi', 'Noah A. Smith', 'Jiebo Luo']
2022
arXiv.org
106
108
['Computer Science']
2211.09707
Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models
['Simon Alexanderson', 'Rajmund Nagy', 'Jonas Beskow', 'Gustav Eje Henter']
['cs.LG', 'cs.CV', 'cs.GR', 'cs.HC', 'cs.SD', 'eess.AS', '68T07', 'G.3; I.2.6; I.3.7; J.5']
Diffusion models have experienced a surge of interest as highly expressive yet efficiently trainable probabilistic models. We show that these models are an excellent fit for synthesising human motion that co-occurs with audio, e.g., dancing and co-speech gesticulation, since motion is complex and highly ambiguous given...
2022-11-17T17:41:00Z
20 pages, 9 figures. Published in ACM ToG and presented at SIGGRAPH 2023
ACM Trans. Graph. 42, 4 (August 2023), 20 pages
10.1145/3592458
Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models
['Simon Alexanderson', 'Rajmund Nagy', 'J. Beskow', 'G. Henter']
2022
ACM Transactions on Graphics
174
168
['Computer Science', 'Engineering']
2211.09760
VeLO: Training Versatile Learned Optimizers by Scaling Up
['Luke Metz', 'James Harrison', 'C. Daniel Freeman', 'Amil Merchant', 'Lucas Beyer', 'James Bradbury', 'Naman Agrawal', 'Ben Poole', 'Igor Mordatch', 'Adam Roberts', 'Jascha Sohl-Dickstein']
['cs.LG', 'math.OC', 'stat.ML']
While deep learning models have replaced hand-designed features across many domains, these models are still trained with hand-designed optimizers. In this work, we leverage the same scaling approach behind the success of deep learning to learn versatile optimizers. We train an optimizer for deep learning which is itsel...
2022-11-17T18:39:07Z
null
null
null
VeLO: Training Versatile Learned Optimizers by Scaling Up
['Luke Metz', 'James Harrison', 'C. Freeman', 'Amil Merchant', 'Lucas Beyer', 'James Bradbury', 'Naman Agrawal', 'Ben Poole', 'Igor Mordatch', 'Adam Roberts', 'Jascha Narain Sohl-Dickstein']
2022
arXiv.org
60
150
['Computer Science', 'Mathematics']
2211.09800
InstructPix2Pix: Learning to Follow Image Editing Instructions
['Tim Brooks', 'Aleksander Holynski', 'Alexei A. Efros']
['cs.CV', 'cs.AI', 'cs.CL', 'cs.GR', 'cs.LG']
We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models -- a language model (G...
2022-11-17T18:58:43Z
Project page with code: https://www.timothybrooks.com/instruct-pix2pix
null
null
null
null
null
null
null
null
null
2211.09807
Towards All-in-one Pre-training via Maximizing Multi-modal Mutual Information
['Weijie Su', 'Xizhou Zhu', 'Chenxin Tao', 'Lewei Lu', 'Bin Li', 'Gao Huang', 'Yu Qiao', 'Xiaogang Wang', 'Jie Zhou', 'Jifeng Dai']
['cs.CV']
To effectively exploit the potential of large-scale models, various pre-training strategies supported by massive data from different sources are proposed, including supervised pre-training, weakly-supervised pre-training, and self-supervised pre-training. It has been proved that combining multiple pre-training strategi...
2022-11-17T18:59:49Z
null
null
null
null
null
null
null
null
null
null
2211.09808
Uni-Perceiver v2: A Generalist Model for Large-Scale Vision and Vision-Language Tasks
['Hao Li', 'Jinguo Zhu', 'Xiaohu Jiang', 'Xizhou Zhu', 'Hongsheng Li', 'Chun Yuan', 'Xiaohua Wang', 'Yu Qiao', 'Xiaogang Wang', 'Wenhai Wang', 'Jifeng Dai']
['cs.CV']
Despite the remarkable success of foundation models, their task-specific fine-tuning paradigm makes them inconsistent with the goal of general perception modeling. The key to eliminating this inconsistency is to use generalist models for general task modeling. However, existing attempts at generalist models are inadequ...
2022-11-17T18:59:52Z
Code shall be released at https://github.com/fundamentalvision/Uni-Perceiver
null
null
null
null
null
null
null
null
null
2211.10086
Metadata Might Make Language Models Better
['Kaspar Beelen', 'Daniel van Strien']
['cs.CL', 'cs.DL']
This paper discusses the benefits of including metadata when training language models on historical collections. Using 19th-century newspapers as a case study, we extend the time-masking approach proposed by Rosin et al., 2022 and compare different strategies for inserting temporal, political and geographical informati...
2022-11-18T08:29:00Z
null
null
null
null
null
null
null
null
null
null
2211.10330
GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation
['Biyang Guo', 'Yeyun Gong', 'Yelong Shen', 'Songqiao Han', 'Hailiang Huang', 'Nan Duan', 'Weizhu Chen']
['cs.CL']
We introduce GENIUS: a conditional text generation model using sketches as input, which can fill in the missing contexts for a given sketch (key information consisting of textual spans, phrases, or words, concatenated by mask tokens). GENIUS is pre-trained on a large-scale textual corpus with a novel reconstruction fro...
2022-11-18T16:39:45Z
21 pages
null
null
null
null
null
null
null
null
null
2211.10438
SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
['Guangxuan Xiao', 'Ji Lin', 'Mickael Seznec', 'Hao Wu', 'Julien Demouth', 'Song Han']
['cs.CL', 'cs.AI', 'cs.LG']
Large language models (LLMs) show excellent performance but are compute- and memory-intensive. Quantization can reduce memory and accelerate inference. However, existing methods cannot maintain accuracy and hardware efficiency at the same time. We propose SmoothQuant, a training-free, accuracy-preserving, and general-p...
2022-11-18T18:59:33Z
ICML 2023. First two authors contributed equally to this work
null
null
null
null
null
null
null
null
null
2211.10439
BEVFormer v2: Adapting Modern Image Backbones to Bird's-Eye-View Recognition via Perspective Supervision
['Chenyu Yang', 'Yuntao Chen', 'Hao Tian', 'Chenxin Tao', 'Xizhou Zhu', 'Zhaoxiang Zhang', 'Gao Huang', 'Hongyang Li', 'Yu Qiao', 'Lewei Lu', 'Jie Zhou', 'Jifeng Dai']
['cs.CV']
We present a novel bird's-eye-view (BEV) detector with perspective supervision, which converges faster and better suits modern image backbones. Existing state-of-the-art BEV detectors are often tied to certain depth pre-trained backbones like VoVNet, hindering the synergy between booming image backbones and BEV detecto...
2022-11-18T18:59:48Z
null
null
null
BEVFormer v2: Adapting Modern Image Backbones to Bird's-Eye-View Recognition via Perspective Supervision
['Chenyu Yang', 'Yuntao Chen', 'Haofei Tian', 'Chenxin Tao', 'Xizhou Zhu', 'Zhaoxiang Zhang', 'Gao Huang', 'Hongyang Li', 'Y. Qiao', 'Lewei Lu', 'Jie Zhou', 'Jifeng Dai']
2022
Computer Vision and Pattern Recognition
279
46
['Computer Science']
2211.11187
L3Cube-MahaSBERT and HindSBERT: Sentence BERT Models and Benchmarking BERT Sentence Representations for Hindi and Marathi
['Ananya Joshi', 'Aditi Kajale', 'Janhavi Gadre', 'Samruddhi Deode', 'Raviraj Joshi']
['cs.CL', 'cs.LG']
Sentence representation from vanilla BERT models does not work well on sentence similarity tasks. Sentence-BERT models specifically trained on STS or NLI datasets are shown to provide state-of-the-art performance. However, building these models for low-resource languages is not straightforward due to the lack of these ...
2022-11-21T05:15:48Z
Accepted at Computing Conference 2023
null
null
L3Cube-MahaSBERT and HindSBERT: Sentence BERT Models and Benchmarking BERT Sentence Representations for Hindi and Marathi
['Ananya Joshi', 'Aditi Kajale', 'Janhavi Gadre', 'Samruddhi Deode', 'Raviraj Joshi']
2022
arXiv.org
12
32
['Computer Science']
2211.11216
Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task
['Shangda Wu', 'Maosong Sun']
['cs.SD', 'cs.CL', 'eess.AS']
Benefiting from large-scale datasets and pre-trained models, the field of generative models has recently gained significant momentum. However, most datasets for symbolic music are very small, which potentially limits the performance of data-driven multimodal models. An intuitive solution to this problem is to leverage ...
2022-11-21T07:19:17Z
Accepted by the Creative AI Across Modalities workshop at AAAI 2023
null
null
Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task
['Shangda Wu', 'Maosong Sun']
2022
arXiv.org
20
24
['Computer Science', 'Engineering']
2211.11304
TCBERT: A Technical Report for Chinese Topic Classification BERT
['Ting Han', 'Kunhao Pan', 'Xinyu Chen', 'Dingjie Song', 'Yuchen Fan', 'Xinyu Gao', 'Ruyi Gan', 'Jiaxing Zhang']
['cs.CL']
Bidirectional Encoder Representations from Transformers or BERT~\cite{devlin-etal-2019-bert} has been one of the base models for various NLP tasks due to its remarkable performance. Variants customized for different languages and tasks are proposed to further improve the performance. In this work, we investigate superv...
2022-11-21T09:45:15Z
null
null
null
null
null
null
null
null
null
null
2211.11418
L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages
['Raviraj Joshi']
['cs.CL', 'cs.LG']
The monolingual Hindi BERT models currently available on the model hub do not perform better than the multi-lingual models on downstream tasks. We present L3Cube-HindBERT, a Hindi BERT model pre-trained on Hindi monolingual corpus. Further, since Indic languages, Hindi and Marathi share the Devanagari script, we train ...
2022-11-21T13:02:52Z
Accepted at ICICC 2023
null
null
null
null
null
null
null
null
null
2211.12194
SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
['Wenxuan Zhang', 'Xiaodong Cun', 'Xuan Wang', 'Yong Zhang', 'Xi Shen', 'Yu Guo', 'Ying Shan', 'Fei Wang']
['cs.CV']
Generating talking head videos through a face image and a piece of speech audio still contains many challenges, i.e., unnatural head movement, distorted expression, and identity modification. We argue that these issues are mainly because of learning from the coupled 2D motion fields. On the other hand, explicitly using 3...
2022-11-22T11:35:07Z
Accepted by CVPR 2023, Project page: https://sadtalker.github.io, Code: https://github.com/Winfredy/SadTalker
null
null
SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
['Wenxuan Zhang', 'Xiaodong Cun', 'Xuan Wang', 'Yong Zhang', 'Xiaodong Shen', 'Yu Guo', 'Ying Shan', 'Fei Wang']
2022
Computer Vision and Pattern Recognition
256
55
['Computer Science']
2211.12509
SimVPv2: Towards Simple yet Powerful Spatiotemporal Predictive Learning
['Cheng Tan', 'Zhangyang Gao', 'Siyuan Li', 'Stan Z. Li']
['cs.LG']
Recent years have witnessed remarkable advances in spatiotemporal predictive learning, with methods incorporating auxiliary inputs, complex neural architectures, and sophisticated training strategies. While SimVP has introduced a simpler, CNN-based baseline for this task, it still relies on heavy Unet-like architecture...
2022-11-22T08:01:33Z
Accepted by TMM
null
null
SimVPv2: Towards Simple yet Powerful Spatiotemporal Predictive Learning
['Cheng Tan', 'Zhangyang Gao', 'Siyuan Li', 'Stan Z. Li']
2022
IEEE transactions on multimedia
3
107
['Computer Science']
2211.12588
Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks
['Wenhu Chen', 'Xueguang Ma', 'Xinyi Wang', 'William W. Cohen']
['cs.CL', 'cs.AI']
Recently, there has been significant progress in teaching language models to perform step-by-step reasoning to solve complex numerical reasoning tasks. Chain-of-thoughts prompting (CoT) is by far the state-of-art method for these tasks. CoT uses language models to perform both reasoning and computation in the multi-ste...
2022-11-22T21:06:00Z
Published at TMLR 2023
null
null
Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks
['Wenhu Chen', 'Xueguang Ma', 'Xinyi Wang', 'William W. Cohen']
2022
Trans. Mach. Learn. Res.
829
49
['Computer Science']
2211.12905
GhostNetV2: Enhance Cheap Operation with Long-Range Attention
['Yehui Tang', 'Kai Han', 'Jianyuan Guo', 'Chang Xu', 'Chao Xu', 'Yunhe Wang']
['cs.CV']
Light-weight convolutional neural networks (CNNs) are specially designed for applications on mobile devices with faster inference speed. The convolutional operation can only capture local information in a window region, which prevents performance from being further improved. Introducing self-attention into convolution ...
2022-11-23T12:16:59Z
This paper is accepted by NeurIPS 2022 (Spotlight)
null
null
null
null
null
null
null
null
null
2211.12979
FLAIR #1: semantic segmentation and domain adaptation dataset
['Anatol Garioud', 'Stéphane Peillet', 'Eva Bookjans', 'Sébastien Giordano', 'Boris Wattrelos']
['cs.CV', 'eess.IV']
The French National Institute of Geographical and Forest Information (IGN) has the mission to document and measure land-cover on French territory and provides referential geographical datasets, including high-resolution aerial images and topographic maps. The monitoring of land-cover plays a crucial role in land manage...
2022-11-23T14:38:59Z
Data access update
null
10.13140/RG.2.2.30183.73128/1
null
null
null
null
null
null
null
2211.13221
Latent Video Diffusion Models for High-Fidelity Long Video Generation
['Yingqing He', 'Tianyu Yang', 'Yong Zhang', 'Ying Shan', 'Qifeng Chen']
['cs.CV', 'cs.AI']
AI-generated content has attracted lots of attention recently, but photo-realistic video synthesis is still challenging. Although many attempts using GANs and autoregressive models have been made in this area, the visual quality and length of generated videos are far from satisfactory. Diffusion models have shown remar...
2022-11-23T18:58:39Z
Project Page: https://yingqinghe.github.io/LVDM/ Github: https://github.com/YingqingHe/LVDM
null
null
Latent Video Diffusion Models for High-Fidelity Long Video Generation
['Yin-Yin He', 'Tianyu Yang', 'Yong Zhang', 'Ying Shan', 'Qifeng Chen']
2022
null
243
47
['Computer Science']
2211.13227
Paint by Example: Exemplar-based Image Editing with Diffusion Models
['Binxin Yang', 'Shuyang Gu', 'Bo Zhang', 'Ting Zhang', 'Xuejin Chen', 'Xiaoyan Sun', 'Dong Chen', 'Fang Wen']
['cs.CV']
Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive ap...
2022-11-23T18:59:52Z
Code: https://github.com/Fantasy-Studio/Paint-by-Example
null
null
null
null
null
null
null
null
null
2211.14275
Solving math word problems with process- and outcome-based feedback
['Jonathan Uesato', 'Nate Kushman', 'Ramana Kumar', 'Francis Song', 'Noah Siegel', 'Lisa Wang', 'Antonia Creswell', 'Geoffrey Irving', 'Irina Higgins']
['cs.LG', 'cs.AI', 'cs.CL']
Recent work has shown that asking language models to generate reasoning steps improves performance on many reasoning tasks. When moving beyond prompting, this raises the question of how we should supervise such models: outcome-based approaches which supervise the final result, or process-based approaches which supervis...
2022-11-25T18:19:44Z
null
null
null
null
null
null
null
null
null
null
2211.14304
BeLFusion: Latent Diffusion for Behavior-Driven Human Motion Prediction
['German Barquero', 'Sergio Escalera', 'Cristina Palmero']
['cs.CV']
Stochastic human motion prediction (HMP) has generally been tackled with generative adversarial networks and variational autoencoders. Most prior works aim at predicting highly diverse movements in terms of the skeleton joints' dispersion. This has led to methods predicting fast and motion-divergent movements, which ar...
2022-11-25T18:59:03Z
ICCV 2023 Camera-ready version. Project page: https://barquerogerman.github.io/BeLFusion/
Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023
null
null
null
null
null
null
null
null
2211.14730
A Time Series is Worth 64 Words: Long-term Forecasting with Transformers
['Yuqi Nie', 'Nam H. Nguyen', 'Phanwadee Sinthong', 'Jayant Kalagnanam']
['cs.LG', 'cs.AI']
We propose an efficient design of Transformer-based models for multivariate time series forecasting and self-supervised representation learning. It is based on two key components: (i) segmentation of time series into subseries-level patches which are served as input tokens to Transformer; (ii) channel-independence wher...
2022-11-27T05:15:42Z
Accepted by ICLR 2023
null
null
A Time Series is Worth 64 Words: Long-term Forecasting with Transformers
['Yuqi Nie', 'Nam H. Nguyen', 'Phanwadee Sinthong', 'J. Kalagnanam']
2022
International Conference on Learning Representations
1449
45
['Computer Science']
2211.14758
VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild
['Kun Cheng', 'Xiaodong Cun', 'Yong Zhang', 'Menghan Xia', 'Fei Yin', 'Mingrui Zhu', 'Xuan Wang', 'Jue Wang', 'Nannan Wang']
['cs.CV']
We present VideoReTalking, a new system to edit the faces of a real-world talking head video according to input audio, producing a high-quality and lip-syncing output video even with a different emotion. Our system disentangles this objective into three sequential tasks: (1) face video generation with a canonical expre...
2022-11-27T08:14:23Z
Accepted by SIGGRAPH Asia 2022 Conference Proceedings. Project page: https://vinthony.github.io/video-retalking/
null
null
VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild
['K. Cheng', 'Xiaodong Cun', 'Yong Zhang', 'Menghan Xia', 'Fei Yin', 'Mingrui Zhu', 'Xuanxia Wang', 'Jue Wang', 'Nan Wang']
2022
ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia
106
62
['Computer Science']
2211.15199
Large Pre-Trained Models with Extra-Large Vocabularies: A Contrastive Analysis of Hebrew BERT Models and a New One to Outperform Them All
['Eylon Gueta', 'Avi Shmidman', 'Shaltiel Shmidman', 'Cheyn Shmuel Shmidman', 'Joshua Guedalia', 'Moshe Koppel', 'Dan Bareket', 'Amit Seker', 'Reut Tsarfaty']
['cs.CL']
We present a new pre-trained language model (PLM) for modern Hebrew, termed AlephBERTGimmel, which employs a much larger vocabulary (128K items) than standard Hebrew PLMs before. We perform a contrastive analysis of this model against all previous Hebrew PLMs (mBERT, heBERT, AlephBERT) and assess the effects of larger ...
2022-11-28T10:17:35Z
null
null
null
Large Pre-Trained Models with Extra-Large Vocabularies: A Contrastive Analysis of Hebrew BERT Models and a New One to Outperform Them All
['Eylon Guetta', 'Avi Shmidman', 'Shaltiel Shmidman', 'C. Shmidman', 'Joshua Guedalia', 'Moshe Koppel', 'Dan Bareket', 'Amit Seker', 'Reut Tsarfaty']
2022
arXiv.org
15
14
['Computer Science']
2211.15444
DAMO-YOLO : A Report on Real-Time Object Detection Design
['Xianzhe Xu', 'Yiqi Jiang', 'Weihua Chen', 'Yilun Huang', 'Yuan Zhang', 'Xiuyu Sun']
['cs.CV']
In this report, we present a fast and accurate object detection method dubbed DAMO-YOLO, which achieves higher performance than the state-of-the-art YOLO series. DAMO-YOLO is extended from YOLO with some new technologies, including Neural Architecture Search (NAS), efficient Reparameterized Generalized-FPN (RepGFPN), a...
2022-11-23T17:59:12Z
Project Website: https://github.com/tinyvision/damo-yolo
null
null
null
null
null
null
null
null
null
2211.15518
ReCo: Region-Controlled Text-to-Image Generation
['Zhengyuan Yang', 'Jianfeng Wang', 'Zhe Gan', 'Linjie Li', 'Kevin Lin', 'Chenfei Wu', 'Nan Duan', 'Zicheng Liu', 'Ce Liu', 'Michael Zeng', 'Lijuan Wang']
['cs.CV']
Recently, large-scale text-to-image (T2I) models have shown impressive performance in generating high-fidelity images, but with limited controllability, e.g., precisely specifying the content in a specific region with a free-form text description. In this paper, we propose an effective technique for such regional contr...
2022-11-23T18:56:31Z
null
null
null
ReCo: Region-Controlled Text-to-Image Generation
['Zhengyuan Yang', 'Jianfeng Wang', 'Zhe Gan', 'Linjie Li', 'Kevin Lin', 'Chenfei Wu', 'Nan Duan', 'Zicheng Liu', 'Ce Liu', 'Michael Zeng', 'Lijuan Wang']
2022
Computer Vision and Pattern Recognition
150
53
['Computer Science']
2211.15533
The Stack: 3 TB of permissively licensed source code
['Denis Kocetkov', 'Raymond Li', 'Loubna Ben Allal', 'Jia Li', 'Chenghao Mou', 'Carlos Muñoz Ferrandis', 'Yacine Jernite', 'Margaret Mitchell', 'Sean Hughes', 'Thomas Wolf', 'Dzmitry Bahdanau', 'Leandro von Werra', 'Harm de Vries']
['cs.CL', 'cs.AI']
Large Language Models (LLMs) play an ever-increasing role in the field of Artificial Intelligence (AI)--not only for natural language processing but also for code understanding and generation. To stimulate open and responsible research on LLMs for code, we introduce The Stack, a 3.1 TB dataset consisting of permissivel...
2022-11-20T18:15:30Z
null
null
null
The Stack: 3 TB of permissively licensed source code
['Denis Kocetkov', 'Raymond Li', 'Loubna Ben Allal', 'Jia Li', 'Chenghao Mou', 'Carlos Muñoz Ferrandis', 'Yacine Jernite', 'Margaret Mitchell', 'Sean Hughes', 'Thomas Wolf', 'Dzmitry Bahdanau', 'L. V. Werra', 'H. D. Vries']
2022
Trans. Mach. Learn. Res.
339
50
['Computer Science']
2211.15613
Frustratingly Easy Label Projection for Cross-lingual Transfer
['Yang Chen', 'Chao Jiang', 'Alan Ritter', 'Wei Xu']
['cs.CL', 'cs.AI']
Translating training data into many languages has emerged as a practical solution for improving cross-lingual transfer. For tasks that involve span-level annotations, such as information extraction or question answering, an additional label projection step is required to map annotated spans onto the translated texts. R...
2022-11-28T18:11:48Z
This paper has been accepted at Findings of ACL 2023
null
null
null
null
null
null
null
null
null
2211.15660
SatlasPretrain: A Large-Scale Dataset for Remote Sensing Image Understanding
['Favyen Bastani', 'Piper Wolters', 'Ritwik Gupta', 'Joe Ferdinando', 'Aniruddha Kembhavi']
['cs.CV']
Remote sensing images are useful for a wide variety of planet monitoring applications, from tracking deforestation to tackling illegal fishing. The Earth is extremely diverse -- the amount of potential tasks in remote sensing images is massive, and the sizes of features range from several kilometers to just tens of cen...
2022-11-28T18:59:26Z
ICCV 2023
null
null
null
null
null
null
null
null
null
2211.15841
MegaBlocks: Efficient Sparse Training with Mixture-of-Experts
['Trevor Gale', 'Deepak Narayanan', 'Cliff Young', 'Matei Zaharia']
['cs.LG', 'cs.AI', 'cs.DC']
We present MegaBlocks, a system for efficient Mixture-of-Experts (MoE) training on GPUs. Our system is motivated by the limitations of current frameworks, which restrict the dynamic routing in MoE layers to satisfy the constraints of existing software and hardware. These formulations force a tradeoff between model qual...
2022-11-29T00:27:08Z
null
null
null
MegaBlocks: Efficient Sparse Training with Mixture-of-Experts
['Trevor Gale', 'D. Narayanan', 'C. Young', 'M. Zaharia']
2022
Conference on Machine Learning and Systems
109
51
['Computer Science']
2211.16028
JaCappella Corpus: A Japanese a Cappella Vocal Ensemble Corpus
['Tomohiko Nakamura', 'Shinnosuke Takamichi', 'Naoko Tanji', 'Satoru Fukayama', 'Hiroshi Saruwatari']
['eess.AS', 'cs.LG', 'cs.SD']
We construct a corpus of Japanese a cappella vocal ensembles (jaCappella corpus) for vocal ensemble separation and synthesis. It consists of 35 copyright-cleared vocal ensemble songs and their audio recordings of individual voice parts. These songs were arranged from out-of-copyright Japanese children's songs and have ...
2022-11-29T08:52:29Z
Accepted for ICASSP2023
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Jun. 2023, 5 pages
10.1109/ICASSP49357.2023.10095569
null
null
null
null
null
null
null
2211.16349
BARTSmiles: Generative Masked Language Models for Molecular Representations
['Gayane Chilingaryan', 'Hovhannes Tamoyan', 'Ani Tevosyan', 'Nelly Babayan', 'Lusine Khondkaryan', 'Karen Hambardzumyan', 'Zaven Navoyan', 'Hrant Khachatrian', 'Armen Aghajanyan']
['cs.LG', 'q-bio.BM']
We discover a robust self-supervised strategy tailored towards molecular representations for generative masked language models through a series of tailored, in-depth ablations. Using this pre-training strategy, we train BARTSmiles, a BART-like model with an order of magnitude more compute than previous self-supervised ...
2022-11-29T16:30:53Z
27 pages (including appendix)
null
null
BARTSmiles: Generative Masked Language Models for Molecular Representations
['Gayane Chilingaryan', 'Hovhannes Tamoyan', 'A. Tevosyan', 'N. Babayan', 'Lusine Khondkaryan', 'Karen Hambardzumyan', 'Z. Navoyan', 'Hrant Khachatrian', 'Armen Aghajanyan']
2022
Journal of Chemical Information and Modeling
28
74
['Medicine', 'Computer Science', 'Biology']
2211.16492
Abstract Visual Reasoning with Tangram Shapes
['Anya Ji', 'Noriyuki Kojima', 'Noah Rush', 'Alane Suhr', 'Wai Keen Vong', 'Robert D. Hawkins', 'Yoav Artzi']
['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG']
We introduce KiloGram, a resource for studying abstract visual reasoning in humans and machines. Drawing on the history of tangram puzzles as stimuli in cognitive science, we build a richly annotated dataset that, with >1k distinct stimuli, is orders of magnitude larger and more diverse than prior resources. It is both...
2022-11-29T18:57:06Z
EMNLP 2022 long paper
null
null
null
null
null
null
null
null
null
2211.17046
Rationale-Guided Few-Shot Classification to Detect Abusive Language
['Punyajoy Saha', 'Divyanshu Sheth', 'Kushal Kedia', 'Binny Mathew', 'Animesh Mukherjee']
['cs.CL', 'cs.CY']
Abusive language is a concerning problem in online social media. Past research on detecting abusive language covers different platforms, languages, demographies, etc. However, models trained using these datasets do not perform well in cross-domain evaluation settings. To overcome this, a common strategy is to use a few...
2022-11-30T14:47:14Z
11 pages, 14 tables, 3 figures, The code repository is https://github.com/punyajoy/RGFS_ECAI
null
null
Rationale-Guided Few-Shot Classification to Detect Abusive Language
['Punyajoy Saha', 'Divyanshu Sheth', 'K. Kedia', 'Binny Mathew', 'Animesh Mukherjee']
2022
European Conference on Artificial Intelligence
3
49
['Computer Science']
2211.17135
BudgetLongformer: Can we Cheaply Pretrain a SotA Legal Language Model From Scratch?
['Joel Niklaus', 'Daniele Giofré']
['cs.CL', 'cs.AI', 'cs.LG', '68T50', 'I.2; I.7']
Pretrained transformer models have achieved state-of-the-art results in many tasks and benchmarks recently. Many state-of-the-art Language Models (LMs), however, do not scale well above the threshold of 512 input tokens. In specialized domains though (such as legal, scientific or biomedical), models often need to proce...
2022-11-30T16:09:20Z
Accepted at ENLSP @ NeurIPS 2022
null
null
null
null
null
null
null
null
null
2211.17192
Fast Inference from Transformers via Speculative Decoding
['Yaniv Leviathan', 'Matan Kalman', 'Yossi Matias']
['cs.LG', 'cs.CL']
Inference from large autoregressive models like Transformers is slow - decoding K tokens takes K serial runs of the model. In this work we introduce speculative decoding - an algorithm to sample from autoregressive models faster without any changes to the outputs, by computing several tokens in parallel. At the heart o...
2022-11-30T17:33:28Z
ICML 2023 Oral
null
null
Fast Inference from Transformers via Speculative Decoding
['Yaniv Leviathan', 'Matan Kalman', 'Yossi Matias']
2022
International Conference on Machine Learning
738
31
['Computer Science']
2212.00794
Scaling Language-Image Pre-training via Masking
['Yanghao Li', 'Haoqi Fan', 'Ronghang Hu', 'Christoph Feichtenhofer', 'Kaiming He']
['cs.CV']
We present Fast Language-Image Pre-training (FLIP), a simple and more efficient method for training CLIP. Our method randomly masks out and removes a large portion of image patches during training. Masking allows us to learn from more image-text pairs given the same wall-clock time and contrast more samples per iterati...
2022-12-01T18:59:57Z
Tech report; arXiv v2: update scaling results and add code repo
null
null
Scaling Language-Image Pre-Training via Masking
['Yanghao Li', 'Haoqi Fan', 'Ronghang Hu', 'Christoph Feichtenhofer', 'Kaiming He']
2022
Computer Vision and Pattern Recognition
330
75
['Computer Science']
2212.01349
Nonparametric Masked Language Modeling
['Sewon Min', 'Weijia Shi', 'Mike Lewis', 'Xilun Chen', 'Wen-tau Yih', 'Hannaneh Hajishirzi', 'Luke Zettlemoyer']
['cs.CL', 'cs.AI', 'cs.LG']
Existing language models (LMs) predict tokens with a softmax over a finite vocabulary, which can make it difficult to predict rare tokens or phrases. We introduce NPM, the first nonparametric masked language model that replaces this softmax with a nonparametric distribution over every phrase in a reference corpus. NPM ...
2022-12-02T18:10:42Z
20 pages; 9 figures. Published at ACL 2023 Findings. Code available at https://github.com/facebookresearch/NPM
null
null
Nonparametric Masked Language Modeling
['Sewon Min', 'Weijia Shi', 'M. Lewis', 'Xilun Chen', 'Wen-tau Yih', 'Hannaneh Hajishirzi', 'Luke Zettlemoyer']
2022
Annual Meeting of the Association for Computational Linguistics
51
95
['Computer Science']
2212.01378
ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning
['Shachar Don-Yehiya', 'Elad Venezian', 'Colin Raffel', 'Noam Slonim', 'Yoav Katz', 'Leshem Choshen']
['cs.LG', 'cs.CL', 'cs.DC']
We propose a new paradigm to continually evolve pretrained models, denoted ColD Fusion. It provides the benefits of multitask learning but leverages distributed computation with limited communication and eliminates the need for shared data. Consequentially, ColD Fusion can give rise to a synergistic loop, where finetun...
2022-12-02T18:59:04Z
ACL 23
null
null
ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning
['Shachar Don-Yehiya', 'Elad Venezian', 'Colin Raffel', 'N. Slonim', 'Yoav Katz', 'Leshem Choshen']
2022
Annual Meeting of the Association for Computational Linguistics
55
91
['Computer Science']
2212.02027
Retrieval as Attention: End-to-end Learning of Retrieval and Reading within a Single Transformer
['Zhengbao Jiang', 'Luyu Gao', 'Jun Araki', 'Haibo Ding', 'Zhiruo Wang', 'Jamie Callan', 'Graham Neubig']
['cs.CL', 'cs.LG']
Systems for knowledge-intensive tasks such as open-domain question answering (QA) usually consist of two stages: efficient retrieval of relevant documents from a large corpus and detailed reading of the selected documents to generate answers. Retrievers and readers are usually modeled separately, which necessitates a c...
2022-12-05T04:51:21Z
EMNLP 2022
null
null
Retrieval as Attention: End-to-end Learning of Retrieval and Reading within a Single Transformer
['Zhengbao Jiang', 'Luyu Gao', 'J. Araki', 'Haibo Ding', 'Zhiruo Wang', 'Jamie Callan', 'Graham Neubig']
2022
Conference on Empirical Methods in Natural Language Processing
43
52
['Computer Science']
2212.02499
Images Speak in Images: A Generalist Painter for In-Context Visual Learning
['Xinlong Wang', 'Wen Wang', 'Yue Cao', 'Chunhua Shen', 'Tiejun Huang']
['cs.CV']
In-context learning, a new paradigm in NLP, allows a model to rapidly adapt to various tasks with only a handful of prompts and examples. In computer vision, however, in-context learning is difficult because tasks vary significantly in their output representations, so it is unclear how to define the general...
2022-12-05T18:59:50Z
Accepted to CVPR 2023. Code and model is available at: https://github.com/baaivision/Painter
null
null
Images Speak in Images: A Generalist Painter for In-Context Visual Learning
['Xinlong Wang', 'Wen Wang', 'Yue Cao', 'Chunhua Shen', 'Tiejun Huang']
2022
Computer Vision and Pattern Recognition
262
58
['Computer Science']
2212.02508
MAP-Music2Vec: A Simple and Effective Baseline for Self-Supervised Music Audio Representation Learning
['Yizhi Li', 'Ruibin Yuan', 'Ge Zhang', 'Yinghao Ma', 'Chenghua Lin', 'Xingran Chen', 'Anton Ragni', 'Hanzhi Yin', 'Zhijie Hu', 'Haoyu He', 'Emmanouil Benetos', 'Norbert Gyenge', 'Ruibo Liu', 'Jie Fu']
['cs.SD', 'cs.AI', 'cs.LG', 'cs.MM', 'eess.AS']
The deep learning community has witnessed an exponentially growing interest in self-supervised learning (SSL). However, it still remains unexplored how to build a framework for learning useful representations of raw music waveforms in a self-supervised manner. In this work, we design Music2Vec, a framework exploring di...
2022-12-05T16:04:26Z
null
null
null
null
null
null
null
null
null
null
2212.02623
Unifying Vision, Text, and Layout for Universal Document Processing
['Zineng Tang', 'Ziyi Yang', 'Guoxin Wang', 'Yuwei Fang', 'Yang Liu', 'Chenguang Zhu', 'Michael Zeng', 'Cha Zhang', 'Mohit Bansal']
['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG']
We propose Universal Document Processing (UDOP), a foundation Document AI model which unifies text, image, and layout modalities together with varied task formats, including document understanding and generation. UDOP leverages the spatial correlation between textual content and document image to model image, text, and...
2022-12-05T22:14:49Z
CVPR 2023
null
null
null
null
null
null
null
null
null
2212.02974
CySecBERT: A Domain-Adapted Language Model for the Cybersecurity Domain
['Markus Bayer', 'Philipp Kuehn', 'Ramin Shanehsaz', 'Christian Reuter']
['cs.CR', 'cs.CL']
The field of cybersecurity is evolving fast. Experts need to be informed about past, current and - in the best case - upcoming threats, because attacks are becoming more advanced, targets bigger and systems more complex. As this cannot be addressed manually, cybersecurity experts need to rely on machine learning techni...
2022-12-06T13:49:12Z
13 Pages, 7 tables, 1 figure
null
null
CySecBERT: A Domain-Adapted Language Model for the Cybersecurity Domain
['Markus Bayer', 'Philip D. Kuehn', 'Ramin Shanehsaz', 'Christian A. Reuter']
2022
ACM Transactions on Privacy and Security
50
56
['Computer Science']
2212.03191
InternVideo: General Video Foundation Models via Generative and Discriminative Learning
['Yi Wang', 'Kunchang Li', 'Yizhuo Li', 'Yinan He', 'Bingkun Huang', 'Zhiyu Zhao', 'Hongjie Zhang', 'Jilan Xu', 'Yi Liu', 'Zun Wang', 'Sen Xing', 'Guo Chen', 'Junting Pan', 'Jiashuo Yu', 'Yali Wang', 'Limin Wang', 'Yu Qiao']
['cs.CV']
Foundation models have recently shown excellent performance on a variety of downstream tasks in computer vision. However, most existing vision foundation models simply focus on image-level pretraining and adaptation, which are limited for dynamic and complex video-level understanding tasks. To fill the gap, we presen...
2022-12-06T18:09:49Z
technical report
null
null
null
null
null
null
null
null
null
2212.03533
Text Embeddings by Weakly-Supervised Contrastive Pre-training
['Liang Wang', 'Nan Yang', 'Xiaolong Huang', 'Binxing Jiao', 'Linjun Yang', 'Daxin Jiang', 'Rangan Majumder', 'Furu Wei']
['cs.CL', 'cs.IR']
This paper presents E5, a family of state-of-the-art text embeddings that transfer well to a wide range of tasks. The model is trained in a contrastive manner with weak supervision signals from our curated large-scale text pair dataset (called CCPairs). E5 can be readily used as a general-purpose embedding model for an...
2022-12-07T09:25:54Z
17 pages, v2 fixes the SummEval numbers
null
null
Text Embeddings by Weakly-Supervised Contrastive Pre-training
['Liang Wang', 'Nan Yang', 'Xiaolong Huang', 'Binxing Jiao', 'Linjun Yang', 'Daxin Jiang', 'Rangan Majumder', 'Furu Wei']
2022
arXiv.org
625
66
['Computer Science']
2212.03860
Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models
['Gowthami Somepalli', 'Vasu Singla', 'Micah Goldblum', 'Jonas Geiping', 'Tom Goldstein']
['cs.LG', 'cs.CV', 'cs.CY']
Cutting-edge diffusion models produce images with high quality and customizability, enabling them to be used for commercial art and graphic design purposes. But do diffusion models create unique works of art, or are they replicating content directly from their training sets? In this work, we study image retrieval frame...
2022-12-07T18:58:02Z
Updated draft with the following changes (1) Clarified the LAION Aesthetics versions everywhere (2) Correction on which LAION Aesthetics version SD - 1.4 is finetuned on and updated figure 12 based on this (3) A section on possible causes of replication
null
null
null
null
null
null
null
null
null
2212.03984
Elucidation of Relaxation Dynamics Beyond Equilibrium Through AI-informed X-ray Photon Correlation Spectroscopy
['James P. Horwath', 'Xiao-Min Lin', 'Hongrui He', 'Qingteng Zhang', 'Eric M. Dufresne', 'Miaoqi Chu', 'Subramanian K. R. S. Sankaranarayanan', 'Wei Chen', 'Suresh Narayanan', 'Mathew J. Cherukara']
['cond-mat.mtrl-sci', 'cond-mat.mes-hall']
Understanding and interpreting dynamics of functional materials \textit{in situ} is a grand challenge in physics and materials science due to the difficulty of experimentally probing materials at varied length and time scales. X-ray photon correlation spectroscopy (XPCS) is uniquely well-suited for characterizing mater...
2022-12-07T22:36:53Z
null
null
null
null
null
null
null
null
null
null
2212.04068
Investigating Glyph Phonetic Information for Chinese Spell Checking: What Works and What's Next
['Xiaotian Zhang', 'Yanjun Zheng', 'Hang Yan', 'Xipeng Qiu']
['cs.CL', 'cs.AI']
While pre-trained Chinese language models have demonstrated impressive performance on a wide range of NLP tasks, the Chinese Spell Checking (CSC) task remains a challenge. Previous research has explored using information such as glyphs and phonetics to improve the ability to distinguish misspelled characters, with good...
2022-12-08T04:37:29Z
null
null
null
Investigating Glyph Phonetic Information for Chinese Spell Checking: What Works and What's Next
['Xiaotian Zhang', 'Yanjun Zheng', 'Hang Yan', 'Xipeng Qiu']
2022
Annual Meeting of the Association for Computational Linguistics
5
44
['Computer Science']
2212.04089
Editing Models with Task Arithmetic
['Gabriel Ilharco', 'Marco Tulio Ribeiro', 'Mitchell Wortsman', 'Suchin Gururangan', 'Ludwig Schmidt', 'Hannaneh Hajishirzi', 'Ali Farhadi']
['cs.LG', 'cs.CL', 'cs.CV']
Changing how pre-trained models behave -- e.g., improving their performance on a downstream task or mitigating biases learned during pre-training -- is a common practice when developing machine learning systems. In this work, we propose a new paradigm for steering the behavior of neural networks, centered around \texti...
2022-12-08T05:50:53Z
In Proceedings of the 11th International Conference on Learning Representations (ICLR 2023)
null
null
Editing Models with Task Arithmetic
['Gabriel Ilharco', 'Marco Tulio Ribeiro', 'Mitchell Wortsman', 'Suchin Gururangan', 'Ludwig Schmidt', 'Hannaneh Hajishirzi', 'Ali Farhadi']
2022
International Conference on Learning Representations
523
111
['Computer Science']
2212.04129
Deep Incubation: Training Large Models by Divide-and-Conquering
['Zanlin Ni', 'Yulin Wang', 'Jiangwei Yu', 'Haojun Jiang', 'Yue Cao', 'Gao Huang']
['cs.CV', 'cs.AI', 'cs.LG']
Recent years have witnessed a remarkable success of large deep learning models. However, training these models is challenging due to high computational costs, painfully slow convergence, and overfitting issues. In this paper, we present Deep Incubation, a novel approach that enables the efficient and effective training...
2022-12-08T08:04:06Z
null
null
null
null
null
null
null
null
null
null
2212.04246
ViTPose++: Vision Transformer for Generic Body Pose Estimation
['Yufei Xu', 'Jing Zhang', 'Qiming Zhang', 'Dacheng Tao']
['cs.CV']
In this paper, we show the surprisingly good properties of plain vision transformers for body pose estimation from various aspects, namely simplicity in model structure, scalability in model size, flexibility in training paradigm, and transferability of knowledge between models, through a simple baseline model dubbed V...
2022-12-07T12:33:28Z
Extension of ViTPose paper, accepted by TPAMI
null
null
null
null
null
null
null
null
null
2212.04356
Robust Speech Recognition via Large-Scale Weak Supervision
['Alec Radford', 'Jong Wook Kim', 'Tao Xu', 'Greg Brockman', 'Christine McLeavey', 'Ilya Sutskever']
['eess.AS', 'cs.CL', 'cs.LG', 'cs.SD']
We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervise...
2022-12-06T18:46:04Z
null
null
null
Robust Speech Recognition via Large-Scale Weak Supervision
['Alec Radford', 'Jong Wook Kim', 'Tao Xu', 'Greg Brockman', 'Christine McLeavey', 'I. Sutskever']
2022
International Conference on Machine Learning
3,780
100
['Engineering', 'Computer Science']
2212.04582
Towards Holistic Surgical Scene Understanding
['Natalia Valderrama', 'Paola Ruiz Puentes', 'Isabela Hernández', 'Nicolás Ayobi', 'Mathilde Verlyk', 'Jessica Santander', 'Juan Caicedo', 'Nicolás Fernández', 'Pablo Arbeláez']
['cs.CV', 'cs.AI']
Most benchmarks for studying surgical interventions focus on a specific challenge instead of leveraging the intrinsic complementarity among different tasks. In this work, we present a new experimental framework towards holistic surgical scene understanding. First, we introduce the Phase, Step, Instrument, and Atomic Vi...
2022-12-08T22:15:27Z
MICCAI 2022 Oral. Official extension published at arXiv:2401.11174. Data and codes available at https://github.com/BCV-Uniandes/TAPIR
Medical Image Computing and Computer Assisted Intervention 2022,
10.1007/978-3-031-16449-1_42
Towards Holistic Surgical Scene Understanding
['Natalia Valderrama', 'Paola Ruiz Puentes', 'Isabela Hernández', 'Nicolás Ayobi', 'Mathilde Verlyck', 'J. Santander', 'J. Caicedo', 'N. Fernández', 'P. Arbeláez']
2022
International Conference on Medical Image Computing and Computer-Assisted Intervention
36
34
['Computer Science']
2212.04690
Benchmarking Self-Supervised Learning on Diverse Pathology Datasets
['Mingu Kang', 'Heon Song', 'Seonwook Park', 'Donggeun Yoo', 'Sérgio Pereira']
['cs.CV', 'cs.LG']
Computational pathology can lead to saving human lives, but models are annotation hungry and pathology images are notoriously expensive to annotate. Self-supervised learning has shown to be an effective method for utilizing unlabeled data, and its application to pathology could greatly benefit its downstream tasks. Yet...
2022-12-09T06:38:34Z
Accepted to CVPR 2023
null
null
null
null
null
null
null
null
null
2212.04755
From Cloze to Comprehension: Retrofitting Pre-trained Masked Language Model to Pre-trained Machine Reader
['Weiwen Xu', 'Xin Li', 'Wenxuan Zhang', 'Meng Zhou', 'Wai Lam', 'Luo Si', 'Lidong Bing']
['cs.CL']
We present Pre-trained Machine Reader (PMR), a novel method for retrofitting pre-trained masked language models (MLMs) to pre-trained machine reading comprehension (MRC) models without acquiring labeled data. PMR can resolve the discrepancy between model pre-training and downstream fine-tuning of existing MLMs. To buil...
2022-12-09T10:21:56Z
Accepted to NeurIPS 2023
null
null
null
null
null
null
null
null
null
2212.04917
TRBLLmaker -- Transformer Reads Between Lyrics Lines maker
['Mor Ventura', 'Michael Toker']
['cs.CL', 'cs.AI']
Even for us, it can be challenging to comprehend the meaning of songs. As part of this project, we explore the process of generating the meaning of songs. Despite the widespread use of text-to-text models, few attempts have been made to achieve a similar objective. Songs are primarily studied in the context of sentimen...
2022-12-09T15:27:36Z
null
null
null
TRBLLmaker - Transformer Reads Between Lyrics Lines maker
['Mor Ventura', 'Michael Toker']
2022
arXiv.org
2
20
['Computer Science']
2212.05055
Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints
['Aran Komatsuzaki', 'Joan Puigcerver', 'James Lee-Thorp', 'Carlos Riquelme Ruiz', 'Basil Mustafa', 'Joshua Ainslie', 'Yi Tay', 'Mostafa Dehghani', 'Neil Houlsby']
['cs.LG', 'cs.CL', 'cs.CV']
Training large, deep neural networks to convergence can be prohibitively expensive. As a result, often only a small selection of popular, dense models are reused across different contexts and tasks. Increasingly, sparsely activated models, which seek to decouple model size from computation costs, are becoming an attrac...
2022-12-09T18:57:37Z
null
null
null
null
null
null
null
null
null
null
2212.05702
Implementing Deep Learning-Based Approaches for Article Summarization in Indian Languages
['Rahul Tangsali', 'Aabha Pingle', 'Aditya Vyawahare', 'Isha Joshi', 'Raviraj Joshi']
['cs.CL', 'cs.LG']
The research on text summarization for low-resource Indian languages has been limited due to the availability of relevant datasets. This paper presents a summary of various deep-learning approaches used for the ILSUM 2022 Indic language summarization datasets. The ILSUM 2022 dataset consists of news articles written in ...
2022-12-12T04:50:43Z
Accepted at ILSUM at FIRE 2022
null
null
null
null
null
null
null
null
null
2212.05935
Hierarchical multimodal transformers for Multi-Page DocVQA
['Rubèn Tito', 'Dimosthenis Karatzas', 'Ernest Valveny']
['cs.CV', 'cs.AI', 'cs.CL']
Document Visual Question Answering (DocVQA) refers to the task of answering questions from document images. Existing work on DocVQA only considers single-page documents. However, in real scenarios documents are mostly composed of multiple pages that should be processed altogether. In this work we extend DocVQA to the m...
2022-12-07T10:09:49Z
null
null
null
null
null
null
null
null
null
null
2212.06042
AD-BERT: Using Pre-trained contextualized embeddings to Predict the Progression from Mild Cognitive Impairment to Alzheimer's Disease
['Chengsheng Mao', 'Jie Xu', 'Luke Rasmussen', 'Yikuan Li', 'Prakash Adekkanattu', 'Jennifer Pacheco', 'Borna Bonakdarpour', 'Robert Vassar', 'Guoqian Jiang', 'Fei Wang', 'Jyotishman Pathak', 'Yuan Luo']
['cs.CL', 'cs.LG']
Objective: We develop a deep learning framework based on the pre-trained Bidirectional Encoder Representations from Transformers (BERT) model using unstructured clinical notes from electronic health records (EHRs) to predict the risk of disease progression from Mild Cognitive Impairment (MCI) to Alzheimer's Disease (AD...
2022-11-07T04:05:46Z
null
null
null
null
null
null
null
null
null
null
2212.06137
NMS Strikes Back
['Jeffrey Ouyang-Zhang', 'Jang Hyun Cho', 'Xingyi Zhou', 'Philipp Krähenbühl']
['cs.CV']
Detection Transformer (DETR) directly transforms queries to unique objects by using one-to-one bipartite matching during training and enables end-to-end object detection. Recently, these models have surpassed traditional detectors on COCO with undeniable elegance. However, they differ from traditional detectors in mult...
2022-12-12T18:59:58Z
Code is available at https://github.com/jozhang97/DETA
null
null
null
null
null
null
null
null
null
2212.06385
TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities
['Zhe Zhao', 'Yudong Li', 'Cheng Hou', 'Jing Zhao', 'Rong Tian', 'Weijie Liu', 'Yiren Chen', 'Ningyuan Sun', 'Haoyan Liu', 'Weiquan Mao', 'Han Guo', 'Weigang Guo', 'Taiqiang Wu', 'Tao Zhu', 'Wenhang Shi', 'Chen Chen', 'Shan Huang', 'Sihong Chen', 'Liqun Liu', 'Feifei Li', 'Xiaoshuai Chen', 'Xingwu Sun', 'Zhanhui Kang',...
['cs.CL']
Recently, the success of pre-training in text domain has been fully extended to vision, audio, and cross-modal scenarios. The proposed pre-training models of different modalities are showing a rising trend of homogeneity in their model structures, which brings the opportunity to implement different pre-training models ...
2022-12-13T05:46:40Z
null
null
null
null
null
null
null
null
null
null
2212.06512
DifFace: Blind Face Restoration with Diffused Error Contraction
['Zongsheng Yue', 'Chen Change Loy']
['cs.CV', 'I.4.4']
While deep learning-based methods for blind face restoration have achieved unprecedented success, they still suffer from two major limitations. First, most of them deteriorate when facing complex degradations out of their training data. Second, these methods require multiple constraints, e.g., fidelity, perceptual, and...
2022-12-13T11:52:33Z
Accepted by TPAMI@2024. Project: https://github.com/zsyOAOA/DifFace
null
null
null
null
null
null
null
null
null
2212.06742
ERNIE-Code: Beyond English-Centric Cross-lingual Pretraining for Programming Languages
['Yekun Chai', 'Shuohuan Wang', 'Chao Pang', 'Yu Sun', 'Hao Tian', 'Hua Wu']
['cs.CL', 'cs.LG', 'cs.PL', 'cs.SE']
Software engineers working with the same programming language (PL) may speak different natural languages (NLs) and vice versa, erecting huge barriers to communication and working efficiency. Recent studies have demonstrated the effectiveness of generative pre-training in computer programs, yet they are always English-c...
2022-12-13T17:21:44Z
Accepted at ACL 2023 (Findings)
null
null
null
null
null
null
null
null
null
2212.07016
Understanding Zero-Shot Adversarial Robustness for Large-Scale Models
['Chengzhi Mao', 'Scott Geng', 'Junfeng Yang', 'Xin Wang', 'Carl Vondrick']
['cs.CV']
Pretrained large-scale vision-language models like CLIP have exhibited strong generalization over unseen tasks. Yet imperceptible adversarial perturbations can significantly reduce CLIP's performance on new tasks. In this work, we identify and explore the problem of \emph{adapting large-scale models for zero-shot adver...
2022-12-14T04:08:56Z
null
null
null
Understanding Zero-Shot Adversarial Robustness for Large-Scale Models
['Chengzhi Mao', 'Scott Geng', 'Junfeng Yang', 'Xin Eric Wang', 'Carl Vondrick']
2022
International Conference on Learning Representations
71
76
['Computer Science']
2212.07143
Reproducible scaling laws for contrastive language-image learning
['Mehdi Cherti', 'Romain Beaumont', 'Ross Wightman', 'Mitchell Wortsman', 'Gabriel Ilharco', 'Cade Gordon', 'Christoph Schuhmann', 'Ludwig Schmidt', 'Jenia Jitsev']
['cs.LG', 'cs.AI', 'cs.CV']
Scaling up neural networks has led to remarkable performance across a wide range of tasks. Moreover, performance often follows reliable scaling laws as a function of training set size, model size, and compute, which offers valuable guidance as large-scale experiments are becoming increasingly expensive. However, previo...
2022-12-14T10:24:50Z
CVPR 2023. Version with minor extension. Original: https://openaccess.thecvf.com/content/CVPR2023/html/Cherti_Reproducible_Scaling_Laws_for_Contrastive_Language-Image_Learning_CVPR_2023_paper
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 2818-2829
10.1109/CVPR52729.2023.00276
null
null
null
null
null
null
null
2212.07249
APOLLO: An Optimized Training Approach for Long-form Numerical Reasoning
['Jiashuo Sun', 'Hang Zhang', 'Chen Lin', 'Xiangdong Su', 'Yeyun Gong', 'Jian Guo']
['cs.CL', 'cs.LG']
Long-form numerical reasoning in financial analysis aims to generate a reasoning program to calculate the correct answer for a given question. Previous work followed a retriever-generator framework, where the retriever selects key facts from a long-form document, and the generator generates a reasoning program based on...
2022-12-14T14:34:15Z
Accepted by COLING 2024
null
null
null
null
null
null
null
null
null
2212.07652
Body-Part Joint Detection and Association via Extended Object Representation
['Huayi Zhou', 'Fei Jiang', 'Hongtao Lu']
['cs.CV']
The detection of human body and its related parts (e.g., face, head or hands) have been intensively studied and greatly improved since the breakthrough of deep CNNs. However, most of these detectors are trained independently, making it a challenging task to associate detected body parts with people. This paper focuses ...
2022-12-15T08:19:02Z
accepted by ICME2023
null
null
Body-Part Joint Detection and Association via Extended Object Representation
['Huayi Zhou', 'Fei Jiang', 'Hongtao Lu']
2022
IEEE International Conference on Multimedia and Expo
9
35
['Computer Science']
2212.07841
MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders are Better Dense Retrievers
['Kun Zhou', 'Xiao Liu', 'Yeyun Gong', 'Wayne Xin Zhao', 'Daxin Jiang', 'Nan Duan', 'Ji-Rong Wen']
['cs.CL', 'cs.IR']
Pre-trained Transformers (\eg BERT) have been commonly used in existing dense retrieval methods for parameter initialization, and recent studies are exploring more effective pre-training tasks for further improving the quality of dense vectors. Although various novel and effective tasks have been proposed, their differ...
2022-12-15T13:57:07Z
Accepted by ECML-PKDD 2023, 16 pages
null
null
null
null
null
null
null
null
null
2212.07919
ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning
['Olga Golovneva', 'Moya Chen', 'Spencer Poff', 'Martin Corredor', 'Luke Zettlemoyer', 'Maryam Fazel-Zarandi', 'Asli Celikyilmaz']
['cs.CL', 'cs.LG']
Large language models show improved downstream task performance when prompted to generate step-by-step reasoning to justify their final answers. These reasoning steps greatly improve model interpretability and verification, but objectively studying their correctness (independent of the final answer) is difficult withou...
2022-12-15T15:52:39Z
null
null
null
ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning
['O. Yu. Golovneva', 'Moya Chen', 'Spencer Poff', 'Martin Corredor', 'Luke Zettlemoyer', 'Maryam Fazel-Zarandi', 'Asli Celikyilmaz']
2022
arXiv.org
152
54
['Computer Science']
2212.08013
FlexiViT: One Model for All Patch Sizes
['Lucas Beyer', 'Pavel Izmailov', 'Alexander Kolesnikov', 'Mathilde Caron', 'Simon Kornblith', 'Xiaohua Zhai', 'Matthias Minderer', 'Michael Tschannen', 'Ibrahim Alabdulmohsin', 'Filip Pavetic']
['cs.CV', 'cs.AI', 'cs.LG']
Vision Transformers convert images to sequences by slicing them into patches. The size of these patches controls a speed/accuracy tradeoff, with smaller patches leading to higher accuracy at greater computational cost, but changing the patch size typically requires retraining the model. In this paper, we demonstrate th...
2022-12-15T18:18:38Z
Code and pre-trained models available at https://github.com/google-research/big_vision. All authors made significant technical contributions. CVPR 2023
null
null
FlexiViT: One Model for All Patch Sizes
['Lucas Beyer', 'Pavel Izmailov', 'Alexander Kolesnikov', 'Mathilde Caron', 'Simon Kornblith', 'Xiaohua Zhai', 'Matthias Minderer', 'Michael Tschannen', 'Ibrahim M. Alabdulmohsin', 'Filip Pavetic']
2022
Computer Vision and Pattern Recognition
94
83
['Computer Science']
2212.08059
Rethinking Vision Transformers for MobileNet Size and Speed
['Yanyu Li', 'Ju Hu', 'Yang Wen', 'Georgios Evangelidis', 'Kamyar Salahi', 'Yanzhi Wang', 'Sergey Tulyakov', 'Jian Ren']
['cs.CV', 'cs.AI', 'cs.LG']
With the success of Vision Transformers (ViTs) in computer vision tasks, recent arts try to optimize the performance and complexity of ViTs to enable efficient deployment on mobile devices. Multiple approaches are proposed to accelerate attention mechanism, improve inefficient designs, or incorporate mobile-friendly li...
2022-12-15T18:59:12Z
Code is available at: https://github.com/snap-research/EfficientFormer
null
null
null
null
null
null
null
null
null
2212.08073
Constitutional AI: Harmlessness from AI Feedback
['Yuntao Bai', 'Saurav Kadavath', 'Sandipan Kundu', 'Amanda Askell', 'Jackson Kernion', 'Andy Jones', 'Anna Chen', 'Anna Goldie', 'Azalia Mirhoseini', 'Cameron McKinnon', 'Carol Chen', 'Catherine Olsson', 'Christopher Olah', 'Danny Hernandez', 'Dawn Drain', 'Deep Ganguli', 'Dustin Li', 'Eli Tran-Johnson', 'Ethan Perez'...
['cs.CL', 'cs.AI']
As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so...
2022-12-15T06:19:23Z
null
null
null
Constitutional AI: Harmlessness from AI Feedback
['Yuntao Bai', 'Saurav Kadavath', 'Sandipan Kundu', 'Amanda Askell', 'John Kernion', 'Andy Jones', 'A. Chen', 'Anna Goldie', 'Azalia Mirhoseini', 'C. McKinnon', 'Carol Chen', 'Catherine Olsson', 'Chris Olah', 'Danny Hernandez', 'Dawn Drain', 'Deep Ganguli', 'Dustin Li', 'Eli Tran-Johnson', 'E. Perez', 'Jamie Kerr', 'J....
2,022
arXiv.org
1,651
82
['Computer Science']
2,212.08751
Point-E: A System for Generating 3D Point Clouds from Complex Prompts
['Alex Nichol', 'Heewoo Jun', 'Prafulla Dhariwal', 'Pamela Mishkin', 'Mark Chen']
['cs.CV', 'cs.LG']
While recent work on text-conditional 3D object generation has shown promising results, the state-of-the-art methods typically require multiple GPU-hours to produce a single sample. This is in stark contrast to state-of-the-art generative image models, which produce samples in a number of seconds or minutes. In this pa...
2022-12-16T23:22:59Z
8 pages, 11 figures
null
null
null
null
null
null
null
null
null
2,212.09019
Fast FullSubNet: Accelerate Full-band and Sub-band Fusion Model for Single-channel Speech Enhancement
['Xiang Hao', 'Xiaofei Li']
['eess.AS', 'eess.SP']
FullSubNet is our recently proposed real-time single-channel speech enhancement network that achieves outstanding performance on the Deep Noise Suppression (DNS) Challenge dataset. A number of variants of FullSubNet have been proposed, but they all focus on the structure design towards better performance and are rarely...
2022-12-18T05:41:33Z
null
null
null
null
null
null
null
null
null
null
2,212.09058
BEATs: Audio Pre-Training with Acoustic Tokenizers
['Sanyuan Chen', 'Yu Wu', 'Chengyi Wang', 'Shujie Liu', 'Daniel Tompkins', 'Zhuo Chen', 'Furu Wei']
['eess.AS', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.SD']
The massive growth of self-supervised learning (SSL) has been witnessed in language, vision, speech, and audio domains over the past few years. While discrete label prediction is widely adopted for other modalities, the state-of-the-art audio SSL models still employ reconstruction loss for pre-training. Compared with r...
2022-12-18T10:41:55Z
null
null
null
null
null
null
null
null
null
null
2,212.09255
Multi hash embeddings in spaCy
['Lester James Miranda', 'Ákos Kádár', 'Adriane Boyd', 'Sofie Van Landeghem', 'Anders Søgaard', 'Matthew Honnibal']
['cs.CL', 'I.2.7']
The distributed representation of symbols is one of the key technologies in machine learning systems today, playing a pivotal role in modern natural language processing. Traditional word embeddings associate a separate vector with each word. While this approach is simple and leads to good performance, it requires a lot...
2022-12-19T06:03:04Z
null
null
null
null
null
null
null
null
null
null
2,212.09462
Latent Diffusion for Language Generation
['Justin Lovelace', 'Varsha Kishore', 'Chao Wan', 'Eliot Shekhtman', 'Kilian Q. Weinberger']
['cs.CL', 'cs.LG']
Diffusion models have achieved great success in modeling continuous data modalities such as images, audio, and video, but have seen limited use in discrete domains such as language. Recent attempts to adapt diffusion to language have presented diffusion as an alternative to existing pretrained language models. We view ...
2022-12-19T13:57:06Z
NeurIPS 2023
null
null
Latent Diffusion for Language Generation
['Justin Lovelace', 'Varsha Kishore', 'Chao-gang Wan', 'Eliot Shekhtman', 'Kilian Q. Weinberger']
2,022
Neural Information Processing Systems
82
81
['Computer Science']
2,212.09535
BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting
['Zheng-Xin Yong', 'Hailey Schoelkopf', 'Niklas Muennighoff', 'Alham Fikri Aji', 'David Ifeoluwa Adelani', 'Khalid Almubarak', 'M Saiful Bari', 'Lintang Sutawika', 'Jungo Kasai', 'Ahmed Baruwa', 'Genta Indra Winata', 'Stella Biderman', 'Edward Raff', 'Dragomir Radev', 'Vassilina Nikoulina']
['cs.CL', 'cs.AI', 'cs.LG']
The BLOOM model is a large publicly available multilingual language model, but its pretraining was limited to 46 languages. To extend the benefits of BLOOM to other languages without incurring prohibitively large costs, it is desirable to adapt BLOOM to new languages not seen during pretraining. In this work, we apply ...
2022-12-19T15:24:45Z
ACL 2023
null
null
BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting
['Zheng-Xin Yong', 'Hailey Schoelkopf', 'Niklas Muennighoff', 'Alham Fikri Aji', 'David Ifeoluwa Adelani', 'Khalid Almubarak', 'M Saiful Bari', 'Lintang Sutawika', 'Jungo Kasai', 'Ahmed Baruwa', 'Genta Indra Winata', 'Stella Biderman', 'Dragomir R. Radev', 'Vassilina Nikoulina']
2,022
Annual Meeting of the Association for Computational Linguistics
89
78
['Computer Science']
2,212.09662
MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering
['Fangyu Liu', 'Francesco Piccinno', 'Syrine Krichene', 'Chenxi Pang', 'Kenton Lee', 'Mandar Joshi', 'Yasemin Altun', 'Nigel Collier', 'Julian Martin Eisenschlos']
['cs.CL', 'cs.AI', 'cs.CV']
Visual language data such as plots, charts, and infographics are ubiquitous in the human world. However, state-of-the-art vision-language models do not perform well on these data. We propose MatCha (Math reasoning and Chart derendering pretraining) to enhance visual language models' capabilities in jointly modeling cha...
2022-12-19T17:44:54Z
ACL 2023
null
null
null
null
null
null
null
null
null
2,212.09682
Multilingual Sequence-to-Sequence Models for Hebrew NLP
['Matan Eyal', 'Hila Noga', 'Roee Aharoni', 'Idan Szpektor', 'Reut Tsarfaty']
['cs.CL']
Recent work attributes progress in NLP to large language models (LMs) with increased model size and large quantities of pretraining data. Despite this, current state-of-the-art LMs for Hebrew are both under-parameterized and under-trained compared to LMs in other languages. Additionally, previous work on pretrained Heb...
2022-12-19T18:10:23Z
null
null
null
null
null
null
null
null
null
null
2,212.09689
Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor
['Or Honovich', 'Thomas Scialom', 'Omer Levy', 'Timo Schick']
['cs.CL', 'cs.AI', 'cs.LG']
Instruction tuning enables pretrained language models to perform new tasks from inference-time natural language descriptions. These approaches rely on vast amounts of human supervision in the form of crowdsourced datasets or user interactions. In this work, we introduce Unnatural Instructions: a large dataset of creati...
2022-12-19T18:21:00Z
18 pages, 7 figures
null
null
Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor
['Or Honovich', 'Thomas Scialom', 'Omer Levy', 'Timo Schick']
2,022
Annual Meeting of the Association for Computational Linguistics
374
43
['Computer Science']
2,212.0972
The case for 4-bit precision: k-bit Inference Scaling Laws
['Tim Dettmers', 'Luke Zettlemoyer']
['cs.LG', 'cs.NE']
Quantization methods reduce the number of bits required to represent each parameter in a model, trading accuracy for smaller memory footprints and inference latencies. However, the final model size depends on both the number of parameters of the original model and the rate of compression. For example, a 30B 8-bit model...
2022-12-19T18:48:33Z
null
null
null
null
null
null
null
null
null
null
2,212.0973
Speaking Style Conversion in the Waveform Domain Using Discrete Self-Supervised Units
['Gallil Maimon', 'Yossi Adi']
['cs.SD', 'cs.CL', 'cs.LG', 'eess.AS']
We introduce DISSC, a novel, lightweight method that converts the rhythm, pitch contour and timbre of a recording to a target speaker in a textless manner. Unlike DISSC, most voice conversion (VC) methods focus primarily on timbre, and ignore people's unique speaking style (prosody). The proposed approach uses a pretra...
2022-12-19T18:53:04Z
Accepted at EMNLP 2023
null
null
null
null
null
null
null
null
null
2,212.09739
LENS: A Learnable Evaluation Metric for Text Simplification
['Mounica Maddela', 'Yao Dou', 'David Heineman', 'Wei Xu']
['cs.CL']
Training learnable metrics using modern language models has recently emerged as a promising method for the automatic evaluation of machine translation. However, existing human evaluation datasets for text simplification have limited annotations that are based on unitary or outdated models, making them unsuitable for th...
2022-12-19T18:56:52Z
Accepted at ACL 2023
null
null
null
null
null
null
null
null
null
2,212.09741
One Embedder, Any Task: Instruction-Finetuned Text Embeddings
['Hongjin Su', 'Weijia Shi', 'Jungo Kasai', 'Yizhong Wang', 'Yushi Hu', 'Mari Ostendorf', 'Wen-tau Yih', 'Noah A. Smith', 'Luke Zettlemoyer', 'Tao Yu']
['cs.CL']
We introduce INSTRUCTOR, a new method for computing text embeddings given task instructions: every text input is embedded together with instructions explaining the use case (e.g., task and domain descriptions). Unlike encoders from prior work that are more specialized, INSTRUCTOR is a single embedder that can generate ...
2022-12-19T18:57:05Z
Accepted in ACL2023 Findings
null
null
null
null
null
null
null
null
null
2,212.09748
Scalable Diffusion Models with Transformers
['William Peebles', 'Saining Xie']
['cs.CV', 'cs.LG']
We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass co...
2022-12-19T18:59:58Z
Code, project page and videos available at https://www.wpeebles.com/DiT
null
null
null
null
null
null
null
null
null
2,212.10057
WeCheck: Strong Factual Consistency Checker via Weakly Supervised Learning
['Wenhao Wu', 'Wei Li', 'Xinyan Xiao', 'Jiachen Liu', 'Sujian Li', 'Yajuan Lv']
['cs.CL']
A crucial issue of current text generation models is that they often uncontrollably generate factually inconsistent text with respect to their inputs. Limited by the lack of annotated data, existing works in evaluating factual consistency directly transfer the reasoning ability of models trained on other data-rich u...
2022-12-20T08:04:36Z
ACL 2023 Main Conference
null
null
null
null
null
null
null
null
null
2,212.10168
Naamapadam: A Large-Scale Named Entity Annotated Data for Indic Languages
['Arnav Mhaske', 'Harshit Kedia', 'Sumanth Doddapaneni', 'Mitesh M. Khapra', 'Pratyush Kumar', 'Rudra Murthy V', 'Anoop Kunchukuttan']
['cs.CL']
We present, Naamapadam, the largest publicly available Named Entity Recognition (NER) dataset for the 11 major Indian languages from two language families. The dataset contains more than 400k sentences annotated with a total of at least 100k entities from three standard entity categories (Person, Location, and, Organiz...
2022-12-20T11:15:24Z
ACL 2023
null
null
Naamapadam: A Large-Scale Named Entity Annotated Data for Indic Languages
['A. Mhaske', 'Harsh Kedia', 'Sumanth Doddapaneni', 'Mitesh M. Khapra', 'Pratyush Kumar', 'V. Rudramurthy', 'Anoop Kunchukuttan']
2,022
Annual Meeting of the Association for Computational Linguistics
31
51
['Computer Science']