Dataset schema (one row per column: name, dtype, value statistics as shown by the viewer):

column              dtype       statistics
arxiv_id            float64     min 1.5k, max 2.51k
title               string      length 9–178
authors             string      length 2–22.8k
categories          string      length 4–146
summary             string      length 103–1.92k
published           date string 2015-02-06 10:44:00 to 2025-07-10 17:59:58
comments            string      length 2–417
journal_ref         string      321 distinct values
doi                 string      398 distinct values
ss_title            string      length 8–159
ss_authors          string      length 11–8.38k
ss_year             float64     min 2.02k, max 2.03k
ss_venue            string      281 distinct values
ss_citationCount    float64     min 0, max 134k
ss_referenceCount   float64     min 0, max 429
ss_fieldsOfStudy    string      47 distinct values

Note: arxiv_id and ss_year are stored as float64, so rendered values can gain thousands separators and lose trailing zeros (e.g. the ID 2306.03030 renders as 2,306.0303).
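Because arxiv_id is stored as float64, the canonical ID string can be recovered by splitting the value at the decimal point and zero-padding the sequence number. This is a minimal sketch under the assumption that all IDs are post-2014 new-style arXiv IDs (YYMM.NNNNN with a five-digit sequence number — consistent with the earliest published date above, 2015-02-06); the function name is illustrative, not from the source.

```python
def float_to_arxiv_id(x: float) -> str:
    # Split a float like 2306.0303 into its YYMM prefix and its
    # sequence suffix, re-padding the suffix to five digits
    # (post-2014 arXiv IDs use 5-digit sequence numbers).
    yymm = int(x)
    seq = round((x - yymm) * 100_000)
    return f"{yymm}.{seq:05d}"

print(float_to_arxiv_id(2306.0303))   # -> 2306.03030 (trailing zero restored)
print(float_to_arxiv_id(2306.02928))  # -> 2306.02928
```

The `round` call absorbs the small floating-point error left over after subtracting the integer part, so the five-digit suffix comes out exact.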
2306.02928
LRVS-Fashion: Extending Visual Search with Referring Instructions
['Simon Lepage', 'Jérémie Mary', 'David Picard']
['cs.CV', '68T07 (Primary) 68T45 (Secondary)', 'I.2.10']
This paper introduces a new challenge for image similarity search in the context of fashion, addressing the inherent ambiguity in this domain stemming from complex images. We present Referred Visual Search (RVS), a task allowing users to define more precisely the desired similarity, following recent interest in the ind...
2023-06-05T14:45:38Z
29 pages, 14 figures, 5 tables
null
null
null
null
null
null
null
null
null
2306.03030
Benchmarking Large Language Models on CMExam -- A Comprehensive Chinese Medical Exam Dataset
['Junling Liu', 'Peilin Zhou', 'Yining Hua', 'Dading Chong', 'Zhongyu Tian', 'Andrew Liu', 'Helin Wang', 'Chenyu You', 'Zhenhua Guo', 'Lei Zhu', 'Michael Lingzhi Li']
['cs.CL']
Recent advancements in large language models (LLMs) have transformed the field of question answering (QA). However, evaluating LLMs in the medical field is challenging due to the lack of standardized and comprehensive datasets. To address this gap, we introduce CMExam, sourced from the Chinese National Medical Licensin...
2023-06-05T16:48:41Z
Accepted by NeurIPS 2023 Datasets and Benchmarks Track
null
null
null
null
null
null
null
null
null
2306.03268
Skill over Scale: The Case for Medium, Domain-Specific Models for SE
['Manisha Mukherjee', 'Vincent J. Hellendoorn']
['cs.CL', 'cs.SE']
Recent advancements in AI have sparked a trend in constructing large, generalist language models that handle a multitude of tasks, including many code-related ones. While these models are expensive to train and are often closed-source, they have enjoyed broad adoption because they tend to outperform smaller, domain-spe...
2023-06-05T21:38:30Z
null
null
null
null
null
null
null
null
null
null
2306.03341
Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
['Kenneth Li', 'Oam Patel', 'Fernanda Viégas', 'Hanspeter Pfister', 'Martin Wattenberg']
['cs.LG', 'cs.AI', 'cs.CL']
We introduce Inference-Time Intervention (ITI), a technique designed to enhance the "truthfulness" of large language models (LLMs). ITI operates by shifting model activations during inference, following a set of directions across a limited number of attention heads. This intervention significantly improves the performa...
2023-06-06T01:26:53Z
NeurIPS 2023 spotlight; code: https://github.com/likenneth/honest_llama
null
null
null
null
null
null
null
null
null
2306.03350
Click: Controllable Text Generation with Sequence Likelihood Contrastive Learning
['Chujie Zheng', 'Pei Ke', 'Zheng Zhang', 'Minlie Huang']
['cs.CL']
It has always been an important yet challenging problem to control language models to avoid generating texts with undesirable attributes, such as toxic language and unnatural repetition. We introduce Click for controllable text generation, which needs no modification to the model architecture and facilitates out-of-the...
2023-06-06T01:56:44Z
Findings of ACL 2023
null
null
null
null
null
null
null
null
null
2306.03423
I'm Afraid I Can't Do That: Predicting Prompt Refusal in Black-Box Generative Language Models
['Max Reuter', 'William Schulze']
['cs.AI']
Since the release of OpenAI's ChatGPT, generative language models have attracted extensive public attention. The increased usage has highlighted generative models' broad utility, but also revealed several forms of embedded bias. Some is induced by the pre-training corpus; but additional bias specific to generative mode...
2023-06-06T05:50:58Z
Submitted for review to KDD 2023 via the workshop "Foundations and Applications in Large-scale AI Models: Pre-training, Fine-tuning, and Prompt-based Learning"
null
null
null
null
null
null
null
null
null
2306.03514
Recognize Anything: A Strong Image Tagging Model
['Youcai Zhang', 'Xinyu Huang', 'Jinyu Ma', 'Zhaoyang Li', 'Zhaochuan Luo', 'Yanchun Xie', 'Yuzhuo Qin', 'Tong Luo', 'Yaqian Li', 'Shilong Liu', 'Yandong Guo', 'Lei Zhang']
['cs.CV']
We present the Recognize Anything Model (RAM): a strong foundation model for image tagging. RAM makes a substantial step for large models in computer vision, demonstrating the zero-shot ability to recognize any common category with high accuracy. RAM introduces a new paradigm for image tagging, leveraging large-scale i...
2023-06-06T09:00:10Z
Homepage: https://recognize-anything.github.io/
null
null
null
null
null
null
null
null
null
2306.03767
Metal artefact reduction sequences for a piezoelectric bone conduction implant using a realistic head phantom in MRI
['Guy Fierens', 'Joris Walraevens', 'Ronald Peeters', 'Christ Glorieux', 'Nicolas Verhaert']
['physics.med-ph']
Industry standards require medical device manufacturers to perform implant-induced artefact testing in phantoms at a pre-clinical stage to define the extent of artefacts that can be expected during MRI. Once a device is commercially available, studies on volunteers, cadavers or patients are performed to investigate imp...
2023-06-06T15:28:52Z
22 pages, 12 figures including supplementary information
null
null
null
null
null
null
null
null
null
2306.03809
Can large language models democratize access to dual-use biotechnology?
['Emily H. Soice', 'Rafael Rocha', 'Kimberlee Cordova', 'Michael Specter', 'Kevin M. Esvelt']
['cs.CY', 'cs.AI']
Large language models (LLMs) such as those embedded in 'chatbots' are accelerating and democratizing research by providing comprehensible information and expertise from many different fields. However, these models may also confer easy access to dual-use technologies capable of inflicting great harm. To evaluate this ri...
2023-06-06T15:52:05Z
6 pages, 0 figures
null
null
null
null
null
null
null
null
null
2306.03819
LEACE: Perfect linear concept erasure in closed form
['Nora Belrose', 'David Schneider-Joseph', 'Shauli Ravfogel', 'Ryan Cotterell', 'Edward Raff', 'Stella Biderman']
['cs.LG', 'cs.CL', 'cs.CY']
Concept erasure aims to remove specified features from an embedding. It can improve fairness (e.g. preventing a classifier from using gender or race) and interpretability (e.g. removing a concept to observe changes in model behavior). We introduce LEAst-squares Concept Erasure (LEACE), a closed-form method which provab...
2023-06-06T16:07:24Z
null
null
null
null
null
null
null
null
null
null
2306.04054
RescueSpeech: A German Corpus for Speech Recognition in Search and Rescue Domain
['Sangeet Sagar', 'Mirco Ravanelli', 'Bernd Kiefer', 'Ivana Kruijff Korbayova', 'Josef van Genabith']
['eess.AS', 'cs.LG', 'cs.SD', 'eess.SP']
Despite the recent advancements in speech recognition, there are still difficulties in accurately transcribing conversational and emotional speech in noisy and reverberant acoustic environments. This poses a particular challenge in the search and rescue (SAR) domain, where transcribing conversations among rescue team m...
2023-06-06T23:04:22Z
null
null
null
null
null
null
null
null
null
null
2306.04306
Allophant: Cross-lingual Phoneme Recognition with Articulatory Attributes
['Kevin Glocker', 'Aaricia Herygers', 'Munir Georges']
['cs.CL', 'cs.SD', 'eess.AS', 'I.2.7']
This paper proposes Allophant, a multilingual phoneme recognizer. It requires only a phoneme inventory for cross-lingual transfer to a target language, allowing for low-resource recognition. The architecture combines a compositional phone embedding approach with individually supervised phonetic attribute classifiers in...
2023-06-07T10:11:09Z
5 pages, 2 figures, 2 tables, accepted to INTERSPEECH 2023; published version
Proc. INTERSPEECH 2023, 2258-2262
10.21437/Interspeech.2023-772
null
null
null
null
null
null
null
2306.04387
M$^3$IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning
['Lei Li', 'Yuwei Yin', 'Shicheng Li', 'Liang Chen', 'Peiyi Wang', 'Shuhuai Ren', 'Mukai Li', 'Yazheng Yang', 'Jingjing Xu', 'Xu Sun', 'Lingpeng Kong', 'Qi Liu']
['cs.CV', 'cs.CL']
Instruction tuning has significantly advanced large language models (LLMs) such as ChatGPT, enabling them to align with human instructions across diverse tasks. However, progress in open vision-language models (VLMs) has been limited due to the scarcity of high-quality instruction datasets. To tackle this challenge and...
2023-06-07T12:35:37Z
Fix dataset url: https://huggingface.co/datasets/MMInstruction/M3IT Project: https://m3-it.github.io/
null
null
null
null
null
null
null
null
null
2306.04399
Transfer Learning of Transformer-based Speech Recognition Models from Czech to Slovak
['Jan Lehečka', 'Josef V. Psutka', 'Josef Psutka']
['cs.CL']
In this paper, we are comparing several methods of training the Slovak speech recognition models based on the Transformers architecture. Specifically, we are exploring the approach of transfer learning from the existing Czech pre-trained Wav2Vec 2.0 model into Slovak. We are demonstrating the benefits of the proposed a...
2023-06-07T12:58:46Z
Accepted to TSD 2023
Text, Speech, and Dialogue: 26th International Conference, TSD 2023
10.1007/978-3-031-40498-6_29
Transfer Learning of Transformer-based Speech Recognition Models from Czech to Slovak
['Jan Lehecka', 'J. Psutka', 'J. Psutka']
2023
International Conference on Text, Speech and Dialogue
2
18
['Computer Science']
2306.04488
Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards
['Alexandre Ramé', 'Guillaume Couairon', 'Mustafa Shukor', 'Corentin Dancette', 'Jean-Baptiste Gaya', 'Laure Soulier', 'Matthieu Cord']
['cs.LG', 'cs.AI', 'cs.CV']
Foundation models are first pre-trained on vast unsupervised datasets and then fine-tuned on labeled data. Reinforcement learning, notably from human feedback (RLHF), can further align the network with the intended usage. Yet the imperfections in the proxy reward may hinder the training and lead to suboptimal results; ...
2023-06-07T14:58:15Z
null
null
null
null
null
null
null
null
null
null
2306.04527
ContriMix: Scalable stain color augmentation for domain generalization without domain labels in digital pathology
['Tan H. Nguyen', 'Dinkar Juyal', 'Jin Li', 'Aaditya Prakash', 'Shima Nofallah', 'Chintan Shah', 'Sai Chowdary Gullapally', 'Limin Yu', 'Michael Griffin', 'Anand Sampat', 'John Abel', 'Justin Lee', 'Amaro Taylor-Weiner']
['eess.IV', 'cs.CV', 'cs.LG']
Differences in staining and imaging procedures can cause significant color variations in histopathology images, leading to poor generalization when deploying deep-learning models trained from a different data source. Various color augmentation methods have been proposed to generate synthetic images during training to m...
2023-06-07T15:36:26Z
null
null
null
null
null
null
null
null
null
null
2306.04632
Designing a Better Asymmetric VQGAN for StableDiffusion
['Zixin Zhu', 'Xuelu Feng', 'Dongdong Chen', 'Jianmin Bao', 'Le Wang', 'Yinpeng Chen', 'Lu Yuan', 'Gang Hua']
['cs.CV', 'cs.GR']
StableDiffusion is a revolutionary text-to-image generator that is causing a stir in the world of image generation and editing. Unlike traditional methods that learn a diffusion model in pixel space, StableDiffusion learns a diffusion model in the latent space via a VQGAN, ensuring both efficiency and quality. It not o...
2023-06-07T17:56:02Z
code is available at https://github.com/buxiangzhiren/Asymmetric_VQGAN
null
null
null
null
null
null
null
null
null
2306.04640
ModuleFormer: Modularity Emerges from Mixture-of-Experts
['Yikang Shen', 'Zheyu Zhang', 'Tianyou Cao', 'Shawn Tan', 'Zhenfang Chen', 'Chuang Gan']
['cs.CL', 'cs.AI', 'cs.LG']
Large Language Models (LLMs) have achieved remarkable results. However, existing models are expensive to train and deploy, and it is also difficult to expand their knowledge beyond pre-training data without forgetting previous knowledge. This paper proposes a new neural network architecture, ModuleFormer, that leverage...
2023-06-07T17:59:57Z
null
null
null
null
null
null
null
null
null
null
2306.04675
Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models
['George Stein', 'Jesse C. Cresswell', 'Rasa Hosseinzadeh', 'Yi Sui', 'Brendan Leigh Ross', 'Valentin Villecroze', 'Zhaoyan Liu', 'Anthony L. Caterini', 'J. Eric T. Taylor', 'Gabriel Loaiza-Ganem']
['cs.LG', 'cs.CV', 'stat.ML']
We systematically study a wide variety of generative models spanning semantically-diverse image datasets to understand and improve the feature extractors and metrics used to evaluate them. Using best practices in psychophysics, we measure human perception of image realism for generated samples by conducting the largest...
2023-06-07T18:00:00Z
NeurIPS 2023. 53 pages, 29 figures, 12 tables. Code at https://github.com/layer6ai-labs/dgm-eval, reviews at https://openreview.net/forum?id=08zf7kTOoh
Thirty-seventh Conference on Neural Information Processing Systems (2023)
null
null
null
null
null
null
null
null
2306.04751
How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources
['Yizhong Wang', 'Hamish Ivison', 'Pradeep Dasigi', 'Jack Hessel', 'Tushar Khot', 'Khyathi Raghavi Chandu', 'David Wadden', 'Kelsey MacMillan', 'Noah A. Smith', 'Iz Beltagy', 'Hannaneh Hajishirzi']
['cs.CL']
In this work we explore recent advances in instruction-tuning language models on a range of open instruction-following datasets. Despite recent claims that open models can be on par with state-of-the-art proprietary models, these claims are often accompanied by limited evaluation, making it difficult to compare models ...
2023-06-07T19:59:23Z
18 pages, 6 figure, 10 tables. NeurIPS 2023 Datasets and Benchmarks Track Camera Ready
null
null
null
null
null
null
null
null
null
2306.04757
INSTRUCTEVAL: Towards Holistic Evaluation of Instruction-Tuned Large Language Models
['Yew Ken Chia', 'Pengfei Hong', 'Lidong Bing', 'Soujanya Poria']
['cs.CL', 'cs.AI']
Instruction-tuned large language models have revolutionized natural language processing and have shown great potential in applications such as conversational agents. These models, such as GPT-4, can not only master language but also solve complex tasks in areas like mathematics, coding, medicine, and law. Despite their...
2023-06-07T20:12:29Z
Github: https://github.com/declare-lab/instruct-eval Leaderboard: https://declare-lab.github.io/instruct-eval/
null
null
null
null
null
null
null
null
null
2306.05087
PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization
['Yidong Wang', 'Zhuohao Yu', 'Zhengran Zeng', 'Linyi Yang', 'Cunxiang Wang', 'Hao Chen', 'Chaoya Jiang', 'Rui Xie', 'Jindong Wang', 'Xing Xie', 'Wei Ye', 'Shikun Zhang', 'Yue Zhang']
['cs.CL', 'cs.AI']
Instruction tuning large language models (LLMs) remains a challenging task, owing to the complexity of hyperparameter selection and the difficulty involved in evaluating the tuned models. To determine the optimal hyperparameters, an automatic, robust, and reliable evaluation benchmark is essential. However, establishin...
2023-06-08T10:41:56Z
Accepted by ICLR 2024
null
null
PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization
['Yidong Wang', 'Zhuohao Yu', 'Zhengran Zeng', 'Linyi Yang', 'Cunxiang Wang', 'Hao Chen', 'Chaoya Jiang', 'Rui Xie', 'Jindong Wang', 'Xingxu Xie', 'Wei Ye', 'Shi-Bo Zhang', 'Yue Zhang']
2023
International Conference on Learning Representations
249
90
['Computer Science']
2306.05179
M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models
['Wenxuan Zhang', 'Sharifah Mahani Aljunied', 'Chang Gao', 'Yew Ken Chia', 'Lidong Bing']
['cs.CL', 'cs.CV']
Despite the existence of various benchmarks for evaluating natural language processing models, we argue that human exams are a more suitable means of evaluating general intelligence for large language models (LLMs), as they inherently demand a much wider range of abilities such as language understanding, domain knowled...
2023-06-08T13:21:29Z
NeurIPS 2023 (Datasets and Benchmarks)
null
null
M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models
['Wenxuan Zhang', 'Sharifah Mahani Aljunied', 'Chang Gao', 'Yew Ken Chia', 'Lidong Bing']
2023
Neural Information Processing Systems
87
45
['Computer Science']
2306.05284
Simple and Controllable Music Generation
['Jade Copet', 'Felix Kreuk', 'Itai Gat', 'Tal Remez', 'David Kant', 'Gabriel Synnaeve', 'Yossi Adi', 'Alexandre Défossez']
['cs.SD', 'cs.AI', 'cs.LG', 'eess.AS']
We tackle the task of conditional music generation. We introduce MusicGen, a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen is comprised of a single-stage transformer LM together with efficient token interleaving patte...
2023-06-08T15:31:05Z
Published at Neurips 2023
null
null
null
null
null
null
null
null
null
2306.05301
ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases
['Qiaoyu Tang', 'Ziliang Deng', 'Hongyu Lin', 'Xianpei Han', 'Qiao Liang', 'Boxi Cao', 'Le Sun']
['cs.CL']
Enabling large language models to utilize real-world tools effectively is crucial for achieving embodied intelligence. Existing approaches to tool learning have either primarily relied on extremely large language models, such as GPT-4, to attain generalized tool-use abilities in a zero-shot manner, or utilized supervis...
2023-06-08T15:46:32Z
null
null
null
null
null
null
null
null
null
null
2306.05399
Matting Anything
['Jiachen Li', 'Jitesh Jain', 'Humphrey Shi']
['cs.CV']
In this paper, we propose the Matting Anything Model (MAM), an efficient and versatile framework for estimating the alpha matte of any instance in an image with flexible and interactive visual or linguistic user prompt guidance. MAM offers several significant advantages over previous specialized image matting networks:...
2023-06-08T17:51:58Z
Project web-page: https://chrisjuniorli.github.io/project/Matting-Anything/
null
null
null
null
null
null
null
null
null
2306.05423
ADDP: Learning General Representations for Image Recognition and Generation with Alternating Denoising Diffusion Process
['Changyao Tian', 'Chenxin Tao', 'Jifeng Dai', 'Hao Li', 'Ziheng Li', 'Lewei Lu', 'Xiaogang Wang', 'Hongsheng Li', 'Gao Huang', 'Xizhou Zhu']
['cs.CV']
Image recognition and generation have long been developed independently of each other. With the recent trend towards general-purpose representation learning, the development of general representations for both recognition and generation tasks is also promoted. However, preliminary attempts mainly focus on generation pe...
2023-06-08T17:59:32Z
Accepted by ICLR2024
null
null
null
null
null
null
null
null
null
2306.05425
MIMIC-IT: Multi-Modal In-Context Instruction Tuning
['Bo Li', 'Yuanhan Zhang', 'Liangyu Chen', 'Jinghao Wang', 'Fanyi Pu', 'Jingkang Yang', 'Chunyuan Li', 'Ziwei Liu']
['cs.CV', 'cs.AI', 'cs.CL', 'cs.HC']
High-quality instructions and responses are essential for the zero-shot performance of large language models on interactive natural language tasks. For interactive vision-language tasks involving intricate visual scenes, a large quantity of diverse and creative instruction-response pairs should be imperative to tune vi...
2023-06-08T17:59:56Z
Project page: https://otter-ntu.github.io/ Dataset & code: https://github.com/Luodian/otter Initial release, work in progress
null
null
MIMIC-IT: Multi-Modal In-Context Instruction Tuning
['Bo Li', 'Yuanhan Zhang', 'Liangyu Chen', 'Jinghao Wang', 'Fanyi Pu', 'Jingkang Yang', 'C. Li', 'Ziwei Liu']
2023
arXiv.org
240
55
['Computer Science']
2306.05443
PIXIU: A Large Language Model, Instruction Data and Evaluation Benchmark for Finance
['Qianqian Xie', 'Weiguang Han', 'Xiao Zhang', 'Yanzhao Lai', 'Min Peng', 'Alejandro Lopez-Lira', 'Jimin Huang']
['cs.CL', 'cs.AI']
Although large language models (LLMs) has shown great performance on natural language processing (NLP) in the financial domain, there are no publicly available financial tailtored LLMs, instruction tuning datasets, and evaluation benchmarks, which is critical for continually pushing forward the open-source development ...
2023-06-08T14:20:29Z
12 pages, 1 figures
null
null
null
null
null
null
null
null
null
2306.05685
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
['Lianmin Zheng', 'Wei-Lin Chiang', 'Ying Sheng', 'Siyuan Zhuang', 'Zhanghao Wu', 'Yonghao Zhuang', 'Zi Lin', 'Zhuohan Li', 'Dacheng Li', 'Eric P. Xing', 'Hao Zhang', 'Joseph E. Gonzalez', 'Ion Stoica']
['cs.CL', 'cs.AI']
Evaluating large language model (LLM) based chat assistants is challenging due to their broad capabilities and the inadequacy of existing benchmarks in measuring human preferences. To address this, we explore using strong LLMs as judges to evaluate these models on more open-ended questions. We examine the usage and lim...
2023-06-09T05:55:52Z
NeurIPS 2023 Datasets and Benchmarks Track
null
null
null
null
null
null
null
null
null
2306.06081
Carefully Blending Adversarial Training, Purification, and Aggregation Improves Adversarial Robustness
['Emanuele Ballarin', 'Alessio Ansuini', 'Luca Bortolussi']
['cs.CV', 'cs.AI', 'cs.CR', 'cs.LG']
In this work, we propose a novel adversarial defence mechanism for image classification - CARSO - blending the paradigms of adversarial training and adversarial purification in a synergistic robustness-enhancing way. The method builds upon an adversarially-trained classifier, and learns to map its internal representati...
2023-05-25T09:04:31Z
25 pages, 1 figure, 16 tables
null
null
Carefully Blending Adversarial Training, Purification, and Aggregation Improves Adversarial Robustness
['Emanuele Ballarin', 'A. Ansuini', 'L. Bortolussi']
2023
null
0
71
['Computer Science']
2306.06189
FasterViT: Fast Vision Transformers with Hierarchical Attention
['Ali Hatamizadeh', 'Greg Heinrich', 'Hongxu Yin', 'Andrew Tao', 'Jose M. Alvarez', 'Jan Kautz', 'Pavlo Molchanov']
['cs.CV', 'cs.AI', 'cs.LG']
We design a new family of hybrid CNN-ViT neural networks, named FasterViT, with a focus on high image throughput for computer vision (CV) applications. FasterViT combines the benefits of fast local representation learning in CNNs and global modeling properties in ViT. Our newly introduced Hierarchical Attention (HAT) a...
2023-06-09T18:41:37Z
ICLR'24 Accepted Paper
null
null
FasterViT: Fast Vision Transformers with Hierarchical Attention
['Ali Hatamizadeh', 'Greg Heinrich', 'Hongxu Yin', 'Andrew Tao', 'J. Álvarez', 'J. Kautz', 'Pavlo Molchanov']
2023
International Conference on Learning Representations
72
83
['Computer Science']
2306.06289
SegViTv2: Exploring Efficient and Continual Semantic Segmentation with Plain Vision Transformers
['Bowen Zhang', 'Liyang Liu', 'Minh Hieu Phan', 'Zhi Tian', 'Chunhua Shen', 'Yifan Liu']
['cs.CV']
This paper investigates the capability of plain Vision Transformers (ViTs) for semantic segmentation using the encoder-decoder framework and introduces \textbf{SegViTv2}. In this study, we introduce a novel Attention-to-Mask (\atm) module to design a lightweight decoder effective for plain ViT. The proposed ATM convert...
2023-06-09T22:29:56Z
IJCV 2023 accepted, 21 pages, 8 figures, 12 tables
null
null
SegViTv2: Exploring Efficient and Continual Semantic Segmentation with Plain Vision Transformers
['Bowen Zhang', 'Liyang Liu', 'Minh-Hieu Phan', 'Zhi Tian', 'Chunhua Shen', 'Yifan Liu']
2023
International Journal of Computer Vision
30
95
['Computer Science']
2306.06482
TensorNet: Cartesian Tensor Representations for Efficient Learning of Molecular Potentials
['Guillem Simeon', 'Gianni de Fabritiis']
['cs.LG', 'physics.chem-ph', 'physics.comp-ph']
The development of efficient machine learning models for molecular systems representation is becoming crucial in scientific research. We introduce TensorNet, an innovative O(3)-equivariant message-passing neural network architecture that leverages Cartesian tensor representations. By using Cartesian tensor atomic embed...
2023-06-10T16:41:18Z
NeurIPS 2023, camera-ready version
null
null
null
null
null
null
null
null
null
2306.06546
High-Fidelity Audio Compression with Improved RVQGAN
['Rithesh Kumar', 'Prem Seetharaman', 'Alejandro Luebs', 'Ishaan Kumar', 'Kundan Kumar']
['cs.SD', 'cs.LG', 'eess.AS']
Language models have been successfully used to model natural signals, such as images, speech, and music. A key component of these models is a high quality neural compression model that can compress high-dimensional natural signals into lower dimensional discrete tokens. To that end, we introduce a high-fidelity univers...
2023-06-11T00:13:00Z
Accepted at NeurIPS 2023 (spotlight)
null
null
High-Fidelity Audio Compression with Improved RVQGAN
['Rithesh Kumar', 'Prem Seetharaman', 'Alejandro Luebs', 'I. Kumar', 'Kundan Kumar']
2023
Neural Information Processing Systems
338
47
['Computer Science', 'Engineering']
2306.06687
LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark
['Zhenfei Yin', 'Jiong Wang', 'Jianjian Cao', 'Zhelun Shi', 'Dingning Liu', 'Mukai Li', 'Lu Sheng', 'Lei Bai', 'Xiaoshui Huang', 'Zhiyong Wang', 'Jing Shao', 'Wanli Ouyang']
['cs.CV']
Large language models have emerged as a promising approach towards achieving general-purpose AI agents. The thriving open-source LLM community has greatly accelerated the development of agents that support human-machine dialogue interaction through natural language processing. However, human interaction with the world ...
2023-06-11T14:01:17Z
NeurIPS2023 camera ready ; 37 pages, 33 figures. Code available at https://github.com/OpenLAMM/LAMM ; Project page: https://openlamm.github.io/
null
null
null
null
null
null
null
null
null
2306.06851
UniPoll: A Unified Social Media Poll Generation Framework via Multi-Objective Optimization
['Yixia Li', 'Rong Xiang', 'Yanlin Song', 'Jing Li']
['cs.CL']
Social media platforms are vital for expressing opinions and understanding public sentiment, yet many analytical tools overlook passive users who mainly consume content without engaging actively. To address this, we introduce UniPoll, an advanced framework designed to automatically generate polls from social media post...
2023-06-12T03:54:04Z
Accepted by IEEE Transactions on Neural Networks and Learning Systems. Project page is live at https://uni-poll.github.io . Code are available at https://github.com/X1AOX1A/UniPoll
null
10.1109/TNNLS.2024.3512868
null
null
null
null
null
null
null
2306.07174
Augmenting Language Models with Long-Term Memory
['Weizhi Wang', 'Li Dong', 'Hao Cheng', 'Xiaodong Liu', 'Xifeng Yan', 'Jianfeng Gao', 'Furu Wei']
['cs.CL']
Existing large language models (LLMs) can only afford fix-sized inputs due to the input length limit, preventing them from utilizing rich long-context information from past inputs. To address this, we propose a framework, Language Models Augmented with Long-Term Memory (LongMem), which enables LLMs to memorize long his...
2023-06-12T15:13:39Z
null
null
null
null
null
null
null
null
null
null
2306.07197
AROID: Improving Adversarial Robustness Through Online Instance-Wise Data Augmentation
['Lin Li', 'Jianing Qiu', 'Michael Spratling']
['cs.CV', 'cs.AI', 'cs.LG']
Deep neural networks are vulnerable to adversarial examples. Adversarial training (AT) is an effective defense against adversarial examples. However, AT is prone to overfitting which degrades robustness substantially. Recently, data augmentation (DA) was shown to be effective in mitigating robust overfitting if appropr...
2023-06-12T15:54:52Z
published at the IJCV in press
null
null
null
null
null
null
null
null
null
2306.07280
Controlling Text-to-Image Diffusion by Orthogonal Finetuning
['Zeju Qiu', 'Weiyang Liu', 'Haiwen Feng', 'Yuxuan Xue', 'Yao Feng', 'Zhen Liu', 'Dan Zhang', 'Adrian Weller', 'Bernhard Schölkopf']
['cs.CV', 'cs.AI', 'cs.GR', 'cs.LG']
Large text-to-image diffusion models have impressive capabilities in generating photorealistic images from text prompts. How to effectively guide or control these powerful models to perform different downstream tasks becomes an important open problem. To tackle this challenge, we introduce a principled finetuning metho...
2023-06-12T17:59:23Z
NeurIPS 2023 (v3: fixed formula typos in Section 3.5, 43 pages, 34 figures, project page: https://oft.wyliu.com/)
null
null
Controlling Text-to-Image Diffusion by Orthogonal Finetuning
['Zeju Qiu', 'Wei-yu Liu', 'Haiwen Feng', 'Yuxuan Xue', 'Yao Feng', 'Zhen Liu', 'Dan Zhang', 'Adrian Weller', 'B. Scholkopf']
2023
Neural Information Processing Systems
159
71
['Computer Science']
2306.07373
EriBERTa: A Bilingual Pre-Trained Language Model for Clinical Natural Language Processing
['Iker de la Iglesia', 'Aitziber Atutxa', 'Koldo Gojenola', 'Ander Barrena']
['cs.CL']
The utilization of clinical reports for various secondary purposes, including health research and treatment monitoring, is crucial for enhancing patient care. Natural Language Processing (NLP) tools have emerged as valuable assets for extracting and processing relevant information from these reports. However, the avail...
2023-06-12T18:56:25Z
null
null
null
null
null
null
null
null
null
null
2306.07629
SqueezeLLM: Dense-and-Sparse Quantization
['Sehoon Kim', 'Coleman Hooper', 'Amir Gholami', 'Zhen Dong', 'Xiuyu Li', 'Sheng Shen', 'Michael W. Mahoney', 'Kurt Keutzer']
['cs.CL', 'cs.LG']
Generative Large Language Models (LLMs) have demonstrated remarkable results for a wide range of tasks. However, deploying these models for inference has been a significant challenge due to their unprecedented resource requirements. This has forced existing deployment frameworks to use multi-GPU inference pipelines, wh...
2023-06-13T08:57:54Z
ICML 2024
null
null
null
null
null
null
null
null
null
2306.07691
StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
['Yinghao Aaron Li', 'Cong Han', 'Vinay S. Raghavan', 'Gavin Mischler', 'Nima Mesgarani']
['eess.AS', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.SD']
In this paper, we present StyleTTS 2, a text-to-speech (TTS) model that leverages style diffusion and adversarial training with large speech language models (SLMs) to achieve human-level TTS synthesis. StyleTTS 2 differs from its predecessor by modeling styles as a latent random variable through diffusion models to gen...
2023-06-13T11:04:43Z
NeurIPS 2023
null
null
StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
['Yinghao Aaron Li', 'Cong Han', 'Vinay S. Raghavan', 'Gavin Mischler', 'N. Mesgarani']
2023
Neural Information Processing Systems
127
65
['Computer Science', 'Medicine', 'Engineering']
2306.07906
WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences
['Xiao Liu', 'Hanyu Lai', 'Hao Yu', 'Yifan Xu', 'Aohan Zeng', 'Zhengxiao Du', 'Peng Zhang', 'Yuxiao Dong', 'Jie Tang']
['cs.CL', 'cs.AI']
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM). Its goal is to augment a pre-trained large language model (LLM) with web search and retrieval capabilities while being efficient for real-world deployments. To achieve this, we develop WebGLM with strategies for the L...
2023-06-13T16:57:53Z
Accepted to KDD 2023
null
null
null
null
null
null
null
null
null
2,306.07934
BoardgameQA: A Dataset for Natural Language Reasoning with Contradictory Information
['Mehran Kazemi', 'Quan Yuan', 'Deepti Bhatia', 'Najoung Kim', 'Xin Xu', 'Vaiva Imbrasaite', 'Deepak Ramachandran']
['cs.CL', 'cs.AI', 'cs.LG']
Automated reasoning with unstructured natural text is a key requirement for many potential applications of NLP and for developing robust AI systems. Recently, Language Models (LMs) have demonstrated complex reasoning capacities even without any finetuning. However, existing evaluation for automated reasoning assumes ac...
2023-06-13T17:39:20Z
null
null
null
BoardgameQA: A Dataset for Natural Language Reasoning with Contradictory Information
['Mehran Kazemi', 'Quan Yuan', 'Deepti Bhatia', 'Najoung Kim', 'Xin Xu', 'Vaiva Imbrasaite', 'Deepak Ramachandran']
2,023
Neural Information Processing Systems
50
64
['Computer Science']
2,306.07957
Hidden Biases of End-to-End Driving Models
['Bernhard Jaeger', 'Kashyap Chitta', 'Andreas Geiger']
['cs.CV', 'cs.AI', 'cs.LG', 'cs.RO']
End-to-end driving systems have recently made rapid progress, in particular on CARLA. Independent of their major contribution, they introduce changes to minor system components. Consequently, the source of improvements is unclear. We identify two biases that recur in nearly all state-of-the-art methods and are critical...
2023-06-13T17:55:17Z
Accepted at ICCV 2023. Camera ready version
null
null
Hidden Biases of End-to-End Driving Models
['Bernhard Jaeger', 'Kashyap Chitta', 'Andreas Geiger']
2,023
IEEE International Conference on Computer Vision
69
44
['Computer Science']
2,306.07967
One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning
['Arnav Chavan', 'Zhuang Liu', 'Deepak Gupta', 'Eric Xing', 'Zhiqiang Shen']
['cs.LG', 'cs.AI', 'cs.CV']
We present Generalized LoRA (GLoRA), an advanced approach for universal parameter-efficient fine-tuning tasks. Enhancing Low-Rank Adaptation (LoRA), GLoRA employs a generalized prompt module to optimize pre-trained model weights and adjust intermediate activations, providing more flexibility and capability across diver...
2023-06-13T17:59:32Z
Technical report. v2: Add LLaMA-1&2 results. Code and models at https://github.com/Arnav0400/ViT-Slim/tree/master/GLoRA
null
null
One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning
['Arnav Chavan', 'Zhuang Liu', 'D. Gupta', 'Eric P. Xing', 'Zhiqiang Shen']
2,023
arXiv.org
92
49
['Computer Science']
2,306.08018
Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models
['Yin Fang', 'Xiaozhuan Liang', 'Ningyu Zhang', 'Kangwei Liu', 'Rui Huang', 'Zhuo Chen', 'Xiaohui Fan', 'Huajun Chen']
['q-bio.QM', 'cs.AI', 'cs.CE', 'cs.CL', 'cs.IR', 'cs.LG']
Large Language Models (LLMs), with their remarkable task-handling capabilities and innovative outputs, have catalyzed significant advancements across a spectrum of fields. However, their proficiency within specialized domains such as biomolecular studies remains limited. To address this challenge, we introduce Mol-Inst...
2023-06-13T14:35:34Z
ICLR 2024. Project homepage: https://github.com/zjunlp/Mol-Instructions
null
null
Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models
['Yin Fang', 'Xiaozhuan Liang', 'Ningyu Zhang', 'Kangwei Liu', 'Rui Huang', 'Zhuo Chen', 'Xiaohui Fan', 'Huajun Chen']
2,023
International Conference on Learning Representations
88
80
['Computer Science', 'Biology']
2,306.08161
h2oGPT: Democratizing Large Language Models
['Arno Candel', 'Jon McKinney', 'Philipp Singer', 'Pascal Pfeiffer', 'Maximilian Jeblick', 'Prithvi Prabhu', 'Jeff Gambera', 'Mark Landry', 'Shivam Bansal', 'Ryan Chesler', 'Chun Ming Lee', 'Marcos V. Conde', 'Pasha Stetsenko', 'Olivier Grellier', 'SriSatish Ambati']
['cs.CL', 'cs.AI', 'cs.HC', 'cs.IR', 'cs.LG']
Applications built on top of Large Language Models (LLMs) such as GPT-4 represent a revolution in AI due to their human-level capabilities in natural language processing. However, they also pose many significant risks such as the presence of biased, private, or harmful text, and the unauthorized inclusion of copyrighte...
2023-06-13T22:19:53Z
Work in progress by H2O.ai, Inc
null
null
h2oGPT: Democratizing Large Language Models
['A. Candel', 'Jon McKinney', 'Philipp Singer', 'Pascal Pfeiffer', 'Maximilian Jeblick', 'Prithvi Prabhu', 'Jeff Gambera', 'Mark Landry', 'Shivam Bansal', 'Ryan Chesler', 'Chun Ming Lee', 'Marcos V. Conde', 'Pasha Stetsenko', 'O. Grellier', 'SriSatish Ambati']
2,023
arXiv.org
7
4
['Computer Science']
2,306.08502
ITALIC: An Italian Intent Classification Dataset
['Alkis Koudounas', 'Moreno La Quatra', 'Lorenzo Vaiani', 'Luca Colomba', 'Giuseppe Attanasio', 'Eliana Pastor', 'Luca Cagliero', 'Elena Baralis']
['cs.CL', 'cs.SD', 'eess.AS']
Recent large-scale Spoken Language Understanding datasets focus predominantly on English and do not account for language-specific phenomena such as particular phonemes or words in different lects. We introduce ITALIC, the first large-scale speech dataset designed for intent classification in Italian. The dataset compri...
2023-06-14T13:36:24Z
Accepted at INTERSPEECH 2023. Data and code at https://github.com/RiTA-nlp/ITALIC
null
10.21437/Interspeech.2023-1980
ITALIC: An Italian Intent Classification Dataset
['Alkis Koudounas', 'Moreno La Quatra', 'Lorenzo Vaiani', 'Luca Colomba', 'Giuseppe Attanasio', 'Eliana Pastor', 'Luca Cagliero', 'Elena Baralis']
2,023
Interspeech
25
21
['Computer Science', 'Engineering']
2,306.08526
AlbMoRe: A Corpus of Movie Reviews for Sentiment Analysis in Albanian
['Erion Çano']
['cs.CL', 'cs.AI', 'cs.LG']
Lack of available resources such as text corpora for low-resource languages seriously hinders research on natural language processing and computational linguistics. This paper presents AlbMoRe, a corpus of 800 sentiment annotated movie reviews in Albanian. Each text is labeled as positive or negative and can be used fo...
2023-06-14T14:21:55Z
4 pages, 3 tables
null
null
AlbMoRe: A Corpus of Movie Reviews for Sentiment Analysis in Albanian
['Erion Çano']
2,023
null
3
19
['Computer Science']
2,306.08543
MiniLLM: Knowledge Distillation of Large Language Models
['Yuxian Gu', 'Li Dong', 'Furu Wei', 'Minlie Huang']
['cs.CL', 'cs.AI']
Knowledge Distillation (KD) is a promising technique for reducing the high computational demand of large language models (LLMs). However, previous KD methods are primarily applied to white-box classification models or training small models to imitate black-box model APIs like ChatGPT. How to effectively distill the kno...
2023-06-14T14:44:03Z
Published as a conference paper in ICLR 2024
null
null
null
null
null
null
null
null
null
2,306.08568
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
['Ziyang Luo', 'Can Xu', 'Pu Zhao', 'Qingfeng Sun', 'Xiubo Geng', 'Wenxiang Hu', 'Chongyang Tao', 'Jing Ma', 'Qingwei Lin', 'Daxin Jiang']
['cs.CL', 'cs.AI']
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex inst...
2023-06-14T15:18:48Z
Large Language model, Code Generation, Code LLMs.This paper has been accepted to ICLR 2024. Please cite the ICLR version
The Twelfth International Conference on Learning Representations (ICLR 2024)
null
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
['Ziyang Luo', 'Can Xu', 'Pu Zhao', 'Qingfeng Sun', 'Xiubo Geng', 'Wenxiang Hu', 'Chongyang Tao', 'Jing Ma', 'Qingwei Lin', 'Daxin Jiang']
2,023
International Conference on Learning Representations
698
49
['Computer Science']
2,306.08620
Anticipatory Music Transformer
['John Thickstun', 'David Hall', 'Chris Donahue', 'Percy Liang']
['cs.SD', 'cs.LG', 'eess.AS', 'stat.ML']
We introduce anticipation: a method for constructing a controllable generative model of a temporal point process (the event process) conditioned asynchronously on realizations of a second, correlated process (the control process). We achieve this by interleaving sequences of events and controls, such that controls appe...
2023-06-14T16:27:53Z
TMLR accepted version
null
null
Anticipatory Music Transformer
['John Thickstun', 'D. Hall', 'Chris Donahue', 'Percy Liang']
2,023
Trans. Mach. Learn. Res.
16
122
['Computer Science', 'Engineering', 'Mathematics']
2,306.08637
TAPIR: Tracking Any Point with per-frame Initialization and temporal Refinement
['Carl Doersch', 'Yi Yang', 'Mel Vecerik', 'Dilara Gokay', 'Ankush Gupta', 'Yusuf Aytar', 'Joao Carreira', 'Andrew Zisserman']
['cs.CV']
We present a novel model for Tracking Any Point (TAP) that effectively tracks any queried point on any physical surface throughout a video sequence. Our approach employs two stages: (1) a matching stage, which independently locates a suitable candidate point match for the query point on every other frame, and (2) a ref...
2023-06-14T17:07:51Z
Published at ICCV 2023
null
null
null
null
null
null
null
null
null
2,306.08685
World-to-Words: Grounded Open Vocabulary Acquisition through Fast Mapping in Vision-Language Models
['Ziqiao Ma', 'Jiayi Pan', 'Joyce Chai']
['cs.CL', 'cs.AI', 'cs.CV']
The ability to connect language units to their referents in the physical world, referred to as grounding, is crucial to learning and understanding grounded meanings of words. While humans demonstrate fast mapping in new word learning, it remains unclear whether modern vision-language models can truly represent language...
2023-06-14T18:10:05Z
ACL 2023 Outstanding Paper
null
null
null
null
null
null
null
null
null
2,306.08832
Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Compositional Understanding
['Le Zhang', 'Rabiul Awal', 'Aishwarya Agrawal']
['cs.CV']
Vision-Language Models (VLMs), such as CLIP, exhibit strong image-text comprehension abilities, facilitating advances in several downstream tasks such as zero-shot image classification, image-text retrieval, and text-to-image generation. However, the compositional reasoning abilities of existing VLMs remains subpar. Th...
2023-06-15T03:26:28Z
CVPR 2024
null
null
null
null
null
null
null
null
null
2,306.08887
SplatFlow: Learning Multi-frame Optical Flow via Splatting
['Bo Wang', 'Yifan Zhang', 'Jian Li', 'Yang Yu', 'Zhenping Sun', 'Li Liu', 'Dewen Hu']
['cs.CV']
The occlusion problem remains a crucial challenge in optical flow estimation (OFE). Despite the recent significant progress brought about by deep learning, most existing deep learning OFE methods still struggle to handle occlusions; in particular, those based on two frames cannot correctly handle occlusions because occ...
2023-06-15T06:41:21Z
null
International Journal of Computer Vision (IJCV), 2024
10.1007/s11263-024-01993-0
null
null
null
null
null
null
null
2,306.09200
ChessGPT: Bridging Policy Learning and Language Modeling
['Xidong Feng', 'Yicheng Luo', 'Ziyan Wang', 'Hongrui Tang', 'Mengyue Yang', 'Kun Shao', 'David Mguni', 'Yali Du', 'Jun Wang']
['cs.LG', 'cs.AI']
When solving decision-making tasks, humans typically depend on information from two key sources: (1) Historical policy data, which provides interaction replay from the environment, and (2) Analytical insights in natural language form, exposing the invaluable thought process or strategic considerations. Despite this, th...
2023-06-15T15:35:31Z
Published as a conference article in NeurIPS 2023
null
null
ChessGPT: Bridging Policy Learning and Language Modeling
['Xidong Feng', 'Yicheng Luo', 'Ziyan Wang', 'Hongrui Tang', 'Mengyue Yang', 'Kun Shao', 'D. Mguni', 'Yali Du', 'Jun Wang']
2,023
Neural Information Processing Systems
44
61
['Computer Science']
2,306.09212
CMMLU: Measuring massive multitask language understanding in Chinese
['Haonan Li', 'Yixuan Zhang', 'Fajri Koto', 'Yifei Yang', 'Hai Zhao', 'Yeyun Gong', 'Nan Duan', 'Timothy Baldwin']
['cs.CL']
As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging. This paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese benchmark that covers various subjects, including natural science, social sciences, enginee...
2023-06-15T15:49:51Z
null
null
null
CMMLU: Measuring massive multitask language understanding in Chinese
['Haonan Li', 'Yixuan Zhang', 'Fajri Koto', 'Yifei Yang', 'Hai Zhao', 'Yeyun Gong', 'Nan Duan', 'Tim Baldwin']
2,023
Annual Meeting of the Association for Computational Linguistics
274
50
['Computer Science']
2,306.09237
One Law, Many Languages: Benchmarking Multilingual Legal Reasoning for Judicial Support
['Ronja Stern', 'Vishvaksenan Rasiah', 'Veton Matoshi', 'Srinanda Brügger Bose', 'Matthias Stürmer', 'Ilias Chalkidis', 'Daniel E. Ho', 'Joel Niklaus']
['cs.CL', 'cs.AI', 'cs.LG', '68T50', 'I.2']
Recent strides in Large Language Models (LLMs) have saturated many Natural Language Processing (NLP) benchmarks, emphasizing the need for more challenging ones to properly assess LLM capabilities. However, domain-specific and multilingual benchmarks are rare because they require in-depth expertise to develop. Still, mo...
2023-06-15T16:19:15Z
null
null
null
One Law, Many Languages: Benchmarking Multilingual Legal Reasoning for Judicial Support
['Vishvaksenan Rasiah', 'Ronja Stern', 'Veton Matoshi', 'Matthias Stürmer', 'Ilias Chalkidis', 'Daniel Ho', 'Joel Niklaus']
2,023
null
11
0
['Computer Science']
2,306.09364
TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting
['Vijay Ekambaram', 'Arindam Jati', 'Nam Nguyen', 'Phanwadee Sinthong', 'Jayant Kalagnanam']
['cs.LG', 'cs.AI', 'I.2']
Transformers have gained popularity in time series forecasting for their ability to capture long-sequence interactions. However, their high memory and computing requirements pose a critical bottleneck for long-term forecasting. To address this, we propose TSMixer, a lightweight neural architecture exclusively composed ...
2023-06-14T06:26:23Z
Accepted in the Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 23), Research Track. Delayed release in arXiv to comply with the conference policies on the double-blind review process. This paper has been submitted to the KDD peer-review process on Feb 02, 2023
null
10.1145/3580305.3599533
null
null
null
null
null
null
null
2,306.09683
Scaling Open-Vocabulary Object Detection
['Matthias Minderer', 'Alexey Gritsenko', 'Neil Houlsby']
['cs.CV']
Open-vocabulary object detection has benefited greatly from pretrained vision-language models, but is still limited by the amount of available detection training data. While detection training data can be expanded by using Web image-text pairs as weak supervision, this has not been done at scales comparable to image-le...
2023-06-16T08:27:46Z
null
null
null
null
null
null
null
null
null
null
2,306.09802
RED$^{\rm FM}$: a Filtered and Multilingual Relation Extraction Dataset
['Pere-Lluís Huguet Cabot', 'Simone Tedeschi', 'Axel-Cyrille Ngonga Ngomo', 'Roberto Navigli']
['cs.CL']
Relation Extraction (RE) is a task that identifies relationships between entities in a text, enabling the acquisition of relational facts and bridging the gap between natural language and structured knowledge. However, current RE models often rely on small datasets with low coverage of relation types, particularly when...
2023-06-16T12:29:59Z
ACL 2023. Please cite authors correctly using both lastnames ("Huguet Cabot", "Ngonga Ngomo")
null
null
null
null
null
null
null
null
null
2,306.09968
ClinicalGPT: Large Language Models Finetuned with Diverse Medical Data and Comprehensive Evaluation
['Guangyu Wang', 'Guoxing Yang', 'Zongxin Du', 'Longjun Fan', 'Xiaohu Li']
['cs.CL']
Large language models have exhibited exceptional performance on various Natural Language Processing (NLP) tasks, leveraging techniques such as the pre-training, and instruction fine-tuning. Despite these advances, their effectiveness in medical applications is limited, due to challenges such as factual inaccuracies, re...
2023-06-16T16:56:32Z
null
null
null
null
null
null
null
null
null
null
2,306.10315
FutureTOD: Teaching Future Knowledge to Pre-trained Language Model for Task-Oriented Dialogue
['Weihao Zeng', 'Keqing He', 'Yejie Wang', 'Chen Zeng', 'Jingang Wang', 'Yunsen Xian', 'Weiran Xu']
['cs.CL']
Pre-trained language models based on general text enable huge success in the NLP scenario. But the intrinsical difference of linguistic patterns between general text and task-oriented dialogues makes existing pre-trained language models less useful in practice. Current dialogue pre-training methods rely on a contrastiv...
2023-06-17T10:40:07Z
ACL 2023 Main Conference
null
null
null
null
null
null
null
null
null
2,306.10968
BayLing: Bridging Cross-lingual Alignment and Instruction Following through Interactive Translation for Large Language Models
['Shaolei Zhang', 'Qingkai Fang', 'Zhuocheng Zhang', 'Zhengrui Ma', 'Yan Zhou', 'Langlin Huang', 'Mengyu Bu', 'Shangtong Gui', 'Yunji Chen', 'Xilin Chen', 'Yang Feng']
['cs.CL', 'cs.AI']
Large language models (LLMs) have demonstrated remarkable prowess in language understanding and generation. Advancing from foundation LLMs to instruction-following LLMs, instruction tuning plays a vital role in aligning LLMs to human preferences. However, the existing LLMs are usually focused on English, leading to infe...
2023-06-19T14:30:52Z
Try BayLing's online demo at http://nlp.ict.ac.cn/bayling/demo
null
null
BayLing: Bridging Cross-lingual Alignment and Instruction Following through Interactive Translation for Large Language Models
['Shaolei Zhang', 'Qingkai Fang', 'Zhuocheng Zhang', 'Zhengrui Ma', 'Yan Zhou', 'Langlin Huang', 'Mengyu Bu', 'Shangtong Gui', 'Yunji Chen', 'Xilin Chen', 'Yang Feng']
2,023
arXiv.org
42
29
['Computer Science']
2,306.10998
RepoFusion: Training Code Models to Understand Your Repository
['Disha Shrivastava', 'Denis Kocetkov', 'Harm de Vries', 'Dzmitry Bahdanau', 'Torsten Scholak']
['cs.LG', 'cs.AI', 'cs.PL', 'cs.SE']
Despite the huge success of Large Language Models (LLMs) in coding assistants like GitHub Copilot, these models struggle to understand the context present in the repository (e.g., imports, parent classes, files with similar names, etc.), thereby producing inaccurate code completions. This effect is more pronounced when...
2023-06-19T15:05:31Z
null
null
null
RepoFusion: Training Code Models to Understand Your Repository
['Disha Shrivastava', 'Denis Kocetkov', 'H. D. Vries', 'Dzmitry Bahdanau', 'Torsten Scholak']
2,023
arXiv.org
29
42
['Computer Science']
2,306.11207
Quilt-1M: One Million Image-Text Pairs for Histopathology
['Wisdom Oluchi Ikezogwo', 'Mehmet Saygin Seyfioglu', 'Fatemeh Ghezloo', 'Dylan Stefan Chan Geva', 'Fatwir Sheikh Mohammed', 'Pavan Kumar Anand', 'Ranjay Krishna', 'Linda Shapiro']
['cs.CV', 'cs.CL', 'cs.LG']
Recent accelerations in multi-modal applications have been made possible with the plethora of image and text data available online. However, the scarcity of analogous data in the medical field, specifically in histopathology, has slowed comparable progress. To enable similar representation learning for histopathology, ...
2023-06-20T00:14:47Z
null
null
null
null
null
null
null
null
null
null
2,306.11247
DICES Dataset: Diversity in Conversational AI Evaluation for Safety
['Lora Aroyo', 'Alex S. Taylor', 'Mark Diaz', 'Christopher M. Homan', 'Alicia Parrish', 'Greg Serapio-Garcia', 'Vinodkumar Prabhakaran', 'Ding Wang']
['cs.HC']
Machine learning approaches often require training and evaluation datasets with a clear separation between positive and negative examples. This risks simplifying and even obscuring the inherent subjectivity present in many tasks. Preserving such variance in content and diversity in datasets is often expensive and labor...
2023-06-20T03:00:12Z
null
null
null
null
null
null
null
null
null
null
2,306.11249
OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning
['Cheng Tan', 'Siyuan Li', 'Zhangyang Gao', 'Wenfei Guan', 'Zedong Wang', 'Zicheng Liu', 'Lirong Wu', 'Stan Z. Li']
['cs.CV', 'cs.AI']
Spatio-temporal predictive learning is a learning paradigm that enables models to learn spatial and temporal patterns by predicting future frames from given past frames in an unsupervised manner. Despite remarkable progress in recent years, a lack of systematic understanding persists due to the diverse settings, comple...
2023-06-20T03:02:14Z
Accepted by NeurIPS 2023. 33 pages, 17 figures, 19 tables. Under review. For more details, please refer to https://github.com/chengtan9907/OpenSTL
null
null
OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning
['Cheng Tan', 'Siyuan Li', 'Zhangyang Gao', 'Wen-Cai Guan', 'Zedong Wang', 'Zicheng Liu', 'Lirong Wu', 'Stan Z. Li']
2,023
Neural Information Processing Systems
63
68
['Computer Science']
2,306.11372
Democratizing LLMs for Low-Resource Languages by Leveraging their English Dominant Abilities with Linguistically-Diverse Prompts
['Xuan-Phi Nguyen', 'Sharifah Mahani Aljunied', 'Shafiq Joty', 'Lidong Bing']
['cs.CL', 'cs.AI']
Large language models (LLMs) are known to effectively perform tasks by simply observing few exemplars. However, in low-resource languages, obtaining such hand-picked exemplars can still be challenging, where unsupervised techniques may be necessary. Moreover, competent generative capabilities of LLMs are observed only ...
2023-06-20T08:27:47Z
ACL 2024 Main Conference
null
null
Democratizing LLMs for Low-Resource Languages by Leveraging their English Dominant Abilities with Linguistically-Diverse Prompts
['Xuan-Phi Nguyen', 'Sharifah Mahani Aljunied', 'Shafiq R. Joty', 'Lidong Bing']
2,023
Annual Meeting of the Association for Computational Linguistics
39
57
['Computer Science']
2,306.11644
Textbooks Are All You Need
['Suriya Gunasekar', 'Yi Zhang', 'Jyoti Aneja', 'Caio César Teodoro Mendes', 'Allie Del Giorno', 'Sivakanth Gopi', 'Mojan Javaheripi', 'Piero Kauffmann', 'Gustavo de Rosa', 'Olli Saarikivi', 'Adil Salim', 'Shital Shah', 'Harkirat Singh Behl', 'Xin Wang', 'Sébastien Bubeck', 'Ronen Eldan', 'Adam Tauman Kalai', 'Yin Tat ...
['cs.CL', 'cs.AI', 'cs.LG']
We introduce phi-1, a new large language model for code, with significantly smaller size than competing models: phi-1 is a Transformer-based model with 1.3B parameters, trained for 4 days on 8 A100s, using a selection of "textbook quality" data from the web (6B tokens) and synthetically generated textbooks and exercis...
2023-06-20T16:14:25Z
26 pages; changed color scheme of plot. fixed minor typos and added couple clarifications
null
null
null
null
null
null
null
null
null
2,306.11695
A Simple and Effective Pruning Approach for Large Language Models
['Mingjie Sun', 'Zhuang Liu', 'Anna Bair', 'J. Zico Kolter']
['cs.CL', 'cs.AI', 'cs.LG']
As their size increases, Large Languages Models (LLMs) are natural candidates for network pruning methods: approaches that drop a subset of network weights while striving to preserve performance. Existing methods, however, require either retraining, which is rarely affordable for billion-scale LLMs, or solving a weight...
2023-06-20T17:18:20Z
ICLR 2024. Website at https://eric-mingjie.github.io/wanda/home.html
null
null
A Simple and Effective Pruning Approach for Large Language Models
['Mingjie Sun', 'Zhuang Liu', 'Anna Bair', 'J. Z. Kolter']
2,023
International Conference on Learning Representations
443
107
['Computer Science']
2,306.11925
LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching
['Duy M. H. Nguyen', 'Hoang Nguyen', 'Nghiem T. Diep', 'Tan N. Pham', 'Tri Cao', 'Binh T. Nguyen', 'Paul Swoboda', 'Nhat Ho', 'Shadi Albarqouni', 'Pengtao Xie', 'Daniel Sonntag', 'Mathias Niepert']
['cs.CV']
Obtaining large pre-trained models that can be fine-tuned to new tasks with limited annotated samples has remained an open challenge for medical imaging data. While pre-trained deep networks on ImageNet and vision-language foundation models trained on web-scale data are prevailing approaches, their effectiveness on med...
2023-06-20T22:21:34Z
Accepted at NeurIPS 2023
null
null
LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching
['D. M. Nguyen', 'Hoang Nguyen', 'N. T. Diep', 'T. Pham', 'T. Cao', 'Binh Duc Nguyen', 'P. Swoboda', 'Nhat Ho', 'Shadi Albarqouni', 'Pengtao Xie', 'Daniel Sonntag', 'Mathias Niepert']
2,023
Neural Information Processing Systems
55
140
['Computer Science']
2,306.12059
EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations
['Yi-Lun Liao', 'Brandon Wood', 'Abhishek Das', 'Tess Smidt']
['cs.LG', 'cs.AI', 'physics.comp-ph']
Equivariant Transformers such as Equiformer have demonstrated the efficacy of applying Transformers to the domain of 3D atomistic systems. However, they are limited to small degrees of equivariant representations due to their computational complexity. In this paper, we investigate whether these architectures can scale ...
2023-06-21T07:01:38Z
Published as a conference paper at ICLR 2024
null
null
EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations
['Yidong Liao', 'Brandon Wood', 'Abhishek Das', 'T. Smidt']
2,023
International Conference on Learning Representations
159
80
['Computer Science', 'Physics']
2,306.12156
Fast Segment Anything
['Xu Zhao', 'Wenchao Ding', 'Yongqi An', 'Yinglong Du', 'Tao Yu', 'Min Li', 'Ming Tang', 'Jinqiao Wang']
['cs.CV', 'cs.AI']
The recently proposed segment anything model (SAM) has made a significant influence in many computer vision tasks. It is becoming a foundation step for many high-level tasks, like image segmentation, image caption, and image editing. However, its huge computation costs prevent it from wider applications in industry sce...
2023-06-21T10:08:29Z
Technical Report. The code is released at https://github.com/CASIA-IVA-Lab/FastSAM
null
null
null
null
null
null
null
null
null
2,306.12420
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
['Shizhe Diao', 'Rui Pan', 'Hanze Dong', 'Ka Shun Shum', 'Jipeng Zhang', 'Wei Xiong', 'Tong Zhang']
['cs.CL', 'cs.AI']
Foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, an increasing number of foundation models are becoming publicly accessible. However, a significant shortcoming of most of ...
2023-06-21T17:58:25Z
Published in NAACL 2024 Demo Track
null
null
null
null
null
null
null
null
null
2,306.12766
Mapping and Cleaning Open Commonsense Knowledge Bases with Generative Translation
['Julien Romero', 'Simon Razniewski']
['cs.CL']
Structured knowledge bases (KBs) are the backbone of many knowledge-intensive applications, and their automated construction has received considerable attention. In particular, open information extraction (OpenIE) is often used to induce structure from a text. However, although it allows high recall, the extracted kn...
2023-06-22T09:42:54Z
null
null
null
null
null
null
null
null
null
null
2,306.12802
Otter-Knowledge: benchmarks of multimodal knowledge graph representation learning from different sources for drug discovery
['Hoang Thanh Lam', 'Marco Luca Sbodio', 'Marcos Martínez Galindo', 'Mykhaylo Zayats', 'Raúl Fernández-Díaz', 'Víctor Valls', 'Gabriele Picco', 'Cesar Berrospi Ramis', 'Vanessa López']
['cs.LG', 'cs.AI', 'q-bio.BM']
Recent research on predicting the binding affinity between drug molecules and proteins use representations learned, through unsupervised learning techniques, from large databases of molecule SMILES and protein sequences. While these representations have significantly enhanced the predictions, they are usually based on ...
2023-06-22T11:01:41Z
null
null
null
Otter-Knowledge: benchmarks of multimodal knowledge graph representation learning from different sources for drug discovery
['Hoang Thanh Lam', 'M. Sbodio', 'Marcos Martínez Galindo', 'Mykhaylo Zayats', 'Raúl Fernández-Díaz', 'Victor Valls', 'Gabriele Picco', 'Cesar Berrospi Ramis', 'V. López']
2,023
arXiv.org
8
38
['Computer Science', 'Biology']
2,306.12991
Speech Emotion Diarization: Which Emotion Appears When?
['Yingzhi Wang', 'Mirco Ravanelli', 'Alya Yacoubi']
['cs.CL']
Speech Emotion Recognition (SER) typically relies on utterance-level solutions. However, emotions conveyed through speech should be considered as discrete speech events with definite temporal boundaries, rather than attributes of the entire utterance. To reflect the fine-grained nature of speech emotions, we propose a ...
2023-06-22T15:47:36Z
Accepted to ASRU 2023
null
null
null
null
null
null
null
null
null
2,306.13643
LightGlue: Local Feature Matching at Light Speed
['Philipp Lindenberger', 'Paul-Edouard Sarlin', 'Marc Pollefeys']
['cs.CV']
We introduce LightGlue, a deep neural network that learns to match local features across images. We revisit multiple design decisions of SuperGlue, the state of the art in sparse matching, and derive simple but effective improvements. Cumulatively, they make LightGlue more efficient - in terms of both memory and comput...
2023-06-23T17:52:54Z
null
null
null
LightGlue: Local Feature Matching at Light Speed
['Philipp Lindenberger', 'Paul-Edouard Sarlin', 'M. Pollefeys']
2,023
IEEE International Conference on Computer Vision
456
84
['Computer Science']
2,306.13649
On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes
['Rishabh Agarwal', 'Nino Vieillard', 'Yongchao Zhou', 'Piotr Stanczyk', 'Sabela Ramos', 'Matthieu Geist', 'Olivier Bachem']
['cs.LG', 'cs.AI', 'cs.CL']
Knowledge distillation (KD) is widely used for compressing a teacher model to reduce its inference cost and memory footprint, by training a smaller student model. However, current KD methods for auto-regressive sequence models suffer from distribution mismatch between output sequences seen during training and those gen...
2023-06-23T17:56:26Z
Accepted at ICLR 2024. First two authors contributed equally
null
null
null
null
null
null
null
null
null
2,306.13888
L3Cube-MahaSent-MD: A Multi-domain Marathi Sentiment Analysis Dataset and Transformer Models
['Aabha Pingle', 'Aditya Vyawahare', 'Isha Joshi', 'Rahul Tangsali', 'Raviraj Joshi']
['cs.CL', 'cs.LG']
The exploration of sentiment analysis in low-resource languages, such as Marathi, has been limited due to the availability of suitable datasets. In this work, we present L3Cube-MahaSent-MD, a multi-domain Marathi sentiment analysis dataset, with four different domains - movie reviews, general tweets, TV show subtitles,...
2023-06-24T07:27:53Z
Accepted at DMLR Workshop @ ICML 2023
null
null
L3Cube-MahaSent-MD: A Multi-domain Marathi Sentiment Analysis Dataset and Transformer Models
['Aabha Pingle', 'Aditya Vyawahare', 'Isha Joshi', 'Rahul Tangsali', 'Raviraj Joshi']
2,023
Pacific Asia Conference on Language, Information and Computation
9
27
['Computer Science']
2,306.14030
My Boli: Code-mixed Marathi-English Corpora, Pretrained Language Models and Evaluation Benchmarks
['Tanmay Chavan', 'Omkar Gokhale', 'Aditya Kane', 'Shantanu Patankar', 'Raviraj Joshi']
['cs.CL', 'cs.LG']
The research on code-mixed data is limited due to the unavailability of dedicated code-mixed datasets and pre-trained language models. In this work, we focus on the low-resource Indian language Marathi which lacks any prior work in code-mixing. We present L3Cube-MeCorpus, a large code-mixed Marathi-English (Mr-En) corp...
2023-06-24T18:17:38Z
null
null
null
My Boli: Code-mixed Marathi-English Corpora, Pretrained Language Models and Evaluation Benchmarks
['Tanmay Chavan', 'Omkar Gokhale', 'Aditya Kane', 'Shantanu Patankar', 'Raviraj Joshi']
2023
International Joint Conference on Natural Language Processing
3
22
['Computer Science']
2306.14256
A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
['Marcelo Archanjo Jose', 'Fabio Gagliardi Cozman']
['cs.AI', '68T07, 68T50', 'I.2.7; H.3.3']
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema)...
2023-06-25T14:28:12Z
This preprint has not undergone peer review or any post-submission improvements or corrections. The Version of Record of this article is published in International Journal of Information Technology, and is available online at https://doi.org/10.1007/s41870-023-01342-3 . SharedIt link: https://rdcu.be/dff19
null
10.1007/s41870-023-01342-3
null
null
null
null
null
null
null
2306.14289
Faster Segment Anything: Towards Lightweight SAM for Mobile Applications
['Chaoning Zhang', 'Dongshen Han', 'Yu Qiao', 'Jung Uk Kim', 'Sung-Ho Bae', 'Seungkyu Lee', 'Choong Seon Hong']
['cs.CV']
Segment Anything Model (SAM) has attracted significant attention due to its impressive zero-shot transfer performance and high versatility for numerous vision applications (like image editing with fine-grained control). Many of such applications need to be run on resource-constraint edge devices, like mobile phones. In...
2023-06-25T16:37:25Z
First work to make SAM lightweight for mobile applications
null
null
Faster Segment Anything: Towards Lightweight SAM for Mobile Applications
['Chaoning Zhang', 'Dongshen Han', 'Yu Qiao', 'Jung Uk Kim', 'S. Bae', 'Seungkyu Lee', 'Choong-Seon Hong']
2023
arXiv.org
364
41
['Computer Science']
2306.14291
Hyp-OW: Exploiting Hierarchical Structure Learning with Hyperbolic Distance Enhances Open World Object Detection
['Thang Doan', 'Xin Li', 'Sima Behpour', 'Wenbin He', 'Liang Gou', 'Liu Ren']
['cs.CV', 'cs.LG']
Open World Object Detection (OWOD) is a challenging and realistic task that extends beyond the scope of standard Object Detection task. It involves detecting both known and unknown objects while integrating learned knowledge for future tasks. However, the level of "unknownness" varies significantly depending on the con...
2023-06-25T16:45:20Z
Accepted at AAAI 2024 || keywords: Open World Object Detection, Hyperbolic Distance, Unknown Detection, Deformable Transformers, Hierarchical Representation Learning
null
null
null
null
null
null
null
null
null
2306.14517
Cross-Lingual Cross-Age Group Adaptation for Low-Resource Elderly Speech Emotion Recognition
['Samuel Cahyawijaya', 'Holy Lovenia', 'Willy Chung', 'Rita Frieske', 'Zihan Liu', 'Pascale Fung']
['cs.CL', 'cs.SD', 'eess.AS']
Speech emotion recognition plays a crucial role in human-computer interactions. However, most speech emotion recognition research is biased toward English-speaking adults, which hinders its applicability to other demographic groups in different languages and age groups. In this work, we analyze the transferability of e...
2023-06-26T08:48:08Z
Accepted in INTERSPEECH 2023
null
null
Cross-Lingual Cross-Age Group Adaptation for Low-Resource Elderly Speech Emotion Recognition
['Samuel Cahyawijaya', 'Holy Lovenia', 'Willy Chung', 'Rita Frieske', 'Zihan Liu', 'Pascale Fung']
2023
arXiv.org
1
52
['Computer Science', 'Engineering']
2306.14592
Transfer Learning across Several Centuries: Machine and Historian Integrated Method to Decipher Royal Secretary's Diary
['Sojung Lucia Kim', 'Taehong Jang', 'Joonmo Ahn', 'Hyungil Lee', 'Jaehyuk Lee']
['cs.CL', 'cs.DL']
A named entity recognition and classification plays the first and foremost important role in capturing semantics in data and anchoring in translation as well as downstream study for history. However, NER in historical text has faced challenges such as scarcity of annotated corpus, multilanguage variety, various noise, ...
2023-06-26T11:00:35Z
7 pages, 9 figures
null
null
Transfer Learning across Several Centuries: Machine and Historian Integrated Method to Decipher Royal Secretary's Diary
['Sojung Lucia Kim', 'Tae Young Jang', 'Joonmo Ahn', 'Hyungi Lee', 'Jaehyuk Lee']
2023
arXiv.org
1
22
['Computer Science']
2306.14795
MotionGPT: Human Motion as a Foreign Language
['Biao Jiang', 'Xin Chen', 'Wen Liu', 'Jingyi Yu', 'Gang Yu', 'Tao Chen']
['cs.CV', 'cs.CL', 'cs.GR']
Though the advancement of pre-trained large language models unfolds, the exploration of building a unified model for language and other multi-modal data, such as motion, remains challenging and untouched so far. Fortunately, human motion displays a semantic coupling akin to human language, often perceived as a form of ...
2023-06-26T15:53:02Z
Project Page: https://github.com/OpenMotionLab/MotionGPT
null
null
MotionGPT: Human Motion as a Foreign Language
['Biao Jiang', 'Xin Chen', 'Wen Liu', 'Jingyi Yu', 'Gang Yu', 'Tao Chen']
2023
Neural Information Processing Systems
297
73
['Computer Science']
2306.14895
Large Multimodal Models: Notes on CVPR 2023 Tutorial
['Chunyuan Li']
['cs.CV']
This tutorial note summarizes the presentation on ``Large Multimodal Models: Towards Building and Surpassing Multimodal GPT-4'', a part of CVPR 2023 tutorial on ``Recent Advances in Vision Foundation Models''. The tutorial consists of three parts. We first introduce the background on recent GPT-like large models for vi...
2023-06-26T17:59:31Z
27 pages, 24 figures; Tutorial website: https://vlp-tutorial.github.io/
null
null
Large Multimodal Models: Notes on CVPR 2023 Tutorial
['Chunyuan Li']
2023
arXiv.org
20
65
['Computer Science']
2306.15006
DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genome
['Zhihan Zhou', 'Yanrong Ji', 'Weijian Li', 'Pratik Dutta', 'Ramana Davuluri', 'Han Liu']
['q-bio.GN', 'cs.AI', 'cs.CE', 'cs.CL']
Decoding the linguistic intricacies of the genome is a crucial problem in biology, and pre-trained foundational models such as DNABERT and Nucleotide Transformer have made significant strides in this area. Existing works have largely hinged on k-mer, fixed-length permutations of A, T, C, and G, as the token of the geno...
2023-06-26T18:43:46Z
Accepted by ICLR 2024
null
null
DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genome
['Zhihan Zhou', 'Yanrong Ji', 'Weijian Li', 'Pratik Dutta', 'R. Davuluri', 'Han Liu']
2023
arXiv.org
199
38
['Biology', 'Computer Science']
2306.15350
CellViT: Vision Transformers for Precise Cell Segmentation and Classification
['Fabian Hörst', 'Moritz Rempe', 'Lukas Heine', 'Constantin Seibold', 'Julius Keyl', 'Giulia Baldini', 'Selma Ugurel', 'Jens Siveke', 'Barbara Grünwald', 'Jan Egger', 'Jens Kleesiek']
['eess.IV', 'cs.CV', 'cs.LG']
Nuclei detection and segmentation in hematoxylin and eosin-stained (H&E) tissue images are important clinical tasks and crucial for a wide range of applications. However, it is a challenging task due to nuclei variances in staining and size, overlapping boundaries, and nuclei clustering. While convolutional neural netw...
2023-06-27T10:03:15Z
18 pages, 5 figures, appendix included
null
null
null
null
null
null
null
null
null
2306.15447
Are aligned neural networks adversarially aligned?
['Nicholas Carlini', 'Milad Nasr', 'Christopher A. Choquette-Choo', 'Matthew Jagielski', 'Irena Gao', 'Anas Awadalla', 'Pang Wei Koh', 'Daphne Ippolito', 'Katherine Lee', 'Florian Tramer', 'Ludwig Schmidt']
['cs.CL', 'cs.AI', 'cs.CR', 'cs.LG']
Large language models are now tuned to align with the goals of their creators, namely to be "helpful and harmless." These models should respond helpfully to user questions, but refuse to answer requests that could cause harm. However, adversarial users can construct inputs which circumvent attempts at alignment. In thi...
2023-06-26T17:18:44Z
null
null
null
Are aligned neural networks adversarially aligned?
['Nicholas Carlini', 'Milad Nasr', 'Christopher A. Choquette-Choo', 'Matthew Jagielski', 'Irena Gao', 'Anas Awadalla', 'Pang Wei Koh', 'Daphne Ippolito', 'Katherine Lee', 'Florian Tramèr', 'Ludwig Schmidt']
2023
Neural Information Processing Systems
254
57
['Computer Science']
2306.15595
Extending Context Window of Large Language Models via Positional Interpolation
['Shouyuan Chen', 'Sherman Wong', 'Liangjian Chen', 'Yuandong Tian']
['cs.CL', 'cs.AI', 'cs.LG']
We present Position Interpolation (PI) that extends the context window sizes of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal fine-tuning (within 1000 steps), while demonstrating strong empirical results on various tasks that require long context, including passkey retrieval, language mode...
2023-06-27T16:26:26Z
Fix template issues
null
null
Extending Context Window of Large Language Models via Positional Interpolation
['Shouyuan Chen', 'Sherman Wong', 'Liangjian Chen', 'Yuandong Tian']
2023
arXiv.org
544
47
['Computer Science']
2306.15604
Constructing Multilingual Code Search Dataset Using Neural Machine Translation
['Ryo Sekizawa', 'Nan Duan', 'Shuai Lu', 'Hitomi Yanaka']
['cs.CL', 'cs.SE']
Code search is a task to find programming codes that semantically match the given natural language queries. Even though some of the existing datasets for this task are multilingual on the programming language side, their query data are only in English. In this research, we create a multilingual code search dataset in f...
2023-06-27T16:42:36Z
To appear in the Proceedings of the ACL2023 Student Research Workshop (SRW)
null
null
null
null
null
null
null
null
null
2306.15626
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
['Kaiyu Yang', 'Aidan M. Swope', 'Alex Gu', 'Rahul Chalamala', 'Peiyang Song', 'Shixing Yu', 'Saad Godil', 'Ryan Prenger', 'Anima Anandkumar']
['cs.LG', 'cs.AI', 'cs.LO', 'stat.ML']
Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for t...
2023-06-27T17:05:32Z
Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral presentation. Data, code, and models available at https://leandojo.org/
null
null
null
null
null
null
null
null
null
2306.15658
CLIPA-v2: Scaling CLIP Training with 81.1% Zero-shot ImageNet Accuracy within a \$10,000 Budget; An Extra \$4,000 Unlocks 81.8% Accuracy
['Xianhang Li', 'Zeyu Wang', 'Cihang Xie']
['cs.CV']
The recent work CLIPA presents an inverse scaling law for CLIP training -- whereby the larger the image/text encoders used, the shorter the sequence length of image/text tokens that can be applied in training. This finding enables us to train high-performance CLIP models with significantly reduced computations. Buildin...
2023-06-27T17:51:06Z
Tech Report. Code is available at https://github.com/UCSC-VLAA/CLIPA
null
null
null
null
null
null
null
null
null
2306.15687
Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale
['Matthew Le', 'Apoorv Vyas', 'Bowen Shi', 'Brian Karrer', 'Leda Sari', 'Rashel Moritz', 'Mary Williamson', 'Vimal Manohar', 'Yossi Adi', 'Jay Mahadeokar', 'Wei-Ning Hsu']
['eess.AS', 'cs.CL', 'cs.LG', 'cs.SD']
Large-scale generative models such as GPT and DALL-E have revolutionized the research community. These models not only generate high fidelity outputs, but are also generalists which can solve tasks not explicitly taught. In contrast, speech generative models are still primitive in terms of scale and task generalization...
2023-06-23T16:23:24Z
Accepted to NeurIPS 2023
null
null
null
null
null
null
null
null
null