Dataset schema (column, dtype, summary statistics as shown by the viewer):

  arxiv_id           float64   min 1.5k, max 2.51k
  title              string    length 9 to 178
  authors            string    length 2 to 22.8k
  categories         string    length 4 to 146
  summary            string    length 103 to 1.92k
  published          date      2015-02-06 10:44:00 to 2025-07-10 17:59:58
  comments           string    length 2 to 417
  journal_ref        string    321 distinct values
  doi                string    398 distinct values
  ss_title           string    length 8 to 159
  ss_authors         string    length 11 to 8.38k
  ss_year            float64   min 2.02k, max 2.03k
  ss_venue           string    281 distinct values
  ss_citationCount   float64   min 0, max 134k
  ss_referenceCount  float64   min 0, max 429
  ss_fieldsOfStudy   string    47 distinct values

Note: arxiv_id and ss_year are numeric columns, so the viewer renders them with thousands separators ("2,308.02019" denotes arXiv ID 2308.02019, "2,023" denotes the year 2023) and float64 storage drops trailing zeros from ID sequence numbers. List-valued fields (authors, categories, ss_authors, ss_fieldsOfStudy) are serialized as Python list literals; ss_* columns come from Semantic Scholar and are null when no match was found.

Sample rows, one labeled record per block:
arxiv_id: 2308.02019
title: Baby Llama: knowledge distillation from an ensemble of teachers trained on a small dataset with no performance penalty
authors: ['Inar Timiryasov', 'Jean-Loup Tastet']
categories: ['cs.CL', 'I.2.7']
summary: We present our submission to the BabyLM challenge, whose goal was to improve the sample efficiency of language models. We trained an ensemble consisting of a GPT-2 and small LLaMA models on the developmentally-plausible, 10M-word BabyLM dataset, then distilled it into a small, 58M-parameter LLaMA model, which exceeds i...
published: 2023-08-03T20:20:01Z
comments: 11 pages, 4 figures, 4 tables, submitted to the BabyLM Challenge and accepted as archival full paper (CoNLL--CMCL 2023 Shared Task), checkpoint available at https://huggingface.co/timinar/baby-llama-58m, training code available at https://github.com/timinar/BabyLlama
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)
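The list-valued columns (authors, categories, ss_authors, ss_fieldsOfStudy) are stored as stringified Python list literals. A minimal sketch of parsing them safely with `ast.literal_eval`, which evaluates literals only (unlike `eval`); the `row` dict here is an illustrative stand-in for one parsed record:

```python
import ast

# One row of the dataset, with list fields still serialized as strings.
row = {
    "authors": "['Inar Timiryasov', 'Jean-Loup Tastet']",
    "categories": "['cs.CL', 'I.2.7']",
}

# literal_eval turns the string back into a real Python list; it also
# handles entries that mix quote styles (e.g. "Jorge P'erez" in ss_authors,
# where the serializer switched the outer quotes to double quotes).
authors = ast.literal_eval(row["authors"])
categories = ast.literal_eval(row["categories"])
print(authors[0])   # Inar Timiryasov
print(categories)   # ['cs.CL', 'I.2.7']
```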
arxiv_id: 2308.02142
title: Tweet Insights: A Visualization Platform to Extract Temporal Insights from Twitter
authors: ['Daniel Loureiro', 'Kiamehr Rezaee', 'Talayeh Riahi', 'Francesco Barbieri', 'Leonardo Neves', 'Luis Espinosa Anke', 'Jose Camacho-Collados']
categories: ['cs.CL', 'cs.SI']
summary: This paper introduces a large collection of time series data derived from Twitter, postprocessed using word embedding techniques, as well as specialized fine-tuned language models. This data comprises the past five years and captures changes in n-gram frequency, similarity, sentiment and topic distribution. The interfa...
published: 2023-08-04T05:39:26Z
comments: Demo paper. Visualization platform available at https://tweetnlp.org/insights
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.02223
title: ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation
authors: ['Chenglong Wang', 'Hang Zhou', 'Yimin Hu', 'Yifu Huo', 'Bei Li', 'Tongran Liu', 'Tong Xiao', 'Jingbo Zhu']
categories: ['cs.CL']
summary: Applying Reinforcement Learning (RL) to sequence generation models enables the direct optimization of long-term rewards (\textit{e.g.,} BLEU and human feedback), but typically requires large-scale sampling over a space of action sequences. This is a computational challenge as presented by the practice of sequence gener...
published: 2023-08-04T09:35:45Z
comments: null
journal_ref: null
doi: null
ss_title: ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation
ss_authors: ['Chenglong Wang', 'Hang Zhou', 'Yimin Hu', 'Yi Huo', 'Bei Li', 'Tongran Liu', 'Tong Xiao', 'Jingbo Zhu']
ss_year: 2023
ss_venue: AAAI Conference on Artificial Intelligence
ss_citationCount: 9
ss_referenceCount: 56
ss_fieldsOfStudy: ['Computer Science']
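As the two records above show, a row either carries a full set of ss_* values or nulls in every ss_* field. A minimal sketch for flagging Semantic Scholar-matched rows, assuming nulls surface as None in parsed records; `has_ss_match` and the inline `rows` are hypothetical illustrations, not part of the dataset:

```python
def has_ss_match(row: dict) -> bool:
    """A row counts as SS-matched when any ss_* field is populated."""
    return any(v is not None for k, v in row.items() if k.startswith("ss_"))

# Two abbreviated rows mirroring the records above: one unmatched, one matched.
rows = [
    {"arxiv_id": "2308.02142", "ss_title": None, "ss_year": None},
    {"arxiv_id": "2308.02223", "ss_title": "ESRL: ...", "ss_year": 2023},
]
matched = [r["arxiv_id"] for r in rows if has_ss_match(r)]
print(matched)  # ['2308.02223']
```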
arxiv_id: 2308.02559
title: DLSIA: Deep Learning for Scientific Image Analysis
authors: ['Eric J Roberts', 'Tanny Chavez', 'Alexander Hexemer', 'Petrus H. Zwart']
categories: ['cs.CV', 'cs.LG', 'hep-ex']
summary: We introduce DLSIA (Deep Learning for Scientific Image Analysis), a Python-based machine learning library that empowers scientists and researchers across diverse scientific domains with a range of customizable convolutional neural network (CNN) architectures for a wide variety of tasks in image analysis to be used in d...
published: 2023-08-02T21:32:41Z
comments: 10 pages, two column, 9 figures, 1 Supplementary section
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.02976
title: Spanish Pre-trained BERT Model and Evaluation Data
authors: ['José Cañete', 'Gabriel Chaperon', 'Rodrigo Fuentes', 'Jou-Hui Ho', 'Hojin Kang', 'Jorge Pérez']
categories: ['cs.CL', 'cs.AI', 'cs.LG']
summary: The Spanish language is one of the top 5 spoken languages in the world. Nevertheless, finding resources to train or evaluate Spanish language models is not an easy task. In this paper we help bridge this gap by presenting a BERT-based language model pre-trained exclusively on Spanish data. As a second contribution, we ...
published: 2023-08-06T00:16:04Z
comments: Published as workshop paper at Practical ML for Developing Countries Workshop @ ICLR 2020
journal_ref: null
doi: null
ss_title: Spanish Pre-trained BERT Model and Evaluation Data
ss_authors: ['J. Cañete', 'Gabriel Chaperon', 'Rodrigo Fuentes', 'Jou-Hui Ho', 'Hojin Kang', "Jorge P'erez"]
ss_year: 2023
ss_venue: arXiv.org
ss_citationCount: 667
ss_referenceCount: 50
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2308.03279
title: UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition
authors: ['Wenxuan Zhou', 'Sheng Zhang', 'Yu Gu', 'Muhao Chen', 'Hoifung Poon']
categories: ['cs.CL']
summary: Large language models (LLMs) have demonstrated remarkable generalizability, such as understanding arbitrary entities and relations. Instruction tuning has proven effective for distilling LLMs into more cost-efficient models such as Alpaca and Vicuna. Yet such student models still trail the original LLMs by large margin...
published: 2023-08-07T03:39:52Z
comments: Accepted at ICLR 2024. Project page: https://universal-ner.github.io/
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.03281
title: Towards General Text Embeddings with Multi-stage Contrastive Learning
authors: ['Zehan Li', 'Xin Zhang', 'Yanzhao Zhang', 'Dingkun Long', 'Pengjun Xie', 'Meishan Zhang']
categories: ['cs.CL']
summary: We present GTE, a general-purpose text embedding model trained with multi-stage contrastive learning. In line with recent advancements in unifying various NLP tasks into a single format, we train a unified text embedding model by employing contrastive learning over a diverse mixture of datasets from multiple sources. B...
published: 2023-08-07T03:52:59Z
comments: null
journal_ref: null
doi: null
ss_title: Towards General Text Embeddings with Multi-stage Contrastive Learning
ss_authors: ['Zehan Li', 'Xin Zhang', 'Yanzhao Zhang', 'Dingkun Long', 'Pengjun Xie', 'Meishan Zhang']
ss_year: 2023
ss_venue: arXiv.org
ss_citationCount: 422
ss_referenceCount: 66
ss_fieldsOfStudy: ['Computer Science']
arxiv_id: 2308.03364
title: Dual Aggregation Transformer for Image Super-Resolution
authors: ['Zheng Chen', 'Yulun Zhang', 'Jinjin Gu', 'Linghe Kong', 'Xiaokang Yang', 'Fisher Yu']
categories: ['cs.CV']
summary: Transformer has recently gained considerable popularity in low-level vision tasks, including image super-resolution (SR). These networks utilize self-attention along different dimensions, spatial or channel, and achieve impressive performance. This inspires us to combine the two dimensions in Transformer for a more pow...
published: 2023-08-07T07:39:39Z
comments: Accepted to ICCV 2023. Code is available at https://github.com/zhengchen1999/DAT
journal_ref: null
doi: null
ss_title: Dual Aggregation Transformer for Image Super-Resolution
ss_authors: ['Zheng Chen', 'Yulun Zhang', 'Jinjin Gu', 'L. Kong', 'Xiaokang Yang', 'F. Yu']
ss_year: 2023
ss_venue: IEEE International Conference on Computer Vision
ss_citationCount: 189
ss_referenceCount: 66
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2308.03463
title: DiffSynth: Latent In-Iteration Deflickering for Realistic Video Synthesis
authors: ['Zhongjie Duan', 'Lizhou You', 'Chengyu Wang', 'Cen Chen', 'Ziheng Wu', 'Weining Qian', 'Jun Huang']
categories: ['cs.CV', 'cs.MM']
summary: In recent years, diffusion models have emerged as the most powerful approach in image synthesis. However, applying these models directly to video synthesis presents challenges, as it often leads to noticeable flickering contents. Although recently proposed zero-shot methods can alleviate flicker to some extent, we stil...
published: 2023-08-07T10:41:52Z
comments: 9 pages, 6 figures
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.03549
title: Zhongjing: Enhancing the Chinese Medical Capabilities of Large Language Model through Expert Feedback and Real-world Multi-turn Dialogue
authors: ['Songhua Yang', 'Hanjie Zhao', 'Senbin Zhu', 'Guangyu Zhou', 'Hongfei Xu', 'Yuxiang Jia', 'Hongying Zan']
categories: ['cs.CL']
summary: Recent advances in Large Language Models (LLMs) have achieved remarkable breakthroughs in understanding and responding to user intents. However, their performance lags behind general use cases in some expertise domains, such as Chinese medicine. Existing efforts to incorporate Chinese medicine into LLMs rely on Supervis...
published: 2023-08-07T12:56:13Z
comments: null
journal_ref: null
doi: null
ss_title: Zhongjing: Enhancing the Chinese Medical Capabilities of Large Language Model through Expert Feedback and Real-world Multi-turn Dialogue
ss_authors: ['Songhua Yang', 'Hanjia Zhao', 'Senbin Zhu', 'Guangyu Zhou', 'Hongfei Xu', 'Yuxiang Jia', 'Hongying Zan']
ss_year: 2023
ss_venue: AAAI Conference on Artificial Intelligence
ss_citationCount: 137
ss_referenceCount: 42
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2308.03610 (rendered as 2308.0361 by the float64 column)
title: AvatarVerse: High-quality & Stable 3D Avatar Creation from Text and Pose
authors: ['Huichao Zhang', 'Bowen Chen', 'Hao Yang', 'Liao Qu', 'Xu Wang', 'Li Chen', 'Chao Long', 'Feida Zhu', 'Kang Du', 'Min Zheng']
categories: ['cs.CV']
summary: Creating expressive, diverse and high-quality 3D avatars from highly customized text descriptions and pose guidance is a challenging task, due to the intricacy of modeling and texturing in 3D that ensure details and various styles (realistic, fictional, etc). We present AvatarVerse, a stable pipeline for generating exp...
published: 2023-08-07T14:09:46Z
comments: null
journal_ref: null
doi: null
ss_title: AvatarVerse: High-quality & Stable 3D Avatar Creation from Text and Pose
ss_authors: ['Huichao Zhang', 'Bo Chen', 'Hao Yang', 'Liao Qu', 'Xu Wang', 'Li Chen', 'Chao Long', 'Feida Zhu', 'Kang Du', 'Minghang Zheng']
ss_year: 2023
ss_venue: AAAI Conference on Artificial Intelligence
ss_citationCount: 53
ss_referenceCount: 42
ss_fieldsOfStudy: ['Computer Science']
arxiv_id: 2308.03712
title: Scaling may be all you need for achieving human-level object recognition capacity with human-like visual experience
authors: ['A. Emin Orhan']
categories: ['cs.CV', 'cs.LG', 'cs.NE', 'q-bio.NC']
summary: This paper asks whether current self-supervised learning methods, if sufficiently scaled up, would be able to reach human-level visual object recognition capabilities with the same type and amount of visual experience humans learn from. Previous work on this question only considered the scaling of data size. Here, we c...
published: 2023-08-07T16:31:38Z
comments: v2 adds an Appendix containing results with alternative scaling functions; code & models available from https://github.com/eminorhan/humanlike-vits
journal_ref: null
doi: null
ss_title: Scaling may be all you need for achieving human-level object recognition capacity with human-like visual experience
ss_authors: ['Emin Orhan']
ss_year: 2023
ss_venue: arXiv.org
ss_citationCount: 3
ss_referenceCount: 17
ss_fieldsOfStudy: ['Computer Science', 'Biology']

arxiv_id: 2308.03825
title: "Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models
authors: ['Xinyue Shen', 'Zeyuan Chen', 'Michael Backes', 'Yun Shen', 'Yang Zhang']
categories: ['cs.CR', 'cs.LG']
summary: The misuse of large language models (LLMs) has drawn significant attention from the general public and LLM vendors. One particular type of adversarial prompt, known as jailbreak prompt, has emerged as the main attack vector to bypass the safeguards and elicit harmful content from LLMs. In this paper, employing our new ...
published: 2023-08-07T16:55:20Z
comments: null
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.04014
title: Continual Pre-Training of Large Language Models: How to (re)warm your model?
authors: ['Kshitij Gupta', 'Benjamin Thérien', 'Adam Ibrahim', 'Mats L. Richter', 'Quentin Anthony', 'Eugene Belilovsky', 'Irina Rish', 'Timothée Lesort']
categories: ['cs.CL', 'cs.LG']
summary: Large language models (LLMs) are routinely pre-trained on billions of tokens, only to restart the process over again once new data becomes available. A much cheaper and more efficient solution would be to enable the continual pre-training of these models, i.e. updating pre-trained models with new data instead of re-tra...
published: 2023-08-08T03:18:18Z
comments: null
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.04657
title: Which Tokens to Use? Investigating Token Reduction in Vision Transformers
authors: ['Joakim Bruslund Haurum', 'Sergio Escalera', 'Graham W. Taylor', 'Thomas B. Moeslund']
categories: ['cs.CV']
summary: Since the introduction of the Vision Transformer (ViT), researchers have sought to make ViTs more efficient by removing redundant information in the processed tokens. While different methods have been explored to achieve this goal, we still lack understanding of the resulting reduction patterns and how those patterns d...
published: 2023-08-09T01:51:07Z
comments: ICCV 2023 NIVT Workshop. Project webpage https://vap.aau.dk/tokens
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)
arxiv_id: 2308.04913
title: LLaMA-E: Empowering E-commerce Authoring with Object-Interleaved Instruction Following
authors: ['Kaize Shi', 'Xueyao Sun', 'Dingxian Wang', 'Yinlin Fu', 'Guandong Xu', 'Qing Li']
categories: ['cs.CL', 'cs.AI', 'cs.IR']
summary: E-commerce authoring entails creating engaging, diverse, and targeted content to enhance preference elicitation and retrieval experience. While Large Language Models (LLMs) have revolutionized content generation, they often fall short in e-commerce applications due to their limited memorization of domain-specific featu...
published: 2023-08-09T12:26:37Z
comments: null
journal_ref: null
doi: null
ss_title: LLaMA-E: Empowering E-commerce Authoring with Object-Interleaved Instruction Following
ss_authors: ['Kaize Shi', 'Xueyao Sun', 'Dingxian Wang', 'Yinlin Fu', 'Guandong Xu', 'Qing Li']
ss_year: 2023
ss_venue: International Conference on Computational Linguistics
ss_citationCount: 4
ss_referenceCount: 41
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2308.04948
title: Extrapolating Large Language Models to Non-English by Aligning Languages
authors: ['Wenhao Zhu', 'Yunzhe Lv', 'Qingxiu Dong', 'Fei Yuan', 'Jingjing Xu', 'Shujian Huang', 'Lingpeng Kong', 'Jiajun Chen', 'Lei Li']
categories: ['cs.CL']
summary: Existing large language models show disparate capability across different languages, due to the imbalance in the training data. Their performances on English tasks are often stronger than on tasks of other languages. In this paper, we empower pre-trained LLMs on non-English languages by building semantic alignment acro...
published: 2023-08-09T13:32:06Z
comments: null
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.05725
title: EXPRESSO: A Benchmark and Analysis of Discrete Expressive Speech Resynthesis
authors: ['Tu Anh Nguyen', 'Wei-Ning Hsu', "Antony D'Avirro", 'Bowen Shi', 'Itai Gat', 'Maryam Fazel-Zarani', 'Tal Remez', 'Jade Copet', 'Gabriel Synnaeve', 'Michael Hassid', 'Felix Kreuk', 'Yossi Adi', 'Emmanuel Dupoux']
categories: ['cs.CL', 'cs.LG', 'cs.SD', 'eess.AS']
summary: Recent work has shown that it is possible to resynthesize high-quality speech based, not on text, but on low bitrate discrete units that have been learned in a self-supervised fashion and can therefore capture expressive aspects of speech that are hard to transcribe (prosody, voice styles, non-verbal vocalization). The...
published: 2023-08-10T17:41:19Z
comments: null
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.05734
title: AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining
authors: ['Haohe Liu', 'Yi Yuan', 'Xubo Liu', 'Xinhao Mei', 'Qiuqiang Kong', 'Qiao Tian', 'Yuping Wang', 'Wenwu Wang', 'Yuxuan Wang', 'Mark D. Plumbley']
categories: ['cs.SD', 'cs.AI', 'cs.MM', 'eess.AS', 'eess.SP']
summary: Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective o...
published: 2023-08-10T17:55:13Z
comments: Accepted by IEEE/ACM Transactions on Audio, Speech and Language Processing. Project page is https://audioldm.github.io/audioldm2
journal_ref: null
doi: null
ss_title: AudioLDM 2: Learning Holistic Audio Generation With Self-Supervised Pretraining
ss_authors: ['Haohe Liu', 'Qiao Tian', 'Yiitan Yuan', 'Xubo Liu', 'Xinhao Mei', 'Qiuqiang Kong', 'Yuping Wang', 'Wenwu Wang', 'Yuxuan Wang', 'Mark D. Plumbley']
ss_year: 2023
ss_venue: IEEE/ACM Transactions on Audio Speech and Language Processing
ss_citationCount: 247
ss_referenceCount: 100
ss_fieldsOfStudy: ['Computer Science', 'Engineering']
arxiv_id: 2308.05884
title: PIPPA: A Partially Synthetic Conversational Dataset
authors: ['Tear Gosling', 'Alpin Dale', 'Yinhe Zheng']
categories: ['cs.CL']
summary: With the emergence of increasingly powerful large language models, there is a burgeoning interest in leveraging these models for casual conversation and role-play applications. However, existing conversational and role-playing datasets often fail to capture the diverse and nuanced interactions typically exhibited by re...
published: 2023-08-11T00:33:26Z
comments: 13 pages, 5 figures
journal_ref: null
doi: null
ss_title: PIPPA: A Partially Synthetic Conversational Dataset
ss_authors: ['Tear Gosling', 'Alpin Dale', 'Yinhe Zheng']
ss_year: 2023
ss_venue: arXiv.org
ss_citationCount: 7
ss_referenceCount: 17
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2308.06259
title: Self-Alignment with Instruction Backtranslation
authors: ['Xian Li', 'Ping Yu', 'Chunting Zhou', 'Timo Schick', 'Omer Levy', 'Luke Zettlemoyer', 'Jason Weston', 'Mike Lewis']
categories: ['cs.CL']
summary: We present a scalable method to build a high quality instruction following language model by automatically labelling human-written text with corresponding instructions. Our approach, named instruction backtranslation, starts with a language model finetuned on a small amount of seed data, and a given web corpus. The see...
published: 2023-08-11T17:47:54Z
comments: ICLR2024 camera ready
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.06502
title: Three Ways of Using Large Language Models to Evaluate Chat
authors: ['Ondřej Plátek', 'Vojtěch Hudeček', 'Patricia Schmidtová', 'Mateusz Lango', 'Ondřej Dušek']
categories: ['cs.CL', 'cs.AI']
summary: This paper describes the systems submitted by team6 for ChatEval, the DSTC 11 Track 4 competition. We present three different approaches to predicting turn-level qualities of chatbot responses based on large language models (LLMs). We report improvement over the baseline using dynamic few-shot examples from a vector st...
published: 2023-08-12T08:34:15Z
comments: Accepted to DSTC11 workshop https://dstc11.dstc.community/
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.06571
title: ModelScope Text-to-Video Technical Report
authors: ['Jiuniu Wang', 'Hangjie Yuan', 'Dayou Chen', 'Yingya Zhang', 'Xiang Wang', 'Shiwei Zhang']
categories: ['cs.CV', 'cs.AI']
summary: This paper introduces ModelScopeT2V, a text-to-video synthesis model that evolves from a text-to-image synthesis model (i.e., Stable Diffusion). ModelScopeT2V incorporates spatio-temporal blocks to ensure consistent frame generation and smooth movement transitions. The model could adapt to varying frame numbers during ...
published: 2023-08-12T13:53:10Z
comments: Technical report. Project page: \url{https://modelscope.cn/models/damo/text-to-video-synthesis/summary}
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)
arxiv_id: 2308.06610 (rendered as 2308.0661 by the float64 column)
title: Bio-SIEVE: Exploring Instruction Tuning Large Language Models for Systematic Review Automation
authors: ['Ambrose Robinson', 'William Thorne', 'Ben P. Wu', 'Abdullah Pandor', 'Munira Essat', 'Mark Stevenson', 'Xingyi Song']
categories: ['cs.CL', 'cs.AI']
summary: Medical systematic reviews can be very costly and resource intensive. We explore how Large Language Models (LLMs) can support and be trained to perform literature screening when provided with a detailed set of selection criteria. Specifically, we instruction tune LLaMA and Guanaco models to perform abstract screening f...
published: 2023-08-12T16:56:55Z
comments: null
journal_ref: null
doi: null
ss_title: Bio-SIEVE: Exploring Instruction Tuning Large Language Models for Systematic Review Automation
ss_authors: ['Ambrose Robinson', 'William Thorne', 'Ben Wu', 'A. Pandor', 'M. Essat', 'Mark Stevenson', 'Xingyi Song']
ss_year: 2023
ss_venue: arXiv.org
ss_citationCount: 8
ss_referenceCount: 47
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2308.06693
title: Isomer: Isomerous Transformer for Zero-shot Video Object Segmentation
authors: ['Yichen Yuan', 'Yifan Wang', 'Lijun Wang', 'Xiaoqi Zhao', 'Huchuan Lu', 'Yu Wang', 'Weibo Su', 'Lei Zhang']
categories: ['cs.CV']
summary: Recent leading zero-shot video object segmentation (ZVOS) works devote to integrating appearance and motion information by elaborately designing feature fusion modules and identically applying them in multiple feature stages. Our preliminary experiments show that with the strong long-range dependency modeling capacity ...
published: 2023-08-13T06:12:00Z
comments: ICCV2023
journal_ref: null
doi: null
ss_title: Isomer: Isomerous Transformer for Zero-shot Video Object Segmentation
ss_authors: ['Yichen Yuan', 'Yifan Wang', 'Lijun Wang', 'Xiaoqi Zhao', 'Huchuan Lu', 'Yu Wang', 'Wei Su', 'Lei Zhang']
ss_year: 2023
ss_venue: IEEE International Conference on Computer Vision
ss_citationCount: 11
ss_referenceCount: 74
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2308.06721
title: IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models
authors: ['Hu Ye', 'Jun Zhang', 'Sibo Liu', 'Xiao Han', 'Wei Yang']
categories: ['cs.CV', 'cs.AI']
summary: Recent years have witnessed the strong power of large text-to-image diffusion models for the impressive generative capability to create high-fidelity images. However, it is very tricky to generate desired images using only text prompt as it often involves complex prompt engineering. An alternative to text prompt is ima...
published: 2023-08-13T08:34:51Z
comments: null
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.07026
title: AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning
authors: ['Ziqi Zhou', 'Shengshan Hu', 'Minghui Li', 'Hangtao Zhang', 'Yechao Zhang', 'Hai Jin']
categories: ['cs.CV']
summary: Multimodal contrastive learning aims to train a general-purpose feature extractor, such as CLIP, on vast amounts of raw, unlabeled paired image-text data. This can greatly benefit various complex downstream tasks, including cross-modal image-text retrieval and image classification. Despite its promising prospect, the s...
published: 2023-08-14T09:29:22Z
comments: This paper has been accepted by the ACM International Conference on Multimedia (ACM MM '23, October 29-November 3, 2023, Ottawa, ON, Canada)
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)
arxiv_id: 2308.07037
title: Bayesian Flow Networks
authors: ['Alex Graves', 'Rupesh Kumar Srivastava', 'Timothy Atkinson', 'Faustino Gomez']
categories: ['cs.LG', 'cs.AI']
summary: This paper introduces Bayesian Flow Networks (BFNs), a new class of generative model in which the parameters of a set of independent distributions are modified with Bayesian inference in the light of noisy data samples, then passed as input to a neural network that outputs a second, interdependent distribution. Startin...
published: 2023-08-14T09:56:35Z
comments: null
journal_ref: null
doi: null
ss_title: Bayesian Flow Networks
ss_authors: ['Alex Graves', 'R. Srivastava', 'Timothy James Atkinson', 'Faustino J. Gomez']
ss_year: 2023
ss_venue: arXiv.org
ss_citationCount: 45
ss_referenceCount: 50
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2308.07074
title: #InsTag: Instruction Tagging for Analyzing Supervised Fine-tuning of Large Language Models
authors: ['Keming Lu', 'Hongyi Yuan', 'Zheng Yuan', 'Runji Lin', 'Junyang Lin', 'Chuanqi Tan', 'Chang Zhou', 'Jingren Zhou']
categories: ['cs.CL', 'cs.AI', 'cs.LG']
summary: Foundation language models obtain the instruction-following ability through supervised fine-tuning (SFT). Diversity and complexity are considered critical factors of a successful SFT dataset, while their definitions remain obscure and lack quantitative analyses. In this work, we propose InsTag, an open-set fine-grained...
published: 2023-08-14T11:16:28Z
comments: null
journal_ref: null
doi: null
ss_title: #InsTag: Instruction Tagging for Analyzing Supervised Fine-tuning of Large Language Models
ss_authors: ['K. Lu', 'Hongyi Yuan', 'Zheng Yuan', 'Runji Lin', 'Junyang Lin', 'Chuanqi Tan', 'Chang Zhou']
ss_year: 2023
ss_venue: International Conference on Learning Representations
ss_citationCount: 77
ss_referenceCount: 43
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2308.07124
title: OctoPack: Instruction Tuning Code Large Language Models
authors: ['Niklas Muennighoff', 'Qian Liu', 'Armel Zebaze', 'Qinkai Zheng', 'Binyuan Hui', 'Terry Yue Zhuo', 'Swayam Singh', 'Xiangru Tang', 'Leandro von Werra', 'Shayne Longpre']
categories: ['cs.CL', 'cs.AI']
summary: Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350...
published: 2023-08-14T13:53:54Z
comments: 60 pages (9 main), 40 figures, 19 tables
journal_ref: null
doi: null
ss_title: OctoPack: Instruction Tuning Code Large Language Models
ss_authors: ['Niklas Muennighoff', 'Qian Liu', 'Qi Liu', 'A. Zebaze', 'Qinkai Zheng', 'Binyuan Hui', 'Terry Yue Zhuo', 'Swayam Singh', 'Xiangru Tang', 'L. V. Werra', 'S. Longpre']
ss_year: 2023
ss_venue: International Conference on Learning Representations
ss_citationCount: 140
ss_referenceCount: 168
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2308.07136
title: Pairing interacting protein sequences using masked language modeling
authors: ['Umberto Lupo', 'Damiano Sgarbossa', 'Anne-Florence Bitbol']
categories: ['q-bio.BM', 'cs.LG', '68T07, 68T50, 92-08, 92B20', 'J.3; I.2.7']
summary: Predicting which proteins interact together from amino-acid sequences is an important task. We develop a method to pair interacting protein sequences which leverages the power of protein language models trained on multiple sequence alignments, such as MSA Transformer and the EvoFormer module of AlphaFold. We formulate ...
published: 2023-08-14T13:42:09Z
comments: 33 pages, 14 figures, 2 tables
journal_ref: Proc. Natl. Acad. Sci. U.S.A. 121(27): e2311887121 (2024)
doi: 10.1073/pnas.231188712
ss_* fields: all null (no Semantic Scholar match)
arxiv_id: 2308.07317
title: Platypus: Quick, Cheap, and Powerful Refinement of LLMs
authors: ['Ariel N. Lee', 'Cole J. Hunter', 'Nataniel Ruiz']
categories: ['cs.CL']
summary: We present $\textbf{Platypus}$, a family of fine-tuned and merged Large Language Models (LLMs) that achieves the strongest performance and currently stands at first place in HuggingFace's Open LLM Leaderboard as of the release date of this work. In this work we describe (1) our curated dataset $\textbf{Open-Platypus}$,...
published: 2023-08-14T17:59:56Z
comments: Workshop on Instruction Tuning and Instruction Following at NeurIPS 2023
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.07655
title: From Commit Message Generation to History-Aware Commit Message Completion
authors: ['Aleksandra Eliseeva', 'Yaroslav Sokolov', 'Egor Bogomolov', 'Yaroslav Golubev', 'Danny Dig', 'Timofey Bryksin']
categories: ['cs.SE', 'cs.LG']
summary: Commit messages are crucial to software development, allowing developers to track changes and collaborate effectively. Despite their utility, most commit messages lack important information since writing high-quality commit messages is tedious and time-consuming. The active research on commit message generation (CMG) h...
published: 2023-08-15T09:10:49Z
comments: Accepted to ASE'23. 13 pages, 5 figures
journal_ref: null
doi: null
ss_title: From Commit Message Generation to History-Aware Commit Message Completion
ss_authors: ['Aleksandra V. Eliseeva', 'Yaroslav Sokolov', 'Egor Bogomolov', 'Yaroslav Golubev', 'Danny Dig', 'T. Bryksin']
ss_year: 2023
ss_venue: International Conference on Automated Software Engineering
ss_citationCount: 20
ss_referenceCount: 71
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2308.07662
title: Gradient-Based Post-Training Quantization: Challenging the Status Quo
authors: ['Edouard Yvinec', 'Arnaud Dapogny', 'Kevin Bailly']
categories: ['cs.LG', 'cs.CV']
summary: Quantization has become a crucial step for the efficient deployment of deep neural networks, where floating point operations are converted to simpler fixed point operations. In its most naive form, it simply consists in a combination of scaling and rounding transformations, leading to either a limited compression rate ...
published: 2023-08-15T09:25:11Z
comments: null
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.07891
title: Link-Context Learning for Multimodal LLMs
authors: ['Yan Tai', 'Weichen Fan', 'Zhao Zhang', 'Feng Zhu', 'Rui Zhao', 'Ziwei Liu']
categories: ['cs.CV', 'cs.CL']
summary: The ability to learn from context with novel concepts, and deliver appropriate responses are essential in human conversations. Despite current Multimodal Large Language Models (MLLMs) and Large Language Models (LLMs) being trained on mega-scale datasets, recognizing unseen images or understanding novel concepts in a tr...
published: 2023-08-15T17:33:24Z
comments: 10 pages, 8 figures
journal_ref: null
doi: null
ss_title: Link-Context Learning for Multimodal LLMs
ss_authors: ['Yan Tai', 'Weichen Fan', 'Zhao Zhang', 'Feng Zhu', 'Rui Zhao', 'Ziwei Liu']
ss_year: 2023
ss_venue: Computer Vision and Pattern Recognition
ss_citationCount: 19
ss_referenceCount: 37
ss_fieldsOfStudy: ['Computer Science']
arxiv_id: 2308.07898
title: A Foundation Language-Image Model of the Retina (FLAIR): Encoding Expert Knowledge in Text Supervision
authors: ['Julio Silva-Rodríguez', 'Hadi Chakor', 'Riadh Kobbi', 'Jose Dolz', 'Ismail Ben Ayed']
categories: ['cs.CV']
summary: Foundation vision-language models are currently transforming computer vision, and are on the rise in medical imaging fueled by their very promising generalization capabilities. However, the initial attempts to transfer this new paradigm to medical imaging have shown less impressive performances than those observed in o...
published: 2023-08-15T17:39:52Z
comments: Accepted in Medical Image Analysis. The pre-trained model is available at: https://github.com/jusiro/FLAIR
journal_ref: null
doi: 10.1016/j.media.2024.103357
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.08089
title: DragNUWA: Fine-grained Control in Video Generation by Integrating Text, Image, and Trajectory
authors: ['Shengming Yin', 'Chenfei Wu', 'Jian Liang', 'Jie Shi', 'Houqiang Li', 'Gong Ming', 'Nan Duan']
categories: ['cs.CV']
summary: Controllable video generation has gained significant attention in recent years. However, two main limitations persist: Firstly, most existing works focus on either text, image, or trajectory-based control, leading to an inability to achieve fine-grained control in videos. Secondly, trajectory control research is still ...
published: 2023-08-16T01:43:41Z
comments: null
journal_ref: null
doi: null
ss_title: DragNUWA: Fine-grained Control in Video Generation by Integrating Text, Image, and Trajectory
ss_authors: ['Sheng-Siang Yin', 'Chenfei Wu', 'Jian Liang', 'Jie Shi', 'Houqiang Li', 'Gong Ming', 'Nan Duan']
ss_year: 2023
ss_venue: arXiv.org
ss_citationCount: 145
ss_referenceCount: 31
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2308.08155
title: AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
authors: ['Qingyun Wu', 'Gagan Bansal', 'Jieyu Zhang', 'Yiran Wu', 'Beibin Li', 'Erkang Zhu', 'Li Jiang', 'Xiaoyun Zhang', 'Shaokun Zhang', 'Jiale Liu', 'Ahmed Hassan Awadallah', 'Ryen W White', 'Doug Burger', 'Chi Wang']
categories: ['cs.AI', 'cs.CL']
summary: AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. Using AutoGen, develop...
published: 2023-08-16T05:57:52Z
comments: 43 pages (10 pages for the main text, 3 pages for references, and 30 pages for appendices)
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.08239
title: MemoChat: Tuning LLMs to Use Memos for Consistent Long-Range Open-Domain Conversation
authors: ['Junru Lu', 'Siyu An', 'Mingbao Lin', 'Gabriele Pergola', 'Yulan He', 'Di Yin', 'Xing Sun', 'Yunsheng Wu']
categories: ['cs.CL']
summary: We propose MemoChat, a pipeline for refining instructions that enables large language models (LLMs) to effectively employ self-composed memos for maintaining consistent long-range open-domain conversations. We demonstrate a long-range open-domain conversation through iterative "memorization-retrieval-response" cycles. ...
published: 2023-08-16T09:15:18Z
comments: null
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)
arxiv_id: 2308.08295
title: CMD: a framework for Context-aware Model self-Detoxification
authors: ['Zecheng Tang', 'Keyan Zhou', 'Juntao Li', 'Yuyang Ding', 'Pinzheng Wang', 'Bowen Yan', 'Rejie Hua', 'Min Zhang']
categories: ['cs.CL']
summary: Text detoxification aims to minimize the risk of language models producing toxic content. Existing detoxification methods of directly constraining the model output or further training the model on the non-toxic corpus fail to achieve a decent balance between detoxification effectiveness and generation quality. This iss...
published: 2023-08-16T11:50:38Z
comments: null
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.08625
title: BIOptimus: Pre-training an Optimal Biomedical Language Model with Curriculum Learning for Named Entity Recognition
authors: ['Pavlova Vera', 'Mohammed Makhlouf']
categories: ['cs.CL']
summary: Using language models (LMs) pre-trained in a self-supervised setting on large corpora and then fine-tuning for a downstream task has helped to deal with the problem of limited label data for supervised learning tasks such as Named Entity Recognition (NER). Recent research in biomedical language processing has offered a...
published: 2023-08-16T18:48:01Z
comments: null
journal_ref: https://aclanthology.org/2023.bionlp-1.31/
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.08708
title: Consciousness in Artificial Intelligence: Insights from the Science of Consciousness
authors: ['Patrick Butlin', 'Robert Long', 'Eric Elmoznino', 'Yoshua Bengio', 'Jonathan Birch', 'Axel Constant', 'George Deane', 'Stephen M. Fleming', 'Chris Frith', 'Xu Ji', 'Ryota Kanai', 'Colin Klein', 'Grace Lindsay', 'Matthias Michel', 'Liad Mudrik', 'Megan A. K. Peters', 'Eric Schwitzgebel', 'Jonathan Simon', 'Rufin VanRu...
categories: ['cs.AI', 'cs.CY', 'cs.LG', 'q-bio.NC']
summary: Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific t...
published: 2023-08-17T00:10:16Z
comments: null
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)

arxiv_id: 2308.08796
title: Chinese Spelling Correction as Rephrasing Language Model
authors: ['Linfeng Liu', 'Hongqiu Wu', 'Hai Zhao']
categories: ['cs.CL']
summary: This paper studies Chinese Spelling Correction (CSC), which aims to detect and correct the potential spelling errors in a given sentence. Current state-of-the-art methods regard CSC as a sequence tagging task and fine-tune BERT-based models on sentence pairs. However, we note a critical flaw in the process of tagging o...
published: 2023-08-17T06:04:28Z
comments: Accepted by AAAI'2024
journal_ref: null
doi: null
ss_* fields: all null (no Semantic Scholar match)
2,308.08827
Factuality Detection using Machine Translation -- a Use Case for German Clinical Text
['Mohammed Bin Sumait', 'Aleksandra Gabryszak', 'Leonhard Hennig', 'Roland Roller']
['cs.CL']
Factuality can play an important role when automatically processing clinical text, as it makes a difference if particular symptoms are explicitly not present, possibly present, not mentioned, or affirmed. In most cases, a sufficient number of examples is necessary to handle such phenomena in a supervised machine learni...
2023-08-17T07:24:06Z
Accepted at KONVENS 2023
null
null
Factuality Detection using Machine Translation – a Use Case for German Clinical Text
['Mohammed Mustafa Ahmed Bin Sumait', 'Aleksandra Gabryszak', 'Leonhard Hennig', 'Roland Roller']
2,023
Conference on Natural Language Processing
0
37
['Computer Science']
2,308.08926
Explicit Estimation of Magnitude and Phase Spectra in Parallel for High-Quality Speech Enhancement
['Ye-Xin Lu', 'Yang Ai', 'Zhen-Hua Ling']
['eess.AS', 'cs.SD']
Phase information has a significant impact on speech perceptual quality and intelligibility. However, existing speech enhancement methods encounter limitations in explicit phase estimation due to the non-structural nature and wrapping characteristics of the phase, leading to a bottleneck in enhanced speech quality. To ...
2023-08-17T11:37:52Z
Submitted to IEEE Transactions on Audio, Speech and Language Processing
null
null
null
null
null
null
null
null
null
2,308.08998
Reinforced Self-Training (ReST) for Language Modeling
['Caglar Gulcehre', 'Tom Le Paine', 'Srivatsan Srinivasan', 'Ksenia Konyushkova', 'Lotte Weerts', 'Abhishek Sharma', 'Aditya Siddhant', 'Alex Ahern', 'Miaosen Wang', 'Chenjie Gu', 'Wolfgang Macherey', 'Arnaud Doucet', 'Orhan Firat', 'Nando de Freitas']
['cs.CL', 'cs.LG']
Reinforcement learning from human feedback (RLHF) can improve the quality of large language model's (LLM) outputs by aligning them with human preferences. We propose a simple algorithm for aligning LLMs with human preferences inspired by growing batch reinforcement learning (RL), which we call Reinforced Self-Training ...
2023-08-17T14:12:48Z
23 pages, 16 figures
null
null
null
null
null
null
null
null
null
2,308.09126
EgoSchema: A Diagnostic Benchmark for Very Long-form Video Language Understanding
['Karttikeya Mangalam', 'Raiymbek Akshulakov', 'Jitendra Malik']
['cs.CV', 'cs.AI', 'cs.CL']
We introduce EgoSchema, a very long-form video question-answering dataset, and benchmark to evaluate long video understanding capabilities of modern vision and language systems. Derived from Ego4D, EgoSchema consists of over 5000 human curated multiple choice question answer pairs, spanning over 250 hours of real video...
2023-08-17T17:59:59Z
https://egoschema.github.io/
null
null
null
null
null
null
null
null
null
2,308.09435
A Methodology for Generative Spelling Correction via Natural Spelling Errors Emulation across Multiple Domains and Languages
['Nikita Martynov', 'Mark Baushenko', 'Anastasia Kozlova', 'Katerina Kolomeytseva', 'Aleksandr Abramov', 'Alena Fenogenova']
['cs.CL']
Modern large language models demonstrate impressive capabilities in text generation and generalization. However, they often struggle with solving text editing tasks, particularly when it comes to correcting spelling errors and mistypings. In this paper, we present a methodology for generative spelling correction (SC), ...
2023-08-18T10:07:28Z
to appear in EACL 2024
null
null
A Methodology for Generative Spelling Correction via Natural Spelling Errors Emulation across Multiple Domains and Languages
['Nikita Martynov', 'Mark Baushenko', 'A. Kozlova', 'Katerina Kolomeytseva', 'Aleksandr Abramov', 'Alena Fenogenova']
2,023
Findings
4
49
['Computer Science']
2,308.09442
BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine
['Yizhen Luo', 'Jiahuan Zhang', 'Siqi Fan', 'Kai Yang', 'Yushuai Wu', 'Mu Qiao', 'Zaiqing Nie']
['cs.CE']
Foundation models (FMs) have exhibited remarkable performance across a wide range of downstream tasks in many domains. Nevertheless, general-purpose FMs often face challenges when confronted with domain-specific problems, due to their limited access to the proprietary training data in a particular domain. In biomedicin...
2023-08-18T10:14:35Z
12 pages, 4 figures
null
null
BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine
['Yi Luo', 'Jiahuan Zhang', 'Siqi Fan', 'Kai Yang', 'Yushuai Wu', 'Mu Qiao', 'Zaiqing Nie']
2,023
arXiv.org
90
34
['Computer Science']
2,308.09583
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
['Haipeng Luo', 'Qingfeng Sun', 'Can Xu', 'Pu Zhao', 'Jianguang Lou', 'Chongyang Tao', 'Xiubo Geng', 'Qingwei Lin', 'Shifeng Chen', 'Yansong Tang', 'Dongmei Zhang']
['cs.CL', 'cs.AI', 'cs.LG']
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we pr...
2023-08-18T14:23:21Z
This paper has been accepted to ICLR 2025 as an Oral presentation
null
null
null
null
null
null
null
null
null
2,308.09662
Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment
['Rishabh Bhardwaj', 'Soujanya Poria']
['cs.CL']
Larger language models (LLMs) have taken the world by storm with their massive multi-tasking capabilities simply by optimizing over a next-word prediction objective. With the emergence of their properties and encoded knowledge, the risk of LLMs producing harmful outputs increases, making them unfit for scalable deploym...
2023-08-18T16:27:04Z
null
null
null
null
null
null
null
null
null
null
2,308.09716
Diff2Lip: Audio Conditioned Diffusion Models for Lip-Synchronization
['Soumik Mukhopadhyay', 'Saksham Suri', 'Ravi Teja Gadde', 'Abhinav Shrivastava']
['cs.CV', 'cs.AI']
The task of lip synchronization (lip-sync) seeks to match the lips of human faces with different audio. It has various applications in the film industry as well as for creating virtual avatars and for video conferencing. This is a challenging problem as one needs to simultaneously introduce detailed, realistic lip move...
2023-08-18T17:59:40Z
Website: see https://soumik-kanad.github.io/diff2lip . Submission under review
null
null
null
null
null
null
null
null
null
2,308.09891
SwinLSTM: Improving Spatiotemporal Prediction Accuracy using Swin Transformer and LSTM
['Song Tang', 'Chuang Li', 'Pu Zhang', 'RongNian Tang']
['cs.CV', 'cs.AI']
Integrating CNNs and RNNs to capture spatiotemporal dependencies is a prevalent strategy for spatiotemporal prediction tasks. However, the property of CNNs to learn local spatial information decreases their efficiency in capturing spatiotemporal dependencies, thereby limiting their prediction accuracy. In this paper, w...
2023-08-19T03:08:28Z
This paper has been accepted by ICCV 2023
null
null
SwinLSTM: Improving Spatiotemporal Prediction Accuracy using Swin Transformer and LSTM
['Song Tang', 'Chuang Li', 'Pufen Zhang', 'R. Tang']
2,023
IEEE International Conference on Computer Vision
51
44
['Computer Science']
2,308.09892
Utilizing Semantic Textual Similarity for Clinical Survey Data Feature Selection
['Benjamin C. Warner', 'Ziqi Xu', 'Simon Haroutounian', 'Thomas Kannampallil', 'Chenyang Lu']
['cs.CL', 'cs.LG']
Survey data can contain a high number of features while having a comparatively low quantity of examples. Machine learning models that attempt to predict outcomes from survey data under these conditions can overfit and result in poor generalizability. One remedy to this issue is feature selection, which attempts to sele...
2023-08-19T03:10:51Z
null
null
null
Utilizing Semantic Textual Similarity for Clinical Survey Data Feature Selection
['Benjamin C. Warner', 'Ziqi Xu', 'S. Haroutounian', 'T. Kannampallil', 'Chenyang Lu']
2,023
arXiv.org
2
53
['Computer Science']
2,308.09895
Knowledge Transfer from High-Resource to Low-Resource Programming Languages for Code LLMs
['Federico Cassano', 'John Gouwar', 'Francesca Lucchetti', 'Claire Schlesinger', 'Anders Freeman', 'Carolyn Jane Anderson', 'Molly Q Feldman', 'Michael Greenberg', 'Abhinav Jangda', 'Arjun Guha']
['cs.PL', 'cs.LG']
Over the past few years, Large Language Models of Code (Code LLMs) have started to have a significant impact on programming practice. Code LLMs are also emerging as building blocks for research in programming languages and software engineering. However, Code LLMs produce impressive results on programming languages that...
2023-08-19T03:19:01Z
null
null
null
null
null
null
null
null
null
null
2,308.09991
AltDiffusion: A Multilingual Text-to-Image Diffusion Model
['Fulong Ye', 'Guang Liu', 'Xinya Wu', 'Ledell Wu']
['cs.CV']
Large Text-to-Image(T2I) diffusion models have shown a remarkable capability to produce photorealistic and diverse images based on text inputs. However, existing works only support limited language input, e.g., English, Chinese, and Japanese, leaving users beyond these languages underserved and blocking the global expa...
2023-08-19T11:52:12Z
15 pages; 17 figures
null
null
null
null
null
null
null
null
null
2,308.10092
Open, Closed, or Small Language Models for Text Classification?
['Hao Yu', 'Zachary Yang', 'Kellin Pelrine', 'Jean Francois Godbout', 'Reihaneh Rabbany']
['cs.CL', 'cs.AI']
Recent advancements in large language models have demonstrated remarkable capabilities across various NLP tasks. But many questions remain, including whether open-source models match closed ones, why these models excel or struggle with certain tasks, and what types of practical procedures can improve performance. We ad...
2023-08-19T18:58:32Z
14 pages, 15 Tables, 1 Figure
null
null
Open, Closed, or Small Language Models for Text Classification?
['Hao Yu', 'Zachary Yang', 'Kellin Pelrine', 'J. Godbout', 'Reihaneh Rabbany']
2,023
arXiv.org
36
51
['Computer Science']
2,308.10526
UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language
['Chongyang Wang', 'Yuan Feng', 'Lingxiao Zhong', 'Siyi Zhu', 'Chi Zhang', 'Siqi Zheng', 'Chen Liang', 'Yuntao Wang', 'Chengqi He', 'Chun Yu', 'Yuanchun Shi']
['cs.HC']
We introduce UbiPhysio, a milestone framework that delivers fine-grained action description and feedback in natural language to support people's daily functioning, fitness, and rehabilitation activities. This expert-like capability assists users in properly executing actions and maintaining engagement in remote fitness...
2023-08-21T07:26:05Z
Accepted by IMWUT/Ubicomp'24
null
null
UbiPhysio
['Chongyang Wang', 'Yuan Feng', 'L. Zhong', 'Siyi Zhu', 'Chi Zhang', 'Siqi Zheng', 'Chen Liang', 'Yuntao Wang', 'Chen-Jun He', 'Chun Yu', 'Yuanchun Shi']
2,023
Proceedings of the ACM on Interactive Mobile Wearable and Ubiquitous Technologies
6
72
['Computer Science']
2,308.10529
SeqGPT: An Out-of-the-box Large Language Model for Open Domain Sequence Understanding
['Tianyu Yu', 'Chengyue Jiang', 'Chao Lou', 'Shen Huang', 'Xiaobin Wang', 'Wei Liu', 'Jiong Cai', 'Yangning Li', 'Yinghui Li', 'Kewei Tu', 'Hai-Tao Zheng', 'Ningyu Zhang', 'Pengjun Xie', 'Fei Huang', 'Yong Jiang']
['cs.CL']
Large language models (LLMs) have shown impressive ability for open-domain NLP tasks. However, LLMs are sometimes too footloose for natural language understanding (NLU) tasks which always have restricted output and input format. Their performances on NLU tasks are highly related to prompts or demonstrations and are sho...
2023-08-21T07:31:19Z
Initial version of SeqGPT
null
null
null
null
null
null
null
null
null
2,308.10564
Software Entity Recognition with Noise-Robust Learning
['Tai Nguyen', 'Yifeng Di', 'Joohan Lee', 'Muhao Chen', 'Tianyi Zhang']
['cs.SE', 'cs.CL']
Recognizing software entities such as library names from free-form text is essential to enable many software engineering (SE) technologies, such as traceability link recovery, automated documentation, and API recommendation. While many approaches have been proposed to address this problem, they suffer from small entity...
2023-08-21T08:41:46Z
ASE 2023
null
null
null
null
null
null
null
null
null
2,308.10592
BAN-PL: a Novel Polish Dataset of Banned Harmful and Offensive Content from Wykop.pl web service
['Anna Kołos', 'Inez Okulska', 'Kinga Głąbińska', 'Agnieszka Karlińska', 'Emilia Wiśnios', 'Paweł Ellerik', 'Andrzej Prałat']
['cs.CL']
Since the Internet is flooded with hate, it is one of the main tasks for NLP experts to master automated online content moderation. However, advancements in this field require improved access to publicly available accurate and non-synthetic datasets of social media content. For the Polish language, such resources are v...
2023-08-21T09:47:31Z
Accepted for LREC-COLING 2024 Conference
null
null
null
null
null
null
null
null
null
2,308.10882
Giraffe: Adventures in Expanding Context Lengths in LLMs
['Arka Pal', 'Deep Karkhanis', 'Manley Roberts', 'Samuel Dooley', 'Arvind Sundararajan', 'Siddartha Naidu']
['cs.AI', 'cs.CL']
Modern large language models (LLMs) that rely on attention mechanisms are typically trained with fixed context lengths which enforce upper limits on the length of input sequences that they can handle at evaluation time. To use these models on sequences longer than the train-time context length, one might employ techniq...
2023-08-21T17:30:16Z
null
null
null
Giraffe: Adventures in Expanding Context Lengths in LLMs
['Arka Pal', 'Deep Karkhanis', 'Manley Roberts', 'Samuel Dooley', 'A. Sundararajan', 'Siddartha Naidu']
2,023
arXiv.org
40
25
['Computer Science']
2,308.11380
Convoifilter: A case study of doing cocktail party speech recognition
['Thai-Binh Nguyen', 'Alexander Waibel']
['cs.SD', 'cs.CL', 'eess.AS']
This paper presents an end-to-end model designed to improve automatic speech recognition (ASR) for a particular speaker in a crowded, noisy environment. The model utilizes a single-channel speech enhancement module that isolates the speaker's voice from background noise (ConVoiFilter) and an ASR module. The model can d...
2023-08-22T12:09:30Z
Accepted at HSCMA 2024
null
null
null
null
null
null
null
null
null
2,308.11408
MatFuse: Controllable Material Generation with Diffusion Models
['Giuseppe Vecchio', 'Renato Sortino', 'Simone Palazzo', 'Concetto Spampinato']
['cs.CV', 'cs.GR']
Creating high-quality materials in computer graphics is a challenging and time-consuming task, which requires great expertise. To simplify this process, we introduce MatFuse, a unified approach that harnesses the generative power of diffusion models for creation and editing of 3D materials. Our method integrates multip...
2023-08-22T12:54:48Z
null
null
10.1109/CVPR52733.2024.00424
null
null
null
null
null
null
null
2,308.11509
SwinFace: A Multi-task Transformer for Face Recognition, Expression Recognition, Age Estimation and Attribute Estimation
['Lixiong Qin', 'Mei Wang', 'Chao Deng', 'Ke Wang', 'Xi Chen', 'Jiani Hu', 'Weihong Deng']
['cs.CV']
In recent years, vision transformers have been introduced into face recognition and analysis and have achieved performance breakthroughs. However, most previous methods generally train a single model or an ensemble of models to perform the desired task, which ignores the synergy among different tasks and fails to achie...
2023-08-22T15:38:39Z
null
null
10.1109/TCSVT.2023.3304724
null
null
null
null
null
null
null
2,308.11596
SeamlessM4T: Massively Multilingual & Multimodal Machine Translation
['Seamless Communication', 'Loïc Barrault', 'Yu-An Chung', 'Mariano Cora Meglioli', 'David Dale', 'Ning Dong', 'Paul-Ambroise Duquenne', 'Hady Elsahar', 'Hongyu Gong', 'Kevin Heffernan', 'John Hoffman', 'Christopher Klaiber', 'Pengwei Li', 'Daniel Licht', 'Jean Maillard', 'Alice Rakotoarison', 'Kaushik Ram Sadagopan', ...
['cs.CL', 'I.2.7']
What does it take to create the Babel Fish, a tool that can help individuals translate speech between any two languages? While recent breakthroughs in text-based models have pushed machine translation coverage beyond 200 languages, unified speech-to-speech translation models have yet to achieve similar strides. More sp...
2023-08-22T17:44:18Z
null
null
null
SeamlessM4T: Massively Multilingual & Multimodal Machine Translation
['Seamless Communication', 'Loïc Barrault', 'Yu-An Chung', 'Mariano Cora Meglioli', 'David Dale', 'Ning Dong', 'Paul-Ambroise Duquenne', 'Hady ElSahar', 'Hongyu Gong', 'Kevin Heffernan', 'John Hoffman', 'Christopher Klaiber', 'Peng Li', 'Daniel Licht', 'Jean Maillard', 'Alice Rakotoarison', 'Kaushik Ram Sadagopan', 'Gu...
2,023
null
97
0
['Computer Science']
2,308.11878
Cabrita: closing the gap for foreign languages
['Celio Larcher', 'Marcos Piau', 'Paulo Finardi', 'Pedro Gengo', 'Piero Esposito', 'Vinicius Caridá']
['cs.CL', 'cs.AI', 'cs.LG']
The strategy of training the model from scratch in a specific language or domain serves two essential purposes: i) enhancing performance in the particular linguistic or domain context, and ii) ensuring effective tokenization. The main limitation inherent to this approach lies in the associated cost, which can reach six...
2023-08-23T02:49:35Z
9 pages, 1 figure
null
null
Cabrita: closing the gap for foreign languages
['Celio H. N. Larcher', 'Marcos Piau', 'Paulo Finardi', 'P. Gengo', 'P. Esposito', "Vinicius Carid'a"]
2,023
arXiv.org
21
29
['Computer Science']
2,308.11957
CED: Consistent ensemble distillation for audio tagging
['Heinrich Dinkel', 'Yongqing Wang', 'Zhiyong Yan', 'Junbo Zhang', 'Yujun Wang']
['cs.SD', 'eess.AS']
Augmentation and knowledge distillation (KD) are well-established techniques employed in audio classification tasks, aimed at enhancing performance and reducing model sizes on the widely recognized Audioset (AS) benchmark. Although both techniques are effective individually, their combined use, called consistent teachi...
2023-08-23T06:57:00Z
null
null
null
null
null
null
null
null
null
null
2,308.12008
Graecia capta ferum victorem cepit. Detecting Latin Allusions to Ancient Greek Literature
['Frederick Riemenschneider', 'Anette Frank']
['cs.CL', 'I.2.7']
Intertextual allusions hold a pivotal role in Classical Philology, with Latin authors frequently referencing Ancient Greek texts. Until now, the automatic identification of these intertextual references has been constrained to monolingual approaches, seeking parallels solely within Latin or Greek texts. In this study, ...
2023-08-23T08:54:05Z
Paper accepted for publication at the First Workshop on Ancient Language Processing (ALP) 2023; 9 pages, 5 tables
null
null
null
null
null
null
null
null
null
2,308.12038
Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages
['Jinyi Hu', 'Yuan Yao', 'Chongyi Wang', 'Shan Wang', 'Yinxu Pan', 'Qianyu Chen', 'Tianyu Yu', 'Hanghao Wu', 'Yue Zhao', 'Haoye Zhang', 'Xu Han', 'Yankai Lin', 'Jiao Xue', 'Dahai Li', 'Zhiyuan Liu', 'Maosong Sun']
['cs.CL', 'cs.CV']
Recently there has been a significant surge in multimodal learning in terms of both image-to-text and text-to-image generation. However, the success is typically limited to English, leaving other languages largely behind. Building a competitive counterpart in other languages is highly challenging due to the low-resourc...
2023-08-23T09:55:41Z
https://github.com/OpenBMB/VisCPM.git
null
null
null
null
null
null
null
null
null
2,308.12770
WavMark: Watermarking for Audio Generation
['Guangyu Chen', 'Yu Wu', 'Shujie Liu', 'Tao Liu', 'Xiaoyong Du', 'Furu Wei']
['cs.SD', 'cs.CL', 'eess.AS']
Recent breakthroughs in zero-shot voice synthesis have enabled imitating a speaker's voice using just a few seconds of recording while maintaining a high level of realism. Alongside its potential benefits, this powerful technology introduces notable risks, including voice fraud and speaker impersonation. Unlike the con...
2023-08-24T13:17:35Z
null
null
null
null
null
null
null
null
null
null
2,308.12823
Uncovering a Massive z~7.7 Galaxy Hosting a Heavily Obscured Radio-Loud QSO Candidate in COSMOS-Web
['Erini Lambrides', 'Marco Chiaberge', 'Arianna Long', 'Daizhong Liu', 'Hollis B. Akins', 'Andrew F. Ptak', 'Irham Taufik Andika', 'Alessandro Capetti', 'Caitlin M. Casey', 'Jaclyn B. Champagne', 'Katherine Chworowsky', 'Tracy E. Clarke', 'Olivia R. Cooper', 'Xuheng Ding', 'Dillon Z. Dong', 'Andreas L. Faisst', 'Jordan...
['astro-ph.GA']
In this letter, we report the discovery of the highest redshift, heavily obscured, radio-loud AGN candidate selected using JWST NIRCam/MIRI, mid-IR, sub-mm, and radio imaging in the COSMOS-Web field. Using multi-frequency radio observations and mid-IR photometry, we identify a powerful, radio-loud (RL), growing superma...
2023-08-24T14:26:21Z
Accepted to ApJL
null
null
Uncovering a Massive z~7.7 Galaxy Hosting a Heavily Obscured Radio-Loud QSO Candidate in COSMOS-Web
['E. Lambrides', 'M. Chiaberge', 'A. Long', 'Daizhong Liu', 'H. Akins', 'A. Ptak', 'I. Andika', 'A. Capetti', 'C. Casey', 'J. Champagne', 'Katherine Chworowsky', 'O. Cooper', 'Xuheng Ding', 'A. Faisst', 'Maximilien Franco', 'S. Gillman', 'G. Gozaliasl', 'K. Hall', 'S. Harish', 'C. Hayward', 'M. Hirschmann', 'T. Hutchis...
2,023
null
1
0
['Physics']
2,308.12908
POLCA: Power Oversubscription in LLM Cloud Providers
['Pratyush Patel', 'Esha Choukse', 'Chaojie Zhang', 'Íñigo Goiri', 'Brijesh Warrier', 'Nithish Mahalingam', 'Ricardo Bianchini']
['cs.DC', 'cs.AR', 'cs.LG']
Recent innovation in large language models (LLMs), and their myriad use-cases have rapidly driven up the compute capacity demand for datacenter GPUs. Several cloud providers and other enterprises have made substantial plans of growth in their datacenters to support these new workloads. One of the key bottleneck resourc...
2023-08-24T16:32:34Z
null
null
null
POLCA: Power Oversubscription in LLM Cloud Providers
['Pratyush Patel', 'Esha Choukse', 'Chaojie Zhang', 'Íñigo Goiri', 'Brijesh Warrier', 'Nithish Mahalingam', 'R. Bianchini']
2,023
arXiv.org
14
52
['Computer Science']
2,308.12950
Code Llama: Open Foundation Models for Code
['Baptiste Rozière', 'Jonas Gehring', 'Fabian Gloeckle', 'Sten Sootla', 'Itai Gat', 'Xiaoqing Ellen Tan', 'Yossi Adi', 'Jingyu Liu', 'Romain Sauvestre', 'Tal Remez', 'Jérémy Rapin', 'Artyom Kozhevnikov', 'Ivan Evtimov', 'Joanna Bitton', 'Manish Bhatt', 'Cristian Canton Ferrer', 'Aaron Grattafiori', 'Wenhan Xiong', 'Ale...
['cs.CL']
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of a...
2023-08-24T17:39:13Z
null
null
null
null
null
null
null
null
null
null
2,308.12966
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
['Jinze Bai', 'Shuai Bai', 'Shusheng Yang', 'Shijie Wang', 'Sinan Tan', 'Peng Wang', 'Junyang Lin', 'Chang Zhou', 'Jingren Zhou']
['cs.CV', 'cs.CL']
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3...
2023-08-24T17:59:17Z
Code, demo and models are available at https://github.com/QwenLM/Qwen-VL
null
null
null
null
null
null
null
null
null
2,308.12967
NeO 360: Neural Fields for Sparse View Synthesis of Outdoor Scenes
['Muhammad Zubair Irshad', 'Sergey Zakharov', 'Katherine Liu', 'Vitor Guizilini', 'Thomas Kollar', 'Adrien Gaidon', 'Zsolt Kira', 'Rares Ambrus']
['cs.CV', 'cs.AI', 'cs.LG']
Recent implicit neural representations have shown great results for novel view synthesis. However, existing methods require expensive per-scene optimization from many views hence limiting their application to real-world unbounded urban settings where the objects of interest or backgrounds are observed from very few vie...
2023-08-24T17:59:50Z
Accepted to International Conference on Computer Vision (ICCV), 2023. Project page: https://zubair-irshad.github.io/projects/neo360.html
null
null
null
null
null
null
null
null
null
2,308.13032
Financial News Analytics Using Fine-Tuned Llama 2 GPT Model
['Bohdan M. Pavlyshenko']
['cs.CL', 'cs.AI', 'cs.CE', 'cs.IR', 'cs.LG']
The paper considers the possibility to fine-tune Llama 2 GPT large language model (LLM) for the multitask analysis of financial news. For fine-tuning, the PEFT/LoRA based approach was used. In the study, the model was fine-tuned for the following tasks: analysing a text from financial market perspectives, highlighting ...
2023-08-24T18:58:10Z
null
null
null
Financial News Analytics Using Fine-Tuned Llama 2 GPT Model
['Bohdan M. Pavlyshenko']
2,023
arXiv.org
20
15
['Computer Science']
2,308.13093
EgoBlur: Responsible Innovation in Aria
['Nikhil Raina', 'Guruprasad Somasundaram', 'Kang Zheng', 'Sagar Miglani', 'Steve Saarinen', 'Jeff Meissner', 'Mark Schwesinger', 'Luis Pesqueira', 'Ishita Prasad', 'Edward Miller', 'Prince Gupta', 'Mingfei Yan', 'Richard Newcombe', 'Carl Ren', 'Omkar M Parkhi']
['cs.CV']
Project Aria pushes the frontiers of Egocentric AI with large-scale real-world data collection using purposely designed glasses with privacy first approach. To protect the privacy of bystanders being recorded by the glasses, our research protocols are designed to ensure recorded video is processed by an AI anonymizatio...
2023-08-24T21:36:11Z
null
null
null
null
null
null
null
null
null
null
2,308.13116
Sentence Embedding Models for Ancient Greek Using Multilingual Knowledge Distillation
['Kevin Krahn', 'Derrick Tate', 'Andrew C. Lamicela']
['cs.CL', 'I.2.7']
Contextual language models have been trained on Classical languages, including Ancient Greek and Latin, for tasks such as lemmatization, morphological tagging, part of speech tagging, authorship attribution, and detection of scribal errors. However, high-quality sentence embedding models for these historical languages ...
2023-08-24T23:38:44Z
Paper accepted for publication at the First Workshop on Ancient Language Processing (ALP) 2023; 10 pages, 3 figures, 9 tables
null
null
null
null
null
null
null
null
null
2,308.13137
OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models
['Wenqi Shao', 'Mengzhao Chen', 'Zhaoyang Zhang', 'Peng Xu', 'Lirui Zhao', 'Zhiqian Li', 'Kaipeng Zhang', 'Peng Gao', 'Yu Qiao', 'Ping Luo']
['cs.LG', 'cs.CL']
Large language models (LLMs) have revolutionized natural language processing tasks. However, their practical deployment is hindered by their immense memory and computation requirements. Although recent post-training quantization (PTQ) methods are effective in reducing memory footprint and improving the computational ef...
2023-08-25T02:28:35Z
ICLR 2024 Camera Ready
null
null
null
null
null
null
null
null
null
2,308.13177
How to Evaluate the Generalization of Detection? A Benchmark for Comprehensive Open-Vocabulary Detection
['Yiyang Yao', 'Peng Liu', 'Tiancheng Zhao', 'Qianqian Zhang', 'Jiajia Liao', 'Chunxin Fang', 'Kyusong Lee', 'Qing Wang']
['cs.CV', 'cs.CL']
Object detection (OD) in computer vision has made significant progress in recent years, transitioning from closed-set labels to open-vocabulary detection (OVD) based on large-scale vision-language pre-training (VLP). However, current evaluation methods and datasets are limited to testing generalization over object type...
2023-08-25T04:54:32Z
Long paper accepted at AAAI 2024
null
null
How to Evaluate the Generalization of Detection? A Benchmark for Comprehensive Open-Vocabulary Detection
['Yi Yao', 'Peng Liu', 'Tiancheng Zhao', 'Qianqian Zhang', 'Jiajia Liao', 'Chunxin Fang', 'Kyusong Lee', 'Qing Wang']
2,023
AAAI Conference on Artificial Intelligence
13
35
['Computer Science']
2,308.13387
Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs
['Yuxia Wang', 'Haonan Li', 'Xudong Han', 'Preslav Nakov', 'Timothy Baldwin']
['cs.CL']
With the rapid evolution of large language models (LLMs), new and hard-to-predict harmful capabilities are emerging. This requires developers to be able to identify risks through the evaluation of "dangerous capabilities" in order to responsibly deploy LLMs. In this work, we collect the first open-source dataset to eva...
2023-08-25T14:02:12Z
18 pages, 9 figures, 11 tables
null
null
null
null
null
null
null
null
null
2,308.13418
Nougat: Neural Optical Understanding for Academic Documents
['Lukas Blecher', 'Guillem Cucurull', 'Thomas Scialom', 'Robert Stojnic']
['cs.LG', 'cs.CV']
Scientific knowledge is predominantly stored in books and scientific journals, often in the form of PDFs. However, the PDF format leads to a loss of semantic information, particularly for mathematical expressions. We propose Nougat (Neural Optical Understanding for Academic Documents), a Visual Transformer model that p...
2023-08-25T15:03:36Z
17 pages, 10 figures
null
null
Nougat: Neural Optical Understanding for Academic Documents
['Lukas Blecher', 'Guillem Cucurull', 'Thomas Scialom', 'Robert Stojnic']
2,023
International Conference on Learning Representations
120
49
['Computer Science']
2,308.13437
Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models
['Chi Chen', 'Ruoyu Qin', 'Fuwen Luo', 'Xiaoyue Mi', 'Peng Li', 'Maosong Sun', 'Yang Liu']
['cs.CV']
Recently, Multimodal Large Language Models (MLLMs) that enable Large Language Models (LLMs) to interpret images through visual instruction tuning have achieved significant success. However, existing visual instruction tuning methods only utilize image-language instruction data to align the language and image modalities...
2023-08-25T15:33:47Z
null
null
null
Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models
['Chi Chen', 'Ruoyu Qin', 'Fuwen Luo', 'Xiaoyue Mi', 'Peng Li', 'Maosong Sun', 'Yang Liu']
2,023
arXiv.org
45
34
['Computer Science']
2,308.13449
The Poison of Alignment
['Aibek Bekbayev', 'Sungbae Chun', 'Yerzat Dulat', 'James Yamazaki']
['cs.CL']
From the perspective of content safety issues, alignment has shown to limit large language models' (LLMs) harmful content generation. This intentional method of reinforcing models to not respond to certain user inputs seem to be present in many modern open-source instruction tuning datasets such as OpenAssistant or Gua...
2023-08-25T15:51:15Z
null
null
null
The Poison of Alignment
['Aibek Bekbayev', 'Sungbae Chun', 'Yerzat Dulat', 'James Yamazaki']
2,023
arXiv.org
9
28
['Computer Science']
2,308.14024
Balanced Representation Learning for Long-tailed Skeleton-based Action Recognition
['Hongda Liu', 'Yunlong Wang', 'Min Ren', 'Junxing Hu', 'Zhengquan Luo', 'Guangqi Hou', 'Zhenan Sun']
['cs.CV']
Skeleton-based action recognition has recently made significant progress. However, data imbalance is still a great challenge in real-world scenarios. The performance of current action recognition algorithms declines sharply when training data suffers from heavy class imbalance. The imbalanced data actually degrades the...
2023-08-27T07:25:51Z
Accepted by Machine Intelligence Research https://link.springer.com/article/10.1007/s11633-023-1487-8
null
10.1007/s11633-023-1487-8
null
null
null
null
null
null
null
2,308.14280
FonMTL: Towards Multitask Learning for the Fon Language
['Bonaventure F. P. Dossou', 'Iffanice Houndayi', 'Pamely Zantou', 'Gilles Hacheme']
['cs.CL', 'cs.AI']
The Fon language, spoken by an average 2 million of people, is a truly low-resourced African language, with a limited online presence, and existing datasets (just to name but a few). Multitask learning is a learning paradigm that aims to improve the generalization capacity of a model by sharing knowledge across differe...
2023-08-28T03:26:21Z
Accepted at WiNLP workshop, co-located at EMNLP 2023
null
null
FonMTL: Towards Multitask Learning for the Fon Language
['Bonaventure F. P. Dossou', 'Iffanice B. Houndayi', 'Pamely Zantou', 'Gilles Hacheme']
2,023
arXiv.org
0
29
['Computer Science']
2,308.14346
DISC-MedLLM: Bridging General Large Language Models and Real-World Medical Consultation
['Zhijie Bao', 'Wei Chen', 'Shengze Xiao', 'Kuang Ren', 'Jiaao Wu', 'Cheng Zhong', 'Jiajie Peng', 'Xuanjing Huang', 'Zhongyu Wei']
['cs.CL', 'cs.AI']
We propose DISC-MedLLM, a comprehensive solution that leverages Large Language Models (LLMs) to provide accurate and truthful medical response in end-to-end conversational healthcare services. To construct high-quality Supervised Fine-Tuning (SFT) datasets, we employ three strategies: utilizing medical knowledge-graphs...
2023-08-28T06:41:49Z
Work in progress
null
null
DISC-MedLLM: Bridging General Large Language Models and Real-World Medical Consultation
['Zhijie Bao', 'Wei Chen', 'Shengze Xiao', 'Kuang Ren', 'Jiaao Wu', 'Cheng Zhong', 'J. Peng', 'Xuanjing Huang', 'Zhongyu Wei']
2,023
arXiv.org
84
50
['Computer Science']
2,308.14469
Pixel-Aware Stable Diffusion for Realistic Image Super-resolution and Personalized Stylization
['Tao Yang', 'Rongyuan Wu', 'Peiran Ren', 'Xuansong Xie', 'Lei Zhang']
['cs.CV']
Diffusion models have demonstrated impressive performance in various image generation, editing, enhancement and translation tasks. In particular, the pre-trained text-to-image stable diffusion models provide a potential solution to the challenging realistic image super-resolution (Real-ISR) and image stylization proble...
2023-08-28T10:15:57Z
null
The European Conference on Computer Vision (ECCV) 2024
null
null
null
null
null
null
null
null
2,308.14508
LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding
['Yushi Bai', 'Xin Lv', 'Jiajie Zhang', 'Hongchang Lyu', 'Jiankai Tang', 'Zhidian Huang', 'Zhengxiao Du', 'Xiao Liu', 'Aohan Zeng', 'Lei Hou', 'Yuxiao Dong', 'Jie Tang', 'Juanzi Li']
['cs.CL']
Although large language models (LLMs) demonstrate impressive performance for many language tasks, most of them can only handle texts a few thousand tokens long, limiting their applications on longer sequence inputs, such as books, reports, and codebases. Recent works have proposed methods to improve LLMs' long context ...
2023-08-28T11:53:40Z
ACL 2024
null
null
LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding
['Yushi Bai', 'Xin Lv', 'Jiajie Zhang', 'Hong Lyu', 'Jiankai Tang', 'Zhidian Huang', 'Zhengxiao Du', 'Xiao Liu', 'Aohan Zeng', 'Lei Hou', 'Yuxiao Dong', 'Jie Tang', 'Juanzi Li']
2,023
Annual Meeting of the Association for Computational Linguistics
605
69
['Computer Science']
2,308.14669
ANER: Arabic and Arabizi Named Entity Recognition using Transformer-Based Approach
['Abdelrahman "Boda" Sadallah', 'Omar Ahmed', 'Shimaa Mohamed', 'Omar Hatem', 'Doaa Hesham', 'Ahmed H. Yousef']
['cs.CL', 'cs.AI']
One of the main tasks of Natural Language Processing (NLP) is Named Entity Recognition (NER). It is used in many applications and can also serve as an intermediate step for other tasks. We present ANER, a web-based named entity recognizer for the Arabic and Arabizi languages. The model is built upon BERT, which is ...
2023-08-28T15:54:48Z
null
null
10.1109/IMSA58542.2023.10217635
ANER: Arabic and Arabizi Named Entity Recognition using Transformer-Based Approach
['A. Sadallah', 'Omar Ahmed', 'Shimaa S. Mohamed', 'Omar Hatem', 'Doaa Hesham', 'A. Yousef']
2,023
Internet, Multimedia Systems and Applications
2
17
['Computer Science']
2,308.14752
AI Deception: A Survey of Examples, Risks, and Potential Solutions
['Peter S. Park', 'Simon Goldstein', "Aidan O'Gara", 'Michael Chen', 'Dan Hendrycks']
['cs.CY', 'cs.AI', 'cs.HC']
This paper argues that a range of current AI systems have learned how to deceive humans. We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth. We first survey empirical examples of AI deception, discussing both special-use AI systems (including Meta's CIC...
2023-08-28T17:59:35Z
18 pages (not including executive summary, references, and appendix), six figures
null
null
null
null
null
null
null
null
null
2,308.14920
Matbench Discovery -- A framework to evaluate machine learning crystal stability predictions
['Janosh Riebesell', 'Rhys E. A. Goodall', 'Philipp Benner', 'Yuan Chiang', 'Bowen Deng', 'Gerbrand Ceder', 'Mark Asta', 'Alpha A. Lee', 'Anubhav Jain', 'Kristin A. Persson']
['cond-mat.mtrl-sci', 'cs.LG']
The rapid adoption of machine learning (ML) in domain sciences necessitates best practices and standardized benchmarking for performance evaluation. We present Matbench Discovery, an evaluation framework for ML energy models, applied as pre-filters for high-throughput searches of stable inorganic crystals. This framewo...
2023-08-28T22:29:57Z
Please see online leaderboard at: https://matbench-discovery.materialsproject.org/
null
null
null
null
null
null
null
null
null
2,308.15070
DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior
['Xinqi Lin', 'Jingwen He', 'Ziyan Chen', 'Zhaoyang Lyu', 'Bo Dai', 'Fanghua Yu', 'Wanli Ouyang', 'Yu Qiao', 'Chao Dong']
['cs.CV']
We present DiffBIR, a general restoration pipeline that can handle different blind image restoration tasks in a unified framework. DiffBIR decouples the blind image restoration problem into two stages: 1) degradation removal: removing image-independent content; 2) information regeneration: generating the lost image conte...
2023-08-29T07:11:52Z
null
null
null
null
null
null
null
null
null
null
2,308.15085
Learning to Upsample by Learning to Sample
['Wenze Liu', 'Hao Lu', 'Hongtao Fu', 'Zhiguo Cao']
['cs.CV']
We present DySample, an ultra-lightweight and effective dynamic upsampler. While impressive performance gains have been witnessed from recent kernel-based dynamic upsamplers such as CARAFE, FADE, and SAPA, they introduce much workload, mostly due to the time-consuming dynamic convolution and the additional sub-network ...
2023-08-29T07:50:11Z
Accepted by ICCV 2023
null
null
null
null
null
null
null
null
null
2,308.15366
AnomalyGPT: Detecting Industrial Anomalies Using Large Vision-Language Models
['Zhaopeng Gu', 'Bingke Zhu', 'Guibo Zhu', 'Yingying Chen', 'Ming Tang', 'Jinqiao Wang']
['cs.CV']
Large Vision-Language Models (LVLMs) such as MiniGPT-4 and LLaVA have demonstrated the capability of understanding images and achieved remarkable performance in various visual tasks. Despite their strong abilities in recognizing common objects due to extensive training datasets, they lack specific domain knowledge and ...
2023-08-29T15:02:53Z
Accepted by AAAI 2024; Project page: https://anomalygpt.github.io
null
null
AnomalyGPT: Detecting Industrial Anomalies using Large Vision-Language Models
['Zhaopeng Gu', 'Bingke Zhu', 'Guibo Zhu', 'Yingying Chen', 'Ming Tang', 'Jinqiao Wang']
2,023
AAAI Conference on Artificial Intelligence
117
41
['Computer Science']
2,308.15777
DeFTAN-II: Efficient Multichannel Speech Enhancement with Subgroup Processing
['Dongheon Lee', 'Jung-Woo Choi']
['eess.AS', 'eess.SP']
In this work, we present DeFTAN-II, an efficient multichannel speech enhancement model based on transformer architecture and subgroup processing. Despite the success of transformers in speech enhancement, they face challenges in capturing local relations, reducing the high computational complexity, and lowering memory ...
2023-08-30T06:08:27Z
13 pages, 6 figures, submitted to IEEE/ACM Trans. Audio, Speech, Lang. Process
null
null
null
null
null
null
null
null
null
2,308.15812
Peering Through Preferences: Unraveling Feedback Acquisition for Aligning Large Language Models
['Hritik Bansal', 'John Dang', 'Aditya Grover']
['cs.LG', 'cs.AI', 'cs.CL']
Aligning large language models (LLMs) with human values and intents critically involves the use of human or AI feedback. While dense feedback annotations are expensive to acquire and integrate, sparse feedback presents a structural design choice between ratings (e.g., score Response A on a scale of 1-7) and rankings (e...
2023-08-30T07:35:32Z
31 pages, Accepted to ICLR 2024
null
null
Peering Through Preferences: Unraveling Feedback Acquisition for Aligning Large Language Models
['Hritik Bansal', 'John Dang', 'Aditya Grover']
2,023
International Conference on Learning Representations
21
63
['Computer Science']
2,308.16137
LM-Infinite: Zero-Shot Extreme Length Generalization for Large Language Models
['Chi Han', 'Qifan Wang', 'Hao Peng', 'Wenhan Xiong', 'Yu Chen', 'Heng Ji', 'Sinong Wang']
['cs.CL', 'cs.AI']
Today's large language models (LLMs) typically train on short text segments (e.g., <4K tokens) due to the quadratic complexity of their Transformer architectures. As a result, their performance suffers drastically on inputs longer than those encountered during training, substantially limiting their applications in real...
2023-08-30T16:47:51Z
NAACL 2024 Outstanding paper, 9 pages, 6 figures
null
null
null
null
null
null
null
null
null
2,308.16149
Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models
['Neha Sengupta', 'Sunil Kumar Sahu', 'Bokang Jia', 'Satheesh Katipomu', 'Haonan Li', 'Fajri Koto', 'William Marshall', 'Gurpreet Gosal', 'Cynthia Liu', 'Zhiming Chen', 'Osama Mohammed Afzal', 'Samta Kamboj', 'Onkar Pandit', 'Rahul Pal', 'Lalit Pradhan', 'Zain Muhammad Mujahid', 'Massa Baali', 'Xudong Han', 'Sondos Mah...
['cs.CL', 'cs.AI', 'cs.LG', '68T50', 'F.2.2; I.2.7']
We introduce Jais and Jais-chat, new state-of-the-art Arabic-centric foundation and instruction-tuned open generative large language models (LLMs). The models are based on the GPT-3 decoder-only architecture and are pretrained on a mixture of Arabic and English texts, including source code in various programming langua...
2023-08-30T17:07:17Z
Arabic-centric, foundation model, large-language model, LLM, generative model, instruction-tuned, Jais, Jais-chat
null
null
null
null
null
null
null
null
null