| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2310.16127 | Octopus: A Multitask Model and Toolkit for Arabic Natural Language Generation | ['AbdelRahim Elmadany', 'El Moatez Billah Nagoudi', 'Muhammad Abdul-Mageed'] | ['cs.CL'] | Understanding Arabic text and generating human-like responses is a challenging endeavor. While many researchers have proposed models and solutions for individual problems, there is an acute shortage of a comprehensive Arabic natural language generation toolkit that is capable of handling a wide range of tasks. In this ... | 2023-10-24T19:06:55Z | null | null | null | Octopus: A Multitask Model and Toolkit for Arabic Natural Language Generation | ['AbdelRahim Elmadany', 'El Moatez Billah Nagoudi', 'M. Abdul-Mageed'] | 2023 | ARABICNLP | 12 | 52 | ['Computer Science'] |
| 2310.16225 | CleanCoNLL: A Nearly Noise-Free Named Entity Recognition Dataset | ['Susanna Rücker', 'Alan Akbik'] | ['cs.CL', 'cs.AI', 'cs.LG'] | The CoNLL-03 corpus is arguably the most well-known and utilized benchmark dataset for named entity recognition (NER). However, prior works found significant numbers of annotation errors, incompleteness, and inconsistencies in the data. This poses challenges to objectively comparing NER approaches and analyzing their e... | 2023-10-24T22:34:43Z | EMNLP 2023 camera-ready version | null | null | null | null | null | null | null | null | null |
| 2310.16226 | TiC-CLIP: Continual Training of CLIP Models | ['Saurabh Garg', 'Mehrdad Farajtabar', 'Hadi Pouransari', 'Raviteja Vemulapalli', 'Sachin Mehta', 'Oncel Tuzel', 'Vaishaal Shankar', 'Fartash Faghri'] | ['cs.CV', 'cs.CL', 'cs.LG'] | Keeping large foundation models up to date on latest data is inherently expensive. To avoid the prohibitive costs of constantly retraining, it is imperative to continually train these models. This problem is exacerbated by the lack of any large scale continual learning benchmarks or baselines. We introduce the first se... | 2023-10-24T22:41:14Z | ICLR 2024 | null | null | TiC-CLIP: Continual Training of CLIP Models | ['Saurabh Garg', 'Mehrdad Farajtabar', 'Hadi Pouransari', 'Raviteja Vemulapalli', 'Sachin Mehta', 'Oncel Tuzel', 'Vaishaal Shankar', 'Fartash Faghri'] | 2023 | International Conference on Learning Representations | 31 | 107 | ['Computer Science'] |
| 2310.16248 | GlotLID: Language Identification for Low-Resource Languages | ['Amir Hossein Kargaran', 'Ayyoob Imani', 'François Yvon', 'Hinrich Schütze'] | ['cs.CL'] | Several recent papers have published good solutions for language identification (LID) for about 300 high-resource and medium-resource languages. However, there is no LID available that (i) covers a wide range of low-resource languages, (ii) is rigorously evaluated and reliable and (iii) efficient and easy to use. Here,... | 2023-10-24T23:45:57Z | EMNLP 2023 | null | 10.18653/v1/2023.findings-emnlp.410 | GlotLID: Language Identification for Low-Resource Languages | ['Amir Hossein Kargaran', 'Ayyoob Imani', 'François Yvon', 'Hinrich Schütze'] | 2023 | Conference on Empirical Methods in Natural Language Processing | 15 | 85 | ['Computer Science'] |
| 2310.16338 | Generative Pre-training for Speech with Flow Matching | ['Alexander H. Liu', 'Matt Le', 'Apoorv Vyas', 'Bowen Shi', 'Andros Tjandra', 'Wei-Ning Hsu'] | ['eess.AS', 'cs.CL', 'cs.LG', 'cs.SD'] | Generative models have gained more and more attention in recent years for their remarkable success in tasks that required estimating and sampling data distribution to generate high-fidelity synthetic data. In speech, text-to-speech synthesis and neural vocoder are good examples where generative models have shined. Whil... | 2023-10-25T03:40:50Z | ICLR 2024 | null | null | null | null | null | null | null | null | null |
| 2310.16450 | CLEX: Continuous Length Extrapolation for Large Language Models | ['Guanzheng Chen', 'Xin Li', 'Zaiqiao Meng', 'Shangsong Liang', 'Lidong Bing'] | ['cs.CL'] | Transformer-based Large Language Models (LLMs) are pioneering advances in many natural language processing tasks, however, their exceptional capabilities are restricted within the preset context window of Transformer. Position Embedding (PE) scaling methods, while effective in extending the context window to a specific... | 2023-10-25T08:13:02Z | ICLR 2024 | null | null | CLEX: Continuous Length Extrapolation for Large Language Models | ['Guanzheng Chen', 'Xin Li', 'Zaiqiao Meng', 'Shangsong Liang', 'Li Bing'] | 2023 | International Conference on Learning Representations | 32 | 30 | ['Computer Science'] |
| 2310.16517 | OccuQuest: Mitigating Occupational Bias for Inclusive Large Language Models | ['Mingfeng Xue', 'Dayiheng Liu', 'Kexin Yang', 'Guanting Dong', 'Wenqiang Lei', 'Zheng Yuan', 'Chang Zhou', 'Jingren Zhou'] | ['cs.CL'] | The emergence of large language models (LLMs) has revolutionized natural language processing tasks. However, existing instruction-tuning datasets suffer from occupational bias: the majority of data relates to only a few occupations, which hampers the instruction-tuned LLMs to generate helpful responses to professional ... | 2023-10-25T10:06:17Z | null | null | null | null | null | null | null | null | null | null |
| 2310.16609 | Back Transcription as a Method for Evaluating Robustness of Natural Language Understanding Models to Speech Recognition Errors | ['Marek Kubis', 'Paweł Skórzewski', 'Marcin Sowański', 'Tomasz Ziętkiewicz'] | ['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS'] | In a spoken dialogue system, an NLU model is preceded by a speech recognition system that can deteriorate the performance of natural language understanding. This paper proposes a method for investigating the impact of speech recognition errors on the performance of natural language understanding models. The proposed me... | 2023-10-25T13:07:07Z | Accepted to EMNLP 2023 main conference | null | null | null | null | null | null | null | null | null |
| 2310.16621 | ArTST: Arabic Text and Speech Transformer | ['Hawau Olamide Toyin', 'Amirbek Djanibekov', 'Ajinkya Kulkarni', 'Hanan Aldarmaki'] | ['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS'] | We present ArTST, a pre-trained Arabic text and speech transformer for supporting open-source speech technologies for the Arabic language. The model architecture follows the unified-modal framework, SpeechT5, that was recently released for English, and is focused on Modern Standard Arabic (MSA), with plans to extend th... | 2023-10-25T13:20:54Z | 11 pages, 1 figure, SIGARAB ArabicNLP 2023 | null | null | ArTST: Arabic Text and Speech Transformer | ['Hawau Olamide Toyin', 'Amirbek Djanibekov', 'Ajinkya Kulkarni', 'Hanan Aldarmaki'] | 2023 | ARABICNLP | 10 | 34 | ['Computer Science', 'Engineering'] |
| 2310.16713 | SkyMath: Technical Report | ['Liu Yang', 'Haihua Yang', 'Wenjun Cheng', 'Lei Lin', 'Chenxia Li', 'Yifu Chen', 'Lunan Liu', 'Jianfei Pan', 'Tianwen Wei', 'Biye Li', 'Liang Zhao', 'Lijie Wang', 'Bo Zhu', 'Guoliang Li', 'Xuejie Wu', 'Xilin Luo', 'Rui Hu'] | ['cs.CL', 'cs.AI'] | Large language models (LLMs) have shown great potential to solve varieties of natural language processing (NLP) tasks, including mathematical reasoning. In this work, we present SkyMath, a large language model for mathematics with 13 billion parameters. By applying self-compare fine-tuning, we have enhanced mathematica... | 2023-10-25T15:34:55Z | null | null | null | null | null | null | null | null | null | null |
| 2310.16825 | CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images | ['Aaron Gokaslan', 'A. Feder Cooper', 'Jasmine Collins', 'Landan Seguin', 'Austin Jacobson', 'Mihir Patel', 'Jonathan Frankle', 'Cory Stephenson', 'Volodymyr Kuleshov'] | ['cs.CV', 'cs.CY'] | We assemble a dataset of Creative-Commons-licensed (CC) images, which we use to train a set of open diffusion models that are qualitatively competitive with Stable Diffusion 2 (SD2). This task presents two challenges: (1) high-resolution CC images lack the captions necessary to train text-to-image generative models; (2... | 2023-10-25T17:56:07Z | null | null | null | null | null | null | null | null | null | null |
| 2310.16828 | TD-MPC2: Scalable, Robust World Models for Continuous Control | ['Nicklas Hansen', 'Hao Su', 'Xiaolong Wang'] | ['cs.LG', 'cs.AI', 'cs.CV', 'cs.RO'] | TD-MPC is a model-based reinforcement learning (RL) algorithm that performs local trajectory optimization in the latent space of a learned implicit (decoder-free) world model. In this work, we present TD-MPC2: a series of improvements upon the TD-MPC algorithm. We demonstrate that TD-MPC2 improves significantly over ba... | 2023-10-25T17:57:07Z | ICLR 2024. Explore videos, models, data, code, and more at https://tdmpc2.com | null | null | TD-MPC2: Scalable, Robust World Models for Continuous Control | ['Nicklas Hansen', 'Hao Su', 'Xiaolong Wang'] | 2023 | International Conference on Learning Representations | 159 | 66 | ['Computer Science'] |
| 2310.16834 | Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution | ['Aaron Lou', 'Chenlin Meng', 'Stefano Ermon'] | ['stat.ML', 'cs.CL', 'cs.LG'] | Despite their groundbreaking performance for many generative modeling tasks, diffusion models have fallen short on discrete data domains such as natural language. Crucially, standard diffusion models rely on the well-established theory of score matching, but efforts to generalize this to discrete structures have not yi... | 2023-10-25T17:59:12Z | ICML 2024 Oral. Code at https://github.com/louaaron/Score-Entropy-Discrete-Diffusion | null | null | null | null | null | null | null | null | null |
| 2310.16944 | Zephyr: Direct Distillation of LM Alignment | ['Lewis Tunstall', 'Edward Beeching', 'Nathan Lambert', 'Nazneen Rajani', 'Kashif Rasul', 'Younes Belkada', 'Shengyi Huang', 'Leandro von Werra', 'Clémentine Fourrier', 'Nathan Habib', 'Nathan Sarrazin', 'Omar Sanseviero', 'Alexander M. Rush', 'Thomas Wolf'] | ['cs.LG', 'cs.CL'] | We aim to produce a smaller language model that is aligned to user intent. Previous research has shown that applying distilled supervised fine-tuning (dSFT) on larger models significantly improves task accuracy; however, these models are unaligned, i.e. they do not respond well to natural prompts. To distill this prope... | 2023-10-25T19:25:16Z | null | null | null | null | null | null | null | null | null | null |
| 2310.17025 | netFound: Foundation Model for Network Security | ['Satyandra Guthula', 'Roman Beltiukov', 'Navya Battula', 'Wenbo Guo', 'Arpit Gupta', 'Inder Monga'] | ['cs.NI', 'cs.AI'] | Developing generalizable ML-based solutions for disparate learning problems in network security is highly desired. However, despite a rich history of applying ML to network security, most existing solutions lack generalizability. This lack of progress can be attributed to an overreliance on supervised learning techniqu... | 2023-10-25T22:04:57Z | null | null | null | null | null | null | null | null | null | null |
| 2310.17389 | ToxicChat: Unveiling Hidden Challenges of Toxicity Detection in Real-World User-AI Conversation | ['Zi Lin', 'Zihan Wang', 'Yongqi Tong', 'Yangkun Wang', 'Yuxin Guo', 'Yujia Wang', 'Jingbo Shang'] | ['cs.CL', 'cs.AI'] | Despite remarkable advances that large language models have achieved in chatbots, maintaining a non-toxic user-AI interactive environment has become increasingly critical nowadays. However, previous efforts in toxicity detection have been mostly based on benchmarks derived from social media content, leaving the unique ... | 2023-10-26T13:35:41Z | null | EMNLP findings 2023 | null | null | null | null | null | null | null | null |
| 2310.17631 | JudgeLM: Fine-tuned Large Language Models are Scalable Judges | ['Lianghui Zhu', 'Xinggang Wang', 'Xinlong Wang'] | ['cs.CL', 'cs.AI'] | Evaluating Large Language Models (LLMs) in open-ended scenarios is challenging because existing benchmarks and metrics can not measure them comprehensively. To address this problem, we propose to fine-tune LLMs as scalable judges (JudgeLM) to evaluate LLMs efficiently and effectively in open-ended benchmarks. We first ... | 2023-10-26T17:48:58Z | JudgeLM is accepted by ICLR2025. Code is available at https://github.com/baaivision/JudgeLM | null | null | JudgeLM: Fine-tuned Large Language Models are Scalable Judges | ['Lianghui Zhu', 'Xinggang Wang', 'Xinlong Wang'] | 2023 | International Conference on Learning Representations | 143 | 56 | ['Computer Science'] |
| 2310.17644 | torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on NLP | ['Yoshitomo Matsubara'] | ['cs.CL', 'cs.CV', 'cs.LG'] | Reproducibility in scientific work has been becoming increasingly important in research communities such as machine learning, natural language processing, and computer vision communities due to the rapid development of the research domains supported by recent advances in deep learning. In this work, we present a signif... | 2023-10-26T17:57:15Z | Accepted at the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS) at EMNLP 2023 | Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023) | 10.18653/v1/2023.nlposs-1.18 | torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on NLP | ['Yoshitomo Matsubara'] | 2023 | NLPOSS | 1 | 86 | ['Computer Science'] |
| 2310.17953 | Developing a Multilingual Dataset and Evaluation Metrics for Code-Switching: A Focus on Hong Kong's Polylingual Dynamics | ['Peng Xie', 'Kani Chen'] | ['cs.SD', 'cs.CL', 'eess.AS'] | The existing audio datasets are predominantly tailored towards single languages, overlooking the complex linguistic behaviors of multilingual communities that engage in code-switching. This practice, where individuals frequently mix two or more languages in their daily interactions, is particularly prevalent in multili... | 2023-10-27T08:01:55Z | null | null | null | null | null | null | null | null | null | null |
| 2310.18336 | AITA Generating Moral Judgements of the Crowd with Reasoning | ['Osama Bsher', 'Ameer Sabri'] | ['cs.CL', 'cs.LG'] | Morality is a fundamental aspect of human behavior and ethics, influencing how we interact with each other and the world around us. When faced with a moral dilemma, a person's ability to make clear moral judgments can be clouded. Due to many factors such as personal biases, emotions and situational factors people can f... | 2023-10-21T10:27:22Z | null | null | null | AITA Generating Moral Judgements of the Crowd with Reasoning | ['Osama Bsher', 'Ameer Sabri'] | 2023 | arXiv.org | 0 | 25 | ['Computer Science'] |
| 2310.18341 | CXR-LLAVA: a multimodal large language model for interpreting chest X-ray images | ['Seowoo Lee', 'Jiwon Youn', 'Hyungjin Kim', 'Mansu Kim', 'Soon Ho Yoon'] | ['cs.CL', 'cs.AI'] | Purpose: This study aimed to develop an open-source multimodal large language model (CXR-LLAVA) for interpreting chest X-ray images (CXRs), leveraging recent advances in large language models (LLMs) to potentially replicate the image interpretation skills of human radiologists Materials and Methods: For training, we co... | 2023-10-22T06:22:37Z | null | null | null | null | null | null | null | null | null | null |
| 2310.18361 | Clinical Decision Support System for Unani Medicine Practitioners | ['Haider Sultan', 'Hafiza Farwa Mahmood', 'Noor Fatima', 'Marriyam Nadeem', 'Talha Waheed'] | ['cs.AI'] | Like other fields of Traditional Medicines, Unani Medicines have been found as an effective medical practice for ages. It is still widely used in the subcontinent, particularly in Pakistan and India. However, Unani Medicines Practitioners are lacking modern IT applications in their everyday clinical practices. An Onlin... | 2023-10-24T13:49:18Z | 59 pages, 11 figures, Computer Science Bachelor's Thesis on use of Artificial Intelligence in Clinical Decision Support System for Unani Medicines | null | 10.13140/RG.2.2.15161.54887/1 | null | null | null | null | null | null | null |
| 2310.18547 | Punica: Multi-Tenant LoRA Serving | ['Lequn Chen', 'Zihao Ye', 'Yongji Wu', 'Danyang Zhuo', 'Luis Ceze', 'Arvind Krishnamurthy'] | ['cs.DC', 'cs.LG'] | Low-rank adaptation (LoRA) has become an important and popular method to adapt pre-trained models to specific domains. We present Punica, a system to serve multiple LoRA models in a shared GPU cluster. Punica contains a new CUDA kernel design that allows batching of GPU operations for different LoRA models. This allows... | 2023-10-28T00:33:37Z | null | null | null | null | null | null | null | null | null | null |
| 2310.18653 | Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing | ['Yi Wang', 'Hugo Hernández Hernández', 'Conrad M Albrecht', 'Xiao Xiang Zhu'] | ['cs.CV'] | Self-supervised learning guided by masked image modelling, such as Masked AutoEncoder (MAE), has attracted wide attention for pretraining vision transformers in remote sensing. However, MAE tends to excessively focus on pixel details, thereby limiting the model's capacity for semantic understanding, in particular for n... | 2023-10-28T09:43:13Z | 13 pages, 8 figures | null | null | null | null | null | null | null | null | null |
| 2310.18660 | Foundation Models for Generalist Geospatial Artificial Intelligence | ['Johannes Jakubik', 'Sujit Roy', 'C. E. Phillips', 'Paolo Fraccaro', 'Denys Godwin', 'Bianca Zadrozny', 'Daniela Szwarcman', 'Carlos Gomes', 'Gabby Nyirjesy', 'Blair Edwards', 'Daiki Kimura', 'Naomi Simumba', 'Linsong Chu', 'S. Karthik Mukkavilli', 'Devyani Lambhate', 'Kamal Das', 'Ranjini Bangalore', 'Dario Oliveira'... | ['cs.CV', 'cs.LG'] | Significant progress in the development of highly adaptable and reusable Artificial Intelligence (AI) models is expected to have a significant impact on Earth science and remote sensing. Foundation models are pre-trained on large unlabeled datasets through self-supervision, and then fine-tuned for various downstream ta... | 2023-10-28T10:19:55Z | null | null | null | null | null | null | null | null | null | null |
| 2310.18709 | Audio-Visual Instance Segmentation | ['Ruohao Guo', 'Xianghua Ying', 'Yaru Chen', 'Dantong Niu', 'Guangyao Li', 'Liao Qu', 'Yanyu Qi', 'Jinxing Zhou', 'Bowei Xing', 'Wenzhen Yue', 'Ji Shi', 'Qixun Wang', 'Peiliang Zhang', 'Buwen Liang'] | ['cs.CV', 'cs.LG', 'cs.MM', 'cs.SD', 'eess.AS'] | In this paper, we propose a new multi-modal task, termed audio-visual instance segmentation (AVIS), which aims to simultaneously identify, segment and track individual sounding object instances in audible videos. To facilitate this research, we introduce a high-quality benchmark named AVISeg, containing over 90K instan... | 2023-10-28T13:37:52Z | Accepted by CVPR 2025 | null | null | null | null | null | null | null | null | null |
| 2310.18780 | Laughing Hyena Distillery: Extracting Compact Recurrences From Convolutions | ['Stefano Massaroli', 'Michael Poli', 'Daniel Y. Fu', 'Hermann Kumbong', 'Rom N. Parnichkun', 'Aman Timalsina', 'David W. Romero', 'Quinn McIntyre', 'Beidi Chen', 'Atri Rudra', 'Ce Zhang', 'Christopher Re', 'Stefano Ermon', 'Yoshua Bengio'] | ['cs.LG', 'cs.AI', 'eess.SP'] | Recent advances in attention-free sequence models rely on convolutions as alternatives to the attention operator at the core of Transformers. In particular, long convolution sequence models have achieved state-of-the-art performance in many domains, but incur a significant cost during auto-regressive inference workload... | 2023-10-28T18:40:03Z | null | null | null | null | null | null | null | null | null | null |
| 2310.18961 | AnomalyCLIP: Object-agnostic Prompt Learning for Zero-shot Anomaly Detection | ['Qihang Zhou', 'Guansong Pang', 'Yu Tian', 'Shibo He', 'Jiming Chen'] | ['cs.CV'] | Zero-shot anomaly detection (ZSAD) requires detection models trained using auxiliary data to detect anomalies without any training sample in a target dataset. It is a crucial task when training data is not accessible due to various concerns, eg, data privacy, yet it is challenging since the models need to generalize to... | 2023-10-29T10:03:49Z | Accepted by ICLR 2024 | null | null | null | null | null | null | null | null | null |
| 2310.19102 | Atom: Low-bit Quantization for Efficient and Accurate LLM Serving | ['Yilong Zhao', 'Chien-Yu Lin', 'Kan Zhu', 'Zihao Ye', 'Lequn Chen', 'Size Zheng', 'Luis Ceze', 'Arvind Krishnamurthy', 'Tianqi Chen', 'Baris Kasikci'] | ['cs.LG'] | The growing demand for Large Language Models (LLMs) in applications such as content generation, intelligent chatbots, and sentiment analysis poses considerable challenges for LLM service providers. To efficiently use GPU resources and boost throughput, batching multiple requests has emerged as a popular paradigm; to fu... | 2023-10-29T18:33:05Z | null | null | null | null | null | null | null | null | null | null |
| 2310.19341 | Skywork: A More Open Bilingual Foundation Model | ['Tianwen Wei', 'Liang Zhao', 'Lichang Zhang', 'Bo Zhu', 'Lijie Wang', 'Haihua Yang', 'Biye Li', 'Cheng Cheng', 'Weiwei Lü', 'Rui Hu', 'Chenxia Li', 'Liu Yang', 'Xilin Luo', 'Xuejie Wu', 'Lunan Liu', 'Wenjun Cheng', 'Peng Cheng', 'Jianhao Zhang', 'Xiaoyu Zhang', 'Lei Lin', 'Xiaokun Wang', 'Yutuan Ma', 'Chuanhai Dong', ... | ['cs.CL', 'cs.AI'] | In this technical report, we present Skywork-13B, a family of large language models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both English and Chinese texts. This bilingual foundation model is the most extensively trained and openly published LLMs of comparable size to date. We introduce a two-s... | 2023-10-30T08:31:47Z | null | null | null | null | null | null | null | null | null | null |
| 2310.19349 | Japanese SimCSE Technical Report | ['Hayato Tsukagoshi', 'Ryohei Sasano', 'Koichi Takeda'] | ['cs.CL'] | We report the development of Japanese SimCSE, Japanese sentence embedding models fine-tuned with SimCSE. Since there is a lack of sentence embedding models for Japanese that can be used as a baseline in sentence embedding research, we conducted extensive experiments on Japanese sentence embeddings involving 24 pre-trai... | 2023-10-30T08:43:26Z | null | null | null | null | null | null | null | null | null | null |
| 2310.19512 | VideoCrafter1: Open Diffusion Models for High-Quality Video Generation | ['Haoxin Chen', 'Menghan Xia', 'Yingqing He', 'Yong Zhang', 'Xiaodong Cun', 'Shaoshu Yang', 'Jinbo Xing', 'Yaofang Liu', 'Qifeng Chen', 'Xintao Wang', 'Chao Weng', 'Ying Shan'] | ['cs.CV'] | Video generation has increasingly gained interest in both academia and industry. Although commercial tools can generate plausible videos, there is a limited number of open-source models available for researchers and engineers. In this work, we introduce two diffusion models for high-quality video generation, namely tex... | 2023-10-30T13:12:40Z | Tech Report; Github: https://github.com/AILab-CVC/VideoCrafter Homepage: https://ailab-cvc.github.io/videocrafter/ | null | null | VideoCrafter1: Open Diffusion Models for High-Quality Video Generation | ['Haoxin Chen', 'Menghan Xia', 'Yin-Yin He', 'Yong Zhang', 'Xiaodong Cun', 'Shaoshu Yang', 'Jinbo Xing', 'Yaofang Liu', 'Qifeng Chen', 'Xintao Wang', 'Chao-Liang Weng', 'Ying Shan'] | 2023 | arXiv.org | 314 | 58 | ['Computer Science'] |
| 2310.19727 | Generating Medical Prescriptions with Conditional Transformer | ['Samuel Belkadi', 'Nicolo Micheletti', 'Lifeng Han', 'Warren Del-Pinto', 'Goran Nenadic'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Access to real-world medication prescriptions is essential for medical research and healthcare quality improvement. However, access to real medication prescriptions is often limited due to the sensitive nature of the information expressed. Additionally, manually labelling these instructions for training and fine-tuning... | 2023-10-30T16:53:11Z | Accepted to: Workshop on Synthetic Data Generation with Generative AI (SyntheticData4ML Workshop) at NeurIPS 2023 | null | null | Generating Medical Prescriptions with Conditional Transformer | ['Samuel Belkadi', 'Nicolo Micheletti', 'Lifeng Han', 'Warren Del-Pinto', 'Goran Nenadic'] | 2023 | null | 5 | 30 | ['Computer Science'] |
| 2310.19923 | Jina Embeddings 2: 8192-Token General-Purpose Text Embeddings for Long Documents | ['Michael Günther', 'Jackmin Ong', 'Isabelle Mohr', 'Alaeddine Abdessalem', 'Tanguy Abel', 'Mohammad Kalim Akram', 'Susana Guzman', 'Georgios Mastrapas', 'Saba Sturua', 'Bo Wang', 'Maximilian Werk', 'Nan Wang', 'Han Xiao'] | ['cs.CL', 'cs.AI', 'cs.LG', '68T50', 'I.2.7'] | Text embedding models have emerged as powerful tools for transforming sentences into fixed-sized feature vectors that encapsulate semantic information. While these models are essential for tasks like information retrieval, semantic clustering, and text re-ranking, most existing open-source models, especially those buil... | 2023-10-30T18:35:30Z | 14 pages | null | null | null | null | null | null | null | null | null |
| 2310.20246 | Breaking Language Barriers in Multilingual Mathematical Reasoning: Insights and Observations | ['Nuo Chen', 'Zinan Zheng', 'Ning Wu', 'Ming Gong', 'Dongmei Zhang', 'Jia Li'] | ['cs.CL', 'cs.AI'] | Existing research predominantly focuses on developing powerful language learning models (LLMs) for mathematical reasoning within monolingual languages, with few explorations in preserving efficacy in a multilingual context. To bridge this gap, this paper pioneers exploring and training powerful Multilingual Math Reason... | 2023-10-31T08:09:20Z | Work in Progress | null | null | null | null | null | null | null | null | null |
| 2310.20589 | Increasing The Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building | ['Omar Momen', 'David Arps', 'Laura Kallmeyer'] | ['cs.CL'] | In this paper, we describe our submission to the BabyLM Challenge 2023 shared task on data-efficient language model (LM) pretraining (Warstadt et al., 2023). We train transformer-based masked language models that incorporate unsupervised predictions about hierarchical sentence structure into the model architecture. Con... | 2023-10-31T16:26:36Z | Accepted at the BabyLM shared task at CoNLL 2023 | null | 10.18653/v1/2023.conll-babylm.29 | null | null | null | null | null | null | null |
| 2310.20695 | HAP: Structure-Aware Masked Image Modeling for Human-Centric Perception | ['Junkun Yuan', 'Xinyu Zhang', 'Hao Zhou', 'Jian Wang', 'Zhongwei Qiu', 'Zhiyin Shao', 'Shaofeng Zhang', 'Sifan Long', 'Kun Kuang', 'Kun Yao', 'Junyu Han', 'Errui Ding', 'Lanfen Lin', 'Fei Wu', 'Jingdong Wang'] | ['cs.CV', 'cs.AI'] | Model pre-training is essential in human-centric perception. In this paper, we first introduce masked image modeling (MIM) as a pre-training approach for this task. Upon revisiting the MIM training strategy, we reveal that human structure priors offer significant potential. Motivated by this insight, we further incorpo... | 2023-10-31T17:56:11Z | Accepted by NeurIPS 2023 | null | null | null | null | null | null | null | null | null |
| 2310.20700 | SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction | ['Xinyuan Chen', 'Yaohui Wang', 'Lingjun Zhang', 'Shaobin Zhuang', 'Xin Ma', 'Jiashuo Yu', 'Yali Wang', 'Dahua Lin', 'Yu Qiao', 'Ziwei Liu'] | ['cs.CV'] | Recently video generation has achieved substantial progress with realistic results. Nevertheless, existing AI-generated videos are usually very short clips ("shot-level") depicting a single scene. To deliver a coherent long video ("story-level"), it is desirable to have creative transition and prediction effects across... | 2023-10-31T17:58:17Z | Project page: https://vchitect.github.io/SEINE-project/ | null | null | null | null | null | null | null | null | null |
| 2311.00408 | AdaSent: Efficient Domain-Adapted Sentence Embeddings for Few-Shot Classification | ['Yongxin Huang', 'Kexin Wang', 'Sourav Dutta', 'Raj Nath Patel', 'Goran Glavaš', 'Iryna Gurevych'] | ['cs.CL'] | Recent work has found that few-shot sentence classification based on pre-trained Sentence Encoders (SEs) is efficient, robust, and effective. In this work, we investigate strategies for domain-specialization in the context of few-shot sentence classification with SEs. We first establish that unsupervised Domain-Adaptiv... | 2023-11-01T10:00:15Z | Accepted at EMNLP 2023 Main | null | null | null | null | null | null | null | null | null |
| 2311.00430 | Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling | ['Sanchit Gandhi', 'Patrick von Platen', 'Alexander M. Rush'] | ['cs.CL', 'cs.SD', 'eess.AS'] | As the size of pre-trained speech recognition models increases, running these large models in low-latency or resource-constrained environments becomes challenging. In this work, we leverage pseudo-labelling to assemble a large-scale open-source dataset which we use to distill the Whisper model into a smaller variant, c... | 2023-11-01T10:45:07Z | 30 pages, 2 figures, 25 tables | null | null | null | null | null | null | null | null | null |
| 2311.00571 | LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing | ['Wei-Ge Chen', 'Irina Spiridonova', 'Jianwei Yang', 'Jianfeng Gao', 'Chunyuan Li'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.HC', 'cs.MM'] | LLaVA-Interactive is a research prototype for multimodal human-AI interaction. The system can have multi-turn dialogues with human users by taking multimodal user inputs and generating multimodal responses. Importantly, LLaVA-Interactive goes beyond language prompt, where visual prompt is enabled to align human intents... | 2023-11-01T15:13:43Z | 31 pages, 22 figures, 30M PDF file size; Project Page: https://llava-vl.github.io/llava-interactive/ | null | null | LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing | ['Wei-Ge Chen', 'Irina Spiridonova', 'Jianwei Yang', 'Jianfeng Gao', 'Chun-yue Li'] | 2023 | arXiv.org | 37 | 36 | ['Computer Science'] |
| 2311.00835 | Calibrated Seq2seq Models for Efficient and Generalizable Ultra-fine Entity Typing | ['Yanlin Feng', 'Adithya Pratapa', 'David R Mortensen'] | ['cs.CL'] | Ultra-fine entity typing plays a crucial role in information extraction by predicting fine-grained semantic types for entity mentions in text. However, this task poses significant challenges due to the massive number of entity types in the output space. The current state-of-the-art approaches, based on standard multi-l... | 2023-11-01T20:39:12Z | null | null | null | Calibrated Seq2seq Models for Efficient and Generalizable Ultra-fine Entity Typing | ['Yanlin Feng', 'Adithya Pratapa', 'David R Mortensen'] | 2023 | Conference on Empirical Methods in Natural Language Processing | 5 | 26 | ['Computer Science'] |
| 2311.00926 | M2T2: Multi-Task Masked Transformer for Object-centric Pick and Place | ['Wentao Yuan', 'Adithyavairavan Murali', 'Arsalan Mousavian', 'Dieter Fox'] | ['cs.RO', 'cs.AI', 'cs.CV'] | With the advent of large language models and large-scale robotic datasets, there has been tremendous progress in high-level decision-making for object manipulation. These generic models are able to interpret complex tasks using language commands, but they often have difficulties generalizing to out-of-distribution obje... | 2023-11-02T01:42:52Z | 12 pages, 8 figures, accepted by CoRL 2023 | null | null | null | null | null | null | null | null | null |
2311.01070 | Multilingual DistilWhisper: Efficient Distillation of Multi-task Speech
Models via Language-Specific Experts | ['Thomas Palmeira Ferraz', 'Marcely Zanon Boito', 'Caroline Brun', 'Vassilina Nikoulina'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Whisper is a multitask and multilingual speech model covering 99 languages.
It yields commendable automatic speech recognition (ASR) results in a subset of
its covered languages, but the model still underperforms on a non-negligible
number of under-represented languages, a problem exacerbated in smaller model
versions.... | 2023-11-02T08:37:30Z | Accepted to IEEE ICASSP 2024 | null | null | null | null | null | null | null | null | null |
2311.01751 | EmojiLM: Modeling the New Emoji Language | ['Letian Peng', 'Zilong Wang', 'Hang Liu', 'Zihan Wang', 'Jingbo Shang'] | ['cs.CL'] | With the rapid development of the internet, online social media welcomes
people with different backgrounds through its diverse content. The increasing
usage of emoji becomes a noticeable trend thanks to emoji's rich information
beyond cultural or linguistic borders. However, the current study on emojis is
limited to si... | 2023-11-03T07:06:51Z | null | null | EmojiLM: Modeling the New Emoji Language | ['Letian Peng', 'Zilong Wang', 'Hang Liu', 'Zihan Wang', 'Jingbo Shang'] | 2023 | arXiv.org | 7 | 18 | ['Computer Science'] |
2311.01804 | inkn'hue: Enhancing Manga Colorization from Multiple Priors with
Alignment Multi-Encoder VAE | ['Tawin Jiramahapokee'] | ['cs.CV', 'eess.IV'] | Manga, a form of Japanese comics and distinct visual storytelling, has
captivated readers worldwide. Traditionally presented in black and white,
manga's appeal lies in its ability to convey complex narratives and emotions
through intricate line art and shading. Yet, the desire to experience manga in
vibrant colors has ... | 2023-11-03T09:33:32Z | arXiv preprint. Project page: https://github.com/rossiyareich/inknhue | null | null | null | null | null | null | null | null | null |
2311.02041 | Quantum circuit synthesis with diffusion models | ['Florian Fürrutter', 'Gorka Muñoz-Gil', 'Hans J. Briegel'] | ['quant-ph', 'cs.AI', 'cs.LG'] | Quantum computing has recently emerged as a transformative technology. Yet,
its promised advantages rely on efficiently translating quantum operations into
viable physical realizations. In this work, we use generative machine learning
models, specifically denoising diffusion models (DMs), to facilitate this
transformat... | 2023-11-03T17:17:08Z | Code available at: https://github.com/FlorianFuerrutter/genQC | Nature Machine Intelligence (2024) | 10.1038/s42256-024-00831-9 | Quantum circuit synthesis with diffusion models | ['Florian Fürrutter', 'G. Muñoz-Gil', 'H. Briegel'] | 2023 | Nat. Mac. Intell. | 24 | 55 | ['Computer Science', 'Physics'] |
2311.02303 | MFTCoder: Boosting Code LLMs with Multitask Fine-Tuning | ['Bingchang Liu', 'Chaoyu Chen', 'Cong Liao', 'Zi Gong', 'Huan Wang', 'Zhichao Lei', 'Ming Liang', 'Dajun Chen', 'Min Shen', 'Hailian Zhou', 'Hang Yu', 'Jianguo Li'] | ['cs.LG', 'cs.AI'] | Code LLMs have emerged as a specialized research field, with remarkable
studies dedicated to enhancing model's coding capabilities through fine-tuning
on pre-trained models. Previous fine-tuning approaches were typically tailored
to specific downstream tasks or scenarios, which meant separate fine-tuning for
each task,... | 2023-11-04T02:22:40Z | null | null | null | null | null | null | null | null | null | null |
2311.02401 | BarcodeBERT: Transformers for Biodiversity Analysis | ['Pablo Millan Arias', 'Niousha Sadjadi', 'Monireh Safari', 'ZeMing Gong', 'Austin T. Wang', 'Joakim Bruslund Haurum', 'Iuliia Zarubiieva', 'Dirk Steinke', 'Lila Kari', 'Angel X. Chang', 'Scott C. Lowe', 'Graham W. Taylor'] | ['cs.LG'] | In the global challenge of understanding and characterizing biodiversity,
short species-specific genomic sequences known as DNA barcodes play a critical
role, enabling fine-grained comparisons among organisms within the same kingdom
of life. Although machine learning algorithms specifically designed for the
analysis of... | 2023-11-04T13:25:49Z | Main text: 14 pages, Total: 23 pages, 10 figures, formerly accepted
at the 4th Workshop on Self-Supervised Learning: Theory and Practice (NeurIPS
2023) | null | null | null | null | null | null | null | null | null |
2311.02945 | PhoGPT: Generative Pre-training for Vietnamese | ['Dat Quoc Nguyen', 'Linh The Nguyen', 'Chi Tran', 'Dung Ngoc Nguyen', 'Dinh Phung', 'Hung Bui'] | ['cs.CL'] | We open-source a state-of-the-art 4B-parameter generative model series for
Vietnamese, which includes the base pre-trained monolingual model PhoGPT-4B and
its chat variant, PhoGPT-4B-Chat. The base model, PhoGPT-4B, with exactly 3.7B
parameters, is pre-trained from scratch on a Vietnamese corpus of 102B tokens,
with an... | 2023-11-06T08:26:14Z | PhoGPT-4B Technical Report - 5 pages | null | null | null | null | null | null | null | null | null |
2311.03054 | AnyText: Multilingual Visual Text Generation And Editing | ['Yuxiang Tuo', 'Wangmeng Xiang', 'Jun-Yan He', 'Yifeng Geng', 'Xuansong Xie'] | ['cs.CV'] | Diffusion model based Text-to-Image has achieved impressive achievements
recently. Although current technology for synthesizing images is highly
advanced and capable of generating images with high fidelity, it is still
possible to give the show away when focusing on the text area in the generated
image. To address this... | 2023-11-06T12:10:43Z | null | null | null | null | null | null | null | null | null | null |
2311.03057 | GLEN: Generative Retrieval via Lexical Index Learning | ['Sunkyung Lee', 'Minjin Choi', 'Jongwuk Lee'] | ['cs.IR', 'cs.CL'] | Generative retrieval shed light on a new paradigm of document retrieval,
aiming to directly generate the identifier of a relevant document for a query.
While it takes advantage of bypassing the construction of auxiliary index
structures, existing studies face two significant challenges: (i) the
discrepancy between the ... | 2023-11-06T12:35:06Z | In Proceedings of the 2023 Conference on Empirical Methods in Natural
Language Processing (EMNLP 2023) main conference. 12 pages, 2 figures, 8
tables | null | null | GLEN: Generative Retrieval via Lexical Index Learning | ['Sunkyung Lee', 'Minjin Choi', 'Jongwuk Lee'] | 2023 | Conference on Empirical Methods in Natural Language Processing | 12 | 37 | ['Computer Science'] |
2311.03079 | CogVLM: Visual Expert for Pretrained Language Models | ['Weihan Wang', 'Qingsong Lv', 'Wenmeng Yu', 'Wenyi Hong', 'Ji Qi', 'Yan Wang', 'Junhui Ji', 'Zhuoyi Yang', 'Lei Zhao', 'Xixuan Song', 'Jiazheng Xu', 'Bin Xu', 'Juanzi Li', 'Yuxiao Dong', 'Ming Ding', 'Jie Tang'] | ['cs.CV'] | We introduce CogVLM, a powerful open-source visual language foundation model.
Different from the popular shallow alignment method which maps image features
into the input space of language model, CogVLM bridges the gap between the
frozen pretrained language model and image encoder by a trainable visual expert
module in... | 2023-11-06T13:04:39Z | null | null | null | null | null | null | null | null | null | null |
2311.03099 | Language Models are Super Mario: Absorbing Abilities from Homologous
Models as a Free Lunch | ['Le Yu', 'Bowen Yu', 'Haiyang Yu', 'Fei Huang', 'Yongbin Li'] | ['cs.CL', 'cs.LG'] | In this paper, we unveil that Language Models (LMs) can acquire new
capabilities by assimilating parameters from homologous models without
retraining or GPUs. We first introduce DARE to set most delta parameters (i.e.,
the disparity between fine-tuned and pre-trained parameters) to zeros without
affecting the abilities... | 2023-11-06T13:43:07Z | Accepted at ICML 2024 | null | null | null | null | null | null | null | null | null |
2311.03226 | LDM3D-VR: Latent Diffusion Model for 3D VR | ['Gabriela Ben Melech Stan', 'Diana Wofk', 'Estelle Aflalo', 'Shao-Yen Tseng', 'Zhipeng Cai', 'Michael Paulitsch', 'Vasudev Lal'] | ['cs.CV', 'cs.AI'] | Latent diffusion models have proven to be state-of-the-art in the creation
and manipulation of visual outputs. However, as far as we know, the generation
of depth maps jointly with RGB is still limited. We introduce LDM3D-VR, a suite
of diffusion models targeting virtual reality development that includes
LDM3D-pano and... | 2023-11-06T16:12:10Z | Accepted to Workshop on Diffusion Models, NeurIPS 2023 | null | null | LDM3D-VR: Latent Diffusion Model for 3D VR | ['Gabriela Ben Melech Stan', 'Diana Wofk', 'Estelle Aflalo', 'Shao-Yen Tseng', 'Z. Cai', 'Michael Paulitsch', 'Vasudev Lal'] | 2023 | arXiv.org | 8 | 46 | ['Computer Science'] |
2311.03228 | An Efficient Self-Supervised Cross-View Training For Sentence Embedding | ['Peerat Limkonchotiwat', 'Wuttikorn Ponwitayarat', 'Lalita Lowphansirikul', 'Can Udomcharoenchaikit', 'Ekapol Chuangsuwanich', 'Sarana Nutanong'] | ['cs.CL', 'cs.AI'] | Self-supervised sentence representation learning is the task of constructing
an embedding space for sentences without relying on human annotation efforts.
One straightforward approach is to finetune a pretrained language model (PLM)
with a representation learning method such as contrastive learning. While this
approach... | 2023-11-06T16:12:25Z | Accepted to TACL. The code and pre-trained models are available at
https://github.com/mrpeerat/SCT | null | null | null | null | null | null | null | null | null |
2311.03243 | Safurai-Csharp: Harnessing Synthetic Data to improve language-specific
Code LLM | ['Davide Cifarelli', 'Leonardo Boiardi', 'Alessandro Puppo', 'Leon Jovanovic'] | ['cs.CL'] | This paper introduces Safurai-Csharp, an open-source model designed to
specialize in the generation, completion, and debugging of C# code.
Safurai-Csharp is built upon the novel CodeLlama 34B model and leverages the
EvolInstruct technique, creating a refined and expanded dataset for its
fine-tuning process. The results... | 2023-11-06T16:31:48Z | null | null | null | null | null | null | null | null | null | null |
2311.03301 | Ziya2: Data-centric Learning is All LLMs Need | ['Ruyi Gan', 'Ziwei Wu', 'Renliang Sun', 'Junyu Lu', 'Xiaojun Wu', 'Dixiang Zhang', 'Kunhao Pan', 'Junqing He', 'Yuanhe Tian', 'Ping Yang', 'Qi Yang', 'Hao Wang', 'Jiaxing Zhang', 'Yan Song'] | ['cs.CL'] | Various large language models (LLMs) have been proposed in recent years,
including closed- and open-source ones, continually setting new records on
multiple benchmarks. However, the development of LLMs still faces several
issues, such as high cost of training models from scratch, and continual
pre-training leading to c... | 2023-11-06T17:49:34Z | null | null | Ziya2: Data-centric Learning is All LLMs Need | ['Ruyi Gan', 'Ziwei Wu', 'Renliang Sun', 'Junyu Lu', 'Xiaojun Wu', 'Di Zhang', 'Kunhao Pan', 'Ping Yang', 'Qi Yang', 'Jiaxing Zhang', 'Yan Song'] | 2023 | arXiv.org | 19 | 69 | ['Computer Science'] |
2311.03356 | GLaMM: Pixel Grounding Large Multimodal Model | ['Hanoona Rasheed', 'Muhammad Maaz', 'Sahal Shaji Mullappilly', 'Abdelrahman Shaker', 'Salman Khan', 'Hisham Cholakkal', 'Rao M. Anwer', 'Eric Xing', 'Ming-Hsuan Yang', 'Fahad S. Khan'] | ['cs.CV', 'cs.AI'] | Large Multimodal Models (LMMs) extend Large Language Models to the vision
domain. Initial LMMs used holistic images and text prompts to generate
ungrounded textual responses. Recently, region-level LMMs have been used to
generate visually grounded responses. However, they are limited to only
referring to a single objec... | 2023-11-06T18:59:57Z | CVPR 2024 | null | null | null | null | null | null | null | null | null |
2311.03764 | Neuro-GPT: Towards A Foundation Model for EEG | ['Wenhui Cui', 'Woojae Jeong', 'Philipp Thölke', 'Takfarinas Medani', 'Karim Jerbi', 'Anand A. Joshi', 'Richard M. Leahy'] | ['cs.LG', 'eess.SP'] | To handle the scarcity and heterogeneity of electroencephalography (EEG) data
for Brain-Computer Interface (BCI) tasks, and to harness the power of large
publicly available data sets, we propose Neuro-GPT, a foundation model
consisting of an EEG encoder and a GPT model. The foundation model is
pre-trained on a large-sc... | 2023-11-07T07:07:18Z | Paper accepted by the 2024 IEEE International Symposium on Biomedical
Imaging (ISBI) | null | null | null | null | null | null | null | null | null |
2311.03812 | Conversations in Galician: a Large Language Model for an
Underrepresented Language | ['Eliseo Bao', 'Anxo Pérez', 'Javier Parapar'] | ['cs.CL'] | The recent proliferation of Large Conversation Language Models has
highlighted the economic significance of widespread access to this type of AI
technologies in the current information age. Nevertheless, prevailing models
have primarily been trained on corpora consisting of documents written in
popular languages. The d... | 2023-11-07T08:52:28Z | 5 pages | null | null | Conversations in Galician: a Large Language Model for an Underrepresented Language | ['Eliseo Bao', 'Anxo Perez', 'Javier Parapar'] | 2023 | arXiv.org | 2 | 7 | ['Computer Science'] |
2311.04145 | I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion
Models | ['Shiwei Zhang', 'Jiayu Wang', 'Yingya Zhang', 'Kang Zhao', 'Hangjie Yuan', 'Zhiwu Qin', 'Xiang Wang', 'Deli Zhao', 'Jingren Zhou'] | ['cs.CV'] | Video synthesis has recently made remarkable strides benefiting from the
rapid development of diffusion models. However, it still encounters challenges
in terms of semantic accuracy, clarity and spatio-temporal continuity. They
primarily arise from the scarcity of well-aligned text-video data and the
complex inherent s... | 2023-11-07T17:16:06Z | Project page: https://i2vgen-xl.github.io | null | null | null | null | null | null | null | null | null |
2311.04155 | Black-Box Prompt Optimization: Aligning Large Language Models without
Model Training | ['Jiale Cheng', 'Xiao Liu', 'Kehan Zheng', 'Pei Ke', 'Hongning Wang', 'Yuxiao Dong', 'Jie Tang', 'Minlie Huang'] | ['cs.CL'] | Large language models (LLMs) have shown impressive success in various
applications. However, these models are often not well aligned with human
intents, which calls for additional treatments on them; that is, the alignment
problem. To make LLMs better follow user instructions, existing alignment
methods primarily focus... | 2023-11-07T17:31:50Z | Accepted to ACL 2024 | null | null | null | null | null | null | null | null | null |
2311.04157 | A Simple Interpretable Transformer for Fine-Grained Image Classification
and Analysis | ['Dipanjyoti Paul', 'Arpita Chowdhury', 'Xinqi Xiong', 'Feng-Ju Chang', 'David Carlyn', 'Samuel Stevens', 'Kaiya L. Provost', 'Anuj Karpatne', 'Bryan Carstens', 'Daniel Rubenstein', 'Charles Stewart', 'Tanya Berger-Wolf', 'Yu Su', 'Wei-Lun Chao'] | ['cs.CV', 'cs.AI'] | We present a novel usage of Transformers to make image classification
interpretable. Unlike mainstream classifiers that wait until the last fully
connected layer to incorporate class information to make predictions, we
investigate a proactive approach, asking each class to search for itself in an
image. We realize this... | 2023-11-07T17:32:55Z | Accepted to International Conference on Learning Representations 2024
(ICLR 2024) | null | null | null | null | null | null | null | null | null |
2311.04257 | mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with
Modality Collaboration | ['Qinghao Ye', 'Haiyang Xu', 'Jiabo Ye', 'Ming Yan', 'Anwen Hu', 'Haowei Liu', 'Qi Qian', 'Ji Zhang', 'Fei Huang', 'Jingren Zhou'] | ['cs.CL', 'cs.CV'] | Multi-modal Large Language Models (MLLMs) have demonstrated impressive
instruction abilities across various open-ended tasks. However, previous
methods primarily focus on enhancing multi-modal capabilities. In this work, we
introduce a versatile multi-modal large language model, mPLUG-Owl2, which
effectively leverages ... | 2023-11-07T14:21:29Z | null | null | null | null | null | null | null | null | null | null |
2311.04335 | Sub-Sentence Encoder: Contrastive Learning of Propositional Semantic
Representations | ['Sihao Chen', 'Hongming Zhang', 'Tong Chen', 'Ben Zhou', 'Wenhao Yu', 'Dian Yu', 'Baolin Peng', 'Hongwei Wang', 'Dan Roth', 'Dong Yu'] | ['cs.CL', 'cs.AI'] | We introduce sub-sentence encoder, a contrastively-learned contextual
embedding model for fine-grained semantic representation of text. In contrast
to the standard practice with sentence embeddings, where the meaning of an
entire sequence of text is encoded into a fixed-length vector, the sub-sentence
encoder learns to... | 2023-11-07T20:38:30Z | null | null | null | null | null | null | null | null | null | null |
2311.04400 | LRM: Large Reconstruction Model for Single Image to 3D | ['Yicong Hong', 'Kai Zhang', 'Jiuxiang Gu', 'Sai Bi', 'Yang Zhou', 'Difan Liu', 'Feng Liu', 'Kalyan Sunkavalli', 'Trung Bui', 'Hao Tan'] | ['cs.CV', 'cs.AI', 'cs.GR', 'cs.LG'] | We propose the first Large Reconstruction Model (LRM) that predicts the 3D
model of an object from a single input image within just 5 seconds. In contrast
to many previous methods that are trained on small-scale datasets such as
ShapeNet in a category-specific fashion, LRM adopts a highly scalable
transformer-based arc... | 2023-11-08T00:03:52Z | ICLR 2024 | null | null | LRM: Large Reconstruction Model for Single Image to 3D | ['Yicong Hong', 'Kai Zhang', 'Jiuxiang Gu', 'Sai Bi', 'Yang Zhou', 'Difan Liu', 'Feng Liu', 'Kalyan Sunkavalli', 'Trung Bui', 'Hao Tan'] | 2023 | International Conference on Learning Representations | 453 | 101 | ['Computer Science'] |
2311.04459 | Improving Pacing in Long-Form Story Planning | ['Yichen Wang', 'Kevin Yang', 'Xiaoming Liu', 'Dan Klein'] | ['cs.CL', 'cs.AI'] | Existing LLM-based systems for writing long-form stories or story outlines
frequently suffer from unnatural pacing, whether glossing over important events
or over-elaborating on insignificant details, resulting in a jarring experience
for the reader. We propose a CONCrete Outline ConTrol (CONCOCT) system to
improve pac... | 2023-11-08T04:58:29Z | EMNLP Findings 2023 | null | null | Improving Pacing in Long-Form Story Planning | ['Yichen Wang', 'Kevin Yang', 'Xiaoming Liu', 'Dan Klein'] | 2023 | Conference on Empirical Methods in Natural Language Processing | 19 | 0 | ['Computer Science'] |
2311.04879 | LongQLoRA: Efficient and Effective Method to Extend Context Length of
Large Language Models | ['Jianxin Yang'] | ['cs.CL', 'cs.AI'] | We present LongQLoRA, an efficient and effective method to extend context
length of large language models with less training resources. LongQLoRA
combines the advantages of Position Interpolation, QLoRA and Shift Short
Attention of LongLoRA. With a single 32GB V100 GPU, LongQLoRA can extend the
context length of LLaMA2... | 2023-11-08T18:33:06Z | null | null | null | null | null | null | null | null | null | null |
2311.05296 | BeLLM: Backward Dependency Enhanced Large Language Model for Sentence
Embeddings | ['Xianming Li', 'Jing Li'] | ['cs.CL'] | Sentence embeddings are crucial in measuring semantic similarity. Most recent
studies employed large language models (LLMs) to learn sentence embeddings.
Existing LLMs mainly adopted autoregressive architecture without explicit
backward dependency modeling. Therefore, we examined the effects of backward
dependencies in... | 2023-11-09T11:53:52Z | Accepted by NAACL24 Main Conference | null | null | null | null | null | null | null | null | null |
2311.05419 | Mirror: A Universal Framework for Various Information Extraction Tasks | ['Tong Zhu', 'Junfei Ren', 'Zijian Yu', 'Mengsong Wu', 'Guoliang Zhang', 'Xiaoye Qu', 'Wenliang Chen', 'Zhefeng Wang', 'Baoxing Huai', 'Min Zhang'] | ['cs.CL', 'cs.AI'] | Sharing knowledge between information extraction tasks has always been a
challenge due to the diverse data formats and task variations. Meanwhile, this
divergence leads to information waste and increases difficulties in building
complex applications in real scenarios. Recent studies often formulate IE tasks
as a triple... | 2023-11-09T14:58:46Z | Accepted to EMNLP23 main conference | null | null | null | null | null | null | null | null | null |
2311.05437 | LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents | ['Shilong Liu', 'Hao Cheng', 'Haotian Liu', 'Hao Zhang', 'Feng Li', 'Tianhe Ren', 'Xueyan Zou', 'Jianwei Yang', 'Hang Su', 'Jun Zhu', 'Lei Zhang', 'Jianfeng Gao', 'Chunyuan Li'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.MM'] | LLaVA-Plus is a general-purpose multimodal assistant that expands the
capabilities of large multimodal models. It maintains a skill repository of
pre-trained vision and vision-language models and can activate relevant tools
based on users' inputs to fulfill real-world tasks. LLaVA-Plus is trained on
multimodal instruct... | 2023-11-09T15:22:26Z | 25 pages, 25M file size. Project Page:
https://llava-vl.github.io/llava-plus/ | null | null | LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents | ['Shilong Liu', 'Hao Cheng', 'Haotian Liu', 'Hao Zhang', 'Feng Li', 'Tianhe Ren', 'Xueyan Zou', 'Jianwei Yang', 'Hang Su', 'Jun-Juan Zhu', 'Lei Zhang', 'Jianfeng Gao', 'Chun-yue Li'] | 2023 | European Conference on Computer Vision | 126 | 52 | ['Computer Science'] |
2311.05481 | META4: Semantically-Aligned Generation of Metaphoric Gestures Using
Self-Supervised Text and Speech Representation | ['Mireille Fares', 'Catherine Pelachaud', 'Nicolas Obin'] | ['cs.AI'] | Image Schemas are repetitive cognitive patterns that influence the way we
conceptualize and reason about various concepts present in speech. These
patterns are deeply embedded within our cognitive processes and are reflected
in our bodily expressions including gestures. Particularly, metaphoric gestures
possess essenti... | 2023-11-09T16:16:31Z | null | null | null | null | null | null | null | null | null | null |
2311.05556 | LCM-LoRA: A Universal Stable-Diffusion Acceleration Module | ['Simian Luo', 'Yiqin Tan', 'Suraj Patil', 'Daniel Gu', 'Patrick von Platen', 'Apolinário Passos', 'Longbo Huang', 'Jian Li', 'Hang Zhao'] | ['cs.CV', 'cs.LG'] | Latent Consistency Models (LCMs) have achieved impressive performance in
accelerating text-to-image generative tasks, producing high-quality images with
minimal inference steps. LCMs are distilled from pre-trained latent diffusion
models (LDMs), requiring only ~32 A100 GPU training hours. This report further
extends LC... | 2023-11-09T18:04:15Z | Technical Report | null | null | null | null | null | null | null | null | null |
2311.05613 | Window Attention is Bugged: How not to Interpolate Position Embeddings | ['Daniel Bolya', 'Chaitanya Ryali', 'Judy Hoffman', 'Christoph Feichtenhofer'] | ['cs.CV'] | Window attention, position embeddings, and high resolution finetuning are
core concepts in the modern transformer era of computer vision. However, we
find that naively combining these near ubiquitous components can have a
detrimental effect on performance. The issue is simple: interpolating position
embeddings while us... | 2023-11-09T18:59:58Z | Preprint. Code release will be coming in the future | null | null | null | null | null | null | null | null | null |
2311.05657 | Agent Lumos: Unified and Modular Training for Open-Source Language
Agents | ['Da Yin', 'Faeze Brahman', 'Abhilasha Ravichander', 'Khyathi Chandu', 'Kai-Wei Chang', 'Yejin Choi', 'Bill Yuchen Lin'] | ['cs.AI', 'cs.CL', 'cs.LG'] | Closed-source agents suffer from several issues such as a lack of
affordability, transparency, and reproducibility, particularly on complex
interactive tasks. This motivates the development of open-source alternatives.
We introduce LUMOS, one of the first frameworks for training open-source
LLM-based agents. LUMOS feat... | 2023-11-09T00:30:13Z | Accepted to ACL 2024 Main Conference; Camera Ready. Project website:
https://allenai.github.io/lumos/ | null | null | null | null | null | null | null | null | null |
2311.05741 | Efficiently Adapting Pretrained Language Models To New Languages | ['Zoltan Csaki', 'Pian Pawakapan', 'Urmish Thakker', 'Qiantong Xu'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Recent large language models (LLM) exhibit sub-optimal performance on
low-resource languages, as the training data of these models is usually
dominated by English and other high-resource languages. Furthermore, it is
challenging to train models for low-resource languages, especially from
scratch, due to a lack of high ... | 2023-11-09T20:59:08Z | Accepted to "The third Neurips Workshop on Efficient Natural Language
and Speech Processing 2023" (ENLSP-III) | null | null | Efficiently Adapting Pretrained Language Models To New Languages | ['Zoltan Csaki', 'Pian Pawakapan', 'Urmish Thakker', 'Qiantong Xu'] | 2023 | arXiv.org | 18 | 77 | ['Computer Science'] |
2311.05845 | Tamil-Llama: A New Tamil Language Model Based on Llama 2 | ['Abhinand Balachandran'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Language modeling has witnessed remarkable advancements in recent years, with
Large Language Models (LLMs) like ChatGPT setting unparalleled benchmarks in
human-like text generation. However, a prevailing limitation is the
underrepresentation of languages like Tamil in these cutting-edge models,
leading to suboptimal p... | 2023-11-10T03:02:39Z | 19 pages, 10 figures | null | null | null | null | null | null | null | null | null |
2311.05908 | FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor
Cores | ['Daniel Y. Fu', 'Hermann Kumbong', 'Eric Nguyen', 'Christopher Ré'] | ['cs.LG'] | Convolution models with long filters have demonstrated state-of-the-art
reasoning abilities in many long-sequence tasks but lag behind the most
optimized Transformers in wall-clock time. A major bottleneck is the Fast
Fourier Transform (FFT)--which allows long convolutions to run in $O(N \log N)$
time in sequence length ... | 2023-11-10T07:33:35Z | null | null | FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores | ['Daniel Y. Fu', 'Hermann Kumbong', 'Eric N. D. Nguyen', 'Christopher Ré'] | 2023 | International Conference on Learning Representations | 30 | 114 | ['Computer Science'] |
2311.06025 | ChiMed-GPT: A Chinese Medical Large Language Model with Full Training
Regime and Better Alignment to Human Preferences | ['Yuanhe Tian', 'Ruyi Gan', 'Yan Song', 'Jiaxing Zhang', 'Yongdong Zhang'] | ['cs.CL'] | Recently, the increasing demand for superior medical services has highlighted
the discrepancies in the medical infrastructure. With big data, especially
texts, forming the foundation of medical services, there is an exigent need for
effective natural language processing (NLP) solutions tailored to the
healthcare domain... | 2023-11-10T12:25:32Z | 18 pages, 3 figures; Accepted by ACL-2024 | null | null | null | null | null | null | null | null | null |
2311.06158 | Language Models can be Logical Solvers | ['Jiazhan Feng', 'Ruochen Xu', 'Junheng Hao', 'Hiteshi Sharma', 'Yelong Shen', 'Dongyan Zhao', 'Weizhu Chen'] | ['cs.CL', 'cs.AI'] | Logical reasoning is a fundamental aspect of human intelligence and a key
component of tasks like problem-solving and decision-making. Recent
advancements have enabled Large Language Models (LLMs) to potentially exhibit
reasoning capabilities, but complex logical reasoning remains a challenge. The
state-of-the-art, sol... | 2023-11-10T16:23:50Z | Preprint | null | null | null | null | null | null | null | null | null |
2311.06242 | Florence-2: Advancing a Unified Representation for a Variety of Vision
Tasks | ['Bin Xiao', 'Haiping Wu', 'Weijian Xu', 'Xiyang Dai', 'Houdong Hu', 'Yumao Lu', 'Michael Zeng', 'Ce Liu', 'Lu Yuan'] | ['cs.CV'] | We introduce Florence-2, a novel vision foundation model with a unified,
prompt-based representation for a variety of computer vision and
vision-language tasks. While existing large vision models excel in transfer
learning, they struggle to perform a diversity of tasks with simple
instructions, a capability that implie... | 2023-11-10T18:59:08Z | null | null | null | null | null | null | null | null | null | null |
2311.06310 | Labor Space: A Unifying Representation of the Labor Market via Large
Language Models | ['Seongwoon Kim', 'Yong-Yeol Ahn', 'Jaehyuk Park'] | ['physics.soc-ph', 'cs.AI'] | The labor market is a complex ecosystem comprising diverse, interconnected
entities, such as industries, occupations, skills, and firms. Due to the lack
of a systematic method to map these heterogeneous entities together, each
entity has been analyzed in isolation or only through pairwise relationships,
inhibiting comp... | 2023-11-09T06:41:10Z | 11 pages, 5 figures | null | 10.1145/3589334.3645464 | null | null | null | null | null | null | null |
2311.06364 | Relation Extraction in underexplored biomedical domains: A
diversity-optimised sampling and synthetic data generation approach | ['Maxime Delmas', 'Magdalena Wysocka', 'André Freitas'] | ['cs.CL'] | The sparsity of labelled data is an obstacle to the development of Relation
Extraction models and the completion of databases in various biomedical areas.
While being of high interest in drug-discovery, the natural-products
literature, reporting the identification of potential bioactive compounds from
organisms, is a c... | 2023-11-10T19:36:00Z | null | null | null | null | null | null | null | null | null | null |
2311.06607 | Monkey: Image Resolution and Text Label Are Important Things for Large
Multi-modal Models | ['Zhang Li', 'Biao Yang', 'Qiang Liu', 'Zhiyin Ma', 'Shuo Zhang', 'Jingxu Yang', 'Yabo Sun', 'Yuliang Liu', 'Xiang Bai'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Large Multimodal Models (LMMs) have shown promise in vision-language tasks
but struggle with high-resolution input and detailed scene understanding.
Addressing these challenges, we introduce Monkey to enhance LMM capabilities.
Firstly, Monkey processes input images by dividing them into uniform patches,
each matching t... | 2023-11-11T16:37:41Z | CVPR 2024 Highlight | null | null | null | null | null | null | null | null | null |
2311.06708 | ReactionT5: a large-scale pre-trained model towards application of
limited reaction data | ['Tatsuya Sagawa', 'Ryosuke Kojima'] | ['physics.chem-ph', 'cs.LG'] | Transformer-based deep neural networks have revolutionized the field of
molecular-related prediction tasks by treating molecules as symbolic sequences.
These models have been successfully applied in various organic chemical
applications by pretraining them with extensive compound libraries and
subsequently fine-tuning ... | 2023-11-12T02:25:00Z | null | null | null | null | null | null | null | null | null | null |
2311.06720 | Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small
Scorer | ['Bowen Tan', 'Yun Zhu', 'Lijuan Liu', 'Eric Xing', 'Zhiting Hu', 'Jindong Chen'] | ['cs.LG', 'cs.CL'] | Large language models (LLMs) such as T0, FLAN, and OPT-IML, excel in
multi-tasking under a unified instruction-following paradigm, where they also
exhibit remarkable generalization abilities to unseen tasks. Despite their
impressive performance, these LLMs, with sizes ranging from several billion to
hundreds of billion... | 2023-11-12T03:25:34Z | In proceedings of NeurIPS 2023; Code and model available at
https://github.com/tanyuqian/cappy and
https://huggingface.co/btan2/cappy-large, respectively | null | null | null | null | null | null | null | null | null |
2311.06783 | Q-Instruct: Improving Low-level Visual Abilities for Multi-modality
Foundation Models | ['Haoning Wu', 'Zicheng Zhang', 'Erli Zhang', 'Chaofeng Chen', 'Liang Liao', 'Annan Wang', 'Kaixin Xu', 'Chunyi Li', 'Jingwen Hou', 'Guangtao Zhai', 'Geng Xue', 'Wenxiu Sun', 'Qiong Yan', 'Weisi Lin'] | ['cs.CV', 'cs.MM'] | Multi-modality foundation models, as represented by GPT-4V, have brought a
new paradigm for low-level visual perception and understanding tasks, that can
respond to a broad range of natural human instructions in a model. While
existing foundation models have shown exciting potentials on low-level visual
tasks, their re... | 2023-11-12T09:10:51Z | 16 pages, 11 figures, page 12-16 as appendix | null | null | null | null | null | null | null | null | null |
2311.06838 | GIELLM: Japanese General Information Extraction Large Language Model
Utilizing Mutual Reinforcement Effect | ['Chengguang Gan', 'Qinghao Zhang', 'Tatsunori Mori'] | ['cs.CL'] | Information Extraction (IE) stands as a cornerstone in natural language
processing, traditionally segmented into distinct sub-tasks. The advent of
Large Language Models (LLMs) heralds a paradigm shift, suggesting the
feasibility of a singular model addressing multiple IE subtasks. In this vein,
we introduce the General... | 2023-11-12T13:30:38Z | 10 pages, 6 figures | null | null | GIELLM: Japanese General Information Extraction Large Language Model Utilizing Mutual Reinforcement Effect | ['Chengguang Gan', 'Qinghao Zhang', 'Tatsunori Mori'] | 2023 | arXiv.org | 7 | 28 | ['Computer Science'] |
2311.06899 | Flames: Benchmarking Value Alignment of LLMs in Chinese | ['Kexin Huang', 'Xiangyang Liu', 'Qianyu Guo', 'Tianxiang Sun', 'Jiawei Sun', 'Yaru Wang', 'Zeyang Zhou', 'Yixu Wang', 'Yan Teng', 'Xipeng Qiu', 'Yingchun Wang', 'Dahua Lin'] | ['cs.CL', 'cs.AI'] | The widespread adoption of large language models (LLMs) across various
regions underscores the urgent need to evaluate their alignment with human
values. Current benchmarks, however, fall short of effectively uncovering
safety vulnerabilities in LLMs. Despite numerous models achieving high scores
and 'topping the chart... | 2023-11-12T17:18:21Z | Accepted to the NAACL 2024 | null | null | null | null | null | null | null | null | null |
2311.07052 | Towards the Law of Capacity Gap in Distilling Language Models | ['Chen Zhang', 'Dawei Song', 'Zheyu Ye', 'Yan Gao'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Language model (LM) distillation is a trending area that aims to distil the
knowledge residing in a large teacher LM to a small student one. While various
methods have been proposed to maximize the effectiveness of the distillation,
significant challenges persist, particularly when there is a substantial
capacity gap b... | 2023-11-13T03:36:18Z | 32 pages, 10 figures, 15 tables, work in progress. Code and
checkpoints are available at https://github.com/GeneZC/MiniMA | null | null | null | null | null | null | null | null | null |
2311.07171 | calamanCy: A Tagalog Natural Language Processing Toolkit | ['Lester James V. Miranda'] | ['cs.CL'] | We introduce calamanCy, an open-source toolkit for constructing natural
language processing (NLP) pipelines for Tagalog. It is built on top of spaCy,
enabling easy experimentation and integration with other frameworks. calamanCy
addresses the development gap by providing a consistent API for building NLP
applications a... | 2023-11-13T09:06:43Z | To be published in The Third Workshop for NLP-OSS at EMNLP 2023 | null | null | null | null | null | null | null | null | null |
2311.07362 | Volcano: Mitigating Multimodal Hallucination through Self-Feedback
Guided Revision | ['Seongyun Lee', 'Sue Hyun Park', 'Yongrae Jo', 'Minjoon Seo'] | ['cs.CL', 'cs.CV'] | Large multimodal models suffer from multimodal hallucination, where they
provide incorrect responses misaligned with the given visual information.
Recent works have conjectured that one of the reasons behind multimodal
hallucination is due to the vision encoder failing to ground on the image
properly. To mitigate this ... | 2023-11-13T14:26:24Z | null | null | null | Volcano: Mitigating Multimodal Hallucination through Self-Feedback Guided Revision | ['Seongyun Lee', 'Sue Hyun Park', 'Yongrae Jo', 'Minjoon Seo'] | 2023 | North American Chapter of the Association for Computational Linguistics | 62 | 46 | ['Computer Science'] |
2311.07575 | SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for
Multi-modal Large Language Models | ['Ziyi Lin', 'Chris Liu', 'Renrui Zhang', 'Peng Gao', 'Longtian Qiu', 'Han Xiao', 'Han Qiu', 'Chen Lin', 'Wenqi Shao', 'Keqin Chen', 'Jiaming Han', 'Siyuan Huang', 'Yichi Zhang', 'Xuming He', 'Hongsheng Li', 'Yu Qiao'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG'] | We present SPHINX, a versatile multi-modal large language model (MLLM) with a
joint mixing of model weights, tuning tasks, and visual embeddings. First, for
stronger vision-language alignment, we unfreeze the large language model (LLM)
during pre-training, and introduce a weight mix strategy between LLMs trained
by rea... | 2023-11-13T18:59:47Z | Work in progress. Code and demos are released at
https://github.com/Alpha-VLLM/LLaMA2-Accessory | null | null | null | null | null | null | null | null | null |
2311.07590 | Large Language Models can Strategically Deceive their Users when Put
Under Pressure | ['Jérémy Scheurer', 'Mikita Balesni', 'Marius Hobbhahn'] | ['cs.CL', 'cs.AI', 'cs.LG'] | We demonstrate a situation in which Large Language Models, trained to be
helpful, harmless, and honest, can display misaligned behavior and
strategically deceive their users about this behavior without being instructed
to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated
environment, where it ass... | 2023-11-09T17:12:44Z | null | null | null | null | null | null | null | null | null | null |
2311.07598 | Multi-Label Topic Model for Financial Textual Data | ['Moritz Scherrmann'] | ['q-fin.ST', 'cs.CL', 'cs.LG'] | This paper presents a multi-label topic model for financial texts like ad-hoc
announcements, 8-K filings, finance related news or annual reports. I train the
model on a new financial multi-label database consisting of 3,044 German ad-hoc
announcements that are labeled manually using 20 predefined, economically
motivate... | 2023-11-10T12:56:07Z | null | null | null | Multi-Label Topic Model for Financial Textual Data | ['Moritz Scherrmann'] | 2023 | arXiv.org | 1 | 27 | ['Economics', 'Computer Science'] |
2311.07767 | GreekT5: A Series of Greek Sequence-to-Sequence Models for News
Summarization | ['Nikolaos Giarelis', 'Charalampos Mastrokostas', 'Nikos Karacapilidis'] | ['cs.CL', 'cs.AI', '68T07, 68T50', 'I.2.7'] | Text summarization (TS) is a natural language processing (NLP) subtask
pertaining to the automatic formulation of a concise and coherent summary that
covers the major concepts and topics from one or multiple documents. Recent
advancements in deep learning have led to the development of abstractive
summarization transfo... | 2023-11-13T21:33:12Z | 26 pages, 0 figures | null | null | GreekT5: A Series of Greek Sequence-to-Sequence Models for News Summarization | ['Nikolaos Giarelis', 'Charalampos Mastrokostas', 'N. Karacapilidis'] | 2023 | arXiv.org | 3 | 29 | ['Computer Science'] |
2311.07816 | Leveraging Large Language Models to Detect Influence Campaigns in Social
Media | ['Luca Luceri', 'Eric Boniardi', 'Emilio Ferrara'] | ['cs.SI', 'cs.AI'] | Social media influence campaigns pose significant challenges to public
discourse and democracy. Traditional detection methods fall short due to the
complexity and dynamic nature of social media. Addressing this, we propose a
novel detection method using Large Language Models (LLMs) that incorporates
both user metadata ... | 2023-11-14T00:25:09Z | null | null | null | null | null | null | null | null | null | null |
2311.07911 | Instruction-Following Evaluation for Large Language Models | ['Jeffrey Zhou', 'Tianjian Lu', 'Swaroop Mishra', 'Siddhartha Brahma', 'Sujoy Basu', 'Yi Luan', 'Denny Zhou', 'Le Hou'] | ['cs.CL', 'cs.AI', 'cs.LG', '68T50 (Primary) 68T99 (Secondary)', 'I.2.7'] | One core capability of Large Language Models (LLMs) is to follow natural
language instructions. However, the evaluation of such abilities is not
standardized: Human evaluations are expensive, slow, and not objectively
reproducible, while LLM-based auto-evaluation is potentially biased or limited
by the ability of the e... | 2023-11-14T05:13:55Z | null | null | null | null | null | null | null | null | null | null |
2311.07919 | Qwen-Audio: Advancing Universal Audio Understanding via Unified
Large-Scale Audio-Language Models | ['Yunfei Chu', 'Jin Xu', 'Xiaohuan Zhou', 'Qian Yang', 'Shiliang Zhang', 'Zhijie Yan', 'Chang Zhou', 'Jingren Zhou'] | ['eess.AS', 'cs.CL', 'cs.LG'] | Recently, instruction-following audio-language models have received broad
attention for audio interaction with humans. However, the absence of
pre-trained audio models capable of handling diverse audio types and tasks has
hindered progress in this field. Consequently, most existing works have only
been able to support ... | 2023-11-14T05:34:50Z | The code, checkpoints and demo are released at
https://github.com/QwenLM/Qwen-Audio | null | null | Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models | ['Yunfei Chu', 'Jin Xu', 'Xiaohuan Zhou', 'Qian Yang', 'Shiliang Zhang', 'Zhijie Yan', 'Chang Zhou', 'Jingren Zhou'] | 2023 | arXiv.org | 351 | 63 | ['Computer Science', 'Engineering'] |