| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.16184 | SAM-Med2D | ['Junlong Cheng', 'Jin Ye', 'Zhongying Deng', 'Jianpin Chen', 'Tianbin Li', 'Haoyu Wang', 'Yanzhou Su', 'Ziyan Huang', 'Jilong Chen', 'Lei Jiang', 'Hui Sun', 'Junjun He', 'Shaoting Zhang', 'Min Zhu', 'Yu Qiao'] | ['cs.CV'] | The Segment Anything Model (SAM) represents a state-of-the-art research advancement in natural image segmentation, achieving impressive results with input prompts such as points and bounding boxes. However, our evaluation and recent research indicate that directly applying the pretrained SAM to medical image segmentati... | 2023-08-30T17:59:02Z | null | null | null | null | null | null | null | null | null | null |
2308.16361 | Large Language Models as Data Preprocessors | ['Haochen Zhang', 'Yuyang Dong', 'Chuan Xiao', 'Masafumi Oyamada'] | ['cs.AI', 'cs.DB'] | Large Language Models (LLMs), typified by OpenAI's GPT, have marked a significant advancement in artificial intelligence. Trained on vast amounts of text data, LLMs are capable of understanding and generating human-like text across a diverse range of topics. This study expands on the applications of LLMs, exploring the... | 2023-08-30T23:28:43Z | TaDA 2024 (workshop in conjunction with VLDB 2024) | null | null | null | null | null | null | null | null | null |
2308.16512 | MVDream: Multi-view Diffusion for 3D Generation | ['Yichun Shi', 'Peng Wang', 'Jianglong Ye', 'Mai Long', 'Kejie Li', 'Xiao Yang'] | ['cs.CV'] | We introduce MVDream, a diffusion model that is able to generate consistent multi-view images from a given text prompt. Learning from both 2D and 3D data, a multi-view diffusion model can achieve the generalizability of 2D diffusion models and the consistency of 3D renderings. We demonstrate that such a multi-view diff... | 2023-08-31T07:49:06Z | Reorganized for arXiv; Our project page is https://MV-Dream.github.io | null | null | MVDream: Multi-view Diffusion for 3D Generation | ['Yichun Shi', 'Peng Wang', 'Jianglong Ye', 'Mai Long', 'Kejie Li', 'X. Yang'] | 2023 | International Conference on Learning Representations | 631 | 55 | ['Computer Science'] |
2308.16687 | DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew | ['Shaltiel Shmidman', 'Avi Shmidman', 'Moshe Koppel'] | ['cs.CL'] | We present DictaBERT, a new state-of-the-art pre-trained BERT model for modern Hebrew, outperforming existing models on most benchmarks. Additionally, we release three fine-tuned versions of the model, designed to perform three specific foundational tasks in the analysis of Hebrew texts: prefix segmentation, morphologi... | 2023-08-31T12:43:18Z | Updated second version, with links to two question-answering models | null | null | null | null | null | null | null | null | null |
2308.16692 | SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language Models | ['Xin Zhang', 'Dong Zhang', 'Shimin Li', 'Yaqian Zhou', 'Xipeng Qiu'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Current speech large language models build upon discrete speech representations, which can be categorized into semantic tokens and acoustic tokens. However, existing speech tokens are not specifically designed for speech language modeling. To assess the suitability of speech tokens for building speech language models, ... | 2023-08-31T12:53:09Z | Accepted by ICLR 2024. Project page is at https://0nutation.github.io/SpeechTokenizer.github.io/ | null | null | null | null | null | null | null | null | null |
2308.16884 | The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants | ['Lucas Bandarkar', 'Davis Liang', 'Benjamin Muller', 'Mikel Artetxe', 'Satya Narayan Shukla', 'Donald Husa', 'Naman Goyal', 'Abhinandan Krishnan', 'Luke Zettlemoyer', 'Madian Khabsa'] | ['cs.CL', 'cs.AI', 'cs.LG', 'I.2.7'] | We present Belebele, a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. Significantly expanding the language coverage of natural language understanding (NLU) benchmarks, this dataset enables the evaluation of text models in high-, medium-, and low-resource languages. Each ques... | 2023-08-31T17:43:08Z | ACL 2024 | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics 749-775 2024 | 10.18653/v1/2024.acl-long.44 | The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants | ['Lucas Bandarkar', 'Davis Liang', 'Benjamin Muller', 'Mikel Artetxe', 'Satya Narayan Shukla', 'Don Husa', 'Naman Goyal', 'Abhinandan Krishnan', 'Luke Zettlemoyer', 'Madian Khabsa'] | 2023 | Annual Meeting of the Association for Computational Linguistics | 157 | 91 | ['Computer Science'] |
2308.16911 | PointLLM: Empowering Large Language Models to Understand Point Clouds | ['Runsen Xu', 'Xiaolong Wang', 'Tai Wang', 'Yilun Chen', 'Jiangmiao Pang', 'Dahua Lin'] | ['cs.CV', 'cs.AI', 'cs.CL'] | The unprecedented advancements in Large Language Models (LLMs) have shown a profound impact on natural language processing but are yet to fully embrace the realm of 3D understanding. This paper introduces PointLLM, a preliminary effort to fill this gap, enabling LLMs to understand point clouds and offering a new avenue... | 2023-08-31T17:59:46Z | ECCV 2024 Oral Camera Ready. This version includes clearer writing and additional experimental results compared to previous versions. Project page: https://runsenxu.com/projects/PointLLM | null | null | null | null | null | null | null | null | null |
2309.00071 | YaRN: Efficient Context Window Extension of Large Language Models | ['Bowen Peng', 'Jeffrey Quesnelle', 'Honglu Fan', 'Enrico Shippole'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the conte... | 2023-08-31T18:18:07Z | null | null | null | null | null | null | null | null | null | null |
2309.00237 | Publicly Shareable Clinical Large Language Model Built on Synthetic Clinical Notes | ['Sunjun Kweon', 'Junu Kim', 'Jiyoun Kim', 'Sujeong Im', 'Eunbyeol Cho', 'Seongsu Bae', 'Jungwoo Oh', 'Gyubok Lee', 'Jong Hak Moon', 'Seng Chan You', 'Seungjin Baek', 'Chang Hoon Han', 'Yoon Bin Jung', 'Yohan Jo', 'Edward Choi'] | ['cs.CL', 'cs.AI'] | The development of large language models tailored for handling patients' clinical notes is often hindered by the limited accessibility and usability of these notes due to strict privacy regulations. To address these challenges, we first create synthetic large-scale clinical notes using publicly available case reports e... | 2023-09-01T04:01:20Z | ACL 2024 (Findings) | null | null | null | null | null | null | null | null | null |
2309.00359 | Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior | ['Ashmit Khandelwal', 'Aditya Agrawal', 'Aanisha Bhattacharyya', 'Yaman K Singla', 'Somesh Singh', 'Uttaran Bhattacharya', 'Ishita Dasgupta', 'Stefano Petrangeli', 'Rajiv Ratn Shah', 'Changyou Chen', 'Balaji Krishnamurthy'] | ['cs.CL', 'cs.CV'] | Shannon and Weaver's seminal information theory divides communication into three levels: technical, semantic, and effectiveness. While the technical level deals with the accurate reconstruction of transmitted symbols, the semantic and effectiveness levels deal with the inferred meaning and its effect on the receiver. L... | 2023-09-01T09:34:49Z | null | null | null | Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior | ['Ashmit Khandelwal', 'Aditya Agrawal', 'Aanisha Bhattacharyya', 'Yaman Kumar Singla', 'Somesh Singh', 'Uttaran Bhattacharya', 'Ishita Dasgupta', 'Stefano Petrangeli', 'R. Shah', 'Changyou Chen', 'Balaji Krishnamurthy'] | 2023 | International Conference on Learning Representations | 8 | 55 | ['Computer Science'] |
2309.00454 | CoNeTTE: An efficient Audio Captioning system leveraging multiple datasets with Task Embedding | ['Étienne Labbé', 'Thomas Pellegrini', 'Julien Pinquier'] | ['cs.SD', 'eess.AS'] | Automated Audio Captioning (AAC) involves generating natural language descriptions of audio content, using encoder-decoder architectures. An audio encoder produces audio embeddings fed to a decoder, usually a Transformer decoder, for caption generation. In this work, we describe our model, which novelty, compared to ex... | 2023-09-01T13:35:44Z | null | null | null | CoNeTTE: An Efficient Audio Captioning System Leveraging Multiple Datasets With Task Embedding | ['Étienne Labbé', 'Thomas Pellegrini', 'J. Pinquier'] | 2023 | IEEE/ACM Transactions on Audio Speech and Language Processing | 14 | 65 | ['Computer Science', 'Engineering'] |
2309.00610 | CityDreamer: Compositional Generative Model of Unbounded 3D Cities | ['Haozhe Xie', 'Zhaoxi Chen', 'Fangzhou Hong', 'Ziwei Liu'] | ['cs.CV'] | 3D city generation is a desirable yet challenging task, since humans are more sensitive to structural distortions in urban environments. Additionally, generating 3D cities is more complex than 3D natural scenes since buildings, as objects of the same class, exhibit a wider range of appearances compared to the relativel... | 2023-09-01T17:57:02Z | CVPR 2024. Project page: https://haozhexie.com/project/city-dreamer | null | null | CityDreamer: Compositional Generative Model of Unbounded 3D Cities | ['Haozhe Xie', 'Zhaoxi Chen', 'Fangzhou Hong', 'Ziwei Liu'] | 2023 | Computer Vision and Pattern Recognition | 43 | 65 | ['Computer Science'] |
2309.00615 | Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following | ['Ziyu Guo', 'Renrui Zhang', 'Xiangyang Zhu', 'Yiwen Tang', 'Xianzheng Ma', 'Jiaming Han', 'Kexin Chen', 'Peng Gao', 'Xianzhi Li', 'Hongsheng Li', 'Pheng-Ann Heng'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.MM'] | We introduce Point-Bind, a 3D multi-modality model aligning point clouds with 2D image, language, audio, and video. Guided by ImageBind, we construct a joint embedding space between 3D and multi-modalities, enabling many promising applications, e.g., any-to-3D generation, 3D embedding arithmetic, and 3D open-world unde... | 2023-09-01T17:59:47Z | Work in progress. Code is available at https://github.com/ZiyuGuo99/Point-Bind_Point-LLM | null | null | Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following | ['Ziyu Guo', 'Renrui Zhang', 'Xiangyang Zhu', 'Yiwen Tang', 'Xianzheng Ma', 'Jiaming Han', 'Ke Chen', 'Peng Gao', 'Xianzhi Li', 'Hongsheng Li', 'P. Heng'] | 2023 | arXiv.org | 146 | 99 | ['Computer Science'] |
2309.00779 | Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties | ['Taylor Sorensen', 'Liwei Jiang', 'Jena Hwang', 'Sydney Levine', 'Valentina Pyatkin', 'Peter West', 'Nouha Dziri', 'Ximing Lu', 'Kavel Rao', 'Chandra Bhagavatula', 'Maarten Sap', 'John Tasioulas', 'Yejin Choi'] | ['cs.CL', 'cs.AI'] | Human values are crucial to human decision-making. Value pluralism is the view that multiple correct values may be held in tension with one another (e.g., when considering lying to a friend to protect their feelings, how does one balance honesty with friendship?). As statistical learners, AI systems fit to averages by ... | 2023-09-02T01:24:59Z | Proceedings of the AAAI Conference on Artificial Intelligence, 38 | Vol. 38 No. 18: AAAI-24 Technical Tracks 18; 2024; 19937-19947 | 10.1609/aaai.v38i18.29970 | null | null | null | null | null | null | null |
2309.00789 | LinkTransformer: A Unified Package for Record Linkage with Transformer Language Models | ['Abhishek Arora', 'Melissa Dell'] | ['cs.CL'] | Linking information across sources is fundamental to a variety of analyses in social science, business, and government. While large language models (LLMs) offer enormous promise for improving record linkage in noisy datasets, in many domains approximate string matching packages in popular softwares such as R and Stata ... | 2023-09-02T01:45:27Z | null | null | null | null | null | null | null | null | null | null |
2309.00952 | Bridge Diffusion Model: bridge non-English language-native text-to-image diffusion model with English communities | ['Shanyuan Liu', 'Dawei Leng', 'Yuhui Yin'] | ['cs.CL', 'cs.AI'] | Text-to-Image generation (TTI) technologies are advancing rapidly, especially in the English language communities. However, English-native TTI models inherently carry biases from English world centric training data, which creates a dilemma for development of other language-native TTI models. One common choice is fine-t... | 2023-09-02T14:30:56Z | null | null | null | Bridge Diffusion Model: bridge non-English language-native text-to-image diffusion model with English communities | ['Shanyuan Liu', 'Dawei Leng', 'Yuhui Yin'] | 2023 | arXiv.org | 7 | 50 | ['Computer Science'] |
2309.00986 | ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models | ['Chenliang Li', 'Hehong Chen', 'Ming Yan', 'Weizhou Shen', 'Haiyang Xu', 'Zhikai Wu', 'Zhicheng Zhang', 'Wenmeng Zhou', 'Yingda Chen', 'Chen Cheng', 'Hongzhu Shi', 'Ji Zhang', 'Fei Huang', 'Jingren Zhou'] | ['cs.CL'] | Large language models (LLMs) have recently demonstrated remarkable capabilities to comprehend human intentions, engage in reasoning, and design planning-like behavior. To further unleash the power of LLMs to accomplish complex tasks, there is a growing trend to build agent framework that equips LLMs, such as ChatGPT, w... | 2023-09-02T16:50:30Z | null | null | null | ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models | ['Chenliang Li', 'Hehong Chen', 'Mingshi Yan', 'Weizhou Shen', 'Haiyang Xu', 'Zhikai Wu', 'Zhicheng Zhang', 'Wenmeng Zhou', 'Yingda Chen', 'Chen Cheng', 'Hongzhu Shi', 'Ji Zhang', 'Fei Huang', 'Jingren Zhou'] | 2023 | Conference on Empirical Methods in Natural Language Processing | 21 | 25 | ['Computer Science'] |
2309.01246 | Towards Generic Image Manipulation Detection with Weakly-Supervised Self-Consistency Learning | ['Yuanhao Zhai', 'Tianyu Luan', 'David Doermann', 'Junsong Yuan'] | ['cs.CV'] | As advanced image manipulation techniques emerge, detecting the manipulation becomes increasingly important. Despite the success of recent learning-based approaches for image manipulation detection, they typically require expensive pixel-level annotations to train, while exhibiting degraded performance when testing on ... | 2023-09-03T19:19:56Z | Accepted to ICCV 2023, code: https://github.com/yhZhai/WSCL | null | null | null | null | null | null | null | null | null |
2309.01270 | COMEDIAN: Self-Supervised Learning and Knowledge Distillation for Action Spotting using Transformers | ['Julien Denize', 'Mykola Liashuha', 'Jaonary Rabarisoa', 'Astrid Orcesi', 'Romain Hérault'] | ['cs.CV', 'cs.AI', 'cs.LG'] | We present COMEDIAN, a novel pipeline to initialize spatiotemporal transformers for action spotting, which involves self-supervised learning and knowledge distillation. Action spotting is a timestamp-level temporal action detection task. Our pipeline consists of three steps, with two initialization stages. First, we pe... | 2023-09-03T20:50:53Z | Source code is available here: https://github.com/juliendenize/eztorch | null | null | COMEDIAN: Self-Supervised Learning and Knowledge Distillation for Action Spotting Using Transformers | ['J. Denize', 'Mykola Liashuha', 'Jaonary Rabarisoa', 'Astrid Orcesi', "Romain H'erault"] | 2023 | 2024 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW) | 13 | 66 | ['Computer Science'] |
2309.01859 | NLLB-CLIP -- train performant multilingual image retrieval model on a budget | ['Alexander Visheratin'] | ['cs.CV'] | Today, the exponential rise of large models developed by academic and industrial institutions with the help of massive computing resources raises the question of whether someone without access to such resources can make a valuable scientific contribution. To explore this, we tried to solve the challenging task of multi... | 2023-09-04T23:26:11Z | null | null | null | null | null | null | null | null | null | null |
2309.01952 | Deep Imitation Learning for Humanoid Loco-manipulation through Human Teleoperation | ['Mingyo Seo', 'Steve Han', 'Kyutae Sim', 'Seung Hyeon Bang', 'Carlos Gonzalez', 'Luis Sentis', 'Yuke Zhu'] | ['cs.RO'] | We tackle the problem of developing humanoid loco-manipulation skills with deep imitation learning. The difficulty of collecting task demonstrations and training policies for humanoids with a high degree of freedom presents substantial challenges. We introduce TRILL, a data-efficient framework for training humanoid loc... | 2023-09-05T05:05:05Z | Accepted to Humanoids 2023 | null | null | Deep Imitation Learning for Humanoid Loco-manipulation Through Human Teleoperation | ['Mingyo Seo', 'Steve Han', 'Kyutae Sim', 'S. Bang', 'Carlos Gonzalez', 'Luis Sentis', 'Yuke Zhu'] | 2023 | IEEE-RAS International Conference on Humanoid Robots | 57 | 45 | ['Computer Science'] |
2309.02033 | Data-Juicer: A One-Stop Data Processing System for Large Language Models | ['Daoyuan Chen', 'Yilun Huang', 'Zhijian Ma', 'Hesen Chen', 'Xuchen Pan', 'Ce Ge', 'Dawei Gao', 'Yuexiang Xie', 'Zhaoyang Liu', 'Jinyang Gao', 'Yaliang Li', 'Bolin Ding', 'Jingren Zhou'] | ['cs.LG', 'cs.DB', 'cs.DC'] | The immense evolution in Large Language Models (LLMs) has underscored the importance of massive, heterogeneous, and high-quality data. A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance. Existing open-source tools for LLM data processing are mostly... | 2023-09-05T08:22:07Z | 20 Pages, 10 figures, 9 tables. The system, data recipes, and demos are continuously maintained at https://github.com/alibaba/data-juicer | null | null | null | null | null | null | null | null | null |
2309.02119 | Hierarchical Masked 3D Diffusion Model for Video Outpainting | ['Fanda Fan', 'Chaoxu Guo', 'Litong Gong', 'Biao Wang', 'Tiezheng Ge', 'Yuning Jiang', 'Chunjie Luo', 'Jianfeng Zhan'] | ['cs.CV'] | Video outpainting aims to adequately complete missing areas at the edges of video frames. Compared to image outpainting, it presents an additional challenge as the model should maintain the temporal consistency of the filled area. In this paper, we introduce a masked 3D diffusion model for video outpainting. We use the... | 2023-09-05T10:52:21Z | Accepted to ACM MM 2023 | null | null | null | null | null | null | null | null | null |
2309.02233 | Augmenting Black-box LLMs with Medical Textbooks for Biomedical Question Answering | ['Yubo Wang', 'Xueguang Ma', 'Wenhu Chen'] | ['cs.CL', 'cs.AI'] | Large-scale language models (LLMs) like ChatGPT have demonstrated impressive abilities in generating responses based on human instructions. However, their use in the medical field can be challenging due to their lack of specific, in-depth knowledge. In this study, we present a system called LLMs Augmented with Medical ... | 2023-09-05T13:39:38Z | This version has been accepted and published at EMNLP Findings 2024 | null | null | Augmenting Black-box LLMs with Medical Textbooks for Biomedical Question Answering | ['Yubo Wang', 'Xueguang Ma', 'Wenhu Chen'] | 2023 | Conference on Empirical Methods in Natural Language Processing | 11 | 54 | ['Computer Science'] |
2309.02373 | nanoT5: A PyTorch Framework for Pre-training and Fine-tuning T5-style Models with Limited Resources | ['Piotr Nawrot'] | ['cs.CL'] | State-of-the-art language models like T5 have revolutionized the NLP landscape, but their computational demands hinder a large portion of the research community. To address this challenge, we present nanoT5, a specially-optimized PyTorch framework for efficient pre-training and fine-tuning of T5 models. Drawing on insi... | 2023-09-05T16:35:41Z | To appear at 3rd Workshop for Natural Language Processing Open Source Software | null | null | nanoT5: Fast & Simple Pre-training and Fine-tuning of T5 Models with Limited Resources | ['Piotr Nawrot'] | 2023 | NLPOSS | 10 | 31 | ['Computer Science'] |
2309.02561 | Physically Grounded Vision-Language Models for Robotic Manipulation | ['Jensen Gao', 'Bidipta Sarkar', 'Fei Xia', 'Ted Xiao', 'Jiajun Wu', 'Brian Ichter', 'Anirudha Majumdar', 'Dorsa Sadigh'] | ['cs.RO', 'cs.AI', 'cs.CV'] | Recent advances in vision-language models (VLMs) have led to improved performance on tasks such as visual question answering and image captioning. Consequently, these models are now well-positioned to reason about the physical world, particularly within domains such as robotic manipulation. However, current VLMs are li... | 2023-09-05T20:21:03Z | Updated version for ICRA 2024 | null | null | null | null | null | null | null | null | null |
2309.02724 | Offensive Hebrew Corpus and Detection using BERT | ['Nagham Hamad', 'Mustafa Jarrar', 'Mohammad Khalilia', 'Nadim Nashif'] | ['cs.CL', 'cs.AI', 'cs.LG', 'I.2.1; I.2.6; I.2.7; I.5.1'] | Offensive language detection has been well studied in many languages, but it is lagging behind in low-resource languages, such as Hebrew. In this paper, we present a new offensive language corpus in Hebrew. A total of 15,881 tweets were retrieved from Twitter. Each was labeled with one or more of five classes (abusive,... | 2023-09-06T05:18:43Z | 8 pages, 1 figure, The 20th ACS/IEEE International Conference on Computer Systems and Applications (AICCSA) | null | null | null | null | null | null | null | null | null |
2309.02836 | BigVSAN: Enhancing GAN-based Neural Vocoders with Slicing Adversarial Network | ['Takashi Shibuya', 'Yuhta Takida', 'Yuki Mitsufuji'] | ['cs.SD', 'cs.LG', 'eess.AS'] | Generative adversarial network (GAN)-based vocoders have been intensively studied because they can synthesize high-fidelity audio waveforms faster than real-time. However, it has been reported that most GANs fail to obtain the optimal projection for discriminating between real and fake data in the feature space. In the... | 2023-09-06T08:48:03Z | Accepted at ICASSP 2024. Equation (5) in the previous version is wrong. We modified it | null | null | null | null | null | null | null | null | null |
2309.02887 | A deep Natural Language Inference predictor without language-specific training data | ['Lorenzo Corradi', 'Alessandro Manenti', 'Francesca Del Bonifro', 'Francesco Setti', 'Dario Del Sorbo'] | ['cs.CL', 'cs.AI'] | In this paper we present a technique of NLP to tackle the problem of inference relation (NLI) between pairs of sentences in a target language of choice without a language-specific training dataset. We exploit a generic translation dataset, manually translated, along with two instances of the same pre-trained model - th... | 2023-09-06T10:20:59Z | Conference: ICIAP2023 | null | 10.1007/978-3-031-43153-1_15 | A Deep Natural Language Inference Predictor Without Language-Specific Training Data | ['Lorenzo Corradi', 'Alessandro Manenti', 'Francesca Del Bonifro', 'Francesco Setti', 'D. Sorbo'] | 2023 | International Conference on Image Analysis and Processing | 0 | 27 | ['Computer Science'] |
2309.03057 | Hide and Seek (HaS): A Lightweight Framework for Prompt Privacy Protection | ['Yu Chen', 'Tingxin Li', 'Huiming Liu', 'Yang Yu'] | ['cs.CR', 'cs.AI'] | Numerous companies have started offering services based on large language models (LLM), such as ChatGPT, which inevitably raises privacy concerns as users' prompts are exposed to the model provider. Previous research on secure reasoning using multi-party computation (MPC) has proven to be impractical for LLM applicatio... | 2023-09-06T14:54:11Z | null | null | null | Hide and Seek (HaS): A Lightweight Framework for Prompt Privacy Protection | ['Yu Chen', 'Tingxin Li', 'Huiming Liu', 'Yang Yu'] | 2023 | arXiv.org | 31 | 9 | ['Computer Science'] |
2309.03199 | Matcha-TTS: A fast TTS architecture with conditional flow matching | ['Shivam Mehta', 'Ruibo Tu', 'Jonas Beskow', 'Éva Székely', 'Gustav Eje Henter'] | ['eess.AS', 'cs.HC', 'cs.LG', 'cs.SD', '68T07', 'I.2.7; I.2.6; H.5.5'] | We introduce Matcha-TTS, a new encoder-decoder architecture for speedy TTS acoustic modelling, trained using optimal-transport conditional flow matching (OT-CFM). This yields an ODE-based decoder capable of high output quality in fewer synthesis steps than models trained using score matching. Careful design choices add... | 2023-09-06T17:59:57Z | 5 pages, 3 figures. Final version, accepted to IEEE ICASSP 2024 | null | null | Matcha-TTS: A Fast TTS Architecture with Conditional Flow Matching | ['Shivam Mehta', 'Ruibo Tu', 'J. Beskow', 'Éva Székely', 'G. Henter'] | 2023 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 96 | 43 | ['Engineering', 'Computer Science'] |
2309.03241 | GPT Can Solve Mathematical Problems Without a Calculator | ['Zhen Yang', 'Ming Ding', 'Qingsong Lv', 'Zhihuan Jiang', 'Zehai He', 'Yuyi Guo', 'Jinfeng Bai', 'Jie Tang'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Previous studies have typically assumed that large language models are unable to accurately perform arithmetic operations, particularly multiplication of >8 digits, and operations involving decimals and fractions, without the use of calculator tools. This paper aims to challenge this misconception. With sufficient trai... | 2023-09-06T06:18:16Z | 26 pages, 14 figures | null | null | null | null | null | null | null | null | null |
2309.03450 | XGen-7B Technical Report | ['Erik Nijkamp', 'Tian Xie', 'Hiroaki Hayashi', 'Bo Pang', 'Congying Xia', 'Chen Xing', 'Jesse Vig', 'Semih Yavuz', 'Philippe Laban', 'Ben Krause', 'Senthil Purushwalkam', 'Tong Niu', 'Wojciech Kryściński', "Lidiya Murakhovs'ka", 'Prafulla Kumar Choubey', 'Alex Fabbri', 'Ye Liu', 'Rui Meng', 'Lifu Tu', 'Meghana Bhat', ... | ['cs.CL', 'cs.AI', 'cs.LG'] | Large Language Models (LLMs) have become ubiquitous across various domains, transforming the way we interact with information and conduct research. However, most high-performing LLMs remain confined behind proprietary walls, hindering scientific progress. Most open-source LLMs, on the other hand, are limited in their a... | 2023-09-07T02:20:03Z | null | null | null | null | null | null | null | null | null | null |
2309.03453 | SyncDreamer: Generating Multiview-consistent Images from a Single-view Image | ['Yuan Liu', 'Cheng Lin', 'Zijiao Zeng', 'Xiaoxiao Long', 'Lingjie Liu', 'Taku Komura', 'Wenping Wang'] | ['cs.CV', 'cs.AI', 'cs.GR'] | In this paper, we present a novel diffusion model called that generates multiview-consistent images from a single-view image. Using pretrained large-scale 2D diffusion models, recent work Zero123 demonstrates the ability to generate plausible novel views from a single-view image of an object. However, maintaining consi... | 2023-09-07T02:28:04Z | ICLR 2024 Spotlight. Project page: https://liuyuan-pal.github.io/SyncDreamer/ Code: https://github.com/liuyuan-pal/SyncDreamer | null | null | null | null | null | null | null | null | null |
2309.03787 | USA: Universal Sentiment Analysis Model & Construction of Japanese Sentiment Text Classification and Part of Speech Dataset | ['Chengguang Gan', 'Qinghao Zhang', 'Tatsunori Mori'] | ['cs.CL'] | Sentiment analysis is a pivotal task in the domain of natural language processing. It encompasses both text-level sentiment polarity classification and word-level Part of Speech(POS) sentiment polarity determination. Such analysis challenges models to understand text holistically while also extracting nuanced informati... | 2023-09-07T15:35:00Z | Model already Open Sourced, Dataset will release soon | null | null | USA: Universal Sentiment Analysis Model & Construction of Japanese Sentiment Text Classification and Part of Speech Dataset | ['Chengguang Gan', 'Qinghao Zhang', 'Tatsunori Mori'] | 2023 | arXiv.org | 4 | 28 | ['Computer Science'] |
2309.03905 | ImageBind-LLM: Multi-modality Instruction Tuning | ['Jiaming Han', 'Renrui Zhang', 'Wenqi Shao', 'Peng Gao', 'Peng Xu', 'Han Xiao', 'Kaipeng Zhang', 'Chris Liu', 'Song Wen', 'Ziyu Guo', 'Xudong Lu', 'Shuai Ren', 'Yafei Wen', 'Xiaoxin Chen', 'Xiangyu Yue', 'Hongsheng Li', 'Yu Qiao'] | ['cs.MM', 'cs.CL', 'cs.CV', 'cs.LG', 'cs.SD', 'eess.AS'] | We present ImageBind-LLM, a multi-modality instruction tuning method of large language models (LLMs) via ImageBind. Existing works mainly focus on language and image instruction tuning, different from which, our ImageBind-LLM can respond to multi-modality conditions, including audio, 3D point clouds, video, and their e... | 2023-09-07T17:59:45Z | Code is available at https://github.com/OpenGVLab/LLaMA-Adapter | null | null | null | null | null | null | null | null | null |
2309.04175 | Knowledge-tuning Large Language Models with Structured Medical Knowledge Bases for Reliable Response Generation in Chinese | ['Haochun Wang', 'Sendong Zhao', 'Zewen Qiang', 'Zijian Li', 'Nuwa Xi', 'Yanrui Du', 'MuZhen Cai', 'Haoqiang Guo', 'Yuhan Chen', 'Haoming Xu', 'Bing Qin', 'Ting Liu'] | ['cs.CL', 'cs.AI'] | Large Language Models (LLMs) have demonstrated remarkable success in diverse natural language processing (NLP) tasks in general domains. However, LLMs sometimes generate responses with the hallucination about medical facts due to limited domain knowledge. Such shortcomings pose potential risks in the utilization of LLM... | 2023-09-08T07:42:57Z | 11 pages, 5 figures | null | 10.1145/3686807 | Knowledge-tuning Large Language Models with Structured Medical Knowledge Bases for Trustworthy Response Generation in Chinese | ['Hao Wang', 'Sendong Zhao', 'Zewen Qiang', 'Zijian Li', 'Nuwa Xi', 'Yanrui Du', 'Muzhen Cai', 'Haoqiang Guo', 'Yuhan Chen', 'Haoming Xu', 'Bing Qin', 'Ting Liu'] | 2023 | ACM Transactions on Knowledge Discovery from Data | 21 | 49 | ['Computer Science'] |
2309.04198 | Don't Ignore Dual Logic Ability of LLMs while Privatizing: A Data-Intensive Analysis in Medical Domain | ['Yanrui Du', 'Sendong Zhao', 'Muzhen Cai', 'Ming Ma', 'Danyang Zhao', 'Jiawei Cao', 'Bing Qin'] | ['cs.CL'] | Extensive studies have been devoted to privatizing general-domain Large Language Models (LLMs) as Domain-Specific LLMs via feeding specific-domain data. However, these privatization efforts often ignored a critical aspect: Dual Logic Ability, which is a core reasoning ability for LLMs. The dual logic ability of LLMs en... | 2023-09-08T08:20:46Z | null | null | null | Don't Ignore Dual Logic Ability of LLMs while Privatizing: A Data-Intensive Analysis in Medical Domain | ['Yanrui Du', 'Sendong Zhao', 'Yuhan Chen', 'Rai Bai', 'Jing Liu', 'Huaqin Wu', 'Haifeng Wang', 'Bing Qin'] | 2023 | null | 3 | 17 | ['Computer Science'] |
2309.04662 | MADLAD-400: A Multilingual And Document-Level Large Audited Dataset | ['Sneha Kudugunta', 'Isaac Caswell', 'Biao Zhang', 'Xavier Garcia', 'Christopher A. Choquette-Choo', 'Katherine Lee', 'Derrick Xin', 'Aditya Kusupati', 'Romi Stella', 'Ankur Bapna', 'Orhan Firat'] | ['cs.CL', 'cs.LG'] | We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual... | 2023-09-09T02:34:01Z | Preprint | null | null | null | null | null | null | null | null | null |
2309.04669 | Unified Language-Vision Pretraining in LLM with Dynamic Discrete Visual
Tokenization | ['Yang Jin', 'Kun Xu', 'Kun Xu', 'Liwei Chen', 'Chao Liao', 'Jianchao Tan', 'Quzhe Huang', 'Bin Chen', 'Chenyi Lei', 'An Liu', 'Chengru Song', 'Xiaoqiang Lei', 'Di Zhang', 'Wenwu Ou', 'Kun Gai', 'Yadong Mu'] | ['cs.CV'] | Recently, the remarkable advance of the Large Language Model (LLM) has
inspired researchers to transfer its extraordinary reasoning capability to both
vision and language data. However, the prevailing approaches primarily regard
the visual input as a prompt and focus exclusively on optimizing the text
generation proces... | 2023-09-09T03:01:38Z | ICLR 2024 | null | null | Unified Language-Vision Pretraining in LLM with Dynamic Discrete Visual Tokenization | ['Yang Jin', 'Kun Xu', 'Kun Xu', 'Liwei Chen', 'Chao Liao', 'Jianchao Tan', 'Quzhe Huang', 'Bin Chen', 'Chenyi Lei', 'An Liu', 'Chengru Song', 'Xiaoqiang Lei', 'Di Zhang', 'Wenwu Ou', 'Kun Gai', 'Yadong Mu'] | 2023 | International Conference on Learning Representations | 50 | 58 | ['Computer Science'] |
2309.04704 | Analysis of Disinformation and Fake News Detection Using Fine-Tuned
Large Language Model | ['Bohdan M. Pavlyshenko'] | ['cs.CL', 'cs.AI', 'cs.CY', 'cs.IR', 'cs.LG'] | The paper considers the possibility of fine-tuning Llama 2 large language
model (LLM) for the disinformation analysis and fake news detection. For
fine-tuning, the PEFT/LoRA based approach was used. In the study, the model was
fine-tuned for the following tasks: analysing a text on revealing
disinformation and propagan... | 2023-09-09T07:10:19Z | null | null | null | null | null | null | null | null | null | null |
2309.05019 | SA-Solver: Stochastic Adams Solver for Fast Sampling of Diffusion Models | ['Shuchen Xue', 'Mingyang Yi', 'Weijian Luo', 'Shifeng Zhang', 'Jiacheng Sun', 'Zhenguo Li', 'Zhi-Ming Ma'] | ['cs.LG', 'stat.ML'] | Diffusion Probabilistic Models (DPMs) have achieved considerable success in
generation tasks. As sampling from DPMs is equivalent to solving diffusion SDE
or ODE which is time-consuming, numerous fast sampling methods built upon
improved differential equation solvers are proposed. The majority of such
techniques consid... | 2023-09-10T12:44:54Z | Accepted in NeurIPS 2023 | null | null | SA-Solver: Stochastic Adams Solver for Fast Sampling of Diffusion Models | ['Shuchen Xue', 'Mingyang Yi', 'Weijian Luo', 'Shifeng Zhang', 'Jiacheng Sun', 'Z. Li', 'Zhi-Ming Ma'] | 2023 | Neural Information Processing Systems | 52 | 44 | ['Computer Science', 'Mathematics'] |
2309.05196 | Does Writing with Language Models Reduce Content Diversity? | ['Vishakh Padmakumar', 'He He'] | ['cs.CL', 'cs.CY', 'cs.HC', 'cs.LG'] | Large language models (LLMs) have led to a surge in collaborative writing
with model assistance. As different users incorporate suggestions from the same
model, there is a risk of decreased diversity in the produced content,
potentially limiting diverse perspectives in public discourse. In this work, we
measure the imp... | 2023-09-11T02:16:47Z | ICLR 2024 | null | null | null | null | null | null | null | null | null |
2309.05203 | From Artificially Real to Real: Leveraging Pseudo Data from Large
Language Models for Low-Resource Molecule Discovery | ['Yuhan Chen', 'Nuwa Xi', 'Yanrui Du', 'Haochun Wang', 'Jianyu Chen', 'Sendong Zhao', 'Bing Qin'] | ['cs.CL'] | Molecule discovery serves as a cornerstone in numerous scientific domains,
fueling the development of new materials and innovative drug designs. Recent
developments of in-silico molecule discovery have highlighted the promising
results of cross-modal techniques, which bridge molecular structures with their
descriptive ... | 2023-09-11T02:35:36Z | AAAI2024 | null | null | null | null | null | null | null | null | null |
2309.05248 | Enhancing Speaker Diarization with Large Language Models: A Contextual
Beam Search Approach | ['Tae Jin Park', 'Kunal Dhawan', 'Nithin Koluguri', 'Jagadeesh Balam'] | ['eess.AS', 'cs.SD'] | Large language models (LLMs) have shown great promise for capturing
contextual information in natural language processing tasks. We propose a novel
approach to speaker diarization that incorporates the prowess of LLMs to
exploit contextual cues in human dialogues. Our method builds upon an
acoustic-based speaker diariz... | 2023-09-11T05:47:56Z | 4 pages 1 reference page, ICASSP format | null | null | null | null | null | null | null | null | null |
2309.05300 | Decoupling Common and Unique Representations for Multimodal
Self-supervised Learning | ['Yi Wang', 'Conrad M Albrecht', 'Nassim Ait Ali Braham', 'Chenying Liu', 'Zhitong Xiong', 'Xiao Xiang Zhu'] | ['cs.CV'] | The increasing availability of multi-sensor data sparks wide interest in
multimodal self-supervised learning. However, most existing approaches learn
only common representations across modalities while ignoring intra-modal
training and modality-unique representations. We propose Decoupling Common and
Unique Representat... | 2023-09-11T08:35:23Z | Accepted to ECCV 2024. 27 pages, 8 figures | null | null | Decoupling Common and Unique Representations for Multimodal Self-supervised Learning | ['Yi Wang', 'C. Albrecht', 'Nassim Ait Ali Braham', 'Chenying Liu', 'Zhitong Xiong', 'Xiaoxiang Zhu'] | 2023 | European Conference on Computer Vision | 19 | 64 | ['Computer Science'] |
2309.05447 | DoG-Instruct: Towards Premium Instruction-Tuning Data via Text-Grounded
Instruction Wrapping | ['Yongrui Chen', 'Haiyun Jiang', 'Xinting Huang', 'Shuming Shi', 'Guilin Qi'] | ['cs.CL'] | The improvement of LLMs' instruction-following capabilities relies heavily on
the availability of high-quality instruction-response pairs. Unfortunately, the
current methods used to collect the pairs suffer from either unaffordable labor
costs or severe hallucinations in the self-generation of LLM. To tackle these
chal... | 2023-09-11T13:41:18Z | Accepted in NAACL 2024 | null | null | DoG-Instruct: Towards Premium Instruction-Tuning Data via Text-Grounded Instruction Wrapping | ['Yongrui Chen', 'Haiyun Jiang', 'Xinting Huang', 'Shuming Shi', 'Guilin Qi'] | 2023 | North American Chapter of the Association for Computational Linguistics | 11 | 28 | ['Computer Science'] |
2309.05463 | Textbooks Are All You Need II: phi-1.5 technical report | ['Yuanzhi Li', 'Sébastien Bubeck', 'Ronen Eldan', 'Allie Del Giorno', 'Suriya Gunasekar', 'Yin Tat Lee'] | ['cs.CL', 'cs.AI'] | We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of... | 2023-09-11T14:01:45Z | null | null | null | null | null | null | null | null | null | null |
2309.05472 | LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for
Self-supervised Representations of French Speech | ['Titouan Parcollet', 'Ha Nguyen', 'Solene Evain', 'Marcely Zanon Boito', 'Adrien Pupier', 'Salima Mdhaffar', 'Hang Le', 'Sina Alisamir', 'Natalia Tomashenko', 'Marco Dinarelli', 'Shucong Zhang', 'Alexandre Allauzen', 'Maximin Coavoux', 'Yannick Esteve', 'Mickael Rouvier', 'Jerome Goulian', 'Benjamin Lecouteux', 'Franc... | ['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS'] | Self-supervised learning (SSL) is at the origin of unprecedented improvements
in many different domains including computer vision and natural language
processing. Speech processing drastically benefitted from SSL as most of the
current domain-related tasks are now being approached with pre-trained models.
This work int... | 2023-09-11T14:13:09Z | Published in Computer Science and Language. Preprint allowed | null | null | null | null | null | null | null | null | null |
2309.05516 | Optimize Weight Rounding via Signed Gradient Descent for the
Quantization of LLMs | ['Wenhua Cheng', 'Weiwei Zhang', 'Haihao Shen', 'Yiyang Cai', 'Xin He', 'Kaokao Lv', 'Yi Liu'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Large Language Models (LLMs) have demonstrated exceptional proficiency in
language-related tasks, but their deployment poses significant challenges due
to substantial memory and storage requirements. Weight-only quantization has
emerged as a promising solution, significantly reducing memory and storage
needs without sa... | 2023-09-11T14:58:23Z | EMNLP24 Findings | null | null | null | null | null | null | null | null | null |
2309.05519 | NExT-GPT: Any-to-Any Multimodal LLM | ['Shengqiong Wu', 'Hao Fei', 'Leigang Qu', 'Wei Ji', 'Tat-Seng Chua'] | ['cs.AI', 'cs.CL', 'cs.LG'] | While recently Multimodal Large Language Models (MM-LLMs) have made exciting
strides, they mostly fall prey to the limitation of only input-side multimodal
understanding, without the ability to produce content in multiple modalities.
As we humans always perceive the world and communicate with people through
various mod... | 2023-09-11T15:02:25Z | ICML 2024 (Oral) | null | null | null | null | null | null | null | null | null |
2309.05653 | MAmmoTH: Building Math Generalist Models through Hybrid Instruction
Tuning | ['Xiang Yue', 'Xingwei Qu', 'Ge Zhang', 'Yao Fu', 'Wenhao Huang', 'Huan Sun', 'Yu Su', 'Wenhu Chen'] | ['cs.CL'] | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of whic... | 2023-09-11T17:47:22Z | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | null | MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning | ['Xiang Yue', 'Xingwei Qu', 'Ge Zhang', 'Yao Fu', 'Wenhao Huang', 'Huan Sun', 'Yu Su', 'Wenhu Chen'] | 2023 | International Conference on Learning Representations | 404 | 76 | ['Computer Science'] |
2309.05767 | Natural Language Supervision for General-Purpose Audio Representations | ['Benjamin Elizalde', 'Soham Deshmukh', 'Huaming Wang'] | ['cs.SD', 'eess.AS'] | Audio-Language models jointly learn multimodal text and audio representations
that enable Zero-Shot inference. Models rely on the encoders to create powerful
representations of the input and generalize to multiple tasks ranging from
sounds, music, and speech. Although models have achieved remarkable
performance, there ... | 2023-09-11T18:50:21Z | null | null | null | null | null | null | null | null | null | null |
2309.05793 | PhotoVerse: Tuning-Free Image Customization with Text-to-Image Diffusion
Models | ['Li Chen', 'Mengyi Zhao', 'Yiheng Liu', 'Mingxu Ding', 'Yangyang Song', 'Shizun Wang', 'Xu Wang', 'Hao Yang', 'Jing Liu', 'Kang Du', 'Min Zheng'] | ['cs.CV', 'cs.AI'] | Personalized text-to-image generation has emerged as a powerful and
sought-after tool, empowering users to create customized images based on their
specific concepts and prompts. However, existing approaches to personalization
encounter multiple challenges, including long tuning times, large storage
requirements, the ne... | 2023-09-11T19:59:43Z | null | null | null | PhotoVerse: Tuning-Free Image Customization with Text-to-Image Diffusion Models | ['Li Chen', 'Mengyi Zhao', 'Yiheng Liu', 'Mingxu Ding', 'Yangyang Song', 'Shizun Wang', 'Xu Wang', 'Hao Yang', 'Jing Liu', 'Kang Du', 'Minghang Zheng'] | 2023 | arXiv.org | 55 | 36 | ['Computer Science'] |
2309.06085 | BHASA: A Holistic Southeast Asian Linguistic and Cultural Evaluation
Suite for Large Language Models | ['Wei Qi Leong', 'Jian Gang Ngui', 'Yosephine Susanto', 'Hamsawardhini Rengarajan', 'Kengatharaiyer Sarveswaran', 'William Chandra Tjhi'] | ['cs.CL'] | The rapid development of Large Language Models (LLMs) and the emergence of
novel abilities with scale have necessitated the construction of holistic,
diverse and challenging benchmarks such as HELM and BIG-bench. However, at the
moment, most of these benchmarks focus only on performance in English and
evaluations that ... | 2023-09-12T09:31:25Z | 86 pages, 7 figures, added link to repository in abstract, minor
formatting changes and typo corrections | null | null | BHASA: A Holistic Southeast Asian Linguistic and Cultural Evaluation Suite for Large Language Models | ['Wei Qi Leong', 'Jian Gang Ngui', 'Yosephine Susanto', 'Hamsawardhini Rengarajan', 'Kengatharaiyer Sarveswaran', 'William-Chandra Tjhi'] | 2023 | arXiv.org | 9 | 193 | ['Computer Science'] |
2309.06126 | AstroLLaMA: Towards Specialized Foundation Models in Astronomy | ['Tuan Dung Nguyen', 'Yuan-Sen Ting', 'Ioana Ciucă', "Charlie O'Neill", 'Ze-Chang Sun', 'Maja Jabłońska', 'Sandor Kruk', 'Ernest Perkowski', 'Jack Miller', 'Jason Li', 'Josh Peek', 'Kartheik Iyer', 'Tomasz Różański', 'Pranav Khetarpal', 'Sharaf Zaman', 'David Brodrick', 'Sergio J. Rodríguez Méndez', 'Thang Bui', 'Alyss... | ['astro-ph.IM', 'astro-ph.CO', 'astro-ph.GA', 'astro-ph.HE', 'cs.CL', 'cs.LG'] | Large language models excel in many human-language tasks but often falter in
highly specialized domains like scholarly astronomy. To bridge this gap, we
introduce AstroLLaMA, a 7-billion-parameter model fine-tuned from LLaMA-2 using
over 300,000 astronomy abstracts from arXiv. Optimized for traditional causal
language ... | 2023-09-12T11:02:27Z | 6 pages, 3 figures, submitted to IJCNLP-AACL 2023. Comments are
welcome. The model can be found on Hugging Face -
https://huggingface.co/universeTBD/astrollama | null | null | null | null | null | null | null | null | null |
2309.06180 | Efficient Memory Management for Large Language Model Serving with
PagedAttention | ['Woosuk Kwon', 'Zhuohan Li', 'Siyuan Zhuang', 'Ying Sheng', 'Lianmin Zheng', 'Cody Hao Yu', 'Joseph E. Gonzalez', 'Hao Zhang', 'Ion Stoica'] | ['cs.LG', 'cs.DC'] | High throughput serving of large language models (LLMs) requires batching
sufficiently many requests at a time. However, existing systems struggle
because the key-value cache (KV cache) memory for each request is huge and
grows and shrinks dynamically. When managed inefficiently, this memory can be
significantly wasted... | 2023-09-12T12:50:04Z | SOSP 2023 | null | null | null | null | null | null | null | null | null |
2309.06380 | InstaFlow: One Step is Enough for High-Quality Diffusion-Based
Text-to-Image Generation | ['Xingchao Liu', 'Xiwen Zhang', 'Jianzhu Ma', 'Jian Peng', 'Qiang Liu'] | ['cs.LG', 'cs.CV'] | Diffusion models have revolutionized text-to-image generation with its
exceptional quality and creativity. However, its multi-step sampling process is
known to be slow, often requiring tens of inference steps to obtain
satisfactory results. Previous attempts to improve its sampling speed and
reduce computational costs ... | 2023-09-12T16:42:09Z | ICLR 2024 | null | null | null | null | null | null | null | null | null |
2309.06497 | A Distributed Data-Parallel PyTorch Implementation of the Distributed
Shampoo Optimizer for Training Neural Networks At-Scale | ['Hao-Jun Michael Shi', 'Tsung-Hsien Lee', 'Shintaro Iwasaki', 'Jose Gallego-Posada', 'Zhijing Li', 'Kaushik Rangadurai', 'Dheevatsa Mudigere', 'Michael Rabbat'] | ['cs.LG', 'cs.DC', 'cs.MS', 'math.OC'] | Shampoo is an online and stochastic optimization algorithm belonging to the
AdaGrad family of methods for training neural networks. It constructs a
block-diagonal preconditioner where each block consists of a coarse Kronecker
product approximation to full-matrix AdaGrad for each parameter of the neural
network. In this... | 2023-09-12T18:11:10Z | 38 pages, 8 figures, 5 tables | null | null | null | null | null | null | null | null | null |
2309.06891 | Keep It SimPool: Who Said Supervised Transformers Suffer from Attention
Deficit? | ['Bill Psomas', 'Ioannis Kakogeorgiou', 'Konstantinos Karantzalos', 'Yannis Avrithis'] | ['cs.CV', 'cs.LG'] | Convolutional networks and vision transformers have different forms of
pairwise interactions, pooling across layers and pooling at the end of the
network. Does the latter really need to be different? As a by-product of
pooling, vision transformers provide spatial attention for free, but this is
most often of low qualit... | 2023-09-13T11:28:27Z | ICCV 2023. Code and models: https://github.com/billpsomas/simpool | International Conference on Computer Vision (2023) | null | null | null | null | null | null | null | null |
2309.07207 | EarthPT: a time series foundation model for Earth Observation | ['Michael J. Smith', 'Luke Fleming', 'James E. Geach'] | ['cs.LG', 'physics.geo-ph'] | We introduce EarthPT -- an Earth Observation (EO) pretrained transformer.
EarthPT is a 700 million parameter decoding transformer foundation model
trained in an autoregressive self-supervised manner and developed specifically
with EO use-cases in mind. We demonstrate that EarthPT is an effective
forecaster that can acc... | 2023-09-13T18:00:00Z | 7 pages, 4 figures, accepted to NeurIPS CCAI workshop at
https://www.climatechange.ai/papers/neurips2023/2 . Code available at
https://github.com/aspiaspace/EarthPT | null | null | EarthPT: a time series foundation model for Earth Observation | ['Michael J. Smith', 'Luke Fleming', 'J. Geach'] | 2023 | null | 7 | 28 | ['Computer Science', 'Physics'] |
2309.07287 | Enhancing Child Vocalization Classification with Phonetically-Tuned
Embeddings for Assisting Autism Diagnosis | ['Jialu Li', 'Mark Hasegawa-Johnson', 'Karrie Karahalios'] | ['eess.AS', 'cs.SD'] | The assessment of children at risk of autism typically involves a clinician
observing, taking notes, and rating children's behaviors. A machine learning
model that can label adult and child audio may largely save labor in coding
children's behaviors, helping clinicians capture critical events and better
communicate wit... | 2023-09-13T20:13:40Z | Accepted to Interspeech 2024 | null | null | null | null | null | null | null | null | null |
2309.07314 | AudioSR: Versatile Audio Super-resolution at Scale | ['Haohe Liu', 'Ke Chen', 'Qiao Tian', 'Wenwu Wang', 'Mark D. Plumbley'] | ['cs.SD', 'cs.AI', 'cs.MM', 'eess.AS', 'eess.SP'] | Audio super-resolution is a fundamental task that predicts high-frequency
components for low-resolution audio, enhancing audio quality in digital
applications. Previous methods have limitations such as the limited scope of
audio types (e.g., music, speech) and specific bandwidth settings they can
handle (e.g., 4kHz to ... | 2023-09-13T21:00:09Z | Under review. Demo and code: https://audioldm.github.io/audiosr | null | null | Audiosr: Versatile Audio Super-Resolution at Scale | ['Haohe Liu', 'Ke Chen', 'Qiao Tian', 'Wenwu Wang', 'Mark D. Plumbley'] | 2023 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 25 | 26 | ['Computer Science', 'Engineering'] |
2309.07391 | EnCodecMAE: Leveraging neural codecs for universal audio representation
learning | ['Leonardo Pepino', 'Pablo Riera', 'Luciana Ferrer'] | ['cs.SD', 'cs.LG', 'eess.AS'] | The goal of universal audio representation learning is to obtain foundational
models that can be used for a variety of downstream tasks involving speech,
music and environmental sounds. To approach this problem, methods inspired by
works on self-supervised learning for NLP, like BERT, or computer vision, like
masked au... | 2023-09-14T02:21:53Z | null | null | null | null | null | null | null | null | null | null |
2309.07405 | FunCodec: A Fundamental, Reproducible and Integrable Open-source Toolkit
for Neural Speech Codec | ['Zhihao Du', 'Shiliang Zhang', 'Kai Hu', 'Siqi Zheng'] | ['cs.SD', 'cs.AI', 'eess.AS'] | This paper presents FunCodec, a fundamental neural speech codec toolkit,
which is an extension of the open-source speech processing toolkit FunASR.
FunCodec provides reproducible training recipes and inference scripts for the
latest neural speech codec models, such as SoundStream and Encodec. Thanks to
the unified desi... | 2023-09-14T03:18:24Z | 5 pages, 3 figures, submitted to ICASSP 2024 | null | null | null | null | null | null | null | null | null |
2309.07414 | PromptASR for contextualized ASR with controllable style | ['Xiaoyu Yang', 'Wei Kang', 'Zengwei Yao', 'Yifan Yang', 'Liyong Guo', 'Fangjun Kuang', 'Long Lin', 'Daniel Povey'] | ['eess.AS', 'cs.CL', 'cs.SD'] | Prompts are crucial to large language models as they provide context
information such as topic or logical relationships. Inspired by this, we
propose PromptASR, a framework that integrates prompts in end-to-end automatic
speech recognition (E2E ASR) systems to achieve contextualized ASR with
controllable style of trans... | 2023-09-14T03:43:07Z | Proc. ICASSP 2024 | null | null | PromptASR for Contextualized ASR with Controllable Style | ['Xiaoyu Yang', 'Wei Kang', 'Zengwei Yao', 'Yifan Yang', 'Liyong Guo', 'Fangjun Kuang', 'Long Lin', 'Daniel Povey'] | 2023 | IEEE International Conference on Acoustics, Speech, and Signal Processing | 14 | 24 | ['Computer Science', 'Engineering'] |
2309.07445 | SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic
Classification in 200+ Languages and Dialects | ['David Ifeoluwa Adelani', 'Hannah Liu', 'Xiaoyu Shen', 'Nikita Vassilyev', 'Jesujoba O. Alabi', 'Yanke Mao', 'Haonan Gao', 'Annie En-Shiun Lee'] | ['cs.CL'] | Despite the progress we have recorded in the last few years in multilingual
natural language processing, evaluation is typically limited to a small set of
languages with available datasets which excludes a large number of low-resource
languages. In this paper, we created SIB-200 -- a large-scale open-sourced
benchmark ... | 2023-09-14T05:56:49Z | Accepted to EACL 2024 (main conference) | null | null | null | null | null | null | null | null | null |
2309.07597 | C-Pack: Packed Resources For General Chinese Embeddings | ['Shitao Xiao', 'Zheng Liu', 'Peitian Zhang', 'Niklas Muennighoff', 'Defu Lian', 'Jian-Yun Nie'] | ['cs.CL', 'cs.AI', 'cs.IR'] | We introduce C-Pack, a package of resources that significantly advance the
field of general Chinese embeddings. C-Pack includes three critical resources.
1) C-MTEB is a comprehensive benchmark for Chinese text embeddings covering 6
tasks and 35 datasets. 2) C-MTP is a massive text embedding dataset curated
from labeled... | 2023-09-14T10:57:50Z | SIGIR 2024 | null | null | null | null | null | null | null | null | null |
2309.07875 | Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language
Models that Follow Instructions | ['Federico Bianchi', 'Mirac Suzgun', 'Giuseppe Attanasio', 'Paul Röttger', 'Dan Jurafsky', 'Tatsunori Hashimoto', 'James Zou'] | ['cs.CL'] | Training large language models to follow instructions makes them perform
better on a wide range of tasks and generally become more helpful. However, a
perfectly helpful model will follow even the most malicious instructions and
readily generate harmful content. In this paper, we raise concerns over the
safety of models... | 2023-09-14T17:23:37Z | null | null | null | null | null | null | null | null | null | null |
2309.07915 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context
Learning | ['Haozhe Zhao', 'Zefan Cai', 'Shuzheng Si', 'Xiaojian Ma', 'Kaikai An', 'Liang Chen', 'Zixuan Liu', 'Sheng Wang', 'Wenjuan Han', 'Baobao Chang'] | ['cs.CL', 'cs.AI', 'cs.CV'] | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-mo... | 2023-09-14T17:59:17Z | Accepted by ICLR2024 | null | null | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | ['Haozhe Zhao', 'Zefan Cai', 'Shuzheng Si', 'Xiaojian Ma', 'Kaikai An', 'Liang Chen', 'Zixuan Liu', 'Sheng Wang', 'Wenjuan Han', 'Baobao Chang'] | 2023 | International Conference on Learning Representations | 143 | 140 | ['Computer Science'] |
2309.08168 | Draft & Verify: Lossless Large Language Model Acceleration via
Self-Speculative Decoding | ['Jun Zhang', 'Jue Wang', 'Huan Li', 'Lidan Shou', 'Ke Chen', 'Gang Chen', 'Sharad Mehrotra'] | ['cs.CL'] | We present a novel inference scheme, self-speculative decoding, for
accelerating Large Language Models (LLMs) without the need for an auxiliary
model. This approach is characterized by a two-stage process: drafting and
verification. The drafting stage generates draft tokens at a slightly lower
quality but more quickly,... | 2023-09-15T05:34:32Z | Accepted to ACL 2024 | null | 10.18653/v1/2024.acl-long.607 | null | null | null | null | null | null | null |
2309.08351 | Headless Language Models: Learning without Predicting with Contrastive
Weight Tying | ['Nathan Godey', 'Éric de la Clergerie', 'Benoît Sagot'] | ['cs.CL'] | Self-supervised pre-training of language models usually consists in
predicting probability distributions over extensive token vocabularies. In this
study, we propose an innovative method that shifts away from probability
prediction and instead focuses on reconstructing input embeddings in a
contrastive fashion via Cons... | 2023-09-15T12:20:00Z | null | null | null | Headless Language Models: Learning without Predicting with Contrastive Weight Tying | ['Nathan Godey', 'Eric Villemonte de la Clergerie', 'Benoît Sagot'] | 2023 | International Conference on Learning Representations | 3 | 46 | ['Computer Science'] |
2309.08469 | Silver Retriever: Advancing Neural Passage Retrieval for Polish Question
Answering | ['Piotr Rybak', 'Maciej Ogrodniczuk'] | ['cs.CL', 'cs.IR'] | Modern open-domain question answering systems often rely on accurate and
efficient retrieval components to find passages containing the facts necessary
to answer the question. Recently, neural retrievers have gained popularity over
lexical alternatives due to their superior performance. However, most of the
work concer... | 2023-09-15T15:19:53Z | null | null | null | null | null | null | null | null | null | null |
2309.08695 | Resolving Legalese: A Multilingual Exploration of Negation Scope
Resolution in Legal Documents | ['Ramona Christen', 'Anastassia Shaitarova', 'Matthias Stürmer', 'Joel Niklaus'] | ['cs.CL', 'cs.AI', 'cs.LG', '68T50', 'I.2'] | Resolving the scope of a negation within a sentence is a challenging NLP
task. The complexity of legal texts and the lack of annotated in-domain
negation corpora pose challenges for state-of-the-art (SotA) models when
performing negation scope resolution on multilingual legal data. Our
experiments demonstrate that mode... | 2023-09-15T18:38:06Z | null | null | null | null | null | null | null | null | null | null |
2309.08730 | MusiLingo: Bridging Music and Text with Pre-trained Language Models for
Music Captioning and Query Response | ['Zihao Deng', 'Yinghao Ma', 'Yudong Liu', 'Rongchen Guo', 'Ge Zhang', 'Wenhu Chen', 'Wenhao Huang', 'Emmanouil Benetos'] | ['eess.AS', 'cs.AI', 'cs.CL', 'cs.MM', 'cs.SD'] | Large Language Models (LLMs) have shown immense potential in multimodal
applications, yet the convergence of textual and musical domains remains not
well-explored. To address this gap, we present MusiLingo, a novel system for
music caption generation and music-related query responses. MusiLingo employs a
single project... | 2023-09-15T19:31:40Z | null | 2024 Annual Conference of the North American Chapter of the
Association for Computational Linguistics | null | null | null | null | null | null | null | null |
2309.08788 | BioinspiredLLM: Conversational Large Language Model for the Mechanics of
Biological and Bio-inspired Materials | ['Rachel K. Luu', 'Markus J. Buehler'] | ['cond-mat.mtrl-sci', 'cond-mat.dis-nn', 'cond-mat.soft', 'cs.LG', 'nlin.AO'] | The study of biological materials and bio-inspired materials science is well
established; however, surprisingly little knowledge has been systematically
translated to engineering solutions. To accelerate discovery and guide
insights, an open-source autoregressive transformer large language model (LLM),
BioinspiredLLM, ... | 2023-09-15T22:12:44Z | null | null | null | null | null | null | null | null | null | null |
2309.08958 | Monolingual or Multilingual Instruction Tuning: Which Makes a Better
Alpaca | ['Pinzhen Chen', 'Shaoxiong Ji', 'Nikolay Bogoychev', 'Andrey Kutuzov', 'Barry Haddow', 'Kenneth Heafield'] | ['cs.CL', 'cs.AI'] | Foundational large language models (LLMs) can be instruction-tuned to perform
open-domain question answering, facilitating applications like chat assistants.
While such efforts are often carried out in a single language, we empirically
analyze cost-efficient strategies for multilingual scenarios. Our study employs
the ... | 2023-09-16T11:22:46Z | Accepted to Findings of ACL: EACL 2024. Added human evaluation and
shortened writing | null | null | Monolingual or Multilingual Instruction Tuning: Which Makes a Better Alpaca | ['Pinzhen Chen', 'Shaoxiong Ji', 'Nikolay Bogoychev', 'B. Haddow', 'Kenneth Heafield'] | 2023 | Findings | 47 | 56 | ['Computer Science'] |
2309.09400 | CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large
Language Models in 167 Languages | ['Thuat Nguyen', 'Chien Van Nguyen', 'Viet Dac Lai', 'Hieu Man', 'Nghia Trung Ngo', 'Franck Dernoncourt', 'Ryan A. Rossi', 'Thien Huu Nguyen'] | ['cs.CL', 'cs.AI'] | The driving factors behind the development of large language models (LLMs)
with impressive learning capabilities are their colossal model sizes and
extensive training datasets. Along with the progress in natural language
processing, LLMs have been frequently made accessible to the public to foster
deeper investigation ... | 2023-09-17T23:49:10Z | Ongoing Work | null | null | CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages | ['Thuat Nguyen', 'Chien Van Nguyen', 'Viet Dac Lai', 'Hieu Man', 'Nghia Trung Ngo', 'Franck Dernoncourt', 'Ryan A. Rossi', 'Thien Huu Nguyen'] | 2023 | International Conference on Language Resources and Evaluation | 112 | 53 | ['Computer Science'] |
2309.09530 | Adapting Large Language Models to Domains via Reading Comprehension | ['Daixuan Cheng', 'Shaohan Huang', 'Furu Wei'] | ['cs.CL'] | We explore how continued pre-training on domain-specific corpora influences
large language models, revealing that training on the raw corpora endows the
model with domain knowledge, but drastically hurts its prompting ability for
question answering. Taken inspiration from human learning via reading
comprehension--pract... | 2023-09-18T07:17:52Z | ICLR 2024 Conference | null | null | null | null | null | null | null | null | null |
2309.09783 | The ParlaSent Multilingual Training Dataset for Sentiment Identification
in Parliamentary Proceedings | ['Michal Mochtak', 'Peter Rupnik', 'Nikola Ljubešić'] | ['cs.CL'] | The paper presents a new training dataset of sentences in 7 languages,
manually annotated for sentiment, which are used in a series of experiments
focused on training a robust sentiment identifier for parliamentary
proceedings. The paper additionally introduces the first domain-specific
multilingual transformer languag... | 2023-09-18T14:01:06Z | null | null | null | The ParlaSent Multilingual Training Dataset for Sentiment Identification in Parliamentary Proceedings | ['Michal Mochtak', 'Peter Rupnik', 'Nikola Ljubesic'] | 2023 | International Conference on Language Resources and Evaluation | 4 | 76 | ['Computer Science'] |
2309.09800 | AMuRD: Annotated Arabic-English Receipt Dataset for Key Information
Extraction and Classification | ['Abdelrahman Abdallah', 'Mahmoud Abdalla', 'Mohamed Elkasaby', 'Yasser Elbendary', 'Adam Jatowt'] | ['cs.CL'] | The extraction of key information from receipts is a complex task that
involves the recognition and extraction of text from scanned receipts. This
process is crucial as it enables the retrieval of essential content and its
organization into structured documents for easy access and analysis. In this
paper, we present AMuRD... | 2023-09-18T14:18:19Z | null | null | null | null | null | null | null | null | null | null |
2309.09826 | Efficient Avoidance of Vulnerabilities in Auto-completed Smart Contract
Code Using Vulnerability-constrained Decoding | ['André Storhaug', 'Jingyue Li', 'Tianyuan Hu'] | ['cs.CR', 'cs.AI', 'cs.CL'] | Auto-completing code enables developers to speed up coding significantly.
Recent advances in transformer-based large language model (LLM) technologies
have been applied to code synthesis. However, studies show that much of the
synthesized code contains vulnerabilities. We propose a novel
vulnerability-constrained deco... | 2023-09-18T14:47:34Z | 12 pages, 8 figures, 2 tables, 5 listings, accepted to the 34th IEEE
International Symposium on Software Reliability Engineering (ISSRE 2023) | null | null | Efficient Avoidance of Vulnerabilities in Auto-completed Smart Contract Code Using Vulnerability-constrained Decoding | ['André Storhaug', 'Jingyue Li', 'Tianyuan Hu'] | 2023 | IEEE International Symposium on Software Reliability Engineering | 16 | 45 | ['Computer Science']
2309.09958 | An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models | ['Yadong Lu', 'Chunyuan Li', 'Haotian Liu', 'Jianwei Yang', 'Jianfeng Gao', 'Yelong Shen'] | ['cs.CV', 'cs.CL'] | Visual instruction tuning has recently shown encouraging progress with
open-source large multimodal models (LMM) such as LLaVA and MiniGPT-4. However,
most existing studies of open-source LMM are performed using models with 13B
parameters or smaller. In this paper we present an empirical study of scaling
LLaVA up to 33... | 2023-09-18T17:30:46Z | Released at LLaVA Model Zoo:
https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md | null | null | null | null | null | null | null | null | null |
2309.10020 | Multimodal Foundation Models: From Specialists to General-Purpose
Assistants | ['Chunyuan Li', 'Zhe Gan', 'Zhengyuan Yang', 'Jianwei Yang', 'Linjie Li', 'Lijuan Wang', 'Jianfeng Gao'] | ['cs.CV', 'cs.CL'] | This paper presents a comprehensive survey of the taxonomy and evolution of
multimodal foundation models that demonstrate vision and vision-language
capabilities, focusing on the transition from specialist models to
general-purpose assistants. The research landscape encompasses five core
topics, categorized into two cl... | 2023-09-18T17:56:28Z | 119 pages, PDF file size 58MB; Tutorial website:
https://vlp-tutorial.github.io/2023/ | null | null | Multimodal Foundation Models: From Specialists to General-Purpose Assistants | ['Chunyuan Li', 'Zhe Gan', 'Zhengyuan Yang', 'Jianwei Yang', 'Linjie Li', 'Lijuan Wang', 'Jianfeng Gao'] | 2023 | Foundations and Trends in Computer Graphics and Vision | 259 | 0 | ['Computer Science']
2309.10066 | Automatic Personalized Impression Generation for PET Reports Using Large
Language Models | ['Xin Tie', 'Muheon Shin', 'Ali Pirasteh', 'Nevein Ibrahim', 'Zachary Huemann', 'Sharon M. Castellino', 'Kara M. Kelly', 'John Garrett', 'Junjie Hu', 'Steve Y. Cho', 'Tyler J. Bradshaw'] | ['cs.AI', 'cs.CL', 'physics.med-ph'] | In this study, we aimed to determine if fine-tuned large language models
(LLMs) can generate accurate, personalized impressions for whole-body PET
reports. Twelve language models were trained on a corpus of PET reports using
the teacher-forcing algorithm, with the report findings as input and the
clinical impressions a... | 2023-09-18T18:33:40Z | 25 pages in total. 6 figures and 3 tables in the main body. The
manuscript has been submitted to a journal for potential publication | J Digit Imaging. Inform. Med. (2024) | 10.1007/s10278-024-00985-3 | null | null | null | null | null | null | null |
2309.10272 | Mixed-Distil-BERT: Code-mixed Language Modeling for Bangla, English, and
Hindi | ['Md Nishat Raihan', 'Dhiman Goswami', 'Antara Mahmud'] | ['cs.CL'] | One of the most popular downstream tasks in the field of Natural Language
Processing is text classification. Text classification tasks have become more
daunting when the texts are code-mixed. Though they are not exposed to such
text during pre-training, different BERT models have demonstrated success in
tackling Code-M... | 2023-09-19T02:59:41Z | null | null | null | null | null | null | null | null | null | null |
2309.10305 | Baichuan 2: Open Large-scale Language Models | ['Aiyuan Yang', 'Bin Xiao', 'Bingning Wang', 'Borong Zhang', 'Ce Bian', 'Chao Yin', 'Chenxu Lv', 'Da Pan', 'Dian Wang', 'Dong Yan', 'Fan Yang', 'Fei Deng', 'Feng Wang', 'Feng Liu', 'Guangwei Ai', 'Guosheng Dong', 'Haizhou Zhao', 'Hang Xu', 'Haoze Sun', 'Hongda Zhang', 'Hui Liu', 'Jiaming Ji', 'Jian Xie', 'JunTao Dai', ... | ['cs.CL'] | Large language models (LLMs) have demonstrated remarkable performance on a
variety of natural language tasks based on just a few examples of natural
language instructions, reducing the need for extensive feature engineering.
However, most powerful LLMs are closed-source or limited in their capability
for languages othe... | 2023-09-19T04:13:22Z | Baichuan 2 technical report. Github:
https://github.com/baichuan-inc/Baichuan2 | null | null | null | null | null | null | null | null | null |
2309.10339 | KoBigBird-large: Transformation of Transformer for Korean Language
Understanding | ['Kisu Yang', 'Yoonna Jang', 'Taewoo Lee', 'Jinwoo Seong', 'Hyungjin Lee', 'Hwanseok Jang', 'Heuiseok Lim'] | ['cs.CL'] | This work presents KoBigBird-large, a large size of Korean BigBird that
achieves state-of-the-art performance and allows long sequence processing for
Korean language understanding. Without further pretraining, we only transform
the architecture and extend the positional encoding with our proposed Tapered
Absolute Posit... | 2023-09-19T05:48:57Z | Accepted at IJCNLP-AACL 2023 | null | null | null | null | null | null | null | null | null |
2309.10400 | PoSE: Efficient Context Window Extension of LLMs via Positional
Skip-wise Training | ['Dawei Zhu', 'Nan Yang', 'Liang Wang', 'Yifan Song', 'Wenhao Wu', 'Furu Wei', 'Sujian Li'] | ['cs.CL', 'cs.LG'] | Large Language Models (LLMs) are trained with a pre-defined context length,
restricting their use in scenarios requiring long inputs. Previous efforts for
adapting LLMs to a longer length usually require fine-tuning with this target
length (Full-length fine-tuning), suffering intensive training cost. To
decouple train... | 2023-09-19T08:03:38Z | ICLR 2024 | null | null | null | null | null | null | null | null | null |
2309.10706 | OpenBA: An Open-sourced 15B Bilingual Asymmetric seq2seq Model
Pre-trained from Scratch | ['Juntao Li', 'Zecheng Tang', 'Yuyang Ding', 'Pinzheng Wang', 'Pei Guo', 'Wangjie You', 'Dan Qiao', 'Wenliang Chen', 'Guohong Fu', 'Qiaoming Zhu', 'Guodong Zhou', 'Min Zhang'] | ['cs.CL'] | Large language models (LLMs) with billions of parameters have demonstrated
outstanding performance on various natural language processing tasks. This
report presents OpenBA, an open-sourced 15B bilingual asymmetric seq2seq model,
to contribute an LLM variant to the Chinese-oriented open-source model
community. We enhan... | 2023-09-19T15:46:40Z | null | null | 10.1007/s11432-023-4128-3 | OpenBA: An Open-sourced 15B Bilingual Asymmetric seq2seq Model Pre-trained from Scratch | ['Juntao Li', 'Zecheng Tang', 'Yuyang Ding', 'Pinzheng Wang', 'Pei Guo', 'Wangjie You', 'Dan Qiao', 'Wenliang Chen', 'Guohong Fu', 'Qiaoming Zhu', 'Guodong Zhou', 'M. Zhang'] | 2023 | arXiv.org | 5 | 123 | ['Computer Science']
2309.10740 | ConsistencyTTA: Accelerating Diffusion-Based Text-to-Audio Generation
with Consistency Distillation | ['Yatong Bai', 'Trung Dang', 'Dung Tran', 'Kazuhito Koishida', 'Somayeh Sojoudi'] | ['cs.SD', 'cs.LG', 'cs.MM', 'eess.AS'] | Diffusion models are instrumental in text-to-audio (TTA) generation.
Unfortunately, they suffer from slow inference due to an excessive number of
queries to the underlying denoising network per generation. To address this
bottleneck, we introduce ConsistencyTTA, a framework requiring only a single
non-autoregressive ne... | 2023-09-19T16:36:33Z | null | null | null | null | null | null | null | null | null | null |
2309.10818 | SlimPajama-DC: Understanding Data Combinations for LLM Training | ['Zhiqiang Shen', 'Tianhua Tao', 'Liqun Ma', 'Willie Neiswanger', 'Zhengzhong Liu', 'Hongyi Wang', 'Bowen Tan', 'Joel Hestness', 'Natalia Vassilieva', 'Daria Soboleva', 'Eric Xing'] | ['cs.CL', 'cs.AI'] | This paper aims to understand the impacts of various data combinations (e.g.,
web text, Wikipedia, GitHub, books) on the pretraining of large language models
using SlimPajama. SlimPajama is a rigorously deduplicated, multi-source
dataset, which has been refined and further deduplicated to 627B tokens from
the extensive... | 2023-09-19T17:59:54Z | Technical report. Models at:
https://huggingface.co/MBZUAI-LLM/SlimPajama-DC and dataset at:
https://huggingface.co/datasets/MBZUAI-LLM/SlimPajama-627B-DC | null | null | null | null | null | null | null | null | null |
2309.10931 | A Family of Pretrained Transformer Language Models for Russian | ['Dmitry Zmitrovich', 'Alexander Abramov', 'Andrey Kalmykov', 'Maria Tikhonova', 'Ekaterina Taktasheva', 'Danil Astafurov', 'Mark Baushenko', 'Artem Snegirev', 'Vitalii Kadulin', 'Sergey Markov', 'Tatiana Shavrina', 'Vladislav Mikhailov', 'Alena Fenogenova'] | ['cs.CL'] | Transformer language models (LMs) are fundamental to NLP research
methodologies and applications in various languages. However, developing such
models specifically for the Russian language has received little attention.
This paper introduces a collection of 13 Russian Transformer LMs, which spans
encoder (ruBERT, ruRoB... | 2023-09-19T21:07:52Z | LREC-COLING-2024 | https://aclanthology.org/2024.lrec-main.45/ | null | null | null | null | null | null | null | null |
2309.11000 | Towards Joint Modeling of Dialogue Response and Speech Synthesis based
on Large Language Model | ['Xinyu Zhou', 'Delong Chen', 'Yudong Chen'] | ['cs.CL', 'cs.SD', 'eess.AS'] | This paper explores the potential of constructing an AI spoken dialogue
system that "thinks how to respond" and "thinks how to speak" simultaneously,
which more closely aligns with the human speech production process compared to
the current cascade pipeline of independent chatbot and Text-to-Speech (TTS)
modules. We hy... | 2023-09-20T01:48:27Z | null | null | null | null | null | null | null | null | null | null |
2309.11087 | Embed-Search-Align: DNA Sequence Alignment using Transformer Models | ['Pavan Holur', 'K. C. Enevoldsen', 'Shreyas Rajesh', 'Lajoyce Mboning', 'Thalia Georgiou', 'Louis-S. Bouchard', 'Matteo Pellegrini', 'Vwani Roychowdhury'] | ['q-bio.GN', 'cs.AI'] | DNA sequence alignment involves assigning short DNA reads to the most
probable locations on an extensive reference genome. This process is crucial
for various genomic analyses, including variant calling, transcriptomics, and
epigenomics. Conventional methods, refined over decades, tackle this challenge
in 2 steps: geno... | 2023-09-20T06:30:39Z | 12 pages, Tables 7, Figures 6 | null | null | null | null | null | null | null | null | null |
2309.11235 | OpenChat: Advancing Open-source Language Models with Mixed-Quality Data | ['Guan Wang', 'Sijie Cheng', 'Xianyuan Zhan', 'Xiangang Li', 'Sen Song', 'Yang Liu'] | ['cs.CL'] | Nowadays, open-source large language models like LLaMA have emerged. Recent
developments have incorporated supervised fine-tuning (SFT) and reinforcement
learning fine-tuning (RLFT) to align these models with human goals. However,
SFT methods treat all training data with mixed quality equally, while RLFT
methods requir... | 2023-09-20T11:54:40Z | null | null | null | null | null | null | null | null | null | null |
2309.11259 | Sequence-to-Sequence Spanish Pre-trained Language Models | ['Vladimir Araujo', 'Maria Mihaela Trusca', 'Rodrigo Tufiño', 'Marie-Francine Moens'] | ['cs.CL', 'cs.AI', 'cs.LG'] | In recent years, significant advancements in pre-trained language models have
driven the creation of numerous non-English language variants, with a
particular emphasis on encoder-only and decoder-only architectures. While
Spanish language models based on BERT and GPT have demonstrated proficiency in
natural language un... | 2023-09-20T12:35:19Z | Accepted paper at LREC-Coling2024 | null | null | null | null | null | null | null | null | null |
2309.11325 | DISC-LawLLM: Fine-tuning Large Language Models for Intelligent Legal
Services | ['Shengbin Yue', 'Wei Chen', 'Siyuan Wang', 'Bingxuan Li', 'Chenchen Shen', 'Shujun Liu', 'Yuxuan Zhou', 'Yao Xiao', 'Song Yun', 'Xuanjing Huang', 'Zhongyu Wei'] | ['cs.CL'] | We propose DISC-LawLLM, an intelligent legal system utilizing large language
models (LLMs) to provide a wide range of legal services. We adopt legal
syllogism prompting strategies to construct supervised fine-tuning datasets in
the Chinese Judicial domain and fine-tune LLMs with legal reasoning capability.
We augment L... | 2023-09-20T13:50:26Z | null | null | null | null | null | null | null | null | null | null |
2309.11327 | Leveraging Data Collection and Unsupervised Learning for Code-switched
Tunisian Arabic Automatic Speech Recognition | ['Ahmed Amine Ben Abdallah', 'Ata Kabboudi', 'Amir Kanoun', 'Salah Zaiem'] | ['eess.AS', 'cs.CL', 'cs.LG', 'cs.SD'] | Crafting an effective Automatic Speech Recognition (ASR) solution for
dialects demands innovative approaches that not only address the data scarcity
issue but also navigate the intricacies of linguistic diversity. In this paper,
we address the aforementioned ASR challenge, focusing on the Tunisian dialect.
First, textu... | 2023-09-20T13:56:27Z | 6 pages, submitted to ICASSP 2024 | null | null | null | null | null | null | null | null | null |
2309.11419 | KOSMOS-2.5: A Multimodal Literate Model | ['Tengchao Lv', 'Yupan Huang', 'Jingye Chen', 'Yuzhong Zhao', 'Yilin Jia', 'Lei Cui', 'Shuming Ma', 'Yaoyao Chang', 'Shaohan Huang', 'Wenhui Wang', 'Li Dong', 'Weiyao Luo', 'Shaoxiang Wu', 'Guoxin Wang', 'Cha Zhang', 'Furu Wei'] | ['cs.CL', 'cs.CV'] | The automatic reading of text-intensive images represents a significant
advancement toward achieving Artificial General Intelligence (AGI). In this
paper we present KOSMOS-2.5, a multimodal literate model for machine reading of
text-intensive images. Pre-trained on a large-scale corpus of text-intensive
images, KOSMOS-... | 2023-09-20T15:50:08Z | null | null | Kosmos-2.5: A Multimodal Literate Model | ['Tengchao Lv', 'Yupan Huang', 'Jingye Chen', 'Lei Cui', 'Shuming Ma', 'Ya-Chi Chang', 'Shaohan Huang', 'Wenhui Wang', 'Li Dong', 'Weiyao Luo', 'Shaoxiang Wu', 'Guoxin Wang', 'Cha Zhang', 'Furu Wei'] | 2023 | arXiv.org | 66 | 128 | ['Computer Science']