Dataset schema
  Title       string (length 16–196)
  Authors     string (length 6–6.27k)
  Abstract    string (length 242–1.92k)
  entry_id    string (length 33)
  Date        timestamp[ns, tz=UTC]
  Categories  string (597 distinct values)
  year        int32 (2.02k–2.02k, i.e. all rows 2024)
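The rows that follow are flattened to one field per line, in schema order. A minimal sketch of regrouping them into typed records, assuming the 7-line row layout above; the `Record` class and `parse_rows` helper are hypothetical illustrations, not part of the dataset:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Record:
    # Hypothetical container mirroring the schema above.
    title: str
    authors: list[str]
    abstract: str
    entry_id: str
    date: datetime
    categories: list[str]
    year: int


def parse_rows(lines: list[str]) -> list[Record]:
    """Group a flat dump into 7-line records (Title, Authors, Abstract,
    entry_id, Date, Categories, year) and coerce each field's type."""
    records = []
    for i in range(0, len(lines) - len(lines) % 7, 7):
        title, authors, abstract, entry_id, date, cats, year = lines[i:i + 7]
        records.append(Record(
            title=title,
            authors=[a.strip() for a in authors.split(",")],
            abstract=abstract,
            entry_id=entry_id,
            # Dates appear in ISO-8601 UTC form, e.g. 2024-04-30T04:19:17Z.
            date=datetime.strptime(date, "%Y-%m-%dT%H:%M:%SZ").replace(
                tzinfo=timezone.utc),
            categories=[c.strip() for c in cats.split(",")],
            # The viewer may render 2024 with a thousands separator ("2,024").
            year=int(year.replace(",", "")),
        ))
    return records
```

Rows that do not fill a complete 7-line group (such as a truncated final record) are skipped rather than guessed at.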
Suvach -- Generated Hindi QA benchmark
Vaishak Narayanan, Prabin Raj KP, Saifudheen Nouphal
Current evaluation benchmarks for question answering (QA) in Indic languages often rely on machine translation of existing English datasets. This approach suffers from bias and inaccuracies inherent in machine translation, leading to datasets that may not reflect the true capabilities of EQA models for Indic languages....
http://arxiv.org/abs/2404.19254v1
2024-04-30T04:19:17Z
cs.CL, cs.AI
2024
HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning
Chunlin Tian, Zhan Shi, Zhijiang Guo, Li Li, Chengzhong Xu
Adapting Large Language Models (LLMs) to new tasks through fine-tuning has been made more efficient by the introduction of Parameter-Efficient Fine-Tuning (PEFT) techniques, such as LoRA. However, these methods often underperform compared to full fine-tuning, particularly in scenarios involving complex datasets. This i...
http://arxiv.org/abs/2404.19245v1
2024-04-30T04:01:09Z
cs.CL, cs.AI
2024
Multi-hop Question Answering over Knowledge Graphs using Large Language Models
Abir Chakraborty
Knowledge graphs (KGs) are large datasets with specific structures representing large knowledge bases (KB) where each node represents a key entity and relations amongst them are typed edges. Natural language queries formed to extract information from a KB entail starting from specific nodes and reasoning over multiple ...
http://arxiv.org/abs/2404.19234v1
2024-04-30T03:31:03Z
cs.AI, cs.CL, cs.DB
2024
TableVQA-Bench: A Visual Question Answering Benchmark on Multiple Table Domains
Yoonsik Kim, Moonbin Yim, Ka Yeon Song
In this paper, we establish a benchmark for table visual question answering, referred to as the TableVQA-Bench, derived from pre-existing table question-answering (QA) and table structure recognition datasets. It is important to note that existing datasets have not incorporated images or QA pairs, which are two crucial...
http://arxiv.org/abs/2404.19205v1
2024-04-30T02:05:18Z
cs.CV, cs.AI
2024
What Drives Performance in Multilingual Language Models?
Sina Bagheri Nezhad, Ameeta Agrawal
This study investigates the factors influencing the performance of multilingual large language models (MLLMs) across diverse languages. We study 6 MLLMs, including masked language models, autoregressive models, and instruction-tuned LLMs, on the SIB-200 dataset, a topic classification dataset encompassing 204 languages...
http://arxiv.org/abs/2404.19159v1
2024-04-29T23:49:19Z
cs.CL, I.2.7
2024
Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations
Zhongzheng Xu, Emily Wall
Data visualizations help extract insights from datasets, but reaching these insights requires decomposing high level goals into low-level analytic tasks that can be complex due to varying degrees of data literacy and visualization experience. Recent advancements in large language models (LLMs) have shown promise for lo...
http://arxiv.org/abs/2404.19097v2
2024-04-29T20:27:39Z
cs.HC
2024
In-Context Symbolic Regression: Leveraging Language Models for Function Discovery
Matteo Merler, Nicola Dainese, Katsiaryna Haitsiukevich
Symbolic Regression (SR) is a task which aims to extract the mathematical expression underlying a set of empirical observations. Transformer-based methods trained on SR datasets detain the current state-of-the-art in this task, while the application of Large Language Models (LLMs) to SR remains unexplored. This work in...
http://arxiv.org/abs/2404.19094v1
2024-04-29T20:19:25Z
cs.CL, cs.LG
2024
It's Difficult to be Neutral -- Human and LLM-based Sentiment Annotation of Patient Comments
Petter Mæhlum, David Samuel, Rebecka Maria Norman, Elma Jelin, Øyvind Andresen Bjertnæs, Lilja Øvrelid, Erik Velldal
Sentiment analysis is an important tool for aggregating patient voices, in order to provide targeted improvements in healthcare services. A prerequisite for this is the availability of in-domain data annotated for sentiment. This article documents an effort to add sentiment annotations to free-text comments in patient ...
http://arxiv.org/abs/2404.18832v1
2024-04-29T16:19:47Z
cs.CL
2024
Benchmarking Benchmark Leakage in Large Language Models
Ruijie Xu, Zengzhi Wang, Run-Ze Fan, Pengfei Liu
Amid the expanding use of pre-training data, the phenomenon of benchmark dataset leakage has become increasingly prominent, exacerbated by opaque training processes and the often undisclosed inclusion of supervised data in contemporary Large Language Models (LLMs). This issue skews benchmark effectiveness and fosters p...
http://arxiv.org/abs/2404.18824v1
2024-04-29T16:05:36Z
cs.CL, cs.AI, cs.LG
2024
Replacing Judges with Juries: Evaluating LLM Generations with a Panel of Diverse Models
Pat Verga, Sebastian Hofstatter, Sophia Althammer, Yixuan Su, Aleksandra Piktus, Arkady Arkhangorodsky, Minjie Xu, Naomi White, Patrick Lewis
As Large Language Models (LLMs) have become more advanced, they have outpaced our abilities to accurately evaluate their quality. Not only is finding data to adequately probe particular model properties difficult, but evaluating the correctness of a model's freeform generation alone is a challenge. To address this, man...
http://arxiv.org/abs/2404.18796v2
2024-04-29T15:33:23Z
cs.CL, cs.AI
2024
Where on Earth Do Users Say They Are?: Geo-Entity Linking for Noisy Multilingual User Input
Tessa Masis, Brendan O'Connor
Geo-entity linking is the task of linking a location mention to the real-world geographic location. In this paper we explore the challenging task of geo-entity linking for noisy, multilingual social media data. There are few open-source multilingual geo-entity linking tools available and existing ones are often rule-ba...
http://arxiv.org/abs/2404.18784v1
2024-04-29T15:18:33Z
cs.CL, cs.AI
2024
PECC: Problem Extraction and Coding Challenges
Patrick Haller, Jonas Golde, Alan Akbik
Recent advancements in large language models (LLMs) have showcased their exceptional abilities across various tasks, such as code generation, problem-solving and reasoning. Existing benchmarks evaluate tasks in isolation, yet the extent to which LLMs can understand prose-style tasks, identify the underlying problems, a...
http://arxiv.org/abs/2404.18766v1
2024-04-29T15:02:14Z
cs.AI
2024
Enhancing Interactive Image Retrieval With Query Rewriting Using Large Language Models and Vision Language Models
Hongyi Zhu, Jia-Hong Huang, Stevan Rudinac, Evangelos Kanoulas
Image search stands as a pivotal task in multimedia and computer vision, finding applications across diverse domains, ranging from internet search to medical diagnostics. Conventional image search systems operate by accepting textual or visual queries, retrieving the top-relevant candidate results from the database. Ho...
http://arxiv.org/abs/2404.18746v1
2024-04-29T14:46:35Z
cs.MM, cs.CV
2024
LLMClean: Context-Aware Tabular Data Cleaning via LLM-Generated OFDs
Fabian Biester, Mohamed Abdelaal, Daniel Del Gaudio
Machine learning's influence is expanding rapidly, now integral to decision-making processes from corporate strategy to the advancements in Industry 4.0. The efficacy of Artificial Intelligence broadly hinges on the caliber of data used during its training phase; optimal performance is tied to exceptional data quality....
http://arxiv.org/abs/2404.18681v1
2024-04-29T13:24:23Z
cs.DB
2024
101 Billion Arabic Words Dataset
Manel Aloui, Hasna Chouikhi, Ghaith Chaabane, Haithem Kchaou, Chehir Dhaouadi
In recent years, Large Language Models have revolutionized the field of natural language processing, showcasing an impressive rise predominantly in English-centric domains. These advancements have set a global benchmark, inspiring significant efforts toward developing Arabic LLMs capable of understanding and generating...
http://arxiv.org/abs/2405.01590v1
2024-04-29T13:15:03Z
cs.CL
2024
Assessing Cybersecurity Vulnerabilities in Code Large Language Models
Md Imran Hossen, Jianyi Zhang, Yinzhi Cao, Xiali Hei
Instruction-tuned Code Large Language Models (Code LLMs) are increasingly utilized as AI coding assistants and integrated into various applications. However, the cybersecurity vulnerabilities and implications arising from the widespread integration of these models are not yet fully understood due to limited research in...
http://arxiv.org/abs/2404.18567v1
2024-04-29T10:14:58Z
cs.CR
2024
Injecting Salesperson's Dialogue Strategies in Large Language Models with Chain-of-Thought Reasoning
Wen-Yu Chang, Yun-Nung Chen
Recent research in dialogue systems and corpora has focused on two main categories: task-oriented (TOD) and open-domain (chit-chat) dialogues. TOD systems help users accomplish specific tasks, while open-domain systems aim to create engaging conversations. However, in real-world scenarios, user intents are often reveal...
http://arxiv.org/abs/2404.18564v1
2024-04-29T10:12:04Z
cs.CL, cs.AI
2024
Time Machine GPT
Felix Drinkall, Eghbal Rahimikia, Janet B. Pierrehumbert, Stefan Zohren
Large language models (LLMs) are often trained on extensive, temporally indiscriminate text corpora, reflecting the lack of datasets with temporal metadata. This approach is not aligned with the evolving nature of language. Conventional methods for creating temporally adapted language models often depend on further pre...
http://arxiv.org/abs/2404.18543v1
2024-04-29T09:34:25Z
cs.CL, cs.CE, cs.LG, I.2.1, I.2.7
2024
Evaluating and Mitigating Linguistic Discrimination in Large Language Models
Guoliang Dong, Haoyu Wang, Jun Sun, Xinyu Wang
By training on text in various languages, large language models (LLMs) typically possess multilingual support and demonstrate remarkable capabilities in solving tasks described in different languages. However, LLMs can exhibit linguistic discrimination due to the uneven distribution of training data across languages. T...
http://arxiv.org/abs/2404.18534v1
2024-04-29T09:22:54Z
cs.CL, cs.AI, cs.CR, cs.SE
2024
GPT-4 passes most of the 297 written Polish Board Certification Examinations
Jakub Pokrywka, Jeremi Kaczmarek, Edward Gorzelańczyk
Introduction: Recently, the effectiveness of Large Language Models (LLMs) has increased rapidly, allowing them to be used in a great number of applications. However, the risks posed by the generation of false information through LLMs significantly limit their applications in sensitive areas such as healthcare, highligh...
http://arxiv.org/abs/2405.01589v1
2024-04-29T09:08:22Z
cs.CL, cs.AI
2024
ChatGPT as an inventor: Eliciting the strengths and weaknesses of current large language models against humans in engineering design
Daniel Nygård Ege, Henrik H. Øvrebø, Vegar Stubberud, Martin Francis Berg, Christer Elverum, Martin Steinert, Håvard Vestad
This study compares the design practices and performance of ChatGPT 4.0, a large language model (LLM), against graduate engineering students in a 48-hour prototyping hackathon, based on a dataset comprising more than 100 prototypes. The LLM participated by instructing two participants who executed its instructions and ...
http://arxiv.org/abs/2404.18479v1
2024-04-29T07:33:06Z
cs.HC
2024
PromptReps: Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval
Shengyao Zhuang, Xueguang Ma, Bevan Koopman, Jimmy Lin, Guido Zuccon
The current use of large language models (LLMs) for zero-shot document ranking follows one of two ways: 1) prompt-based re-ranking methods, which require no further training but are feasible for only re-ranking a handful of candidate documents due to the associated computational costs; and 2) unsupervised contrastive t...
http://arxiv.org/abs/2404.18424v1
2024-04-29T04:51:30Z
cs.IR
2024
Mixture-of-Instructions: Comprehensive Alignment of a Large Language Model through the Mixture of Diverse System Prompting Instructions
Bowen Xu, Shaoyu Wu, Kai Liu, Lulu Hu
With the proliferation of large language models (LLMs), the comprehensive alignment of such models across multiple tasks has emerged as a critical area of research. Existing alignment methodologies primarily address single task, such as multi-turn dialogue, coding, mathematical problem-solving, and tool usage. However,...
http://arxiv.org/abs/2404.18410v1
2024-04-29T03:58:12Z
cs.CL
2024
Exploring the Limits of Fine-grained LLM-based Physics Inference via Premise Removal Interventions
Jordan Meadows, Tamsin James, Andre Freitas
Language models can hallucinate when performing complex and detailed mathematical reasoning. Physics provides a rich domain for assessing mathematical reasoning capabilities where physical context imbues the use of symbols which needs to satisfy complex semantics (e.g., units, tensorial order), leading to inst...
http://arxiv.org/abs/2404.18384v1
2024-04-29T02:43:23Z
cs.CL
2024
QANA: LLM-based Question Generation and Network Analysis for Zero-shot Key Point Analysis and Beyond
Tomoki Fukuma, Koki Noda, Toshihide Ubukata Kousuke Hoso, Yoshiharu Ichikawa, Kyosuke Kambe, Yu Masubuch, Fujio Toriumi
The proliferation of social media has led to information overload and increased interest in opinion mining. We propose "Question-Answering Network Analysis" (QANA), a novel opinion mining framework that utilizes Large Language Models (LLMs) to generate questions from users' comments, constructs a bipartite graph based ...
http://arxiv.org/abs/2404.18371v1
2024-04-29T02:17:31Z
cs.CL
2024
Do Neutral Prompts Produce Insecure Code? FormAI-v2 Dataset: Labelling Vulnerabilities in Code Generated by Large Language Models
Norbert Tihanyi, Tamas Bisztray, Mohamed Amine Ferrag, Ridhi Jain, Lucas C. Cordeiro
This study provides a comparative analysis of state-of-the-art large language models (LLMs), analyzing how likely they generate vulnerabilities when writing simple C programs using a neutral zero-shot prompt. We address a significant gap in the literature concerning the security properties of code produced by these mod...
http://arxiv.org/abs/2404.18353v1
2024-04-29T01:24:14Z
cs.CR, cs.AI, cs.PL
2024
Tabular Embedding Model (TEM): Finetuning Embedding Models For Tabular RAG Applications
Sujit Khanna, Shishir Subedi
In recent times, Large Language Models have exhibited tremendous capabilities, especially in the areas of mathematics, code generation and general-purpose reasoning. However, for specialized domains, especially in applications that require parsing and analyzing large chunks of numeric or tabular data, even state-of-the-art...
http://arxiv.org/abs/2405.01585v1
2024-04-28T14:58:55Z
cs.AI, cs.CL, cs.IR
2024
CRE-LLM: A Domain-Specific Chinese Relation Extraction Framework with Fine-tuned Large Language Model
Zhengpeng Shi, Haoran Luo
Domain-Specific Chinese Relation Extraction (DSCRE) aims to extract relations between entities from domain-specific Chinese text. Despite the rapid development of PLMs in recent years, especially LLMs, DSCRE still faces three core challenges: complex network structure design, poor awareness, and high consumption of fin...
http://arxiv.org/abs/2404.18085v1
2024-04-28T06:27:15Z
cs.CL
2024
Enhancing Pre-Trained Generative Language Models with Question Attended Span Extraction on Machine Reading Comprehension
Lin Ai, Zheng Hui, Zizhou Liu, Julia Hirschberg
Machine Reading Comprehension (MRC) poses a significant challenge in the field of Natural Language Processing (NLP). While mainstream MRC methods predominantly leverage extractive strategies using encoder-only models such as BERT, generative approaches face the issue of out-of-control generation -- a critical problem w...
http://arxiv.org/abs/2404.17991v1
2024-04-27T19:42:51Z
cs.CL
2024
VANER: Leveraging Large Language Model for Versatile and Adaptive Biomedical Named Entity Recognition
Junyi Biana, Weiqi Zhai, Xiaodi Huang, Jiaxuan Zheng, Shanfeng Zhu
The prevalent solution for BioNER involves using representation learning techniques coupled with sequence labeling. However, such methods are inherently task-specific, demonstrate poor generalizability, and often require a dedicated model for each dataset. To leverage the versatile capabilities of recently remarkable large l...
http://arxiv.org/abs/2404.17835v1
2024-04-27T09:00:39Z
cs.CL
2024
Recall, Retrieve and Reason: Towards Better In-Context Relation Extraction
Guozheng Li, Peng Wang, Wenjun Ke, Yikai Guo, Ke Ji, Ziyu Shang, Jiajun Liu, Zijie Xu
Relation extraction (RE) aims to identify relations between entities mentioned in texts. Although large language models (LLMs) have demonstrated impressive in-context learning (ICL) abilities in various tasks, they still suffer from poor performances compared to most supervised fine-tuned RE methods. Utilizing ICL for ...
http://arxiv.org/abs/2404.17809v1
2024-04-27T07:12:52Z
cs.CL, cs.AI
2024
Meta In-Context Learning Makes Large Language Models Better Zero and Few-Shot Relation Extractors
Guozheng Li, Peng Wang, Jiajun Liu, Yikai Guo, Ke Ji, Ziyu Shang, Zijie Xu
Relation extraction (RE) is an important task that aims to identify the relationships between entities in texts. While large language models (LLMs) have revealed remarkable in-context learning (ICL) capability for general zero and few-shot learning, recent studies indicate that current LLMs still struggle with zero and...
http://arxiv.org/abs/2404.17807v1
2024-04-27T07:06:39Z
cs.CL, cs.AI
2024
T-CLAP: Temporal-Enhanced Contrastive Language-Audio Pretraining
Yi Yuan, Zhuo Chen, Xubo Liu, Haohe Liu, Xuenan Xu, Dongya Jia, Yuanzhe Chen, Mark D. Plumbley, Wenwu Wang
Contrastive language-audio pretraining (CLAP) has been developed to align the representations of audio and language, achieving remarkable performance in retrieval and classification tasks. However, current CLAP struggles to capture temporal information within audio and text features, presenting substantial limitations ...
http://arxiv.org/abs/2404.17806v1
2024-04-27T07:05:48Z
cs.SD, cs.CL, cs.LG, eess.AS
2024
Temporal Scaling Law for Large Language Models
Yizhe Xiong, Xiansheng Chen, Xin Ye, Hui Chen, Zijia Lin, Haoran Lian, Jianwei Niu, Guiguang Ding
Recently, Large Language Models (LLMs) have been widely adopted across a wide range of tasks, drawing increasing attention to research on how scaling LLMs affects their performance. Existing works, termed Scaling Laws, have discovered that the loss of LLMs scales as power laws with model size, computational budget...
http://arxiv.org/abs/2404.17785v1
2024-04-27T05:49:11Z
cs.CL
2024
MRScore: Evaluating Radiology Report Generation with LLM-based Reward System
Yunyi Liu, Zhanyu Wang, Yingshu Li, Xinyu Liang, Lingqiao Liu, Lei Wang, Luping Zhou
In recent years, automated radiology report generation has experienced significant growth. This paper introduces MRScore, an automatic evaluation metric tailored for radiology report generation by leveraging Large Language Models (LLMs). Conventional NLG (natural language generation) metrics like BLEU are inadequate fo...
http://arxiv.org/abs/2404.17778v1
2024-04-27T04:42:45Z
cs.CL, cs.AI
2024
Building a Large Japanese Web Corpus for Large Language Models
Naoaki Okazaki, Kakeru Hattori, Hirai Shota, Hiroki Iida, Masanari Ohi, Kazuki Fujii, Taishi Nakamura, Mengsay Loem, Rio Yokota, Sakae Mizuki
Open Japanese large language models (LLMs) have been trained on the Japanese portions of corpora such as CC-100, mC4, and OSCAR. However, these corpora were not created for the quality of Japanese texts. This study builds a large Japanese web corpus by extracting and refining text from the Common Crawl archive (21 snap...
http://arxiv.org/abs/2404.17733v1
2024-04-27T00:02:45Z
cs.CL, cs.AI
2024
Retrieval-Augmented Generation with Knowledge Graphs for Customer Service Question Answering
Zhentao Xu, Mark Jerome Cruz, Matthew Guevara, Tie Wang, Manasi Deshpande, Xiaofeng Wang, Zheng Li
In customer service technical support, swiftly and accurately retrieving relevant past issues is critical for efficiently resolving customer inquiries. The conventional retrieval methods in retrieval-augmented generation (RAG) for large language models (LLMs) treat a large corpus of past issue tracking tickets as plain...
http://arxiv.org/abs/2404.17723v2
2024-04-26T23:05:20Z
cs.IR, cs.AI, cs.CL, cs.LG, I.2
2024
PLAYER*: Enhancing LLM-based Multi-Agent Communication and Interaction in Murder Mystery Games
Qinglin Zhu, Runcong Zhao, Jinhua Du, Lin Gui, Yulan He
Recent advancements in Large Language Models (LLMs) have enhanced the efficacy of agent communication and social interactions. Despite these advancements, building LLM-based agents for reasoning in dynamic environments involving competition and collaboration remains challenging due to the limitations of informed graph-...
http://arxiv.org/abs/2404.17662v1
2024-04-26T19:07:30Z
cs.CL
2024
CEval: A Benchmark for Evaluating Counterfactual Text Generation
Van Bach Nguyen, Jörg Schlötterer, Christin Seifert
Counterfactual text generation aims to minimally change a text, such that it is classified differently. Judging advancements in method development for counterfactual text generation is hindered by a non-uniform usage of data sets and metrics in related work. We propose CEval, a benchmark for comparing counterfactual te...
http://arxiv.org/abs/2404.17475v1
2024-04-26T15:23:47Z
cs.CL, cs.AI
2024
Reinforcement Retrieval Leveraging Fine-grained Feedback for Fact Checking News Claims with Black-Box LLM
Xuan Zhang, Wei Gao
Retrieval-augmented language models have exhibited promising performance across various areas of natural language processing (NLP), including fact-critical tasks. However, due to the black-box nature of advanced large language models (LLMs) and the non-retrieval-oriented supervision signal of specific tasks, the traini...
http://arxiv.org/abs/2404.17283v1
2024-04-26T09:38:27Z
cs.CL
2024
Prompting Techniques for Reducing Social Bias in LLMs through System 1 and System 2 Cognitive Processes
Mahammed Kamruzzaman, Gene Louis Kim
Dual process theory posits that human cognition arises via two systems: System 1, a quick, emotional, and intuitive process subject to cognitive biases, and System 2, a slow, onerous, and deliberate process. NLP researchers often compare zero-shot prompting in LLMs to System 1 reasoning and chain-of-...
http://arxiv.org/abs/2404.17218v1
2024-04-26T07:46:29Z
cs.CL
2024
A Unified Debugging Approach via LLM-Based Multi-Agent Synergy
Cheryl Lee, Chunqiu Steven Xia, Jen-tse Huang, Zhouruixin Zhu, Lingming Zhang, Michael R. Lyu
Tremendous efforts have been devoted to automating software debugging, a time-consuming process involving fault localization and repair generation. Recently, Large Language Models (LLMs) have shown great potential in automated debugging. However, we identified three challenges posed to traditional and LLM-based debuggi...
http://arxiv.org/abs/2404.17153v1
2024-04-26T04:55:35Z
cs.SE
2024
Small Language Models Need Strong Verifiers to Self-Correct Reasoning
Yunxiang Zhang, Muhammad Khalifa, Lajanugen Logeswaran, Jaekyeom Kim, Moontae Lee, Honglak Lee, Lu Wang
Self-correction has emerged as a promising solution to boost the reasoning performance of large language models (LLMs), where LLMs refine their solutions using self-generated critiques that pinpoint the errors. This work explores whether smaller-size (<= 13B) language models (LMs) have the ability of self-correction on...
http://arxiv.org/abs/2404.17140v1
2024-04-26T03:41:28Z
cs.CL
2024
How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites
Zhe Chen, Weiyun Wang, Hao Tian, Shenglong Ye, Zhangwei Gao, Erfei Cui, Wenwen Tong, Kongzhi Hu, Jiapeng Luo, Zheng Ma, Ji Ma, Jiaqi Wang, Xiaoyi Dong, Hang Yan, Hewei Guo, Conghui He, Botian Shi, Zhenjiang Jin, Chao Xu, Bin Wang, Xingjian Wei, Wei Li, Wenjian Zhang, Bo Zhang, Pinlong Cai, Licheng Wen, Xiangchao Yan, M...
In this report, we introduce InternVL 1.5, an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding. We introduce three simple improvements: (1) Strong Vision Encoder: we explored a continuous learning strategy f...
http://arxiv.org/abs/2404.16821v2
2024-04-25T17:59:19Z
cs.CV
2024
IndicGenBench: A Multilingual Benchmark to Evaluate Generation Capabilities of LLMs on Indic Languages
Harman Singh, Nitish Gupta, Shikhar Bharadwaj, Dinesh Tewari, Partha Talukdar
As large language models (LLMs) see increasing adoption across the globe, it is imperative for LLMs to be representative of the linguistic diversity of the world. India is a linguistically diverse country of 1.4 Billion people. To facilitate research on multilingual LLM evaluation, we release IndicGenBench - the larges...
http://arxiv.org/abs/2404.16816v1
2024-04-25T17:57:36Z
cs.CL
2024
Make Your LLM Fully Utilize the Context
Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng, Jian-Guang Lou
While many contemporary large language models (LLMs) can process lengthy input, they still struggle to fully utilize information within the long context, known as the lost-in-the-middle challenge. We hypothesize that it stems from insufficient explicit supervision during the long-context training, which fails to emphas...
http://arxiv.org/abs/2404.16811v2
2024-04-25T17:55:14Z
cs.CL, cs.AI
2024
Improving Diversity of Commonsense Generation by Large Language Models via In-Context Learning
Tianhui Zhang, Bei Peng, Danushka Bollegala
Generative Commonsense Reasoning (GCR) requires a model to reason about a situation using commonsense knowledge, while generating coherent sentences. Although the quality of the generated sentences is crucial, the diversity of the generation is equally important because it reflects the model's ability to use a range of...
http://arxiv.org/abs/2404.16807v1
2024-04-25T17:52:39Z
cs.CL
2024
Continual Learning of Large Language Models: A Comprehensive Survey
Haizhou Shi, Zihao Xu, Hengyi Wang, Weiyi Qin, Wenyuan Wang, Yibin Wang, Hao Wang
The recent success of large language models (LLMs) trained on static, pre-collected, general datasets has sparked numerous research directions and applications. One such direction addresses the non-trivial challenge of integrating pre-trained LLMs into dynamic data distributions, task structures, and user preferences. ...
http://arxiv.org/abs/2404.16789v1
2024-04-25T17:38:57Z
cs.LG, cs.AI, cs.CL
2024
Can't say cant? Measuring and Reasoning of Dark Jargons in Large Language Models
Xu Ji, Jianyi Zhang, Ziyin Zhou, Zhangchi Zhao, Qianqian Qiao, Kaiying Han, Md Imran Hossen, Xiali Hei
Ensuring the resilience of Large Language Models (LLMs) against malicious exploitation is paramount, with recent focus on mitigating offensive responses. Yet, the understanding of cant or dark jargon remains unexplored. This paper introduces a domain-specific Cant dataset and CantCounter evaluation framework, employing...
http://arxiv.org/abs/2405.00718v1
2024-04-25T17:25:53Z
cs.CL, cs.AI
2024
Large Language Models in Healthcare: A Comprehensive Benchmark
Andrew Liu, Hongjian Zhou, Yining Hua, Omid Rohanian, Lei Clifton, David A. Clifton
The adoption of large language models (LLMs) to assist clinicians has attracted remarkable attention. Existing works mainly adopt the close-ended question-answering task with answer options for evaluation. However, in real clinical settings, many clinical decisions, such as treatment recommendations, involve answering ...
http://arxiv.org/abs/2405.00716v1
2024-04-25T15:51:06Z
cs.CL, cs.AI
2024
Towards Adapting Open-Source Large Language Models for Expert-Level Clinical Note Generation
Hanyin Wang, Chufan Gao, Bolun Liu, Qiping Xu, Guleid Hussein, Mohamad El Labban, Kingsley Iheasirim, Hariprasad Korsapati, Jimeng Sun
Large Language Models (LLMs) have shown promising capabilities in handling clinical text summarization tasks. In this study, we demonstrate that a small open-source LLM can be effectively trained to generate high-quality clinical notes from outpatient patient-doctor dialogues. We achieve this through a comprehensive do...
http://arxiv.org/abs/2405.00715v1
2024-04-25T15:34:53Z
cs.CL, cs.AI, cs.LG
2024
Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare
Emre Can Acikgoz, Osman Batur İnce, Rayene Bench, Arda Anıl Boz, İlker Kesen, Aykut Erdem, Erkut Erdem
The integration of Large Language Models (LLMs) into healthcare promises to transform medical diagnostics, research, and patient care. Yet, the progression of medical LLMs faces obstacles such as complex training requirements, rigorous evaluation demands, and the dominance of proprietary models that restrict academic e...
http://arxiv.org/abs/2404.16621v1
2024-04-25T14:06:37Z
cs.LG, cs.AI, cs.CL
2024
Evaluating Large Language Models on Time Series Feature Understanding: A Comprehensive Taxonomy and Benchmark
Elizabeth Fons, Rachneet Kaur, Soham Palande, Zhen Zeng, Svitlana Vyetrenko, Tucker Balch
Large Language Models (LLMs) offer the potential for automatic time series analysis and reporting, which is a critical task across many domains, spanning healthcare, finance, climate, energy, and many more. In this paper, we propose a framework for rigorously evaluating the capabilities of LLMs on time series understan...
http://arxiv.org/abs/2404.16563v1
2024-04-25T12:24:37Z
cs.CL
2024
Evaluating Consistency and Reasoning Capabilities of Large Language Models
Yash Saxena, Sarthak Chopra, Arunendra Mani Tripathi
Large Language Models (LLMs) are extensively used today across various sectors, including academia, research, business, and finance, for tasks such as text generation, summarization, and translation. Despite their widespread adoption, these models often produce incorrect and misleading information, exhibiting a tendenc...
http://arxiv.org/abs/2404.16478v1
2024-04-25T10:03:14Z
cs.CL, cs.AI
2024
Large Language Models Perform on Par with Experts Identifying Mental Health Factors in Adolescent Online Forums
Isabelle Lorge, Dan W. Joyce, Andrey Kormilitzin
Mental health in children and adolescents has been steadily deteriorating over the past few years. The recent advent of Large Language Models (LLMs) offers much hope for cost and time efficient scaling of monitoring and intervention, yet despite specifically prevalent issues such as school bullying and eating disorders...
http://arxiv.org/abs/2404.16461v2
2024-04-25T09:42:50Z
cs.CL
2024
Contextual Categorization Enhancement through LLMs Latent-Space
Zineddine Bettouche, Anas Safi, Andreas Fischer
Managing the semantic quality of the categorization in large textual datasets, such as Wikipedia, presents significant challenges in terms of complexity and cost. In this paper, we propose leveraging transformer models to distill semantic information from texts in the Wikipedia dataset and its associated categories int...
http://arxiv.org/abs/2404.16442v1
2024-04-25T09:20:51Z
cs.CL, cs.AI
2024
List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs
An Yan, Zhengyuan Yang, Junda Wu, Wanrong Zhu, Jianwei Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Julian McAuley, Jianfeng Gao, Lijuan Wang
Set-of-Mark (SoM) Prompting unleashes the visual grounding capability of GPT-4V, by enabling the model to associate visual objects with tags inserted on the image. These tags, marked with alphanumerics, can be indexed via text tokens for easy reference. Despite the extraordinary performance from GPT-4V, we observe that...
http://arxiv.org/abs/2404.16375v1
2024-04-25T07:29:17Z
cs.CV, cs.AI, cs.CL
2024
LLM-Based Section Identifiers Excel on Open Source but Stumble in Real World Applications
Saranya Krishnamoorthy, Ayush Singh, Shabnam Tafreshi
Electronic health records (EHR) even though a boon for healthcare practitioners, are growing convoluted and longer every day. Sifting around these lengthy EHRs is taxing and becomes a cumbersome part of physician-patient interaction. Several approaches have been proposed to help alleviate this prevalent issue either vi...
http://arxiv.org/abs/2404.16294v1
2024-04-25T02:25:35Z
cs.CL, cs.AI
2,024
Homonym Sense Disambiguation in the Georgian Language
Davit Melikidze, Alexander Gamkrelidze
This research proposes a novel approach to the Word Sense Disambiguation (WSD) task in the Georgian language, based on supervised fine-tuning of a pre-trained Large Language Model (LLM) on a dataset formed by filtering the Georgian Common Crawls corpus. The dataset is used to train a classifier for words with multiple ...
http://arxiv.org/abs/2405.00710v1
2024-04-24T21:48:43Z
cs.CL, cs.LG
2,024
Chat2Scenario: Scenario Extraction From Dataset Through Utilization of Large Language Model
Yongqi Zhao, Wenbo Xiao, Tomislav Mihalj, Jia Hu, Arno Eichberger
The advent of Large Language Models (LLM) provides new insights to validate Automated Driving Systems (ADS). In the herein-introduced work, a novel approach to extracting scenarios from naturalistic driving datasets is presented. A framework called Chat2Scenario is proposed leveraging the advanced Natural Language Proc...
http://arxiv.org/abs/2404.16147v2
2024-04-24T19:08:11Z
cs.RO
2,024
From Local to Global: A Graph RAG Approach to Query-Focused Summarization
Darren Edge, Ha Trinh, Newman Cheng, Joshua Bradley, Alex Chao, Apurva Mody, Steven Truitt, Jonathan Larson
The use of retrieval-augmented generation (RAG) to retrieve relevant information from an external knowledge source enables large language models (LLMs) to answer questions over private and/or previously unseen document collections. However, RAG fails on global questions directed at an entire text corpus, such as "What ...
http://arxiv.org/abs/2404.16130v1
2024-04-24T18:38:11Z
cs.CL, cs.AI, cs.IR, H.3.3; I.2.7
2,024
Act as a Honeytoken Generator! An Investigation into Honeytoken Generation with Large Language Models
Daniel Reti, Norman Becker, Tillmann Angeli, Anasuya Chattopadhyay, Daniel Schneider, Sebastian Vollmer, Hans D. Schotten
With the increasing prevalence of security incidents, the adoption of deception-based defense strategies has become pivotal in cyber security. This work addresses the challenge of scalability in designing honeytokens, a key component of such defense mechanisms. The manual creation of honeytokens is a tedious task. Alth...
http://arxiv.org/abs/2404.16118v1
2024-04-24T18:18:56Z
cs.CR
2,024
Classifying Human-Generated and AI-Generated Election Claims in Social Media
Alphaeus Dmonte, Marcos Zampieri, Kevin Lybarger, Massimiliano Albanese, Genya Coulter
Politics is one of the most prevalent topics discussed on social media platforms, particularly during major election cycles, where users engage in conversations about candidates and electoral processes. Malicious actors may use this opportunity to disseminate misinformation to undermine trust in the electoral process. ...
http://arxiv.org/abs/2404.16116v2
2024-04-24T18:13:29Z
cs.CL, cs.AI
2,024
Cantor: Inspiring Multimodal Chain-of-Thought of MLLM
Timin Gao, Peixian Chen, Mengdan Zhang, Chaoyou Fu, Yunhang Shen, Yan Zhang, Shengchuan Zhang, Xiawu Zheng, Xing Sun, Liujuan Cao, Rongrong Ji
With the advent of large language models (LLMs) enhanced by the chain-of-thought (CoT) methodology, the visual reasoning problem is usually decomposed into manageable sub-tasks and tackled sequentially with various external tools. However, such a paradigm faces the challenge of the potential "determining hallucinations" in d...
http://arxiv.org/abs/2404.16033v1
2024-04-24T17:59:48Z
cs.CV, cs.CL
2,024
The PRISM Alignment Project: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models
Hannah Rose Kirk, Alexander Whitefield, Paul Röttger, Andrew Bean, Katerina Margatina, Juan Ciro, Rafael Mosquera, Max Bartolo, Adina Williams, He He, Bertie Vidgen, Scott A. Hale
Human feedback plays a central role in the alignment of Large Language Models (LLMs). However, open questions remain about the methods (how), domains (where), people (who) and objectives (to what end) of human feedback collection. To navigate these questions, we introduce PRISM, a new dataset which maps the sociodemogr...
http://arxiv.org/abs/2404.16019v1
2024-04-24T17:51:36Z
cs.CL
2,024
Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach
Linyu Liu, Yu Pan, Xiaocheng Li, Guanting Chen
Large language models (LLMs) are highly capable of many tasks but they can sometimes generate unreliable or inaccurate outputs. To tackle this issue, this paper studies the problem of uncertainty estimation and calibration for LLMs. We begin by formulating the uncertainty estimation problem for LLMs and then propose a ...
http://arxiv.org/abs/2404.15993v1
2024-04-24T17:10:35Z
cs.LG, cs.CL, 68T07, 68T50
2,024
Semantic Routing for Enhanced Performance of LLM-Assisted Intent-Based 5G Core Network Management and Orchestration
Dimitrios Michael Manias, Ali Chouman, Abdallah Shami
Large language models (LLMs) are rapidly emerging in Artificial Intelligence (AI) applications, especially in the fields of natural language processing and generative AI. Not limited to text generation applications, these models inherently possess the opportunity to leverage prompt engineering, where the inputs of such...
http://arxiv.org/abs/2404.15869v1
2024-04-24T13:34:20Z
cs.NI, cs.AI
2,024
Leveraging Large Language Models for Multimodal Search
Oriol Barbany, Michael Huang, Xinliang Zhu, Arnab Dhua
Multimodal search has become increasingly important in providing users with a natural and effective way to express their search intentions. Images offer fine-grained details of the desired products, while text allows for easily incorporating search modifications. However, some existing multimodal search systems are un...
http://arxiv.org/abs/2404.15790v1
2024-04-24T10:30:42Z
cs.CV
2,024
KS-LLM: Knowledge Selection of Large Language Models with Evidence Document for Question Answering
Xinxin Zheng, Feihu Che, Jinyang Wu, Shuai Zhang, Shuai Nie, Kang Liu, Jianhua Tao
Large language models (LLMs) suffer from the hallucination problem and face significant challenges when applied to knowledge-intensive tasks. A promising approach is to leverage evidence documents as extra supporting knowledge, which can be obtained through retrieval or generation. However, existing methods directly le...
http://arxiv.org/abs/2404.15660v1
2024-04-24T05:32:41Z
cs.CL
2,024
Multi-Modal Proxy Learning Towards Personalized Visual Multiple Clustering
Jiawei Yao, Qi Qian, Juhua Hu
Multiple clustering has gained significant attention in recent years due to its potential to reveal multiple hidden structures of data from different perspectives. The advent of deep multiple clustering techniques has notably advanced the performance by uncovering complex patterns and relationships within large dataset...
http://arxiv.org/abs/2404.15655v1
2024-04-24T05:20:42Z
cs.CV
2,024
CodeIP: A Grammar-Guided Multi-Bit Watermark for Large Language Models of Code
Batu Guan, Yao Wan, Zhangqian Bi, Zheng Wang, Hongyu Zhang, Yulei Sui, Pan Zhou, Lichao Sun
As Large Language Models (LLMs) are increasingly used to automate code generation, it is often desired to know if the code is AI-generated and by which model, especially for purposes like protecting intellectual property (IP) in industry and preventing academic misconduct in education. Incorporating watermarks into mac...
http://arxiv.org/abs/2404.15639v1
2024-04-24T04:25:04Z
cs.CL
2,024
Hybrid LLM/Rule-based Approaches to Business Insights Generation from Structured Data
Aliaksei Vertsel, Mikhail Rumiantsau
In the field of business data analysis, the ability to extract actionable insights from vast and varied datasets is essential for informed decision-making and maintaining a competitive edge. Traditional rule-based systems, while reliable, often fall short when faced with the complexity and dynamism of modern business d...
http://arxiv.org/abs/2404.15604v1
2024-04-24T02:42:24Z
cs.CL, cs.AI
2,024
ImplicitAVE: An Open-Source Dataset and Multimodal LLMs Benchmark for Implicit Attribute Value Extraction
Henry Peng Zou, Vinay Samuel, Yue Zhou, Weizhi Zhang, Liancheng Fang, Zihe Song, Philip S. Yu, Cornelia Caragea
Existing datasets for attribute value extraction (AVE) predominantly focus on explicit attribute values while neglecting the implicit ones, lack product images, are often not publicly available, and lack an in-depth human inspection across diverse domains. To address these limitations, we present ImplicitAVE, the first...
http://arxiv.org/abs/2404.15592v1
2024-04-24T01:54:40Z
cs.CV, cs.AI, cs.CL, cs.IR, cs.LG
2,024
Minimal Evidence Group Identification for Claim Verification
Xiangci Li, Sihao Chen, Rajvi Kapadia, Jessica Ouyang, Fan Zhang
Claim verification in real-world settings (e.g. against a large collection of candidate evidences retrieved from the web) typically requires identifying and aggregating a complete set of evidence pieces that collectively provide full support to the claim. The problem becomes particularly challenging when there exists d...
http://arxiv.org/abs/2404.15588v1
2024-04-24T01:44:09Z
cs.CL
2,024
Can Foundational Large Language Models Assist with Conducting Pharmaceuticals Manufacturing Investigations?
Hossein Salami, Brandye Smith-Goettler, Vijay Yadav
General purpose Large Language Models (LLM) such as the Generative Pretrained Transformer (GPT) and Large Language Model Meta AI (LLaMA) have attracted much attention in recent years. There is strong evidence that these models can perform remarkably well in various natural language processing tasks. However, how to lev...
http://arxiv.org/abs/2404.15578v1
2024-04-24T00:56:22Z
cs.CL
2,024
PRISM: Patient Records Interpretation for Semantic Clinical Trial Matching using Large Language Models
Shashi Kant Gupta, Aditya Basu, Mauro Nievas, Jerrin Thomas, Nathan Wolfrath, Adhitya Ramamurthi, Bradley Taylor, Anai N. Kothari, Regina Schwind, Therica M. Miller, Sorena Nadaf-Rahrov, Yanshan Wang, Hrituraj Singh
Clinical trial matching is the task of identifying trials for which patients may be potentially eligible. Typically, this task is labor-intensive and requires detailed verification of patient electronic health records (EHRs) against the stringent inclusion and exclusion criteria of clinical trials. This process is manu...
http://arxiv.org/abs/2404.15549v2
2024-04-23T22:33:19Z
cs.CL, cs.AI
2,024
Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models
Mihir Parmar, Nisarg Patel, Neeraj Varshney, Mutsumi Nakamura, Man Luo, Santosh Mashetty, Arindam Mitra, Chitta Baral
Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks. But, can they really "reason" over the natural language? This question has been receiving significant research attention and many reasoning skills such as commonsense, numerical, a...
http://arxiv.org/abs/2404.15522v1
2024-04-23T21:08:49Z
cs.CL, cs.AI
2,024
IryoNLP at MEDIQA-CORR 2024: Tackling the Medical Error Detection & Correction Task On the Shoulders of Medical Agents
Jean-Philippe Corbeil
In natural language processing applied to the clinical domain, utilizing large language models has emerged as a promising avenue for error detection and correction on clinical notes, a knowledge-intensive task for which annotated data is scarce. This paper presents MedReAct'N'MedReFlex, which leverages a suite of four ...
http://arxiv.org/abs/2404.15488v1
2024-04-23T20:00:37Z
cs.CL, cs.AI, cs.MA
2,024
Interactive Analysis of LLMs using Meaningful Counterfactuals
Furui Cheng, Vilém Zouhar, Robin Shing Moon Chan, Daniel Fürst, Hendrik Strobelt, Mennatallah El-Assady
Counterfactual examples are useful for exploring the decision boundaries of machine learning models and determining feature attributions. How can we apply counterfactual-based methods to analyze and explain LLMs? We identify the following key challenges. First, the generated textual counterfactuals should be meaningful...
http://arxiv.org/abs/2405.00708v1
2024-04-23T19:57:03Z
cs.CL, cs.AI, cs.HC, cs.LG, I.2.7; H.5.2
2,024
Can Large Language Models Learn the Physics of Metamaterials? An Empirical Study with ChatGPT
Darui Lu, Yang Deng, Jordan M. Malof, Willie J. Padilla
Large language models (LLMs) such as ChatGPT, Gemini, LlaMa, and Claude are trained on massive quantities of text parsed from the internet and have shown a remarkable ability to respond to complex prompts in a manner often indistinguishable from humans. We present a LLM fine-tuned on up to 40,000 data that can predict ...
http://arxiv.org/abs/2404.15458v1
2024-04-23T19:05:42Z
physics.optics, cs.LG
2,024
Evaluating Large Language Models for Material Selection
Daniele Grandi, Yash Patawari Jain, Allin Groom, Brandon Cramer, Christopher McComb
Material selection is a crucial step in conceptual design due to its significant impact on the functionality, aesthetics, manufacturability, and sustainability impact of the final product. This study investigates the use of Large Language Models (LLMs) for material selection in the product design process and compares t...
http://arxiv.org/abs/2405.03695v1
2024-04-23T18:53:33Z
cs.CL
2,024
Wiki-LLaVA: Hierarchical Retrieval-Augmented Generation for Multimodal LLMs
Davide Caffagni, Federico Cocchi, Nicholas Moratelli, Sara Sarto, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara
Multimodal LLMs are the natural evolution of LLMs, and enlarge their capabilities so as to work beyond the pure textual modality. As research is being carried out to design novel architectures and vision-and-language adapters, in this paper we concentrate on endowing such models with the capability of answering questio...
http://arxiv.org/abs/2404.15406v1
2024-04-23T18:00:09Z
cs.CV, cs.AI, cs.CL, cs.MM
2,024
CultureBank: An Online Community-Driven Knowledge Base Towards Culturally Aware Language Technologies
Weiyan Shi, Ryan Li, Yutong Zhang, Caleb Ziems, Chunhua yu, Raya Horesh, Rogério Abreu de Paula, Diyi Yang
To enhance language models' cultural awareness, we design a generalizable pipeline to construct cultural knowledge bases from different online communities on a massive scale. With the pipeline, we construct CultureBank, a knowledge base built upon users' self-narratives with 12K cultural descriptors sourced from TikTok...
http://arxiv.org/abs/2404.15238v1
2024-04-23T17:16:08Z
cs.CL, cs.AI
2,024
Regressive Side Effects of Training Language Models to Mimic Student Misconceptions
Shashank Sonkar, Naiming Liu, Richard G. Baraniuk
This paper presents a novel exploration into the regressive side effects of training Large Language Models (LLMs) to mimic student misconceptions for personalized education. We highlight the problem that as LLMs are trained to more accurately mimic student misconceptions, there is a compromise in the factual integrity ...
http://arxiv.org/abs/2404.15156v1
2024-04-23T15:57:55Z
cs.CL
2,024
Bias patterns in the application of LLMs for clinical decision support: A comprehensive study
Raphael Poulain, Hamed Fayyaz, Rahmatollah Beheshti
Large Language Models (LLMs) have emerged as powerful candidates to inform clinical decision-making processes. While these models play an increasingly prominent role in shaping the digital landscape, two growing concerns emerge in healthcare applications: 1) to what extent do LLMs exhibit social bias based on patients'...
http://arxiv.org/abs/2404.15149v1
2024-04-23T15:52:52Z
cs.CL, cs.LG
2,024
Rethinking LLM Memorization through the Lens of Adversarial Compression
Avi Schwarzschild, Zhili Feng, Pratyush Maini, Zachary C. Lipton, J. Zico Kolter
Large language models (LLMs) trained on web-scale datasets raise substantial concerns regarding permissible data usage. One major question is whether these models "memorize" all their training data or they integrate many data sources in some way more akin to how a human would learn and synthesize information. The answe...
http://arxiv.org/abs/2404.15146v1
2024-04-23T15:49:37Z
cs.LG, cs.CL
2,024
Social Media and Artificial Intelligence for Sustainable Cities and Societies: A Water Quality Analysis Use-case
Muhammad Asif Auyb, Muhammad Tayyab Zamir, Imran Khan, Hannia Naseem, Nasir Ahmad, Kashif Ahmad
This paper focuses on a very important societal challenge of water quality analysis. Being one of the key factors in the economic and social development of society, the provision of water and ensuring its quality has always remained one of the top priorities of public authorities. To ensure the quality of water, differ...
http://arxiv.org/abs/2404.14977v1
2024-04-23T12:33:14Z
cs.SI, cs.CL
2,024
Automated Commit Message Generation with Large Language Models: An Empirical Study and Beyond
Pengyu Xue, Linhao Wu, Zhongxing Yu, Zhi Jin, Zhen Yang, Xinyi Li, Zhenyu Yang, Yue Tan
Commit Message Generation (CMG) approaches aim to automatically generate commit messages based on given code diffs, which facilitate collaboration among developers and play a critical role in Open-Source Software (OSS). Very recently, Large Language Models (LLMs) have demonstrated extensive applicability in diverse cod...
http://arxiv.org/abs/2404.14824v1
2024-04-23T08:24:43Z
cs.SE
2,024
A Survey of Large Language Models on Generative Graph Analytics: Query, Learning, and Applications
Wenbo Shang, Xin Huang
A graph is a fundamental data model to represent various entities and their complex relationships in society and nature, such as social networks, transportation networks, financial networks, and biomedical systems. Recently, large language models (LLMs) have showcased a strong generalization ability to handle various N...
http://arxiv.org/abs/2404.14809v1
2024-04-23T07:39:24Z
cs.CL, cs.AI, cs.DB
2,024
LLM-Enhanced Causal Discovery in Temporal Domain from Interventional Data
Peiwen Li, Xin Wang, Zeyang Zhang, Yuan Meng, Fang Shen, Yue Li, Jialong Wang, Yang Li, Wenwu Zhu
In the field of Artificial Intelligence for Information Technology Operations, causal discovery is pivotal for operation and maintenance of graph construction, facilitating downstream industrial tasks such as root cause analysis. Temporal causal discovery, as an emerging method, aims to identify temporal causal relatio...
http://arxiv.org/abs/2404.14786v1
2024-04-23T06:52:40Z
cs.AI, cs.LG, stat.ME
2,024
Med42 -- Evaluating Fine-Tuning Strategies for Medical LLMs: Full-Parameter vs. Parameter-Efficient Approaches
Clément Christophe, Praveen K Kanithi, Prateek Munjal, Tathagata Raha, Nasir Hayat, Ronnie Rajan, Ahmed Al-Mahrooqi, Avani Gupta, Muhammad Umar Salman, Gurpreet Gosal, Bhargav Kanakiya, Charles Chen, Natalia Vassilieva, Boulbaba Ben Amor, Marco AF Pimentel, Shadab Khan
This study presents a comprehensive analysis and comparison of two predominant fine-tuning methodologies - full-parameter fine-tuning and parameter-efficient tuning - within the context of medical Large Language Models (LLMs). We developed and refined a series of LLMs, based on the Llama-2 architecture, specifically de...
http://arxiv.org/abs/2404.14779v1
2024-04-23T06:36:21Z
cs.CL
2,024
Simulating Task-Oriented Dialogues with State Transition Graphs and Large Language Models
Chris Samarinas, Pracha Promthaw, Atharva Nijasure, Hansi Zeng, Julian Killingback, Hamed Zamani
This paper explores SynTOD, a new synthetic data generation approach for developing end-to-end Task-Oriented Dialogue (TOD) Systems capable of handling complex tasks such as intent classification, slot filling, conversational question-answering, and retrieval-augmented response generation, without relying on crowdsourc...
http://arxiv.org/abs/2404.14772v1
2024-04-23T06:23:34Z
cs.CL
2,024
SHED: Shapley-Based Automated Dataset Refinement for Instruction Fine-Tuning
Yexiao He, Ziyao Wang, Zheyu Shen, Guoheng Sun, Yucong Dai, Yongkai Wu, Hongyi Wang, Ang Li
The pre-trained Large Language Models (LLMs) can be adapted for many downstream tasks and tailored to align with human preferences through fine-tuning. Recent studies have discovered that LLMs can achieve desirable performance with only a small amount of high-quality data, suggesting that a large amount of the data in ...
http://arxiv.org/abs/2405.00705v1
2024-04-23T04:56:48Z
cs.CL, cs.LG
2,024
TAAT: Think and Act from Arbitrary Texts in Text2Motion
Runqi Wang, Caoyuan Ma, GuoPeng Li, Zheng Wang
Text2Motion aims to generate human motions from texts. Existing datasets rely on the assumption that texts include action labels (such as "walk, bend, and pick up"), which is not flexible for practical scenarios. This paper redefines this problem with a more realistic assumption that the texts are arbitrary. Specifical...
http://arxiv.org/abs/2404.14745v1
2024-04-23T04:54:32Z
cs.CV
2,024
Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering
Yao Xu, Shizhu He, Jiabei Chen, Zihao Wang, Yangqiu Song, Hanghang Tong, Kang Liu, Jun Zhao
To address the issue of insufficient knowledge and the tendency to generate hallucination in Large Language Models (LLMs), numerous studies have endeavored to integrate LLMs with Knowledge Graphs (KGs). However, all these methods are evaluated on conventional Knowledge Graph Question Answering (KGQA) with complete KGs,...
http://arxiv.org/abs/2404.14741v1
2024-04-23T04:47:22Z
cs.CL, cs.AI
2,024
MisgenderMender: A Community-Informed Approach to Interventions for Misgendering
Tamanna Hossain, Sunipa Dev, Sameer Singh
Content Warning: This paper contains examples of misgendering and erasure that could be offensive and potentially triggering. Misgendering, the act of incorrectly addressing someone's gender, inflicts serious harm and is pervasive in everyday technologies, yet there is a notable lack of research to combat it. We are ...
http://arxiv.org/abs/2404.14695v1
2024-04-23T02:54:00Z
cs.CL
2,024
Automated Multi-Language to English Machine Translation Using Generative Pre-Trained Transformers
Elijah Pelofske, Vincent Urias, Lorie M. Liebrock
The task of accurate and efficient language translation is an extremely important information processing task. Machine learning enabled and automated translation that is accurate and fast is often a large topic of interest in the machine learning and data science communities. In this study, we examine using local Gener...
http://arxiv.org/abs/2404.14680v1
2024-04-23T02:19:35Z
cs.CL, cs.AI, cs.LG
2,024
Exploring and Unleashing the Power of Large Language Models in Automated Code Translation
Zhen Yang, Fang Liu, Zhongxing Yu, Jacky Wai Keung, Jia Li, Shuo Liu, Yifan Hong, Xiaoxue Ma, Zhi Jin, Ge Li
Code translation tools are developed for automatic source-to-source translation. Although learning-based transpilers have shown impressive enhancement over rule-based counterparts, owing to their task-specific pre-training on extensive monolingual corpora, their current performance still remains unsatisfactory for p...
http://arxiv.org/abs/2404.14646v1
2024-04-23T00:49:46Z
cs.SE, cs.AI
2,024
WangLab at MEDIQA-CORR 2024: Optimized LLM-based Programs for Medical Error Detection and Correction
Augustin Toma, Ronald Xie, Steven Palayew, Patrick R. Lawler, Bo Wang
Medical errors in clinical text pose significant risks to patient safety. The MEDIQA-CORR 2024 shared task focuses on detecting and correcting these errors across three subtasks: identifying the presence of an error, extracting the erroneous sentence, and generating a corrected sentence. In this paper, we present our a...
http://arxiv.org/abs/2404.14544v1
2024-04-22T19:31:45Z
cs.CL
2,024
Mélange: Cost Efficient Large Language Model Serving by Exploiting GPU Heterogeneity
Tyler Griggs, Xiaoxuan Liu, Jiaxiang Yu, Doyoung Kim, Wei-Lin Chiang, Alvin Cheung, Ion Stoica
Large language models (LLMs) are increasingly integrated into many online services. However, a major challenge in deploying LLMs is their high cost, due primarily to the use of expensive GPU instances. To address this problem, we find that the significant heterogeneity of GPU types presents an opportunity to increase G...
http://arxiv.org/abs/2404.14527v1
2024-04-22T18:56:18Z
cs.DC, cs.LG
2,024