Dataset schema (seven fields per record, in this order):
Title: string (length 16 to 196)
Authors: string (length 6 to 6.27k)
Abstract: string (length 242 to 1.92k)
entry_id: string (length 33)
Date: timestamp[ns, tz=UTC]
Categories: string (597 distinct values)
year: int32 (all values 2024)
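Each record below is a flattened row whose lines follow the seven columns listed in the header above (Title, Authors, Abstract, entry_id, Date, Categories, year). As a minimal sketch, assuming clean 7-line records with no blank lines in between, such rows could be parsed back into typed objects as follows (`Record` and `parse_records` are hypothetical helpers for illustration, not part of any dataset API):

```python
from dataclasses import dataclass
from typing import List

# Number of lines per flattened record, per the schema header above.
FIELD_COUNT = 7

@dataclass
class Record:
    title: str
    authors: str
    abstract: str
    entry_id: str
    date: str            # ISO-8601 UTC timestamp, kept as a string here
    categories: List[str]
    year: int

def parse_records(lines: List[str]) -> List[Record]:
    """Group a flat list of lines into 7-line records and coerce types."""
    records = []
    for i in range(0, len(lines) - FIELD_COUNT + 1, FIELD_COUNT):
        title, authors, abstract, entry_id, date, cats, year = lines[i:i + FIELD_COUNT]
        records.append(Record(
            title=title,
            authors=authors,
            abstract=abstract,
            entry_id=entry_id,
            date=date,
            categories=[c.strip() for c in cats.split(",")],
            year=int(year.replace(",", "")),  # the viewer renders the int 2024 as "2,024"
        ))
    return records
```

Note the `year` coercion: the viewer's thousands separator must be stripped before `int()`. Comma-splitting `Categories` is a simplification; a few records also carry semicolon-separated ACM classes in that field.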
RTP-LX: Can LLMs Evaluate Toxicity in Multilingual Scenarios?
Adrian de Wynter, Ishaan Watts, Nektar Ege Altıntoprak, Tua Wongsangaroonsri, Minghui Zhang, Noura Farra, Lena Baur, Samantha Claudet, Pavel Gajdusek, Can Gören, Qilong Gu, Anna Kaminska, Tomasz Kaminski, Ruby Kuo, Akiko Kyuba, Jongho Lee, Kartik Mathur, Petter Merok, Ivana Milovanović, Nani Paananen, Vesa-Matti Paanan...
Large language models (LLMs) and small language models (SLMs) are being adopted at remarkable speed, although their safety still remains a serious concern. With the advent of multilingual S/LLMs, the question now becomes a matter of scale: can we expand multilingual safety evaluations of these models with the same velo...
http://arxiv.org/abs/2404.14397v1
2024-04-22T17:56:26Z
cs.CL, cs.CY, cs.LG
2024
SnapKV: LLM Knows What You are Looking for Before Generation
Yuhong Li, Yingbing Huang, Bowen Yang, Bharat Venkitesh, Acyr Locatelli, Hanchen Ye, Tianle Cai, Patrick Lewis, Deming Chen
Large Language Models (LLMs) have made remarkable progress in processing extensive contexts, with the Key-Value (KV) cache playing a vital role in enhancing their performance. However, the growth of the KV cache in response to increasing input length poses challenges to memory and time efficiency. To address this probl...
http://arxiv.org/abs/2404.14469v1
2024-04-22T17:42:58Z
cs.CL, cs.AI
2024
Beyond Scaling: Predicting Patent Approval with Domain-specific Fine-grained Claim Dependency Graph
Xiaochen Kev Gao, Feng Yao, Kewen Zhao, Beilei He, Animesh Kumar, Vish Krishnan, Jingbo Shang
Model scaling is becoming the default choice for many language tasks due to the success of large language models (LLMs). However, it can fall short in specific scenarios where simple customized methods excel. In this paper, we delve into the patent approval prediction task and unveil that simple domain-specific graph ...
http://arxiv.org/abs/2404.14372v1
2024-04-22T17:22:31Z
cs.CL, cs.AI
2024
Integrating Chemistry Knowledge in Large Language Models via Prompt Engineering
Hongxuan Liu, Haoyu Yin, Zhiyao Luo, Xiaonan Wang
This paper presents a study on the integration of domain-specific knowledge in prompt engineering to enhance the performance of large language models (LLMs) in scientific domains. A benchmark dataset is curated to encapsulate the intricate physical-chemical properties of small molecules, their drugability for pharmacol...
http://arxiv.org/abs/2404.14467v1
2024-04-22T16:55:44Z
cs.CL, cs.AI
2024
Automated Long Answer Grading with RiceChem Dataset
Shashank Sonkar, Kangqi Ni, Lesa Tran Lu, Kristi Kincaid, John S. Hutchinson, Richard G. Baraniuk
We introduce a new area of study in the field of educational Natural Language Processing: Automated Long Answer Grading (ALAG). Distinguishing itself from Automated Short Answer Grading (ASAG) and Automated Essay Grading (AEG), ALAG presents unique challenges due to the complexity and multifaceted nature of fact-based ...
http://arxiv.org/abs/2404.14316v1
2024-04-22T16:28:09Z
cs.CL
2024
An Artificial Neuron for Enhanced Problem Solving in Large Language Models
Sumedh Rasal
Recent advancements in artificial intelligence have propelled the capabilities of Large Language Models, yet their ability to mimic nuanced human reasoning remains limited. This paper introduces a novel conceptual enhancement to LLMs, termed the Artificial Neuron, designed to significantly bolster cognitive processing ...
http://arxiv.org/abs/2404.14222v1
2024-04-22T14:33:16Z
cs.HC
2024
Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction
Zheye Deng, Chunkit Chan, Weiqi Wang, Yuxi Sun, Wei Fan, Tianshi Zheng, Yauwai Yim, Yangqiu Song
The task of condensing large chunks of textual information into concise and structured tables has gained attention recently due to the emergence of Large Language Models (LLMs) and their potential benefit for downstream tasks, such as text summarization and text mining. Previous approaches often generate tables that di...
http://arxiv.org/abs/2404.14215v1
2024-04-22T14:31:28Z
cs.CL
2024
Benchmarking Advanced Text Anonymisation Methods: A Comparative Study on Novel and Traditional Approaches
Dimitris Asimopoulos, Ilias Siniosoglou, Vasileios Argyriou, Thomai Karamitsou, Eleftherios Fountoukidis, Sotirios K. Goudos, Ioannis D. Moscholios, Konstantinos E. Psannis, Panagiotis Sarigiannidis
In the realm of data privacy, the ability to effectively anonymise text is paramount. With the proliferation of deep learning and, in particular, transformer architectures, there is a burgeoning interest in leveraging these advanced models for text anonymisation tasks. This paper presents a comprehensive benchmarking s...
http://arxiv.org/abs/2404.14465v1
2024-04-22T12:06:54Z
cs.CL, cs.AI, cs.IR
2024
How Good Are Low-bit Quantized LLaMA3 Models? An Empirical Study
Wei Huang, Xudong Ma, Haotong Qin, Xingyu Zheng, Chengtao Lv, Hong Chen, Jie Luo, Xiaojuan Qi, Xianglong Liu, Michele Magno
Meta's LLaMA family has become one of the most powerful open-source Large Language Model (LLM) series. Notably, LLaMA3 models have recently been released and achieve impressive performance across various tasks with super-large-scale pre-training on over 15T tokens of data. Given the wide application of low-bit quantization f...
http://arxiv.org/abs/2404.14047v1
2024-04-22T10:03:03Z
cs.LG
2024
LLMs Know What They Need: Leveraging a Missing Information Guided Framework to Empower Retrieval-Augmented Generation
Keheng Wang, Feiyu Duan, Peiguang Li, Sirui Wang, Xunliang Cai
Retrieval-Augmented Generation (RAG) demonstrates great value in alleviating outdated knowledge or hallucination by supplying LLMs with updated and relevant knowledge. However, there are still several difficulties for RAG in understanding complex multi-hop query and retrieving relevant documents, which require LLMs to ...
http://arxiv.org/abs/2404.14043v1
2024-04-22T09:56:59Z
cs.CL
2024
Tree of Reviews: A Tree-based Dynamic Iterative Retrieval Framework for Multi-hop Question Answering
Li Jiapeng, Liu Runze, Li Yabo, Zhou Tong, Li Mingling, Chen Xiang
Multi-hop question answering is a knowledge-intensive complex problem. Large Language Models (LLMs) use their Chain of Thoughts (CoT) capability to reason complex problems step by step, and retrieval-augmentation can effectively alleviate factual errors caused by outdated and unknown knowledge in LLMs. Recent works hav...
http://arxiv.org/abs/2404.14464v1
2024-04-22T09:25:05Z
cs.CL, cs.AI, cs.IR
2024
Typos that Broke the RAG's Back: Genetic Attack on RAG Pipeline by Simulating Documents in the Wild via Low-level Perturbations
Sukmin Cho, Soyeong Jeong, Jeongyeon Seo, Taeho Hwang, Jong C. Park
The robustness of recent Large Language Models (LLMs) has become increasingly crucial as their applicability expands across various domains and real-world applications. Retrieval-Augmented Generation (RAG) is a promising solution for addressing the limitations of LLMs, yet existing studies on the robustness of RAG ofte...
http://arxiv.org/abs/2404.13948v1
2024-04-22T07:49:36Z
cs.CL
2024
A User-Centric Benchmark for Evaluating Large Language Models
Jiayin Wang, Fengran Mo, Weizhi Ma, Peijie Sun, Min Zhang, Jian-Yun Nie
Large Language Models (LLMs) are essential tools to collaborate with users on different tasks. Evaluating their performance to serve users' needs in real-world scenarios is important. While many benchmarks have been created, they mainly focus on specific predefined model abilities. Few have covered the intended utiliza...
http://arxiv.org/abs/2404.13940v2
2024-04-22T07:32:03Z
cs.CL
2024
MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit
Boning Zhang, Chengxi Li, Kai Fan
Large language models (LLMs) have been explored in a variety of reasoning tasks including solving of mathematical problems. Each math dataset typically includes its own specially designed evaluation script, which, while suitable for its intended use, lacks generalizability across different datasets. Consequently, updat...
http://arxiv.org/abs/2404.13925v1
2024-04-22T07:03:44Z
cs.CL
2024
Navigating the Path of Writing: Outline-guided Text Generation with Large Language Models
Yukyung Lee, Soonwon Ka, Bokyung Son, Pilsung Kang, Jaewook Kang
Large Language Models (LLMs) have significantly impacted the writing process, enabling collaborative content creation and enhancing productivity. However, generating high-quality, user-aligned text remains challenging. In this paper, we propose Writing Path, a framework that uses explicit outlines to guide LLMs in gene...
http://arxiv.org/abs/2404.13919v1
2024-04-22T06:57:43Z
cs.CL, cs.AI, cs.HC
2024
Generating Attractive and Authentic Copywriting from Customer Reviews
Yu-Xiang Lin, Wei-Yun Ma
The goal of product copywriting is to capture the interest of potential buyers by emphasizing the features of products through text descriptions. As e-commerce platforms offer a wide range of services, it's becoming essential to dynamically adjust the styles of these auto-generated descriptions. Typical approaches to c...
http://arxiv.org/abs/2404.13906v1
2024-04-22T06:33:28Z
cs.CL, cs.AI
2024
VALOR-EVAL: Holistic Coverage and Faithfulness Evaluation of Large Vision-Language Models
Haoyi Qiu, Wenbo Hu, Zi-Yi Dou, Nanyun Peng
Large Vision-Language Models (LVLMs) suffer from hallucination issues, wherein the models generate plausible-sounding but factually incorrect outputs, undermining their reliability. A comprehensive quantitative evaluation is necessary to identify and understand the extent of hallucinations in these models. However, exi...
http://arxiv.org/abs/2404.13874v1
2024-04-22T04:49:22Z
cs.CL, cs.CV
2024
Context-Enhanced Language Models for Generating Multi-Paper Citations
Avinash Anand, Kritarth Prasad, Ujjwal Goel, Mohit Gupta, Naman Lal, Astha Verma, Rajiv Ratn Shah
Citation text plays a pivotal role in elucidating the connection between scientific documents, demanding an in-depth comprehension of the cited paper. Constructing citations is often time-consuming, requiring researchers to delve into extensive literature and grapple with articulating relevant content. To address this ...
http://arxiv.org/abs/2404.13865v1
2024-04-22T04:30:36Z
cs.CL
2024
Class-Level Code Generation from Natural Language Using Iterative, Tool-Enhanced Reasoning over Repository
Ajinkya Deshpande, Anmol Agarwal, Shashank Shet, Arun Iyer, Aditya Kanade, Ramakrishna Bairi, Suresh Parthasarathy
LLMs have demonstrated significant potential in code generation tasks, achieving promising results at the function or statement level in various benchmarks. However, the complexities associated with creating code artifacts like classes, particularly within the context of real-world software repositories, remain underex...
http://arxiv.org/abs/2405.01573v1
2024-04-22T03:52:54Z
cs.SE, cs.AI
2024
AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs
Anselm Paulus, Arman Zharmagambetov, Chuan Guo, Brandon Amos, Yuandong Tian
While recently Large Language Models (LLMs) have achieved remarkable successes, they are vulnerable to certain jailbreaking attacks that lead to generation of inappropriate or harmful content. Manual red-teaming requires finding adversarial prompts that cause such jailbreaking, e.g. by appending a suffix to a given ins...
http://arxiv.org/abs/2404.16873v1
2024-04-21T22:18:13Z
cs.CR, cs.AI, cs.CL, cs.LG
2024
Iteratively Prompting Multimodal LLMs to Reproduce Natural and AI-Generated Images
Ali Naseh, Katherine Thai, Mohit Iyyer, Amir Houmansadr
With the digital imagery landscape rapidly evolving, image stocks and AI-generated image marketplaces have become central to visual media. Traditional stock images now exist alongside innovative platforms that trade in prompts for AI-generated visuals, driven by sophisticated APIs like DALL-E 3 and Midjourney. This pap...
http://arxiv.org/abs/2404.13784v1
2024-04-21T21:30:17Z
cs.CR, cs.CL, cs.CV
2024
LLMs in Web-Development: Evaluating LLM-Generated PHP code unveiling vulnerabilities and limitations
Rebeka Tóth, Tamas Bisztray, László Erdodi
This research carries out a comprehensive examination of the security of web application code generated by Large Language Models, analyzing a dataset comprising 2,500 small dynamic PHP websites. These AI-generated sites are scanned for security vulnerabilities after being deployed as standalone websites in Docker...
http://arxiv.org/abs/2404.14459v1
2024-04-21T20:56:02Z
cs.SE, cs.AI
2024
SVGEditBench: A Benchmark Dataset for Quantitative Assessment of LLM's SVG Editing Capabilities
Kunato Nishina, Yusuke Matsui
Text-to-image models have shown progress in recent years. Along with this progress, generating vector graphics from text has also advanced. SVG is a popular format for vector graphics, and SVG represents a scene with XML text. Therefore, Large Language Models can directly process SVG code. Taking this into account, we ...
http://arxiv.org/abs/2404.13710v1
2024-04-21T16:44:52Z
cs.CV
2024
FiLo: Zero-Shot Anomaly Detection by Fine-Grained Description and High-Quality Localization
Zhaopeng Gu, Bingke Zhu, Guibo Zhu, Yingying Chen, Hao Li, Ming Tang, Jinqiao Wang
Zero-shot anomaly detection (ZSAD) methods entail detecting anomalies directly without access to any known normal or abnormal samples within the target item categories. Existing approaches typically rely on the robust generalization capabilities of multimodal pretrained models, computing similarities between manually c...
http://arxiv.org/abs/2404.13671v1
2024-04-21T14:22:04Z
cs.CV, cs.LG
2024
EPI-SQL: Enhancing Text-to-SQL Translation with Error-Prevention Instructions
Xiping Liu, Zhao Tan
The conversion of natural language queries into SQL queries, known as Text-to-SQL, is a critical yet challenging task. This paper introduces EPI-SQL, a novel methodological framework leveraging Large Language Models (LLMs) to enhance the performance of Text-to-SQL tasks. EPI-SQL operates through a four-step process. In...
http://arxiv.org/abs/2404.14453v1
2024-04-21T03:52:46Z
cs.CL, cs.AI, cs.DB
2024
Retrieval-Augmented Generation-based Relation Extraction
Sefika Efeoglu, Adrian Paschke
Information Extraction (IE) is a transformative process that converts unstructured text data into a structured format by employing entity and relation extraction (RE) methodologies. The identification of the relation between a pair of entities plays a crucial role within this framework. Despite the existence of various...
http://arxiv.org/abs/2404.13397v1
2024-04-20T14:42:43Z
cs.CL, cs.AI
2024
UnibucLLM: Harnessing LLMs for Automated Prediction of Item Difficulty and Response Time for Multiple-Choice Questions
Ana-Cristina Rogoz, Radu Tudor Ionescu
This work explores a novel data augmentation method based on Large Language Models (LLMs) for predicting item difficulty and response time of retired USMLE Multiple-Choice Questions (MCQs) in the BEA 2024 Shared Task. Our approach is based on augmenting the dataset with answers from zero-shot LLMs (Falcon, Meditron, Mi...
http://arxiv.org/abs/2404.13343v1
2024-04-20T10:41:02Z
cs.CL, cs.AI, cs.LG
2024
Large Language Models as Test Case Generators: Performance Evaluation and Enhancement
Kefan Li, Yuan Yuan
Code generation with Large Language Models (LLMs) has been extensively studied and achieved remarkable progress. As a complementary aspect to code generation, test case generation is of crucial importance in ensuring the quality and reliability of code. However, using LLMs as test case generators has been much less exp...
http://arxiv.org/abs/2404.13340v1
2024-04-20T10:27:01Z
cs.SE, cs.AI
2024
A Multi-Faceted Evaluation Framework for Assessing Synthetic Data Generated by Large Language Models
Yefeng Yuan, Yuhong Liu, Liang Cheng
The rapid advancements in generative AI and large language models (LLMs) have opened up new avenues for producing synthetic data, particularly in the realm of structured tabular formats, such as product reviews. Despite the potential benefits, concerns regarding privacy leakage have surfaced, especially when personal i...
http://arxiv.org/abs/2404.14445v1
2024-04-20T08:08:28Z
cs.LG, cs.AI, cs.CL
2024
Beyond Accuracy: Investigating Error Types in GPT-4 Responses to USMLE Questions
Soumyadeep Roy, Aparup Khatua, Fatemeh Ghoochani, Uwe Hadler, Wolfgang Nejdl, Niloy Ganguly
GPT-4 demonstrates high accuracy in medical QA tasks, leading with an accuracy of 86.70%, followed by Med-PaLM 2 at 86.50%. However, around 14% of errors remain. Additionally, current works use GPT-4 to only predict the correct option without providing any explanation and thus do not provide any insight into the thinki...
http://arxiv.org/abs/2404.13307v1
2024-04-20T07:29:06Z
cs.CL
2024
PCQA: A Strong Baseline for AIGC Quality Assessment Based on Prompt Condition
Xi Fang, Weigang Wang, Xiaoxin Lv, Jun Yan
The development of Large Language Models (LLM) and Diffusion Models brings the boom of Artificial Intelligence Generated Content (AIGC). It is essential to build an effective quality assessment framework to provide a quantifiable evaluation of different images or videos based on the AIGC technologies. The content gener...
http://arxiv.org/abs/2404.13299v1
2024-04-20T07:05:45Z
cs.CV
2024
Demystify Adult Learning: A Social Network and Large Language Model Assisted Approach
Fang Liu, Bosheng Ding, Chong Guan, Zhang Wei, Dusit Niyato, Justina Tan
Adult learning is increasingly recognized as a crucial way for personal development and societal progress. It is, however, challenging: adult learners face unique difficulties such as balancing education with other life responsibilities. Collecting feedback from adult learners is effective in understanding their concer...
http://arxiv.org/abs/2404.13267v1
2024-04-20T04:26:49Z
cs.SI
2024
ISQA: Informative Factuality Feedback for Scientific Summarization
Zekai Li, Yanxia Qin, Qian Liu, Min-Yen Kan
We propose Iterative Factuality Refining on Informative Scientific Question-Answering (ISQA) feedback\footnote{Code is available at \url{https://github.com/lizekai-richard/isqa}}, a method following human learning theories that employs model-generated feedback consisting of both positive and negative information. Throug...
http://arxiv.org/abs/2404.13246v1
2024-04-20T03:16:13Z
cs.CL
2024
LLMChain: Blockchain-based Reputation System for Sharing and Evaluating Large Language Models
Mouhamed Amine Bouchiha, Quentin Telnoff, Souhail Bakkali, Ronan Champagnat, Mourad Rabah, Mickaël Coustaty, Yacine Ghamri-Doudane
Large Language Models (LLMs) have witnessed rapid growth in emerging challenges and capabilities of language understanding, generation, and reasoning. Despite their remarkable performance in natural language processing-based applications, LLMs are susceptible to undesirable and erratic behaviors, including hallucinatio...
http://arxiv.org/abs/2404.13236v2
2024-04-20T02:18:00Z
cs.DC, cs.ET
2024
STaRK: Benchmarking LLM Retrieval on Textual and Relational Knowledge Bases
Shirley Wu, Shiyu Zhao, Michihiro Yasunaga, Kexin Huang, Kaidi Cao, Qian Huang, Vassilis N. Ioannidis, Karthik Subbian, James Zou, Jure Leskovec
Answering real-world user queries, such as product search, often requires accurate retrieval of information from semi-structured knowledge bases or databases that involve a blend of unstructured (e.g., textual descriptions of products) and structured (e.g., entity relations of products) information. However, previous wor...
http://arxiv.org/abs/2404.13207v1
2024-04-19T22:54:54Z
cs.IR, cs.LG
2024
Unified Scene Representation and Reconstruction for 3D Large Language Models
Tao Chu, Pan Zhang, Xiaoyi Dong, Yuhang Zang, Qiong Liu, Jiaqi Wang
Enabling Large Language Models (LLMs) to interact with 3D environments is challenging. Existing approaches extract point clouds either from ground truth (GT) geometry or 3D scenes reconstructed by auxiliary models. Text-image aligned 2D features from CLIP are then lifted to point clouds, which serve as inputs for LLMs....
http://arxiv.org/abs/2404.13044v1
2024-04-19T17:58:04Z
cs.CV
2024
When Life gives you LLMs, make LLM-ADE: Large Language Models with Adaptive Data Engineering
Stephen Choi, William Gazeley
This paper presents the LLM-ADE framework, a novel methodology for continued pre-training of large language models (LLMs) that addresses the challenges of catastrophic forgetting and double descent. LLM-ADE employs dynamic architectural adjustments, including selective block freezing and expansion, tailored to specific...
http://arxiv.org/abs/2404.13028v1
2024-04-19T17:43:26Z
cs.CE, cs.AI
2024
MAiDE-up: Multilingual Deception Detection of GPT-generated Hotel Reviews
Oana Ignat, Xiaomeng Xu, Rada Mihalcea
Deceptive reviews are becoming increasingly common, especially given the increase in performance and the prevalence of LLMs. While work to date has addressed the development of models to differentiate between truthful and deceptive human reviews, much less is known about the distinction between real reviews and AI-auth...
http://arxiv.org/abs/2404.12938v1
2024-04-19T15:08:06Z
cs.CL, cs.AI
2024
Cross-cultural Inspiration Detection and Analysis in Real and LLM-generated Social Media Data
Oana Ignat, Gayathri Ganesh Lakshmy, Rada Mihalcea
Inspiration is linked to various positive outcomes, such as increased creativity, productivity, and happiness. Although inspiration has great potential, there has been limited effort toward identifying content that is inspiring, as opposed to just engaging or positive. Additionally, most research has concentrated on We...
http://arxiv.org/abs/2404.12933v1
2024-04-19T15:04:30Z
cs.CL, cs.AI
2024
MM-PhyRLHF: Reinforcement Learning Framework for Multimodal Physics Question-Answering
Avinash Anand, Janak Kapuriya, Chhavi Kirtani, Apoorv Singh, Jay Saraf, Naman Lal, Jatin Kumar, Adarsh Raj Shivam, Astha Verma, Rajiv Ratn Shah, Roger Zimmermann
Recent advancements in LLMs have shown their significant potential in tasks like text summarization and generation. Yet, they often encounter difficulty while solving complex physics problems that require arithmetic calculation and a good understanding of concepts. Moreover, many physics problems include images that co...
http://arxiv.org/abs/2404.12926v1
2024-04-19T14:52:57Z
cs.AI
2024
The Power of Words: Generating PowerShell Attacks from Natural Language
Pietro Liguori, Christian Marescalco, Roberto Natella, Vittorio Orbinato, Luciano Pianese
As the Windows OS stands out as one of the most targeted systems, the PowerShell language has become a key tool for malicious actors and cybersecurity professionals (e.g., for penetration testing). This work explores an uncharted domain in AI code generation by automatically generating offensive PowerShell code from na...
http://arxiv.org/abs/2404.12893v1
2024-04-19T13:54:34Z
cs.CR, cs.SE
2024
Large Language Models for Next Point-of-Interest Recommendation
Peibo Li, Maarten de Rijke, Hao Xue, Shuang Ao, Yang Song, Flora D. Salim
The next Point of Interest (POI) recommendation task is to predict users' immediate next POI visit given their historical data. Location-Based Social Network (LBSN) data, which is often used for the next POI recommendation task, comes with challenges. One frequently disregarded challenge is how to effectively use the a...
http://arxiv.org/abs/2404.17591v1
2024-04-19T13:28:36Z
cs.IR, cs.AI, cs.LG
2024
LLM-R2: A Large Language Model Enhanced Rule-based Rewrite System for Boosting Query Efficiency
Zhaodonghui Li, Haitao Yuan, Huiming Wang, Gao Cong, Lidong Bing
Query rewrite, which aims to generate more efficient queries by altering a SQL query's structure without changing the query result, has been an important research problem. In order to maintain equivalence between the rewritten query and the original one during rewriting, traditional query rewrite methods always rewrite...
http://arxiv.org/abs/2404.12872v1
2024-04-19T13:17:07Z
cs.DB, cs.CL
2024
How Far Can We Go with Practical Function-Level Program Repair?
Jiahong Xiang, Xiaoyang Xu, Fanchu Kong, Mingyuan Wu, Haotian Zhang, Yuqun Zhang
Recently, multiple Automated Program Repair (APR) techniques based on Large Language Models (LLMs) have been proposed to enhance the repair performance. While these techniques mainly focus on the single-line or hunk-level repair, they face significant challenges in real-world application due to the limited repair task ...
http://arxiv.org/abs/2404.12833v1
2024-04-19T12:14:09Z
cs.SE
2024
Large Language Model Supply Chain: A Research Agenda
Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang
The rapid advancements in pre-trained Large Language Models (LLMs) and Large Multimodal Models (LMMs) have ushered in a new era of intelligent applications, transforming fields ranging from natural language processing to content generation. The LLM supply chain represents a crucial aspect of the contemporary artificial...
http://arxiv.org/abs/2404.12736v1
2024-04-19T09:29:53Z
cs.SE
2024
Evaluating Character Understanding of Large Language Models via Character Profiling from Fictional Works
Xinfeng Yuan, Siyu Yuan, Yuhan Cui, Tianhe Lin, Xintao Wang, Rui Xu, Jiangjie Chen, Deqing Yang
Large language models (LLMs) have demonstrated impressive performance and spurred numerous AI applications, in which role-playing agents (RPAs) are particularly popular, especially for fictional characters. The prerequisite for these RPAs lies in the capability of LLMs to understand characters from fictional works. Pre...
http://arxiv.org/abs/2404.12726v1
2024-04-19T09:10:29Z
cs.CL
2024
Mathify: Evaluating Large Language Models on Mathematical Problem Solving Tasks
Avinash Anand, Mohit Gupta, Kritarth Prasad, Navya Singla, Sanjana Sanjeev, Jatin Kumar, Adarsh Raj Shivam, Rajiv Ratn Shah
The rapid progress in the field of natural language processing (NLP) systems and the expansion of large language models (LLMs) have opened up numerous opportunities in the field of education and instructional methods. These advancements offer the potential for tailored learning experiences and immediate feedback, all d...
http://arxiv.org/abs/2404.13099v1
2024-04-19T08:45:42Z
cs.CL, cs.AI
2024
Parameter Efficient Diverse Paraphrase Generation Using Sequence-Level Knowledge Distillation
Lasal Jayawardena, Prasan Yapa
Over the past year, the field of Natural Language Generation (NLG) has experienced an exponential surge, largely due to the introduction of Large Language Models (LLMs). These models have exhibited the most effective performance in a range of domains within the Natural Language Processing and Generation domains. Howeve...
http://arxiv.org/abs/2404.12596v1
2024-04-19T02:59:09Z
cs.CL, cs.AI, cs.LG
2024
Cocoon: Semantic Table Profiling Using Large Language Models
Zezhou Huang, Eugene Wu
Data profilers play a crucial role in the preprocessing phase of data analysis by identifying quality issues such as missing, extreme, or erroneous values. Traditionally, profilers have relied solely on statistical methods, which lead to high false positives and false negatives. For example, they may incorrectly flag m...
http://arxiv.org/abs/2404.12552v1
2024-04-19T00:13:25Z
cs.DB
2024
HalluciBot: Is There No Such Thing as a Bad Question?
William Watson, Nicole Cho
Hallucination continues to be one of the most critical challenges in the institutional adoption journey of Large Language Models (LLMs). In this context, an overwhelming number of studies have focused on analyzing the post-generation phase - refining outputs via feedback, analyzing logit output values, or deriving clue...
http://arxiv.org/abs/2404.12535v1
2024-04-18T22:56:57Z
cs.LG, cs.AI, cs.CL
2024
NORMAD: A Benchmark for Measuring the Cultural Adaptability of Large Language Models
Abhinav Rao, Akhila Yerukola, Vishwa Shah, Katharina Reinecke, Maarten Sap
The integration of Large Language Models (LLMs) into various global cultures fundamentally presents a cultural challenge: LLMs must navigate interactions, respect social norms, and avoid transgressing cultural boundaries. However, it is still unclear if LLMs can adapt their outputs to diverse cultural norms. Our study ...
http://arxiv.org/abs/2404.12464v1
2024-04-18T18:48:50Z
cs.CL
2024
Characterizing LLM Abstention Behavior in Science QA with Context Perturbations
Bingbing Wen, Bill Howe, Lucy Lu Wang
The correct model response in the face of uncertainty is to abstain from answering a question so as not to mislead the user. In this work, we study the ability of LLMs to abstain from answering context-dependent science questions when provided insufficient or incorrect context. We probe model sensitivity in several set...
http://arxiv.org/abs/2404.12452v1
2024-04-18T18:26:43Z
cs.CL
2024
When LLMs are Unfit Use FastFit: Fast and Effective Text Classification with Many Classes
Asaf Yehudai, Elron Bendel
We present FastFit, a method, and a Python package designed to provide fast and accurate few-shot classification, especially for scenarios with many semantically similar classes. FastFit utilizes a novel approach integrating batch contrastive learning and token-level similarity score. Compared to existing few-shot learni...
http://arxiv.org/abs/2404.12365v1
2024-04-18T17:48:05Z
cs.CL, cs.AI, cs.IR, cs.LG
2024
V2Xum-LLM: Cross-Modal Video Summarization with Temporal Prompt Instruction Tuning
Hang Hua, Yunlong Tang, Chenliang Xu, Jiebo Luo
Video summarization aims to create short, accurate, and cohesive summaries of longer videos. Despite the existence of various video summarization datasets, a notable limitation is their limited amount of source videos, which hampers the effective fine-tuning of advanced large vision-language models (VLMs). Additionally...
http://arxiv.org/abs/2404.12353v1
2024-04-18T17:32:46Z
cs.CV, cs.AI
2024
Large Language Models in Targeted Sentiment Analysis
Nicolay Rusnachenko, Anton Golubev, Natalia Loukachevitch
In this paper we investigate the use of decoder-based generative transformers for extracting sentiment towards the named entities in Russian news articles. We study sentiment analysis capabilities of instruction-tuned large language models (LLMs). We consider the dataset of RuSentNE-2023 in our study. The first group o...
http://arxiv.org/abs/2404.12342v1
2024-04-18T17:16:16Z
cs.CL
2024
Simultaneous Interpretation Corpus Construction by Large Language Models in Distant Language Pair
Yusuke Sakai, Mana Makinae, Hidetaka Kamigaito, Taro Watanabe
In Simultaneous Machine Translation (SiMT) systems, training with a simultaneous interpretation (SI) corpus is an effective method for achieving high-quality yet low-latency systems. However, it is very challenging to curate such a corpus due to limitations in the abilities of annotators, and hence, existing SI corpora...
http://arxiv.org/abs/2404.12299v1
2024-04-18T16:24:12Z
cs.CL, cs.AI, cs.LG, cs.SD, eess.AS
2024
Augmenting emotion features in irony detection with Large language modeling
Yucheng Lin, Yuhan Xia, Yunfei Long
This study introduces a novel method for irony detection, applying Large Language Models (LLMs) with prompt-based learning to facilitate emotion-centric text augmentation. Traditional irony detection techniques typically fall short due to their reliance on static linguistic features and predefined knowledge bases, ofte...
http://arxiv.org/abs/2404.12291v2
2024-04-18T16:11:17Z
cs.CL, cs.AI
2024
Enhancing Embedding Performance through Large Language Model-based Text Enrichment and Rewriting
Nicholas Harris, Anand Butani, Syed Hashmy
Embedding models are crucial for various natural language processing tasks but can be limited by factors such as limited vocabulary, lack of context, and grammatical errors. This paper proposes a novel approach to improve embedding performance by leveraging large language models (LLMs) to enrich and rewrite input text ...
http://arxiv.org/abs/2404.12283v1
2024-04-18T15:58:56Z
cs.CL
2024
DeepLocalization: Using change point detection for Temporal Action Localization
Mohammed Shaiqur Rahman, Ibne Farabi Shihab, Lynna Chu, Anuj Sharma
In this study, we introduce DeepLocalization, an innovative framework devised for the real-time localization of actions tailored explicitly for monitoring driver behavior. Utilizing the power of advanced deep learning methodologies, our objective is to tackle the critical issue of distracted driving-a significant facto...
http://arxiv.org/abs/2404.12258v1
2024-04-18T15:25:59Z
cs.CV
2024
De-DSI: Decentralised Differentiable Search Index
Petru Neague, Marcel Gregoriadis, Johan Pouwelse
This study introduces De-DSI, a novel framework that fuses large language models (LLMs) with genuine decentralization for information retrieval, particularly employing the differentiable search index (DSI) concept in a decentralized setting. Focused on efficiently connecting novel user queries with document identifiers...
http://arxiv.org/abs/2404.12237v2
2024-04-18T14:51:55Z
cs.IR, cs.AI, cs.DC, I.2.7; I.2.11; H.3.3; C.2.4
2024
OpenBezoar: Small, Cost-Effective and Open Models Trained on Mixes of Instruction Data
Chandeepa Dissanayake, Lahiru Lowe, Sachith Gunasekara, Yasiru Ratnayake
Instruction fine-tuning pretrained LLMs for diverse downstream tasks has demonstrated remarkable success and has captured the interest of both academics and practitioners. To ensure such fine-tuned LLMs align with human preferences, techniques such as RLHF and DPO have emerged. At the same time, there is increasing int...
http://arxiv.org/abs/2404.12195v1
2024-04-18T13:57:18Z
cs.CL, cs.LG
2024
Aligning Actions and Walking to LLM-Generated Textual Descriptions
Radu Chivereanu, Adrian Cosma, Andy Catruna, Razvan Rughinis, Emilian Radoi
Large Language Models (LLMs) have demonstrated remarkable capabilities in various domains, including data augmentation and synthetic data generation. This work explores the use of LLMs to generate rich textual descriptions for motion sequences, encompassing both actions and walking patterns. We leverage the expressive ...
http://arxiv.org/abs/2404.12192v1
2024-04-18T13:56:03Z
cs.CV
2024
Claim Check-Worthiness Detection: How Well do LLMs Grasp Annotation Guidelines?
Laura Majer, Jan Šnajder
The increasing threat of disinformation calls for automating parts of the fact-checking pipeline. Identifying text segments requiring fact-checking is known as claim detection (CD) and claim check-worthiness detection (CW), the latter incorporating complex domain-specific criteria of worthiness and often framed as a ra...
http://arxiv.org/abs/2404.12174v1
2024-04-18T13:31:05Z
cs.CL
2024
Stance Detection on Social Media with Fine-Tuned Large Language Models
İlker Gül, Rémi Lebret, Karl Aberer
Stance detection, a key task in natural language processing, determines an author's viewpoint based on textual analysis. This study evaluates the evolution of stance detection methods, transitioning from early machine learning approaches to the groundbreaking BERT model, and eventually to modern Large Language Models (...
http://arxiv.org/abs/2404.12171v1
2024-04-18T13:25:29Z
cs.CL, cs.SI
2024
Character is Destiny: Can Large Language Models Simulate Persona-Driven Decisions in Role-Playing?
Rui Xu, Xintao Wang, Jiangjie Chen, Siyu Yuan, Xinfeng Yuan, Jiaqing Liang, Zulong Chen, Xiaoqing Dong, Yanghua Xiao
Can Large Language Models substitute humans in making important decisions? Recent research has unveiled the potential of LLMs to role-play assigned personas, mimicking their knowledge and linguistic habits. However, imitative decision-making requires a more nuanced understanding of personas. In this paper, we benchmark...
http://arxiv.org/abs/2404.12138v1
2024-04-18T12:40:59Z
cs.AI
2024
mABC: multi-Agent Blockchain-Inspired Collaboration for root cause analysis in micro-services architecture
Wei Zhang, Hongcheng Guo, Jian Yang, Yi Zhang, Chaoran Yan, Zhoujin Tian, Hangyuan Ji, Zhoujun Li, Tongliang Li, Tieqiao Zheng, Chao Chen, Yi Liang, Xu Shi, Liangfan Zheng, Bo Zhang
The escalating complexity of micro-services architecture in cloud-native technologies poses significant challenges for maintaining system stability and efficiency. To conduct root cause analysis (RCA) and resolution of alert events, we propose a pioneering framework, multi-Agent Blockchain-inspired Collaboration for ro...
http://arxiv.org/abs/2404.12135v2
2024-04-18T12:35:39Z
cs.MA, cs.CR, cs.DC
2024
ParaFusion: A Large-Scale LLM-Driven English Paraphrase Dataset Infused with High-Quality Lexical and Syntactic Diversity
Lasal Jayawardena, Prasan Yapa
Paraphrase generation is a pivotal task in natural language processing (NLP). Existing datasets in the domain lack syntactic and lexical diversity, resulting in paraphrases that closely resemble the source sentences. Moreover, these datasets often contain hate speech and noise, and may unintentionally include non-Engli...
http://arxiv.org/abs/2404.12010v1
2024-04-18T09:02:45Z
cs.CL, cs.AI, cs.LG
2024
Token-level Direct Preference Optimization
Yongcheng Zeng, Guoqing Liu, Weiyu Ma, Ning Yang, Haifeng Zhang, Jun Wang
Fine-tuning pre-trained Large Language Models (LLMs) is essential to align them with human values and intentions. This process often utilizes methods like pairwise comparisons and KL divergence against a reference LLM, focusing on the evaluation of full answers generated by the models. However, the generation of these ...
http://arxiv.org/abs/2404.11999v1
2024-04-18T08:49:38Z
cs.CL, cs.AI
2024
EVIT: Event-Oriented Instruction Tuning for Event Reasoning
Zhengwei Tao, Xiancai Chen, Zhi Jin, Xiaoying Bai, Haiyan Zhao, Yiwei Lou
Events refer to specific occurrences, incidents, or happenings that take place under a particular background. Event reasoning aims to infer events according to certain relations and predict future events. The cutting-edge techniques for event reasoning play a crucial role in various natural language processing applicat...
http://arxiv.org/abs/2404.11978v1
2024-04-18T08:14:53Z
cs.CL
2024
Aligning Language Models to Explicitly Handle Ambiguity
Hyuhng Joon Kim, Youna Kim, Cheonbok Park, Junyeob Kim, Choonghyun Park, Kang Min Yoo, Sang-goo Lee, Taeuk Kim
In spoken languages, utterances are often shaped to be incomplete or vague for efficiency. This can lead to varying interpretations of the same input, based on different assumptions about the context. To ensure reliable user-model interactions in such scenarios, it is crucial for models to adeptly handle the inherent a...
http://arxiv.org/abs/2404.11972v1
2024-04-18T07:59:53Z
cs.CL
2024
Generating Diverse Criteria On-the-Fly to Improve Point-wise LLM Rankers
Fang Guo, Wenyu Li, Honglei Zhuang, Yun Luo, Yafu Li, Le Yan, Yue Zhang
The most recent pointwise Large Language Model (LLM) rankers have achieved remarkable ranking results. However, these rankers are hindered by two major drawbacks: (1) they fail to follow a standardized comparison guidance during the ranking process, and (2) they struggle with comprehensive considerations when dealing w...
http://arxiv.org/abs/2404.11960v1
2024-04-18T07:42:46Z
cs.IR, cs.AI
2024
Large Language Models Can Plan Your Travels Rigorously with Formal Verification Tools
Yilun Hao, Yongchao Chen, Yang Zhang, Chuchu Fan
The recent advancements of Large Language Models (LLMs), with their abundant world knowledge and capabilities of tool-using and reasoning, fostered many LLM planning algorithms. However, LLMs have not shown to be able to accurately solve complex combinatorial optimization problems. In Xie et al. (2024), the authors pro...
http://arxiv.org/abs/2404.11891v1
2024-04-18T04:36:37Z
cs.AI, cs.CL, cs.HC
2024
CAUS: A Dataset for Question Generation based on Human Cognition Leveraging Large Language Models
Minjung Shin, Donghyun Kim, Jeh-Kwang Ryu
We introduce the CAUS (Curious About Uncertain Scene) dataset, designed to enable Large Language Models, specifically GPT-4, to emulate human cognitive processes for resolving uncertainties. Leveraging this dataset, we investigate the potential of LLMs to engage in questioning effectively. Our approach involves providi...
http://arxiv.org/abs/2404.11835v1
2024-04-18T01:31:19Z
cs.AI
2024
NL2FOL: Translating Natural Language to First-Order Logic for Logical Fallacy Detection
Abhinav Lalwani, Lovish Chopra, Christopher Hahn, Caroline Trippel, Zhijing Jin, Mrinmaya Sachan
Logical fallacies are common errors in reasoning that undermine the logic of an argument. Automatically detecting logical fallacies has important applications in tracking misinformation and validating claims. In this paper, we design a process to reliably detect logical fallacies by translating natural language to Firs...
http://arxiv.org/abs/2405.02318v1
2024-04-18T00:20:48Z
cs.CL, cs.AI, cs.LG, cs.LO
2024
Enhancing Q&A with Domain-Specific Fine-Tuning and Iterative Reasoning: A Comparative Study
Zooey Nguyen, Anthony Annunziata, Vinh Luong, Sang Dinh, Quynh Le, Anh Hai Ha, Chanh Le, Hong An Phan, Shruti Raghavan, Christopher Nguyen
This paper investigates the impact of domain-specific model fine-tuning and of reasoning mechanisms on the performance of question-answering (Q&A) systems powered by large language models (LLMs) and Retrieval-Augmented Generation (RAG). Using the FinanceBench SEC financial filings dataset, we observe that, for RAG, com...
http://arxiv.org/abs/2404.11792v2
2024-04-17T23:00:03Z
cs.AI
2024
REQUAL-LM: Reliability and Equity through Aggregation in Large Language Models
Sana Ebrahimi, Nima Shahbazi, Abolfazl Asudeh
The extensive scope of large language models (LLMs) across various domains underscores the critical importance of responsibility in their application, beyond natural language processing. In particular, the randomized nature of LLMs, coupled with inherent biases and historical stereotypes in data, raises critical concer...
http://arxiv.org/abs/2404.11782v1
2024-04-17T22:12:41Z
cs.CL, cs.AI, cs.CY, cs.LG
2024
A Deep Dive into Large Language Models for Automated Bug Localization and Repair
Soneya Binta Hossain, Nan Jiang, Qiang Zhou, Xiaopeng Li, Wen-Hao Chiang, Yingjun Lyu, Hoan Nguyen, Omer Tripp
Large language models (LLMs) have shown impressive effectiveness in various software engineering tasks, including automated program repair (APR). In this study, we take a deep dive into automated bug fixing utilizing LLMs. In contrast to many deep learning-based APR methods that assume known bug locations, rely on line...
http://arxiv.org/abs/2404.11595v2
2024-04-17T17:48:18Z
cs.SE
2024
Large Language Models meet Collaborative Filtering: An Efficient All-round LLM-based Recommender System
Sein Kim, Hongseok Kang, Seungyoon Choi, Donghyun Kim, Minchul Yang, Chanyoung Park
Collaborative filtering recommender systems (CF-RecSys) have shown successive results in enhancing the user experience on social media and e-commerce platforms. However, as CF-RecSys struggles under cold scenarios with sparse user-item interactions, recent strategies have focused on leveraging modality information of u...
http://arxiv.org/abs/2404.11343v1
2024-04-17T13:03:07Z
cs.IR, cs.AI
2024
A Preference-driven Paradigm for Enhanced Translation with Large Language Models
Dawei Zhu, Sony Trenous, Xiaoyu Shen, Dietrich Klakow, Bill Byrne, Eva Hasler
Recent research has shown that large language models (LLMs) can achieve remarkable translation performance through supervised fine-tuning (SFT) using only a small amount of parallel data. However, SFT simply instructs the model to imitate the reference translations at the token level, making it vulnerable to the noise ...
http://arxiv.org/abs/2404.11288v1
2024-04-17T11:52:47Z
cs.CL
2024
Low-Cost Language Models: Survey and Performance Evaluation on Python Code Generation
Jessica López Espejel, Mahaman Sanoussi Yahaya Alassan, Merieme Bouhandi, Walid Dahhane, El Hassane Ettifouri
Large Language Models (LLMs) have become the go-to solution for many Natural Language Processing (NLP) tasks due to their ability to tackle various problems and produce high-quality results. Specifically, they are increasingly used to automatically generate code, easing the burden on developers by handling repetitive t...
http://arxiv.org/abs/2404.11160v1
2024-04-17T08:16:48Z
cs.AI
2024
TREACLE: Thrifty Reasoning via Context-Aware LLM and Prompt Selection
Xuechen Zhang, Zijian Huang, Ege Onur Taga, Carlee Joe-Wong, Samet Oymak, Jiasi Chen
Recent successes in natural language processing have led to the proliferation of large language models (LLMs) by multiple providers. Each LLM offering has different inference accuracy, monetary cost, and latency, and their accuracy further depends on the exact wording of the question (i.e., the specific prompt). At the...
http://arxiv.org/abs/2404.13082v1
2024-04-17T05:56:49Z
cs.CL, cs.AI, cs.LG
2024
Stepwise Alignment for Constrained Language Model Policy Optimization
Akifumi Wachi, Thien Q Tran, Rei Sato, Takumi Tanabe, Yohei Akimoto
Safety and trustworthiness are indispensable requirements for applying AI systems based on large language models (LLMs) in real-world applications. This paper formulates a human value alignment as a language model policy optimization problem to maximize reward under a safety constraint and then proposes an algorithm ca...
http://arxiv.org/abs/2404.11049v1
2024-04-17T03:44:58Z
cs.LG, cs.AI, cs.CL
2024
Which questions should I answer? Salience Prediction of Inquisitive Questions
Yating Wu, Ritika Mangla, Alexandros G. Dimakis, Greg Durrett, Junyi Jessy Li
Inquisitive questions -- open-ended, curiosity-driven questions people ask as they read -- are an integral part of discourse processing (Kehler and Rohde, 2017; Onea, 2016) and comprehension (Prince, 2004). Recent work in NLP has taken advantage of question generation capabilities of LLMs to enhance a wide range of app...
http://arxiv.org/abs/2404.10917v1
2024-04-16T21:33:05Z
cs.CL
2024
Incubating Text Classifiers Following User Instruction with Nothing but LLM
Letian Peng, Jingbo Shang
In this paper, we aim to generate text classification data given arbitrary class definitions (i.e., user instruction), so one can train a small text classifier without any human annotation or raw corpus. Compared with pioneer attempts, our proposed Incubator is the first framework that can handle complicated and even m...
http://arxiv.org/abs/2404.10877v1
2024-04-16T19:53:35Z
cs.CL
2024
A Dataset for Large Language Model-Driven AI Accelerator Generation
Mahmoud Nazzal, Deepak Vungarala, Mehrdad Morsali, Chao Zhang, Arnob Ghosh, Abdallah Khreishah, Shaahin Angizi
In the ever-evolving landscape of Deep Neural Networks (DNN) hardware acceleration, unlocking the true potential of systolic array accelerators has long been hindered by the daunting challenges of expertise and time investment. Large Language Models (LLMs) offer a promising solution for automating code generation which...
http://arxiv.org/abs/2404.10875v1
2024-04-16T19:52:26Z
cs.AR
2024
MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents
Liyan Tang, Philippe Laban, Greg Durrett
Recognizing if LLM output can be grounded in evidence is central to many tasks in NLP: retrieval-augmented generation, summarization, document-grounded dialogue, and more. Current approaches to this kind of "fact-checking" are based on verifying each piece of a model generation against potential evidence using an LLM. ...
http://arxiv.org/abs/2404.10774v1
2024-04-16T17:59:10Z
cs.CL, cs.AI
2024
Deep Learning and LLM-based Methods Applied to Stellar Lightcurve Classification
Yu-Yang Li, Yu Bai, Cunshi Wang, Mengwei Qu, Ziteng Lu, Roberto Soria, Jifeng Liu
Light curves serve as a valuable source of information on stellar formation and evolution. With the rapid advancement of machine learning techniques, it can be effectively processed to extract astronomical patterns and information. In this study, we present a comprehensive evaluation of deep-learning and large language...
http://arxiv.org/abs/2404.10757v1
2024-04-16T17:35:25Z
astro-ph.IM, astro-ph.SR, cs.CL, cs.LG
2024
Automating REST API Postman Test Cases Using LLM
S Deepika Sri, Mohammed Aadil S, Sanjjushri Varshini R, Raja CSP Raman, Gopinath Rajagopal, S Taranath Chan
In the contemporary landscape of technological advancements, the automation of manual processes is crucial, compelling the demand for huge datasets to effectively train and test machines. This research paper is dedicated to the exploration and implementation of an automated approach to generate test cases specifically ...
http://arxiv.org/abs/2404.10678v1
2024-04-16T15:53:41Z
cs.SE, cs.LG
2024
Private Attribute Inference from Images with Vision-Language Models
Batuhan Tömekçe, Mark Vero, Robin Staab, Martin Vechev
As large language models (LLMs) become ubiquitous in our daily tasks and digital interactions, associated privacy risks are increasingly in focus. While LLM privacy research has primarily focused on the leakage of model training data, it has recently been shown that the increase in models' capabilities has enabled LLMs...
http://arxiv.org/abs/2404.10618v1
2024-04-16T14:42:49Z
cs.AI, cs.CV, cs.LG
2024
Construction of Domain-specified Japanese Large Language Model for Finance through Continual Pre-training
Masanori Hirano, Kentaro Imajo
Large language models (LLMs) are now widely used in various fields, including finance. However, Japanese financial-specific LLMs have not been proposed yet. Hence, this study aims to construct a Japanese financial-specific LLM through continual pre-training. Before tuning, we constructed Japanese financial-focused data...
http://arxiv.org/abs/2404.10555v1
2024-04-16T13:26:32Z
cs.CL, q-fin.CP
2024
Unveiling the Misuse Potential of Base Large Language Models via In-Context Learning
Xiao Wang, Tianze Chen, Xianjun Yang, Qi Zhang, Xun Zhao, Dahua Lin
The open-sourcing of large language models (LLMs) accelerates application development, innovation, and scientific progress. This includes both base models, which are pre-trained on extensive datasets without alignment, and aligned models, deliberately designed to align with ethical standards and human values. Contrary ...
http://arxiv.org/abs/2404.10552v1
2024-04-16T13:22:54Z
cs.CL, cs.AI
2024
CoTAR: Chain-of-Thought Attribution Reasoning with Multi-level Granularity
Moshe Berchansky, Daniel Fleischer, Moshe Wasserblat, Peter Izsak
State-of-the-art performance in QA tasks is currently achieved by systems employing Large Language Models (LLMs), however these models tend to hallucinate information in their responses. One approach focuses on enhancing the generation process by incorporating attribution from the given input to the output. However, th...
http://arxiv.org/abs/2404.10513v1
2024-04-16T12:37:10Z
cs.CL, cs.AI, cs.LG
2024
White Men Lead, Black Women Help: Uncovering Gender, Racial, and Intersectional Bias in Language Agency
Yixin Wan, Kai-Wei Chang
Social biases can manifest in language agency. For instance, White individuals and men are often described as "agentic" and achievement-oriented, whereas Black individuals and women are frequently described as "communal" and as assisting roles. This study establishes agency as an important aspect of studying social bia...
http://arxiv.org/abs/2404.10508v1
2024-04-16T12:27:54Z
cs.CL, cs.AI, cs.CY
2024
When Emotional Stimuli meet Prompt Designing: An Auto-Prompt Graphical Paradigm
Chenggian Ma, Xiangyu Zhao, Chunhui Zhang, Yanzhao Qin, Wentao Zhang
With the development of Large Language Models (LLM), numerous prompts have been proposed, each with a rich set of features and their own merits. This paper summarizes the prompt words for large language models (LLMs), categorizing them into stimulating and framework types, and proposes an Auto-Prompt Graphical Paradigm...
http://arxiv.org/abs/2404.10500v1
2024-04-16T12:19:08Z
cs.CL, cs.AI, 68T20, I.2.7
2024
Reasoning on Efficient Knowledge Paths: Knowledge Graph Guides Large Language Model for Domain Question Answering
Yuqi Wang, Boran Jiang, Yi Luo, Dawei He, Peng Cheng, Liangcai Gao
Large language models (LLMs), such as GPT3.5, GPT4 and LLAMA2 perform surprisingly well and outperform human experts on many tasks. However, in many domain-specific evaluations, these LLMs often suffer from hallucination problems due to insufficient training of relevant corpus. Furthermore, fine-tuning large models may...
http://arxiv.org/abs/2404.10384v1
2024-04-16T08:28:16Z
cs.CL, cs.AI, cs.IR
2024
LLMs4OM: Matching Ontologies with Large Language Models
Hamed Babaei Giglou, Jennifer D'Souza, Felix Engel, Sören Auer
Ontology Matching (OM), is a critical task in knowledge integration, where aligning heterogeneous ontologies facilitates data interoperability and knowledge sharing. Traditional OM systems often rely on expert knowledge or predictive models, with limited exploration of the potential of Large Language Models (LLMs). We ...
http://arxiv.org/abs/2404.10317v2
2024-04-16T06:55:45Z
cs.AI
2024
Enhancing Confidence Expression in Large Language Models Through Learning from Past Experience
Haixia Han, Tingyun Li, Shisong Chen, Jie Shi, Chengyu Du, Yanghua Xiao, Jiaqing Liang, Xin Lin
Large Language Models (LLMs) have exhibited remarkable performance across various downstream tasks, but they may generate inaccurate or false information with a confident tone. One of the possible solutions is to empower the LLM confidence expression capability, in which the confidence expressed can be well-aligned wit...
http://arxiv.org/abs/2404.10315v1
2024-04-16T06:47:49Z
cs.CL
2024
LLM-Powered Test Case Generation for Detecting Tricky Bugs
Kaibo Liu, Yiyang Liu, Zhenpeng Chen, Jie M. Zhang, Yudong Han, Yun Ma, Ge Li, Gang Huang
Conventional automated test generation tools struggle to generate test oracles and tricky bug-revealing test inputs. Large Language Models (LLMs) can be prompted to produce test inputs and oracles for a program directly, but the precision of the tests can be very low for complex scenarios (only 6.3% based on our experi...
http://arxiv.org/abs/2404.10304v1
2024-04-16T06:20:06Z
cs.SE, cs.LG
2024
Uncovering Latent Arguments in Social Media Messaging by Employing LLMs-in-the-Loop Strategy
Tunazzina Islam, Dan Goldwasser
The widespread use of social media has led to a surge in popularity for automated methods of analyzing public opinion. Supervised methods are adept at text categorization, yet the dynamic nature of social media discussions poses a continual challenge for these techniques due to the constant shifting of the focus. On th...
http://arxiv.org/abs/2404.10259v1
2024-04-16T03:26:43Z
cs.CL, cs.AI, cs.CY, cs.LG, cs.SI
2024
Two-Stage Stance Labeling: User-Hashtag Heuristics with Graph Neural Networks
Joshua Melton, Shannon Reid, Gabriel Terejanu, Siddharth Krishnan
The high volume and rapid evolution of content on social media present major challenges for studying the stance of social media users. In this work, we develop a two stage stance labeling method that utilizes the user-hashtag bipartite graph and the user-user interaction graph. In the first stage, a simple and efficien...
http://arxiv.org/abs/2404.10228v1
2024-04-16T02:18:30Z
cs.LG, cs.CL, cs.SI
2024