diff --git "a/title_30K/test_title_long_2404.16283v1.json" "b/title_30K/test_title_long_2404.16283v1.json" new file mode 100644--- /dev/null +++ "b/title_30K/test_title_long_2404.16283v1.json" @@ -0,0 +1,104 @@ +{ + "url": "http://arxiv.org/abs/2404.16283v1", + "title": "Andes: Defining and Enhancing Quality-of-Experience in LLM-Based Text Streaming Services", + "abstract": "The advent of large language models (LLMs) has transformed text-based\nservices, enabling capabilities ranging from real-time translation to AI-driven\nchatbots. However, existing serving systems primarily focus on optimizing\nserver-side aggregate metrics like token generation throughput, ignoring\nindividual user experience with streamed text. As a result, under high and/or\nbursty load, a significant number of users can receive unfavorable service\nquality or poor Quality-of-Experience (QoE). In this paper, we first formally\ndefine QoE of text streaming services, where text is delivered incrementally\nand interactively to users, by considering the end-to-end token delivery\nprocess throughout the entire interaction with the user. Thereafter, we propose\nAndes, a QoE-aware serving system that enhances user experience for LLM-enabled\ntext streaming services. At its core, Andes strategically allocates contended\nGPU resources among multiple requests over time to optimize their QoE. Our\nevaluations demonstrate that, compared to the state-of-the-art LLM serving\nsystems like vLLM, Andes improves the average QoE by up to 3.2$\\times$ under\nhigh request rate, or alternatively, it attains up to 1.6$\\times$ higher\nrequest rate while preserving high QoE.", + "authors": "Jiachen Liu, Zhiyu Wu, Jae-Won Chung, Fan Lai, Myungjin Lee, Mosharaf Chowdhury", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.DC", + "cats": [ + "cs.DC", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Andes: Defining and Enhancing Quality-of-Experience in LLM-Based Text Streaming Services", + "main_content": "Introduction Large language Models (LLMs) [4, 9, 21, 46, 51] have revolutionized natural language processing. By generating contextually relevant responses, they power a wide range of applications, more than 60% of which are centered around conversational interactions like chatbots, virtual assistants, language translation, and customer support systems [15]. In particular, the meteoric rise of ChatGPT [35] spearheaded the growth of conversational AI services by attracting over 100 million users in just two months after its launch [29]. Conversational AI services, by nature, provide interactive conversations between the user and an AI agent. At its core, an LLM generates tokens one by one1 and streams them back to the user to be digested, be it as written text or speech. As 1LLMs process and generate text in units of tokens. For instance, the word \u201cstreaming\u201d may be broken down into two tokens: \u201cstream\u201d and \u201cing.\u201d Req 2 Req 1 Request 1 and 2 arrive Quality of Experience is a different story. Req 1 Request 1 and 2 arrive Quality of Experience is a different story. Throughput is not all you need. Throughput is not all you need. User 1 User 2 User 1 User 2 Req 2 Req 1 Req 2 Server Server TTFT TTFT (a) Existing LLM serving systems are oblivious of QoE. User 2 experiences a long wait time (TTFT) and therefore lower QoE. Req 2 Req 1 Request 1 and 2 arrive Quality of Experience is a different story. 
Req 1 Request 1 and 2 arrive Quality of Experience is a different story. Throughput is not all you need. Throughput is not all you need. User 1 User 2 User 1 User 2 Req 2 Req 1 Req 2 Server Server TTFT TTFT (b) A QoE-aware LLM serving system can schedule token generation over time to enhance QoE. User 2\u2019s TTFT is drastically improved without affecting User 1\u2019s token delivery timeline. Figure 1. Server-side token generation timeline and userside response digestion progress. Even if the server generates tokens very fast, users cannot digest them at such a pace. this token-by-token streaming nature is akin to the frameby-frame streaming nature of video streaming services, we dub such services text streaming services. In this paper, we seek to characterize and enhance the Quality-of-Experience (QoE) of text streaming services (\u00a72.2). We realize that user interaction with LLM responses happens at moments when each new token is delivered (e.g., displayed or spoken) to the user over time. Thus, we define token delivery timeline (TDT), a series of timestamps when each token was delivered to a user, to capture the user\u2019s interaction with the service for a single request. The ideal TDT a user expects from a text streaming service can vary significantly based on the type of the service and user demographics. For instance, a chat service that uses a text-to-speech model to read out the LLM\u2019s response to users (e.g., voice chat in ChatGPT, real-time speech translation) could be less stringent in terms of its minimum token delivery speed (TDS) compared to a chat service in raw text, because a user\u2019s speaking speed is often slower than their reading speed, but it may require smaller time to first token (TTFT) to better resemble real-life arXiv:2404.16283v1 [cs.DC] 25 Apr 2024 \fverbal conversations. The minimum TDS and TTFT together define the expected TDT of a request. Unfortunately, existing LLM serving systems [20, 25, 30, 50] are designed to optimize aggregated server-side performance metrics such as token generation throughput [25, 50], which are not necessarily aligned with optimizing the QoE of text streaming services (\u00a72.3). More importantly, by realigning the objectives of LLM serving systems towards QoE optimization, a QoE-aware serving system can utilize the same resources more effectively to manage a greater number of concurrent requests while ensuring high QoE, thus reducing the cost per request. To illustrate, we compare existing serving systems with a QoE-aware one, each with a serving capacity of 1, in Figure 1. In Figure 1a, due to the commonly adopted first-come-first-serve (FCFS) scheduling policy [25, 50, 52], User 2 experiences a long initial waiting time (TTFT). In contrast, in Figure 1b, a QoE-aware serving system schedules token generation in a manner that is aware of each user\u2019s reading speed, leading to a shorter wait time for User 2 without affecting User 1\u2019s interaction with the service. Although the average server-side token generation throughput or latency are the same for the two systems, overall user experience is improved in the QoE-aware system. We attribute this to the na\u00efve FCFS scheduling policy in existing serving systems, which fails to account for the QoE requirements of individual requests and cannot efficiently utilize resources (\u00a72.4). 
Consequently, some users may experience extended waiting time during their interaction with the service, especially when the system is under higher request rate or is serving requests with longer context lengths. To preserve good user experience, the service provider must provision more compute resources proportional to the excess request load, leading to higher operational costs. Designing a QoE-aware LLM serving system, however, is challenging from both conceptual and practical perspectives. Defining the QoE metric to capture the user experience in text streaming services is non-trivial. It should encapsulate the continuous interaction process over time, accounting for factors like TTFT and TDS. Designing a QoE-aware serving system faces several systems challenges as well: (a) Dynamic and unpredictable resource demand: Requests arrive dynamically with varying expected TDT and prompt length and the number of output tokens is not known a priori, making it challenging to implement a one-size-fits-all scheduling strategy such as round-robin. (b) Constrained resource supply: The system has limited GPU memory and computation resources, restricting the number of concurrent in-flight requests. To meet the QoE requirements of individual requests, the system needs to make runtime decisions to allocate resources among requests, which may incur non-negligible overhead. To this end, we first propose a mathematical definition of QoE for text streaming services (\u00a73.1). Our QoE metric Age Group Reading Speed 18-24 (28.0%) 236 WPM 25-44 (51.9%) 200 WPM 45-54 (11.2%) 192 WPM 55-64 (5.6%) 185 WPM 65+ (3.3%) 175 WPM Table 1. Reading speed (Word Per Minute) by age group [10, 29]. Language Speaking Speed English (79.3%) 150 WPM Chinese (7.0%) 158 WPM Korean (6.9%) 150 WPM French (3.6%) 195 WPM Spanish (3.2%) 218 WPM Table 2. Speaking speed (Word Per Minute) by language [8, 29, 36]. compares the actual TDT of a request with its expected TDT, reflecting the user\u2019s experience throughout their entire interaction with the service. Then, we propose Andes, an LLM serving system that optimizes the overall QoE of text streaming services (\u00a74). Andes employs a dynamic priority-based preemptive scheduler that operates at the granularity of tokens. Andes strategically allocates system resources to more urgent requests and preempts requests that have already received sufficient service, all to enhance QoE. By satisfying more requests with high QoE using the same amount of resource, Andes eliminates the need for additional resource provisioning, thus reducing LLM serving cost. Andes also codesigns a client-side token buffer that temporarily withholds excess tokens and displays them to the user at their expected pace (\u00a75). This design ensures users experience smooth token delivery, oblivious to the intricacies of server-side scheduling or network fluctuations. We evaluate Andes using the OPT [51] family of models, ranging from 13B to 175B parameters (\u00a76). Compared to vLLM [25], we find that Andes can manage 1.6\u00d7 higher request rate with high QoE, or alternatively, improve the average QoE by 3.2\u00d7 given the same amount of resource. Overall, we make the following contributions in this paper: 1. We identify an emerging category of LLM-based applications (text streaming services) and define a QoE metric for them. 2. We propose Andes, a QoE-aware LLM serving system designed to optimize QoE for text streaming services. 3. 
We evaluate Andes under different workloads and setups and show that Andes significantly improves QoE with negligible system overhead. 2 Background and Motivation In this section, we introduce the unique characteristics of LLM serving systems (\u00a72.1) and the user experience of text streaming services (\u00a72.2). We then discuss the opportunities for improving user experience (\u00a72.3) and the limitations of existing solutions (\u00a72.4). 2.1 LLM Serving Systems LLM text generation using Transformer-based [47] models is characterized by autoregressive token generation and significant memory usage. First, the LLM generates tokens 2 \fTime #Tokens Req 1 Req 2 Req 3 Req 4 Expected TDT Figure 2. Four requests arrive at \ud835\udc61= 0. Requests 1 and 2 are equally satisfying. Requests 3 and 4 are frustrating, with request 4 being more so as it delivers fewer tokens earlier on, despite having the same TTFT and average token latency. sequentially, where the next token is conditioned on the previous tokens. Second, the LLM requires a large amount of memory to store intermediate data for each token in its input prompt and output response, known as KV cache [47]. As the number of tokens generated increases, so does the KV cache size. For instance, GPT-3 175B [9] requires 7 GB of GPU memory for a 1000-token request, limiting the number of requests that can be handled concurrently. 2.2 User Experience of Text Streaming Services Compared to traditional services that generate entire responses at once, text streaming services allow the user to start digesting the response as early as possible. The user experience includes two phases: Wait Phase. Users wait for the first token to arrive, known as the time-to-first-token (TTFT). For web applications, studies indicate that users expect an initial response to arrive within one second, with a significant 32% dropout rate if the response takes longer than three seconds [6]. Digest Phase. Following the first token, users enter the digest phase, which may last for tens of seconds or more [50], Hence, it is a common practice to stream tokens to the user on the fly so that they can start digesting the response as early as possible. The expected rate of token delivery, i.e., the Token Delivery Speed (TDS), depends on factors such as application type and user demographics. For example, reading speeds, measured in words per minute (WPM), differ across age groups (Table 1), while speaking speeds vary among languages (Table 2). By translating words to tokens using the average word-to-token ratio [38], we can estimate the average reading speed to 4.8 tokens/s and average speaking speed to 3.3 tokens/s. Intuition Behind QoE of Text Streaming Services. The expected TTFT and the expected TDS together define the expected token delivery timeline (TDT), represented by the black line in Figure 2. Similar to QoE in video streaming, a desired QoE metric should capture the gap between the actual TDT and the expected TDT. Intuitively, users are satisfied when the actual TDT is above the expected TDT, otherwise, they prefer to receive more tokens earlier on, as illustrated in 2 4 Request rate (req/s) 10 0 10 1 10 2 TTFT (s) Expected TTFT QoE-unaware QoE-aware (a) 90\ud835\udc61\u210e-p TTFT increases dramatically as the request rate surpasses the server\u2019s capacity. 2 3 4 5 Request rate (req/s) 0 5 10 TDS (tokens/s) Reading speed Speaking speed QoE-unaware QoE-aware (b) Token generation speed is much faster than the userexpected speed. Figure 3. 
System performance under different request rates. Figure 2. Therefore, the QoE should comprehensively measure the token delivery timeline throughout the entire user interaction, going beyond an aggregated number like TTFT or average token latency. We formally define such a QoE metric in Section 3.1. 2.3 Problems and Opportunities Existing LLM serving systems have primarily focused on optimizing aggregated server-side metrics, and often employ a first-come-first-serve (FCFS) scheduling approach without considering the user experience. In our experiment with ShareGPT [45] on OPT 66B [51] with 4 A100 GPUs, we notice that especially under high request rate, two issues arise: (1) certain users may encounter extended TTFT; (2) conversely, other users might receive tokens at a pace surpassing their digestion ability. Prolonged TTFT. As depicted in Figure 3a, the 90\ud835\udc61\u210epercentile TTFT increases dramatically as the server faces more bursty request rates, resulting in a longer queuing delay and degraded user experience. To accommodate such bursty request volumes, service providers often have to over-provision resources, such as by adding more GPUs, which significantly increases operational costs. Excessively High Token Generation Speed. Conversely, as shown in Figure 3b, we report the token generation speed under different request rates. The observed server-side token generation speed (\u22656.6 tokens/s) is much faster than the userexpected speed (3.3 or 4.8 tokens/s), as referenced in Table 1 and Table 2. This discrepancy indicates that the server often generates tokens faster than the user can consume them. While this might seem efficient from the server\u2019s perspective, it may overwhelm this user while starving others. Opportunities. We observe that there is an opportunity to optimize user experience by balancing prolonged TTFT and excessively fast token generation speed. By temporarily pausing the response generation for requests with already sufficient tokens generated, we can spare the limited GPU resources to other pending requests. The ratio between the expected token generation speed \ud835\udc47\ud835\udc37\ud835\udc46expected and the actual token generation speed \ud835\udc47\ud835\udc37\ud835\udc46actual 3 \fResponse length Prompt length Memory usage = Request Spec Request ID 1 2 3 4 Prompt length 90 90 180 90 Response length 10 10 10 20 Expected TTFT (s) 1 1 2 2 Expected TDS 1.25 1.25 5 5 (tokens/s) Server memory capacity 1 2 3 4 1,2,3,4 FCFS 1 2 3 4 1 2 3 4 1,2,3,4 Round Robin 1 2 3 4 1 2 4 1,2,3,4 QoE-aware 10 20 #Token 0 2 4 6 8 Time 10 20 #Token 0 2 4 6 8 Time 0 2 4 6 8 Time Req 1 Req 2 Req 3 Req 4 Expected TDT Figure 4. Suboptimal user experience from QoE-unaware scheduling policies. In this illustrative toy example, we consider a server that can serve at most 200 tokens simultaneously due to memory constraints. We consider four requests with different prompt lengths, response lengths, as well as different expected TTFT and TDS values, arriving at time 0. The figure shows the serving order (first row) and the cumulative tokens delivered over time for each request (second and third rows). Colored lines represent actual TDT, while the black line indicates the expected TDT. An optimal QoE is achieved when the actual token delivery curve is completely left and/or above the expected token delivery curve. determines the slack for which a request can be preempted, allowing the system to accommodate more concurrent requests. 
Thus, with appropriate request preemption and restarting, we can serve \ud835\udc47\ud835\udc37\ud835\udc46actual \ud835\udc47\ud835\udc37\ud835\udc46expected \u00d7 concurrent requests than without request preemption, significantly improving user experience. In the example of text-based and voice-based chat services in Figure 3b, we could have increased the serving capacity by 6.6 4.8 = 1.38\u00d7 and 6.6 3.3 = 2\u00d7, respectively. Our evaluation shows that Andes can nearly achieve this theoretical improvement in practice. 2.4 Limitation of Existing Solutions Let us consider a toy example in Figure 4 to illustrate the limitations of existing QoE-unaware scheduling (FCFS used by vLLM [25] and Round Robin). Under FCFS scheduling, while requests 1, 2, and 3 are served immediately, request 4 suffers from longer TTFT due to queuing delays. Round Robin partially mitigates queuing delay using fair-sharing but still fails to align the token delivery in the later stage of the interaction, leading to suboptimal QoE. In contrast, the QoE-aware policy manages to meet the QoE requirements for all requests by prioritizing requests based on their QoE requirements and resource demand. It prioritizes requests with stringent TTFT requirements. Meanwhile, it monitors the resource demand of each request to prevent small requests from being starved of necessary resources. As the served requests accumulate enough tokens for the user to digest, the system upgrades the priority of request 3, which then requires more urgent servicing, and serves it. Finally, the system brings back requests 1, 2, and 4 to continue supplying tokens. In sum, when the server load is below its capacity, all requests can be served promptly and achieve perfect QoE without smart request scheduling. However, when the server is operating at capacity due to unpredictable higher request loads, QoE-aware scheduling can significantly improve the user experience without over-provisioning resources. 3 Overview In this section, we first introduce a formal definition of Quality-of-Experience (QoE) for text streaming services (\u00a73.1). Then, we provide an overview of Andes, an LLM serving system that optimizes QoE of text streaming services (\u00a73.2). 3.1 Quality-of-Experience (QoE) in Text Streaming Text streaming services allow the developer to specify the expected token delivery timeline (TDT) in a request. We derive the QoE of a request by comparing its actual TDT with the expected TDT, considering the entire token delivery process. Informed by the distinctions between superior and inferior service depicted in Figure 2, the formulation of our QoE metric is guided by a set of principles that reflect user expectations and experiences throughout their interaction: 1. Perfect Satisfaction: Users are satisfied when the actual token delivery perfectly aligns with or exceeds the expected delivery, resulting in maximum QoE (QoE = 1). We normalize QoE \u2208[0, 1] for generality across applications. 2. Excess Token Delivery: At any given time, delivering tokens faster than the user\u2019s digest speed does not add 4 \f) Perfect QoE (d) Pause in the middle Expected TDT Server generates User digests Sexpected Sactual Time #Tokens (a) TTFT missed. Time #Tokens (b) TDS missed. Time #Tokens (c) Perfect QoE. Time #Tokens (d) Pause in the middle. Figure 5. QoE example. The slope of the actual token delivery curve on the user side is capped by the expected TDS. value to the user experience, as the user cannot digest all tokens at once. 
So the QoE remains unchanged. 3. Early Token Delivery: Users prefer receiving more tokens earlier to start processing the response sooner. In scenarios where perfect satisfaction is not achieved, the QoE is higher for scenarios where more tokens are delivered earlier. For example, the QoE is worse for a longer TTFT with the same TDS, and similarly, the QoE is worse for a slower TDS with the same TTFT. Following these principles, we formalize the QoE metric by comparing two curves: (a) The expected token delivery curve \ud835\udc47(\ud835\udc61) that is defined by expected TTFT and TDS. Specifically, \ud835\udc47(\ud835\udc61) = \ud835\udc47\ud835\udc37\ud835\udc46expected\u00b7 (\ud835\udc61\u2212\ud835\udc47\ud835\udc47\ud835\udc39\ud835\udc47expected) represents the ideal timeline at which tokens should be delivered to the user (black lines in Figure 5). (b) The actual token delivery curve \ud835\udc34(\ud835\udc61) reflects the timeline of how tokens are digested by the user over time (black dotted lines in Figure 5), with its slope at any time capped by the expected TDS. To quantify the QoE of a request with response length \ud835\udc59, we measure the area under both curves up to the actual time to the last token (TTLT). We then define QoE as the ratio of the actual and expected areas, as shown in Figure 5: \ud835\udc44\ud835\udc5c\ud835\udc38= \ud835\udc46actual \ud835\udc46expected = \u222b\ud835\udc47\ud835\udc47\ud835\udc3f\ud835\udc47 0 \ud835\udc34(\ud835\udc61)\ud835\udc51\ud835\udc61 \u222b\ud835\udc47\ud835\udc47\ud835\udc3f\ud835\udc47 0 min(\ud835\udc47(\ud835\udc61),\ud835\udc59)\ud835\udc51\ud835\udc61 (1) This formulation focuses on the relative QoE relationship between services, but Andes allows the service provider to prioritize specific aspects. For example, to stress a shorter TTFT, the provider can add a penalizing term on the defined QoE as \ud835\udefc\ud835\udc47\ud835\udc47\ud835\udc39\ud835\udc47actual\u2212\ud835\udc47\ud835\udc47\ud835\udc39\ud835\udc47expected \u00b7 \ud835\udc46actual \ud835\udc46expected , where \ud835\udefc\u2208[0, 1]. In this paper, we will use the QoE definition in Equation 1 by default. Running Waiting Queue \u2026 \u2026 1 Request Client Server 4 5 Buffer Request Priority GPU Admit Evict Submit Request {Prompt: \u2019What is the probability that this paper will be accepted?\u2019, TTFT: 1s, TDS: 5 tokens/s} Token Context Length QoE Tracker 2 3 3 Worker 0 Worker 1 Worker W-1 Request Metadata Receive Token Figure 6. Andes Overview. 3.2 Andes Overview The workflow of Andes is shown in Figure 6. 1 The interaction begins with the user submitting a request to the server. The request comes with its QoE requirement, which is prespecified by the application developer. 2 Upon receiving the request, the QoE tracker assigns a scheduling priority and puts it in the waiting queue. 3 At each scheduling iteration, the QoE tracker refreshes the priorities of all requests, both in the waiting and running queues. Then Andes reschedules the requests based on their priorities by admitting high-priority waiting requests to GPU workers and evicting low-priority running requests back to the server. For these evicted requests, their states (e.g., KV cache) are stored in the request metadata store on CPU RAM for future retrieval. 4 During each inference iteration, each running request generates one token, which is then sent to the client. 
5 As tokens are delivered to the client, a token buffer is responsible for storing excess tokens and displaying them at the expected speed, ensuring smooth token delivery. 4 QoE-Aware Scheduling In this section, we describe how Andes schedules token generation across multiple requests to maximize the total QoE. Section 4.1 formulates the scheduling problem as a Knapsack variant, and Section 4.2 introduces an efficient solution. 4.1 Problem Formulation The core of Andes is an online preemptive scheduling algorithm for token generation, which requires designing three elements: (1) How often to make scheduling decisions (time quantum), (2) which requests to serve (scheduling objective), and (3) how many requests to serve at a time (batch size). Time Quantum. At the beginning of each time quantum, the scheduler inspects both queued and running requests, and determines which ones to admit and preempt. Following the 5 \fcontinuous batching used in existing systems [25, 50], Andes invokes its scheduler at the beginning of each iteration. Scheduling Objective. Just like any other online serving system, it is impractical to perfectly plan execution into the future. Therefore, Andes serves the set of requests that maximizes the scheduling objective in the upcoming time frame of length \u0394\ud835\udc61. The parameter \u0394\ud835\udc61cannot be too short, as scheduling decisions will become shortsighted, or too long, as the actual system state would deviate too far from estimations. We find that setting it as the average request completion time is reasonable, and show in Section 6.5 that Andes is not sensitive to the setting of \u0394\ud835\udc61. Andes supports various scheduling objectives including max average QoE and max-min QoE by designing its scheduling objective function appropriately. For the sake of presentation, we will focus on maximizing average QoE here (See Appendix A for alternative objectives). The objective function for request \ud835\udc56is defined as: \ud835\udc44serve,\ud835\udc56\u2212\ud835\udc44wait,\ud835\udc56 (2) where \ud835\udc44serve,\ud835\udc56and \ud835\udc44wait,\ud835\udc56are the QoE of request \ud835\udc56after \u0394\ud835\udc61 if it is served and not served, respectively. In simple terms, Equation 2 is the amount of QoE gain when we decide to serve request \ud835\udc56compared to when it is not served, and we naturally want to serve more of the requests that give us large QoE gains when served. Batch Size. The number of requests picked to run in the upcoming quantum, or batch size, is limited by two factors. First, each token in a request\u2019s context (prompt plus all generated tokens) consumes one entry in the LLM serving system\u2019s KV cache [9], whose size is bounded by GPU memory. Thus, we have the following constraint: \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \ud835\udc59\ud835\udc56\ud835\udc65\ud835\udc56\u2264\ud835\udc40 (3) where there are \ud835\udc41requests in total (queued or running), \ud835\udc59\ud835\udc56 is request \ud835\udc56\u2019s context length, \ud835\udc65\ud835\udc56is an indicator variable that is 1 if request \ud835\udc56is served and 0 otherwise, and \ud835\udc40is the total number of tokens that can fit in GPU memory. Furthermore, Andes must take into account the latency to generate one token. 
That is, while a large batch size may increase server-side token generation throughput, the increase in the amount of compute will inflate the latency to generate one token from the perspective of each request, potentially hurting their QoE by delaying TTFT or failing to meet the expected TDS. On the other hand, a small batch size will be able to deliver tokens faster to each running request, but in turn more requests will not be served at all, again potentially hurting their QoE. Thus, the right intermediate batch size will have to be chosen in order to maximize average QoE. Knapsack Formulation. Putting these together, we observe that the problem setting resembles that of the classic knapsack problem [23]. The goal is to select items (requests) Time # Tokens Qserve(50) Qserve(30) Qserve(10) t Time # Tokens Qwait t Expected Actual Future Time # Tokens Qserve(50) Qserve(30) Qserve(10) t (a) \ud835\udc44serve, i(\ud835\udc35) Time # Tokens Qwait t (b) \ud835\udc44wait,\ud835\udc56 Figure 7. Visualization of \ud835\udc44serve, i(\ud835\udc35) and \ud835\udc44wait,\ud835\udc56. The former depends on batch size \ud835\udc35whereas the latter is a constant. With batch size 50, request \ud835\udc56no longer has perfect QoE. to put in a knapsack (GPU) so that total item value (QoE gain) is maximized and total weight (\ud835\udc59\ud835\udc56) does not exceed the knapsack\u2019s capacity (\ud835\udc40). However, our problem setting deviates from that of the classical knapsack because the value of each item depends on how many items there are in the knapsack. This is because, as noted above, the number of requests in the knapsack (batch size) affects token generation latency, which in turn means that \ud835\udc44serve,\ud835\udc56is actually a function of batch size \ud835\udc35.2 Figure 7 visualizes this. When \ud835\udc35is just 10 or 30, the request maintains perfect QoE by always running ahead. However, when \ud835\udc35is 50, the computation time of one iteration becomes longer and slows down token generation, degrading the request\u2019s QoE by failing to meet its TDS expectation. On the other hand, \ud835\udc44wait,\ud835\udc56does not depend on the batch size because it simply sits in the queue, waiting to be served. Thus, for a specific batch size \ud835\udc35, we would like to solve: max \ud835\udc65 \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \u0000\ud835\udc44serve,\ud835\udc56(\ud835\udc35) \u2212\ud835\udc44wait,\ud835\udc56 \u0001 \u00b7 \ud835\udc65\ud835\udc56 s.t. \ud835\udc65\ud835\udc56\u2208{0, 1}, \ud835\udc56\u22081, . . . , \ud835\udc41 \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \ud835\udc65\ud835\udc56= \ud835\udc35 \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \ud835\udc59\ud835\udc56\ud835\udc65\ud835\udc56\u2264\ud835\udc40 (4) where the optimization variable \ud835\udc65is a length \ud835\udc41array of \ud835\udc65\ud835\udc56s. The second constraint ensures that exactly \ud835\udc35many requests are chosen, whereas the final constraint ensures that the GPU memory capacity is not exceeded. Equation 4 should be solved for each possible batch size \ud835\udc35and the solution that yields the best objective value should be selected. 2More precisely, token generation latency is a function of batch size and the total number of tokens in the batch, but batch size and total number of tokens are nearly perfectly correlated, allowing us to eliminate the latter and only leave batch size. See Appendix B for more detailed analysis. 
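To make the formulation above concrete, the following minimal Python sketch shows one way Equation 1 and the per-request gain of Equation 2 could be evaluated numerically. The discretized integration, the latency profile iteration_latency(B), and all function names are illustrative assumptions for exposition rather than Andes's actual implementation; timestamps are measured relative to each request's arrival.

def qoe(deliver_times, ttft_exp, tds_exp, t_end, l=float("inf"), dt=0.05):
    # Equation 1 evaluated up to t_end (TTLT for a finished request, or the end
    # of the prediction horizon for an in-flight one). l caps the expected curve
    # at the response length; it is unknown a priori, hence infinite by default.
    digested, s_actual, s_expected = 0.0, 0.0, 0.0
    for k in range(1, max(1, int(t_end / dt)) + 1):
        t = k * dt
        arrived = sum(1 for d in deliver_times if d <= t)      # tokens delivered by time t
        digested = min(arrived, digested + tds_exp * dt)       # digestion rate capped by expected TDS
        expected = min(max(0.0, tds_exp * (t - ttft_exp)), l)  # T(t), clipped to [0, l]
        s_actual += digested * dt
        s_expected += expected * dt
    return min(1.0, s_actual / s_expected) if s_expected > 0 else 1.0

def qoe_gain(delivered, ttft_exp, tds_exp, now, delta_t, batch_size, iteration_latency):
    # Equation 2: Q_serve(B) - Q_wait over [now, now + delta_t]. iteration_latency(B)
    # is an assumed profile of per-token latency versus batch size; if served, the
    # request gains one token per inference iteration.
    step = iteration_latency(batch_size)
    projected = delivered + [now + (k + 1) * step for k in range(int(delta_t / step))]
    q_serve = qoe(projected, ttft_exp, tds_exp, now + delta_t)
    q_wait = qoe(delivered, ttft_exp, tds_exp, now + delta_t)
    return q_serve - q_wait

At each scheduling quantum, this gain would be computed for every queued and running request and fed, together with the context lengths, into the knapsack selection of Equation 4.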
6 \f4.2 Solution Design In this section, we discuss the hardness of the problem formulated in the previous section in terms of algorithmic hardness and systems overhead. Then, we propose efficiency optimizations and a greedy algorithm that gives an approximate solution with low systems overhead. Algorithmic Hardness. As Andes must solve its optimization problem repetitively online to determine the set of requests to solve, an efficient algorithm is needed. However, Equation 4 is a variant of the knapsack problem called the Exact K-item Knapsack, which is weakly NP-Hard [23]. We give an optimal 3D dynamic programming solution to the problem that runs in pseudo-polynomial time \ud835\udc42(\ud835\udc40\u00b7 \ud835\udc412) in Appendix C. However, such an algorithm is also too slow in our case as the number of requests \ud835\udc41and the maximum number of tokens that can fit in memory \ud835\udc40are easily in the order of hundreds and thousands, respectively. Furthermore, we need to solve Equation 4 for each possible batch size \ud835\udc35\u2208[1, \ud835\udc41], which is clearly intractable. Preemption Overhead. When some requests that were running in the previous time quantum are not selected to run on the next, such requests are preempted. This is the core mechanism that reduces TTFT inflation from head-of-line blocking. For this, Andes supports two preemption mechanisms: swapping and recomputation. The former moves the request\u2019s KV cache entries between the GPU and CPU memory, whereas the latter drops all entries on preemption and recomputes them when the request restarts. If Andes runs out of host memory for storing KV cache, the preemption mechanism will automatically switch to recomputation. Preemption is not free \u2013 in general, the latency overhead of swapping is similar to one token generation iteration (See Appendix D for detailed benchmarking). Frequent preemption may slow down token generation and delay token delivery, potentially degrading request throughput and QoE. Therefore, our scheduling algorithm must make preemption decisions that strike a good balance between reaping QoE gains and causing slowdowns. Optimization #1: Selective Triggering. We observe that Equation 4 only needs to be solved when batch size is limited either by memory capacity or computation time. The former case can be detected easily by monitoring the KV cache occupancy and having a high-memory watermark (e.g., 90%). For the latter case, Andes monitors token generation latency and detects when it begins to exceed the most minimum token delivery speed requirement of the most stringent request. In all other cases, Andes does not trigger the optimization problem solver and serves every request. Optimization #2: Batch Size Search Space Pruning. In order to reduce the number of times Equation 4 needs to be solved, we reduce the search space of batch size \ud835\udc35from [1, \ud835\udc41] to [\ud835\udc35min, \ud835\udc35max]. 
First, there is no point in exploring very large Algorithm 1 Greedy packing algorithm for Equation 4 Inputs: Number of requests \ud835\udc41and KV cache capacity \ud835\udc40 Request context length array \ud835\udc59[\ud835\udc41] Request QoE gain array \ud835\udc5e[\ud835\udc41] Target batch size \ud835\udc35 Output: Solution array \ud835\udc65[\ud835\udc41] 1: Initialize priority array \ud835\udc5d[\ud835\udc41] with all zeros 2: for \ud835\udc56= 0 to \ud835\udc41\u22121 do 3: \ud835\udc5d[\ud835\udc56] = \ud835\udc5e[\ud835\udc56] \ud835\udc59[\ud835\udc56] \u22b2Priority of request \ud835\udc56 4: \ud835\udc40current = 0 5: \ud835\udc41current = 0 6: Initialize solution array \ud835\udc65[\ud835\udc41] with all zeros 7: for all \ud835\udc56\u2208[0, \ud835\udc41\u22121] in descending order of \ud835\udc5d[\ud835\udc56] do 8: if \ud835\udc40current + \ud835\udc59[\ud835\udc56] \u2264\ud835\udc40and \ud835\udc41current + 1 \u2264\ud835\udc35then 9: \ud835\udc65[\ud835\udc56] = 1 \u22b2Serve request \ud835\udc56 10: \ud835\udc40current = \ud835\udc40current + \ud835\udc59[\ud835\udc56] 11: \ud835\udc41current = \ud835\udc41current + 1 12: else 13: break 14: return \ud835\udc65 batch sizes that cannot be realized. Thus, \ud835\udc35max is determined by adding to the batch requests with the shortest context lengths until the total number of tokens in the batch reaches \ud835\udc40, at which point the batch size is the largest that can be realized. On the other hand, very small batch sizes that can generate tokens faster than the expected TDS of any request are also suboptimal. This is because going that fast does not increase the QoE of requests that are served, but on the other hand will serve a smaller number of requests, potentially degrading the QoE of requests that are left waiting. Thus, \ud835\udc35min is set as the largest batch size that generates tokens faster than the most stringent TDS among all requests. Optimization #3: Greedy Packing for Knapsack. A direct solution to the exact k-item knapsack problem in Equation 4 is computationally too heavy. Instead, Andes designs an efficient algorithm that computes each request\u2019s priority and greedily packs requests in that order. In designing the priority function, we have three goals: (a) Reflecting merit: Requests that yield high QoE gain and consume less resource should have high priority. (b) Preventing starvation: Requests should be automatically deprioritized as they receive service. (c) Reducing preemption: Selecting high priority requests should reduce the need for preemption. In light of these goals, request \ud835\udc56\u2019s priority is defined as: \ud835\udc44serve,\ud835\udc56(\ud835\udc35) \u2212\ud835\udc44wait,\ud835\udc56 \ud835\udc59\ud835\udc56 (5) This priority function meets our goals. (a) A higher QoE gain will increase the request\u2019s priority, but simultaneously discounted by the amount of GPU memory it will use. (b) As 7 \fa request receives service, its context length (\ud835\udc59\ud835\udc56) will increase, automatically deprioritizing itself. In contrast, requests will have higher QoE gain the more they wait, automatically boosting their priorities. (c) Finally, a request with long context length (\ud835\udc59\ud835\udc56) will be preempted first, freeing enough GPU memory to potentially bring in more than one waiting requests.3 This reduces the number of preemptions required to alleviate head-of-line blocking. The whole procedure is given in Algorithm 1. 
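For reference, the greedy packing of Algorithm 1 maps directly to a few lines of Python. The sketch below assumes request data is given as parallel lists of context lengths and QoE gains; the variable names are illustrative.

def greedy_pack(lengths, gains, capacity_m, batch_size_b):
    # Greedy solution to Equation 4: serve up to B requests whose total context
    # length fits in the KV cache capacity M, considered in descending order of
    # the priority q_i / l_i from Equation 5.
    n = len(lengths)
    order = sorted(range(n), key=lambda i: gains[i] / lengths[i], reverse=True)
    chosen = [0] * n
    used_tokens, used_slots = 0, 0
    for i in order:
        if used_tokens + lengths[i] <= capacity_m and used_slots < batch_size_b:
            chosen[i] = 1                      # serve request i
            used_tokens += lengths[i]
            used_slots += 1
        else:
            break                              # stop at the first misfit, as in Algorithm 1
    return chosen

The sort dominates the cost, giving the O(N log N) complexity discussed next; the scheduler runs this routine once for each candidate batch size B in [Bmin, Bmax] and keeps the assignment with the best objective value.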
The greedy packing algorithm offers time complexity \ud835\udc42(\ud835\udc41log \ud835\udc41). We empirically show in Section 6.5 that this greedy solution can achieve performance comparable to the 3D DP algorithm while greatly reducing scheduling overhead. Optimization #4: Preemption Cap. We have discussed that preemption is not free and can potentially degrade QoE. However, we can empirically and theoretically show that Andes commonly does not result in excessive preemptions/thrashing that may cause average QoE to degrade. Empirically, Andes consistently maintains an average preemption frequency below 1 per request, even under a high server load (\u00a76.2.3). Theoretically, the number of preemptions needed to optimize the QoE of requests is contingent upon the excessive request load. Assume the serving system can handle \ud835\udc5f0 requests per second and the actual request rate is \ud835\udc58\u00b7 \ud835\udc5f0 requests per second, where \ud835\udc58\u22651. Thus, there would be (\ud835\udc58\u22121) \u00b7\ud835\udc5f0 requests whose QoE might be degraded due to the queuing delay. To mitigate this, we need roughly one preemption to accommodate each of these requests. Sometimes, a single preemption of a long request can allow multiple new requests to be served, which further reduces the number of preemptions needed. Therefore, the average preemption frequency needed is bounded by \ud835\udc58\u22121, which is small as long as the load is not excessively high. Nevertheless, in order to safeguard against thrashing that may happen in the worst case request pattern, Andes supports setting a cap \ud835\udc43on the average number of preemptions a request can experience throughout its lifetime. Too high a \ud835\udc43will not be able to act as a safeguard, whereas too small a \ud835\udc43will prevent even absolutely necessary preemptions from happening. We find that setting \ud835\udc43= 1, i.e., a request on average experiences at most one preemption during its lifetime, is a good default (Section 6.5). 5 Implementation The two core elements of Andes are its QoE-aware scheduler and a client-side token buffer. Server-Side QoE-Aware Scheduler. Andes\u2019s scheduling algorithm can work with any LLM serving system that supports continuous batching and at least one preemption mechanism (swapping or recomputation). We note that an LLM 3The overhead of preemption depends on how much memory was freed, not the number of requests. Therefore, for the same amount of memory freed from preemption, it\u2019s better to free a smaller number of requests. 0 50 100 150 200 250 #Tokens Generation Pause Network Fluctuation 0 10 20 30 40 50 Time (s) 0 100 #Tokens in buffer Client receives User digests Figure 8. The client-side token buffer holds excess tokens sent from the server to absorb token generation fluctuations and paces token delivery based on the user\u2019s expected TDS. serving system that implements Paged Attention [25] is likely to also support at least one preemption mechanism to prevent the system from running out of memory. As a reference, we implemented Andes\u2019s scheduling algorithm on top of vLLM [25]. The scheduler only manages requests coming into the vLLM instance it is integrated with, assuming that cluster-level load balancing and fault tolerance are done separately. Client-Side Token Buffer. The server sends tokens to the buffer as soon as they are generated, even if they were generated at a pace that exceeds the user\u2019s expected TDS. 
Then, the token buffer smooths out the token delivery timeline to pace tokens at the user\u2019s expected TDS. The token buffer can also naturally smooth out some fluctuations in network latency, for instance in crowded mobile networks. The buffer should be implemented appropriately depending on the destination of streaming \u2013 e.g., TypeScript for web frontend, Python for API use. Figure 8 visualizes the token buffer in action. With an initial burst generation faster than the user\u2019s expected TDS, the buffer withholds excess tokens and paces token delivery, thus growing in size. The server is fully aware of the token buffer, and preempts the request to serve other requests. During this time, the buffer drains at a rate that matches the user\u2019s expected TDS. Finally, the server brings back the request and starts generating tokens again, and together with the token buffer, perfect QoE was achieved. 6 Evaluation We evaluate the performance of Andes under different workloads. We demonstrate that: 1. Andes improves the average QoE up to 3.2\u00d7 when the system experiences high/bursty load (\u00a76.2.1). 8 \fModel size 13B 30B 66B 175B GPUs A100 4\u00d7A100 4\u00d7A100 4\u00d7A100 GPU Memory 80 GB 320 GB 320 GB 320 GB Precision FP16 FP16 FP16 8-bit [14] Model Memory 26 GB 60 GB 132 GB 180 GB Table 3. OPT model family and GPU specifications used. 2. Andes can handle up to 1.6\u00d7 higher request rates while preserving high QoE without additional resources, significantly reducing the serving cost(\u00a76.2.2). 3. Andes maintains similar token generation throughput as the baseline, with a minor drop (\u226410%) in throughput as the request rate increases (\u00a76.2.3). 4. Andes significantly improves TTFT, while maintaining TDS above user expected speed (\u00a76.3). 5. Andes outperforms the baselines across different workloads (\u00a76.4) and setups (\u00a76.5). 6.1 Experiment Setup Model and Server Configurations. Following state-ofthe-art LLM serving systems [25], we evaluate Andes using the OPT [51] series with 13B, 30B, 66B, and 175B parameters, with the 175B model employing INT8 quantization. We run all experiments on NVIDIA A100 GPUs in Chameleon Cloud [22], and use tensor parallelism to deploy the models, using the default configuration in vLLM [25]. We use swap as the preemption mechanism and set the CPU swap space to 240 GB in total. Detailed hardware specifications are provided in Table 3. Workloads. We experiment on ShareGPT [45], a dataset that gathers conversations shared by users with ChatGPT [35], including multiple rounds of input prompt and output response. By concatenating multiple rounds of conversations into one input while limiting its length to 1k tokens to fit the model\u2019s maximum context length, and setting the final response as the output, we create the Multi-Round ShareGPT dataset for longer conversations. As shown in Figure 9, MultiRound-ShareGPT has about 3\u00d7 longer input than ShareGPT, while both datasets have similar output length distribution. We generate request arrival traces using Poisson distribution with different arrival rates. The request\u2019s QoE requirement trace is created with different expected TTFT and TDS. TTFT is set to 1 second for all, while TDS is based on user reading speeds (Table 1), and is translated from words to tokens using the average word-to-token ratio for ChatGPT [38]. In real applications, QoE requirements should be set depending on the application\u2019s specific use case. 
For instance, reading speed (and thus expected TDS) may be measured using screen scrolling [18] or eye-tracking [3, 34]. Another potential use case is to introduce API price tiering, 0 500 1000 1500 2000 #Tokens 0 200 400 Density Input (mean: 174.55) Output (mean: 314.22) (a) ShareGPT. 0 200 400 600 800 1000 #Tokens 0 200 400 600 Density Input (mean: 624.22) Output (mean: 365.52) (b) Multi-Round ShareGPT. Figure 9. Input and output length distributions of datasets. where a higher per-token price provides faster TDS, and API users can select the tier suitable for downstream digestion. Baselines. We compare Andes with vLLM (version 0.2.7). vLLM uses first-come-first-serve (FCFS) scheduling policy by default. We implement another scheduling policy, RoundRobin (RR), atop vLLM for more informed comparison, which is designed to guarantee equal service to requests through cyclic request preemption. For RR, we set the service interval to 50 inference iterations, maximizing its QoE performance. Metrics. We focus on the following metrics in evaluations: \u2022 Average QoE: We set the threshold to 0.9 as the minimum acceptable average QoE. The QoE of 0.9 corresponds to a 5% delay in TTFT, a 10% slowdown in TDS, or something in the middle. \u2022 System capacity: It measures the maximum request rate that the system can handle while maintaining an average QoE above the threshold. \u2022 System throughput: It measures how many tokens the system generates per second. We also report normalized latency, which is used by vLLM[25] and Orca[50], in Appendix E. 6.2 End-to-End Experiments In this section, we report the performance of Andes in terms of average QoE (\u00a76.2.1), system capacity (\u00a76.2.2), and system throughput (\u00a76.2.3) under different setups. 6.2.1 Improvement on Average QoE. We evaluate the performance of Andes on all four models and two datasets. Figure 10 and Figure 11 show the result on the ShareGPT dataset and Multi-Round ShareGPT dataset respectively. As the request rate increases, Andes maintains a high average QoE, outperforming the baseline whose average QoE sharply decreases. In other words, Andes can serve more concurrent requests without compromising user experience. For ShareGPT dataset, Andes increases average QoE up to 3.1\u00d7 at the same request rate, while maintaining an average QoE of 0.9, all with the same resources. For Multi-Round ShareGPT dataset, Andes improves average QoE up to 3.2\u00d7. For OPT-30B model, the improvement is less significant, as the model is less resource-constrained when compared to the OPT-66B model. 9 \f1.4 1.6 1.8 2.0 2.2 Request rate (req/s) 0.00 0.25 0.50 0.75 1.00 Avg QoE RR vLLM Andes 5.0 7.5 10.0 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (a) OPT-13B 5 10 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (b) OPT-30B 3 4 5 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (c) OPT-66B 1.4 1.6 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (d) OPT-175B. Figure 10. Average QoE for different request rates using the ShareGPT dataset. 1.4 1.6 1.8 2.0 2.2 Request rate (req/s) 0.00 0.25 0.50 0.75 1.00 Avg QoE RR vLLM Andes 2 3 4 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (a) OPT-13B. 2 4 6 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (b) OPT-30B. 1.5 2.0 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (c) OPT-66B. 0.8 1.0 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (d) OPT-175B. Figure 11. Average QoE for different request rates using the Multi-Round ShareGPT dataset. 
These improvements can be attributed to Andes\u2019s QoEaware scheduling policy, which dynamically prioritizes resources for urgent requests that risk falling below their expected QoE, preempting those that have been sufficiently served. In contrast, under higher load, traditional FCFS scheduling policy suffers from head-of-line blocking, leading to significant queuing delay. Although the RR policy mitigates head-of-line blocking by preemptions, frequent preemptions introduce significant overhead and degrade the average QoE. 6.2.2 Improvement on Server Capacity. As shown in Figures 10 and 11, the horizontal dotted lines represent the average QoE threshold of 0.9. For ShareGPT dataset, Andes can manage 1.2\u00d7\u22121.6\u00d7 higher request rate than vLLM while maintaining an average QoE above the threshold. Specifically, for the OPT-66B model, Andes can handle 1.25\u00d7 higher request rate than vLLM, nearing the 1.38\u00d7 theoretical improvement suggested in Section 2.3, showcasing Andes\u2019s ability to optimize resource allocation and average QoE effectively. For Multi-Round ShareGPT dataset, Andes can serve 1.1 \u00d7 \u22121.3\u00d7 higher request rate. Additionally, by serving higher request rates with the same resources, Andes effectively reduces the resource cost per request. 6.2.3 Impact of Andes on System Throughput. We report the token generation throughput and the preemption frequency of Andes on OPT-66B with both datasets, as shown in Figure 12 and Figure 13. In both datasets, Andes maintains the same token throughput as vLLM when the request rate is moderate, and experiences a minor drop (\u226410%) in throughput as the request rate increases. This demonstrates that 1.4 1.6 1.8 2.0 2.2 Request rate (req/s) 0.00 0.25 0.50 0.75 1.00 Avg QoE RR vLLM Andes 3 4 5 Request rate (req/s) 0 50 Throughput (tokens/s) (a) ShareGPT. 1.5 2.0 Request rate (req/s) 0 50 Throughput (tokens/s) (b) Multi-Round ShareGPT. Figure 12. Token generation throughput with OPT-66B under different request arrival rates. Andes marginally impacts system throughput. The throughput decrease can be attributed to the overheads introduced by request preemption. Despite the active request scheduling, the frequency of preemptions per request remains low (\u22640.5) under reasonable average QoE as shown in Figure 13, minimizing the impact of overheads on throughput; Despite the minor decrease in throughput, the up to 60% improvement in server capacity offered by Andes can compensate for this, effectively reducing the resource cost per request while maintaining a satisfactory user experience. 6.3 Breakdown Analysis To understand Andes\u2019s performance in detail, we conducted a breakdown analysis focusing on QoE, time to first token (TTFT), and token delivery speed (TDS), as shown in Table 4. We report Andes\u2019s performance on OPT-66B and ShareGPT dataset with a request rate of 3.3, where Andes achieved an average QoE of 0.92. With these breakdown analyses, we can 10 \f3 4 5 Request rate (req/s) 0.0 0.5 1.0 Avg preemption frequency Andes (a) ShareGPT. 1.5 2.0 2.5 Request rate (req/s) 0.0 0.5 1.0 Avg preemption frequency Andes (b) Multi-Round ShareGPT. Figure 13. Preemption frequency with OPT-66B under different request arrival rates. 
Metric Percentile Method vLLM Andes 10\ud835\udc61\u210e 0.05 0.77 50\ud835\udc61\u210e 0.39 1.00 QoE 90\ud835\udc61\u210e 1.00 1.00 10\ud835\udc61\u210e 0.33 0.35 50\ud835\udc61\u210e 56.73 0.47 TTFT (s) 90\ud835\udc61\u210e 144.95 0.66 10\ud835\udc61\u210e 6.05 5.32 50\ud835\udc61\u210e 6.45 5.44 TDS (tokens/s) 90\ud835\udc61\u210e 7.84 7.02 Table 4. Andes significantly improves QoE and TTFT, while maintaining TDS above user expected speed. provide granular insights into individual user satisfaction under this level of QoE. QoE distribution. Andes significantly improves the lower and median user experiences, with the 10th percentile rising from 0.05 to 0.77 and the 50th percentile achieving a perfect score of 1, compared to 0.39 in vLLM. In order to understand how Andes handles requests with different request lengths, we present a scatter plot of QoE across different total lengths as shown in Figure 14. We observe Andes slightly starves a small fraction of longer requests, as they consume more resources or take longer time to complete. In contrast, FCFS starves lots of shorter requests that are blocked by longer requests. Token delivery timeline. Andes greatly enhances initial responsiveness, reducing median TTFT from 56.73 seconds in vLLM to just 0.47 seconds, and similarly improving the 90th percentile from 144.95 seconds to 0.66 seconds. This improved performance is attributed to Andes\u2019s QoE-aware scheduling, which effectively mitigates head-of-line blocking and reduces queuing delays. Additionally, we analyze the percentile distribution of the average TDS observed by users, excluding TTFT. While Andes slightly slows the average TDS, it remains above the user\u2019s expected speed, ensuring balanced delivery that neither overwhelms nor starves users. 0 1000 2000 Total Length 0 1 QoE (a) vLLM. 0 1000 2000 Total Length 0 1 QoE (b) Andes. Figure 14. QoE distribution across different total lengths. 6.4 Robustness to Diverse Workloads We evaluate the robustness of Andes under diverse settings including different hardware, arrival patterns, and QoE traces. We observed similar trends in diverse settings; therefore, we report our results with OPT-66B and ShareGPT. Hardware. We evaluate Andes on the NVIDIA A40 GPU with 46 GB RAM, as shown in Figure 15a. Andes improves average QoE up to 7\u00d7 under a higher request rate and serves 1.1\u00d7 higher request rate while maintaining an average QoE of 0.9. The reason for the smaller improvement on server capacity is that the A40 has a lower computational capability than the A100, leading to a slower average token generation speed. Consequently, the gap between the expected TDS and actual TDS on the A40 is smaller than on the A100, providing less opportunity for request scheduling and improving average QoE. However, as newer generations of GPUs are becoming more powerful in terms of computational capability, the potential improvement of Andes will be more significant. Bursty Arrival Process. We use a Gamma arrival process with the same request rate and a coefficient of variation of 3 to simulate the burst arrival of user requests. Figure 15b indicates that under bursty workload, the average QoE for FCFS policy begins to decrease at a lower request rate compared to the Poisson arrival, due to increased queuing delays. In contrast, Andes sustains a high average QoE, achieving up to a 2.7\u00d7 improvement on average QoE at the same request rate and serves 1.3\u00d7 higher request rate, showing Andes\u2019s adaptability to bursty workload. 
Different QoE Traces. Due to the unique QoE requirements of different applications, we evaluate Andes\u2019s performance under a voice chat QoE trace, with expected TTFT at 1 second and slower expected TDS adjusted according to the speaking speed outlined in Table 2. As shown in Figure 15c, both Andes and baseline achieve better average QoE even on higher request rates, attributed to the less strict TDS requirements. Nevertheless, Andes improves average QoE up to 1.25\u00d7 and manages 2\u00d7 request rate, which approaches the theoretical maximum improvement of 2\u00d7 as discussed in Section 2.3. 6.5 Sensitivity Analysis All experiments in sensitivity analysis are conducted on OPT66B with the ShareGPT dataset and a request rate of 3.3. 11 \f1.4 1.6 1.8 2.0 2.2 Request rate (req/s) 0.00 0.25 0.50 0.75 1.00 Avg QoE RR vLLM Andes 0.4 0.5 0.6 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (a) NVIDIA A40. 3 4 5 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (b) Burst request arrival. 5 10 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (c) Voice chat QoE trace. Figure 15. Robustness analysis on OPT-66B with ShareGPT dataset. 0.0 0.5 1.0 1.5 Preemption frequency cap p 0.5 1.0 Avg QoE vLLM Sedna 0.0 0.5 1.0 1.5 Preemption frequency cap P 0.5 1.0 Avg QoE (a) Average QoE. 0.0 0.5 1.0 1.5 Preemption frequency cap P 0 50 Throughput (tokens/s) (b) Throughput. Figure 16. Tuning preemption frequency cap \ud835\udc43. 0 50 100 150 t 0.4 0.6 0.8 1.0 Avg QoE vLLM Andes Figure 17. Tuning \u0394\ud835\udc61. 3 4 5 Request rate (req/s) 0.0 0.5 1.0 Avg QoE vLLM Andes w/ greedy Andes w/ DP Figure 18. Different solver. Preemption Frequency Cap \ud835\udc43. Increasing preemption frequency cap \ud835\udc43can lead to finer-grained scheduling, potentially enhancing average QoE, but at the cost of increased overhead and reduced throughput. Figure 16a shows the average QoE under different \ud835\udc43. Improvements in QoE are observed as \ud835\udc43increases up to 0.4 preemptions per request, stabilizing beyond this point. Conversely, Figure 16b illustrates a slight decrease in system throughput with increased \ud835\udc43, stabilizing beyond 0.4 preemption per request. These observations suggest a trade-off between average QoE and system throughput, indicating the current setting of \ud835\udc43nearly optimizes QoE while maintaining satisfactory throughput. Prediction Timeframe \u0394\ud835\udc61. We evaluate how different \u0394\ud835\udc61 influences average QoE to understand its effect on system performance. Figure 17 illustrates that the average QoE remains roughly consistent for \u0394\ud835\udc61values greater than 50, and significantly outperforms the baselines, indicating that Andes is not sensitive to the setting of \u0394\ud835\udc61. Different Knapsack Solution. We compare the performance of Andes with different knapsack solutions between greedy and dynamic programming (DP). Figure 18 shows that the greedy consistently surpasses the DP solution, while both solutions outperform the baselines. The lower performance of the DP is due to its substantial computational overhead, which delays the inference process and degrades the average QoE. This suggests that the greedy approach is a more practical and efficient solution for Andes. 7 Related Work General Model Serving Systems. A variety of model serving systems have emerged, ranging from general-purpose, production-level frameworks like TensorFlow Serving [33] and NVIDIA Triton [31] to specialized systems such as Clipper [11], which sets application-level SLOs. 
Recent systems including Nexus[42], DeepRecSys [17], Clockwork [16], INFaaS [40], SuperServe [24] and AlpaServe [26] have introduced features like serving pipelines, hardware platform diversity, advanced scheduling, dynamic model selection, and model parallelism to boost resource efficiency. However, these general systems neglect the unique characteristics of LLM inference, leaving potential avenues for optimization. LLM Serving Systems. Numerous model serving systems are proposed to address the unique challenges of LLMs. Orca [50] introduced an iteration-level scheduling policy to enhance the throughput of batching inference, and vLLM [25] developed a PagedAttention to reduce the memory usage of LLMs. Splitwise [37], DistServe [52], TetriInfer [19] and Sarathi-Serve [1, 2] optimize the computation of prefill and decode phases through disaggregating or merging them. Some other systems focus on GPU kernel optimization and kernel fusion[5, 12, 32], model parallelism [5, 39], batching algorithm [13, 43, 50], KV-cache management [27, 28, 44] and parameter-sharing [53]. However, these systems focus on optimizing aggregated server-side performance and simply adopt a FCFS scheduling policy, which fail to address the queuing delay problem under higher request load. Finally, shortest remaining processing time [41] is a preemptive scheduling policy, but it does not consider the QoE of individual requests and requires knowledge of the response length of requests. To the best of our knowledge, Andes is the first to define and optimize QoE of text streaming services. 12 \fVideo Streaming and QoE. The concept of text streaming draws inspiration from video streaming but encounters unique challenges and has a different QoE definition. While video streaming services are primarily limited by network bandwidth and latency [7], text streaming services are mainly constrained on computational resources [48]. Additionally, the QoE in video streaming is often measured by metrics like buffering ratio, resolution stability, and playback smoothness [7], while the QoE in text streaming primarily considers the token delivery timelines (TDT). 8 Conclusion In this paper, we define and optimize the Quality-of-Experience (QoE) for text streaming services, a critical aspect often overlooked by existing serving systems. We propose a QoE-aware LLM serving system, Andes, which is able to serve more concurrent requests while meeting their QoE requirements, significantly reducing the cost per request. We demonstrate the effectiveness of Andes through extensive experiments on various real-world datasets and LLMs, showing that Andes can handle up to 1.6\u00d7 higher request rate while preserving high QoE, or enhance QoE by up to 3.2\u00d7 without additional resource expenditure.", + "additional_info": [ + { + "url": "http://arxiv.org/abs/2404.13556v1", + "title": "ChatRetriever: Adapting Large Language Models for Generalized and Robust Conversational Dense Retrieval", + "abstract": "Conversational search requires accurate interpretation of user intent from\ncomplex multi-turn contexts. This paper presents ChatRetriever, which inherits\nthe strong generalization capability of large language models to robustly\nrepresent complex conversational sessions for dense retrieval. 
To achieve this,\nwe propose a simple and effective dual-learning approach that adapts LLM for\nretrieval via contrastive learning while enhancing the complex session\nunderstanding through masked instruction tuning on high-quality conversational\ninstruction tuning data. Extensive experiments on five conversational search\nbenchmarks demonstrate that ChatRetriever substantially outperforms existing\nconversational dense retrievers, achieving state-of-the-art performance on par\nwith LLM-based rewriting approaches. Furthermore, ChatRetriever exhibits\nsuperior robustness in handling diverse conversational contexts. Our work\nhighlights the potential of adapting LLMs for retrieval with complex inputs\nlike conversational search sessions and proposes an effective approach to\nadvance this research direction.", + "authors": "Kelong Mao, Chenlong Deng, Haonan Chen, Fengran Mo, Zheng Liu, Tetsuya Sakai, Zhicheng Dou", + "published": "2024-04-21", + "updated": "2024-04-21", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "ChatRetriever: Adapting Large Language Models for Generalized and Robust Conversational Dense Retrieval", + "main_content": "Introduction Conversational search is rapidly gaining prominence and reshaping how users interact with search engines to foster a more natural informationseeking experience. At the heart of a conversational search system lie two key components: retrieval and generation (Gao et al., 2022; Zhu et al., 2023). The retrieval process is tasked with sourcing relevant passages, which the generation component then uses to craft the final response. Conversational retrieval plays a crucial role in ensuring the accuracy and reliability of the system responses by providing relevant passages (Liu et al., 2023). Compared to traditional ad-hoc web search, conversational retrieval requires an accurate under*Corresponding author. !!: Can the bottom of the ocean freeze? # !: Ocean water freezes just like freshwater, \u2026, because of the salt\u2026 !\": How does it freeze? How does the bottom of ocean water freeze? ChatRetriever LLM \u201cReformulate the current query into a context-free rewrite\u201d Conversational Search Session LLM Prompting LLM-based Rewriter LLM Conv. Retrieval Adaption CSIT on high-quality conversational instruction tuning data Figure 1: Illustration of adapting LLM for query rewriting and conversational dense retrieval. standing of the user\u2019s real search intent within longer, noisier, and more complex conversational contexts. A \u201cshortcut\u201d approach is to transform the conversational session into a standalone query rewrite, enabling the usage of ad-hoc retrievers for conversational retrieval. However, the additionally introduced rewriting process is hard to directly optimize towards better retrieval, and it also introduces extra search latency from the rewriting step (Yu et al., 2021). In contrast, the end-to-end conversational dense retrieval appears to be more promising, as it directly encodes the original conversational search session and passages into dense representations without additional input processing and can enjoy the efficiency benefit from advanced approximate nearest neighbor search algorithms (e.g. Faiss (Johnson et al., 2021)). Nonetheless, the effectiveness of existing conversational dense retrievers largely trails behind state-of-the-art conversational query rewriting approaches, which leverage large language models (LLMs). 
Owing to their strong text understanding and generation capabilities, LLM-based rewriters (Mao et al., 2023b; Ye et al., 2023) have demonstrated exceptional effectiveness, even outperforming human rewrites. Given that LLMs are inherently generative models, they can naturally serve as a high-quality conversational rewriter just through prompting (Figure 1). The question that remains is: whether the potent capabilities of LLMs can be harnessed to substantially enhance the performance of conversational dense retrievers. Several studies have explored tuning LLMs for arXiv:2404.13556v1 [cs.IR] 21 Apr 2024 \fdense retrieval but with a primary focus on ad-hoc search (Asai et al., 2023; Su et al., 2023; Ma et al., 2023; Wang et al., 2024; Muennighoff et al., 2024). While in conversational search, the multi-turn sessions exhibit greater diversity, complex expressions, and longer-tail intents compared to singleturn ad-hoc queries, posing severe challenges to the session representation learning. Additionally, these approaches often rely on manually designed and fixed instruction templates, which can considerably limit their ability to generalize and handle intricate conversational scenarios. In this work, we propose adapting LLM itself to serve as a powerful conversational dense retriever. To achieve this, we select high-quality conversational instruction tuning data (Ding et al., 2023) as our training data and propose a simple dual-learning approach called Contrastive SessionMasked Instruction Tuning (CSIT) for the model training. Specifically, we adopt the classical contrastive ranking loss function (Izacard et al., 2022) to fine-tune LLM from a generative model to a retrieval (or representational) model on the multiturn instruction (i.e., session)-response pairs, using the special tokens at the end of the input text to represent the entire text. Meanwhile, we mix the basic contrastive learning with a session-masked instruction tuning objective, where we mask all tokens except the special tokens of the session when computing the language modeling loss of the response tokens. The incorporation of this generative instruction tuning loss forces a strong enhancement in the learning of the complex session representation since the response tokens have to be generated solely based on the special tokens representing the session. Furthermore, it also helps retain the strong generalization capability of LLM for retrieval. Our resulting model, which we call ChatRetriever, can inherit the strong generalization capability of LLM to robustly represent complex conversational sessions for dense retrieval. We conducted extensive experiments across five conversational search benchmarks, where ChatRetriever substantially outperforms existing conversational dense retrievers. Notably, it achieves absolute NDCG@3 improvements of 6.8% and 12.2% on CAsT-20 and CAsT-21, respectively, matching the performance of the leading LLM-based conversational query rewriting methods. Beyond standard evaluations using fixed conversational trajectories, we also developed two robustness evaluation methods to assess the resilience of conversational retrieval approaches by altering the historical context. ChatRetriever demonstrates markedly more stable performance in our robustness test, showcasing its superior robustness in comparison to baselines when faced with varied contexts. 
Our contributions can be summarized as: (1) We introduce ChatRetriever, the first LLMadapted conversational dense retriever, which substantially outperforms existing conversational dense retrievers and achieves performance comparable to LLM-based rewriting approaches. (2) We propose Contrastive Session-Masked Instruction Tuning for such a retrieval-oriented adaption for LLM, which can help achieve better complex session representation and generalization. (3) We design two robustness evaluation methods for conversational retrieval by systematically varying the conversation contexts. Results highlight ChatRetriever\u2019s superior generalization capability in handling diverse conversational search scenarios. 2 Related Work Conversational search has seen the development of two primary approaches: conversational query rewriting (CQR) and conversational dense retrieval (CDR). The former approach transforms the conversational search problem into a traditional ad-hoc search problem by reformulating the conversational context into a standalone query. Techniques in this area range from selecting useful tokens from the context (Voskarides et al., 2020; Lin et al., 2021b) to training generative rewriters based on session-rewrite pairs (Yu et al., 2020; Wu et al., 2022; Mao et al., 2023a; Mo et al., 2023a). Inspired by the strong language generation capability of LLMs, some studies (Mao et al., 2023b; Ye et al., 2023; Yoon et al., 2024) propose to leverage LLMs as query rewriters and achieve amazing performance. Conversational dense retrieval (CDR), on the other hand, directly encodes the entire conversational session for end-to-end dense retrieval (Yu et al., 2021). Efforts in this direction have focused on improving session representation through various perspectives such as context denoising (Mao et al., 2022a; Mo et al., 2023b; Mao et al., 2023c), data augmentation using other corpus and LLMs (Lin et al., 2021a; Mao et al., 2022b; Dai et al., 2022; Jin et al., 2023; Chen et al., 2024; Mo et al., 2024b), and hard negative mining (Kim and Kim, 2022; Mo et al., 2024a). \fLLM-based and instruction-aware retrieval. Existing research has demonstrated that similar to the scaling laws (Kaplan et al., 2020) observed in LLMs, increasing the scale of models, data, and computing resources can also enhance the performance of retrieval models (Ni et al., 2022). To incorporate the ability to follow instructions into retrievers, some studies (Su et al., 2023; Asai et al., 2023) propose the creation of fixed instruction templates for various retrieval tasks, and use these instruction-enhanced datasets to train the retrievers. Moreover, there have been efforts to adapt LLMs for retrieval purposes by training on improved search data (Ma et al., 2023; Wang et al., 2024) or developing new search-oriented training objectives (Li et al., 2023). However, these approaches often rely on manually designed and fixed instruction templates, which can limit the generalization capabilities of the retrievers across diverse instructions. Additionally, they are typically designed for single-turn ad-hoc search, lacking the capability to comprehend long and complex search sessions. In contrast to LLMs, which can smoothly understand a wide range of complex user inputs, existing LLM-based retrievers still exhibit a large gap in their generalization capabilities, particularly in the context of conversational search. 
3 Methodology We describe our simple and effective dual-learning approach, Contrastive Session-Masked Instruction Tuning (CSIT), which is designed to adapt an LLM into a generalized and robust conversational dense retriever. An overview is shown in Figure 2. Contrastive instruction tuning. Recent works have demonstrated the effectiveness of simply using the contrastive ranking loss to adapt an LLM to a retriever (Asai et al., 2023; Su et al., 2023; Ma et al., 2023; Wang et al., 2024; Muennighoff et al., 2024). However, their generalization capability can be limited as they overfit the narrow distribution of ad-hoc queries and fixed instruction templates they were trained on. We fine-tune the LLM on diverse conversational instruction tuning data for more general conversational retrieval adaptation. Specifically, given a training sample {(x, y+)} from a conversational instruction tuning dataset, where x comprises all historical turns and the current instruction (we call x a session) and y is the response, we fine-tune the LLM with the contrastive ranking loss: $\mathcal{L}_C = -\log \frac{\phi(x, y^+)}{\phi(x, y^+) + \sum_{y^- \in D^-} \phi(x, y^-)}$, (1) where $\phi(x, y) = \exp((E(x) \cdot E(y))/\tau)$, $E(\cdot)$ is the shared text encoder of the retriever, $D^-$ is a negative response collection for x, and $\tau$ is a temperature hyperparameter. To encode text with the LLM, we append t special tokens ([EMB1], ..., [EMBt]) to the end of the input text and utilize the representation of the last token ([EMBt]) as the comprehensive representation of the entire text. This approach is analogous to the text-level chain-of-thought (CoT) (Wei et al., 2020) for LLMs. We hypothesize that these t consecutive special tokens act as a representational chain-of-thought, expanding and guiding the learning space to achieve a more effective representation. Session-masked instruction tuning. To enhance the generalized encoding of complex search sessions, we integrate a session-masked instruction tuning objective with the fundamental contrastive learning. Given a training sample (x, y+), we concatenate the instruction and the response to form one input sequence s: $s = [x_1, \ldots, x_N, \mathrm{[EMB_1]}, \ldots, \mathrm{[EMB_t]}, y^+_1, \ldots, y^+_M, \mathrm{[EMB_1]}, \ldots, \mathrm{[EMB_t]}]$, (2) where $x_i$ and $y^+_i$ represent the i-th token of the session and the response, respectively, and N and M denote the total number of tokens in the session and the response, respectively. We then input this sequence into the LLM to obtain the token representations. Specifically, the representations for the (N + t) session tokens are obtained through a standard auto-regressive process. However, for the subsequent (M + t) response token representations, we mask the N session token representations and allow attention only to the t special session tokens and the preceding response tokens. We achieve this by applying a customized attention mask matrix illustrated on the right side of Figure 1. Correspondingly, the loss function of the session-masked instruction tuning is defined as: $\mathcal{L}_S = -\frac{1}{M}\sum_{i=1}^{M} \log p(y^+_i \mid y^+_1, \ldots, y^+_{i-1}, x_{1:t})$, (3) where $x_{1:t}$ are the representations of the t session special tokens, which have been contextualized by the N session tokens. [Q1] [R1] [Q2] [R2] Session Response Session-Masked Attention Matrix Session-Masked Language Modeling Loss Contrastive Ranking Loss Session: Q1: Can the bottom of the ocean freeze? R1: Ocean water freezes just like freshwater, …, because of … Q2: How does it freeze?
Response (R2): Freezing happens when the molecules, \u2026, a solid crystal. Session-Response Concatenation A Training Sample Session Response ChatRetriever ChatRetriever ChatRetriever Figure 2: Overview of CSIT. We fine-tune LLM to be ChatRetriever using dual learning objectives. We use the last special token (i.e., ) to represent the input text, which can be session or response. In the session-masked attention matrix, the blue squares denote the session or the response tokens while the green squares denote their special tokens. By masking the session text and forcing correct generation for the response tokens, we build a closer connection between the session representation and the response token representations. The model has to perform a more nuanced understanding of the complex session and accurately encode them into the t session special tokens. We combine the contrastive instruction tuning and the session-masked instruction tuning to form the final training objective of ChatRetriever: L = LC + \u03b1LS, (4) where \u03b1 is a hyperparameter to balance the two losses. Discussion. Our dual-learning approach CSIT takes inspiration from several notable works in LLM-based retrieval and input compression such as RepLLaMA (Ma et al., 2023), E5mistral-7b (Wang et al., 2024), GRIT (Muennighoff et al., 2024), Gisting (Mu et al., 2023), and AutoCompressor (Chevalier et al., 2023). However, CSIT distinguishes from them in the following key aspects: (1) RepLLaMA and E5mistral-7b primarily focus on contrastive learning using (synthetic) ad-hoc search data with pre-defined instruction templates, which is hard to generalize to complex conversational search scenarios. (2) GRIT aims to build a unified model for both retrieval and generation, incorporating vanilla instruction tuning and using different training data for its contrastive learning and instruction tuning. (3) The mechanism of our session-masked instruction tuning shares similarities with Gisting and AutoCompressor, but they are for a completely different target: improving longcontext language modeling, not retrieval. In contrast, CSIT stands out from these works by specifically addressing the challenges of adapting LLM generalized to complex conversational retrieval. 4 Experiments 4.1 Setup Training data. We fine-tune LLM to be ChatRetriever on high-quality conversational instruction tuning datasets. We select training samples that are informative, diverse, and exhibit informationseeking intents. Our final training data comprises two sources: (1) The Question About the World subset of UltraChat (Ding et al., 2023) and (2) MSMARCO (Nguyen et al., 2016) passage ranking dataset. Ultrachat is a multi-turn instruction tuning dataset while MSMARCO can be deemed as a single-turn search-oriented instruction tuning dataset by treating the query as the instruction and the positive passage as the response. We find that incorporating MSMARCO is important to improve the basic (ad-hoc) retrieval performance. Evaluation data and metrics. We conduct evaluations on five public conversational search benchmarks, including QReCC (Anantha et al., 2021), TopiOCQA (Adlakha et al., 2022), CAsT-19 (Dalton et al., 2020), CAsT-20 (Dalton et al., 2021), and CAsT-21 (Dalton et al., 2022). The retrieval corpus sizes of these five datasets are in the tens of millions. Among them, the large-scale QReCC and TopiOCQA have training sets, while the other three CAsT datasets are small datasets that only have test sets. 
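As a concrete reference for the CSIT objective (Eqs. 1-4), the following PyTorch-style sketch shows how the contrastive ranking loss and the session-masked language-modeling loss can be combined; the tensor and function names, as well as the default tau and alpha values, are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(session_emb, pos_emb, neg_embs, tau=0.05):
    """Contrastive ranking loss (Eq. 1). Embeddings are taken from the last
    appended [EMB_t] token of the session, the positive response, and the
    negative responses. Shapes: (B, D), (B, D), (B, K, D)."""
    pos = torch.exp((session_emb * pos_emb).sum(-1) / tau)                    # (B,)
    neg = torch.exp(torch.einsum("bd,bkd->bk", session_emb, neg_embs) / tau)  # (B, K)
    return -torch.log(pos / (pos + neg.sum(-1))).mean()

def session_masked_lm_loss(response_logits, response_labels):
    """Session-masked instruction tuning loss (Eq. 3). The custom attention
    mask (response tokens attend only to the t session [EMB] tokens and to
    preceding response tokens) is applied inside the model's forward pass,
    which is not shown here."""
    return F.cross_entropy(
        response_logits.view(-1, response_logits.size(-1)),
        response_labels.view(-1),
        ignore_index=-100,  # ignore padded positions
    )

def csit_loss(session_emb, pos_emb, neg_embs, response_logits, response_labels, alpha=1.0):
    """Final objective (Eq. 4): L = L_C + alpha * L_S."""
    return contrastive_loss(session_emb, pos_emb, neg_embs) + alpha * session_masked_lm_loss(
        response_logits, response_labels
    )
```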
We mainly report NDCG@3 to evaluate the retrieval performance, as conversational search is more concerned with the top results (Dalton et al., 2021). Baselines. We compare ChatRetriever against the following three types of retrieval baselines. The first is CQR baselines, including T5QR (Lin et al., 2020), ConvGQR (Mo et al., 2023a), and LLM4CS (Mao et al., 2023b). The original \fModel Base Model #Model Parameter QReCC TopiOCQA CAsT-19 CAsT-20 CAsT-21 Conversational Query Rewriting T5QR T5-base (Raffel et al., 2020) 250M 31.8 22.2 41.7 29.9 33.0 ConvGQR T5-base (Raffel et al., 2020) 250M 41.0 24.3 43.4 33.1 27.3 LLM4CS (REW) ChatGPT-3.5 (OpenAI) Unknown 43.1 35.7 40.4 LLM4CS (RAR) ChatGPT-3.5 (OpenAI) Unknown 45.3 39.5 44.9 LLM4CS ChatGPT-3.5 (OpenAI) Unknown 51.5 45.5 49.2 LLM-based Retrieval LLM Embedder BGE (Xiao et al., 2023) 110M 50.5 22.4 36.6 15.3 31.2 INSTRCUTOR GTR-XL (Ni et al., 2022) 1.5B 42.3 12.3 26.8 17.3 32.4 RepLLaMA LLaMA-2 (Touvron et al., 2023) 7B 31.8 15.0 31.6 18.3 32.7 E5mistral-7b Mistral (Jiang et al., 2023) 7B 32.9 16.9 31.3 15.4 32.4 GRIT Mistral (Jiang et al., 2023) 7B 33.5 17.3 30.9 19.3 33.6 Conversational Dense Retrieval Conv-ANCE ANCE (Xiong et al., 2021) 110M 45.6 20.5 34.1 27.5 34.2 ConvDR ANCE (Xiong et al., 2021) 110M 35.7 26.4 43.9 32.4 37.4 DialogInpainter T5-Large (Raffel et al., 2020) 770M 47.0 33.2 LeCoRE SPLADE (Formal et al., 2022) 110M 48.5 31.4 42.2 29.0 32.3 ChatRetriever Qwen (Bai et al., 2023) 7B 52.5\u2020 40.1\u2020 52.1\u2020 40.0\u2020 49.6\u2020 Table 1: Results of the normal evaluation on five conversational search benchmarks. The base models of CQR methods are their rewriters and the model parameters are also counted as the rewriter\u2019s parameters. \u2020 denotes significant differences to baselines (p < 0.05). The best results are bold and the second-best results are underlined. LLM4CS has three prompting methods: REW, RAR, and RTR, and it requires multiple rounds of generation, which is time-consuming. For efficiency consideration, we additionally compare with its two single-generation variants based on RAR and REW; The second is CDR baselines, including ConvDR (Yu et al., 2021), ConvANCE (Mao et al., 2023c), DialogInpainter (Dai et al., 2022), and LeCoRE (Mao et al., 2023c); The third is the LLM-based retriever baselines, including INSTRUCTOR (Su et al., 2023), LLM Embedder (Zhang et al., 2023), RepLLaMA (Ma et al., 2023), E5mistral-7b (Wang et al., 2024), and GRIT (Muennighoff et al., 2024). More baseline details on in Appendix A. Implementations. We initialize ChatRetriever with Qwen-7B-Chat (Bai et al., 2023) and train it on eight 40G A100 GPUs using LoRA (Hu et al., 2022) with a maximum input sequence length of 1024. The training process involves 2500 steps with a learning rate of 1e-4, a gradient accumulation of 4 steps, a batch size of 64, and 4 hard negatives per sample. For consistency, we adopt the chatml input format of Qwen-Chat to form the input of ChatRetriever. We add three special tokens (i.e., <|extra_1|>, <|extra_2|>, and <|extra_3|>) at the end of the instructions and responses. For baseline comparisons, we adhere to the implementation settings specified in their original papers. Code is released at https://github.com/kyriemao/ ChatRetriever. 4.2 Normal Evaluation The retrieval performance comparisons on the five datasets are reported in Table 1. Our proposed ChatRetriever outperforms all the baseline methods across these datasets. 
Existing conversational dense retrievers are constrained by limited model capacity and data quality, resulting in suboptimal performance for conversational retrieval tasks. Prior to ChatRetriever, there was a considerable performance gap between existing conversational dense retrieval methods and the state-ofthe-art LLM-based conversational query rewriter (i.e., LLM4CS). Specifically, the absolute gaps between the best existing CDR model and LLM4CS were 1.6%, 12.2%, and 11.8% on the three CAsT datasets, respectively. However, ChatRetriever can achieve comparable or even superior performance to LLM4CS, highlighting the high potential of endto-end conversational dense retrieval compared to the two-stage approach of conversational query rewriting methods. If we force LLM4CS to generate a single output (RAR) or only consider query rewriting (REW) for efficiency, the advantages of ChatRetriever become even more pronounced, with over 4% absolute gains. We also observe that ex\fModel Partial Response Modification Full Context Modification CAsT-19 CAsT-20 CAsT-21 CAsT-19 CAsT-20 CAsT-21 NDCG@3\u2191 Diff.\u2193 NDCG@3\u2191 Diff.\u2193 NDCG@3\u2191 Diff.\u2193 Mean\u2191 SD\u2193 Mean\u2191 SD\u2193 Mean\u2191 SD\u2193 LLM4CS 50.4 1.1 43.8 1.7 49.4 0.2 49.7 1.5 44.0 1.1 48.4 1.4 ConvDR 44.3 0.4 31.0 1.4 34.8 2.6 39.3 3.4 30.2 2.6 35.8 2.9 LeCoRE 44.5 2.3 25.4 3.6 29.9 2.4 42.0 1.9 28.3 2.2 31.0 2.3 ChatRetriever 52.2 0.1 39.5 0.5 48.9 0.7 51.5 1.6 45.8 1.7 48.8 1.8 Table 2: Results of the robust evaluation. Diff. represents the absolute difference compared to the results in Table 1 and SD represents the standard deviation, where a smaller value means more stable. isting LLM-based retrievers do not perform well on conversational retrieval tasks. This can be attributed to the fact that they are fine-tuned solely on templated instructions, which fails to fully leverage the generalization capabilities of LLMs to handle complex and diverse conversational scenarios. 4.3 Robustness Evaluation Existing evaluations for conversational retrieval are mainly conducted on fixed conversation trajectories. In this section, we evaluate the robustness of conversational retrievers in different contexts. Our principle is modifying the context but fixing the current query (i.e., search intents) for each turn so that the original relevance labels can be re-used. Specifically, we propose the following two types of context modification: (1) Partial response modification: We do not use the provided responses in the evaluation dataset. Instead, for each turn, we input the current query, the context, and the top-3 passages retrieved by the conversational retriever, and prompt LLM to generate the response. The simulated online nature of generating responses turn-by-turn better matches how conversational retrieval systems are used in practice. However, a problem with this online evaluation manner is that the query of the next turn in the original dataset may become unreasonable after modifying its last response (Li et al., 2022). We propose a simple heuristic method to tackle this problem with LLM. Specifically, we prompt LLM to judge whether the current query is reasonable given the context. If not, we replace the current query with its human rewrite to make it stand on its own without needing external context. Otherwise, we can use the original query. The prompts can be found in Appendix B. 
(2) Full context modification: For each turn, we supply the original query and its human-modified version to the LLM, prompting it to generate new contexts (See Appendix C). We finally got five different contexts for each turn. We evaluate conversational retrievers based on different contexts generated by these two modification methods using ChatGPT 3.5. For the partial response modification setting, we report the retrieval performances and their absolute differences (Diff.) compared to the original counterpart results reported in Table 1. For the full context modification setting, we report the Mean performance of different runs and their standard deviation (SD). The robust evaluation results are shown in Table 2. For the partial response modification setting, it shows that the performance changes of ChatRetriever are the smallest. By referring to Table 1, we also observe a general degradation in retrieval performance compared to the original context. This degradation may stem from the retrieved passages being inaccurate, consequently leading to inaccurate responses, and then affecting the retrieval performance of the subsequent turns. For the full context modification setting, the robustness of ChatRetriever is further highlighted by its small average standard deviation of 1.7, which is lower compared to the 3.0 and 2.1 standard deviations observed for ConvDR and LeCoRE, respectively. These results demonstrate the strong robustness of ChatRetriever to different conversational search contexts. In contrast, the LLM4CS, which utilizes ChatGPT for query rewriting, shows an even lower standard deviation of 1.3, demonstrating the superior robustness of ChatGPT for conversational query rewriting. 4.4 Ablation Studies We build four ablations to study the effects of our proposed training approach: (1) w/o R-CoT: removing the representational CoT; (2) w/o SIT: removing the session-masked instruction tuning; (3) with Vanilla IT: replacing the session-masked instruction tuning with vanilla instruction tuning. Table 4 shows the ablation results. We find that \fBase LLM Model Parameter Base/Chat Training CAsT-19 CAsT-20 CAsT-21 Qwen 1.8B Chat Full 38.8 33.7 45.2 Qwen 1.8B Chat LoRA 35.1 31.9 42.4 Qwen 7B Base LoRA 46.9 37.7 46.5 Qwen 7B Chat LoRA 50.5 40.0 49.6 LLaMA-2 7B Chat LoRA 47.3 38.4 49.1 Mistrial 7B Chat LoRA 49.5 39.2 49.6 Table 3: Performance comparisons of ChatRetrievers under different settings with different backbone LLMs. Ablation CAsT-19 CAsT-20 CAsT-21 w/o SIT 49.5 36.8 45.8 w/o R-CoT 49.9 38.5 47.5 with Vanilla IT 51.1 39.3 48.4 CSIT 52.1 40.0 49.6 Table 4: Results of ablation studies. either removing the representational CoT or removing or replacing session-masked instruction tuning can lead to performance degradation. By contrast, the session-masked instruction tuning, which achieves 6.6% relative performance gains across the three CAsT datasets on average, is shown to be more effective than representational CoT, which achieves 3.4% relative performance gains on average. The results suggest that our two techniques have positive effects in helping adapt LLMs for conversational retrieval. We also studied the influence of the number of special CoT tokens, which can be found in Appendix D. 4.5 Influence of LLMs Table 3 shows the comparisons between different settings about the backbone LLM of ChatRetriever. (1) Base vs. Chat. Our results indicate that the Chat model outperforms the Base model, which aligns with our expectations. 
We hypothesize that the ability to follow instructions well is indicative of strong generalization capabilities, which are crucial for complex conversational search tasks. Therefore, the Chat model, having been fine-tuned for conversational instructions, provides a more appropriate foundation for this task. (2) Different LLMs. We find that different LLMs have similar performance under our training recipe. The relatively worst variation based on LLaMA-2 still largely outperforms existing conversational dense retrieval baselines on the more complex CAsT-20 and CAsT-21 datasets, and also outperforms smaller ChatRetrievers. (3) LoRA vs. full parameter tuning. Due to constraints in computing resources, our investigation into training modes (i.e., LoRA vs. full parameter tuning) was limited to the 1.8B scale model. Our findings indicate that employing LoRA training yields inferior performance compared to full parameter tuning. However, this may be attributed to the LoRA parameter capacity being insufficient for the 1.8B model. 4.6 Influence of Training Data Fine-tuning on different data sources. Table 6 presents the performance of ChatRetriever when trained solely on UltraChat, solely on MSMARCO, and on a combination of QReCC+MSMARCO (i.e., replacing UltraChat with the QReCC\u2019s training set). The model performance is evaluated using both session inputs and human rewrite inputs (i.e., converted to ad-hoc search). We find that training exclusively on UltraChat leads to a decline in performance for both input types, with a more pronounced degradation observed for the rewrite input. Conversely, training solely on MSMARCO yields comparable results for the rewrite input but considerably worse performance for the session input. These results suggest that MSMARCO effectively enhances the ad-hoc retrieval capabilities of LLMs, possibly due to its well-curated hard negatives. However, ad-hoc search data from MSMARCO alone is insufficient for transferring the generalization capability of LLMs to the more complex context of conversational search. The traditional conversational QA data (i.e., QReCC) is also not highly effective for LLMs in learning a diverse range of complex conversational patterns. To optimize LLM to be a universal conversational retriever, we recommend combining general conversational instruction tuning data (e.g., UltraChat) with ad-hoc search-oriented instruction tuning data (e.g., MSMARCO). Continuelly fine-tuning baselines on the same \fMethods QReCC TopiOCQA CAsT-19 CAsT-20 CAsT-21 Original New Original New Original New Original New Original New GRIT 33.5 48.3 17.3 36.0 30.9 47.1 19.3 35.7 33.6 45.3 Conv-ANCE 45.6 44.8 20.5 21.6 34.1 35.0 27.5 30.5 34.2 36.0 ConvDR 35.7 36.0 26.4 24.9 43.9 43.2 32.4 30.9 37.4 35.5 LeCoRE 48.5 46.1 31.4 31.0 42.2 42.9 29.0 30.1 32.3 33.4 ChatRetriever 52.5 40.1 52.1 40.0 49.6 Table 5: Results of continually fine-tuning baselines on the training data of ChatRetriever. \u201cOriginal\u201d and \u201cNew\u201d denote the performance before and after fine-tuning, respectively. 100 500 1000 1500 2000 2500 20 30 40 50 60 NDCG@3 31.2 38.5 39.4 39.6 39.9 40.0 44.8 47.9 48.7 49.5 50.2 49.9 CAsT-20 Session Human Rewrite 100 500 1000 1500 2000 2500 30 40 50 60 70 NDCG@3 41.7 46.9 49.1 48.9 49.7 49.6 50.8 58.1 58.7 59.5 59.0 59.2 CAsT-21 Session Human Rewrite Figure 3: Performance of ChatRetriever at different training steps. 
Data Source CAsT-20 CAsT-21 Session Rewrite Session Rewrite Only U 39.5 43.7 46.5 50.0 Only M 18.3 49.8 34.1 58.9 Q+M 31.5 46.9 42.4 47.9 U+M 40.0 49.9 49.6 59.2 Table 6: Comparisons of using different data sources combinations for training. U, M, and Q represent UltraChat, MSMARCO, and QReCC, respectively. training data of ChatRetriever. In Table 1, we follow the original training settings of the baselines. Here, we further fine-tune baselines on the training data of ChatRetriever. Results are shown in Table 5 and we find: (1) GRIT, a unified retrieval and generation model based on LLM, showed substantial performance improvement after fine-tuning on conversational instruction tuning data. Its performance approached that of ChatRetriever without session-masked instruction tuning, although it still lagged behind the final ChatRetriever. (2) The performance of Conv-ANCE, ConvDR, and LeCoRE did not show noticeable improvements and even experienced declines in QReCC and TopiOCQA. This may be because that the newly introduced training data disrupted their original in-domain training-test settings, as they were initially trained on the in-domain training sets of QReCC and TopiOCQA. This also highlights the robust generalization of ChatRetriever, which, when trained only on general conversational instruction tuning data, can effectively adapt to various conversational search test sets. Data volume. Figure 3 shows the performance of ChatRetriever across various training steps. It is observed that the performance attains a relatively high level at 500 steps and subsequently experiences marginal improvements as the number of training steps increases. The performance stabilizes upon reaching 2500 steps. Furthermore, the trends for inputs with sessions and human rewrites are similar. These findings suggest that, under our framework, adapting LLMs to function effectively as conversational retrievers may require only a small amount of high-quality data. 5 Conclusion In this paper, we introduce ChatRetriever, a large conversational retrieval model adapted from LLM. We propose a novel contrastive session-masked instruction tuning approach for this adaptation and fine-tune LLM on high-quality conversational instruction tuning data. Experimental results on five conversational retrieval datasets demonstrate the superior performance and robustness of ChatRetriever. Looking ahead, we aim to further explore and expand the generalization capabilities of ChatRetriever in a broader range of complex IR scenarios beyond conversational search, such as legal case retrieval, product search, and other instructionfollowed search tasks. We envision ChatRetriever to be as versatile as LLMs, capable of accepting \fand understanding any conversational inputs and retrieving useful information for those inputs. Limitations Efficiency. As indicated in Table 1, ChatRetriever is a 7B model which is much larger than existing CDR models. Our preliminary findings (Section 4.5) suggest that the large model size is a crucial factor for ChatRetriever\u2019s exceptional performance. However, this also raises efficiency concerns. With an embedding dimension of 4096, ChatRetriever incurs higher time and storage costs for indexing and retrieval than existing CDR models. Nevertheless, on the one hand, ChatRetriever\u2019s enhanced retrieval accuracy potentially reduces the need for extensive passage re-ranking, which could, in real-world applications, offset the initial higher costs by ultimately reducing the total time spent on ranking. 
On the other hand, we view ChatRetriever as a promising research direction in leveraging the potent capabilities of LLMs for more complex and potentially universal retrieval tasks. We are exploring the possibility of distilling ChatRetriever into a more efficient, smaller model. Hard Negatives. Unlike typical search datasets that provide a large retrieval corpus, the conversational instruction tuning dataset we used (i.e., UltraChat) consists of only multi-turn instructions (i.e., sessions) and responses. In this work, we simply chose the CAsT-21 corpus for the hard negative mining of UltraChat (see Appendix A.3). However, as existing studies have shown, hard negatives are crucial for improving retrieval performance (Zhan et al., 2021; Zhou et al., 2022). Therefore, a better strategy for mining hard negatives tailored to instruction tuning data is desirable. We plan to explore using LLMs to generate hard negatives for instructions similar to (Wang et al., 2024). Generalizability. ChatRetriever substantially outperforms existing CDR models in understanding and retrieving information for complex multi-turn inputs and achieves comparable performance to state-of-the-art LLM-based rewriting, showcasing its strong generalization capability. However, it has not yet achieved the same level of generalization as LLMs, particularly in following complex retrieval instructions, addressing very detailed information needs, or performing in-context learning across various specific domains. It is worth noting that existing instruction-aware retrievers (Su et al., 2023; Zhang et al., 2023; Muennighoff et al., 2024) also have limitations in perceiving complex (multi-turn) instructions that largely fall short of the generality of LLMs, as highlighted in this work (Table 1) and also in recent studies (Oh et al., 2024; Weller et al., 2024). As stated in our conclusion, we are committed to further advancing ChatRetriever\u2019s generalization capabilities to match those of LLMs." + }, + { + "url": "http://arxiv.org/abs/2404.13957v1", + "title": "How Well Can LLMs Echo Us? Evaluating AI Chatbots' Role-Play Ability with ECHO", + "abstract": "The role-play ability of Large Language Models (LLMs) has emerged as a\npopular research direction. However, existing studies focus on imitating\nwell-known public figures or fictional characters, overlooking the potential\nfor simulating ordinary individuals. Such an oversight limits the potential for\nadvancements in digital human clones and non-player characters in video games.\nTo bridge this gap, we introduce ECHO, an evaluative framework inspired by the\nTuring test. This framework engages the acquaintances of the target individuals\nto distinguish between human and machine-generated responses. Notably, our\nframework focuses on emulating average individuals rather than historical or\nfictional figures, presenting a unique advantage to apply the Turing Test. We\nevaluated three role-playing LLMs using ECHO, with GPT-3.5 and GPT-4 serving as\nfoundational models, alongside the online application GPTs from OpenAI. Our\nresults demonstrate that GPT-4 more effectively deceives human evaluators, and\nGPTs achieves a leading success rate of 48.3%. Furthermore, we investigated\nwhether LLMs could discern between human-generated and machine-generated texts.\nWhile GPT-4 can identify differences, it could not determine which texts were\nhuman-produced. 
Our code and results of reproducing the role-playing LLMs are\nmade publicly available via https://github.com/CUHK-ARISE/ECHO.", + "authors": "Man Tik Ng, Hui Tung Tse, Jen-tse Huang, Jingjing Li, Wenxuan Wang, Michael R. Lyu", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "How Well Can LLMs Echo Us? Evaluating AI Chatbots' Role-Play Ability with ECHO", + "main_content": "Introduction Large Language Models (LLMs) have recently made significant breakthroughs in the field of Artificial Intelligence (AI). Notably, ChatGPT1, one of the leading commercial models, has showcased its capabilities across different Natural Language Processing (NLP) tasks, such as information retrieval (Zhu et al., 2023), computer programming (Surameery & Shakor, 2023), grammar checking (Wu et al., 2023), and sentence translation (Jiao et al., 2023). Trained on extensive datasets, LLMs also demonstrate applicability beyond NLP tasks, extending to domains such as healthcare (Johnson et al., 2023), education (Baidoo-Anu & Ansah, 2023), legal service (Guha et al., 2024), and product design (Lanzi & Loiacono, 2023). Given LLMs\u2019 extensive capabilities, researchers have explored their human-like abilities (Huang et al., 2024b; 2023) and their performance on complex tasks (Wan et al., 2024; Huang et al., 2024a). Role-playing, the act of changing one\u2019s behavior to fulfill a specific role, has been employed as a scenario to evaluate LLMs (Shanahan et al., 2023; Wang et al., 2023a) since it is a complicated task requiring various abilities. However, the evaluation of LLMs\u2019 role-playing ability remains relatively unexplored. Previous studies (Shao et al., 2023; Wang et al., 2023b) mainly focus on instructing LLMs to impersonate celebrities or fictional characters whose data are likely to be included in the training corpus of the LLMs. As a result, the ability of LLMs to role-play as typical individuals is not well understood, limiting our evaluation of their role-playing potential. This oversight could restrict the scope of *Equal contribution. \u2020Corresponding author. 1https://chat.openai.com/ 1 arXiv:2404.13957v1 [cs.CL] 22 Apr 2024 \fPreprint CR ED LG PH PS IP EM FP IS IT 20 40 60 80 RPP RoleGPT Juliet (a) GPT-3.5-Turbo Performance CR ED LG PH PS IP EM FP IS IT 20 40 60 80 RPP RoleGPT Juliet GPTs (b) GPT-4-Turbo and GPTs Performance Figure 1: Success rates of role-playing LLMs in deceiving human evaluators. The human evaluators are instructed to identify human-generated responses. assessing LLMs\u2019 role-playing capabilities and overlooking situations where LLMs could act as digital clones of humans, non-player characters in video games and metaverse, or, more concerningly, be used maliciously to impersonate individuals, spreading false information or damaging reputations. Addressing this gap, our study directs LLMs to emulate real, ordinary individuals instead of famous figures, leveraging the Turing test. As initially proposed by Turing (1950), this test gauges whether a machine can demonstrate intelligence indistinguishable from that of a human. In our study, we create a role-playing LLM using the profile of an actual person and invite acquaintances of this person to discern between responses from the actual individual and the LLM. 
Utilizing real-person data makes it possible to apply the Turing test and makes it easier to recruit annotators, which is advantageous over using profiles of well-known figures due to the accessibility of their acquaintances. However, a limitation arises in multi-round dialogues, where human evaluators can easily differentiate between LLMs and actual people by posing questions LLMs cannot answer, such as queries about the current time. This issue can shift evaluators\u2019 focus from assessing the LLMs\u2019 ability to think and act like the intended emulation target. To address this problem, we introduce a novel framework, ECHO, designed to specifically evaluate LLMs\u2019 proficiency in replicating a human\u2019s thought process within a particular domain. We evaluate four different role-playing methods, RoleGPT (Wang et al., 2023b), Juliet (Jones & Bergen, 2023), Role-Play Prompting (RPP) (Kong et al., 2023), and OpenAI\u2019s online application, GPTs (OpenAI, 2023). For the first three methods, we compare performance differences when utilizing GPT-3.5-Turbo versus GPT-4-Turbo. We collect the personal data of ten unique participants for instructing each method to role-play these individuals. Subsequently, we pose ten types of questions from various aspects to both the target participant and the role-playing LLMs. Each participant then invites their acquaintances to identify which responses they believe are written by the actual individual. Results indicate that the most effective role-playing method, the GPTs, achieved a 48.3% success rate in deceiving acquaintances. Moreover, we explore whether LLMs can discern between human and machine-generated responses. We instruct GPT-4, GPT-4-Turbo, and GeminiPro to discern between texts. Results show that GPT-4 can identify differences but could not determine which texts were human-produced. The contribution of this paper can be summarized as: 1. We propose ECHO, the first framework to conduct Turing tests on role-playing LLMs, which can effectively compare different role-playing methods. 2 \fPreprint 2. We conduct extensive experiments on ten participants, including constructing roleplaying LLMs with their profiles and inviting their acquaintances to discern between responses produced by LLMs and the actual individual. 3. We delve into LLMs\u2019 potential as evaluators in identifying human versus machinegenerated texts, addressing concerns about biases that might influence their judgment. 2 Related Work 2.1 Role-Playing LLMs Recent advancements in AI have led to an increased interest in the role-playing capabilities of LLMs, a field exploring how LLMs adopt and sustain specific characters or personas within conversational contexts. Studies examine LLMs\u2019 inherent ability to role-play and evaluate their consistency in depicting assigned roles, offering insights into their adaptability and versatility in dynamic interactions (Shanahan et al., 2023). Specialized frameworks such as RoleLLM (Wang et al., 2023b) and CharacterLLM (Shao et al., 2023) aim to benchmark or enhance these capabilities, while research by Kong et al. (2023) focuses on improving LLMs\u2019 zero-shot reasoning in various personas. 
Additional investigations, including CharacterGLM (Zhou et al., 2023) and ChatHaruhi (Li et al., 2023), extend role-playing studies to cultural and entertainment contexts, demonstrating LLMs\u2019 ability to animate fictional characters and engage with Chinese cultural themes, thereby illustrating their creative potential across diverse scenarios. Furthermore, platforms like character.ai2 provide innovative environments where users can interact with AI-generated characters, each exhibiting unique personalities and histories. OpenAI\u2019s GPTs (OpenAI, 2023) enable users to customize and utilize tailored GPT models for specific applications such as role-playing. 2.2 Turing Tests for LLMs The Turing Test, a foundational concept in AI history, initially assessed AI capabilities through text-based interactions, determining whether a judge is conversing with a human or a machine (Turing, 1950). The development of LLMs has expanded the scope. Jannai et al. (2023) executes a large-scale, global online Turing Test, challenging participants to distinguish between an LLM and a human during a two-minute conversation, with LLMs passing approximately 40% of the time. Furthermore, the TURINGBENCH framework (Uchendu et al., 2021) provides a systematic platform for evaluating the indistinguishability of LLM responses from those of humans, reflecting both advancements and limitations of current models. Similarly, Jones & Bergen (2023) explores a modified approach where an interrogator interacts with a single respondent to assess their human or AI identity, with a GPT-4 prompt passing 41 games. Sejnowski (2023) suggests that reverse Turing tests involving LLMs can yield insights into human cognitive dynamics rather than just the artificial nature of LLMs. Elkins & Chun (2020) demonstrates GPT-3\u2019s ability to emulate well-known authors\u2019 writing styles and themes, underscoring its potential in creative domains such as journalism and novel writing. Despite these advances, challenges persist. For example, LLMs often reveal their non-human nature when directly queried, reflecting their honesty-oriented programming. Moreover, experiments frequently place LLMs in ambiguous roles rather than directly imitating real individuals. Our research addresses these issues by focusing on the capability of LLMs to accurately replicate specific personalities, thereby providing a more nuanced assessment of their mimicry skills. 3 ECHO Design and Implementation ECHO is a human evaluation framework based on the Turing Test designed to assess the role-playing abilities of various LLMs. It consists of three phases: construction of role2https://character.ai/ 3 \fPreprint Humans Personal Information Role-Play LLMs Acquaintances Philosophical Questions: Is it possible to have knowledge without evidence? Please explain your reasoning. (a) Constructing Role-Play LLMs (b) Question Answering (c) Human Evaluation Which one of the answer do you think is written by your friend? A1: I think it is impossible, 'knowledge' needs ... A2: idk maybe, but sounds like guessing to me A3: Knowledge without evidence may exist in ... A4:\u00a0... A3! A2? Figure 2: An illustration of the design of ECHO. playing LLMs, collection of responses from machines and humans, and execution of human evaluations. The framework is depicted in Fig. 2. 
3.1 Constructing Role-Play LLMs The first challenge involves supplementing LLMs with sufficient personal data to accurately simulate certain individuals whose specific information is absent from the training corpus. Our objective is to enable LLMs to capture and reflect the individual\u2019s personality, experiences, and communication styles, thereby producing responses that authentically represent the individual\u2019s character and cognitive processes. To achieve this, we propose the following categories for collecting comprehensive background information: \u2022 Background and Interests: Education, Professional Background, Interests, and Hobbies. \u2022 Personal Identity: Personality Traits, Values, Beliefs, and Memorable Life Experiences. \u2022 Cultural Preferences: Favorite Books, Movies, and Music. \u2022 Cognition and Social Dynamics: Style in Problem-Solving, Communication, Social Interaction, Writing, and Speaking. The four categories provide a comprehensive framework by including both stable and dynamic aspects of an individual\u2019s profile, from demographic details to psychological traits. Additionally, by covering a wide spectrum from personal experiences to social behaviors, these categories enable the model to engage effectively across diverse cultural and social environments. We designed a questionnaire to include these four distinct aspects, comprising a total of ten questions. Details are provided in \u00a7A of the appendix. Participants are required to answer all questions completely and substantiate their responses, ensuring comprehensive and credible data collection. To enhance data quality, responses that do not adhere to our guidelines are manually reviewed and may be excluded to maintain data integrity. Subsequently, the data are input into role-playing LLMs to simulate each participant\u2019s behavior. 3.2 Collecting Responses To prevent evaluators from posing questions that could directly reveal whether a response originates from a machine (e.g., inquiries about the current time), we gathered responses from both humans and LLMs in advance using a set standard of questions. Both participants and their corresponding role-playing LLMs provided answers to the same questions. The responses are anonymized for the human evaluation phase. 4 \fPreprint Question Types This study categorizes questions into two primary dimensions: general and specific. General questions address broader themes, while specific questions delve into individual attributes informed by personal background information. General questions are further categorized into five sub-classes: \u2022 Creativity Questions (CR): Questions that require the generation of original ideas or the envisioning of scenarios by modifying or expanding existing concepts. \u2022 Ethical Dilemmas Questions (ED): Questions that compel respondents to reflect on and articulate their moral judgments in scenarios characterized by moral ambiguity or conflict. \u2022 Logical Questions (LG): Questions designed to evaluate an individual\u2019s capacity for structured, coherent, and logical thinking. \u2022 Philosophical Questions (PH): Questions that probe into profound, often abstract notions concerning human existence, ethics, knowledge, and reality. \u2022 Problem-Solving Questions (PS): Questions that demand analytical thinking and the formulation of practical solutions to hypothetical or real-world problems. 
Similarly, specific questions consist of the following five sub-dimensions: \u2022 In-Depth Personal Questions (IP): Questions that probe into an individual\u2019s personal experiences and reflections to understand their character, motivations, and life trajectory. \u2022 Emotional Questions (EM): inquiries that examine how individuals experience, manage, and interpret their emotions across different scenarios. \u2022 Future Prediction Questions (FP): Questions that prompt individuals to express their future aspirations, predictions, or plans, both personal and professional. \u2022 Insightful Questions (IS): Questions that invite individuals to share their unique insights or understanding on a specific subject or experience. \u2022 Interest Questions (IT): Questions that investigate how personal interests, hobbies, or passions influence an individual\u2019s perspectives, experiences, or future goals. The sub-categories are developed based on two primary sources: (1) a survey conducted on social media to identify question types effective in differentiating between a natural person and a language model; (2) a review of existing literature that focuses on distinguishing real individuals from language models by posing general inquiries about daily activities and emotions (Jones & Bergen, 2023). This classification ensures a comprehensive assessment of individual capabilities and perspectives by including diverse question types, ranging from logical reasoning to emotional understanding. Question Generation For general inquiries that do not require knowledge of participants\u2019 backgrounds, we utilize GPT-4 to generate five questions per category. For inquiries specific to participants\u2019 backgrounds, GPT-4 is instructed to produce five tailored questions for each participant. Each participant receives a total of ten questions\u2014five specific and five general\u2014randomly selected from a predefined set to facilitate a comprehensive evaluation across various baselines. A challenge in our design is that GPT-4 generates overly specific questions tailored to individual backgrounds, resulting in complexities that both participants and evaluators find challenging to comprehend, thereby hindering the evaluation process. For example, questions on specialized topics like gut microbiota in human health often surpass participants\u2019 general knowledge. To mitigate this issue, we introduce a selective filtering process aimed at ensuring that questions correspond to the participants\u2019 general knowledge level yet remain relevant to their unique experiences and knowledge. This approach adjusts the questions to be understandable and representative of each participant\u2019s background, thus excluding excessively specific inquiries from the analysis. 3.3 Conducting Human Evaluation We conduct human evaluations by having acquaintances of each participant review anonymized responses to determine whether they are generated by humans or machines. 5 \fPreprint Each evaluator is presented with ten pairs of responses, each containing one response from the actual participant and one from a random role-playing LLM. Evaluators are instructed to assess the tone, thought process, and identification accuracy of the responses to identify human-generated responses. Additionally, we pre-processed responses to eliminate syntactical biases that could affect evaluations. This included normalizing capitalization, spacing between words, and correcting misspelled words. 
Such pre-processing ensures that evaluations are based on the authenticity and coherence of the content rather than superficial textual patterns. Consequently, this approach aims to provide a fair assessment based on the intrinsic quality of the ideas and thoughts expressed in the responses. The effectiveness of LLMs in simulating humans is quantified by the success rate of deceiving evaluators. It is defined as the proportion of instances in which human evaluators select an LLM-generated response over that of an actual participant. It is noteworthy that the baseline for random guessing is 50%. A success rate substantially lower than this baseline, such as 10%, indicates that evaluators can effectively distinguish between human and LLM responses, suggesting that the LLM fails to convincingly simulate a human participant. Conversely, a success rate closer to 50% indicates a greater difficulty for evaluators in differentiating between the two, signifying a more human-like performance by the LLM. 4 Experiments Baseline Methods We evaluate four widely used role-playing methods: \u2022 RoleGPT (Wang et al., 2023b): This method improves role-playing in LLMs through a four-stage process: constructing role profiles for 100 roles, extracting knowledge through context-based instructions, imitating style with GPT role prompting, and tuning with role-conditioned instructions. \u2022 Role-Play Prompting (RPP) (Kong et al., 2023): This approach enhances zero-shot reasoning in LLMs by using role-play prompting to assume various personas. It involves sampling multiple role-feedback prompts and selecting the most effective one for reasoning tasks, serving as an implicit Chain-of-Thought facilitator. \u2022 Juliet (Jones & Bergen, 2023): This study assesses GPT-4\u2019s ability to pass the Turing Test in online interactions by testing 25 LLM witnesses, including GPT-3.5 and GPT-4, with human participants. We select one of their open-sourced prompts, named Juliet. \u2022 GPTs (OpenAI, 2023): A new feature by OpenAI that enables the creation of custom ChatGPT applications for specific tasks using natural language. These applications are shareable via links or through the GPT store. We select one tailored for persona imitation for our study. We employ GPT-3.5-Turbo and GPT-4-Turbo as the foundation models for all methods except GPTs, resulting in seven baselines in total. Due to the unavailability of some baselines, we reproduce their approaches using LangChain (https://www.langchain.com/) for a comprehensive evaluation across models. Implementation details are provided in \u00a7B of the appendix. Human Participants We recruit ten participants from diverse backgrounds for our evaluation. Additionally, a minimum of seven acquaintances per participant are included to ensure that the responses of all baselines are evaluated. Data collection and management are conducted using Google Forms (https://www.google.com/forms/about/). 4.1 Results Table 1 presents the success rates of various role-playing baselines in deceiving human evaluators, detailing these rates across different question types. Table 1: Success rates of role-playing LLMs in deceiving human evaluators. The human evaluators are instructed to identify human-generated responses. The highest numbers are marked in bold, while the numbers closest to 50% are underlined.
Success Rate (%) | GPT-3.5-Turbo (RPP / RoleGPT / Juliet) | GPT-4-Turbo (RPP / RoleGPT / Juliet) | GPTs | Overall
Creativity | 40.0 / 53.3 / 31.3 | 26.1 / 37.0 / 37.5 | 47.8 | 39.0
Ethical Dilemmas | 43.5 / 30.0 / 44.4 | 38.9 / 27.3 / 44.4 | 47.8 | 39.5
Logical | 23.5 / 50.0 / 36.4 | 42.1 / 47.6 / 47.1 | 41.7 | 41.2
Philosophical | 26.7 / 38.9 / 43.5 | 44.0 / 28.0 / 40.9 | 34.8 | 36.7
Problem Solving | 17.4 / 23.3 / 34.8 | 46.2 / 46.7 / 48.0 | 54.6 | 38.7
In-Depth Personal | 42.1 / 45.2 / 40.0 | 35.0 / 83.3 / 41.7 | 56.0 | 49.0
Emotional | 44.4 / 57.9 / 22.2 | 66.7 / 25.0 / 55.6 | 45.8 | 45.4
Future Prediction | 38.9 / 59.1 / 37.5 | 60.0 / 50.0 / 50.0 | 50.0 | 49.4
Insightful | 50.0 / 34.8 / 61.5 | 45.0 / 50.0 / 35.5 | 50.0 | 46.7
Interest | 48.0 / 41.7 / 30.0 | 66.7 / 22.7 / 33.3 | 53.9 | 42.3
Overall | 37.5 / 43.4 / 38.2 | 47.1 / 41.8 / 43.4 | 48.2 | 42.8
Across Baselines GPTs generally outperforms other baselines across various question types. It achieves not only the highest success rates but also rates closest to 50%, making it hard for human evaluators to distinguish between its outputs and human outputs. This effectiveness likely stems from GPTs\u2019 capability to incorporate enriched personal information into responses. This method proves more precise than traditional human imitation techniques, emphasizing the importance of specificity in role-playing scenarios. Furthermore, transitioning from GPT-3.5-Turbo to GPT-4-Turbo has markedly enhanced role-playing ability. GPT-4-Turbo demonstrates a superior ability to replicate individual writing and cognitive styles, particularly within the RPP and Juliet frameworks. Conversely, RoleGPT shows diminished performance following the upgrade, likely due to a tendency towards overly casual or dramatic outputs, which undermines the authenticity of its imitations. This finding suggests that GPT-4-Turbo\u2019s intricate understanding may lead to stylistic over-emphasis, affecting perceived authenticity.
Across Question Types The analysis of success rates among different question types reveals the comparative strengths and weaknesses of GPT-3.5-Turbo and GPT-4-Turbo. As the foundational model, GPT-3.5-Turbo exhibits limitations. Juliet underperforms in emotional questions, while RPP and RoleGPT struggle with logical and problem-solving questions. This finding suggests a lack of nuanced emotional understanding and complex logical processing in GPT-3.5-Turbo. The transition to GPT-4-Turbo brings about significant improvements in specific areas. For instance, RoleGPT achieves an 83.3% success rate in In-Depth Personal questions\u2014the highest observed rate\u2014while RPP reaches 60% or higher in three specific question categories, demonstrating the targeted enhancements in these domains. However, this targeted improvement raises valid concerns about potential over-specialization. While it enhances performance in specific areas, it could compromise the models\u2019 ability to handle broader queries, a factor that needs to be carefully considered. Both Juliet and GPTs demonstrate relatively balanced performances across various question types, with GPTs notably outperforming Juliet. The trend towards better performance on specific rather than general questions aligns with the models\u2019 design objectives, indicating a higher efficacy in generating detailed, tailored responses over broad, abstract topics. General questions, especially Philosophical and Problem-Solving questions, present challenges due to their abstract nature and the demand for definitive answers, pushing the limits of LLMs\u2019 capabilities in data-driven reasoning toward domains that require speculative or creative problem-solving.
This finding results in a noticeable disparity between human and LLM-generated responses, as LLMs may lack the creative or interdisciplinary thinking required for such questions.
Table 2: Success rates of role-playing LLMs in deceiving evaluator LLMs. The evaluator LLMs are instructed to identify human-generated responses.
Success Rate (%) | GPT-3.5-Turbo (RPP / RoleGPT / Juliet) | GPT-4-Turbo (RPP / RoleGPT / Juliet) | GPTs | Overall
Control Model | 86.0 / 78.0 / 67.0 | 95.0 / 31.0 / 5.0 | 78.0 | 62.9
GPT-4 | 85.3 / 92.3 / 88.3 | 63.7 / 93.0 / 91.3 | 95.7 | 91.4
GPT-4-Turbo | 95.0 / 94.0 / 95.3 | 95.7 / 99.0 / 98.0 | 98.3 | 96.5
Gemini-1.0-Pro | 52.7 / 52.7 / 62.7 | 56.3 / 60.7 / 58.3 | 54.0 | 56.8
Table 3: Success rates of role-playing LLMs in deceiving evaluator LLMs. The evaluator LLMs are instructed to identify non-human-generated responses.
Success Rate (%) | GPT-3.5-Turbo (RPP / RoleGPT / Juliet) | GPT-4-Turbo (RPP / RoleGPT / Juliet) | GPTs | Overall
Control Model | 14.0 / 22.0 / 33.0 | 5.0 / 69.0 / 95.0 | 22.0 | 37.1
GPT-4 | 25.7 / 24.7 / 26.0 | 25.7 / 29.0 / 52.3 | 11.7 | 27.9
GPT-4-Turbo | 61.7 / 62.7 / 53.3 | 34.3 / 60.0 / 58.0 | 62.3 | 56.5
Gemini-1.0-Pro | 51.0 / 49.0 / 42.3 | 48.7 / 54.3 / 50.0 | 48.7 | 41.0
5 LLMs as Evaluators LLM-based evaluators have demonstrated their potential in identifying text quality (Desmond et al., 2024; Chan et al., 2023). Despite concerns regarding positional and length biases that may favor longer responses (Zheng et al., 2024) or influence judgments based on response order (Zhao et al., 2021), recent findings indicate these biases are minimal in advanced models such as GPT-4-Turbo (Chen et al., 2024). Our study further explores the capability of LLMs as evaluators to discern between human and machine-generated texts. 5.1 Methodology We utilize three language models\u2014GPT-4, GPT-4-Turbo, and Gemini-1.0-Pro\u2014all configured with a temperature setting of zero. Each model is tested using a dataset comprising each participant\u2019s background information and ten pairs of responses. Each pair corresponds to a question and consists of one human-generated answer and one randomly generated answer from a language model. We elaborate on the detailed process for this evaluation and the prompts for the evaluator LLMs in \u00a7C of the appendix. We create a Control Model that always selects the longer answer to investigate the presence of length biases in LLM evaluations. A close comparison of success rates between this control and the LLM evaluators would indicate a significant length bias in the models, marked by a preference for lengthier responses. To mitigate potential positional biases, we introduce a two-fold approach: (1) randomizing the order of answer presentation within each question-answer pair and (2) conducting multiple rounds of evaluation with the same question set to determine an average success rate. To further assess potential biases in LLMs towards specific instructions, we not only instructed the LLMs to select responses likely to be produced by humans but also required them to choose responses generated from language models. If the LLMs exhibit no bias, the accuracies across these two conditions should be approximately the same. 5.2 Results Analysis The results of instructing LLMs to identify human-generated and non-human-generated responses are shown in Table 2 and Table 3, respectively. As discussed before, success rates that significantly deviate from 50%, which is expected in random guessing, indicate the LLM\u2019s capability to distinguish between human and machine-generated text.
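For reference, the protocol of Section 5.1 can be sketched as follows: answers are shown in randomized A/B order, the same question set is evaluated over multiple rounds, both instruction framings are applied, and a longer-answer control is included. The prompt wording and the query_llm helper are illustrative assumptions rather than the exact prompts given in the appendix.

import random

INSTRUCTIONS = {
    "pick_human":   "One of the two answers was written by a real person. Reply with 'A' or 'B' for the human-written answer.",
    "pick_machine": "One of the two answers was generated by a language model. Reply with 'A' or 'B' for the machine-generated answer.",
}

def control_pick_longer(answer_a: str, answer_b: str) -> str:
    # Length-bias control: always choose the longer answer.
    return "A" if len(answer_a) >= len(answer_b) else "B"

def deception_rate(pairs, query_llm, framing: str, rounds: int = 3) -> float:
    # pairs: list of (question, human_answer, roleplay_answer) tuples.
    # Returns the percentage of cases in which the role-playing answer passes as human.
    deceived, total = 0, 0
    for _ in range(rounds):
        for question, human_ans, llm_ans in pairs:
            llm_is_a = random.random() < 0.5          # randomize A/B order per pair
            a, b = (llm_ans, human_ans) if llm_is_a else (human_ans, llm_ans)
            prompt = (f"{INSTRUCTIONS[framing]}\n\nQuestion: {question}\n"
                      f"Answer A: {a}\nAnswer B: {b}")
            choice = query_llm(prompt, temperature=0).strip().upper()[:1]
            picked_llm = (choice == "A") == llm_is_a
            # Under 'pick_human', choosing the LLM answer means the evaluator was deceived;
            # under 'pick_machine', not choosing it means the LLM answer passed as human.
            deceived += picked_llm if framing == "pick_human" else (not picked_llm)
            total += 1
    return 100.0 * deceived / total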
Across Models GPT-4 and GPT-4-Turbo effectively distinguish between LLM and human-generated texts, albeit choosing completely opposite answers. As illustrated in Table 2, both models show proficiency in this differentiation, with success rates for all role-playing LLMs exceeding 90%. In other words, GPT-4 and GPT-4-Turbo demonstrate a consistent inclination to identify LLM-generated responses as human-generated. In contrast, Gemini-1.0-Pro performs comparably to random guessing. This finding suggests that GPT models may prefer texts produced by similar models. The underlying cause is likely a model-specific bias towards its own text generation patterns. Instruction Bias Our analysis reveals a pronounced bias in GPT models, as evidenced by discrepancies between the results from Table 2 and Table 3. Note that unbiased models should exhibit comparable accuracy in these two settings. In both scenarios, Gemini-1.0Pro demonstrates accuracy akin to random guessing, suggesting it is free of bias toward the instruction. However, GPT models display significant variances in their capacity to differentiate human from machine-generated responses. Specifically, GPT-4 shows a more significant disparity (63.5%) compared to GPT-4-Turbo (40%). This finding suggests that GPT models are generally more adept at identifying machine-generated content. We believe that the concept of \u201chuman-generated\u201d responses is inherently more ambiguous and abstract for GPT models, whereas \u201cmachine-generated\u201d content is more clearly defined and understood. Length Bias Tables 2 and 3 present the success rates of the control model in the two settings to examine the length bias. By comparing them to the success rates of the evaluator LLMs, we find no significant correlation between any LLMs and the control model, suggesting that length bias minimally impacts model selections. This observation is consistent with the findings reported in Chen et al. (2024). 6 Conclusion Conclusion This paper introduces ECHO, a framework designed to assess the role-playing capabilities of LLMs in simulating ordinary individuals, utilizing the Turing test methodology. Our evaluation includes ten target participants and seven baseline models, yielding over 800 responses. Analysis of human evaluation data reveals that: (1) Among the four role-playing approaches, GPTs performs better in accurately role-playing target individuals. (2) GPT-4 exhibits enhanced role-playing capabilities compared to GPT-3.5. Moreover, this study investigates the potential of LLMs to function as unbiased evaluators, examining the influence of inherent biases on their accuracy. The results suggest that GPT models may prefer texts generated by similar models. Limitations Our study has several limitations. A primary limitation is that the background information categories may not adequately capture the complexities of a person\u2019s identity, experiences, and communication nuances. This inadequacy can result in responses from LLMs that lack authenticity. The second concern is that restricting evaluators to those familiar with the target individual may limit the size and diversity of the evaluation team, potentially compromising the objectivity and breadth of assessments. Including evaluators who are not previously acquainted with the individuals but are informed about their backgrounds could enhance our understanding of LLMs\u2019 imitative accuracy. 
The third threat concerns the difficulty of LLMs in capturing the unique behavioral quirks and subtle communication nuances that characterize human interaction. This challenge is particularly pronounced in short interactions, where LLMs fail to replicate the full complexity of human language, emotional depth, and cultural nuances. 9 \fPreprint Ethics Statement Data Protection Since this study employs LLMs to simulate real individuals, we adhere to rigorous ethical guidelines to protect participant privacy and maintain the integrity of AI research. We have ensured the privacy and anonymity of all participants by treating personal data and identifiable information, such as background files, with strict confidentiality. We constructed local role-playing LLMs without transferring any personal data to third parties. Furthermore, all data, including responses from human participants and simulations generated by the role-playing LLMs, will be deleted six months after our study\u2019s publication. Informed Consent Additionally, participants are fully informed with comprehensive information about the study\u2019s objectives and the specific use of their data in generating roles, answers, and evaluations. Informed consent was explicitly obtained, with provisions allowing participants to withdraw at any time without consequences." + }, + { + "url": "http://arxiv.org/abs/2404.12833v1", + "title": "How Far Can We Go with Practical Function-Level Program Repair?", + "abstract": "Recently, multiple Automated Program Repair (APR) techniques based on Large\nLanguage Models (LLMs) have been proposed to enhance the repair performance.\nWhile these techniques mainly focus on the single-line or hunk-level repair,\nthey face significant challenges in real-world application due to the limited\nrepair task scope and costly statement-level fault localization. However, the\nmore practical function-level APR, which broadens the scope of APR task to fix\nentire buggy functions and requires only cost-efficient function-level fault\nlocalization, remains underexplored. In this paper, we conduct the first\ncomprehensive study of LLM-based function-level APR including investigating the\neffect of the few-shot learning mechanism and the auxiliary repair-relevant\ninformation. Specifically, we adopt six widely-studied LLMs and construct a\nbenchmark in both the Defects4J 1.2 and 2.0 datasets. Our study demonstrates\nthat LLMs with zero-shot learning are already powerful function-level APR\ntechniques, while applying the few-shot learning mechanism leads to disparate\nrepair performance. Moreover, we find that directly applying the auxiliary\nrepair-relevant information to LLMs significantly increases function-level\nrepair performance. Inspired by our findings, we propose an LLM-based\nfunction-level APR technique, namely SRepair, which adopts a dual-LLM framework\nto leverage the power of the auxiliary repair-relevant information for\nadvancing the repair performance. The evaluation results demonstrate that\nSRepair can correctly fix 300 single-function bugs in the Defects4J dataset,\nlargely surpassing all previous APR techniques by at least 85%, without the\nneed for the costly statement-level fault location information. 
Furthermore,\nSRepair successfully fixes 32 multi-function bugs in the Defects4J dataset,\nwhich is the first time achieved by any APR technique ever to our best\nknowledge.", + "authors": "Jiahong Xiang, Xiaoyang Xu, Fanchu Kong, Mingyuan Wu, Haotian Zhang, Yuqun Zhang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "How Far Can We Go with Practical Function-Level Program Repair?", + "main_content": "INTRODUCTION Fixing software defects costs developers a significant amount of time and effort [34]. To assist developers in reducing the burden of repairing programs, Automated Program Repair (APR) techniques have been proposed to automatically generate potential patches for buggy programs. Specifically, the learning-based APR techniques which incorporate the learning power to advance the repair performance have been increasingly studied in recent years. For instance, many such techniques [30, 39, 49, 56, 81, 82, 89] utilize Neural Machine Translation (NMT) [68] such that APR is modeled as a translation task where the objective is to transform buggy code into correct code. More recently, Large Language Models (LLMs) have become largely adopted in various downstream software tasks [31, 32, 42, 47] including APR [38, 40, 62, 74, 75, 77, 79] where they have been proven to advance the repair performance [33, 38, 75\u201377], e.g., the Codex model can fix 32 more bugs than previous APR techniques in the Defects4J 1.2 dataset [76]. Meanwhile, researchers also propose multiple LLM-based APR techniques [40, 71, 74, 77\u2013 79] to further enhance the repair performance. For instance, the state-of-the-art LLM-based APR technique ChatRepair [79] employs a conversational repair mechanism and successfully fixes 162 out of 337 bugs in a crafted Defects4J dataset, causing at least 24.6% gain compared with all existing techniques. However, many LLM-based APR techniques are proposed for the single-line or hunk-level program repair by auto-completing a single line [45] or infilling a hunk of code with context [77] respectively. They typically rely on identifying statement-level program faults, i.e., given fault locations [37, 74, 75, 77, 85] or applying statement-level fault localization techniques such as Gzoltar [71]. Nevertheless, it has been widely argued that accurately identifying statement-level faults can be essentially costly, i.e., demanding fine-grained input or strong assumptions [21, 24, 52, 66], thus potentially limiting the real-world applicability of the single-line or hunk-level APR. On the other hand, the LLM-based function-level APR can be potentially more promising, i.e., applying a generative model such as an LLM to auto-regressively generate the entire patched version of the buggy function by prompting the buggy function into the LLM. To illustrate, first, the function-level APR enables a larger scope of the program repair task\u2014it involves not only the singleline and hunk-level repair tasks, but also a more complicated task which repairs multiple discontinuous lines or hunks within a function [65]. Second, identifying function-level faults tends to be more cost-efficient than identifying statement-level faults, thus rendering the function-level APR more practical in real world [44, 48, 55]. 
arXiv:2404.12833v1 [cs.SE] 19 Apr 2024 \fConference\u201917, July 2017, Washington, DC, USA Jiahong Xiang, Xiaoyang Xu, Fanchu Kong, Mingyuan Wu, Haotian Zhang, and Yuqun Zhang While LLM-based function-level APR techniques are more promising, there lacks sufficient study and understanding of them [76], thus potentially hindering the further improved usage of LLMs for APR. Specifically, first, the existing LLM-based APR techniques exhibit significant performance loss for the function-level APR, e.g., incurring a decrease of 33% in ChatRepair and 36.4% in CodexRepair [76, 79] in terms of correctly fixed bugs. Second, the rationale of how such techniques can be affected has not been fully investigated. More specifically, the effectiveness of certain commonly-used mechanism such as the few-shot learning [40, 76, 79], i.e., prompting buggy code and fixed code pair examples that illustrate the function-level APR task and provide repair context for advancing the learning power of models, remains inadequately validated. Additionally, the potential of incorporating the auxiliary repair-relevant information, such as bug reports and trigger tests, remains underexplored [33, 62, 79]. Thus, there is an urgent need to extensively study the LLM-based function-level APR to further enhance the repair performance. In this paper, we conduct the first comprehensive study on the function-level LLM-based APR including investigating the effect of the few-shot learning and the auxiliary repair-relevant information. Specifically, we adopt six widely-studied LLMs including the state-of-the-art LLMs such as Codex-edit and GPT-3.5Turbo [33, 79, 85, 86] as the study subjects. We also construct a benchmark containing 522 single-function bugs, i.e., the bugs existing within one single function, in the Defects4J dataset. Typically, we build our repair prompt containing the buggy function along with (1) the buggy code and fixed code pair examples to utilize the few-shot learning mechanism; and (2) the auxiliary repair-relevant information such as trigger tests for the studied LLMs respectively. In this way, the entire patched functions can be auto-generated and then validated with the Defects4J test suite to derive the plausible patches (which pass all the tests). Our evaluation results demonstrate that incorporating the few-shot learning mechanism in the function-level APR actually causes significantly disparate and even negative impacts on the average number of plausible fixes compared with applying the default LLMs only, i.e., from an increase of 10% to a decrease of 49.7% among all studied LLMs. Surprisingly, we find that directly applying trigger tests or error messages for prompting can significantly enhance the repair performance, e.g., 26.7% and 26.1% improvement in terms of the average number of plausible fixes respectively. On the other hand, while statementlevel fault location information is shown powerful essential to many APR techniques, only adopting the easier-to-obtain auxiliary repairrelevant information including trigger tests, error messages, and comments altogether for prompting can achieve rather close performance, i.e., causing merely 7.1% gap in terms of the number of plausible fixes. Such a result indicates the potential of replacing the costly statement-level fault location information for the LLM-based function-level APR. In our study, over 10 million patches are generated and validated, consuming more than 8,000 GPU and 100,000 CPU hours. 
To our best knowledge, this is the largest empirical study of LLM-based APR conducted to date. Inspired by our findings, we propose an LLM-based functionlevel APR technique, namely SRepair, which adopts a dual-LLM framework to leverage the power of the auxiliary repair-relevant information for advancing the repair performance. In particular, SRepair first adopts a repair suggestion model which employs the Chain of Thought (CoT) technique [72] to generate naturallanguage repair suggestions. More specifically, SRepair prompts the LLM with the buggy function and the auxiliary repair-relevant information (i.e., trigger tests, error messages, and comments) to identify the root causes of the bugs and generate repair suggestions in natural language accordingly. SRepair then adopts a patch generation model to auto-generate a patched function with the assistance of the repair suggestions. Our evaluation demonstrates that SRepair can correctly fix a total of 300 single-function bugs in our Defects4J dataset, largely surpassing all previous APR techniques, e.g., 1.59\u00d7 more than Repilot [74] and 85% more than ChatRepair [79], without the need for the costly statement-level fault location information. Moreover, 128 bugs out of them were not fixed by any of the baseline LLM-based APR techniques adopted in this paper. Surprisingly, SRepair is also capable of repairing 32 multi-function bugs, i.e., bugs existing across multiple functions at the same time, which, to our best knowledge, is the first time achieved by any APR technique ever. To summarize, this paper makes the following contributions: \u2022 We perform the first ever extensive study on the LLM-based function-level APR with the impact factors on its performance, paving the way for new directions in future research. \u2022 We find that LLMs with zero-shot learning are already powerful function-level APR techniques. We also find that applying auxiliary repair-relevant information can substantially improve the repair performance for all studied LLMs. \u2022 We propose a new LLM-based function-level APR technique, SRepair, which can achieve remarkable repair performance by correctly fixing 300 single-function bugs, largely surpassing the SOTA techniques, i.e., outperforming ChatRepair [79] by 85% and Repilot [74] by 1.59\u00d7 in the Defects4J dataset. Moreover, SRepair successfully fixes 32 multi-function bugs, which is the first time achieved by any APR technique ever to our best knowledge. 2 BACKGROUND & RELATED WORK 2.1 Large Language Model Large Language Models (LLMs) contain billions of parameters and are trained on petabyte-scale datasets. They are typically built based on the Transformer architecture [70] comprising an encoder for input processing and a decoder for output token generation. In particular, the decoder-only models as shown in Figure 1a have demonstrated superior text comprehension [61] and code generation capabilities [23]. Thus, they have garnered significant interest of researchers and been widely applied to various downstream tasks in software engineering, e.g., test case generation [42, 47], vulnerability detection [31, 32], and program repair [33, 37, 77]. These models when integrating domain-specific knowledge for specific tasks, are often fine-tuned [67] for further improving their performance. For instance, CodeLlama [63] is fine-tuned based on Llama 2 [69] for generating and discussing code, and Magicoder [7] is fine-tuned with OSS-Instruct [73] to enhance the code generation performance. 
While fine-tuning requires significant computational resources and specialized datasets [29], simpler prompting strategies like few-shot learning [26] and Chain of Thought (CoT) [72] \fHow Far Can We Go with Practical Function-Level Program Repair? Conference\u201917, July 2017, Washington, DC, USA // Buggy Function public double getNumericalMean() {\u2026} GPT-3.5 Decoder for(int cnt = 0; Decoder-only for(int cnt = 0; cnt OSS-Instruct RLHF CodeLlama Magicoder Codex Code Training // Buggy Function int binarySearch(\u2026) {Buggy Code} // Fixed Function int binarySearch(\u2026) {Fixed Code} Few-shot learning on APR task // Buggy Function Same Buggy Project: Historical Buggy Code Example // Fixed Function Same Buggy Project: Historical Fixed Code Example (a) (b) Figure 1: Decoder-only models and few-shot learning on APR which have also been shown effective are much less costly and thus have been increasingly adopted [20, 84, 88]. Figure 1b illustrates how the few-shot learning mechanism is applied for APR. Firstly, the APR task-related examples such as the buggy and fixed code pair of the function binarySearch are incorporated into the prompt. Note that the example selection varies among different techniques, e.g., manually crafting examples like binarySearch [76] and choosing examples of historical bug fixes within the same buggy project [76, 79]. Next, the target buggy function to be fixed, e.g., getNumbericalMean() in the Math-2 bug [14], is also added to the prompt. At last, the resulting prompt is fed to the model to generate patches for the target buggy function. To summarize, the purpose of the few-shot learning mechanism is to enable the model to learn how to handle specific tasks through the examples. However, while multiple LLM-based APR techniques have already incorporated the few-shot learning mechanism [40, 76, 79], its impacts and characteristics remain unexplored. 2.2 Automated Program Repair Automated Program Repair (APR) techniques [33, 39, 40, 46, 56, 74\u2013 77, 81], designed to aid developers in fixing bugs by automatically generating patches, typically follow the Generate-and-Validate (G&V) paradigm [54]. In particular, an APR process refers to locating program faults, generating patches for the buggy locations, and validating such patches against a test suite to determine their plausibility (i.e., whether they could pass all tests). Eventually, these resulting plausible patches are manually reviewed to select the correct fix for the target fault. Notably, the trigger tests in the patch-validating test suite are manually created by developers. During the execution of trigger tests, the unit testing framework, e.g., JUnit [5], can be used to provide the corresponding error messages. Among the APR techniques, the learning-based techniques [39, 49, 56, 74, 77, 79, 81] that utilize deep learning techniques have recently achieved remarkable performance. Specifically, many such techniques widely adopt the Neural Machine Translation (NMT) [68] techniques which convert APR into a translation task to transform buggy code into correct code. They typically leverage the power of the NMT models through training on extensive datasets containing millions of buggy and fixed code pairs. However, such techniques are highly costly when building well-constructed datasets of buggy and patch code pairs [77] and specific context representation for the NMT models [39]. 
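To make the few-shot repair prompt of Figure 1b concrete, a minimal sketch is given below; the example pairs may be manually crafted or drawn from historical fixes of the same project, and the comment-marker layout mirrors the prompt format discussed later in Section 3.3.1.

def build_repair_prompt(example_pairs, target_buggy_function: str) -> str:
    # example_pairs: list of (buggy_code, fixed_code) strings shown to the model
    # before the target buggy function; an empty list yields a zero-shot prompt.
    parts = ["// Provide a fix for the buggy function"]
    for buggy, fixed in example_pairs:
        parts += ["// Buggy Function", buggy, "// Fixed Function", fixed]
    parts += ["// Buggy Function", target_buggy_function, "// Fixed Function"]
    return "\n".join(parts)

With an empty example list this reduces to the zero-shot prompt used as the K0(Basic) baseline later in the study; passing one or two buggy/fixed pairs yields the few-shot variants.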
More recently, Large Language Models (LLMs) have become increasingly adopted in various downstream software tasks including APR. In particular, directly applying models like Codex can already outperform all previous APR techniques [76]. Meanwhile, multiple LLM-based APR techniques have been proposed to further enhance the repair performance. For instance, AlphaRepair [77] applies the pre-trained CodeBERT model with the \u201ccloze-style\u201d APR, i.e., removing the buggy code tokens and applying the LLM to generate the correct ones. Similar as AlphaRepair in adopting the cloze-style repair paradigm, Repilot [74] focuses on synthesizing compilable patches, utilizing the Language Server Protocol to prune infeasible tokens and proactively complete tokens as suggested by the LLM. FitRepair [75] combines LLMs with domainspecific fine-tuning and prompting strategies, fully automating the plastic surgery hypothesis, i.e., the code ingredients to fix bugs usually already exist within the same project. ChatRepair [79] utilizes the conversational repair mechanism based on GPT-3.5 and successfully fixes 162 out of 337 bugs in the Defects4J dataset with the assistance of the rich information from original bug-exposing tests. Fan et al. [33] conduct a study to investigate whether APR techniques can correct program errors generated by LLMs, particularly in complex tasks like the LeetCode contests. Another study [76] employs the few-shot learning mechanism and recognizes the ineffectiveness of simply feeding LLMs with only buggy functions as they are not pre-trained for APR. To address this, they create a prompt containing two pairs of buggy and fixed code examples: one manually crafted, and another from the same project or dataset. Then they include the buggy function to be fixed in this prompt, thus activating the function-level APR by providing such a prompt to LLMs. However, their claim that LLMs cannot be directly applied to the function-level APR and the effectiveness of employing the few-shot learning mechanism has not been fully investigated. private boolean flipIfWarranted(\u2026) { if(1.5 * work[pingPong] < work) { int j = 4 * n 1; + int j = 4 * (n 1); for(\u2026) { \u2026}} Single-line Bug (Math-80) Single-Hunk Bug (Math-91) Single-Function Bug (Math-95) protected double getInitialDomain(\u2026) { double ret; + double ret = 1.0; double d = getDenominator\u2026(); + if (d > 2.0) { ret = d / (d 2.0); + }\u2028 \u2026} public int compareTo(\u2026) { \u2026 double nOd = doubleValue(); double dOn = object.doubleValue(); + long nOd = \u2026numerator*denominator; + long dOn = \u2026denominator*numerator; \u2026} Figure 2: Bug examples existing in a single line, hunk, or function The existing LLM-based APR techniques mainly focus on repairing single-line or single-hunk bugs [74, 75, 77, 85], as illustrated \fConference\u201917, July 2017, Washington, DC, USA Jiahong Xiang, Xiaoyang Xu, Fanchu Kong, Mingyuan Wu, Haotian Zhang, and Yuqun Zhang in Figure 2. Specifically, the single-line bug Math-80 [15] is contained within a single line of the function flipIfWarranted where fixing this line alone can resolve the bug. Such a single-line bug can be fixed by APR techniques focusing on the line-level repair like AlphaRepair [77] given accurate fault locations. Meanwhile, the single-hunk bug Math-91 [16] is contained in a continuous section of code where two contiguous buggy lines are replaced in the fixed version. 
This kind of bugs can be fixed by the hunk-level APR techniques like Repilot [74] requiring multiple accurate statementlevel fault locations. Note that single-line and single-hunk bugs can be considered as part of single-function bugs. On the other hand, the bugs existing in multiple discontinuous sections/lines within a function and requiring simultaneous edits on them for a fix are also referred to as single-function bugs. For instance, fixing the Math-95 bug [17] requires editing three discontinuous lines simultaneously. It can be easily derived that fixing single-function bugs poses a greater challenge for APR techniques, e.g., the state-of-the-art APR techniques like ChatRepair and CodexRepair incur a performance decrease of 33% and 36.4% [79] in terms of the number of correctly fixed bugs for the function-level APR respectively. As mentioned, single-function bugs actually include single-hunk and single-line bugs as subset, i.e., enabling a larger scope of repair tasks. Moreover, locating function-level bugs tends to be cheaper than locating line-level or hunk-level bugs [21, 48, 55], thus making the function-level APR more practical. Therefore, we consider developing the function-level APR techniques rather promising and worthy being sufficiently investigated. 3 EMPIRICAL STUDY 3.1 Study Setup 3.1.1 Study Subjects. We utilize six distinct LLMs as our study subjects, encompassing the widely used state-of-the-art Codexedit and GPT-3.5-Turbo [33, 76, 79, 86], along with four advanced open-source code LLMs including CodeLlama 7B, 13B, and 34B (over 500k downloads on Hugging Face within one month [6]), and Magicoder 7B (over 1000 stars on GitHub [7]). Specifically, we adopt code-davinci-edit-001 [8], gpt-3.5-turbo-1106 [18], and the CodeLlama-Instruct series [6] models as the versions of as Codexedit, GPT-3.5-Turbo, and CodeLlama respectively since they have conducted the instruction fine-tuning [63] and can better follow the APR prompt instruction. We also employ the MagicoderS-CL [7] model as the version of Magicoder. Due to the page limit, the LLM configuration details are presented in our GitHub page [1]. 3.1.2 Dataset. We construct our benchmark using both the versions 1.2 and 2.0 of the Defects4J dataset [41] which is the most widely used APR dataset [39, 57, 76] with a collection of a total of 835 real-world bugs extracted from open-source Java projects, comprising both buggy and fixed versions of the source code. Notably, we bound our study scope within 522 single-function bugs including 276 single-hunk (\u201cSH\u201d) bugs and 158 single-line (\u201cSL\u201d) bugs as shown in Table 1. It should be noted that our collected singlefunction bugs already include all the single-line and single-hunk bugs studied in previous works [74\u201377, 79]. 3.1.3 Evaluation Metrics. To assess the repair performance, we follow the standard practice [35, 43, 60], to utilize the plausible Table 1: Statistics of the Dataset Dataset Project # Bugs SH Bugs SL Bugs Defects4j 1.2 Chart 16 12 9 Closure 105 59 26 Lang 42 23 13 Math 74 35 23 Mockito 24 12 7 Time 16 6 3 Defects4j 2.0 Cli 28 13 6 Codec 11 9 8 Collections 1 1 1 Compress 36 16 5 Csv 12 7 4 Gson 9 5 4 JacksonCore 13 9 5 JacksonDatabind 67 26 15 JacksonXml 5 1 1 Jsoup 53 38 27 JxPath 10 4 1 Overall 522 276 158 patches that pass all test cases as our major evaluation metric. 
In particular, those test cases include the trigger tests in the Defects4J dataset designed to expose bugs and relevant tests which can load the classes associated with the buggy functions. 3.2 Research Questions We investigate the following research questions for extensively studying the function-level APR along with the factors which can impact its effectiveness. \u2022 RQ1: How does the LLM-based function-level APR perform under the zero-shot and few-shot learning setups? For this RQ, we attempt to investigate the performance of the LLM-based function-level APR under the default zero-shot learning and explore the performance impact from adopting few-shot learning. \u2022 RQ2: How do different auxiliary repair-relevant information affect the LLM-based function-level repair performance? For this RQ, we attempt to study the impact from different auxiliary repairrelevant information including bug reports, trigger tests, etc., on the function-level repair performance. 3.3 Implementation We obtain the model from Hugging Face [2] and access Codexedit and GPT-3.5-Turbo through API [3] provided by OpenAI. Our default setting for patch generation uses the nucleus sampling with top \ud835\udc5d= 0.9, temperature = 0.8 and 200 samples per bug following prior works [28, 62, 76]. Patches are generated on servers with 128-core 2.6GHz AMD EPYC\u2122ROME 7H12 CPU, 512 GiB RAM and eight NVIDIA A100 80GB GPUs, running Ubuntu 20.04.6 LTS. 3.3.1 APR input prompt setup. Following prior studies [62, 76], we set the prompt for the LLM utilized in the APR task, as illustrated in Figure 3 to enable the function-level APR. Specifically, we begin with a description of the APR task as \u2018Provide a fix for the buggy function\u2019. Next, we incorporate the buggy code and fixed code pair examples from the few-shot learning mechanism or \fHow Far Can We Go with Practical Function-Level Program Repair? Conference\u201917, July 2017, Washington, DC, USA // Provide a fix for the buggy function {Buggy code and fixed code pair examples} OR {Auxiliary repair-relevant Information}\u2028 // Buggy Function public double getNumericalMean() { return (double) (getSampleSize() * getNumberOfSuccesses()) / (double) getPopulationSize();} // Fixed Function Figure 3: The input prompt for the function-level APR of the Math-2 bug Table 2: K-shot learning settings and abbreviations K Example Type Abbreviation 0 N.A. K0(Basic) 1 Crafted Example K1(CE) 1 Project Example K1(PE) 2 Crafted Example & Project Example K2(CE, PE) 2 Project Example & Project Example K2(PE, PE) the auxiliary repair-relevant information into the prompt. Subsequently, we use \u2018Buggy Function\u2019 in conjunction with the buggy code, e.g., the Math-2 buggy function getNumericalMean [14] in Figure 3, to prompt LLMs with the buggy function to be fixed. Finally, we apply \u2018Fixed Function\u2019 to guide the LLM in generating a patched function. Notably, we employ the zero-shot learning approach as the default baseline in our study, i.e., adopting no auxiliary repair-relevant information or buggy-fixed code pair examples for prompting. 3.3.2 K-shot learning setups. Table 2 presents our k-shot learning setups. Specifically, we set the zero-shot learning approach, i.e., adopting no pairs of buggy code and fixed code examples (K=0), as our basic setup denoted as K0(Basic). 
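As a rough sketch of the decoding setup above, the following generates candidate patched functions with nucleus sampling (top p = 0.9, temperature = 0.8) from an open-source code LLM via Hugging Face transformers; the model identifier, generation length, and batching are assumptions and not the study's exact harness.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ise-uiuc/Magicoder-S-CL-7B"   # assumed Hugging Face id for MagicoderS-CL

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float16, device_map="auto")

def sample_patches(prompt: str, num_samples: int = 200, batch: int = 20, max_new_tokens: int = 512):
    # Draw num_samples candidate patched functions with nucleus sampling.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    prompt_len = inputs["input_ids"].shape[1]
    patches = []
    for _ in range(num_samples // batch):
        out = model.generate(**inputs, do_sample=True, top_p=0.9, temperature=0.8,
                             max_new_tokens=max_new_tokens, num_return_sequences=batch,
                             pad_token_id=tokenizer.eos_token_id)
        patches += [tokenizer.decode(seq[prompt_len:], skip_special_tokens=True) for seq in out]
    return patches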
Moreover, we follow prior work [76] to form our buggy and fixed code pair examples via manually crafted examples (CE), i.e., binarySearch in Figure 1b and chosen historical bug fix examples within the same buggy project (PE). We thus form our k-shot learning setup variants as K1(PE) with only one chosen historical bug fix example from the same project, K1(CE) with only one manually crafted example, and K2(CE, PE) with one manually crafted example and one chosen historical bug fix example from the same project. We also form K2(PE, PE) with two chosen historical bug fix examples from the same project following the implementation of the prior work for selecting multiple examples [19]. Note that while it is possible for more setup variants, e.g., with more manually crafted examples and chosen historical bug fix examples from the same project, we generally follow the setup of prior work [76] for fair performance comparison and cost-efficient evaluations. 3.3.3 Collecting auxiliary repair-relevant information. In this study, we refer to the auxiliary repair-relevant information as the bug report and project-specific information from the target buggy project following prior works [46, 64, 79, 81, 83], as the Math-2 bug shown in Figure 4. Specifically, the bug reports are collected from the official issue links of the Defects4J [4] repository. More specifically, following prior works [27, 87], we divide a bug report into two parts, as (a) Bug Report (b) Project-specifi Comment Issue Title Issue 1021: HypergeometricDistribution.sample suffers from integer overflow Issue Description \u201cHi, I have an application which broke when ported from commons math 2.2 to 3.2. It looks like the HypergeometricDistribution.sample() method doesn't work as well as it used to with large integer values, the example code below should return a sample between 0 and 50, but usually returns -50\u2026\u201d Trigger Test Error Message public void testMath1021() { \u2026 for (\u2026) { Assert.assertTrue(0 <= sample); Assert.assertTrue(sample <= n); }} AssertionFailedError: sample=-50 at HypergeometricDistributionTest.java:297 /* For population size {@code N}, number of successes {@code m}, and sample*size {@code n}, the mean is {@code n*m/N}. */ Figure 4: The bug report and project-specific information in the Math-2 bug illustrated in Figure 4a. One is the issue title with averagely around 12 tokens to summarize the type and the cause of the bug (e.g., \u201cIssue 1021: ... suffers from integer overflow\u201d). The other is the issue description with averagely 234 tokens which provides detailed conditions, error messages, and reproduction steps, etc. For instance, the issue description in the Math-2 bug report provides a detailed description of the buggy method HypergeometricDistribution.sample() with the trigger conditions, i.e., \u201cwith large integer values\u201d. Furthermore, we automatically extract the project-specific information from the buggy project, as the Math-2 bug in Figure 4b following the prior works [62, 79, 81]. Specifically, we first build all the buggy projects and automatically extract all the trigger tests and buggy function comments. Then, for each bug, we execute the trigger tests and capture the error messages generated by the unit test framework, such as Junit [5]. Notably, among all 522 singlefunction bugs in the Defects4J dataset, only 10 miss reports and 2 miss comments. We then leave such auxiliary repair-relevant information empty in our study. 
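A minimal sketch of this extraction step is shown below; it assumes the standard Defects4J command-line interface, and the exact command names, flags, and the failing_tests output file are recalled from the Defects4J documentation and should be treated as assumptions.

import pathlib
import subprocess

def run(cmd, cwd=None) -> str:
    return subprocess.run(cmd, cwd=cwd, text=True, capture_output=True, check=True).stdout

def collect_project_info(project: str, bug_id: int, work_root: str = "/tmp/d4j"):
    workdir = f"{work_root}/{project.lower()}_{bug_id}"
    run(["defects4j", "checkout", "-p", project, "-v", f"{bug_id}b", "-w", workdir])
    run(["defects4j", "compile", "-w", workdir])
    # Names of the developer-written trigger tests that expose the bug.
    trigger_tests = run(["defects4j", "export", "-p", "tests.trigger", "-w", workdir]).splitlines()
    # Running the suite leaves the JUnit failure output (the error messages) in failing_tests.
    run(["defects4j", "test", "-w", workdir])
    error_messages = pathlib.Path(workdir, "failing_tests").read_text()
    return {"trigger_tests": trigger_tests, "error_messages": error_messages}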
To evaluate the impact of the auxiliary repair-relevant information, we form eight different setups. Specifically, for bug report-relevant information, we form three setups: BR(IT) with the issue title only, BR(ID) with the issue description only, and BR(ALL) with the whole bug report. For the project-specific information, we form four setups: PI(TT) with the trigger test only, PI(EM) with the error message only, PI(BC) with the buggy comment only, and PI(ALL) with all such information. 3.4 Result Analysis 3.4.1 RQ1: the function-level repair performance. Table 3 presents the function-level APR results in terms of the number of plausible fixes. In general, we observe that K0(Basic) achieves the overall optimal plausible fix results, i.e., 180 average plausible fixes out of our collected 522 single-function bugs, outperforming all the rest setups by at least 10.4%. Such a result indicates that LLMs themselves (with zero-shot learning) are already powerful function-level APR techniques.
Table 3: APR results under different few-shot learning setups
Settings | Codex-edit | GPT-3.5-Turbo | CodeLlama 7B | CodeLlama 13B | CodeLlama 34B | Magicoder | Average Plausible Fixes
K0(Basic) | 174 | 175 | 192 | 179 | 160 | 199 | 180
K1(CE) | 103 | 138 | 180 | 185 | 176 | 112 | 149
K1(PE) | 109 | 174 | 194 | 193 | 153 | 157 | 163
K2(CE, PE) | 138 | 166 | 175 | 189 | 125 | 100 | 149
K2(PE, PE) | 165 | 187 | 167 | 189 | 128 | 121 | 160
Finding 1: LLMs with zero-shot learning are already powerful function-level APR techniques.
Interestingly, we can further observe that applying the few-shot learning mechanism leads to quite disparate plausible fix results across LLMs. For instance, compared with K0(Basic), while CodeLlama 34B shows a 10% (176 vs. 160) improvement in K1(CE), Magicoder shows a 49.7% (100 vs. 199) decline in K2(CE, PE) in terms of the number of plausible fixes.
Finding 2: Applying the few-shot learning mechanism in the function-level APR leads to disparate plausible fix results across LLMs.
Figure 5: Patch status averaged across all models under different few-shot learning setups (ratios (%) of plausible, test-failure, and uncompilable patches under K0(Basic), K1(CE), K1(PE), K2(CE, PE), and K2(PE, PE)).
Furthermore, we present the distribution of plausible, test-failure, and uncompilable patches given the identical total number of generated patches across different LLMs under all setups in Figure 5. Note that the test-failure patches can be successfully compiled but fail one or more tests in our Defects4J dataset. Interestingly, we can find that K0(Basic) achieves the best plausible patch rate of 4.3% and the lowest uncompilable patch rate of 30.2% among all the k-shot setups, while applying the few-shot learning mechanism generates more uncompilable patches, i.e., ranging from 38.4% to 59.6% more than K0(Basic).
Finding 3: Applying the few-shot learning mechanism may generate more uncompilable patches than the zero-shot learning mechanism.
3.4.2 RQ2: performance impact from the auxiliary repair-relevant information. Since applying zero-shot learning achieves the optimal repair performance among all the k-shot learning setups, as mentioned above, we also adopt zero-shot learning in our auxiliary repair-relevant information evaluations and use K0(Basic) as the baseline for a fair performance comparison.
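For reference, the patch-status breakdown reported in Figure 5 can be derived from per-patch validation outcomes roughly as follows; the compile and test helpers are placeholders for the Defects4J build and test commands rather than a concrete implementation.

from collections import Counter

def classify_patch(workdir: str, patched_function: str, apply_and_compile, run_tests) -> str:
    # Returns 'uncompilable', 'test-failure', or 'plausible' for one candidate patch.
    if not apply_and_compile(workdir, patched_function):   # e.g., compilation fails
        return "uncompilable"
    if not run_tests(workdir):                              # trigger tests plus relevant tests
        return "test-failure"
    return "plausible"

def status_ratios(statuses):
    # statuses: iterable of per-patch labels; returns the percentage per status as in Figure 5.
    counts = Counter(statuses)
    total = sum(counts.values()) or 1
    return {label: 100.0 * n / total for label, n in counts.items()}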
Table 4 presents the K0(Basic) BR(ALL) BR(IT) BR(ID) (a) Venn diagram public void visit(\u2026) { \u2026 if (!NodeUtil.isObjectLitKey(\u2026)){ ensureTyped(\u2026) + } else { // Object literal keys are not typeable + typeable = false; } } (b) Closure-66 bug Figure 6: The Venn diagram of plausible fixes over different setups and the bug Closure-66 which can only be fixed in K0(Basic) APR results under different auxiliary repair-relevant information setups. We observe that using bug report-relevant setups significantly enhances the repair performance of all models, i.e., the number of average plausible fixes increases from 180 in K0(Basic) to 238 in BR(IT), 270 in BR(ID), and 273 in BR(ALL). On the other hand, while BR(ALL) achieves the optimal result, Figure 6a also shows that it misses fixing 19 bugs which can be fixed in K0(Basic). More specifically, we can observe that five bugs can only be fixed in K0(Basic) other than all the rest setups, such as Closure-66 [11] in Figure 6b. We find that to fix such a bug, a simple branch condition must be added to set typeable false. In K0(Basic), four models successfully fix this bug. However, in BR(IT), BR(ID), and BR(ALL), the focus is incorrectly placed on the issue described in the bug report, leading to inappropriate code logic changes. Consequently, none of the six models are able to fix the bug. Finding 4: While applying the bug report-relevant information significantly enhances the function-level repair performance, it still misses fixing certain bugs which can be fixed by the baseline technique. We also attempt to investigate the performance impact from the project-specific information on the LLM-based function-level APR. Table 4 shows that using project-specific information setups leads to an increase for all models, i.e., the average number of plausible fixes rises from 180 in K0(Basic) to 185 in PI(BC), 227 in PI(EM), 228 in PI(TT). Notably, PI(ALL) achieves an optimal average of 254 plausible fixes, indicating the potential of leveraging as much auxiliary repair-relevant information as possible for enhancing the function-level repair performance. Interestingly, unlike adopting the bug report-relevant information, all the bugs plausibly fixed in K0(Basic) can also be fixed by adopting the project-specific information. \fHow Far Can We Go with Practical Function-Level Program Repair? Conference\u201917, July 2017, Washington, DC, USA Table 4: APR result in different auxiliary repair-relevant information settings Sources Settings Codex-edit GPT-3.5-Turbo CodeLlama Magicoder Average Plausible Fixes 7B 13B 34B N.A. K0(Basic) 174 175 192 179 160 199 180 Bug Report Information BR(IT) 265 233 234 221 221 251 238 BR(ID) 281 286 261 264 248 279 270 BR(ALL) 301 285 275 260 255 260 273 Projectspecific Information PI(BC) 186 185 187 191 169 194 185 PI(EM) 217 226 239 225 217 240 227 PI(TT) 239 247 227 221 201 235 228 PI(ALL) 264 273 249 247 236 254 254 Table 5: APR results with fault location information Sources Settings w/o \u2020FL w/ \u2020FL Improvement N.A. K0(Basic) 180 217 20.6% Bug Report Information BR(IT) 238 262 10.1% BR(ID) 270 289 7.0% BR(ALL) 273 291 6.6% Projectspecific Information PI(BC) 185 217 17.3% PI(EM) 227 257 13.2% PI(TT) 228 246 7.9% PI(ALL) 254 272 7.1% \u2020FL refers to fault location information. Finding 5: Directly adopting trigger tests, error messages, and comments from buggy projects can also effectively advance the function-level repair performance. 
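As an illustration of how the auxiliary-information block of the prompt could be composed for the setups compared in Table 4, a minimal sketch follows; the section labels are illustrative and do not reproduce the study's exact prompt text.

def aux_info_block(setup: str, issue_title: str = "", issue_description: str = "",
                   trigger_test: str = "", error_message: str = "", comment: str = "") -> str:
    # Maps each setup name to the pieces of auxiliary information it includes.
    sections = {
        "BR(IT)":  [("Issue Title", issue_title)],
        "BR(ID)":  [("Issue Description", issue_description)],
        "BR(ALL)": [("Issue Title", issue_title), ("Issue Description", issue_description)],
        "PI(TT)":  [("Trigger Test", trigger_test)],
        "PI(EM)":  [("Error Message", error_message)],
        "PI(BC)":  [("Buggy Function Comment", comment)],
        "PI(ALL)": [("Trigger Test", trigger_test), ("Error Message", error_message),
                    ("Buggy Function Comment", comment)],
    }
    return "\n".join(f"// {name}:\n{text}" for name, text in sections.get(setup, []) if text)

The returned block is inserted between the task description and the buggy function in the prompt of Figure 3; K0(Basic) corresponds to an empty block.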
We further evaluate the performance impact and necessity of the statement-level fault location information in the function-level APR. We utilize the ground-truth statement-level fault location information following previous work [62] by labeling the corresponding buggy line with /*bug is here*/. Specifically, the ground-truth fault locations are provided by the official Defects4J GitHub Repository [12]. To investigate the impact of the statement-level fault location information on the function-level APR, we calculate the average number of plausible fixes generated by various models across different auxiliary repair-relevant information setups. private boolean inferTemplatedTypesForCall(\u2026) { \u2026 Map inferred = /* bug is here */ inferTemplateTypesFromParameters(fnType, n); /* bug is here */ + Map inferred = Maps.filterKeys(\u2026); TemplateTypeReplacer replacer = \u2026 return replacer.madeChanges; } Figure 7: Statement-level fault location information misleads LLM, preventing the repair of the Closure-112 bug From Table 5, we can observe that while applying the statementlevel fault location information enhances the repair performance, the extent of this improvement can be potentially compromised with the token number increase of the auxiliary repair-relevant information. For instance, while K0(Basic)\ud835\udc39\ud835\udc3fachieves a performance improvement of 20.6% compared to K0(Basic), such an improvement shrinks to 6.6% comparing BR(ALL)\ud835\udc39\ud835\udc3fto BR(ALL) with averagely 246 tokens and 7.1% comparing PI(ALL)\ud835\udc39\ud835\udc3fto PI(ALL) with averagely 396 tokens. Moreover, we find that 14 bugs that are originally fixable without fault location information cannot be plausibly fixed across all setups and models when using fault location information. For instance in the Closure-112 bug [10] shown in Figure 6b which demands multiple edits, a correct fix is achieved if the model reads the entire method, thus comprehending the necessity of adding Maps.filterKeys to check if each key (of the TemplateType type) exists in the key collection. However, with the fault location information, the attention of the model becomes disturbed, consequently over-focusing on the Map inferred code block and making extensive but ineffective modifications. Finding 6: The statement-level fault location information effectively enhances the repair performance. As the token number of auxiliary repair-relevant information increases, the extent of the improvement can be potentially compromised. 4 DISCUSSION 4.1 Bug report While the bug reports associated with carefully evaluated projects like Defects4J are generally of high quality where their effectiveness can be shown in our evaluation results, they nonetheless include instances of inaccuracies [46]. Specifically, real-world bug reporting is filled with a significant volume of reports that are invalid, irreproducible, incomplete, or outright misleading [22, 25, 51]. Moreover, the process of generating bug reports is manual and labor-intensive, in contrast to the APR techniques seeking to rectify software bugs autonomously, eliminating the need for human intervention [36]. Consequently, relying on bug reports for providing auxiliary repairrelevant information to advance the function-level APR may be inappropriate and impractical, especially when dealing with unknown faults. On the contrary, trigger tests [53, 80, 81] precisely identify the root cause of faults. 
Error messages [59, 81] can be automatically obtained from test outputs and reveal the fault-triggering boundary conditions. Comments provide function descriptions added by developers [62]. These sources of information are more precise and cost-efficient compared to bug reports. Therefore, we recommend the utilization of project-specific information in LLM-based APR techniques to further improve repair performance. \fConference\u201917, July 2017, Washington, DC, USA Jiahong Xiang, Xiaoyang Xu, Fanchu Kong, Mingyuan Wu, Haotian Zhang, and Yuqun Zhang Comment\u2028 Buggy Code Repair Suggestion Model GPT Trigger Test Error Message CoT Repair Suggestion 1 Repair Suggestions Repair Suggestion 2 Repair Suggestion 3 \u2026 Patch Generation Model Magicoder \ud83c\udfa9 // Provide a fix \u2026 {Repair Suggestion}\u2028 // Buggy Function {Buggy Code} // Fixed Function Fixed Function Fixed Function Fixed Function \u2026 Figure 8: The SRepair framework 4.2 Models for APR Although the CodeLlama models have gained a number of plausible fixes in our study, we do observe abnormal behaviors of CodeLlama-based models in the patch generation of the function-level APR. When applied to the Java-based Defects4J dataset, CodeLlama models frequently generate patches with \u2018[PYTHON]\u2019 tags and Python code, e.g., producing 188,113 such patches in the CodeLlama 34B model. This issue was not prevalent in other models. Hence, we advocate using the high-performing GPT-3.5-Turbo and the open-source Magicoder models, both of which have shown superior capabilities in the APR task. 5 APPROACH By far, we have demonstrated the power of adopting the auxiliary repair-relevant information in the function-level LLM-based APR, i.e., including such information in the repair prompt along with the buggy function under zero-shot learning. In this section, to further leverage the potential of the auxiliary repair-relevant information, we construct a novel function-level APR technique SRepair (referring to Suggestion Repair), which adopts a dual-LLM framework for advancing the repair performance. 5.1 SRepair Framework Our Dual-LLM framework is shown in Figure 8 where SRepair first adopts a repair suggestion model which utilizes the learning power of LLM by comprehensively analyzing the auxiliary repair-relevant information via the Chain of Thought (CoT) technique [72]. Then it provides repair suggestions in natural language. Next, SRepair adopts a patch generation model which exhibits its code generation capabilities by generating the entire patched function following the repair suggestions. More specifically, we enable the CoT technique by prompting the LLM to first analyze the buggy function and project-specific information, then identify the root cause of the bug, and finally generate repair suggestions in natural language. For instance, as shown in Figure 9, the repair suggestion model first identifies the root cause of the Cli-26 bug [9]: \u2018are not being reset after creating an Option\u2019, and then generates the correct repair suggestion, \u2018use a try-finally block\u2019. Finally, such a suggestion is fed to the patch generation model for generating the patched functions. 5.2 Evaluation 5.2.1 Dataset. We use the widely studied repair benchmark of Defects4J [41] and QuixBugs [50]. 
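To make the Dual-LLM workflow of Figure 8 concrete, the minimal sketch below chains the repair suggestion model to the patch generation model. It assumes the OpenAI Python client for GPT-3.5-Turbo, reuses a sample_patches-style helper such as the one sketched for Section 3.3 for Magicoder, and paraphrases the CoT steps of Figure 9 rather than reproducing the exact prompts; the sampling parameters are illustrative.

from openai import OpenAI

client = OpenAI()

COT_PROMPT = (
    "Analyze the buggy function, its comment, the trigger test, and the error message. "
    "Then analyze the root cause of the bug. Finally, provide a concise repair suggestion.\n\n"
    "// Comment:\n{comment}\n// Buggy Function:\n{buggy_code}\n"
    "// Trigger Test:\n{trigger_test}\n// Error Message:\n{error_message}"
)

def srepair_candidates(bug: dict, n_suggestions: int = 40, patches_per_suggestion: int = 5):
    # Stage 1: the suggestion model analyzes the auxiliary information and proposes fixes.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": COT_PROMPT.format(**bug)}],
        temperature=0.8,
        n=n_suggestions,
    )
    candidates = []
    for choice in response.choices:
        suggestion = choice.message.content
        # Stage 2: the patch generation model turns each suggestion into patched functions;
        # sample_patches() is the nucleus-sampling helper sketched earlier.
        prompt = ("// Provide a fix for the buggy function following the suggestion\n"
                  f"// Suggestion:\n{suggestion}\n"
                  f"// Buggy Function\n{bug['buggy_code']}\n// Fixed Function\n")
        candidates += sample_patches(prompt, num_samples=patches_per_suggestion,
                                     batch=patches_per_suggestion)
    return candidates

With 40 suggestions and 5 patches per suggestion, this would yield the 200 candidate patches of the SRepair200 configuration; the generated candidates are then validated against the test suite as in the study.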
Specifically, to extensively leverage SRepair\u2019s ability in the function-level APR, we include all functionlevel bugs from Defects4J 1.2 and 2.0, thereby forming a dataset Table 6: Statistics of SRepair Dataset Dataset Project # Bugs SF Bugs MF Bugs Defects4j 1.2 Chart 25 16 9 Closure 140 105 35 Lang 56 42 14 Math 102 74 28 Mockito 30 24 6 Time 22 16 6 Defects4j 2.0 Cli 30 28 2 Codec 13 11 2 Collections 2 1 1 Compress 40 36 4 Csv 13 12 1 Gson 12 9 3 JacksonCore 18 13 5 JacksonDatabind 85 67 18 JacksonXml 5 5 0 Jsoup 58 53 5 JxPath 14 10 4 Overall 665 522 143 that comprises 522 single-function (SF) bugs and an additional 143 multi-function (MF) bugs, i.e., the bugs existing in multiple functions and requiring simultaneous edits on them for a fix, as shown in Table 6. Additionally, we also evaluate on the QuixBugs dataset which is made up of 40 function-level buggy and fixed versions of classic programming problems in both Python and Java. CoT Prompt Input Analyze the buggy code, trigger test and error message Then analyze the root cause Finally, try to provide repair suggestions Comment\u2028 Buggy Code Trigger Test Error Message Root Cause: The OptionBuilder properties are not being reset after creating an Option Repair Suggestions Suggestion: Use a try-finally block to ensure OptionBuilder properties are reset Figure 9: Chain of Thought example of Cli-26 Bug \fHow Far Can We Go with Practical Function-Level Program Repair? Conference\u201917, July 2017, Washington, DC, USA Table 7: Single-function APR result of SRepair Datasets Project Plausible Fixes Correct Fixes PI(ALL) SRepair Variant AlphaRepair Repilot FitRepair ChatRepair SRepair500 GPT-3.5-Turbo Magicoder SRepair2\ud835\udc40 SRepair2\ud835\udc40+\ud835\udc39\ud835\udc3f SRepair200 SRepair500 Defects4J1.2 Chart 12 11 14 14 14 14 9 6 8 15 13 Closure 40 30 39 49 48 56 23 22 29 37 47 Lang 19 25 27 29 29 32 13 15 19 21 26 Math 48 43 50 48 47 55 21 21 24 32 42 Mockito 8 8 12 9 12 12 5 0 6 6 11 Time 7 5 5 6 6 7 3 2 3 3 7 Defects4J2.0 Cli 16 13 16 17 17 19 5 6 6 5 18 Codec 8 5 8 8 8 11 6 6 5 8 11 Collections 0 1 0 1 1 1 0 1 1 0 1 Compress 21 22 21 24 26 28 1 3 2 2 21 Csv 10 9 10 9 10 11 1 3 2 3 11 Gson 6 8 7 7 7 9 2 1 1 3 8 JacksonCore 9 6 9 7 9 10 3 3 3 3 10 JacksonDatabind 30 28 39 38 39 45 8 8 10 9 33 JacksonXml 3 1 1 3 1 3 0 0 0 1 2 Jsoup 34 35 33 35 35 39 9 18 13 14 35 JxPath 2 4 4 5 4 5 1 1 1 0 4 D4J 1.2 Total 134 122 147 155 156 176 74 66 89 114 146 D4J 2.0 Total 139 132 148 154 157 181 36 50 44 48 154 Overall 273 254 295 309 313 357 110 116 133 162 300 5.2.2 Implementation. In the SRepair implementation, GPT-3.5Turbo acts as the repair suggestion model due to its superior analytical, coding, and natural language generation abilities, especially in PI(ALL). Magicoder is adopted as the patch generation model due to its cost-effectiveness and competent code generation ability. Notably, for each repair suggestion, SRepair generates 5 patched functions via the patch generation model. We set the sample size of SRepair 200 (denoted as SRepair200) for comparing with previous APR results in our study section and 500 (denoted as SRepair500) for fair comparisons with previous APR techniques [56, 74, 75, 77]. Similar to prior work [39, 49, 77], we additionally add an end-to-end time limit of 5 hours to fix one bug. Moreover, to better repair the Python bugs in QuixBugs, we replace the Java comment symbol \u2018//\u2019 in the input APR prompt with the Python comment symbol \u2018#\u2019. 
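A minimal sketch of the two-stage Dual-LLM pipeline described above (an illustration only, not the released SRepair implementation; the prompt wording paraphrases Figures 8 and 9, and magicoder_generate is a hypothetical stand-in for a locally served Magicoder model):

from openai import OpenAI

client = OpenAI()

COT_INSTRUCTION = (  # hypothetical wording, paraphrasing the CoT prompt in Figure 9
    "Analyze the buggy code, trigger test and error message. "
    "Then analyze the root cause. Finally, try to provide repair suggestions."
)

def suggest_repairs(comment, buggy_code, trigger_test, error_message, n_suggestions=3):
    # Stage 1: the repair suggestion model (GPT-3.5-Turbo) produces natural-language
    # repair suggestions from the buggy function plus project-specific information.
    user_msg = (f"{COT_INSTRUCTION}\n// Comment\n{comment}\n// Buggy Function\n{buggy_code}\n"
                f"// Trigger Test\n{trigger_test}\n// Error Message\n{error_message}")
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_msg}],
        n=n_suggestions,
    )
    return [choice.message.content for choice in resp.choices]

def generate_patches(buggy_code, suggestion, n_patches=5):
    # Stage 2: the patch generation model turns each suggestion into whole patched functions.
    prompt = (f"// Provide a fix following this suggestion: {suggestion}\n"
              f"// Buggy Function\n{buggy_code}\n// Fixed Function\n")
    return [magicoder_generate(prompt) for _ in range(n_patches)]  # hypothetical local call

def srepair(comment, buggy_code, trigger_test, error_message):
    patches = []
    for suggestion in suggest_repairs(comment, buggy_code, trigger_test, error_message):
        patches.extend(generate_patches(buggy_code, suggestion))
    return patches  # candidate fixed functions, validated afterwards against the test suite

Plausibility of each candidate is then decided by running the project's test suite, as in the evaluation described here.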
It should be noted that SRepair does not require statement-level fault location information. For the remaining setups, we follow our study section. Due to the page limit, we show the experimental results under different configurations and costs of SRepair on our GitHub page [1].

5.2.3 Evaluation Metrics. Following our study section, we utilize plausible patches to reflect the repair performance. Furthermore, following standard practice in APR research, we manually inspect each plausible patch for semantic equivalency [54, 56, 58, 76, 77] to determine the correct patches. Due to the intensive manual effort involved in patch inspection, we conduct a cross-validation with three authors in order to filter out the correct patches generated by SRepair500.

5.2.4 Compared Techniques. We adopt four recent SOTA LLM-based APR techniques: AlphaRepair [77], Repilot [74], FitRepair [75], and ChatRepair [79]. We also adopt GPT-3.5-TurboPI(ALL) and MagicoderPI(ALL) as baselines with the same auxiliary repair-relevant information and models used in SRepair, for studying the effectiveness of our Dual-LLM CoT framework. We also form two SRepair200 variants: SRepair2M with Dual-LLM only, i.e., directly generating repair suggestions without CoT, and SRepair2M+FL with additional statement-level fault location information for comparison.

Table 8: Correct fixes on the QuixBugs datasets
QuixBugs | SRepair500 | SRepair200 | ChatRepair | AlphaRepair
Python | 40 | 40 | 40 | 27
Java | 40 | 40 | 40 | 28

[Figure 10: Bug fixes Venn diagram of SRepair500 with the studied baselines (AlphaRepair, Repilot, FitRepair, ChatRepair) — (a) single-function dataset, where 128 bugs are fixed only by SRepair500; (b) studied baselines dataset, where 35 bugs are fixed only by SRepair500.]

5.2.5 Result analysis. Table 7 presents the APR results for single-function bugs in the Defects4J dataset. Surprisingly, we find that SRepair500 outperforms all previous LLM-based APR techniques by at least 85%. Specifically, we observe that 68.4% of the single-function bugs (357) in Defects4J can be plausibly fixed, and 57.5% of the bugs (300) can even be correctly fixed by SRepair500. Such results indicate that SRepair is capable of fixing a significant number of complicated real-world bugs in the function-level APR. Notably, repairing 300 single-function bugs with SRepair costs only $8.6, averaging $0.029 per correct fix, demonstrating its efficiency as an LLM-based APR technique. Moreover, as shown in Figure 10a, SRepair500 correctly fixes 128 out of 522 single-function bugs that cannot be fixed by any of the baseline LLM-based APR techniques adopted in this paper. Interestingly, Figure 10b shows that SRepair500 also significantly outperforms the state-of-the-art APR baselines on their studied bugs, correctly fixing 35 unique bugs that none of the other baselines fixed. Such results indicate that SRepair not only expands the repair task scope to the more practical function-level APR but also achieves remarkable repair performance without the need for statement-level fault location information. Table 8 shows that SRepair500 successfully fixes all bugs in the QuixBugs dataset, indicating its capability across programming languages.
[Figure 11: The APR results on the multi-function bugs in the Defects4J dataset (plausible and correct fixes of GPT-3.5-TurboPI(ALL), MagicoderPI(ALL), SRepair200, and SRepair500).]

We also evaluate how SRepair repairs complicated multi-function bugs, shown in Figure 11, where we find that SRepair500 (53 plausible fixes and 32 correct fixes) and SRepair200 (42 plausible fixes and 25 correct fixes) both largely outperform GPT-3.5-TurboPI(ALL) and MagicoderPI(ALL). Interestingly, Figure 12 shows a case where Functions 1 and 2 require information from successfully running Function 3 to determine whether they should execute subsequent statements. This poses a significant challenge for APR techniques, as they need to simultaneously alter the return type of Function 3 to boolean and adapt the function calls in Functions 1 and 2. SRepair successfully identifies such a complex function call and generates the correct fix, indicating the power of SRepair on complicated multi-function faults, which, to the best of our knowledge, has not been achieved by any prior APR technique. We further find that SRepair2M outperforms both GPT-3.5-TurboPI(ALL) and MagicoderPI(ALL) by 8.1% and 16.1%, respectively, in terms of the number of plausible fixes. Furthermore, leveraging the CoT technique achieves an even better result (313 plausible fixes) than incorporating statement-level fault localization information (309 plausible fixes). Such results indicate the effectiveness of our Dual-LLM framework and CoT mechanism in SRepair.

[Figure 12: The multi-function bug JacksonDatabind-69 [13] — the fix changes the return type of verifyNonDup (Function 3) from void to boolean (returning false when !explicit and true otherwise), and wraps the calls in addPropertyCreator (Function 2) and addDelegatingCreator (Function 1) in if (verifyNonDup(...)) { ... } guards.]

6 THREATS TO VALIDITY
Threats to internal validity. One potential threat arises from our manual validation process, which differentiates between plausible patches and those that are semantically correct. To address this concern, three authors cross-validated the plausible patches of SRepair500 by comparing them to those created by developers (the plausible patches generated by other techniques are mostly subsets). Another threat is the potential for data leakage if the developer patches were included in the original training data. To address this, we examined all the patches generated in our study and by SRepair500 on the Defects4J dataset. Among the total plausible patches produced in our study, only 7.4‰ are identical to the developer patches. Similarly, for the plausible patches generated by SRepair500, only 1.5‰ match the developer patches. Such overlapping patches have almost no impact on our experimental results. An additional threat lies in the trigger tests adopted in SRepair: the LLMs might have recognized the trigger tests and manipulated them to pass all tests, creating seemingly plausible patches.
Our SRepair\u2019s Dual-LLM mechanism effectively mitigates this threat, as the repair suggestion model only suggests bug fixes without trigger test information, keeping the patch generation model isolated from such data. Threats to external validity. The main threat to external validity lies in our evaluation datasets used which may not well generalize our experimental results. To mitigate this, we evaluate our approach on both the popular Defects4J 1.2 and 2.0 datasets where we include all their single-function bugs in our study. Furthermore, we extend our investigation to multi-function bugs in our SRepair evaluation. We also evaluate SRepair on the QuixBugs datasets, which contain both Java and Python bugs, to validate its generalizability. Threats to construct validity. The threat to construct validity mainly lies in the metrics used. To mitigate this, we adopt the widely-used plausible patches along with their distributions. We also use correct fix to evaluate our approach SRepair. 7 CONCLUSION In this paper, we conduct the first comprehensive study on the function-level LLM-based APR. Our study reveals that LLMs with zero-shot learning are powerful function-level APR techniques. Moreover, directly applying the auxiliary repair-relevant information to LLMs significantly increases the function-level repair performance. Inspired by our findings, we design a Dual-LLM framework utilizing Chain of Thought technique, named SRepair, which achieves remarkable repair performance by correctly fixing 300 single-function bugs in the Defects4J dataset, surpassing ChatRepair [79] by 85% and Repilot [74] by 1.59\u00d7. Notably, SRepair successfully fixes 32 multi-function bugs, which is the first time achieved by any APR technique ever to our best knowledge. DATA AVAILABILITY The data and code are available at GitHub [1] for public evaluation. \fHow Far Can We Go with Practical Function-Level Program Repair? Conference\u201917, July 2017, Washington, DC, USA" + }, + { + "url": "http://arxiv.org/abs/2404.14716v1", + "title": "Bayesian Example Selection Improves In-Context Learning for Speech, Text, and Visual Modalities", + "abstract": "Large language models (LLMs) can adapt to new tasks through in-context\nlearning (ICL) based on a few examples presented in dialogue history without\nany model parameter update. Despite such convenience, the performance of ICL\nheavily depends on the quality of the in-context examples presented, which\nmakes the in-context example selection approach a critical choice. This paper\nproposes a novel Bayesian in-Context example Selection method (ByCS) for ICL.\nExtending the inference probability conditioned on in-context examples based on\nBayes' theorem, ByCS focuses on the inverse inference conditioned on test\ninput. Following the assumption that accurate inverse inference probability\n(likelihood) will result in accurate inference probability (posterior),\nin-context examples are selected based on their inverse inference results.\nDiverse and extensive cross-tasking and cross-modality experiments are\nperformed with speech, text, and image examples. 
Experimental results show the\nefficacy and robustness of our ByCS method on various models, tasks and\nmodalities.", + "authors": "Siyin Wang, Chao-Han Huck Yang, Ji Wu, Chao Zhang", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CV", + "cs.SD", + "eess.AS" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Bayesian Example Selection Improves In-Context Learning for Speech, Text, and Visual Modalities", + "main_content": "Introduction Large language models (LLMs) (Touvron et al., 2023b; OpenAI, 2023a) have achieved great success on many text-based natural language processing (NLP) tasks. By connecting with extra visual and audio encoders (Sun et al., 2023b; Radford et al., 2023), the resulting multimodal LLMs can also achieve remarkable performance on imagetext and audio-text tasks (Li et al., 2023; OpenAI, 2023b; Tang et al., 2023). With the ability of incontext learning (ICL) (Brown et al., 2020), LLMs can adapt to new tasks easily and efficiently in a training-free manner, to generate output following the prompting paradigm based on a few input-label pairs pre-pended to the test input. The existence of ICL ability has also been verified on image-text and audio-text tasks (Tsimpoukelli et al., 2021; Wang et al., 2023c; Hsu et al., 2023; Pan et al., 2023). (i) Random Selected Example(s) (ii) Inverse Inference (iii) Bayesian Selected Example(s) text similarity score-based reranking estimated probabilities datastore (few-shot with k samples) (k samples in-context learning) Figure 1: A brief illustration of the proposed Bayesian in-context example selection includes: (i) first randomly selecting k examples; (ii) examining the examples in the datastore through \u201cinverse inference,\u201d where the test input-label pair serves as the in-context example; and (iii) selecting samples with correct label predictions as good examples (colored in blue), considered to have high mutual information interaction with the test input. Although ICL requires no gradient descent and thus does not suffer from the instability caused by stochastic optimisation compared to other testtime adaptation approaches, care still needs to be taken when selecting the in-context examples since they often lead to distinct ICL performance variations (Zhao et al., 2021; Min et al., 2022; Lu et al., 2022b). Prior work on in-context example selection trains an example retrieval module (Rubin et al., 2022; Zhang et al., 2022; Lu et al., 2022a; Wang et al., 2023b), selects close examples in embedding space (Liu et al., 2022; An et al., 2023; Qin et al., 2023), or leverages the feedback of LLMs to score the examples (Su et al., 2022; Nguyen and Wong, 2023; Iter et al., 2023; Mavromatis et al., 2023). While boosting ICL performance, most methods treat in-context examples and test input separately, overlooking their mutual interactions. This paper proposes ByCS (Bayesian in-Context example Selection), a novel in-context example selection approach focusing on mutual information interactions based on the Bayesian formula. Refer to the inference of test input conditioned on in-context examples as ICL inference, and the inference of in-context example\u2019s input based on the test input-label pair as the inverse inference. arXiv:2404.14716v1 [cs.CL] 23 Apr 2024 \fBy introducing inverse inference via Bayes\u2019 theorem, ByCS leverages the inverse inference result to evaluate the quality of each in-context example. 
Assuming the contextual information interaction is mutual, an accurate inverse inference is likely to result in an accurate inference. Examples with accurate inverse inference results are selected as optimal examples. Extensive experiments across the audio, image, and text modalities are conducted to verify the effectiveness and robustness of ByCS, covering ASR, visual question answering (VQA), and NLP tasks (including topic classification, sentiment analysis, and text-to-SQL). Our main contributions are summarised as follows:
• ByCS, a novel in-context example selection method inspired by Bayes' theorem, is proposed. To improve efficiency, the use of a smaller model for fast inverse inference and a ranking-based pre-selection that reduces the number of candidate in-context examples are also proposed in this paper.
• The method is verified using both "decoder-only" ICL on NLP tasks and "encoder-decoder" ICL on ASR and VQA. To the best of our knowledge, this is the first in-context example selection method verified across the text, audio, and visual modalities, as shown in Figure 2.

2 Related Work
Multimodal ICL. Inspired by the decoder-only ICL in text-based NLP, efforts have been made to extend such few-shot learning ability to other modalities, in particular image and audio. Frozen (Tsimpoukelli et al., 2021) is the first attempt to exploit ICL ability in a vision-language model (VLM). By using a vision encoder to map the input image to textual tokens in the input embedding space of a frozen text language model, Frozen can handle interleaved image and text input and achieve image-text ICL. Other work improves VLMs' ICL ability by using adapter blocks (Eichenberg et al., 2022), adding blockwise modality fusion structures (Alayrac et al., 2022), and scaling up the model size (Sun et al., 2023a). In the audio modality, Borsos et al. (2023) proposed AudioLM, a language model based on quantised audio tokens for audio generation tasks, which exhibits ICL ability for audio continuation.

[Figure 2: Multimodal ICL — (a) text ICL, (b) ASR ICL, (c) VQA ICL. Although ICL on different modalities shares the same formula expression, the actual inputs and inference model architectures differ. For ASR ICL on Whisper, the speech is fed into the encoder while the example text labels are fed into the decoder, which is aware of the speech input through cross-attention with the encoder. For VQA ICL, images are first encoded into the same embedding space as the LM's input, and the interleaved images and texts are then fed into the decoder LM.]

Similarly, Wang et al.
(2023c) presented the first ICL work for ASR based on paired speech-text examples, adapting the Whisper (Radford et al., 2023) model to achieve considerable word error rate (WER) reductions on unseen Chinese dialects. Further explorations enabled recent speech-language models to perform ICL on more speech input tasks through warm-up training (Hsu et al., 2023) or speech instruction-tuning (Pan et al., 2023).

In-Context Example Selection Methods. Rubin et al. (2022) proposed a scoring LM to retrieve in-context examples using contrastive learning, which can also be trained with reinforcement learning algorithms such as Q-learning (Zhang et al., 2022) and policy gradient (Lu et al., 2022a). Alternatively, examples that are semantically similar to the test input can be selected. Liu et al. (2022) proposed to select the k nearest neighbours (kNN) in the embedding space of the examples. When combined with chain-of-thought prompting (Wei et al., 2022), Qin et al. (2023) proposed to select examples in the embedding space of the reasoning path. LLM feedback is often used in in-context example selection. Iter et al. (2023) selected in-context examples with the cross-entropy differences of a fine-tuned model, based on the assumption that ICL may act as implicit gradient descent (Dai et al., 2022). Nguyen and Wong (2023) identified highly impactful examples according to their proposed influence score. Although ByCS also uses LLM feedback when evaluating the quality of in-context examples through inverse inference, it leverages the text similarity between the inverse inference results and the corresponding ground-truth labels, without requiring complete output probability distributions, which are often unavailable for commercial LLMs. Wang et al. (2023d) selected optimal in-context examples in a Bayesian framework by viewing LLMs as latent variable models and ICL as latent concept learning. In comparison, ByCS directly extends the ICL inference probability using Bayes' theorem. Xu and Zhang (2024) selected examples with a high discrepancy between the labels and the LLM's outputs when performing question answering. ByCS also selects examples from candidates in a datastore based on the LLM's outputs, but it computes the mutual information interactions between the in-context examples and the test input.

[Figure 3: The detailed pipeline of the ByCS method: (1) first-round inference to estimate the label of the test input, Ŷ = arg max P(Y|Cinput, Clabel, X); (2) inverse inference on each example in the datastore, where the test input and the estimated label serve as the in-context example, Ĉlabel = arg max P(Clabel|X, Ŷ, Cinput) (a detailed illustration of inverse inference can be found in Figure 5 in the Appendix); (3) ranking of the in-context examples by the text similarity Q = Similarity(Clabel, Ĉlabel) between the inverse inference result and the true context label, selecting the examples with the highest scores, i.e., the highest mutual information interaction.]
3 Methodology
As shown in Figure 3, given a test input X and paired in-context examples (Cinput, Clabel), LLMs predict the most probable answer Ŷ by maximising the inference probability P(Y|Cinput, Clabel, X):

Ŷ = arg max P(Y|Cinput, Clabel, X),  (1)

where Cinput and Clabel are the inputs and labels of different data types in different tasks. For text-based NLP tasks, Cinput and Clabel are text questions and the corresponding answers. For ASR, Cinput and Clabel are speech audio and the corresponding text transcriptions. For VQA, Cinput are images together with text questions about the images, and Clabel are the text answers. The inference probability can be expanded using Bayes' theorem:

P(Y|Cinput, Clabel, X) = P(Clabel|X, Y, Cinput) P(Y|X, Cinput) / P(Clabel|X, Cinput).  (2)

The likelihood P(Clabel|X, Y, Cinput) is termed the inverse inference probability, since it can be interpreted as the probability of the context label Clabel when the test input-label pair (X, Y) is inversely treated as the in-context example. ByCS focuses on the inverse inference probability and assumes, for simplification, that the influence of the prior P(Y|X, Cinput) is subordinate. In practice, since the ground-truth label Yref of the test input X is not available, the correct likelihood P(Clabel|X, Yref, Cinput) is approximated by P(Clabel|X, Ŷ, Cinput), where Ŷ is produced by a first-round inference. Specifically,
• First, the first-round inference is performed to produce a hypothesized label Ŷ for the test input X. This can be done with the standard decoding rule without any in-context examples, Ŷ = arg max P(Y|X). Better performance can be achieved when the hypothesized label is obtained with in-context examples, Ŷ = arg max P(Y|C̃input, C̃label, X), following Eqn. (1), where (C̃input, C̃label) is a pair of first-round in-context examples selected either randomly or with another example selection method.
• Next, for each candidate in-context example in the datastore, generate the inverse inference result Ĉlabel based on the approximated inverse inference probability P(Clabel|X, Ŷ, Cinput), i.e., Ĉlabel = arg max P(Clabel|X, Ŷ, Cinput).
• Last, compute Q = Similarity(Clabel, Ĉlabel), the text similarity between Clabel and Ĉlabel, and use Q as the metric for evaluating the quality of the inverse inference. Since a more accurate inverse inference probability usually results in higher text similarity, ByCS selects the in-context examples with the highest Q. Note that Q is adopted because it does not require access to the LLM's output probability distribution, which is often unavailable for commercial LLMs.

To reduce the computation cost of inverse inference, two methods are used when the number of examples in the datastore is large:
• Conduct inverse inference using a model from the same model family as the inference model but with a smaller size.
• Apply ByCS to a small number (e.g., N) of pre-selected candidate examples. In pre-selection, all examples in the datastore are first ranked, and only the top N examples are kept as the pre-selected candidates. The pre-selection is performed with fast ranking-based algorithms such as kNN.

4 Experimental Setup
4.1 Models
Experiments are performed on the audio, text, and image modalities.
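To make the three-step procedure above concrete before describing the models, here is a minimal sketch of the selection loop (an illustration only: infer and inverse_infer are hypothetical wrappers around the inference model, and the Jaccard coefficient shown is just one of the Similarity() choices evaluated later):

def jaccard(a, b):
    # Jaccard coefficient over word sets, one possible Similarity() measurement.
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(len(sa | sb), 1)

def bycs_select(test_input, datastore, k, infer, inverse_infer, first_round_examples=None):
    # datastore: list of (c_input, c_label) candidate in-context examples.
    # infer(x, examples) -> hypothesized label; inverse_infer(c_input, x, y_hat) -> predicted c_label.
    # Step 1: first-round inference, optionally conditioned on a provisional example pair.
    y_hat = infer(test_input, first_round_examples or [])
    # Step 2: inverse inference for every candidate, scored by the text similarity Q.
    scored = []
    for c_input, c_label in datastore:
        c_label_hat = inverse_infer(c_input, test_input, y_hat)
        scored.append((jaccard(c_label, c_label_hat), (c_input, c_label)))
    # Step 3: keep the k candidates with the highest Q (highest mutual interaction).
    scored.sort(key=lambda item: item[0], reverse=True)
    return [example for _, example in scored[:k]]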
For audio-text and image-text tasks, ASR and VQA are used to evaluate the ICL ability of encoder-decoder structured models. For text-only NLP tasks, topic classification, sentiment analysis, and text-to-SQL are used to evaluate the ICL performance of decoder-only models. For the NLP tasks, experiments are conducted using GPT-3.5-Turbo and GPT-4 (OpenAI, 2023a). For the ASR task, the open-sourced Whisper model (Radford et al., 2023) is used, a series of speech models released by OpenAI. The Whisper model family uses the vanilla encoder-decoder Transformer (Vaswani et al., 2017) architecture, ranging from 39 million (M) parameters (tiny) to 1.55 billion (B) parameters (large). Specifically, the Whisper small (244M) and Whisper large-v2/-v3 (1.55B) models are used. For the VQA task, experiments are performed on Emu2 (Sun et al., 2023a) and GPT-4V (OpenAI, 2023b). Emu2 is a 37B text-image model (VLM) that leverages the pretrained EVA-02-CLIP-E-plus (Sun et al., 2023b) and LLAMA-33B (Touvron et al., 2023a) and has ICL ability when taking interleaved inputs of images and texts. For experiments on Emu2, the outputs are generated with greedy decoding for fast evaluation. GPT-4V is a GPT-4 variant that can directly perceive image inputs, showing state-of-the-art image understanding performance.

4.2 Datasets
Seven datasets covering NLP, ASR, and VQA are used in this paper. For text-only ICL, four datasets spanning four task categories are used: the TREC dataset for topic classification (Voorhees and Tice, 2000), the SST2 dataset for sentiment analysis (Socher et al., 2013), the Spider dataset for text-to-SQL (Yu et al., 2018), and the CHiME4 (Vincent et al., 2017) split of the HyPoradise dataset (Chen et al., 2023) for generative language model re-scoring to correct pre-generated ASR transcriptions. For audio-text ICL, two datasets are used for ASR, namely RASC863 (ChineseLDC.org, 2004) and CORAAL (Gunter et al., 2021). RASC863 is a commonly used Chinese dialect ASR dataset, and its dialectal-word splits for the Chongqing and Guangzhou dialects are used. CORAAL is an English corpus with speech recordings from regional African Americans. For image-text ICL, VQA experiments are conducted on OKVQA (Marino et al., 2019), a dataset that requires methods to draw upon external knowledge to answer the visual questions.

4.3 Baselines
On all three modalities, random selection and an improved KATE (Liu et al., 2022) are used as baseline approaches. For random selection, in-context examples are uniformly selected from the example datastore three times and the average results are reported. For KATE (Liu et al., 2022), the k neighbours nearest to the test input in the embedding space, in terms of Euclidean distance, are selected. For ASR ICL, the encoder of Whisper large-v2 acts as the embedding retrieval module on the Chinese dataset, while on the English dataset we use the encoder of Whisper large-v3. In text ICL, OpenAI text-embedding-ada-002 is used as the embedding retrieval model.
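For reference, the KATE-style baseline just described can be sketched as follows (illustrative only; it uses the OpenAI embeddings endpoint named above and plain Euclidean distance, whereas the ASR experiments use Whisper encoder states as the embedding space instead):

import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    # Embed a batch of strings with text-embedding-ada-002.
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([item.embedding for item in resp.data])

def kate_select(test_input, datastore, k):
    # Select the k examples whose inputs are nearest to the test input in embedding space.
    vectors = embed([c_input for c_input, _ in datastore] + [test_input])
    candidates, query = vectors[:-1], vectors[-1]
    distances = np.linalg.norm(candidates - query, axis=1)  # Euclidean distance, as in KATE
    nearest = np.argsort(distances)[:k]
    return [datastore[i] for i in nearest]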
For VQA ICL, KATE is only based on the embedding space of the query \fCorpus & In-context example number k Setting RASC863 Chongqing RASC863 Guangzhou CORAAL <15s k = 1 k = 2 k = 3 k = 4 k = 1 k = 2 k = 3 k = 4 k = 1 random 67.1 56.1 52.7 51.0 61.7 38.3 31.2 28.8 12.4 KATE+ 67.1 54.7 51.3 49.7 61.3 36.1 26.9 24.8 12.0 ByCS 62.4 53.4 50.6 48.6 49.5 31.9 27.1 26.6 11.7 oracle ByCS 62.4 52.4 49.5 47.2 49.4 30.7 25.8 24.7 11.7 (a) Results with Whisper-large-v2 Corpus & In-context example number k Setting RASC863 Chongqing RASC863 Guangzhou CORAAL <15s k = 1 k = 2 k = 3 k = 4 k = 1 k = 2 k = 3 k = 4 k = 1 random 68.9 60.3 57.0 55.7 67.1 42.8 38.3 35.2 11.6 KATE+ 68.1 58.2 54.8 54.1 67.7 41.3 34.3 31.6 11.4 ByCS 63.5 56.3 53.5 51.8 50.7 36.7 33.0 31.5 11.3 oracle ByCS 63.4 55.2 53.0 50.7 51.3 35.6 31.9 30.7 11.2 (b) Results with Whisper-large-v3 Table 1: %WERs on RASC863 dialectal word dataset and CORAAL with different in-context example selection methods. For RASC863, the example datastore is the RASC863 dialectal word dataset of the corresponding dialect. For CORAAL, the size of the example datastore for ByCS is narrowed down to 10 using kNN algorithm. For the \u201coracle ByCS\u201d setting, the ground-truth label Yref is used in the inverse reference. image and EVA02-CLIP-bigE-14-plus (Sun et al., 2023b) serves as the embedding retrieval module. We use the term \u201cKATE+\u201d to refer to the baseline in our paper, putting stress on the fact that it is actually an improved KATE version enhanced using stronger embedding retrieval models, which results in better performance. For text ICL, bm25 (Robertson et al., 1995) and LLM-R (Wang et al., 2023b) are also compared as baselines. bm25 is a ranking metric originally designed for search engines to estimate the relevance of documents to a given query based on word-overlapping similarity. LLM-R provides a recent and preferment dense retriever distilled using a reward model trained based on LLM feedback. 5 Results 5.1 ASR ICL Results in WER are reported for ASR tasks in Table 1, and here in Chinese WER is calculated based on Chinese characters, which is also termed as character error rate. The ByCS method outperforms the KATE+ baseline in most cases, showing the robustness and effectiveness of our method. When the number of in-context examples k is small, ByCS surpasses KATE+ baseline in a large margin, with a 10.25% relative WER reduction on average when k = 1. Such performance advantage of ByCS reduces when the number of in-context examples increases, which may be attributed to the fact that ByCS performs the inverse inference of each in-context example individually by applying an independence assumption that ignores the contextual interactions between different in-context examples. The use of Yref in \u201coracle ByCS\u201d further boosts the performance gain, indicating the upper bound of our method with the same number of k. 5.2 Ablation study on ASR ICL 5.2.1 Inverse decoding option The influence of different decoding options of inverse inference is studied on the RASC863 dialectal word dataset. The results are shown in Table 2. For the setting notation, \u201cnoprompt\u201d denotes decoding in the default decoding option, and \u201cprompt\u201d means to decode with a specially designed prompt \u201c\u8bc6\u522b\u65b9\u8a00\u201d (meaning to \u201crecognize dialect speech\u201d). \u201cLID\u201d denotes decoding with the correct language identity of Chinese (\u201czh\u201d). 
The results show that, among the three inverse decoding options, "noprompt" obtains the best performance, "prompt" comes second, and "LID" is the worst. The WERs of the inverse inference itself are reported in Table 3. The WERs under the "noprompt" setting exceed 100% due to the high insertion error rate. Repeated outputs are not removed when calculating the WERs of inverse inference or when calculating the text similarity, which makes the distinction between examples with high mutual information interaction and those with low interaction more obvious. Although it may seem counter-intuitive that low inverse inference accuracy leads to high ByCS selection performance, it is reasonable: inverse inference in ByCS serves to separate good in-context examples from the rest, and this separation is sharper with weaker decoding options, because such options cause the model to make more mistakes for the poorer in-context examples.

Table 2: %WERs of Whisper large-v2 on the RASC863 dialectal word dataset using ByCS with different inverse decoding options and text similarity measurements. The number of in-context examples is k = 1.
Text similarity measurement | Inverse decoding option | RASC863 Chongqing | RASC863 Guangzhou
Jaccard coefficient | noprompt | 62.4 | 49.5
Jaccard coefficient | prompt | 62.9 | 50.7
Jaccard coefficient | LID | 64.1 | 52.3
BERT wordvecs | noprompt | 62.4 | 51.5
BERT wordvecs | prompt | 63.5 | 56.8
BERT wordvecs | LID | 64.5 | 57.7

Table 3: Inverse inference %WERs of Whisper large-v2 on the RASC863 dialectal word dataset with different inverse decoding options.
Inverse decoding option | RASC863 Chongqing | RASC863 Guangzhou
noprompt | 91.5 | 125.2
prompt | 70.2 | 70.1
LID | 54.6 | 61.7

5.2.2 Text similarity measurement
The results of ByCS with different text similarity measurements are also reported in Table 2. Regarding the setting notation, the "Jaccard coefficient" is a commonly used similarity statistic, defined as the intersection over the union of the word sets of two sentences. "BERT wordvecs" measures similarity by the Euclidean distance in the embedding space of BERT-encoded word vectors; the embedding retrieval module is bert-base-chinese1. ByCS with the Jaccard coefficient as the text similarity has lower WERs, which may be because the training data of the BERT model does not include sufficient dialectal Chinese words and expressions. It also indicates that ByCS works well even with a simple rule-based text similarity measurement, further verifying its robustness. The Jaccard coefficient is used as the text similarity measurement in later experiments unless explicitly specified, due to its performance and simplicity.

Table 4: %WERs on the RASC863 Chongqing dialectal word dataset with ByCS using different inverse inference models. ByCSlargev3 and ByCSsmall use Whisper large-v3 and Whisper small as the inverse inference model, respectively.
(a) Results with Whisper large-v2
Setting | k = 1 | k = 2 | k = 3 | k = 4
KATE+ | 67.1 | 54.7 | 51.3 | 49.7
ByCSlargev2 | 62.4 | 53.4 | 50.6 | 48.6
ByCSsmall | 64.2 | 53.3 | 50.5 | 48.7
(b) Results with Whisper large-v3
Setting | k = 1 | k = 2 | k = 3 | k = 4
KATE+ | 68.1 | 58.2 | 54.8 | 54.1
ByCSlargev3 | 63.5 | 56.3 | 53.5 | 51.8
ByCSsmall | 64.4 | 56.5 | 54.1 | 51.7

5.2.3 Inverse inference model
The inverse inference with different models is also investigated, with the results displayed in Table 4.
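As a rough illustration of what ASR inverse inference involves in practice, the sketch below uses the open-source openai-whisper package with a smaller Whisper model; it only approximates the paper's setup (conditioning the decoder on the test utterance's first-round transcript is emulated here with the library's initial_prompt argument, and the decoding options correspond to the settings ablated above):

import whisper

inverse_model = whisper.load_model("small")  # a smaller model from the same family

def inverse_inference_asr(candidate_audio, test_transcript_hat, option="noprompt"):
    # Transcribe a candidate example while conditioning the decoder on the first-round
    # transcript of the test utterance; the output is compared to the candidate's
    # reference transcript (e.g., via the Jaccard coefficient) to score the example.
    kwargs = {"initial_prompt": test_transcript_hat}
    if option == "prompt":
        kwargs["initial_prompt"] = "识别方言" + test_transcript_hat  # dialect-recognition prompt
    elif option == "LID":
        kwargs["language"] = "zh"  # decode with the correct language identity
    result = inverse_model.transcribe(candidate_audio, **kwargs)
    return result["text"]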
A smaller model is used for inverse inference to speed up ByCS, since it is expensive to perform inverse inference using the inference model for every candidate example in datastore. Replacing Whisper-large-v2/v3 with Whisper-small will speed up six times2. For the notation, the subscript denotes the inverse inference model. For example, ByCSsmall is the ByCS method with Whisper small 1https://huggingface.co/ bert-base-chinese 2https://github.com/openai/whisper \fCorpus & In-context example number k Setting TREC(%Acc. \u2191) SST2(%Acc. \u2191) Spider(%Acc. \u2191) HyPoradise CHiME-4 (%WER \u2193) k = 1 k = 2 k = 4 k = 1 k = 2 k = 1 k = 1 k = 2 k = 5 default 63.0 92.92 67.41 8.0 random 63.5 72.7 75.3 94.96 94.80 67.02 7.5 7.5 7.3 KATE+ 78.8 86.4 91.0 95.05 94.69 69.44 7.7 7.1 6.8 bm25 74.6 89.4 89.8 95.27 95.40 67.41 7.4 7.5 8.1 LLM-R 78.0 88.8 90.4 95.05 94.02 67.82 7.4 6.9 7.0 ByCS 81.2 88.0 90.6 95.16 95.04 69.63 7.1 6.8 6.4 (a) Results using GPT-3.5-Turbo Corpus & In-context example number k Setting TREC(%Acc. \u2191) SST2(%Acc. \u2191) Spider(%Acc. \u2191) HyPoradise CHiME-4 (%WER \u2193) k = 1 k = 2 k = 4 k = 1 k = 2 k = 1 k = 1 k = 2 k = 5 default 75.2 95.01 69.63 11.6 random 81.3 82.5 84.6 96.38 96.11 70.66 6.9 6.8 6.5 KATE+ 88.2 91.6 93.4 96.43 95.85 71.95 7.0 6.3 5.8 bm25 81.8 87.4 91.4 96.19 96.09 71.47 6.8 6.6 6.3 LLM-R 88.2 91.0 93.6 95.74 95.06 72.63 6.8 6.3 5.9 ByCS 88.6 92.4 93.6 96.55 96.31 72.82 6.7 6.3 5.9 (b) Results using GPT-4 Table 5: Results of four text ICL tasks on two GPT-family models with different in-context example selection methods. The evaluation metrics are denoted in the brackets. The example datastore is narrowed down to a small size using kNN for ByCS. In the \u2018default\u2019 setting, the answers are generated directly with the questions without ICL. as an inverse inference model. ByCSsmall has similar results to ByCSlargev2 and ByCSlargev3, verifying the effectiveness of using a smaller model from the same family for inverse inference. This is intuitive since Whisper-small is trained using the same data and settings compared to the inference model Whisper-large-v2 and Whisper-large-v3, which therefore processes information similarly and can serve as a good alternative when evaluating the quality of the in-context examples. The smaller size of Whisper-small makes ByCS a more practical method in cost-sensitive scenarios. 5.3 Text ICL Text-only ICL results are shown in Table 5. As shown, ByCS outperforms all baselines on most dataset settings, showing not only the effectiveness but also the robustness of ByCS. In particular, ByCS outperforms the best baseline on the generative ASR rescoring dataset HyPoradise with a considerable 4.7% relative WER reduction with GPT3.5-Turbo. On TREC and SST2 datasets, ByCS does not always outperform the baselines. This indicates that ByCS is more suitable for open-ended long-answer datasets due to the calculation of text similarity in ByCS, in which answers are much more diverse and examples with rich information interactions can be better separated. In contrast, in multi-choice classification datasets, only a few short answers are often available, containing little contextual information. As the example shown in Figure 4, the distribution of the text similarity for ranking the examples is often sharp, merging the optimal and the suboptimal examples. 
Furthermore, considering the hypothesized labels of the test inputs for inverse inference, the hypothesized answers in open-ended datasets (in the form of long sentences) are often more similar to their corresponding references compared to those in the multi-choice classification datasets (in the form of a word or phrase or just an index of choice). It is observed that different in-context example selection methods perform differently with different models, even though on the same dataset. The bm25 method outperforms the KATE+ method with GPT-3.5-Turbo on the SST2 dataset, but not with GPT4. Compared to KATE+ and bm25 that is \fmodel-free in the actual selection step, the performance advantage of ByCS is more consistent since it takes into account the influence of the model. The outputs of the inverse inference model are used, which can serve as a good approximation to the inference model as verified in Section 5.2.3. Note that for ByCS on GPT-4, although the inverse inference procedure is conducted on GPT-3.5Turbo, the performances of ByCS are still superior. This further verifies that smaller models from the same model family can serve as a good low-cost approximation of the inverse inference model. (a) Distribution on SST2 (b) Distribution on HyPoradise Figure 4: The distribution of text similarity scores on different datasets. The text similarity score is the Jaccard coefficient. The entropy of distribution is calculated and placed on the upper left. The distribution on the multichoice classification dataset SST2 (blue) is much sharper than that of the open-ended dataset HyPoradise (red). 5.4 VQA ICL ByCS is tested on VQA ICL and the results are reported in Table 6. ByCS outperforms the KATE+ baseline on the VQA ICL task, demonstrating strong performances across modalities. The performance improvement from ByCS is not as obvious as in audio and text tasks, since the answers of VQA are usually short (usually a word or phrase), lacking sufficient contextual information. ByCS on In-context example number k Example selection method KATE+ ByCS k = 2 40.47 40.12 k = 4 45.11 45.14 (a) Results with Emu-2 In-context example number k Example selection method KATE+ ByCS k = 2 52.54 52.86 k = 4 54.00 54.39 (b) Results with GPT-4V Table 6: Results of VQA ICL with different in-context example selection methods and numbers of examples on OKVQA dataset. the VQA dataset suffers from the problem of having sharp text similarity score distributions, similar to the multichoice classification dataset. For ByCS with GPT-4V, inverse inference results on Emu-2 are used to pre-select the candidate examples, and ByCS still outperforms the KATE+ baseline. The performance may be further improved if GPT-4V is also used for inverse inference. This demonstrates that ICL may perform similarly cross models not only on speech and text, but also on images. 6 Conclusion This paper proposes ByCS, a novel in-context example selection method based on Bayes\u2019 theorem, which assumes that contextual information interaction is mutual between the test input and in-context examples and selects high-quality examples based on the inverse inference results. Experiments are performed across three modalities: speech, text, and images, using six different tasks and seven datasets. Results demonstrated the robustness and effectiveness of ByCS. It is also validated that the inverse inference results can be approximated using a smaller model from the same model family, which considerably reduces the computational cost. 
Moreover, relying on text similarity to rank in-context examples, ByCS is more suitable for open-ended long-answer datasets which contain sufficient contextual information. Future work is to extend the inverse inference to sequences with multiple incontext examples to model the interactions among the in-context examples. \fLimitations There are two limitations to this work. First, ByCS follows the simple assumption that the influence of each in-context example is independent and treats each in-context example individually, which neglects the contextual interactions between incontext examples. The approximation may be not adapted to the scenario in which the number of in-context examples is high. Another limitation is that sufficient contextual diversity is required by ByCS to select optimal examples for it depends on text similarity to evaluate inverse inference results. ByCS may suffer performance penalty when applied to a short-answer dataset. Future work includes enhancing ByCS in more scenarios. Ethics Statement The work doesn\u2019t give rise to any ethical risks and issues. All the models and data used in this paper are publicly accessible and used under licenses." + }, + { + "url": "http://arxiv.org/abs/2404.13414v3", + "title": "Evaluating the Effectiveness of LLMs in Introductory Computer Science Education: A Semester-Long Field Study", + "abstract": "The integration of AI assistants, especially through the development of Large\nLanguage Models (LLMs), into computer science education has sparked significant\ndebate. An emerging body of work has looked into using LLMs in education, but\nfew have examined the impacts of LLMs on students in entry-level programming\ncourses, particularly in real-world contexts and over extended periods. To\naddress this research gap, we conducted a semester-long, between-subjects study\nwith 50 students using CodeTutor, an LLM-powered assistant developed by our\nresearch team. Our study results show that students who used CodeTutor (the\nexperimental group) achieved statistically significant improvements in their\nfinal scores compared to peers who did not use the tool (the control group).\nWithin the experimental group, those without prior experience with LLM-powered\ntools demonstrated significantly greater performance gain than their\ncounterparts. We also found that students expressed positive feedback regarding\nCodeTutor's capability, though they also had concerns about CodeTutor's limited\nrole in developing critical thinking skills. Over the semester, students'\nagreement with CodeTutor's suggestions decreased, with a growing preference for\nsupport from traditional human teaching assistants. Our analysis further\nreveals that the quality of user prompts was significantly correlated with\nCodeTutor's response effectiveness. Building upon our results, we discuss the\nimplications of our findings for integrating Generative AI literacy into\ncurricula to foster critical thinking skills and turn to examining the temporal\ndynamics of user engagement with LLM-powered tools. 
We further discuss the\ndiscrepancy between the anticipated functions of tools and students' actual\ncapabilities, which sheds light on the need for tailored strategies to improve\neducational outcomes.", + "authors": "Wenhan Lyu, Yimeng Wang, Tingting, Chung, Yifan Sun, Yixuan Zhang", + "published": "2024-04-20", + "updated": "2024-05-03", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Evaluating the Effectiveness of LLMs in Introductory Computer Science Education: A Semester-Long Field Study", + "main_content": "INTRODUCTION Recent advancements in Generative AI and Large Language Models (LLMs), exemplified by GitHub Copilot [15] and ChatGPT [32], have demonstrated their capacity to tackle complex problems with human-like proficiency. These innovations raise significant concerns within the educational domain, particularly as students might misuse these tools, thereby compromising the quality of education and breaching academic integrity norms [36]. Specifically, entrylevel computer science education is directly affected by the progress in LLMs [59]. LLMs\u2019 capability in handling programming tasks means they can complete many assignments typically given in introductory courses, thus becoming highly appealing to students looking for easy solutions. Despite these challenges, LLM-powered tools offer great opportunities to enrich computer science education [23]. When used ethically and appropriately, they can serve as powerful educational resources. For instance, LLMs can provide students instant feedback on their coding assignments or generate diverse examples of code that help demonstrate programming concepts [35]. Moreover, as Generative AIs are becoming popular in production environments, arXiv:2404.13414v3 [cs.HC] 3 May 2024 \fL@S \u201924, July 18\u201320, 2024, Atlanta, GA, USA Wenhan Lyu, Yimeng Wang, Tingting (Rachel) Chung, Yifan Sun, & Yixuan Zhang familiarizing students with these technologies is increasingly becoming a crucial aspect of computer science education. The unique challenges posed by LLMs stem from the difficulty in detecting the use of AI tools [54, 58]. Traditional approaches, such as plagiarism detection software, fall short in determining the originality of student submissions [28]. Given the challenges in identifying LLMs usage and recognizing the potential advantages of these technologies, we consider integrating LLMs into computer science education inevitable. As students have already started using such tools, the impact of LLMs on computer science education remains unknown. Indeed, a growing body of research has begun to explore the application of LLMs within educational settings, primarily focusing on assessing the capabilities of current models with existing datasets or previous assignments from students [18, 27]. However, there is still a research gap in understanding how students interact with LLM-powered tools in introductory programming classes, particularly regarding their engagement in genuine learning settings over extended periods. Furthermore, while previous studies have shown individual differences in intelligent tutoring systems [22], research into how these differences apply to LLM tools is lacking. Investigating these variations is important for tailoring educational strategies to diverse student needs. 
In short, understanding these nuanced attitudes of and interactions with LLM-powered tools in CS education over extended periods is crucial for identifying the evolving challenges and opportunities LLMs introduce. To address the research gap, we asked the following research questions (RQs) in this work: RQ1. Does the integration of LLM-powered tools in introductory programming courses enhance or impair students\u2019 learning outcomes, compared to traditional teaching methods? How are individual differences associated with students\u2019 learning outcomes using LLM-powered tools? RQ2. What are students\u2019 attitudes towards LLM-powered tools, how do they change over time, and which factors might influence these attitudes? RQ3. How do students engage with LLM-powered tools, and how do they respond to their programming needs? We believe that addressing the following research questions (RQs) is critical for enabling researchers to make informed decisions about incorporating LLMs into their courses and guiding students on the optimal and responsible use of LLM-powered tools. To answer the questions, we conducted a longitudinal, betweensubject field study with 50 students over the course of the fall semester from September to December 2023 with a web-based tool we developed called CodeTutor. The contributions of this work are: 1) We conducted a semesterlong longitudinal field study to assess the effectiveness of an LLMpowered tool (CodeTutor) on students\u2019 learning outcomes in an introductory programming course. By comparing the performance of students who used CodeTutor against those who did not, our study contributes to new empirical evidence regarding the role of LLM-powered tools in the programming learning experience; 2) We characterized patterns of student engagement with CodeTutor and analyzed the ways in which it can meet students\u2019 learning needs. Through the analysis of conversational interactions and feedback loops between students and the tool, we contributed new knowledge regarding how CodeTutor facilitates or impedes learning; and 3) We offered insights and outlined design implications for future research. 2 RELATED WORK 2.1 Intelligent Tutoring Systems Using computerized tools for assisting educational purposes is not a new idea. As early as the 1950s, the first concept of using computers to assist learning has already emerged [29]. From where the factor of intelligence had been considered and it had started evolving into Intelligent Tutoring Systems (ITS) [46]. ITS leverages artificial intelligence to provide personalized learning experiences in computer science education, adapting instruction and feedback to individual student needs [3, 14]. These systems have enhanced student engagement, comprehension, and problem-solving skills by offering tailored support and immediate feedback, similar to one-on-one tutoring [10, 52]. Research has demonstrated that ITS can significantly improve understanding of complex concepts in programming courses compared to traditional teaching methods, leading to higher student satisfaction due to the personalized learning environment [9, 42]. The Internet also empowered ITS to offer more interactivity and adaptivity [5\u20137], leveraging the path of later boost with natural language processing techniques [13, 19]. However, prior work has shown that as the granularity of tutoring decreases, its effectiveness increases [52]. 
Significant limitations of ITS include the complexity and cost of building them, their inability to answer questions or handle tasks outside their programmed domains, and the difficulty of developing them so that they can be used productively by individuals without expertise [16]. Even though the Generalized Intelligent Framework for Tutoring (GIFT) [47] was proposed and has evolved for developing ITS for use at scale, those limitations mostly remain unresolved. 2.2 Large Language Models in CS Education The release of ChatGPT and other Generative AI applications brought LLMs into public view and attracted enormous attention [1, 48]. LLMs offer researchers and users the flexibility to employ a single tool across various tasks [53], such as medical research [8, 49], finance [55], and education [21]. Adopting LLM-powered tools in educational settings is facilitated by their broad accessibility and cost-free nature [56]. Recent studies have looked into the potential of AI assistants to enhance student learning by helping with students' problem-solving [2, 25, 37] and generating computer science content [11, 43]. Current research on the use of LLMs in education has primarily examined their performance and capabilities [40] compared to humans, such as generating code for programming tasks [24, 39], answering general inquiries [38, 44], and addressing textbook questions [20] and exam questions [12]. Despite the growing interest in examining the capabilities of LLMs in education, very few empirical studies have examined the emerging concerns regarding their impact. Therefore, there is an urgent need for research into the long-term effects of LLMs in CS education and the development of strategies to counteract potential negative consequences. One exceptional work was conducted by Liffiton et al. [26], who developed a tool called CodeHelp for assisting students with their debugging needs in an undergraduate course over 12 weeks. Their follow-up study [45] categorized the message history in their tool and found a positive relationship between tool usage and course performance. However, their study specifically focused on debugging issues and did not compare the outcomes with those achieved through traditional TA methods. Furthermore, prior research has demonstrated that individual differences, such as gender, race, and prior experiences with technologies, significantly influence the effectiveness of intelligent tutoring systems [22]. However, work that examines how individual differences affect interactions with and perceptions of LLM-powered tools in educational settings is sparse, even though understanding the role of demographic and individual variability is crucial [57]. This is particularly important for developing inclusive and effective educational tools that suit the diverse needs of students. Our work seeks to address these research gaps by conducting a field study that evaluates the use of LLM-powered tools for an extended period of time. In particular, our study not only aims to evaluate the practicality of LLMs in programming-learning contexts, but also intends to contribute to a more nuanced understanding of their long-term implications for learning and teaching methodologies.
3 METHOD In this section, we describe the design of CodeTutor (subsection 3.1), an overview of our participants (subsection 3.2), our study procedure and data collection (subsection 3.3), and our quantitative and qualitative data analysis (subsection 3.4). The source code of CodeTutor, pre-test questions, and data analysis code is available on osf.io/e3zgh. 3.1 Design of CodeTutor We developed CodeTutor, a browser-based web application using TypeScript and front-end frameworks (e.g., SolidJS, Astro, and libraries such as Zag), for a responsive and interactive user interface. CodeTutor integrates OpenAPI API, which enables the GPT-3.5 model offered by OpenAI. The main interface is shown in Figure 1. Login. Students log in to CodeTutor using their email addresses, with a randomly generated unique identifier (UID) that tracks their activities anonymously. User Interface. The CodeTutor interface features a navigation sidebar and a central chat area. The sidebar enables easy navigation, with a button for starting new conversations and a chronological listing of existing ones for quick access. User Feedback Structure. Feedback is important in CodeTutor in order to understand user engagement and students\u2019 attitudes towards it. CodeTutor provides two feedback mechanisms: 1) conversation-level and 2) message-level feedback. Data Storage. CodeTutor stores data locally on the user\u2019s browser with IndexedDB and can only upload essential information with our secure server for research purposes, where a unique ID for anonymous tracking identifies each conversation. To protect privacy, CodeTutor cannot read stored data from our server. API Usage. OpenAI only offered limited configuration ability for their API at the time we started our experiment. So we carefully crafted the system role text in our implementation to specify the model to answer questions as a teaching assistant in an entry-level Python class, making answers from OpenAI API consistent even if the length of a conversation exceeds its token limit. 3.2 Participants Upon approval from our institution\u2019s Institutional Review Board (IRB), we conducted a field study evaluation study with 50 participants. The field study took place in the Computer Science Department of a 4-year university in the United States. Our criteria for participation include: Participants need to be 18 years or older, be able to speak and write in English, and register as entry-level undergraduate computer science students at our institution. Table 1 presents an overview of our participants\u2019 demographic information. Table 1: Overview of Participant Characteristics Characteristics Options Number of participants Gender Woman 22 Man 25 Non-binary 1 Prefer not to say 2 Major Computer Science 18 Data Science 9 Biology 5 Mathematics 4 Economics 3 Others 10 Not reported 1 Year of Study Freshman 37 Sophomore 5 Junior 6 Senior 1 Not reported 1 Race African American or Black 1 Asian 17 Multiracial 3 White 26 Not reported 3 Ethnicity Latino/Hispanic 3 Prior Experience Only ChatGPT 28 with LLM tools ChatGPT and other tools 11 Never used 11 3.3 Study Procedure & Data Collection Our field study lasted from September 27 (after the course adddrop period) to December 11, 2023 (the final exam due). Below, we describe each component of our study. 3.3.1 Pre-test. 
Participants were initially requested to provide their consent to participate, with being informed about the study\u2019s objectives, procedures, and their rights as participants, including the right to withdraw at any time without penalty. Following the consent process, the pre-test assessment was administered to evaluate students\u2019 existing knowledge of Python programming, providing a baseline for subsequent analysis. This pre-test included three sections with Python questions, with a total of 22 questions that varied in difficulty for an evaluation of participant skills. The first section featured eight questions (Questions 1-8, for example, \u201cWhat is the output of the following code: print(3+4)?\u201d ), the second section included seven questions \fL@S \u201924, July 18\u201320, 2024, Atlanta, GA, USA Wenhan Lyu, Yimeng Wang, Tingting (Rachel) Chung, Yifan Sun, & Yixuan Zhang Main Conversation Conversation History 1 2 3 4 Message-level feedback Conversation-level feedback Comprehension Critical Thinking Syntax Mastery Independent Learning TA Replacement Conversation-level feedback mode triggers when 1 ) users are inactive for 10 minutes, or 2) users end the conversation; or 3) users click on the providing feedback button Light/ Dark mode Delete messages Message-level feedback mode triggers when users click on the upvote or downvote Figure 1: CodeTutor is a web application that leverages OpenAI API, featuring four main components: 1 Conversation History that lists different conversation threads, 2 Main Conversation that shows an ongoing dialogue with CodeTutor, 3 Conversation-level Feedback module that allows users to elaborate on their attitudes towards CodeTutor by proving ratings on 1) comprehension, 2) critical thinking, 3) syntax mastery, 4) independent learning, and 5) TA replacement likelihood, and to provide specific comments, and 4 Message-level Feedback that offers options for users to give detailed feedback on individual messages or responses from CodeTutor. of medium difficulty (Questions 9-15, for example, \u201cIf I wanted a function to return the product of two numbers a and b, what should the return statement look like?\u201d), and the third section presented seven challenging questions (Questions 16-22, for example, \u201cWhat will be the output of the following code? [Multiple lines of code]\u201d). The total score of the three sections was 100 points. Pre-test submissions were graded by our researchers with Computer Science backgrounds, using predetermined scoring criteria. This pre-test also asked about participants\u2019 prior experience with LLMs, specifically asking, \u201cWhich of the following Large Language Model AI tools have you used before? Please select all that apply.\u201d Participants were also asked to provide demographic information, including their major (or intended major), gender, and race/ethnicity. Participants were assured that all demographic information would remain anonymous and be used solely for research purposes. 3.3.2 Control vs. Experimental Group. Participants were divided into two groups: the control group, which used traditional learning methods and had access to human teaching assistants (TAs) for additional support outside class hours, and the experimental group, which used CodeTutor as their primary educational tool beyond class hours, alongside access to standard learning materials and human TAs. Using LLM-based tools other than CodeTutor in this course was prohibited. 
To divide participants into a control group and an experimental group, we initially sorted the entire sample based on their previous engagement with LLM-powered tools, resulting in two groups: those who have used any LLM-powered tools before (Used Before) and those who have not (Never Used). Within the Used Before category, we split the participants into two subsets, Used Before Subset A and Used Before Subset B, based on the overall pre-test result distribution to ensure both subsets are representative of the wider group. The same process was applied to the Never Used group, generating two additional subsets: Never Used Subset A and Never Used Subset B. The experimental group is then formed by combining Used Before Subset A with Never Used Subset A, while the control group consists of the combination of Used Before Subset B and Never Used Subset B. This method ensures the experimental and control groups were balanced regarding prior experience with Chatbots and their pre-test performance (see Figure 2). Following their group assignments, students in the experimental group were sent detailed instructions via email on how to access and use CodeTutor. In the field study, participants were not mandated to adhere to a specific frequency of engagement with CodeTutor; instead, they were encouraged to utilize the tool at their own pace. This approach allowed for a naturalistic observation of how \fEvaluating the Effectiveness of LLMs in Introductory Computer Science Education: A Semester-Long Field Study L@S \u201924, July 18\u201320, 2024, Atlanta, GA, USA \u00b5mean = 9.44 \u00b5mean = 8.68 5 10 15 Control (n = 25) Experiment (n = 25) group Total correct answers tStudent(48) = 0.61, p = 0.55, gHedges = 0.17, CI95% [\u22120.38, 0.71], nobs = 50 Figure 2: Parametric pairwise comparison (ANOVA) reveals no significant difference in correct answer count of pre-test in the control and experimental groups. students integrate LLM-powered educational resources into their learning processes, without imposing additional constraints that could influence their study habits or the study\u2019s outcomes. 3.3.3 Student Evaluation. At the end of the semester, students\u2019 final grades were used as a primary measure to assess their learning outcomes and the impact of CodeTutor interventions. While acknowledging that final grades are influenced by various factors, they offer a standardized measure of overall academic success, enabling an assessment of CodeTutor\u2019s role in improving student learning outcomes. Final grades were determined by a weighted average that includes several components for each student: labs (practical miniprojects), assignments (individual coding tasks, such as array summation), mid-terms, and a final exam (comprising questions similar to those in the pre-test). Note that a student\u2019s final grade can surpass 100 if bonus points are awarded throughout the semester. Access to CodeTutor is restricted during mid-terms and final exams, categorizing the assessment components into two groups: CodeTutor-Allowed (labs and assignments) and CodeTutor-Not-Allowed (mid-terms and final exams). This categorization facilitates an analysis of CodeTutor\u2019s impact on student performance by examining potential dependencies on the tool and the improvement of learning outcomes in its absence. 3.4 Data Analysis 3.4.1 Quantitative Data Analysis. We examined the students\u2019 scores, interaction behaviors, and attitudes of using CodeTutor through multiple statistical analyses. 
First, we calculated descriptive statistics for all variables, including frequency with percentage for categorical variables and means and standard deviations for continuous variables. To examine the variation in students\u2019 scores before and after the intervention (i.e., the use of CodeTutor), we conducted paired-t tests for both the experimental and control groups. Multiple regression analyses with family-wise p-value adjustment were used to examine the effects of CodeTutor on score improvement, taking into account students\u2019 past experiences using LLM-powered tools and demographic variables, such as major, gender, and race. We then investigated the impact of CodeTutor accessibility on academic performance with ANOVA method. Moreover, we conducted a chi-squared test to explore the relationship between the quality of students\u2019 content and prompts and CodeTutor performance. To understand students\u2019 attitudes towards CodeTutor, we calculated Spearman\u2019s correlation matrix for continuous variables, given the characteristics of our data, which are non-normal and exhibit unequal variance. Furthermore, to examine differences between questions, we used the Kruskal-Wallis Rank Sum Test (using R package stats [41]) and then performed post-hoc tests using Dunnett\u2019s test (using the R package FSA [30]) in cases where significant differences were found. To investigate the importance of time on students\u2019 attitudes towards CodeTutor, we introduced a linear mixed effects (LME) model (using the R package lme4 [4]). We considered statistical significance at a significance level of \ud835\udc5d< 0.05 for most cases, except in multiple regression analyses where we used \ud835\udc5d< 0.1 and showed effect sizes were significant enough to indicate the relationship of variables. 3.4.2 Qualitative Data Analysis. We also analyzed the conversational history between users. Specifically, we used the General Inductive Approach [50] to guide our thematic analysis of the conversational data. The first author conducted a close reading of the data to gain a preliminary understanding of the conversational data and then labeled the text segments to formulate categories, which served as the basis for constructing low-level codes to capture specific elements of the user-CodeTutor interactions. Similar low-level codes were then clustered together to achieve high-level themes. During the analysis, the research team engaged in ongoing discussions to refine and clarify emerging themes. 4 RESULTS In this section, we examined the impact of CodeTutor on student academic performance (subsection 4.1 to answer RQ1), analyzed students\u2019 attitudes towards learning with CodeTutor (subsection 4.2 to answer RQ2), and characterized their engagement patterns in entry-level programming courses (subsection 4.3 to answer RQ3). 4.1 RQ1: Learning Outcomes with CodeTutor 4.1.1 Comparative Analysis of Score Improvements. Overall, students in the experimental group exhibited a greater average improvement in scores, as illustrated by comparing their pre-test and final scores to those in the control group. Specifically, the average increase for the experimental group was 12.50, whereas the control group showed an average decrease of 3.17 when comparing final scores to pre-test scores. 
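The paired pre-test/final comparison reported next, and the rank correlation used for the Likert-scale attitude items, can be approximated in a few lines of code. The sketch below is a Python analogue (the authors ran their analyses in R with the stats, FSA, and lme4 packages), and the score arrays are synthetic placeholders rather than study data.

```python
# Illustrative Python analogue of two of the analyses described in Section 3.4.1.
# The data below are synthetic, not the study's scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic pre-test and final scores for one group of 25 students.
pre_test = rng.normal(loc=70, scale=12, size=25)
final = pre_test + rng.normal(loc=10, scale=15, size=25)  # simulated improvement

# Paired t-test: H0 is that the mean pre-to-final difference is zero.
t_stat, p_value = stats.ttest_rel(pre_test, final)
print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}")

# Spearman's rank correlation between two synthetic 1-5 Likert attitude items.
comprehension = rng.integers(1, 6, size=48)
syntax_mastery = rng.integers(1, 6, size=48)
rho, p_rho = stats.spearmanr(comprehension, syntax_mastery)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```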
We conducted paired t-tests for both the experimental and control groups to determine if the observed improvements were statistically significant, starting with the premise that there were no differences in pre-test scores between these two groups. Our null hypothesis assumed that the true mean difference between pre-test and final scores was zero. For the control group, the null hypothesis could not be rejected, suggesting that the differences between pretest and final scores were not statistically significant (\ud835\udc61= -0.879, \ud835\udc5d= 0.394). Conversely, participants in the experimental group demonstrated significant improvement from the pre-test to final scores, indicating a statistically significant enhancement in their scores (\ud835\udc61= -2.847, \ud835\udc5d= 0.009). \fL@S \u201924, July 18\u201320, 2024, Atlanta, GA, USA Wenhan Lyu, Yimeng Wang, Tingting (Rachel) Chung, Yifan Sun, & Yixuan Zhang Furthermore, when examining the improvement in CodeTutorNot-Allowed components, the experimental group exhibited an average increase of 7.33, whereas the control group showed no significant change. By conducting a paired t-test comparing the pre-test and final exam scores (during which the use of CodeTutor was not permitted), it was observed that students in the experimental group demonstrated a statistically significant improvement (\ud835\udc61= -2.405, \ud835\udc5d= 0.026). This result suggests that students who have used CodeTutor exhibit more substantial improvement even when CodeTutor is unavailable. \u00b5mean = 102.29 \u00b5mean = 93.40 60 80 100 120 CodeTutor Allowed (n = 21) CodeTutor Not Allowed (n = 21) group score tStudent(40) = 2.31, p = 0.03, gHedges = 0.69, CI95% [0.07, 1.30], nobs = 42 Figure 3: Parametric pairwise comparison (ANOVA) reveals a significantly higher mean score in the \u201cCodeTutor-Allowed\u201d group compared to the \u201cCodeTutor-Not-Allowed\u201d group. 4.1.2 Effect of CodeTutor Accessibility on Academic Performance. By constructing the CodeTutor-Allowed and CodeTutor-Not-Allowed, we determine the correlation between CodeTutor\u2019s accessibility and student academic performance. Using the ANOVA technique on the data from the experimental group, Figure 3 reveals that the mean score for the CodeTutor-Allowed category stands at 102.29, in contrast to the CodeTutor-Not-Allowed components, which has a mean score of 93.40. The statistical analysis results show a significant difference between the two groups (\ud835\udc61= 2.31, \ud835\udc5d= 0.03), suggesting that the allowance of CodeTutor correlates with higher student scores. 4.1.3 Correlation Between Student Demographics and Final Scores in the Experimental Group. Subsequently, we evaluated demographic factors to determine whether specific student groups, particularly those with prior tech experience, experienced greater benefits from CodeTutor. Table 2 shows the results of multiple regression models, examining how students\u2019 final scores in the experimental group are associated with their LLM history, major, gender, and race. Students who have never used any LLM-powered tools performed a significant increase (\ud835\udefd= 18.877, \ud835\udc5d= 0.032) in final score than the students who used it before. Moreover, differences in final scores among various majors within the experimental group were statistically significant, indicating that majors play a substantial role in final scores in the experimental group. 
Students majoring in data science (\ud835\udefd= 14.532, \ud835\udc5d= 0.073), mathematics (\ud835\udefd= 17.692, \ud835\udc5d= 0.057), and biology (\ud835\udefd= 16.257, \ud835\udc5d= 0.057) exhibited a significant positive correlation with final scores Table 2: Multiple regression models explaining respondents\u2019 final scores in experimental group. (Significance level: \u2020 \ud835\udc5d< 0.1, * \ud835\udc5d< 0.05, ** \ud835\udc5d< 0.01, *** \ud835\udc5d< 0.001). Estimate Std. Error t value Pr(>|t|) Const 93.683 3.877 24.166 0.000 *** Prior Experiences with LLM tools (Reference: Used before) Never used 18.877 5.054 3.735 0.032 * Major (Reference: Computer science) Data Science 14.532 5.662 2.567 0.073 \u2020 Mathematics 17.692 5.852 3.023 0.057 \u2020 Biology 16.257 5.662 2.871 0.057 \u2020 Economics 1.362 4.799 0.284 0.784 Others -13.004 6.022 -2.160 0.115 Gender (Reference: Female) Male 5.917 3.845 1.539 0.223 Race (Reference: White) Asian -7.831 3.933 -1.991 0.128 African American or Black 8.099 7.107 1.140 0.322 Others 6.102 5.416 1.127 0.322 compared to those majoring in computer science, suggesting that these majors achieved higher final scores. In terms of gender, no significant effects were observed, indicating no difference between genders in final scores. Additionally, no significant differences were noted across the races in final scores. Summary of results of RQ1: Collectively, our findings suggest that students in the experimental group achieved significant score improvements with CodeTutor. Particularly, those who were new to CodeTutor achieved even greater improvements, while students majoring in data science, mathematics, and biology surpassed their computer science counterparts. Moreover, students exhibited higher scores when permitted to use CodeTutor. 4.2 RQ2: Students\u2019 Attitudes towards CodeTutor 4.2.1 Descriptive Analysis. In terms of students\u2019 attitudes towards CodeTutor (see Figure 1 3 for the specific questions), we found that a small portion of students (8%) strongly disagreed or disagreed that CodeTutor accurately understood what students intended to ask, while most (67%) agreed or strongly agreed. In addition, 35% strongly disagreed or disagreed that CodeTutor helped them think critically, while 19% agreed or strongly agreed. Furthermore, 13% students disagreed that CodeTutor improved their understanding of programming syntax, with a larger proportion of individuals agreeing (33%) or strongly agreeing (25%). Nearly half of the students (42%) agreed or strongly agreed that CodeTutor helped students build their own understandings, while very few (17%) strongly disagreed or disagreed. Finally, regarding the potential of CodeTutor to substitute for a human teaching assistant1, 20% of the students strongly disagreed or disagreed with this notion, while 42% of them agreed or strongly agreed. Figure 4 shows the distribution of students\u2019 responses across these five questions. 1In our analysis, response values to the TA Replacement question were reversed, so a higher score indicates a stronger preference for our tool over human teaching assistants. This reversal is consistently applied across all subsequent analyses. 
\fEvaluating the Effectiveness of LLMs in Introductory Computer Science Education: A Semester-Long Field Study L@S \u201924, July 18\u201320, 2024, Atlanta, GA, USA 2.0%6.0% 25.0% 21.0% 46.0% 6.0% 29.0% 46.0% 15.0% 4.0% 13.0% 29.0% 33.0% 25.0% 2.0% 15.0% 40.0% 37.0% 6.0% 8.0% 12.0% 38.0% 13.0% 29.0% TA Replacement Independent Learning Syntax Mastery Critical Thinking Comprehension Strongly Disagree Disagree Neutral Agree Strongly Agree Figure 4: Participants\u2019 attitudes toward CodeTutor, in terms of comprehension, critical thinking, syntax mastery, independent learning, and TA replacement (see Figure 1 for detailed questions). Comprehension Critical Thinking Syntax Mastery Independent Learning TA Replacement Comprehension Critical Thinking Syntax Mastery Independent Learning TA Replacement 1 0.26 1 0.46 0.22 1 0.13 0.23 0.5 1 0.24 -0.15 0 0 1 1.00 0.75 0.50 0.25 0.00 0.25 0.50 0.75 1.00 Figure 5: A correlation matrix heatmap visualizing the relationship between different metrics. The blue color indicates positive correlations, while pink represents negative correlations. Correlation coefficients are displayed inside each cell. 4.2.2 Exploring Relationships in Student Attitudes Toward CodeTutor. Figure 5 reveals key relationships among students\u2019 attitudes on CodeTutor. The moderate positive correlation between Comprehension and Syntax Mastery suggests that proficiency in one is associated with higher performance in the other. Critical Thinking is slightly positive with Comprehension and Independent Learning but slightly negative with TA Replacement. Furthermore, Syntax Mastery strongly correlates with Independent Learning, indicating a close relationship between mastering programming syntax and self-directed learning outcomes. In addition, TA Replacement has minimal to no significant correlations with other variables, suggesting its effects vary independently of these educational aspects. To further explore the relationship of different students\u2019 attitudes among questions, we present the results of multiple comparisons across the five questions. Specifically, our results show that respondents\u2019 attitudes (\ud835\udf122 = 32.99, \ud835\udc5d< 0.05) significantly differ across questions. Our post-hoc tests (see Figure 6) further reveal that students were significantly less in agreement about CodeTutor\u2019s assistance in fostering critical thinking compared to its ability to understand, help in learning syntax and serving as a replacement for a teaching assistant. Moreover, our findings suggest that respondents were significantly more in agreement with CodeTutor\u2019s effectiveness in comprehension than in its ability to improve students\u2019 understanding of programming syntax. \u00b5median = 4.00 \u00b5median = 3.00 \u00b5median = 4.00 \u00b5median = 3.00 \u00b5median = 3.00 pHolm\u2212adj. = 5.34e\u221207 pHolm\u2212adj. = 0.01 pHolm\u2212adj. = 5.32e\u221204 pHolm\u2212adj. = 0.03 2 4 6 Comprehension (n = 48) Critical Thinking (n = 48) Syntax Mastery (n = 48) Independent Learning (n = 48) TA Replacement (n = 48) Question Result Pairwise test: Dunn, Bars shown: significant \u03c7Kruskal\u2212Wallis 2 (4) = 32.99, p = 1.20e\u221206, \u03b5ordinal 2 = 0.14, CI95% [0.09, 1.00], nobs = 240 Figure 6: Non-parametric pairwise comparison test (Dunn\u2019s test): Differences in agreement levels across different questions. We can see that students predominantly favored CodeTutor for its comprehension and syntax support rather than its ability to foster critical thinking. 
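The omnibus and post-hoc comparisons behind Figure 6 were run in R (stats and FSA); a rough Python analogue, assuming the third-party scikit-posthocs implementation of Dunn's test (as labeled in Figure 6) and synthetic Likert responses, might look as follows.

```python
# Rough Python analogue of the Kruskal-Wallis omnibus test and Dunn's post-hoc
# comparisons. Assumes the scikit-posthocs package; responses are synthetic
# 1-5 Likert ratings, not the study data.
import numpy as np
from scipy import stats
import scikit_posthocs as sp

rng = np.random.default_rng(7)
questions = {
    "Comprehension":        rng.integers(3, 6, size=48),
    "Critical Thinking":    rng.integers(1, 4, size=48),
    "Syntax Mastery":       rng.integers(2, 6, size=48),
    "Independent Learning": rng.integers(2, 5, size=48),
    "TA Replacement":       rng.integers(1, 6, size=48),
}

# Omnibus test across the five attitude questions.
h_stat, p_omnibus = stats.kruskal(*questions.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_omnibus:.4g}")

# Post-hoc pairwise comparisons with Holm adjustment (only interpreted when the
# omnibus test is significant).
pairwise_p = sp.posthoc_dunn(list(questions.values()), p_adjust="holm")
pairwise_p.index = pairwise_p.columns = list(questions)
print(pairwise_p.round(4))
```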
Additionally, there was a stronger consensus on CodeTutor\u2019s proficiency in understanding queries compared to its effectiveness in enhancing programming syntax. We then conducted a linear mixed effects (LME) model to explore time\u2019s influence on students\u2019 attitudes toward CodeTutor: \ud835\udc44\ud835\udc62\ud835\udc52\ud835\udc60\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5b\ud835\udc3c\ud835\udc5b\ud835\udc51\ud835\udc56\ud835\udc50\ud835\udc4e\ud835\udc61\ud835\udc5c\ud835\udc5f\ud835\udc56\ud835\udc61= \ud835\udefd0 + \ud835\udc4f0\ud835\udc56+ (\ud835\udefd1 + \ud835\udc4f1\ud835\udc56)\ud835\udc61+ \ud835\udf16\ud835\udc56\ud835\udc61 where \ud835\udefd0 and \ud835\udefd1 are unknown fixed effect parameters; \ud835\udc4f0\ud835\udc56and \ud835\udc4f1\ud835\udc56are the unknown student-specific random intercept and slope, respectively, which are assumed to have a bivariate normal distribution with mean zero and covariance matrix \ud835\udc37; \ud835\udc44\ud835\udc62\ud835\udc52\ud835\udc60\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5b\ud835\udc3c\ud835\udc5b\ud835\udc51\ud835\udc56\ud835\udc50\ud835\udc4e\ud835\udc61\ud835\udc5c\ud835\udc5f is the student response at time \ud835\udc61; and \ud835\udf16\ud835\udc56\ud835\udc61is the residual error for student \ud835\udc56at time \ud835\udc61, with a normal distribution \ud835\udc41(0, \ud835\udf0e2), which is assumed to be independent of the random effects. From Table 3, we can see that students\u2019 attitudes toward CodeTutor show a significant decrease in Comprehension (\ud835\udefd= -0.114, \ud835\udc5d< 0.01), which indicates that students disagree with CodeTutor\u2019s understanding accuracy over time. Moreover, there is a weakly significant decrease in TA Replacement (\ud835\udefd= -0.099, \ud835\udc5d< 0.1) with increasing time. This shows a slight tendency for them to consider more human TA help over time. Also, students perform no significant difference over time in Critical Thinking, Syntax Mastery, and Independent Learning. Summary of results of RQ2: In summary, students recognize CodeTutor\u2019s ability to understand their queries and assist with programming syntax yet question its capacity to promote critical thinking skills. Additionally, students\u2019 confidence in CodeTutor\u2019s comprehension abilities decreases over time, with a growing preference for support from human teaching assistants. 4.3 RQ3: Students\u2019 Engagement with CodeTutor In total, we documented 82 conversation sessions2 with CodeTutor, encompassing a total of 2,567 messages. In these sessions, 415 2In our analysis, a conversation session is a continuous exchange of messages between users and CodeTutor within a specific period, characterized by a coherent topic or purpose. \fL@S \u201924, July 18\u201320, 2024, Atlanta, GA, USA Wenhan Lyu, Yimeng Wang, Tingting (Rachel) Chung, Yifan Sun, & Yixuan Zhang Table 3: Linear Mixed-Effects Model of Student Attitudes over time. (Significance level: \u2020 \ud835\udc5d< 0.1, * \ud835\udc5d< 0.05, ** \ud835\udc5d< 0.01, *** \ud835\udc5d< 0.001). Over time, students exhibit a significant decline in their agreement with CodeTutor\u2019s comprehension and replacement of human teaching assistants. Comprehension Critical Thinking Syntax Mastery Independent Learning TA Replacement \ud835\udefd(Std. Error) \ud835\udefd(Std. Error) \ud835\udefd(Std. Error) \ud835\udefd(Std. Error) \ud835\udefd(Std. 
Error) Const 4.700(0.297)*** 2.690(0.247)*** 3.760(0.262)*** 3.044(0.218)*** 3.964(0.330)*** Time -0.114(0.039)** 0.040(0.037) -0.018(0.041) 0.054(0.036) -0.099(0.051)\u2020 unique topics were discussed, averaging 5.06 topics per session and 6.19 messages per topic. 4.3.1 Message Classification & Interaction Patterns. In total, we collected 2567 conversational messages exchanged between users and the CodeTutor. Of these, 1288 messages originated from the users, and CodeTutor responded with 1279 messages. Table 4 presents categorizations of messages between users and CodeTutor. Each category has a description and an example to illustrate the message type. Categories of messages from both users and CodeTutor include Programming Task inquiries, addressing specific Python programming challenges; Grammar and Syntax questions, focusing on Python\u2019s basic grammar or syntax without necessitating runnable programs; General Questions, which are not directly related to Python; and Greetings, initiating or finishing interaction. From the users\u2019 side , additional categories highlight their engagement with CodeTutor: Modification Requests for alterations to previous answers; Help Ineffective indicating issues or errors in CodeTutor\u2019s provided solutions; Further Information to elaborate on prior queries; and Debug Requests for assistance in resolving bugs or errors in code snippets. CodeTutor\u2019s responses are classified into Corrections, which address and amend errors in previous responses and Explanations, providing further details on provided solutions or clarify why certain requests cannot be fulfilled. 4.3.2 Analysis of Prompt Quality & Correlation with Response Effectiveness. To further examine user interaction patterns with CodeTutor and their implications for its educational value, we analyzed the relationship between prompt quality and response accuracy. This analysis stems from the premise that detailed and precise prompts are likely to improve the AI\u2019s understanding of user requirements, thereby potentially raising the standard of its responses. To do so, we evaluated a corpus of 1,190 prompts, after removing all greeting messages, to assess their quality. Our analysis showed that 37% were deemed good quality. The remaining 63% were identified as poor quality. We defined \u201cgood quality\u201d prompts as providing sufficient detail for CodeTutor to generate an accurate response. In contrast, \u201cpoor quality\u201d prompts were those that did not meet this criterion. We categorized the deficiencies in poor quality prompts into four types: incomplete information (\ud835\udc5b= 189, 25%), which lacked specific details necessary for CodeTutor to understand the context; lack of clear goals (\ud835\udc5b= 172, 23%), where the desired outcome was not explicitly stated; over-reliance on CodeTutor (\ud835\udc5b= 362, 48%), where the assignment questions are directly copied and pasted into CodeTutor; and poor structural organization (\ud835\udc5b= 25, 3%), which exhibited unclear or confusing request structures. Prompts were further labeled as \u201cworking\u201d if they elicited an appropriate response from CodeTutor, and \u201cnot working\u201d if they failed to do so. Using a Chi-square test, we investigated whether the prompt quality and the effectiveness of CodeTutor\u2019s responses were independent. Our results showed a significant correlation (\ud835\udf122 = 144.84, \ud835\udc5d< 0.001). 
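A test of this form amounts to a 2x2 contingency analysis of prompt quality (good vs. poor) against response outcome (working vs. not working). The minimal sketch below shows the mechanics; the counts are invented for illustration and are not the study's data.

```python
# 2x2 chi-squared test of independence between prompt quality and whether the
# response worked. Counts are invented placeholders (the study analyzed 1,190
# real prompts).
import numpy as np
from scipy.stats import chi2_contingency

#                      working   not working
observed = np.array([[   400,         40],    # good-quality prompts
                     [   350,        400]])   # poor-quality prompts

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3g}")
```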
In other words, clearer and more detailed prompts are associated with responses that are more likely to be effective. Summary of results of RQ3: We characterized the messages exchanged between users and CodeTutor. We categorize these interactions between users and CodeTutor into inquiries (e.g., programming tasks, syntax questions) and feedback alongside CodeTutor\u2019s responses (corrections and explanations), illustrating a dynamic exchange aimed at facilitating learning. We also found that the clarity and completeness of prompts are significantly correlated with the quality of responses from CodeTutor. 5 DISCUSSION Our semester-long field study provided insights into how students in introductory computer science courses utilized CodeTutor and its effects on educational outcomes. In short, our results show that 1) students who used CodeTutor had shown significant improvements in scores; 2) while CodeTutor was valued for its assistance in comprehension and syntax, students expressed concerns about its capacity to enhance critical thinking skills; 3) skepticism regarding CodeTutor as an alternative to human teaching assistants grew over time; 4) CodeTutor was primarily used for various coding tasks, including syntax comprehension, debugging, and clarifying fundamental concepts; 5) the effectiveness of CodeTutor responses was notably higher when prompts were clearer and more detailed. Building on these findings, we discuss the implications for future enhancements and research directions in the rest of the section. 5.1 Towards Enhancing Generative AI Literacy Our research indicates a positive correlation between the use of Generative AI tools and improved student learning outcomes. However, 63% of student-generated prompts were deemed unsatisfactory, indicating a lack of essential skills to fully exploit Generative AI tools. This finding also suggests the need to promote Generative AI literacy among students. Here, we define Generative AI literacy as the ability to effectively interact with AI tools and understand how to formulate queries and interpret responses. Our findings suggest that while students can leverage CodeTutor for practical coding assistance and syntax understanding, there is a gap in using these tools to enhance critical thinking skills. We suggest educational programs integrate Generative AI literacy as a core component of their curriculum, teaching students how to use these tools for immediate problem-solving and engaging with them to promote \fEvaluating the Effectiveness of LLMs in Introductory Computer Science Education: A Semester-Long Field Study L@S \u201924, July 18\u201320, 2024, Atlanta, GA, USA Table 4: Categorizations of messages, from users\u2019 side and from CodeTutor\u2019s side . [Code Snippet] represents a Python code segment. The Percentage column represents the ratio of occurrences of each category to the total number of messages. Note that some categories may only apply to messages sent by either users or CodeTutor, and messages may carry multiple categories. Category Name Description Example Percentage Programming Task Any questions or answers related to Python programming. \u201cWrite a function that prints the nth(argument) prime number.\u201d 86.52% Grammar & Syntax When a message is related to basic Python grammar or syntax problems, a runnable program is most likely unnecessary. \u201cWhat does {} do in Python?\u201d 14.26% General Question When a message is not directly related to Python. 
\u201cWhat is ASCII?\u201d 4.29% Greetings When a message is greeting. \u201cHello! How can I assist you today?\u201d 0.62% Help Ineffective When a user message says the previous answer generated by CodeTutor is wrong or provides error information. \u201cThis code still fails.\u201d 12.86% Debug Request When a user message asks CodeTutor to fix bugs or explain what was wrong in code snippets provided or in previous messages. \u201cDebug this code. [Code Snippet]\u201d 8.22% Modification Request When a user requires CodeTutor to change something on its previous answer. \u201cRemove comments.\u201d 4.48% Further Information When a user message provides more context on their previous input. \u201cAll the input strings will be the same length.\u201d 3.97% Explanation When CodeTutor explains something in previous messages or why it cannot complete the current task from users. \u201cI\u2019m sorry, but I need more information to provide the answers for questions 4 and 6.\u201d 28.94% Correction When CodeTutor corrects content in its previous answer. \u201cApologies for the syntax error. Here is the corrected version: [Code Snippet]\u201d 13.95% deeper analytical and critical thinking. This could include workshops on effective query formulation, sessions on interpreting AI responses, and exercises designed to challenge students to critically evaluate the information and solutions offered by AI tools. We also propose approaches to integrate HCI tools and principles into LLM-enabled platforms, such as prompt construction templates providing users with templates or structured forms for crafting queries. They can guide users in formulating more effective and precise questions. Templates could include placeholders for essential details and context, providing the necessary information for the AI to generate accurate responses to users. Furthermore, integrating Critical Thinking Prompts might be particularly effective in stimulating in-depth analytical thinking. For example, the interface could pose follow-up questions encouraging users to assess AI answers\u2019 adequacy critically. Questions such as, \u201cDoes this response fully address your query?\u201d or \u201cWhat additional information might you need?\u201d may prompt users to engage in a more thorough evaluation of the information provided, fostering a habit of critical reflection and assessment. Another possible approach is Facilitating Collaborative Query Building, which leverages the power of collective intelligence. By designing interfaces that support real-time collaboration among users, individuals can work together to construct and refine queries. We can also use LLMs to evaluate and refine user questions instantly as they perform well in prompting [60]. 5.2 Turning to the Temporal Dynamics of LLM-Powered Tutoring Tools The temporality aspect of using CodeTutor in computer science education presents a nuanced perspective on their integration and effectiveness over time. Our analysis reveals a complex relationship between the duration of CodeTutor use and students\u2019 attitudes towards it. Specifically, our results show that although students initially find CodeTutor a reliable tool for understanding their queries, their confidence in its accuracy diminishes with prolonged use. Additionally, our model uncovers a weakly significant decrease in students\u2019 preference for CodeTutor as a TA replacement over time. 
This trend implies a growing inclination among students to seek human TA support as they progress in their courses, possibly due to the nuanced understanding and personalized feedback that human TAs can offer, which might not be fully replicated by LLMs. However, our study found no significant temporal change in students\u2019 attitudes toward CodeTutor\u2019s impact on critical thinking, syntax mastery, and independent learning. This stability suggests that while students may question CodeTutor\u2019s comprehension abilities and its adequacy as a TA replacement over time, they still recognize its utility in facilitating certain aspects of the learning process, such as mastering syntax and promoting independent study habits. Collectively, our findings highlight the importance of investigating the temporal dynamics of student attitudes towards and their use of LLM-powered tools for learning and shed light on the need for a balanced approach to integrating LLMs into CS education. While these tools offer great support in specific areas, their limitations become more apparent with extended use. In other words, it is important to complement LLMs with human instruction to address learning objectives, such as critical thinking and problem-solving, which are crucial for computer science education. Furthermore, we argue that educators and developers should work collaboratively to enhance the capabilities of LLM-powered tutoring systems, ensuring they remain effective and relevant over time. 5.3 Alignments of LLMs for Education Our observations regarding students\u2019 utilization of CodeTutor provide insights into their learning approaches and completion of \fL@S \u201924, July 18\u201320, 2024, Atlanta, GA, USA Wenhan Lyu, Yimeng Wang, Tingting (Rachel) Chung, Yifan Sun, & Yixuan Zhang assignments. The exams that prohibit using CodeTutor reflect students\u2019 understanding of programming, as they must rely solely on their internal knowledge. Conversely, assignments and lab tasks that permit using CodeTutor result in higher scores, indicating that students may prioritize completion over deep comprehension [17]. While students employ CodeTutor to fulfill homework requirements, they may not perceive it as a tool for a comprehensive understanding of course materials. Our results show that nearly half of the low-quality prompts classified as over-reliance were copied and pasted original assignment questions into CodeTutor. This suggests that students primarily used CodeTutor as a quick-fix solution, neglecting the opportunity to engage with the underlying question logic and determine appropriate solutions to the question. As the complexity of assignments increased, students\u2019 perceptions of CodeTutor\u2019s ability to understand their queries turned more negative. However, students acknowledge its proficiency in syntax mastery, which reveals a gap between their expectations and the tool\u2019s capabilities. Complex questions require students to integrate and apply the knowledge acquired in class [51], challenging the notion that CodeTutor can easily break down questions into manageable components. Additionally, CodeTutor\u2019s limitations, such as its training on a predetermined database and inability to handle custom or complex queries, suggest that it is important to simplify questions and structure prompts effectively for optimal results. 
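One lightweight way to operationalize this, in the spirit of the prompt-construction templates proposed in Section 5.1, is a structured form that requires the student to supply context, goal, code, and a specific question before anything is sent to the model. The sketch below is a hypothetical illustration, not a feature of CodeTutor; all field names are assumptions.

```python
# Hypothetical prompt-construction template of the kind discussed in Section 5.1:
# the student fills in each field, and the tool refuses to send under-specified
# prompts (e.g., an assignment question pasted verbatim with no context).
PROMPT_TEMPLATE = """\
Context: {context}
What I am trying to do: {goal}
My code so far:
{code}
My specific question: {question}
"""

def build_prompt(context: str, goal: str, code: str, question: str) -> str:
    fields = {"context": context, "goal": goal, "code": code, "question": question}
    missing = [name for name, value in fields.items() if not value.strip()]
    if missing:
        raise ValueError(f"Please fill in: {', '.join(missing)}")
    return PROMPT_TEMPLATE.format(**fields)

print(build_prompt(
    context="Week 6 lab on loops in an intro Python course",
    goal="Print the first n prime numbers",
    code="def primes(n):\n    ...",
    question="Why does my loop never stop when n is 1?",
))
```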
Furthermore, we argue that students\u2019 previous experiences with chatbots, if unrelated to structured learning, such as a simple oneline request (e.g., \u201chelp me write a summary\u201d), may not adequately prepare them for using CodeTutor effectively in a programming context, as evidenced by our findings that nearly 70% of student submissions in our corpus were of poor quality. Students with limited experience interacting with chatbots might be hesitant to trust tools like CodeTutor fully, potentially affecting their use and reliance on its outputs. This lack of familiarity could lead them to prefer traditional learning approaches, fostering deeper analytical thinking and minimizing dependency on automated assistance. Design Implications. Our findings shed light on the future implementation and enhancement of CodeTutor in programming courses. The inherent limitations of CodeTutor, which is trained on a general dataset, may necessitate the creation of custom datasets tailored to specific class contexts. Through instructors\u2019 reflections on the quality of students\u2019 assignments, it becomes evident that while CodeTutor produces impressive results due to its training on datasets crafted by professional programmers aimed at efficiency, the emphasis in entry-level classes should prioritize humanreadable code over complex solutions. One potential solution is to leverage GPT models with the Assistant API [31]. This API enables the development of AI assistants with features, such as the Code Interpreter [33], which can execute Python code in a sandboxed environment, and Knowledge Retrieval [34], allowing users to upload documents to enhance the assistant\u2019s knowledge base. These features align more closely with the requirements of a virtual TA in entry-level programming courses. The Code Interpreter can enhance the quality of responses containing code blocks, while Knowledge Retrieval empowers instructors to provide course-specific information. Meanwhile, providing systematic instructions to students can enhance their understanding of how to use the tool effectively while improving its accessibility through additional instructional features. Additionally, it is crucial to emphasize the boundaries of using LLM-powered tools, clarifying what is permissible and the consequences of inappropriate usage. 6 LIMITATIONS AND FUTURE WORK Our study, while providing valuable insights into the use of LLMpowered tools in educational settings, has several limitations that suggest avenues for further research. First, The current study was conducted on a relatively small scale, limiting the generalizability of our findings. Therefore, our future work will conduct largerscale tests involving more diverse student populations and settings. Second, regarding the applicability to different levels of coding courses, our work has focused on beginning levels of CS courses. Our findings may not directly translate to intermediate or advanced programming courses. Furthermore, we relied on GPT-3.5 in this study, which may not always provide accurate or contextually appropriate responses, potentially affecting the quality of tutoring provided. Lastly, controlling the experimental environment in a semester-long study, particularly the control group, was challenging, indicating the need for more experimental designs in future studies to better understand the factors affecting student learning. 
7 CONCLUSION In this work, we conducted a semester-long between-subjects study with 50 students to examine the ways in which students use an LLM-powered virtual teaching assistant (i.e., CodeTutor) in their introductory-level programming learning. The experimental group using CodeTutor showed significant improvements in final scores over the control group, with first-time users of LLM-powered tools experiencing the most substantial gains. While positive feedback was received on CodeTutor\u2019s ability to understand queries and aid in syntax learning, concerns were raised about its effectiveness in cultivating critical thinking skills. Over time, we observed a shift towards preferring human assistant support over CodeTutor, despite its utility in completing programming tasks, understanding syntax, and debugging. Our study also shows the importance of prompt quality in leveraging CodeTutor\u2019s effectiveness, indicating that detailed and clear prompts yield more accurate responses. Our findings point to the critical need for embedding Generative AI literacy into educational curricula and to promote critical thinking abilities among students. Looking ahead, our research suggests integrating LLM-powered tools in computer science education requires more tools, resources, and regulations to help students develop Generative AI literacy and customize teaching strategies to bridge the gap between tool capabilities and educational goals. By adjusting expectations and guiding students on effective tool use, educators may harness the full potential of Generative AI to complement traditional teaching methods. ACKNOWLEDGMENTS This project is funded by the Studio for Teaching & Learning Innovation Learn, Discover, Innovate Grant, the Faculty Research Grant from William & Mary, and the Microsoft Accelerate Foundation Models Research Award. We thank our participants in this study and our anonymous reviewers for their feedback. \fEvaluating the Effectiveness of LLMs in Introductory Computer Science Education: A Semester-Long Field Study L@S \u201924, July 18\u201320, 2024, Atlanta, GA, USA" + } + ] +} \ No newline at end of file