| { |
| "url": "http://arxiv.org/abs/2404.16283v1", |
| "title": "Andes: Defining and Enhancing Quality-of-Experience in LLM-Based Text Streaming Services", |
| "abstract": "The advent of large language models (LLMs) has transformed text-based\nservices, enabling capabilities ranging from real-time translation to AI-driven\nchatbots. However, existing serving systems primarily focus on optimizing\nserver-side aggregate metrics like token generation throughput, ignoring\nindividual user experience with streamed text. As a result, under high and/or\nbursty load, a significant number of users can receive unfavorable service\nquality or poor Quality-of-Experience (QoE). In this paper, we first formally\ndefine QoE of text streaming services, where text is delivered incrementally\nand interactively to users, by considering the end-to-end token delivery\nprocess throughout the entire interaction with the user. Thereafter, we propose\nAndes, a QoE-aware serving system that enhances user experience for LLM-enabled\ntext streaming services. At its core, Andes strategically allocates contended\nGPU resources among multiple requests over time to optimize their QoE. Our\nevaluations demonstrate that, compared to the state-of-the-art LLM serving\nsystems like vLLM, Andes improves the average QoE by up to 3.2$\\times$ under\nhigh request rate, or alternatively, it attains up to 1.6$\\times$ higher\nrequest rate while preserving high QoE.", |
| "authors": "Jiachen Liu, Zhiyu Wu, Jae-Won Chung, Fan Lai, Myungjin Lee, Mosharaf Chowdhury", |
| "published": "2024-04-25", |
| "updated": "2024-04-25", |
| "primary_cat": "cs.DC", |
| "cats": [ |
| "cs.DC", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "The advent of large language models (LLMs) has transformed text-based\nservices, enabling capabilities ranging from real-time translation to AI-driven\nchatbots. However, existing serving systems primarily focus on optimizing\nserver-side aggregate metrics like token generation throughput, ignoring\nindividual user experience with streamed text. As a result, under high and/or\nbursty load, a significant number of users can receive unfavorable service\nquality or poor Quality-of-Experience (QoE). In this paper, we first formally\ndefine QoE of text streaming services, where text is delivered incrementally\nand interactively to users, by considering the end-to-end token delivery\nprocess throughout the entire interaction with the user. Thereafter, we propose\nAndes, a QoE-aware serving system that enhances user experience for LLM-enabled\ntext streaming services. At its core, Andes strategically allocates contended\nGPU resources among multiple requests over time to optimize their QoE. Our\nevaluations demonstrate that, compared to the state-of-the-art LLM serving\nsystems like vLLM, Andes improves the average QoE by up to 3.2$\\times$ under\nhigh request rate, or alternatively, it attains up to 1.6$\\times$ higher\nrequest rate while preserving high QoE.", |
| "main_content": "Introduction Large language Models (LLMs) [4, 9, 21, 46, 51] have revolutionized natural language processing. By generating contextually relevant responses, they power a wide range of applications, more than 60% of which are centered around conversational interactions like chatbots, virtual assistants, language translation, and customer support systems [15]. In particular, the meteoric rise of ChatGPT [35] spearheaded the growth of conversational AI services by attracting over 100 million users in just two months after its launch [29]. Conversational AI services, by nature, provide interactive conversations between the user and an AI agent. At its core, an LLM generates tokens one by one1 and streams them back to the user to be digested, be it as written text or speech. As 1LLMs process and generate text in units of tokens. For instance, the word \u201cstreaming\u201d may be broken down into two tokens: \u201cstream\u201d and \u201cing.\u201d Req 2 Req 1 Request 1 and 2 arrive Quality of Experience is a different story. Req 1 Request 1 and 2 arrive Quality of Experience is a different story. Throughput is not all you need. Throughput is not all you need. User 1 User 2 User 1 User 2 Req 2 Req 1 Req 2 Server Server TTFT TTFT (a) Existing LLM serving systems are oblivious of QoE. User 2 experiences a long wait time (TTFT) and therefore lower QoE. Req 2 Req 1 Request 1 and 2 arrive Quality of Experience is a different story. Req 1 Request 1 and 2 arrive Quality of Experience is a different story. Throughput is not all you need. Throughput is not all you need. User 1 User 2 User 1 User 2 Req 2 Req 1 Req 2 Server Server TTFT TTFT (b) A QoE-aware LLM serving system can schedule token generation over time to enhance QoE. User 2\u2019s TTFT is drastically improved without affecting User 1\u2019s token delivery timeline. Figure 1. Server-side token generation timeline and userside response digestion progress. Even if the server generates tokens very fast, users cannot digest them at such a pace. this token-by-token streaming nature is akin to the frameby-frame streaming nature of video streaming services, we dub such services text streaming services. In this paper, we seek to characterize and enhance the Quality-of-Experience (QoE) of text streaming services (\u00a72.2). We realize that user interaction with LLM responses happens at moments when each new token is delivered (e.g., displayed or spoken) to the user over time. Thus, we define token delivery timeline (TDT), a series of timestamps when each token was delivered to a user, to capture the user\u2019s interaction with the service for a single request. The ideal TDT a user expects from a text streaming service can vary significantly based on the type of the service and user demographics. For instance, a chat service that uses a text-to-speech model to read out the LLM\u2019s response to users (e.g., voice chat in ChatGPT, real-time speech translation) could be less stringent in terms of its minimum token delivery speed (TDS) compared to a chat service in raw text, because a user\u2019s speaking speed is often slower than their reading speed, but it may require smaller time to first token (TTFT) to better resemble real-life arXiv:2404.16283v1 [cs.DC] 25 Apr 2024 \fverbal conversations. The minimum TDS and TTFT together define the expected TDT of a request. 
Unfortunately, existing LLM serving systems [20, 25, 30, 50] are designed to optimize aggregated server-side performance metrics such as token generation throughput [25, 50], which are not necessarily aligned with optimizing the QoE of text streaming services (\u00a72.3). More importantly, by realigning the objectives of LLM serving systems towards QoE optimization, a QoE-aware serving system can utilize the same resources more effectively to manage a greater number of concurrent requests while ensuring high QoE, thus reducing the cost per request. To illustrate, we compare existing serving systems with a QoE-aware one, each with a serving capacity of 1, in Figure 1. In Figure 1a, due to the commonly adopted first-come-first-serve (FCFS) scheduling policy [25, 50, 52], User 2 experiences a long initial waiting time (TTFT). In contrast, in Figure 1b, a QoE-aware serving system schedules token generation in a manner that is aware of each user\u2019s reading speed, leading to a shorter wait time for User 2 without affecting User 1\u2019s interaction with the service. Although the average server-side token generation throughput or latency are the same for the two systems, overall user experience is improved in the QoE-aware system. We attribute this to the na\u00efve FCFS scheduling policy in existing serving systems, which fails to account for the QoE requirements of individual requests and cannot efficiently utilize resources (\u00a72.4). Consequently, some users may experience extended waiting time during their interaction with the service, especially when the system is under higher request rate or is serving requests with longer context lengths. To preserve good user experience, the service provider must provision more compute resources proportional to the excess request load, leading to higher operational costs. Designing a QoE-aware LLM serving system, however, is challenging from both conceptual and practical perspectives. Defining the QoE metric to capture the user experience in text streaming services is non-trivial. It should encapsulate the continuous interaction process over time, accounting for factors like TTFT and TDS. Designing a QoE-aware serving system faces several systems challenges as well: (a) Dynamic and unpredictable resource demand: Requests arrive dynamically with varying expected TDT and prompt length and the number of output tokens is not known a priori, making it challenging to implement a one-size-fits-all scheduling strategy such as round-robin. (b) Constrained resource supply: The system has limited GPU memory and computation resources, restricting the number of concurrent in-flight requests. To meet the QoE requirements of individual requests, the system needs to make runtime decisions to allocate resources among requests, which may incur non-negligible overhead. To this end, we first propose a mathematical definition of QoE for text streaming services (\u00a73.1). Our QoE metric Age Group Reading Speed 18-24 (28.0%) 236 WPM 25-44 (51.9%) 200 WPM 45-54 (11.2%) 192 WPM 55-64 (5.6%) 185 WPM 65+ (3.3%) 175 WPM Table 1. Reading speed (Word Per Minute) by age group [10, 29]. Language Speaking Speed English (79.3%) 150 WPM Chinese (7.0%) 158 WPM Korean (6.9%) 150 WPM French (3.6%) 195 WPM Spanish (3.2%) 218 WPM Table 2. Speaking speed (Word Per Minute) by language [8, 29, 36]. compares the actual TDT of a request with its expected TDT, reflecting the user\u2019s experience throughout their entire interaction with the service. 
Then, we propose Andes, an LLM serving system that optimizes the overall QoE of text streaming services (\u00a74). Andes employs a dynamic priority-based preemptive scheduler that operates at the granularity of tokens. Andes strategically allocates system resources to more urgent requests and preempts requests that have already received sufficient service, all to enhance QoE. By satisfying more requests with high QoE using the same amount of resource, Andes eliminates the need for additional resource provisioning, thus reducing LLM serving cost. Andes also codesigns a client-side token buffer that temporarily withholds excess tokens and displays them to the user at their expected pace (\u00a75). This design ensures users experience smooth token delivery, oblivious to the intricacies of server-side scheduling or network fluctuations. We evaluate Andes using the OPT [51] family of models, ranging from 13B to 175B parameters (\u00a76). Compared to vLLM [25], we find that Andes can manage 1.6\u00d7 higher request rate with high QoE, or alternatively, improve the average QoE by 3.2\u00d7 given the same amount of resource. Overall, we make the following contributions in this paper: 1. We identify an emerging category of LLM-based applications (text streaming services) and define a QoE metric for them. 2. We propose Andes, a QoE-aware LLM serving system designed to optimize QoE for text streaming services. 3. We evaluate Andes under different workloads and setups and show that Andes significantly improves QoE with negligible system overhead. 2 Background and Motivation In this section, we introduce the unique characteristics of LLM serving systems (\u00a72.1) and the user experience of text streaming services (\u00a72.2). We then discuss the opportunities for improving user experience (\u00a72.3) and the limitations of existing solutions (\u00a72.4). 2.1 LLM Serving Systems LLM text generation using Transformer-based [47] models is characterized by autoregressive token generation and significant memory usage. First, the LLM generates tokens 2 \fTime #Tokens Req 1 Req 2 Req 3 Req 4 Expected TDT Figure 2. Four requests arrive at \ud835\udc61= 0. Requests 1 and 2 are equally satisfying. Requests 3 and 4 are frustrating, with request 4 being more so as it delivers fewer tokens earlier on, despite having the same TTFT and average token latency. sequentially, where the next token is conditioned on the previous tokens. Second, the LLM requires a large amount of memory to store intermediate data for each token in its input prompt and output response, known as KV cache [47]. As the number of tokens generated increases, so does the KV cache size. For instance, GPT-3 175B [9] requires 7 GB of GPU memory for a 1000-token request, limiting the number of requests that can be handled concurrently. 2.2 User Experience of Text Streaming Services Compared to traditional services that generate entire responses at once, text streaming services allow the user to start digesting the response as early as possible. The user experience includes two phases: Wait Phase. Users wait for the first token to arrive, known as the time-to-first-token (TTFT). For web applications, studies indicate that users expect an initial response to arrive within one second, with a significant 32% dropout rate if the response takes longer than three seconds [6]. Digest Phase. 
Following the first token, users enter the digest phase, which may last for tens of seconds or more [50], Hence, it is a common practice to stream tokens to the user on the fly so that they can start digesting the response as early as possible. The expected rate of token delivery, i.e., the Token Delivery Speed (TDS), depends on factors such as application type and user demographics. For example, reading speeds, measured in words per minute (WPM), differ across age groups (Table 1), while speaking speeds vary among languages (Table 2). By translating words to tokens using the average word-to-token ratio [38], we can estimate the average reading speed to 4.8 tokens/s and average speaking speed to 3.3 tokens/s. Intuition Behind QoE of Text Streaming Services. The expected TTFT and the expected TDS together define the expected token delivery timeline (TDT), represented by the black line in Figure 2. Similar to QoE in video streaming, a desired QoE metric should capture the gap between the actual TDT and the expected TDT. Intuitively, users are satisfied when the actual TDT is above the expected TDT, otherwise, they prefer to receive more tokens earlier on, as illustrated in 2 4 Request rate (req/s) 10 0 10 1 10 2 TTFT (s) Expected TTFT QoE-unaware QoE-aware (a) 90\ud835\udc61\u210e-p TTFT increases dramatically as the request rate surpasses the server\u2019s capacity. 2 3 4 5 Request rate (req/s) 0 5 10 TDS (tokens/s) Reading speed Speaking speed QoE-unaware QoE-aware (b) Token generation speed is much faster than the userexpected speed. Figure 3. System performance under different request rates. Figure 2. Therefore, the QoE should comprehensively measure the token delivery timeline throughout the entire user interaction, going beyond an aggregated number like TTFT or average token latency. We formally define such a QoE metric in Section 3.1. 2.3 Problems and Opportunities Existing LLM serving systems have primarily focused on optimizing aggregated server-side metrics, and often employ a first-come-first-serve (FCFS) scheduling approach without considering the user experience. In our experiment with ShareGPT [45] on OPT 66B [51] with 4 A100 GPUs, we notice that especially under high request rate, two issues arise: (1) certain users may encounter extended TTFT; (2) conversely, other users might receive tokens at a pace surpassing their digestion ability. Prolonged TTFT. As depicted in Figure 3a, the 90\ud835\udc61\u210epercentile TTFT increases dramatically as the server faces more bursty request rates, resulting in a longer queuing delay and degraded user experience. To accommodate such bursty request volumes, service providers often have to over-provision resources, such as by adding more GPUs, which significantly increases operational costs. Excessively High Token Generation Speed. Conversely, as shown in Figure 3b, we report the token generation speed under different request rates. The observed server-side token generation speed (\u22656.6 tokens/s) is much faster than the userexpected speed (3.3 or 4.8 tokens/s), as referenced in Table 1 and Table 2. This discrepancy indicates that the server often generates tokens faster than the user can consume them. While this might seem efficient from the server\u2019s perspective, it may overwhelm this user while starving others. Opportunities. We observe that there is an opportunity to optimize user experience by balancing prolonged TTFT and excessively fast token generation speed. 
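For reference, the conversion from the reading and speaking speeds in Tables 1 and 2 to an expected TDS can be sketched as follows; this is a minimal illustration of our own, and the 1.33 tokens-per-word ratio is an assumed stand-in for the average word-to-token ratio cited from [38]:

def wpm_to_tds(words_per_minute, tokens_per_word=1.33):
    # Convert a human reading or speaking speed (words per minute) into an
    # expected token delivery speed (tokens per second).
    return words_per_minute * tokens_per_word / 60.0

# e.g., wpm_to_tds(150) is roughly 3.3 tokens/s, in line with the speaking-speed
# estimate above; the exact figure depends on the tokenizer and the ratio used.

It is precisely this gap between what the server can generate and what users can actually digest that creates room for smarter scheduling.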
By temporarily pausing the response generation for requests with already sufficient tokens generated, we can spare the limited GPU resources to other pending requests. The ratio between the expected token generation speed \ud835\udc47\ud835\udc37\ud835\udc46expected and the actual token generation speed \ud835\udc47\ud835\udc37\ud835\udc46actual 3 \fResponse length Prompt length Memory usage = Request Spec Request ID 1 2 3 4 Prompt length 90 90 180 90 Response length 10 10 10 20 Expected TTFT (s) 1 1 2 2 Expected TDS 1.25 1.25 5 5 (tokens/s) Server memory capacity 1 2 3 4 1,2,3,4 FCFS 1 2 3 4 1 2 3 4 1,2,3,4 Round Robin 1 2 3 4 1 2 4 1,2,3,4 QoE-aware 10 20 #Token 0 2 4 6 8 Time 10 20 #Token 0 2 4 6 8 Time 0 2 4 6 8 Time Req 1 Req 2 Req 3 Req 4 Expected TDT Figure 4. Suboptimal user experience from QoE-unaware scheduling policies. In this illustrative toy example, we consider a server that can serve at most 200 tokens simultaneously due to memory constraints. We consider four requests with different prompt lengths, response lengths, as well as different expected TTFT and TDS values, arriving at time 0. The figure shows the serving order (first row) and the cumulative tokens delivered over time for each request (second and third rows). Colored lines represent actual TDT, while the black line indicates the expected TDT. An optimal QoE is achieved when the actual token delivery curve is completely left and/or above the expected token delivery curve. determines the slack for which a request can be preempted, allowing the system to accommodate more concurrent requests. Thus, with appropriate request preemption and restarting, we can serve \ud835\udc47\ud835\udc37\ud835\udc46actual \ud835\udc47\ud835\udc37\ud835\udc46expected \u00d7 concurrent requests than without request preemption, significantly improving user experience. In the example of text-based and voice-based chat services in Figure 3b, we could have increased the serving capacity by 6.6 4.8 = 1.38\u00d7 and 6.6 3.3 = 2\u00d7, respectively. Our evaluation shows that Andes can nearly achieve this theoretical improvement in practice. 2.4 Limitation of Existing Solutions Let us consider a toy example in Figure 4 to illustrate the limitations of existing QoE-unaware scheduling (FCFS used by vLLM [25] and Round Robin). Under FCFS scheduling, while requests 1, 2, and 3 are served immediately, request 4 suffers from longer TTFT due to queuing delays. Round Robin partially mitigates queuing delay using fair-sharing but still fails to align the token delivery in the later stage of the interaction, leading to suboptimal QoE. In contrast, the QoE-aware policy manages to meet the QoE requirements for all requests by prioritizing requests based on their QoE requirements and resource demand. It prioritizes requests with stringent TTFT requirements. Meanwhile, it monitors the resource demand of each request to prevent small requests from being starved of necessary resources. As the served requests accumulate enough tokens for the user to digest, the system upgrades the priority of request 3, which then requires more urgent servicing, and serves it. Finally, the system brings back requests 1, 2, and 4 to continue supplying tokens. In sum, when the server load is below its capacity, all requests can be served promptly and achieve perfect QoE without smart request scheduling. 
However, when the server is operating at capacity due to unpredictable higher request loads, QoE-aware scheduling can significantly improve the user experience without over-provisioning resources. 3 Overview In this section, we first introduce a formal definition of Quality-of-Experience (QoE) for text streaming services (\u00a73.1). Then, we provide an overview of Andes, an LLM serving system that optimizes QoE of text streaming services (\u00a73.2). 3.1 Quality-of-Experience (QoE) in Text Streaming Text streaming services allow the developer to specify the expected token delivery timeline (TDT) in a request. We derive the QoE of a request by comparing its actual TDT with the expected TDT, considering the entire token delivery process. Informed by the distinctions between superior and inferior service depicted in Figure 2, the formulation of our QoE metric is guided by a set of principles that reflect user expectations and experiences throughout their interaction: 1. Perfect Satisfaction: Users are satisfied when the actual token delivery perfectly aligns with or exceeds the expected delivery, resulting in maximum QoE (QoE = 1). We normalize QoE \u2208[0, 1] for generality across applications. 2. Excess Token Delivery: At any given time, delivering tokens faster than the user\u2019s digest speed does not add 4 \f) Perfect QoE (d) Pause in the middle Expected TDT Server generates User digests Sexpected Sactual Time #Tokens (a) TTFT missed. Time #Tokens (b) TDS missed. Time #Tokens (c) Perfect QoE. Time #Tokens (d) Pause in the middle. Figure 5. QoE example. The slope of the actual token delivery curve on the user side is capped by the expected TDS. value to the user experience, as the user cannot digest all tokens at once. So the QoE remains unchanged. 3. Early Token Delivery: Users prefer receiving more tokens earlier to start processing the response sooner. In scenarios where perfect satisfaction is not achieved, the QoE is higher for scenarios where more tokens are delivered earlier. For example, the QoE is worse for a longer TTFT with the same TDS, and similarly, the QoE is worse for a slower TDS with the same TTFT. Following these principles, we formalize the QoE metric by comparing two curves: (a) The expected token delivery curve \ud835\udc47(\ud835\udc61) that is defined by expected TTFT and TDS. Specifically, \ud835\udc47(\ud835\udc61) = \ud835\udc47\ud835\udc37\ud835\udc46expected\u00b7 (\ud835\udc61\u2212\ud835\udc47\ud835\udc47\ud835\udc39\ud835\udc47expected) represents the ideal timeline at which tokens should be delivered to the user (black lines in Figure 5). (b) The actual token delivery curve \ud835\udc34(\ud835\udc61) reflects the timeline of how tokens are digested by the user over time (black dotted lines in Figure 5), with its slope at any time capped by the expected TDS. To quantify the QoE of a request with response length \ud835\udc59, we measure the area under both curves up to the actual time to the last token (TTLT). We then define QoE as the ratio of the actual and expected areas, as shown in Figure 5: \ud835\udc44\ud835\udc5c\ud835\udc38= \ud835\udc46actual \ud835\udc46expected = \u222b\ud835\udc47\ud835\udc47\ud835\udc3f\ud835\udc47 0 \ud835\udc34(\ud835\udc61)\ud835\udc51\ud835\udc61 \u222b\ud835\udc47\ud835\udc47\ud835\udc3f\ud835\udc47 0 min(\ud835\udc47(\ud835\udc61),\ud835\udc59)\ud835\udc51\ud835\udc61 (1) This formulation focuses on the relative QoE relationship between services, but Andes allows the service provider to prioritize specific aspects. 
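To make Equation 1 concrete, the following Python sketch (a minimal illustration under our own simplifying assumptions, not the Andes implementation) computes the QoE of a single request from its per-token delivery timestamps and its expected TTFT and TDS; in particular, it treats TTLT as the time the user finishes digesting the last token:

def qoe(delivery_ts, exp_ttft, exp_tds):
    # delivery_ts: per-token delivery timestamps, in seconds since request arrival
    # exp_ttft, exp_tds: expected TTFT (seconds) and expected TDS (tokens/second)
    l = len(delivery_ts)  # response length in tokens
    # User-side digestion: a token cannot be digested before it is delivered,
    # nor faster than one token every 1/exp_tds seconds after the previous one.
    digest_ts = []
    for arrival in delivery_ts:
        prev = digest_ts[-1] + 1.0 / exp_tds if digest_ts else 0.0
        digest_ts.append(max(arrival, prev))
    t_end = digest_ts[-1]  # approximates the actual time to last token (TTLT)
    if t_end <= exp_ttft:
        return 1.0  # everything digested before the expected TTFT: perfect QoE
    # S_actual: area under the step curve A(t) of digested tokens up to t_end.
    s_actual = sum(t_end - t for t in digest_ts)
    # S_expected: area under min(T(t), l), with T(t) = exp_tds * (t - exp_ttft).
    ramp_end = exp_ttft + l / exp_tds
    if t_end <= ramp_end:
        s_expected = 0.5 * exp_tds * (t_end - exp_ttft) ** 2
    else:
        s_expected = 0.5 * l * (ramp_end - exp_ttft) + l * (t_end - ramp_end)
    return min(1.0, s_actual / s_expected)

Providers who want to emphasize particular aspects can further adjust this baseline definition.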
For example, to stress a shorter TTFT, the provider can add a penalizing term on the defined QoE as \ud835\udefc\ud835\udc47\ud835\udc47\ud835\udc39\ud835\udc47actual\u2212\ud835\udc47\ud835\udc47\ud835\udc39\ud835\udc47expected \u00b7 \ud835\udc46actual \ud835\udc46expected , where \ud835\udefc\u2208[0, 1]. In this paper, we will use the QoE definition in Equation 1 by default. Running Waiting Queue \u2026 \u2026 1 Request Client Server 4 5 Buffer Request Priority GPU Admit Evict Submit Request {Prompt: \u2019What is the probability that this paper will be accepted?\u2019, TTFT: 1s, TDS: 5 tokens/s} Token Context Length QoE Tracker 2 3 3 Worker 0 Worker 1 Worker W-1 Request Metadata Receive Token Figure 6. Andes Overview. 3.2 Andes Overview The workflow of Andes is shown in Figure 6. 1 The interaction begins with the user submitting a request to the server. The request comes with its QoE requirement, which is prespecified by the application developer. 2 Upon receiving the request, the QoE tracker assigns a scheduling priority and puts it in the waiting queue. 3 At each scheduling iteration, the QoE tracker refreshes the priorities of all requests, both in the waiting and running queues. Then Andes reschedules the requests based on their priorities by admitting high-priority waiting requests to GPU workers and evicting low-priority running requests back to the server. For these evicted requests, their states (e.g., KV cache) are stored in the request metadata store on CPU RAM for future retrieval. 4 During each inference iteration, each running request generates one token, which is then sent to the client. 5 As tokens are delivered to the client, a token buffer is responsible for storing excess tokens and displaying them at the expected speed, ensuring smooth token delivery. 4 QoE-Aware Scheduling In this section, we describe how Andes schedules token generation across multiple requests to maximize the total QoE. Section 4.1 formulates the scheduling problem as a Knapsack variant, and Section 4.2 introduces an efficient solution. 4.1 Problem Formulation The core of Andes is an online preemptive scheduling algorithm for token generation, which requires designing three elements: (1) How often to make scheduling decisions (time quantum), (2) which requests to serve (scheduling objective), and (3) how many requests to serve at a time (batch size). Time Quantum. At the beginning of each time quantum, the scheduler inspects both queued and running requests, and determines which ones to admit and preempt. Following the 5 \fcontinuous batching used in existing systems [25, 50], Andes invokes its scheduler at the beginning of each iteration. Scheduling Objective. Just like any other online serving system, it is impractical to perfectly plan execution into the future. Therefore, Andes serves the set of requests that maximizes the scheduling objective in the upcoming time frame of length \u0394\ud835\udc61. The parameter \u0394\ud835\udc61cannot be too short, as scheduling decisions will become shortsighted, or too long, as the actual system state would deviate too far from estimations. We find that setting it as the average request completion time is reasonable, and show in Section 6.5 that Andes is not sensitive to the setting of \u0394\ud835\udc61. Andes supports various scheduling objectives including max average QoE and max-min QoE by designing its scheduling objective function appropriately. 
For the sake of presentation, we will focus on maximizing average QoE here (See Appendix A for alternative objectives). The objective function for request \ud835\udc56is defined as: \ud835\udc44serve,\ud835\udc56\u2212\ud835\udc44wait,\ud835\udc56 (2) where \ud835\udc44serve,\ud835\udc56and \ud835\udc44wait,\ud835\udc56are the QoE of request \ud835\udc56after \u0394\ud835\udc61 if it is served and not served, respectively. In simple terms, Equation 2 is the amount of QoE gain when we decide to serve request \ud835\udc56compared to when it is not served, and we naturally want to serve more of the requests that give us large QoE gains when served. Batch Size. The number of requests picked to run in the upcoming quantum, or batch size, is limited by two factors. First, each token in a request\u2019s context (prompt plus all generated tokens) consumes one entry in the LLM serving system\u2019s KV cache [9], whose size is bounded by GPU memory. Thus, we have the following constraint: \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \ud835\udc59\ud835\udc56\ud835\udc65\ud835\udc56\u2264\ud835\udc40 (3) where there are \ud835\udc41requests in total (queued or running), \ud835\udc59\ud835\udc56 is request \ud835\udc56\u2019s context length, \ud835\udc65\ud835\udc56is an indicator variable that is 1 if request \ud835\udc56is served and 0 otherwise, and \ud835\udc40is the total number of tokens that can fit in GPU memory. Furthermore, Andes must take into account the latency to generate one token. That is, while a large batch size may increase server-side token generation throughput, the increase in the amount of compute will inflate the latency to generate one token from the perspective of each request, potentially hurting their QoE by delaying TTFT or failing to meet the expected TDS. On the other hand, a small batch size will be able to deliver tokens faster to each running request, but in turn more requests will not be served at all, again potentially hurting their QoE. Thus, the right intermediate batch size will have to be chosen in order to maximize average QoE. Knapsack Formulation. Putting these together, we observe that the problem setting resembles that of the classic knapsack problem [23]. The goal is to select items (requests) Time # Tokens Qserve(50) Qserve(30) Qserve(10) t Time # Tokens Qwait t Expected Actual Future Time # Tokens Qserve(50) Qserve(30) Qserve(10) t (a) \ud835\udc44serve, i(\ud835\udc35) Time # Tokens Qwait t (b) \ud835\udc44wait,\ud835\udc56 Figure 7. Visualization of \ud835\udc44serve, i(\ud835\udc35) and \ud835\udc44wait,\ud835\udc56. The former depends on batch size \ud835\udc35whereas the latter is a constant. With batch size 50, request \ud835\udc56no longer has perfect QoE. to put in a knapsack (GPU) so that total item value (QoE gain) is maximized and total weight (\ud835\udc59\ud835\udc56) does not exceed the knapsack\u2019s capacity (\ud835\udc40). However, our problem setting deviates from that of the classical knapsack because the value of each item depends on how many items there are in the knapsack. This is because, as noted above, the number of requests in the knapsack (batch size) affects token generation latency, which in turn means that \ud835\udc44serve,\ud835\udc56is actually a function of batch size \ud835\udc35.2 Figure 7 visualizes this. When \ud835\udc35is just 10 or 30, the request maintains perfect QoE by always running ahead. 
However, when \ud835\udc35is 50, the computation time of one iteration becomes longer and slows down token generation, degrading the request\u2019s QoE by failing to meet its TDS expectation. On the other hand, \ud835\udc44wait,\ud835\udc56does not depend on the batch size because it simply sits in the queue, waiting to be served. Thus, for a specific batch size \ud835\udc35, we would like to solve: max \ud835\udc65 \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \u0000\ud835\udc44serve,\ud835\udc56(\ud835\udc35) \u2212\ud835\udc44wait,\ud835\udc56 \u0001 \u00b7 \ud835\udc65\ud835\udc56 s.t. \ud835\udc65\ud835\udc56\u2208{0, 1}, \ud835\udc56\u22081, . . . , \ud835\udc41 \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \ud835\udc65\ud835\udc56= \ud835\udc35 \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \ud835\udc59\ud835\udc56\ud835\udc65\ud835\udc56\u2264\ud835\udc40 (4) where the optimization variable \ud835\udc65is a length \ud835\udc41array of \ud835\udc65\ud835\udc56s. The second constraint ensures that exactly \ud835\udc35many requests are chosen, whereas the final constraint ensures that the GPU memory capacity is not exceeded. Equation 4 should be solved for each possible batch size \ud835\udc35and the solution that yields the best objective value should be selected. 2More precisely, token generation latency is a function of batch size and the total number of tokens in the batch, but batch size and total number of tokens are nearly perfectly correlated, allowing us to eliminate the latter and only leave batch size. See Appendix B for more detailed analysis. 6 \f4.2 Solution Design In this section, we discuss the hardness of the problem formulated in the previous section in terms of algorithmic hardness and systems overhead. Then, we propose efficiency optimizations and a greedy algorithm that gives an approximate solution with low systems overhead. Algorithmic Hardness. As Andes must solve its optimization problem repetitively online to determine the set of requests to solve, an efficient algorithm is needed. However, Equation 4 is a variant of the knapsack problem called the Exact K-item Knapsack, which is weakly NP-Hard [23]. We give an optimal 3D dynamic programming solution to the problem that runs in pseudo-polynomial time \ud835\udc42(\ud835\udc40\u00b7 \ud835\udc412) in Appendix C. However, such an algorithm is also too slow in our case as the number of requests \ud835\udc41and the maximum number of tokens that can fit in memory \ud835\udc40are easily in the order of hundreds and thousands, respectively. Furthermore, we need to solve Equation 4 for each possible batch size \ud835\udc35\u2208[1, \ud835\udc41], which is clearly intractable. Preemption Overhead. When some requests that were running in the previous time quantum are not selected to run on the next, such requests are preempted. This is the core mechanism that reduces TTFT inflation from head-of-line blocking. For this, Andes supports two preemption mechanisms: swapping and recomputation. The former moves the request\u2019s KV cache entries between the GPU and CPU memory, whereas the latter drops all entries on preemption and recomputes them when the request restarts. If Andes runs out of host memory for storing KV cache, the preemption mechanism will automatically switch to recomputation. Preemption is not free \u2013 in general, the latency overhead of swapping is similar to one token generation iteration (See Appendix D for detailed benchmarking). 
Frequent preemption may slow down token generation and delay token delivery, potentially degrading request throughput and QoE. Therefore, our scheduling algorithm must make preemption decisions that strike a good balance between reaping QoE gains and causing slowdowns. Optimization #1: Selective Triggering. We observe that Equation 4 only needs to be solved when batch size is limited either by memory capacity or computation time. The former case can be detected easily by monitoring the KV cache occupancy and having a high-memory watermark (e.g., 90%). For the latter case, Andes monitors token generation latency and detects when it begins to exceed the most minimum token delivery speed requirement of the most stringent request. In all other cases, Andes does not trigger the optimization problem solver and serves every request. Optimization #2: Batch Size Search Space Pruning. In order to reduce the number of times Equation 4 needs to be solved, we reduce the search space of batch size \ud835\udc35from [1, \ud835\udc41] to [\ud835\udc35min, \ud835\udc35max]. First, there is no point in exploring very large Algorithm 1 Greedy packing algorithm for Equation 4 Inputs: Number of requests \ud835\udc41and KV cache capacity \ud835\udc40 Request context length array \ud835\udc59[\ud835\udc41] Request QoE gain array \ud835\udc5e[\ud835\udc41] Target batch size \ud835\udc35 Output: Solution array \ud835\udc65[\ud835\udc41] 1: Initialize priority array \ud835\udc5d[\ud835\udc41] with all zeros 2: for \ud835\udc56= 0 to \ud835\udc41\u22121 do 3: \ud835\udc5d[\ud835\udc56] = \ud835\udc5e[\ud835\udc56] \ud835\udc59[\ud835\udc56] \u22b2Priority of request \ud835\udc56 4: \ud835\udc40current = 0 5: \ud835\udc41current = 0 6: Initialize solution array \ud835\udc65[\ud835\udc41] with all zeros 7: for all \ud835\udc56\u2208[0, \ud835\udc41\u22121] in descending order of \ud835\udc5d[\ud835\udc56] do 8: if \ud835\udc40current + \ud835\udc59[\ud835\udc56] \u2264\ud835\udc40and \ud835\udc41current + 1 \u2264\ud835\udc35then 9: \ud835\udc65[\ud835\udc56] = 1 \u22b2Serve request \ud835\udc56 10: \ud835\udc40current = \ud835\udc40current + \ud835\udc59[\ud835\udc56] 11: \ud835\udc41current = \ud835\udc41current + 1 12: else 13: break 14: return \ud835\udc65 batch sizes that cannot be realized. Thus, \ud835\udc35max is determined by adding to the batch requests with the shortest context lengths until the total number of tokens in the batch reaches \ud835\udc40, at which point the batch size is the largest that can be realized. On the other hand, very small batch sizes that can generate tokens faster than the expected TDS of any request are also suboptimal. This is because going that fast does not increase the QoE of requests that are served, but on the other hand will serve a smaller number of requests, potentially degrading the QoE of requests that are left waiting. Thus, \ud835\udc35min is set as the largest batch size that generates tokens faster than the most stringent TDS among all requests. Optimization #3: Greedy Packing for Knapsack. A direct solution to the exact k-item knapsack problem in Equation 4 is computationally too heavy. Instead, Andes designs an efficient algorithm that computes each request\u2019s priority and greedily packs requests in that order. In designing the priority function, we have three goals: (a) Reflecting merit: Requests that yield high QoE gain and consume less resource should have high priority. 
(b) Preventing starvation: Requests should be automatically deprioritized as they receive service. (c) Reducing preemption: Selecting high priority requests should reduce the need for preemption. In light of these goals, request \ud835\udc56\u2019s priority is defined as: \ud835\udc44serve,\ud835\udc56(\ud835\udc35) \u2212\ud835\udc44wait,\ud835\udc56 \ud835\udc59\ud835\udc56 (5) This priority function meets our goals. (a) A higher QoE gain will increase the request\u2019s priority, but simultaneously discounted by the amount of GPU memory it will use. (b) As 7 \fa request receives service, its context length (\ud835\udc59\ud835\udc56) will increase, automatically deprioritizing itself. In contrast, requests will have higher QoE gain the more they wait, automatically boosting their priorities. (c) Finally, a request with long context length (\ud835\udc59\ud835\udc56) will be preempted first, freeing enough GPU memory to potentially bring in more than one waiting requests.3 This reduces the number of preemptions required to alleviate head-of-line blocking. The whole procedure is given in Algorithm 1. The greedy packing algorithm offers time complexity \ud835\udc42(\ud835\udc41log \ud835\udc41). We empirically show in Section 6.5 that this greedy solution can achieve performance comparable to the 3D DP algorithm while greatly reducing scheduling overhead. Optimization #4: Preemption Cap. We have discussed that preemption is not free and can potentially degrade QoE. However, we can empirically and theoretically show that Andes commonly does not result in excessive preemptions/thrashing that may cause average QoE to degrade. Empirically, Andes consistently maintains an average preemption frequency below 1 per request, even under a high server load (\u00a76.2.3). Theoretically, the number of preemptions needed to optimize the QoE of requests is contingent upon the excessive request load. Assume the serving system can handle \ud835\udc5f0 requests per second and the actual request rate is \ud835\udc58\u00b7 \ud835\udc5f0 requests per second, where \ud835\udc58\u22651. Thus, there would be (\ud835\udc58\u22121) \u00b7\ud835\udc5f0 requests whose QoE might be degraded due to the queuing delay. To mitigate this, we need roughly one preemption to accommodate each of these requests. Sometimes, a single preemption of a long request can allow multiple new requests to be served, which further reduces the number of preemptions needed. Therefore, the average preemption frequency needed is bounded by \ud835\udc58\u22121, which is small as long as the load is not excessively high. Nevertheless, in order to safeguard against thrashing that may happen in the worst case request pattern, Andes supports setting a cap \ud835\udc43on the average number of preemptions a request can experience throughout its lifetime. Too high a \ud835\udc43will not be able to act as a safeguard, whereas too small a \ud835\udc43will prevent even absolutely necessary preemptions from happening. We find that setting \ud835\udc43= 1, i.e., a request on average experiences at most one preemption during its lifetime, is a good default (Section 6.5). 5 Implementation The two core elements of Andes are its QoE-aware scheduler and a client-side token buffer. Server-Side QoE-Aware Scheduler. Andes\u2019s scheduling algorithm can work with any LLM serving system that supports continuous batching and at least one preemption mechanism (swapping or recomputation). 
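As a reference for what the per-quantum scheduling step looks like, the following Python sketch renders the greedy packing of Algorithm 1 together with the priority function in Equation 5; the names are ours for illustration and do not correspond to the Andes or vLLM codebase:

def greedy_pack(context_len, qoe_gain, batch_size, kv_capacity):
    # context_len[i]: number of KV-cache token slots request i occupies if served
    # qoe_gain[i]: Q_serve,i(B) - Q_wait,i estimated over the horizon delta_t
    # Returns the indices of requests to run in the next quantum.
    order = sorted(range(len(context_len)),
                   key=lambda i: qoe_gain[i] / context_len[i],  # Equation 5
                   reverse=True)
    served, used_mem = [], 0
    for i in order:
        if used_mem + context_len[i] <= kv_capacity and len(served) < batch_size:
            served.append(i)
            used_mem += context_len[i]
        else:
            break  # mirror Algorithm 1: stop at the first request that does not fit
    return served

Andes runs this inner step once per candidate batch size in the pruned range [Bmin, Bmax] and keeps the batch with the best objective value; running requests that are not selected are preempted via swapping or recomputation.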
We note that an LLM 3The overhead of preemption depends on how much memory was freed, not the number of requests. Therefore, for the same amount of memory freed from preemption, it\u2019s better to free a smaller number of requests. 0 50 100 150 200 250 #Tokens Generation Pause Network Fluctuation 0 10 20 30 40 50 Time (s) 0 100 #Tokens in buffer Client receives User digests Figure 8. The client-side token buffer holds excess tokens sent from the server to absorb token generation fluctuations and paces token delivery based on the user\u2019s expected TDS. serving system that implements Paged Attention [25] is likely to also support at least one preemption mechanism to prevent the system from running out of memory. As a reference, we implemented Andes\u2019s scheduling algorithm on top of vLLM [25]. The scheduler only manages requests coming into the vLLM instance it is integrated with, assuming that cluster-level load balancing and fault tolerance are done separately. Client-Side Token Buffer. The server sends tokens to the buffer as soon as they are generated, even if they were generated at a pace that exceeds the user\u2019s expected TDS. Then, the token buffer smooths out the token delivery timeline to pace tokens at the user\u2019s expected TDS. The token buffer can also naturally smooth out some fluctuations in network latency, for instance in crowded mobile networks. The buffer should be implemented appropriately depending on the destination of streaming \u2013 e.g., TypeScript for web frontend, Python for API use. Figure 8 visualizes the token buffer in action. With an initial burst generation faster than the user\u2019s expected TDS, the buffer withholds excess tokens and paces token delivery, thus growing in size. The server is fully aware of the token buffer, and preempts the request to serve other requests. During this time, the buffer drains at a rate that matches the user\u2019s expected TDS. Finally, the server brings back the request and starts generating tokens again, and together with the token buffer, perfect QoE was achieved. 6 Evaluation We evaluate the performance of Andes under different workloads. We demonstrate that: 1. Andes improves the average QoE up to 3.2\u00d7 when the system experiences high/bursty load (\u00a76.2.1). 8 \fModel size 13B 30B 66B 175B GPUs A100 4\u00d7A100 4\u00d7A100 4\u00d7A100 GPU Memory 80 GB 320 GB 320 GB 320 GB Precision FP16 FP16 FP16 8-bit [14] Model Memory 26 GB 60 GB 132 GB 180 GB Table 3. OPT model family and GPU specifications used. 2. Andes can handle up to 1.6\u00d7 higher request rates while preserving high QoE without additional resources, significantly reducing the serving cost(\u00a76.2.2). 3. Andes maintains similar token generation throughput as the baseline, with a minor drop (\u226410%) in throughput as the request rate increases (\u00a76.2.3). 4. Andes significantly improves TTFT, while maintaining TDS above user expected speed (\u00a76.3). 5. Andes outperforms the baselines across different workloads (\u00a76.4) and setups (\u00a76.5). 6.1 Experiment Setup Model and Server Configurations. Following state-ofthe-art LLM serving systems [25], we evaluate Andes using the OPT [51] series with 13B, 30B, 66B, and 175B parameters, with the 175B model employing INT8 quantization. We run all experiments on NVIDIA A100 GPUs in Chameleon Cloud [22], and use tensor parallelism to deploy the models, using the default configuration in vLLM [25]. We use swap as the preemption mechanism and set the CPU swap space to 240 GB in total. 
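(Returning briefly to the client-side token buffer of Section 5: its pacing logic can be sketched as below. This is an asyncio-based illustration of our own, not the Andes client; the queue is fed by whatever transport streams tokens from the server.)

import asyncio

async def paced_stream(token_queue, expected_tds):
    # Withhold excess tokens and release them to the UI no faster than the
    # user's expected TDS (tokens per second); waiting on an empty queue
    # naturally absorbs server-side pauses and network fluctuations.
    interval = 1.0 / expected_tds
    while True:
        token = await token_queue.get()  # arrives whenever the server sends it
        if token is None:                # sentinel marking the end of the response
            break
        yield token                      # display or hand off the token now
        await asyncio.sleep(interval)    # pace the next release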
Detailed hardware specifications are provided in Table 3. Workloads. We experiment on ShareGPT [45], a dataset that gathers conversations shared by users with ChatGPT [35], including multiple rounds of input prompt and output response. By concatenating multiple rounds of conversations into one input while limiting its length to 1k tokens to fit the model\u2019s maximum context length, and setting the final response as the output, we create the Multi-Round ShareGPT dataset for longer conversations. As shown in Figure 9, MultiRound-ShareGPT has about 3\u00d7 longer input than ShareGPT, while both datasets have similar output length distribution. We generate request arrival traces using Poisson distribution with different arrival rates. The request\u2019s QoE requirement trace is created with different expected TTFT and TDS. TTFT is set to 1 second for all, while TDS is based on user reading speeds (Table 1), and is translated from words to tokens using the average word-to-token ratio for ChatGPT [38]. In real applications, QoE requirements should be set depending on the application\u2019s specific use case. For instance, reading speed (and thus expected TDS) may be measured using screen scrolling [18] or eye-tracking [3, 34]. Another potential use case is to introduce API price tiering, 0 500 1000 1500 2000 #Tokens 0 200 400 Density Input (mean: 174.55) Output (mean: 314.22) (a) ShareGPT. 0 200 400 600 800 1000 #Tokens 0 200 400 600 Density Input (mean: 624.22) Output (mean: 365.52) (b) Multi-Round ShareGPT. Figure 9. Input and output length distributions of datasets. where a higher per-token price provides faster TDS, and API users can select the tier suitable for downstream digestion. Baselines. We compare Andes with vLLM (version 0.2.7). vLLM uses first-come-first-serve (FCFS) scheduling policy by default. We implement another scheduling policy, RoundRobin (RR), atop vLLM for more informed comparison, which is designed to guarantee equal service to requests through cyclic request preemption. For RR, we set the service interval to 50 inference iterations, maximizing its QoE performance. Metrics. We focus on the following metrics in evaluations: \u2022 Average QoE: We set the threshold to 0.9 as the minimum acceptable average QoE. The QoE of 0.9 corresponds to a 5% delay in TTFT, a 10% slowdown in TDS, or something in the middle. \u2022 System capacity: It measures the maximum request rate that the system can handle while maintaining an average QoE above the threshold. \u2022 System throughput: It measures how many tokens the system generates per second. We also report normalized latency, which is used by vLLM[25] and Orca[50], in Appendix E. 6.2 End-to-End Experiments In this section, we report the performance of Andes in terms of average QoE (\u00a76.2.1), system capacity (\u00a76.2.2), and system throughput (\u00a76.2.3) under different setups. 6.2.1 Improvement on Average QoE. We evaluate the performance of Andes on all four models and two datasets. Figure 10 and Figure 11 show the result on the ShareGPT dataset and Multi-Round ShareGPT dataset respectively. As the request rate increases, Andes maintains a high average QoE, outperforming the baseline whose average QoE sharply decreases. In other words, Andes can serve more concurrent requests without compromising user experience. For ShareGPT dataset, Andes increases average QoE up to 3.1\u00d7 at the same request rate, while maintaining an average QoE of 0.9, all with the same resources. 
For Multi-Round ShareGPT dataset, Andes improves average QoE up to 3.2\u00d7. For OPT-30B model, the improvement is less significant, as the model is less resource-constrained when compared to the OPT-66B model. 9 \f1.4 1.6 1.8 2.0 2.2 Request rate (req/s) 0.00 0.25 0.50 0.75 1.00 Avg QoE RR vLLM Andes 5.0 7.5 10.0 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (a) OPT-13B 5 10 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (b) OPT-30B 3 4 5 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (c) OPT-66B 1.4 1.6 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (d) OPT-175B. Figure 10. Average QoE for different request rates using the ShareGPT dataset. 1.4 1.6 1.8 2.0 2.2 Request rate (req/s) 0.00 0.25 0.50 0.75 1.00 Avg QoE RR vLLM Andes 2 3 4 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (a) OPT-13B. 2 4 6 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (b) OPT-30B. 1.5 2.0 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (c) OPT-66B. 0.8 1.0 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (d) OPT-175B. Figure 11. Average QoE for different request rates using the Multi-Round ShareGPT dataset. These improvements can be attributed to Andes\u2019s QoEaware scheduling policy, which dynamically prioritizes resources for urgent requests that risk falling below their expected QoE, preempting those that have been sufficiently served. In contrast, under higher load, traditional FCFS scheduling policy suffers from head-of-line blocking, leading to significant queuing delay. Although the RR policy mitigates head-of-line blocking by preemptions, frequent preemptions introduce significant overhead and degrade the average QoE. 6.2.2 Improvement on Server Capacity. As shown in Figures 10 and 11, the horizontal dotted lines represent the average QoE threshold of 0.9. For ShareGPT dataset, Andes can manage 1.2\u00d7\u22121.6\u00d7 higher request rate than vLLM while maintaining an average QoE above the threshold. Specifically, for the OPT-66B model, Andes can handle 1.25\u00d7 higher request rate than vLLM, nearing the 1.38\u00d7 theoretical improvement suggested in Section 2.3, showcasing Andes\u2019s ability to optimize resource allocation and average QoE effectively. For Multi-Round ShareGPT dataset, Andes can serve 1.1 \u00d7 \u22121.3\u00d7 higher request rate. Additionally, by serving higher request rates with the same resources, Andes effectively reduces the resource cost per request. 6.2.3 Impact of Andes on System Throughput. We report the token generation throughput and the preemption frequency of Andes on OPT-66B with both datasets, as shown in Figure 12 and Figure 13. In both datasets, Andes maintains the same token throughput as vLLM when the request rate is moderate, and experiences a minor drop (\u226410%) in throughput as the request rate increases. This demonstrates that 1.4 1.6 1.8 2.0 2.2 Request rate (req/s) 0.00 0.25 0.50 0.75 1.00 Avg QoE RR vLLM Andes 3 4 5 Request rate (req/s) 0 50 Throughput (tokens/s) (a) ShareGPT. 1.5 2.0 Request rate (req/s) 0 50 Throughput (tokens/s) (b) Multi-Round ShareGPT. Figure 12. Token generation throughput with OPT-66B under different request arrival rates. Andes marginally impacts system throughput. The throughput decrease can be attributed to the overheads introduced by request preemption. 
Despite the active request scheduling, the frequency of preemptions per request remains low (\u22640.5) under reasonable average QoE as shown in Figure 13, minimizing the impact of overheads on throughput; Despite the minor decrease in throughput, the up to 60% improvement in server capacity offered by Andes can compensate for this, effectively reducing the resource cost per request while maintaining a satisfactory user experience. 6.3 Breakdown Analysis To understand Andes\u2019s performance in detail, we conducted a breakdown analysis focusing on QoE, time to first token (TTFT), and token delivery speed (TDS), as shown in Table 4. We report Andes\u2019s performance on OPT-66B and ShareGPT dataset with a request rate of 3.3, where Andes achieved an average QoE of 0.92. With these breakdown analyses, we can 10 \f3 4 5 Request rate (req/s) 0.0 0.5 1.0 Avg preemption frequency Andes (a) ShareGPT. 1.5 2.0 2.5 Request rate (req/s) 0.0 0.5 1.0 Avg preemption frequency Andes (b) Multi-Round ShareGPT. Figure 13. Preemption frequency with OPT-66B under different request arrival rates. Metric Percentile Method vLLM Andes 10\ud835\udc61\u210e 0.05 0.77 50\ud835\udc61\u210e 0.39 1.00 QoE 90\ud835\udc61\u210e 1.00 1.00 10\ud835\udc61\u210e 0.33 0.35 50\ud835\udc61\u210e 56.73 0.47 TTFT (s) 90\ud835\udc61\u210e 144.95 0.66 10\ud835\udc61\u210e 6.05 5.32 50\ud835\udc61\u210e 6.45 5.44 TDS (tokens/s) 90\ud835\udc61\u210e 7.84 7.02 Table 4. Andes significantly improves QoE and TTFT, while maintaining TDS above user expected speed. provide granular insights into individual user satisfaction under this level of QoE. QoE distribution. Andes significantly improves the lower and median user experiences, with the 10th percentile rising from 0.05 to 0.77 and the 50th percentile achieving a perfect score of 1, compared to 0.39 in vLLM. In order to understand how Andes handles requests with different request lengths, we present a scatter plot of QoE across different total lengths as shown in Figure 14. We observe Andes slightly starves a small fraction of longer requests, as they consume more resources or take longer time to complete. In contrast, FCFS starves lots of shorter requests that are blocked by longer requests. Token delivery timeline. Andes greatly enhances initial responsiveness, reducing median TTFT from 56.73 seconds in vLLM to just 0.47 seconds, and similarly improving the 90th percentile from 144.95 seconds to 0.66 seconds. This improved performance is attributed to Andes\u2019s QoE-aware scheduling, which effectively mitigates head-of-line blocking and reduces queuing delays. Additionally, we analyze the percentile distribution of the average TDS observed by users, excluding TTFT. While Andes slightly slows the average TDS, it remains above the user\u2019s expected speed, ensuring balanced delivery that neither overwhelms nor starves users. 0 1000 2000 Total Length 0 1 QoE (a) vLLM. 0 1000 2000 Total Length 0 1 QoE (b) Andes. Figure 14. QoE distribution across different total lengths. 6.4 Robustness to Diverse Workloads We evaluate the robustness of Andes under diverse settings including different hardware, arrival patterns, and QoE traces. We observed similar trends in diverse settings; therefore, we report our results with OPT-66B and ShareGPT. Hardware. We evaluate Andes on the NVIDIA A40 GPU with 46 GB RAM, as shown in Figure 15a. Andes improves average QoE up to 7\u00d7 under a higher request rate and serves 1.1\u00d7 higher request rate while maintaining an average QoE of 0.9. 
The reason for the smaller improvement on server capacity is that the A40 has a lower computational capability than the A100, leading to a slower average token generation speed. Consequently, the gap between the expected TDS and actual TDS on the A40 is smaller than on the A100, providing less opportunity for request scheduling and improving average QoE. However, as newer generations of GPUs are becoming more powerful in terms of computational capability, the potential improvement of Andes will be more significant. Bursty Arrival Process. We use a Gamma arrival process with the same request rate and a coefficient of variation of 3 to simulate the burst arrival of user requests. Figure 15b indicates that under bursty workload, the average QoE for FCFS policy begins to decrease at a lower request rate compared to the Poisson arrival, due to increased queuing delays. In contrast, Andes sustains a high average QoE, achieving up to a 2.7\u00d7 improvement on average QoE at the same request rate and serves 1.3\u00d7 higher request rate, showing Andes\u2019s adaptability to bursty workload. Different QoE Traces. Due to the unique QoE requirements of different applications, we evaluate Andes\u2019s performance under a voice chat QoE trace, with expected TTFT at 1 second and slower expected TDS adjusted according to the speaking speed outlined in Table 2. As shown in Figure 15c, both Andes and baseline achieve better average QoE even on higher request rates, attributed to the less strict TDS requirements. Nevertheless, Andes improves average QoE up to 1.25\u00d7 and manages 2\u00d7 request rate, which approaches the theoretical maximum improvement of 2\u00d7 as discussed in Section 2.3. 6.5 Sensitivity Analysis All experiments in sensitivity analysis are conducted on OPT66B with the ShareGPT dataset and a request rate of 3.3. 11 \f1.4 1.6 1.8 2.0 2.2 Request rate (req/s) 0.00 0.25 0.50 0.75 1.00 Avg QoE RR vLLM Andes 0.4 0.5 0.6 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (a) NVIDIA A40. 3 4 5 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (b) Burst request arrival. 5 10 Request rate (req/s) 0.0 0.5 1.0 Avg QoE (c) Voice chat QoE trace. Figure 15. Robustness analysis on OPT-66B with ShareGPT dataset. 0.0 0.5 1.0 1.5 Preemption frequency cap p 0.5 1.0 Avg QoE vLLM Sedna 0.0 0.5 1.0 1.5 Preemption frequency cap P 0.5 1.0 Avg QoE (a) Average QoE. 0.0 0.5 1.0 1.5 Preemption frequency cap P 0 50 Throughput (tokens/s) (b) Throughput. Figure 16. Tuning preemption frequency cap \ud835\udc43. 0 50 100 150 t 0.4 0.6 0.8 1.0 Avg QoE vLLM Andes Figure 17. Tuning \u0394\ud835\udc61. 3 4 5 Request rate (req/s) 0.0 0.5 1.0 Avg QoE vLLM Andes w/ greedy Andes w/ DP Figure 18. Different solver. Preemption Frequency Cap \ud835\udc43. Increasing preemption frequency cap \ud835\udc43can lead to finer-grained scheduling, potentially enhancing average QoE, but at the cost of increased overhead and reduced throughput. Figure 16a shows the average QoE under different \ud835\udc43. Improvements in QoE are observed as \ud835\udc43increases up to 0.4 preemptions per request, stabilizing beyond this point. Conversely, Figure 16b illustrates a slight decrease in system throughput with increased \ud835\udc43, stabilizing beyond 0.4 preemption per request. These observations suggest a trade-off between average QoE and system throughput, indicating the current setting of \ud835\udc43nearly optimizes QoE while maintaining satisfactory throughput. Prediction Timeframe \u0394\ud835\udc61. 
We evaluate how different Δt values influence average QoE to understand their effect on system performance. Figure 17 illustrates that the average QoE remains roughly consistent for Δt values greater than 50 and significantly outperforms the baselines, indicating that Andes is not sensitive to the setting of Δt.

Different Knapsack Solution. We compare the performance of Andes with two different knapsack solutions: greedy and dynamic programming (DP). Figure 18 shows that the greedy solution consistently surpasses the DP solution, while both solutions outperform the baselines. The lower performance of the DP is due to its substantial computational overhead, which delays the inference process and degrades the average QoE. This suggests that the greedy approach is a more practical and efficient solution for Andes.

7 Related Work

General Model Serving Systems. A variety of model serving systems have emerged, ranging from general-purpose, production-level frameworks like TensorFlow Serving [33] and NVIDIA Triton [31] to specialized systems such as Clipper [11], which sets application-level SLOs. Recent systems including Nexus [42], DeepRecSys [17], Clockwork [16], INFaaS [40], SuperServe [24], and AlpaServe [26] have introduced features like serving pipelines, hardware platform diversity, advanced scheduling, dynamic model selection, and model parallelism to boost resource efficiency. However, these general systems neglect the unique characteristics of LLM inference, leaving potential avenues for optimization.

LLM Serving Systems. Numerous model serving systems have been proposed to address the unique challenges of LLMs. Orca [50] introduced an iteration-level scheduling policy to enhance the throughput of batched inference, and vLLM [25] developed PagedAttention to reduce the memory usage of LLMs. Splitwise [37], DistServe [52], TetriInfer [19], and Sarathi-Serve [1, 2] optimize the computation of the prefill and decode phases by disaggregating or merging them. Other systems focus on GPU kernel optimization and kernel fusion [5, 12, 32], model parallelism [5, 39], batching algorithms [13, 43, 50], KV-cache management [27, 28, 44], and parameter sharing [53]. However, these systems focus on optimizing aggregate server-side performance and simply adopt a FCFS scheduling policy, which fails to address the queuing delay problem under higher request load. Finally, shortest remaining processing time [41] is a preemptive scheduling policy, but it does not consider the QoE of individual requests and requires knowledge of the response length of requests. To the best of our knowledge, Andes is the first to define and optimize QoE of text streaming services.

Video Streaming and QoE. The concept of text streaming draws inspiration from video streaming but encounters unique challenges and has a different QoE definition. While video streaming services are primarily limited by network bandwidth and latency [7], text streaming services are mainly constrained by computational resources [48]. Additionally, the QoE in video streaming is often measured by metrics like buffering ratio, resolution stability, and playback smoothness [7], while the QoE in text streaming primarily considers the token delivery timelines (TDT).", |
| "additional_info": [ |
| { |
| "url": "http://arxiv.org/abs/2404.12957v1", |
| "title": "Towards Reliable Latent Knowledge Estimation in LLMs: In-Context Learning vs. Prompting Based Factual Knowledge Extraction", |
| "abstract": "We propose an approach for estimating the latent knowledge embedded inside\nlarge language models (LLMs). We leverage the in-context learning (ICL)\nabilities of LLMs to estimate the extent to which an LLM knows the facts stored\nin a knowledge base. Our knowledge estimator avoids reliability concerns with\nprevious prompting-based methods, is both conceptually simpler and easier to\napply, and we demonstrate that it can surface more of the latent knowledge\nembedded in LLMs. We also investigate how different design choices affect the\nperformance of ICL-based knowledge estimation. Using the proposed estimator, we\nperform a large-scale evaluation of the factual knowledge of a variety of open\nsource LLMs, like OPT, Pythia, Llama(2), Mistral, Gemma, etc. over a large set\nof relations and facts from the Wikidata knowledge base. We observe differences\nin the factual knowledge between different model families and models of\ndifferent sizes, that some relations are consistently better known than others\nbut that models differ in the precise facts they know, and differences in the\nknowledge of base models and their finetuned counterparts.", |
| "authors": "Qinyuan Wu, Mohammad Aflah Khan, Soumi Das, Vedant Nanda, Bishwamittra Ghosh, Camila Kolling, Till Speicher, Laurent Bindschaedler, Krishna P. Gummadi, Evimaria Terzi", |
| "published": "2024-04-19", |
| "updated": "2024-04-19", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "We propose an approach for estimating the latent knowledge embedded inside\nlarge language models (LLMs). We leverage the in-context learning (ICL)\nabilities of LLMs to estimate the extent to which an LLM knows the facts stored\nin a knowledge base. Our knowledge estimator avoids reliability concerns with\nprevious prompting-based methods, is both conceptually simpler and easier to\napply, and we demonstrate that it can surface more of the latent knowledge\nembedded in LLMs. We also investigate how different design choices affect the\nperformance of ICL-based knowledge estimation. Using the proposed estimator, we\nperform a large-scale evaluation of the factual knowledge of a variety of open\nsource LLMs, like OPT, Pythia, Llama(2), Mistral, Gemma, etc. over a large set\nof relations and facts from the Wikidata knowledge base. We observe differences\nin the factual knowledge between different model families and models of\ndifferent sizes, that some relations are consistently better known than others\nbut that models differ in the precise facts they know, and differences in the\nknowledge of base models and their finetuned counterparts.", |
| "main_content": "Introduction Conversational chatbots (e.g., OpenAI\u2019s ChatGPT) built around large language models (e.g., OpenAI\u2019s GPT) are increasingly being used for a variety of information retrieval tasks such as searching for information or seeking recommendations related to real world entities like people or places (Wu et al., 2023; Zhu et al., 2023). A worrisome concern in such scenarios is the factual correctness of information generated by the LLMs (Peng et al., 2023; Hu et al., 2023a; Snyder et al., 2023; Yao et al., 2023; Ji et al., 2023; Zhang et al., 2023; Wang et al., 2023). The latent knowledge estimation problem: To avoid making false assertions about a real-world entity, an LLM first needs to have factual (true) knowledge about the entity. Given a prompt like \u201cEinstein was born in the year\u201d, LLMs may generate both the correct answer (\u201c1879\u201d) and wrong answers (e.g., \u201c1878\u201d or \u201c1880\u201d) with some probabilities. If an LLM knows the fact, one can hope that the probability with which it would generate the correct answer would be much higher than the wrong answers (Jiang et al., 2021). As LLMs are typically pretrained over a Web corpus (including Wikipedia data) with millions of facts about realworld entities, they have the opportunity to learn factual knowledge about our world and latently embed the knowledge in their parameters. But, how can we estimate the extent to which LLMs have knowledge of real-world facts? Reliability of latent knowledge estimates: Prior works (Jiang et al., 2020; Bouraoui et al., 2020) followed (Petroni et al., 2019), and represented factual knowledge in the form of triplets \u27e8x, r, y\u27e9, where the subject x has a relation of type r with the object y (e.g., \u27e8Einstein, birth-year, 1879\u27e9). The central challenge of latent knowledge estimation is to infer y given x and r by only using information extracted from the LLM. Typically, the inference relies on probing the LLM with prompts constructed using x and r and analyzing the responses. Current approaches have few well-defined rules to avoid prompt engineering and prompt hacking, raising serious concerns about the reliability of their estimates. Against this background, in this paper, we make four primary contributions: 1. A simple yet reliable latent knowledge estimator (LKE) leveraging in-context learning (ICL): We propose a latent knowledge estimator (LKE) that leverages in-context learning (ICL), called ICLKE, in a simple yet clever way to avoid the many reliability concerns with prompting based previous knowledge estimators. 2. Exploring the nuances of using ICL for knowledge estimation: We investigate the impact of dif1 arXiv:2404.12957v1 [cs.CL] 19 Apr 2024 \fferent ICL design choices on the estimation of latent knowledge, such as the number of in-context examples, when some of the examples are unknown to the model or simply incorrect, as well as the sequence in which they appear. While we focus on knowledge estimation, our findings can inform the application of ICL in other contexts. 3. A comparison of IC-LKE with previous approaches: We empirically demonstrate that IC-LKE outperforms previous knowledge estimation approaches that rely on human-generated or machine-mined prompts across a variety of different open-source models and different types of factual relations. In contrast to prompting based methods, which are relation-specific and LLM-specific, IC-LKE\u2019s design is straightforward to apply. 4. 
A systematic comparison of latent knowledge of open source LLMs at scale: We use IC-LKE to evaluate the knowledge of 49 open-source LLMs spanning many families such as Llama(2), Gemma, Mistral, OPT, Pythia, etc. across a wide range of sizes, both with and without instruction-finetuning over 50 different relations and 20,000 facts from Wikidata. We find that models from some families such as Llama2, Mistral and Gemma and larger models know more facts than others, that models within the same family differ in the specific facts they know, despite being trained on the same data, and that fine-tuning reduces the amount of factual knowledge that can be extracted from the models. Related Work: Researchers have proposed several approaches to estimate latent knowledge from LLMs, which can be categorized into two ways: (i) Model-internals based approaches leverage the LLM attention map (Wang et al., 2020), activation function (Burns et al., 2022), or model parameters (Kazemnejad et al., 2023) to decide whether factual information can be extracted from the LLM. In our study, we rely on the probability distribution of generated tokens in an LLM \u2013 thereby our method belongs to the model-responses based approach. (ii) Model-responses based approaches \u2013 generally applicable to a wide range of LLM models \u2013 often propose different prompting techniques to nudge the LLM to validate whether a target fact is stored in it (Chern et al., 2023; Sun et al., 2023; Wang et al., 2020; Petroni et al., 2019; Jiang et al., 2021; Newman et al., 2022; Jiang et al., 2020). Prompt-based methods differ subtly by the choice of prompts and evaluation criteria. Besides, the prompts are often brittle (Zamfirescu-Pereira et al., 2023; Arora et al., 2023; Sclar et al., 2023) \u2013 their success depends on the hypothesis that the LLM indeed understands the prompts. In our study, we instead seek a minimal understanding of prompts by an LLM and design a knowledge estimation method based on the in-context learning. As a test bed (Elsahar et al., 2018; Hu et al., 2023b; Sun et al., 2023; Petroni et al., 2019; Zhu and Li, 2023; Kry\u00b4 sci\u00b4 nski et al., 2019), we consider facts from existing knowledge graphs for performing knowledge estimation of LLMs. 2 Designing Reliable LKEs Today, there exist many general-purpose as well as domain-specific factual knowledge bases that contain a very large number (millions to billions) of facts. The facts can be encapsulated as triplets, represented as \u27e8subject (x), relation (r), object (y)\u27e9. These triplets offer a general way to represent factual knowledge about real-world entities in knowledge graphs or other structured knowledge bases. The goal of latent knowledge estimation is to infer what fraction of the facts are known to a LLM. We call methods that estimate the amount of latent knowledge inside an LLM latent knowledge estimators (LKEs). 2.1 Reliability concerns with existing LKEs Existing approaches to estimating latent knowledge in LLMs use a variety of factual knowledge tests. Below, we identify several reliability concerns with current designs that motivate our new LKE design. 1. LLM-specific restrictions on test topics: Many prior works (Petroni et al., 2019; Jiang et al., 2020) limit the choice of facts that can be used in tests to those where the surface form of the objects (y) is represented by a single token by the LLM\u2019s tokenizer. 
As different LLMs use different tokenizers, this limitation prevents us from comparing the latent knowledge across different LLMs. Furthermore, only popular objects tend to be represented by a single token and so the resulting estimates are not representative of the LLM\u2019s knowledge of facts with multi-token object representations. 2. Unrestricted choice of test prompts: Many past works have attempted to use test prompts without any restrictions, including both humangenerated or machine-mined prompts (Jiang et al., 2020; Zamfirescu-Pereira et al., 2023; Arora et al., 2023; Sclar et al., 2023). They typically intersperse the subject x and object y between additional relationship context-communicating tokens. Some 2 \fanalyze the performance of a variety of prompts and then pick the best-performing or use an ensemble of the best-performing prompts (Jiang et al., 2020; Newman et al., 2022; Fernando et al., 2023). However, these approaches raise two important concerns: First, the generated prompts, particularly those that are machine-mined, may include tokens that can implicitly or explicitly introduce additional (side-channel) information that makes it easier to answer the question. As a specific example, in a prior work (Jiang et al., 2020), for the relation \u201cposition held\", the prompt \u201cx has the position of y\" performed worse than \u201cx is elected y\". But, note that the second prompt potentially introduces a side-channel: it implicitly rules out answer choices for unelected positions like Professor and favors elected positions like President. Second, selecting from an unbounded number of potential prompt choices raises concerns about the complexity of LKEs (the size of the set of all considered prompts) and the potential for over-fitting, which in turn brings the reliability of estimates into question. 3. Reliance on LLMs\u2019 meta-linguistic judgments: Prior works used prompts (Chern et al., 2023; Sun et al., 2023; Wang et al., 2020; Petroni et al., 2019; Jiang et al., 2021; Newman et al., 2022; Jiang et al., 2020) for communicating the question as well as the expected format of answers. But, the scores (estimates) resulting from such prompt-based testing conflate an LLM\u2019s latent knowledge of the facts with the LLM\u2019s meta-linguistic judgments, i.e., the LLM\u2019s ability to comprehend the prompt, understand the question embedded within the prompt and output the answer in some expected format (Hu and Levy, 2023). The impact on meta-linguistic judgments can be seen from the fact that multiple semantically-equivalent prompts result in different responses from an LLM and thereby, different estimates of latent knowledge (Hu and Levy, 2023). Motivated from the above, we derive the following three design principles for LKEs. A reliable LKE design should: \u2022 DP1: generate estimates for any factual topic and tokenization scheme. \u2022 DP2: limit arbitrary prompt engineering to minimize over-fitting & side-channels. \u2022 DP3: minimize reliance on meta-linguistic prompts. 2.2 A new In Context learning based LKE (IC-LKE) Our goal is to estimate whether an LLM knows a fact f = \u27e8x, r, y\u27e9. The challenge is to probe the LLM and evaluate its responses in a way compatible with the design principles set in Section 2.1. Key idea: Leverage in-context learning. LLMs have shown to exhibit In-Context Learning (ICL) abilities (Brown et al., 2020) that allow them to infer and extrapolate patterns in their inputs. 
We leverage this ability to communicate information about relation r without additional instructions to the LLM (DP3) by providing it with a list of facts based on r.

Example 1. Assume that we want to probe for whether an LLM knows the fact ⟨Einstein, birth-year, 1879⟩. We can use other facts for the birth-year relation, such as ⟨Feynman, birth-year, 1918⟩ and ⟨Heisenberg, birth-year, 1901⟩, to construct an input "Feynman 1918 Heisenberg 1901 Einstein". By providing in-context examples to the model, we communicate the relation between subjects and objects. To correctly extrapolate the pattern, the model needs to retrieve Einstein's birth year as the completion of the sequence.

More formally, given a training dataset of facts $\mathcal{F}_r = \{\langle x_i, r, y_i \rangle\}_{i=1}^{n}$ for relation r, as well as a test fact $f = \langle x, r, y \rangle$, we leverage ICL to construct prompts that elicit information about f as

$\sigma(x, r) = x_1\, y_1\, \ldots\, x_n\, y_n\, x$.  (1)

We use r to pick facts from $\mathcal{F}_r$ and concatenate the tokens corresponding to the subjects and objects, but do not include any other information about r (DP2). We use the space character " " as the separator token and discuss this choice in more detail in Section 4.1. We discuss other design choices for IC-LKE construction in Section 3. When further details are not needed, we simply refer to some input as $\sigma$.

Evaluating model outputs. We evaluate the output of model $\theta$ for input $\sigma(x, r)$ based on the probabilities $\theta$ assigns to the tokens of the corresponding object y. To allow for objects y consisting of multiple tokens and to be independent of the specific tokenization scheme (DP1), we compute the object probability over multiple tokens as follows:

$P_\theta(y \mid \sigma) = \prod_{i=2}^{|y|} P_\theta\big(y^{(i)} \mid y^{[i-1:1]}\, \sigma\big) \cdot P_\theta\big(y^{(1)} \mid \sigma\big)$,  (2)

where $|y|$ denotes the number of tokens in y and $P_\theta\big(y^{(i)} \mid y^{[i-1:1]}\, \sigma\big)$ is the conditional probability of predicting the i-th token $y^{(i)}$ of y given the preceding tokens $y^{(i-1)}, \ldots, y^{(1)}$ and $\sigma$.

Multiple-choice testing. To determine whether model $\theta$ knows a fact $f = \langle x, r, y^* \rangle$, we test whether, given input $\sigma(x, r)$, $\theta$ can choose the correct object $y^*$ from among a set of M unique alternatives. Specifically, given fact f, we derive a test instance called a choice $c = \langle x, r, y^*, \mathcal{Y} \rangle$, where $\mathcal{Y}$ is a set of M plausible but incorrect alternatives. We discuss the choice of $\mathcal{Y}$ in Section 4. Then

$\mathrm{pred}_\theta(c) \triangleq \arg\max_{y \in \{y^*\} \cup \mathcal{Y}} P_\theta(y \mid \sigma(x, r))$  (3)

denotes the prediction of $\theta$ for choice $c = \langle x, r, y^*, \mathcal{Y} \rangle$. The predicted object has the maximal object probability within $\{y^*\} \cup \mathcal{Y}$.

Evaluation Metric. We evaluate the factual knowledge of model $\theta$ over a dataset of choices $\mathcal{D} = \{c_i\}_{i=1}^{n}$ using multiple-choice accuracy:

$\mathrm{acc}(\theta, \mathcal{D}) \triangleq \frac{\sum_{c \in \mathcal{D}} \delta\big(y^* = \mathrm{pred}_\theta(c)\big)}{|\mathcal{D}|}$,  (4)

where $\delta(\cdot)$ is the indicator function.

The IC-LKE design satisfies the knowledge estimation design principles. The IC-LKE design proposed here satisfies the design principles from Section 2.1, since
• DP1: its relative probability comparisons between different answer options make it applicable to arbitrary types of facts.
• DP2: it uses the same, minimal prompt design based on ICL across all relations.
• DP3: its only requirement is that the LLM is able to use ICL; no further assumptions about any meta-linguistic abilities are made.
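To make Eqs. (1)–(4) concrete, the following is a minimal illustrative sketch, not the paper's implementation, of IC-LKE prompt construction and multiple-choice scoring with Hugging Face transformers. The model name ("gpt2"), the separate tokenization of prompt and object, and all helper names are assumptions for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model; any causal LM with a tokenizer works the same way.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def object_log_prob(prompt: str, obj: str) -> float:
    """Log P_theta(y | sigma): sum of the object's token log-probs given the prompt (Eq. 2)."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    # Leading space mirrors the space separator; tokenizing prompt and object
    # separately is a simplification of scoring the concatenated string.
    obj_ids = tok(" " + obj, return_tensors="pt").input_ids
    ids = torch.cat([prompt_ids, obj_ids], dim=1)
    with torch.no_grad():
        log_probs = model(ids).logits.log_softmax(dim=-1)
    n = obj_ids.shape[1]
    # Logits at position t predict token t+1, so score only the object positions.
    token_scores = log_probs[0, -n - 1:-1, :].gather(1, obj_ids[0].unsqueeze(1))
    return token_scores.sum().item()

def ic_lke_prompt(train_facts, subject):
    """sigma(x, r) of Eq. 1: space-joined (subject, object) pairs followed by the test subject."""
    demo = " ".join(f"{x} {y}" for x, y in train_facts)
    return f"{demo} {subject}"

def predict(train_facts, subject, candidates):
    """Eq. 3: return the candidate object with the highest probability under the model."""
    prompt = ic_lke_prompt(train_facts, subject)
    return max(candidates, key=lambda y: object_log_prob(prompt, y))

# Toy usage in the spirit of Example 1 (illustrative facts only).
facts = [("Feynman", "1918"), ("Heisenberg", "1901")]
print(predict(facts, "Einstein", ["1879", "1878", "1880"]))
```

Multiple-choice accuracy (Eq. 4) then follows as the fraction of test choices for which predict returns the correct object.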
3 Exploring the design space of IC-LKE

By design, IC-LKE avoids many limitations of prior works. However, IC-LKE introduces a few design choices for the input, i.e., σ(x, r) in Equation (1). One must decide the right n, the number of in-context examples included in σ(x, r). Further, it is unclear how IC-LKE would be impacted when some of the chosen examples are unknown to the model or are incorrect. We study both of these factors in detail by varying n and introducing unknown or incorrect examples within these n examples. These experiments allow us to better understand the number of in-context examples needed and how robust IC-LKE is to several types of noise in these in-context examples. We perform an in-depth empirical analysis on a Nobel Laureate dataset for the relation 'birth year' (details in A.1). The dataset consists of facts formatted as ⟨Person(x), birth-year(r), YYYY(y)⟩.

Figure 1: [Influence of the number of in-context examples] We examine how varying numbers of in-context examples influence the accuracy (calculated as defined in Eq. 5) across different LLMs. The vertical dashed line indicates the number of examples at which the models achieve 95% of their respective stable accuracy at 50 examples.

More knowledgeable models need fewer in-context examples, but a small number suffices for most models. In Figure 1, we report knowledge estimation accuracy (Eq. (5)) for different LLMs evaluated on 900 test samples, with varying numbers of in-context examples (n) sampled randomly from the training set using five random seeds. With an increasing number of in-context examples, the mean accuracy increases while the standard deviation decreases across different LLMs, i.e., the models gradually converge to a stable performance. Using dashed vertical lines, we report the minimum number of examples required by different LLMs to achieve 95% of their accuracy at 50 in-context examples. Interestingly, LLMs with higher estimation accuracy tend to require fewer in-context examples than those with lower accuracy. A potential explanation for this behavior is that, in order to infer the relation r, models need to comprehend the examples presented in the prompt. Therefore, less knowledgeable models need to see more examples in order to infer r. To further investigate which individual facts may be known or unknown to a model, we look at the generation probability of in-context objects in 200 correct subject (x)-object (y) pairs using the Mistral-7B model, as shown in Figure 2a. Similar results for additional models are presented in Appendix E. Note that here we are only looking at the probabilities of the object (y) for in-context examples given the previous x y pairs in the input, to understand which of these samples are known by the LLM.
The Mistral-7B model demonstrates a gradual increase in probability for generating correct objects as we go from left to right on the x-axis in Figure 2a (note that for a point on the x-axis, points before it are in context, so points on the right have more context to leverage), stabilizing at a mean probability of approximately 85%. We also see that some objects at later positions have a lower generation probability. This suggests that the LLM may be less confident about its knowledge of the facts corresponding to them. We can leverage the token generation probability as a signal of the LLM's confidence when evaluating LKEs (see Appendix D).

Figure 2: [Variation in object probabilities of Nobel laureate data using Mistral-7B] Panels: (a) (subject, object) examples in a prompt; (b) distributed unknown examples; (c) continuous unknown examples; (d) distributed incorrect examples; (e) continuous incorrect examples. Figure 2a illustrates the probability of each object at various positions in the prompt. We show the impact on probabilities after replacing objects with unknown ones at randomly distributed positions in Figure 2b and at continuous positions in Figure 2c. Similarly, we show the impact of incorrect examples at randomly distributed positions (Figure 2d) and at continuous positions (Figure 2e). In all plots, the horizontal dashed line shows the average probability of the correct examples (blue dots).

Models are robust to unknown examples. Next, we investigate the robustness of estimates to the occurrence of unknown examples. We insert unknown examples in two distinct ways: one where we randomly distribute the unknown examples throughout σ(x, r), and another, more extreme scenario where we replace a continuous block of examples with unknown ones. We chose 40 out of the 200 examples and replaced them with unknown examples created using fictitious names and birth years (generated via https://en.namefake.com/api). Our findings are shown in Figures 2b and 2c for random and continuous replacement, respectively. Unknown examples are marked by red dots, examples immediately following unknown ones by cyan dots, and the rest by blue dots. The unknown examples show generation probabilities close to zero, confirming the LLM's tendency to assign low probabilities to unknown data. However, interestingly, unknown examples minimally impact the surrounding data in both settings.

Models are vulnerable to incorrect examples. We investigate the impact of including incorrect examples in σ(x, r). Similar to the setup for unknown examples, we also insert 40 (out of 200) incorrect examples, either randomly distributed (Figure 2d) or as a continuous block (Figure 2e). In our experiments, these incorrect examples are created by altering the birth years of known Nobel laureates and are marked by red dots in the plots. In contrast to inserting unknown examples, the LLM significantly struggles with incorrect examples. Injection of such examples detrimentally affects the LLM's performance in both settings. We highlight one randomly marked yellow-star example in Figure 2a, Figure 2b, and Figure 2d to show how the presence of incorrect samples brings down the probability of surrounding points.
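The per-position analysis behind Figure 2 can be reproduced in the same spirit. The sketch below is an illustrative reconstruction (not the authors' code): the scoring helper is passed in as a parameter, e.g., the object_log_prob function from the earlier sketch, and all names are assumptions.

```python
import math

def positionwise_object_probs(pairs, log_prob_fn):
    """For each (subject, object) pair, compute P(object | all preceding pairs + subject).

    pairs: list of (subject, object) strings in prompt order.
    log_prob_fn(prompt, obj): returns log P(obj | prompt) under the model.
    """
    probs = []
    for k, (x, y) in enumerate(pairs):
        prefix = " ".join(f"{a} {b}" for a, b in pairs[:k])
        prompt = f"{prefix} {x}".strip()
        probs.append(math.exp(log_prob_fn(prompt, y)))
    return probs
    # A single forward pass over the full prompt with teacher forcing would yield
    # the same per-position probabilities more efficiently.

# Perturbations like those studied above can be simulated by editing `pairs` before scoring,
# e.g., replacing a contiguous block of objects with incorrect values (offsets illustrative):
# corrupted = pairs[:80] + [(x, "1900") for x, _ in pairs[80:120]] + pairs[120:]
```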
Summary: LLMs can identify the relation pattern of subject-object pairs even with a small set of in-context examples in the prompt. LLMs are relatively robust to unknown examples, but their ability to recollect factual knowledge is vulnerable to incorrect examples, particularly when they appear in a continuous sequence. Our findings point to the effectiveness of designing an IC-LKE in which we carefully place correct examples from a training dataset and then estimate the latent knowledge of the LLM on examples from the test set. Furthermore, the findings also motivate us to design a more efficient in-context-learning-based LKE, called EIC-LKE, that can process multiple test examples simultaneously in a single prompt, where training examples are placed preceding each test example; see Appendix F for more details.

4 Experiments and Results

We present the empirical findings of IC-LKE (as well as the efficient version, EIC-LKE) on the knowledge-estimation task on 49 open-source (pre-trained and fine-tuned) LLMs across different LLM families and sizes. We list the models and their simplified names used in this paper in Appendix 6, Table 6, and provide a leaderboard of models based on IC-LKE in Table 7.

Figure 3: [Performance comparison for different latent knowledge extractors] We compare the accuracy of IC-LKE and EIC-LKE with the baseline method (Jiang et al., 2020) across 12 relations from T-REx-MC.

Dataset: We evaluate the knowledge of models on a large set of facts from the T-REx dataset (Elsahar et al., 2018; https://huggingface.co/datasets/relbert/t_rex). We selected relations from T-REx with at least 500 samples and linked to a minimum of 100 unique objects. This filtering leads to 50 distinct relations spanning categories like birth dates, directorial roles, parental relationships, and educational lineage. The resulting T-REx Multiple Choice (T-REx-MC) dataset comprises 5,000 training and 20,000 test facts. Appendix A contains detailed information on the dataset and relations.

Choosing the set Y & its impact on test difficulty: For each fact ⟨subject (x), relation (r), object (y*)⟩, we generate alternative objects Y to create multiple choices. Note that the alternative objects in Y are viable choices and cannot be easily eliminated. Therefore, for each fact ⟨x, r, y*⟩ we select y ∈ Y from other facts in the dataset that share the same relationship r. For computational feasibility, we sample |Y| = 99 alternative objects per fact, so that a random guess between {y*} ∪ Y has a 0.01 probability of being correct.

4.1 IC-LKE vs. prompt-based approaches

We compare the performance of IC-LKE and EIC-LKE with the existing prompt-based approaches (Jiang et al., 2020) and report two key takeaways.

IC-LKE outperforms prompt-based approaches. We randomly sample three human-generated prompts (HGP) and machine-mined prompts (MMP) from (Jiang et al., 2020) for the 12 relations common to T-REx-MC and (Jiang et al., 2020). The HGPs and MMPs for all relations are in Appendix G.
In Figure 3, IC-LKE and EIC-LKE outperform HGP and MMP in terms of higher mean accuracy across different models and 12 relations. Also, IC-LKE and EIC-LKE have a lower standard deviation than HGP and MMP, indicating higher consistency of IC-LKE and EIC-LKE on knowledge estimation tasks. In Appendix H.2, we report relation-specific results, where IC-LKE and EIC-LKE estimate higher factual knowledge than the existing works on most relations, thereby demonstrating the superiority of IC-LKE and EIC-LKE over existing methods.

IC-LKE is a flexible and effective knowledge estimator. We adapt IC-LKE by replacing the separator '[space]' with three separators each from HGP and MMP for the relation 'original broadcaster' and report estimation accuracy in Figure 4. We observe that the '[space]' token performs on par with semantically meaningful prompts via HGP and MMP. Therefore, adding relation-specific separators has a limited impact on factual knowledge estimation, as long as the subject-object pairs are correctly presented. Furthermore, finding relation-specific prompts often requires hand-crafted effort, in contrast to an automatic in-context-based approach like ours where (subject, object) pairs are used. Therefore, IC-LKE can potentially extend to any facts from knowledge graphs over any LLM, while HGP and MMP require additional supervision and relation-specific validation.

Figure 4: [Influence of different separators] We replace the '[space]' token separating the subject-object pairs with human-generated prompts (HGP, red background) and machine-mined prompts (MMP, blue background) for the relation 'original broadcaster'. Accuracy performance is agnostic to the separators.

4.2 Evaluating Diverse Models and Relations

We investigate the performance of 35 pre-trained LLMs and 14 fine-tuned LLMs across 50 relations using the IC-LKE framework. Our analysis is designed to uncover nuanced insights into the knowledge levels and structures within these models. We will examine the results through two primary lenses: (1) the variations in knowledge across different model families, and (2) the influence of model size and fine-tuning within the same model family on their knowledge attributes.

Figure 5: [Accuracy for 35 pre-trained LLMs on the 50 different relations in T-REx-MC] Models are grouped by family and arranged from left to right based on the accuracy of the model closest to 7 billion parameters. Within each family, models are ordered by their average accuracy.
Figure 6: [Pearson correlation coefficients between model families] We compute the Pearson correlation coefficients between each pair of models and then compute the average correlation across the same model family.

4.2.1 Comparing different LLM families

Some model families are consistently more knowledgeable than the rest. We sort the model families based on the performance of the model closest to 7B parameters (7B parameters is a good reference point since all model families except GPT-NEOX have models within a gap of ≤1B parameters: Mistral-7B, Gemma-7B, Llama-7B, Falcon-7B, MPT-7B, OPT-6.7B, GPT-J-6B, Pythia-6.9B, and Bloom-7.1B), and the models within each family based on average accuracy across the 50 relations. Figure 5 shows that the Mistral, Llama2, Gemma, and Llama families have higher performance on most of the relations than Pythia, Bloom, and OPT, indicating that the latter families have lower factual knowledge.

Different model families align in their relative factual knowledge. We investigate the correlations between each model pair's performance over the 50 relations to assess the agreement in their knowledge levels of the 50 relations. We compute the average correlations within each model family (e.g., Llama2 7B, 13B, 70B) in Figure 6. Despite differences in architecture and training datasets among model families, there is a significant consensus (correlation > 0.6, see Figure 14) regarding the hierarchy of knowledge across various relations. We also compile the three best- and worst-performing relations for each model in Table 9, illustrating the consensus among all models.

4.2.2 Comparing within the same LLM family

Larger models embed more knowledge. We show in Figure 5 that, within each model family, bigger models (e.g., Llama-65B) generally outperform their smaller counterparts (e.g., Llama-13B) in terms of accuracy, with an exception in the OPT family. Models within the same family are typically pre-trained on the same datasets (Biderman et al., 2023; Zhang et al., 2022; Touvron et al., 2023). Thus, this observation suggests that, when trained on identical datasets, the larger models capture a broader set of facts.

Despite being trained on the same data, models might remember different facts. From these results, however, it is not clear whether the larger models subsume the smaller models in their factual knowledge, i.e., are the larger models also correct on the facts that the smaller models are correct on? To assess this, we compute the subsumption rate

$\eta(\theta_1 \mid \theta_2, \mathcal{F}) = \frac{|\phi(\theta_1, \mathcal{F}) \cap \phi(\theta_2, \mathcal{F})|}{|\phi(\theta_1, \mathcal{F})|}$,

where $\phi(\theta, \mathcal{F})$ denotes the set of facts in $\mathcal{F}$ that model $\theta$ answers correctly, i.e., $\eta$ is the fraction of facts from $\mathcal{F}$ known by the smaller model $\theta_1$ that the larger model $\theta_2$ also knows.
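As a concrete illustration (the function and variable names are ours, not the paper's), $\eta$ reduces to a simple set ratio once each model's set of correctly answered facts is known:

```python
def subsumption_rate(known_small: set, known_large: set) -> float:
    """eta(theta1 | theta2, F): fraction of the facts the smaller model answers correctly
    that the larger model also answers correctly."""
    if not known_small:
        return 0.0  # the ratio is undefined when the smaller model knows nothing; 0.0 by convention here
    return len(known_small & known_large) / len(known_small)

# Illustrative fact identifiers:
# subsumption_rate({"f1", "f2", "f3"}, {"f2", "f3", "f9"})  -> 0.67
```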
A subsumption rate of ~1 indicates that all of the smaller model's knowledge is also contained in the larger model. To ensure a meaningful comparison across scales, we only consider models that were pre-trained using the same training data. Table 1 shows the average subsumption rate (η) between the largest and smallest models in a family, as well as the average accuracy, over all relations for different model families. Interestingly, η is relatively low (<0.5) for OPT, Pythia, and Bloom (i.e., the larger models know less than 50% of what the smaller models know) and reaches only up to 0.8 for Gemma, Llama, and Llama-2. Therefore, even though models within each family are trained on the same datasets and generally agree on the relative knowledge of different relations (Figure 6), there are differences in the knowledge of specific facts they retain from their training data.

Table 1: Average subsumption rate (η) for different model families over the relations in T-REx-MC. Despite being trained on the same datasets, models of different sizes differ in the specific facts that they know (low η).
Family | Smallest model (#params) | Accuracy | Largest model (#params) | Accuracy | η
Llama | 7B | 0.699 | 65B | 0.836 | 0.769
Llama-2 | 7B | 0.741 | 70B | 0.846 | 0.801
Gemma | 2B | 0.666 | 7B | 0.750 | 0.710
OPT | 125M | 0.430 | 30B | 0.588 | 0.481
Pythia | 70M | 0.334 | 12B | 0.648 | 0.403
Bloom | 560M | 0.410 | 7.1B | 0.548 | 0.498

Figure 7: [Accuracy of base vs. chat-finetuned models] We see that finetuned versions (in lighter shades) obtain lower accuracy across the relations in T-REx-MC than pre-trained models (in darker shades).

Fine-tuning reduces latent knowledge. Finally, we investigate the effects of chat-based fine-tuning on the factual knowledge of models. Base language models are often fine-tuned (using a mix of supervised and reinforcement learning (Ouyang et al., 2022)) to make them better at following instructions. While prior works have shown that this makes the models better at various benchmarks, it is unclear how such fine-tuning affects latent knowledge. Figure 7 illustrates the comparative accuracy of pre-trained models and their fine-tuned counterparts. In almost all cases, the fine-tuned models obtain lower accuracy than their base versions. This suggests that fine-tuning reduces the amount of extractable latent knowledge in the models. A similar observation was also made by Yu et al. (2024). We observe a similar trend using EIC-LKE in Appendix H.6, Figure 15. Additional results on evaluating generated outputs (using 50 tokens) in Figure 16 reveal the same pattern. To further assess whether the fine-tuned models are acquiring new knowledge, we compute the subsumption rate between pre-trained and fine-tuned versions (Table 10). We find that most of the latent knowledge in fine-tuned models is already present in the base models (high η), thus indicating that fine-tuned models may not be obtaining additional knowledge.

5 Concluding Discussion

In this work, we investigate a new way to estimate latent factual knowledge from an LLM. Unlike prior approaches that use prompting, our method relies on in-context learning.
Our method not only addresses many reliability concerns with prompting, but it also recollects (at time significantly) more factual knowledge than prompting. In contrast to prompting, which requires relationship-specific and LLM-specific prompt engineering, our method can be applied with minimal effort to test factual knowledge of relations across a variety of structured knowledge bases and LLMs. This ability enables us to compare the latent knowledge captured by many different families of open-source LLMs; we expect our results to be of interest to designers of these LLMs. Finally, to design our incontext learning based LKE, we explore the impact of the number and ordering of correct, incorrect, and unknown examples used as inputs; our findings may be of independent interest to developing a better understanding of in-context learning. A fundamental question posed by our and prior work on estimating latent knowledge in LLMs: What does it mean for an LLM to know a fact? Suppose we tried to infer if an LLM knows the capital of Germany using the input \"France Paris; Spain Madrid; Germany \" and suppose the answer were Berlin. What we have learnt is that the LLM knows that the relationship r between Germany and Berlin is similar to that between France and Paris or Spain and Madrid. What we have not learned is whether the LLM knows that the relation r is called \"capital\" in English or \"hauptstadt\" in German. The latter is revealed by prompts such as \"The capital of Germany is \". But, such prompts don\u2019t reveal whether the LLM knows that what Berlin means to Germany is similar to what Paris means to France. Is one type of knowing facts better than other? It is difficult to answer in general. Neither type of knowing guarantees that the knowledge can be put to use in different contexts and tasks, such as when we ask the LLM where the parliament of Germany is located. Nevertheless, one clear takeaway from our study is related to how factual knowledge is latently embedded in an LLM. We show that more factual knowledge can be recollected using in-context learning, i.e., the representations of subjects and objects that share the same relationship, than by prompting with the name of their relationship. 8 \f6 Limitations This study contributes to advancing our understanding of latent factual knowledge in LLMs through an innovative in-context learning approach. However, it is essential to acknowledge the inherent limitations of our work. While the use of in-context learning aims to mitigate the influence of prompt engineering and the reliability issues associated with previous prompting methods, it introduces its own biases based on the selection and formulation of in-context examples. We discus these in detail in Section 3. For example, the choice of which examples to include, their order, and their factual accuracy can influence model responses, and thus these in-context examples must be carefully curated for reliable latent knowledge estimation. Additionally, our study\u2019s limitation in testing simple-format facts underlines a critical gap in assessing LLMs\u2019 complex reasoning abilities. The knowledge estimation framework employed predominantly hinges on the LLM\u2019s capacity to correctly recall or recognize factual information from a given set of triplets or structured prompts. 
This narrows the scope of evaluation to straightforward factual recall, thereby overlooking the models\u2019 capability to engage in more sophisticated cognitive processes such as reasoning, synthesis, and inference, which we leave as open avenues for future work." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.15157v1", |
| "title": "FASTTRACK: Fast and Accurate Fact Tracing for LLMs", |
| "abstract": "Fact tracing seeks to identify specific training examples that serve as the\nknowledge source for a given query. Existing approaches to fact tracing rely on\nassessing the similarity between each training sample and the query along a\ncertain dimension, such as lexical similarity, gradient, or embedding space.\nHowever, these methods fall short of effectively distinguishing between samples\nthat are merely relevant and those that actually provide supportive evidence\nfor the information sought by the query. This limitation often results in\nsuboptimal effectiveness. Moreover, these approaches necessitate the\nexamination of the similarity of individual training points for each query,\nimposing significant computational demands and creating a substantial barrier\nfor practical applications. This paper introduces FASTTRACK, a novel approach\nthat harnesses the capabilities of Large Language Models (LLMs) to validate\nsupportive evidence for queries and at the same time clusters the training\ndatabase towards a reduced extent for LLMs to trace facts. Our experiments show\nthat FASTTRACK substantially outperforms existing methods in both accuracy and\nefficiency, achieving more than 100\\% improvement in F1 score over the\nstate-of-the-art methods while being X33 faster than \\texttt{TracIn}.", |
| "authors": "Si Chen, Feiyang Kang, Ning Yu, Ruoxi Jia", |
| "published": "2024-04-22", |
| "updated": "2024-04-22", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.AI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Fact tracing seeks to identify specific training examples that serve as the\nknowledge source for a given query. Existing approaches to fact tracing rely on\nassessing the similarity between each training sample and the query along a\ncertain dimension, such as lexical similarity, gradient, or embedding space.\nHowever, these methods fall short of effectively distinguishing between samples\nthat are merely relevant and those that actually provide supportive evidence\nfor the information sought by the query. This limitation often results in\nsuboptimal effectiveness. Moreover, these approaches necessitate the\nexamination of the similarity of individual training points for each query,\nimposing significant computational demands and creating a substantial barrier\nfor practical applications. This paper introduces FASTTRACK, a novel approach\nthat harnesses the capabilities of Large Language Models (LLMs) to validate\nsupportive evidence for queries and at the same time clusters the training\ndatabase towards a reduced extent for LLMs to trace facts. Our experiments show\nthat FASTTRACK substantially outperforms existing methods in both accuracy and\nefficiency, achieving more than 100\\% improvement in F1 score over the\nstate-of-the-art methods while being X33 faster than \\texttt{TracIn}.", |
| "main_content": "Introduction Recent years have witnessed large language models (LLMs) demonstrating remarkable abilities in absorbing vast knowledge from extensive text corpora, yielding impressive advancements in NLP tasks such as question answering (QA). However, these models often produce seemingly coherent yet unfounded outputs, known as \u2018hallucinations\u2019 (Agrawal et al., 2023), posing risks in high-stake scenarios such as healthcare and finance, where reliability is of paramount importance (Master of Code, 2023). This critical challenge has motivated research on fact tracing (Aky\u00fcrek et al., 2022), which aims to identify the training data that serves as the knowledge source for LLMs\u2019 generation. Striving to provide a pathway to understanding and mitigating the issue of hallucination, Aky\u00fcrek et al. (2022) proposed a benchmark for fact tracing, formulating it as a challenging task that involves searching for training data that has fact-support correspondence (i.e., supportiveness) with given queries. Current methods, however, tend to miss the mark and overly rely on similarity measures between individual training samples and the target query, such as gradient similarity (Pruthi et al., 2020; Koh and Liang, 2017), embedding similarity (Rajani et al., 2020), or lexical similarity (Robertson et al., 1995; Lv and Zhai, 2011). As a natural result, these approaches may fail to differentiate between samples that merely look similar and those that actually contain the supporting information sought by the query\u2013even in considerably simple cases. This prominent issue limits their effectiveness in identifying supportive training examples, preventing them from being effective in broader use cases (Aky\u00fcrek et al., 2022). Besides, some of these methods, such as Pruthi et al. (2020); Koh and Liang (2017), carry a significant computational overhead in analyzing a given query. Providing intellectual inspiration for research exploration, nonetheless, its computational demand can be unaffordable for most practical scenarios. Despite soaring interest in this emerging problem, current research still falls short of the critical need by a large margin. We summarize the desiderata for fact-tracing methods as the following: \u22c4D-i. Effective and Accurate. For a target query, fact-tracing methods need to identify all supporting facts in the training corpus and achieve both high precision and recall simultaneously. \f\u22c4D-ii. Computationally Tractable. Facttracing methods need to be scalable with both the number of queries and the number of training samples to be examined. \u22c4D-iii. Practically Robust. Fact-tracing prioritizes general-purposed, principled methods that are plausible for deployment and transferable between use cases. Current methods all miss one or more of these principles. Specifically, gradient-similarity-based methods (Pruthi et al., 2020; Koh and Liang, 2017) are notoriously computationally demanding (Dii). Also, gradients are considerably susceptible to noises, rendering their performance rather unstable even with extensive hyper-parameter tuning (Aky\u00fcrek et al., 2022; Park et al., 2023) (D-i, D-iii). Lexical-similarity-based methods (Robertson et al., 1995; Lv and Zhai, 2011) are typically faster, but relying on queries and samples with supporting facts being similarly phrased. This assumption is not necessarily true in realistic use cases (D-iii). 
Table 4 shows that the performance for such methods may drop a large margin under slight rephrasing of the text (D-i). Therefore, these methods are neither practical nor reliable (as illustrated in Sec. 5.2). Figure 1: FASTTRACK achieves the best tradeoffs between fact tracing efficacy and efficiency. The x-axis the the computational time of evaluating 100 queries using a 10k corpus, and the y-axis is the tracing performance when using top-k thresholds (if applicable). TDA methods yield consistently low performance across top-k thresholds, making them look like dots in the plot. Determining whether a training example supports a factual statement in a query demands reasoning abilities beyond sample similarities where support for a factual assertion often arises through the inference of connections among related pieces of information. The dilemma with these approaches is that no single representation works in all cases and the similarity in these pre-defined spaces may easily fail to capture the nuance of supportiveness effectively. Inspired by the recent advancement in LLM\u2019s abilities in natural language understanding (NLU), a natural idea is to directly evaluate the supportiveness between each training sample and the target query using an LLM. Unprecedented in-context learning (ICL) capabilities make these models notably versatile and easily adaptable to novel cases with minimal customization, effectively bridging the realistic gap between fact-tracing methods and real-world scenarios. Admittedly, our preliminary investigation shows that this idea indeed enhances the efficacy in the identification of supportive training samples to an impressive extent. Nevertheless, this idea faces immediate challenges when applied to a practical-sized training corpora: traversal evaluation for all training sample-query pairs requires a massive number of queries to the LLM, unaffordable in both computation time and costs, hindering it from being practically useful. To address this dilemma, we propose FASTTRACK, which is a two-stage scheme decomposed into offline and online components. In the first stage, we build semantic indexes for the training corpus through hierachical clustering. Such process is completely offline and only need to be run once. During online stage, these pre-built semantic indexes facilitate the retrieval of relevant clusters for any given query, significantly reducing the search range. FASTTRACK then runs a fine-grained examination by employing a LLM to evaluate the supportiveness of training data in the retrieved clusters. While prior work (Aky\u00fcrek et al., 2022) requires careful selection of small candidate set of size around 500 for practical evaluation, FASTTRACK enables a balance between computational feasibility and fine-grained analysis. This enables it to accommodate large corpus of size 10k or even 100k, while ensuring both satisfactory efficiency and efficacy (high precision and recall). Our contributions are summarized as follows: \u2022 We propose a novel two-stage pipeline FASTTRACK and show it is easily adaptable without needing to train a model. (meets D-iii) \u2022 We evaluate FASTTRACK\u2019s performance on various datasets with baseline methods. FAST\fTRACK achieves notable F1 scores of 0.72 on FTRACE-TREx and 0.91 on VITAMINC, more than doubling the performance of the best existing methods. 
(meets D-i) \u2022 We show FASTTRACK to offer a substantial edge in efficiency, being 33\u00d7 faster than the TDA method TRACIN for a corpus of 10k samples, and readily applicable to larger datasets with more than 100k samples. (meets D-ii) 2 Related Work Training Data Attribution (TDA). TDA aims to trace model predictions back to the training examples that responsible for these predictions, which shares a similar goal with fact tracing. Prior work (Aky\u00fcrek et al., 2022) proposes to use two main types of TDA methods as baselines: gradient-based and embedding-based attributions. Gradient-based methods, such as TRACIN (Pruthi et al., 2020), estimate the attribution score of training data on predictions by calculating the cosine similarity between the gradients of the training data and the query. Embedding-based methods employs the model\u2019s internal representations to determine the relevance of training examples to a given test prediction (Rajani et al., 2019). The attribution score is defined as a cosine product of hidden representations. To retrieve supporting training data for a given query zquery, one need to score every training data and rank them by their influence score. As it could be computationally infeasible for gradient-based TDA scoring all training data in large datasets, Aky\u00fcrek et al. (2022) only evaluates on carelly selected small subsets (i.e., around 500) for each query. This limitation motivates us to design a framework that is both more computationally efficient and more effective. Information Retrieval (IR). IR focuses on retrieving relevant documents in a large collection given specific queries (Izacard et al., 2021). Though not originally designed for fact tracing task, prior work (Aky\u00fcrek et al., 2022) found it effective and outperforms principled TDA methods by a large margin. IR splits into two categories: termfrequency-based methods like BM25(Thakur et al., 2021; Zhou et al., 2022), which score each training data base on the token overlap with the given query, inversely weighted with the frequency of such tokens, and neural network-based methods (Izacard et al., 2021; Ni et al., 2021), which, despite their advanced capabilities, often require extensive manual annotations, making them less suited for fact tracing due to the absence of necessary annotations. Recent attempts to adapt neural methods through zero-shot learning have not matched BM25\u2019s performance (Thakur et al., 2021; Zhou et al., 2022). Therefore, following prior work, we select BM25 as the baseline for fact tracing due to its superior retrieval quality without the need for annotated data. All of the methods above focus on relevance while neglecting the supportiveness of the connection between training data and the query. In this paper, we introduce FASTTRACK, the first supportiveness-aware approach for fact tracing, offering substantial benefits in real scenarios where training data may contain conflicting information. 3 Methodology Fact tracing aims to identify knowledge source of a particular query. While similar to TDA, it focuses more on the fact-support correspondance between training data and query. This distinction is crucial: existing methods often retrieve relevant examples but fail to provide factual support, misaligning with the objective. The strong capability of LLMs such as ChatGPT makes it a perfect solution to provide justification based on \u2018supportiveness\u2019. 
However, directly doing pair-level comparison could be very time-consuming: given a corpus of size N and m queries, the computation complexity is O(mN). In this section, we introduce an original two-stage framework, FASTTRACK, as illustrated in Figure 2. In the first stage, FASTTRACK leverages a recursive clustering scheme to mine the semantic structure of the training corpus, which enables coarse matching for a given query. This significantly narrows the search range, making it feasible to perform a fine-grained examination of each candidate training example in the second stage. 3.1 Semantic Clustering The goal of the first stage is to create semantically meaningful indexes in an offline setting. This one-time process allows these indexes to be reused efficiently in the subsequent online stage, eliminating the need for re-computation. In this paper, we propose to employ a simple hierarchical clustering process over training data embeddings to recover the underlying tree structure of the data. This process reorganizes the entire training corpus into a more structured format, laying the groundwork for more effective data navigation and retrieval.
Figure 2: Illustration of FASTTRACK workflow. Stage 1, which is completely offline, reorganizes the training corpus into a semantic tree for easier navigation; Stage 2 retrieves relevant clusters using fuzzy keyword matching, then employs LLMs to assess candidate samples, retrieving those with a score of 1.
We first apply k-means clustering on the sample embeddings to mine their semantic structure. The clustering is conducted recursively: larger clusters are further clustered until the size of every cluster falls within a certain threshold (a minimal sketch of this offline step is given below). The key to our method lies in transcending the limitations of conventional clustering algorithms, which typically do not assign semantically meaningful labels to each cluster. By harnessing the power of Large Language Models (LLMs), FASTTRACK assigns a carefully selected set of keywords to each cluster, serving as its semantic label.
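To make this offline stage concrete, the following is a minimal sketch of the recursive clustering step under stated assumptions: it takes precomputed sample embeddings (e.g., from a sentence-embedding model), uses scikit-learn's KMeans, and the size threshold, branching factor, and the label_cluster helper (which would ask an LLM for keywords) are illustrative placeholders rather than the authors' exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

MAX_CLUSTER_SIZE = 200   # assumed leaf-size threshold
BRANCHING_FACTOR = 8     # assumed number of children per split

def recursive_cluster(embeddings, indices, tree, path=()):
    # Recursively split the corpus until every leaf cluster is small enough.
    if len(indices) <= MAX_CLUSTER_SIZE:
        tree[path] = indices          # leaf cluster: store its members
        return
    k = min(BRANCHING_FACTOR, len(indices))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings[indices])
    for c in range(k):
        child = indices[labels == c]
        if len(child) == 0:
            continue
        if len(child) == len(indices):  # degenerate split: stop recursing
            tree[path + (c,)] = child
            continue
        recursive_cluster(embeddings, child, tree, path + (c,))

def build_semantic_index(corpus, embed, label_cluster):
    # Offline stage: embed the corpus, cluster it recursively, and attach
    # LLM-chosen keywords to every leaf cluster as its semantic label.
    embeddings = embed(corpus)                      # [N, d] array
    tree = {}
    recursive_cluster(embeddings, np.arange(len(corpus)), tree)
    return {
        path: {
            'members': members.tolist(),
            'keywords': label_cluster([corpus[i] for i in members]),
        }
        for path, members in tree.items()
    }
```

The keyword-labeled leaves produced here are what the online stage later matches against a query with fuzzy keyword matching.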
This strategic integration not only renders the clustering outcomes interpretable but also significantly simplifies the process of navigating through the corpus in response to specific queries. We note that such semantic clustering only need to be applied once offline, effectively allowing us to leverage the massive amount of compute in pre-training for free. 3.2 LLM as a Sample-Level Tracer With the structured and semantically meaningful clusters, we can now online process each query for fact tracing efficiently. The first step is to retrieve relevant clusters for a given query. A simple example for such cluster-level retrieval is to apply fuzzy match 1 to identify those clusters that shared similar keywords as the query. Furthermore, the efficacy of clustering can be enhanced through ensemble of different clustering outcomes, as detailed in Table 2. 1https://github.com/seatgeek/thefuzz Now, with the retrieved clusters, the second step is to identify the groundtruth supporting data from this narrowed pool. We frame this stage as a binary verification problem: given a specific query, we classify each candidate training example into two categories based on its \u2018supportiveness\u2019. An example is considered \u2019grounding\u2019 if it supports the query. A direct way to perform such classification is to instruct the LLM to evaluate a single training example against a query for supportiveness, assigning a score of 1 for supportiveness and 0 otherwise. Although effective, this one-at-a-time scoring method can still be computationally and financially costly. To futher enhance efficiency and speed up the process, we devised the prompting strategy to evaluate a batch of training data in a single inference run. This batch processing approach significantly cuts down the time required for evaluations, reducing the number of necessary inferences by a factor of b, where b is the number of candidate examples in a batch. The example prompt used in our experiments can be found in Appendix F. Following the LLM\u2019s evaluation, examples that are assigned a score of 1, indicating supportiveness, are systematically retrieved. The detailed workflow of FASTTRACK is presented in Algorithm 1. 4 Experimental Setup 4.1 Datasets FTRACE-TREx. The FTRACE-TRex dataset is proposed by (Aky\u00fcrek et al., 2022), with 27k queries created using LAMA (Petroni et al., 2019) and 1M masked training examples extracted from \fTREx (Elsahar et al., 2018) as the attribution set. Each training example is a cloze-style sentence with either the subject or object masked. The groundtruth training example for each query is defined as the examples that express the same fact, regardless of the masking position. To address the computational overhead, Aky\u00fcrek et al. (2022) proposes to construct a small, separate candidate set for each query (around 500). We follow a similar setup, but use a larger, fixed candidate pool to better reflect real-world scenarios: we randomly sample 100 queries from the entire query set for evaluation, and build the candidate pool by including all the corresponding groundtruth, supplementing with random samples to form a corpus of 10k. VITAMINC. We incorporate the VITAMINC dataset (Schuster et al., 2021) as a means to evaluate fact tracing methods\u2019 ability to mirror real scenarios where training corpus of LMs containing contradictions or misinformation. 
The VITAMINC dataset is built based on factual revisions to Wikipedia: each single factual revision yields a contrastive pair of contexts, where one context refutes the given claim and the other supports it. The original VITAMINC dataset presented each entry in the format of claim, evidence, and label, where the label indicates if the evidence \u2019SUPPORTS\u2019, \u2019REFUTES\u2019, or provide \u2019NOT ENOUGH INFO\u2019 to the evidence. To use it for fact tracing purposes, we build the attribution set by collecting 10k unique pieces of evidence (acting as training data). Then the query set is built by collecting corresponding claims that can be supported by the evidence. 2 4.2 Baselines Following Aky\u00fcrek et al. (2022), we compare our method FASTTRACK with TDA methods (i.e., TRACIN, EMBED) and the most representative IR method (i.e., BM25). TRACIN. TRACIN (Pruthi et al., 2020) is a recent gradient-based TDA method that has demonstrated strong empirical results and tractability. Following the setup of Aky\u00fcrek et al. (2022), we use an optimized version of TRACIN by rescaling gradients with Adafactors accumulators, applying unitnormalization to the gradients, and selecting the 2Due to the labeling format of the original dataset, some claims may have more than one supporting evidence but we do not know. To address such an issue, we manually inspect 100 queries for their groundtruth data and use these queries for evaluation. We provide the data we manually inspect along with this submission. best-performing layer. Data in FTRACE-TREx are cloze-style examples, hence we finetune an MT5 model (Xue et al., 2021) following Aky\u00fcrek et al. (2022) to predict the masked tokens. We note that gradient similarity is only meaningful when query and training data have the same question-answer construction, and it is difficult to construct the VITAMINC dataset in this way. Hence, we omit the evaluation of TRACIN on VITAMINC dataset. EMBED. Embedding-based similarity is another popular branch for fact tracing tasks. Here we refer to Equation 2 as baseline EMBED. For FTRACE-TREx dataset, we use the same finetuned MT5 model as for TRACIN, selecting the best-performing layer as the final result. For the VITAMINC dataset, we finetune a BERT model (Kenton and Toutanova, 2019) on our constructed attribution set. BM25. We use a publicly available implementation of BM25 (Lv and Zhai, 2011) as our baselines 3. We tokenize queries and training examples by space, removing any masked tokens. We proceed with the default settings for all hyperparameters, ensuring a standardized approach for our baseline comparisons. 4.3 Tracing Performance Evaluation TDA methods and BM25 score a given test query against every training example and then sort all examples based on their scores. This results in a top-k precision and recall performance measurement, where the k is the threshold of taking the top k ranked examples as the retrieved supporting training data (Aky\u00fcrek et al., 2022). In contrast, our method directly retrieves the supporting training data without ranking. To enable a unified comparison, we use F1 score as the main metric. We report the best-performing F1 score and the corresponding precision and recall for each method. 5 Empirical Results 5.1 Overall Performance We first evaluate the overall performance of different methods on FTRACE-TREx and VITAMINC datasets in Table 1. Hyperparameters for all methods are presented in Appendix C. Fact tracing is a challenging task. 
Previous work (Aky\u00fcrek et al., 2022) proposes several techniques to optimize TDA methods but found that even BM25 with no tuning outperforms TDA, and all these methods are far from perfect. In Table 1 we show similar findings, where TRACIN and EMBED result in F1 scores lower than 0.1 on the FTRACE-TREx dataset. We also observe that TRACIN's performance is highly dependent on the chosen model checkpoint. Specifically, the performance noted in our main results table was achieved using the final 80k-step checkpoint, with earlier checkpoints yielding even weaker outcomes (as shown in Appendix E).

Table 1: Comparison of fact tracing performance. We present the best F1 scores among top-k for each method; precisions and recalls are chosen at the threshold that leads to the optimal F1 score. Among all methods, FASTTRACK performs the best. *The last row gives the upper bound performance achievable in the first cluster-level retrieval stage.

| Method | FTRACE-TREx F1 | Precision | Recall | VITAMINC F1 | Precision | Recall |
|---|---|---|---|---|---|---|
| TRACIN | 0.02 | 0.19 | 0.01 | - | - | - |
| EMBED | 0.01 | 0.08 | 0.01 | 0.48 | 0.54 | 0.46 |
| BM25 | 0.40 | 0.49 | 0.52 | 0.55 | 0.59 | 0.53 |
| Ours | 0.72 | 0.81 | 0.69 | 0.91 | 0.88 | 0.98 |
| Ours* | 0.86 | 0.92 | 0.83 | 1.00 | 1.00 | 1.00 |

3 https://pypi.org/project/rank-bm25/

Takeaway: FASTTRACK delivers impressive tracing performance, yielding both high precision and recall and improving the F1 score by >80% compared to the best-performing baseline BM25. All baseline methods retrieve training examples based on their 'relevance' to the given query, which can conflict with the goal of fact tracing. This discrepancy becomes evident in real-world scenarios, where datasets, unlike the scientifically accurate and consistent ones often evaluated in prior research, contain conflicting information. Our evaluation on the VITAMINC dataset reveals that such methods yield low precision due to their relevance-focused logic. Notably, FASTTRACK significantly outperforms all baselines, achieving an F1 score of 0.91, demonstrating its effectiveness in accurately identifying grounding training data for queries. Takeaway: FASTTRACK not only excels in fact-tracing performance but also offers the optimal balance between computational speed and effectiveness. It outperforms competitors significantly, running 33 times faster than TRACIN in evaluating 100 queries (Figure 1). 5.2 Failure Analysis In this section, we qualitatively examine some failure examples of different tracing methods to shed light on the future direction of fact tracing. When does BM25 fail? BM25 operates based on token overlap and retrieves examples with high lexical similarity to the query, regardless of their factual consistency. As shown in the example below, while the first retrieved example is correct, the second contradicts the query, and the third is entirely unrelated.
Query: Alloy Digital's network has a monthly reach of more than 100 million unique visitors.
BM25 Retrieved:
Rank-1: Defy Media: According to comScore, Alloy Digital's network reaches over 221 million unique visitors each month, including more than half of the aged 12-34 internet users.
Rank-2: According to comScore, Alloy media platforms reach over 95 million unique visitors each month, including over half of the age 12-34 internet users.
Rank-3: The franchise has sold more than 26 million units worldwide with the release of 2018's installment.
BM25's performance can be poor even when there are no such data conflicts. We further conduct experiments on the FTRACE-TREx dataset, where we paraphrase each query using an open-sourced paraphraser 4.
The performance of BM25 before and after paraphrasing is shown in Table 4, where both precision and recall drop by a wide margin. When do TDA methods fail? TRACIN conducts a first-order approximation and uses the dot product of the model\u2019s gradients between each train-test sample pair to measure this contribution However, we find its actual performance is fragile and can be affected by a number of factors. 1) TRACIN\u2019s performance is highly dependent on having the exact same construct of questionanswer pairs. LMs for QA tasks typically use an encoder-decoder architecture, such as T5/MT5. The gradient is calculated with respect to the loss 4https://huggingface.co/humarin/chatgpt_ paraphraser_on_T5_base \fof the word/token being predicted. However, gradient similarity between a train-test sample pair is only meaningful when these are the same QA questions with identical question-answer pairs. In other words, even for sample pairs where the texts are the same, if the construction of question-answer is different, the loss and gradient may be unrelated. This aligns with our evaluation results: we find that TRACIN cannot identify those groundtruth training examples with supporting facts but having different QA construction. This results in arbitrarily poor performance on some queries, as the cosine similarity between gradients which are high-dimensional vectors can be dominated by unrelated factors and fail to capture the actual correlation between samples. 2) TRACIN tends to retrieve sentence with the same masked token. Such finding has also been observed in (Aky\u00fcrek et al., 2022). This likely occurs because the same masked token produces similar training gradients. Query: Comptroller of Maryland is a legal term in ____. (Maryland) TRACIN Retrieved: Rank-1: The ____ Comptroller election of 2010, was held on November 2, 2010. (Maryland) Rank-2: It is found in Alabama, Florida, Louisiana, ____, Mississippi, North Carolina and Virginia. (Maryland) As illustrated in the example above, the top-ranked retrieved example is correct, where the training example and query share the same masked target token. However, the second retrieved example does not provide any relevant fact, only the masked token to predict is the same. The other TDA method evaluated in this paper, EMBED, relies on hidden space similarity search. The dilemma for this approach is that no single representation works for all tasks (Vaze et al., 2023), which is more pronounced in these QA problems. The similarity of text pairs could be measured from different perspectives and the one that is best captured does not necessarily focus on the \"supporting fact\". Another major issue with this approach is that similar texts always receive similar scores, rendering the results end up in clumps. If the frontrunning clump is wrong, all samples in the clump are wrong, yields zero top-k accuracy. For example, for the same query \"Comptroller of Maryland is a legal term in <MASK>\", the top 3 retrieved examples of EMBED are: Rank-1: the Mayor of ____. (Moscow) Rank-2: Embassy in Cyprus is located in ____. (Nicosia) Rank-3: He served on the ____ of Edmonton. (town council) These retrieved examples, to varying degrees, relate to the query by involving 1) public offices and elected officials, 2) political or geographical entities, and 3) individuals with governmental roles. In fact, The groundtruth example belongs to a similar category. 
Yet, embedding similarity cannot detect fact-support correspondence between samples and cannot distinguish different levels of sample similarity. 6 Ablation Study and Analysis In-depth Analysis of FASTTRACK. The first stage of FASTTRACK, cluster-level retrieval, determines the performance upper bound of our method. If relevant clusters are not identified during this phase, it becomes impossible to recover them in the later stage. We report the upper bound performance achievable in the last row of Table 1 to reveal the limitations originating from the first stage. Specifically, this upper bound assumes perfect accuracy in the second stage, meaning that if the correct cluster is identified, we achieve 100% precision and recall on this cluster. As shown in Table 1, the upper bound of FASTTRACK falls only slightly short of perfect: the precision is 0.92 while the recall is only 0.83. Such failures originating from the first stage can come from two sources: 1) The clustering algorithm. Clustering algorithms group data with similar embeddings together. Although in general we observe that the groundtruth training data for a specific query usually falls within 4 clusters on average, which means the clustering algorithm successfully groups relevant training data into the same cluster, there are still cases where the groundtruth training data is the minority within a cluster. In such cases, groundtruth data can be ignored when assigning the cluster semantically meaningful keywords, making the cluster hard to retrieve. In practice, this can be improved by using an ensemble: we observe that an ensemble of three clusterings yields a performance upper bound of precision 0.92 and recall 0.83, while a single clustering yields an upper bound of precision 0.81 and recall 0.65 (Table 2). 2) The cluster retrieval method. We currently employ simple fuzzy matches to capture clusters that share similar keywords with the query. However, the training data may express the query in a different surface form. Future studies could leverage more advanced tools to enhance this process.

Table 2: Upper-bound performance of FASTTRACK when using single and ensemble embeddings on FTRACE-TREx.

| | Single | Two-Ensemble | Three-Ensemble |
|---|---|---|---|
| Precision | 0.81 | 0.89 | 0.92 |
| Recall | 0.65 | 0.78 | 0.83 |

Table 1 shows that there exists a gap between the performance upper bound and the final performance. This gap comes from ChatGPT's limitations, as it misclassified a few examples. We show two interesting types of misclassification here:
Query: President of the Executive Yuan is a legal term in _____. (Taiwan)
False negative examples (mask removed):
1. He has interviewed financial services regulators including Sean Chen (politician), the Premier of Taiwan, when he was the Chairman of the Financial Supervisory Commission (Republic of China) of Taiwan and negotiated the financial Memorandum of Understanding with China.
2. Hsich Tung-min was the ninth Governor of Taiwan Province (1972-1978) and the sixth and first local Taiwanese Vice President of the Republic of China (1978-1984) under President Chiang Ching-Kuo.
GPT-4 analysis: The term \"President of the Executive Yuan\" is not mentioned in any of the texts. The texts mention various political positions in Taiwan, such as the Premier of the Republic of China and the President of Taiwan, but none of them refer to the President of the Executive Yuan. Therefore, it cannot be inferred from the texts that \"President of the Executive Yuan\" is a legal term in Taiwan.
In the above example, GPT-4 did not recognize that the Executive Yuan's leader is the 'Premier of Taiwan', indicating a gap in connecting related concepts. The second failure example appears to be a labeling error. Another example is that GPT-4 struggles with complex logical reasoning involving dates; for instance, it incorrectly equates information from different dates, focusing merely on numerical comparisons (see Appendix E). Failure cases at this stage mainly stem from the LLM's own bottlenecks. These challenges represent a significant area of ongoing research and are beyond the scope of our current study. We acknowledge these limitations and suggest them as critical avenues for future investigation to enhance the capabilities and applications of LLMs.

Table 3: Performance of BM25 and FASTTRACK when dealing with different corpus sizes. Both methods encounter a slight performance drop, while FASTTRACK is still 1.66x better than BM25.

| Method | VITAMINC-10k F1 | Precision | Recall | VITAMINC-100k F1 | Precision | Recall |
|---|---|---|---|---|---|---|
| BM25 | 0.55 | 0.59 | 0.53 | 0.53 | 0.56 | 0.50 |
| Ours | 0.91 | 0.88 | 0.98 | 0.88 | 0.85 | 0.92 |
| Ours* | 1.00 | 1.00 | 1.00 | 0.95 | 0.95 | 0.95 |

Embedding Schemes. We use SentenceTransformer as the embedding model to perform clustering in our main evaluation. To test the sensitivity of FASTTRACK to different choices of embeddings, we also test some state-of-the-art embedding models such as Cohere Embed v3 and Mistral-Embed. As shown in Table 6, FASTTRACK consistently achieves comparable top-performance upper bounds across various embedding models, underscoring its adaptability to different embedding choices. Corpus Size. Moving forward, we aim to tackle a more challenging scenario: we use the same query set of VITAMINC, but augment the attribution set with additional non-relevant examples until the total reaches 100k. This setting is designed to evaluate our method's robustness in scenarios that better resemble real-world applications. As shown in Table 7, both methods exhibit a slight decline in performance, yet FASTTRACK consistently outperforms BM25 by a significant margin. BM25's performance drop is ascribed to the inclusion of new examples that exhibit high lexical overlap with the queries, while our method's drop mainly stems from the clustering stage, where the clustering logic is affected once more diverse samples are included. We leave a detailed analysis in Appendix E." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.15949v1", |
| "title": "Sequence can Secretly Tell You What to Discard", |
| "abstract": "Large Language Models (LLMs), despite their impressive performance on a wide\nrange of tasks, require significant GPU memory and consume substantial\ncomputational resources. In addition to model weights, the memory occupied by\nKV cache increases linearly with sequence length, becoming a main bottleneck\nfor inference. In this paper, we introduce a novel approach for optimizing the\nKV cache which significantly reduces its memory footprint. Through a\ncomprehensive investigation, we find that on LLaMA2 series models, (i) the\nsimilarity between adjacent tokens' query vectors is remarkably high, and (ii)\ncurrent query's attention calculation can rely solely on the attention\ninformation of a small portion of the preceding queries. Based on these\nobservations, we propose CORM, a KV cache eviction policy that dynamically\nretains important key-value pairs for inference without finetuning the model.\nWe validate that CORM reduces the inference memory usage of KV cache by up to\n70% without noticeable performance degradation across six tasks in LongBench.", |
| "authors": "Jincheng Dai, Zhuowei Huang, Haiyun Jiang, Chen Chen, Deng Cai, Wei Bi, Shuming Shi", |
| "published": "2024-04-24", |
| "updated": "2024-04-24", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.AI", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Large Language Models (LLMs), despite their impressive performance on a wide\nrange of tasks, require significant GPU memory and consume substantial\ncomputational resources. In addition to model weights, the memory occupied by\nKV cache increases linearly with sequence length, becoming a main bottleneck\nfor inference. In this paper, we introduce a novel approach for optimizing the\nKV cache which significantly reduces its memory footprint. Through a\ncomprehensive investigation, we find that on LLaMA2 series models, (i) the\nsimilarity between adjacent tokens' query vectors is remarkably high, and (ii)\ncurrent query's attention calculation can rely solely on the attention\ninformation of a small portion of the preceding queries. Based on these\nobservations, we propose CORM, a KV cache eviction policy that dynamically\nretains important key-value pairs for inference without finetuning the model.\nWe validate that CORM reduces the inference memory usage of KV cache by up to\n70% without noticeable performance degradation across six tasks in LongBench.", |
| "main_content": "Introduction Large language models (LLMs) have demonstrated impressive proficiency in a wide range of natural language processing tasks such as question answering, summarization and multi-turn dialogues [1\u20133]. Considering substantial cost of deploying LLMs introduced by tremendous model size and quadratic cost of attention layer, many works focused on model compression and memory-efficient attention techniques [4\u20137]. However, the size of KV cache, which stores previous tokens\u2019 key and value states to avoid re-computation, scaling linearly with sequence length during generation, also incurs significant overhead. For instance, even a 7 billion-parameter model with batch size of 128 and sequence length of 4096 results in 256GB of KV cache, far exceeds memory consumed by model itself which is only 14GB. A natural idea is to discard some less informative KV cache to reduce memory consumption. The challenge lies in finding a balance between discarding as much as possible while still maintaining model performance. Despite multi-query attention [8] and grouped-query attention [9] can reduce the size of KV cache by reducing attention heads, it needs re-training to recover performance of original model. Recent works \u2217Corresponding Author Preprint. In progress. arXiv:2404.15949v1 [cs.CL] 24 Apr 2024 \f[10\u201314] have investigated implementing KV cache using specific eviction policy, that determines which key-value states should be evicted from KV cache. These methods aim to compress KV cache to a pre-defined budget size, thereby reducing memory and computational overhead. However, they save same number of key-value pairs for all attention heads and layers, ignoring that the number of keys playing an important role may vary across different attention heads and layers [15]. (a) (b) Figure 1: Attention sparsity of LLaMA2-7B. (a) Layer-wise attention sparsity. (b) Head-wise attention sparsity of layer 0 and layer 1. Intuitively, if important information in the KV cache exceeds the predetermined budget size, the performance of the model is likely to decline as it unavoidably evicts some crucial information. Our preliminary exploration also reveals that different attention layers and heads show different sparsities as shown in Figure 1. First, we observe that bottom layers of the model are relatively dense2, while the remaining attention layers exhibit significant sparsity. Second, even within the same layer, different heads can exhibit obvious differences in sparsity levels. These properties suggest that we need to treat different layers and heads differently, rather than using the same budget size for all of them. In addition, we prove that completely similar queries have similar concerns about keys, and observe that recent query vectors are quite similar on LLaMA2 series models so current query can directly use recent query attention messages during generation. Based on the above insights, we first define the generation process of LLMs with a budget-unrestricted KV cache in Section 3. Then we propose Cache Optimization with Recent Message (CORM), a framework that exploits recent query attention information for KV cache optimization and token generation of LLMs. 
Specifically, \u2022 In Section 3, we explore the similarity between query vectors of all tokens within same sequence, revealing that recent query vectors are highly similar, which implies that (i) keys that are important for recent queries might be also important for the current query; and (ii) removing key-value pairs that appear to be less informative for recent queries can greatly preserve the performance of the model. \u2022 In Section 4, we present a simple method which dynamically evicts minor key-value pairs determined by recent tokens\u2019 attention information. We conduct extensive experiments on LLaMA2-7B-Chat, considering its popularity and wide usage, to evaluate CORM across 6 tasks from LongBench [16] containing question answering, summarization, code completion, etc. Experiments show that even without explicitly setting a budget size, our method is still possible to achieve a high compression rate. Our method achieves better performance compared to StreamingLLM [10], Scissorhands [11] and H2O [12] with over 70% KV cache reduction rate and can even come close to fully restoring the performance of the model. 2 Related Work Attention Let x \u2208Rn\u00d7d denote the input embeddings from a sequence of n feature vectors of dimension d. The multi-head self-attention [17], as a core module of Transformer model, facilitates 2Let t denote sequence length, we count the proportion of keys which attention score larger than average score 1 t and denote it as r. The larger r is, the sparser the layer is. 2 \fcontextual information interaction within each head in the following manner: Q = xWq, K = xWk, V = xWv, Attention(x) = softmax(QKT \u221adh ) \u00d7 V (1) Q, K, V represent the query, key, and value matrices, which are obtained by linearly mapping x using weight matrices Wq, Wk, and Wv \u2208Rd\u00d7dh, respectively. dh is the dimension of each individual head. KV Cache According to autoregressive paradigm, transformer decoder model predicts future tokens based on both previous and current tokens. Recalculating the key-value pairs for previous tokens at each decoding step is clearly an inefficient strategy. A common practice is to retain the key-value pairs of previous tokens for subsequent reuse. Thus, the consumption of KV cache becomes linearly correlated with the length of input sequence. When dealing with long contexts, however, the use of such a space-time trade-off approach may still pose challenges. Training Policies The advent of multi-query attention (MQA) [8] is to address the influence of attention heads on KV cache within multi-head attention (MHA) mechanism. It facilitates the sharing of the same set of keys and values among different heads to alleviate cache pressure. Grouped-query attention (GQA) [9] represents a trade-off between MHA and MQA, achieving key-value sharing within each group through mean-pooling-based uptraining. Both methods require additional training to restore model performance due to the inability to directly convert. Training-free Policies During generation, sequence length is the primary factor of cache pressure. Recent methods aim to balance model efficiency and inference cost without extra training and architectural changes. StreamingLLM [10] keeps attention sink token and recent tokens throughout decoding process to align with the training window. Scissorhands [11] maintains pivotal tokens and recent tokens based on the persistence of importance hypothesis. 
H2O [12] utilizes accumulated attention score to maintain heavy hitters and recent tokens. TOVA [13] removes tokens with the lowest current attention score from the fixed cache at each decoding step. RoCo [14] retains tokens in the fixed cache based on high mean cumulative attention scores and top r standard deviations. Aforementioned methods consistently operate on a fixed cache, ignoring that the number of tokens playing an important role may vary across different attention heads and layers. 3 Observations We first demonstrate the existence of attention sparsity in LLMs in Section 3.1, then discuss the phenomenon that similar queries have similar attention concerns for keys in Section 3.2. In Section 3.3, we show an intriguing observation that current query is most similar to recent queries. 3.1 Attention sparsity in LLMs We first explore the sparsity in attention layers of LLMs, which provides an effective guarantee for us to reduce KV cache size. Specifically, we use proportion of important keys to represent attention sparsity. Let qt \u2208R1\u00d7d denote the query state vector at step t, ki \u2208R1\u00d7d denote the key state vector at step i (1 \u2264i \u2264t), where d is hidden dimension (for the sake of simplicity, we only consider a single head here). The normalized attention score of qt for ki is computed as: \u03b1t,i = exp(qtkT i / \u221a d) Pt j=1 exp(qtkT j / \u221a d) . (2) Definition 3.1 (Important Key) We define a key ki is considered important in step t, if and only if \u03b1t,i \u22651 t , otherwise it is considered minor. 3 \fWe conduct zero-shot inference with LLaMA2-7B model on the test set of PG-19 [18]. We plot the layer-wise and head-wise sparsity within attention blocks, the results are presented in Figure 1. It reveals that bottom layers are relatively dense, while other layers are highly sparse with over 90% sparsity. This makes it possible to do attention computation on only small part of KV cache during generation. 3.2 Similar queries have similar concerns for keys The previous section reveals the existence of attention sparsity in LLMs, which provides an opportunity to reduce KV cache size while maintaining performance. In this section we give a theoretical analysis that similar queries have similar concerns for keys for eviction policy design. Consider the i-th and j-th query state vectors qi and qj in a sequence of token length T (i < j \u2264T). Their cosine similarity can be computed as: cosine_similarity(qi, qj) = qiqT j \u2225qi\u2225\u00b7 \u2225qj\u2225. (3) Consider all key states k1, k2, ..., ki\u22121 before i-th key. Assume that cosine_similarity(qi, qj) = 1, then qi = m \u00b7 qj with m \u2208R+. The attention weight3 of qi to the previous i \u22121 keys can be represented as: attention_weight = 1 \u221a d (qikT 1 , qikT 2 , ..., qikT i\u22121) = m \u221a d \u00b7 (qjkT 1 , qjkT 2 , ..., qjkT i\u22121). (4) Note that m is a positive number that does not affect the relative order of the attention weights. For example, for qi, if qikT 1 > qikT 2 , there must be qjkT 1 > qjkT 2 for qj. This means if a key is important to qi, it is also important to qj, though the degree of importance may vary due to the softmax function. Figure 2: Similar queries have similar concerns for keys. We plot the attention map from two different layers in a sentence. We discretize the attention score and those important keys are shown in bright green. 
Each attention map has two red borders, the bottom border shows important keys that current query actually focuses on, while another border shows important keys that the most similar query focuses on. Although it\u2019s nearly impossible that cosine_similarity(qi, qj) = 1 in real situation, we can make the hypothesis that two similar queries may have similar concerns for keys. To validate this hypothesis, we provide two attention maps of a sentence randomly drawn from PG-19 using LLaMA2-7B, as 3attention weight is unnormalized attention score 4 \fshown in Figure 2. Important keys are marked with bright green, more plots are available in Appendix A.1. We observe that the hypothesis is true, and similar queries exhibit similar concerns for important keys. At the same time, important keys only account for a small proportion especially in deeper attention layers, which is consistent with the finding that deeper layers are sparser in previous section. 3.3 Similarity exploration of query vectors We have validated two similar queries have similar concerns for keys in Section 3.2, we also need to validate that at each step we can find a previous query state that is similar enough to current query state in same layer and same head. To check this, we visualize cosine similarity of query vectors within same sequence as shown in Figure 3, more plots are available in Appendix A.2. We observe an intriguing phenomenon that many images show clear oblique color segmentation, with the top oblique block closest to dark red which means current query is most similar to recent queries. Figure 3: Visualization of query vectors\u2019 cosine similarity over one sentence with a length of 1024. The i-th row of the map represents cosine similarity of the i-th query to all previous queries. The plot reveals that in most cases current query is most similar to recent queries. Through above observations, we see an opportunity to design a KV cache eviction policy based on query similarity that preserves the LLM generation performance. 4 Cache Optimization with Recent Message In this section, we present CORM, a method reduces the KV cache memory based on recent query attention information without any fine-tuning process. In Section 4.1, we derive that current query can directly use recent query attention messages during generation. In Section 4.2, we present CORM eviction policy and describe how it works during generation. 4.1 Generate based on recent query attention messages Consider observations in Section 3, intuitively, we can directly store all queries and their attention information for future reference. At each generation step, use current query to find the most similar one from previous queries, and use its saved attention information to calculate solely on important keys. However, this approach incurs a significant cost. First, storing all queries results in a substantial increase in memory overhead. Second, the requirement of performing similarity calculations at each step adds to the computational overhead. Since in most cases current query is most similar to recent queries as described in Section 3.3, we can just use recent query attention messages. And from Figure 2 we can also observe that only a small proportion of keys are considered important by recent queries. Therefore even if we save all the keys that are considered important in previous steps, we can save a lot of memory. 
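Before formalizing the eviction rule, the two quantities it relies on — the per-step important-key indicator of Definition 3.1 and the query-to-query cosine similarity examined in Section 3.3 — can be written down directly. The following is a minimal sketch; tensor names and shapes are illustrative assumptions rather than the paper's released code.

```python
import torch
import torch.nn.functional as F

def important_key_mask(attn_scores: torch.Tensor) -> torch.Tensor:
    # Definition 3.1: at step t, key i is 'important' if its normalized
    # attention score alpha_{t,i} >= 1/t. attn_scores has shape [1, t].
    t = attn_scores.shape[-1]
    return attn_scores >= 1.0 / t

def query_similarity_map(queries: torch.Tensor) -> torch.Tensor:
    # Cosine similarity of every query vector to all previous ones
    # (queries: [T, d]); the lower-triangular map corresponds to the
    # visualization in Figure 3, where the current query tends to be most
    # similar to recent queries.
    q = F.normalize(queries, dim=-1)
    return torch.tril(q @ q.T, diagonal=-1)
```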
4.2 Eviction algorithm via recent message We have shown that recent query attention information is enough for cache optimization in Section 4.1. In the following, we formally define this algorithm and introduce how to integrate it into LLM generation directly. Definition 4.1 (Long-term Minor Key) A key ki is considered a long-term minor key only if it is considered minor in all recent r steps (from t - r + 1 to t). Approach CORM maintains a recent window of size w to record the information of the recent w queries, and always keeps the most recent r keys unremoved to prevent them from being discarded prematurely due to insufficient observations. During generation, ki, vi will be discarded once ki is regarded as a long-term minor key. For better explanation, we present PyTorch code of the main algorithm in Algorithm 1. Intuitively, when w is larger, more keys and values are saved, the compression rate is smaller, and performance is better; conversely, when w is smaller, fewer keys and values are saved, the compression rate is larger, and performance is worse. So there is a tradeoff between performance and compression rate. Memory Overhead Analysis In order to reduce the memory overhead of the KV cache, an extra memory overhead is introduced by the recent-information cache: we need to store recent query messages, which increases memory overhead. However, this overhead is far smaller than the compressed KV cache; one can use a small portion of memory to avoid maintaining the full KV cache without obvious performance degradation. On the other hand, the compression rate increases as the sequence length increases, as shown in Figure 4, resulting in a comparatively lower memory overhead for this component.

Algorithm 1 Single-head KV cache eviction with CORM (unbatched)

import torch

def corm_eviction(keys, values, message, attn_score, w, r, t):
    \"\"\"
    Args:
        keys: previous key states, a tensor with shape [l, d]
        values: previous value states, a tensor with shape [l, d]
        message: attention message, a bool tensor with shape [m, l-1]
        attn_score: current step's attention score, a tensor with shape [1, l]
        w: window size, a scalar
        r: recent size, a scalar
        t: current step, a scalar
    Returns:
        updated keys, values, and message
    \"\"\"
    m = message.shape[0]
    # update attention message: pad with False for the newest key -> [m, l]
    message = torch.cat([message, torch.zeros(m, 1, dtype=torch.bool)], dim=1)
    cur_message = attn_score >= 1 / t
    # append the current step's importance indicators as a new row
    # (concatenating along dim=0 so the [m, l] and [1, l] shapes line up)
    # and keep only the most recent w rows
    message = torch.cat([message, cur_message], dim=0)[-w:, :]
    if message.shape[0] < w:
        return keys, values, message
    else:
        # determine the key-value pairs that necessitate discarding
        decision = message.any(dim=0)
        decision[-r:] = True  # always keep the recent r tokens unremoved
        indices = torch.nonzero(decision).squeeze()
        keys = keys[indices, :]
        values = values[indices, :]
        return keys, values, message

(For the sake of brevity, the code snippet only demonstrates the single-head eviction operation; in the actual implementation, it is performed on each head at every layer.)

5 Empirical Evaluation In this section, we present results demonstrating that CORM can reduce up to 70% of the memory footprint of the KV cache without accuracy degradation on LLaMA2-7B-Chat.
Dataset To broadly validate feasibility of our method on real-world use cases, we choose LongBench [16] as our evaluation benchmark, which contains a wide range of long-text tasks such as question answering [19\u201324], summarization [25\u201328], few-shot learning [29\u201332], synthetic task and code completion [33, 34]. Here we do not consider short text tasks, because even full cache doesn\u2019t have any bottlenecks. Models Since sequence length is the main factor in the continuous growth of KV Cache, we employ LLaMA2-7B-Chat [2] for 4K test considering its wide usage. Baselines Since CORM reduces KV cache without need for training, we consider several similar approaches as our baselines: StreamLLM [10], Scissorhands [11] and H2O [12]. In addition, the full KV cache is also considered as strong baseline to measure the performance loss of other methods. Setting All baselines can be regarded as fixed budget size KV cache compression, however CORM is a dynamic compression method. Since we find that CORM has similar compression rates for various task texts with the same sequence length. For fair comparison, we plot the relationship between model compression rate and sequence length using texts randomly sampled from PG19 [18] as shown in Figure 4. Figure 4: Relationship between compression ratio and sequence length. Plots show that compression rate with CORM \"256+256\" and budget=1024 are close for LLaMA2-7B-Chat. Main Results We evaluate LLaMA2-7B-Chat for 4K length text. Results are summarized in Table 1 & 2 for LLaMA2-7B-Chat. The following observations can be drawn: (1) CORM consistently outperforms previous methods at the same compression rate across a wide range of tasks. (2) Meanwhile, with over 70% KV cache reduction, CORM achieves comparable performance as the model with full KV cache and even surpass it on some tasks, we speculate it\u2019s because there\u2019s some noise in full KV cache that affects model output and our method can eliminate this noise to a certain extent by discarding some KV cache. 5.1 Budget unnecessity: is unbudgeted better? We primarily focus on the effectiveness of not setting a budget versus setting a fixed budget. Note that since we use same window size and recent size as Scissorhands in the experiment, it can be regarded a natural ablation experiment. And Table 1 & 2 have shown that, at the similar compression rate, CORM is much better than Scissorhands in most tasks, and performance of other tasks is close. This verifies that different transformer layers and heads should be treated differently rather than setting a same fixed budget size. 7 \fTable 1: Results (%) on single-doc QA, multi-doc QA and summarization tasks. \"Full\" refers to LLaMA2-7B-Chat utilizing full KV Cache, \"StreamLLM\" is configured with 4+1020, \"Scissorhands\" is configured with 768+256 where window size=256, \"H2O\" is configured with 768+256, \"CORM\" is configured with 256+256 for fair comparison. For the sake of brevity we use ID to denote dataset here, mapping from ID to dataset can be found in Appendix B . 
(Columns 1-1 to 1-4: Single-Doc QA; 2-1 to 2-4: Multi-Doc QA; 3-1 to 3-4: Summarization.)

| Method | 1-1 | 1-2 | 1-3 | 1-4 | 2-1 | 2-2 | 2-3 | 2-4 | 3-1 | 3-2 | 3-3 | 3-4 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Full | 19.0 | 22.1 | 36.7 | 11.8 | 27.8 | 31.5 | 8.3 | 6.8 | 26.8 | 20.7 | 26.2 | 0.2 |
| StreamLLM | 13.2 | 15.4 | 27.2 | 6.5 | 24.2 | 25.4 | 5.3 | 4.4 | 21.6 | 19.8 | 24.4 | 0.1 |
| Scissorhands | 16.6 | 18.7 | 32.4 | 9.9 | 26.3 | 32.1 | 8.9 | 5.7 | 22.1 | 20.7 | 25.4 | 0.2 |
| H2O | 17.9 | 19.5 | 34.9 | 11.5 | 27.5 | 29.7 | 7.5 | 7.1 | 24.5 | 21.0 | 25.8 | 0.2 |
| CORM | 18.9 | 22.2 | 38.6 | 12.0 | 27.6 | 31.6 | 8.4 | 7.1 | 26.4 | 21.0 | 25.8 | 0.2 |

Table 2: Results (%) on few-shot learning, synthetic, and code tasks. \"Overall\" is computed by the macro-average over major task categories. This is computed on English (EN) tasks, Chinese (ZH) tasks, and all (All) tasks; code tasks are included in both languages. (Columns 4-1 to 4-4: Few-shot Learning; 5-1 to 5-3: Synthetic; 6-1 to 6-2: Code.)

| Method | 4-1 | 4-2 | 4-3 | 4-4 | 5-1 | 5-2 | 5-3 | 6-1 | 6-2 | EN | ZH | All |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Full | 64.0 | 83.3 | 41.4 | 17.3 | 2.9 | 7.8 | 10.0 | 58.3 | 52.2 | 32.8 | 16.9 | 28.9 |
| StreamLLM | 61.0 | 82.9 | 39.1 | 14.5 | 1.8 | 4.7 | 6.5 | 57.6 | 50.0 | 29.5 | 14.3 | 25.7 |
| Scissorhands | 52.5 | 83.6 | 40.7 | 17.0 | 3.1 | 6.5 | 7.7 | 56.8 | 52.1 | 31.0 | 15.8 | 27.2 |
| H2O | 63.0 | 81.5 | 39.9 | 17.0 | 2.8 | 7.0 | 7.3 | 57.8 | 52.3 | 31.8 | 16.4 | 28.0 |
| CORM | 64.0 | 83.5 | 41.3 | 17.3 | 2.9 | 9.0 | 9.1 | 58.3 | 52.0 | 32.9 | 16.8 | 28.9 |
" |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.15639v1", |
| "title": "CodeIP: A Grammar-Guided Multi-Bit Watermark for Large Language Models of Code", |
| "abstract": "As Large Language Models (LLMs) are increasingly used to automate code\ngeneration, it is often desired to know if the code is AI-generated and by\nwhich model, especially for purposes like protecting intellectual property (IP)\nin industry and preventing academic misconduct in education. Incorporating\nwatermarks into machine-generated content is one way to provide code\nprovenance, but existing solutions are restricted to a single bit or lack\nflexibility. We present CodeIP, a new watermarking technique for LLM-based code\ngeneration. CodeIP enables the insertion of multi-bit information while\npreserving the semantics of the generated code, improving the strength and\ndiversity of the inerseted watermark. This is achieved by training a type\npredictor to predict the subsequent grammar type of the next token to enhance\nthe syntactical and semantic correctness of the generated code. Experiments on\na real-world dataset across five programming languages showcase the\neffectiveness of CodeIP.", |
| "authors": "Batu Guan, Yao Wan, Zhangqian Bi, Zheng Wang, Hongyu Zhang, Yulei Sui, Pan Zhou, Lichao Sun", |
| "published": "2024-04-24", |
| "updated": "2024-04-24", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "As Large Language Models (LLMs) are increasingly used to automate code\ngeneration, it is often desired to know if the code is AI-generated and by\nwhich model, especially for purposes like protecting intellectual property (IP)\nin industry and preventing academic misconduct in education. Incorporating\nwatermarks into machine-generated content is one way to provide code\nprovenance, but existing solutions are restricted to a single bit or lack\nflexibility. We present CodeIP, a new watermarking technique for LLM-based code\ngeneration. CodeIP enables the insertion of multi-bit information while\npreserving the semantics of the generated code, improving the strength and\ndiversity of the inerseted watermark. This is achieved by training a type\npredictor to predict the subsequent grammar type of the next token to enhance\nthe syntactical and semantic correctness of the generated code. Experiments on\na real-world dataset across five programming languages showcase the\neffectiveness of CodeIP.", |
| "main_content": "Introduction Large Language Models (LLMs), particularly those pre-trained on code, such as CodeGen (Nijkamp et al., 2022), Code Llama (Roziere et al., 2023), and StarCoder (Li et al., 2023a), have demonstrated great potential in automating software development. Notably, tools leveraging these LLMs, such as GitHub Copilot (Friedman, 2021), Amazon\u2019s CodeWhisperer (Amazon, 2023), and ChatGPT (OpenAI, 2023), are revolutionizing the way developers approaching programming by automatically generating code based on natural language intent and the context of surrounding code. While LLMs have demonstrated great potential in automated code generation, they also raise challenges about safeguarding the intellectual property (IP) of the model architectures, weights, and *Corresponding Author. training data due to the enormous cost of training a successful LLM (Li, 2024). Additionally, there are growing concerns in educational settings about academic integrity with the use of generative AI (Bozkurt et al., 2023). An important measure for protecting the LLM IP and preventing academic misconduct is the ability to determine if a piece of code is generated by a particular LLM. Watermarking techniques (Kirchenbauer et al., 2023) offer a potential solution to determine the origin of machine-generated content. This technique is effective in safeguarding the IPs of Computer Vision (CV) and Natural Language Processing (NLP) models. It works by inserting information within multimedia formats (such as images and videos) without perceptibly diminishing the original utility of the content. By incorporating data such as owner/user ID, it supports leakage tracing, ownership identification, meta-data binding, and fortifying against tampering. Existing watermarking techniques for language models can be categorized into two groups: hard and soft watermarks. A hard watermark is typically inserted by utilizing the masked language models (e.g., BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019)) to replace tokens in generated content with synonyms. However, a hard watermark exhibits consistent patterns for different model inputs, compromising the protection performance. On the contrary, the soft watermarks are inserted during content generation, typically via manipulating the sampling probability distribution over the vocabulary during the decoding process of LLMs (Kirchenbauer et al., 2023). Recently, several attempts have been made towards watermarking LLMs for code generation, predominantly centered on two distinct approaches: generating a one-bit watermark to discern the machine-generated nature of the code (Lee et al., 2023) or embedding a hard watermark through a semantic-equivalent transformation of the generarXiv:2404.15639v1 [cs.CL] 24 Apr 2024 \fWatermark Inserted Syntax Error Watermark Inserted Syntax Correct Type Predictor CodeIP (w/o Type Predictor) CodeIP Watermark 718084(GPT) # Classifies the given list # of numbers into even & odd. def classify_even_odd(nums): even_nums = [] odd_nums = [] for num # Classifies the given list # of numbers into even & odd. def classify_even_odd(nums): even_nums = [] odd_nums = [] for num Figure 1: CODEIP can seamlessly embed multi-bit messages into LLMs while preserving the utility of the underlying code. \u201c718084\u201d is the ASCII value for \u201cGPT\u201d. ated code (Li et al., 2023b; Sun et al., 2023). 
However, a one-bit message can carry little information and is inadequate to preserve enough copyright information like the vendor ID of an LLM. Moreover, the implementation of a hard watermark does not offer robust protection, as the easily detectable nature of the hard-coded watermarking pattern undermines its effectiveness. To this end, this paper puts forward a grammarguided multi-bit soft watermarking method, termed CODEIP, to protect the IPs associated with LLMs during the code generation process. Specifically, following (Kirchenbauer et al., 2023), we first insert the watermark message based on the probability logit of LLMs during the code generation process. As this strategy has the potential to interfere with code semantics throughout the code generation process, we propose to incorporate grammar information into the process of generating watermarked code. This is achieved by training a type predictor to predict the subsequent grammar type of the next token, thereby enhancing the semantic correctness of the generated code. Figure 1 shows an example to illustrate the effectiveness of our introduced grammar information in comparison to the baseline model. In this example, our objective is to insert the multi-bit message (model name) \u201c718084\u201d (corresponding to the ASCII value of \u201cGPT\u201d) into its generated code. It is evident that, in the absence of grammar guidance, the model inaccurately predicts the next token as \u201c:\u201d. However, the grammar analysis indicates that the succeeding token is expected to be a keyword. Our CODEIP, which incorporates grammar constraints into the logit of LLMs, consistently tends to predict the correct token \u201cin\u201d. This capability preserves the semantic correctness of the code during the insertion of watermarks into LLMs. We assess the performance of CODEIP by incorporating watermarks into a diverse real-world dataset that encompasses five programming languages, namely Java, Python, Go, JavaScript, and PHP. The experimental results validate the efficacy of our proposed approach to watermarking, demonstrating an average extraction rate of 0.95. Importantly, our approach maintains the utility of the generated code, exhibiting a 50% reduction in CodeBLEU losses compared to the baseline model that lacks grammar constraints. This paper makes the following contributions. \u2022 It is the first to study the problem of embedding the soft multi-bit watermarks into LLMs of code during the code generation process. \u2022 It presents a new method that utilizes the grammatical information of programming languages to guide the manipulation of probability logits in LLMs, thereby preserving the utility of watermarked code. Data Availability. All experimental data and source code used in this paper are available at https://github.com/CGCL-codes/naturalcc/ tree/main/examples/codeip (Wan et al., 2022). 2 Preliminary 2.1 Code Generation LLM-based code generation produces source code from high-level specifications or prompts. Typically, these specifications (prompts) are conveyed through natural-language descriptions, supplemented by partial code elements such as function annotations and declarations, which are provided by users. Formally, let \u03c1 denote a prompt, which can be tokenized into a sequence of tokens {w1, w2, . . . , w|\u03c1|}, where | \u00b7 | denotes the length of a sequence. Let V denote the vocabulary used for mapping each token to corresponding indexes. 
Given a language model pLM, the probability of the next token, conditioned on a prompt and the previously generated tokens w1:i, can be formulated as follows: $L_{\mathrm{LM}} = p_{\mathrm{LM}}(w_i) = \mathrm{softmax}(p_{\mathrm{LM}}(w_i \mid \rho, w_{1:i}))$. (1) Here, $p_{\mathrm{LM}}(w_i)$ denotes the probability distribution over the entire vocabulary V generated by the LM. We also refer to the probability distribution produced by the LM as the model logit. In this paper, the LM will always be an autoregressive Transformer (Vaswani et al., 2017) pre-trained on source code, akin to the models in the GPT family, including Code Llama (Roziere et al., 2023) and StarCoder (Li et al., 2023a).
Figure 2: An overview of our proposed CODEIP.
Following this, the subsequent token wi is sampled from $p_{\mathrm{LM}}(w_i)$ using specific sampling strategies, such as multinomial sampling (Bengio et al., 2000) or greedy sampling (Berger et al., 1996). In this paper, we adopt the greedy sampling strategy. Therefore, the next token is selected according to $w_i = \arg\max_{w \in V} \log p_{\mathrm{LM}}(w)$. 2.2 The Problem: Watermarking the Code In this paper, our goal is to insert a multi-bit watermark message into a code snippet during the generation process of LLMs. Typically, the watermarking algorithm comprises two stages: the watermark insertion stage and the watermark extraction stage. During the process of inserting a watermark into the generated code, the initial consideration involves determining the specific message m to be inserted as the watermark. In practice, the model provider of an LLM can formulate a message, e.g., an owner ID, to safeguard its model copyright. It is noteworthy that while the initial content of message m may encompass any characters, it undergoes conversion into a unique number before insertion. Specifically, given the prompt $\rho$ and a watermark message m as inputs, the INSERT module produces watermarked code $C = \mathrm{INSERT}(\rho, m)$. During the watermark extraction stage, given an input snippet of code C, our expectation is that the module EXTRACT will produce its predicted watermark message $m = \mathrm{EXTRACT}(C)$. In the context of this formulation, the primary objectives of our watermarking for LLMs of code are twofold: 1) to accurately insert the intended message as a watermark, and 2) to preserve the utility of the code without loss of semantics. 3 CODEIP In Figure 2, we present an overview of our proposed CODEIP, which inserts a watermark into code generated by an LLM. CODEIP comprises two distinct stages: watermark insertion and grammar-guided watermarking. Initially, leveraging the decoding mechanism of existing LLMs, we use $L_{\mathrm{LM}}$ to denote the likelihood of each token in the vocabulary V as inferred by the LLM itself. Subsequently, during the watermark insertion stage (cf. Sec. 3.1), we incorporate the watermark message using a logit value $L_{\mathrm{WM}}$ calculated to measure its influence on V. Moreover, we present a novel application of Context-Free Grammar (CFG) and introduce a logit (denoted as $L_{\mathrm{TP}}$), which signifies the probability associated with the grammatical type of the subsequent token, to guide the watermark insertion during the code generation process (cf. Sec. 3.2). 3.1 Watermark Insertion Following Kirchenbauer et al.
(2023), we insert the watermark into the generated code by modifying the probability distribution over the entire vocabulary $\mathcal{V}$ as the LLM generates the next token. We first select a set of tokens from the vocabulary using a hash function, and based on the selected tokens we compute the watermark logits, which represent the likelihood of embedding the watermark message within each respective token. Vocabulary Selection. Following Kirchenbauer et al. (2023), the key to inserting watermarks into code is to select a set of vocabulary tokens under the control of the watermark message and to enhance their probability of being generated during LLM decoding. We employ a hash function $H$ to select tokens from the vocabulary $\mathcal{V}$. Specifically, assume the LLM is generating the $i$-th token, the previously generated tokens are $[w_1, w_2, \cdots, w_{i-1}]$, and the watermark message is $m$. For any token $w \in \mathcal{V}$, the hash function takes $(w, m, w_{i-1})$ as input and maps it to either 0 or 1. Tokens $w$ satisfying $H(w, m, w_{i-1}) = 1$ are the selected tokens, and our objective is to enhance their likelihood of being chosen by the LLM. Watermark Logit. To augment this likelihood, we compute an additional logit, the watermark logit $L_{WM}$, and add it to the existing model logit $L_{LM}$. The watermark logit relies on the outcome of the vocabulary selection. Assuming the LLM is generating the $i$-th token $w_i$, preceded by the last token $w_{i-1}$, with watermark message $m$, the watermark logit is computed as $L_{WM} = \log p_{WM}(w_i \mid m, w_{i-1}) = \begin{cases} 1, & H(w_i, m, w_{i-1}) = 1 \\ 0, & H(w_i, m, w_{i-1}) = 0 \end{cases}$ (2) Here $H$ is the hash function that outputs a binary value 0 or 1, and $p_{WM}$ denotes a probability distribution over the entire vocabulary $\mathcal{V}$, analogous to $p_{LM}$. By assigning a value of 1 in $L_{WM}$ to the selected tokens whose hash equals 1, we effectively increase the likelihood of such tokens being chosen during LLM decoding. 3.2 Grammar-Guided Watermarking As previously mentioned, conventional watermarking methods, which insert a message by randomly perturbing the generation of each token, often disrupt the semantics of the generated code. We posit that the generated code ought to adhere to the grammatical rules of the programming language, and we therefore propose integrating grammar constraints as a guiding principle in the code generation process, so as to maintain the utility of the watermarked code. Context-Free Grammar (CFG). A CFG is a formal system for describing the syntax of programming languages and is expressive enough to represent the syntax of most of them (Hoe et al., 1986). [Figure 3: An example highlighting the role of CFG in ensuring the semantic correctness of generated code: the code "if i % 2 ==" is lexed into KEYWORD, NAME, PUNCTUATION, and NUMBER tokens, and applying the CFG rules for if_stmt, comp_op, expr, and atom constrains the next lexical token to be a NAME or NUM.] Typically, for a segment of code, a lexer, e.g., ANTLR (Parr and Quong, 1995), can transform it into a sequence of lexical tokens.
Subsequently, under the constraints of the CFG rules, we can infer the potential type of the subsequent lexical token. For instance, as illustrated in Figure 3, after transforming the code "if i % 2 ==" into a sequence of lexical tokens, the CFG tells us that the next lexical token must be either a "NAME" or a "NUM", which is useful information for code generation. Nonetheless, despite the constraints a CFG imposes on code, applying it directly to code generation still presents challenges. As the example in Figure 3 demonstrates, a CFG can enumerate the valid types for the next lexical token, but when multiple token types are valid its utility becomes limited, because it cannot assign a probability distribution over these possible types. Hence, we train a lexical token-type predictor and use it as a substitute for the CFG. Lexical Token Type Predictor. We train a neural network to predict the lexical type of the next token. Given the prompt and previously generated tokens, we first employ a lexer to transform them into a sequence of lexical token types; this sequence is then fed to the predictor, which outputs the most probable lexical type for the subsequent token. Formally, with the prompt denoted as $\rho$ and the generated code as $G$ while the LLM is producing the $i$-th token, we form the code snippet $S = [\rho; G_{1:i}]$, where $[\cdot;\cdot]$ denotes concatenation, and extract its lexical type sequence $T = \mathrm{Lexer}(S) = [\tau_1, \tau_2, \ldots, \tau_l]$ via lexical analysis, where $\tau \in \mathcal{T}$ denotes a lexical token type and $l$ is the length of the sequence. An LSTM (Hochreiter and Schmidhuber, 1997) then serves as the type predictor, taking $T$ as input: $\tau_{l+1} = \mathrm{TP}(T) = \mathrm{LSTM}(\tau_1, \tau_2, \ldots, \tau_l)$. (3) Other neural networks, such as the Transformer (Vaswani et al., 2017), could also be applied; we leave this exploration to future work. Type Predictor Logit. To mitigate the negative impact of watermarking on code utility, the type predictor must be leveraged during watermark insertion, i.e., during LLM decoding. This requires transforming its predictions into a logit that can be added onto the model logits; we call this new logit the type predictor logit $L_{TP}$. The type predictor logit is a probability distribution over the tokens in vocabulary $\mathcal{V}$. We therefore construct a dictionary in advance that associates each lexical token type with the LLM tokens belonging to that type. For instance, the KEYWORD lexical type encompasses LLM tokens such as "def", "if", and "else", while the PUNCTUATION lexical type includes LLM tokens such as "(", ")", ";", "*", and so forth. We denote this dictionary by $\Phi: \mathcal{T} \mapsto \mathcal{V}$.
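A minimal sketch of this lexical token-type predictor, and of how its prediction could be turned into a logit over the LLM vocabulary via the dictionary $\Phi$, is shown below. The embedding and hidden sizes follow the implementation details reported later (64 and 128); the number of lexical types and the toy dictionary used in the example are assumptions for illustration.

```python
import torch
import torch.nn as nn


class TypePredictor(nn.Module):
    """LSTM over the sequence of lexical token types (Eq. 3). Sizes follow the
    reported implementation details: embedding dim 64, hidden dim 128."""

    def __init__(self, num_types: int, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(num_types, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_types)

    def forward(self, type_ids: torch.Tensor) -> torch.Tensor:
        # type_ids: (batch, l) indices of the lexical types tau_1 .. tau_l
        out, _ = self.lstm(self.embed(type_ids))
        return self.head(out[:, -1])  # logits over the type of tau_{l+1}


def type_predictor_logits(predicted_type: int, type_to_tokens: dict, vocab_size: int) -> torch.Tensor:
    """Turn the predicted lexical type into L_TP over the LLM vocabulary, using a
    dictionary Phi that maps each lexical type to its corresponding LLM tokens."""
    logits = torch.zeros(vocab_size)
    logits[list(type_to_tokens[predicted_type])] = 1.0
    return logits


# Hypothetical usage with 12 lexical types and a toy Phi dictionary.
tp = TypePredictor(num_types=12)
pred_type = int(torch.argmax(tp(torch.tensor([[2, 0, 3, 1]]))))
l_tp = type_predictor_logits(pred_type, {t: [t] for t in range(12)}, vocab_size=50000)
```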
Thus, $L_{TP}$ can be calculated as $L_{TP} = \log p_{TP}(w_i \mid [\rho; G_{1:i}]) = \begin{cases} 1, & w_i \in \Phi(\tau_{l+1}) \\ 0, & w_i \notin \Phi(\tau_{l+1}) \end{cases}$ (4) Here, $\rho$ is the prompt given to the LLM, $G$ is the code generated so far, and the LLM is currently generating the $i$-th token $w_i$. 3.3 Combining Them All We now present the watermark insertion formula corresponding to Figure 2, i.e., the final watermark embedding rule: $w_i = \arg\max_{w \in \mathcal{V}} \{ L_{LM} + \beta L_{WM} + \gamma L_{TP} \}$. (5) Following the settings established in the preceding sections, we assume the LLM is generating the $i$-th token; $\beta$ and $\gamma$ are hyperparameters weighting the watermark logit $L_{WM}$ and the type predictor logit $L_{TP}$ relative to the model logit $L_{LM}$. 3.4 Watermark Extraction In the watermarking phase, we employ $L_{WM}$ to insert the watermark message $m$ into the output $G$. Our extraction strategy enumerates all possible messages, recreates the watermark insertion process, and selects the candidate that maximizes the accumulated watermark logit: $m = \arg\max_{m'} \left\{ \sum_{i=1}^{L} \log p_{WM}(w_i \mid m', w_{i-1}) \right\}$, (6) where $L$ denotes the length of the token sequence of the generated code $G$. 4 Experimental Setup 4.1 LLMs and Dataset To validate the effectiveness of CODEIP, we choose three prominent LLMs as target models: Code Llama (Roziere et al., 2023), StarCoder (Li et al., 2023a), and DeepSeek Coder (Bi et al., 2024), and insert watermarks into the code they generate. These models exist in different versions with varying sizes; limited by computational resources, we employ the 7B variants. We select Java, Python, Go, JavaScript, and PHP from the CodeSearchNet (Husain et al., 2019) dataset and use the docstrings and function declarations as prompts. For each prompt, the LLMs generate the next 200 tokens. We do not adopt the HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021) datasets for evaluation because their code is generally too short to be suitable for inserting watermarks; the relationship between generated code length and extraction rate is studied in Sec. 5.3. 4.2 Implementation Details The default hyperparameters are configured as follows. For all three LLMs, we use a temperature of 0.75, a repetition penalty of 1.2, and a no-repeat n-gram size of 10. Given the distinct training processes of the LLMs, we set $(\beta, \gamma)$ to (5, 3) for Code Llama and StarCoder and to (6, 4) for DeepSeek Coder. The type predictor is an LSTM with an embedding dimensionality of 64 and a hidden state dimensionality of 128. We train a separate type predictor for each language, given the distinct grammatical structures inherent to each language. [Table 1: Watermark extraction rate ("WM": watermark, "TP": type predictor); columns are Java / Python / Go / JavaScript / PHP. Code Llama, w/ WM + w/o TP: 0.90 / 0.93 / 0.87 / 0.98 / 0.97; w/ WM + w/ TP: 0.92 / 0.93 / 0.86 / 1.00 / 0.97. StarCoder, w/ WM + w/o TP: 0.88 / 0.98 / 0.90 / 0.97 / 0.96; w/ WM + w/ TP: 0.86 / 0.97 / 0.87 / 0.96 / 0.96. DeepSeek Coder, w/ WM + w/o TP: 0.99 / 0.95 / 0.87 / 1.00 / 1.00; w/ WM + w/ TP: 0.99 / 1.00 / 0.91 / 1.00 / 1.00.]
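To make the pieces above concrete, here is a minimal, self-contained sketch of the hash-based vocabulary selection and watermark logit (Eq. 2), one greedy decoding step combining the three logits (Eq. 5), and brute-force message extraction (Eq. 6). The SHA-256 hash and the explicitly enumerated candidate-message space are assumptions for illustration, and the (beta, gamma) defaults mirror the Code Llama / StarCoder setting reported above.

```python
import hashlib
import torch


def hash_select(token_id: int, message: int, prev_token_id: int) -> int:
    """Map (candidate token, watermark message, previous token) to 0 or 1.
    SHA-256 is an assumed choice; the method only requires a binary hash H."""
    key = f"{token_id}|{message}|{prev_token_id}".encode()
    return hashlib.sha256(key).digest()[0] & 1


def watermark_logits(vocab_size: int, message: int, prev_token_id: int) -> torch.Tensor:
    """L_WM over the vocabulary (Eq. 2): 1 for hash-selected tokens, 0 otherwise."""
    return torch.tensor(
        [float(hash_select(w, message, prev_token_id)) for w in range(vocab_size)]
    )


def decode_step(lm_logits, tp_logits, message, prev_token_id, beta=5.0, gamma=3.0):
    """One greedy decoding step from Eq. 5: argmax of L_LM + beta*L_WM + gamma*L_TP."""
    wm = watermark_logits(lm_logits.numel(), message, prev_token_id)
    return int(torch.argmax(lm_logits + beta * wm + gamma * tp_logits))


def extract_message(token_ids, vocab_size, candidate_messages):
    """Brute-force extraction from Eq. 6: recreate the watermark logits for every
    candidate message and keep the one that best explains the generated tokens."""
    def score(m):
        return sum(
            watermark_logits(vocab_size, m, prev)[cur].item()
            for prev, cur in zip(token_ids, token_ids[1:])
        )
    return max(candidate_messages, key=score)
```

Enumerating every candidate message is only feasible for small message spaces; it is used here purely to mirror the arg-max in Eq. 6.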
All the experiments in this paper are conducted on a Linux server with 128GB of memory and a single 32GB Tesla V100 GPU. 4.3 Evaluation Metrics To evaluate the effectiveness of watermarking, one objective is to assess whether the watermark can be detected in the generated code. Specifically, we select 100 functions from the dataset for each programming language and extract the docstring and declaration of each function to serve as prompts for LLM generation. We employ the extraction rate of watermarks, i.e., the percentage of watermarks successfully extracted from the watermarked code, as the measure of watermarking efficacy. To validate the utility of watermarked code, we adopt the CodeBLEU (Ren et al., 2020) metric, which has been widely used in the evaluation of code generation. We do not adopt the Pass@k metric (Chen et al., 2021), commonly used to evaluate LLMs for code generation, because test cases are missing from the CodeSearchNet dataset we use. 5 Results and Analysis 5.1 Extraction Rate of Watermarks Table 1 compares the different watermarking strategies. Under both strategies, the extraction rates consistently surpass 0.90 on most programming languages, indicating the efficacy of our watermarking techniques in the context of LLMs for code generation. Taking DeepSeek Coder as a case in point, our watermarking strategy, both with and without the type predictor, achieves an extraction rate of 0.99 for Java and 1.00 for PHP. These results are consistent with our expectations, as the type predictor is designed to prioritize preserving the utility of the generated code. 5.2 Watermark vs Code Quality We further explore the impact of the watermarking strategies on the utility of generated code. Table 2 reports the performance of the different LLMs when paired with the distinct logits ("w/ WM + w/o TP" and "w/ WM + w/ TP"), measured by the CodeBLEU score. It is evident that using the watermark logit alone leads to a notable decrease in CodeBLEU across models and languages, and that subsequently incorporating the type predictor logit yields a distinct recovery in CodeBLEU under most settings. Notably, in Java, Go, and JavaScript, the drop in CodeBLEU when applying both logits (i.e., watermark logit and type predictor logit) is only about half as large as that caused by the watermark logit alone. This underscores the efficacy of the type predictor in preserving code semantics. 5.3 Parameter Analysis The impact of parameter $\beta$. We measure how the extraction rate varies as we adjust $\beta$ for the three LLMs on two programming languages, Java and Go, as shown in Figure 4. As $\beta$ increases, the extraction rate of watermarks keeps rising; once $\beta$ exceeds 5, an extraction rate of approximately 0.9 is reached, which is satisfactory. This indicates that the watermark logit has a positive effect on whether watermarks can be detected. The impact of parameter $\gamma$. We also conduct experiments on the three LLMs by varying the $\gamma$ value, investigating its effect on the generated code for the same two programming languages, Java and Go.
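A minimal sketch of the evaluation loop implied by this setup, i.e., computing the extraction rate over a set of prompts while sweeping a logit weight such as beta or gamma, is given below. The `generate_watermarked` and `extract` callables are hypothetical stand-ins passed in by the caller, not real APIs.

```python
def extraction_rate(pairs):
    """Fraction of (embedded, extracted) message pairs that match; this mirrors the
    extraction-rate metric computed over 100 prompts per language."""
    return sum(embedded == extracted for embedded, extracted in pairs) / len(pairs)


def sweep_logit_weight(prompts, message, weights, generate_watermarked, extract):
    """Sketch of the parameter sweep described above: regenerate watermarked code at
    each weight (e.g., beta or gamma) and record the resulting extraction rate.
    Both callables are assumed stand-ins for the insertion and extraction procedures."""
    results = {}
    for w in weights:
        pairs = [(message, extract(generate_watermarked(p, message, weight=w))) for p in prompts]
        results[w] = extraction_rate(pairs)
    return results
```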
The experimental results, as depicted in Figure 5, reveal a noteworthy trend: increasing $\gamma$ initially improves the quality of the generated code visibly, but beyond a certain threshold a discernible decline in CodeBLEU becomes evident. [Table 2: CodeBLEU scores for different models with different strategies; the value in parentheses is the quality gap relative to non-watermarked code. Columns are Java / Python / Go / JavaScript / PHP. Code Llama, w/o WM + w/o TP: 28.99 / 22.56 / 31.73 / 23.01 / 44.56; w/ WM + w/o TP: 23.35 (-5.64) / 12.04 (-10.52) / 22.44 (-9.29) / 16.47 (-6.54) / 40.47 (-4.09); w/ WM + w/ TP: 27.14 (-1.85) / 12.25 (-10.31) / 26.49 (-5.24) / 20.83 (-2.18) / 40.61 (-3.95). StarCoder, w/o WM + w/o TP: 39.16 / 17.74 / 27.61 / 24.06 / 42.60; w/ WM + w/o TP: 25.70 (-13.46) / 17.60 (-0.14) / 13.39 (-14.22) / 15.25 (-8.81) / 40.11 (-2.49); w/ WM + w/ TP: 32.11 (-7.05) / 18.16 (+0.42) / 17.55 (-10.06) / 19.18 (-4.88) / 40.14 (-2.46). DeepSeek Coder, w/o WM + w/o TP: 32.10 / 19.68 / 33.10 / 23.97 / 42.29; w/ WM + w/o TP: 25.55 (-6.55) / 18.35 (-1.33) / 26.93 (-6.17) / 17.88 (-6.09) / 43.40 (+1.11); w/ WM + w/ TP: 31.22 (-0.88) / 13.57 (-6.11) / 29.32 (-3.78) / 19.65 (-4.32) / 43.40 (+1.11).] [Figure 4: Impact of parameter $\beta$ on the extraction (detection) rate for (a) Java and (b) Go, for Code Llama, StarCoder, and DeepSeek Coder.] One plausible explanation for this inconsistency is the mismatch between the tokenization methods used by LLMs (e.g., WordPiece (Schuster and Nakajima, 2012) and BPE (Sennrich et al., 2015)) and those used by lexers. For example, the LLM subtokens "ran" and "ge" combine into the single lexical token "range", which is recognized as such during lexical analysis. If the generated code so far is "for i in ran", the most likely next LLM subtoken is "ge", which would complete the code as "for i in range". From the lexer's perspective, however, "ran" may already be classified as a complete "NAME" token, so the predicted type for the next lexical token becomes "PUNCTUATION"; the type predictor logit then biases the LLM toward ":", turning the generation into "for i in ran:". This contradiction between the segmentation of the LLM tokenizer and that of lexical analysis can degrade performance when $\gamma$ is high. The impact of generated code length. We also investigate the influence of generated code length, measured as the number of tokens produced, on the effectiveness of watermark insertion. [Figure 5: Impact of parameter $\gamma$ on CodeBLEU for (a) Java and (b) Go, for Code Llama, StarCoder, and DeepSeek Coder.] [Figure 6: Impact of generated code length on the extraction (detection) rate for (a) Java and (b) Go.] Our findings reveal a positive correlation between code length and the successful extraction rate, as depicted in Figure 6. This underscores that the successful extraction rate of our watermark remains contingent on the length of the generated code.
Specifically, shorter generated code reduces the distinction between watermarked and non-watermarked code, making it harder to detect watermarks in such code. 5.4 Resistance to Crop Attack To underscore the robustness of our watermarking strategies, we consider a hypothetical scenario where developers use only a portion, rather than the entirety, of the generated code to undermine the watermark, a situation termed a "Crop Attack". This involves subjecting the generated code to crop rates of 0.25 and 0.5, i.e., removing 25% and 50% of the code, respectively. The results are presented in Table 3. [Table 3: The performance of CODEIP against crop attack (extraction rate at crop rates 0, 0.25, and 0.50); columns are Java / Python / Go / JS / PHP. Code Llama, rate 0: 0.92 / 0.93 / 0.86 / 1.00 / 0.97; rate 0.25: 0.89 / 0.95 / 0.75 / 0.96 / 0.94; rate 0.50: 0.71 / 0.85 / 0.51 / 0.87 / 0.87. StarCoder, rate 0: 0.86 / 0.97 / 0.87 / 0.96 / 0.96; rate 0.25: 0.81 / 0.95 / 0.85 / 0.93 / 0.95; rate 0.50: 0.63 / 0.96 / 0.79 / 0.85 / 0.92. DeepSeek Coder, rate 0: 0.99 / 1.00 / 0.91 / 1.00 / 1.00; rate 0.25: 0.98 / 0.99 / 0.77 / 0.94 / 0.95; rate 0.50: 0.91 / 0.90 / 0.56 / 0.90 / 0.87.] The table shows that, in most cases, the watermark's effectiveness experiences only a slight reduction under such rigorous attacks, indicating that our watermark exhibits notable resistance to crop attacks and demonstrating its inherent robustness. 6 Related Work LLM-based Code Generation. The roots of code generation can be traced back several decades (Backus et al., 1957; Waldinger and Lee, 1969; Manna and Waldinger, 1971). Recently, LLMs pre-trained on code, such as DeepSeek Coder (Bi et al., 2024), Code Llama (Roziere et al., 2023), CodeGen (Nijkamp et al., 2022), StarCoder (Li et al., 2023a), and CodeGeeX2 (Zheng et al., 2023), have emerged as dominant forces in code generation. Leveraging these LLMs, several commercial tools are reshaping the programming landscape for developers, including GPT-3.5 (OpenAI, 2023), Gemini (Google, 2024), GitHub Copilot (Microsoft, 2024), and Tabnine (Tabnine, 2024). Software Watermarking. Software watermarking has been studied since 1996, when Davidson and Myhrvold (1996) altered code block or operand order to insert watermarks. Qu and Potkonjak (1998) proposed a software watermarking method based on the graph coloring problem and the graph structure of the code, which was further developed by Myles and Collberg (2004), Zhu and Thomborson (2006), and Jiang et al. (2009). These early rule-based methods are often constrained by their usage scenarios and by various attack techniques. Stern et al. (2000) also proposed a methodology that transforms and reorganizes code to uphold semantic integrity while resisting reverse engineering. Recently, several works (Yang et al., 2023; Li et al., 2023b) have focused on watermarking the code generated by LLMs. They use a post-processing approach, inserting watermarks through transformations applied to the code after it is generated by the model. However, these techniques have several limitations, including their specificity to a single language and their susceptibility to counterfeiting once the watermarking method is disclosed, which restricts their applicability. Machine Generated Text Identification. The task of identifying machine-generated text has always been of paramount importance.
An intuitive approach is to treat it as a binary classification task and train a model accordingly (Solaiman et al., 2019; Bakhtin et al., 2019). Another approach is to identify model-generated text by detecting characteristic features of the generated text; Tay et al. (2020) distinguished texts by detecting artifacts left by the generation process, such as the sampling method or top-k probabilities. In 2023, Kirchenbauer et al. (2023) introduced a novel method that inserts watermarks into text during model inference: a hash function and a random number generator divide candidate tokens into groups, allowing watermark extraction by anyone aware of the rule. Lee et al. (2023) extended this method to code generation with threshold-controlled watermark inclusion." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.13594v1", |
| "title": "Lost in Space: Probing Fine-grained Spatial Understanding in Vision and Language Resamplers", |
| "abstract": "An effective method for combining frozen large language models (LLM) and\nvisual encoders involves a resampler module that creates a `visual prompt'\nwhich is provided to the LLM, along with the textual prompt. While this\napproach has enabled impressive performance across many coarse-grained tasks\nlike image captioning and visual question answering, more fine-grained tasks\nthat require spatial understanding have not been thoroughly examined. In this\npaper, we use \\textit{diagnostic classifiers} to measure the extent to which\nthe visual prompt produced by the resampler encodes spatial information. Our\nresults show that this information is largely absent from the resampler output\nwhen kept frozen during training of the classifiers. However, when the\nresampler and classifier are trained jointly, we observe a significant\nperformance boost. This shows that the compression achieved by the resamplers\ncan in principle encode the requisite spatial information, but that more\nobject-aware objectives are needed at the pretraining stage to facilitate this\ncapability", |
| "authors": "Georgios Pantazopoulos, Alessandro Suglia, Oliver Lemon, Arash Eshghi", |
| "published": "2024-04-21", |
| "updated": "2024-04-21", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV", |
| "cs.AI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "An effective method for combining frozen large language models (LLM) and\nvisual encoders involves a resampler module that creates a `visual prompt'\nwhich is provided to the LLM, along with the textual prompt. While this\napproach has enabled impressive performance across many coarse-grained tasks\nlike image captioning and visual question answering, more fine-grained tasks\nthat require spatial understanding have not been thoroughly examined. In this\npaper, we use \\textit{diagnostic classifiers} to measure the extent to which\nthe visual prompt produced by the resampler encodes spatial information. Our\nresults show that this information is largely absent from the resampler output\nwhen kept frozen during training of the classifiers. However, when the\nresampler and classifier are trained jointly, we observe a significant\nperformance boost. This shows that the compression achieved by the resamplers\ncan in principle encode the requisite spatial information, but that more\nobject-aware objectives are needed at the pretraining stage to facilitate this\ncapability", |
| "main_content": "Introduction Recent approaches for developing Vision and Language (V&L) models leverage existing vision (Radford et al., 2021; Fang et al., 2023b,a), and language experts (Touvron et al., 2023a; Zhang et al., 2022; Touvron et al., 2023b) and try to learn a mapping between them (Alayrac et al., 2022; Li et al., 2023b; Dai et al., 2023; You et al., 2023; Liu et al., 2023c,b). In most cases, the experts are kept frozen while the only learnable component is the mapping between the visual and the language expert. The simplest approach uses a linear projection layer that matches the dimensionality of the visual and textual embeddings before feeding them to the LLM (Liu et al., 2023c,b). A more sophisticated 1Code available here \u2744Resampler Probe The phrase \u2018Strawberries and cream in an fruit/snacks tray\u2019 refers to the top left part of the image. \u2744Text Embeddings TRUE \u2744Resampler Probe Locate the region that is described by: \u2018Strawberries and cream in an fruit/snacks tray\u2019. 0.32 0.25 0.54 0.39 Latent Queries \u2744Vision Encoder Latent Queries \u2744Text Embeddings Figure 1: Explicit (left) and implicit (right) probing for spatial understanding. In the explicit setting, we probe for region localization, while in the implicit setting, the probe is trained to classify whether a description involving an image region is true of the image. method is to use a resampler to compress the visual embeddings into a compact \u2018visual prompt\u2019 that is then fed to the LLM either at the input level along with the text prompt (Li et al., 2023b; Dai et al., 2023) or via cross attention layers (Alayrac et al., 2022; Li et al., 2023a). From a practical standpoint, the resampler may accelerate training and inference as it significantly reduces the sequence length, but also facilitates in-context learning capabilities since additional examples can fit into the context window of the LLM. As a result, these approaches have demonstrated impressive performance across multiple \u2018coarse-grained\u2019 tasks such as image captioning, and visual question answering. However, fine-grained tasks such as visual grounding and spatial understanding are relatively underexplored. Resamplers are usually pretrained on pairs of image-text data using contrastive learning (Li et al., 2023b; Dai et al., 2023), and/or multimodal masked language modeling (Lauren\u00e7on et al., 2023; Alayrac et al., 2022), without relying on object-aware objectives. Given the importance of resamplers for the development of V&L models, we ask whether this compression preserves arXiv:2404.13594v1 [cs.CV] 21 Apr 2024 \ffine-grained spatial information. Do the contrastive and language modeling objectives retain the overall scene structure, or is this information lost due to the absence of object-aware pretraining objectives? To address these questions, we train diagnostic classifiers to probe two different resampler modules for explicit and implicit spatial understanding \u2014 see Figure 1. Our results indicate that the multimodal resamplers do not facilitate spatial understanding. Nevertheless, in all settings, jointly fine-tuning the diagnostic classifiers and the resamplers significantly boosts performance, demonstrating that the compression achieved by the resamplers can in principle encode the requisite spatial information, but that more object-aware pretraining objectives are needed to facilitate this. 
2 Related Work Resamplers The idea of the resampler is inspired primarily by computer vision, where an attention mechanism is used to compress visual features into learnable queries (often referred to as slots) (Carion et al., 2020; Kamath et al., 2021; Locatello et al., 2020). More recently, resamplers have been applied to more multimodal tasks. Flamingo (Alayrac et al., 2022) and subsequent open-source variants (Lauren\u00e7on et al., 2023; Li et al., 2023a) are based on the Perceiver Resampler (Jaegle et al., 2022), with cross-attention between the latent queries and the visual embeddings followed by a stack of selfattention blocks that operate on the latent queries. In the Q-Former (Li et al., 2023b; Dai et al., 2023), the latent queries are also informed by the input text and, therefore, create a more \u2018linguistically informed\u2019 visual prompt. Probing Probing is a class of methods for interpreting neural models by assessing whether the model representations encode specific kinds of information at different processing stages (Belinkov, 2022). The concept of probing is straightforward; we extract representations from a model that is already trained on some task(s), and use a lightweight diagnostic classifier on top of these representations to solve a probing task that reflects the information that we seek to find. The classifier\u2019s performance is then taken to correlate with the extent to which that information is encoded by the model (Conneau et al., 2018; Hupkes et al., 2018). Many within (multimodal) NLP have thus adopted probing to interpret model behavior (Kajic and Nematzadeh, 2022; Salin et al., 2022; Lindstr\u00f6m et al., 2020). 3 Experiments Is spatial understanding a property of V&L resamplers? We experiment with three different spatial understanding tasks. In RefCOCOg (Mao et al., 2016), the objective is to predict the coordinates of the region that is described by the input phrase. Secondly, we use the \u2018random split\u2019 from the VSR dataset (Liu et al., 2023a), where the model has to assess the validity of a caption describing a spatial relationship between two entities. Finally, we introduce the Region Cell Matching (RCM) task, which follows the VSR formulation but is designed to test for a more rudimentary form of spatial understanding regarding the location of one entity in the image. Inspired by CAPTCHAs, an image is divided into a 3x3 grid, and each grid cell is assigned a location description (such as top left or middle). We generate synthetic captions by combining RefCOCOg descriptions with the cell location as shown in the implicit probing example of Figure 1. To ensure that performance is not influenced by frequency biases, we balanced the distribution of positive and negative examples. Appendix A contains further details about the dataset. In our experiments, we use the Q-Former from the first pretraining stage of BLIP2 (Li et al., 2023b) and InstructBLIP (Dai et al., 2023). To probe the resamplers, we follow past work (Belinkov, 2022) and use a single linear layer after flattening the embeddings of the query tokens. For RefCOCOg, the linear layer predicts the normalized coordinates of the region that matches the referring expression. We use the bounding box loss from (M)DETR (Carion et al., 2020; Kamath et al., 2021): a weighted sum of the Generalised IoU and L1 losses. Similarly, for VSR and the RCM task, we use a linear layer that predicts the probability that the query matches the image trained using binary cross entropy. 
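The probing setup just described can be sketched as follows, assuming the resampler yields 32 query tokens of dimension 768 (typical for BLIP-2-style Q-Formers, but an assumption here). The RefCOCOg head predicts four normalized box coordinates trained with a weighted GIoU + L1 loss, while the VSR/RCM head is a single logit trained with binary cross entropy; the loss weights below are illustrative rather than the values found by the hyperparameter search.

```python
import torch
import torch.nn as nn
from torchvision.ops import generalized_box_iou_loss

NUM_QUERIES, HIDDEN = 32, 768                    # assumed Q-Former output shape
box_probe = nn.Linear(NUM_QUERIES * HIDDEN, 4)   # RefCOCOg: normalized (cx, cy, w, h)
cls_probe = nn.Linear(NUM_QUERIES * HIDDEN, 1)   # VSR / RCM: true-or-false logit


def to_corners(boxes):
    """Convert (cx, cy, w, h) boxes to (x1, y1, x2, y2) for the GIoU term."""
    cx, cy, w, h = boxes.unbind(-1)
    return torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=-1)


def box_loss(pred, target, w_giou=2.0, w_l1=5.0):
    """Weighted GIoU + L1 box loss in the spirit of (M)DETR; the weights here are
    illustrative, not the tuned values from the paper."""
    giou = generalized_box_iou_loss(to_corners(pred), to_corners(target), reduction="mean")
    return w_giou * giou + w_l1 * nn.functional.l1_loss(pred, target)


def probe_step(query_embeds, box_target=None, label=None):
    """One probing step on resampler outputs of shape (batch, NUM_QUERIES, HIDDEN)."""
    flat = query_embeds.flatten(1)
    if box_target is not None:                    # explicit probing (RefCOCOg)
        return box_loss(box_probe(flat).sigmoid(), box_target)
    return nn.functional.binary_cross_entropy_with_logits(  # implicit probing (VSR, RCM)
        cls_probe(flat).squeeze(-1), label.float())
```

Keeping the resampler frozen and updating only these heads corresponds to the probing condition; unfreezing the resampler and backpropagating through it gives the joint-training upper bound discussed next.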
We tune the learning rate, number of epochs, and loss weights (the latter only for RefCOCOg) using Bayesian hyperparameter optimization (Bergstra et al., 2013) for at least ten iterations. For further implementation details, see Appendix B. In all cases, we evaluate the best model in terms of validation performance. We compare the two resamplers against similarly-sized models that employ patch representations. We avoid comparison against models with object-centric visual encoding because visual grounding is significantly easier for such models: they only need to select the correct candidate bounding box provided by the detector, as opposed to predicting an image region explicitly. [Table 1: Linear probing results on RefCOCOg, VSR random, and RCM (Validation / Test per task). Frozen Q-Former: 30.39/30.26, 66.91/64.97, 70.12/69.49; unfrozen Q-Former: 71.47/71.72, 80.86/80.50, 81.68/81.35; frozen InstructBLIP (IBLIP) Q-Former: 20.00/19.92, 58.07/55.72, 64.58/63.08; unfrozen IBLIP Q-Former: 68.89/69.34, 78.40/76.99, 83.11/80.86; ViLT (Kim et al., 2021): 69.14/68.93, 71.38/71.53, 83.16/83.25. Reference rows are reported only for the tasks they apply to, with values as in the original table: Random 50.00 (chance level), Human 95.40 / 92.29, MDETR (Kamath et al., 2021) 83.35 / 83.31 on RefCOCOg, CLIP (Radford et al., 2021) 56.0 on VSR (result from Liu et al. (2023a)), Unitab (Yang et al., 2022) 84.58 / 84.70 on RefCOCOg.] [Figure 2: (a) VSR accuracy per intermediate layer (layers 1-11) for the frozen Q-Former and frozen IBLIP Q-Former; (b) RefCOCOg Accuracy@IoU0.5 per MSCOCO super-category for the frozen and unfrozen Q-Former, with the unfrozen variant well ahead on every super-category.] Additionally, we provide results where the linear classifier is trained jointly with the resampler, as an upper bound on the performance attainable with frozen representations. Table 1 shows the results for the models we consider. Both resamplers perform poorly on RefCOCOg when kept frozen and are therefore unable to perform explicit visual grounding. A possible counter-argument is that predicting raw coordinates in the image is too difficult for a single linear layer. However, we observe similar trends on VSR and RCM, which test for spatial understanding in an easier binary classification setup. While the resamplers beat the random baselines on these tasks, there is a significant gap between the performance of the frozen and fine-tuned backbones. We believe this is an outcome of the Q-Former's pretraining objectives, which do not explicitly encourage fine-grained object-centric representations. This is in line with previous work, which found that V&L models trained with contrastive objectives act as bags-of-words and do not preserve spatial information (Yuksekgonul et al., 2022). On the other hand, the significant boost achieved by unfreezing the resamplers shows that the compression of the input embeddings can, in principle, capture spatial information and, therefore, that the resampler as an architectural choice does not necessarily constitute a bottleneck. Is spatial information encoded in earlier layers but discarded in deeper layers?
We previously observed that resamplers perform poorly on spatial understanding tasks when using representations from the last layer. [Table 2: VSR accuracy per category of spatial relationship (Adjacency / Directional / Orientation / Projective / Proximity / Topological / Unallocated). Frozen Q-Former: 61.94 / 42.05 / 56.93 / 62.87 / 60.15 / 74.56 / 68.42; unfrozen Q-Former: 68.86 / 75.00 / 67.15 / 78.29 / 81.95 / 83.94 / 72.37; frozen IBLIP Q-Former: 57.44 / 38.64 / 58.39 / 54.21 / 40.60 / 66.14 / 52.63; unfrozen IBLIP Q-Former: 62.98 / 68.18 / 67.88 / 74.61 / 78.95 / 83.15 / 77.63.] Next, we examine whether representations from intermediate layers encode spatial information better. Intuitively, earlier layers could yield higher probing performance as they are closer to the visual encoder's output. Figure 2a shows the VSR results when probing intermediate-layer representations. Overall, intermediate layers do not provide performance gains: the BLIP2 Q-Former shows a clear upward trend across layers, whereas InstructBLIP fluctuates within a small range. A similar trend is observed in the RefCOCOg results, included in Appendix C. Scaling the Probing Classifier Additionally, we experiment with scaling the probing classifier by introducing non-linearities. In particular, we use 2-layer and 4-layer classifiers with SwiGLU activation functions. We refrain from using more complex classifiers because they may infer features that are not actually used by the underlying model (Hupkes et al., 2018). For training, we use the same setup as in our previous experiments. Table 3 reports the results with increasing probe complexity. [Table 3: Probing results when scaling the probing classifier (RefCOCOg / VSR random / RCM). Frozen Q-Former with 1 / 2 / 4 probe layers: RefCOCOg 30.26 / 32.08 / 34.49, VSR random 64.97 / 65.15 / 65.01, RCM 69.49 / 69.98 / 70.71. Frozen IBLIP Q-Former with 1 / 2 / 4 probe layers: RefCOCOg 19.92 / 25.01 / 34.49, VSR random 55.72 / 58.09 / 59.09, RCM 63.08 / 68.66 / 69.29.] While performance tends to increase as the probe becomes more complex, the accuracy of the non-linear probes does not indicate that the resampler encodes spatial information that can be easily retrieved. Moreover, the performance gap between the simplest and the most complex probe in the case of InstructBLIP suggests that fine-grained spatial understanding is 'built up' within the probe rather than being a property of the resampler component itself. 3.1 Discussion Performance analysis per object category Figure 2b illustrates the Q-Former's performance on RefCOCOg per MSCOCO (Lin et al., 2014) super-category. We observe that the frozen and unfrozen resamplers behave differently and also show significant variation across object categories. To further understand the possible reasons for this variation, we computed the Kendall coefficient (Kendall, 1938) between the performance of each super-category and 1) the distribution of training examples, 2) the area of each bounding box, and 3) the distance of the bounding box from the center of the image (Table 5). Interestingly, the main factor that correlates positively with per-category performance is the area of the bounding box. We also observe that the further the bounding box deviates from the center, the more the performance drops. These two observations imply that the Q-Former constructs the visual prompt by 'summarizing' the most central entities within an image, ignoring positional outliers.
Which spatial relationships are difficult to capture? In Table 2, we break down the VSR results by spatial relationship type. Both resamplers perform best on topological relations across frozen/unfrozen conditions. Directional relations seem challenging for out-of-the-box resamplers, though this relation can be captured during fine-tuning. Finally, captions describing adjacency or orientation properties are difficult even for fine-tuned resamplers. Effect of learning objectives We showed that multimodal resamplers pretrained with contrastive learning and multimodal language modeling objectives do not capture spatial information well. These are undoubtedly important objectives, as they enable large-scale pretraining; however, on their own, they are not sufficient for enabling fine-grained spatial understanding. Finally, we observed that BLIP-2's Q-Former consistently outperformed the one from InstructBLIP. However, as shown in Figure 2a, the performance of the two resamplers is comparable at early layers. We hypothesize that during instruction tuning, the InstructBLIP Q-Former may get away with providing even less fine-grained information, since the language modeling loss is already low thanks to the high-quality LLM, leading to a forgetting effect (McCloskey and Cohen, 1989)." |
| } |
| ] |
| } |