SlowGuess commited on
Commit
b29e566
Β·
verified Β·
1 Parent(s): a603e0f

Add Batch db689155-8957-43d6-9165-580cb0e638e3

Browse files
This view is limited to 50 files because it contains too many changes. Β  See raw diff
Files changed (50) hide show
  1. aadllmneuralattentiondrivenauditorysceneunderstanding/ac7deae7-5115-4287-9a22-3a685a04aa5e_content_list.json +3 -0
  2. aadllmneuralattentiondrivenauditorysceneunderstanding/ac7deae7-5115-4287-9a22-3a685a04aa5e_model.json +3 -0
  3. aadllmneuralattentiondrivenauditorysceneunderstanding/ac7deae7-5115-4287-9a22-3a685a04aa5e_origin.pdf +3 -0
  4. aadllmneuralattentiondrivenauditorysceneunderstanding/full.md +669 -0
  5. aadllmneuralattentiondrivenauditorysceneunderstanding/images.zip +3 -0
  6. aadllmneuralattentiondrivenauditorysceneunderstanding/layout.json +3 -0
  7. abgenevaluatinglargelanguagemodelsinablationstudydesignandevaluationforscientificresearch/ef01d945-3d25-4e5a-8482-29580b3918b0_content_list.json +3 -0
  8. abgenevaluatinglargelanguagemodelsinablationstudydesignandevaluationforscientificresearch/ef01d945-3d25-4e5a-8482-29580b3918b0_model.json +3 -0
  9. abgenevaluatinglargelanguagemodelsinablationstudydesignandevaluationforscientificresearch/ef01d945-3d25-4e5a-8482-29580b3918b0_origin.pdf +3 -0
  10. abgenevaluatinglargelanguagemodelsinablationstudydesignandevaluationforscientificresearch/full.md +492 -0
  11. abgenevaluatinglargelanguagemodelsinablationstudydesignandevaluationforscientificresearch/images.zip +3 -0
  12. abgenevaluatinglargelanguagemodelsinablationstudydesignandevaluationforscientificresearch/layout.json +3 -0
  13. accuratekvcachequantizationwithoutliertokenstracing/7d3e80b6-35e3-4635-8f5c-fd70948c468f_content_list.json +3 -0
  14. accuratekvcachequantizationwithoutliertokenstracing/7d3e80b6-35e3-4635-8f5c-fd70948c468f_model.json +3 -0
  15. accuratekvcachequantizationwithoutliertokenstracing/7d3e80b6-35e3-4635-8f5c-fd70948c468f_origin.pdf +3 -0
  16. accuratekvcachequantizationwithoutliertokenstracing/full.md +548 -0
  17. accuratekvcachequantizationwithoutliertokenstracing/images.zip +3 -0
  18. accuratekvcachequantizationwithoutliertokenstracing/layout.json +3 -0
  19. acecoderacingcoderrlviaautomatedtestcasesynthesis/f89dd568-8d6d-4f88-b16c-6cd5a42987ae_content_list.json +3 -0
  20. acecoderacingcoderrlviaautomatedtestcasesynthesis/f89dd568-8d6d-4f88-b16c-6cd5a42987ae_model.json +3 -0
  21. acecoderacingcoderrlviaautomatedtestcasesynthesis/f89dd568-8d6d-4f88-b16c-6cd5a42987ae_origin.pdf +3 -0
  22. acecoderacingcoderrlviaautomatedtestcasesynthesis/full.md +396 -0
  23. acecoderacingcoderrlviaautomatedtestcasesynthesis/images.zip +3 -0
  24. acecoderacingcoderrlviaautomatedtestcasesynthesis/layout.json +3 -0
  25. acordanexpertannotatedretrievaldatasetforlegalcontractdrafting/831455f6-9e27-4933-a005-1bc4bfd9073f_content_list.json +3 -0
  26. acordanexpertannotatedretrievaldatasetforlegalcontractdrafting/831455f6-9e27-4933-a005-1bc4bfd9073f_model.json +3 -0
  27. acordanexpertannotatedretrievaldatasetforlegalcontractdrafting/831455f6-9e27-4933-a005-1bc4bfd9073f_origin.pdf +3 -0
  28. acordanexpertannotatedretrievaldatasetforlegalcontractdrafting/full.md +447 -0
  29. acordanexpertannotatedretrievaldatasetforlegalcontractdrafting/images.zip +3 -0
  30. acordanexpertannotatedretrievaldatasetforlegalcontractdrafting/layout.json +3 -0
  31. acquisitionandapplicationofnovelknowledgeinlargelanguagemodels/27146bfa-49fd-45d6-963d-8cea3d114dbd_content_list.json +3 -0
  32. acquisitionandapplicationofnovelknowledgeinlargelanguagemodels/27146bfa-49fd-45d6-963d-8cea3d114dbd_model.json +3 -0
  33. acquisitionandapplicationofnovelknowledgeinlargelanguagemodels/27146bfa-49fd-45d6-963d-8cea3d114dbd_origin.pdf +3 -0
  34. acquisitionandapplicationofnovelknowledgeinlargelanguagemodels/full.md +497 -0
  35. acquisitionandapplicationofnovelknowledgeinlargelanguagemodels/images.zip +3 -0
  36. acquisitionandapplicationofnovelknowledgeinlargelanguagemodels/layout.json +3 -0
  37. activatingdistributedvisualregionwithinllmsforefficientandeffectivevisionlanguagetrainingandinference/32a3e3ce-4b95-43bf-ae7d-0a27c910f952_content_list.json +3 -0
  38. activatingdistributedvisualregionwithinllmsforefficientandeffectivevisionlanguagetrainingandinference/32a3e3ce-4b95-43bf-ae7d-0a27c910f952_model.json +3 -0
  39. activatingdistributedvisualregionwithinllmsforefficientandeffectivevisionlanguagetrainingandinference/32a3e3ce-4b95-43bf-ae7d-0a27c910f952_origin.pdf +3 -0
  40. activatingdistributedvisualregionwithinllmsforefficientandeffectivevisionlanguagetrainingandinference/full.md +293 -0
  41. activatingdistributedvisualregionwithinllmsforefficientandeffectivevisionlanguagetrainingandinference/images.zip +3 -0
  42. activatingdistributedvisualregionwithinllmsforefficientandeffectivevisionlanguagetrainingandinference/layout.json +3 -0
  43. activationsteeringdecodingmitigatinghallucinationinlargevisionlanguagemodelsthroughbidirectionalhiddenstateintervention/72220249-57c6-447c-8612-88cea94afb52_content_list.json +3 -0
  44. activationsteeringdecodingmitigatinghallucinationinlargevisionlanguagemodelsthroughbidirectionalhiddenstateintervention/72220249-57c6-447c-8612-88cea94afb52_model.json +3 -0
  45. activationsteeringdecodingmitigatinghallucinationinlargevisionlanguagemodelsthroughbidirectionalhiddenstateintervention/72220249-57c6-447c-8612-88cea94afb52_origin.pdf +3 -0
  46. activationsteeringdecodingmitigatinghallucinationinlargevisionlanguagemodelsthroughbidirectionalhiddenstateintervention/full.md +304 -0
  47. activationsteeringdecodingmitigatinghallucinationinlargevisionlanguagemodelsthroughbidirectionalhiddenstateintervention/images.zip +3 -0
  48. activationsteeringdecodingmitigatinghallucinationinlargevisionlanguagemodelsthroughbidirectionalhiddenstateintervention/layout.json +3 -0
  49. activiewevaluatingactiveperceptionabilityformultimodallargelanguagemodels/b694f6c1-f9a0-49df-a72d-cebf955df78b_content_list.json +3 -0
  50. activiewevaluatingactiveperceptionabilityformultimodallargelanguagemodels/b694f6c1-f9a0-49df-a72d-cebf955df78b_model.json +3 -0
aadllmneuralattentiondrivenauditorysceneunderstanding/ac7deae7-5115-4287-9a22-3a685a04aa5e_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:833ed1b254bb31263d4b2c443df6a626fd6bbee5a85186f99d500d07e42fbae3
3
+ size 158583
aadllmneuralattentiondrivenauditorysceneunderstanding/ac7deae7-5115-4287-9a22-3a685a04aa5e_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b43ca63b2f563d16f993862b02aa914448002d42739cf6894d814ecaa6647b0d
3
+ size 191021
aadllmneuralattentiondrivenauditorysceneunderstanding/ac7deae7-5115-4287-9a22-3a685a04aa5e_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e4085cd47fd9a7f9d660d212fcf7a2d4b4c15d86d659c07748f90054a6c6aa51
3
+ size 4783967
aadllmneuralattentiondrivenauditorysceneunderstanding/full.md ADDED
@@ -0,0 +1,669 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # AAD-LLM: Neural Attention-Driven Auditory Scene Understanding
2
+
3
+ Xilin Jiang $^{1,3}$ , Sukru Samet Dindar $^{1,3}$ , Vishal Choudhari $^{1,3}$ , Stephan Bickel $^{4,5}$ , Ashesh Mehta $^{4,5}$ , Guy M McKhann $^{2}$ , Daniel Friedman $^{6}$ , Adeen Flinker $^{6}$ , Nima Mesgarani $^{1,3}$
4
+
5
+ $\{^{1}$ Department of Electrical Engineering, $^{2}$ Department of Neurological Surgery, $^{3}$ Mortimer B. Zuckerman Mind Brain Behavior Institute\}, Columbia University, USA $^{4}$ Hofstra Northwell School of Medicine, USA $^{5}$ The Feinstein Institutes for Medical Research, USA $^{6}$ Neurology Department, New York University, USA $\{xj2289, sd3705, vc2558\} @columbia.edu, nima@ee.columbia.edu$
6
+
7
+ # Abstract
8
+
9
+ Auditory foundation models, including auditory large language models (LLMs), process all sound inputs equally, independent of listener perception. However, human auditory perception is inherently selective: listeners focus on specific speakers while ignoring others in complex auditory scenes. Existing models do not incorporate this selectivity, limiting their ability to generate perceptional-aligned responses. To address this, we introduce Intention-Informed Auditory Scene Understanding (II-ASU) and present Auditory Attention-Driven LLM (AAD-LLM), a prototype system that integrates brain signals to infer listener attention. AAD-LLM extends an auditory LLM by incorporating intracranial electroencephalography (iEEG) recordings to decode which speaker a listener is attending to and refine responses accordingly. The model first predicts the attended speaker from neural activity, then conditions response generation on this inferred attentional state. We evaluate AAD-LLM on speaker description, speech transcription and extraction, and question answering in multitalker scenarios, with both objective and subjective ratings showing improved alignment with listener intention. By taking a first step toward intention-aware auditory AI, this work explores a new paradigm where listener perception informs machine listening, paving the way for future listener-centered auditory systems. Demo and code available<sup>1</sup>.
10
+
11
+ # 1 Introduction
12
+
13
+ The human auditory system does not process all sounds equally but selectively amplifies relevant elements while suppressing others based on the listener's intent (Mesgarani and Chang, 2012). In a multi-speaker setting, a listener may focus on a
14
+
15
+ ![](images/cbd30a4f8f79985175ea9cbea54d8c8742c19ed98fb5d2449223d94975f4b5de.jpg)
16
+ Figure 1: AAD-LLM is a brain-computer interface (BCI) for auditory scene understanding. It decodes neural signals to identify the attended speaker and integrates this information into a language model, generating responses that align with the listener's perceptual focus.
17
+
18
+ # Task: Speech Extraction
19
+
20
+ Q: Extract the speaker that I'm listening to.
21
+
22
+ A:
23
+
24
+ # Task: Speaker Description
25
+
26
+ Q: Who is the speaker that I'm focusing on?
27
+
28
+ A: A female speaker with high pitch and low tempo.
29
+
30
+ # Task: Speech Transcription
31
+
32
+ Q: Transcribe the attended talker.
33
+
34
+ A: ... I was reading when a playful cat showed up ...
35
+
36
+ # Task: Freeform Question Answering
37
+
38
+ Q: What is the speaker's attitude toward the cat?
39
+
40
+ A: Likely amused, as "playful" conveys a positive tone.
41
+
42
+ # Task: Speech Summarization
43
+
44
+ Q: What was in the background that I missed?
45
+
46
+ A: A conversation about a new movie.
47
+
48
+ single speaker, tune into background music, or ignore speech entirely (Cherry, 1953; Osgood, 1959; Shinn-Cunningham and Best, 2008). While modern auditory foundation models, including auditory large language models (LLMs), are designed for general-purpose auditory understanding, they do not inherently account for listener intent. In applications such as assistive hearing devices, however, listener-aware processing is critical. These systems must prioritize the content most relevant to the user to improve usability in complex acoustic environments. Existing models, such as Ltu (Gong et al., 2024), SALMONN (Tang et al., 2024), and
49
+
50
+ ![](images/190dc0248cb447884199deb5e9eae043e6937cd71d8b2e94f6d4fc8bec0d7d6b.jpg)
51
+
52
+ contribute equally.
53
+
54
+ 1https://aad-llm.github.io
55
+
56
+ Qwen2-Audio (Chu et al., 2024), process all incoming audio equally, making them ineffective in scenarios where distinguishing between attended and unattended speech is essential.
57
+
58
+ Although these models excel at general auditory scene understanding (Sakshi et al., 2024; Wang et al., 2025), they lack mechanisms to selectively process speech based on listener perception. In multi-speaker environments, they transcribe and analyze all speech sources indiscriminately, failing to separate what the user is actually attending to from background conversations (Wu et al., 2024). To address this, a listener-aware auditory AI must move beyond passive transcription and actively adapt its processing to reflect user intent.
59
+
60
+ Studies have shown that the auditory cortex encodes speech features from the attended talker (Mesgarani and Chang, 2012), allowing neural decoding methods to reconstruct or enhance the target speaker by comparing brain signal-derived representations with competing speech (O'sullivan et al., 2015). This line of research, also known as auditory attention decoding (AAD), has sought to infer a listener's auditory focus from neural signals. Both invasive (Han et al., 2019; Ceolini et al., 2020; Choudhari et al., 2024) and non-invasive methods (Ciccarelli et al., 2019; Geirnaert et al., 2021; Vandecappelle et al., 2021; Pan et al., 2024, 2023) have been explored for attention-controlled speech extraction in hearing devices. However, AAD is primarily used for signal enhancement rather than guiding an AI's interpretation of an auditory scene. While it could improve speech intelligibility, it does not enable models to reason about the attended content, such as summarizing speech or answering user queries based on what they perceive.
61
+
62
+ Prior studies have attempted to integrate neural signals into large language models models, incorporating neural signals to enhance multimodal processing. Some efforts have used neural data for EEG-based text generation (Jiang et al., 2024a; Kim et al., 2024) or fMRI-informed representations (Zheng and Sun, 2024), while others have integrated brain signals to improve LLM semantic understanding (Toneva and Wehbe, 2019; Moussa et al., 2025). However, these approaches focus primarily on language comprehension and semantic alignment, rather than using neural signals to refine auditory scene interpretation or speaker selection.
63
+
64
+ This paper introduces Intention-Informed Auditory Scene Understanding (II-ASU), a framework where models align their interpretation of sound
65
+
66
+ with listener intent. Instead of modifying the auditory signal like AAD, we integrate attentional signals into a language model to guide how it processes and responds to auditory scenes. This approach enables reasoning beyond speech separation, allowing models to adapt responses based on listener focus rather than treating all input speech equally. To implement this, we present Auditory Attention-Driven LLM (AAD-LLM), a prototype system that extends an auditory LLM with neural attention decoding. The model processes intracranial EEG (iEEG) to determine which speaker the listener is attending to, extracts speech representations while retaining both attended and ignored sources for contextual processing, and integrates the decoded attentional state into the LLM to generate responses that reflect listener perception.
67
+
68
+ This work makes several contributions. It introduces II-ASU as a paradigm shift from passive auditory processing to listener-aligned interpretation; It proposes AAD-LLM as the first system to integrate brain signals into an auditory LLM for attention-driven scene understanding; Finally, it evaluates AAD-LLM across multiple auditory tasks, demonstrating improved alignment with listener perception. While this study focuses on selective attention, the broader II-ASU framework could extend to other intent-detection methods, including gaze tracking, head orientation, posture shifts, or explicit user input. By incorporating such attentional signals, future intention-aware auditory AI could dynamically adapt to user perception in a wide range of applications.
69
+
70
+ # 2 Intention-Informed Auditory Scene Understanding
71
+
72
+ # 2.1 Motivation and Goal
73
+
74
+ Consider the scenario illustrated in Figure 1: both a human and a machine listener are exposed to the same auditory scene, denoted as $S$ . The human listener applies selective auditory attention forming an internal representation based on their intent $I$ . When asked a question $Q$ , an intention-informed auditory model, such as AAD-LMM, should generate an answer $A$ that depends on $Q$ , $S$ , and $I$ :
75
+
76
+ $$
77
+ A = \operatorname {M a c h i n e L i s t e n e r} (Q, S, I) \tag {1}
78
+ $$
79
+
80
+ However, existing auditory LLMs (Gong et al., 2024; Hu et al., 2024; Chu et al., 2024) are not intention-informed. They function as:
81
+
82
+ $$
83
+ A ^ {\circ} = \operatorname {A u d i t o r y L L M} (Q, S) \tag {2}
84
+ $$
85
+
86
+ without any awareness of what the listener actually perceives, leading to intention-uninformed response $A^{\circ}$ .
87
+
88
+ In real-world scenarios, the listener intention manifests in multiple waysβ€”through facial expressions, eye gaze, head direction, verbal commands, or physical actionsβ€”but behavioral data capturing these cues remains scarce. As the first step towards intention-informed auditory scene understanding in LLMs, we focus on the most fundamental yet unexplored form of auditory intention: selective auditory attention. In this context, $I$ represents the attention of the listener to specific sound source while ignoring the other. Unlike explicit commands, this cognitive state is abstract and non-trivial to encode as input for LLMs. Our challenge, therefore, is twofold. First, decoding attentional state $I$ from brain signals $Z$ by extracting neural correlates of selective attention from intracranial EEG. Second, aligning the auditory LLM with $I$ by ensuring that responses prioritize the attended speaker, rather than treating all speakers equally.
89
+
90
+ # 2.2 Current Models in Auditory Scene Understanding
91
+
92
+ Auditory scene understanding (ASU) is a well-studied problem, starting with knowledge and statistics-based methods (Ellis, 1996; Wang and Brown, 2006) to deep neural networks (Kong et al., 2020; Hsu et al., 2021; Chen et al., 2022b). Today, auditory LLMs represent the state-of-the-art in ASU (Gong et al., 2024; Chu et al., 2023). These models take both speech and text input and generate text output. They typically consist of a speech encoder and a pretrained textual LLM, which are jointly trained for tasks including speech description, recognition, and Q&A. Qwen2-Audio (Chu et al., 2024), the current state-of-the-art auditory LLM on speech understanding benchmarks (Sakshi et al., 2024; Wang et al., 2025), serves as both a baseline and the backbone of our proposed AAD-LLM. As a brief overview, Qwen2-Audio integrates a Whisper (Radford et al., 2023) speech encoder and a Qwen2 (Yang et al., 2024) LLM. The acoustic embedding of the speech input encoded by Whisper is concatenated with the textual embedding of the question, and processed together by Qwen2 to output an answer.
93
+
94
+ # 3 AAD-LLM
95
+
96
+ We introduce AAD-LLM as a prototype system designed to address the Intention-Informed Auditory
97
+
98
+ Scene Understanding (II-ASU) problem. AAD-LLM is a multimodal auditory large language model (LLM) that takes three distinct inputs: a textual question $Q$ , a speech mixture $S$ , and a listener's brain signal $Z$ . The model's output $A$ is conditioned on all three inputs:
99
+
100
+ $$
101
+ A = \mathrm {A A D} - \mathrm {L L M} (Q, S, Z) \tag {3}
102
+ $$
103
+
104
+ As illustrated in Figure 2, AAD-LLM integrates neural attention decoding with auditory language processing. Unlike standard auditory models, AAD-LLM must process acoustic, linguistic and neural signals simultaneously. This introduces two key challenges. First, most neural recordings provide only a few minutes of data per participant, making it impractical to train a full end-to-end system jointly. Second, existing auditory attention decoding (AAD) methods (Geirnaert et al., 2021) reconstruct a temporal representation of speech from neural activity. However, these representations are continuous, noisy, and not directly compatible with discrete token-based LLMs. AAD-LLM solves these challenges by first extracting a discrete speaker identity token from brain signals, and second, using the speaker identity to condition the LLM's response generation. By decoupling brain decoding from language modeling, AAD-LLM allows intention alignment to be trained independently on large-scale speech data, while brain decoding is trained on limited neural data.
105
+
106
+ # 3.1 Intention Decoding
107
+
108
+ AAD-LLM introduces a speaker-based approach for decoding auditory attention from intracranial EEG signals. Instead of reconstructing speech features, we classify which speaker the listener is attending to using a discrete token representation. The process consists of two steps. First, we perform speaker clustering by applying K-means clustering to x-vectors (Snyder et al., 2018) extracted from a large corpus of thousands of speakers, ensuring no overlap with test speakers. The x-vectors, commonly used in speaker verification, are 512-dimensional embeddings, which remain frozen during training. The number of clusters $\mathrm{K}$ is set to 8. Next, we perform speaker prediction from neural signals. A bidirectional LSTM maps the $C$ -channel $T$ -length neural signal $Z \in \mathbb{R}^{C \times T}$ to a predicted cluster index $\hat{i} \in \{0,1,\dots,K - 1\}$ . The groundtruth label $i$ is determined by finding the closest centroid to the attended speaker's x-vector. Further details about the model is included in Appendix B.
109
+
110
+ ![](images/4c40764477bf02ffdc8c653e4a43346686bbc8fdc5d7214424bf8ada50aaf9e5.jpg)
111
+ Figure 2: AAD-LLM is a multimodal, attention-aware LLM designed for auditory scene understanding. The model takes three inputs: a textual question, an auditory scene (containing mixed speech sources), and neural signals representing listener attention. Each input is processed by dedicated modules before being integrated into the LLM. AAD-LLM decodes auditory attention to determine the attended speaker and prioritizes information from the target speaker while generating responses. The model is trained to differentiate between attended and ignored speech, ensuring that its output aligns with the listener's perceptual focus.
112
+
113
+ The intention token is represented as a speaker x-vector centroid $\hat{v} \in \mathbb{R}^D$ , rather than a discrete label $\hat{i}$ , to preserve numerical locality among similar speakers. A key advantage of this design is that intention decoding and intention alignment are trained separately. The speaker predictor only requires minutes of brain data, whereas the LLM can be trained on hours of speech data independently. Furthermore, this approach is modular, allowing for easy adaptation to new types of physiological signals (e.g., EEG, fNIRS, eye-gaze) without retraining the entire system.
114
+
115
+ # 3.2 Intention Alignment
116
+
117
+ Once the listener's attended speaker identity is decoded, the next challenge is to align the auditory LLM's response with listener perception. This is achieved in three steps:
118
+
119
+ 1. Embedding the intention token: The speaker identity token $v$ (or $\hat{v}$ during inference) is projected into the LLM's embedding space via a linear
120
+
121
+ speaker projector. It is then concatenated with the acoustic embeddings from the speech encoder and the textual embeddings from the question. Given two candidate speeches, $s_1$ and $s_2$ , presented in random order, the LLM must determine which one to prioritize as the foreground and which to relegate to the background. However, simply embedding the intention into the LLM's input does not automatically enable the auditory LLM for such selective speech processing. Two more steps are needed:
122
+
123
+ 2. Intention-informed Training: Since real human attentional data is limited, we simulate attention during training by randomly assigning one of two mixed speech sources as the foreground (attended) speaker. We then assign the corresponding speaker identity token $v$ as the intention input. Finally, we train the model with intention-aware tasks such as speaker-focused transcription, selective speech summarization, and foreground/background question answering. The solution is derived from the specific speaker or speech referenced in the question, whether attended or ignored. Further de
124
+
125
+ ![](images/83d2e53a7e389750d57a9dce00c5c965a8d89f2b229c4f59c022ea8c84bc1312.jpg)
126
+ a) Objective evaluation
127
+
128
+ ![](images/921a481b2c503af8cdb6f63ea185760da978279240a43c9bc09aa9a9dd3055d0.jpg)
129
+ Summarization task
130
+
131
+ ![](images/8bc6c854db3c06211955e2213a6d4f37b0d2d4f053b4ad2f984828bc06256a19.jpg)
132
+ Transcription task
133
+
134
+ ![](images/be36352f842e03467202a2e5985afe5200425340d7651cada94759e1f9136f58.jpg)
135
+ Free form Q&A task
136
+
137
+ ![](images/5c1d7dd7be798490075b05977750c502857441985986711b1f1a053f19144fdb.jpg)
138
+ b) Subjective evaluation
139
+ Summarization task
140
+
141
+ ![](images/d4d2c5e965eb68386bbccf4991325d59a13f92e4aed5a1b291418d98ccebbbef.jpg)
142
+ Free form Q&A task
143
+ Figure 3: (a) Objective evaluation across four tasks. The dotted lines represent the lower bound (Qwen2-Audio given the mixture sound) and upper bound (Qwen2-Audio given the attended talker as oracle). "Qwen2-Audio Random" represents the model receiving a randomly selected talker. "AAD-LLM Decoded Attention" represents our method, where attention is decoded from brain signals, while "AAD-LLM Oracle Attention" represents our model given the actual attended talker as oracle. (b) Subjective evaluation measuring the alignment between model outputs and human listeners, who assessed whether the model's response matched what should have been the answer for the attended talker. * denotes p < 0.05, ** denotes p < 0.01, *** denotes p < 0.001.
144
+
145
+ tails are included in Appendix A.
146
+
147
+ 3. Chain-of-Thought Prompting: Empirical experiments revealed that even after training with intention-aware answers with a large rank (512) with low-rank approximation method (LoRA) (Hu et al., 2022), the LLM often ignored the attention token. To enforce attention usage, we introduce a structured prompt format adding chain-of-thought (CoT) (Wei et al., 2022) prefix of this form:
148
+
149
+ "Attention:<att_spk_label>;
150
+
151
+ Spk1:<spk1_label>;
152
+
153
+ Spk2:<spk2_label>;
154
+
155
+ This CoT prompting explicitly directs the model to extract the speaker labels of the input speeches and the label of the attended speaker, all in $\{0,1,\dots,K - 1\}$ . We compare with the performance of AAD-LLM trained without the CoT prefix in Appendix D.6.
156
+
157
+ # 3.3 Auxiliary Module
158
+
159
+ To further enhance speaker differentiation, AAD-LLM incorporates a speech separator based on Mamba-TasNet (Jiang et al., 2024d). This module pre-processes the speech mixture, outputting
160
+
161
+ two separated streams $s_1$ and $s_2$ , which are then processed by the LLM. Importantly, the speech separator is intention-uninformed, meaning that it does not inherently prioritize the attended speaker. Instead, the LLM must selectively process the correct speech stream based on brain-decoded attention signals. We compare with the performance of AAD-LLM without the separator in Appendix D.6.
162
+
163
+ # 3.4 Training Objective
164
+
165
+ AAD-LLM is trained to generate a response sequence $O = \operatorname{Cat}(\operatorname{CoT}, A)$ , combining the CoT prompt and final answer. Each output token $O_i$ is conditioned on the intention $I$ decoded from the brain, the two speech inputs $s_1$ and $s_2$ , the question $Q$ , and all preceding tokens $O_{1:i-1}$ . This leads to the following loss function $\mathcal{L} =$
166
+
167
+ $$
168
+ - \sum_ {i = 2} ^ {N} \log P \left(O _ {i} \mid S P (I), S E \left(s _ {1}\right), S E \left(s _ {2}\right), Q, O _ {1: i - 1}\right)
169
+ $$
170
+
171
+ where $SP$ and $SE$ represent the speaker projector and speech encoder, respectively, as illustrated in Figure 2. Both the speech encoder and the LLM were finetuned using LoRA. The speech separator was trained to maximize the signal-to-noise ratio (SNR) of the separated speeches, and the speaker predictor was optimized using cross-entropy on
172
+
173
+ Table 1: Auditory attention decoding (AAD) accuracy and target speech extraction (TSE) performance.
174
+
175
+ <table><tr><td>Method</td><td>Accuracy</td><td>SNR</td><td>SI-SNR</td><td>WER↓</td><td>SIM</td></tr><tr><td>Speech Mixture</td><td>-</td><td>0.1</td><td>0.1</td><td>37.4</td><td>84.4</td></tr><tr><td>Blind Speech Separation (BSS)</td><td>50.4</td><td>5.2</td><td>-12.3</td><td>64.3</td><td>80.6</td></tr><tr><td>BSS + Mel reconstruction</td><td>92.0</td><td>12.0</td><td>9.6</td><td>15.2</td><td>94.1</td></tr><tr><td>BSS + Envelope reconstruction</td><td>88.0</td><td>11.2</td><td>7.8</td><td>20.4</td><td>90.7</td></tr><tr><td>Target Speech Extraction</td><td>96.0</td><td>12.8</td><td>10.4</td><td>14.3</td><td>94.8</td></tr><tr><td colspan="6">(Proposed Method) Intention-Informed AAD-LLM</td></tr><tr><td>w/ brain-decoded attention</td><td>94.4</td><td>12.2</td><td>10.3</td><td>14.7</td><td>94.1</td></tr><tr><td>w/ oracle attention</td><td>95.8</td><td>12.3</td><td>10.4</td><td>13.0</td><td>94.3</td></tr><tr><td>w/ additional clinical-15m</td><td>97.0</td><td>12.6</td><td>11.5</td><td>11.4</td><td>94.7</td></tr><tr><td>Oracle Speaker (Upper Bound)</td><td>100.0</td><td>13.0</td><td>12.8</td><td>8.8</td><td>95.5</td></tr></table>
176
+
177
+ ground-truth speaker labels. Additional details on training and reproducibility are provided in the Appendix B.
178
+
179
+ # 4 Experiments
180
+
181
+ We evaluated AAD-LLM using intracranial (iEEG) data with neural signals collected from human listeners as they attended to one conversation while another interfering conversation and background noise were present. We present only the key findings in this section (more results can be found in the appendix, including dataset curation (Appendix A) and task specification (Appendix C), model implementation (Appendix B), and additional results (Appendix D) and analyses (Appendix E).
182
+
183
+ # 4.1 Datasets
184
+
185
+ The iEEG Clinical Dataset includes six epilepsy patients implanted with intracranial electrodes as part of their medical care for epilepsy surgery. Electrode placement, clinically determined, consisted of subdural electrocorticography (ECoG) grids and/or stereo-electroencephalography (sEEG) depth electrodes. Neural signals were bandpass-filtered $(0.5 - 30\mathrm{Hz})$ and concatenated across participants to maximize electrode coverage. The study was approved by local IRBs, and informed consent was obtained. Subjects listened to overlapping conversations masked with either pedestrian or babble noise. They attended to one of two simultaneous conversations, each containing two speakers taking turns. Conversations were aligned and segmented into sentences, yielding 280 training, 30 validation, and 50 testing utterances. To mitigate potential biases in the LLM's response to speaker order, we expanded the test set by training the speaker predictor five times with different random initializations and reversing speaker order, resulting in 500 test samples.
186
+
187
+ The Speech-Only Dataset was collected primarily to train AAD-LLM, which requires significantly
188
+
189
+ more data than the iEEG dataset. To simulate the iEEG recording conditions, we mixed two random speakers from TextrolSpeech (Ji et al., 2024), a subset of LibriTTS (Zen et al., 2019), with one background noise sample from DEMAND (Thiemann et al., 2013). This resulted in approximately 54,000 (85.3h) training, 1,000 (1.6h) validation, and 3,000 (4.8h) testing utterances. Notably, there is no overlap in speakers, spoken content, or background noise between the iEEG and speech-only datasets, ensuring no information leakage during testing. Further details on both datasets are provided in Appendix A.
190
+
191
+ # 4.2 Tasks and Metrics
192
+
193
+ We trained and evaluated AAD-LLM and other models on the following tasks and metrics:
194
+
195
+ Auditory Attention Decoding (AAD) measures how accurate AAD-LLM identifies the attended speaker, a crucial step for downstream tasks. We also evaluated Target Speech Extraction (TSE) with signal-to-noise ratio (SNR), scale-invariant signal-to-distortion ratio (SI-SDR) (Le Roux et al., 2019), word error rate (WER), and speaker similarity (SIM) against the clean attended speech.
196
+
197
+ We further considered four tasks for both the foreground (attended) and background (ignored) speaker or speech, covering different levels of speech and language processing:
198
+
199
+ Speaker Description (acoustic) evaluates the accuracy of identifying the gender (G), pitch (P), and tempo (T) of the target speaker.
200
+
201
+ Speech Transcription (phonetic & syntactic) measures transcription quality using word error rate (WER) and BLEU (Papineni et al., 2002) against the target speaker's actual speech.
202
+
203
+ Speech Summarization (semantic) assesses summary quality with ROUGE-L (Lin, 2004) and METEOR (Banerjee and Lavie, 2005), using three GPT-4o mini (OpenAI, 2024) reference summaries. The highest-scoring reference is reported.
204
+
205
+ Free Q&A (semantic & pragmatic) evaluates responses to questions about the target speech, such as sentiment analysis, fact-checking, and named entity recognition. Three questions and reference answers were generated by GPT-4o mini, with performance measured by ROUGE-L and METEOR.
206
+
207
+ Please refer to Appendix C for specific questions and metric computation details. While AAD-LLM was trained exclusively on these five tasks, the acquired attentional state demonstrates transferability to other tasks unseen during intention-informed
208
+
209
+ Table 2: Intention-informed auditory scene understanding performance when listeners were attending to one of the two speakers with background noise. A higher number indicates a better performance for all metrics except word error rate (WER). Full results with more metrics are shown in the Table 5.
210
+
211
+ <table><tr><td rowspan="2">Task
212
+ Model</td><td>Description</td><td colspan="2">AVG(G, P, T)</td><td>Transcription</td><td>WER↓</td><td>Summarization</td><td>ROUGE-L | METEOR</td><td>Free Q&amp;A</td><td>ROUGE-L | METEOR</td></tr><tr><td>Foreground</td><td>Background</td><td>Foreground</td><td>Background</td><td>Background</td><td>Foreground</td><td>Background</td><td>Foreground</td><td>Background</td></tr><tr><td colspan="10">(Baselines) Auditory LLM without Intention</td></tr><tr><td>LTU-AS (Gong et al., 2023)</td><td>48.9</td><td>52.2</td><td>139.8</td><td>172.6</td><td>40.2 | 38.9</td><td>27.9 | 31.0</td><td>39.0 | 47.4</td><td>30.4 | 34.3</td><td></td></tr><tr><td>SALMONN (Tang et al., 2024)</td><td>55.3</td><td>52.0</td><td>145.4</td><td>225.3</td><td>39.9 | 39.3</td><td>36.5 | 35.9</td><td>49.4 | 49.2</td><td>44.4 | 45.5</td><td></td></tr><tr><td>Qwen-Audio (Chu et al., 2023)</td><td>36.1</td><td>34.3</td><td>82.7</td><td>112.8</td><td>29.3 | 30.0</td><td>24.5 | 22.8</td><td>35.1 | 40.1</td><td>30.9 | 34.3</td><td></td></tr><tr><td>WavLLM (Hu et al., 2024)</td><td>41.7</td><td>37.7</td><td>94.7</td><td>128.3</td><td>35.6 | 34.2</td><td>38.3 | 38.4</td><td>41.5 | 40.5</td><td>40.2 | 38.9</td><td></td></tr><tr><td>GAMA (Ghosh et al., 2024)</td><td>45.9</td><td>48.2</td><td>n.a.</td><td>n.a.</td><td>19.0 | 17.1</td><td>22.6 | 23.6</td><td>24.0 | 28.6</td><td>24.5 | 29.5</td><td></td></tr><tr><td>Qwen2-Audio (Chu et al., 2024)</td><td>50.9</td><td>40.1</td><td>90.1</td><td>124.7</td><td>27.5 | 29.0</td><td>15.8 | 15.9</td><td>39.9 | 40.4</td><td>34.9 | 33.3</td><td></td></tr><tr><td colspan="10">(Lower Bound) Random Speaker + Auditory LLM</td></tr><tr><td>Qwen2-Audio</td><td>49.9</td><td>42.5</td><td>89.5</td><td>106.9</td><td>24.4 | 24.2</td><td>19.0 | 21.0</td><td>40.7 | 42.4</td><td>38.3 | 37.0</td><td></td></tr><tr><td>Qwen2-Audio</td><td>69.3</td><td>68.2</td><td>71.8</td><td>74.6</td><td>30.2 | 29.2</td><td>29.6 | 27.8</td><td>50.0 | 51.9</td><td>44.9 | 45.7</td><td></td></tr><tr><td colspan="10">(Proposed Baselines) Extracted Speaker + Auditory LLM</td></tr><tr><td>Qwen2-Audio</td><td>56.2</td><td>44.6</td><td>53.6</td><td>73.2</td><td>37.0 | 41.3</td><td>28.2 | 33.4</td><td>50.8 | 54.2</td><td>47.9 | 46.8</td><td></td></tr><tr><td>Qwen2-Audio</td><td>88.1</td><td>77.6</td><td>18.5</td><td>24.6</td><td>54.5 | 53.9</td><td>41.0 | 40.5</td><td>62.3 | 65.4</td><td>58.0 | 60.0</td><td></td></tr><tr><td colspan="10">(Upper Bound) Oracle Speaker + Auditory LLM</td></tr><tr><td>Qwen2-Audio</td><td>91.7</td><td>90.8</td><td>6.6</td><td>19.4</td><td>59.7 | 61.1</td><td>46.3 | 47.6</td><td>64.9 | 68.0</td><td>60.3 | 61.7</td><td></td></tr><tr><td colspan="10">(Proposed Method) Intention-Informed AAD-LLM</td></tr><tr><td>w/ brain-decoded attention</td><td>89.3</td><td>89.0</td><td>14.4</td><td>33.2</td><td>58.3 | 56.9</td><td>42.3 | 42.5</td><td>63.1 | 64.6</td><td>57.9 | 59.1</td><td></td></tr><tr><td>w/ oracle attention</td><td>89.9</td><td>90.4</td><td>12.5</td><td>33.9</td><td>59.7 | 58.5</td><td>42.7 | 43.2</td><td>63.0 | 64.8</td><td>58.1 | 59.2</td><td></td></tr><tr><td>w/ additional clinical-15m</td><td>89.2</td><td>92.3</td><td>6.0</td><td>22.5</td><td>60.9 | 59.7</td><td>44.9 | 45.6</td><td>63.2 | 65.1</td><td>59.3 | 60.2</td><td></td></tr></table>
213
+
214
+ Table 3: Percentages of responses closer (measured by the metrics in parentheses) to the target speaker than to the other one, across different tasks for AAD-LLM with brain-decoded attention.
215
+
216
+ <table><tr><td>Task</td><td>Foreground</td><td>Background</td></tr><tr><td>Description (AVG(G, P, T))</td><td>83.8</td><td>92.2</td></tr><tr><td>Transcription (WER)</td><td>94.2</td><td>90.4</td></tr><tr><td>Summarization (ROUGE-L)</td><td>92.0</td><td>85.4</td></tr><tr><td>Free Q&amp;A (ROUGE-L)</td><td>96.9</td><td>90.9</td></tr></table>
217
+
218
+ training, such as Speech Translation. Please check Appendix D.1 and Table 6 for more information.
219
+
220
+ # 4.3 Results
221
+
222
+ We present the performance of AAD-LLM and other models on the iEEG clinical dataset. Results on the speech-only dataset are provided in Appendix D.3. By default, AAD-LLM, except for the speaker predictor, was trained on the speech-only dataset, which differs in both speakers and content from the iEEG dataset. Additionally, we report results for AAD-LLM trained with an extra 15 minutes of in-domain clinical data ("clinical-15m"). In all tables and figures, "brain-decoded attention" represents the realistic BCI use case, where the speaker predictor infers the attended speaker label from neural signals. "Oracle attention" serves as an upper bound, using the ground-truth speaker label from the dataset.
223
+
224
+ # 4.3.1 Objective Evaluation
225
+
226
+ Objective metrics are reported in Table 1 for AAD&TSE tasks and Table 2 for all other tasks. A
227
+
228
+ baseline for all tasks is evaluating the speech mixture or a randomly selected speech (first six rows and the lower bound in Table 2). These models, which lack attentional state, include blind speech separation and existing auditory LLMs. Their performance is close to random guessing for both foreground and background speech understanding.
229
+
230
+ Additionally, we compared AAD-LLM with standard auditory attention decoding (AAD) methods designed to separate the attended talker. We reproduced conventional AAD approaches that reconstruct the Mel spectrogram or speech envelope from brain activity to identify the attended speaker by similarity (O'Sullivan et al., 2017; Han et al., 2019; Geirnaert et al., 2021). We also implemented an ad-hoc target speech extractor similar to (Ceolini et al., 2020; Pan et al., 2024, 2023), optimizing SNR with the same speaker decoding method. While our model slightly underperforms the target speech extractor, it surpasses standard AAD methods and outperforms the extractor when trained with an additional 15 minutes of clinical data.
231
+
232
+ For other speech tasks, AAD-LLM outperforms all intention-uninformed auditory LLMs and a cascaded speech extractor and Qwen2-Audio (finetuned on the same data) on most metrics, particularly in transcription and summarization of the attended speaker. Notably, AAD-LLM with decoded attention performs close to the "oracle speaker" setting, where the target speaker is provided directly.
233
+
234
+ Table 4: Mean Opinion Scores (MOS) for Foreground Summarization and Free Q&A tasks. "Oracle speaker" serves the performance upper bound.
235
+
236
+ <table><tr><td>Model</td><td>Summarization</td><td>Free Q&amp;A</td></tr><tr><td>Qwen2-Audio</td><td>2.21 (Β± 1.44)</td><td>3.17 (Β± 1.62)</td></tr><tr><td>Random Speaker + Qwen2-Audio</td><td>2.41 (Β± 1.55)</td><td>3.04 (Β± 1.70)</td></tr><tr><td>Oracle Speaker + Qwen2-Audio</td><td>3.98 (Β± 0.96)</td><td>4.41 (Β± 0.95)</td></tr><tr><td>AAD-LLM w/ brain-decoded attention</td><td>3.69 (Β± 1.17)</td><td>4.25 (Β± 1.13)</td></tr><tr><td>AAD-LLM w/ oracle attention</td><td>3.72 (Β± 1.13)</td><td>4.24 (Β± 1.15)</td></tr></table>
237
+
238
+ # 4.3.2 Psychophysics Evaluation
239
+
240
+ Subjective ratings were collected from 40 participants in psychophysics experiments replicating the auditory scenes with the same attended speakers as done in the clinical setting. Participants rated responses from five models for Summarization and Free Q&A tasks, presented in random order, using a 5-point Likert scale. The evaluated models included AAD-LLM with brain-decoded and oracle attention, and Qwen2-Audio finetuned on single sources assessed with either a random speaker (lower bound) or the oracle speaker (upper bound). Average ratings are shown in Figure 3 and Table 4.
241
+
242
+ Kruskal-Wallis H tests revealed significant differences in ratings between the groups for both Summarization and Free Q&A tasks (p-values $< 0.001$ ). These tests were then followed up with post-hoc pairwise Bonferroni-corrected Mann-Whitney U tests. The results of these tests show that AAD-LLM's responses were rated significantly higher than those from the mixture and random speaker baselines across both the tasks. While mean performance increased from brain-decoded to oracle attention with ground-truth speaker labels, this improvement was not statistically significant, indicating that the neural decoding is close to its best capacity. Furthermore, both AAD-LLM models (brain-decoded and oracle attention) approach the "oracle speaker" upper bound, suggesting that AAD-LLM's responses closely mirror human perception. Please see Appendix F for more details on the evaluation methods, Tables 12 and 13 for p-values.
243
+
244
+ # 4.3.3 Attention Validation
245
+
246
+ To ensure AAD-LLM's performance improvement stems from its attentional state rather than generating balanced responses for both speakers, we conducted additional analyses. Specifically, we measured the percentage of responses closer to the target speaker than the other for both foreground and
247
+
248
+ background speakers across all tasks. As shown in Table 3, over $80\%$ to $90\%$ of responses aligned more with the target speaker, confirming AAD-LLM's effectiveness in speaker selection.
249
+
250
+ Additionally, we observed that AAD-LLM achieved similar performance in Free Q&A tasks when using brain-decoded attention or oracle attention (Figure 3 and Table 2). Also, the Free Q&A accuracy $(96.9\%)$ , Table 3) surpassed the extraction accuracy $(94.4\%)$ , Table 1). These results suggest that AAD-LLM might infer the target speaker based on question content rather than attentional state, especially when speakers discuss different topics. To address this, we designed a more challenging evaluation by replacing the background speech with another speech on a similar topic that could yield a different answer. In these conditions, AAD-LLM still achieved a ROUGE-L score of 62.0 and a METEOR score of 64.2 (Table 7), only slightly lower than the original scores (63.1 and 64.6), demonstrating that AAD-LLM effectively relies on attentional state to filter out the distracting speaker. More details about attention validation are included in Appendix D.2, with other results in Appendix D and analyses in Appendix E.
251
+
252
+ # 5 Conclusion
253
+
254
+ This work introduces intention-informed auditory scene understanding (II-ASU) as a new paradigm for aligning machine listening with human perception. We present AAD-LLM, a prototype system that integrates brain signals into an auditory large language model (LLM) to decode listener attention and generate responses that align with human perception. Experimental results demonstrate that incorporating attentional state improves model performance across multiple auditory tasks, including speaker description, speech transcription, and freeform question answering. Beyond improving speech-processing capabilities, this work represents an early step toward listener-centered auditory AI, where models do not merely process sound passively but interpret auditory scenes based on what the listener perceives, which has implications for assistive hearing technologies, adaptive voice assistants, and human-computer interaction. AAD-LLM lays the groundwork for future systems that process sound in alignment with human cognitive and perceptual priorities.
255
+
256
+ # Limitations
257
+
258
+ Several limitations and challenges remain. While attention is a fundamental aspect of auditory intent, future work should explore broader cognitive signals, including task goals, semantic relevance, and perceived emotional significance of the scene. AAD-LLM also relies on intracranial EEG, which limits its current practical use, although invasive neural recordings are increasingly used more in various speech brain computer interfaces (BCI) (Akbari et al., 2019; Moses et al., 2021; Metzger et al., 2023; Willett et al., 2023). While non-invasive neural recording methods such as EEG or fNIRS is desired, they present challenges in signal quality which limits their applicability. Finally, our experiments focus on controlled two-speaker scenarios, whereas real-world auditory scenes are more complex, involving multiple speakers and environmental noise. Expanding to these settings requires further neural data collection and improved adaptation techniques.
259
+
260
+ # Ethical Statement
261
+
262
+ The development of AAD-LLM introduces exciting new possibilities for auditory scene understanding by integrating brain signals to align machine listening with human perception. This innovation has the potential to enhance communication for individuals with hearing impairments, improve virtual assistants, and advance human-computer interaction. While the model only decodes the attended speaker without accessing sensitive cognitive information, we need to remain vigilant about privacy by implementing robust safeguards and ensuring responsible data handling practices.
263
+
264
+ Within this study, approval of all ethical and experimental procedures and protocols was granted by the university's Institutional Review Board (IRB). The iEEG participants provided informed consent as per the local IRB regulations (IRB protocol number AAAD5482). The human raters evaluating model outputs also provided informed consent (IRB protocol number AAAR8655).
265
+
266
+ # Acknowledgement
267
+
268
+ This work is funded by the National Institutes of Health (NIH-NIDCD) and a grant from Marie-Josee and Henry R. Kravis.
269
+
270
+ # References
271
+
272
+ Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
273
+ Hassan Akbari, Bahar Khalighinejad, Jose Herrero, Ashesh Mehta, and Nima Mesgarani. 2019. Towards reconstructing intelligible speech from the human auditory cortex. Scientific Reports, 9:874.
274
+ Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguistics.
275
+ Enea Ceolini, Jens Hjortkjær, Daniel D.E. Wong, James O'Sullivan, Vinay S. Raghavan, Jose Herrero, Ashesh D. Mehta, Shih-Chii Liu, and Nima Mesgarani. 2020. Brain-informed speech separation (biss) for enhancement of target speaker in multitalker speech perception. NeuroImage, 223:117282.
276
+ Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, et al. 2022a. Wavlm: Large-scale self-supervised pre-training for full stack speech processing. IEEE Journal of Selected Topics in Signal Processing, 16(6):1505-1518.
277
+ Sanyuan Chen, Yu Wu, Chengyi Wang, Shujie Liu, Daniel Tompkins, Zhuo Chen, and Furu Wei. 2022b. Beats: Audio pre-training with acoustic tokenizers. arXiv preprint arXiv:2212.09058.
278
+ Edward Collin Cherry. 1953. Some experiments on the recognition of speech, with one and with two ears. Journal of the acoustical society of America, 25:975-979.
279
+ Vishal Choudhari, Cong Han, Stephan Bickel, Ashesh D. Mehta, Catherine Schevon, Guy M. McKhann, and Nima Mesgarani. 2024. Brain-controlled augmented hearing for spatially moving conversations in multi-talker environments. Advanced Science, 11(41):2401379.
280
+ Yunfei Chu, Jin Xu, Qian Yang, Haojie Wei, Xipin Wei, Zhifang Guo, Yichong Leng, Yuanjun Lv, Jinzheng He, Junyang Lin, et al. 2024. Qwen2-audio technical report. arXiv preprint arXiv:2407.10759.
281
+ Yunfei Chu, Jin Xu, Xiaohuan Zhou, Qian Yang, Shiliang Zhang, Zhijie Yan, Chang Zhou, and Jingren Zhou. 2023. Qwen-audio: Advancing universal audio understanding via unified large-scale audiolanguage models. arXiv preprint arXiv:2311.07919.
282
+ Greg Ciccarelli, Michael Nolan, Joseph Perricone, Paul Calamia, Stephanie Haro, James O'Sullivan, Nima
283
+
284
+ Mesgarani, Thomas Quatieri, and Christopher Smalt. 2019. Comparison of two-talker attention decoding from eeg with nonlinear neural networks and linear methods. Scientific Reports, 9.
285
+ Daniel PW Ellis. 1996. Prediction-driven computational auditory scene analysis. Ph.D. thesis, Columbia university.
286
+ Simon Geirnaert, Servaas Vandecappelle, Emina Alickovic, Alain de Cheveigne, Edmund Lalor, Bernd T. Meyer, Sina Miran, Tom Francart, and Alexander Bertrand. 2021. Electroencephalography-based auditory attention decoding: Toward neurosteered hearing devices. IEEE Signal Processing Magazine, 38(4):89-102.
287
+ Sreyan Ghosh, Sonal Kumar, Ashish Seth, Chandra Kiran Reddy Evuru, Utkarsh Tyagi, S Sakshi, Oriol Nieto, Ramani Duraiswami, and Dinesh Manocha. 2024. GAMA: A large audio-language model with advanced audio understanding and complex reasoning abilities. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 6288-6313, Miami, Florida, USA. Association for Computational Linguistics.
288
+ Yuan Gong, Alexander H Liu, Hongyin Luo, Leonid Karlinsky, and James Glass. 2023. Joint audio and speech understanding. In 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 1-8. IEEE.
289
+ Yuan Gong, Hongyin Luo, Alexander H. Liu, Leonid Karlinsky, and James R. Glass. 2024. Listen, think, and understand. In *The Twelfth International Conference on Learning Representations*.
290
+ Albert Gu and Tri Dao. 2024. Mamba: Linear-time sequence modeling with selective state spaces. In First Conference on Language Modeling.
291
+ Cong Han, James O'Sullivan, Yi Luo, Jose Herrero, Ashesh D. Mehta, and Nima Mesgarani. 2019. Speaker-independent auditory attention decoding without access to clean speech sources. Science Advances, 5(5):eaav6134.
292
+ Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM transactions on audio, speech, and language processing, 29:3451-3460.
293
+ Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.
294
+ Shujie Hu, Long Zhou, Shujie Liu, Sanyuan Chen, Lingwei Meng, Hongkun Hao, Jing Pan, Xunying Liu, Jinyu Li, Sunit Sivasankaran, Linquan Liu, and Furu Wei. 2024. WavLLM: Towards robust and adaptive
295
+
296
+ speech large language model. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 4552-4572, Miami, Florida, USA. Association for Computational Linguistics.
297
+ Shengpeng Ji, Jialong Zuo, Minghui Fang, Ziyue Jiang, Feiyang Chen, Xinyu Duan, Baoxing Huai, and Zhou Zhao. 2024. Textrolspeech: A text style control speech corpus with codec language text-to-speech models. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 10301-10305. IEEE.
298
+ Wei-Bang Jiang, Yansen Wang, Bao-Liang Lu, and Dongsheng Li. 2024a. Neurolm: A universal multi-task foundation model for bridging the gap between language and eeg signals. Preprint, arXiv:2409.00101.
299
+ Xilin Jiang, Cong Han, Yinghao Aaron Li, and Nima Mesgarani. 2024b. Listen, chat, and edit: Text-guided soundscape modification for enhanced auditory experience. arXiv preprint arXiv:2402.03710.
300
+ Xilin Jiang, Cong Han, and Nima Mesgarani. 2024c. Dual-path mamba: Short and long-term bidirectional selective structured state space models for speech separation. arXiv preprint arXiv:2403.18257.
301
+ Xilin Jiang, Yinghao Aaron Li, Adrian Nicolas Florea, Cong Han, and Nima Mesgarani. 2024d. Speech slytherin: Examining the performance and efficiency of mamba for speech separation, recognition, and synthesis. arXiv preprint arXiv:2407.09732.
302
+ Jonathan W. Kim, Ahmed Alaa, and Danilo Bernardo. 2024. Eeg-gpt: Exploring capabilities of large language models for eeg classification and interpretation. Preprint, arXiv:2401.18006.
303
+ Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR (Poster).
304
+ Qiuqiang Kong, Yin Cao, Turab Iqbal, Yuxuan Wang, Wenwu Wang, and Mark D Plumbley. 2020. Panns: Large-scale pretrained audio neural networks for audio pattern recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:2880-2894.
305
+ Jonathan Le Roux, Scott Wisdom, Hakan Erdogan, and John R Hershey. 2019. Sdr-half-baked or well done? In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 626-630. IEEE.
306
+ Yinghao Aaron Li, Cong Han, Vinay Raghavan, Gavin Mischler, and Nima Mesgarani. 2023. Styletts 2: Towards human-level text-to-speech through style diffusion and adversarial training with large speech language models. In Advances in Neural Information Processing Systems, volume 36, pages 19594-19621. Curran Associates, Inc.
307
+ Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.
308
+
309
+ Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.
310
+ Nima Mesgarani and Edward F. Chang. 2012. Selective cortical representation of attended speaker in multitalker speech perception. Nature, 485(7397):233-236.
311
+ Sean Metzger, Kaylo Littlejohn, Alexander Silva, David Moses, Margaret Seaton, Ran Wang, Maximilian Dougherty, Jessie Liu, Peter Wu, Michael Berger, Inga Zhuravleva, Adelyn Tu-Chan, Karunesh Ganguly, Gopala Anumanchipalli, and Edward Chang. 2023. A high-performance neuroprosthesis for speech decoding and avatar control. Nature, 620.
312
+ David A. Moses, Sean L. Metzger, Jessie R. Liu, Gopala K. Anumanchipalli, Joseph G. Makin, Pengfei F. Sun, Josh Chartier, Maximilian E. Dougherty, Patricia M. Liu, Gary M. Abrams, Adelyn Tu-Chan, Karunesh Ganguly, and Edward F. Chang. 2021. Neuroprosthesis for decoding speech in a paralyzed person with anarthria. New England Journal of Medicine, 385(3):217-227.
313
+ Omer Moussa, Dietrich Klakow, and Mariya Toneva. 2025. Improving semantic understanding in speech language models via brain-tuning. In The Thirteenth International Conference on Learning Representations.
314
+ OpenAI. 2024. Gpt-4o mini.
315
+ Charles E Osgood. 1959. Perception and Communication. D. E. Broadbent. Pergamon, New York, 1958. 338 pp. $8.50. Science, 129(3353):893-893.
316
+ James A O'Sullivan, Alan J Power, Nima Mesgarani, Siddharth Rajaram, John J Foxe, Barbara G Shinn-Cunningham, Malcolm Slaney, Shihab A Shamma, and Edmund C Lalor. 2015. Attentional selection in a cocktail party environment can be decoded from single-trial eeg. Cerebral cortex, 25(7):1697-1706.
317
+ James O'Sullivan, Zhuo Chen, Jose Herrero, Guy M McKhann, Sameer A Sheth, Ashesh D Mehta, and Nima Mesgarani. 2017. Neural decoding of attentional selection in multi-speaker environments without access to clean sources. Journal of Neural Engineering, 14(5):056001.
318
+ Zexu Pan, Marvin Borsdorf, Siqi Cai, Tanja Schultz, and Haizhou Li. 2024. NeuroHeed: Neuro-steered speaker extraction using eeg signals. IEEE/ACM Transactions on Audio, Speech, and Language Processing.
319
+ Zexu Pan, Gordon Wichern, FranΓ§ois G. Germain, Sameer Khurana, and Jonathan Le Roux. 2023. NeuroHeed+: Improving neuro-steered speaker extraction with joint auditory attention detection. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 11456-11460.
320
+
321
+ Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, page 311-318, USA. Association for Computational Linguistics.
322
+ Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2023. Robust speech recognition via large-scale weak supervision. In International conference on machine learning, pages 28492-28518. PMLR.
323
+ S Sakshi, Utkarsh Tyagi, Sonal Kumar, Ashish Seth, Ramaneswaran Selvakumar, Oriol Nieto, Ramani Duraiswami, Sreyan Ghosh, and Dinesh Manocha. 2024. Mmau: A massive multi-task audio understanding and reasoning benchmark. arXiv preprint arXiv:2410.19168.
324
+ Barbara G Shinn-Cunningham and Virginia Best. 2008. Selective attention in normal and impaired hearing. Trends in amplification, 12(4):283-299.
325
+ David Snyder, Daniel Garcia-Romero, Gregory Sell, Daniel Povey, and Sanjeev Khudanpur. 2018. X-vectors: Robust dnn embeddings for speaker recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5329-5333.
326
+ Changli Tang, Wenyi Yu, Guangzhi Sun, Xianzhao Chen, Tian Tan, Wei Li, Lu Lu, Zejun Ma, and Chao Zhang. 2024. SALMONN: Towards generic hearing abilities for large language models. In The Twelfth International Conference on Learning Representations.
327
+ Joachim Thiemann, Nobutaka Ito, and Emmanuel Vincent. 2013. The diverse environments multi-channel acoustic noise database (demand): A database of multichannel environmental noise recordings. The Journal of the Acoustical Society of America, 133:3591.
328
+ Mariya Toneva and Leila Wehbe. 2019. Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain). In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
329
+ Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research, 9(86):2579-2605.
330
+ Servaas Vandecappelle, Lucas Deckers, Neetha Das, Amir Hossein Ansari, Alexander Bertrand, and Tom Francart. 2021. Eeg-based detection of the locus of auditory attention with convolutional neural networks. eLife, 10:e56481.
331
+ Bin Wang, Xunlong Zou, Geyu Lin, Shuo Sun, Zhuohan Liu, Wenyu Zhang, Zhengyuan Liu, AiTi Aw, and Nancy F Chen. 2025. Audiobench: A universal benchmark for audio large language models. In NAACL.
332
+
333
+ DeLiang Wang and Guy J Brown. 2006. Computational auditory scene analysis: Principles, algorithms, and applications. Wiley-IEEE press.
334
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837.
335
+ Francis Willett, Erin Kunz, Chaofei Fan, Donald Avansino, Guy Wilson, Eun Choi, Foram Kamdar, Matthew Glasser, Leigh Hochberg, Shaul Druckmann, Krishna Shenoy, and Jaimie Henderson. 2023. A high-performance speech neuroprosthesis. Nature, 620.
336
+ Junkai Wu, Xulin Fan, Bo-Ru Lu, Xilin Jiang, Nima Mesgarani, Mark Hasegawa-Johnson, and Mari Ostendorf. 2024. Just asr+ llm? a study on speech large language models' ability to identify and understand speaker in spoken dialogue. In 2024 IEEE Spoken Language Technology Workshop (SLT), pages 1137-1143. IEEE.
337
+ An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. 2024. Qwen2 technical report. arXiv preprint arXiv:2407.10671.
338
+ Heiga Zen, Viet Dang, Rob Clark, Yu Zhang, Ron J. Weiss, Ye Jia, Zhifeng Chen, and Yonghui Wu. 2019. Libritts: A corpus derived from librispeech for text-to-speech. In Interspeech 2019, pages 1526-1530.
339
+ Ruizhe Zheng and Lichao Sun. 2024. Llm4brain: Training a large language model for brain video understanding. Preprint, arXiv:2409.17987.
340
+
341
+ # A Dataset Details
342
+
343
+ We curated two datasets for this work. The first is a clinical dataset collected in a hospital setting from epilepsy patients with implanted intracranial electrodes. This dataset includes both neural and speech signals and was used to train the speaker predictor and evaluate AAD-LLM. The second is a synthetic speech-only dataset generated from publicly available speech and noise corpora, which was primarily used to train AAD-LLM.
344
+
345
+ # A.1 Clinical Dataset
346
+
347
+ This study involved six human participants, recruited from three medical centers: two from North Shore University Hospital (NSUH), two from Columbia University Irving Medical Center (CUIMC), and two from NYU Langone Health. All participants were undergoing clinical treatment for epilepsy and were implanted with intracranial electrodes for monitoring.
348
+
349
+ ![](images/7b9441135d376d4278015548e346a25ee5a9c2f19e34df2efb776eeb64fef55e.jpg)
350
+ Figure 4: Electrode coverage across subjects in the clinical dataset, with each subject represented by a different color.
351
+
352
+ Each participant had electrode implants tailored to their clinical needs. Some participants were implanted with both subdural electrocorticography (ECoG) grids and stereo-electroencephalography (sEEG) depth electrodes, while others had only sEEG depth electrodes. Electrode coverage across subjects is shown in Figure 4.
353
+
354
+ Neural recordings were bandpass-filtered to extract low-frequency components in the $0.5 - 30\mathrm{Hz}$ range. Electrodes that were visually identified as disconnected from anatomical tissues were excluded from the analysis. To maximize brain coverage, electrode recordings from each participant were concatenated.
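+ As a rough illustration of this preprocessing step, the band-pass filtering could be implemented as follows (a minimal sketch; the sampling rate, filter order, and function name are illustrative assumptions, not taken from the paper):
+
+ ```python
+ import numpy as np
+ from scipy.signal import butter, filtfilt
+
+ def bandpass_neural(recordings: np.ndarray, fs: float = 1000.0,
+                     low: float = 0.5, high: float = 30.0) -> np.ndarray:
+     """Zero-phase band-pass filter (0.5-30 Hz) applied per electrode.
+
+     recordings: array of shape (n_electrodes, n_samples); fs is assumed.
+     """
+     b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
+     return filtfilt(b, a, recordings, axis=-1)
+ ```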
355
+
356
+ All participants listened to 28 trials with an average duration of $44.2\mathrm{~s}$ (standard deviation $= 2.0\mathrm{~s}$) each. The trials consisted of two concurrent and independent conversations that were equally loud and spatially separated. Diotic background noise (either "pedestrian" or "speech babble") was also mixed with the conversations at a power either 9 or $12\mathrm{~dB}$ below that of a conversation stream. The subjects were instructed to follow (attend to) the conversation that started first; the to-be-unattended conversation started 3 seconds later. The trials were spatialized using head-related transfer functions (HRTFs) and delivered to the subjects via earphones.
357
+
358
+ The talkers in both conversations intentionally repeated words. To ensure that the participants tracked the target conversation, they were asked to press a button whenever they heard repeats in the target conversation. All participants were able to track the repeated words in the cued target conversation.
361
+
362
+ # A.2 Speech-Only Dataset
363
+
364
+ The speech-only dataset was constructed using speech utterances from the train-clean-100 and train-clean-360 subsets of LibriTTS$^2$ (Zen et al., 2019), along with noise samples from DEMAND$^3$ (Thiemann et al., 2013). We filtered out utterances shorter than 0.5 seconds or longer than 15 seconds and randomly combined two speech utterances of similar duration but from different speakers with one of 18 environmental noise types, such as park, office, and metro station. The speech sources were normalized to have equal energy and then mixed with background noise at SNR levels of 9 or $12\mathrm{~dB}$. This process aimed to replicate the auditory conditions of the clinical dataset while introducing a more diverse set of speakers and noise types to improve model generalizability.
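+ The mixing procedure can be sketched as follows (assuming the two utterances and the noise clip have already been trimmed to a common length; the function name is illustrative):
+
+ ```python
+ import numpy as np
+
+ def mix_scene(s1: np.ndarray, s2: np.ndarray, noise: np.ndarray,
+               snr_db: float) -> np.ndarray:
+     """Mix two equal-energy speech sources with noise at the given SNR."""
+     s1 = s1 / np.sqrt(np.mean(s1**2) + 1e-8)      # normalize to equal energy
+     s2 = s2 / np.sqrt(np.mean(s2**2) + 1e-8)
+     noise = noise / np.sqrt(np.mean(noise**2) + 1e-8)
+     gain = 10 ** (-snr_db / 20)                   # noise sits snr_db below one source
+     return s1 + s2 + gain * noise
+
+ # mixture = mix_scene(utt_a, utt_b, demand_clip, snr_db=np.random.choice([9, 12]))
+ ```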
365
+
366
+ In total, we generated 57,963 speech-and-noise mixtures from 1,146 speakers, which were randomly split into 53,963 (85.3 hours) for training, 1,000 (1.6 hours) for validation, and 3,000 (4.8 hours) for testing. The speech-only dataset has a distinct set of speakers and sentences from the clinical dataset. The validation and testing sets were primarily used for model development and in-domain evaluation.
367
+
368
+ The gender of each speaker and the transcription of the speech were obtained from the LibriTTS corpus. Additionally, we retrieved the "pitch" and "tempo" labels for the LibriTTS utterances from TextrolSpeech$^4$ (Ji et al., 2024). Both pitch and tempo were quantized into three levels. For pitch, utterances with a fundamental frequency below $136.6\mathrm{Hz}$ were labeled as "low", above $196.1\mathrm{Hz}$ as "high", and those in between as "normal". For tempo, utterances with an average speaking rate slower than 0.39 seconds per word were labeled as "low", faster than 0.25 seconds per word as "high", and those in between as "normal". We applied these same pitch and tempo thresholds to annotate speech utterances in the clinical dataset, serving as the ground-truth labels for the speaker description task.
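+ The quantization described above amounts to two simple threshold functions (a direct transcription of the stated cutoffs):
+
+ ```python
+ def pitch_label(f0_hz: float) -> str:
+     if f0_hz < 136.6:        # fundamental frequency below 136.6 Hz
+         return "low"
+     if f0_hz > 196.1:        # above 196.1 Hz
+         return "high"
+     return "normal"
+
+ def tempo_label(sec_per_word: float) -> str:
+     if sec_per_word > 0.39:  # slower than 0.39 s/word
+         return "low"
+     if sec_per_word < 0.25:  # faster than 0.25 s/word
+         return "high"
+     return "normal"
+ ```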
371
+
372
+ # B Model and Training Details
373
+
374
+ # B.1 Auditory Large Language Model
375
+
376
+ AAD-LLM adopts the backbone of Qwen2-Audio (Chu et al., 2024). The pretrained checkpoint Qwen2-Audio-7B-Instruct is publicly available<sup>5</sup>. We further finetuned both the LLM and the speech encoder on the speech-only dataset using low-rank adaptation (LoRA; Hu et al., 2022) applied to the key, query, and value matrices of the attention layers and the weight matrices of the multilayer perceptrons. We used a rank of 512, an $\alpha$ of 512, and a dropout of 0.05 by default, which adds around $16.5\%$ trainable parameters.
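+ With the peft library, this LoRA configuration could look roughly like the following (the exact target module names depend on the Qwen2-Audio implementation and are an assumption here):
+
+ ```python
+ from peft import LoraConfig, get_peft_model
+
+ lora_config = LoraConfig(
+     r=512,
+     lora_alpha=512,
+     lora_dropout=0.05,
+     # attention key/query/value projections and MLP weight matrices
+     target_modules=["q_proj", "k_proj", "v_proj", "gate_proj", "up_proj", "down_proj"],
+ )
+ # model = get_peft_model(base_model, lora_config)  # base_model: Qwen2-Audio-7B-Instruct
+ ```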
377
+
378
+ We added a special token $<\text{ATT}>$ as the placeholder for the listener's attention (speaker vector). In reality, $<\text{ATT}>$ is $<\text{extra\_124}>$ from Qwen2-Audio's reserved special token sets. Then, the entire input to the LLM looks like the following:
379
+
380
+ ```txt
381
+ system: You are a helpful assistant.
382
383
+ user: Attention: <ATT>
384
+ Audio 1: <speech1>
385
+ Audio 2: <speech2>
386
+ Question: <question>
387
+ Solution:
388
+ Attention:<att_spk_label>;
389
+ Spk1:<spk1_label>;
390
+ Spk2:<spk2_label>;
391
+ <solution>
392
+ ```
393
+
394
+ <att_spk_label>, <spk1_label>, and <spk2_label> are the ground-truth labels of the attended speaker, the first input speaker, and the second input speaker, respectively. All are integers from 0 to K-1, with $\mathrm{K} = 8$ by default. (Note that <att_spk_label> β‰  <ATT>: the former is an integer label, while the latter is the projected speaker vector.) The tokens corresponding to the chain-of-thought prefix (the Attention/Spk1/Spk2 lines above) and the actual solution were optimized. The maximum number of allowed tokens is 1024.
395
+
396
+ The speaker projector is a linear layer mapping from 512 dimensions (the x-vector dimension) to 4096 (the LLM embedding dimension). It was optimized jointly with the audio encoder and the LLM. The projected speaker vector replaces the embedding at the <ATT> token.
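+ In PyTorch, the projector reduces to a single linear layer (a minimal sketch of the mapping described above):
+
+ ```python
+ import torch.nn as nn
+
+ class SpeakerProjector(nn.Module):
+     """Maps a 512-d x-vector to the 4096-d LLM embedding space."""
+
+     def __init__(self, xvec_dim: int = 512, llm_dim: int = 4096):
+         super().__init__()
+         self.proj = nn.Linear(xvec_dim, llm_dim)
+
+     def forward(self, xvec):
+         return self.proj(xvec)  # output replaces the embedding at the <ATT> token
+ ```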
397
+
398
+ Table 5: Intention-informed auditory scene understanding performance in the clinical testing set when listeners were attending to one of the two speakers with background noise. A higher number indicates a better performance for all metrics except word error rate (WER). This is the full version of Table 2.
399
+
400
+ <table><tr><td rowspan="2">Task / Model</td><td colspan="2">Description (G | P | T)</td><td colspan="2">Transcription (WER↓ | BLEU)</td><td colspan="2">Summarization (ROUGE-L | METEOR | BERT)</td><td colspan="2">Free Q&amp;A (ROUGE-L | METEOR | BERT)</td></tr><tr><td>Foreground</td><td>Background</td><td>Foreground</td><td>Background</td><td>Foreground</td><td>Background</td><td>Foreground</td><td>Background</td></tr><tr><td colspan="9">(Baselines) Auditory LLM without Intention</td></tr><tr><td>LTU-AS (Gong et al., 2023)</td><td>82.6 | 38.4 | 25.8</td><td>76.6 | 44.0 | 36.0</td><td>139.8 | 44.8</td><td>172.6 | 7.3</td><td>40.2 | 38.9 | 89.7</td><td>27.9 | 31.0 | 87.0</td><td>39.0 | 47.4 | 89.9</td><td>30.4 | 34.3 | 88.0</td></tr><tr><td>SALMONN (Tang et al., 2024)</td><td>82.0 | 28.0 | 56.0</td><td>80.0 | 32.0 | 44.0</td><td>145.4 | 32.6</td><td>225.3 | 15.6</td><td>39.9 | 39.3 | 88.9</td><td>36.5 | 35.9 | 88.4</td><td>49.4 | 49.2 | 91.4</td><td>44.4 | 45.5 | 86.8</td></tr><tr><td>Qwen-Audio (Chu et al., 2023)</td><td>61.2 | 33.2 | 14.0</td><td>58.8 | 36.0 | 8.0</td><td>82.7 | 26.3</td><td>112.8 | 9.6</td><td>29.3 | 30.0 | 87.5</td><td>24.5 | 22.8 | 86.6</td><td>35.1 | 40.1 | 89.0</td><td>30.9 | 34.3 | 87.4</td></tr><tr><td>WavLLM (Hu et al., 2024)</td><td>41.4 | 34.6 | 49.2</td><td>45.6 | 27.4 | 40.2</td><td>94.7 | 24.8</td><td>128.3 | 7.9</td><td>35.6 | 34.2 | 88.8</td><td>38.3 | 38.4 | 89.0</td><td>41.5 | 40.5 | 90.1</td><td>40.2 | 38.9 | 89.7</td></tr><tr><td>GAMA (Ghosh et al., 2024)</td><td>64.4 | 34.6 | 38.6</td><td>61.8 | 37.6 | 45.2</td><td>n.a.</td><td>n.a.</td><td>19.0 | 17.1 | 85.1</td><td>22.6 | 23.6 | 85.7</td><td>24.0 | 28.6 | 87.2</td><td>24.5 | 29.5 | 86.8</td></tr><tr><td>Qwen2-Audio (Chu et al., 2024)</td><td>78.6 | 37.0 | 37.0</td><td>67.6 | 38.8 | 14.0</td><td>90.1 | 30.5</td><td>124.7 | 11.1</td><td>27.5 | 29.0 | 87.3</td><td>15.8 | 15.9 | 85.4</td><td>39.9 | 40.4 | 90.0</td><td>34.9 | 33.3 | 88.6</td></tr><tr><td colspan="9">(Lower Bound) Random Speaker + Auditory LLM</td></tr><tr><td>Qwen2-Audio</td><td>78.2 | 36.6 | 34.8</td><td>73.4 | 39.0 | 15.2</td><td>89.5 | 33.3</td><td>106.9 | 31.2</td><td>24.4 | 24.2 | 86.8</td><td>19.0 | 21.0 | 85.9</td><td>40.7 | 42.4 | 90.0</td><td>38.3 | 37.0 | 89.3</td></tr><tr><td>Qwen2-Audio</td><td>81.8 | 65.2 | 60.8</td><td>79.0 | 63.4 | 62.2</td><td>71.8 | 33.0</td><td>74.6 | 33.7</td><td>30.2 | 29.2 | 88.9</td><td>29.6 | 27.8 | 88.0</td><td>50.0 | 51.9 | 91.6</td><td>44.9 | 45.7 | 90.3</td></tr><tr><td colspan="9">(Proposed Baseline) Extracted Speaker + Auditory LLM</td></tr><tr><td>Qwen2-Audio</td><td>91.4 | 41.2 | 36.2</td><td>85.2 | 33.8 | 14.8</td><td>53.6 | 57.1</td><td>73.2 | 60.3</td><td>37.0 | 41.3 | 89.4</td><td>28.2 | 33.4 | 87.7</td><td>50.8 | 54.2 | 91.7</td><td>47.9 | 46.8 | 90.5</td></tr><tr><td>Qwen2-Audio</td><td>100.0 | 88.6 | 75.8</td><td>95.4 | 59.8 | 77.6</td><td>18.5 | 73.0</td><td>24.6 | 66.2</td><td>54.5 | 53.9 | 93.4</td><td>41.0 | 40.5 | 89.7</td><td>62.3 | 65.4 | 93.6</td><td>58.0 | 60.0 | 92.5</td></tr><tr><td colspan="9">(Upper Bound) Oracle Speaker + Auditory LLM</td></tr><tr><td>Qwen2-Audio</td><td>100.0 | 95.0 | 80.2</td><td>97.0 | 94.0 | 81.4</td><td>6.6 | 86.3</td><td>19.4 | 70.2</td><td>59.7 | 61.1 | 94.3</td><td>46.3 | 47.6 | 90.7</td><td>64.9 | 68.0 | 94.1</td><td>60.3 | 61.7 | 93.1</td></tr><tr><td colspan="9">(Proposed Method) Intention-Informed AAD-LLM</td></tr><tr><td>w/ brain-decoded attention</td><td>98.6 | 93.6 | 75.6</td><td>98.6 | 92.2 | 76.2</td><td>14.4 | 78.6</td><td>33.2 | 58.7</td><td>58.3 | 56.9 | 93.6</td><td>42.3 | 42.5 | 89.9</td><td>63.1 | 64.6 | 93.9</td><td>57.9 | 59.1 | 92.6</td></tr><tr><td>w/ oracle attention</td><td>99.8 | 93.4 | 76.4</td><td>99.6 | 94.2 | 77.4</td><td>12.5 | 80.0</td><td>33.9 | 60.0</td><td>59.7 | 58.5 | 93.8</td><td>42.7 | 43.2 | 90.1</td><td>63.0 | 64.8 | 93.9</td><td>58.1 | 59.2 | 92.6</td></tr><tr><td>w/ additional Clinical-15m</td><td>99.6 | 87.8 | 80.2</td><td>99.2 | 99.0 | 78.8</td><td>6.0 | 85.9</td><td>22.5 | 69.2</td><td>60.9 | 59.7 | 94.2</td><td>44.9 | 45.6 | 90.8</td><td>63.2 | 65.1 | 93.9</td><td>59.3 | 60.2 | 92.9</td></tr></table>
402
+
403
+ We trained the model using an AdamW optimizer (Loshchilov and Hutter, 2019) with a learning rate of $1 \times 10^{-4}$, scheduled with a cosine decay and a warmup ratio of $10\%$ of total steps. The model was trained for 20 epochs using a batch size of 1 and gradient accumulation steps of 8, for an effective batch size of 8. Mixed precision training with bf16 was enabled to optimize memory usage and training speed. The total training took around 4.5 days on an NVIDIA L40 GPU.
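+ For reference, the same hyperparameters expressed with Hugging Face's TrainingArguments would look roughly like this (the output directory is illustrative):
+
+ ```python
+ from transformers import TrainingArguments
+
+ args = TrainingArguments(
+     output_dir="aad_llm_ckpt",        # illustrative path
+     learning_rate=1e-4,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.1,
+     num_train_epochs=20,
+     per_device_train_batch_size=1,
+     gradient_accumulation_steps=8,    # effective batch size of 8
+     bf16=True,
+ )
+ ```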
404
+
405
+ We trained all ablation models, as well as the baseline Qwen2-Audio on a single speaker, with exactly the same training configuration.
406
+
407
+ # B.2 Speaker Clusters
408
+
409
+ $\mathrm{K} = 8$ speaker clusters were obtained by K-means. We randomly sampled 10,000 utterances from 1,137 speakers in LibriTTS and extracted their x-vectors using a pretrained speaker verification model. The x-vector extractor has an output dimension of 512 and was kept frozen. See the right subfigure of Figure 5 for the distribution of the 10,000 x-vectors, colored by the nearest cluster.
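+ The clustering step itself is plain K-means over the precomputed x-vectors (a minimal sketch; the input file name is illustrative):
+
+ ```python
+ import numpy as np
+ from sklearn.cluster import KMeans
+
+ xvectors = np.load("libritts_xvectors.npy")   # (10000, 512), from the frozen extractor
+ kmeans = KMeans(n_clusters=8, random_state=0).fit(xvectors)
+ centroids = kmeans.cluster_centers_           # one 512-d centroid per speaker cluster
+ labels = kmeans.labels_                       # cluster index (0-7) per utterance
+ ```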
410
+
411
+ # B.3 Speaker Predictor
412
+
413
+ The speaker predictor classifies the neural signal $Z$ into one of $K = 8$ discrete speaker labels. The predictor consists of a bidirectional recurrent neural network (RNN) followed by a temporal pooling layer and a fully connected classification head.
414
+
415
+ Before passing the neural signals into the recurrent layer, layer normalization is applied. The recurrent module is a bidirectional long short-term memory (LSTM) network with a hidden state size of $S = 64$ , resulting in an output dimension of $2S = 128$ due to bidirectionality. The LSTM output is processed by a mean pooling layer, followed by a fully connected layer with 128 hidden units and a final softmax activation for classification.
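+ A minimal PyTorch sketch of this architecture follows (the ReLU in the classification head is an assumption; the paper only specifies the layer sizes):
+
+ ```python
+ import torch.nn as nn
+
+ class SpeakerPredictor(nn.Module):
+     """LayerNorm -> BiLSTM (hidden 64) -> mean pooling -> FC(128) -> K logits."""
+
+     def __init__(self, n_channels: int, hidden: int = 64, n_classes: int = 8):
+         super().__init__()
+         self.norm = nn.LayerNorm(n_channels)
+         self.lstm = nn.LSTM(n_channels, hidden, batch_first=True, bidirectional=True)
+         self.head = nn.Sequential(
+             nn.Linear(2 * hidden, 128), nn.ReLU(), nn.Linear(128, n_classes)
+         )
+
+     def forward(self, z):                  # z: (batch, time, n_channels)
+         out, _ = self.lstm(self.norm(z))   # (batch, time, 2 * hidden)
+         return self.head(out.mean(dim=1))  # softmax is folded into the loss
+ ```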
416
+
417
+ We trained the predictor on 280 samples with aligned neural and speech signals in the clinical training set, obtaining the ground-truth speaker label from the speech signal. The predictor was optimized with a cross-entropy loss between the predicted and ground-truth speaker labels, using an Adam optimizer with a constant learning rate of $1 \times 10^{-4}$ and a batch size of 1, for a total of 30 epochs, which took fewer than 10 minutes on a single GPU.
418
+
419
+ # B.4 Speech Separator
420
+
421
+ The speech separator reproduces Mamba-TasNet (M) (Jiang et al., 2024c,d), which has a linear waveform encoder and decoder and 32 Mamba (Gu and Dao, 2024) layers with a dimension of 256. The model is tiny (15.6M parameters) and intention-uninformed. It was trained separately on the speech-only dataset using an Adam optimizer (Kingma and Ba, 2015) with a learning rate of $2 \times 10^{-4}$. A cosine learning rate schedule with 20,000 warmup steps and a ReduceLROnPlateau scheduler were applied, halving the learning rate if performance plateaued after 30 epochs. The model was trained for 50 epochs with a batch size of 4 and gradient clipping set to 5. Mixed precision training with bf16 was enabled for efficiency. The training was conducted on an NVIDIA L40 GPU.
424
+
425
+ The target speech extractor, used as a baseline model in Tables 1 and 2, matches the speech separator in both the number of Mamba layers and the dimension, ensuring comparable acoustic processing capabilities. To condition the extractor on the speaker, we fused the acoustic features with the speaker vector following the approach of Jiang et al. (2024b). The extractor was trained on the same data with an identical training configuration.
426
+
427
+ # C Tasks, Prompts, and Metrics
428
+
429
+ # C.1 Questions and Solutions
430
+
431
+ We wrote eight different questions for each task except Free Q&A, for which GPT-4o mini generated three questions and reference solutions uniquely for each utterance.
432
+
433
+ Here are three example questions for foreground speaker description:
434
+
435
+ Q1: "Describe the attended speaker."
436
+ Q2: "Please write a description of the attended speaker."
437
+ Q3: "Can you identify the person the subject is listening to?"
438
+
439
+ The solution to the description question is formatted as:
440
+
441
+ A:"A <gender_label> speaker with <pitch_label> pitch and <tempo_label> tempo."
442
+
443
+ The pitch and tempo labels are "low", "normal", or "high" (Appendix A.2).
444
+
445
+ Here are three example questions for background speech summarization:
446
+
447
+ Q1: "What is the background speaker talking about?",
448
+ Q2: "Can you summarize the speech of the speaker being ignored?",
449
+ Q3: "What topic is the background speaker discussing?",
450
+
451
+ We gave GPT-4o mini the transcription of the speech to generate three candidate summaries as the solutions.
452
+
453
+ In each training epoch, a random question from a random speaker was sampled for every utterance.
454
+
455
+ # C.2 Free Q&A Generation
456
+
457
+ Three pairs of freeform questions and answers were generated for each of the two speakers of each utterance with the following prompt to GPT-4o mini:
458
+
459
+ "You are listening to a conversation.
460
+
461
+ One speaker said: "<transcription>"
462
+
463
+ Raise 3 questions and provide an answer for each one of them. Be short. Don't use any information outside the speech.
464
+
465
+ Answer in this format:
466
+
467
+ Question 1: ... Answer 1: ...
+
+ Question 2: ... Answer 2: ...
+
+ Question 3: ... Answer 3: ..."
472
+
473
+ Here are three example pairs of free Q&A questions and solutions from three different utterances and speakers:
474
+
475
+ Q1: "How might the description affect the mood of the conversation?"
476
+ A1: "It may create a sense of confusion or disorientation due to the winding nature of the street."
477
+ Q2: "Why are they referred to as 'The Flying Stars'."
478
+ A2: "Because they've been stolen so often."
479
+ Q3: "What is the name of the establishment mentioned?"
480
+ A3: "The name of the establishment mentioned is the Royal Oak."
481
+
482
+ # C.3 Evaluation Details
483
+
484
+ Most of the evaluation metrics were computed with Hugging Face's Evaluate package<sup>7</sup>. In particular:
485
+ ```python
+ import evaluate
+
+ bleu_metric = evaluate.load("bleu")
+ rouge_metric = evaluate.load("rouge")
+ meteor_metric = evaluate.load("meteor")
+ bert_metric = evaluate.load("bertscore")  # BERTScore is computed with lang="en"
+ ```
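+ The scores can then be obtained along these lines (a minimal usage sketch; `preds` and `refs` are illustrative placeholders):
+
+ ```python
+ preds = ["the trip lasted ten days"]
+ refs = [["the trip lasted ten days"]]  # one or more references per prediction
+
+ bleu = bleu_metric.compute(predictions=preds, references=refs)["bleu"]
+ rouge_l = rouge_metric.compute(predictions=preds, references=refs)["rougeL"]
+ meteor = meteor_metric.compute(predictions=preds, references=refs)["meteor"]
+ bert_f1 = bert_metric.compute(predictions=preds, references=refs, lang="en")["f1"]
+ ```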
489
+
490
+ Prior to computing these metrics, we applied text preprocessing steps to ensure a fair comparison, particularly benefiting the baseline models. The preprocessing included:
491
+
492
+ - Remove common prefixes like "The attended speaker is discussing about" and "Spoken text:".
493
+ - Apply BasicTextNormalizer<sup>8</sup>.
494
+
495
+ Table 6: AAD-LLM's attentional state can be transferred to other tasks unseen during intention-informed training, such as speech translation.
496
+
497
+ <table><tr><td rowspan="2">Task / Model</td><td colspan="2">Translation (BLEU | METEOR)</td></tr><tr><td>Foreground</td><td>Background</td></tr><tr><td>Qwen2-Audio</td><td>41.0 | 28.6</td><td>34.2 | 23.4</td></tr><tr><td>Random Speaker + Qwen2-Audio</td><td>40.5 | 26.5</td><td>22.1 | 9.8</td></tr><tr><td>Oracle Speaker + Qwen2-Audio</td><td>66.8 | 57.5</td><td>51.9 | 43.4</td></tr><tr><td>AAD-LLM w/ brain-decoded attention</td><td>61.8 | 53.8</td><td>45.5 | 35.1</td></tr></table>
499
+
500
+ - For speech translation (English→Chinese), apply the jieba tokenizer to the Chinese text<sup>9</sup>.
501
+
502
+ Additionally, for the target speech extraction task (Table 1), the word error rate (WER) was computed with the speech recognition model Whisper (Radford et al., 2023) (whisper-large-v3$^{10}$), and the speaker similarity (SIM) was computed with WavLM (Chen et al., 2022a) (wavlm-base-plus-sv$^{11}$) for speaker verification.
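+ A minimal sketch of the WER computation (the audio file name and reference variable are illustrative; jiwer is one common choice for the WER itself):
+
+ ```python
+ import jiwer
+ from transformers import pipeline
+
+ asr = pipeline("automatic-speech-recognition", model="openai/whisper-large-v3")
+ hypothesis = asr("extracted_speech.wav")["text"]   # illustrative file name
+ wer = jiwer.wer(reference_transcript, hypothesis)  # reference_transcript: ground truth
+ ```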
503
+
504
+ For the speaker description task (Table 2), since the baseline pretrained models were not trained to output all of gender, pitch, and tempo in one response, we evaluated them separately as three simpler tasks. We also informed the models of the pitch and tempo cutoffs corresponding to "low", "normal", and "high" in the question.
505
+
506
+ # D Additional Results
507
+
508
+ # D.1 Evaluation on Untrained Task: Speech Translation
509
+
510
+ We further investigated whether the selective processing of the foreground or the background speech was restricted to the tasks AAD-LLM was trained on, or whether this attentional state could generalize to new tasks. To test this, we evaluated the English-to-Chinese speech translation performance of AAD-LLM after intention-informed training on other tasks. The results, presented in Table 6, were compared against reference Chinese translations generated by GPT-4o (Achiam et al., 2023) from the corresponding English speech.
511
+
512
+ The results show that AAD-LLM significantly outperforms Qwen2-Audio with both speech mixtures and random speakers as input, while approaching the upper bound set by Qwen2-Audio with oracle speakers, a pattern consistent with the results observed on the tasks AAD-LLM was trained on. This suggests that auditory attention is a generalizable capability that can be successfully transferred to other tasks without further training.
513
+
514
+ Table 7: Free Q&A on same-topic speech mixtures. Both foreground and background speech can be used to answer the question, so auditory attention to the correct speaker is necessary to answer the question correctly.
515
+
516
+ <table><tr><td>Model</td><td>Free Q&amp;A (ROUGE-L | METEOR | BERT)</td></tr><tr><td>Qwen2-Audio</td><td>41.6 | 44.1 | 90.3</td></tr><tr><td>Random Speaker + Qwen2-Audio</td><td>49.9 | 51.4 | 91.9</td></tr><tr><td>Oracle Speaker + Qwen2-Audio</td><td>65.4 | 68.3 | 94.2</td></tr><tr><td>AAD-LLM w/ brain-decoded attention</td><td>62.0 | 64.2 | 93.7</td></tr></table>
518
+
519
520
+
521
+ # D.2 Evaluation on Speakers Talking About the Same Topic
522
+
523
+ The performance gap between models with and without attention is smaller on the Free Q&A task than on the other tasks (Table 2). This suggests that some questions could be answered solely from the content of the question and the two candidate speech inputs, by selecting the only speech stream related to the question, without decoding the listener's attention. To verify that AAD-LLM genuinely relies on attention rather than exploiting content cues, we designed a more challenging Free Q&A testing set.
524
+
525
+ We replaced the background speech in the clinical testing set with another speech on the same topic that provides a different answer to the question than the foreground speech. The content of the replacement was generated by GPT-4o (Achiam et al., 2023) with the following prompt:
526
+
527
+ Foreground Speech: "<transcription>"
528
+
529
+ Question: "<question>"
530
+
531
+ Solution: "<solution>"
532
+
533
+ I want to test whether a QA system can distinguish between the foreground and background speakers. According to the foreground speech, the question, and the solution given, I want you to come up with a background speech in words. The background speech can also answer the question, but the solution should be different or even opposite. Try to have roughly the same number of words as the foreground speech.
534
+
535
+ Give me the sentence of the background speech directly. No explanation is needed.
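+ A minimal sketch of this generation step with the OpenAI Python client (the full prompt above is abbreviated here; the function name is illustrative):
+
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
+
+ def generate_distractor(transcription: str, question: str, solution: str) -> str:
+     prompt = (
+         f'Foreground Speech: "{transcription}"\n\nQuestion: "{question}"\n\n'
+         f'Solution: "{solution}"\n\n'
+         "I want to test whether a QA system can distinguish between the foreground "
+         "and background speakers. [...] Give me the sentence of the background "
+         "speech directly. No explanation is needed."
+     )
+     response = client.chat.completions.create(
+         model="gpt-4o", messages=[{"role": "user", "content": prompt}]
+     )
+     return response.choices[0].message.content
+ ```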
536
+
537
+ Table 8: Offline evaluation on speech-only data. One of the two speakers is randomly designated as the foreground and the other as the background.
538
+
539
+ <table><tr><td rowspan="2">Task / Model</td><td colspan="2">Description (G | P | T)</td><td colspan="2">Transcription (WER↓ | BLEU)</td><td colspan="2">Summarization (ROUGE-L | METEOR | BERT)</td><td colspan="2">Free Q&amp;A (ROUGE-L | METEOR | BERT)</td></tr><tr><td>Foreground</td><td>Background</td><td>Foreground</td><td>Background</td><td>Foreground</td><td>Background</td><td>Foreground</td><td>Background</td></tr><tr><td colspan="9">(Baseline) Auditory LLM without Intention</td></tr><tr><td>LTU-AS (Gong et al., 2023)</td><td>73.7 | 47.1 | 26.7</td><td>73.1 | 46.5 | 28.1</td><td>148.0 | 26.3</td><td>154.3 | 26.9</td><td>19.5 | 21.6 | 85.8</td><td>17.6 | 19.2 | 85.4</td><td>33.1 | 37.8 | 88.5</td><td>32.6 | 36.9 | 88.4</td></tr><tr><td>SALMONN (Tang et al., 2024)</td><td>73.2 | 33.5 | 71.4</td><td>76.5 | 34.6 | 71.5</td><td>122.1 | 25.5</td><td>118.5 | 27.4</td><td>16.4 | 18.0 | 84.8</td><td>15.4 | 16.8 | 84.6</td><td>44.5 | 46.5 | 90.1</td><td>43.6 | 45.5 | 89.9</td></tr><tr><td>Qwen-Audio (Chu et al., 2023)</td><td>48.2 | 37.5 | 8.5</td><td>47.6 | 37.6 | 9.0</td><td>96.0 | 18.5</td><td>98.1 | 19.5</td><td>17.5 | 18.1 | 85.1</td><td>17.8 | 16.4 | 85.1</td><td>31.4 | 36.3 | 87.5</td><td>30.7 | 35.5 | 87.5</td></tr><tr><td>WavLLM (Hu et al., 2024)</td><td>53.6 | 31.9 | 53.5</td><td>54.0 | 33.7 | 51.2</td><td>84.4 | 21.0</td><td>82.8 | 20.7</td><td>20.6 | 19.4 | 86.1</td><td>19.4 | 18.9 | 85.9</td><td>39.7 | 38.1 | 89.2</td><td>39.1 | 37.4 | 89.1</td></tr><tr><td>GAMA (Ghosh et al., 2024)</td><td>60.2 | 32.6 | 52.9</td><td>58.8 | 34.5 | 50.5</td><td>n.a.</td><td>n.a.</td><td>17.4 | 15.2 | 84.7</td><td>18.6 | 16.9 | 84.8</td><td>23.8 | 28.5 | 86.8</td><td>23.4 | 27.9 | 86.7</td></tr><tr><td>Qwen2-Audio (Chu et al., 2024)</td><td>71.9 | 34.4 | 59.7</td><td>74.3 | 34.2 | 40.1</td><td>81.0 | 25.7</td><td>80.4 | 27.9</td><td>18.5 | 20.4 | 85.5</td><td>17.4 | 18.9 | 85.3</td><td>41.5 | 42.4 | 89.6</td><td>40.2 | 41.3 | 89.4</td></tr><tr><td colspan="9">(Lower Bound) Random Speaker + Auditory LLM</td></tr><tr><td>Qwen2-Audio</td><td>75.9 | 63.4 | 73.1</td><td>72.6 | 63.1 | 73.8</td><td>56.8 | 45.2</td><td>57.7 | 45.2</td><td>27.4 | 25.7 | 88.3</td><td>28.4 | 26.9 | 87.8</td><td>47.1 | 48.3 | 90.7</td><td>46.9 | 48.3 | 90.7</td></tr><tr><td colspan="9">(Proposed Baselines) Extracted Speaker + Auditory LLM</td></tr><tr><td>Qwen2-Audio</td><td>99.1 | 89.5 | 87.4</td><td>95.4 | 82.7 | 84.6</td><td>14.0 | 82.3</td><td>14.3 | 82.1</td><td>44.1 | 42.7 | 91.2</td><td>41.8 | 41.6 | 90.3</td><td>51.5 | 52.9 | 91.6</td><td>51.2 | 52.7 | 91.6</td></tr><tr><td colspan="9">(Upper Bound) Oracle Speaker + Auditory LLM</td></tr><tr><td>Qwen2-Audio</td><td>99.7 | 94.6 | 91.6</td><td>96.4 | 91.7 | 90.0</td><td>1.9 | 94.8</td><td>2.2 | 94.6</td><td>49.7 | 49.0 | 92.2</td><td>45.6 | 45.9 | 91.0</td><td>65.9 | 67.9 | 94.2</td><td>64.9 | 67.3 | 94.0</td></tr><tr><td colspan="9">(Proposed Method) Intention-Informed AAD-LLM</td></tr><tr><td>w/ oracle attention</td><td>99.4 | 90.9 | 88.0</td><td>99.5 | 92.2 | 87.7</td><td>10.6 | 86.3</td><td>10.8 | 86.1</td><td>46.4 | 45.2 | 91.6</td><td>46.3 | 45.4 | 91.5</td><td>64.2 | 65.7 | 93.8</td><td>63.1 | 65.0 | 93.7</td></tr></table>
541
+
542
+ Table 9: Ablations of AAD-LLM.
543
+
544
+ <table><tr><td rowspan="2">Task / AAD-LLM</td><td colspan="2">Description (G | P | T)</td><td colspan="2">Transcription (WER↓ | BLEU)</td><td colspan="2">Summarization (ROUGE-L | METEOR | BERT)</td><td colspan="2">Free Q&amp;A (ROUGE-L | METEOR | BERT)</td></tr><tr><td>Attended</td><td>Unattended</td><td>Attended</td><td>Unattended</td><td>Attended</td><td>Unattended</td><td>Attended</td><td>Unattended</td></tr><tr><td colspan="9">Rank of LoRA</td></tr><tr><td>r=32</td><td>95.2 | 80.8 | 71.6</td><td>97.0 | 74.2 | 75.6</td><td>20.7 | 73.3</td><td>33.4 | 60.1</td><td>55.3 | 53.5 | 92.7</td><td>40.1 | 40.4 | 89.6</td><td>62.7 | 64.9 | 93.7</td><td>56.6 | 58.8 | 92.5</td></tr><tr><td>r=128</td><td>97.6 | 83.0 | 77.6</td><td>98.0 | 82.2 | 80.8</td><td>16.3 | 75.0</td><td>34.0 | 59.6</td><td>57.1 | 56.5 | 93.4</td><td>43.8 | 44.2 | 90.4</td><td>62.2 | 64.5 | 93.7</td><td>58.0 | 60.4 | 92.7</td></tr><tr><td colspan="9">Necessity of Components</td></tr><tr><td>without CoT</td><td>97.8 | 86.8 | 72.0</td><td>98.8 | 79.2 | 58.8</td><td>34.9 | 65.2</td><td>100.6 | 15.8</td><td>38.3 | 37.0 | 90.2</td><td>25.6 | 24.8 | 87.6</td><td>63.8 | 64.8 | 93.9</td><td>59.0 | 59.7 | 92.8</td></tr><tr><td>without Separation</td><td>99.2 | 77.6 | 61.6</td><td>99.2 | 83.0 | 70.4</td><td>41.8 | 50.9</td><td>73.7 | 29.0</td><td>40.5 | 38.9 | 90.6</td><td>29.2 | 27.8 | 87.8</td><td>59.6 | 61.3 | 93.1</td><td>53.2 | 54.2 | 91.7</td></tr><tr><td colspan="9">Types of Features and Numbers of Clusters for Attention Decoding</td></tr><tr><td>xvct-4</td><td>96.2 | 87.2 | 71.8</td><td>96.2 | 88.2 | 74.8</td><td>37.0 | 60.8</td><td>57.7 | 47.6</td><td>46.3 | 45.2 | 91.6</td><td>34.7 | 34.4 | 88.9</td><td>64.5 | 66.3 | 94.1</td><td>58.9 | 60.7 | 92.8</td></tr><tr><td>xvct-16</td><td>97.0 | 84.2 | 67.0</td><td>97.4 | 87.0 | 76.8</td><td>31.5 | 65.7</td><td>43.0 | 53.1</td><td>51.2 | 49.9 | 92.2</td><td>38.7 | 38.7 | 89.6</td><td>61.0 | 62.9 | 93.5</td><td>57.2 | 59.2 | 92.4</td></tr><tr><td>xvct-32</td><td>94.6 | 82.2 | 70.0</td><td>93.8 | 83.8 | 71.2</td><td>33.4 | 66.8</td><td>48.1 | 50.8</td><td>50.5 | 49.6 | 92.3</td><td>36.7 | 36.1 | 89.4</td><td>62.4 | 64.2 | 93.7</td><td>58.6 | 61.3 | 92.6</td></tr><tr><td>style-8</td><td>93.0 | 72.4 | 70.2</td><td>94.4 | 78.2 | 72.2</td><td>44.4 | 56.6</td><td>51.6 | 46.7</td><td>46.0 | 44.6 | 91.2</td><td>32.1 | 31.7 | 88.5</td><td>62.6 | 64.9 | 93.7</td><td>58.2 | 60.3 | 92.6</td></tr><tr><td>random-8</td><td>79.6 | 64.4 | 61.8</td><td>83.2 | 67.0 | 68.6</td><td>63.1 | 43.6</td><td>74.8 | 30.7</td><td>36.3 | 34.5 | 89.6</td><td>28.8 | 27.9 | 88.1</td><td>63.7 | 66.0 | 94.0</td><td>57.2 | 59.0 | 92.5</td></tr><tr><td colspan="9">Default: r=512, with CoT and Separation, xvct-8, without in-domain data</td></tr><tr><td>AAD-LLM</td><td>98.6 | 93.6 | 75.6</td><td>98.6 | 92.2 | 76.2</td><td>14.4 | 78.6</td><td>33.2 | 58.7</td><td>58.3 | 56.9 | 93.6</td><td>42.3 | 42.5 | 89.9</td><td>63.1 | 64.6 | 93.9</td><td>57.9 | 59.1 | 92.6</td></tr></table>
546
+
547
+ Then, given the generated text, we synthesized the new background speech in one of four voices (two female and two male) with KokoroTTS$^{12}$, a human-level text-to-speech model based on StyleTTS 2's architecture (Li et al., 2023). We then mixed the new background speech with the original foreground speech and the background noise, and asked the same question again on this same-topic dataset. The results, averaged across the four new speakers, are shown in Table 7.
548
+
549
+ In this more challenging dataset with more distraction in the spoken content, AAD-LLM still achieved a ROUGE-L score of 62.0 and a METEOR score of 64.2, only slightly below the original scores of 63.1 and 64.6. This indicates that AAD-LLM uses its attentional state to filter out the distracting speaker rather than relying on the content.
550
+
551
+ # D.3 Evaluation on Speech-only Dataset
552
+
553
+ We also evaluated AAD-LLM on the speech-only dataset (LibriTTS and DEMAND mixtures) described in Appendix A.2. One of the two speakers was randomly designated as the foreground, and models were tested on speaker description, speech transcription, speech summarization, and free Q&A. We used oracle attention (ground-truth labels) for evaluation, since brain signals are not available for this dataset.
556
+
557
+ AAD-LLM achieved high classification accuracy in speaker description (99.4% gender, 90.9% pitch, 88.0% tempo), outperforming Qwen2-Audio and other baseline models. In transcription, AAD-LLM significantly lowered the WER to 10.6% compared to Qwen2-Audio's 81.0%. Summarization results showed a ROUGE-L score of 46.4, exceeding baseline models. In Q&A, AAD-LLM achieved a ROUGE-L of 64.2 and METEOR of 65.7, surpassing Qwen2-Audio's 41.5 and 42.4.
558
+
559
+ The performance of AAD-LLM on the speech-only dataset is comparable to its performance on the clinical dataset. The model consistently outperformed other auditory LLMs, demonstrating its advantage in processing multi-speaker scenarios. These findings also highlight AAD-LLM's potential as an effective offline system for non-BCI applications, provided the x-vector of the target speaker is known.
560
+
561
+ ![](images/bf2abfec82a64af12be612e0283ad31d678a24dd6f0e0ec7f209d945b9421735.jpg)
562
+ Figure 5: t-SNE visualization of x-vectors after K-Means clustering $(\mathrm{K} = 8)$ . The left plot (clinical dataset, 280 sentences) shows distinct clusters for eight speakers. The right plot (speech-only dataset, 10,000 sentences) reveals two main clusters for male and female speakers. Colors indicate K-means clustering with eight groups.
563
+
564
+ ![](images/2800f08e6ad7c885ac424d29efc35e6f5cb8e290097d1153fa530e79bf83ab4d.jpg)
565
+
566
+ # D.4 Impact of Background Noise Types
567
+
568
+ The clinical test set contains speech mixtures under two distinct background noises: babble and pedestrian. The performance with respect to the noise type is shown in Table 10. Transcription error (WER) and summarization quality (ROUGE-L & METEOR) are consistently better under babble noise compared to pedestrian noise. These results suggest that pedestrian noise, characterized by unpredictable acoustic dynamics, presents greater difficulty for the AAD-LLM, slightly degrading the recognition of attended speech and subsequent response generation.
569
+
570
+ # D.5 Impact of Window Size on Correct Speaker Selection
571
+
572
+ We also examined how varying the window size affects speaker selection accuracy with our speaker decoding method, particularly in dynamic attention scenarios. The results, detailed in Figure 6, demonstrate a clear trade-off: shorter windows offer rapid responsiveness but lower accuracy, whereas longer windows significantly improve decoding reliability at the expense of increased latency. These insights suggest the importance of adaptive window-sizing strategies tailored to specific real-time application demands.
573
+
574
+ ![](images/258842cf982fe5bdfa14b34b83d7caa23b8b2f499e3294a4c647fabd9efc76c7.jpg)
575
+ Figure 6: Speaker selection accuracy as a function of window size.
576
+
577
+ # D.6 Ablations
578
+
579
+ To better understand the contributions of different components and design choices in AAD-LLM, we conducted an ablation study, with results presented in Table 9. We investigated the impact of the LoRA rank, the necessity of core components, and the type and clustering of features used for attention decoding.
580
+
581
+ LoRA Rank: We trained the model with smaller ranks of 32 or 128. Increasing the rank from 32 to 512 improves performance in nearly all tasks, particularly for transcription and summarization.
582
+
583
+ Core Components: Removing the chain-of-thought prompt significantly degrades performance, with WER increasing from 14.4 to 34.9 and the summarization ROUGE-L dropping from 58.3 to 38.3. Removing the separator also degrades performance, increasing WER to 41.8 and reducing the summarization ROUGE-L to 40.5. These results highlight the crucial role of both components.
584
+
585
+ Table 10: Performance across noise types.
586
+
587
+ <table><tr><td>Noise Type</td><td>Description Foreground AVG (G, P, T)</td><td>Description Background AVG (G, P, T)</td><td>Transcription Foreground WER↓</td><td>Transcription Background WER↓</td><td>Summarization Foreground ROUGE-L / METEOR</td><td>Summarization Background ROUGE-L / METEOR</td><td>Free Q&amp;A Foreground ROUGE-L / METEOR</td><td>Free Q&amp;A Background ROUGE-L / METEOR</td></tr><tr><td>Babble</td><td>87.1</td><td>86.5</td><td>10.7</td><td>29.0</td><td>59.5 / 56.5</td><td>44.8 / 43.6</td><td>65.5 / 68.4</td><td>63.1 / 63.5</td></tr><tr><td>Pedestrian</td><td>91.1</td><td>91.1</td><td>17.6</td><td>36.7</td><td>57.5 / 57.2</td><td>41.2 / 41.8</td><td>61.1 / 61.4</td><td>53.6 / 55.4</td></tr></table>
604
+
605
606
+
607
+ Feature Type and Clustering: We found that using 8 clusters of x-vectors yielded the best performance on our dataset. Fewer clusters made speaker label prediction easier but often caused the foreground and background speakers to share the same label, making it impossible for the model to distinguish between them. On the other hand, increasing the number of clusters made speaker label prediction more challenging, uniformly degrading downstream task performance. We also tested alternative speaker representations, including speaking style vectors from the StyleTTS 2 (Li et al., 2023) text-to-speech model and random vectors from a randomly initialized x-vector extractor, but both performed significantly worse than our chosen x-vectors.
608
+
609
+ # E Analysis
610
+
611
+ # E.1 Speaker X-vectors and Labels
612
+
613
+ AAD-LLM utilizes x-vectors to generate speaker identity features for speech clustering. These features enable the creation of comparable speaker profiles and help the model group speakers based on their acoustic similarity. The distribution of x-vectors, visualized using t-SNE (van der Maaten and Hinton, 2008), is shown in Figure 5, with the clinical dataset on the left and the speech-only dataset on the right.
614
+
615
+ In the clinical dataset, there are a total of 280 sentences, with colors representing different speakers. The clusters corresponding to sentences from the eight speakers are clearly distinguishable, demonstrating the effectiveness of x-vectors in speaker separation. In the speech-only dataset, which contains 10,000 sentences, two distinct clusters emerge, corresponding to male and female speakers. The plot is color-coded based on K-means clustering with eight clusters, which are evenly distributed between the two genders.
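+ The visualization itself follows the standard t-SNE recipe (a minimal sketch, assuming the `xvectors` array and K-means `labels` from Appendix B.2 are available):
+
+ ```python
+ import matplotlib.pyplot as plt
+ from sklearn.manifold import TSNE
+
+ emb = TSNE(n_components=2, random_state=0).fit_transform(xvectors)  # (N, 2)
+ plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=2, cmap="tab10")
+ plt.title("x-vectors colored by K-means cluster (K = 8)")
+ plt.show()
+ ```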
616
+
617
+ In addition to x-vectors, style vectors from StyleTTS 2 (Li et al., 2023) were also evaluated for their ability to differentiate speakers, as shown in Table 9. However, since x-vectors provided the best performance for intention decoding, they were selected for the final model.
618
+
619
+ Table 11: AAD-LLM can attend to the correct speaker even when the speaker label prediction is wrong, as long as the predicted speaker centroid is still closer to the attended speaker than to the unattended one. The results correspond to the speakers in the left part of Figure 7.
620
+
621
+ <table><tr><td>Predicted Label</td><td>Prediction</td><td>Extraction</td><td>Transcription</td></tr><tr><td>0</td><td>X</td><td>X</td><td>"American American Movies."</td></tr><tr><td>1</td><td>X</td><td>X</td><td>"American American Movies."</td></tr><tr><td>2</td><td>βœ“</td><td>βœ“</td><td>"The trip lasted ten days."</td></tr><tr><td>3</td><td>X</td><td>βœ“</td><td>"The trip lasted ten days."</td></tr><tr><td>4</td><td>X</td><td>X</td><td>"American American Movies."</td></tr><tr><td>5</td><td>X</td><td>X</td><td>"American American Movies."</td></tr><tr><td>6</td><td>X</td><td>βœ“</td><td>"The trip lasted ten days."</td></tr><tr><td>7</td><td>X</td><td>βœ“</td><td>"The trip lasted ten days."</td></tr></table>
622
+
623
624
+
625
+ # E.2 Wrong Predicted Speaker Label but Correct Decoded Speaker
626
+
627
+ There are two related classification tasks and accuracies for AAD-LLM. One is the speaker label prediction task, whose prediction accuracy is $78.4\%$ averaged across five runs on the clinical testing set. The other is the auditory attention decoding (AAD) task, whose AAD / speech extraction accuracy is $94.4\%$ averaged across five runs (Table 1).
628
+
629
+ The difference in accuracy arises from the difference in the tasks. The speaker prediction involves classifying one of $\mathrm{K} = 8$ possible speaker labels, an intermediate step, whereas the extraction accuracy reflects the final selection of the attended speaker, i.e., determining which speech signal is extracted and delivered to the listener's ears. This selection also impacts the performance of the other tasks presented in Table 2.
630
+
631
+ The AAD accuracy (94.4%) is typically higher than the speaker prediction accuracy (78.4%). This is because we have 8 clusters for two speakers, and even if the speaker prediction is incorrect, the wrong cluster centroid may still be closer to the attended speaker than to the unattended one, eventually leading to the correct speaker selection. A clear example of this is illustrated in the left part of Figure 7, where the model incorrectly predicts the speaker centroid $v_{7}$ instead of the closest centroid $v_{2}$; $v_{7}$ nevertheless remains closer to the foreground speaker than to the background speaker. Additionally, Table 11 shows that AAD-LLM correctly performs the foreground speech extraction and transcription tasks when using four of the eight speaker centroids as inputs, as all of these centroids are positioned closer to the attended speaker than to the unattended one.
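+ The final selection step can be sketched as a nearest-centroid comparison (Euclidean distance is an assumption here; the paper does not specify the metric):
+
+ ```python
+ import numpy as np
+
+ def select_attended(pred_centroid: np.ndarray,
+                     xvec_spk1: np.ndarray,
+                     xvec_spk2: np.ndarray) -> int:
+     """Return the index of the speaker whose x-vector is closer to the
+     predicted cluster centroid; this speaker is treated as attended."""
+     d1 = np.linalg.norm(pred_centroid - xvec_spk1)
+     d2 = np.linalg.norm(pred_centroid - xvec_spk2)
+     return 0 if d1 <= d2 else 1
+ ```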
632
+
633
+ ![](images/7e6b641eaa669cba95e5ae2f5bb3cbc48eda4e204773c9e112fb7a8d10af2299.jpg)
634
+ Figure 7: In the left figure, the two speakers have distinct voice characteristics, so even with an incorrectly predicted speaker label, AAD-LLM can still attend to the correct speaker. In contrast, in the right figure, the two speakers have very similar voice characteristics, making it difficult for AAD-LLM to distinguish between the foreground and background speakers, even with a correctly predicted speaker label.
635
+
636
+ ![](images/1ae154d6fa6be863c7c6267571eb7cb9b6dc54e0b3ca33cfaca6332b752197a4.jpg)
637
+
638
639
+
640
+ # E.3 Failure Cases
641
+
642
+ AAD-LLM relies on speaker prediction for intention decoding (Section 3.1), which can become challenging when multiple speakers share similar voice characteristics, such as having the same gender and pitch range. If the speakers are evenly distributed across the K speaker clusters, there is a $1/\mathrm{K}$ probability that two distinct speakers will be assigned the same speaker label, making it impossible for AAD-LLM to distinguish between them. As a result, there is a $1/(2\mathrm{K})$ chance that AAD-LLM will attend to the incorrect speaker. The right side of Figure 7 illustrates this scenario, where both the foreground and background speakers are closest to the same speaker centroid. In this case, the background speaker is even closer, increasing the likelihood that AAD-LLM will mistakenly attend to the wrong speaker.
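+ For the default $\mathrm{K} = 8$, this corresponds to a $1/8 = 12.5\%$ chance that the two speakers collide in the same cluster, and hence a $1/16 = 6.25\%$ chance of attending to the wrong speaker.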
643
+
644
+ Increasing the number of clusters can reduce the likelihood that two speakers will be assigned to the same cluster. However, this also makes speaker classification more challenging, as it requires more training data and/or cleaner neural signals to achieve reliable performance.
647
+
648
+ # F Subjective Evaluation Details
649
+
650
+ A total of 40 participants were recruited through Prolific to evaluate model-generated answers. Participants were divided into four batches of ten. The first batch rated responses in the Summarization task, while batches two through four evaluated responses in the Free Q&A task. Each participant completed 50 trials.
651
+
652
+ In each trial, participants listened to an audio stimulus containing three components: the target talker, the non-target talker, and background noise. The target and non-target talkers were spatially separated, with one presented in the left ear and the other in the right ear. Background noise was presented diotically in both ears. Participants were instructed to attend to the target talker, which was indicated by an on-screen arrow. The assignment of the target talker to the left or right ear was randomized across trials. Participants were allowed to replay the audio stimulus multiple times.
653
+
654
+ On the same trial page, participants were presented with a question alongside five model-generated responses, each prompted with the same question. They were asked to rate each response on a 5-point scale, where 1 = "Very Poor" and 5 = "Excellent". Participants were explicitly instructed to rate poorly any response containing information from the non-target talker. The presentation order of the model-generated answers was randomized to minimize bias.
655
+
656
+ ![](images/c8c4becd5d334026cf408f3856481181daf55387f772972e7938863c07fa58f6.jpg)
657
+ Figure 8: A screenshot from the psychophysics task where human raters were asked to rate answers from different models (blind and in random order).
658
+
659
660
+
661
+ All participants were required to wear stereo headphones or earphones and passed an initial left-right channel check to ensure proper perception of the spatial separation of the talkers. The total task duration was approximately 30 minutes, and each participant was compensated \$10 for their time. Participants were native English speakers based in the United States, aged between 18 and 40, with no reported hearing difficulties or cognitive impairments. Additionally, all participants had a $100\%$ approval rating on prior Prolific tasks. A trial screenshot from the psychophysics task is shown in Figure 8.
662
+
663
+ Table 12: Post-hoc pairwise comparisons of human ratings for the Summarization task.
664
+
665
+ <table><tr><td>Group 1</td><td>Group 2</td><td>p-value</td><td>Bonferroni p-value</td><td>Significance</td></tr><tr><td>Oracle Speaker + Qwen2-Audio</td><td>Random Speaker + Qwen2-Audio</td><td>&lt;0.001</td><td>&lt;0.001</td><td>***</td></tr><tr><td>Oracle Speaker + Qwen2-Audio</td><td>Qwen2-Audio</td><td>&lt;0.001</td><td>&lt;0.001</td><td>***</td></tr><tr><td>Oracle Speaker + Qwen2-Audio</td><td>AAD-LLM w/ brain-decoded attention</td><td>&lt;0.001</td><td>0.002</td><td>**</td></tr><tr><td>Oracle Speaker + Qwen2-Audio</td><td>AAD-LLM w/ oracle attention</td><td>&lt;0.001</td><td>0.006</td><td>**</td></tr><tr><td>Random Speaker + Qwen2-Audio</td><td>Qwen2-Audio</td><td>0.09</td><td>0.99</td><td>ns</td></tr><tr><td>Random Speaker + Qwen2-Audio</td><td>AAD-LLM w/ brain-decoded attention</td><td>&lt;0.001</td><td>&lt;0.001</td><td>***</td></tr><tr><td>Random Speaker + Qwen2-Audio</td><td>AAD-LLM w/ oracle attention</td><td>&lt;0.001</td><td>&lt;0.001</td><td>***</td></tr><tr><td>Qwen2-Audio</td><td>AAD-LLM w/ brain-decoded attention</td><td>&lt;0.001</td><td>&lt;0.001</td><td>***</td></tr><tr><td>Qwen2-Audio</td><td>AAD-LLM w/ oracle attention</td><td>&lt;0.001</td><td>&lt;0.001</td><td>***</td></tr><tr><td>AAD-LLM w/ brain-decoded attention</td><td>AAD-LLM w/ oracle attention</td><td>0.79</td><td>1.00 (capped)</td><td>ns</td></tr></table>
666
+
667
+ Table 13: Post-hoc pairwise comparisons of human ratings for the Free Q&A task.
668
+
669
+ <table><tr><td>Group 1</td><td>Group 2</td><td>p-value</td><td>Bonferroni p-value</td><td>Significance</td></tr><tr><td>Oracle Speaker + Qwen2-Audio</td><td>Random Speaker + Qwen2-Audio</td><td>&lt;0.001</td><td>&lt;0.001</td><td>***</td></tr><tr><td>Oracle Speaker + Qwen2-Audio</td><td>Qwen2-Audio</td><td>&lt;0.001</td><td>&lt;0.001</td><td>***</td></tr><tr><td>Oracle Speaker + Qwen2-Audio</td><td>AAD-LLM w/ brain-decoded attention</td><td>0.004</td><td>0.042</td><td>*</td></tr><tr><td>Oracle Speaker + Qwen2-Audio</td><td>AAD-LLM w/ oracle attention</td><td>0.002</td><td>0.019</td><td>*</td></tr><tr><td>Random Speaker + Qwen2-Audio</td><td>Qwen2-Audio</td><td>0.037</td><td>0.37</td><td>ns</td></tr><tr><td>Random Speaker + Qwen2-Audio</td><td>AAD-LLM w/ brain-decoded attention</td><td>&lt;0.001</td><td>&lt;0.001</td><td>***</td></tr><tr><td>Random Speaker + Qwen2-Audio</td><td>AAD-LLM w/ oracle attention</td><td>&lt;0.001</td><td>&lt;0.001</td><td>***</td></tr><tr><td>Qwen2-Audio</td><td>AAD-LLM w/ brain-decoded attention</td><td>&lt;0.001</td><td>&lt;0.001</td><td>***</td></tr><tr><td>Qwen2-Audio</td><td>AAD-LLM w/ oracle attention</td><td>&lt;0.001</td><td>&lt;0.001</td><td>***</td></tr><tr><td>AAD-LLM w/ brain-decoded attention</td><td>AAD-LLM w/ oracle attention</td><td>0.80</td><td>1.00 (capped)</td><td>ns</td></tr></table>
aadllmneuralattentiondrivenauditorysceneunderstanding/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9b2351ed6172be27d9e8ec6fe9166b3700e3d85b6c31110e8d743d73573827a0
3
+ size 1357927
aadllmneuralattentiondrivenauditorysceneunderstanding/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dc6bf415424deba24f55767b443d6c4d29a2b90a6e3c6bd096dc8c65ecd312bb
3
+ size 691570
abgenevaluatinglargelanguagemodelsinablationstudydesignandevaluationforscientificresearch/ef01d945-3d25-4e5a-8482-29580b3918b0_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c700377661a3c2955dc7828c65e9575d8e11523c87949cfac21a2b6ac3b8b1ee
3
+ size 117531
abgenevaluatinglargelanguagemodelsinablationstudydesignandevaluationforscientificresearch/ef01d945-3d25-4e5a-8482-29580b3918b0_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:02ae87835595a9c1e0591ba237171999959d8398826c98f78a3f71f1ab3dc349
3
+ size 138401
abgenevaluatinglargelanguagemodelsinablationstudydesignandevaluationforscientificresearch/ef01d945-3d25-4e5a-8482-29580b3918b0_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8051935affe92c11edeab0435f5053468959c94e632a821b2746e05a9b753225
3
+ size 2131667
abgenevaluatinglargelanguagemodelsinablationstudydesignandevaluationforscientificresearch/full.md ADDED
@@ -0,0 +1,492 @@
 
 
 
 
1
+ # ABGEN: Evaluating Large Language Models in Ablation Study Design and Evaluation for Scientific Research
2
+
3
+ Yilun Zhao\*$^{Y}$ Weiyuan Chen\*$^{Y}$ Yixin Liu$^{Y}$ Chengye Wang Zhijian Xu$^{Y}$ Manasi Patwardhan$^{T}$ Lovekesh Vig$^{T}$ Arman Cohan$^{Y}$
+
+ $^{Y}$ Yale NLP Lab $^{T}$ TCS Research
9
+
10
+ # Abstract
11
+
12
+ We introduce ABGEN, the first benchmark designed to evaluate the capabilities of LLMs in designing ablation studies for scientific research. ABGEN consists of 1,500 expert-annotated examples derived from 807 NLP papers. In this benchmark, LLMs are tasked with generating detailed ablation study designs for a specified module or process based on the given research context. Our evaluation of leading LLMs, such as DeepSeek-R1-0528 and o4-mini, highlights a significant performance gap between these models and human experts in terms of the importance, faithfulness, and soundness of the ablation study designs. Moreover, we demonstrate that current automated evaluation methods are not reliable for our task, as they show a significant discrepancy when compared to human assessment. To better investigate this, we develop ABGEN-EVAL, a meta-evaluation benchmark designed to assess the reliability of commonly used automated evaluation systems in measuring LLM performance on our task. We investigate various LLM-as-Judge systems on ABGEN-EVAL, providing insights for future research on developing more effective and reliable LLM-based evaluation systems for complex scientific tasks.
13
+
14
+ ![](images/75203b62c3f2780fcfdb5e1fda33f6baaa56e659dc0aae9cf1ab940db3ae8f04.jpg)
15
+
16
+ Data yale-nlp/AbGen Code yale-nlp/AbGen
17
+
18
+ # 1 Introduction
19
+
20
+ In empirical scientific fields, designing experiments and selecting the appropriate experimental settings often present considerable challenges and requires significant domain expertise. Oftentimes, scientists learn about the flaws in their experimental design and missing ablations after going through a peer review process, which involves domain experts carefully evaluating a scientific work. The
21
+
22
+ * Equal Contributions. Correspondence: Yilun Zhao (yilun.zhao@yale.edu)
23
+
24
+ ![](images/afe965737928cdeadae539da76c5c460922656c98d39b2c498e340c98d79aa24.jpg)
25
+
26
+ ![](images/d1798aa99672f93aae27c1760060267ef25e3279d57ba8b692cfb3c42041eccc.jpg)
27
+ RQ1: How well do frontier LLMs perform in ablation study design?
28
+
29
+ ![](images/410044bf18cb987202eac71a17873163eb9bfc0a3fe91a85801ef0f7ca94cdf6.jpg)
30
+ RQ2: How can this research be applied in real-world scenarios to assist human researchers?
31
+ RQ3: How can future researchers develop more reliable automated evaluation systems for complex scientific tasks?
32
+ Construct AbGen-Eval, Assess LLM-based evaluation systems A Meta-evaluation Benchmark
33
+ Figure 1: Overview of the research: the ablation study design task and three research questions investigated.
34
+
35
+ ![](images/ac7ad0d7d3fac4a9ca42e4e07b36263a138903a3af0f2fdd0ec579d31746f77f.jpg)
36
+ $β‘‘$ User study on other scientific domain
37
+
38
+ complexity of tasks in experimental science underscores the need for innovative approaches to support researchers in optimizing their workflows. Meanwhile, LLMs have demonstrated remarkable capabilities across a range of tasks integral to scientific processes, such as reviewing manuscripts (D'Arcy et al., 2024; Du et al., 2024), scientific writing (AltmΓ€e et al., 2023; Xu et al., 2024), scientific code generation (Liu et al., 2023; Yang et al., 2024b). This raises a compelling question: Can LLMs be effectively leveraged to assist scientists in the process of experimental design?
39
+
40
+ While addressing this question is inherently complex due to the diverse nature of scientific disciplines and difficulty of evaluation, our objective is to introduce the first comprehensive benchmark as well as an evaluation methodology to facilitate measuring progress on this task. We particularly introduce ABGEN, the first benchmark for evaluat-
41
+
42
+ ![](images/d569dd81749aa15f3846a9138f18978f850782dcc6372c6dc5d16ba166f950b0.jpg)
43
+ Figure 2: An overview of ABGEN construction pipeline.
44
+
45
+ ing LLMs in the context of designing ablation studies for scientific research. The dataset consists of 1,500 examples derived from 807 scientific papers in natural language processing (NLP). Each example is carefully annotated and validated by NLP experts and includes a comprehensive research context along with a reference ablation study, both restructured from the original research paper. The research context is divided into three sections: research background, methodology, and the main experiment setup and results. As illustrated in Figure 1, the LLMs are tasked with generating a detailed ablation study design for a specified module or process based on the provided research context.
46
+
47
+ As outlined in Figure 1, we investigate three research questions in this study. Our main contributions are summarized below:
48
+
49
+ - We propose ABGEN, the first benchmark designed to evaluate the capabilities of LLMs in ablation study designs for scientific research (Β§2). We design a comprehensive human and automated evaluation systems for ABGEN (Β§3).
50
+ - We conduct a systematic evaluation of leading LLMs, analyzing their strengths and limitations on our new task, and providing insights for future advancements (Β§4.2).
51
+ - Our user studies reveals the potential of LLMs in ablation study design by interaction with human researchers, and highlights the adaptability of this approach to other scientific domains (Β§4.3).
52
+ - We develop the meta-evaluation benchmark, ABGEN-EVAL, and investigate various LLM-based evaluation methods to provide insights for creating more reliable automated evaluation systems for complex scientific tasks (Β§5).
53
+
54
+ # 2 ABGEN Benchmark
55
+
56
+ To systematically study the capabilities and limitations of current LLMs and measuring progress in
57
+
58
+ assisting scientists with the design of their experimental workflows, we introduce a new benchmark named ABGEN. The LLMs are tasked with generating detailed ablation study designs for a specified module or process based on the given research context. We focus on scientific research within the NLP domain, as the involved expert annotators primarily have expertise in NLP (i.e., each has at least one publication in a top-tier NLP or AI venue as a leading author). Detailed biographies of the annotators participating in the ABGEN annotation and LLM performance evaluation process are provided in Table 7 in Appendix A.1. We believe that future research could extend our benchmark construction pipeline to extend to other scientific domains.
59
+
60
+ In the following subsections, we first provide a formal definition of the ABGEN task and then detail each step within the benchmark construction process. We present an overview of the ABGEN construction pipeline in Figure 2.
61
+
62
+ # 2.1 ABGEN Task Formulation
63
+
64
+ We formally define the task of ABGEN in the context of LLMs. Specifically, given:
65
+
66
+ - The research context $C$ , which is an expert-annotated context of a specific scientific study. This context is restructured from the original paper by expert annotators, including sections of research background, methodology, and main experiment setup and results (Β§2.3).
67
+ - The name of a specific essential module or process, denoted as $M$ , which is described in the methodology section within research context $C$ .
68
+
69
+ The LLM is tasked with generating the design for an ablation study, $\mathbf{A}$ , aimed at evaluating the contribution and impact of $\mathbf{M}$ within the overall research framework:
70
+
71
+ $$
72
+ \hat {\boldsymbol {A}} = \underset {A} {\arg \max } P _ {\mathbf {L L M}} (\boldsymbol {A} \mid \boldsymbol {C}, \boldsymbol {M}) \tag {1}
73
+ $$
74
+
75
+ The ablation study design should include a clear statement of the research objective, along with a detailed description of the experimental process.
76
+
77
+ # 2.2 Source Paper Collection and Filtering
78
+
79
+ Source Paper Collection. We collect scientific papers from arXiv under the "Computation and Language" category, targeting those first released between March 1, 2024 and August 30, 2024. For each paper, we adopt the tool<sup>1</sup> developed by Lo et al. (2020) to extract its content. Specifically, this tool parses LaTeX source files of papers into JSON format, extracting features including the paper title, abstract, main sections, and appendix. We convert tables within the papers into HTML format. Both recent works (Sui et al., 2024; Fang et al., 2024) and our preliminary studies reveal that the evaluated LLMs can comprehend such table format effectively. Next, we describe our approach and criteria for inclusion of the papers for annotation, as well as the details of the annotation process.
80
+
81
+ Research Paper Manual Filtering. For each collected NLP paper, the expert annotator first determines if they are familiar with the paper's topic. If not, we randomly assign the paper to another annotator. Papers whose topics are unfamiliar to both annotators are excluded. The annotators are then instructed to determine whether the paper qualifies for inclusion in our benchmark. Specifically, we exclude: (1) Papers that are not focused on experimental work (e.g., surveys, position papers, dissertations), as they do not involve ablation study design; (2) Papers with fewer than two ablation studies, as these may not provide sufficient breadth of experimental evidence. Additionally, annotators may exclude papers they deem to be of low quality based on their expert judgment. After applying these filtering criteria, 807 papers remain for further annotation.
82
+
83
+ # 2.3 Research Context Annotation
84
+
85
+ After determining that a research paper qualifies for benchmark inclusion, annotators are instructed to restructure the original paper into research context that maintains the original meaning but exclude any content related to ablation studies. The research context contains the following three sections: (1) Research Background, which is restructured from the introduction and related work sec
86
+
87
+ tions, describing the paper's motivation, research problem, and relevant prior work. (2) Methodology, which is restructured from the methodology sections, This section describes the proposed method or model, including key components and innovations. (3) Main Experiment Setup and Results, which is restructured from the experiment sections. This section details the primary experimental setup, including datasets, baselines, and evaluation metrics used in main experiments, as well as the main experimental results.
88
+
89
+ # 2.4 Reference Ablation Study Annotation
90
+
91
+ Annotators are then tasked with restructuring each ablation study in the research paper into a reference ablation study. It consists of the following three sections: (1) Research Objective, a one- or two-sentence description of the research problem and the goal of the ablation study. If this statement is not explicitly provided in the original ablation study, annotators are required to infer and summarize it. (2) Experiment Process, a detailed account of the experimental setup, including the experimental groups, datasets, procedures, and the evaluation tools and metrics used. Annotators are required to ensure that the process is clearly understandable and replicable based on the provided description. (3) Result Discussion, an analysis of the outcomes, where annotators summarize the key findings and their implications. It's worth noting that we do not require LLMs to generate this part, as our main focus is on evaluating their ability to design ablation studies rather than execute and analyze experiments. However, we believe these features could be valuable for future research.
92
+
93
+ # 2.5 Annotation Validation
94
+
95
+ For each annotated example, we assign an annotator to validate the annotated research context and reference ablation study based on the original research paper. They are required to identify and revise examples that contain errors. Out of the 1,500 annotated examples, 273 were identified as erroneous and were subsequently revised. We conducted a final human evaluation of data quality on 100 examples. As shown in Table 6 (Appendix A.1), for each validation metric, over $95\%$ of the samples received a satisfaction rating of at least 4 out of 5. This result indicates the high quality of ABGEN.
96
+
97
+ <table><tr><td>Property</td><td>Value (avg. /max)</td></tr><tr><td>Research Context Word Length</td><td>1,847.8 / 6,253</td></tr><tr><td>Research Background</td><td>319.6 / 1,178</td></tr><tr><td>Methodology</td><td>904.4 / 4,685</td></tr><tr><td>Exp Setup &amp; Results</td><td>623.7 / 2,174</td></tr><tr><td>Ref. Ablation Study Word Length</td><td>145.5 / 518</td></tr><tr><td>Research Objective</td><td>6.1 / 15</td></tr><tr><td>Experiment Process</td><td>72.5 / 264</td></tr><tr><td>Result Discussion</td><td>67.1 / 336</td></tr><tr><td># NLP Research</td><td>807</td></tr><tr><td># Ref. Ablation Study per Research</td><td>1.9 / 3</td></tr><tr><td>ABGEN Size</td><td>1,500</td></tr><tr><td>Testmini Set</td><td>500</td></tr><tr><td>Test Set</td><td>1,000</td></tr></table>
98
+
99
+ Table 1: Data statistics of the ABGEN benchmark.
100
+
101
+ # 2.6 Data Statistics
102
+
103
+ Table 1 illustrates the data statistics of the ABGEN benchmark. We randomly split the dataset into two subsets: testmini and test. The testmini subset contains 500 examples and is intended for both method validation and human analysis and evaluation. The test subset comprises the remaining 1,000 examples and is designed for standard evaluation.
104
+
105
+ # 3 ABGEN Evaluation
106
+
107
+ The automated evaluation of LLM generation for tasks relevant to scientific workflows remains an unsolved problem in the community. Recent benchmark work, such as ScIMON (Wang et al., 2024a) for novel scientific direction generation and MARG (D'Arcy et al., 2024) for peer review generation, primarily rely on human evaluation to assess LLM-based system performance. In our study, we also employ human evaluation by expert annotators as the primary assessment method. Additionally, in Section 5, we investigate different variants of LLM-based evaluation methods, aiming to provide insights for future work to develop automated evaluation systems for a large-scale evaluation.
108
+
109
+ # 3.1 Evaluation Criteria
110
+
111
+ This section discusses the human and automated evaluation protocols developed for ABGEN evaluation. We assess the following three dimensions for the generated ablation study design.
112
+
113
+ - Importance: The generated ablation study design will provide valuable insights into understanding the role of the specified module or process within the overall methodology.
114
+
115
+ - Faithfulness: The generated ablation study design aligns perfectly with the given research context. There are no contradictions between the generated content and the main experimental setup within the provided research context.
116
+
117
+ - Soundness: The generated ablation study design is logically self-consistent without ambiguous description. The human researchers would be able to clearly understand and replicate the ablation study based on the generated context.
118
+
119
+ To determine these three dimensions, we gathered feedback from three external senior NLP researchers, all of whom serve as area chairs for the ACL Rolling Review. Through iterative discussions, we identified these dimensions as critical for evaluating the quality and utility of generated ablation study designs. This feedback process also helped us in refining the assessment guidelines used for human evaluation (Β§3.2). We do not evaluate the fluency of the generated ablation study, as both recent works (D'Arcy et al., 2024; Zeng et al., 2024) and our preliminary findings find that leading LLMs consistently produce fluent text free of grammatical errors.
120
+
121
+ # 3.2 Human Evaluation Protocol
122
+
123
+ For human evaluation, we use Likert-scale scores ranging from 1 to 5 for each criterion (i.e., importance, faithfulness, and soundness). Given the research context and an LLM-generated ablation study, human evaluators are asked to score the generated content for each criteria. Initially, the reference ablation study is not provided to the evaluator. This approach encourages evaluators to carefully review the generated content in light of the research context, reducing the likelihood of bias from comparing it to the reference. This is particularly important, as LLMs may generate ablation studies that, while reasonable, differ from the reference. After submitting their initial scores, evaluators are then given the reference ablation study and asked to adjust their scores if they identify any aspects they may have initially overlooked.
124
+
125
+ To assess inter-annotator agreement of our human evaluation, we sample 40 fixed LLM-generated outputs that are separately evaluated by all four expert annotators. They achieve inter-annotator agreement scores (i.e., Cohen's Kappa) of 0.735, 0.782, and 0.710 for the criteria of importance, faithfulness, and soundness, respectively.
126
+
127
+ # 3.3 Automated Evaluation
128
+
129
+ While human evaluation is generally reliable, it is time-consuming and does not scale well. To address this, we also employ an LLM-as-a-judge system for automated evaluation. Specifically, we use GPT-4.1-mini as the base evaluator. For each model-generated response, the evaluator is provided with the research context and a reference ablation study. Evaluation is performed across four criteria (i.e., importance, faithfulness, soundness, and overall quality), with the model prompted separately for each criterion to assign a score from 1 to 5. Prior to issuing a final score, the evaluator must generate a rationale explaining its judgment. The full evaluation prompts used for each criterion are provided in Appendix B. To gain a deeper understanding of the reliability of LLM-as-Judge systems, we develop the meta-evaluation benchmark, ABGEN-EVAL, which is detailed in Section 5.
130
+
131
+ # 4 LLMs for Ablation Study Design
132
+
133
+ # 4.1 Experiment Setup
134
+
135
+ Evaluated Systems. We examine the performance of 18 frontier LLMs across two distinct categories on our benchmark: (1) Proprietary LLMs, including o4-mini (OpenAI, 2025a), GPT-4o (OpenAI, 2024), GPT-4.1 (OpenAI, 2025b), Gemini-2.5-Flash (Gemini, 2024); and Open-source LLMs, including Llama-3.1-70B, Llama-3.3-70B, Llama-4-Scout-17B and Llama-4-Maverick-17B (AI@Meta, 2024; Meta AI, 2025), Mistral-Large (Jiang et al., 2024), Deepseek-V3, DeepSeek-R1-0528-Qwen3-8B, and Deepseek-R1 (DeepSeek-AI, 2024, 2025), Phi-4 (Microsoft et al., 2025), Gemma-3-27bit (Team et al., 2025), Qwen2.5-32B, Qwen3-8B, Qwen3-32B and Qwen3-235B-A22B, (Yang et al., 2024a; Team, 2025). Table 8 in Appendix presents the details of these evaluated LLMs in ABGEN.
136
+
137
+ Measuring Performance of Real Paper and Expert. To provide an informative estimate of real paper and expert-level performance on ABGEN, we randomly sample 20 examples from 10 papers in the testmini set. We enlist two expert annotators (i.e., Annotators 1 and 4, as described in Table 7 in Appendix A.1) to individually solve these examples. To ensure fairness, we mix these $20 \times 2$ expert-annotated data and corresponding 20 reference ablation study within the standard human evaluation process. The expert evaluators are not informed of the sources of these ablation study ex
138
+
139
+ # Ablation Generation Prompt
140
+
141
+ # [System Input]:
142
+
143
+ Given the research context, design an ablation study for the specified module or process. Begin the design with a clear statement of the research objective, followed by a detailed description of the experiment setup. Do not include the discussion of results or conclusions in the response, as the focus is solely on the experimental design. The response should be within 300 words. Present the response in plain text format only.
144
+
145
+ # [User Input]:
146
+
147
+ Research Context:{research context}
148
+
149
+ Design an ablation study about {ablation module} based on the research context above.
150
+
151
+ amples when evaluation. We report the evaluation results on Table 2.
152
+
153
+ Implementation Details. For all the experiments, we set temperature as 1.0 and maximum output length as 1024 (as the maximum length of reference ablation study is 518 words as presented in Table 1). Figure 3 illustrates the default prompt used across all generation experiments. The model is tasked with generating the design for an ablation study, based on the provided annotated research context and the specified module or process name. Specifically, the LLMs are required to first generate a one-sentence description of the research objectives, followed by a detailed description of the experimental setup for the ablation study.
154
+
155
+ # 4.2 Results and Analysis
156
+
157
+ ![](images/005acc0e6fbdd964c8b7ef8f1c924171d8e1f146397a6fd22371657ecf104c6b.jpg)
158
+ Figure 3: Prompt for ablation study generation.
159
+
160
+ RQ1: How well do frontier LLMs perform designing ablation studies?
161
+
162
+ Table 2 illustrates the performance of the evaluated LLMs on ABGEN. The human evaluation results demonstrate that ABGEN poses significant challenges to current LLMs. Even the best-performing LLM, DeepSeek-R1-0528, performs much worse than human experts. This gap highlights the critical need for further advancements in LLMs, especially in applying them to complex scientific tasks. Moreover, we observe a disparity between automated evaluation systems and human assessments. For instance, despite receiving similar scores in LLM-based evaluations compared to o4-mini, DeepSeek
163
+
164
+ <table><tr><td rowspan="2">System</td><td colspan="4">LLM-based Eval (1-5)</td><td colspan="4">Human Evaluation (1-5)</td></tr><tr><td>Import.</td><td>Faith.</td><td>Sound.</td><td>Overall</td><td>Import.</td><td>Faith.</td><td>Sound.</td><td>Avg.</td></tr><tr><td>Reference (orig)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>4.70</td><td>4.90</td><td>4.70</td><td>4.77</td></tr><tr><td>Human Expert</td><td>4.82</td><td>4.84</td><td>4.33</td><td>-</td><td>4.65</td><td>4.93</td><td>4.83</td><td>4.80</td></tr><tr><td>DeepSeek-R1-0528</td><td>4.80</td><td>4.85</td><td>4.39</td><td>4.95</td><td>4.23</td><td>4.00</td><td>4.11</td><td>4.11</td></tr><tr><td>o4-mini</td><td>4.80</td><td>4.81</td><td>4.33</td><td>4.96</td><td>4.23</td><td>3.78</td><td>4.00</td><td>4.00</td></tr><tr><td>GPT-4.1</td><td>4.82</td><td>4.84</td><td>4.28</td><td>4.96</td><td>4.12</td><td>3.87</td><td>4.02</td><td>4.00</td></tr><tr><td>DeepSeek-V3</td><td>4.78</td><td>4.80</td><td>4.19</td><td>4.92</td><td>3.98</td><td>3.79</td><td>3.96</td><td>3.91</td></tr><tr><td>Qwen3-235B-A22B</td><td>4.83</td><td>4.76</td><td>4.31</td><td>4.95</td><td>4.26</td><td>3.43</td><td>4.00</td><td>3.90</td></tr><tr><td>Gemini-2.5-Flash</td><td>4.63</td><td>4.52</td><td>4.01</td><td>4.65</td><td>3.89</td><td>3.94</td><td>3.76</td><td>3.86</td></tr><tr><td>Gemma-3-27b-it</td><td>4.70</td><td>4.75</td><td>4.21</td><td>4.85</td><td>3.78</td><td>3.81</td><td>3.96</td><td>3.85</td></tr><tr><td>GPT-4o</td><td>4.81</td><td>4.75</td><td>4.15</td><td>4.65</td><td>3.88</td><td>3.67</td><td>3.91</td><td>3.82</td></tr><tr><td>Qwen3-32B</td><td>4.82</td><td>4.74</td><td>4.22</td><td>4.94</td><td>3.90</td><td>3.47</td><td>3.98</td><td>3.78</td></tr><tr><td>Qwen3-8B</td><td>4.77</td><td>4.69</td><td>4.16</td><td>4.90</td><td>3.86</td><td>3.46</td><td>3.89</td><td>3.74</td></tr><tr><td>Mistral-Small-3.1-24B</td><td>4.74</td><td>4.63</td><td>4.12</td><td>4.84</td><td>3.74</td><td>3.35</td><td>3.84</td><td>3.64</td></tr><tr><td>Phi-4</td><td>4.74</td><td>4.65</td><td>4.12</td><td>4.81</td><td>3.70</td><td>3.34</td><td>3.78</td><td>3.61</td></tr><tr><td>Llama-4-Maverick-17B</td><td>4.66</td><td>4.64</td><td>4.04</td><td>4.71</td><td>3.46</td><td>3.66</td><td>3.68</td><td>3.60</td></tr><tr><td>DeepSeek-R1-0528-Qwen3-8B</td><td>4.69</td><td>4.68</td><td>4.12</td><td>4.81</td><td>3.71</td><td>3.18</td><td>3.65</td><td>3.51</td></tr><tr><td>Qwen2.5-32B</td><td>4.73</td><td>4.64</td><td>4.08</td><td>4.80</td><td>3.53</td><td>3.17</td><td>3.72</td><td>3.47</td></tr><tr><td>Llama-4-Scout-17B</td><td>4.71</td><td>4.51</td><td>4.04</td><td>4.70</td><td>3.49</td><td>3.22</td><td>3.50</td><td>3.40</td></tr><tr><td>Llama-3.1-70B</td><td>4.68</td><td>4.46</td><td>4.05</td><td>4.70</td><td>3.58</td><td>2.91</td><td>3.55</td><td>3.35</td></tr><tr><td>Llama-3.3-70B</td><td>4.68</td><td>4.45</td><td>4.03</td><td>4.66</td><td>3.27</td><td>3.08</td><td>3.49</td><td>3.28</td></tr></table>
165
+
166
+ Table 2: Human and automated evaluation results of LLMs on ABGEN. For automated evaluation, we use GPT-4.1-mini as the base evaluator and report scores on the test subset. For human evaluation, we randomly sample 100 examples from the testmini subset. Each model output is assessed by an expert evaluator. The average human score is used as the primary metric for ranking model performance in this table.
167
+
168
+ R1-0528 consistently outperforms it in every criterion according to human evaluation. These results suggest that current automated evaluation systems may not be fully reliable for our task. To gain a deeper understanding of the reliability of current automated evaluation systems, we develop the meta-evaluation benchmark, ABGEN-EVAL, which is detailed in Section 5.
169
+
170
+ Error Analysis. We further conduct a comprehensive error analysis to better understand the capabilities and limitations of the top-performing LLMs on our task. This error analysis is based on 100 failure cases of models from the testmini set, where the average human evaluation scores are below 3. We identify five common error types, and provide detailed explanations for each type in Table 3. These error cases demonstrate that generating constructive ablation study designs based on research context is still challenging for LLMs.
171
+
172
+ # 4.3 User Studies on Real-world Scenarios
173
+
174
+ RQ2: How can this research be applied in real-world scenarios to assist human researchers in designing ablation studies?
175
+
176
+ To investigate this research question, we design and conduct following two user studies:
177
+
178
+ LLM-Researcher Interaction While LLMs currently lag behind human experts in designing ablation studies, they still hold value as tools to assist researchers. To explore this potential, we examine scenarios where researchers interact with LLMs, providing feedback to guide the refinement of their outputs. Specifically, we first sample 20 failure cases from testmini setβ€”each with an average human score below 3β€”from both GPT-4o and Llama-3.1-70B. Two expert annotators are then tasked with reviewing these LLM-generated ablation study designs, identifying errors, and providing constructive feedback for improvement within a 50-word limit. We then feed the research context, initial ablation study design, and researcher feedback back into the same LLMs, instructing them to regenerate the ablation study design. Another expert evaluator is then assigned to assess the revised version, following the same human evaluation protocol in Section 3.2. As shown in Table 4, incorporating researcher feedback can significantly enhance LLM performance in refining their outputs.
179
+
180
+ <table><tr><td>Error Type</td><td>Explanation</td></tr><tr><td>Misalignment with research context</td><td>This error arises when the generated experiment process contradicts with the baseline in the research context or introduces factual errors.</td></tr><tr><td>Ambiguity and Difficulty in Reproduction</td><td>This error arises when the generated experiment process contains ambiguous steps or lacks the necessary datasets or tools, for human researchers to replicate ablation study.</td></tr><tr><td>Partial Ablation or Incomplete Experimentation</td><td>This error arises when the generated experiment process partially addresses the ablation module, such as only ablating a sub-module, or missing experimental groups.</td></tr><tr><td>Insignificant Ablation Module</td><td>This error arises when the generated research objective is focused on an insignificant ablation module in research context.</td></tr><tr><td>Inherent Logical Inconsistencies</td><td>This error arises when the generated experiment process contains inherent logical inconsistencies, such as gaps in implementation steps.</td></tr></table>
181
+
182
+ Table 3: A summary of GPT-4o's failure cases. We provide examples for each error type in Appendix D.
183
+
184
+ <table><tr><td>User Study</td><td>Import.</td><td>Faith.</td><td>Sound.</td></tr><tr><td colspan="4">User Study 1: LLM-Researcher Interaction</td></tr><tr><td>GPT-4o</td><td></td><td></td><td></td></tr><tr><td>Initial Failure Case</td><td>3.9</td><td>2.1</td><td>2.0</td></tr><tr><td>Revision with Feedback</td><td>4.8 (+0.9)</td><td>4.2 (+2.1)</td><td>4.6 (+2.6)</td></tr><tr><td>Llama-3.1-70B</td><td></td><td></td><td></td></tr><tr><td>Initial Failure Case</td><td>3.7</td><td>1.8</td><td>1.7</td></tr><tr><td>Revision with Feedback</td><td>4.5 (+0.8)</td><td>3.9 (+2.1)</td><td>4.1 (+2.4)</td></tr></table>
185
+
186
+ User Study 2: Domain Generalization
187
+
188
+ <table><tr><td colspan="4">GPT-4o</td></tr><tr><td>NLP Domain (as Main Exp)</td><td>3.9</td><td>3.4</td><td>3.3</td></tr><tr><td>Biomedical Domain</td><td>3.7</td><td>3.4</td><td>3.1</td></tr><tr><td>Computer Network Domain</td><td>3.8</td><td>3.3</td><td>3.4</td></tr><tr><td colspan="4">Llama-3.1-70B</td></tr><tr><td>NLP Domain (as Main Exp)</td><td>3.3</td><td>2.8</td><td>2.8</td></tr><tr><td>Biomedical Domain</td><td>3.0</td><td>2.8</td><td>2.9</td></tr><tr><td>Computer Network Domain</td><td>3.1</td><td>2.9</td><td>3.0</td></tr></table>
189
+
190
+ Table 4: Human evaluation result from two user studies. The findings demonstrate (1) the potential of LLMs in designing ablation studies through interaction with human researchers, and (2) the adaptability of our research across different scientific domains.
191
+
192
+ Domain Generalization of Our Research. Our research primarily focuses on NLP domains. To explore the adaptability of our work across other scientific fields, we conducted user studies in the areas of biomedical sciences and computer networks. Specifically, we engage two expertsβ€”one in computer networking and one in biomedical researchβ€”to provide five research papers from their respective fields that were first published after May 1, 2024, and with which they are familiar. Following the same procedure as ABGEN annotation, they annotate the research context and reference ablation studies from five corresponding papers, resulting in a total of 27 examples over ten papers. We then provide them with LLM-generated ablation study designs and ask them to strictly follow
193
+
194
+ our human assessment guidelines to evaluate the LLM outputs. As shown in Table 4, the human evaluation scores for GPT-4o and Llama-3.1-70B are consistent with the results observed in the NLP domain experiments. We believe that future work could extend our research framework to other scientific domains.
195
+
196
+ # 5 Investigating Automated Evaluation for Ablation Study Design
197
+
198
+ RQ3: How can future researchers develop more reliable and effective automated evaluation systems for complex scientific tasks?
199
+
200
+ As discussed in Section 4.2, we observe a significant discrepancy between automated and human evaluation results when assessing LLM performance on ABGEN. To investigate this issue further, we conduct a systematic meta-evaluation of commonly used automated evaluation systems.
201
+
202
+ # 5.1 ABGEN-EVAL Benchmark
203
+
204
+ We construct the meta-evaluation benchmark, ABGEN-EVAL, based on the human assessments results collected in Section 4. ABGEN-EVAL comprises 18 LLM outputs $\times 100$ human assessments $= 1,800$ examples. Each example includes an LLM-generated ablation study design and three human scores assessing the study's importance, faithfulness, and soundness, respectively (detailed in Β§3.2). In line with previous meta-evaluation studies (Fabbri et al., 2021; Chen et al., 2021; Liu et al., 2024), in ABGEN-EVAL, the human evaluation results on the system-generated ablation study is considered the gold standard.
205
+
206
+ The performance of automated evaluation systems is measured by the system-level and instance
207
+
208
+ <table><tr><td>Evaluator LLM</td><td>Import.</td><td>Faith.</td><td>Sound.</td><td>Overall</td></tr><tr><td>Gemini-2.5-Flash</td><td>0.391</td><td>0.482</td><td>0.378</td><td>0.307</td></tr><tr><td>Qwen3-32B</td><td>0.305</td><td>0.405</td><td>0.299</td><td>0.248</td></tr><tr><td>GPT-4.1</td><td>0.238</td><td>0.445</td><td>0.298</td><td>0.246</td></tr><tr><td>DeepSeek-R1-0528</td><td>0.352</td><td>0.234</td><td>0.070</td><td>0.245</td></tr><tr><td>Qwen3-8B</td><td>0.318</td><td>0.308</td><td>0.298</td><td>0.237</td></tr><tr><td>QwQ-32B</td><td>0.232</td><td>0.338</td><td>0.284</td><td>0.225</td></tr><tr><td>GPT-4.1-mini</td><td>0.164</td><td>0.329</td><td>0.193</td><td>0.194</td></tr><tr><td>GPT-4o</td><td>0.151</td><td>0.249</td><td>0.139</td><td>0.179</td></tr><tr><td>Llama-3.3-70B</td><td>0.102</td><td>0.268</td><td>0.239</td><td>0.170</td></tr><tr><td>Qwen2.5-32B</td><td>0.109</td><td>0.234</td><td>0.173</td><td>0.144</td></tr><tr><td>DS-R1-0528-Qwen3-8B</td><td>0.232</td><td>0.265</td><td>0.253</td><td>0.124</td></tr><tr><td>Llama-4-Maverick</td><td>0.158</td><td>0.038</td><td>0.136</td><td>0.122</td></tr><tr><td>Llama-3.1-70B</td><td>0.071</td><td>0.100</td><td>-0.020</td><td>0.100</td></tr><tr><td>Llama-4-Scout</td><td>0.167</td><td>0.026</td><td>0.105</td><td>0.083</td></tr></table>
209
+
210
+ Table 5: Instance-level Pearson correlations between pointwise evaluations from various LLM-based evaluators and human judgments across four criteria: importance, faithfulness, soundness, and overall. The overall score is not directly rated by humans, but computed as the average of the other three aspect scores. Evaluation prompts used in the LLM-based pairwise evaluations for each aspect are provided in Appendix B. The system-level correlations are presented in Table 9 in Appendix.
211
+
212
+ level correlation between scores of human evaluation and automated evaluation systems. Specifically, given $n$ input scientific papers and $m$ ablation study generation systems, the human evaluation and an automatic metric result in two $n$ -row, $m$ -column score matrices $H$ , $M$ respectively. The system-level correlation is calculated on the aggregated system scores:
213
+
214
+ $$
215
+ r _ {\text {s y s}} (H, M) = \mathcal {C} (\bar {H}, \bar {M}), \tag {2}
216
+ $$
217
+
218
+ where $\bar{H}$ and $\bar{M}$ contain $m$ entries which are the average system scores across $n$ data samples (e.g., $\bar{H}_0 = \sum_i H_{i,0} / n$ ), and $\mathcal{C}$ is a function calculating a correlation coefficient (e.g., the Pearson's correlation coefficient). In contrast, the instance-level correlation is an average of sample-wise correlations:
219
+
220
+ $$
221
+ r _ {\text {s u m}} (H, M) = \frac {\sum_ {i} \mathcal {C} \left(H _ {i} , M _ {i}\right)}{n}, \tag {3}
222
+ $$
223
+
224
+ where $H_{i},M_{i}$ are the evaluation results on the $i$ -th data sample.
225
+
226
+ # 5.2 Experiments
227
+
228
+ For the LLM-based evaluation systems, we developed multiple variants to investigate how different factors influence their effectiveness. These factors include: the choice of base LLMs, ranging from open-source to proprietary models; and whether
229
+
230
+ evaluation is based on specific criteria or overall scores. As illustrated in Table 5 and Table 9 in Appendix, the current automated evaluation systems show relatively low correlations, indicating that they are not reliable for assessing generated ablation study designs. We believe future research could build on ABGEN-EVAL dataset to develop more advanced and robust LLM-based evaluation methods for scientific tasks.
231
+
232
+ # 6 Related Work
233
+
234
+ LLMs have been employed for different scientific tasks for enhancing researchers' scientific workflows, such as conducting literature reviews (Wang et al., 2024b; Agarwal et al., 2024), question answering over scientific papers (Dasigi et al., 2021; Saikh et al., 2022; Lee et al., 2023; Li et al., 2024a; Wang et al., 2025; Zhao et al., 2025a), research hypothesis generation (Wang et al., 2024a; Zhou et al., 2024b; Si et al., 2025), scientific paper writing (Xu et al., 2024; Lu et al., 2024), and peer-review and meta-review generation (D'Arcy et al., 2024; Tan et al., 2024; Wu et al., 2022; Zhou et al., 2024a; Xu et al., 2025). However, the potential of LLMs to effectively assist scientists in the experimental design process remains largely open research questions (Li et al., 2024b; Lou et al., 2025; Chen et al., 2025a). Additionally, the challenge of developing effective and reliable automated evaluation systems for complex scientific tasks is underexplored (Zhao et al., 2025b). Our work bridges these gaps by introducing standard benchmarks for evaluating both ablation study design and evaluation.
235
+
236
+ # 7 Conclusion
237
+
238
+ This paper introduces ABGEN, the first benchmark designed to evaluate LLMs in generating ablation studies for scientific research. Through a comprehensive assessment, we highlight both the strengths and limitations of leading LLMs on ABGEN, providing valuable insights for future advancements. Our findings offer practical guidance on how to apply this research in real-world scenarios, ultimately aiding human researchers. Additionally, we identify a discrepancy between automated evaluations and human assessments in our task. To investigate this, we also develop a meta-evaluation benchmark, providing insights into developing more reliable automated evaluation for complex scientific tasks.
239
+
240
+ # Acknowledgments
241
+
242
+ This project is supported by Tata Sons Private Limited, Tata Consultancy Services Limited, and Titan. We are grateful to Nvidia Academic Grant Program for providing computing resources.
243
+
244
+ # Limitations and Future Work
245
+
246
+ This study does not explore advanced prompting techniques (Yao et al., 2023; Wang et al., 2024a) or LLM-Agent-based methods (D'Arcy et al., 2024; Majumder et al., 2024). Our focus is on assessing the fundamental capabilities of leading LLMs in ablation study design. The goal is to provide insights into their strengths and limitations, laying the groundwork for future advancements. We encourage researchers to build upon our benchmark and findings to develop more advanced approaches for this task. Second, as shown in our results on ABGEN-EVAL, the reported automated evaluation scores are not yet perfect. To support further research, we will make all model outputs from Section 4 publicly available. This will enable other researchers to conduct different automated evaluations and ensure consistent rankings by re-running their assessments on our model outputs. Additionally, our human evaluation protocol is designed to minimize the need for repeated human evaluations by future researchers. By strictly adhering to our assessment guidelines, researchers can reliably assess and compare their methods with existing approaches in an independent and consistent manner. Lastly, we only explore the LLMs' abilities on designing ablation studies. In real-world scenarios, how can LLM execute the designed ablation studies would be an interesting topic and we encourage future work to explore (Chen et al., 2025b).
247
+
248
+ # References
249
+
250
+ Shubham Agarwal, Issam H. Laradji, Laurent Charlin, and Christopher Pal. 2024. Litllm: A toolkit for scientific literature review.
251
+ AI@Meta. 2024. The llama 3 herd of models.
252
+ Signe AltmΓ€e, Alberto Sola-Leyva, and Andres Salumets. 2023. Artificial intelligence in scientific writing: a friend or a foe? Reproductive BioMedicine Online, 47(1):3-9.
253
+ Hui Chen, Miao Xiong, Yujie Lu, Wei Han, Ailin Deng, Yufei He, Jiaying Wu, Yibo Li, Yue Liu, and Bryan Hooi. 2025a. Mlr-bench: Evaluating ai agents on open-ended machine learning research.
254
+
255
+ Qiguang Chen, Mingda Yang, Libo Qin, Jinhao Liu, Zheng Yan, Jiannan Guan, Dengyun Peng, Yiyan Ji, Hanjing Li, Mengkang Hu, Yimeng Zhang, Yihao Liang, Yuhang Zhou, Jiaqi Wang, Zhi Chen, and Wanxiang Che. 2025b. Ai4research: A survey of artificial intelligence for scientific research.
256
+ Yiran Chen, Pengfei Liu, and Xipeng Qiu. 2021. Are factuality checkers reliable? adversarial metaevaluation of factuality in summarization. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 2082-2095, Punta Cana, Dominican Republic. Association for Computational Linguistics.
257
+ Mike D'Arcy, Tom Hope, Larry Birnbaum, and Doug Downey. 2024. Marg: Multi-agent review generation for scientific papers. arXiv preprint arXiv:2401.04259.
258
+ Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A. Smith, and Matt Gardner. 2021. A dataset of information-seeking questions and answers anchored in research papers. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4599-4610, Online. Association for Computational Linguistics.
259
+ DeepSeek-AI. 2024. Deepseek-v3 technical report.
260
+ DeepSeek-AI. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning.
261
+ Jiangshu Du, Yibo Wang, Wenting Zhao, Zhongfen Deng, Shuaiqi Liu, Renze Lou, Henry Peng Zou, Pranav Narayanan Venkit, Nan Zhang, Mukund Srinath, et al. 2024. Llms assist nlp researchers: Critique paper (meta-) reviewing. arXiv preprint arXiv:2406.16253.
262
+ Alexander R. Fabbri, Wojciech KrysciΕ„ski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. Transactions of the Association for Computational Linguistics, 9:391-409.
263
+ Xi Fang, Weijie Xu, Fiona Anting Tan, Ziqing Hu, Jiani Zhang, Yanjun Qi, Srinivasan H. Sengamedu, and Christos Faloutsos. 2024. Large language models (LLMs) on tabular data: Prediction, generation, and understanding - a survey. Transactions on Machine Learning Research.
264
+ Gemini. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context.
265
+ Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, LΓ©lio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, ThΓ©ophile Gervet, Thibaut Lavril, Thomas Wang,
266
+
267
+ TimothΓ©e Lacroix, and William El Sayed. 2024. Mixtral of experts.
268
+ Yoonjoo Lee, Kyungjae Lee, Sunghyun Park, Dasol Hwang, Jaehyeon Kim, Hong-In Lee, and Moontae Lee. 2023. QASA: Advanced question answering on scientific articles. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 19036-19052. PMLR.
269
+ Chuhan Li, Ziyao Shangguan, Yilun Zhao, Deyuan Li, Yixin Liu, and Arman Cohan. 2024a. M3SciQA: A multi-modal multi-document scientific QA benchmark for evaluating foundation models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 15419-15446, Miami, Florida, USA. Association for Computational Linguistics.
270
+ Ruochen Li, Teerth Patel, Qingyun Wang, and Xinya Du. 2024b. Mlr-copilot: Autonomous machine learning research based on large language models agents.
271
+ Yixin Liu, Alexander Fabbri, Jiawen Chen, Yilun Zhao, Simeng Han, Shafiq Joty, Pengfei Liu, Dragomir Radev, Chien-Sheng Wu, and Arman Cohan. 2024. Benchmarking generation and evaluation capabilities of large language models for instruction controllable summarization. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 4481-4501, Mexico City, Mexico. Association for Computational Linguistics.
272
+ Yuliang Liu, Xiangru Tang, Zefan Cai, Junjie Lu, Yichi Zhang, Yanjun Shao, Zexuan Deng, Helan Hu, Zengxian Yang, Kaikai An, et al. 2023. Mlibench: Large language models leverage open-source libraries for machine learning tasks. arXiv preprint arXiv:2311.09835.
273
+ Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, and Daniel Weld. 2020. S2ORC: The semantic scholar open research corpus. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4969-4983, Online. Association for Computational Linguistics.
274
+ Renze Lou, Hanzi Xu, Sijia Wang, Jiangshu Du, Ryo Kamoi, Xiaoxin Lu, Jian Xie, Yuxuan Sun, Yusen Zhang, Jihyun Janice Ahn, Hongchao Fang, Zhuoyang Zou, Wenchao Ma, Xi Li, Kai Zhang, Congying Xia, Lifu Huang, and Wenpeng Yin. 2025. AAAR-1.0: Assessing AI's potential to assist research. In Forty-second International Conference on Machine Learning.
275
+ Chris Lu, Cong Lu, Robert Tjarko Lange, Jakob Foerster, Jeff Clune, and David Ha. 2024. The ai scientist: Towards fully automated open-ended scientific discovery.
276
+ Bodhisattwa Prasad Majumder, Harshit Surana, Dhruv Agarwal, Sanchaita Hazra, Ashish Sabharwal, and Peter Clark. 2024. Data-driven discovery with large generative models.
277
+
278
+ Meta AI. 2025. Llama 4: Natively multimodal mixture-of-experts language model.
279
+ Microsoft,.; Abdelrahman Abouelenin, Atabak Ashfaq, Adam Atkinson, Hany Awadalla, Nguyen Bach, Jianmin Bao, Alon Benhaim, Martin Cai, Vishrav Chaudhary, Congcong Chen, Dong Chen, Dongdong Chen, Junkun Chen, Weizhu Chen, Yen-Chun Chen, Yi ling Chen, Qi Dai, Xiyang Dai, Ruchao Fan, Mei Gao, Min Gao, Amit Garg, Abhishek Goswami, Junheng Hao, Amr Hendy, Yuxuan Hu, Xin Jin, Mahmoud Khademi, Dongwoo Kim, Young Jin Kim, Gina Lee, Jinyu Li, Yunsheng Li, Chen Liang, Xihui Lin, Zeqi Lin, Mengchen Liu, Yang Liu, Gilsinia Lopez, Chong Luo, Piyush Madan, Vadim Mazalov, Arindam Mitra, Ali Mousavi, Anh Nguyen, Jing Pan, Daniel Perez-Becker, Jacob Platin, Thomas Portet, Kai Qiu, Bo Ren, Liliang Ren, Sambuddha Roy, Ning Shang, Yelong Shen, Saksham Singhal, Subhojit Som, Xia Song, Tetyana Sych, Praneetha Vaddamanu, Shuo-hang Wang, Yiming Wang, Zhenghao Wang, Haibin Wu, Haoran Xu, Weijian Xu, Yifan Yang, Ziyi Yang, Donghan Yu, Ishmam Zabir, Jianwen Zhang, Li Lyna Zhang, Yunan Zhang, and Xiren Zhou. 2025. Phi-4 mini technical report: Compact yet powerful multimodal language models via mixture-of-loras.
280
+ OpenAI. 2024. Hello gpt-4o.
281
+ OpenAI. 2025a. Addendum to openai o3 and o4-mini system card: Openai o3 operator.
282
+ OpenAI. 2025b. Introducing gpt-4.1 in the api.
283
+ Tanik Saikh, Tirthankar Ghosal, Amish Mittal, Asif Ekbal, and Pushpak Bhattacharyya. 2022. Scienceqa: a novel resource for question answering on scholarly articles. International Journal on Digital Libraries, 23:289 - 301.
284
+ Chenglei Si, Diyi Yang, and Tatsunori Hashimoto. 2025. Can LLMs generate novel research ideas? a largescale human study with $100+$ NLP researchers. In The Thirteenth International Conference on Learning Representations.
285
+ Yuan Sui, Mengyu Zhou, Mingjie Zhou, Shi Han, and Dongmei Zhang. 2024. Table meets llm: Can large language models understand structured table data? a benchmark and empirical study.
286
+ Cheng Tan, Dongxin Lyu, Siyuan Li, Zhangyang Gao, Jingxuan Wei, Siqi Ma, Zicheng Liu, and Stan Z. Li. 2024. Peer review as a multi-turn and long-context dialogue with role-based interactions.
287
+ Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, Louis Rouillard, Thomas Mesnard, Geoffrey Cideron, Jean bastien Grill, Sabela Ramos, Edouard Yvinec, Michelle Casbon, Etienne Pot, Ivo Penchev, Gael Liu, Francesco Visin, Kathleen Ke- nealy, Lucas Beyer, Xiaohai Zhai, Anton Tsitsulin, Robert Busa-Fekete, Alex Feng, Noveen Sachdeva, Benjamin Coleman, Yi Gao, Basil Mustafa, Iain
288
+
289
+ Barr, Emilio Parisotto, David Tian, Matan Eyal, Colin Cherry, Jan-Thorsten Peter, Danila Sinopalnikov, Surya Bhupatiraju, Rishabh Agarwal, Mehran Kazemi, Dan Malkin, Ravin Kumar, David Vilar, Idan Brusilovsky, Jiaming Luo, Andreas Steiner, Abe Friesen, Abhanshu Sharma, Abheesht Sharma, Adi Mayrav Gilady, Adrian Goedeckemeyer, Alaa Saade, Alex Feng, Alexander Kolesnikov, Alexei Bendebury, Alvin Abdagic, Amit Vadi, AndrΓ‘s GyΓΆrgy, AndrΓ© Susano Pinto, Anil Das, Ankur Bapna, Antoine Miech, Antoine Yang, Antonia Paterson, Ashish Shenoy, Ayan Chakrabarti, Bilal Piot, Bo Wu, Bobak Shahriari, Bryce Petrini, Charlie Chen, Charline Le Lan, Christopher A. Choquette-Choo, CJ Carey, Cormac Brick, Daniel Deutsch, Danielle Eisenbud, Dee Cattle, Derek Cheng, Dimitris Paparas, Divyashree Shivakumar Sreepathihalli, Doug Reid, Dustin Tran, Dustin Zelle, Eric Noland, Erwin Huizenga, Eugene Kharitonov, Frederick Liu, Gagik Amirkhanyan, Glenn Cameron, Hadi Hashemi, Hanna Klimczak-Plucinska, Harman Singh, Harsh Mehta, Harshal Tushar Lehri, Hussein Hazimeh, Ian Ballantyne, Idan Szpektor, Ivan Nardini, Jean Pouget-Abadie, Jetha Chan, Joe Stanton, John Wieting, Jonathan Lai, Jordi Orbay, Joseph Fernandez, Josh Newlan, Ju yeong Ji, Jytodinder Singh, Kat Black, Kathy Yu, Kevin Hui, Kiran Vodrahalli, Klaus Greff, Linhai Qiu, Marcella Valentine, Marina Coelho, Marvin Ritter, Matt Hoffman, Matthew Watson, Mayank Chaturvedi, Michael Moynihan, Min Ma, Nabila Babar, Natasha Noy, Nathan Byrd, Nick Roy, Nikola Momchev, Nilay Chauhan, Noveen Sachdeva, Oskar Bunyan, Pankil Botarda, Paul Caron, Paul Kishan Rubenstein, Phil Culliton, Philipp Schmid, Pier Giuseppe Sessa, Pingmei Xu, Piotr Stanczyk, Pouya Tafti, Rakesh Shivanna, Renjie Wu, Renke Pan, Reza Rokni, Rob Willoughby, Rohith Vallu, Ryan Mullins, Sammy Jerome, Sara Smoot, Sertan Girgin, Shariq Iqbal, Shashir Reddy, Shruti Sheth, Siim PΓΆder, Sijal Bhatnagar, Sindhu Raghuram Panyam, Sivan Eiger, Susan Zhang, Tianqi Liu, Trevor Yacovone, Tyler Liechty Uday Kalra, Utku Evci, Vedant Misra, Vincent Roseberry,Vlad Feinberg,Vlad Kolesnikov,Woohyun HanWoosuk Kwon,Xi Chen,Yinlam ChowYuvein Zhu,Zichuan Wei,Zoltan EgyedVictor Cotruta Minh Giang Phoebe Kirk Anand RaoKat Black Nabila Babar Jessica LoEric Moreira Luiz Gustavo MartinsOmar SansevieroLucas Gonzalez Zach Gleicher Tris WarkentinVahab Mirrokni Evan Senter Eli CollinsJoelle BarralZoubin Ghahramani Raia Hadsell Yossi Matias D.Sculley Slav Petrov Noah Fiedel,Noam Shazeer Oriol Vinyals Jeff Dean Demis Hassabis Koray Kavukcuoglu Clement Farabet Elena Buchatskaya Jean-Baptiste Alayrac,Rohan Anil,Dmitry,Lepikhin Sebastian Borgeaud,Olivier BachemArmand Joulin,Alek Andreev,Cassidy Hardin Robert Dadashi and Leonard Hussenot. 2025.Gemma 3 technical report.
290
+
291
+ Qwen Team. 2025. Qwen3 technical report.
292
+
293
+ Chengye Wang, Yifei Shen, Zexi Kuang, Arman Cohan,
294
+
295
+ and Yilun Zhao. 2025. Sciver: Evaluating foundation models for multimodal scientific claim verification.
296
+
297
+ Qingyun Wang, Doug Downey, Heng Ji, and Tom Hope. 2024a. Scimon: Scientific inspiration machines optimized for novelty.
298
+
299
+ Xintao Wang, Jiangjie Chen, Nianqi Li, Lida Chen, Xinfeng Yuan, Wei Shi, Xuyang Ge, Rui Xu, and Yanghua Xiao. 2024b. Surveyagent: A conversational system for personalized and efficient research survey. arXiv preprint arXiv:2404.06364.
300
+
301
+ Po-Cheng Wu, An-Zi Yen, Hen-Hsen Huang, and Hsin-Hsi Chen. 2022. Incorporating peer reviews and rebuttal counter-arguments for meta-review generation. Proceedings of the 31st ACM International Conference on Information & Knowledge Management.
302
+
303
+ Fangyuan Xu, Kyle Lo, Luca Soldaini, Bailey Kuehl, Eunsol Choi, and David Wadden. 2024. Kiwi: A dataset of knowledge-intensive writing instructions for answering research questions. arXiv preprint arXiv:2403.03866.
304
+
305
+ Zhijian Xu, Yilun Zhao, Manasi Patwardhan, Lovekesh Vig, and Arman Cohan. 2025. Can llms identify critical limitations within scientific research? a systematic evaluation on ai research papers.
306
+
307
+ An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jianxin Yang, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Xuejing Liu, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, Zhifang Guo, and Zhihao Fan. 2024a. Qwen2 technical report.
308
+
309
+ John Yang, Carlos E Jimenez, Alexander Wettig, Kilian Lieret, Shunyu Yao, Karthik Narasimhan, and Ofir Press. 2024b. Swe-agent: Agent-computer interfaces enable automated software engineering. arXiv preprint arXiv:2405.15793.
310
+
311
+ Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik R Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. In Thirty-seventh Conference on Neural Information Processing Systems.
312
+
313
+ Zhiyuan Zeng, Jiatong Yu, Tianyu Gao, Yu Meng, Tanya Goyal, and Danqi Chen. 2024. Evaluating large language models at evaluating instruction following. In The Twelfth International Conference on Learning Representations.
314
+
315
+ Yilun Zhao, Chengye Wang, Chuhan Li, and Arman Cohan. 2025a. Can multimodal foundation models understand schematic diagrams? an empirical study on information-seeking qa over scientific papers.
316
+ Yilun Zhao, Kaiyan Zhang, Tiansheng Hu, Sihong Wu, Ronan Le Bras, Taira Anderson, Jonathan Bragg, Joseph Chee Chang, Jesse Dodge, Matt Latzke, Yixin Liu, Charles McGrady, Xiangru Tang, Zihang Wang, Chen Zhao, Hannaneh Hajishirzi, Doug Downey, and Arman Cohan. 2025b. Sciarena: An open evaluation platform for foundation models in scientific literature tasks.
317
+ Ruiyang Zhou, Lu Chen, and Kai Yu. 2024a. Is llm a reliable reviewer? a comprehensive evaluation of llm on automatic paper reviewing tasks. In International Conference on Language Resources and Evaluation.
318
+ Yangqiaoyu Zhou, Haokun Liu, Tejes Srivastava, Hongyuan Mei, and Chenhao Tan. 2024b. Hypothesis generation with large language models.
319
+
320
+ # A Appendix
321
+
322
+ # A.1 ABGEN Benchmark
323
+
324
+ <table><tr><td>Annotation Quality</td><td>%S β‰₯ 4</td></tr><tr><td>Research Context</td><td></td></tr><tr><td>Correctly structured</td><td>99.0</td></tr><tr><td>Excluding ablation-relevant content</td><td>96.5</td></tr><tr><td>Reference Ablation Study</td><td></td></tr><tr><td>Correctly structured</td><td>98.5</td></tr><tr><td>Non-overlapping</td><td>96.0</td></tr><tr><td>Justifiable within research context</td><td>97.5</td></tr></table>
325
+
326
+ Table 6: Human evaluation over 200 samples of ABGEN. Three internal evaluators were asked to rate the samples on a scale of 1 to 5 individually. We report percent of samples that have an average score $\geq 4$ to indicate the annotation quality of ABGEN.
327
+
328
+ <table><tr><td>ID</td><td># NLP/AI Publication</td><td>Data Annotation</td><td>Data Validation</td><td>Human Evaluation</td><td>Human Performance</td></tr><tr><td>1</td><td>&gt;10</td><td>βœ“</td><td>βœ“</td><td></td><td>βœ“</td></tr><tr><td>2</td><td>&gt;10</td><td></td><td></td><td>βœ“</td><td></td></tr><tr><td>3</td><td>&gt;10</td><td></td><td></td><td>βœ“</td><td></td></tr><tr><td>4</td><td>5-10</td><td>βœ“</td><td>βœ“</td><td></td><td>βœ“</td></tr><tr><td>5</td><td>1-5</td><td>βœ“</td><td></td><td>βœ“</td><td></td></tr><tr><td>6</td><td>1-5</td><td>βœ“</td><td>βœ“</td><td>βœ“</td><td></td></tr></table>
329
+
330
+ Table 7: Details of annotators involved in dataset construction and LLM performance evaluation. ABGEN is annotated by experts in NLP domains, ensuring both the accuracy of the benchmark and the reliability of the human evaluation.
331
+
332
+ # B Experiment Setup
333
+
334
+ # User Study Prompt
335
+
336
+ [System Input]:
337
+
338
+ Revise or rewrite the initial generation based on research context and user feedback.
339
+
340
+ [User Input]:
341
+
342
+ Research context: {research context}
343
+
344
+ Initial generation: {initial generation}
345
+
346
+ User feedback: {user feedback}
347
+
348
+ Redesign an ablation study about the {ablation module}, according to user feedback ...
349
+
350
+ Figure 4: Prompt for LLM-researcher interaction.
351
+
352
+ <table><tr><td>Organization</td><td>Model</td><td>Release</td><td>Version</td><td>Context Window</td></tr><tr><td colspan="5">Proprietary Models</td></tr><tr><td rowspan="3">OpenAI</td><td>o4-mini</td><td>2025-4</td><td>o4-mini-2025-04-16</td><td>-</td></tr><tr><td>GPT-4.1</td><td>2025-4</td><td>gpt-4.1-2025-04-14</td><td>-</td></tr><tr><td>GPT-4o</td><td>2024-8</td><td>gpt-4o-2024-08-06</td><td>-</td></tr><tr><td>Google</td><td>Gemini-2.5-Flash</td><td>2024-5</td><td>gemini-2.5-flash-preview-05-20</td><td>-</td></tr><tr><td colspan="5">Open-source Multimodal Foundation Models</td></tr><tr><td>Mistral AI</td><td>Mistral-Small-3.1</td><td>2025-3</td><td>Mistral-Small-3.1-24B</td><td>128k</td></tr><tr><td>Microsoft</td><td>Phi-4</td><td>2025-3</td><td>Phi-4</td><td>16k</td></tr><tr><td>Google</td><td>Gemma-3-27b-it</td><td>2025-3</td><td>gemma-3-27b-it</td><td>16k</td></tr><tr><td rowspan="3">DeepSeek</td><td>DeepSeekV3</td><td>2024-12</td><td>DeepSeekV3</td><td>160k</td></tr><tr><td>DeepSeekR1</td><td>2025-5</td><td>DeepSeek-R1-0528</td><td>160k</td></tr><tr><td>DeepSeek-R1-0528-Qwen3-8B,</td><td>2025-5</td><td>DeepSeek-R1-0528-Qwen3-8B</td><td>160k</td></tr><tr><td rowspan="4">Alibaba</td><td>Qwen2.5-32B</td><td>2025-1</td><td>Qwen2.5-32B-Instruct</td><td>32k</td></tr><tr><td>Qwen3-8B</td><td>2025-5</td><td>Qwen3-8B</td><td>40k</td></tr><tr><td>Qwen3-32B</td><td>2025-5</td><td>Qwen3-32B</td><td>40k</td></tr><tr><td>Qwen3-235BA22B</td><td>2025-5</td><td>Qwen3-235B-A22B</td><td>32k</td></tr><tr><td rowspan="4">Meta</td><td>Llama-3.1-70B</td><td>2024-6</td><td>Llama-3.1-70B-Instruct</td><td>32k</td></tr><tr><td>Llama-3.3-70B</td><td>2025-5</td><td>Llama-3.3-70B-Instruct</td><td>32k</td></tr><tr><td>Llama-4-Scout-17B</td><td>2025-5</td><td>Llama-4-Scout-17B-Instruct</td><td>32k</td></tr><tr><td>Llama-4-Maverick-17B</td><td>2025-5</td><td>Llama-4-Maverick-17B-Instruct</td><td>32k</td></tr></table>
353
+
354
+ Table 8: Details of the organization, release time, maximum context length, and model source (i.e., URL for proprietary models and Hugging Face model name for open-source models) for the LLMs evaluated in ABGEN.
355
+
356
+ # C Experiments
357
+
358
+ # C.1 Meta Evaluation Results
359
+
360
+ <table><tr><td>Evaluator LLM</td><td>Import.</td><td>Faith.</td><td>Sound.</td><td>Overall</td></tr><tr><td>QwQ-32B</td><td>0.856</td><td>0.682</td><td>0.858</td><td>0.877</td></tr><tr><td>Qwen3-32B</td><td>0.741</td><td>0.779</td><td>0.884</td><td>0.864</td></tr><tr><td>Qwen3-8B</td><td>0.796</td><td>0.682</td><td>0.818</td><td>0.847</td></tr><tr><td>Gemini-2.5-Flash-Preview</td><td>0.590</td><td>0.748</td><td>0.849</td><td>0.775</td></tr><tr><td>GPT-4o</td><td>0.473</td><td>0.607</td><td>0.767</td><td>0.726</td></tr><tr><td>GPT-4.1-mini</td><td>0.562</td><td>0.523</td><td>0.828</td><td>0.713</td></tr><tr><td>Qwen2.5-32B</td><td>0.342</td><td>0.673</td><td>0.687</td><td>0.673</td></tr><tr><td>DS-R1-0528-Qwen3-8B</td><td>0.674</td><td>0.757</td><td>0.862</td><td>0.660</td></tr><tr><td>GPT-4.1</td><td>0.606</td><td>0.678</td><td>0.864</td><td>0.647</td></tr><tr><td>Llama-4-Maverick</td><td>0.584</td><td>0.241</td><td>0.622</td><td>0.523</td></tr><tr><td>Llama-3.3-70B</td><td>0.463</td><td>0.404</td><td>0.841</td><td>0.516</td></tr><tr><td>Llama-3.1-70B</td><td>0.264</td><td>0.409</td><td>0.266</td><td>0.436</td></tr><tr><td>Llama-4-Scout</td><td>0.620</td><td>0.327</td><td>0.409</td><td>0.421</td></tr><tr><td>DeepSeek-R1-0528</td><td>0.752</td><td>0.691</td><td>0.181</td><td>0.407</td></tr></table>
361
+
362
+ Table 9: System-level Kendall correlations between pointwise evaluations from various LLM-based evaluators and human judgments across four criteria: importance, faithfulness, soundness, and overall. The overall score is not directly rated by humans, but computed as the average of the other three aspect scores.
363
+
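+ For reference, a system-level correlation reduces each system to its mean score under the LLM evaluator and under human judges before comparing the two rankings; a minimal sketch with illustrative scores (using SciPy's kendalltau) is:
+
+ ```python
+ import numpy as np
+ from scipy.stats import kendalltau
+
+ # Rows are systems, columns are samples (illustrative, not the ABGEN data).
+ llm_scores = np.array([[4, 5, 4], [3, 3, 4], [2, 3, 2]])
+ human_scores = np.array([[5, 4, 4], [4, 3, 3], [2, 2, 3]])
+
+ # Aggregate per system first, then correlate the system rankings.
+ tau, p = kendalltau(llm_scores.mean(axis=1), human_scores.mean(axis=1))
+ print(f"system-level Kendall tau = {tau:.3f} (p = {p:.3f})")
+ ```
+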
364
+ # D Error Analysis
365
+
366
+ # D.1 Misalignment with Research Context
367
+
368
+ Error type: Misalignment with Research Context
369
+
370
+ Paper Title: Balancing Transparency and Accuracy: A Comparative Analysis of Rule-Based and Deep Learning Models in Political Bias Classification
371
+ url: https://arxiv.org/pdf/2411.04328v1
372
+
373
+ Ablation Study about external news outlet evaluation (generated by DeepSeek-R1-0528-Qwen3-8B)
374
+
375
+ The specific aim of this ablation study is to investigate the impact of relying on external news outlet evaluation, derived from a pre-existing academic media bias classification, versus utilizing internal model-derived labels on the performance of the Differentiated Biased feature detection models...
376
+
377
+ A new ablated version of the experiment will replicate the data sourcing and preprocessing steps using the identical dataset. Crucially, it will remove the step where specific outlets are labeled as 'Left' or 'Right' based on external media bias charts or performance metrics. Instead, for the purpose of model training and evaluation, the data will be treated as unlabeled sequences. The convolutional NN and rule-based sentiment analysis models will be trained solely on the raw text features without predefined target categories corresponding to the outlet bias...
378
+
379
+ Relevant text: We used the University of Central Oklahoma's Political Bias chart, as it is publicly accessible and provides a visual representation of political bias categorization of various news sources. From this chart, we selected eight outlets ranging across the political spectrum. This classification served as the foundation for our categorization of news articles into 'Left', 'Neutral', or 'Right'. These labels were then used to evaluate how effectively our models could identify differentiated bias features
380
+
381
+ # Error analysis by human experts:
382
+
383
+ The issue lies in a misalignment with the original research context. The ablation study assumes a setup where external bias labels (e.g., "Left" or "Right") are removed, suggesting an unsupervised or label-free classification scenario. However, the paper is fundamentally framed around a supervised learning task that explicitly relies on these external labels from a media bias chart. Removing them would not simply test a variant of the same method; it would redefine the core task, making the ablation incompatible with the original experimental objectives and assumptions.
384
+
385
+ Figure 5: A Failure Example of Misalignment with Research Context
386
+
387
+ # D.2 Ambiguity and Difficulty in Reproduction
388
+
389
+ Error type: Ambiguity and Difficulty in Reproduction
390
+
391
+ Paper Title: RMB: Comprehensively Benchmarking Reward Models in LLM Alignment
394
+
395
+ url: https://arxiv.org/pdf/2410.09893v1
396
+
397
+ Ablation Study about confidence weighting (generated by Qwen2.5-32B)
398
+
399
+ The objective of this ablation study is to evaluate the impact of confidence weighting on the performance of reward models (RMs) used in aligning large language models (LLMs) with human preferences. Confidence weighting refers to the practice of assigning higher or lower weight to a model's score based on a measure of the model's confidence in that score. This study aims to determine whether and how confidence weighting affects the accuracy of RMs in both pairwise and BoN (Best of N) tasks.
400
+
401
+ The experimental design includes collecting predictions from each RM on the pairwise and BoN datasets, adjusting scores based on confidence for CW variants, calculating accuracy metrics, and comparing results...
402
+
403
+ ![](images/545915327f68a1fd8de48ee4a19d458a2545171f36c0a4d8d11c484539359578.jpg)
404
+
405
+ Relevant text: We further analyze the robustness of BoN evaluation with majority voting across 3 RM instances. Surprisingly, we do not observe an enhancement of ranking consistency or alignment with pairwise results, suggesting that the majority voting may not be effective in the RM evaluation context.
406
+
407
+ We observe a significant inconsistency in both helpfulness and harmlessness evaluation when applying BoN across different RM instances, indicating the instability of single-RM scoring in such tasks.
408
+
409
+ ![](images/f65be827fa6cd732710f787e679840ccb4295c2e331e3bc48fb0a9ea9ca67dd3.jpg)
410
+
411
+ # Error analysis by human experts:
412
+
413
+ The ablation study lacks clear details on how confidence scores are computed and applied, especially for generative models. Key steps like normalization, weighting logic, and model accessibility are vague, making the experiment difficult to reproduce reliably.
414
+
415
+ Figure 6: A Failure Example of Ambiguity and Difficulty in Reproduction
416
+
417
+ # D.3 Partial Ablation or Incomplete Experimentation
418
+
419
+ Error type: Partial Ablation or Incomplete Experimentation
420
+
421
+ Paper Title: RST-LoRA: A Discourse-Aware Low-Rank Adaptation for Long Document Abstractive Summarization
424
+
425
+ url: https://arxiv.org/pdf/2407.13579v1
426
+
427
+ Ablation Study about assess rank effect (generated by Llama-4-Maverick-17B)
428
+
429
+ To evaluate the contribution of individual RST structure variants (binary probability distribution, relation labels, and uncertainty) to the performance of the proposed RST-injected LoRA method for long document summarization.
430
+
431
+ Four variants are introduced based on the RST integration framework:
+
+ 1. Base Model: Vanilla LoRA (for comparison, no RST integration).
+ 2. Variation 1 (Binary): Integrates only the binary probability distribution of RST relations (without relation labels).
+ 3. Variation 2 (Labels): Integrates the binary probability distribution along with explicit discourse relation type labels (binary probability + labels; no uncertainty).
+ 4. Variation 3 (Uncertainty): Integrates the binary probability distribution and uses uncertainty information (binary probability + uncertainty)...
432
+
433
+ ![](images/2ad148d87fc79093266f2b299b3876cdafdc41b6b4f008b415e447b4cbb683d4.jpg)
434
+
435
+ Relevant text: Based on this, we propose four fine-grained RST matrix distributions:
436
+
437
+ - RST b_wo: A binary, label-agnostic representation collapsing probabilities into a simple 1-or-0 regarding discourse connections.
438
+ - RST b_w: An extension of the binary distribution that includes relation labels, enriching the binary decisions with relational types.
439
+ - RST p_wo: A probabilistic representation that omits labels, focusing instead on the probabilities to express uncertainty in discourse connections.
440
+ - RST p_w: The most granular representation, retaining both types of discourse relations and their probabilistic weights for a full-fledged representation of discourse nuances.
441
+
442
+ ![](images/8c7fd6ccc7f0950df7a83dc2092d038a537ef32fb2a7ab1a12665b93e73402df.jpg)
443
+
444
+ # Error analysis by human experts:
445
+
446
+ This sentence reveals that all ablation variants are built on the binary probability distribution, which is never ablated itself. Therefore, the design does not isolate the effects of this core component, leading to an incomplete assessment of each factor's independent contribution.
447
+
448
+ Figure 7: A Failure Example of Partial Ablation or Incomplete Experimentation
449
+
450
+ # D.4 Insignificant Ablation Module
+
+ Error type: Insignificant Ablation Module
451
+
452
+ Paper Title: Bayesian scaling laws for in-context learning
453
+
454
+ url: https://arxiv.org/pdf/2410.16531v3
455
+
456
+ Ablation Study about sampling-wise tying (generated by DeepSeek-R1)
457
+
458
+ This ablation study aims to investigate the impact of sampling-wise tying on the in-context learning (ICL) performance of toy transformer models. The authors train GPT-2-style models of varying sizes on the GINC dataset and evaluate them on synthetic sequences to produce ICL curves. Two configurations are compared: one where sampling-wise tying is applied to ensure consistent sampling across training epochs, and another where sampling varies freely. Models are pretrained under both settings, and their ICL performance is measured using the probability of the correct token at key positions. Normalized Root Mean Square Error (NRMSE) is calculated for each condition, and statistical tests are used to assess differences...
459
+
460
+ ![](images/ec97d54b94b0e8da29f9c57493882077afc0fda5379ae654a803ba1ebad1b870.jpg)
461
+
462
+ Relevant text: To reduce the number of learned parameters that we cannot observe (and which can thus take on arbitrary values and lead to overfitting), we can tie some of the non-diagonal values in P. We propose two approaches to tying parameters: sampling-wise and scoring-wise. Under sampling-wise tying, we tie off-diagonal values in each column, and under scoring-wise tying we do the same but for rows. Both approaches reduce the parameter count from $M^2 + M$ to $3M$, and the number of unobserved parameters from $M^2 - M$ to $M$, making the complexity of the law in line with that of a power law.
463
+
464
+ ![](images/eabb46dd9c2b5974200b325735774016da9472307a51a7bca54c2f022a79fa80.jpg)
465
+ Figure 8: A Failure Example of Insignificant Ablation Module
466
+
467
+ # Error analysis by human experts:
468
+
469
+ The ablation study targets sampling-wise tying as the main module of investigation. However, in the original paper, it is not a core theoretical or experimental component but a technical simplification to reduce unobserved parameters in the scaling law matrix. It is one of two optional tying schemes and is not claimed to affect ICL performance. Framing it as the focus of an ablation misrepresents its importance, making this an example of the "Insignificant ablation module" error type, where the studied component holds little relevance to the main scientific objective.
470
+
471
+ # D.5 Inherent Logical Inconsistencies
+
+ Error type: Inherent Logical Inconsistencies
472
+
473
+ Paper Title: A SMART Mnemonic Sounds like "Glue Tonic": Mixing LLMs with Student Feedback to Make Mnemonic Learning Stick
476
+
477
+ url: https://arxiv.org/pdf/2406.15352v2
478
+
479
+ Ablation Study about MPL (generated by DeepSeek-R1-0528-Qwen3-8B)
480
+
481
+ Objective: Determine the necessity of a specific method MPL (Model Prompt Learning?) for achieving high-quality mnemonic explanations via DPO alignment.
482
+
483
+ Experiment Setup: This ablation study builds upon the fine-tuning and DPO alignment stage of the main experiment (Section 3.1.2)...
484
+
485
+ Ablation Conditions: Base DPO: Uses the full set of preference labels (prating, pPair, pLearn) as input to the DPO alignment process. DPO w/o MPL: Replaces the standard DPO input with the outcomes solely derived from a specific MPL-related aspect. For example, MPL could represent Method Prompt Learning potentially related to a particular prompt construction. Alternatively, if MPL refers to a data processing technique, MPL could be a specific methodology for selecting, filtering, or categorising the preference pairs before DPO, which we omit or substitute with a default/cursor. This ensures the ablation examines the impact of removing MPL on the outcome quality...
486
+
487
+ ![](images/11b9158666f43b07548b8bef7aeb66fb4f9e5e06d008da903320ce88f2de9230.jpg)
488
+ Figure 9: A Failure Example of Inherent Logical Inconsistencies
489
+
490
+ # Error analysis by human experts:
491
+
492
+ This ablation study contains inherent logical inconsistencies due to the vague and speculative definition of MPL, the key variable under investigation. Multiple interpretations are proposed without a clear operational definition, making the ablation condition ambiguous and difficult to reproduce. This weakens experimental control and creates uncertainty about what is actually being tested, undermining the validity of the conclusions.
abgenevaluatinglargelanguagemodelsinablationstudydesignandevaluationforscientificresearch/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d0091991a29c5b30028c460501b25321a14478d91b3a0b78ece610c971825b9a
3
+ size 720097
abgenevaluatinglargelanguagemodelsinablationstudydesignandevaluationforscientificresearch/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f5813cc6e18612b36e63d83d2b1acd93d162953f4ffbbe4f535a28e7f46aea8c
3
+ size 469346
accuratekvcachequantizationwithoutliertokenstracing/7d3e80b6-35e3-4635-8f5c-fd70948c468f_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bcfcc846acd6bea61430989eb32c95c6ceffb5747fc4af776ebb74fcb94afa4f
3
+ size 141256
accuratekvcachequantizationwithoutliertokenstracing/7d3e80b6-35e3-4635-8f5c-fd70948c468f_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ab8e35bca1e4a074131d2155daa89ba6a936614e487860b6c5625a05970af169
3
+ size 165951
accuratekvcachequantizationwithoutliertokenstracing/7d3e80b6-35e3-4635-8f5c-fd70948c468f_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a04a034b7c1dd076484897b6d422ca17d3442c25bd0d735c3ea0151800f52ad9
3
+ size 6376741
accuratekvcachequantizationwithoutliertokenstracing/full.md ADDED
@@ -0,0 +1,548 @@
 
 
 
 
1
+ # Accurate KV Cache Quantization with Outlier Tokens Tracing
2
+
3
+ Yi Su$^{1,2*}$, Yuechi Zhou$^{1,2*}$, Quantong Qiu$^{1,2}$, Juntao Li$^{1,2\dagger}$, Qingrong Xia$^{3}$, Ping Li$^{3}$, Xinyu Duan$^{3}$, Zhefeng Wang$^{3}$, Min Zhang$^{1,2}$
+
+ $^{1}$ School of Computer Science and Technology, Soochow University
+ $^{2}$ Key Laboratory of Data Intelligence and Advanced Computing, Soochow University
+ $^{3}$ Huawei Cloud
+
+ yisunlp@outlook.com; {ljt,minzhang}@suda.edu.cn
8
+
9
+ # Abstract
10
+
11
+ The impressive capabilities of Large Language Models (LLMs) come at the cost of substantial computational resources during deployment. While KV Cache can significantly reduce recomputation during inference, it also introduces additional memory overhead. KV Cache quantization presents a promising solution, striking a good balance between memory usage and accuracy. Previous research has shown that the Keys are distributed by channel, while the Values are distributed by token. Consequently, the common practice is to apply channel-wise quantization to the Keys and token-wise quantization to the Values. However, our further investigation reveals that a small subset of unusual tokens exhibit unique characteristics that deviate from this pattern, which can substantially impact quantization accuracy. To address this, we develop a simple yet effective method to identify these tokens accurately during the decoding process and exclude them from quantization as outlier tokens, significantly improving overall accuracy. Extensive experiments show that our method achieves significant accuracy improvements under 2-bit quantization and can deliver a 6.4 times reduction in memory usage and a 2.3 times increase in throughput<sup>1</sup>.
12
+
13
+ # 1 Introduction
14
+
15
+ Large Language Models (LLMs) have significantly impacted various industries due to their powerful capabilities (Achiam et al., 2023; Touvron et al., 2023a,b; Dubey et al., 2024; Jiang et al., 2023). However, their auto-regressive nature makes the generation process slow. Although using KV Cache can reduce decoding complexity from $O(n^{2})$ to $O(n)$ by storing the Keys and the Values computed during inference, it introduces substantial memory overhead. This overhead scales with sequence length, batch size, and hidden dimension, often creating a memory bottleneck and placing considerable pressure on resources during deployment. As a result, optimizing KV Cache management to enhance resource utilization and improve model throughput remains a critical challenge.
18
+
19
+ KV Cache affects throughput in two primary ways. First, its memory usage limits the scalability of batch sizes, reducing parallelism during decoding, and thus lowering throughput. Second, attention computation is delayed while waiting for the KV Cache to be transferred from memory to the computation unit. As the size of the KV Cache grows, the transmission time increases, decreasing throughput. Existing approaches mainly address this issue by optimizing hardware scheduling (Aminabadi et al., 2022; Dao et al., 2022; Sheng et al., 2023; Kwon et al., 2023) and reducing the size of the KV Cache (Liu et al., 2024b; Hooper et al., 2024; Kang et al., 2024; Zhang et al., 2023; Xiao et al., 2024). In this paper, we focus on the latter approach, KV cache compression.
20
+
21
+ One method of reducing the size of the KV Cache is to reduce the number of values that need to be stored, which is related to the shape of the KV Cache: [num_layers, batch_size, num_heads, sequence_length, head_dim]. There are various compression methods in each dimension, including layer-wise KV Cache sharing (Wu and Tu, 2024; Brandon et al., 2024; Zuhri et al., 2024; Mu et al., 2024), prefix sharing (Juravsky et al., 2024; Zhu et al., 2024), head-wise KV Cache sharing (Shazeer, 2019; Ainslie et al., 2023), token eviction (Xiao et al., 2024; Zhang et al., 2023; Ge et al., 2024), and low-rank projection (Wang et al., 2024; Yu et al., 2024; Chang et al., 2024).
22
+
23
+ Another strategy for reducing the size of KV Cache is quantization. However, unlike weight quantization, KV Cache quantization poses unique challenges due to the uneven distribution of the Keys and Values (Kang et al., 2024). To enhance quantization accuracy, various methods have been proposed, including using low-rank matrices to approximate the error before and after quantization (Kang et al., 2024), smoothing Key distributions through specific mappings (Ashkboos et al., 2024; Chang et al., 2024), channel-wise Key and token-wise Value asymmetric quantization (Liu et al., 2024b; Hooper et al., 2024), non-uniform quantization (Hooper et al., 2024; Dettmers et al., 2022), mixed-precision quantization (Dong et al., 2024), and Block Floating Point (BFP) quantization (Trukhanov and Soloveychik, 2024). Among these methods, channel-wise Key and token-wise Value asymmetric quantization has garnered much attention for its high accuracy and tuning-free nature. This technique operates under the assumption that some channels of the Keys have huge magnitudes and that the distribution of the Keys within the same channel is relatively uniform.
26
+
27
+ However, our further exploration reveals that a few outlier tokens deviate from this assumption. Their Keys have very small magnitudes in the outlier channels with large magnitudes, which greatly affects the accuracy of quantization. Based on these observations, we propose KV Cache Quantization with Outlier Tokens Tracing (OTT), a simple yet effective method that identifies these tokens and excludes them from the quantization process, thereby improving quantization accuracy. With hardware-friendly implementation, OTT achieves significant accuracy improvements under 2-bit quantization, resulting in a $6.4 \times$ reduction in memory usage and a $2.3 \times$ increase in throughput.
28
+
29
+ Overall, our contributions are as follows:
30
+
31
+ - We investigate the outlier channels of the KV Cache and identify that some outlier tokens deviate from the previous assumptions.
32
+ - We propose KV Cache Quantization with Outlier Tokens Tracing (OTT), a simple yet effective method to identify and exclude these tokens during quantization, thus improving overall quantization accuracy.
33
+ - Our method achieves significant accuracy improvements under 2-bit quantization, yielding a $6.4 \times$ reduction in memory usage and a $2.3 \times$ increase in throughput.
34
+
35
+ # 2 Background
36
+
37
+ Implementation of KV Cache. Transformer-based (Vaswani, 2017) LLMs typically utilize KV cache to prevent the redundant calculation of the attention scores and accelerate auto-regressive decoding. The generation process of LLMs with KV cache is divided into the prefill phase and the decoding phase (Patel et al., 2024). Given a prompt $X = \{x_0, x_1, \ldots, x_{n-1}\}$ and tensor $\mathbf{X} \in \mathbb{R}^{b \times n \times d}$ after embedding, where $b$ is the batch size, $n$ is the length of the prompt, and $d$ represents the hidden size, we will briefly describe the calculation process of the attention block, and we omit the number of heads in the multi-head attention mechanism.
38
+
39
+ i) During the prefill phase, the Keys $K_{<n}$ and Values $V_{<n}$ are computed and cached by transforming $X$ through the Key and Value weight matrices $\mathbf{W}_k$, $\mathbf{W}_v \in \mathbb{R}^{d \times d}$ of each layer, which can be formulated as:
40
+
41
+ $$
+ \boldsymbol{K}_{<n} = \boldsymbol{X}\mathbf{W}_{k}, \quad \boldsymbol{V}_{<n} = \boldsymbol{X}\mathbf{W}_{v}.
+ $$
44
+
45
+ ii) During the decoding phase, only the Keys and Values of the new token $x_{n}$ need to be calculated, which are then combined with the cached Keys and Values to compute the new attention scores and outputs. For the current input tensor $X_{n}\in \mathbb{R}^{b\times 1\times d}$ , we update the KV cache as follows:
46
+
47
+ $$
+ \boldsymbol{K} = \boldsymbol{K}_{<n} \,\|\, \boldsymbol{K}_{n}, \quad \boldsymbol{V} = \boldsymbol{V}_{<n} \,\|\, \boldsymbol{V}_{n},
+ $$
50
+
51
+ where $K_{n} = X_{n}\mathbf{W}_{k}$ and $V_{n} = X_{n}\mathbf{W}_{v}$ . We calculate the new attention output ATT as follows:
52
+
53
+ $$
+ \boldsymbol{Q}_{n} = \boldsymbol{X}_{n}\mathbf{W}_{q}, \quad \mathrm{ATT} = \sigma\left(\frac{\boldsymbol{Q}_{n}\boldsymbol{K}^{\top}}{\sqrt{d_{k}}}\right)\boldsymbol{V}, \tag{1}
+ $$
56
+
57
+ where $\mathbf{W}_q$ is the query weight matrix, $\sqrt{d_k}$ is the normalization factor, and $\sigma$ is the softmax function.
58
+
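+ As a concrete illustration of the cache update and Equation 1, a minimal single-head decoding step in PyTorch might look like the sketch below (random weights stand in for trained parameters; batching, multiple heads, and positional embeddings are omitted):
+
+ ```python
+ import torch
+
+ d = 64                                      # hidden size (= head dim here)
+ W_q, W_k, W_v = (torch.randn(d, d) for _ in range(3))
+ K_cache = torch.empty(0, d)                 # cached Keys,   shape [n, d]
+ V_cache = torch.empty(0, d)                 # cached Values, shape [n, d]
+
+ def decode_step(x_n):
+     """x_n: [1, d] embedding of the newest token; returns the [1, d] output."""
+     global K_cache, V_cache
+     K_cache = torch.cat([K_cache, x_n @ W_k])   # K = K_<n || K_n
+     V_cache = torch.cat([V_cache, x_n @ W_v])   # V = V_<n || V_n
+     attn = torch.softmax((x_n @ W_q) @ K_cache.T / d ** 0.5, dim=-1)
+     return attn @ V_cache
+
+ for _ in range(5):                          # five decoding steps
+     out = decode_step(torch.randn(1, d))
+ print(out.shape, K_cache.shape)             # [1, 64] and [5, 64]
+ ```
+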
59
+ Necessity of compression. While KV cache reduces the computational complexity from $O(n^{2})$ to $O(n)$ , it introduces substantial GPU memory overhead, particularly with long sequence lengths and large batch sizes. For example, in the case of LLaMA-3-8B (Dubey et al., 2024), where the number of layers $n_{\mathrm{layers}}$ is 32, the number of heads $h$ is 8, the head dimension $d$ is 512, the input length $l$ is 8192, and the batch size $b$ is 64, performing inference with fp16 precision (which uses 2 bytes per value) requires $4bhdln_{\mathrm{layers}}$ bytes to store the KV cacheβ€”equivalent to 256GB of memory. Thus, effectively compressing the KV cache is crucial to reducing GPU memory usage.
60
+
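+ The 256GB figure can be sanity-checked in a few lines; the factor of 4 below is 2 cached tensors (Keys and Values) times 2 bytes per fp16 value:
+
+ ```python
+ # Values taken from the example in the text (LLaMA-3-8B-style settings).
+ n_layers, h, d, l, b = 32, 8, 512, 8192, 64
+ kv_bytes = 4 * b * h * d * l * n_layers   # 2 tensors * 2 bytes per value
+ print(kv_bytes / 2**30, "GiB")            # -> 256.0 GiB
+ ```
+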
61
+ Uniform Quantization. In this paper, we focus on compressing the KV cache by reducing the bit-width needed to represent cached tensors. A straightforward approach is Uniform Quantization (Jacob et al., 2018), which maps continuous numerical data to a discrete domain. Specifically, to quantize a high-precision matrix (e.g., float32) $\mathbf{X}$ to a matrix $\mathbf{X}'$ with $b$ -bit precision, we first determine the quantization step size $q$ . Each element $X_{i,j} \in \mathbf{X}$ can be quantized to $\mathrm{Q}(X_{i,j})$ as follows:
62
+
63
+ $$
+ \mathrm{Q}\left(X_{i,j}\right) = \left\lfloor \left(X_{i,j} - X_{\min}\right) / q \right\rfloor, \tag{2}
+ $$
66
+
67
+ $$
+ q = (X_{\max} - X_{\min}) / (2^{b} - 1),
+ $$
70
+
71
+ where $\lfloor \cdot \rfloor$ is the rounding function.
72
+
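+ A minimal sketch of Equation 2, together with the dequantization step that recovers an approximation of the input:
+
+ ```python
+ import torch
+
+ def uniform_quant(x, bits):
+     """Quantize a tensor with a single shared (min, step) pair, per Equation 2."""
+     x_min = x.min()
+     q = (x.max() - x_min) / (2 ** bits - 1)    # step size
+     codes = torch.floor((x - x_min) / q)       # integer codes in [0, 2^b - 1]
+     return codes, x_min, q
+
+ def dequant(codes, x_min, q):
+     return codes * q + x_min                   # inverse mapping
+
+ x = torch.randn(4, 8)
+ codes, x_min, q = uniform_quant(x, bits=2)
+ print((dequant(codes, x_min, q) - x).abs().max() <= q)  # error bounded by q
+ ```
+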
73
+ Group Quantization. However, Uniform Quantization does not fully exploit the distribution characteristics of the data, which can lead to significant quantization errors, especially when there are outliers. A more advanced technique is Group Quantization (Yao et al., 2022), which divides the matrix into multiple groups, expecting the data within each group to share similar distribution characteristics. Unlike Uniform Quantization, Group Quantization allows each group to have different quantization parameters, such as step size. This flexibility enables the method to better adapt to the local characteristics of the data, thereby reducing quantization errors while maintaining a low bit-width. The channel-wise Key quantization and token-wise Value quantization proposed by KIVI (Liu et al., 2024b) is a type of Group Quantization.
74
+
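+ The only change group quantization makes to the sketch above is the axis over which the (min, step) statistics are computed; a minimal version of the KIVI-style asymmetric scheme (illustrative, not KIVI's actual implementation) is:
+
+ ```python
+ import torch
+
+ def group_quant(x, bits, dim):
+     """Reduce (min, max) over `dim`: reducing over the token axis yields
+     per-channel groups; reducing over the channel axis yields per-token groups."""
+     x_min = x.amin(dim=dim, keepdim=True)
+     q = (x.amax(dim=dim, keepdim=True) - x_min) / (2 ** bits - 1)
+     return torch.floor((x - x_min) / q.clamp_min(1e-8)), x_min, q
+
+ keys = torch.randn(128, 64)                       # [tokens, channels]
+ values = torch.randn(128, 64)
+ k_codes, *_ = group_quant(keys, bits=2, dim=0)    # channel-wise Keys
+ v_codes, *_ = group_quant(values, bits=2, dim=1)  # token-wise Values
+ ```
+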
75
+ # 3 Method
76
+
77
+ In this section, we propose KV Cache Quantization with Outlier Tokens Tracing (OTT). We start with a preliminary exploration of the Keys and Values before introducing our method.
78
+
79
+ # 3.1 Exploration of the Keys and Values
80
+
81
+ We conduct a series of preliminary experiments to gain a deeper understanding of the Keys and Values. For illustration, we take a sentence generated by LLaMA-2-7b-chat-hf $^{2}$ as an example. Table 5 in the Appendix presents the prompt and the generated context.
82
+
83
+ Distribution of the Keys and Values. Figure 1a and Figure 1b display the magnitude of the Keys and Values from layer 10, head 17. Notably, some channels exhibit exceptionally large Keys, and within these channels, the distribution of the Keys appears relatively uniform. In contrast, the Values have no distinct characteristics. These observations are consistent with KIVI (Liu et al., 2024b).
86
+
87
+ Distribution in outlier channels. We further investigate the distribution of these outlier channels. Figure 1d shows the Keys in an outlier channel from layer 10, head 17 (we plot the first 100 tokens). While the Keys generally exhibit a uniform distribution, a few tokens are notable exceptions. This pattern becomes clearer after sorting, as shown in Figure 1e, where some Keys have very small values while others are significantly larger. These exceptions can substantially increase $X_{max} - X_{min}$ in Equation 2 during quantization, ultimately diminishing quantization accuracy. Statistical analysis of these outliers can be found in Appendix F.
88
+
89
+ Identifying Outlier Tokens. Intuitively, tokens whose Keys have very small magnitudes in the outlier channels are also likely to have small Key magnitudes overall. To test this hypothesis, we plot the Keys from an outlier channel and the magnitude of the Keys across all channels (we plot the first 300 tokens). As shown in Figure 4 in the Appendix, the results confirm our assumption, suggesting that we can efficiently and accurately identify these outlier tokens with the magnitude of the Keys.
90
+
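+ A minimal sketch of this selection rule, taking the L2 norm as one concrete choice of "magnitude" (an assumption for illustration) and flagging the three smallest tokens:
+
+ ```python
+ import torch
+
+ keys = torch.randn(128, 64)                       # [tokens, channels]
+ magnitude = keys.norm(dim=-1)                     # one score per token
+ candidates = magnitude.topk(3, largest=False).indices
+ print("outlier-token candidates:", candidates.tolist())
+ ```
+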
91
+ Removing Outlier Tokens. From our analysis, outlier tokens significantly impact the accuracy of quantization. By excluding these outlier tokens during quantization and retaining them with full precision, we can greatly reduce the loss of the attention output. To investigate this further, we retain different numbers of tokens based on different selection criteria and compare the L1 loss of attention outputs before and after quantization. The results (Figure 1c) reveal that retaining tokens with the largest Keys yields the worst performance, while retaining those with the smallest Keys achieves the best results, aligning with our previous findings.
92
+
93
+ # 3.2 OTT: KV Cache Quantization with Outlier Tokens Tracing
94
+
95
+ From the above observations, we find that some outlier tokens can seriously affect the accuracy of quantization. So, we attempt to dynamically identify these tokens during the quantization process, exclude them during quantization, and retain their full-precision representations. Our method consists of two components: quantization and decoding.
96
+
97
+ Quantization. We define a fixed-size outlier pool with a capacity of outlier_num to store the Keys and Values of the outlier tokens.
98
+
99
+ ![](images/8e1ffd7f4142f3e4cd19f8597d177b1f9a22cdf99ca2a5111f7aa9e9340c4640.jpg)
100
+
101
+ ![](images/b2b2e23811fc24bfa8600d56cac3a91003838194f1be97b6d4cf94f8164ebb66.jpg)
102
+
103
+ ![](images/8a698870ba41a717accd86a9dd79cd7edd6ca7fa4273e393b97fcdd2c4b39198.jpg)
104
+
105
+ ![](images/90593c560f382ddacd36ad99d37b11b0d3fa4cb5504ce3244a63e1c047cc90d3.jpg)
106
109
+ Figure 1: Observations from preliminary experiments: (a) The Keys are distributed by channel and have some outlier channels. (b) The distribution of the Values does not exhibit any notable characteristics. (d) In certain outlier channels, a few tokens with low magnitude of Keys disrupt the originally uniform distribution within these channels. (e) Visualization of the sorted Keys in an outlier channel shows a rapid increase from a low value to very high values. (c) The L1 loss of attention output before and after quantization by retaining full-precision tokens based on different criteria. The best result is retaining full-precision tokens with the smallest magnitude of the Keys.
110
+
111
+ ![](images/62f0ea45835e0e6dbe0dbfeb29320d6bab5a542960f4d3a2ed95debd5682273c.jpg)
112
114
+
115
+ Following KIVI (Liu et al., 2024b), we use channel-wise Key quantization and token-wise Value quantization. We quantize the KV Cache every $G$ (group size, a hyper-parameter in our method) steps, so at each quantization step there are $G$ tokens to quantize. Based on the rule from Section 3.1, we compute the magnitude of the Keys of each token as the criterion, and all the tokens (the tokens to quantize and the tokens in the outlier pool) compete for a position in the outlier pool according to this criterion. Once a token is selected as an outlier token, its Keys and Values are replaced with the mean values of all tokens to eliminate their impact on quantization. When the outlier pool is full and replacements are needed, the tokens that were originally in the outlier pool but are defeated by newly added tokens should return to their original positions; for the sake of simplicity, however, we retain an additional pool to store these tokens, and when this pool is full, we stop identifying outlier tokens.
116
+
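+ A simplified sketch of this bookkeeping (the L2-norm criterion and all names are illustrative; the additional pool for defeated tokens is glossed over, with losers simply rejoining the group to be quantized):
+
+ ```python
+ import torch
+
+ def trace_outliers(keys, values, pool_k, pool_v, outlier_num=3):
+     """keys/values: [G, d] group to quantize; pool_k/pool_v: current pool."""
+     all_k = torch.cat([pool_k, keys])        # pool and new tokens compete
+     all_v = torch.cat([pool_v, values])
+     score = all_k.norm(dim=-1)               # smaller magnitude -> outlier
+     win = score.topk(outlier_num, largest=False).indices
+     new_pool_k, new_pool_v = all_k[win].clone(), all_v[win].clone()
+     # Replace winners by the mean so they no longer stretch X_max - X_min.
+     all_k[win] = all_k.mean(dim=0)
+     all_v[win] = all_v.mean(dim=0)
+     return all_k, all_v, new_pool_k, new_pool_v  # all_k/all_v go to quantization
+
+ G, d = 128, 64
+ quant_k, quant_v, pool_k, pool_v = trace_outliers(
+     torch.randn(G, d), torch.randn(G, d), torch.empty(0, d), torch.empty(0, d))
+ ```
+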
117
+ Decoding. We maintain three types of KV Cache: the quantized KV Cache, the full-precision KV Cache, and the KV Cache stored in the outlier pool.
118
+
119
+ The full-precision KV Cache includes group tokens (when the group is not full, these tokens are not quantized and are kept in full precision) and recent tokens (a full-precision sliding window for the nearest tokens). First, the Query is multiplied by all three types of Keys, and we concatenate the results to produce the attention scores. Next, we multiply these scores by their corresponding Values from each type and sum them to generate the final attention output. To enhance decoding efficiency, we utilize a CUDA fused kernel to multiply full-precision and quantized matrices. We provide a simple mathematical formulation in Appendix E.
120
+
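+ A sketch of this joint attention over the three caches (everything is dequantized up front here for clarity; the fused CUDA kernel is not modeled, and all shapes are illustrative):
+
+ ```python
+ import torch
+
+ d = 64
+ q = torch.randn(1, d)                                      # current Query
+ k_quant, v_quant = torch.randn(96, d), torch.randn(96, d)  # dequantized cache
+ k_pool, v_pool = torch.randn(3, d), torch.randn(3, d)      # outlier pool
+ k_full, v_full = torch.randn(32, d), torch.randn(32, d)    # group + recent tokens
+
+ keys = torch.cat([k_quant, k_pool, k_full])       # concatenate all Keys
+ values = torch.cat([v_quant, v_pool, v_full])     # ... and all Values
+ attn = torch.softmax(q @ keys.T / d ** 0.5, dim=-1)
+ output = attn @ values                            # [1, d] attention output
+ ```
+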
121
+ # 4 Experiments
122
+
123
+ # 4.1 Settings
124
+
125
+ Baselines and Models. KIVI (Liu et al., 2024b) is currently one of the strongest tuning-free baselines, with high compression efficiency and accuracy. We use the same setting as KIVI, so we compare OTT with KIVI and vanilla FP16 in our main experiments. Due to differences in settings (e.g., compression frequency, compression factor, and the number of full-precision tokens), we do not include other KV Cache compression methods in the main experiments.
126
+
127
+ ![](images/e714325206219166ed01e07b96678804e3a781547afa1a0c1fd9dae0f7e70740.jpg)
128
+ Figure 2: Overview of OTT. Top: Decoding stage. Multiply the Query by each type of the Keys and concatenate the results to obtain the attention scores. Multiply the attention scores by each type of the Values and sum the results to get the attention output. Bottom: Quantization stage. Process the outlier tokens before quantization.
129
+
130
+ However, additional comparisons with these methods are provided in Appendix B. We use greedy decoding in our experiments. We select two well-known model families: LLaMA (Touvron et al., 2023b; Dubey et al., 2024) and Mistral (Jiang et al., 2023). Specifically, we select LLaMA-2-7B-chat-hf, LLaMA-2-13B-chat-hf, LLaMA-3-8B-Instruct, and Mistral-7B-Instruct-v0.2. We also add experiments on LLaMA-2-70B-chat-hf in Appendix B.
131
+
132
+ Tasks. We evaluate our method and the baselines on two benchmarks according to the length of input texts. For normal context length evaluation, we use the arithmetic reasoning task Gsm8k (Cobbe et al., 2021), the mainstream language and symbolic reasoning task BBH (Suzgun et al., 2023), and the code completion task HumanEval (Chen et al., 2021). For long context length evaluation, we choose four types of tasks in LongBench (Bai et al., 2024a), including Document QA, Summarization, Few-shot Learning, and Code Completion. We provide more results for different benchmarks and baselines in Appendix G.
133
+
134
+ Details. We implement both KIVI and OTT under 2-bit quantization. For KIVI, the group size (G) and residual length (R, the size of the sliding window storing the nearest tokens) are set to 128. For OTT, we use $\mathrm{G} = 128$, $\mathrm{R} = 32$, and set outlier_num to 3. Notably, we set outlier_num to 0 for the first and second layers because we find that shallow layers have no outlier tokens (see the ablation in Section 4.4). Regarding the additional pool used to store tokens evicted from the outlier pool, we find that a very small size is sufficient to retain all the eliminated tokens; therefore, we set its size to 32. There are some differences in the processing of group and residual tokens between OTT and KIVI: KIVI compresses the Keys every $G$ steps while compressing the Values at each step, whereas we compress both the Keys and Values every $G$ steps for more consistent processing of the KV Cache. GSM8K and BBH are tested under LM Eval (Gao et al., 2024). HumanEval follows the settings from InstructEval<sup>3</sup>. The experiments are conducted on NVIDIA A100 40G GPUs unless otherwise specified.
137
+
138
+ # 4.2 Results
139
+
140
+ # 4.2.1 Normal context length evaluation
141
+
142
+ Table 1 presents the results of the normal context length evaluation across different models and methods. For Gsm8k and BBH, we report accuracy in the few-shot, few-shot CoT, and zero-shot CoT settings. For HumanEval, we report pass@1 and pass@10 in the zero-shot setting. The results illustrate that our method significantly outperforms KIVI across all settings. Notably, on BBH (3-CoT, LLaMA-3-8B-Instruct), OTT achieves a $12.93\%$ improvement over KIVI. Compared to FP16, OTT incurs minor accuracy loss in most settings. The largest accuracy drop occurs on BBH (3-CoT, Mistral-7B-Instruct), likely due to the high complexity of the task and the long generation length required. Overall, OTT can achieve significant performance improvements over KIVI.
143
+
144
+ <table><tr><td rowspan="2">Dataset</td><td colspan="3">LLaMA-2-7B-chat-hf</td><td colspan="3">LLaMA-2-13B-chat-hf</td><td colspan="3">LLaMA-3-8B-Instruct</td><td colspan="3">Mistral-7B-Instruct</td></tr><tr><td>Fp16</td><td>KIVI</td><td>Ours</td><td>Fp16</td><td>KIVI</td><td>Ours</td><td>Fp16</td><td>KIVI</td><td>Ours</td><td>Fp16</td><td>KIVI</td><td>Ours</td></tr><tr><td>Gsm8k (8)</td><td>21.99</td><td>16.30</td><td>21.38</td><td>36.54</td><td>28.51</td><td>36.09</td><td>74.91</td><td>63.15</td><td>72.55</td><td>42.91</td><td>37.38</td><td>41.17</td></tr><tr><td>+ CoT</td><td>21.30</td><td>17.51</td><td>18.20</td><td>37.00</td><td>31.77</td><td>36.92</td><td>76.72</td><td>66.79</td><td>75.06</td><td>42.99</td><td>37.45</td><td>41.39</td></tr><tr><td>+ 0-CoT</td><td>24.11</td><td>21.61</td><td>22.59</td><td>32.60</td><td>29.19</td><td>31.31</td><td>40.64</td><td>37.54</td><td>42.68</td><td>40.18</td><td>33.81</td><td>37.98</td></tr><tr><td>BBH (3)</td><td>33.34</td><td>32.48</td><td>33.36</td><td>37.61</td><td>36.20</td><td>37.43</td><td>45.77</td><td>44.19</td><td>45.60</td><td>42.10</td><td>40.29</td><td>42.02</td></tr><tr><td>+ CoT</td><td>40.21</td><td>34.00</td><td>35.17</td><td>47.38</td><td>41.02</td><td>44.37</td><td>68.18</td><td>47.38</td><td>60.31</td><td>51.33</td><td>36.42</td><td>41.93</td></tr><tr><td>+ 0-CoT</td><td>35.00</td><td>33.30</td><td>34.25</td><td>35.86</td><td>33.57</td><td>34.80</td><td>51.37</td><td>44.19</td><td>48.89</td><td>41.74</td><td>37.83</td><td>40.19</td></tr><tr><td>HE (p@1)</td><td>12.19</td><td>9.75</td><td>11.58</td><td>7.92</td><td>7.31</td><td>7.92</td><td>40.24</td><td>28.05</td><td>40.85</td><td>40.24</td><td>32.92</td><td>35.36</td></tr><tr><td>HE (p@10)</td><td>17.07</td><td>12.19</td><td>14.63</td><td>13.41</td><td>11.58</td><td>15.24</td><td>69.51</td><td>56.09</td><td>67.68</td><td>54.87</td><td>50.00</td><td>54.26</td></tr><tr><td>Average</td><td>25.65</td><td>22.14</td><td>23.90</td><td>31.04</td><td>28.14</td><td>30.51</td><td>58.42</td><td>48.16</td><td>56.70</td><td>44.55</td><td>38.26</td><td>41.79</td></tr></table>
145
+
146
+ Table 1: Results on GSM8K, BBH, and HumanEval (HE). Bold: the best results. We report accuracy for Gsm8k, BBH and Pass@k for HumanEval. Pass@k (p@k) refers to running each test question k times and calculating the average pass rate of the generated code. OTT outperforms KIVI across all tasks, achieving the best results.
147
+
148
+ # 4.2.2 Long context length tasks evaluation
149
+
150
+ The main results of the long context length evaluation are in Table 2. Our method outperforms KIVI in most settings, with only a tiny performance gap compared to the FP16 baseline. While KIVI maintains good accuracy on most tasks, it occasionally experiences significant performance drops (e.g., LLaMA-3-8B-Instruct, LCC: $56.58\% \rightarrow 44.42\%$). However, OTT does not encounter this situation, which suggests that our method achieves higher quantization accuracy than KIVI.
151
+
152
+ # 4.3 Efficiency comparison
153
+
154
+ Additionally, to validate the memory reduction and throughput improvements achieved by $OTT$, we conduct three experiments: a throughput test, a memory test, and a longest sentence test. The throughput test measures the number of tokens generated per second as the batch size varies while keeping the input and output lengths fixed. The memory test tracks memory usage as the batch size changes, also with fixed input and output lengths. The longest sentence test assesses the memory required for inference as the output length increases indefinitely (until out-of-memory), with a fixed batch size of 1 and an input length of 1. We use the LLaMA-2-7B-chat-hf model for our experiments, and set the input length to 64 and the output length to 384 for both the throughput and memory tests. Figure 3 illustrates the results.
155
+
156
+ Figure 3a shows that when the batch size is small, OTT performs slightly slower than the FP16 baseline. However, as the batch size increases, OTT demonstrates a significant speed advantage. Our method is consistently faster than KIVI at any batch size because it does not require compressing the Values at each step. Although processing outlier tokens introduces some additional computational overhead, the outlier pool is very small, and the compression frequency is low. As a result, the overhead is negligible in the overall decoding process.
159
+
160
+ From Figure 3b, it is evident that quantization significantly reduces memory usage compared to the FP16 baseline. $OTT$ requires slightly more memory than KIVI because $OTT$ tends to retain more full-precision tokens. However, as the sequence length increases, this impact diminishes. Figure 3c provides a clearer view of the compression ratio (represented by the slope of each line) for the KV Cache. When the sequence length becomes sufficiently large, the effects of full-precision tokens are negligible. Notably, the compression ratio of KIVI and $OTT$ reaches approximately $6.4\mathrm{x}$. We provide more time analysis in Appendix C and G.
161
+
162
+ # 4.4 Ablation studies
163
+
164
+ Group size and residual length. Group size and residual length are critical hyperparameters in OTT. Theoretically, a larger group size allows more values to be quantized at each step, which can reduce quantization accuracy due to the increased range of $\mathbf{X}_{\max} - \mathbf{X}_{\min}$. On the other hand, a larger group size decreases memory usage by requiring fewer quantization coefficients to be retained. Conversely, increasing the residual length requires more memory since a larger full-precision KV Cache must be retained, but it also improves accuracy.
165
+
166
+ <table><tr><td>Model</td><td></td><td>Qasper</td><td>GovReport</td><td>MultiNews</td><td>TREC</td><td>TriviaQA</td><td>SamSum</td><td>LCC</td><td>RepoBench-P</td><td>Avg</td></tr><tr><td rowspan="3">LLaMA-2-7B-chat-hf</td><td>Fp16</td><td>20.04</td><td>25.08</td><td>23.02</td><td>59.67</td><td>85.39</td><td>39.28</td><td>59.59</td><td>48.04</td><td>45.01</td></tr><tr><td>KIVI</td><td>20.43</td><td>19.97</td><td>19.82</td><td>59.67</td><td>85.16</td><td>37.70</td><td>58.73</td><td>47.24</td><td>43.59</td></tr><tr><td>Ours</td><td>19.95</td><td>21.56</td><td>20.81</td><td>59.67</td><td>85.00</td><td>39.10</td><td>59.44</td><td>48.51</td><td>44.26</td></tr><tr><td rowspan="3">LLaMA-2-13B-chat-hf</td><td>Fp16</td><td>17.42</td><td>25.65</td><td>23.35</td><td>64.00</td><td>86.52</td><td>40.49</td><td>49.80</td><td>47.13</td><td>44.30</td></tr><tr><td>KIVI</td><td>20.10</td><td>20.65</td><td>21.10</td><td>63.67</td><td>86.39</td><td>39.51</td><td>49.10</td><td>43.95</td><td>43.06</td></tr><tr><td>Ours</td><td>18.81</td><td>22.29</td><td>21.69</td><td>64.00</td><td>86.81</td><td>40.35</td><td>51.14</td><td>47.71</td><td>44.10</td></tr><tr><td rowspan="3">LLaMA-3-8B-Instruct</td><td>Fp16</td><td>37.54</td><td>31.04</td><td>25.58</td><td>69.67</td><td>89.85</td><td>40.50</td><td>56.58</td><td>51.01</td><td>50.22</td></tr><tr><td>KIVI</td><td>34.88</td><td>28.43</td><td>24.78</td><td>69.33</td><td>89.57</td><td>40.09</td><td>44.42</td><td>45.54</td><td>47.13</td></tr><tr><td>Ours</td><td>36.75</td><td>30.74</td><td>24.94</td><td>69.67</td><td>89.74</td><td>40.39</td><td>52.37</td><td>48.82</td><td>49.18</td></tr><tr><td rowspan="3">Mistral-7B-Instruct-v0.2</td><td>Fp16</td><td>24.35</td><td>33.05</td><td>25.77</td><td>67.00</td><td>86.84</td><td>40.95</td><td>57.24</td><td>49.84</td><td>48.13</td></tr><tr><td>KIVI</td><td>24.20</td><td>30.98</td><td>25.10</td><td>66.33</td><td>85.40</td><td>41.05</td><td>55.70</td><td>48.18</td><td>47.12</td></tr><tr><td>Ours</td><td>23.78</td><td>31.37</td><td>25.35</td><td>66.33</td><td>86.18</td><td>41.25</td><td>55.89</td><td>48.32</td><td>47.31</td></tr></table>
167
+
168
+ Table 2: Main results on LongBench. We report accuracy for TREC, Rouge-L for GovReport and SamSum, edit similarity (Levenshtein distance (Svyatkovskiy et al., 2020)) for LCC and RepoBench-P, and F1 score for the other tasks. Bold: the best results for each setting. $OTT$ demonstrates superior performance on average.
169
+
170
+ ![](images/737549a9bdacd8262b23714a145135c14c4f2d746a1d5b271a947e97c8767f63.jpg)
171
+ (a) System throughput
172
+
173
+ ![](images/e960a73dca696dcea4375752605bb2111c65a3ec4c668d6acc6a7a1fe2145550.jpg)
174
+ (b) Memory Usage by batch size
175
+
176
+ ![](images/f826dcd913f642ab408a2c2901727a95fe3ba4a921db236870cd5e154fd0dd23.jpg)
177
+ (c) Memory Usage by length
178
+ Figure 3: Experiments on throughput and memory: (a) Comparison of throughput (tokens/s) for different methods across different batch sizes on NVIDIA A800 80G. (b) Peak memory usage (including model weights and other components) at different batch sizes on NVIDIA A800 80G. (c) Peak memory usage (including model weights and other components) at different sequence lengths when batch size = 1 on NVIDIA A100 40G. The results show that OTT achieves a peak memory reduction of up to $6.4 \times$ and a throughput increase of $2.3 \times$ .
179
+
180
+ Thus, selecting an appropriate group size and residual length is critical to balancing memory usage and accuracy. We explore the impact of group size and residual length with group sizes of {32, 64, 128} and residual lengths of {0, 8, 16, 32, 64, 128}. Table 3 reports the results for LLaMA-3-8B-Instruct on Gsm8k 8-shot and 8-shot CoT under different configurations. When the group size is fixed, we observe a clear upward trend in accuracy as the residual length increases. However, when the residual length is fixed, the effect of group size shows no clear pattern, likely because the token distribution is relatively uniform, meaning that increasing group size has a limited impact. Since increasing the group size can improve the compression ratio (not considering the group tokens), we tend to choose a larger group size. For our main experiments, we choose a group size of 128 and a residual length of 32 to balance performance and compression ratio.
181
+
182
+ The number of outlier tokens. We explore the effect of varying outlier_num from 0 to 6, keeping all other settings unchanged. Table 4 presents the results for LLaMA-3-8B-Instruct on Gsm8k (8-shot and 8-shot CoT). The results show that retaining even a single outlier token can significantly improve performance, but further increases in outlier_num yield diminishing returns, with performance eventually plateauing. However, increasing outlier_num results in more memory overhead, leading to a decrease in compression ratio. Considering that a small outlier_num is already sufficient to significantly improve accuracy, we set outlier_num = 3 for our main experiments.
183
+
184
+ Outlier tokens in shallow layers. We observe that there are no outlier tokens in the shallow layers (see Figure 5 and Figure 6 in the Appendix; the Keys in the shallow layers do not exhibit the characteristics discussed in Section 3.1), suggesting that outlier_num should be set to 0 in these layers.
185
+
186
+ <table><tr><td>G</td><td>R</td><td>Gsm8k(8)</td><td>Gsm8k(8-CoT)</td><td>G</td><td>R</td><td>Gsm8k(8)</td><td>Gsm8k(8-CoT)</td><td>G</td><td>R</td><td>Gsm8k(8)</td><td>Gsm8k(8-CoT)</td></tr><tr><td>32</td><td>0</td><td>70.05</td><td>73.16</td><td>64</td><td>0</td><td>68.92</td><td>72.63</td><td>128</td><td>0</td><td>70.96</td><td>73.54</td></tr><tr><td>32</td><td>8</td><td>71.95</td><td>74.22</td><td>64</td><td>8</td><td>70.05</td><td>73.01</td><td>128</td><td>8</td><td>72.51</td><td>74.15</td></tr><tr><td>32</td><td>16</td><td>72.78</td><td>74.83</td><td>64</td><td>16</td><td>70.89</td><td>73.84</td><td>128</td><td>16</td><td>72.93</td><td>74.37</td></tr><tr><td>32</td><td>32</td><td>72.78</td><td>74.53</td><td>64</td><td>32</td><td>72.40</td><td>75.06</td><td>128</td><td>32</td><td>72.55</td><td>75.06</td></tr><tr><td>32</td><td>64</td><td>74.00</td><td>76.88</td><td>64</td><td>64</td><td>73.69</td><td>76.42</td><td>128</td><td>64</td><td>73.24</td><td>75.36</td></tr><tr><td>32</td><td>128</td><td>73.77</td><td>77.33</td><td>64</td><td>128</td><td>74.68</td><td>76.42</td><td>128</td><td>128</td><td>73.24</td><td>76.65</td></tr></table>
187
+
188
+ Table 3: Results of different $G$ and $R$ . The settings in the main experiment are indicated with underlines.
189
+
190
+ <table><tr><td>outlier_num</td><td>Gsm8k(8)</td><td>Gsm8k(8-CoT)</td></tr><tr><td>0</td><td>62.09</td><td>68.31</td></tr><tr><td>1</td><td>71.80</td><td>75.74</td></tr><tr><td>2</td><td>71.57</td><td>75.06</td></tr><tr><td>3</td><td>72.55</td><td>75.06</td></tr><tr><td>4</td><td>72.25</td><td>75.97</td></tr><tr><td>5</td><td>72.18</td><td>75.74</td></tr><tr><td>6</td><td>72.18</td><td>75.89</td></tr></table>
191
+
192
+ <table><tr><td>Layers</td><td>Gsm8k(8)</td><td>Gsm8k(8-CoT)</td></tr><tr><td>None</td><td>72.48</td><td>75.59</td></tr><tr><td>0</td><td>72.78</td><td>75.44</td></tr><tr><td>0,1</td><td>72.55</td><td>75.06</td></tr><tr><td>0 ~ 2</td><td>71.49</td><td>74.68</td></tr><tr><td>0 ~ 3</td><td>71.80</td><td>73.64</td></tr><tr><td>0 ~ 4</td><td>70.96</td><td>74.53</td></tr><tr><td>0 ~ 5</td><td>69.60</td><td>74.00</td></tr></table>
193
+
194
+ Table 4: Ablation study of outlier_num. The settings in the main experiment are indicated with underlines. Left: results on Gsm8k with different outlier_num. Right: results on Gsm8k with outlier_num = 0 in shallow layers.
195
+
196
+ To explore this further, we set outlier_num to 0 in consecutive shallow layers and evaluate the performance on Gsm8k (8-shot and 8-shot CoT) using LLaMA-3-8B-Instruct. For example, $0 \sim 2$ means that outlier_num is set to 0 for the first three layers of the model. Table 4 shows that the impact is minimal in the shallowest layers but becomes more significant as we move deeper into the model. Based on these results, we set outlier_num = 0 for the first two layers in all models for our main experiments.
197
+
198
+ # 5 Related work
199
+
200
+ Efficient Inference of LLMs. Large Language Models often have enormous numbers of parameters, leading to significant computational costs during inference. To address this, some researchers have employed parameter pruning techniques to eliminate redundant or less important parameters, thereby compressing LLMs (Ma et al., 2023; Xia et al., 2024; Frantar and Alistarh, 2023). Other studies have focused on quantizing model weights, reducing their size and the number of arithmetic operations required for inference. For example, GPTQ (Frantar et al., 2022) uses second-order information to quantize models to 3- or 4-bit precision while maintaining accuracy. AWQ (Lin et al., 2024) preserves critical weights based on the activation distribution, quantizing the remaining weights to lower bit precision. These methods can be combined with KV Cache compression to achieve better performance.
201
+
202
+ KV Cache Compression. KV Cache compression can significantly reduce the size of KV Cache with minimal accuracy loss. Liu et al. (2024b) find that some outlier channels in the Keys have very large magnitudes, resulting in a significant loss. Hooper et al. (2024) find that quantizing the Key cache before applying rotary positional embeddings reduces the negative impact of quantization. Xiao et al. (2024) propose StreamingLLM, which retains the initial and final tokens of the input. Similarly, Sun et al. (2024) find a "massive activations" pattern in LLMs, where a few activations have much higher values than others. These values stay stable across inputs and act as critical bias terms in the model. Zhang et al. (2023) find that only a minority of tokens influence the output.
203
+
204
+ # 6 Conclusion
205
+
206
+ In this paper, we start from the assumptions of KIVI and further explore the distribution of the Keys in the outlier channels. We observe that a few outlier tokens deviate from the assumptions of KIVI. Quantizing these tokens has detrimental effects, as it increases the quantization errors of other tokens. Building on these observations, we propose KV Cache Quantization with Outlier Tokens Tracing (OTT), which leverages the magnitude of the Keys to dynamically trace these tokens during decoding, excluding them from the quantization process while retaining their full-precision representations. Extensive experiments show that our method achieves significant improvements in accuracy, along with substantial reductions in memory usage and increases in throughput.
209
+
210
+ # Limitations
211
+
212
+ Although OTT has achieved excellent results, there are still some limitations:
213
+
214
+ - Due to the presence of the group and nearest tokens, we cannot ensure that all tokens are quantized at every moment. When the sequence length is very short and the batch size is very large, the compression ratio of $OTT$ is reduced. In extreme cases, when the sequence length is shorter than the group size, $OTT$ does not perform any compression.
215
+ - OTT still occasionally incurs a small accuracy loss on specific datasets under 2-bit quantization. This may be related to the difficulty of the datasets and the required generation length. When the generation length is very long, OTT may face an unacceptable risk of loss due to error accumulation.
216
+
217
+ # Acknowledgements
218
+
219
+ We want to thank all the anonymous reviewers for their valuable comments. This work was supported by the National Science Foundation of China (NSFC No.62206194), the Natural Science Foundation of Jiangsu Province, China (Grant No. BK20220488), the Young Elite Scientists Sponsorship Program by CAST (2023QNRC001), and the Priority Academic Program Development of Jiangsu Higher Education Institutions.
220
+
221
+ # References
222
+
223
+ Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
224
+ Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebron, and Sumit Sanghai. 2023. Gqa: Training generalized multi-query transformer models from multi-head checkpoints. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4895-4901.
225
+ Reza Yazdani Aminabadi, Samyam Rajbhandari, Ammar Ahmad Awan, Cheng Li, Du Li, Elton Zheng, Olatunji Ruwase, Shaden Smith, Minjia Zhang, Jeff Rasley, et al. 2022. Deepspeed-inference: enabling efficient inference of transformer models at unprecedented scale. In SC22: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1-15. IEEE.
228
+ Saleh Ashkboos, Amirkeivan Mohtashami, Maximilian L Croci, Bo Li, Martin Jaggi, Dan Alistarh, Torsten Hoefler, and James Hensman. 2024. Quarot: Outlier-free 4-bit inference in rotated llms. arXiv preprint arXiv:2404.00456.
229
+ Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, Yuxiao Dong, Jie Tang, and Juanzi Li. 2024a. Longbench: A bilingual, multitask benchmark for long context understanding. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pages 3119-3137. Association for Computational Linguistics.
230
+ Yushi Bai, Shangqing Tu, Jiajie Zhang, Hao Peng, Xiaozhi Wang, Xin Lv, Shulin Cao, Jiazheng Xu, Lei Hou, Yuxiao Dong, et al. 2024b. Longbench v2: Towards deeper understanding and reasoning on realistic long-context multitasks. arXiv preprint arXiv:2412.15204.
231
+ William Brandon, Mayank Mishra, Aniruddha Nrusimha, Rameswar Panda, and Jonathan Ragan Kelly. 2024. Reducing transformer key-value cache size with cross-layer attention. arXiv preprint arXiv:2405.12981.
232
+ Chi-Chih Chang, Wei-Cheng Lin, Chien-Yu Lin, ChongYan Chen, Yu-Fang Hu, Pei-Shuo Wang, Ning-Chi Huang, Luis Ceze, and Kai-Chiang Wu. 2024. Palu: Compressing kv-cache with low-rank projection. arXiv preprint arXiv:2407.21118.
233
+ Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
234
+ Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
235
+ Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher RΓ©. 2022. Flashattention: Fast and memory-efficient exact attention with io-awareness. Advances in Neural Information Processing Systems, 35:16344-16359.
236
Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. 2022. Gpt3.int8(): 8-bit matrix multiplication for transformers at scale. Advances in Neural Information Processing Systems, 35:30318-30332.
237
+
238
+ Shichen Dong, Wen Cheng, Jiayu Qin, and Wei Wang. 2024. Qaq: Quality adaptive quantization for llm kv cache. arXiv preprint arXiv:2403.04643.
239
+ Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783.
240
+ Elias Frantar and Dan Alistarh. 2023. Sparsegpt: Massive language models can be accurately pruned in one-shot. In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 10323-10337. PMLR.
241
Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. 2022. GPTQ: accurate post-training quantization for generative pre-trained transformers. CoRR, abs/2210.17323.
242
+ Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac'h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2024. A framework for few-shot language model evaluation.
243
+ Suyu Ge, Yunan Zhang, Liyuan Liu, Minjia Zhang, Jiawei Han, and Jianfeng Gao. 2024. Model tells you what to discard: Adaptive kv cache compression for llms. In The Twelfth International Conference on Learning Representations.
244
+ Yefei He, Luoming Zhang, Weijia Wu, Jing Liu, Hong Zhou, and Bohan Zhuang. 2024. Zipcache: Accurate and efficient kv cache quantization with salient token identification. arXiv preprint arXiv:2405.14256.
245
+ Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Michael W. Mahoney, Yakun Sophia Shao, Kurt Keutzer, and Amir Gholami. 2024. Kvquant: Towards 10 million context length LLM inference with KV cache quantization. CoRR, abs/2401.18079.
246
+ Cheng-Ping Hsieh, Simeng Sun, Samuel Kriman, Shantanu Acharya, Dima Rekesh, Fei Jia, Yang Zhang, and Boris Ginsburg. 2024. Ruler: What's the real context size of your long-context language models? arXiv preprint arXiv:2404.06654.
247
+ Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, and Dmitry Kalenichenko. 2018. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 2704-2713. Computer Vision Foundation / IEEE Computer Society.
248
+
249
+ Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, LΓ©lio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, TimothΓ©e Lacroix, and William El Sayed. 2023. Mistral 7b. CoRR, abs/2310.06825.
250
+ Jordan Juravsky, Bradley Brown, Ryan Ehrlich, Daniel Y Fu, Christopher RΓ©, and Azalia Mirhoseini. 2024. Hydragen: High-throughput llm inference with shared prefixes. arXiv preprint arXiv:2402.05099.
251
+ Hao Kang, Qingru Zhang, Souvik Kundu, Geonhwa Jeong, Zaoxing Liu, Tushar Krishna, and Tuo Zhao. 2024. GEAR: an efficient KV cache compression recipe for near-lossless generative inference of LLM. CoRR, abs/2403.05527.
252
+ Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the 29th Symposium on Operating Systems Principles, pages 611-626.
253
+ Yuhong Li, Yingbing Huang, Bowen Yang, Bharat Venkitesh, Acyr Locatelli, Hanchen Ye, Tianle Cai, Patrick Lewis, and Deming Chen. 2024. Snapkv: Llm knows what you are looking for before generation. arXiv preprint arXiv:2404.14469.
254
Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han. 2024. AWQ: activation-aware weight quantization for on-device LLM compression and acceleration. In Proceedings of the Seventh Annual Conference on Machine Learning and Systems, MLSys 2024, Santa Clara, CA, USA, May 13-16, 2024. mlsys.org.
255
+ Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2024a. Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics, 12:157-173.
256
+ Zirui Liu, Jiayi Yuan, Hongye Jin, Shaochen Zhong, Zhaozhuo Xu, Vladimir Braverman, Beidi Chen, and Xia Hu. 2024b. KIVI: A tuning-free asymmetric 2bit quantization for KV cache. In *Forty-first International Conference on Machine Learning*, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net.
257
+ Xinyin Ma, Gongfan Fang, and Xinchao Wang. 2023. Llm-pruner: On the structural pruning of large language models. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023.
258
+
259
+ Yongyu Mu, Yuzhang Wu, Yuchun Fan, Chenglong Wang, Hengyu Li, Qiaozhi He, Murun Yang, Tong Xiao, and Jingbo Zhu. 2024. Cross-layer attention sharing for large language models. arXiv preprint arXiv:2408.01890.
260
+ Pratyush Patel, Esha Choukse, Chaojie Zhang, Aashaka Shah, IΓ±igo Goiri, Saeed Maleki, and Ricardo Bianchini. 2024. Splitwise: Efficient generative llm inference using phase splitting. In 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA), pages 118-132. IEEE.
261
+ Noam Shazeer. 2019. Fast transformer decoding: One write-head is all you need. arXiv preprint arXiv:1911.02150.
262
Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Beidi Chen, Percy Liang, Christopher RΓ©, Ion Stoica, and Ce Zhang. 2023. Flexgen: High-throughput generative inference of large language models with a single GPU. In International Conference on Machine Learning, pages 31094-31116. PMLR.
263
+ Mingjie Sun, Xinlei Chen, J Zico Kolter, and Zhuang Liu. 2024. Massive activations in large language models. arXiv preprint arXiv:2402.17762.
264
Mirac Suzgun, Nathan Scales, Nathanael SchΓ€rli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, and Jason Wei. 2023. Challenging big-bench tasks and whether chain-of-thought can solve them. In Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 13003-13051. Association for Computational Linguistics.
265
+ Alexey Svyatkovskiy, Shao Kun Deng, Shengyu Fu, and Neel Sundaresan. 2020. Intellicode compose: Code generation using transformer. In Proceedings of the 28th ACM joint meeting on European software engineering conference and symposium on the foundations of software engineering, pages 1433-1443.
266
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
267
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
268
Nikita Trukhanov and Ilya Soloveychik. 2024. Accurate block quantization in llms with outliers. arXiv preprint arXiv:2403.20137.
269
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
270
+
271
+ Y Wang, D Ma, and D Cai. 2024. With greater text comes greater necessity: Inference-time training helps long text generation. arXiv preprint arXiv:2401.11504.
272
+ Haoyi Wu and Kewei Tu. 2024. Layer-condensed kv cache for efficient inference of large language models. arXiv preprint arXiv:2405.10637.
273
+ Mengzhou Xia, Tianyu Gao, Zhiyuan Zeng, and Danqi Chen. 2024. Sheared llama: Accelerating language model pre-training via structured pruning. In *The Twelfth International Conference on Learning Representations*, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net.
274
+ Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2024. Efficient streaming language models with attention sinks. In *The Twelfth International Conference on Learning Representations*, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net.
275
+ Zhewei Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, and Yuxiong He. 2022. Zeroquant: Efficient and affordable post-training quantization for large-scale transformers. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.
276
+ Howard Yen, Tianyu Gao, Minmin Hou, Ke Ding, Daniel Fleischer, Peter Izsak, Moshe Wasserblat, and Danqi Chen. 2024. Helmet: How to evaluate long-context language models effectively and thoroughly. arXiv preprint arXiv:2410.02694.
277
+ Hao Yu, Zelan Yang, Shen Li, Yong Li, and Jianxin Wu. 2024. Effectively compress kv heads for llm. arXiv preprint arXiv:2406.07056.
278
+ Zhenyu Zhang, Ying Sheng, Tianyi Zhou, Tianlong Chen, Lianmin Zheng, Ruisi Cai, Zhao Song, Yuandong Tian, Christopher RΓ©, Clark W. Barrett, Zhangyang Wang, and Beidi Chen. 2023. H2O: heavy-hitter oracle for efficient generative inference of large language models. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023.
279
+ Lei Zhu, Xinjiang Wang, Wayne Zhang, and Rynson WH Lau. 2024. Relayattention for efficient large language model serving with long system prompts. arXiv preprint arXiv:2402.14808.
280
+ Zayd Muhammad Kawakibi Zuhri, Muhammad Farid Adilazuarda, Ayu Purwarianti, and Alham Fikri Aji. 2024. Mlkv: Multi-layer key-value heads for memory efficient transformer decoding. arXiv preprint arXiv:2406.09297.
281
+
282
+ # A Additional Figures and Tables
283
+
284
+ <table><tr><td>Input:[INST]Which methods did Socrates employ to challenge the prevailing thoughts of his time?[INST]Output:[INST]Which methods did Socrates employ to challenge the prevailing thoughts of his time?[INST]Socrates employed several methods to challenge the prevailing thoughts of his time, including:1. Questioning:Socrates used questioning as a primary method to challenge the beliefs of others. He would ask probing questions to expose the contradictions and inconsistencies in their arguments, and to lead them to a deeper understanding of their own beliefs.2. Dialogue:Socrates believed in the power of dialogue to promote critical thinking and understanding. He would engage in conversations with others, using his questions and their responses to challenge their beliefs and to encourage them to think more deeply about their ideas.3. Irony:Socrates often used irony to challenge the beliefs of others. By using irony, he could expose the flaws in their arguments and lead them to see things from a different perspective.4. Paradox:Socrates often used paradoxical statements to challenge the prevailing beliefs of his time. By presenting seemingly contradictory statements, he could encourage his interlocutors to think more critically about their beliefs and to question their assumptions.5. Maieutics:Socrates believed in the power of midwifery, or maieutics, to bring forth new ideas and insights. He would use his questions and dialogue to help others discover new truths and to challenge their existing beliefs.6. Analogy:Socrates often used analogies to help others understand complex ideas and to challenge their beliefs. By using comparisons and contrasts, he could help his interlocutors to see things from a different perspective and to develop a deeper understanding of the issues at hand.7. Refutation:Socrates was skilled at refuting the arguments of others by exposing their flaws and inconsistencies. He would use his questions and dialogue to challenge the beliefs of others and to lead them to a deeper understanding of the issues at hand.8. Affirmation:Socrates also used affirmation to challenge the prevailing thoughts of his time. By affirming the beliefs of others and building on them, he could encourage them to think more deeply about their ideas and to develop a more nuanced understanding of the issues at hand.9. Silence:Socrates believed that sometimes the best way to challenge the prevailing thoughts of his time was through silence. By remaining silent in the face of foolish or false arguments, he could encourage others to think more critically about their beliefs and to question their assumptions.Overall,Socrates employed a wide range of methods to challenge the prevailing thoughts of his time, including questioning, dialogue, irony, paradox, maieutics, analogy, refutation, affirmation, and silence. Through these methods, he was able to encourage critical thinking and to promote a deeper understanding of the issues at hand.</td></tr></table>
285
+
286
+ Table 5: Example generated by LLaMA-2-7B-chat-hf.
287
+
288
+ ![](images/ef5d68734c3d795773a64379d2bb8a85f92126604e13055a7159af1d48ff0b58.jpg)
289
+
290
+ ![](images/4126e6145334cf690b5f606cbf149af8859b752ed3ff1e04c1be912aa3c792ac.jpg)
291
Figure 4: The Keys in an outlier channel (top) and the overall magnitude of the Keys (bottom).
292
+
293
+ ![](images/75098512b9b9f1b552058e136c88d13106c32feccfd5c21410897cc13221743c.jpg)
294
+ Layer 0 key cache
295
+
296
+ ![](images/4c814d7a642fc6290a13b378b3250553ca26786aca2132b9988aab5d941935c2.jpg)
297
+ Layer 0 value cache
298
+
299
+ ![](images/6fc2e4b154244d099aeaaf60f96a32d79a80d446df23b430f29321d101e40360.jpg)
300
+ Layer 10 key cache
301
+ Layer 31 key cache
302
+
303
+ ![](images/cdcb89fbff1adea64a9cf0799cfb4ef9960696daf8e05817983dca34c3c71d52.jpg)
304
+ Layer 10 value cache
305
+ Layer 31 value cache
306
+
307
+ ![](images/9a6271fa5c9e239397ed1375c43508d0d7c26fe56599b1775192c56f7c0f3fa6.jpg)
308
+ Layer 20 key cache
309
+
310
+ ![](images/e09a87b2459157152e83de8d08eebe32f11c44475b63be22546b4664e21ab90e.jpg)
311
+ Layer 20 value cache
312
+
313
+ ![](images/39ceff25ff6fa05edf12e52ece215d5f8f4202d3eba0be3416c1f5293fbcedaf.jpg)
314
Figure 5: Magnitude of the Keys and Values for LLaMA-2-7B-chat-hf in Head 17.
315
+
316
+ ![](images/ae340156198b1a9ed14bf54f401c3247e519720724a7c1801db946a73264b2be.jpg)
317
+
318
+ ![](images/3ce69260ce289d06c54093cc589d5348a394d7cf83b5a93e66b1e7cae798a78b.jpg)
319
+
320
+ ![](images/82bd4de88a466040e233d95325aec2dec3930bb770574dcbbe5c792f12bc9930.jpg)
321
Figure 6: The Keys in an outlier channel (top) and the sorted Keys in an outlier channel (bottom).
322
+
323
+ # B Additional Experiment Results
324
+
325
+ # B.1 Experiments on LLaMA-2-70b-chat-hf
326
+
327
To validate performance on larger models, we conduct additional experiments on LLaMA-2-70b-chat-hf, with an experimental setup identical to the main experiments. The results in Table 6 show that, built on top of KIVI, OTT still achieves accuracy gains on larger models.
328
+
329
+ <table><tr><td>70b-chat-hf</td><td>Gsm8k(8)</td><td>Gsm8k(8-cot)</td><td>Gsm8k(0-cot)</td><td>BBH(3)</td><td>HE(p@1)</td><td>Avg</td></tr><tr><td>FP16</td><td>56.03</td><td>55.04</td><td>48.98</td><td>47.09</td><td>16.46</td><td>44.72</td></tr><tr><td>KIVI</td><td>51.63</td><td>50.49</td><td>46.40</td><td>46.08</td><td>14.02</td><td>41.72</td></tr><tr><td>Ours</td><td>52.92</td><td>52.54</td><td>49.05</td><td>46.48</td><td>15.85</td><td>43.37</td></tr></table>
330
+
331
Table 6: Experiments on LLaMA-2-70b-chat-hf.
332
+
333
# B.2 Comparison with Token Eviction Methods
334
+
335
We add comparisons with token eviction methods. Since previous token eviction methods are mostly evaluated on LongBench, we also conduct experiments on LongBench using LLaMA-2-7b-chat-hf. LongBench inputs are relatively long while outputs are relatively short, which may favor token eviction methods. The baselines include StreamingLLM (Xiao et al., 2024), H2O (Zhang et al., 2023), and SnapKV (Li et al., 2024). To keep the settings simple and consistent for comparison, we only perform token eviction in the prefill stage and retain all KV caches in the decode stage. In addition, we adjust H2O following SnapKV's strategy, selecting only the queries in the sliding window for attention-score selection (which we later verified to be superior to H2O's original strategy). To keep the overall compression ratio consistent with OTT, we evict $84\%$ of the tokens in the prefill stage, which yields the compression ratio closest to OTT's. For H2O, the numbers of recent tokens and heavy hitters are equal. For StreamingLLM, we do not adjust its position ids during the decoding phase. The token eviction process is therefore as follows (a minimal sketch follows the list):
336
+
337
- In the prefill stage, use the queries in the sliding window to calculate attention scores against the other tokens, and perform token eviction according to the strategies of StreamingLLM, H2O, and SnapKV, respectively.
338
+
339
+ - During the decode phase, attention calculation is performed directly without token eviction.
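The following is a minimal single-head numpy sketch of this prefill-stage protocol (the function name and flat tensor shapes are ours, and SnapKV's pooling and H2O's accumulated-score variant are omitted):

```python
import numpy as np

def prefill_evict(K, V, Q_window, keep_ratio=0.16):
    """Score prefill tokens by the attention they receive from the
    sliding-window queries, then keep only the highest-scoring tokens.
    K, V: (seq, d); Q_window: (window, d)."""
    d = K.shape[-1]
    logits = Q_window @ K.T / np.sqrt(d)                  # (window, seq)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)                 # softmax over the keys
    votes = probs.sum(0)                                  # aggregate over window queries
    n_keep = max(1, int(keep_ratio * K.shape[0]))         # keep 16%, i.e., evict 84%
    keep = np.sort(np.argsort(votes)[-n_keep:])           # retained indices, in order
    return K[keep], V[keep]
```

During decoding, attention is then computed over the retained cache without further eviction.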
340
+
341
The results are shown in Table 7. Among the eviction methods, SnapKV achieves the best results, but even under these more favorable settings, its score is still slightly lower than OTT's.
342
+
343
+ # B.3 Comparison with ZipCache
344
+
345
We compare OTT with ZipCache (He et al., 2024); to keep the compression ratios consistent, we set $20\%$ of the tokens to 4-bit quantization and $80\%$ to 2-bit quantization, leaving ZipCache's other hyperparameters unchanged. We conduct experiments on GSM8K, BBH, and HumanEval with LLaMA-2-7b-chat-hf. The results in Table 8 show that ZipCache is weaker than KIVI and OTT.
346
+
347
+ # C Additional Time Analysis
348
+
349
We provide a more detailed analysis of the time cost. The computational overhead comes from two sources. In the compression stage, we calculate the magnitude of each token's Key, compare and select the indices, and quantize the tokens. The outlier operation is relatively costly compared to quantization alone, but compression is only performed every $G$ steps, so this cost is almost negligible over the whole decoding process. In the attention calculation stage, we need to compute the qkv for the outlier pool and overwrite the attention scores according to the outlier token indices, which incurs some cost. We plot the detailed time consumption within the attention block in Figure 7; the outlier operations account for about $18\%$. Considering the pre-processing, post-processing, and FFN computation in the entire forward step, the overall time share of the outlier operations is very small.
350
+
351
+ ![](images/55605d1ce100f5fc796ab13da7408889bf2dc4a456d6949cba77186c2b7a27e0.jpg)
352
Legend: outlier, qkv_linear, quant_matmul, other.
353
+ Figure 7: The time proportion in the attention block.
354
+
355
+ <table><tr><td>LLaMA-2-7b-chat</td><td>Qasper</td><td>GovReport</td><td>MultiNews</td><td>TREC</td><td>TriviaQA</td><td>SamSum</td><td>LCC</td><td>Repobench-P</td><td>Avg</td></tr><tr><td>FP16</td><td>20.04</td><td>25.08</td><td>23.02</td><td>59.67</td><td>85.39</td><td>39.28</td><td>59.59</td><td>48.04</td><td>45.01</td></tr><tr><td>KIVI</td><td>20.43</td><td>19.97</td><td>19.82</td><td>59.67</td><td>85.16</td><td>37.7</td><td>58.73</td><td>47.24</td><td>43.59</td></tr><tr><td>SnapKV</td><td>18.96</td><td>18.73</td><td>19.64</td><td>59</td><td>84.84</td><td>38.22</td><td>60.5</td><td>50.08</td><td>43.75</td></tr><tr><td>H2O</td><td>17.51</td><td>18.85</td><td>19.88</td><td>50</td><td>84.22</td><td>38.09</td><td>58.23</td><td>49.66</td><td>42.05</td></tr><tr><td>Streaming</td><td>15.31</td><td>19.39</td><td>18.99</td><td>51</td><td>83.11</td><td>36.8</td><td>57.57</td><td>47.33</td><td>41.19</td></tr><tr><td>Ours</td><td>19.95</td><td>21.56</td><td>20.81</td><td>59.67</td><td>85</td><td>39.1</td><td>59.44</td><td>48.51</td><td>44.26</td></tr></table>
356
+
357
+ Table 7: Experiments on three additional eviction-based methods on LLaMA-2-7b-chat-hf.
358
+
359
<table><tr><td>LLaMA-2-7b-chat</td><td>Gsm8k(8)</td><td>Gsm8k(8-cot)</td><td>Gsm8k(0-cot)</td><td>BBH(3)</td><td>BBH(3-cot)</td><td>BBH(0-cot)</td><td>HE(p@1)</td><td>HE(p@10)</td><td>Avg</td></tr><tr><td>FP16</td><td>21.99</td><td>21.3</td><td>24.11</td><td>33.34</td><td>40.21</td><td>35.00</td><td>12.19</td><td>17.07</td><td>45.01</td></tr><tr><td>KIVI</td><td>16.3</td><td>17.51</td><td>21.61</td><td>32.48</td><td>34</td><td>33.30</td><td>9.75</td><td>12.19</td><td>43.59</td></tr><tr><td>Ours</td><td>19.86</td><td>19.33</td><td>22.52</td><td>33.33</td><td>34.43</td><td>33.74</td><td>11.58</td><td>14.63</td><td>43.75</td></tr><tr><td>ZipCache</td><td>15.92</td><td>17.74</td><td>20.02</td><td>32.85</td><td>33.9</td><td>32.35</td><td>9.45</td><td>15.24</td><td>42.05</td></tr></table>
360
+
361
+ Table 8: Experiments on ZipCache.
362
+
363
+ # D Proof of Low-Magnitude Keys Disrupting Attention Weights
364
+
365
+ We formalize the claim that low-magnitude keys in outlier channels disrupt attention weights through two steps:
366
+
367
+ # D.1 Quantization Error
368
+
369
- Definitions:
370
+
371
- Let $K_{c} \in \mathbb{R}^{n}$ be an outlier channel containing $n$ Key values, and suppose there exists a subset $S \subset \{1, \ldots, n\}$ with $|S| = m \ll n$ such that:
372
+
373
+ $$
374
\begin{cases} K_{c,i} \in [\mu - \sigma, \mu + \sigma], & \forall i \notin S \ \text{(uniform distribution)} \\ K_{c,j} \in [\epsilon, \delta], & \forall j \in S, \ \text{where } 0 < \epsilon \ll \mu - \sigma \end{cases}
375
+ $$
376
+
377
- Quantization Parameters:
378
+
379
+ - Full range: $X_{\max} = \max(K_c)$ , $X_{\min} = \min(K_c)$
380
+ - Quantization step: $q = \frac{X_{\max} - X_{\min}}{2^b - 1}$
381
+
382
+ - Key Observation: The presence of low-magnitude outliers forces:
383
+
384
+ $$
385
X_{\min} \leq \epsilon \ll \mu - \sigma \quad \text{and} \quad X_{\max} \geq \mu + \sigma \;\Longrightarrow\; q \geq \frac{(\mu + \sigma) - \epsilon}{2^{b} - 1} \gg \frac{2\sigma}{2^{b} - 1}
386
+ $$
387
+
388
+ - Result: Low-magnitude outliers inflate quantization step size, leading to larger approximation errors for all tokens in the channel.
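As a concrete illustration, the following numpy sketch (ours; the channel statistics $\mu$, $\sigma$, $\epsilon$ and the 2-bit setting are hypothetical rather than values from our experiments) quantizes one channel with and without a single low-magnitude token and compares the error on the remaining tokens:

```python
import numpy as np

def quantize_dequantize(x, bits=2):
    """Uniform asymmetric quantization over the full range of x."""
    x_min, x_max = x.min(), x.max()
    q = (x_max - x_min) / (2**bits - 1)   # quantization step
    return np.round((x - x_min) / q) * q + x_min

rng = np.random.default_rng(0)
mu, sigma, eps = 5.0, 0.5, 0.05           # hypothetical channel statistics
normal = rng.uniform(mu - sigma, mu + sigma, size=1023)   # tokens matching KIVI's assumption
channel = np.concatenate([normal, [eps]])                 # plus one low-magnitude outlier token

err_without = np.abs(quantize_dequantize(normal) - normal).mean()
err_with = np.abs(quantize_dequantize(channel)[:-1] - normal).mean()
print(f"mean |error| without the outlier: {err_without:.3f}")  # step = 2*sigma/3
print(f"mean |error| with the outlier:    {err_with:.3f}")     # step = (mu+sigma-eps)/3, far larger
```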
389
+
390
+ # D.2 Error Propagation to Attention Weights
391
+
392
+ - Attention Score Calculation: For query vector $Q \in \mathbb{R}^d$ and quantized Key matrix $K'$ :
393
+
394
+ $$
395
A_{i} = \frac{Q K_{i}}{\sqrt{d}} \quad \text{and} \quad A_{i}^{\text{quant}} = A_{i} + \underbrace{\frac{Q (K_{i}^{\prime} - K_{i})}{\sqrt{d}}}_{\Delta A_{i}}
396
+ $$
397
+
398
- Error Analysis:
399
+
400
+ - For outlier channel $c$ :
401
+
402
+ $$
403
\mathbb{E}\big[\,\lvert K_{c,i}^{\prime} - K_{c,i} \rvert\,\big] \propto q
404
+ $$
405
+
406
+ $$
407
\Delta A_{i} \sim \sum_{c=1}^{d} Q_{c}\,(K_{c,i}^{\prime} - K_{c,i})
408
+ $$
409
+
410
+ - Key Observation: In outlier channels where $|Q_{c}|$ is typically large (by definition of being "outlier channels"), the quantization error gets amplified by:
411
+
412
+ $$
413
\Delta A_{i} \propto Q_{c}\,(K_{c,i}^{\prime} - K_{c,i}) \approx Q_{c} \cdot q
414
+ $$
415
+
416
+ - Result: The error in quantization steps propagates to attention weights.
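A toy numerical check (ours; all magnitudes are hypothetical) makes the amplification visible: the same single-channel Key error perturbs the attention score far more when it falls in a channel where $|Q_c|$ is large:

```python
import numpy as np

d = 128
rng = np.random.default_rng(0)
Q = rng.normal(0.0, 1.0, size=d)
Q[0] = 20.0                        # channel 0 mimics an outlier channel: |Q_c| is large

def delta_A(channel, err=0.9):
    """Delta A_i = Q (K' - K) / sqrt(d), with the Key error in one channel."""
    e = np.zeros(d)
    e[channel] = err               # err ~ half of an inflated quantization step
    return Q @ e / np.sqrt(d)

print(f"Delta A via the outlier channel : {delta_A(0):+.3f}")   # ~ 20 * 0.9 / sqrt(128)
print(f"Delta A via an ordinary channel : {delta_A(1):+.3f}")   # |Q_1| is O(1), so far smaller
```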
417
+
418
+ # E Mathematical Formulation of OTT
419
+
420
Outlier Token Identification For a token $t_i$ with Key vector $K_i$, its outlier score $S_i$ is computed as the magnitude of its Key, measured via the $L_1$-norm:
421
+
422
+ $$
423
S_{i} = \left\| K_{i} \right\|_{1}
424
+ $$
425
+
426
+ Tokens with smaller $S_{i}$ are identified as outliers since their Keys deviate significantly from the uniform distribution in outlier channels.
427
+
428
+ Competition Mechanism At each quantization step (every $G$ tokens), tokens in the current group $\mathcal{T} = \{t_1,t_2,\ldots ,t_G\}$ compete with the existing outlier pool $\mathcal{O}$ (capacity $N$ ) for inclusion. The process involves:
429
+
430
+ 1. Score Calculation: Compute $S_{i}$ for all tokens in $\mathcal{T}$ and $\mathcal{O}$ .
431
+ 2. Token Ranking: Combine $\mathcal{T}$ and $\mathcal{O}$ , then sort all tokens by $S_{i}$ in ascending order:
432
+
433
+ Sorted List $= \operatorname{argsort}(S_i)$ , $\forall t_i \in \mathcal{T} \cup \mathcal{O}$
434
+
435
+ 3. Outlier Pool Update: Select the top- $N$ tokens with the smallest $S_{i}$ to form the new outlier pool:
436
+
437
$\mathcal{O}_{\mathrm{new}} = \{\, t_j \mid j \in \text{top-}N \text{ indices of the sorted list} \,\}$
438
+
439
+ 4. Replacement Handling: Tokens evicted from $\mathcal{O}$ (if $|\mathcal{T} \cup \mathcal{O}| > N$ ) are stored in an auxiliary pool or discarded. The retained outlier tokens are excluded from quantization and stored in full precision.
440
+
441
+ Mathematical Formulation Let $\mathcal{O}^{(t)}$ denote the outlier pool at step $t$ , and $\mathcal{T}^{(t)}$ the current token group. The update rule is:
442
+
443
+ $$
444
\mathcal{O}^{(t+1)} = \underset{\text{top-}N\ \text{tokens by}\ S_{i}}{\operatorname{arg\,min}} \left( \mathcal{O}^{(t)} \cup \mathcal{T}^{(t)} \right)
445
+ $$
446
+
447
+ Outlier tokens are excluded from quantization, while non-outliers are quantized using channelwise (Keys) and token-wise (Values) methods as in KIVI.
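A minimal numpy sketch of one competition step follows (the function and variable names are ours; per-head and batch dimensions, the residual-token bookkeeping, and the actual KIVI quantization call are omitted):

```python
import numpy as np

def update_outlier_pool(pool_K, pool_idx, group_K, group_idx, N):
    """One OTT competition step: merge the new group with the outlier pool,
    rank all tokens by S_i = ||K_i||_1, keep the N smallest-magnitude tokens
    in full precision, and hand the remaining tokens over to quantization."""
    K_all = np.concatenate([pool_K, group_K], axis=0)
    idx_all = np.concatenate([pool_idx, group_idx])
    S = np.abs(K_all).sum(axis=-1)          # outlier score per token
    order = np.argsort(S)                   # ascending: most outlier-like first
    keep, evict = order[:N], order[N:]
    # Evicted tokens are quantized channel-wise (Keys) / token-wise (Values).
    return K_all[keep], idx_all[keep], K_all[evict], idx_all[evict]
```

Because each step touches only the $G$ grouped tokens plus the $N$ pooled ones, the sort is cheap, consistent with the timing analysis in Appendix C.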
448
+
449
+ # F Statistical Analysis of Preliminary Results
450
+
451
We conduct additional analysis experiments on LongBench using LLaMA-2-7b-chat-hf. We record the Keys in Layer 10, Head 16 for the first 1024 tokens during generation and analyze the distribution of the outlier channel (i.e., the channel with the largest magnitude) among these Keys. Specifically, for each example, we identify the outlier channel of the Key in Layer 10, Head 16, divide the entire range of that channel's values into ten equal parts, and record which range each token's value falls into. Finally, we average the results across the entire dataset. The results, shown in Table 9, confirm our earlier hypothesis.
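For reference, here is a numpy sketch of this per-example analysis (how the "largest-magnitude" channel is reduced over tokens is our assumption, since the exact reduction is not spelled out above):

```python
import numpy as np

def outlier_channel_histogram(K, bins=10):
    """K: (num_tokens, head_dim) Keys from one layer/head. Returns the
    percentage of tokens whose value in the outlier channel falls into
    each of ten equal-width bins, as reported in Table 9."""
    c = np.abs(K).mean(axis=0).argmax()   # assumed: channel with largest mean |value|
    vals = K[:, c]
    counts, _ = np.histogram(vals, bins=bins, range=(vals.min(), vals.max()))
    return 100.0 * counts / counts.sum()
```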
452
+
453
+ # G Additional Benchmarks and Baselines
454
+
455
To further validate the effectiveness of OTT, we add more baselines and benchmarks: the benchmarks include Needle-in-a-Haystack and RULER, and the baselines include ZipCache and GEAR.
456
+
457
RULER (Hsieh et al., 2024): This benchmark evaluates models' ability to handle complex long-context tasks. It involves tasks that require understanding and linking various pieces of information spread across the input, making it essential for assessing skills in multi-step retrieval and logical analysis.
458
+
459
Needle-in-a-Haystack (Liu et al., 2024a): This benchmark tests whether models can find an important detail (the "needle") inserted at varying depths in a long text. It checks how well models can spot useful information within a large amount of text, which is key for tasks like fact finding or answering questions by extracting parts of the text.
460
+
461
GEAR (Kang et al., 2024) compensates for compression-induced errors by combining low-rank and sparse matrices, achieving near-lossless results in 2-bit quantization when integrated with KIVI (Liu et al., 2024b).
462
+
463
+ ZipCache (He et al., 2024) achieves accurate KV cache compression by introducing a channel-separable tokenwise quantization scheme, an improved salient token identification metric based on normalized attention scores, and an efficient approximation method for fast attention implementations.
464
+
465
We adjust the hyper-parameters of the various methods to observe their performance thoroughly. For KIVI and OTT, we vary the group size $G$ and the residual length $R$. For ZipCache, we vary the Key unimportant ratio $k$ and the Value unimportant ratio $v$; unimportant tokens are stored in 2 bits and important ones in 4 bits. For GEAR, we set the low rank $r$ to 2, the group size $G$ to 128, the streaming gap to 100, and the outlier ratio to 0.01. Note that GEAR uses much more memory and is much slower than KIVI and OTT because it adds low-rank and outlier operations on top of KIVI.
466
+
467
+ # G.1 Results on Needle-in-a-Haystack
468
+
469
Figures 9 and 10 show the results of the different methods and models on Needle-in-a-Haystack. On LLaMA-3-8B-Instruct, all methods perform well under this setting. On LLaMA-2-7B-chat-hf, GEAR performs best across all methods, but at the cost of memory and throughput. OTT performs better
470
+
471
than KIVI (increasing the accuracy from $93.1\%$ to $99.2\%$), while ZipCache performs the worst.
472
+
473
+ # G.2 Results on Ruler
474
+
475
We validate the effectiveness of the various methods on RULER. The results are shown in Table 10. Consistent with the previous findings, GEAR achieves the best accuracy among all methods, at the price of higher computational cost and memory usage. OTT achieves better accuracy than KIVI at both $G = 32$ and $G = 128$, demonstrating the effectiveness of our method. ZipCache again achieves the worst results, with significant losses on both models.
476
+
477
+ # G.3 Full Results on LongBench
478
+
479
We test each method on LongBench under more settings and report the complete LongBench results in Table 11. OTT incurs almost no loss on LongBench, performing nearly as well as GEAR and clearly outperforming ZipCache and KIVI.
480
+
481
+ # G.4 Results on Helmet
482
+
483
We test the performance of each method on the HELMET benchmark (Yen et al., 2024) using LLaMA-3-8B-Instruct, with the max length set to 8192. The results in Table 12 show that our method performs better than the baselines under a fair comparison.
484
+
485
+ # G.5 Results on LongBench-v2
486
+
487
We test the performance of each method on LongBench-v2 (Bai et al., 2024b) using LLaMA-3-8B-Instruct, with the max length set to 8192. The results in Table 13 show that our method performs better than the baselines under a fair comparison.
488
+
489
+ # G.6 Results on Longer Models
490
+
491
We test each method on RULER with Llama-3-8B-ProLong-512k-Instruct at longer context lengths (64k and 128k); the results are shown in Table 14. We conclude from the table that our method also performs well in extremely long-context scenarios.
492
+
493
+ # G.7 Throughput and Memory Analysis
494
+
495
To fully characterize the memory compression and throughput of the different methods, we conduct additional experiments on memory usage and throughput. We use LLaMA-2-7B-chat-hf with an input length of 64, an output length of 384, and a batch size
496
+
497
of 128 to carry out our experiments on an NVIDIA A100 40GB GPU, recording the throughput and peak memory for each method. The results are shown in Figure 8. We omit GEAR because its codebase only supports simulated ("fake") compression, making it impossible to measure its actual memory usage and throughput; although GEAR supports some true compression, that mode does not handle outliers, which is inconsistent with the settings used in our experiments above. From the figure, we conclude that OTT has slightly higher throughput than KIVI, likely because its handling of residual tokens is simpler than KIVI's. Both OTT and KIVI also show significantly higher throughput than ZipCache. In terms of memory usage, OTT consumes slightly more memory than KIVI, primarily because it needs to store more tokens in full precision; this gap may become more pronounced as the batch size increases. ZipCache, on the other hand, uses the least GPU memory, indicating a higher compression ratio.
498
+
499
+ <table><tr><td>Dataset</td><td>0%-10%</td><td>10%-20%</td><td>20%-30%</td><td>30%-40%</td><td>40%-50%</td><td>50%-60%</td><td>60%-70%</td><td>70%-80%</td><td>80%-90%</td><td>90%-100%</td></tr><tr><td>qasper</td><td>0.20</td><td>0.35</td><td>0.18</td><td>0.02</td><td>0.62</td><td>8.53</td><td>29.88</td><td>37.29</td><td>19.44</td><td>3.49</td></tr><tr><td>triviaqa</td><td>0.20</td><td>0.36</td><td>0.16</td><td>0.01</td><td>0.31</td><td>6.42</td><td>27.36</td><td>38.78</td><td>22.23</td><td>4.17</td></tr><tr><td>trec</td><td>0.21</td><td>0.30</td><td>0.07</td><td>0.01</td><td>0.13</td><td>1.90</td><td>20.80</td><td>46.38</td><td>24.42</td><td>5.79</td></tr><tr><td>samsum</td><td>0.20</td><td>0.41</td><td>0.18</td><td>0.01</td><td>0.18</td><td>4.33</td><td>25.58</td><td>38.52</td><td>25.36</td><td>5.24</td></tr><tr><td>lcc</td><td>0.20</td><td>0.31</td><td>0.18</td><td>0.05</td><td>0.63</td><td>7.03</td><td>25.32</td><td>36.57</td><td>24.85</td><td>4.85</td></tr><tr><td>repobench-p</td><td>0.20</td><td>0.34</td><td>0.14</td><td>0.04</td><td>0.58</td><td>7.00</td><td>27.65</td><td>38.84</td><td>21.22</td><td>3.99</td></tr><tr><td>multi_news</td><td>0.20</td><td>0.37</td><td>0.18</td><td>0.01</td><td>0.45</td><td>7.56</td><td>28.99</td><td>36.67</td><td>20.78</td><td>4.79</td></tr><tr><td>multifieldqa_en</td><td>0.20</td><td>0.37</td><td>0.21</td><td>0.02</td><td>0.71</td><td>8.17</td><td>28.25</td><td>37.12</td><td>20.93</td><td>4.01</td></tr><tr><td>hotpotqa</td><td>0.20</td><td>0.38</td><td>0.16</td><td>0.01</td><td>0.39</td><td>7.62</td><td>30.12</td><td>40.05</td><td>18.62</td><td>2.45</td></tr><tr><td>2wikimqa</td><td>0.20</td><td>0.38</td><td>0.17</td><td>0.02</td><td>0.33</td><td>6.68</td><td>28.88</td><td>40.49</td><td>20.06</td><td>2.81</td></tr><tr><td>gov_report</td><td>0.20</td><td>0.35</td><td>0.29</td><td>0.02</td><td>0.48</td><td>6.71</td><td>27.46</td><td>39.02</td><td>21.63</td><td>3.84</td></tr><tr><td>passage_count</td><td>0.20</td><td>0.36</td><td>0.16</td><td>0.01</td><td>0.44</td><td>6.62</td><td>28.51</td><td>38.87</td><td>20.91</td><td>3.92</td></tr><tr><td>passage_retrieval_en</td><td>0.20</td><td>0.38</td><td>0.17</td><td>0.01</td><td>0.60</td><td>7.17</td><td>28.39</td><td>38.01</td><td>21.11</td><td>3.97</td></tr></table>
500
+
501
+ Table 9: Statistical analysis of outlier distribution.
502
+
503
+ <table><tr><td rowspan="2">Method</td><td colspan="3">Single NIAH</td><td colspan="3">Multi-key NIAH</td><td rowspan="2">MQNIAH</td><td rowspan="2">MV-NIAH</td><td rowspan="2">CWE</td><td rowspan="2">FWE</td><td rowspan="2">VT</td><td rowspan="2">Avg.</td></tr><tr><td>S-NIAH-1</td><td>S-NIAH-2</td><td>S-NIAH-3</td><td>MK-NIAH-1</td><td>MK-NIAH-2</td><td>MK-NIAH-3</td></tr><tr><td colspan="13">LLaMA-2-7B-chat-hf</td></tr><tr><td>FP16</td><td>100.00</td><td>92.80</td><td>90.00</td><td>84.00</td><td>67.40</td><td>52.80</td><td>76.85</td><td>80.45</td><td>83.72</td><td>80.67</td><td>92.12</td><td>81.89</td></tr><tr><td>GEAR</td><td>47.40</td><td>42.20</td><td>42.40</td><td>43.60</td><td>36.00</td><td>19.60</td><td>41.05</td><td>43.10</td><td>53.86</td><td>75.13</td><td>49.60</td><td>44.90</td></tr><tr><td>ZipCache(k=0.7,v=0.8)</td><td>39.60</td><td>27.80</td><td>10.40</td><td>23.60</td><td>5.00</td><td>0.00</td><td>24.50</td><td>18.95</td><td>52.02</td><td>68.73</td><td>44.48</td><td>28.64</td></tr><tr><td>ZipCache(k=0.6,v=0.6)</td><td>42.60</td><td>33.40</td><td>17.40</td><td>31.40</td><td>6.40</td><td>0.00</td><td>30.95</td><td>28.15</td><td>52.98</td><td>70.40</td><td>44.04</td><td>32.52</td></tr><tr><td>ZipCache(k=0.5,v=0.5)</td><td>43.00</td><td>35.80</td><td>21.60</td><td>36.20</td><td>9.20</td><td>0.20</td><td>34.40</td><td>31.70</td><td>54.68</td><td>71.33</td><td>44.36</td><td>34.77</td></tr><tr><td>KIVI(G=128,R=128)</td><td>47.00</td><td>39.40</td><td>26.80</td><td>41.40</td><td>13.20</td><td>0.20</td><td>37.05</td><td>38.40</td><td>52.76</td><td>71.27</td><td>45.84</td><td>37.57</td></tr><tr><td>KIVI(G=32,R=128)</td><td>46.20</td><td>42.20</td><td>36.60</td><td>41.40</td><td>25.20</td><td>3.80</td><td>41.55</td><td>42.20</td><td>58.94</td><td>73.27</td><td>47.40</td><td>41.71</td></tr><tr><td>Ours(G=128,R=32)</td><td>46.80</td><td>39.60</td><td>28.60</td><td>41.40</td><td>18.40</td><td>0.20</td><td>38.75</td><td>39.75</td><td>52.04</td><td>72.27</td><td>46.60</td><td>38.58</td></tr><tr><td>Ours(G=32,R=128)</td><td>46.40</td><td>42.00</td><td>38.40</td><td>43.00</td><td>27.20</td><td>5.40</td><td>41.35</td><td>41.95</td><td>55.16</td><td>74.13</td><td>48.76</td><td>42.16</td></tr><tr><td 
colspan="13">LLaMA-3-8B-Instruct</td></tr><tr><td>FP16</td><td>100.00</td><td>98.20</td><td>97.00</td><td>99.20</td><td>91.60</td><td>95.80</td><td>99.75</td><td>97.45</td><td>97.82</td><td>82.27</td><td>98.28</td><td>96.12</td></tr><tr><td>GEAR</td><td>100.00</td><td>98.20</td><td>97.00</td><td>99.20</td><td>91.80</td><td>87.80</td><td>99.70</td><td>96.80</td><td>97.82</td><td>81.53</td><td>98.24</td><td>95.28</td></tr><tr><td>ZipCache(k=0.7,v=0.8)</td><td>99.80</td><td>97.00</td><td>77.80</td><td>91.60</td><td>69.20</td><td>12.40</td><td>94.45</td><td>94.40</td><td>96.82</td><td>81.73</td><td>95.64</td><td>82.80</td></tr><tr><td>ZipCache(k=0.6,v=0.6)</td><td>99.60</td><td>97.80</td><td>82.00</td><td>93.80</td><td>73.00</td><td>19.80</td><td>97.45</td><td>96.05</td><td>96.88</td><td>82.47</td><td>96.16</td><td>85.00</td></tr><tr><td>ZipCache(k=0.5,v=0.5)</td><td>99.80</td><td>97.40</td><td>83.40</td><td>96.80</td><td>78.20</td><td>31.20</td><td>98.65</td><td>97.55</td><td>96.98</td><td>82.53</td><td>96.60</td><td>87.19</td></tr><tr><td>KIVI(G=128,R=128)</td><td>96.00</td><td>97.80</td><td>88.60</td><td>96.00</td><td>75.20</td><td>11.00</td><td>95.30</td><td>94.90</td><td>86.96</td><td>80.27</td><td>88.64</td><td>82.79</td></tr><tr><td>KIVI(G=32,R=128)</td><td>100.00</td><td>97.40</td><td>95.40</td><td>97.80</td><td>87.60</td><td>62.40</td><td>98.80</td><td>98.60</td><td>95.26</td><td>82.20</td><td>96.20</td><td>91.97</td></tr><tr><td>Ours(G=128,R=32)</td><td>99.80</td><td>97.20</td><td>92.20</td><td>96.20</td><td>79.40</td><td>27.60</td><td>96.80</td><td>95.45</td><td>95.38</td><td>80.47</td><td>94.76</td><td>86.84</td></tr><tr><td>Ours(G=32,R=128)</td><td>100.00</td><td>97.60</td><td>96.20</td><td>97.80</td><td>86.00</td><td>69.60</td><td>98.90</td><td>97.85</td><td>97.18</td><td>82.20</td><td>95.80</td><td>92.64</td></tr></table>
504
+
505
+ Table 10: Performance comparison of different methods on RULER for LLaMA-2-7B-chat-hf and LLaMA-3-8B-Instruct. Bold text represents the best performance.
506
+
507
+ <table><tr><td rowspan="3">Method</td><td colspan="2">Single-Document QA</td><td colspan="2">Multi-Document QA</td><td colspan="2">Summarization</td><td colspan="3">Few-shot Learning</td><td colspan="2">Synthetic</td><td colspan="2">Code</td><td rowspan="3">Avg.</td></tr><tr><td>MF-en</td><td>QuesPer</td><td>HotspotQA</td><td>2WikiMQA</td><td>GovReport</td><td>MultiNews</td><td>TREC</td><td>TriviaQA</td><td>SAMsum</td><td>pCount</td><td>PRe</td><td>Lcc</td><td>RB-P</td></tr><tr><td>18409</td><td>3619</td><td>9151</td><td>4887</td><td>8734</td><td>2113</td><td>5177</td><td>8209</td><td>6258</td><td>11141</td><td>9289</td><td>1235</td><td>4206</td></tr><tr><td colspan="15">LLaMA-2-7B-chat-hf</td></tr><tr><td>FP16</td><td>20.04</td><td>85.39</td><td>59.67</td><td>39.28</td><td>59.59</td><td>48.04</td><td>23.02</td><td>34.34</td><td>35.19</td><td>31.94</td><td>25.08</td><td>6.33</td><td>15.33</td><td>37.17</td></tr><tr><td>GEAR</td><td>19.36</td><td>85.58</td><td>59.67</td><td>38.34</td><td>58.03</td><td>46.44</td><td>20.32</td><td>34.94</td><td>34.24</td><td>32.06</td><td>21.41</td><td>6.33</td><td>14.67</td><td>36.26</td></tr><tr><td>ZipCache(k=0.7,v=0.8)</td><td>19.20</td><td>84.03</td><td>59.33</td><td>38.14</td><td>53.26</td><td>45.81</td><td>18.48</td><td>28.05</td><td>33.23</td><td>30.12</td><td>17.31</td><td>6.48</td><td>12.67</td><td>34.32</td></tr><tr><td>ZipCache(k=0.6,v=0.6)</td><td>19.28</td><td>84.31</td><td>59.00</td><td>39.41</td><td>56.54</td><td>46.28</td><td>19.47</td><td>29.98</td><td>33.83</td><td>31.13</td><td>18.58</td><td>7.44</td><td>14.67</td><td>35.38</td></tr><tr><td>ZipCache(k=0.5,v=0.5)</td><td>19.54</td><td>85.02</td><td>59.33</td><td>39.50</td><td>55.75</td><td>45.41</td><td>20.40</td><td>29.80</td><td>33.94</td><td>31.09</td><td>19.80</td><td>6.33</td><td>14.67</td><td>35.43</td></tr><tr><td>KIVI(G=128,R=128)</td><td>20.43</td><td>85.16</td><td>59.67</td><td>37.70</td><td>58.73</td><td>47.24</td><td>19.82</td><td>31.03</td><td>34.65</td><td>30.38</td><td>19.97</td><td>6.33</td><td>11.67</td><td>35.60</td></tr><tr><td>KIVI(G=32,R=128)</td><td>19.92</td><td>84.92</td><td>59.67</td><td>38.08</td><td>58.04</td><td>47.05</td><td>22.03</td><td>31.75</td><td>34.77</td><td>31.89</td><td>22.64</td><td>7.00</td><td>14.00</td><td>36.29</td></tr><tr><td>Ours(G=128,R=32)</td><td>19.95</td><td>85.00</td><td>59.67</td><td>39.10</td><td>59.44</td><td>48.51</td><td>20.81</td><td>34.12</td><td>34.98</td><td>31.87</td><td>21.56</td><td>6.33</td><td>11.00</td><td>36.33</td></tr><tr><td>Ours(G=32,R=128)</td><td>21.34</td><td>84.94</td><td>59.67</td><td>39.04</td><td>59.48</td><td>47.64</td><td>22.09</td><td>32.38</td><td>34.64</td><td>32.43</td><td>24.43</td><td>7.33</td><td>14.33</td><td>36.90</td></tr><tr><td 
colspan="15">LLaMA-3-8B-Instruct</td></tr><tr><td>FP16</td><td>37.54</td><td>89.85</td><td>69.67</td><td>40.50</td><td>56.58</td><td>51.01</td><td>25.58</td><td>40.56</td><td>49.81</td><td>34.93</td><td>31.04</td><td>12.94</td><td>83.67</td><td>47.98</td></tr><tr><td>GEAR</td><td>37.55</td><td>89.85</td><td>69.67</td><td>40.02</td><td>56.42</td><td>50.47</td><td>25.52</td><td>40.11</td><td>49.80</td><td>34.93</td><td>30.93</td><td>12.61</td><td>83.33</td><td>47.79</td></tr><tr><td>ZipCache(k=0.7,v=0.8)</td><td>36.91</td><td>89.98</td><td>69.33</td><td>40.71</td><td>42.30</td><td>44.84</td><td>23.98</td><td>41.88</td><td>49.01</td><td>33.87</td><td>28.13</td><td>14.35</td><td>82.00</td><td>45.95</td></tr><tr><td>ZipCache(k=0.6,v=0.6)</td><td>36.61</td><td>90.14</td><td>69.33</td><td>40.58</td><td>45.82</td><td>44.96</td><td>24.70</td><td>39.32</td><td>50.05</td><td>33.46</td><td>29.31</td><td>13.60</td><td>84.33</td><td>46.32</td></tr><tr><td>ZipCache(k=0.5,v=0.5)</td><td>35.86</td><td>89.86</td><td>69.00</td><td>39.97</td><td>46.41</td><td>44.64</td><td>25.36</td><td>40.36</td><td>49.75</td><td>33.69</td><td>30.12</td><td>12.53</td><td>83.33</td><td>46.22</td></tr><tr><td>KIVI(G=128,R=128)</td><td>34.88</td><td>89.57</td><td>69.33</td><td>40.09</td><td>44.42</td><td>45.54</td><td>24.78</td><td>39.19</td><td>49.65</td><td>34.19</td><td>28.43</td><td>11.51</td><td>82.00</td><td>45.66</td></tr><tr><td>KIVI(G=32,R=128)</td><td>37.27</td><td>89.88</td><td>70.00</td><td>40.46</td><td>47.29</td><td>45.20</td><td>25.34</td><td>41.29</td><td>49.87</td><td>35.05</td><td>30.38</td><td>12.67</td><td>83.67</td><td>46.80</td></tr><tr><td>Ours(G=128,R=32)</td><td>36.75</td><td>89.74</td><td>69.67</td><td>40.39</td><td>52.37</td><td>48.82</td><td>24.94</td><td>41.57</td><td>50.37</td><td>35.32</td><td>30.74</td><td>11.44</td><td>83.33</td><td>47.34</td></tr><tr><td>Ours(G=32,R=128)</td><td>36.71</td><td>90.36</td><td>70.00</td><td>40.67</td><td>52.65</td><td>47.76</td><td>25.34</td><td>39.60</td><td>50.36</td><td>35.13</td><td>31.16</td><td>12.33</td><td>84.33</td><td>47.42</td></tr></table>
508
+
509
+ Table 11: Performance comparison of OTT with GEAR, ZipCache, KIVI and FP16 on LongBench for LLaMA-3-8B-Instruct and LLaMA-2-7B-chat-hf. OTT generally achieves improvements over previous KV cache compression methods across various LLMs. Bold text represents the best performance.
510
+
511
+ <table><tr><td rowspan="2">Method</td><td colspan="3">ruler_recall</td><td colspan="2">substringExact_MATCH</td><td>NDCG@10</td><td>str_em</td><td colspan="2">citation_rec</td><td>qampari_rec_top5</td><td colspan="2">citation_prec</td><td rowspan="2">Avg.</td></tr><tr><td>nih_mk-2</td><td>nih_mk-3</td><td>nih mv</td><td>json_kv</td><td>homoqqa</td><td>rerank_psg</td><td>alce-asqi</td><td>alce_qampari</td><td>alce_asqi</td><td>alce_qampari</td><td>alce.asqi</td><td>alce_qampari</td></tr><tr><td>full</td><td>100.00</td><td>100.00</td><td>99.75</td><td>98.00</td><td>61.00</td><td>55.27</td><td>41.37</td><td>8.47</td><td>6.03</td><td>17.80</td><td>12.88</td><td>9.46</td><td>50.84</td></tr><tr><td>Gear</td><td>100.00</td><td>92.00</td><td>99.75</td><td>92.00</td><td>60.67</td><td>54.01</td><td>42.07</td><td>7.20</td><td>6.19</td><td>18.00</td><td>10.85</td><td>8.22</td><td>49.25</td></tr><tr><td>KIVI(G=32,R=128)</td><td>99.00</td><td>85.00</td><td>99.50</td><td>77.00</td><td>59.67</td><td>41.52</td><td>41.00</td><td>5.83</td><td>7.79</td><td>14.80</td><td>7.72</td><td>7.81</td><td>45.55</td></tr><tr><td>ZipCache(k=0.5,v=0.5)</td><td>98.00</td><td>51.00</td><td>99.25</td><td>45.00</td><td>59.00</td><td>44.88</td><td>35.47</td><td>5.49</td><td>7.28</td><td>6.80</td><td>7.44</td><td>4.95</td><td>38.71</td></tr><tr><td>Ours(G=32,R=128)</td><td>99.00</td><td>87.00</td><td>99.75</td><td>80.00</td><td>61.00</td><td>42.92</td><td>44.50</td><td>9.52</td><td>6.51</td><td>17.40</td><td>12.01</td><td>8.33</td><td>47.33</td></tr></table>
512
+
513
+ Table 12: Performance comparison of OTT with GEAR, ZipCache, KIVI and FP16 on HELMET for LLaMA-3-8B-Instruct. OTT generally achieves improvements over previous KV cache compression methods across various LLMs. Bold text represents the best performance.
514
+
515
+ <table><tr><td>Method</td><td>Easy</td><td>Hard</td><td>Short</td><td>Medium</td><td>Long</td><td>Overall</td></tr><tr><td>FP16</td><td>27.6</td><td>27.0</td><td>25.6</td><td>25.6</td><td>33.3</td><td>27.2</td></tr><tr><td>GEAR</td><td>27.6</td><td>27.0</td><td>25.6</td><td>25.6</td><td>33.3</td><td>27.2</td></tr><tr><td>KIVI (G=32, R=128)</td><td>26.6</td><td>22.2</td><td>23.9</td><td>21.9</td><td>27.8</td><td>23.9</td></tr><tr><td>ZipCache (k=0.5, v=0.5)</td><td>26.6</td><td>25.4</td><td>26.1</td><td>23.3</td><td>30.6</td><td>25.8</td></tr><tr><td>Ours (G=32, R=128)</td><td>25.5</td><td>26.7</td><td>26.7</td><td>22.8</td><td>32.4</td><td>26.2</td></tr></table>
516
+
517
+ Table 13: Performance comparison of OTT with GEAR, ZipCache, KIVI and FP16 on LongBench_v2 for LLaMA-3-8B-Instruct. OTT generally achieves improvements over previous KV cache compression methods across various LLMs. Bold text represents the best performance.
518
+
519
+ <table><tr><td rowspan="2">Method</td><td colspan="3">Single NIAH</td><td colspan="3">Multi-key NIAH</td><td rowspan="2">MQ-NIAH</td><td rowspan="2">MV-NIAH</td><td rowspan="2">CWE</td><td rowspan="2">FWE</td><td rowspan="2">VT</td><td rowspan="2">Avg.</td></tr><tr><td>S-NAH-1</td><td>S-NAH-2</td><td>S-NAH-3</td><td>MK-NIAH-1</td><td>MK-NIAH-2</td><td>MK-NIAH-3</td></tr><tr><td colspan="13">max_length=64k</td></tr><tr><td>FP16</td><td>100</td><td>99.4</td><td>100</td><td>99</td><td>99.8</td><td>99.4</td><td>98.85</td><td>95.8</td><td>8.42</td><td>76</td><td>97.96</td><td>88.60</td></tr><tr><td>GEAR</td><td>100</td><td>99.4</td><td>100</td><td>99</td><td>99.6</td><td>90.2</td><td>94.4</td><td>96</td><td>8.62</td><td>76.53</td><td>97.96</td><td>87.43</td></tr><tr><td>KIVI(G=32,R=128)</td><td>99.8</td><td>99.2</td><td>97.8</td><td>95.6</td><td>97.2</td><td>78.4</td><td>94.5</td><td>93</td><td>8.26</td><td>78.73</td><td>92.24</td><td>84.98</td></tr><tr><td>Ours(G=32,R=128)</td><td>99.8</td><td>99.2</td><td>99.2</td><td>96</td><td>97.6</td><td>82</td><td>97.5</td><td>94.5</td><td>9.28</td><td>78.53</td><td>93.92</td><td>86.14</td></tr><tr><td colspan="13">max_length=128k</td></tr><tr><td>FP16</td><td>100</td><td>94</td><td>100</td><td>93</td><td>100</td><td>100</td><td>98.75</td><td>96.25</td><td>0.3</td><td>82.67</td><td>96.6</td><td>87.42</td></tr><tr><td>GEAR</td><td>100</td><td>94</td><td>98</td><td>93</td><td>100</td><td>87</td><td>98.5</td><td>96.25</td><td>0.3</td><td>81.67</td><td>96.6</td><td>85.94</td></tr><tr><td>KIVI(G=32,R=128)</td><td>100</td><td>92</td><td>99</td><td>91</td><td>95</td><td>69</td><td>93</td><td>87.5</td><td>0.4</td><td>80.33</td><td>89.4</td><td>81.51</td></tr><tr><td>Ours(G=32,R=128)</td><td>100</td><td>94</td><td>99</td><td>92</td><td>95</td><td>73</td><td>96.25</td><td>90.25</td><td>0.4</td><td>82.67</td><td>88.8</td><td>82.85</td></tr></table>
520
+
521
+ Table 14: Performance comparison of Llama-3-8B-ProLong-512k-Instruct with longer context lengths (64k, 128k).
522
+
523
+ ![](images/fa702a2724b3d88fa5ebe2b7ffba5e999844b5141c13465eddbb102fc475f0de.jpg)
524
Figure 8: Throughput (left) and memory usage (right) of different methods with LLaMA-2-7B-chat-hf, input length = 64, output length = 384, batch size = 128, on an NVIDIA A100 40GB GPU.
525
+
526
+ ![](images/208365234979c227a3bde5a06b907ef59172b7c80d730e49d1528e960f1e554a.jpg)
527
+
528
+ ![](images/bc46f629874e12d70214b2fd731aa620b85b19b8027158bc0b8f6d7807cc863f.jpg)
529
+
530
+ ![](images/ad85a51f11318aeca8d85aacd77239c348a734d3c005851197ffd994af4668b0.jpg)
531
+
532
+ ![](images/4d1eb179770ab16dab5bd249bc020bd5184e4a45feb73d7369a56a8bf388443d.jpg)
533
+
534
+ ![](images/329fc83095284e0ebfeb8a93864631f3a68eec90d74c395663a0af812845cda7.jpg)
535
+
536
+ ![](images/02e4ba2cfe169504b5d9faa29c49cf84b6fe1556441ad82f7e68be6c0e46444d.jpg)
537
Figure 9: Results of Needle-in-a-Haystack on LLaMA-2-7B-chat-hf with 4k context size. The vertical axis represents the depth percentage, and the horizontal axis represents the token length.
538
+
539
+ ![](images/426a5112d57491b1601d45ce8f7cd0329e0808fcaa70287e6595c125e83cd56f.jpg)
540
+
541
+ ![](images/89678e1b5f834456ed2a33bd89085899e3733fe05f9eb9b2fd539a06d62bb8d7.jpg)
542
+
543
+ ![](images/ff76689dad136f9df708681d296a37258021c98b1c17b76acb49d103f6c7b5cb.jpg)
544
+
545
+ ![](images/544eb273a38dd5695bad60740911046bcd339c9e96f687c9aa5bdae38db7b9da.jpg)
546
+
547
+ ![](images/c4237371535a0c9f9b3cf0045f1763ef5162f87930055467c80272617991cb56.jpg)
548
Figure 10: Results of Needle-in-a-Haystack on LLaMA-3-8B-Instruct with 8k context size. The vertical axis represents the depth percentage, and the horizontal axis represents the token length.
accuratekvcachequantizationwithoutliertokenstracing/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:eb7b31048e4705528233726825c86b5a35d6bd54bf7f81ba9110a58de518ac33
3
+ size 2124124
accuratekvcachequantizationwithoutliertokenstracing/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6c3c008eb42e328470ec1ce42c7d0a47906b4ec55357081e90b91c05a86df8e9
3
+ size 639704
acecoderacingcoderrlviaautomatedtestcasesynthesis/f89dd568-8d6d-4f88-b16c-6cd5a42987ae_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:71903c81c354c33a1a3fde3a3298e821d1d65e4edf8a0d3c39312c3a0d7bed79
3
+ size 114304
acecoderacingcoderrlviaautomatedtestcasesynthesis/f89dd568-8d6d-4f88-b16c-6cd5a42987ae_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dd9115e4d59f2c889cfee8306dc649cd6aeabb2fd598a5ff9088fc343d000d8c
3
+ size 137289
acecoderacingcoderrlviaautomatedtestcasesynthesis/f89dd568-8d6d-4f88-b16c-6cd5a42987ae_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1202c7048c512fc2eb02eadb46341e5d6cca50d6e42646e6c4404d00a061ed7e
3
+ size 956364
acecoderacingcoderrlviaautomatedtestcasesynthesis/full.md ADDED
@@ -0,0 +1,396 @@
1
+ # ACECODER: AcingCoder RL via Automated Test-Case Synthesis
2
+
3
+ Huaye Zeng†*, Dongfu Jiang†*, Haozhe Wang‑, Ping NieΒ§, Xiaotong ChenΒΆ, Wenhu Chen†
4
+
5
+ †University of Waterloo, ‑HKUST, Β§Independent Researcher, ΒΆNetmind.AI
6
+
7
+ {w33zeng, dongfu.jiang, wenhuchen}@uwaterloo.ca
8
+
9
+ https://tiger-ai-lab.github.io/AceCoder
10
+
11
+ # Abstract
12
+
13
Most progress in recent coder models has been driven by supervised fine-tuning (SFT), while the potential of reinforcement learning (RL) remains largely unexplored, primarily due to the lack of reliable reward data/models in the code domain. In this paper, we address this challenge by leveraging automated large-scale test-case synthesis to enhance code model training. Specifically, we design a pipeline that generates extensive (question, test-cases) pairs from existing code data. Using these test cases, we construct preference pairs based on pass rates over sampled programs to train reward models with the Bradley-Terry loss. This yields an average 10-point improvement for Llama-3.1-8B-Ins and a 5-point improvement for Qwen2.5-Coder-7B-Ins through best-of-32 sampling, making the 7B model on par with the 236B DeepSeek-V2.5. Furthermore, we conduct reinforcement learning with both reward models and test-case pass rewards, leading to consistent improvements across HumanEval, MBPP, BigCodeBench, and LiveCodeBench (V4). Notably, we follow the R1-style training to start from Qwen2.5-Coder-base directly and show that our RL training can improve the model on HumanEval-plus by over $25\%$ and on MBPP-plus by $6\%$ in merely 80 optimization steps. We believe our results highlight the huge potential of reinforcement learning in coder models.
14
+
15
+ # 1 Introduction
16
+
17
+ In recent years, code generation models have advanced significantly with compute scaling (Kaplan et al., 2020) and training data quality improvement (Huang et al., 2024; Lozhkov et al., 2024; Guo et al., 2024b). The state-of-the-art coder models, including Code-Llama (Rozière et al., 2023), Qwen2.5-Coder (Hui et al., 2024a), DeepSeek-Coder (Guo et al., 2024a) and so on, have shown unprecedented performance across a
18
+
19
+ *Equal Contribution
20
+
21
+ ![](images/ffa295f5ce1dba0465b9109742c52826e0057f515873be989c445060f2dee302.jpg)
22
+ Figure 1: Overall Workflow of our model: we start from the seed code dataset to create well-formatted questions and corresponding test cases. Then we adopt strong models to filter the noisy test cases. Finally, we adopt these test cases to harvest positive and negative program pairs for reward model training and RL.
23
+
24
+ These tasks include program synthesis (Chen et al., 2021), program repair (Zheng et al., 2024a), optimization (Shypula et al., 2023), test generation (Steenhoek et al., 2023), SQL (Yu et al., 2018), and issue fixing (Jimenez et al., 2024). These models are all pre-trained and further supervised fine-tuned (SFT) on large-scale coding data from web resources like Common Crawl or GitHub.
25
+
26
+ Though strong performance has been achieved through SFT (Luo et al., 2023; Wei et al., 2024), very few models have explored the potential of reinforcement learning (RL) (Ouyang et al., 2022a), which has proven effective in other domains such as mathematical reasoning (Shao et al., 2024; Guo et al., 2025). We argue that this absence of RL-based training in coder models is primarily due to two key challenges:
29
+
30
+ (1) Lack of reliable reward signals for code generation. In tasks such as mathematical problem-solving, rewards can be easily derived from rule-based string matches with reference answers (Guo et al., 2025) or large-scale human annotations (Ouyang et al., 2022b). In contrast, evaluating code quality typically requires executing test cases to measure the pass rate, making reward signal design more complex. This also explains why existing reward models like Skywork (Liu et al., 2024a) can hardly generalize to the coding domain (see subsection 3.4).
31
+ (2) Scarcity of large-scale coding datasets with reliable test cases. Most existing coding datasets like APPS (Hendrycks et al., 2021; Chen et al., 2021) heavily rely on costly human expert annotations for test cases, which limits their scalability for training purposes. The largest dataset is TACO (Li et al., 2023), containing 25K examples collected from popular coding competition websites, sources that have already been extensively utilized during the pre-training phase.
32
+
33
+ Therefore, we curate ACECODE-87K, on which we train our reward models ACECODE-RM-7B and ACECODE-RM-32B. Comprehensive best-of-N sampling experiments show that ACECODE-RM can significantly boost existing LLMs' performance on coding benchmarks. For example, ACECODE-RM-7B improves Llama-3.1-8B-Instruct by an average of 8.4 points across 4 coding benchmarks, i.e., HumanEval (Liu et al., 2023), MBPP (Liu et al., 2023), BigCodeBench (Zhuo et al., 2024), and LiveCodeBench (Jain et al., 2024). Even for the stronger coder model Qwen2.5-Coder-7B-Instruct, our "7B+7B" combination still gains an average of 2.6 points. ACECODE-RM-32B is even more powerful, pushing these two numbers to 10.7 and 4.7 respectively, showcasing the effectiveness of ACECODE-RM.
34
+
35
+ Additionally, both ACECODE-RM-7B and ACECODE-RM-32B demonstrate strong performance on RM-Bench (Liu et al., 2024c), outperforming existing state-of-the-art reward models across various categories. Specifically, ACECODE-RM-32B achieves the highest average score of 76.1, leading in key categories such as Coding, Chat, Normal, and Hard tasks. Even the smaller ACECODE-RM-7B model shows notable competitiveness, excelling particularly in the Coding and Hard categories.
38
+
39
+ Furthermore, we adopt ACECODE-RM-7B and the test-case pass rate separately as rewards for reinforcement learning with Reinforce++ (Hu, 2025) over coder models. Experiments show 2.1 and 0.7 points of average improvement when starting from Qwen2.5-7B-Ins and Qwen2.5-Coder-7B-Ins respectively, making the latter even more powerful than GPT-4-Turbo on benchmarks like MBPP. Inspired by the recent DeepSeek-R1 (Guo et al., 2025), we also perform RL training directly from the Qwen2.5-Coder-7B-Base model and observe a surprising $25\%$ improvement on HumanEval-Plus and $6\%$ improvement on MBPP-Plus (Liu et al., 2023) with merely 80 optimization steps (48 H100 GPU hours). These improvements also generalize to other, more difficult benchmarks.
40
+
41
+ To our knowledge, this is the first work to perform reward model training and reinforcement learning for code generation using a fully automated pipeline that synthesizes large-scale reliable tests. We believe our ACECODE-87K will unlock the potential of RL training for code generation models and help the community to further push the boundaries of LLM's coding abilities.
42
+
43
+ # 2 Methodology
44
+
45
+ In this section, we introduce the overall methodology of ACECODER. We begin with formulations of the problems we investigate, including reward model training and reinforcement learning for LLMs. We then elaborate on how we synthesize the test cases and construct ACECODE-87K.
46
+
47
+ # 2.1 Problem Formulation
48
+
49
+ Reward Model Training Let $\mathbf{x}$ denote the coding question and $\mathbf{y} = \{y_1, \dots, y_t\}$ denote the program solution, where $y_i$ represents the $i$ -th token of the program solution and $(\mathbf{x}, \mathbf{y}) \in D$ . Assuming $\theta$ represents the parameters of the model, $n$ responses $(\mathbf{y}^1, \dots, \mathbf{y}^n)$ are sampled from the model $\pi_\theta$ given the input $\mathbf{x}$ . Let $(s_1, \dots, s_n)$ be the target rewards, i.e., the test-case pass rates in our scenario. We then define the Bradley-Terry loss (Bradley and Terry, 1952) for every pair of responses $\mathbf{y}^i$ and $\mathbf{y}^j$ with scores $s_i$ and $s_j$ when training a reward model $R_\phi$ as follows:
50
+
51
+ $$
52
+ \mathcal{L}_{\phi}(\mathbf{x}, s_i, s_j) = \mathbb{1}\left[ s_i > s_j \right] \log \sigma\left( R_{\phi}(\mathbf{x}, \mathbf{y}^i) - R_{\phi}(\mathbf{x}, \mathbf{y}^j) \right)
53
+ $$
54
+
55
+ where $\mathbb{1}[\cdot] = 1$ if the expression inside the brackets is true and 0 otherwise. The final loss function for reward training is:
56
+
57
+ $$
58
+ \mathcal{L}(\phi) = -\frac{1}{n(n-1)} \sum_{i=1}^{n} \sum_{j=1}^{n} \mathcal{L}_{\phi}(\mathbf{x}, s_i, s_j) \tag{1}
59
+ $$
60
+
61
+ That means the reward model is trained to assign higher values to preferred responses and lower values to non-preferred ones, maximizing the difference between these ratings.
62
+
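+ To make Equation 1 concrete, the following is a minimal PyTorch sketch (our own illustration, not the authors' released code) of the pairwise Bradley-Terry objective, where `rewards` are the scalar outputs of $R_{\phi}$ over $n$ sampled programs:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def bradley_terry_loss(rewards: torch.Tensor, pass_rates: torch.Tensor) -> torch.Tensor:
+     """Pairwise Bradley-Terry loss over n sampled programs for one question.
+
+     rewards:    (n,) scalar outputs R_phi(x, y^i) of the reward model.
+     pass_rates: (n,) test-case pass rates s_i that define the target ordering.
+     """
+     n = rewards.size(0)
+     # Pairwise reward differences R(x, y^i) - R(x, y^j) and the mask 1[s_i > s_j].
+     diff = rewards.unsqueeze(1) - rewards.unsqueeze(0)               # (n, n)
+     prefer = (pass_rates.unsqueeze(1) > pass_rates.unsqueeze(0)).float()
+     # -log sigmoid(diff) on preferred pairs, averaged over the n(n-1) ordered pairs.
+     return -(prefer * F.logsigmoid(diff)).sum() / (n * (n - 1))
+ ```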
63
+ Best-of-N Sampling Once we have the trained reward model $R_{\phi}$, a quick way to test its performance is Best-of-N sampling, commonly used as a test-time scaling approach. We simply select the highest-scored response according to the predicted value of $R_{\phi}$, i.e., $\mathbf{y}^{*} = \arg \max_{\mathbf{y}^{i}\in \{\mathbf{y}^{1},\dots,\mathbf{y}^{N}\}} R_{\phi}(\mathbf{x},\mathbf{y}^{i})$.
64
+
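+ In code, Best-of-N reduces to a single argmax over reward-model scores; a small sketch, with a hypothetical `reward_model` callable that returns a float:
+
+ ```python
+ def best_of_n(question: str, candidates: list[str], reward_model) -> str:
+     """Return the candidate program with the highest reward-model score."""
+     scores = [reward_model(question, y) for y in candidates]
+     return candidates[max(range(len(candidates)), key=scores.__getitem__)]
+ ```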
65
+ Reinforcement Learning With a well-trained reward model $R_{\phi}$, we can finally conduct reinforcement learning on the original policy model $\pi_{\theta}$. Proximal Policy Optimization (PPO) is an actor-critic RL algorithm widely used for RL training of LLMs. Let $\pi_{\theta_{old}}$ be the old (rollout) policy and $\pi_{\theta}$ be the current policy model that is iteratively updated during RL training. We denote $r_t(\theta)$ as the probability ratio of the current policy over the old policy on the $t$ -th generated token:
66
+
67
+ $$
68
+ r_t(\theta) = \frac{\pi_{\theta}\left(y_t \mid \mathbf{x}, \mathbf{y}_{<t}\right)}{\pi_{\theta_{old}}\left(y_t \mid \mathbf{x}, \mathbf{y}_{<t}\right)} \tag{2}
69
+ $$
70
+
71
+ Then the PPO algorithm optimizes the LLM with the following surrogate objective:
72
+
73
+ $$
74
+ \mathcal{L}_{PPO}(\theta) = -\frac{1}{|\mathbf{y}|} \sum_{t=1}^{|\mathbf{y}|} \min\left[ r_t(\theta) A_t, \operatorname{clip}\left(r_t(\theta), 1-\epsilon, 1+\epsilon\right) A_t \right]
75
+ $$
76
+
77
+ where $\mathbf{y} \sim \pi_{\theta_{old}}(\cdot \mid \mathbf{x})$ , and $A_{t}$ is the advantage computed through Generalized Advantage Estimation (GAE) (Schulman et al., 2015) using the rewards generated by $R_{\phi}$ and the learned value function $V_{\psi}$ . The PPO training objective forces the policy model $\pi_{\theta}$ to increase the probability of generating tokens with higher $A_{t}$ and decrease the probability of generating tokens with lower $A_{t}$ , until the clipped bounds $1 + \epsilon$ and $1 - \epsilon$ are reached, respectively.
78
+
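+ A minimal token-level sketch of this clipped surrogate (our illustration; `logp_old`, `logp_new`, and `advantages` are assumed to be precomputed per generated token):
+
+ ```python
+ import torch
+
+ def ppo_clip_loss(logp_new: torch.Tensor, logp_old: torch.Tensor,
+                   advantages: torch.Tensor, eps: float = 0.2) -> torch.Tensor:
+     """PPO surrogate loss for one sampled program of |y| tokens."""
+     ratio = torch.exp(logp_new - logp_old)                  # r_t(theta), Equation 2
+     unclipped = ratio * advantages
+     clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
+     # Maximizing the surrogate = minimizing its negation, averaged over tokens.
+     return -torch.min(unclipped, clipped).mean()
+ ```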
79
+ However, PPO usually requires training an additional value model $V_{\psi}$, which makes training inefficient. Recently, other works like Reinforce++ (Hu, 2025) eliminate the need for a value model and instead compute the advantage using only the rewards generated by $R_{\phi}$ and the KL-divergence of the tokens after the $t$ -th token. This makes the RL process more efficient and has also proven to be more stable.
82
+
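+ The following is a simplified sketch of such a value-model-free advantage, directly implementing the sentence above; the KL coefficient `beta` and the final whitening step are our assumptions rather than details taken from the Reinforce++ paper:
+
+ ```python
+ import torch
+
+ def valuefree_advantages(final_reward: float, kl: torch.Tensor,
+                          beta: float = 0.01) -> torch.Tensor:
+     """kl: (T,) per-token KL(pi_theta || pi_ref) estimates for one response."""
+     # Reverse cumulative sum: at position t, the KL mass from token t onward.
+     tail_kl = torch.flip(torch.cumsum(torch.flip(kl, [0]), 0), [0])
+     adv = final_reward - beta * tail_kl      # sequence reward minus KL penalty
+     return (adv - adv.mean()) / (adv.std() + 1e-8)           # whiten for stability
+ ```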
83
+ # 2.2 ACECODE-87K
84
+
85
+ To be able to train a reward model specifically designed for code generation, the first step is to synthesize reliable test cases for each coding problem and use them as training signals. In this section, we explain the whole procedure of constructing ACECODE-87K step by step. We show the overall statistics in Table 1.
86
+
87
+ Test Case Synthesis from Seed Dataset We start from existing coding datasets that provide a question $\mathbf{x}$ and corresponding program $\mathbf{y}$ . Specifically, we combine Magicoder-Evol-Instruct $^1$ , Magicoder-OSS-Instruct-75K $^2$ , and StackPyFunction $^3$ as our seed dataset. We only keep the questions written in Python that contain either a function or a class, resulting in a total of 124K entries. We find that these datasets contain highly noisy questions that cannot be easily evaluated using test cases. Therefore, we feed every question-solution pair $(\mathbf{x}, \mathbf{y})$ into GPT-4o-mini (Hurst et al., 2024) to propose a refined LeetCode-style question $\mathbf{x}_{\mathbf{r}}$ with highly structured instructions. Meanwhile, we also prompt it to 'imagine' around 20 test cases $(t_1, \dots, t_m)$ for each refined coding question $\mathbf{x}_{\mathbf{r}}$ based on its understanding of the expected behavior of the desired program (see the prompt template in subsection A.3). Note that we do not use the program solution $\mathbf{y}$ from the existing datasets at all in our final curated ACECODE-87K; these datasets are used purely as seeds to help the LLM formulate well-structured coding problems.
88
+
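+ A hedged sketch of this synthesis step is shown below; `call_llm` stands in for any GPT-4o-mini API wrapper, and the prompt is a paraphrase, not the paper's exact template from subsection A.3:
+
+ ```python
+ def synthesize(question: str, solution: str, call_llm) -> tuple[str, list[str]]:
+     """Turn a noisy seed (question, solution) pair into a refined question
+     plus ~20 'imagined' assert-style test cases."""
+     prompt = (
+         "Rewrite the following coding problem as a clean, LeetCode-style "
+         "question, then propose about 20 assert-based test cases for it.\n\n"
+         f"Question: {question}\nReference solution: {solution}"
+     )
+     refined_question, test_cases = call_llm(prompt)  # e.g., parsed from JSON output
+     return refined_question, test_cases
+ ```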
89
+ Test Case Filtering These 'imagined' test cases generated by the LLM contain severe hallucinations. To filter them out, we employ a stronger coder model, Qwen2.5-Coder-32B-Instruct (Hui et al., 2024a), as a proxy for quality control. Specifically, for each $\mathbf{x_r}$ we prompt it to generate a program $\mathbf{y}'$ and then run these programs over the test cases to approximate their quality. We remove all test cases $t_i$ that the generated solution program $\mathbf{y}'$ fails to pass. Furthermore, we remove questions with fewer than 5 tests after filtering, as these questions might be overly ambiguous. With the above filtering, we construct ACECODE-87K with 87.1K distinct coding questions and 1.38M cleaned test cases, represented as $(\mathbf{x_r}, (t_1, \dots, t_{m_c}))$ , where $m_c$ is the number of test cases after filtering.
92
+
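+ Conceptually, the filter executes each candidate test against the proxy solution and keeps only the survivors; a simplified sketch (the real pipeline would sandbox execution rather than call `exec` directly):
+
+ ```python
+ def filter_tests(tests: list[str], proxy_program: str,
+                  min_tests: int = 5) -> list[str] | None:
+     """Keep tests that the proxy solution passes; drop over-ambiguous questions."""
+     kept = []
+     for t in tests:
+         env: dict = {}
+         try:
+             exec(proxy_program, env)   # define the proxy solution y'
+             exec(t, env)               # run one assert-style test case
+             kept.append(t)
+         except Exception:
+             pass                       # hallucinated or failing test: discard
+     return kept if len(kept) >= min_tests else None  # None drops the question
+ ```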
93
+ <table><tr><td>Subset</td><td>Evol</td><td>OSS</td><td>Stack Python</td><td>Overall</td></tr><tr><td colspan="5">Before Filtering</td></tr><tr><td># Examples</td><td>36,256</td><td>37,750</td><td>50,000</td><td>124,006</td></tr><tr><td># Avg Test Cases</td><td>19.33</td><td>17.21</td><td>18.27</td><td>18.26</td></tr><tr><td colspan="5">After Filtering</td></tr><tr><td># Examples</td><td>26,920</td><td>25,862</td><td>34,367</td><td>87,149</td></tr><tr><td># Avg Test Cases</td><td>15.14</td><td>16.33</td><td>16.08</td><td>15.87</td></tr><tr><td># Pairs</td><td>89,089</td><td>91,636</td><td>126,784</td><td>307,509</td></tr></table>
94
+
95
+ Table 1: Dataset statistics of ACECODE-87K before and after test-case filtering.
96
+
97
+ Human Case Study Finally, as a last verification step, we conducted a human case study by randomly selecting 40 questions, each with 5 randomly sampled test cases, resulting in 200 manually annotated test cases. Encouragingly, only 3 of these 200 test cases were found invalid upon review, indicating that our filtering methodology effectively mitigates hallucinations and preserves test-case quality. Moreover, Table 2 presents various pass-rate statistics for ACECODE-87K, illustrating the overall difficulty and performance trends.
98
+
99
+ <table><tr><td>Pass @ 1</td><td>34.90%</td></tr><tr><td>Pass @ 4</td><td>38.50%</td></tr><tr><td>Pass @ 8</td><td>39.20%</td></tr><tr><td>Pass @ 16</td><td>39.60%</td></tr><tr><td>Avg Test Case Pass %</td><td>70.30%</td></tr><tr><td>% Questions Where All 16 Inferences Pass All Test Cases</td><td>22.80%</td></tr></table>
101
+
102
+ Table 2: Pass-rate statistics of ACECODE-87K after test-case filtering, measured with Qwen2.5-Coder-7B-Instruct (16 responses per query, temperature = 1.0).
103
+
104
+ Preference Pairs Construction We use the Bradley-Terry loss defined in Equation 1 to train the reward model, so we need to construct (question, [positive program, negative program]) tuples from ACECODE-87K. Specifically, we sample programs $(\mathbf{y}^1,\dots,\mathbf{y}^n)$ from existing models (e.g., Llama-3.1 (Grattafori et al., 2024)) w.r.t. $\mathbf{x}_r$ and use the test-case pass rate to distinguish positive from negative programs. Since the pass rate $s_i$ of a sampled program $\mathbf{y}^i$ can be any number in [0, 1], a minor difference in pass rate may not indicate that one program is more accurate than another. Therefore, instead of using $\mathbb{1}[s_i > s_j]$ to select the preference pairs, we modify the selection rule to:
107
+
108
+ $$
109
+ \mathbb{1}\left[ s_i > s_j + 0.4,\; s_i > 0.8,\; s_j > 0 \right] \tag{3}
110
+ $$
111
+
112
+ This ensures the preferred program has a pass rate above 0.8, indicating it is likely a correct program. We also find that a pass rate of 0 is often caused by small syntax errors or Python packaging errors during evaluation; we exclude such programs from preference pairs so that our constructed dataset reflects only preferences based on valid pass rates. We further ensure that the sampled programs all come from the backbone of $R_{\phi}$, so the reward model is trained in an on-policy way. After that, we train our reward model $R_{\phi}$ by fully fine-tuning an instruction-tuned coder model: we extract the last token's final hidden representation and pass it through a linear head that produces a single scalar output, optimized via the loss function defined in Equation 1.
113
+
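+ The selection rule in Equation 3 translates directly into code; a minimal sketch over one question's sampled programs and their pass rates:
+
+ ```python
+ def make_preference_pairs(programs: list[str],
+                           pass_rates: list[float]) -> list[tuple[str, str]]:
+     """(chosen, rejected) pairs: s_i > s_j + 0.4, s_i > 0.8, s_j > 0 (Equation 3)."""
+     pairs = []
+     for i, s_i in enumerate(pass_rates):
+         for j, s_j in enumerate(pass_rates):
+             if s_i > s_j + 0.4 and s_i > 0.8 and s_j > 0:
+                 pairs.append((programs[i], programs[j]))
+     return pairs
+ ```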
114
+ # 3 Experiments
115
+
116
+ # 3.1 Reward Model Training Setup
117
+
118
+ We mainly use Qwen2.5-Coder-7B-Instruct $^{5}$ as the backbone of the reward model and sample 16 responses from it for each question in ACECODE-87K. Following the rule defined in Equation 3, around 300K preference pairs were created from 46,618 distinct questions (37.34% of the total) that have at least one pair satisfying the condition; the remaining questions are not used.
119
+
120
+ Our reward model is trained using LlamaFactory (Zheng et al., 2024b). We apply full finetuning with DeepSpeed stage 3. We train for 1 epoch using a cosine learning rate schedule, starting at 1e-5 with a warmup ratio of 0.1 to gradually increase the learning rate in the initial training phase. The training batch size is set to 128. We enable BF16 precision to reduce memory overhead without compromising model fidelity. The training takes 24 hours on 8 x A100 GPUs.
121
+
122
+ <table><tr><td rowspan="2">Method</td><td rowspan="2"># N</td><td colspan="2">HumanEval</td><td colspan="2">MBPP</td><td colspan="2">BigCodeBench-C</td><td colspan="2">BigCodeBench-I</td><td rowspan="2">LiveCodeBench V4</td><td rowspan="2">Average</td></tr><tr><td>-</td><td>Plus</td><td>-</td><td>Plus</td><td>Full</td><td>Hard</td><td>Full</td><td>Hard</td></tr><tr><td>GPT-4o (0806)</td><td>1</td><td>92.7</td><td>87.2</td><td>87.6</td><td>72.2</td><td>58.9</td><td>36.5</td><td>48.0</td><td>25.0</td><td>43.6</td><td>61.3</td></tr><tr><td>DeepSeek-V2.5</td><td>1</td><td>90.2</td><td>83.5</td><td>87.6</td><td>74.1</td><td>53.2</td><td>29.1</td><td>48.9</td><td>27.0</td><td>41.8</td><td>59.5</td></tr><tr><td>DeepSeek-V3</td><td>1</td><td>91.5</td><td>86.6</td><td>87.6</td><td>73.0</td><td>62.2</td><td>39.9</td><td>50.0</td><td>27.7</td><td>63.5</td><td>64.6</td></tr><tr><td>Qwen2.5-Coder-32B</td><td>1</td><td>92.1</td><td>87.2</td><td>90.5</td><td>77.0</td><td>58.0</td><td>33.8</td><td>49.0</td><td>27.7</td><td>48.3</td><td>62.6</td></tr></table>
123
+
124
+ Inference Model = Mistral-7B-Instruct-V0.3
125
+
126
+ <table><tr><td>Greedy</td><td>1</td><td>36.6</td><td>31.1</td><td>49.5</td><td>41.3</td><td>25.9</td><td>6.1</td><td>20.1</td><td>5.4</td><td>7.3</td><td>24.8</td></tr><tr><td>Average</td><td>64</td><td>37.1</td><td>30.8</td><td>45.1</td><td>38.0</td><td>21.7</td><td>4.2</td><td>17.6</td><td>3.0</td><td>4.0</td><td>22.4</td></tr><tr><td>Oracle</td><td>64</td><td>87.2</td><td>78.0</td><td>83.9</td><td>73.5</td><td>68.4</td><td>37.8</td><td>58.5</td><td>31.1</td><td>24.3</td><td>60.3</td></tr><tr><td rowspan="3">AceCodeRM-7B</td><td>16</td><td>65.9</td><td>56.7</td><td>59.3</td><td>52.4</td><td>35.1</td><td>10.1</td><td>29.3</td><td>8.8</td><td>11.9</td><td>36.6</td></tr><tr><td>32</td><td>68.3</td><td>58.5</td><td>59.8</td><td>51.6</td><td>37.4</td><td>8.8</td><td>30.7</td><td>10.8</td><td>14.6</td><td>37.8</td></tr><tr><td>64</td><td>71.3</td><td>61.6</td><td>59.8</td><td>51.6</td><td>39.4</td><td>6.8</td><td>31.8</td><td>9.5</td><td>15.4</td><td>38.6</td></tr><tr><td>Ξ” (RM-greedy)</td><td>-</td><td>+34.8</td><td>+30.5</td><td>+10.3</td><td>+11.1</td><td>+13.5</td><td>+4.1</td><td>+11.7</td><td>+5.4</td><td>+8.1</td><td>+13.8</td></tr><tr><td rowspan="3">AceCodeRM-32B</td><td>16</td><td>68.3</td><td>61.0</td><td>58.7</td><td>49.5</td><td>37.7</td><td>11.5</td><td>30.9</td><td>10.1</td><td>12.9</td><td>37.8</td></tr><tr><td>32</td><td>72.6</td><td>65.9</td><td>61.6</td><td>51.6</td><td>40.5</td><td>9.5</td><td>33.9</td><td>13.5</td><td>16.1</td><td>40.6</td></tr><tr><td>64</td><td>75.0</td><td>64.6</td><td>60.6</td><td>50.0</td><td>42.7</td><td>15.5</td><td>35.6</td><td>13.5</td><td>17.4</td><td>41.7</td></tr><tr><td>Ξ” (RM-greedy)</td><td>-</td><td>+38.4</td><td>+34.8</td><td>+12.2</td><td>+11.1</td><td>+16.8</td><td>+9.5</td><td>+15.5</td><td>+8.1</td><td>+10.1</td><td>+16.9</td></tr></table>
127
+
128
+ Inference Model = Llama-3.1-8B-Instruct
129
+
130
+ <table><tr><td>Greedy</td><td>1</td><td>68.9</td><td>62.2</td><td>67.2</td><td>54.8</td><td>38.5</td><td>12.8</td><td>31.8</td><td>13.5</td><td>18.0</td><td>40.9</td></tr><tr><td>Average</td><td>64</td><td>61.7</td><td>54.9</td><td>64.5</td><td>54.5</td><td>32.8</td><td>10.1</td><td>26.6</td><td>9.0</td><td>13.8</td><td>36.4</td></tr><tr><td>Oracle</td><td>64</td><td>93.9</td><td>90.2</td><td>92.1</td><td>82.3</td><td>80.0</td><td>54.7</td><td>67.9</td><td>48.6</td><td>40.8</td><td>72.3</td></tr><tr><td rowspan="3">AceCodeRM-7B</td><td>16</td><td>77.4</td><td>70.7</td><td>76.5</td><td>64.3</td><td>45.8</td><td>20.3</td><td>36.4</td><td>12.2</td><td>26.1</td><td>47.7</td></tr><tr><td>32</td><td>79.9</td><td>72.6</td><td>76.2</td><td>62.4</td><td>47.6</td><td>23.0</td><td>37.3</td><td>13.5</td><td>27.3</td><td>48.9</td></tr><tr><td>64</td><td>81.7</td><td>74.4</td><td>74.6</td><td>61.9</td><td>47.8</td><td>23.6</td><td>38.1</td><td>13.5</td><td>27.6</td><td>49.3</td></tr><tr><td>Ξ” (RM-greedy)</td><td>-</td><td>+12.8</td><td>+12.2</td><td>+9.3</td><td>+9.5</td><td>+9.3</td><td>+10.8</td><td>+6.2</td><td>0.0</td><td>+9.6</td><td>+8.4</td></tr><tr><td rowspan="3">AceCodeRM-32B</td><td>16</td><td>82.3</td><td>74.4</td><td>72.8</td><td>60.6</td><td>49.8</td><td>20.3</td><td>38.4</td><td>13.5</td><td>27.5</td><td>48.8</td></tr><tr><td>32</td><td>81.7</td><td>76.2</td><td>72.8</td><td>60.6</td><td>50.4</td><td>22.3</td><td>39.1</td><td>13.5</td><td>30.3</td><td>49.6</td></tr><tr><td>64</td><td>85.4</td><td>79.3</td><td>72.0</td><td>59.0</td><td>48.5</td><td>19.6</td><td>40.0</td><td>13.5</td><td>31.0</td><td>49.8</td></tr><tr><td>Ξ” (RM-greedy)</td><td>-</td><td>+16.5</td><td>+17.1</td><td>+9.3</td><td>+9.5</td><td>+11.8</td><td>+10.8</td><td>+8.2</td><td>+0.0</td><td>+13.0</td><td>+9.0</td></tr></table>
131
+
132
+ Inference Model = Qwen2.5-Coder-7B-Instruct
133
+
134
+ <table><tr><td>Greedy</td><td>1</td><td>91.5</td><td>86.0</td><td>82.8</td><td>71.4</td><td>49.5</td><td>19.6</td><td>41.8</td><td>20.3</td><td>34.2</td><td>55.2</td></tr><tr><td>Average</td><td>64</td><td>86.0</td><td>80.1</td><td>77.9</td><td>65.6</td><td>45.3</td><td>18.6</td><td>37.3</td><td>16.2</td><td>31.8</td><td>51.0</td></tr><tr><td>Oracle</td><td>64</td><td>98.2</td><td>95.7</td><td>97.4</td><td>90.7</td><td>80.9</td><td>62.8</td><td>73.5</td><td>53.4</td><td>57.4</td><td>78.9</td></tr><tr><td rowspan="3">AceCodeRM-7B</td><td>16</td><td>90.2</td><td>82.9</td><td>88.6</td><td>74.9</td><td>53.8</td><td>20.9</td><td>45.0</td><td>21.6</td><td>40.1</td><td>57.6</td></tr><tr><td>32</td><td>90.9</td><td>86.0</td><td>87.8</td><td>74.1</td><td>53.4</td><td>25.0</td><td>43.9</td><td>19.6</td><td>39.8</td><td>57.8</td></tr><tr><td>64</td><td>90.9</td><td>85.4</td><td>87.6</td><td>73.8</td><td>52.9</td><td>24.3</td><td>43.5</td><td>21.6</td><td>40.1</td><td>57.8</td></tr><tr><td>Ξ” (RM-greedy)</td><td>-</td><td>-0.6</td><td>0.0</td><td>+5.8</td><td>+3.4</td><td>+4.3</td><td>+5.4</td><td>+3.2</td><td>+1.4</td><td>+5.9</td><td>+2.6</td></tr><tr><td rowspan="3">AceCodeRM-32B</td><td>16</td><td>90.2</td><td>86.6</td><td>88.4</td><td>74.9</td><td>53.9</td><td>25.0</td><td>45.4</td><td>19.6</td><td>44.0</td><td>58.7</td></tr><tr><td>32</td><td>90.2</td><td>86.6</td><td>88.4</td><td>75.4</td><td>55.4</td><td>29.7</td><td>45.6</td><td>21.6</td><td>43.5</td><td>59.6</td></tr><tr><td>64</td><td>89.6</td><td>86.0</td><td>87.8</td><td>75.1</td><td>55.0</td><td>26.4</td><td>46.1</td><td>22.3</td><td>44.5</td><td>59.2</td></tr><tr><td>Ξ” (RM-greedy)</td><td>-</td><td>-0.6</td><td>+0.6</td><td>+5.8</td><td>+4.0</td><td>+6.0</td><td>+10.1</td><td>+4.3</td><td>+2.0</td><td>+10.3</td><td>+4.4</td></tr></table>
135
+
136
+ Table 3: ACECODE-RM's best-of-n results on several benchmarks. Specifically, -C means completion split and -I means instruct split of BigCodeBench. The $\Delta$ might be off by 0.1 due to rounding.
137
+
138
+ # 3.2 Reinforcement Learning Setup
139
+
140
+ We perform RL training from three policy models: Qwen2.5-7B-Instruct $^{6}$ , Qwen2.5-Coder-7B-Base $^{7}$ , and Qwen2.5-Coder-7B-Instruct. Two types of reward are used: the trained reward model ACECODE-RM-7B and a rule-based reward, i.e., the pass rate over the test cases in ACECODE-87K. During training, we binarize the pass rate: the reward is 1.0 when all test cases pass and 0 otherwise. This is similar to the verifiable rewards used in Tulu3 (Lambert et al., 2024a) and DeepSeek-R1 (Guo et al., 2025). Also similar to DeepSeek-R1, we experiment with RL directly from the base model, because SFT may trap the model's search space in a local minimum. Since coding, like math, is a highly verifiable task, we include Qwen2.5-Coder-7B-Base in our experiments.
143
+
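+ The binary rule-based reward is therefore a one-liner on top of a sandboxed test runner; a simplified sketch (again using `exec` only for illustration):
+
+ ```python
+ def run_test(program: str, test: str) -> bool:
+     env: dict = {}
+     try:
+         exec(program, env)   # define the generated solution
+         exec(test, env)      # run one assert-style test case
+         return True
+     except Exception:
+         return False
+
+ def rule_reward(program: str, tests: list[str]) -> float:
+     """Binary verifiable reward: 1.0 only if every test case passes."""
+     return 1.0 if all(run_test(program, t) for t in tests) else 0.0
+ ```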
144
+ We train different policy model backbones with different rewards, resulting in 6 RL models in total. All RL tuning is based on OpenRLHF (Hu et al., 2024). We adopt the Reinforce++ (Hu, 2025) algorithm instead of PPO to improve training efficiency by dropping the value model; it has also proven more stable than PPO and GRPO. We train on a subsampled hard version of ACECODE-87K, keeping the $25\%$ of questions with lower average pass rates and higher variance.
145
+
146
+ <table><tr><td rowspan="2">Model</td><td colspan="2">HumanEval</td><td colspan="2">MBPP</td><td colspan="2">BigCodeBench (C)</td><td colspan="2">BigCodeBench (I)</td><td rowspan="2">LiveCodeBench V4</td><td rowspan="2">Average</td></tr><tr><td>-</td><td>Plus</td><td>-</td><td>Plus</td><td>Full</td><td>Hard</td><td>Full</td><td>Hard</td></tr><tr><td>RLEF-8B</td><td>-</td><td>67.5</td><td>-</td><td>57.0</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>RLEF-70B</td><td>-</td><td>78.5</td><td>-</td><td>67.6</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>PPOCoder-7B</td><td>78.7</td><td>-</td><td>67.0</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>StepCoder-7B</td><td>76.8</td><td>-</td><td>63.8</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>CodeGemma-7B</td><td>60.5</td><td>-</td><td>55.2</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>DSTC-33B</td><td>79.9</td><td>72.0</td><td>82.5</td><td>70.4</td><td>51.6</td><td>22.3</td><td>41.0</td><td>18.2</td><td>-</td><td>-</td></tr></table>
147
+
148
+ <table><tr><td colspan="11">Baseline = Qwen2.5-7B-Instruct</td></tr><tr><td>Baseline</td><td>81.7</td><td>73.2</td><td>79.4</td><td>67.7</td><td>45.6</td><td>16.9</td><td>38.4</td><td>14.2</td><td>29.0</td><td>49.6</td></tr><tr><td>AceCoderRM</td><td>83.5</td><td>77.4</td><td>83.1</td><td>71.2</td><td>46.8</td><td>16.9</td><td>39.0</td><td>14.9</td><td>30.3</td><td>51.5</td></tr><tr><td>AceCoderRule</td><td>84.1</td><td>77.4</td><td>80.2</td><td>68.3</td><td>46.8</td><td>15.5</td><td>40.2</td><td>15.5</td><td>30.1</td><td>50.9</td></tr><tr><td>Ξ” (RL-baseline)</td><td>+2.4</td><td>+4.3</td><td>+3.7</td><td>+3.4</td><td>+1.2</td><td>0.0</td><td>+1.8</td><td>+1.4</td><td>+1.3</td><td>+2.0</td></tr></table>
149
+
150
+ <table><tr><td colspan="11">Baseline = Qwen2.5-Coder-7B-Base</td></tr><tr><td>Baseline</td><td>61.6</td><td>53.0</td><td>76.9</td><td>62.9</td><td>45.8</td><td>16.2</td><td>40.2</td><td>14.2</td><td>28.7</td><td>44.4</td></tr><tr><td>AceCoderRM</td><td>83.5</td><td>75.6</td><td>80.2</td><td>67.2</td><td>41.9</td><td>14.9</td><td>36.8</td><td>16.2</td><td>25.7</td><td>49.1</td></tr><tr><td>AceCoderRule</td><td>84.1</td><td>78.0</td><td>82.3</td><td>69.3</td><td>48.6</td><td>18.2</td><td>43.2</td><td>18.2</td><td>28.5</td><td>52.3</td></tr><tr><td>Ξ” (RL-baseline)</td><td>+22.5</td><td>+25.0</td><td>+5.4</td><td>+6.4</td><td>+2.8</td><td>+2.0</td><td>+3.1</td><td>+4.1</td><td>-0.2</td><td>+7.9</td></tr></table>
151
+
152
+ <table><tr><td colspan="11">Baseline = Qwen2.5-Coder-7B-Instruct</td></tr><tr><td>Baseline</td><td>91.5</td><td>86.0</td><td>82.8</td><td>71.4</td><td>49.5</td><td>19.6</td><td>41.8</td><td>20.3</td><td>34.2</td><td>55.2</td></tr><tr><td>AceCoderRM</td><td>89.0</td><td>84.1</td><td>86.0</td><td>72.8</td><td>50.4</td><td>18.9</td><td>42.0</td><td>19.6</td><td>35.0</td><td>55.3</td></tr><tr><td>AceCoderRule</td><td>90.9</td><td>84.8</td><td>84.1</td><td>71.7</td><td>50.9</td><td>23.0</td><td>43.3</td><td>19.6</td><td>34.9</td><td>55.9</td></tr><tr><td>Ξ” (RL-baseline)</td><td>-0.6</td><td>-1.2</td><td>+3.2</td><td>+1.3</td><td>+1.4</td><td>+3.4</td><td>+1.5</td><td>-0.7</td><td>+0.8</td><td>+0.7</td></tr></table>
153
+
154
+ Table 4: ACECODER's performance after RL tuning with the Reinforce++ algorithm. We start from 3 different initial policy models and 2 reward types, where RM means using our trained ACECODE-RM and Rule means using the binary pass-rate reward. Results show consistent improvement across various benchmarks.
155
+
156
+ This ensures the questions are hard and the sampled programs are diverse. For the training hyperparameters, we set the rollout batch size to 256 and sample 8 programs per question. The training batch size is 128 with a learning rate of 5e-7. All models are trained for 1 episode, finishing in 6 hours on $8 \times$ H100 GPUs.
157
+
158
+ # 3.3 Evaluation Setup
159
+
160
+ We evaluate our method on four established code-focused benchmarks: HumanEval $(+)$ (Chen et al., 2021; Liu et al., 2023), MBPP $(+)$ (Austin et al., 2021; Liu et al., 2023), BigCodeBench (Zhuo et al., 2024), and LiveCodeBench (V4) (Jain et al., 2024). These benchmarks collectively cover a diverse array of coding tasks, enabling us to assess both the correctness and quality of generated code. For Best-of-N sampling, we adopt top-p sampling with a temperature of 1.0 to generate multiple (16/32/64) candidate solutions per question and then select the response with the highest reward for evaluation. For RL experiments, we use each benchmark's default setting, which is typically greedy decoding.
161
+
162
+ # 3.4 Main Results
163
+
164
+ Here we show the experimental results of the reward models and RL-trained models.
165
+
166
+ RM Results We conduct Best-of-N experiments on 3 inference models: Mistral-7B-Instruct-V0.3 (AI, 2023), Llama-3.1-Instruct-8B (Grattafori et al., 2024), and Qwen2.5-Coder-7B-Instruct (Hui et al., 2024b; Yang et al., 2024a). We additionally report the average score across all generated samples and the oracle score (pass@N) for better comparison.
167
+
168
+ According to Table 3, ACECODE-RM consistently boosts the performance of inference models by a large margin compared to greedy decoding. On weaker models like Mistral (AI, 2023) and Llama-3.1 (Grattafori et al., 2024), the overall improvements exceed 10 points. These improvements can be attributed to our reward model's ability to identify high-quality completions among multiple candidates, thereby reducing the impact of suboptimal sampling on the final output. Notably, these gains become more pronounced on benchmarks where the gap between greedy decoding and oracle performance (i.e., the best possible completion among all samples) is larger. In such cases, the variance among sampled completions is relatively high, providing greater opportunities for the reward model to pinpoint and elevate top-tier responses.
171
+
172
+ Greedy decoding systematically outperforms the average sampled performance, reflecting the strong code generation capability of these inference models. Consequently, while most reward models achieve best-of-N results above the average, we consider a reward model effective only if it surpasses the performance of greedy decoding.
173
+
174
+ RL Results We perform RL training over 3 different initial policy models in Table 4 with model-based and rule-based rewards. When starting from Qwen2.5-Instruct-7B, RL tuning consistently improves performance, especially on HumanEval and MBPP. Even on the Plus versions with more and harder test cases, the RL-tuned model improves by more than 3 points.
175
+
176
+ When starting from Qwen2.5-Coder-Instruct-7B itself, we still observe improvements, especially with the rule-based reward: for example, more than 3.4 points on BigCodeBench-Full-Hard. Using the reward model for RL also brings a 3.2-point improvement on MBPP. This highlights the promise of self-improvement, given that the reward model backbone is the same as the initial policy model. We compare our method with other RL-based models like RLEF (Chen et al., 2024), PPOCoder (Shojaee et al., 2023a), StepCoder (Dou et al., 2024b), and DSTC (Liu et al., 2024d). Our 7B model beats these competitors across the evaluation benchmarks.
177
+
178
+ Another experiment we conduct is RL training directly from the base model Qwen2.5-Coder-7B-Base. We observe significant improvement, especially with test-case pass rewards, on HumanEval, MBPP, and BigCodeBench-I. These results are achieved after only 80 training steps; we believe further scaling up the training will lead to much larger gains.
179
+
180
+ Comparison with Other RMs We compare ACECODE-RM with 3 top-ranked RMs on RewardBench (Lambert et al., 2024b), namely InternLM2-RM-8B (Cai et al., 2024), Skywork-Llama-3.1-8B, and Skywork-Gemma-27B (Liu et al., 2024a); results are reported in Table 5. These general-purpose RMs can hardly improve, and sometimes even decrease, performance through Best-of-N sampling compared to greedy sampling, showcasing their inability to identify correct generated programs. In contrast, our ACECODE-RM surpasses all other publicly released reward models in our evaluation and consistently achieves positive gains. These findings further underscore our assumption that previous RM training lacked reliable signals for code, and show that our RMs provide reliable, state-of-the-art reward signals for code generation tasks.
183
+
184
+ Moreover, we also evaluate ACECODE-RM against other reward models on RM-Bench (Liu et al., 2024c) in Table 6, a benchmark designed to assess reward models' capabilities across code synthesis, mathematical reasoning, and other tasks. In this head-to-head comparison, ACECODE-RM emerges as the clear state of the art in coding, hard tasks, and overall average. Remarkably, our 7-billion-parameter variant, ACECODE-RM-7B, outperforms NVIDIA-Nemotron-340B-Reward (Nvidia et al., 2024) by 7.50 points on the coding benchmark, proving that a more compact model can deliver superior reward estimates for code generation. Beyond code, ACECODE-RM also generalizes well: it beats all other reward models on the average score, underscoring its robust reasoning and dialogue capabilities.
185
+
186
+ # 3.5 Ablation Studies
187
+
188
+ Test Case Quality Matters We also conduct experiments to investigate how filtering the test cases with a proxy model affects the results. As shown in Table 7, training the RM on data after filtering improves performance significantly, especially on harder code questions like MBPP-Plus and BigCodeBench-Hard (C/I). We believe this is because test-case filtering ensures the remaining tests are consistent with each other and thus point to the same implicit program, which improves the quality of the rewards.
189
+
190
+ RM Backbone Matters Our results in Table 8 clearly show that changing the backbone of the reward model from Llama-3.1 to Qwen2.5 significantly improves Best-of-16 performance. This is because the Qwen2.5-Coder models have been pretrained on far more code-related data than the Llama-3.1 models, making them a stronger starting point for reward modeling.
191
+
192
+ Does R1-style Tuning Work? Inspired by the recent DeepSeek-R1 (Guo et al., 2025), we also conduct RL directly from the base model without any SFT. We obtain huge improvements when using rule-based rewards: for example, 25.0 points of improvement on HumanEval-Plus.
193
+
194
+ <table><tr><td rowspan="2">Method &amp; RM</td><td colspan="2">HumanEval</td><td colspan="2">MBPP</td><td colspan="2">BigCodeBench-C</td><td colspan="2">BigCodeBench-I</td><td rowspan="2">LiveCodeBench V4</td><td rowspan="2">Average</td></tr><tr><td>-</td><td>Plus</td><td>-</td><td>Plus</td><td>Full</td><td>Hard</td><td>Full</td><td>Hard</td></tr><tr><td>Greedy</td><td>68.9</td><td>62.2</td><td>67.2</td><td>54.8</td><td>38.5</td><td>12.8</td><td>31.8</td><td>13.5</td><td>18.0</td><td>40.9</td></tr><tr><td>Average</td><td>50.1</td><td>42.2</td><td>57.9</td><td>47.2</td><td>22.0</td><td>10.6</td><td>18.2</td><td>12.0</td><td>14.9</td><td>30.6</td></tr><tr><td>InternLM2-RM-8B</td><td>57.9</td><td>55.5</td><td>66.7</td><td>54.0</td><td>38.7</td><td>8.8</td><td>29.8</td><td>8.8</td><td>15.1</td><td>37.3</td></tr><tr><td>Skywork-Gemma-27B</td><td>73.8</td><td>67.1</td><td>64.3</td><td>53.4</td><td>40.1</td><td>14.9</td><td>32.5</td><td>12.8</td><td>23.6</td><td>42.5</td></tr><tr><td>Skywork-Llama-3.1-8B</td><td>67.7</td><td>61.6</td><td>69.6</td><td>56.9</td><td>40.6</td><td>10.8</td><td>31.8</td><td>12.2</td><td>18.8</td><td>41.1</td></tr><tr><td>Ξ” (max(other RM)-greedy)</td><td>+4.9</td><td>+4.9</td><td>+2.4</td><td>+2.1</td><td>+2.1</td><td>+2.0</td><td>+0.6</td><td>-0.7</td><td>+5.6</td><td>+2.6</td></tr><tr><td>ACECODE-RM-7B</td><td>77.4</td><td>70.7</td><td>76.5</td><td>64.3</td><td>45.8</td><td>20.3</td><td>36.4</td><td>12.2</td><td>26.1</td><td>47.7</td></tr><tr><td>Ξ” (RM-greedy)</td><td>+8.5</td><td>+8.5</td><td>+9.3</td><td>+9.5</td><td>+7.3</td><td>+7.4</td><td>+4.6</td><td>-1.4</td><td>+8.1</td><td>+6.8</td></tr></table>
195
+
196
+ Table 5: ACECODE-RM's performance against other open-source reward models under Best-of-16 sampling for Llama-3.1-8B-Inst. The top-ranked RMs on RewardBench yield little improvement compared to ours.
197
+
198
+ <table><tr><td>Model</td><td>Code</td><td>Chat</td><td>Math</td><td>Safety</td><td>Easy</td><td>Normal</td><td>Hard</td><td>Avg</td></tr><tr><td>Skywork/Skywork-Reward-Llama-3.1-8B</td><td>54.5</td><td>69.5</td><td>60.6</td><td>95.7</td><td>89</td><td>74.7</td><td>46.6</td><td>70.1</td></tr><tr><td>NVIDIA/Nemotron-340B-Reward</td><td>59.4</td><td>71.2</td><td>59.8</td><td>87.5</td><td>81</td><td>71.4</td><td>56.1</td><td>69.5</td></tr><tr><td>internlm/internlm2-20b-reward</td><td>56.7</td><td>63.1</td><td>66.8</td><td>86.5</td><td>82.6</td><td>71.6</td><td>50.7</td><td>68.3</td></tr><tr><td>internlm/internlm2-7b-reward</td><td>49.7</td><td>61.7</td><td>71.4</td><td>85.5</td><td>85.4</td><td>70.7</td><td>45.1</td><td>67.1</td></tr><tr><td>Skywork-Reward-Llama-3.1-8B-v0.2ΒΉ</td><td>53.4</td><td>69.2</td><td>62.1</td><td>96</td><td>88.5</td><td>74</td><td>47.9</td><td>70.1</td></tr><tr><td>Skywork-Reward-Gemma-2-27B-v0.2ΒΉ</td><td>45.8</td><td>49.4</td><td>50.7</td><td>48.2</td><td>50.3</td><td>48.2</td><td>47</td><td>48.5</td></tr><tr><td>AceCodeRM-7B</td><td>66.9</td><td>66.7</td><td>65.3</td><td>89.9</td><td>79.9</td><td>74.4</td><td>62.2</td><td>72.2</td></tr><tr><td>AceCodeRM-32B</td><td>72.1</td><td>73.7</td><td>70.5</td><td>88</td><td>84.5</td><td>78.3</td><td>65.5</td><td>76.1</td></tr><tr><td>Ξ” (AceCodeRM-7B - max(other RM))</td><td>7.5</td><td>-4.5</td><td>-6.1</td><td>-6.1</td><td>-9.1</td><td>-0.3</td><td>6.1</td><td>2.1</td></tr><tr><td>Ξ” (AceCodeRM-32B - max(other RM))</td><td>12.7</td><td>2.5</td><td>-0.9</td><td>-8</td><td>-4.5</td><td>3.6</td><td>9.4</td><td>6</td></tr></table>
199
+
200
+ ΒΉ There is no official result for these models; the authors made their best effort to extend the original RM-Bench codebase to support them.
201
+
202
+ Table 6: ACECODE-RM's performance on RM-Bench against various other reward models. The ACECODE-RM-32B model performs best in the Coding, Chat, Normal, Hard, and Average scores against all other reward models.
203
+
204
+ <table><tr><td>Method</td><td>HumanEval -</td><td>HumanEval Plus</td><td>MBPP -</td><td>MBPP Plus</td><td>BigCodeBench-C Full</td><td>BigCodeBench-C Hard</td><td>BigCodeBench-I Full</td><td>BigCodeBench-I Hard</td></tr><tr><td colspan="9">Inference Model = Llama-3.1-8B-Instruct</td></tr><tr><td>RM w/o Test Case Filter</td><td>73.8</td><td>65.9</td><td>73.3</td><td>61.4</td><td>44.6</td><td>17.6</td><td>35.5</td><td>9.5</td></tr><tr><td>RM w/ Test Filter</td><td>77.4</td><td>70.7</td><td>76.5</td><td>64.3</td><td>45.8</td><td>20.3</td><td>36.4</td><td>12.2</td></tr><tr><td>Ξ” (w/ Filter - w/o Filter)</td><td>+3.7</td><td>+4.9</td><td>+3.2</td><td>+2.9</td><td>+1.2</td><td>+2.7</td><td>+0.9</td><td>+2.7</td></tr><tr><td colspan="9">Inference Model = Qwen2.5-Coder-7B-Instruct</td></tr><tr><td>RM w/o Test Case Filter</td><td>91.5</td><td>86.0</td><td>86.0</td><td>72.2</td><td>52.5</td><td>21.6</td><td>43.4</td><td>19.6</td></tr><tr><td>RM w/ Test Filter</td><td>90.2</td><td>82.9</td><td>88.6</td><td>74.9</td><td>53.8</td><td>20.9</td><td>45.0</td><td>21.6</td></tr><tr><td>Ξ” (w/ Filter - w/o Filter)</td><td>-1.2</td><td>-3.0</td><td>+2.6</td><td>+2.6</td><td>+1.3</td><td>-0.7</td><td>+1.6</td><td>+2.0</td></tr></table>
205
+
206
+ Table 7: Ablation study on test-case filtering. Results are Best-of-16 sampling performance.
207
+
208
+ <table><tr><td rowspan="2">Method</td><td colspan="2">HumanEval</td><td colspan="2">MBPP</td><td colspan="2">BigCodeBench-C</td><td colspan="2">BigCodeBench-I</td><td rowspan="2">LiveCodeBench V4</td><td rowspan="2">Average</td></tr><tr><td>-</td><td>Plus</td><td>-</td><td>Plus</td><td>Full</td><td>Hard</td><td>Full</td><td>Hard</td></tr><tr><td colspan="11">Inference Model = Llama-3.1-8B-Instruct</td></tr><tr><td>ACECODE-RM (LLama)</td><td>65.9</td><td>59.1</td><td>69.6</td><td>57.9</td><td>42.7</td><td>12.8</td><td>32.9</td><td>13.5</td><td>19.9</td><td>41.6</td></tr><tr><td>ACECODE-RM (Qwen)</td><td>77.4</td><td>70.7</td><td>76.5</td><td>64.3</td><td>45.8</td><td>20.3</td><td>36.4</td><td>12.2</td><td>26.1</td><td>47.7</td></tr><tr><td>Ξ” (Qwen-Llama)</td><td>+11.6</td><td>+11.6</td><td>+6.9</td><td>+6.3</td><td>+3.1</td><td>+7.4</td><td>+3.5</td><td>-1.4</td><td>+6.2</td><td>+6.1</td></tr><tr><td colspan="11">Inference Model = Qwen2.5-Coder-7B-Instruct</td></tr><tr><td>ACECODE-RM (LLama)</td><td>87.8</td><td>81.7</td><td>82.0</td><td>67.7</td><td>50.5</td><td>25.0</td><td>39.0</td><td>19.6</td><td>32.4</td><td>54.0</td></tr><tr><td>ACECODE-RM (Qwen)</td><td>90.2</td><td>82.9</td><td>88.6</td><td>74.9</td><td>53.8</td><td>20.9</td><td>45.0</td><td>21.6</td><td>40.1</td><td>57.6</td></tr><tr><td>Ξ” (Qwen-Llama)</td><td>+2.4</td><td>+1.2</td><td>+6.6</td><td>+7.1</td><td>+3.2</td><td>-4.1</td><td>+6.0</td><td>+2.0</td><td>+7.7</td><td>+2.4</td></tr></table>
209
+
210
+ Table 8: Comparison of ACECODE-RM's performance when trained on different base models, where ACECODE-RM (Llama) is based on Llama-3.1-Inst-8B and ACECODE-RM (Qwen) is based on Qwen2.5-Coder-7B-Inst. Results are Best-of-16 sampling performance.
211
+
212
+ This gain comes after only 6 hours of training from the base model, which is far more efficient than large-scale SFT. What's more, ACECODER-Rule improves BigCodeBench-Instruct-Full from 40.2 to 43.2, nearly matching DeepSeek-R1-Distill-Qwen-32B (43.9), which was distilled directly from DeepSeek-R1. This further consolidates the finding of DeepSeek-R1-Zero. However, we do find that using reward models for RL tuning can lead to worse results, which we attribute to potential reward hacking during tuning.
213
+
214
+ # 4 Related Works
215
+
216
+ # 4.1 Synthesizing Test Cases
217
+
218
+ Automatic test generation is a widely used approach for verifying the correctness of LLM-generated programs. Prior work has commonly employed the same LLM that generates the programs to also generate test cases, selecting the most consistent program from multiple sampled outputs in a self-consistency manner (Chen et al., 2022; Huang et al., 2023; Jiao et al., 2024). However, these generated test cases often suffer from significant hallucinations. To address this issue, Algo (Zhang et al., 2023) introduced the use of an oracle program solution to improve test case quality. While similar in spirit to our test case filtering approach, Algo constructs its oracle solution by exhaustively enumerating all possible combinations of relevant variables, whereas we leverage a stronger coder LLM to generate the oracle solution. Beyond using test cases as verification signals, Clover (Sun et al., 2023) enhances program verification by performing consistency checks between code, docstrings, and formal annotations, incorporating formal verification tools alongside LLMs.
219
+
220
+ # 4.2 Reinforcement Learning for LLM
221
+
222
+ Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022b) has been widely adopted to enhance the capabilities of large language models (LLMs) in various tasks, including conversational interaction and mathematical reasoning (Yang et al., 2024b). Reinforcement learning (RL) algorithms such as PPO (Schulman et al., 2017), GRPO (Shao et al., 2024), and Reinforce++ (Hu, 2025) have been employed to fine-tune models using reward signals derived from either learned reward models (Shao et al., 2024) or predefined rule-based heuristics (Guo et al., 2025; Wang et al., 2025).
225
+
226
+ Given that coding is an inherently verifiable task, recent studies have explored RL techniques that leverage direct execution accuracy as a reward signal. PPOCoder (Shojaee et al., 2023b) and CodeRL (Le et al., 2022) demonstrated the effectiveness of PPO-based RL for coding tasks, while RLEF (Gehring et al., 2024) extended this approach to multi-turn settings by incorporating execution feedback at each step. StepCoder (Dou et al., 2024a) refined the reward mechanism by assigning rewards at a more granular level, considering only successfully executed lines of code. Additionally, DSTC (Liu et al., 2024e) explored the application of Direct Preference Optimization (DPO) to code generation, using self-generated test cases and programs.
227
+
228
+ Despite these advancements, most prior RL-based approaches for coding have been constrained by the use of pre-annotated datasets such as APPS (Hendrycks et al., 2021), which consists of only 5,000 examples, with most problems having a single test case. This limited data availability poses challenges to scalable RL training. Furthermore, the potential of reward models for coding remains largely unexplored. In this work, we address these limitations by automatically synthesizing test cases and leveraging trained reward models for reinforcement learning, demonstrating the scalability and effectiveness of our approach.
229
+
230
+ # 5 Conclusion
231
+
232
+ We introduced ACECODER, the first approach to reward model training and RL tuning for code generation based on large-scale, reliable test-case synthesis. Our data pipeline produces high-quality verifiable code data without relying on the most advanced models, enabling effective reward model training and reinforcement learning. Our method significantly improves Best-of-N performance; RL training gains are less pronounced, and we leave enhancing them to future work.
233
+
234
+ # Limitations
235
+
236
+ Test Case Synthesis Despite our efforts to enhance the reliability of synthesized test cases through prompt engineering and filtering with a reference solution, inaccuracies can still arise. These errors may stem from an incorrect reference solution or from test cases that are too simple to capture challenging edge cases. Consequently, passing all test cases does not necessarily guarantee a program's correctness, introducing noise into the reward model training and reinforcement learning (RL) tuning signals. To address this, future work can leverage stronger large language models (LLMs) to synthesize more rigorous test cases, ensuring the inclusion of harder corner cases. Additionally, using more advanced coding LLMs to generate reference solutions could further improve test-case filtering, preserving only high-quality examples.
239
+
240
+ Reinforcement Learning for Coding In this paper, we explored RL tuning with three models and two types of rewards: RM-based and rule-based. While significant improvements are observed when tuning Qwen2.5-7B-Instruct and Qwen2.5-Coder-7B-Base, tuning Qwen2.5-Coder-7B-Instruct exhibits less pronounced gains due to its already strong starting capability. This suggests that the current reward signals may still contain noise. Furthermore, there remains considerable room for improvement, particularly in tuning Qwen2.5-Coder-7B-Base. Given recent advancements in models such as DeepSeek-R1, future work could further refine RL tuning strategies with a more fine-grained reward design.
241
+
242
+ # Ethical Statements
243
+
244
+ This work fully complies with the ACL Ethics Policy. We declare that there are no ethical issues in this paper, to the best of our knowledge.
245
+
246
+ # References
247
+
248
+ Mistral AI. 2023. Mistral-7b-instruct-v0.3. https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3.
249
+ Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732.
250
+ Ralph Allan Bradley and Milton E. Terry. 1952. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324–345.
251
+ Zheng Cai, Maosong Cao, Haojiong Chen, Kai Chen, Keyu Chen, Xin Chen, Xun Chen, Zehui Chen, Zhi Chen, Pei Chu, Xiaoyi Dong, Haodong Duan, Qi Fan, Zhaoye Fei, Yang Gao, Jiaye Ge, Chenya Gu, Yuzhe Gu, Tao Gui, Aijia Guo, Qipeng Guo, Conghui He, Yingfan Hu, Ting Huang, Tao Jiang, Penglong Jiao, Zhenjiang Jin, Zhikai Lei, Jiaxing Li,
252
+
253
+ Jingwen Li, Linyang Li, Shuaibin Li, Wei Li, Yining Li, Hongwei Liu, Jiangning Liu, Jiawei Hong, Kaiwen Liu, Kuikun Liu, Xiaoran Liu, Chengqi Lv, Hajunj Lv, Kai Lv, Li Ma, Runyuan Ma, Zerun Ma, Wenchang Ning, Linke Ouyang, Jiantao Qiu, Yuan Qu, Fukai Shang, Yunfan Shao, Demin Song, Zifan Song, Zhihao Sui, Peng Sun, Yu Sun, Huanze Tang, Bin Wang, Guoteng Wang, Jiaqi Wang, Jiayu Wang, Rui Wang, Yudong Wang, Ziyi Wang, Xingjian Wei, Qizhen Weng, Fan Wu, Yingtong Xiong, Chao Xu, Ruiliang Xu, Hang Yan, Yirong Yan, Xiaogui Yang, Haochen Ye, Huaiyuan Ying, Jia Yu, Jing Yu, Yuhang Zang, Chuyu Zhang, Li Zhang, Pan Zhang, Peng Zhang, Ruijie Zhang, Shuo Zhang, Songyang Zhang, Wenjian Zhang, Wenwei Zhang, Xingcheng Zhang, Xinyue Zhang, Hui Zhao, Qian Zhao, Xiaomeng Zhao, Fengzhe Zhou, Zaida Zhou, Jingming Zhuo, Yicheng Zou, Xipeng Qiu, Yu Qiao, and Dahua Lin. 2024. InternLM2 technical report. Preprint, arXiv:2403.17297.
254
+ Angelica Chen, JΓ©rΓ©my Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Samuel R Bowman, Kyunghyun Cho, and Ethan Perez. 2024. Learning from natural language feedback. *Transactions on Machine Learning Research*.
255
+ Bei Chen, Fengji Zhang, A. Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. 2022. Codet: Code generation with generated tests. ArXiv, abs/2207.10397.
256
+ Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidi Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. Preprint, arXiv:2107.03374.
257
+ Shihan Dou, Yan Liu, Haoxiang Jia, Limao Xiong, Enyu Zhou, Junjie Shan, Caishuang Huang, Wei Shen, Xiaoran Fan, Zhiheng Xi, Yuhao Zhou, Tao Ji, Rui Zheng, Qi Zhang, Xuanjing Huang, and Tao Gui. 2024a. Stepcoder: Improve code generation with reinforcement learning from compiler feedback. ArXiv, abs/2402.01391.
258
+ Shihan Dou, Yan Liu, Haoxiang Jia, Limao Xiong, Enyu Zhou, Wei Shen, Junjie Shan, Caishuang
259
+
260
+ Huang, Xiao Wang, Xiaoran Fan, et al. 2024b. Stepcoder: Improve code generation with reinforcement learning from compiler feedback. arXiv preprint arXiv:2402.01391.
261
+ Jonas Gehring, Kunhao Zheng, Jade Copet, Vegard Mella, Taco Cohen, and Gabriele Synnaeve. 2024. Rlef: Grounding code llms in execution feedback with reinforcement learning. ArXiv, abs/2410.02089.
262
+ Aaron Grattafori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, and etc. 2024. The llama 3 herd of models. Preprint, arXiv:2407.21783.
263
+ Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948.
264
+ Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Yu Wu, Y. K. Li, Fuli Luo, Yingfei Xiong, and Wenfeng Liang. 2024a. Deepseek-coder: When the large language model meets programming - the rise of code intelligence. ArXiv, abs/2401.14196.
265
+ Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Yu Wu, YK Li, et al. 2024b. Deepseekcoder: When the large language model meets programming-the rise of code intelligence. arXiv preprint arXiv:2401.14196.
266
+ Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. 2021. Measuring coding challenge competence with apps. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
267
+ Jian Hu. 2025. Reinforce++: A simple and efficient approach for aligning large language models. arXiv preprint arXiv:2501.03262.
268
+ Jian Hu, Xibin Wu, Weixun Wang, Dehao Zhang, Yu Cao, OpenLLMAI Team, Netease Fuxi, AI Lab, and Alibaba Group. 2024. Openrlhf: An easy-to-use, scalable and high-performance rlhf framework. ArXiv, abs/2405.11143.
269
+ Baizhou Huang, Shuai Lu, Weizhu Chen, Xiaojun Wan, and Nan Duan. 2023. Enhancing large language models in coding through multi-perspective self-consistency. In Annual Meeting of the Association for Computational Linguistics.
270
+ Siming Huang, Tianhao Cheng, Jason Klein Liu, Jiaran Hao, Liuyihan Song, Yang Xu, J Yang, JH Liu, Chenchen Zhang, Linzheng Chai, et al. 2024. Opencoder: The open cookbook for top-tier code large language models. arXiv preprint arXiv:2411.04905.
271
+
272
+ Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Kai Dang, An Yang, Rui Men, Fei Huang, Shanghaoran Quan, Xingzhang Ren, Xuancheng Ren, Jingren Zhou, and Junyang Lin. 2024a. Qwen2.5-coder technical report. ArXiv, abs/2409.12186.
273
+ Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Kai Dang, et al. 2024b. Qwen2.5-coder technical report. arXiv preprint arXiv:2409.12186.
274
+ Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. 2024. Gpt-4o system card. arXiv preprint arXiv:2410.21276.
275
+ Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. 2024. Livecodebench: Holistic and contamination free evaluation of large language models for code. arXiv preprint arXiv:2403.07974.
276
+ Fangkai Jiao, Geyang Guo, Xingxing Zhang, Nancy F. Chen, Shafiq Joty, and Furu Wei. 2024. Preference optimization for reasoning with pseudo feedback. ArXiv, abs/2411.16345.
277
+ Carlos E Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik R Narasimhan. 2024. Swe-bench: Can language models resolve real-world github issues? In The Twelfth International Conference on Learning Representations.
278
+ Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.
279
+ Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brahman, Lester James V Miranda, Alisa Liu, Nouha Dziri, Shane Lyu, et al. 2024a. TΓΌlu 3: Pushing frontiers in open language model post-training. arXiv preprint arXiv:2411.15124.
280
+ Nathan Lambert, Valentina Pyatkin, Jacob Daniel Morrison, Lester James Validad Miranda, Bill Yuchen Lin, Khyathi Raghavi Chandu, Nouha Dziri, Sachin Kumar, Tom Zick, Yejin Choi, Noah A. Smith, and Hanna Hajishirzi. 2024b. Rewardbench: Evaluating reward models for language modeling. ArXiv, abs/2403.13787.
281
+ Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven C. H. Hoi. 2022. Coderl: Mastering code generation through pretrained models and deep reinforcement learning. ArXiv, abs/2207.01780.
282
+ Rongao Li, Jie Fu, Bo-Wen Zhang, Tao Huang, Zhihong Sun, Chen Lyu, Guang Liu, Zhi Jin, and Ge Li. 2023. Taco: Topics in algorithmic code generation dataset. arXiv preprint arXiv:2312.14852.
283
+ Chris Yuhao Liu, Liang Zeng, Jiacai Liu, Rui Yan, Jujie He, Chaojie Wang, Shuicheng Yan, Yang Liu, and Yahui Zhou. 2024a. Skywork-reward: Bag of tricks for reward modeling in llms. arXiv preprint arXiv:2410.18451.
285
+ Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. 2023. Is your code generated by chatGPT really correct? rigorous evaluation of large language models for code generation. In Thirty-seventh Conference on Neural Information Processing Systems.
286
+ Jiawei Liu, Songrun Xie, Junhao Wang, Yuxiang Wei, Yifeng Ding, and Lingming Zhang. 2024b. Evaluating language models for efficient code generation. In First Conference on Language Modeling.
287
+ Yantao Liu, Zijun Yao, Rui Min, Yixin Cao, Lei Hou, and Juanzi Li. 2024c. Rm-bench: Benchmarking reward models of language models with subtlety and style. arXiv preprint arXiv:2410.16184.
288
+ Zhihan Liu, Shenao Zhang, Yongfei Liu, Boyi Liu, Yingxiang Yang, and Zhaoran Wang. 2024d. Dstc: Direct preference learning with only self-generated tests and code to improve code lms. arXiv preprint arXiv:2411.13611.
289
+ Zhihan Liu, Shenao Zhang, Yongfei Liu, Boyi Liu, Yingxiang Yang, and Zhaoran Wang. 2024e. Dstc: Direct preference learning with only self-generated tests and code to improve code lms. ArXiv, abs/2411.13611.
290
+ Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, et al. 2024. Starcoder 2 and the stack v2: The next generation. arXiv preprint arXiv:2402.19173.
291
+ Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023. Wizardcoder: Empowering code large language models with evol-instruct.
292
+ Nvidia, Bo Adler, Niket Agarwal, Ashwath Aithal, Dong H. Anh, Pallab Bhattacharya, Annika Brundyn, Jared Casper, Bryan Catanzaro, Sharon Clay, Jonathan Cohen, Sirshak Das, Ayush Dattagupta, Olivier Delalleau, Leon Derczynski, Yi Dong, Daniel Egert, Ellie Evans, Aleksander Ficek, Denys Fridman, Shaona Ghosh, Boris Ginsburg, Igor Gitman, Tomasz Grzegorzek, Robert Hero, Jining Huang, Vibhu Jawa, Joseph Jennings, Aastha Jhunjhunwala, John Kamalu, Sadaf Khan, Oleksii Kuchaiev, Patrick LeGresley, Hui Li, Jiwei Liu, Zihan Liu, Eileen Long, Ameya Sunil Mahabaleshwarkar, Somshubra Majumdar, James Maki, Miguel Martinez, Maer Rodrigues de Melo, Ivan Moshkov, Deepak Narayanan, Sean Narethiran, Jesus Navarro, Phong Nguyen, Osvald Nitski, Vahid Noroozi, Guruprasad Nutheti, Christopher Parisien, Jupinder Parmar, Mostofa Patwary, Krzysztof Pawelec, Wei Ping, Shrimai Prabhumoye, Rajarshi Roy, Trisha Saar, Vasanth Rao Naik Sabavat, Sanjeev Satheesh, Jane Polak Scowcroft, Jason Sewall, Pavel Shamis, Gerald Shen, Mohammad Shoeybi, Dave Sizer, Misha Smelyanskiy, Felipe Soares, Makesh Narsimhan Sreedhar, Dan Su, Sandeep Subramanian, Shengyang Sun, Shubham Toshniwal, Hao Wang, Zhilin Wang, Jiaxuan You, Jiaqi Zeng, Jimmy Zhang, Jing Zhang, Vivienne Zhang, Yian Zhang, and Chen Zhu. 2024. Nemotron-4 340b technical report. Preprint, arXiv:2406.11704.
295
+ Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Francis Christiano, Jan Leike, and Ryan J. Lowe. 2022a. Training language models to follow instructions with human feedback. ArXiv, abs/2203.02155.
296
+ Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022b. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744.
297
+ Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. 2023. Code llama: Open foundation models for code. ArXiv, abs/2308.12950.
298
+ John Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan, and P. Abbeel. 2015. High-dimensional continuous control using generalized advantage estimation. CoRR, abs/1506.02438.
299
+ John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. ArXiv, abs/1707.06347.
300
+ Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y. K. Li, Yu Wu, and Daya Guo. 2024. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. ArXiv, abs/2402.03300.
301
+ Parshin Shojaee, Aneesh Jain, Sindhu Tipirneni, and Chandan K Reddy. 2023a. Execution-based code generation using deep reinforcement learning. Transactions on Machine Learning Research.
302
+ Parshin Shojaee, Aneesh Jain, Sindhu Tipirneni, and Chandan K. Reddy. 2023b. Execution-based code generation using deep reinforcement learning. ArXiv, abs/2301.13816.
303
+ Alexander Shypula, Aman Madaan, Yimeng Zeng, Uri Alon, Jacob Gardner, Milad Hashemi, Graham Neubig, Parthasarathy Ranganathan, Osbert Bastani, and Amir Yazdanbakhsh. 2023. Learning performance-improving code edits. arXiv preprint arXiv:2302.07867.
306
+ Benjamin Steenhoek, Michele Tufano, Neel Sundaresan, and Alexey Svyatkovskiy. 2023. Reinforcement learning from automatic feedback for high-quality unit test generation. arXiv preprint arXiv:2310.02368.
307
+ Chuyue Sun, Ying Sheng, Oded Padon, and Clark W. Barrett. 2023. Clover: Closed-loop verifiable code generation. In SAIV.
308
+ Haozhe Wang, Long Li, Chao Qu, Fengming Zhu, Weidi Xu, Wei Chu, and Fangzhen Lin. 2025. Learning autonomous code integration for math language models. ArXiv.
309
+ Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and Lingming Zhang. 2024. Magicoder: Empowering code generation with oss-instruct. In Forty-first International Conference on Machine Learning.
310
+ An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zhihao Fan. 2024a. Qwen2 technical report. arXiv preprint arXiv:2407.10671.
311
+ An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, Keming Lu, Mingfeng Xue, Runji Lin, Tianyu Liu, Xingzhang Ren, and Zhenru Zhang. 2024b. Qwen2.5-math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122.
312
+ Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, et al. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911-3921.
313
+ Kexun Zhang, Danqing Wang, Jingtao Xia, William Yang Wang, and Lei Li. 2023. Algo: Synthesizing algorithmic programs with generated oracle verifiers. ArXiv, abs/2305.14591.
314
+ Zhenru Zhang, Chujie Zheng, Yangzhen Wu, Beichen Zhang, Runji Lin, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. 2025. The lessons of developing process reward models in mathematical reasoning. arXiv preprint arXiv:2501.07301.
317
+ Tianyu Zheng, Ge Zhang, Tianhao Shen, Xueling Liu, Bill Yuchen Lin, Jie Fu, Wenhu Chen, and Xiang Yue. 2024a. Opencodeinterpreter: Integrating code generation with execution and refinement. arXiv preprint arXiv:2402.14658.
318
+ Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. 2024b. Llamafactory: Unified efficient fine-tuning of 100+ language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, Thailand. Association for Computational Linguistics.
319
+ Terry Yue Zhuo, Minh Chien Vu, Jenny Chim, Han Hu, Wenhao Yu, Ratnadira Widyasari, Imam Nur Bani Yusuf, Haolan Zhan, Junda He, Indraneil Paul, et al. 2024. Bigcodebench: Benchmarking code generation with diverse function calls and complex instructions. arXiv preprint arXiv:2406.15877.
320
+
321
+ # A Appendix
322
+
323
+ # A.1 More related works
324
+
325
+ LLM for Code Generation Large language models (LLMs) have demonstrated significant potential in code generation. Due to the unique nature of coding tasks, specialized coding models such as Code Llama (Rozière et al., 2023) and QwenCoder (Hui et al., 2024b; Yang et al., 2024a) were developed shortly after the emergence of general-purpose LLMs. These models typically undergo a two-phase training process: pre-training and fine-tuning. During pre-training, they are exposed to extensive coding corpora sourced from various internet platforms, including raw text, GitHub repositories, and pull requests. This is followed by supervised fine-tuning, which enhances their instruction-following capabilities. To assess the performance of these models in code generation, several benchmarks have been established, including MBPP (Austin et al., 2021), HumanEval (Chen et al., 2021), EvalPlus (Liu et al., 2023, 2024b), BigCodeBench (Zhuo et al., 2024), and LiveCodeBench (Jain et al., 2024). These benchmarks usually include a series of prompts or problems for the LLMs to solve, and they also contain test cases to assess the correctness of the generated code.
326
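+
+ As a concrete illustration of how such execution-based benchmarks typically score a model, here is a minimal sketch of plain pass@1 aggregation (benchmark-specific harnesses differ in details such as sandboxing and timeouts):
+
+ ```python
+ # A problem counts as solved only if the generated program passes every test case.
+ def pass_at_1(results: list[list[bool]]) -> float:
+     """results[i] holds the per-test outcomes for the model's answer to problem i."""
+     solved = sum(all(tests) for tests in results)  # count fully-passing problems
+     return solved / len(results)
+ ```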
+
327
+ Reward Models Reward models play a crucial role in aligning LLMs by assigning scalar values to response pairs based on specific evaluation criteria, such as human preference (Ouyang et al., 2022b) and accuracy (Zhang et al., 2025). They are widely used in reinforcement learning with human feedback (RLHF) to refine model behavior and in Best-of-N sampling to enhance test-time performance. However, while general-purpose reward models are effective for assessing human preference, they often struggle with specialized domains like mathematics and coding due to the complexity of these tasks. For instance, even top-ranked reward models from RewardBench (Lambert et al., 2024b), such as Skywork-RM (Liu et al., 2024a), have difficulty providing reliable rewards for these domains. To address this issue, task-specific reward models have been developed, such as Qwen-2.5-Math-PRM (Zhang et al., 2025) for mathematical reasoning. However, coding reward models have remained largely absent due to the lack of reliable training signals, an issue that our proposed ACECODE-RM aims to address.
328
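+
+ To make the Best-of-N use of a reward model concrete, here is a minimal sketch; `generate` and `reward_score` are hypothetical stand-ins for an LLM sampler and a trained reward model such as ACECODE-RM, not the released API:
+
+ ```python
+ # Best-of-N sketch: sample N candidate responses, score each with a reward
+ # model, and keep the highest-scoring one.
+ def best_of_n(prompt, generate, reward_score, n=16):
+     candidates = [generate(prompt) for _ in range(n)]       # N i.i.d. samples
+     scores = [reward_score(prompt, c) for c in candidates]  # one scalar reward each
+     return candidates[max(range(n), key=lambda i: scores[i])]
+ ```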
+
329
+ # A.2 ACECODE-RM Model Breakdown
330
+
331
+ In this section, we present the different models involved in the training and evaluation process of ACECODE-RM in a concise table for further clarification.
332
+
333
+ <table><tr><td></td><td>AceCodeRM-7B</td></tr><tr><td>Backbone</td><td>Qwen2.5-Coder-7B-Instruct</td></tr><tr><td>Training Responses (Preference Pairs) Generation</td><td>Qwen2.5-Coder-7B-Instruct</td></tr><tr><td>Inference Model</td><td>Any Model (Qwen2.5-Coder-7B-Instruct, Llama-3.1-8B-Instruct, etc.)</td></tr><tr><td></td><td>AceCodeRM-32B</td></tr><tr><td>Backbone</td><td>Qwen2.5-Coder-32B-Instruct</td></tr><tr><td>Training Responses (Preference Pairs) Generation</td><td>Qwen2.5-Coder-32B-Instruct</td></tr><tr><td>Inference Model</td><td>Any Model (Qwen2.5-Coder-7B-Instruct, Llama-3.1-8B-Instruct, etc.)</td></tr></table>
334
+
335
+ Table 9: Models used during the training and evaluation process of ACECODE-RM.
336
+
337
+ # A.3 Prompt Template
338
+
339
+ # system:
340
+
341
+ You are an AI assistant that helps people with python coding tasks.
342
+
343
+ # user:
344
+
345
+ You are the latest and best bot aimed at transforming some code snippet into a leetcode style question. You will be provided with a prompt for writing code, along with a reference program that answers the question. Please complete the following for me:
346
+
347
+ 1. Come up with a leetcode style question which consists of a well-defined problem. The generated question should meet the following criteria:
348
+
349
+ a. The question is clear and understandable, with enough details to describe what the input and output are.
350
+ b. The question should be solvable by only implementing 1 function instead of multiple functions or a class. Therefore, please avoid questions which require complicated pipelines.
351
+ c. The question itself should not require any access to external resource or database.
352
+ d. Feel free to use part of the original question if necessary. Moreover, please do not ask for runtime and space complexity analysis or any test cases in your response.
353
+
354
+ 2. Based on the modified question that you generated in part 1, you need to create around 20 test cases for this modified question. Each test case should be independent assert clauses. The parameters and expected output of each test case should all be constants, **without accessing any external resources**.
355
+
356
+ Here is the original question: {instruction}
357
+
358
+ Here is the reference program that answers the question:
359
+
360
+ ```python
+ {program}
+ ```
365
+
366
+ Now give your modified question and generated test cases in the following json format: {"question": ..., "tests": ["assert ... ", "assert ..."]}.
367
+
368
+ Table 10: Prompt Used for Converting Seed Code Dataset into LeetCode-style Questions and Test Cases
369
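+
+ One simple way to consume the JSON output requested above is to keep only the generated assert-style tests that actually pass against the reference program. The sketch below illustrates the idea (the helper name is ours, and a production pipeline would sandbox the `exec` calls):
+
+ ```python
+ import json
+
+ def filter_tests(reference_program: str, llm_output: str) -> list[str]:
+     data = json.loads(llm_output)       # {"question": ..., "tests": ["assert ...", ...]}
+     namespace: dict = {}
+     exec(reference_program, namespace)  # define the reference function
+     kept = []
+     for test in data["tests"]:
+         try:
+             exec(test, namespace)       # e.g. 'assert file_name_check("a.txt") == "Yes"'
+             kept.append(test)
+         except Exception:               # failed assert or runtime error: drop the test
+             continue
+     return kept
+ ```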
+
370
+ <table><tr><td>system:
371
+ You are an AI assistant that helps people with python coding tasks.</td></tr><tr><td>user:
372
+ You are the latest and best bot aimed at transforming some code snippet into a leetcode style question. You will be provided with a reference program. Please complete the following for me:
373
+ 1. Come up with a leetcode style question which consists of a well-defined problem. The generated question should meet the following criteria:
374
+ a. The question is clear and understandable, with enough details to describe what the input and output are.
375
+ b. The question should be solvable by only implementing 1 function instead of multiple functions or a class. Therefore, please avoid questions which require complicated pipelines.
376
+ c. The question itself should not require any access to external resource or database.
377
+ d. Feel free to use part of the original question if necessary. Moreover, please do not ask for runtime and space complexity analysis or any test cases in your response.
378
+ 2. Based on the modified question that you generated in part 1, you need to create around 20 test cases for this modified question. Each test case should be independent assert clauses. The parameters and expected output of each test case should all be constants, **without accessing any external resources**.
379
+ Here is the reference program:
380
+ ```python
+ {program}
+ ```</td></tr></table>
383
+
384
+ Table 11: Prompt Used for Converting Seed Code Dataset into LeetCode-style Questions and Test Cases Using Only the Reference Program (Without the Original Instruction)
385
+
386
+ # A.4 Case Studies on HumanEval
387
+
388
+ [Figure 2: HumanEval/141 `file_name_check` (Baseline Score: Pass; RL Score: Fail). The baseline's response validates the file name step by step: splitting on the dot, checking that the part before the dot is non-empty and starts with a letter, and counting digits. The RL model's response instead matches the whole name against a single regular expression.]
389
+
390
+ Figure 2: In this example, the RL model took a more advanced approach and attempted to use regular expression matching. However, the regular expression it came up with is not correct, as it does not allow whitespace and does not correctly enforce the constraint that "there should not be more than three digits ('0'-'9') in the file's name".
391
+
392
+ ![](images/14a9343faca0b5bd11161613493a1c523e2f4f8c82106cd998819bf42ea7fd5c.jpg)
393
+ Figure 3: In this example, the RL model correctly caught the error where the baseline did not consider the case where the whole string has to be repeated in order to create a palindrome.
394
+
395
+ ![](images/1871432cd5291984723d28a37bb0ddfa58be6f82f48cc74b3fb2dd0f3224ed36.jpg)
396
+ Figure 4: In this example, while both solutions produce identical output, the baseline's code is slow due to its recursive calls. By using a for loop instead, the RL model's code runs in roughly half the time; it therefore passed the test, whereas the baseline's code timed out.
acecoderacingcoderrlviaautomatedtestcasesynthesis/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f03b9b92dd4137176dc2b5f00062402cca24eb5fa09e0e2be50471d5e8a24432
3
+ size 1387055
acecoderacingcoderrlviaautomatedtestcasesynthesis/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:549455f1530fb91ed70822304ac7d4679c811475bce4f28abb40ea7839387c46
3
+ size 464952
acordanexpertannotatedretrievaldatasetforlegalcontractdrafting/831455f6-9e27-4933-a005-1bc4bfd9073f_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:95fe95383e6955cd3341cb4ce7b626a3136e7c644b18bacd8091ee89f267de20
3
+ size 132752
acordanexpertannotatedretrievaldatasetforlegalcontractdrafting/831455f6-9e27-4933-a005-1bc4bfd9073f_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:78b9f73a251cd02a68c27a23b1098247eb26ebfb0db417e0c6935cdc942460f9
3
+ size 155359
acordanexpertannotatedretrievaldatasetforlegalcontractdrafting/831455f6-9e27-4933-a005-1bc4bfd9073f_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:85a01e7c97499f515ce3cfb5a90b16fff2953e493f8331c6b3c46bacc82a0e6e
3
+ size 465702
acordanexpertannotatedretrievaldatasetforlegalcontractdrafting/full.md ADDED
@@ -0,0 +1,447 @@
 
 
 
 
1
+ # ACORD: An Expert-Annotated Dataset for Legal Contract Clause Retrieval
2
+
3
+ Steven H. Wang $^{1}$ , Maksim Zubkov $^{2}$ , Kexin Fan $^{3}$ , Sarah Harrell $^{4}$ , Yuyang Sun $^{5}$ Wei Chen $^{6}$ , Andreas Plesner $^{1}$ , Roger Wattenhofer $^{1}$
4
+
5
+ $^{1}$ ETH Zurich $^{2}$ Independent Researcher $^{3}$ New York University
6
+
7
+ $^{4}$ University of Washington $^{5}$ Yale University $^{6}$ The Atticus Project
8
+
9
+ wang.steven.h@gmail.com wei@attiscusprojectai.org aplesner@ethz.ch
10
+
11
+ # Abstract
12
+
13
+ Contract clause retrieval is foundational to contract drafting because lawyers rarely draft contracts from scratch; instead, they locate and revise the most relevant precedent clauses. We introduce the Atticus Clause Retrieval Dataset (ACORD), the first expert-annotated benchmark specifically designed for contract clause retrieval to support contract drafting tasks. ACORD focuses on complex contract clauses such as Limitation of Liability, Indemnification, Change of Control, and Most Favored Nation. It includes 114 queries and over 126,000 query-clause pairs, each ranked on a scale from 1 to 5 stars. The task is to find the most relevant precedent clauses to a query. The bi-encoder retriever paired with pointwise LLMs re-rankers shows promising results. However, substantial improvements are still needed to manage the complex legal work typically undertaken by lawyers effectively. As the first expert-annotated benchmark for contract clause retrieval, ACORD can serve as a valuable IR benchmark for the NLP community.
14
+
15
+ # 1 Introduction
16
+
17
+ Contracts are the backbone of modern business, with millions created daily. A 2023 ALM and Bloomberg Law survey found that $43\%$ of corporate counsel spend at least half their time on tasks like drafting, editing, and negotiating contracts (Bloomberg Law (2023); Pery (2019)). Clause retrieval is critical to these tasks because lawyers rarely write contracts from scratch; they rely on finding and adapting high-quality precedent clauses, especially when drafting complex clauses. Mistakes in wording or missing key ideas can lead to disputes, liabilities, or invalid clauses.
18
+
19
+
21
24
+
25
+ ![](images/2851337cfd792bf82610a8a372c7ded20483c190a39177c81124cfc30eecb28b.jpg)
26
+ Figure 1: A showcase of the clause retrieval process integral to the ACORD dataset. The input is a query, "Termination for Convenience," and the output is a ranked list of the most relevant clauses from various agreements. The clauses are accompanied by their sources, thus providing critical context for legal professionals.
27
+
28
+ Clause retrievals shown in Figure 1:
29
+
30
+ 1. Either Party may terminate this Agreement at any time during the term of this Agreement for any reason upon two weeks written notice to the other Party. [Development Agreement, Pelican Delivers Inc., 2018].
31
+ 2. Both parties shall have the right to terminate this Agreement at any time, for any or for no reason, upon one hundred twenty (120) days written notice to the other party. [Collaboration Agreement, Biocept Inc., 2012].
32
+ 3. Accordingly, either Party may terminate this Agreement or all/part of a Project for any reason and at any time upon giving the other Party sixty (60) days prior written notice. [Development Agreement, FuelCell Energy Inc., 2019].
33
+ 4. Client may terminate this Agreement on thirty (30) days written notice without cause. [Master Service Agreement, PaxMedica Inc., 2018].
34
+ 5. Bank of America may terminate this Agreement, an Order and/or any Customization Schedule(s) for its convenience, without cause, at any time without further charge or expense upon at least forty-five (45) calendar days prior written notice to Supplier. [Maintenance Agreement, Cardlytics Inc., 2010].
35
+
36
+ Research shows that Large Language Models (LLMs) can effectively analyze legal contracts and identify issues within them (Hoffman and Arbel, 2023; Martin et al., 2024; Choi et al., 2024; Schwarcz et al., 2025). However, LLMs struggle to draft contracts independently. Table 1 compares an LLM-generated clause with two top-rated precedent clauses from ACORD. Lawyer edits, shown with strike-through and underlined text, highlight two main issues: (1) the LLM-generated language creates conflicts between clauses, and (2) it introduces language not typically found in standard contracts.
37
+
38
+ Retrieval-Augmented Generation (RAG), which combines LLMs' text generation with a retrieval system, mimics how lawyers improve drafting by first searching for relevant precedent clauses to use as the base. This approach has been shown to reduce hallucinations and enhance performance (Lewis et al., 2021; Niu et al., 2024; Magesh et al., 2024).
39
+
40
+ <table><tr><td>Prompt: draft a contract clause where liability cap is based on purchase price</td><td colspan="2">Query: liability cap is based on purchase price</td></tr><tr><td>LLM-Generated Result</td><td>ACORD 5-Star Clause</td><td>ACORD 4-Star Clause</td></tr><tr><td>1.- [Section Number] Cap on Liabilitya
41
+ Notwithstanding any provision to the contrary in this Agreement, Except as set forth in Section [Number] below, The total aggregate liability of [Seller&#x27;s Name] to [Buyer&#x27;s Name], whether in contract, tort (including negligence), breach of statutory duty, or otherwise, arising under or in connection with this Agreement shall not exceed the total purchase price amount paid and payable by [Buyer&#x27;s Name] to [Seller&#x27;s Name] under this Agreement (the &quot;Purchase Price&quot;).2.- [Section Number]. Exclusions from Liability Cap
42
+ The limitation of liability set forth in this clause Section [Number] shall not apply to:
43
+ a. Liability resulting from gross negligence, willful misconduct, or fraud by [Seller&#x27;s Name];
44
+ b. Any breach of confidentiality obligations as specified in this Agreement;
45
+ c. Any liability which cannot be excluded or limited by applicable law.3.- Section [Number] No Indirect or Consequential Damages
46
+ In no event shall [Seller&#x27;s Name] be liable to [Buyer&#x27;s Name] for any indirect, incidental, special, consequential, or punitive damages, including but not limited to loss of profits or revenue, loss of use, or loss of data, even if advised of the possibility of such damages.
47
+ 4.- Application of Limitations-
48
+ The parties agree that the limitations of liability set forth in this clause are fundamental elements of the basis of the bargain between [Seller&#x27;s Name] and [Buyer&#x27;s Name], and the Purchase Price reflects these limitations.</td><td>5. Sponsor&#x27;s Maximum Obligation; Indemnification. Racing represents to the Sponsor that the Sponsor&#x27;s aggregate obligation hereunder will not exceed the amount of the sponsorship fee set forth in Section 3 hereof (or such lesser amount as is payable by the Sponsor in the event that this Agreement is terminated pursuant to Section 14 thereof), plus, if applicable, collection costs that may be reasonably incurred by Racing in a legal proceeding to collect all or any part thereof (the &quot;Maximum Obligation&quot;). Racing agrees to indemnify the Sponsor and its officers, directors, agents and employees and to hold them harmless from any loss, claim, cost, damage or liability in excess of the Maximum Obligation which (i) the Sponsor shall incur as a result of this Agreement, or (ii) arises from any failure by Racing to perform any of its obligations hereunder.</td><td>12. Limitation of Liability
49
+ 12.1 IN NO EVENT SHALL EITHER PARTY BE LIABLE TO THE OTHER PARTY FOR ANY INCIDENTAL, CONSEQUENTIAL, INDIRECT, SPECIAL, OR PUNITIVE DAMAGES (INCLUDING, BUT NOT LIMITED TO, LOST PROFITS, BUSINESS INTERRUPTION, LOSS OF BUSINESS INFORMATION OR OTHER PECUNIARY LOSS) REGARDLESS OF WHETHER SUCH LIABILITY IS BASED ON BREACH OF CONTRACT, TORT (INCLUDING NEGLIGENCE), STRICT LIABILITY, BREACH OF WARRANTIES, FAILURE OF ESSENTIAL PURPOSE OR OTHERWISE AND EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
50
+ 12.2 Except with regard to a breach of confidentiality, a party&#x27;s indemnification obligations hereunder, or infringement of intellectual property rights, either party&#x27;s total liability to the other party under this Agreement shall be limited to the amounts paid or payable by the Reseller to Todos during the twelve-month period preceding the interposition of the claim.</td></tr></table>
51
+
52
+ a Attorney Note: change "1." to a placeholder because a Cap on Liability clause would never be in Section 1.
53
+ b Attorney Note: the reference to "Notwithstanding any provision to the contrary in this Agreement" is wrong because this subsection should NOT take precedence over the "Exclusions from Liability Cap" below. Therefore, it should be replaced with "Except as set forth in Section [Number] below."
56
+ e Attorney Note: Section 4 is not commonly seen in commercial contracts. It is unclear what the intent or purpose of this clause is.
57
+
58
+ Table 1: Comparison between an LLM-generated liability cap clause and two expert-rated clauses from ACORD. Attorney annotations (strike-through and underlined text) highlight deficiencies in the LLM output, demonstrating why retrieval of high-quality precedent clauses is essential for effective contract drafting.
59
+
60
+ We seek to evaluate how well models retrieve relevant clauses to ensure RAG can work effectively.
61
+
62
+ Clause retrieval poses several challenges. Contracts are multi-layered, with sections and subsections that can span pages, often broken into paragraphs outlining obligations (what is required) and exceptions (what is not required). Cross-references within clauses add further complexity, such as defined terms (e.g., "as defined in Section 4") or exceptions (e.g., "except as set forth in Sections 9 and 11(a)" or "notwithstanding anything to the contrary"). These references often point to sections located pages apart, yet they are crucial for understanding a clause's meaning. Additionally, whether one clause should rank higher than the others can be subjective and depends on the user's individual experience, the industry they are in, and their unique needs, among other things. This subjectivity is reflected in the annotators' disagreement rate of $21\%$.<sup>1</sup>
65
+
66
+ Datasets with expert-annotated, domain-specific benchmarks are essential for improving LLMs in clause retrieval and contract drafting. However, such datasets are rare due to the high cost of expert annotations and the confidentiality of proprietary contracts. Many existing legal contract benchmarks consist of multiple-choice Q&A formats, often derived from bar exam questions, and do not reflect the work lawyers perform, with none focusing specifically on contract drafting. To address this, we introduce ACORD, the Atticus Clause Retrieval Dataset, the first expert-annotated retrieval benchmark explicitly designed for contract drafting.
69
+
70
+ ACORD addresses the challenge of retrieving complex and heavily negotiated contract clauses, such as Limitation of Liability, Indemnification, Most Favored Nation, and Termination for Convenience. These clauses are central to contract drafting and require careful negotiation and precise language. The task is to retrieve the most relevant precedent clauses for a given query, where the output is a ranked list of top-rated clauses from a large corpus. Figure 1 illustrates this retrieval process. The dataset was developed collaboratively by seasoned lawyers, student annotators, and machine learning researchers. It comprises 114 unique queries and over 126,000 query-clause pairs, each annotated with a 1-to-5-point relevance score by experts. This rigorous annotation ensures that ACORD is a robust and reliable benchmark for evaluating retrieval models in legal contexts. The estimated annotation cost of ACORD would be over US $1 million when using the prevailing rates of $550 per hour for attorneys and $150 for non-attorneys.
71
+
72
+ ACORD provides a new framework for assessing machine learning models' ability to retrieve information critical for contract drafting tasks. We hope that ACORD enables researchers and ML practitioners to evaluate search systems properly on a nuanced legal search task and to make progress toward tackling real-world challenges in legal drafting. The dataset and software will be made publicly available on GitHub.
73
+
74
+ # 2 Related Work
75
+
76
+ Despite being one of the most common legal tasks (Pery, 2019; Bloomberg Law, 2023), contract drafting and clause retrieval have been under-studied in NLP literature due to the lack of domain-specific benchmark datasets.
77
+
78
+ # 2.1 Information Retrieval
79
+
80
+ Information retrieval (IR) is the process of locating relevant documents in response to a user query, as seen in web search tools such as Google and Bing. Historically, lexical-based retrieval has been foundational in IR (Hambarde and Proenca, 2023), with BM25 being one of the most popular ranking functions. BM25 scores document relevance based on term frequency and inverse document frequency while mitigating the impact of prevalent words (Robertson and Zaragoza, 2009). However, lexical-based retrieval methods face the "lexical gap", where reliance on exact word matches often fails to capture relevant semantic connections such as synonymy and word-order-dependent relationships (Thakur et al., 2021). To mitigate this, recent approaches have moved toward neural IR, designed to capture semantic connections beyond simple lexical matches. These IR models fall into two primary categories:
83
+
84
+ Retriever Models Queries and documents are mapped into a vector space by the models, allowing pre-computed document representations to be indexed. For example, dense retrievers based on Transformer models (Vaswani et al., 2023) have shown robust performance in many tasks such as citation prediction, argument retrieval, and question-answering (Thakur et al., 2021; Muennighoff et al., 2023). Another example is sparse embeddings, which learn token-level contextualized representations using Transformers while still supporting efficient lookup through an inverted index (Zhao et al., 2020; Formal et al., 2021).
85
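+
+ A minimal bi-encoder retrieval sketch using the sentence-transformers library (the model name and example clauses are illustrative, not the paper's exact setup):
+
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+
+ # Embed the clause corpus once, then rank clauses against a query by cosine similarity.
+ model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
+ clauses = [
+     "Either Party may terminate this Agreement at any time upon written notice.",
+     "This Agreement shall be governed by the laws of the State of New York.",
+ ]
+ clause_embeddings = model.encode(clauses, convert_to_tensor=True)  # precomputed index
+
+ query_embedding = model.encode("termination for convenience", convert_to_tensor=True)
+ hits = util.semantic_search(query_embedding, clause_embeddings, top_k=2)[0]
+ for hit in hits:
+     print(clauses[hit["corpus_id"]], hit["score"])
+ ```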
+
86
+ Neural Reranking Models These models enhance initial retrieval results (often from, e.g., a BM25 model) by reordering the returned documents. Leveraging cross-attention mechanisms in models such as BERT has significantly boosted reranking performance (Devlin et al., 2019). ColBERT (Contextualized late interaction over BERT) refines this approach by creating token-level embeddings for queries and documents, using a maximum-similarity mechanism to identify relevant documents. However, this comes with higher computational costs (Khattab and Zaharia, 2020). Recent research highlights that instruction-tuned LLMs can surpass traditional supervised cross-encoders in zero-shot passage reranking tasks (Sun et al., 2023). In particular, list-, point-, or pairwise reranking have received attention (Tang et al., 2024; Zhuang et al., 2024; Qin et al., 2024).
87
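+
+ A two-phase retrieve-then-rerank sketch with a pointwise cross-encoder (the model name is again illustrative, and the first-stage candidates would normally come from BM25 or a bi-encoder):
+
+ ```python
+ from sentence_transformers import CrossEncoder
+
+ # Score each (query, clause) pair jointly, then sort candidates by that score.
+ reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
+ query = "termination for convenience"
+ candidates = [
+     "Either Party may terminate this Agreement at any time upon written notice.",
+     "Client may terminate this Agreement on thirty (30) days written notice without cause.",
+ ]
+ scores = reranker.predict([(query, clause) for clause in candidates])  # one score per pair
+ reranked = [clause for _, clause in sorted(zip(scores, candidates), reverse=True)]
+ ```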
+
88
+ # 2.2 Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG)
89
+
90
+ Despite their advanced text generation capabilities, LLMs have not been widely adopted in contract drafting due to the risk of hallucination. Existing works (Wang et al., 2024; Dahl et al., 2024) show that the generations of LLMs often can be unreliable, untrustworthy, and risky. Many recent studies focus on developing and enhancing various RAG techniques to reduce hallucination and improve model performance in text generation (Niu et al., 2024; Magesh et al., 2024; Schwarcz et al., 2025). Given the absence of an information retrieval-specific dataset tailored to this task, the open question is whether these methodologies can effectively retrieve precedent clauses for contract drafting purposes.
91
+
92
+ # 2.3 Case Law and Statute Retrieval
93
+
94
+ There are several case law retrieval datasets, including LePARD, CLERC, COLLIE, ECtHR-PCR (Li and Harada, 2022; Hou et al., 2024; Rabelo et al., 2021; T.y.s.s. et al., 2024). However, the queries in these datasets are not constructed by experts. Instead, they are synthetically created by taking blocks of text from case law with the citations redacted. The document referred to in the redacted citation serves as the corresponding relevant document for the query-document pair. This methodology cannot be used to create a contract IR dataset because contract clauses do not inherently include references or citations that can be used to infer query-document pairs. Legal experts need to generate queries and ranked clauses.
95
+
96
+ Despite the relative abundance of case law IR datasets, case law retrieval remains an active and challenging area of research. Recent benchmarks report low baseline performance even when using forgiving metrics such as recall@1000 (CLERC) (Hou et al., 2024) or constructing queries only from text containing direct quotations from the relevant citations (LePARD) (Li and Harada, 2022).
97
+
98
+ # 2.4 Legal Contract Datasets
99
+
100
+ Several datasets have been published to facilitate AI research on legal contracts, but none address specific IR needs.
101
+
102
+ Prior datasets focus on clause extraction rather than retrieval. Chalkidis et al. (2017) and Leivaditi et al. (2020) provide datasets for extracting basic contract and lease information, not legal clauses, making them less useful for contract drafting. Hendrycks et al. (2021) introduce CUAD, a large-scale expert-annotated dataset with over 2,000 clauses across 41 categories from 510 commercial contracts. While useful for drafting, CUAD is designed for clause classification and extraction and therefore lacks ranking, an essential component of IR datasets. MAUD (Wang et al., 2023), ContractNLI (Koreeda and Manning, 2021), and LegalBench (Guha et al., 2023) focus on reading comprehension and reasoning, but would need extensive expert annotations for IR research. BigLaw Bench (Core, Workflow, and Retrieval) (Harvey AI, 2024) targets day-to-day legal tasks but is not specific to contract drafting and has an order of magnitude fewer annotations than ACORD.
105
+
106
+ Aggarwal et al. (2021); Joshi et al. (2022); Lam et al. (2023); Alonso and Samara Chatzianastasiou (2024); Kasundra and Dhankhar (2024) have, among others, explored generating contract clauses by learning from data or by applying constraints such as logic rules or similar contract context. However, they generally rely on small datasets with simple queries and clauses. What is missing is a large-scale expert-rated benchmark that reflects how lawyers search for complex clauses, such as limitation of liabilities. ACORD fills this gap by providing the first large-scale benchmark for clause retrieval with queries crafted by experienced lawyers to fit their real needs.
107
+
108
+ # 3 ACORD: A Contract Clause Retrieval Dataset
109
+
110
+ Clause Retrieval. Clause retrieval involves identifying the most relevant precedent clauses within contracts based on a given query. This process ensures that lawyers can efficiently reference and adapt precedent clauses to meet specific drafting needs.
111
+
112
+ Task Definition. ACORD frames the retrieval of clauses as an ad hoc information retrieval task. Given an attorney-written query and a corpus of clauses, search systems output a list of clauses by order of predicted relevance to the query. Performance is a function of the ordered lists (sometimes called a "run") and the attorney's ground-truth ratings.
113
+
114
+ <table><tr><td>Clause Category</td><td>Example Query</td></tr><tr><td>Limitation of Liability</td><td>a party&#x27;s liability for fraud, negligence, personal injury or tort subject to a cap</td></tr><tr><td>Indemnification</td><td>indemnification of first party claims</td></tr><tr><td>Affirmative Covenants</td><td>insurance clause</td></tr><tr><td>Restrictive Covenants</td><td>no solicit of customers</td></tr><tr><td>Term &amp; Termination</td><td>termination for convenience</td></tr><tr><td>Governing Law</td><td>clause with multiple governing laws</td></tr></table>
115
+
116
+ Table 2: A subset of the 114 unique queries created by legal experts to address diverse contract drafting scenarios. Each query corresponds to a specific clause category, such as "Limitation of Liability." The full list of the clause categories and queries is in Appendix C.3.
117
+
118
+ We measure performance with established metrics, such as the standard NDCG score and a task-specific normalized precision score.
119
+
120
+ Queries and Clauses. The dataset includes 114 unique queries written by legal experts to address diverse drafting requirements. Each query targets one or more legal concepts across 9 clause categories. Sample queries are shown in Table 2, with a complete list provided in Appendix C.3.
121
+
122
+ Query-Clause Pair Score. Each query-clause pair is assigned a 1-5 point relevance score. Clauses rated 3-5 stars are relevant, 2-star clauses are not relevant but helpful (from the same category, e.g., indemnification), and 1-star clauses are irrelevant. A sample-scored query-clause pair is shown in Table B-1, and the annotation rubrics are in Table A-1.
123
+
124
+ For each query, annotators aim to find 10 relevant clauses (3-5 stars) and 20 2-star clauses. Some queries have fewer than 10 relevant clauses, but all include at least 5. To avoid false negatives, 1-star clauses are selected from expert-annotated clauses in CUAD (Hendrycks et al., 2021), and 2-5 star Limitation of Liability and Indemnification clauses serve as 1-star for other categories. Though 1-star clauses are not individually rated, they are irrelevant due to being from distinct categories. Table C-2 shows the statistics of rated clauses, and Table C-3 lists the number of clauses by rating for each category.
125
+
126
+ Since most queries have fewer than five 5-star or 4-star clauses, x-star precision@5 scores are normalized to a 0-1 scale to adjust for the limited availability of high-rated clauses, as explained in Appendix F.1. When calculating the precision, the scores are rounded: 4.666 rounds up to 5, 4.333 rounds down to 4, etc.
127
+
128
+ Contract Corpus and Clause Corpus. About 400 of the contracts are from CUAD (Hendrycks et al., 2021), which was sourced from the Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system maintained by the U.S. Securities and Exchange Commission (SEC). The remaining 50 contracts are Terms of Service (ToS) published online by selected Fortune 500 companies. We call these EDGAR contracts and online ToS the "Contract Corpus." Annotators extracted from the Contract Corpus all clauses responsive to the 9 categories, which we call the "Clause Corpus." Annotators then extracted clauses from the Clause Corpus responsive to each of the 114 queries. When extracting a clause responsive to a query, annotators include the entire subsection or section to ensure comprehensive context and understanding. As shown in Figure C-1, clauses in ACORD vary in length, with over half of the 3,000 clauses rated 2- through 5-stars having over 100 words. To assess the models' retrieval performance, ACORD simplifies the real-life task of clause retrieval, which would otherwise require the models first to extract the Clause Corpus from the Contract Corpus. The benchmark included in this paper uses the Clause Corpus only.
129
+
130
+ Data Splits We split the Clause Corpus into train, validation, and test sets. We form these splits at the query level, randomly allocating $45\%$ , $5\%$ , and $50\%$ of the queries to the train, validation, and test splits, respectively, while ensuring that at least one query from each category is represented in the test set.
131
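+
+ The sketch below shows the basic query-level split; it omits the category constraint on the test set, which the released splits additionally enforce:
+
+ ```python
+ import random
+
+ # 45% / 5% / 50% split over query ids, so every clause rated for a query
+ # stays in the same split as that query.
+ def split_queries(query_ids, seed=0):
+     ids = sorted(query_ids)
+     random.Random(seed).shuffle(ids)
+     n_train = int(0.45 * len(ids))
+     n_val = int(0.05 * len(ids))
+     return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
+ ```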
+
132
+ Data Statistics ACORD contains 114 unique queries across the 9 clause categories. It has over 126,000 query-clause pairs, each rated with a 1-5 score. See Appendix C.3 for more details of the queries and expert-rated clauses.
133
+
134
+ No False Negatives Information retrieval datasets often contain false negatives due to an inability to annotate the entire dataset; however, every clause in ACORD is annotated to avoid false negatives.
135
+
136
+ Labeling Process The annotation process followed five steps. See Appendix A.3 for additional details.
137
+
138
+ (1) Extraction: Student annotators, after receiving 5-10 hours of training, extracted relevant clauses for two contract categories from the Contract Corpus. For the remaining seven categories, we reused expert-annotated clauses from the CUAD dataset.
139
+ (2) Retrieval: Student annotators searched the Clause Corpus to retrieve relevant clauses for each query, aiming for 10 relevant (3- to 5-star) and 20 irrelevant (2-star) clauses per query. Some queries yielded only 5-8 relevant clauses.
140
+ (3) Scoring: Two experienced attorneys and one student annotator rated each query-clause pair using a detailed four-page rubric. Clauses mistakenly retrieved as relevant were assigned a 2-star rating.
141
+ (4) Reconciliation: For cases with rating discrepancies greater than 2 stars or disagreement on relevance, a panel of 3-6 experienced attorneys reviewed and adjusted scores to bring them within a 2-point range. Final scores were calculated as the average of the three individual ratings.
142
+ (5) Expansion: To augment the dataset, 1-star irrelevant clauses were added to each query using clauses from the CUAD dataset.
143
+
144
+ Data Format The ACORD dataset is released in BEIR format, with a modification to the format of the qrels files to account for our explicit 1-star ratings. See Appendix C.5 for details.
145
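+
+ For orientation, a standard BEIR qrels file is a tab-separated table of (query-id, corpus-id, score) rows, which can be read as in the sketch below; the exact column layout of ACORD's modified qrels may differ slightly (see Appendix C.5):
+
+ ```python
+ import csv
+
+ def load_qrels(path: str) -> dict:
+     """Map each query id to a {clause id: relevance score} dictionary."""
+     qrels: dict = {}
+     with open(path, newline="") as f:
+         for row in csv.DictReader(f, delimiter="\t"):
+             qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = float(row["score"])
+     return qrels
+ ```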
+
146
+ # 4 Experiments
147
+
148
+ # 4.1 Setup
149
+
150
+ Metrics We use the following five standard information retrieval metrics to measure performance: NDCG@5, NDCG@10, 5-star precision@5, 4-star precision@5, and 3-star precision@5. NDCG measures the models' ranking quality by how well they rank the most relevant clauses at the top of the search results, focusing on the top results. The x-star precision@5 metrics evaluate the precision within the top 5 results by counting how many of the top 5 results meet or exceed the respective relevance score. For NDCG, we shift the scores from 1-5 to 0-4 to ensure the computed metrics correctly reflect the quality of returned results. See Appendix F.1 for details and definitions of the performance metrics.
153
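+
+ For reference, a minimal NDCG@k implementation under the linear-gain convention described above (ratings shifted from 1-5 to 0-4, so 1-star clauses contribute nothing); the paper's exact implementation may differ:
+
+ ```python
+ import math
+
+ def ndcg_at_k(ranked_ratings, all_ratings, k=5):
+     """ranked_ratings: 1-5 ratings of the returned clauses, in ranked order.
+     all_ratings: 1-5 ratings of every clause annotated for the query."""
+     dcg = sum((r - 1) / math.log2(i + 2) for i, r in enumerate(ranked_ratings[:k]))
+     ideal = sorted(all_ratings, reverse=True)[:k]   # best possible top-k ordering
+     idcg = sum((r - 1) / math.log2(i + 2) for i, r in enumerate(ideal))
+     return dcg / idcg if idcg > 0 else 0.0
+ ```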
+
154
+ Baseline Models We evaluate several stand-alone retriever methods, including BM25 (Robertson and Zaragoza, 2009), MiniLM bi-encoder (66M parameters) (Wang et al., 2020) and OpenAI text embedding (OpenAI, 2024b). BM25 is a computationally inexpensive lexical or keyword-based retriever. Documents with terms from the query have higher scores; rarer terms in the overall corpus have higher weighting, so they contribute more to the document score. MiniLM bi-encoder is an embedding model trained via knowledge distillation from BERT on various embedding datasets, including MSMARCO.
155
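+
+ The scoring just described corresponds to the standard Okapi BM25 formula, sketched below over a pre-tokenized corpus (k1 and b are the usual free parameters; this is an illustration rather than the exact baseline implementation):
+
+ ```python
+ import math
+ from collections import Counter
+
+ def bm25_scores(query_tokens, corpus, k1=1.5, b=0.75):
+     N = len(corpus)
+     avgdl = sum(len(doc) for doc in corpus) / N
+     df = Counter(t for doc in corpus for t in set(doc))  # document frequency per term
+     scores = []
+     for doc in corpus:
+         tf = Counter(doc)
+         score = 0.0
+         for t in query_tokens:
+             if t not in tf:
+                 continue
+             idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)  # rare terms weigh more
+             score += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
+         scores.append(score)
+     return scores  # one score per clause; higher is more relevant
+ ```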
+
156
+ We evaluate two-phase retrieval-reranker systems by reranking the top 100 BM25$^{2}$ outputs with the MiniLM cross-encoder (22.7M parameters), GPT-4o-mini,$^{3}$ GPT-4o,$^{4}$ Llama-3.2-1B (1.23B parameters), and Llama-3.2-3B (3.21B parameters) (OpenAI, 2024a; Meta, 2024) models.
157
+
158
+ Fine-tuning The MiniLM cross-encoder is finetuned on the training dataset. We choose the learning rate and the number of updates via grid search on the validation NDCG@10 score. See Appendix F.2 for full training details.
159
+
160
+ # 4.2 Results
161
+
162
+ We show the results of most tested search systems on the test dataset in Table 3 with full results in Appendix D. Overall, a BM25 retriever paired with pointwise GPT-4o performs best, fine-tuned models perform better than models without fine-tuning, and embedding models perform better than lexical models. However, results vary significantly by query and model. Also, results using a pointwise instead of pairwise reranking are generally much better in our experiments; see Table D-5 for a comparison.
163
+
164
+ Performance of Methods In retrieval-only systems, the large OpenAI text embedding model performed best. BM25 or MiniLM bi-encoder with large LLM rerankers significantly outperformed other methods.
165
+
166
+ <table><tr><td>Retriever</td><td>Reranker</td><td>NDCG@5</td><td>NDCG@10</td><td>3-star prec@5</td><td>4-star prec@5</td><td>5-star prec@5</td></tr><tr><td>OpenAI embeddings (small)</td><td>None</td><td>55.1</td><td>55.8</td><td>50.5</td><td>34.7</td><td>8.3</td></tr><tr><td>OpenAI embeddings (large)</td><td>None</td><td>62.1</td><td>64.1</td><td>58.6</td><td>38.9</td><td>11.0</td></tr><tr><td rowspan="6">BM25</td><td>None</td><td>52.5</td><td>54.0</td><td>50.9</td><td>38.9</td><td>9.0</td></tr><tr><td>Cross-Encoder-MiniLM</td><td>59.3</td><td>60.9</td><td>60.0</td><td>43.5</td><td>6.2</td></tr><tr><td>GPT4o</td><td>76.9</td><td>79.7</td><td>81.1</td><td>60.0</td><td>17.2</td></tr><tr><td>GPT4o-mini</td><td>75.2</td><td>78.2</td><td>78.6</td><td>58.2</td><td>18.6</td></tr><tr><td>Llama-1B</td><td>13.8</td><td>14.4</td><td>13.0</td><td>10.5</td><td>4.1</td></tr><tr><td>Llama-3B</td><td>62.6</td><td>65.3</td><td>63.9</td><td>48.1</td><td>9.7</td></tr></table>
167
+
168
+ Table 3: Summary of the performance in % of selected retrieval models on the ACORD test dataset. The table reports NDCG@5, NDCG@10, and normalized precision@5 (for 3-star, 4-star, and 5-star clauses) aggregated across categories. We highlight the highest and second-highest metrics in bold and underlined, respectively. See Table D-3 in Appendix D.2 for results for all tested retriever methods. These results use a pointwise reranking as they are generally much better than the results using pairwise reranking in our experiments; See Table D-5.
169
+
170
+ MiniLM bi-encoder with GPT-4o reranker achieved the highest results, with an NDCG@5 of $79.1\%$ and an NDCG@10 of $81.2\%$, while BM25 with GPT-4o reranker was a close second, outperforming all other methods by a noticeable margin. Smaller LLMs performed much worse than larger ones; Llama 3.2 showed a difference of 40 percentage points in most metrics between the 1B and 3B models.
171
+
172
+ Performance after Fine-Tuning We finetuned the MiniLM cross-encoder on the training data and observed a modest increase of $2.0\%$ and $5.4\%$ in NDCG@5 and NDCG@10 scores, respectively. Meanwhile, the 3- and 4-star precisions decrease slightly, with a $5.1\%$ increase in 5-star precision. See Figure 2 for results before and after fine-tuning, and Table D-2 in Appendix D.1 for results by category. We also fine-tune the cross-encoder when paired with the bi-encoder for the initial retrievals; the results are in Table D-4.
173
+
174
+ Performance by Clause Categories and Queries We measure the performance of the models across each of the 9 clause categories and under each query and find that performance varies substantially. Some results are unexpected. For example, the query "indemnity of broad-based claims" achieved perfect scores for NDCG@5 and 3-star and 4-star precision@5 using BM25 paired with MiniLM cross-encoder, whereas "unilateral liability cap" and "as-is clause" got a zero NDCG@5 score. This is contrary to what we observed from human annotators: extracting "unilateral liability cap" and "as-is clauses" was easier than extracting relevant "indemnity of broad-based claims" clauses.
177
+
178
+ ![](images/6bc9db94c3ebe73dffed0433710dd212753b58abe48f1fdd81428f3d78562524.jpg)
179
+ Figure 2: The performance of BM25 with the MiniLM cross-encoder before and after fine-tuning. We see improvements in NDCG and 5-star precision scores with minor declines in the 3-star and 4-star precision scores. These results underscore the importance of domain-specific fine-tuning for advancing clause retrieval capabilities; however, the improvements are not large, underlining that the problem is not solved. See Table D-2 in Appendix D.1 for results for each category.
180
+
181
+ Results for each clause category can be found in Appendix D.1.
182
+
183
+ # 5 Discussion and Future Work
184
+
185
+ Strong Extraction Performance The models have demonstrated a strong understanding of legal terminology beyond simple lexical matching. For instance, in response to the query "change of control", the models effectively return clauses referencing "ownership changes" or the "sale of substantially all assets". Similarly, for the query "IP ownership assignment/transfer," the models accurately identify "work for hire" clauses. The query "indemnification of broad-based claims" generates $100\%$ NDCG@5 and 3-star and 4-star precision@5 scores, showcasing deep semantic comprehension (broad-based claims mean claims brought by a contracting party or any third party for breach of contract and tort, etc.). This contributes to the overall promising NDCG@5 and 3-star precision@5 scores, indicating the models' relative maturity in clause extraction tasks.
188
+
189
+ Poor Performance on Legal Jargon without Context Despite the strong capability to understand legal terminologies, the models perform poorly on queries of legal jargon without context. For example, the query "as-is clause" achieves a zero NDCG@5 score across several models, whereas a simple keyword search would yield near-perfect results. The query "IP ownership assignment/transfer" returns joint ownership clauses instead of straightforward ownership assignment clauses. In contrast, models perform significantly better with longer, context-rich queries. The query "IP infringement indemnity that covers trademark or copyright" achieves near-perfect scores across several models. To further test the effectiveness of more contextualized queries, we expand two queries ("as-is clause" and "change of control") into medium and long formats. We find that medium formats yield significantly better results, as shown in Appendix E. In the Supplemental Materials, we include expanded medium and long formats for each of the 114 queries. We invite the research community to experiment further using these formats as prompts.
190
+
191
+ Low Ranking Performance Models consistently fail to retrieve 4- or 5-star clauses in the top 5 results, with 4-star and 5-star precision@5 scores of only $60.0\%$ and $17.2\%$ for BM25 with GPT-4o pointwise reranking, one of the strongest methods. Many 4- or 5-star clauses are ranked beyond the top 10 or even 15. This weak ranking performance diminishes user confidence and limits the broader adoption of these models for real-world contract-related tasks. The finding is not surprising, given that there is currently no expert-annotated IR dataset for contracts, and the gap highlights the need for large-scale expert-annotated IR datasets like ACORD.
192
+
193
+ NDCG Is Not an Optimal Metric The high NDCG scores observed do not adequately reflect the quality of the models on real-life contract drafting tasks. In the context of clause retrieval, the more meaningful metrics are the 4- and 5-star precision@5 scores.
194
+
195
+ This is because ranking relevant clauses requires a high degree of legal expertise. Most users of legal IR applications may lack such expertise. Thus, presenting many 3-star clauses as high-quality results can mislead users into relying on poorly drafted precedents, ultimately resulting in low-quality contracts.
196
+
197
+ The best method, the bi-encoder retriever with GPT-4o pointwise reranking, had an NDCG@5 score of $79.1\%$, but its 4-star and 5-star precision@5 scores were only $62.1\%$ and $17.2\%$, respectively. Moreover, $40\%$ of the queries tested have a 4-star precision@5 score below $50\%$. With ACORD, we hope the research community can advance retrieval methodologies that prioritize clause quality.
198
+
199
+ Pointwise Outperforms Pairwise We test ACORD using pointwise and pairwise approaches on different LLMs, with the results available in Table D-5. Interestingly, the pointwise approach performs better than the pairwise approach for all tested retrieval methods, except when using the smallest Llama 3.2 model as the reranker. The same result persists even after we deploy several methodologies in our pairwise experiments, including the Pairwise Ranking Prompting of Qin et al. (2024). This result seems inconsistent with prior literature, which generally reports better results for pairwise methods (Qin et al., 2024; Zhuang et al., 2024). We encourage further research to validate and expand on our findings; the sketch below illustrates the two reranking modes.
200
+
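+ To make the distinction concrete, here is a minimal sketch of the two reranking modes, assuming a generic `complete(prompt) -> str` LLM callable; the prompt wording and the single bubble-style pairwise pass are illustrative simplifications, not the exact prompts or aggregation schemes used in the experiments above.
+
+ ```python
+ # Hypothetical sketch: `complete` is any function that sends a prompt to an
+ # LLM and returns its text response.
+ def pointwise_rerank(query, clauses, complete):
+     """Score each clause independently, then sort by score."""
+     scores = []
+     for clause in clauses:
+         prompt = (f"Query: {query}\nClause: {clause}\n"
+                   "Rate the relevance of the clause to the query from 0 "
+                   "(irrelevant) to 4 (exemplary). Answer with a single digit.")
+         scores.append(float(complete(prompt)))
+     return [c for _, c in sorted(zip(scores, clauses), key=lambda p: -p[0])]
+
+ def pairwise_rerank(query, clauses, complete):
+     """One bubble-style pass of adjacent comparisons; real pairwise schemes
+     typically aggregate many pairwise judgments."""
+     ranked = list(clauses)
+     for i in range(len(ranked) - 1):
+         prompt = (f"Query: {query}\nClause A: {ranked[i]}\n"
+                   f"Clause B: {ranked[i + 1]}\n"
+                   "Which clause is more relevant to the query? Answer A or B.")
+         if complete(prompt).strip().upper().startswith("B"):
+             ranked[i], ranked[i + 1] = ranked[i + 1], ranked[i]
+     return ranked
+ ```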
201
+ # 6 Conclusion
202
+
203
+ We introduced ACORD, the first expert-annotated clause retrieval dataset for contract drafting tasks. ACORD aims to facilitate research on clause retrieval, a foundational contract drafting task focusing on the most complex and heavily negotiated clauses in commercial contracts. ACORD includes 114 queries and over 126,000 query-clause pairs, each ranked on a scale from 1 to 5 stars. We evaluated 20 retrieval methods on ACORD and found that performance is promising but still has significant room for improvement. We also showed that model design markedly influences performance, suggesting that algorithmic improvements from the NLP community will help solve this challenge.
204
+
205
+ # 7 Limitations
206
+
207
+ ACORD is a simplified dataset in that it covers only a narrow set of clauses already extracted from contracts. For example, ACORD does not include clauses about representations and warranties, product descriptions, or payment terms. Performance will deteriorate on real-life legal tasks that require the models to first extract relevant clauses from a large number of contracts before ranking them. We plan to publish the underlying contracts to enable research and experiments on the combined extraction and retrieval tasks.
208
+
209
+ Expert annotator ratings largely depend on a lawyer's individual experience, industry, and interpretation of the query, among other things. This subjectivity is reflected in the annotators' disagreement rate of $21\%$. Although the ratings may not be exact, the relative ranking among different query-clause pairs is less uncertain.
210
+
211
+ The vast majority of commercial contracts are confidential and proprietary. ACORD uses publicly available sources, namely contracts in EDGAR and selected online terms of services. Although ACORD may not fully represent an enterprise's database of negotiated contracts, the types of queries and clauses it includes are fairly standardized, making ACORD a valuable resource for enterprise contract retrieval.
212
+
213
+ ACORD focuses primarily on English-language contracts for U.S.-based companies, which limits its applicability to contracts governed by other legal systems or written in other languages.
214
+
215
+ The queries in ACORD were generated by expert lawyers and likely do not reflect the queries that a less experienced, non-legal professional would ask. Translating these queries into plain English would require legal expertise. This highlights the importance of keeping human legal professionals in the loop when creating products using ACORD.
216
+
217
+ # References
218
+
219
+ Vinay Aggarwal, Aparna Garimella, Balaji Vasan Srinivasan, Anandhavelu N, and Rajiv Jain. 2021. ClauseRec: A Clause Recommendation Framework for AI-aided Contract Authoring. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8770-8776, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
220
+ Noguer I Alonso and Foteini Samara Chatzianastasiou. 2024. Automating Legal Contracts Using Logic Rules with Large Language Models (November 16, 2024).
221
+ Bloomberg Law. 2023. 3 out of 4 in-house counsel dissatisfied with existing contract workflow technology, per alm/bloomberg law survey. Accessed: 2024-12-13.
222
+ Ilias Chalkidis, Ion Androutsopoulos, and Achilleas Michos. 2017. Extracting contract elements. In Proceedings of the 16th edition of the International Conference on Artificial Intelligence and Law, ICAIL '17, pages 19-28, New York, NY, USA. Association for Computing Machinery.
223
+ Jonathan H. Choi, Amy B. Monahan, and Daniel Schwarcz. 2024. Lawyering in the Age of Artificial Intelligence. Minnesota Law Review, 109(1):147-218.
224
+ Matthew Dahl, Varun Magesh, Mirac Suzgun, and Daniel E Ho. 2024. Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models. Journal of Legal Analysis, 16(1):64-93.
225
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805.
226
+ Thibault Formal, Benjamin Piwowarski, and StΓ©phane Clinchant. 2021. SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking. ArXiv:2107.05720.
227
+ Neel Guha, Julian Nyarko, Daniel E. Ho, Christopher Re, Adam Chilton, Aditya Narayana, Alex Chohlas-Wood, Austin Peters, Brandon Waldon, Daniel N. Rockmore, Diego Zambrano, Dmitry Talisman, Enam Hoque, Faiz Surani, Frank Fagan, Galit Sarfaty, Gregory M. Dickinson, Haggai Porat, Jason Hegland, Jessica Wu, Joe Nudell, Joel Niklaus, John Nay, Jonathan H. Choi, Kevin Tobia, Margaret Hagan, Megan Ma, Michael Livermore, Nikon Rasumov-Rahe, Nils Holzenberger, Noam Kolt, Peter Henderson, Sean Rehaag, Sharad Goel, Shang Gao, Spencer Williams, Sunny Gandhi, Tom Zur, Varun Iyer, and Zehua Li. 2023. Legal-Bench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models. ArXiv:2308.11462.
228
+
229
+ Kailash A. Hambarde and Hugo Proenca. 2023. Information Retrieval: Recent Advances and Beyond. ArXiv:2301.08801.
230
+ Harvey AI. 2024. harveyai/biglaw-bench. Original-date: 2024-08-24T00:39:48Z.
231
+ Dan Hendrycks, Collin Burns, Anya Chen, and Spencer Ball. 2021. CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review. ArXiv:2103.06268.
232
+ David A. Hoffman and Yonathan A. Arbel. 2023. Generative Interpretation.
233
+ Abe Bohan Hou, Orion Weller, Guanghui Qin, Eugene Yang, Dawn Lawrie, Nils Holzenberger, Andrew Blair-Stanek, and Benjamin Van Durme. 2024. CLERC: A Dataset for Legal Case Retrieval and Retrieval-Augmented Analysis Generation. ArXiv:2406.17186.
234
+ Josh Howarth. 2024. Number of parameters in gpt-4 (latest data). Accessed: 2024-12-13.
235
+ Sagar Joshi, Sumanth Balaji, Ferrin Thomas, Aparna Garimella, and Vasudeva Varma. 2022. Investigating Strategies for Clause Recommendation. In Legal Knowledge and Information Systems, pages 73-82. IOS Press.
236
+ Jaykumar Kasundra and Shreyans Dhankhar. 2024. Adapting Open-Source LLMs for Contract Drafting and Analyzing Multi-Role vs. Single-Role Behavior of ChatGPT for Synthetic Data Generation. In Proceedings of the Third International Conference on AI-ML Systems, AIMLSystems '23, pages 1-8, New York, NY, USA. Association for Computing Machinery.
237
+ Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT. ArXiv:2004.12832.
238
+ Yuta Koreeda and Christopher Manning. 2021. ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1907-1919, Punta Cana, Dominican Republic. Association for Computational Linguistics.
239
+ Kwok-Yan Lam, Victor CW Cheng, and Zee Kin Yeong. 2023. Applying Large Language Models for Enhancing Contract Drafting. In LegalAIIA@ICAIL, pages 70-80.
240
+ Spyretta Leivaditi, Julien Rossi, and Evangelos Kanoulas. 2020. A Benchmark for Lease Contract Review. ArXiv:2010.10386.
241
+ Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich KΓΌttler, Mike Lewis, Wen-tau Yih, Tim RocktΓ€schel, Sebastian Riedel, and Douwe Kiela. 2021. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. ArXiv:2005.11401.
242
+
243
+ Yang Li and Tatsuya Harada. 2022. Lepard: Learning partial point cloud matching in rigid and deformable scenes. ArXiv:2111.12591.
244
+ Varun Magesh, Faiz Surani, Matthew Dahl, Mirac Suzgun, Christopher D. Manning, and Daniel E. Ho. 2024. Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools. ArXiv:2405.20362.
245
+ Lauren Martin, Nick Whitehouse, Stephanie Yiu, Lizzie Catterson, and Rivindu Perera. 2024. Better Call GPT, Comparing Large Language Models Against Lawyers. ArXiv:2401.16212.
246
+ Meta. 2024. Llama: Open and efficient foundation language models. Accessed: 2024-12-07.
247
+ Niklas Muennighoff, Nouamane Tazi, LoΓ―c Magne, and Nils Reimers. 2023. MTEB: Massive Text Embedding Benchmark. ArXiv:2210.07316.
248
+ Cheng Niu, Yuanhao Wu, Juno Zhu, Siliang Xu, Kashun Shum, Randy Zhong, Juntong Song, and Tong Zhang. 2024. RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models. ArXiv:2401.00396.
249
+ OpenAI. 2024a. Hello GPT-4o.
250
+ OpenAI. 2024b. Vector embeddings. Accessed: 2024-12-07.
251
+ Andrew Pery. 2019. Contract Analytics: The New Frontier for Gaining and Sustaining Competitive Advantage.
252
+ Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Le Yan, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, and Michael Bendersky. 2024. Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting. ArXiv:2306.17563.
253
+ Juliano Rabelo, Mi-Young Kim, Randy Goebel, Masaharu Yoshioka, Yoshinobu Kano, and Ken Satoh. 2021. COLIEE 2020: Methods for Legal Document Retrieval and Entailment. In New Frontiers in Artificial Intelligence, pages 196–210, Cham. Springer International Publishing.
254
+ Stephen Robertson and Hugo Zaragoza. 2009. The Probabilistic Relevance Framework: BM25 and Beyond. Found. Trends Inf. Retr., 3(4):333-389.
255
+ Daniel Schwarcz, Sam Manning, Patrick James Barry, David R. Cleveland, J.J. Prescott, and Beverly Rich. 2025. AI-Powered Lawyering: AI Reasoning Models, Retrieval Augmented Generation, and the Future of Legal Practice.
256
+ Weiwei Sun, Lingyong Yan, Xinyu Ma, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agents. ArXiv:2304.09542.
259
+ Raphael Tang, Xinyu Zhang, Xueguang Ma, Jimmy Lin, and Ferhan Ture. 2024. Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models. ArXiv:2310.07712.
260
+ Nandan Thakur, Nils Reimers, Andreas RΓΌcklΓ©, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models. ArXiv:2104.08663.
261
+ Santosh T.y.s.s., Rashid Haddad, and Matthias Grabmair. 2024. ECtHR-PCR: A Dataset for Precedent Understanding and Prior Case Retrieval in the European Court of Human Rights. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 5473-5483, Torino, Italia. ELRA and ICCL.
262
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2023. Attention Is All You Need. ArXiv:1706.03762.
263
+ Jiaqi Wang, Huan Zhao, Zhenyuan Yang, Peng Shu, Junhao Chen, Haobo Sun, Ruixi Liang, Shixin Li, Pengcheng Shi, Longjun Ma, Zongjia Liu, Zhengliang Liu, Tianyang Zhong, Yutong Zhang, Chong Ma, Xin Zhang, Tuo Zhang, Tianli Ding, Yudan Ren, Tianming Liu, Xi Jiang, and Shu Zhang. 2024. Legal Evaluations and Challenges of Large Language Models. ArXiv:2411.10137.
264
+ Steven H. Wang, Antoine Scardigli, Leonard Tang, Wei Chen, Dimitry Levkin, Anya Chen, Spencer Ball, Thomas Woodside, Oliver Zhang, and Dan Hendrycks. 2023. MAUD: An Expert-Annotated Legal NLP Dataset for Merger Agreement Understanding. ArXiv:2301.00876.
265
+ Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers. ArXiv:2002.10957.
266
+ Tiancheng Zhao, Xiaopeng Lu, and Kyusong Lee. 2020. SPARTA: Efficient Open-Domain Question Answering via Sparse Transformer Matching Retrieval. ArXiv:2009.13013.
267
+ Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, and Michael Bendersky. 2024. Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels. ArXiv:2310.14122.
268
+
269
+ # A Appendix - Annotation Details
270
+
271
+ <table><tr><td>Rating</td><td>Label</td><td>Description</td></tr><tr><td>1</td><td>Not relevant, not useful</td><td>The clause is not responsive to the query at all.</td></tr><tr><td>2</td><td>Not relevant, but useful</td><td>The clause is not responsive to the query but contains language that could be helpful in drafting.</td></tr><tr><td>3</td><td>Relevant, but not perfect</td><td>The clause is responsive to the query but has some defects. Such defects include, among other things, the clause: (1) is too long, (2) is missing key concepts typically seen in a real-life contract clause, (3) does not use standard legal language typically seen in a real-life contract, (4) is too one-sided and unfair, or (5) is unclear, etc.</td></tr><tr><td>4</td><td>Perfect</td><td>The clause is relevant, concise, clear, and covers all necessary elements of the query.</td></tr><tr><td>5</td><td>Exemplary</td><td>The clause meets all criteria for a 4 rating AND includes additional helpful legal concepts that may be responsive to the query.</td></tr></table>
272
+
273
+ Table A-1: ACORD Annotation rubric. Ratings range from 1 (not relevant) to 5 (exemplary), with each level defined by clear criteria.
274
+
275
+ # A.1 Annotator Instructions and Interface
276
+
277
+ Annotator Instructions Annotators were provided the five-point relevance rubric in Table A-1 and the following instructions:
278
+
279
+ Please rate each clause assigned to you on a scale of 1-5 based on the following rubric. Clauses receiving a "Yes" for Relevance should receive a 3-5 rating. Clauses receiving a "No" for Relevance should receive a 1-2 rating.
280
+
281
+ Annotation Interface Annotators completed Google Forms containing the query-clause pairs, rating each pair from 3 to 5 stars (or 2 if the clause was irrelevant). Annotators used Google Sheets to annotate the 2-star clauses. Results from the completed Google Forms were then consolidated into the master dataset.
282
+
283
+ # A.2 Annotator Recruitment and Demographics
284
+
285
+ A nonprofit organization recruited the annotators to create ACORD. The lawyer annotators are volunteers and do not receive compensation. Student annotators receive hourly compensation. All annotators were informed of the purpose of their involvement and consented to the inclusion of their annotations in ACORD.
286
+
287
+ Out of the annotators who consented to the disclosure of their demographic information, $42\%$ identify as male and $58\%$ identify as female. Except for one annotator based in the United Kingdom, all annotators are based in the United States.
288
+
289
+ # A.3 Annotator Roles and Contributions
290
+
291
+ Our annotation team consisted of 12 experienced attorneys and 10 students working under the attorneys' direct supervision following 5-10 hours of training. As detailed in Section 3 ("Labeling Process"), each clause pair in the dataset was independently annotated by three individuals: one student annotator and two experienced attorneys. In instances of disagreement, annotations were reviewed and adjudicated by a panel comprising three to six experienced lawyers to ensure consistency and legal accuracy. Annotator disagreement was tracked by the number of scores revised during this reconciliation step.
292
+
293
+ A substantial proportion of 1-star clauses in the dataset originated from data augmentation using the CUAD dataset.
294
+
295
+ All attorney contributions were provided on a pro bono basis. Student annotators received hourly compensation ranging from \$20 to \$35.
296
+
297
+ # B Appendix - Query-Clause Pair with Score
298
+
299
+ Sample of an annotated query-clause pair.
300
+
301
+ <table><tr><td>Query</td><td>Clause</td><td>Score</td></tr><tr><td>IP infringement exception to indirect damage waiver</td><td>8 INDEMNIFICATION. 8.1 By Commerce One. Commerce One shall indemnify, defend and hold harmless Corio and its Customers from any and all damages, liabilities, costs and expenses (including reasonable attorneys' fees) incurred by Corio or its Customers arising out of any claim that the software infringes any patent, copyright, trademark or trade secret right of a third party; &lt;omitted&gt; 9 LIMITATION OF LIABILITY. EXCEPT FOR LIABILITY ARISING UNDER SECTION 8 OF THIS AGREEMENT, IN NO EVENT SHALL EITHER PARTY'S LIABILITY ARISING OUT OF THIS AGREEMENT OR THE USE OR PERFORMANCE OF THE SOFTWARE EXCEED THE TOTAL AMOUNT ACTUALLY PAID BY CORIO HEREUNDER FOR THE TRANSACTION WHICH THE LIABILITY RELATES TO DURING THE TWELVE (12) MONTHS IMMEDIATELY PRIOR TO THE FILING OF THE CAUSE OF ACTION TO WHICH THE LIABILITY RELATES. EXCEPT FOR LIABILITY ARISING UNDER SECTION 8 OF THIS AGREEMENT, IN NO EVENT SHALL EITHER PARTY HAVE ANY LIABILITY TO THE OTHER PARTY FOR ANY LOST PROFITS OR COSTS OF PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES, OR FOR ANY INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES HOWEVER CAUSED AND UNDER ANY THEORY OF LIABILITY AND WHETHER OR NOT SUCH PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. THE PARTIES AGREE THAT THIS SECTION 9 REPRESENTS A REASONABLE ALLOCATION OF RISK.</td><td>4.67</td></tr></table>
302
+
303
+ Table B-1: A sample query-clause pair with a score averaged from three expert annotations.
304
+
305
+ # C Appendix - Data Details
306
+
307
+ # C.1 Licensing
308
+
309
+ ACORD is licensed under CC-BY-4.0.
310
+
311
+ # C.2 Ethics Board Review
312
+
313
+ This research project was reviewed for ethics considerations by the REDACTED FOR ANONYMITY.
314
+
315
+ # C.3 Data List and Statistics
316
+
317
+ This subsection presents details of the data. Table C-1 lists all 114 queries and the categories they belong to, while Tables C-2 and C-3 show summary statistics of the number of clauses for each query split by rating, and counts of the number of clauses for each category and rating.
318
+
319
+ <table><tr><td>No.</td><td>Clause Category</td><td>Query</td></tr><tr><td>1</td><td>Limitation of Liability</td><td>cap on liability</td></tr><tr><td>2</td><td>Limitation of Liability</td><td>liability cap is based on purchase price</td></tr><tr><td>3</td><td>Limitation of Liability</td><td>precedents for insurance coverage influencing limitation of liability</td></tr><tr><td>4</td><td>Limitation of Liability</td><td>Fix fee liability cap</td></tr><tr><td>5</td><td>Limitation of Liability</td><td>Cap on liability equals 12 months payment</td></tr><tr><td>6</td><td>Limitation of Liability</td><td>cap on liability for indirect damages</td></tr><tr><td>7</td><td>Limitation of Liability</td><td>unilateral liability cap</td></tr><tr><td>8</td><td>Limitation of Liability</td><td>mutual liability cap</td></tr><tr><td>9</td><td>Limitation of Liability</td><td>two parties having different liability caps and/or carveouts</td></tr><tr><td>10</td><td>Limitation of Liability</td><td>Cap on liability without carveouts</td></tr><tr><td>11</td><td>Limitation of Liability</td><td>liability cap carveouts</td></tr><tr><td>12</td><td>Limitation of Liability</td><td>cap on liability subject to law</td></tr><tr><td>13</td><td>Limitation of Liability</td><td>compliance with law carveout to cap on liability</td></tr><tr><td>14</td><td>Limitation of Liability</td><td>indemnification carveout to cap on liability</td></tr><tr><td>15</td><td>Limitation of Liability</td><td>IP infringement exception to cap on liability</td></tr><tr><td>16</td><td>Limitation of Liability</td><td>third party IP infringement exception to cap on liability</td></tr><tr><td>17</td><td>Limitation of Liability</td><td>liability cap clauses that exclude third party IP infringement and fraud, gross negligence or willful misconduct</td></tr><tr><td>18</td><td>Limitation of Liability</td><td>personal or bodily injury exception to liability cap</td></tr><tr><td>19</td><td>Limitation of Liability</td><td>confidentiality exceptions to liability cap</td></tr><tr><td>20</td><td>Limitation of Liability</td><td>fraud, negligence or willful misconduct carveout to liability cap</td></tr><tr><td>21</td><td>Limitation of Liability</td><td>a party's liability for fraud, negligence, personal injury or tort subject to a cap</td></tr><tr><td>22</td><td>Limitation of Liability</td><td>personal or bodily injury exception to cap on liability via indemnification carveout</td></tr><tr><td>23</td><td>Limitation of Liability</td><td>third party IP infringement exception to cap on liability via indemnification carveout</td></tr><tr><td>24</td><td>Limitation of Liability</td><td>fraud, gross negligence or willful misconduct exception to cap on liability via indemnification carveout</td></tr><tr><td>25</td><td>Limitation of Liability</td><td>seller-favorable cap on liability clauses</td></tr><tr><td>26</td><td>Limitation of Liability</td><td>buyer-favorable cap on liability clauses</td></tr><tr><td>27</td><td>Limitation of Liability</td><td>non-reliance clause</td></tr><tr><td>28</td><td>Limitation of Liability</td><td>as-is clause</td></tr><tr><td>29</td><td>Limitation of Liability</td><td>as-is clause with carveouts</td></tr><tr><td>30</td><td>Limitation of Liability</td><td>unqualified β€œas-is” clause</td></tr><tr><td>31</td><td>Limitation of Liability</td><td>warranty disclaimer clause that includes implied warranties</td></tr><tr><td>32</td><td>Limitation of Liability</td><td>warranty disclaimer clause that disclaims implied warranties of merchantability and 
fitness for a particular purpose</td></tr><tr><td>33</td><td>Limitation of Liability</td><td>waiver of implied warranty of title and non-infringement</td></tr><tr><td>34</td><td>Limitation of Liability</td><td>warranty disclaimer that does not specifically waive title and non-infringement warranties</td></tr><tr><td>35</td><td>Limitation of Liability</td><td>exclusive remedy for breach of product warranty</td></tr><tr><td>36</td><td>Limitation of Liability</td><td>product replacement, repair or refund as exclusivity remedy</td></tr><tr><td>37</td><td>Limitation of Liability</td><td>product warranty of shelf life</td></tr><tr><td>38</td><td>Limitation of Liability</td><td>product warranty around manufacturing and shipping</td></tr><tr><td>39</td><td>Limitation of Liability</td><td>customer's right for defective products</td></tr><tr><td>40</td><td>Limitation of Liability</td><td>seller-favorable warranty disclaimer clauses</td></tr><tr><td>41</td><td>Limitation of Liability</td><td>buyer-favorable warranty disclaimer clauses</td></tr><tr><td>42</td><td>Limitation of Liability</td><td>disclaimer of indirect damages</td></tr><tr><td>43</td><td>Limitation of Liability</td><td>consequential damages waiver</td></tr><tr><td>44</td><td>Limitation of Liability</td><td>incidental damages disclaimer</td></tr><tr><td>45</td><td>Limitation of Liability</td><td>disclaimer of lost profits</td></tr><tr><td>46</td><td>Limitation of Liability</td><td>disclaimer of punitive damages</td></tr><tr><td>47</td><td>Limitation of Liability</td><td>disclaimer of strict liability</td></tr><tr><td>48</td><td>Limitation of Liability</td><td>unilateral indirect damages waiver</td></tr><tr><td>49</td><td>Limitation of Liability</td><td>mutual indirect damages waiver</td></tr><tr><td>50</td><td>Limitation of Liability</td><td>indirect damage waiver is subject to law</td></tr><tr><td>51</td><td>Limitation of Liability</td><td>indemnification carveout to indirect damage waiver</td></tr><tr><td>52</td><td>Limitation of Liability</td><td>IP infringement exception to indirect damage waiver</td></tr><tr><td>53</td><td>Limitation of Liability</td><td>third party IP infringement exception to indirect damage waiver</td></tr><tr><td>54</td><td>Limitation of Liability</td><td>indirect damage waiver clauses that exclude third party IP infringement and fraud, gross negligence or willful misconduct</td></tr><tr><td>55</td><td>Limitation of Liability</td><td>personal or bodily injury exception to indirect damage waiver</td></tr><tr><td>56</td><td>Limitation of Liability</td><td>confidentiality exceptions to indirect damage waiver</td></tr><tr><td>57</td><td>Limitation of Liability</td><td>fraud, gross negligence or willful misconduct carveout to indirect damage waiver</td></tr><tr><td>58</td><td>Limitation of Liability</td><td>indirect damages waiver applies to a party's liability for fraud, negligence or personal injury</td></tr><tr><td>59</td><td>Limitation of Liability</td><td>personal or bodily injury exception to waiver of indirect damages via indemnification carveout</td></tr><tr><td>60</td><td>Limitation of Liability</td><td>first party claim exception to waiver of indirect damages</td></tr><tr><td>61</td><td>Limitation of Liability</td><td>fraud, gross negligence or willful misconduct exception to waiver of indirect damages via indemnification carveout</td></tr><tr><td>62</td><td>Limitation of Liability</td><td>seller-favorable waiver of indirect damages clauses</td></tr><tr><td>63</td><td>Limitation of 
Liability</td><td>buyer-favorable waiver of indirect damages clauses</td></tr><tr><td>64</td><td>Indemnification</td><td>indemnity or indemnification clause</td></tr><tr><td>65</td><td>Indemnification</td><td>mutual indemnification provisions</td></tr><tr><td>66</td><td>Indemnification</td><td>unilateral indemnification clause</td></tr><tr><td>67</td><td>Indemnification</td><td>indemnification of third party claims</td></tr><tr><td>68</td><td>Indemnification</td><td>fraud and/or gross negligence indemnity</td></tr><tr><td>69</td><td>Indemnification</td><td>third party IP infringement indemnity</td></tr><tr><td>70</td><td>Indemnification</td><td>indemnification of third-party claims based on breach of agreement</td></tr><tr><td>71</td><td>Indemnification</td><td>IP infringement indemnity that covers trademark or copyright</td></tr><tr><td>72</td><td>Indemnification</td><td>indemnification covers indirect claims</td></tr><tr><td>73</td><td>Indemnification</td><td>Indemnification of first party claims</td></tr><tr><td>74</td><td>Indemnification</td><td>indemnification clauses that include hold harmless</td></tr><tr><td>75</td><td>Indemnification</td><td>indemnification clauses that do not include hold harmless</td></tr><tr><td>76</td><td>Indemnification</td><td>indemnification clauses that include defending claims</td></tr><tr><td>77</td><td>Indemnification</td><td>indemnification clause that allows indemnifying party to control defenses</td></tr><tr><td>78</td><td>Indemnification</td><td>Indemnity of broad-based claims</td></tr><tr><td>79</td><td>Indemnification</td><td>Indemnified party includes affiliates</td></tr><tr><td>80</td><td>Indemnification</td><td>first party indemnification of specified claims</td></tr><tr><td>81</td><td>Indemnification</td><td>first party indemnification of broad-based claims</td></tr><tr><td>82</td><td>Indemnification</td><td>indemnification of broad-based third party claims</td></tr><tr><td>83</td><td>Indemnification</td><td>third party claim indemnity of fraud, negligence or willful misconduct</td></tr><tr><td>84</td><td>Indemnification</td><td>Third Party claim indemnity limited to use of products in compliance with agreement</td></tr><tr><td>85</td><td>Indemnification</td><td>indemnification that covers violation of law</td></tr><tr><td>86</td><td>Indemnification</td><td>seller-favorable indemnification clauses</td></tr><tr><td>87</td><td>Indemnification</td><td>buyer-favorable indemnification clauses</td></tr><tr><td>88</td><td>Affirmative Covenants</td><td>Revenue/Profit Sharing</td></tr><tr><td>89</td><td>Affirmative Covenants</td><td>Minimum Commitment</td></tr><tr><td>90</td><td>Affirmative Covenants</td><td>Audit Rights</td></tr><tr><td>91</td><td>Affirmative Covenants</td><td>Insurance</td></tr><tr><td>92</td><td>Restrictive Covenants</td><td>Most Favored Nation Clause</td></tr><tr><td>93</td><td>Restrictive Covenants</td><td>Non-compete bound by time and territory</td></tr><tr><td>94</td><td>Restrictive Covenants</td><td>Exclusivity bound by time and territory</td></tr><tr><td>95</td><td>Restrictive Covenants</td><td>No-Solicit of Customers</td></tr><tr><td>96</td><td>Restrictive Covenants</td><td>No-Solicit Of Employees not bound by time or longer than 12 months</td></tr><tr><td>97</td><td>Restrictive Covenants</td><td>Non-Disparagement</td></tr><tr><td>98</td><td>Restrictive Covenants</td><td>Rofr/Rofo/Rofn</td></tr><tr><td>99</td><td>Restrictive Covenants</td><td>Change Of Control</td></tr><tr><td>100</td><td>Restrictive 
Covenants</td><td>Anti-Assignment clause that requires notice only for assignment to affiliates</td></tr><tr><td>101</td><td>Restrictive Covenants</td><td>Covenant Not To Sue</td></tr><tr><td>102</td><td>Term</td><td>Renewal clause that requires notice to Renew</td></tr><tr><td>103</td><td>Term</td><td>Clause that requires notice to terminate auto-renew</td></tr><tr><td>104</td><td>Term</td><td>Termination for Convenience</td></tr><tr><td>105</td><td>Governing Law</td><td>New York Governing Law</td></tr><tr><td>106</td><td>Governing Law</td><td>England Governing Law</td></tr><tr><td>107</td><td>Governing Law</td><td>Clause with multiple governing laws</td></tr><tr><td>108</td><td>Governing Law</td><td>Governing Law excluding UCC or other similar regulatory frameworks</td></tr><tr><td>109</td><td>Liquidated Damages</td><td>Liquidated Damages</td></tr><tr><td>110</td><td>Third party beneficiary</td><td>Third Party Beneficiary</td></tr><tr><td>111</td><td>IP ownership/license</td><td>IP Ownership Assignment or Transfer</td></tr><tr><td>112</td><td>IP ownership/license</td><td>Joint IP Ownership</td></tr><tr><td>113</td><td>IP ownership/license</td><td>License clause covering affiliates of licensor and/or licensee</td></tr><tr><td>114</td><td>IP ownership/license</td><td>Source Code Escrow</td></tr></table>
320
+
321
+ Table C-1: The table lists the 9 clause categories and the 114 queries in ACORD.
322
+
323
+ <table><tr><td>Score</td><td>Min</td><td>Mean</td><td>Median</td><td>Std. Dev.</td><td>Max</td></tr><tr><td>1-star</td><td>395</td><td>1080.42</td><td>1293.00</td><td>383.28</td><td>1293</td></tr><tr><td>2-star</td><td>15</td><td>20.13</td><td>20.00</td><td>2.50</td><td>28</td></tr><tr><td>3-star</td><td>1</td><td>4.11</td><td>4.00</td><td>2.45</td><td>17</td></tr><tr><td>4-star</td><td>1</td><td>5.82</td><td>6.00</td><td>2.48</td><td>12</td></tr><tr><td>5-star</td><td>1</td><td>2.17</td><td>2.00</td><td>1.45</td><td>6</td></tr></table>
324
+
325
+ Table C-2: The min, mean, median, standard deviation, and max statistics of the number of rated clauses for each score. Refer to Table C-3 for the number of query-clause pairs in the data for each of the 9 clause categories.
326
+
327
+ # C.4 Clause Extraction Methodology in Clause Corpus
328
+
329
+ Most clauses are extracted at a section or subsection level, marked by a numerical header such as "Section 9" or "8(a)". If a single sentence within a section or subsection is relevant to the query, the entire section
330
+
331
+ <table><tr><td rowspan="2">Category</td><td rowspan="2">Number of Queries</td><td colspan="5">Number of ratings in the dataset</td><td rowspan="2">Total ratings</td></tr><tr><td>5-star</td><td>4-star</td><td>3-star</td><td>2-star</td><td>1-star</td></tr><tr><td>Limitation of Liability</td><td>63</td><td>87</td><td>345</td><td>200</td><td>1277</td><td>81459</td><td>83368</td></tr><tr><td>Indemnification</td><td>24</td><td>29</td><td>153</td><td>95</td><td>459</td><td>31032</td><td>31768</td></tr><tr><td>Affirmative Covenants</td><td>4</td><td>4</td><td>27</td><td>17</td><td>80</td><td>1581</td><td>1709</td></tr><tr><td>Restrictive Covenants</td><td>10</td><td>3</td><td>64</td><td>45</td><td>206</td><td>3956</td><td>4274</td></tr><tr><td>Term</td><td>3</td><td>0</td><td>20</td><td>12</td><td>66</td><td>1187</td><td>1285</td></tr><tr><td>Governing Law</td><td>4</td><td>2</td><td>19</td><td>10</td><td>82</td><td>1582</td><td>1695</td></tr><tr><td>Liquidated Damages</td><td>1</td><td>0</td><td>6</td><td>4</td><td>21</td><td>395</td><td>426</td></tr><tr><td>Third-party beneficiary</td><td>1</td><td>0</td><td>8</td><td>3</td><td>20</td><td>395</td><td>426</td></tr><tr><td>IP ownership/license</td><td>4</td><td>5</td><td>21</td><td>17</td><td>84</td><td>1581</td><td>1708</td></tr><tr><td>Total</td><td>114</td><td>130</td><td>663</td><td>403</td><td>2295</td><td>123168</td><td>126659</td></tr></table>
332
+
333
+ Table C-3: The table lists the number of queries and annotated clauses under each x-star precision@5 rating for each clause category in ACORD.
334
+
335
+ or subsection is included. Additionally, if the section or subsection references another section, such other section is also included. An <omitted> symbol is inserted between them to indicate that the two sections are not continuous in the contract. Clauses have widely varying lengths, ranging from 13 words to 1,898 words. Figure C-1 shows the distribution of 2- through 5-star clause lengths in ACORD.
336
+
337
+ ![](images/396ebd60e6715e6947ff6a33660f3ce9a14d25c6b4098a862770c69d413c809c.jpg)
338
+ Histogram of Clause Word Lengths
339
+ Figure C-1: Histogram of the number of words in 2- to 5-star clauses in the ACORD dataset. The figure also includes statistics showing that 2- to 5-star clauses average 225 words, with a median of 146 words per clause.
340
+
341
+ # C.5 Data Format
342
+
343
+ We release the ACORD dataset in BEIR format, with a modification to the format of the qrels files to account for the explicit annotation of 1-star judgments.
344
+
345
+ # C.5.1 BEIR format
346
+
347
+ In the BEIR format for ad-hoc IR datasets, the clauses and queries can be found in corpus.json and queries.json, respectively. The query-clause scores for the train, development, and test splits can be
348
+
349
+ found in qrels/train.tsv, qrels/valid.tsv, and qrels/test.tsv. As mentioned in the dataset section, lawyer judgments are one-indexed (between one and five stars), but zero-indexed scores are desirable when calculating NDCG. Therefore, in the qrels files, we subtract one from each lawyer judgment to get scores that range from zero to four.
350
+
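+ As a concrete illustration of this layout, a minimal loading sketch is shown below; it assumes BEIR-style tab-separated qrels columns (`query-id`, `corpus-id`, `score`) and the file names above, so paths and column names should be adjusted if the actual release differs.
+
+ ```python
+ import csv
+ import json
+
+ def load_qrels(path):
+     """Return {query_id: {clause_id: score}}; scores are already 0-4."""
+     qrels = {}
+     with open(path, newline="") as f:
+         for row in csv.DictReader(f, delimiter="\t"):
+             qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])
+     return qrels
+
+ qrels = load_qrels("qrels/test.tsv")
+ corpus = json.load(open("corpus.json"))    # clause texts keyed by id
+ queries = json.load(open("queries.json"))  # query texts keyed by id
+ # Add 1 to a qrels score to recover the original 1-5 star rating.
+ ```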
351
+ # C.5.2 Explicit zero-scores
352
+
353
+ We deviate from the standard BEIR format in that we explicitly, rather than implicitly, encode irrelevant query-clause pairs. In the standard BEIR format, irrelevant query-clause pairs (with a score of zero) are omitted from the qrels files, which implies their irrelevance. As with all Cranfield- or TREC-style IR datasets where unjudged query-clause pairs are treated as irrelevant, this can lead to many false negatives (Thakur et al., 2021). In the ACORD qrels files, we explicitly label irrelevant clauses with a score of zero. Omitted query-clause pairs should be ignored during evaluation because their score is unknown.
354
+
355
+ In constructing ACORD, we took care to reduce the incidence of false negatives: we explicitly label irrelevant clauses through lawyer judgments and by selecting clause sources that are known to be irrelevant to particular queries.
356
+
357
+ # D Appendix - Model Performance
358
+
359
+ The next two subsections show detailed and extended results for the model evaluation on the test data.
360
+
361
+ # D.1 Model Performance by Category
362
+
363
+ <table><tr><td>Category</td><td>NDCG@5</td><td>NDCG@10</td><td>3-star precision@5</td><td>4-star precision@5</td><td>5-star precision@5</td></tr><tr><td>Limitation of Liability</td><td>0.750</td><td>0.785</td><td>0.771</td><td>0.557</td><td>0.200</td></tr><tr><td>Indemnification</td><td>0.763</td><td>0.803</td><td>0.800</td><td>0.600</td><td>0.178</td></tr><tr><td>Affirmative Covenants</td><td>0.930</td><td>0.910</td><td>1.000</td><td>0.867</td><td>0.000</td></tr><tr><td>Restrictive Covenants</td><td>0.682</td><td>0.662</td><td>0.850</td><td>0.650</td><td>0.200</td></tr><tr><td>Term</td><td>0.912</td><td>0.936</td><td>1.000</td><td>0.600</td><td>NaN</td></tr><tr><td>Governing Law</td><td>0.805</td><td>0.868</td><td>0.800</td><td>0.500</td><td>0.200</td></tr><tr><td>Liquidated Damages</td><td>0.812</td><td>0.788</td><td>0.800</td><td>0.600</td><td>NaN</td></tr><tr><td>Third-party beneficiary</td><td>1.000</td><td>0.974</td><td>1.000</td><td>1.000</td><td>NaN</td></tr><tr><td>IP ownership/license</td><td>0.764</td><td>0.787</td><td>0.867</td><td>0.600</td><td>0.067</td></tr></table>
364
+
365
+ In this section, we break the model performance down by category to determine how the different categories influence the performance of the models. Table D-1 shows the results for BM25 with GPT4o as the reranker model. Overall, we see that 5-star precision@5 is low across all categories, particularly for "Affirmative Covenants" and "IP ownership/license", resulting in a low overall mean. However, for "Affirmative Covenants", the NDCG scores are among the highest across categories, suggesting that NDCG might not be the best metric in this setting, as discussed in Section 5. Table D-2 shows the results by category before and after fine-tuning the cross-encoder on the training data. Overall, the improvement from fine-tuning is limited, which could indicate that more data or better architectures are needed.
366
+
367
+ Table D-1: The table shows the performance by clause category when using BM25 as the retriever model paired with pointwise GPT4o for reranking. The 5-star precision@5 is Not a Number (NaN) if there are no 5-star clauses for the queries in the category. Cells show the mean over test queries in a given category, ignoring NaN values.
368
+
369
+ <table><tr><td>Category</td><td>Fine-tuned</td><td>NDCG@5</td><td>NDCG@10</td><td>3-star precision@5</td><td>4-star precision@5</td><td>5-star precision@5</td></tr><tr><td>Limitation of Liability</td><td></td><td>0.541</td><td>0.565</td><td>0.507</td><td>0.364</td><td>0.043</td></tr><tr><td>Indemnification</td><td></td><td>0.721</td><td>0.727</td><td>0.743</td><td>0.529</td><td>0.089</td></tr><tr><td>Affirmative Covenants</td><td></td><td>0.605</td><td>0.571</td><td>0.667</td><td>0.533</td><td>0.000</td></tr><tr><td>Restrictive Covenants</td><td></td><td>0.566</td><td>0.539</td><td>0.750</td><td>0.500</td><td>0.000</td></tr><tr><td>Term</td><td>No</td><td>0.558</td><td>0.684</td><td>0.400</td><td>0.200</td><td>NaN</td></tr><tr><td>Governing Law</td><td></td><td>0.517</td><td>0.544</td><td>0.600</td><td>0.500</td><td>0.200</td></tr><tr><td>Liquidated Damages</td><td></td><td>0.838</td><td>0.765</td><td>1.000</td><td>0.600</td><td>NaN</td></tr><tr><td>Third-party beneficiary</td><td></td><td>0.887</td><td>0.941</td><td>1.000</td><td>0.800</td><td>NaN</td></tr><tr><td>IP ownership/license</td><td></td><td>0.382</td><td>0.458</td><td>0.333</td><td>0.333</td><td>0.067</td></tr><tr><td>Limitation of Liability</td><td></td><td>0.642</td><td>0.694</td><td>0.598</td><td>0.448</td><td>0.138</td></tr><tr><td>Indemnification</td><td></td><td>0.678</td><td>0.735</td><td>0.638</td><td>0.500</td><td>0.081</td></tr><tr><td>Affirmative Covenants</td><td></td><td>0.373</td><td>0.403</td><td>0.444</td><td>0.222</td><td>0.133</td></tr><tr><td>Restrictive Covenants</td><td></td><td>0.565</td><td>0.509</td><td>0.717</td><td>0.567</td><td>0.067</td></tr><tr><td>Term</td><td>Yes</td><td>0.447</td><td>0.583</td><td>0.200</td><td>0.133</td><td>NaN</td></tr><tr><td>Governing Law</td><td></td><td>0.655</td><td>0.768</td><td>0.633</td><td>0.367</td><td>0.200</td></tr><tr><td>Liquidated Damages</td><td></td><td>0.245</td><td>0.328</td><td>0.333</td><td>0.133</td><td>NaN</td></tr><tr><td>Third-party beneficiary</td><td></td><td>0.841</td><td>0.859</td><td>0.933</td><td>0.533</td><td>NaN</td></tr><tr><td>IP ownership/license</td><td></td><td>0.424</td><td>0.499</td><td>0.422</td><td>0.222</td><td>0.067</td></tr></table>
370
+
371
+ Table D-2: The performance by clause category of the BM25 retriever model with the cross-encoder reranker model, both with and without fine-tuning the cross-encoder on the training data. The fine-tuning hyperparameters were tuned by maximizing the mean NDCG@5 score on the validation set. The 5-star precision@5 is Not a Number (NaN) if there are no 5-star clauses for the queries in the category. Cells show the mean over all test queries in a given category, ignoring NaN values.
372
+
373
+ <table><tr><td>Retriever</td><td>Reranker</td><td>NDCG@5</td><td>NDCG@10</td><td>3-star prec@5</td><td>4-star prec@5</td><td>5-star prec@5</td><td>Size</td></tr><tr><td>OpenAI embeddings (small)</td><td>None</td><td>0.551</td><td>0.558</td><td>0.505</td><td>0.347</td><td>0.083</td><td>N/A</td></tr><tr><td>OpenAI embeddings (large)</td><td>None</td><td>0.621</td><td>0.641</td><td>0.586</td><td>0.389</td><td>0.110</td><td>N/A</td></tr><tr><td rowspan="6">BM25</td><td>None</td><td>0.525</td><td>0.540</td><td>0.509</td><td>0.389</td><td>0.090</td><td>0M</td></tr><tr><td>Cross-Encoder-MiniLM</td><td>0.593</td><td>0.609</td><td>0.600</td><td>0.435</td><td>0.062</td><td>66M</td></tr><tr><td>GPT4o</td><td>0.769</td><td>0.797</td><td>0.811</td><td>0.600</td><td>0.172</td><td>1.8T</td></tr><tr><td>GPT4o-mini</td><td>0.752</td><td>0.782</td><td>0.786</td><td>0.582</td><td>0.186</td><td>8B</td></tr><tr><td>Llama-1B</td><td>0.138</td><td>0.144</td><td>0.130</td><td>0.105</td><td>0.041</td><td>1.23B</td></tr><tr><td>Llama-3B</td><td>0.626</td><td>0.653</td><td>0.639</td><td>0.481</td><td>0.097</td><td>3.21B</td></tr><tr><td rowspan="6">Bi-Encoder-MiniLM</td><td>None</td><td>0.571</td><td>0.572</td><td>0.498</td><td>0.358</td><td>0.076</td><td>66M</td></tr><tr><td>Cross-Encoder-MiniLM</td><td>0.601</td><td>0.610</td><td>0.586</td><td>0.428</td><td>0.069</td><td>132M</td></tr><tr><td>GPT4o</td><td>0.791</td><td>0.812</td><td>0.814</td><td>0.621</td><td>0.172</td><td>1.8T</td></tr><tr><td>GPT4o-mini</td><td>0.763</td><td>0.790</td><td>0.775</td><td>0.586</td><td>0.200</td><td>8B</td></tr><tr><td>Llama-1B</td><td>0.200</td><td>0.205</td><td>0.147</td><td>0.119</td><td>0.048</td><td>1.29B</td></tr><tr><td>Llama-3B</td><td>0.628</td><td>0.657</td><td>0.628</td><td>0.481</td><td>0.110</td><td>3.27B</td></tr></table>
374
+
375
+ # D.2 Model Performance and Size for All Tested Models
376
+
377
+ In this subsection, we show the results for all tested models using the test data. The results can be seen in Table D-3 and show that the best and second best rerankers are GPT4o and GPT4o-mini when using either BM25 or Bi-Encoder-MiniLM. For the Llama-based rerankers, we see that the 3B size model far outperforms the 1B model. Thus, the model size has a large impact both when using BM25 and when using Bi-Encoder-MiniLM as the initial retriever.
378
+
379
+ # D.3 Model Performance after Fine-tuning
380
+
381
+ We show below in Table D-4 the results of fine-tuning the cross-encoder reranker on the training data before evaluating the methods on the test data. We tested with both BM25 and the bi-encoder as the initial retriever models. Overall, fine-tuning improves the models slightly; however, the results are still worse than using GPT4o or GPT4o-mini as the rerankers, cf. Table D-3.
382
+
383
+ Table D-3: The table shows the results for all tested models on the test data. For each of the retrieval models BM25 and Bi-Encoder-MiniLM, we bold and underline the best and second-best scores, respectively. We also include the sizes of the models. For GPT4o and GPT4o-mini, the exact sizes were unknown at the time of writing, so we use estimates found online (Howarth, 2024). For the OpenAI embedding models, the exact sizes are unknown, and we did not find estimates.
384
+
385
+ <table><tr><td>Retriever</td><td>Reranker</td><td>NDCG@5</td><td>NDCG@10</td><td>3-star prec@5</td><td>4-star prec@5</td><td>5-star prec@5</td></tr><tr><td colspan="7">Not fine-tuned</td></tr><tr><td>BM25</td><td>Cross-Encoder-MiniLM</td><td>59.3</td><td>60.9</td><td>60.0</td><td>43.5</td><td>6.2</td></tr><tr><td>Bi-Encoder-MiniLM</td><td>Cross-Encoder-MiniLM</td><td>60.1</td><td>61.0</td><td>58.6</td><td>42.8</td><td>6.9</td></tr><tr><td colspan="7">Fine-tuned</td></tr><tr><td>BM25</td><td>Cross-Encoder-MiniLM</td><td>61.3 (+2.0)</td><td>66.3 (+5.4)</td><td>59.4 (-0.6)</td><td>43.3 (-0.2)</td><td>11.3 (+5.1)</td></tr><tr><td>Bi-Encoder-MiniLM</td><td>Cross-Encoder-MiniLM</td><td>64.7 (+4.6)</td><td>68.1 (+7.1)</td><td>61.6 (+3.0)</td><td>44.9 (+2.1)</td><td>12.2 (+5.3)</td></tr></table>
386
+
387
+ Table D-4: Performance in % before and after fine-tuning the cross-encoder with pointwise reranking. We write the change in each metric after fine-tuning in parentheses. We test with both BM25 and the bi-encoder as the retriever model. Overall, the results show a modest improvement in NDCG scores and 5-star precision, as well as in 3- and 4-star precisions for the bi-encoder model. However, with BM25, we see a slight decline in 3- and 4-star precisions.
388
+
389
+ # D.4 Model Performance for Pairwise and Pointwise Rerankings
390
+
391
+ We show below in Table D-5 the results when using pairwise reranking rather than pointwise reranking. We tested it with GPT4o, GPT4o-mini, Llama 3.2 1B, and Llama 3.2 3B. Overall, we see that the results are much worse when using pairwise reranking. However, Llama 3.2 1B does see a significant improvement in its metrics, but it is still worse than the other models when they use pointwise reranking.
392
+
393
+ <table><tr><td>Retriever</td><td>Reranker</td><td>NDCG@5</td><td>NDCG@10</td><td>3-star prec@5</td><td>4-star prec@5</td><td>5-star prec@5</td></tr><tr><td colspan="7">Pointwise</td></tr><tr><td rowspan="4">BM25</td><td>GPT4o</td><td>76.9</td><td>79.7</td><td>81.1</td><td>60.0</td><td>17.2</td></tr><tr><td>GPT4o-mini</td><td>75.2</td><td>78.2</td><td>78.6</td><td>58.2</td><td>18.6</td></tr><tr><td>Llama-1B</td><td>13.8</td><td>14.4</td><td>13.0</td><td>10.5</td><td>4.1</td></tr><tr><td>Llama-3B</td><td>62.6</td><td>65.3</td><td>63.9</td><td>48.1</td><td>9.7</td></tr><tr><td colspan="7">Pairwise</td></tr><tr><td rowspan="4">BM25</td><td>GPT4o</td><td>58.0</td><td>59.1</td><td>56.1</td><td>43.9</td><td>13.1</td></tr><tr><td>GPT4o-mini</td><td>57.4</td><td>58.5</td><td>55.4</td><td>43.5</td><td>12.4</td></tr><tr><td>Llama-1B</td><td>52.1</td><td>54.3</td><td>50.2</td><td>37.9</td><td>9.0</td></tr><tr><td>Llama-3B</td><td>52.8</td><td>54.4</td><td>50.5</td><td>38.9</td><td>9.0</td></tr></table>
394
+
395
+ Table D-5: Performance in % of GPT4o, GPT4o-mini, Llama 3.2 1B, and Llama 3.2 3B reranker when using either pointwise or pairwise reranking with BM25 as the retriever model. Overall, the pairwise results are much worse than those for pointwise reranking. However, Llama 3.2 1B does see a significant benefit from the more computationally heavy technique.
396
+
397
+ # E Appendix - Modifying Query Format
398
+
399
+ In this section, we present results for modifying the queries to use a medium or long format rather than the default short queries. An expert lawyer constructed the medium- and long-format queries, listed in Table E-1. Table E-2 reports the results using the different formats, demonstrating that providing more context to the models significantly improves performance. Although we only tested the "Change of Control" and "as-is" queries, we provide medium and long formats for all 114 queries in the Supplemental Materials and encourage the research community to conduct further experiments to improve the models' performance.
400
+
401
+ <table><tr><td>Query Type</td><td>Variation</td><td>Query</td></tr><tr><td rowspan="3">Change of control</td><td>Short</td><td>β€œChange Of Control”</td></tr><tr><td>Medium</td><td>β€œClause that prohibits change of control itself”</td></tr><tr><td>Long</td><td>β€œDoes one party have the right to terminate or is consent or notice required of the counterparty if such party undergoes a change of control, such as a merger, stock sale, transfer of all or substantially all of its assets or business, or assignment by operation of law?”</td></tr><tr><td rowspan="3">β€˜as-is’</td><td>Short</td><td>β€œβ€˜as-is’ clause”</td></tr><tr><td>Medium</td><td>β€œβ€˜as-is’ clause that disclaims all warranties”</td></tr><tr><td>Long</td><td>β€œIs this a clause that states that the goods or services are being provided on an β€˜as-is’ basis in their current condition, with no warranties or guarantees regarding their quality, performance, or suitability?”</td></tr></table>
402
+
403
+ Table E-1: Query variations for two query types: "Change of control" and "as-is". Each variation represents a different length or level of detail, with queries ranging from short, concise statements to long, detailed descriptions.
404
+
405
+ <table><tr><td>Model</td><td>Query Type</td><td>Variation</td><td>NDCG@5</td><td>NDCG@10</td><td>3-star precision@5</td><td>4-star precision@5</td><td>5-star precision@5</td></tr><tr><td rowspan="6">BM25</td><td rowspan="3">Change of control</td><td>Short</td><td>0.914</td><td>0.856</td><td>0.800</td><td>0.600</td><td>0.200</td></tr><tr><td>Medium</td><td>0.887</td><td>0.794</td><td>1.000</td><td>0.600</td><td>0.200</td></tr><tr><td>Long</td><td>0.470</td><td>0.470</td><td>0.200</td><td>0.200</td><td>0.200</td></tr><tr><td rowspan="3">β€˜as-is’</td><td>Short</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td></tr><tr><td>Medium</td><td>0.683</td><td>0.660</td><td>0.400</td><td>0.400</td><td>0.200</td></tr><tr><td>Long</td><td>0.284</td><td>0.297</td><td>0.200</td><td>0.200</td><td>0.000</td></tr><tr><td rowspan="6">BM25 + GPT-4o</td><td rowspan="3">Change of control</td><td>Short</td><td>0.775</td><td>0.821</td><td>1.000</td><td>0.400</td><td>0.200</td></tr><tr><td>Medium</td><td>0.449</td><td>0.561</td><td>0.600</td><td>0.000</td><td>0.000</td></tr><tr><td>Long</td><td>0.835</td><td>0.850</td><td>1.000</td><td>0.200</td><td>0.200</td></tr><tr><td rowspan="3">β€˜as-is’</td><td>Short</td><td>0.102</td><td>0.068</td><td>0.000</td><td>0.000</td><td>0.000</td></tr><tr><td>Medium</td><td>1.000</td><td>0.954</td><td>1.000</td><td>1.000</td><td>0.000</td></tr><tr><td>Long</td><td>1.000</td><td>0.917</td><td>1.000</td><td>0.800</td><td>0.000</td></tr><tr><td rowspan="6">BM25 + Cross-Encoder</td><td rowspan="3">Change of control</td><td>Short</td><td>0.832</td><td>0.770</td><td>0.800</td><td>0.600</td><td>0.200</td></tr><tr><td>Medium</td><td>0.590</td><td>0.707</td><td>0.400</td><td>0.200</td><td>0.000</td></tr><tr><td>Long</td><td>0.629</td><td>0.717</td><td>0.400</td><td>0.400</td><td>0.000</td></tr><tr><td rowspan="3">β€˜as-is’</td><td>Short</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.000</td></tr><tr><td>Medium</td><td>0.887</td><td>0.794</td><td>1.000</td><td>0.600</td><td>0.200</td></tr><tr><td>Long</td><td>0.518</td><td>0.641</td><td>0.600</td><td>0.200</td><td>0.000</td></tr></table>
406
+
407
+ Table E-2: Comparison of retrieval performance across different query variations, categories, and models. The table reports five quality metrics (NDCG@5, NDCG@10, 3-star precision@5, 4-star precision@5, and 5-star precision@5) for each model (BM25, BM25 + GPT-4o, and BM25 + Cross-Encoder) evaluated on the three formats of the "Change of Control" and "as-is" queries shown in Table E-1. The more detailed medium and long formats generally give better results; however, the long format is not always an improvement over the medium format.
408
+
409
+ # F Appendix - Experiment and Evaluation Details
410
+
411
+ # F.1 Performance Metrics
412
+
413
+ Performance is measured using standard metrics like NDCG@5, NDCG@10, 5-star-precision@5, 4-star-precision@5, and 3-star-precision@5. When computing the metrics, the scores are labeled from 0 to 4 rather than 1 to 5 to ensure irrelevant results (1-star clauses) are weighted appropriately.
414
+
415
+ The NDCG@k measures the normalized discounted cumulative gain of the top k returned results. We use the standard definition of NDCG@k seen in (1).
416
+
417
+ $$
418
+ \mathrm{DCG}@k = \sum_{i=1}^{k} \frac{\mathrm{rel}_{i}}{\log_{2}(i + 1)},
419
+ $$
420
+
421
+ $$
422
+ \mathrm{IDCG}@k = \sum_{i=1}^{|REL_{k}|} \frac{\mathrm{rel}_{i}}{\log_{2}(i + 1)},
423
+ $$
424
+
425
+ $$
426
+ \mathrm{NDCG}@k = \frac{\mathrm{DCG}@k}{\mathrm{IDCG}@k}, \tag{1}
427
+ $$
428
+
429
+ where $rel_{i}$ is the graded relevance of the document at position $i$, and $REL_{k}$ is the list of relevant documents sorted by decreasing relevance (the ideal ranking), truncated at position $k$. We use the pytrec_eval library to calculate the NDCG scores.
430
+
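+ As a sketch of how these scores can be computed, the snippet below wraps pytrec_eval, assuming qrels and run dictionaries of the shape {query_id: {doc_id: score}}; following Appendix C.5.2, unjudged query-clause pairs are dropped from the run rather than treated as irrelevant.
+
+ ```python
+ import pytrec_eval
+
+ def ndcg_scores(qrels, run, ks=(5, 10)):
+     # Ignore unjudged pairs: keep only documents with an explicit judgment.
+     judged_run = {
+         qid: {doc: s for doc, s in docs.items() if doc in qrels.get(qid, {})}
+         for qid, docs in run.items()
+     }
+     evaluator = pytrec_eval.RelevanceEvaluator(
+         qrels, {f"ndcg_cut.{k}" for k in ks})
+     per_query = evaluator.evaluate(judged_run)  # {qid: {"ndcg_cut_5": ...}}
+     return {f"NDCG@{k}":
+             sum(r[f"ndcg_cut_{k}"] for r in per_query.values()) / len(per_query)
+             for k in ks}
+ ```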
431
+ The k-star precision@5 metric counts the number of returned clauses in the top 5 results rated $\geq k$. If there are fewer than 5 viable clauses for a query, i.e., clauses with ratings $\geq k$, then we normalize by the number of viable clauses. This ensures the precision metrics always range from 0 to 1. Formally, k-star precision@5 is defined in (2).
432
+
433
+ $$
434
+ k\text{-star precision}@5 = \frac{\sum_{i=1}^{5} \mathbb{1}\left(\mathrm{rel}_{i} \geq k\right)}{\min\left(5, \sum_{i} \mathbb{1}\left(\mathrm{rel}_{i} \geq k\right)\right)}, \tag{2}
435
+ $$
436
+
437
+ where $\mathbb{1}$ is the indicator function so $\mathbb{1}(rel_i\geq k)$ is 1 if $rel_{i}\geq k$ and 0 otherwise.
438
+
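+ A direct implementation of equation (2) is sketched below, assuming `ranked` holds the star ratings of the returned clauses in rank order and `all_rels` the ratings of every judged clause for the query; the NaN convention matches the table captions in Appendix D.
+
+ ```python
+ def k_star_precision_at_5(ranked, all_rels, k):
+     hits = sum(1 for rel in ranked[:5] if rel >= k)   # numerator of (2)
+     viable = sum(1 for rel in all_rels if rel >= k)   # clauses rated >= k
+     if viable == 0:
+         return float("nan")  # no k-star clauses exist for this query
+     return hits / min(5, viable)
+ ```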
439
+ # F.2 Fine-tuning Experiments
440
+
441
+ We perform a grid search over the learning rate in $\{5 \times 10^{-5}, 1 \times 10^{-4}, 3 \times 10^{-4}, 5 \times 10^{-4}, 1 \times 10^{-3}\}$ and number of updates in $\{100, 200, 400, 600, 800, 1000, 1500, 2000, 2500, 3000\}$ , choosing the hyperparameters with the best average validation NDCG@10 score over three runs.
442
+
443
+ We oversampled the data so that an equal number of examples of each relevance appear in the training data. We use a batch size of 64 and the AdamW optimizer with a weight decay of 0.01. We use cross-entropy loss, where the probability targets are the normalized relevance scores. For example, a one-star clause corresponds to a target of $p = 0.0$ , and a four-star clause corresponds to a target of $p = 0.75$ .
444
+
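+ One way to realize this objective is sketched below, assuming the cross-encoder emits a single relevance logit per query-clause pair; treating the normalized star rating as a soft target for binary cross-entropy is an implementation choice consistent with the description above, not necessarily the exact code used.
+
+ ```python
+ import torch
+
+ def soft_target_loss(logits, stars):
+     """Cross-entropy against normalized ratings: 1-star -> 0.0, 5-star -> 1.0."""
+     targets = (stars.float() - 1.0) / 4.0  # e.g., a 4-star clause -> 0.75
+     return torch.nn.functional.binary_cross_entropy_with_logits(logits, targets)
+
+ # optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)
+ ```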
445
+ # F.3 Compute
446
+
447
+ Fine-tuning experiments and Llama evaluations were performed on an NVIDIA RTX A6000 GPU. For the fine-tuning experiments, the full grid search took about three GPU hours. Evaluation of the bi-encoder and cross-encoder MiniLM models took less than five minutes per run. Llama pointwise evaluations took about thirty minutes per model, and pairwise evaluations took about an hour per model.
acordanexpertannotatedretrievaldatasetforlegalcontractdrafting/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:84674ffd0880ef5340e42193450e6d699ab180fce5fb36995f9b65b5578e4ca8
3
+ size 1981707
acordanexpertannotatedretrievaldatasetforlegalcontractdrafting/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2c0e6de82a1fe6e8db6b6e7d6252cc915e2fd52a50010ad0d0c54b882d6dcead
3
+ size 483161
acquisitionandapplicationofnovelknowledgeinlargelanguagemodels/27146bfa-49fd-45d6-963d-8cea3d114dbd_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:17d3bdf69983be29d32108c9342b3bd879a30775265e05752f6ef95ae52da2ca
3
+ size 139355
acquisitionandapplicationofnovelknowledgeinlargelanguagemodels/27146bfa-49fd-45d6-963d-8cea3d114dbd_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e5e14fd3e564e954d8b756bbf80681c2a0fc521387a9329a8a45b67fd456c646
3
+ size 160362
acquisitionandapplicationofnovelknowledgeinlargelanguagemodels/27146bfa-49fd-45d6-963d-8cea3d114dbd_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fba400b1d561a97f740ae97fa9d77378572dd5c3ae398852ad5268ce64d639a4
3
+ size 860478
acquisitionandapplicationofnovelknowledgeinlargelanguagemodels/full.md ADDED
@@ -0,0 +1,497 @@
1
+ # Acquisition and Application of Novel Knowledge in Large Language Models
2
+
3
+ Ziyu Shang $^{1*}$ , Jianghan Liu $^{2*}$ , Zhizhao Luo $^{4*}$ , Peng Wang $^{1,3\dagger}$ , Wenjun Ke $^{1,3}$ , Jiajun Liu $^{1}$ , Zijie Xu $^{1}$ , Guozheng Li $^{1}$
4
+
5
+ $^{1}$ School of Computer Science and Engineering, Southeast University, China
6
+
7
+ $^{2}$ College of Software Engineering, Southeast University, China
8
+
9
+ $^{3}$ Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China
10
+
11
+ $^{4}$ Beijing Institute of Technology, Zhuhai, China
12
+
13
+ {ziyus1999, liujianghan, pwang, kewenjun, jiajliu, zijieuxu, gzli}@seu.edu.cn
14
+ zzluo@bit.edu.cn
15
+
16
+ # Abstract
17
+
18
+ Recent advancements in large language models (LLMs) have demonstrated their impressive generative capabilities, primarily due to their extensive parameterization, which enables them to encode vast knowledge. However, effectively integrating new knowledge into LLMs remains a major challenge. Current research typically first constructs novel knowledge datasets and then injects this knowledge into LLMs through various techniques. However, existing methods for constructing new datasets either rely on timestamps, which lack rigor, or synthesize data from simple templates, which do not accurately reflect the real world. To address this issue, we propose a novel knowledge dataset construction approach that simulates biological evolution using knowledge graphs to generate synthetic entities with diverse attributes, resulting in a dataset, NovelHuman. We then evaluate existing training strategies and knowledge augmentation methods on NovelHuman. Systematic analysis on NovelHuman reveals that the intra-sentence position of knowledge significantly affects the acquisition of knowledge. Therefore, we introduce intra-sentence permutation to enhance knowledge acquisition. Furthermore, given that potential conflicts exist between autoregressive (AR) training objectives and permutation-based learning, we propose PermAR, a permutation-based language modeling framework for AR models. PermAR seamlessly integrates with mainstream AR architectures, endowing them with bidirectional knowledge acquisition capabilities. Extensive experiments demonstrate the superiority of PermAR<sup>1</sup>, outperforming knowledge augmentation methods by $3.3\% - 38\%$ .
19
+
20
+ # 1 Introduction
21
+
22
+ Recently, LLMs (OpenAI, 2023; Touvron et al., 2023) have gained widespread attention: trained on massive corpora, they acquire vast factual knowledge and demonstrate remarkable capabilities in generating high-quality text (Gatt and Krahmer, 2018), in-context learning (Brown et al., 2020), and following complex instructions (Ouyang et al., 2022). However, the parametric knowledge of LLMs is constrained to their pre-training corpus, which predominantly comes from public resources like Wikipedia (Lemmerich et al., 2019), Github (GitHub, 2025), and CommonCrawl (Raffel et al., 2020), covering only a specific period. This limitation prevents LLMs from accessing other information beyond their training corpus. Consequently, efficiently integrating continuously updated knowledge into LLMs has emerged as a crucial task (Jiang et al., 2024b; Saito et al., 2025; Allen-Zhu and Li, 2024; Shi et al., 2025).
23
+
24
+ Although recent works have attempted to construct novel knowledge datasets that LLMs have not encountered during pre-training and proposed various training strategies to inject unseen knowledge, Jiang et al. found that LLMs can correctly answer nearly $10\%$ of questions about knowledge that appeared on Wikipedia after their pre-training, even without external information. This finding underscores a key limitation in the construction of novel knowledge datasets: selecting new knowledge based on timestamps is inherently unreliable. First, there is no guarantee that the collected data did not appear in the LLMs' pre-training corpus. Knowledge on the internet is widely distributed, and the accessible information may simply be a post-processed or organized version of existing content, i.e., this novel knowledge could have been collected in pre-training (Tirumala et al., 2024). This suggests that such datasets may
25
+
26
+ ![](images/f397b6d75f152ba2960698c8df5f90b29eb56aeeaec52c9ab6cad94dd018c717.jpg)
27
+ Figure 1: Illustration of the same new knowledge under different training methods: (a) AR models predict left-to-right. (b) Intra-sentence permutation with AR objectives cannot tackle different ground truths for the same prefix. (c) The PermAR framework adds position-aware embeddings for permutations, enabling different ground truth predictions.
28
+
29
+ primarily reactivate a model's internal memory rather than facilitate genuine learning or application of novel knowledge. Second, the training data used by different LLMs varies significantly, with inconsistent cut-off dates, thereby requiring substantial human effort to collect novel knowledge. Additionally, existing work (Allen-Zhu and Li, 2024) generates synthetic novel knowledge using template-based methods, which results in the synthetic dataset being overly simplistic and failing to reflect real-world complexity.
30
+
31
+ To tackle the above challenges, inspired by the process of biological evolution producing new species, we propose a novel knowledge construction method involving inheritance, mutation, and expansion operations based on existing large-scale knowledge graphs (KGs) (Dong et al., 2014), aiming to generate knowledge that is novel for all LLMs. We synthesize a large-scale dataset of human subjects called NovelHuman, which contains 8,507 human subjects and 143K triples. For each subject, we convert its corresponding triples into linguistically fluent natural language text passages with the help of a more advanced LLM, and finally collect more than 144K questions.
32
+
33
+ Additionally, recent works (Allen-Zhu and Li, 2024; Saito et al., 2025) have revealed that LLMs' ability to master knowledge declines as the sentence containing the knowledge appears later in the document, i.e., inter-sentence sensitivity. To address this issue, previous research has explored
34
+
35
+ various knowledge augmentation strategies, such as sentence permutation and rewriting, which have been shown to enhance knowledge acquisition. However, this finding often relies on relatively simple knowledge datasets, where each sentence contains only a single piece of knowledge, limiting the ability to fully assess the generalizability of methods. To this end, we evaluate existing methods on NovelHuman. Beyond prior observations, we find that LLMs exhibit not only inter-sentence sensitivity but also intra-sentence sensitivity, meaning their ability to master knowledge varies even within different positions of the same sentence.
36
+
37
+ A straightforward alternative solution is to adapt existing inter-sentence permutation strategies into intra-sentence permutation. However, we find that intra-sentence permutation inherently conflicts with mainstream autoregressive (AR) modeling. Mainstream LLMs typically rely on an AR architecture (Radford, 2018), which employs a strict left-to-right processing mechanism during both training and inference to facilitate information memorization and generation. In certain scenarios, simply applying intra-sentence permutation is not sufficient to achieve the desired goal under the standard AR objective, as shown in Figure 1. Under standard AR modeling, intra-sentence permutation leads to the issue illustrated in Figure 1 (b): given the same prefix tokens, the model generates identical feature representations and predictions, even when the corresponding ground truth differs. Although XLNet (Yang et al., 2019) mitigates this problem by introducing a two-stream attention mechanism, it requires modifying the model architecture and retraining LLMs, making it highly costly.
38
+
39
+ To address this issue, we build upon the AR structure and propose the permutation language modeling framework for AR, PermAR, as shown in Figure 1 (c). PermAR introduces minimal modifications by learning a single position embedding for the next token's original position, enabling accurate prediction of various permutations, even when the prefix remains unchanged. Meanwhile, PermAR maximizes the expected likelihood over all possible inter-sentence and intra-sentence permutations, facilitating bidirectional contextual learning within the AR transformer while enhancing the model's ability to comprehend knowledge across different positions. Additionally, we propose a permutation annealing training strategy, which gradually restores LLMs from fully permutation factorizations to the original natural language sequence
40
+
41
+ order, allowing LLMs to reconstruct fragmented knowledge points into a coherent knowledge representation. Extensive experiments and ablation studies demonstrate the superiority and adaptability of PermAR, outperforming knowledge augmentation methods by $3.3\% - 38\%$ .
42
+
43
+ # 2 Building Dataset for Continual Knowledge Acquisition
44
+
45
+ Drawing inspiration from the organization of knowledge on Wikipedia, where each page introduces a subject whose distilled knowledge can be abstracted into KGs, the generation of new knowledge can be seen as the expansion of KGs, i.e., the creation of new subjects. To obtain reasonable and diverse new subjects, we propose three operations that mimic biological evolution: inheritance, mutation, and expansion.
46
+
47
+ Suppose a specific KG $\mathcal{G} = \{\mathcal{G}_f,\mathcal{G}_o\}$ is given as the benchmark, where $\mathcal{G}_f = \{\mathcal{V}_f,\mathcal{R}_f,\mathcal{T}_f\}$ and $\mathcal{G}_o = \{\mathcal{V}_o,\mathcal{R}_o,\mathcal{T}_o\}$ stand for the instance and ontology sub-graph, while $\mathcal{V}_{\sim},\mathcal{R}_{\sim}$ , and $\mathcal{T}_{\sim}(\sim \in \{f,o\})$ denote the sets of entities, relations, and triples, respectively. Moreover, $\mathcal{T}_f = \{(h_f,r_f,t_f)|h_f,t_f\in \mathcal{V}_f,r_f\in \mathcal{R}_f\}$ , $\mathcal{T}_o = \{(h_o,r_o,t_o)|h_o,t_o\in \mathcal{V}_o,r_o\in \mathcal{R}_o\}$ , where $h_{\sim},r_{\sim}$ , and $t_{\sim}(\sim \in \{f,o\})$ denote the subject, predicate, object of a triple, respectively. New subjects $h_f^{new}$ can be categorized into two types: (1) Novel at the instance level while the ontology remains unchanged. (2) Novel both at the instance and ontology levels. To construct a reasonable new subject, we focus solely on the first type, i.e., $h_f^{new}\notin \mathcal{V}_f,h_o^{new}\in \mathcal{V}_o$ .
48
+
49
+ To create a specific new subject within a given ontology, it is necessary to establish connections with existing subjects (inheritance). This involves linking the new subject to the existing ontology, ensuring it inherits certain fundamental characteristics. To distinguish this new subject from its predecessors, it is essential to endow it with unique attribute values (mutation). Finally, we expand the existing attribute values of the current new subject to achieve diversity (expansion).
50
+
51
+ Inheritance Specifically, we begin by randomly selecting an ontology $h_{o}^{I} \in \mathcal{V}_{o}$ , to which the new subject will belong. To mimic sexual reproduction in biological evolution, we randomly select two existing, distinct subjects as the dad $h_{f}^{D}$ and the mom $h_{f}^{M}$ from $\{h_{f} \mid h_{f} \in \mathcal{V}_{f} \wedge h_{o} = h_{o}^{I}\}$ . The new subject $h_{f}^{new}$ inherits all attributes from $h_{f}^{D}$ and
52
+
53
+ $h_f^M$ , merging values that share the same relation. For instance, in the case of humans, both parents have a birth date, and we consolidate these dates, allowing the new subject to have two objects for the birth date.
54
+
55
+ Mutation Next, we introduce mutation to predetermine the important attributes $r_f^I$ for the new subject, such as the birth date for humans, by selecting a random date between the birth dates of $h_f^D$ and $h_f^M$ .
56
+
57
+ Expansion Subsequently, to enhance the diversity of attributes for $h_f^{new}$ , we introduce an anchor relation, $r_f^a \in \mathcal{R}_f$ , based on prior knowledge. For example, for human entities, $r_f^a$ could represent a relation such as professions. We first obtain two candidate expansion subject sets $n^* = \{h_f^{add} \mid h_f^{add} \in \mathcal{V}_f \text{ and } h_o^{add} = h_o^I \text{ and } (h_f^*, r_f^a, t) \in \mathcal{G}_f \text{ and } (h_f^{add}, r_f^a, t) \in \mathcal{G}_f\}$ , where $* \in \{\mathrm{D}, \mathrm{M}\}$ , corresponding to the dad subject $h_f^D$ and mom subject $h_f^M$ , respectively. Then, to minimize potential conflicts, it is essential to filter the candidate expansion subject sets by integrating prior knowledge and important relations of the new subject. For example, in the case of humans, a person cannot participate in events that occurred before the birth date. Therefore, we apply this logical constraint to filter two candidate expansion sets, obtaining $n_f^* = \mathrm{RULE}(n^*)$ , where RULE is the constraint function of important relations. Ultimately, the structured triple set of $T_{h_f^{new}}$ corresponding to $h_f^{new}$ can be obtained through the process of inheriting and mutating attributes from the refined candidate expansion subject sets. Given the rich and complex attributes of human entities, we synthesize a large number of novel human entities. The detailed construction process of these novel triples is presented in Appendix A.1.
58
+
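+ A toy sketch of the inheritance and mutation steps (expansion is omitted; the attribute dictionaries and integer birth years are our simplifications, not the paper's data model):
+
+ ```python
+ import random
+
+ def evolve_subject(dad, mom, birth_key="date of birth"):
+     """dad/mom: dicts mapping a relation to a set of values; birth years are
+     stored as integers so a mutated value can be sampled between them."""
+     child = {}
+     for rel in set(dad) | set(mom):
+         # inheritance: merge both parents' values under the same relation
+         child[rel] = set(dad.get(rel, ())) | set(mom.get(rel, ()))
+     # mutation: the important attribute is sampled between the parents' values
+     lo, hi = sorted((min(dad[birth_key]), min(mom[birth_key])))
+     child[birth_key] = {random.randint(lo, hi)}
+     return child
+ ```
+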
59
+ Knowledge & general question generation The process of generating new entities and their attributes can be summarized as assigning them as many diverse attributes and values as possible, in a logically consistent manner. However, during the inheritance, mutation, and expansion, we identified triple conflicts, i.e., cases where, although the knowledge we construct is novel, the attribute values of the new entity do not align with the logic of the real world. To address this, we employ more powerful LLMs, such as GPT-4, to perform consistency checks. Subsequently, to obtain natural language text corresponding to the triple set $\mathcal{T}_{h_f^{new}}$ of $h_f^{new}$ , we harness the language generation capabilities of
60
+
61
+ advanced LLMs (GPT-4). Given a sophisticated prompt, GPT-4 can generate text in the style of Wikipedia, effectively creating novel knowledge.
62
+
63
+ To further assess whether LLMs effectively learn the constructed new knowledge, we employ a common question-answering (QA) setup for evaluation. Since the new knowledge is generated directly from triple sets, we can also derive evaluation questions from these triples. Specifically, we explain the meaning of each relation in the triples to GPT-4 and instruct it to create various question templates. However, due to the inherent hallucination phenomenon of LLMs, not all triples are accurately reflected in the generated text. To mitigate this, we further filter the triple set to identify which triples are correctly represented in the generated text. These confirmed triples are then used to fill in the question templates, ensuring that the answers to these questions are explicitly present in the generated text. Appendix D includes all the prompt templates used for knowledge and question generation.
64
+
65
+ # 2.1 Dataset Summary
66
+
67
+ Through the above steps, we construct the NovelHuman dataset, which contains 8,507 new human entities covering 435 attributes, with each entity initially associated with an average of 20 attributes. After triple consistency checking and the knowledge generation phase, the average number of attributes per entity was refined to 16. For the generated knowledge, each piece of knowledge contains an average of 410 tokens<sup>2</sup>, resulting in a total of 144,221 QA pairs, with questions averaging 15 tokens and corresponding answers averaging 4 tokens. Further detailed dataset statistics, along with the corresponding training and test set splits, can be found in Appendix A.2.
68
+
69
+ # 3 Preliminary Experiments
70
+
71
+ # 3.1 Background
72
+
73
+ We begin with an overview of autoregressive (AR) language modeling with the next-token prediction objective. Consider a discrete token sequence $\mathbf{x} = [x_{1}, x_{2}, \dots, x_{T}]$ of length $T$ . The goal of AR modeling involves maximizing the sequence likelihood through sequential token prediction, where each token is conditioned on its preceding context (Kitouni et al., 2024). The training objective seeks to optimize the joint probability distribution
74
+
75
+ by factorizing it into a product of conditional probabilities:
76
+
77
+ $$
78
+ \max_{\theta} p_{\theta}(\mathbf{x}) = \prod_{t = 1}^{T} p_{\theta}\left(x_t \mid x_{< t}\right) \tag{1}
79
+ $$
80
+
81
+ where $p_{\theta}$ represents a neural network parameterized by $\theta$ that models the token distribution and $x_{<t} = [x_1, x_2, \ldots, x_{t-1}]$ denotes the preceding token context. Currently, most mainstream LLMs follow the AR pre-training paradigm. Similarly, to align with the representations learned during pretraining, existing AR-based CPT also adheres to Equation 1.
82
+
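+ In code, maximizing Equation 1 amounts to the usual shifted next-token cross-entropy; a minimal PyTorch sketch (tensor shapes and names are our assumptions):
+
+ ```python
+ import torch.nn.functional as F
+
+ def ar_nll(logits, tokens):
+     """Teacher-forced negative log-likelihood realizing the factorization in
+     Equation 1: logits at position t are scored against the token at t + 1.
+     logits: [B, T, V]; tokens: [B, T]."""
+     pred = logits[:, :-1, :].reshape(-1, logits.size(-1))
+     target = tokens[:, 1:].reshape(-1)
+     return F.cross_entropy(pred, target)
+ ```
+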
83
+ # 3.2 Knowledge Acquisition Techniques
84
+
85
+ Building upon prior research (Allen-Zhu and Li, 2024; Jiang et al., 2024b; Saito et al., 2025), we evaluated prevalent continued pre-training, instruction-tuning, and knowledge augmentation paradigms on NovelHuman using Llama-2-7B and Llama-3-8B, including standard continued pre-training (CPT), continued pre-training with supervised fine-tuning (CPT+SFT), continued pre-training with forgetting-resistant SFT (CPT+SFT w/o F), mixed training involving both pre-training and SFT simultaneously (MT) (Allen-Zhu and Li, 2024), mixed training involving pre-training and SFT (prompt for loss computation) (Allen-Zhu and Li, 2024), human-like learning approaches for acquiring new knowledge (R&A), pre-instruction-tuning $(\mathrm{PIT}++)$ (Jiang et al., 2024b), Attn Drop, D-AR (Saito et al., 2025), and inter-sentence permutation (InterSP) (Allen-Zhu and Li, 2024). Further descriptions and hyperparameter settings of these methods are given in Appendix B.
86
+
87
+ # 3.3 Evaluation Metrics
88
+
89
+ In the evaluation process, we follow the settings of Jiang et al., where LLMs are required to generate answers for given questions using greedy decoding. Given that our questions tend to yield short and precise answers, exact match (EM) is employed as the primary metric to assess whether the answers are completely identical to the ground truth (Kwiatkowski et al., 2019). Furthermore, considering that some answers may be order-independent, we also report the recall rate (R) to measure whether the ground truth appears within LLMs' generated responses. Additionally, ROUGE-L (R-L) is used to evaluate the longest common subsequence between the LLMs' outputs
90
+
91
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Training Pattern</td><td colspan="3">Text</td><td colspan="3">Number</td><td colspan="3">Date</td><td colspan="3">All</td></tr><tr><td>EM</td><td>R</td><td>R-L</td><td>EM</td><td>R</td><td>R-L</td><td>EM</td><td>R</td><td>R-L</td><td>EM</td><td>R</td><td>R-L</td></tr><tr><td rowspan="11">Llama-2-7B</td><td>Original</td><td>0.0</td><td>1.8</td><td>0.9</td><td>0.0</td><td>0.0</td><td>0.9</td><td>0.0</td><td>2.1</td><td>0.9</td><td>0.0</td><td>1.9</td><td>0.9</td></tr><tr><td>CPT</td><td>0.0</td><td>1.5</td><td>1.5</td><td>0.0</td><td>0.0</td><td>1.6</td><td>0.0</td><td>1.5</td><td>1.5</td><td>0.0</td><td>1.5</td><td>1.5</td></tr><tr><td>CPT + SFT</td><td>20.7</td><td>22.9</td><td>26.7</td><td>4.6</td><td>4.6</td><td>4.6</td><td>32.5</td><td>54.3</td><td>63.3</td><td>21.2</td><td>26.6</td><td>29.2</td></tr><tr><td>CPT + SFT (w/o F)</td><td>26.8</td><td>28.5</td><td>32.8</td><td>5.4</td><td>5.4</td><td>5.4</td><td>40.3</td><td>61.5</td><td>69.1</td><td>27.3</td><td>32.3</td><td>35.0</td></tr><tr><td>MT</td><td>32.8</td><td>33.7</td><td>38.2</td><td>10.4</td><td>10.4</td><td>10.4</td><td>42.0</td><td>65.9</td><td>72.6</td><td>32.8</td><td>37.3</td><td>40.2</td></tr><tr><td>MT (prompt)</td><td>37.6</td><td>38.8</td><td>43.1</td><td>11.1</td><td>11.1</td><td>11.1</td><td>42.8</td><td>66.9</td><td>73.5</td><td>37.1</td><td>41.7</td><td>44.5</td></tr><tr><td>R&amp;A</td><td>17.4</td><td>20.4</td><td>23.6</td><td>3.2</td><td>3.2</td><td>3.2</td><td>14.6</td><td>40.6</td><td>52.5</td><td>16.6</td><td>22.7</td><td>25.4</td></tr><tr><td>Attn Drop</td><td>19.3</td><td>20.8</td><td>24.5</td><td>4.0</td><td>4.0</td><td>4.0</td><td>18.9</td><td>23.5</td><td>24.1</td><td>19.6</td><td>23.8</td><td>26.6</td></tr><tr><td>D-AR</td><td>23.1</td><td>25.4</td><td>28.7</td><td>4.9</td><td>4.9</td><td>4.9</td><td>34.5</td><td>62.6</td><td>70.8</td><td>25.3</td><td>28.2</td><td>31.9</td></tr><tr><td>PIT++</td><td>22.1</td><td>25.8</td><td>27.9</td><td>11.5</td><td>11.5</td><td>11.5</td><td>24.4</td><td>52.5</td><td>61.8</td><td>22.0</td><td>30.0</td><td>30.3</td></tr><tr><td>InterSP</td><td>36.5</td><td>37.6</td><td>41.8</td><td>15.8</td><td>15.8</td><td>15.8</td><td>52.8</td><td>71.6</td><td>77.1</td><td>37.3</td><td>41.5</td><td>44.0</td></tr><tr><td rowspan="11">Llama-3-8B</td><td>Original</td><td>0.0</td><td>2.2</td><td>1.2</td><td>0.0</td><td>0.0</td><td>1.2</td><td>0.0</td><td>2.0</td><td>1.1</td><td>0.0</td><td>2.1</td><td>1.2</td></tr><tr><td>CPT</td><td>0.0</td><td>2.5</td><td>1.5</td><td>0.0</td><td>0.0</td><td>1.5</td><td>0.0</td><td>2.3</td><td>1.3</td><td>0.0</td><td>2.5</td><td>1.5</td></tr><tr><td>CPT + SFT</td><td>19.3</td><td>21.9</td><td>25.1</td><td>6.8</td><td>6.8</td><td>6.8</td><td>36.1</td><td>55.1</td><td>24.3</td><td>20.4</td><td>26.0</td><td>28.0</td></tr><tr><td>CPT + SFT (w/o F)</td><td>24.8</td><td>25.6</td><td>30.2</td><td>7.5</td><td>7.5</td><td>7.5</td><td>38.2</td><td>57.7</td><td>25.8</td><td>22.8</td><td>27.9</td><td>32.4</td></tr><tr><td>MT</td><td>28.9</td><td>30.9</td><td>33.7</td><td>29.8</td><td>29.8</td><td>29.8</td><td>58.8</td><td>58.6</td><td>83.2</td><td>31.5</td><td>31.9</td><td>37.8</td></tr><tr><td>MT 
(prompt)</td><td>28.8</td><td>31.2</td><td>34.3</td><td>30.6</td><td>30.6</td><td>30.6</td><td>59.1</td><td>59.1</td><td>84.9</td><td>31.7</td><td>32.6</td><td>38.9</td></tr><tr><td>R&amp;A</td><td>22.3</td><td>26.9</td><td>27.4</td><td>23.7</td><td>23.7</td><td>23.7</td><td>50.1</td><td>63.4</td><td>75.2</td><td>25.4</td><td>30.8</td><td>36.7</td></tr><tr><td>Attn Drop</td><td>15.4</td><td>18.3</td><td>23.2</td><td>6.3</td><td>6.3</td><td>6.3</td><td>35.7</td><td>50.8</td><td>23.9</td><td>18.4</td><td>22.5</td><td>26.8</td></tr><tr><td>D-AR</td><td>20.6</td><td>23.5</td><td>26.7</td><td>7.3</td><td>7.3</td><td>7.3</td><td>37.2</td><td>56.1</td><td>24.9</td><td>21.5</td><td>26.3</td><td>30.0</td></tr><tr><td>PIT++</td><td>21.2</td><td>24.7</td><td>26.3</td><td>21.0</td><td>21.0</td><td>21.0</td><td>50.7</td><td>68.4</td><td>74.8</td><td>24.0</td><td>30.4</td><td>30.7</td></tr><tr><td>InterSP</td><td>39.4</td><td>42.1</td><td>44.6</td><td>28.3</td><td>28.3</td><td>28.3</td><td>73.1</td><td>84.2</td><td>87.3</td><td>42.2</td><td>47.6</td><td>48.0</td></tr></table>
92
+
93
+ Table 1: Preliminary experimental results. Bold numbers denote the best results. Underline numbers imply the second-best results.
94
+
95
+ and the ground truth (Lin, 2004). During evaluation, for LLMs that have not been fine-tuned with instructions, five QA pairs are provided as in-context demonstrations to ensure the output follows the specified format.
96
+
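+ For clarity, minimal sketches of the EM and recall checks (the lowercasing and stripping normalization is our assumption; ROUGE-L comes from a standard package):
+
+ ```python
+ def exact_match(pred: str, gold: str) -> float:
+     """EM: the generated answer must equal the ground truth exactly."""
+     return float(pred.strip().lower() == gold.strip().lower())
+
+ def recall_hit(pred: str, gold: str) -> float:
+     """R: the ground truth appears anywhere in the generated response."""
+     return float(gold.strip().lower() in pred.strip().lower())
+ ```
+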
97
+ # 3.4 Experimental Results
98
+
99
+ As shown in Table 1, the relatively low knowledge QA performance of the original Llama-2 and Llama-3 (0% EM across all evaluations) indicates that none of the knowledge in the test set is included in the original pre-training corpus. It can be seen that these methods struggle to acquire novel knowledge, even though the perplexity has been reduced to 1 during the continued pre-training phase. Compared to only CPT, the EM after SFT increased to $21.2\%$ and $20.4\%$ for Llama-2 and Llama-3, respectively, indicating the effectiveness of the standard paradigm and that LLMs have captured some new knowledge. Notably, integrating pre-training and fine-tuning objectives shows superior performance compared to the separated CPT+SFT pipeline. Specifically, MT and MT (prompt) consistently outperform the standard separated training paradigm. This highlights the superiority of mixed training strategies over a sequential, distinctly separated training pipeline. Among the knowledge augmentation methods, InterSP performed best, with EM
100
+
101
+ improvements of $0.2\%$ and $10.5\%$ for Llama-2 and Llama-3 over MT (prompt), respectively, highlighting the importance of sentence positioning within passages for effective knowledge learning. However, despite these improvements, current methods remain insufficient for enabling LLMs to master novel knowledge comprehensively.
102
+
103
+ # 4 Intra-sentence Permutation and PermAR Framework
104
+
105
+ Given the failures of various mainstream CPT and augmentation paradigms in new knowledge scenarios, we aim to further enhance LLMs' learning effectiveness on new knowledge. Drawing inspiration from (Golovneva et al., 2024; Guo et al., 2024), we conducted a position-based systematic analysis of CPT+SFT and InterSP to explore the relationship between how effectively LLMs learn knowledge and its position between and within sentences. Statistical results indicate that both inter-sentence and intra-sentence position have a significant influence on prediction accuracy. Specifically, the average EM of the first three knowledge points in sentences positioned differently within the passages of the test set is depicted in Figure 2 (a) and (c). It can be observed that for the same position within different sentences, the earlier a sentence appears in the passage, the more easily LLMs can learn that knowledge. Additionally, it can be
106
+
107
+ ![](images/7d3816066a6b2b2dbb48368aa1016f8c4062f06a62c2b091f24c0f7c19a29fd3.jpg)
108
+ (a)
109
+
110
+ ![](images/4299d7cd3810bc533fb5fc2bfbf763eae6ff312cec18a580c285a5f4a46649a6.jpg)
111
+ (b)
112
+
113
+ ![](images/047b6ad1db7e200f4e9651837f5d4c29be744ab5a1ec86f3f44ea6b617c868bd.jpg)
114
+ (c)
115
+
116
+ ![](images/b65758b1082d68d521b6729f21e46ad9ac77d8fdd9724940fcf455661e959a9f.jpg)
117
+ (d)
118
+ Figure 2: EM of CPT+SFT and InterSP on the test set using Llama-3-8B. (a) and (b) show the results for CPT+SFT, while (c) and (d) correspond to InterSP. (a) and (c) represent the EM scores of the $i_{th}$ ( $i = 1, 2, 3$ ) knowledge from different sentences, with the horizontal axis indicating the sentence index in passages. (b) and (d) categorize sentences based on their knowledge density, where G1-G6 represent groups with increasing numbers of knowledge points per sentence ( $G_1$ : one knowledge point, $G_6$ : six knowledge points). The right vertical axis denotes the average position of sentences within the passage for each group.
119
+
120
+ observed that applying inter-sentence permutation enhances knowledge retention across different sentences. However, Figure 2 (b) shows a distinct descending, staircase-like pattern within sentences: knowledge positioned earlier in a sentence is more easily retained, whereas knowledge appearing later remains significantly harder to grasp. Furthermore, Figure 2 (d) indicates that inter-sentence permutation alone is insufficient to bridge the accuracy gap between knowledge positioned earlier and later within the same sentence.
121
+
122
+ Therefore, to improve LLMs' ability to perceive knowledge at different positions within a sentence, a straightforward approach is to apply intra-sentence permutation alongside inter-sentence permutation (InterSP+IntraSP). Inspired by XLNet (Yang et al., 2019), InterSP+IntraSP can be viewed as a subset of permutation language modeling. Formally, the permutation language modeling objective can be expressed as an expectation over all possible factorization orders:
123
+
124
+ $$
125
+ \max_{\theta} p_{\theta}(\mathbf{x}) = \mathbb{E}_{\tau \sim \mathcal{S}_T} \left[ \prod_{t = 1}^{T} p_{\theta}(x_{\tau_t} \mid x_{\tau_{< t}}) \right] \tag{2}
126
+ $$
127
+
128
+ where $S_{T}$ denotes the symmetric group containing all $T!$ permutations of the index set $\{1,2,\dots,T\}$ , and $\tau = [\tau_1,\tau_2,\dots,\tau_T]$ denotes a randomly sampled permutation from $S_{T}$ . Here, $\tau_{t}$ indicates the token index at position $t$ in the permuted order, while $x_{\tau_{< t}} = [x_{\tau_1},x_{\tau_2},\ldots ,x_{\tau_{t - 1}}]$ encompasses all tokens preceding position $t$ in the current permutation $\tau$ .
129
+
130
+ Since the same model parameters $\theta$ are shared across all factorizations, each token $x_{i}$ encounters diverse contextual configurations during training. This exposure enables the model to learn comprehensive token interdependencies, since across all
131
+
132
+ permutations, every token $x_{i}$ is eventually conditioned on every other token $x_{j}, j \neq i$ and vice versa. Consequently, the model develops bidirectional understanding while maintaining the computational efficiency and causal structure inherent to AR modeling.
133
+
134
+ However, directly applying the permutation language modeling objective from Equation 2 to the standard AR model can fail in certain scenarios. For example, consider a sequence of length $T$ corresponding to two different permutation orders $\tau_{a} = [1,2,3,\dots ,T - 1,T]$ and $\tau_{b} = [1,2,3,\dots ,T,T - 1]$ . When predicting the second-to-last token, the standard AR model would return identical logits, despite the ground truth being different. The fundamental reason for this lies in the fact that the standard AR model, during next-token prediction, cannot incorporate the positional information of the target token.
135
+
136
+ Position-aware Instruction Embedding To enable a standard AR model to perceive differences in target positions when predicting the next token, we introduce a set of position-aware instruction embeddings to encode positional information of the next predicted token. We define a position-aware instruction embedding set $\mathbf{p} = [p_1,p_2,\dots,p_T]\in \mathbb{R}^{T\times dim}$ , where $\pmb {p}_i$ corresponds to the position-aware instruction embedding for predicting the token at position $i$ and dim is the embedding dimension. These position embeddings are integrated with the input token embedding through element-wise addition, yielding the target position-aware token embedding $\mathbf{x}_{\tau}^{p}$ :
137
+
138
+ $$
139
+ \begin{aligned} \mathbf{x}_{\tau}^{p} &= \mathbf{x}_{\tau} + \mathbf{p}_{\tau} \\ &= [\pmb{x}_{\tau_1} + \pmb{p}_{\tau_2}, \pmb{x}_{\tau_2} + \pmb{p}_{\tau_3}, \dots, \pmb{x}_{\tau_{T-1}} + \pmb{p}_{\tau_T}, \pmb{x}_{\tau_T}] \end{aligned} \tag{3}
140
+ $$
141
+
142
+ where $\mathbf{x}_{\tau}$ represents the token embeddings of the original sequence after permutation $\tau$ , and $\mathbf{p}_{\tau}$ represents the shifted position-aware instruction embeddings. Specifically, each token $x_{\tau_t}$ receives the position-aware instruction embedding $p_{\tau_{t + 1}}$ corresponding to its next permuted token's original position. The final token $x_{\tau_T}$ receives no additional position-aware instruction embedding since no subsequent prediction is required.
143
+
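+ A minimal sketch of the shift in Equation 3 (the tensor layout and names are ours):
+
+ ```python
+ import torch
+
+ def permar_inputs(tok_emb, instr_emb, tau):
+     """tok_emb: [T, d] token embeddings in original order; instr_emb: [T, d]
+     with row i holding p_i; tau: LongTensor permutation of range(T)."""
+     x = tok_emb[tau]                           # x_{tau_1}, ..., x_{tau_T}
+     out = x.clone()
+     # slot t is told the ORIGINAL position of its prediction target
+     out[:-1] = out[:-1] + instr_emb[tau[1:]]   # x_{tau_t} + p_{tau_{t+1}}
+     return out                                 # last token gets no instruction
+ ```
+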
144
+ For obtaining the position-aware instruction embedding, we design a parameter-efficient approach that learns a single shared embedding for all positions, $\mathbf{e} \in \mathbb{R}^{1 \times dim}$ , and applies rotational position encoding to incorporate positional information. Specifically, we leverage RoPE-1D (Su et al., 2024) to rotate $\mathbf{e}$ according to the coordinate of the target prediction position. The position-aware instruction embedding for predicting a token at position $t$ is computed as:
145
+
146
+ $$
147
+ \boldsymbol{p}_t = \operatorname{RoPE}(\mathbf{e}, t) \tag{4}
148
+ $$
149
+
150
+ This design offers computational efficiency by sharing parameters across all positions while maintaining distinct positional representations through rotation. Alternative implementations are feasible, including learning independent dense vectors for each position or employing trainable fusion mechanisms (e.g., learned linear combinations) in place of the additive operation in Equation 3. We provide comprehensive ablation studies comparing these design choices in Appendix C.4.
151
+
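+ A sketch of the rotation in Equation 4 (the RoPE base of 10000 and the dimension pairing are the usual conventions, assumed here):
+
+ ```python
+ import torch
+
+ def rope_rotate(e, t):
+     """Rotate the shared embedding e ([d], d even) to target position t."""
+     d = e.size(-1)
+     inv_freq = 1.0 / (10000 ** (torch.arange(0, d, 2).float() / d))
+     ang = t * inv_freq                      # one angle per dimension pair
+     e1, e2 = e[0::2], e[1::2]
+     out = torch.empty_like(e)
+     out[0::2] = e1 * torch.cos(ang) - e2 * torch.sin(ang)
+     out[1::2] = e1 * torch.sin(ang) + e2 * torch.cos(ang)
+     return out
+ ```
+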
152
+ Permutation Annealing Strategy Although the proposed PermAR framework can be seamlessly integrated into existing AR models, enabling models to effectively learn new knowledge even when faced with small-scale knowledge, the number of possible permutations for a token sequence is exceedingly large. For instance, for a token sequence of length 1024, the number of possible permutations is 1024!, which would overwhelm the model and significantly reduce training efficiency. Meanwhile, Figure 2 shows that standard AR models tend to learn knowledge better from earlier sentences of a passage, and even from earlier positions within a sentence.
153
+
154
+ Furthermore, we propose a permutation annealing strategy designed to help model reconstruct the fragmented knowledge learned during permutation training into more coherent and logically consistent knowledge, enabling a smooth transition from permutation to natural sequential order. Specifically,
155
+
156
+ we introduce a dynamic probability $r \in [0,1]$ to control whether permutation is applied to the training samples, i.e., each sample is permuted with probability $r$ . When $r = 1$ , the model undergoes full permutation training where token orders are completely permuted, while $r = 0$ corresponds to standard sequential training with preserved original token order. Formally, $r$ can be modeled as follows:
157
+
158
+ $$
159
+ r = \operatorname{clip}\left(1 - \frac{\text{epoch} - \text{start}}{\text{end} - \text{start}}, 0, 1\right) \tag{5}
160
+ $$
161
+
162
+ where $\operatorname{clip}(x, a, b) = \max(a, \min(x, b))$ , epoch denotes the current epoch during training, and start and end represent the beginning and ending epochs of the permutation annealing strategy, respectively.
163
+
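+ Equation 5 is a clipped linear decay; for example, with start = 4 and end = 8, the permutation probability is 1.0 through epoch 4, 0.5 at epoch 6, and 0.0 from epoch 8 onward:
+
+ ```python
+ def permute_prob(epoch, start, end):
+     """Permutation probability r from Equation 5 (assumes end > start)."""
+     r = 1.0 - (epoch - start) / (end - start)
+     return max(0.0, min(r, 1.0))
+ ```
+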
164
+ # 5 Experiments
165
+
166
+ # 5.1 Settings
167
+
168
+ Baselines. We further compare knowledge augmentation methods with PermAR, including intra-sentence permutation (IntraSP) and the combination of inter-sentence and intra-sentence permutation (InterSP+IntraSP). Details of the knowledge augmentation methods are given in Appendices B.3 and B.4.
169
+
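+ A rough sketch of the combined InterSP+IntraSP augmentation as described above (the sentence and chunk splitting heuristics here are ours, not the paper's exact recipe):
+
+ ```python
+ import random
+
+ def intersp_intrasp(passage, chunk_size=3):
+     """Shuffle sentence order, then shuffle multi-word chunks per sentence."""
+     sentences = [s for s in passage.split(". ") if s]
+     random.shuffle(sentences)                  # inter-sentence permutation
+     out = []
+     for s in sentences:
+         words = s.split()
+         chunks = [words[i:i + chunk_size] for i in range(0, len(words), chunk_size)]
+         random.shuffle(chunks)                 # intra-sentence permutation
+         out.append(" ".join(w for c in chunks for w in c))
+     return ". ".join(out)
+ ```
+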
170
+ # 5.2 Main Results
171
+
172
+ The experimental results are shown in Table 2, from which we can draw the following conclusions: (1) Knowledge augmentation is essential for enabling LLMs to learn novel knowledge to some extent. Compared to the best non-augmented method, MT (prompt), the best augmentation-based method, InterSP+IntraSP, improves EM by $34.9\% / 22.2\%$ (Llama-2/Llama-3), demonstrating the effectiveness of permutation patterns in enhancing knowledge learning. (2) A combination of InterSP and IntraSP is necessary, as neither alone is sufficient. InterSP+IntraSP outperforms single augmentation by $15.9\% / 5.8\%$ . Additionally, IntraSP proves more critical for complex knowledge, surpassing InterSP alone by $18.8\% / 5.9\%$ . (3) PermAR effectively mitigates conflicts between permutation-based knowledge augmentation and the AR objective, significantly improving knowledge comprehension at different positions. It enhances EM by $3.3\% / 9.8\%$ compared to the best augmentation method.
173
+
174
+ Furthermore, we visualize the impact of knowledge augmentation and PermAR in Figure 3, revealing: (1) Knowledge augmentation methods
175
+
176
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Method</td><td colspan="3">Text</td><td colspan="3">Number</td><td colspan="3">Date</td><td colspan="3">All</td></tr><tr><td>EM</td><td>R</td><td>R-L</td><td>EM</td><td>R</td><td>R-L</td><td>EM</td><td>R</td><td>R-L</td><td>EM</td><td>R</td><td>R-L</td></tr><tr><td rowspan="4">Llama-2-7B</td><td>InterSP</td><td>36.5</td><td>37.6</td><td>41.8</td><td>15.8</td><td>15.8</td><td>15.8</td><td>52.8</td><td>71.6</td><td>77.1</td><td>37.3</td><td>41.5</td><td>44.0</td></tr><tr><td>IntraSP</td><td>55.8</td><td>56.5</td><td>59.5</td><td>36.5</td><td>36.5</td><td>36.5</td><td>67.5</td><td>80.6</td><td>84.3</td><td>56.1</td><td>59.2</td><td>60.8</td></tr><tr><td>InterSP+IntraSP</td><td>71.9</td><td>73.7</td><td>75.2</td><td>45.4</td><td>45.4</td><td>45.4</td><td>83.2</td><td>91.9</td><td>93.4</td><td>72.0</td><td>75.8</td><td>75.8</td></tr><tr><td>PermAR</td><td>75.3</td><td>76.5</td><td>78.0</td><td>54.3</td><td>54.3</td><td>54.3</td><td>83.2</td><td>92.0</td><td>93.5</td><td>75.3</td><td>78.3</td><td>78.6</td></tr><tr><td rowspan="4">Llama-3-8B</td><td>InterSP</td><td>39.4</td><td>42.1</td><td>44.6</td><td>28.3</td><td>28.3</td><td>28.3</td><td>73.1</td><td>84.2</td><td>87.3</td><td>42.2</td><td>47.6</td><td>48.0</td></tr><tr><td>IntraSP</td><td>44.4</td><td>57.2</td><td>55.2</td><td>49.9</td><td>49.9</td><td>49.9</td><td>81.5</td><td>93.1</td><td>94.5</td><td>48.1</td><td>62.3</td><td>58.9</td></tr><tr><td>InterSP+IntraSP</td><td>51.3</td><td>61.6</td><td>60.2</td><td>48.0</td><td>48.0</td><td>48.0</td><td>80.2</td><td>92.2</td><td>93.8</td><td>53.9</td><td>65.9</td><td>63.2</td></tr><tr><td>PermAR</td><td>61.7</td><td>69.2</td><td>68.0</td><td>57.4</td><td>57.4</td><td>57.4</td><td>84.4</td><td>93.8</td><td>95.1</td><td>63.7</td><td>72.7</td><td>70.4</td></tr></table>
177
+
178
+ Table 2: Comparison of QA performance (%) between knowledge augmentation and PermAR.
179
+
180
+ struggle to bridge the accuracy gap between knowledge positioned earlier and later within passages and sentences, resulting in a downward trend observed in Figure 3 (a) and (c). However, as seen in Figure 3 (b) and (d), overall knowledge retention improves across positions. Despite this, knowledge located at earlier positions (loc1) is still learned more effectively. This is primarily due to the inherent conflict between permutation-based augmentation and the AR training objective, causing later knowledge in the sequence to be learned with greater difficulty. (2) PermAR overcomes this issue with position-aware embeddings, significantly improving knowledge acquisition across all positions. As shown in Figure 3 (e), knowledge at all positions exhibits a significant improvement. Notably, knowledge appearing at the end of the sequence (third knowledge in the $7th$ sentence) surpasses the retention of many earlier knowledge points. Additionally, Figure 3 (f) further illustrates the substantial improvement in learning for knowledge positioned later in the sequence, exhibiting an upward trend within each group.
181
+
182
+ Moreover, to verify the robustness of PermAR, we evaluate it on the Wiki2023 dataset in Appendix C.1, which was collected in previous work (Jiang et al., 2024b) and contains knowledge that is novel only for Llama-2.
183
+
184
+ # 5.3 Ablation Experiments
185
+
186
+ Different Permutation Granularities. While InterSP operates at the sentence level, IntraSP requires careful consideration of permutation granularity. We investigate three categories of permutation granularities: token-level, word-level, and multi-word phrase-level permutation. Each granularity presents distinct trade-offs between preserving semantic coherence and enabling positional learning flexibility. To systematically evaluate the impact of different permutation granularities on novel knowledge acquisition, we conduct
187
+
188
+ ![](images/a7dda1710fcfe88b449c2f9c6eb0dd4274a5c0030f5ecffd54a763679b657be8.jpg)
189
+
190
+ ![](images/341dac2ccccaf46263ca73ef876842e7b9f6b771cfad857529372ae90b5bbc6a.jpg)
191
+
192
+ ![](images/4d87540585833cb8448ef96a38127b565671f584765801a2d613d115060226a3.jpg)
193
+
194
+ ![](images/0cf5663470c5639dc3b3ca4c7ccfb214bf043f4b0018367015e5a8bc90015587.jpg)
195
+
196
+ ![](images/ce30fc2091037ddd4df4737113f8be0adc1899b1d64d396bde1b2c11bf6012aa.jpg)
197
+ Figure 3: Comparison of knowledge augmentation and PermAR in learning novel knowledge at different positions. (a) (b): IntraSP. (c) (d): InterSP+IntraSP. (e) (f): PermAR. Vertical axis $\Delta$ EM represents the difference in EM between the corresponding method and CPT+SFT.
198
+
199
+ ![](images/841f98f35a42826e90162de236f036368889e544f2ca35bf4f428317ec1ba70a.jpg)
200
+
201
+ comprehensive experiments on the NovelHuman dataset. The detailed results and analysis are presented in Table C2 of Appendix C.2. Our findings reveal that multi-word phrases serve as the optimal permutation granularity, achieving a balance between maintaining linguistic coherence and enabling robust positional knowledge learning. Multi-word phrase-level granularity preserves natural word dependencies while providing sufficient permutation diversity for the model to learn position-invariant representations.
202
+
203
+ Permutation Annealing Strategy. To enable models to consolidate fragmented knowledge acquired during permutation training into coherent
204
+
205
+ representations, we introduce a permutation annealing strategy that gradually transitions from full permutation to natural sequential order. This three-stage approach is controlled by two hyperparameters: start and end, defining the annealing period. The training process follows a progressive annealing schedule: initially, the model undergoes full permutation training to learn position-invariant knowledge representations; during the annealing phase (from start to end epochs), the permutation probability gradually decreases, allowing the model to begin integrating scattered knowledge fragments. Finally, the model focuses exclusively on natural sequential order to consolidate the learned knowledge into coherent representations. We systematically analyze the impact of different annealing schedules on model performance across the NovelHuman dataset. Detailed experimental results comparing various start and end configurations are presented in Table C3 of Appendix C.3. Our findings demonstrate that this progressive annealing strategy significantly enhances novel knowledge acquisition, with optimal performance achieved when models undergo substantial full permutation training followed by gradual transition and focused sequential consolidation. This three-stage approach effectively transforms dispersed knowledge fragments into comprehensive, coherent representations.
206
+
207
+ # 6 Related Work
208
+
209
+ To explore the mechanisms of LLMs learning novel knowledge, most works (Allen-Zhu and Li, 2024; Jiang et al., 2024b; Saito et al., 2025) first construct novel knowledge datasets and then train LLMs on these datasets by optimizing the organization of training data. For dataset construction, Allen-Zhu and Li built a dataset of human knowledge and tasks with six basic attributes, while Jiang et al. built a dataset in the film domain based on timestamps. However, these benchmarks are either too simple for LLMs or cannot ensure they have not appeared in the pre-training corpus of LLMs. It is worth noting that although counterfactual datasets have likely not been encountered during pre-training, they are completely contrary to reality and inherently lack rationality. Subsequently, Allen-Zhu and Li trained LLMs from scratch and found that standard AR does not enable LLMs to fully grasp new knowledge. To address this, they augmented the knowledge using techniques such as sentence shuffling
210
+
211
+ and rewriting, allowing LLMs trained on the augmented dataset to successfully master most of the knowledge. However, this method failed on our more complex and diverse dataset NovelHuman. Meanwhile, some studies (Jiang et al., 2024b; Saito et al., 2025) explored the effects of AR training methods and data augmentation during the continual pretraining phase. Nevertheless, even with the most advanced training and data augmentation methods, LLMs' ability to acquire new knowledge remains fundamentally limited.
212
+
213
+ # 7 Conclusion
214
+
215
+ We first propose a method for constructing novel knowledge datasets based on the theory of biological evolution and then synthesize a new dataset, NovelHuman, with humans as the subjects. Subsequently, we evaluate the impact of predominant CPT and SFT paradigms on NovelHuman. We find that LLMs exhibit not only inter-sentence sensitivity but also intra-sentence sensitivity. To address this issue, we propose a permutation modeling-based framework, PermAR, which can seamlessly integrate with existing AR models, endowing them with bidirectional learning capabilities and enabling them to learn knowledge at different positions efficiently. Extensive experiments demonstrate the superiority of PermAR, providing insight for the future advancement of LLMs.
216
+
217
+ # Limitations
218
+
219
+ Although the knowledge synthesis method we propose, based on species evolution, can be applied to a wide range of domains, such as mountains and rivers, the focus of this paper is on exploring the learning mechanisms of new knowledge in LLMs, so we have only generated new knowledge for human entities, which are the most attribute-rich. Other domains have not been fully explored. Additionally, while our work primarily focuses on novel knowledge acquisition, the observed position sensitivity in knowledge learning is not confined to this specific scenario but rather represents a fundamental limitation inherent to autoregressive language models. The effectiveness of PermAR in addressing reasoning-intensive knowledge, math reasoning, or complex multi-hop reasoning tasks remains to be systematically validated in future investigations.
220
+
221
+ # Acknowledgement
222
+
223
+ The authors would like to thank the anonymous reviewers for their insightful comments. This work was supported by National Science Foundation of China (Grant Nos.62376057), the Start-up Research Fund of Southeast University (RF1028623234), SEU Innovation Capability Enhancement Plan for Doctoral Students (No. CXJH_SEU 25132), BIT Research and Innovation Promoting Project (Grant No.2024YCXY071), and the Big Data Computing Center of Southeast University.
224
+
225
+ # References
226
+
227
+ Zeyuan Allen-Zhu and Yuanzhi Li. 2024. Physics of language models: Part 3.1, knowledge storage and extraction. In *Forty-first International Conference on Machine Learning*.
228
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.
229
+ Mayee Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, and Christopher RΓ©. 2024. Skill-it! a data-driven skills framework for understanding and training language models. Advances in Neural Information Processing Systems, 36.
230
+ Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 601-610.
231
+ Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research, 61:65-170.
232
+ Inc. GitHub. 2025. Github. https://github.com.
233
+ Olga Golovneva, Zeyuan Allen-Zhu, Jason E Weston, and Sainbayar Sukhbaatar. 2024. Reverse training to nurse the reversal curse. In First Conference on Language Modeling.
234
+ Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948.
235
+ Qingyan Guo, Rui Wang, Junliang Guo, Xu Tan, Jiang Bian, and Yujiu Yang. 2024. Mitigating reversal curse in large language models via semantic-aware
236
+
237
+ permutation training. In Findings of the Association for Computational Linguistics ACL 2024, pages 11453-11464.
238
+ Zhongkai Hao, Chang Su, Songming Liu, Julius Berner, Chengyang Ying, Hang Su, Anima Anandkumar, Jian Song, and Jun Zhu. 2024. Dpot: Auto-regressive denoising operator transformer for large-scale pde pretraining. In International Conference on Machine Learning, pages 17616-17635. PMLR.
239
+ GE Hinton. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580.
240
+ Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. 2024a. Mixtral of experts. arXiv preprint arXiv:2401.04088.
241
+ Jinhao Jiang, Junyi Li, Xin Zhao, Yang Song, Tao Zhang, and Ji-Rong Wen. 2025. Mix-CPT: A domain adaptation framework via decoupling knowledge learning and format alignment. In The Thirteenth International Conference on Learning Representations.
242
+ Zhengbao Jiang, Zhiqing Sun, Weijia Shi, Pedro Rodriguez, Chunting Zhou, Graham Neubig, Xi Lin, Wen-tau Yih, and Srini Iyer. 2024b. Instruction-tuned language models are better knowledge learners. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
243
+ Damjan Kalajdzievski. 2024. Scaling laws for forgetting when fine-tuning large language models. arXiv preprint arXiv:2401.05605.
244
+ Ouail Kitouni, Niklas Nolte, Adina Williams, Michael Rabbat, Diane Bouchacourt, and Mark Ibrahim. 2024. The factorization curse: Which tokens you predict underlie the reversal curse and more. In *The Thirty-eighth Annual Conference on Neural Information Processing Systems*.
245
+ Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466.
246
+ Jens Lehmann, Dhananjay Bhandiwad, Preetam Gattogi, and Sahar Vahdati. 2024. Beyond boundaries: A human-like approach for question answering over structured and unstructured information sources. Transactions of the Association for Computational Linguistics, 12:786-802.
247
+ Florian Lemmerich, Diego SΓ‘ez-Trumper, Robert West, and Leila Zia. 2019. Why the world reads wikipedia: Beyond english speakers. In Proceedings of the twelfth ACM international conference on web search and data mining, pages 618-626.
248
+
249
+ Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.
250
+ Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In International Conference on Learning Representations.
251
+ OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774.
252
+ Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744.
253
+ Alec Radford. 2018. Improving language understanding by generative pre-training.
254
+ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1-67.
255
+ Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3505-3506.
256
+ Kuniaki Saito, Chen-Yu Lee, Kihyuk Sohn, and Yoshitaka Ushiku. 2025. Where is the answer? an empirical study of positional bias for parametric knowledge extraction in language model. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1252-1269, Albuquerque, New Mexico. Association for Computational Linguistics.
257
+ Haizhou Shi, Zihao Xu, Hengyi Wang, Weiyi Qin, Wenyuan Wang, Yibin Wang, Zifeng Wang, Sayna Ebrahimi, and Hao Wang. 2025. Continual learning of large language models: A comprehensive survey. ACM Comput. Surv.
258
+ Shamane Siriwardhana, Mark McQuade, Thomas Gauthier, Lucas Atkins, Fernando Fernandes Neto, Luke Meyers, Anneketh Vij, Tyler Odenthal, Charles Goddard, Mary MacCarthy, et al. 2024. Domain adaptation of llama3-70b-instruct through continual pretraining and model merging: A comprehensive evaluation. arXiv preprint arXiv:2406.14971.
259
+ Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. 2024. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063.
260
+
261
+ Kushal Tirumala, Daniel Simig, Armen Aghajanyan, and Ari Morcos. 2024. D4: Improving llm pretraining via document de-duplication and diversification. Advances in Neural Information Processing Systems, 36.
262
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. CoRR, abs/2302.13971.
263
+ Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS.
264
+ Tackgeun You, Saehoon Kim, Chiheon Kim, Doyup Lee, and Bohyung Han. 2022. Locally hierarchical autoregressive modeling for image generation. Advances in Neural Information Processing Systems, 35:16360-16372.
265
+ Yuexiang Zhai, Shengbang Tong, Xiao Li, Mu Cai, Qing Qu, Yong Jae Lee, and Yi Ma. 2024. Investigating the catastrophic forgetting in multimodal large language model fine-tuning. In Conference on Parsimony and Learning, pages 202-227. PMLR.
266
+ Weixiang Zhao, Shilong Wang, Yulin Hu, Yanyan Zhao, Bing Qin, Xuanyu Zhang, Qing Yang, Dongliang Xu, and Wanxiang Che. 2024. SAPT: A shared attention framework for parameter-efficient continual learning of large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11641-11661, Bangkok, Thailand. Association for Computational Linguistics.
267
+ Yaowei Zheng, Richong Zhang, Junhao Zhang, YeYanhan YeYanhan, and Zheyan Luo. 2024. LlamaFactory: Unified efficient fine-tuning of 100+ language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), pages 400-410.
268
+
269
+ # A Novel Human Dataset
270
+
271
+ # A.1 Novel Triple Construction
272
+
273
+ Data Source. We use the knowledge graph Wikidata as the data source and utilize the QLever SPARQL engine to extract all triples.
274
+
275
+ Ontology. For all subjects in Wikidata, we filter their corresponding ontology (instance of $^5$ ) to be "human", i.e., $h_o^I =$ human.
276
+
277
+ Important Attribute. For human subjects, we set the important attribute to be the date of birth, i.e., $r_f^I =$ date of birth.
278
+
279
+ Anchor Relation. The anchor relation is set to occupation, i.e., $r_f^a =$ occupation. The intuition is that attribute expansion is more reasonable within groups of people who share the same occupation.
280
+
281
+ Constraint Function. The constraint function RULE requires that the date of birth of the subject to be expanded be later than the date of birth of the newly generated subject.
282
+
283
+ In the practical construction process, we cap the parent set used for attribute expansion at a maximum of 10 subjects. Additionally, when constructing unique names for new subjects, we take into account the specificity of human names: we use the tokenizer of GPT-4 to segment the names of the father and mother, randomly select the first token of each word, and randomly concatenate the remaining tokens so that the generated names closely resemble real names.
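+
+ To make this step concrete, below is a minimal sketch of one plausible implementation. It assumes the tiktoken cl100k_base encoding as the GPT-4 tokenizer; the parent names and the exact recombination rule are illustrative assumptions, not the precise procedure used.
+
+ ```python
+ import random
+
+ import tiktoken  # GPT-4 tokenizer (cl100k_base encoding)
+
+ enc = tiktoken.get_encoding("cl100k_base")
+
+ def make_novel_name(father: str, mother: str, seed: int = 0) -> str:
+     """Tokenize each word of both parent names, keep each word's first
+     token, shuffle and re-concatenate the remaining tokens, then sample
+     two of the resulting words as the new name."""
+     rng = random.Random(seed)
+     new_words = []
+     for word in father.split() + mother.split():
+         tokens = enc.encode(word)
+         head, tail = tokens[:1], tokens[1:]
+         rng.shuffle(tail)  # randomly concatenate the remaining tokens
+         new_words.append(enc.decode(head + tail))
+     return " ".join(rng.sample(new_words, k=min(2, len(new_words))))
+
+ # Hypothetical parent names; the output resembles a plausible real name.
+ print(make_novel_name("Bill Kinnamon", "Mabel Herveh"))
+ ```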
284
+
285
+ # A.2 Statistical Distribution
286
+
287
+ For the 8,507 new human subjects we constructed, we first generated a histogram of their relations, as shown in Figure A1 (a). Then, as shown in Table A1, for each relation count we randomly selected $80\%$ of the corresponding subjects as the training set and the remaining $20\%$ as the testing set. Subsequently, we constructed the corresponding QA questions for both the training and testing sets.
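+
+ As a minimal sketch (field names are illustrative), the per-relation-count 80/20 split can be implemented as follows:
+
+ ```python
+ import random
+ from collections import defaultdict
+
+ def stratified_split(subjects, train_frac=0.8, seed=0):
+     """Within each relation-count bucket, randomly assign 80% of the
+     subjects to the training set and the remaining 20% to the test set."""
+     rng = random.Random(seed)
+     buckets = defaultdict(list)
+     for subject in subjects:
+         buckets[subject["n_relations"]].append(subject)
+     train, test = [], []
+     for group in buckets.values():
+         rng.shuffle(group)
+         cut = int(len(group) * train_frac)
+         train.extend(group[:cut])
+         test.extend(group[cut:])
+     return train, test
+ ```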
288
+
289
+ Since we used GPT-4 in the process of constructing novel knowledge, its inherent knowledge bias tends to eliminate those of our constructed triples that are unreasonable given current world knowledge.
290
+
291
+ For example, party $\mathrm{X}^6$ currently has no African members. After generating the novel knowledge, we analyzed the changes in the number of relations (amount of knowledge) in the generated knowledge, as shown by the green bars in Figure A1 (a). It can be seen that for new entities with more attributes, the generated knowledge does not retain all of these attributes, mainly because GPT-4 has filtered out some unreasonable ones. Subsequently, we use the tokenizer of Llama-3-8B to tokenize the generated novel knowledge, and the results are shown in Figure A1 (b). Additionally, we tokenize the generated questions, as displayed in Figure A1 (c). Figure A1 (d) shows the length of the answers. It can be seen that the answers are relatively concise and precise, which is why we use EM as an evaluation metric.
292
+
293
+ # B Experimental Setting
294
+
295
+ # B.1 Introduction of Baselines in Preliminary Experiment
296
+
297
+ CPT. CPT (Shi et al., 2025; Zhao et al., 2024) involves continually training an LLM on new data to update its parameters. This process helps LLMs adapt to new information and maintain up-to-date knowledge. It focuses on minimizing the perplexity of passages to improve the model's performance.
298
+
299
+ In our experiments, CPT is trained on all documents, including a mixture of training and test set documents. During evaluation, five demonstrations are provided to guide the model in following the answer format while responding to the QA from the test set.
300
+
301
+ CPT + SFT. Continued Pre-training + Supervised Fine-Tuning (Jiang et al., 2025; Siriwardhana et al., 2024) is a method designed to enhance an LLM's capabilities by first updating its knowledge base through training on both existing training passages and new test passages (train knowledge + test knowledge), and then fine-tuning the model with instruction tuning on question-answer (QA) pairs, i.e., train QA. This approach ensures the LLM incorporates the latest information while reinforcing its foundational knowledge, and then focuses on improving its ability to follow specific instructions and respond accurately to queries.
302
+
303
+
304
+
305
+ <table><tr><td>Data Split</td><td>Number of novel knowledge entries</td><td>Number of general questions</td></tr><tr><td>Train</td><td>6,790</td><td>115,019</td></tr><tr><td>Test</td><td>1,717</td><td>29,202</td></tr></table>
306
+
307
+ Table A1: Statistical results of the NovelHuman dataset.
308
+
309
+ ![](images/9cbf68061fab0239493be05295c6dae42bc7db45f04f64457ed4897e484bc974.jpg)
310
+ (a)
311
+
312
+ ![](images/16743fd8871a50efc3a7d2c2aa1e2d32ba381ec551ce0e685dc0600cdde53499.jpg)
313
+ (b)
314
+ Figure A1: Statistical results of the NovelHuman dataset. (a) shows the change in the number of relations owned by new human subjects after generating novel knowledge. (b) shows the token distribution of the generated novel knowledge. (c) shows the token distribution of the questions. (d) shows the token distribution of the answers.
315
+
316
+ ![](images/c9af750a784c0ad506c46f0f899391cde7e121ce4b46cecaf9bcc34f4be3d68a.jpg)
317
+ (c)
318
+
319
+ ![](images/a84b8f3d36e5b9943057b946472beabda53ef0c06eb623a98a4368e22bde2b1b.jpg)
320
+ (d)
321
+
322
+ CPT + SFT (w/o F). Continued Pre-training + Supervised Fine-Tuning without Forgetting (Kalajdzievski, 2024; Zhai et al., 2024) is a training paradigm where the model is first trained on both new passages (train knowledge) and test passages (test knowledge) to update its parameters with the latest information. After this phase, the LLM undergoes instruction tuning using question-answer (QA) pairs (train QA) while continuing to include test passages (test knowledge) in the training process. This approach ensures that the model retains its previously learned knowledge from the initial passage training phase while learning how to respond to specific queries through instruction tuning. By incorporating test passages during both training phases, the model continuously reinforces its understanding of the new information, preventing the loss of previously acquired knowledge and enhancing its ability to accurately respond to queries based on both old and new data.
323
+
324
+ MT. The LLM is trained simultaneously on question-answer (QA) pairs, training passages (train knowledge), and new test passages (test knowledge) (Chen et al., 2024). This integrated training process allows the model to learn how to respond to specific queries through QA pairs while simultaneously updating its knowledge base with both existing and new information from the training and test documents.
325
+
326
+ MT (prompt). This variant of mixed training includes prompting alongside continued pre-training and instruction tuning. Prompts are used to guide the model's learning process (i.e., in addition to the outputs, the prompt is also used to calculate losses), helping it to focus on relevant information and improving its ability to generate accurate and contextually appropriate responses.
327
+
328
+
329
+
330
+ Reading&Answering (referred to as R&A). The Reading&Answering paradigm involves training the LLM by providing a passage followed by corresponding QA pairs (Lehmann et al., 2024). This method mimics a real-world scenario where the LLM reads a passage and then answers questions based on the information it has just read. It can be broken down into 1) Train & Test passages (Train knowledge): the LLM is continually pre-trained on all passages. This phase simulates the human stage of reading and learning knowledge. 2) Train passages + Train QA + Test passages: Similar to MT, the LLM is given a passage and related questions about the passage (which appear sequentially), and then the test passages are used for continued pre-training. This process simulates a person reviewing the book and then answering questions after reading (i.e., doing homework after a class).
331
+
332
+ Attn Drop. Attention Dropout (Hinton, 2012) (Attn Drop) follows the same process as CPT+SFT, the difference being that during the pre-training phase, attention weights are randomly dropped in the self-attention module.
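+
+ As a rough sketch of this modification (head splitting, masking, and other details omitted; the dropout rate is an illustrative assumption), attention weights are dropped after the softmax:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def attention_with_dropout(q, k, v, p_drop=0.1, training=True):
+     """Scaled dot-product attention whose post-softmax attention weights
+     are randomly dropped during pre-training (the only change vs. CPT+SFT)."""
+     scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
+     weights = F.softmax(scores, dim=-1)
+     weights = F.dropout(weights, p=p_drop, training=training)  # Attn Drop
+     return weights @ v
+ ```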
333
+
334
+ D-AR. D-AR (Denoising Auto-Regressive Training) (Saito et al., 2025) is a method that enhances knowledge extraction by introducing noise into the training data (You et al., 2022; Hao et al., 2024). It works by randomly replacing a certain percentage of token positions in the input with random tokens, perturbing the model's input. The training objective is then modified to focus on predicting the correct tokens while ignoring the corrupted ones. This approach encourages the model to learn to predict the next token under diverse conditions, promoting robust information extraction during testing. Essentially, D-AR improves the model's performance by diversifying the input sequences, similar to how BERT uses token masking in its training.
335
+
336
+
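+
+ A minimal sketch of this corruption step is given below; the noise ratio is illustrative, and the shift between inputs and next-token labels is assumed to be handled by the surrounding training loop.
+
+ ```python
+ import torch
+
+ def dar_corrupt(input_ids, vocab_size, noise_ratio=0.15, ignore_index=-100):
+     """Replace a fraction of input positions with random tokens and mask
+     those positions out of the loss, so the model predicts correct tokens
+     while ignoring the corrupted ones."""
+     labels = input_ids.clone()
+     corrupt = torch.rand_like(input_ids, dtype=torch.float) < noise_ratio
+     random_tokens = torch.randint_like(input_ids, vocab_size)
+     corrupted_ids = torch.where(corrupt, random_tokens, input_ids)
+     labels[corrupt] = ignore_index  # excluded from the cross-entropy loss
+     return corrupted_ids, labels
+ ```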
337
+
338
+ PIT++. Jiang et al. (2024b) hypothesize that exposing LLMs to QA pairs before continued pre-training on passages is beneficial, as it allows the model to consider how knowledge from complex passages is obtained through questions during the encoding process. They propose Pre-Instruction Tuning (PIT) and its best-performing variant PIT++, which introduce the questions before passage pre-training.
339
+
340
+ # B.2 Preliminary Experiment Detail Setting
341
+
342
+ Hyperparameter. Typically, pre-training processes corpus data by concatenating all samples into a continuous sequence, with individual samples separated by a [SEP] token (Guo et al., 2025; Jiang et al., 2024a; Zheng et al., 2024). However, since our constructed NovelHuman dataset consists of relatively independent samples, we do not adopt the traditional concatenation approach. Instead, we treat each document as an independent sample, padding documents to the same length using eos_token and truncating those exceeding the specified length. In our experiments, during the continued pre-training phase, we set the maximum sequence length to 2048, with a per-GPU batch size of 8 and a total batch size of 64, and perform full-parameter fine-tuning with ZeRO-2 (Rasley et al., 2020) for optimization. We train with bf16 precision, an initial learning rate of $1.0e{-}4$ , a warm-up ratio of 0.1, and a cosine scheduler, running for 150 epochs with an early stopping strategy. We use AdamW (Loshchilov and Hutter, 2018) with $\beta_{1} = 0.9$ , $\beta_{2} = 0.95$ , and a weight decay of 0.1. During continued pre-training, we evaluate perplexity (PPL) on the training set at each epoch and terminate training early if PPL drops below 2 and the change in PPL between consecutive epochs is $\leq 0.1$ .
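+
+ The early-stopping rule above can be stated directly in code; the thresholds follow the text, while the function name and history format are illustrative:
+
+ ```python
+ def should_stop_early(ppl_history, ppl_threshold=2.0, max_delta=0.1):
+     """Stop continued pre-training once training-set PPL drops below 2
+     and the PPL change between consecutive epochs is at most 0.1."""
+     if len(ppl_history) < 2:
+         return False
+     prev_ppl, curr_ppl = ppl_history[-2], ppl_history[-1]
+     return curr_ppl < ppl_threshold and abs(prev_ppl - curr_ppl) <= max_delta
+ ```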
343
+
344
+ For supervised fine-tuning (SFT), our experiments show that full fine-tuning and LoRA fine-tuning yield similar performance.
345
+
346
+ <table><tr><td>Permutation</td><td>Example</td></tr><tr><td>Original</td><td>Billabel Kinnamon was born on June 18, 1918, in Utica, United States of America. Kinnamon was educated at Bryn Mawr College...</td></tr><tr><td>Inter-sentence</td><td>Kinnamon was educated at Bryn Mawr College. Billabel Kinnamon was born on June, 18, 1918, in Utica, United States of America...</td></tr><tr><td>Intra-sentence</td><td>1918, in Utica, Billabel Kinnamon was born on June 18, United States of America.</td></tr></table>
347
+
348
+ Table B1: An example of the permutation patterns, with $k$ set to 5.
349
+
350
+ Given computational constraints, we adopt LoRA fine-tuning for all SFT stages. In our experiments, we set the LoRA rank to 8, with a per-GPU batch size of 128 and a total batch size of 1,024. Training is conducted using bf16 precision, with an initial learning rate of $8.0e{-}5$ , a warm-up ratio of 0.1, a cosine scheduler, and a total of 10 epochs.
351
+
352
+ # B.3 PermAR Setting
353
+
354
+ During continued pre-training, we set the start epoch to 100 and the end epoch to 120. For intra-sentence permutation, we use a granularity of three words. Other experimental details remain consistent with those in Appendix B.2.
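+
+ The exact form of the annealing schedule is not spelled out here; one plausible reading, sketched below under that assumption, feeds only permuted passages before the start epoch and linearly anneals the permutation probability to zero between the start and end epochs:
+
+ ```python
+ def permutation_prob(epoch, start=100, end=120):
+     """Assumed PermAR annealing schedule: fully permuted training before
+     `start`, linear decay to zero between `start` and `end`, and original
+     (unpermuted) passages only afterwards."""
+     if epoch < start:
+         return 1.0
+     if epoch >= end:
+         return 0.0
+     return 1.0 - (epoch - start) / (end - start)
+ ```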
355
+
356
+ # B.4 Knowledge Augmentation Setting
357
+
358
+ An instance of InterSP and IntraSP is shown in Table B1. Specifically, inter-sentence refers to the permutation of sentences within passages. Intra-sentence involves the permutation of words within a sentence. To maintain a certain level of semantic coherence, we ensure that up to $k$ words are not permuted. For InterSP, each passage was permuted at the sentence level 20 times. For IntraSP, each sentence was permuted at the word level 20 times, while maintaining the original sentence order within the passage. For InterSP+IntraSP, passages were first permuted 4 times at the sentence level, and then each of the four shuffled passages underwent 5 rounds of word-level permutation within sentences, resulting in 20 permuted passages per person.
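+
+ The two permutation operations can be sketched as follows; the naive sentence splitting and the choice of which $k$ words stay fixed are illustrative assumptions:
+
+ ```python
+ import random
+
+ def inter_sp(passage: str, rng: random.Random) -> str:
+     """InterSP: permute the order of sentences within a passage."""
+     sentences = [s for s in passage.split(". ") if s]
+     rng.shuffle(sentences)
+     return ". ".join(sentences)
+
+ def intra_sp(sentence: str, k: int, rng: random.Random) -> str:
+     """IntraSP: permute words within a sentence while keeping up to k
+     words in their original positions (k = 5 in Table B1)."""
+     words = sentence.split()
+     fixed = set(rng.sample(range(len(words)), k=min(k, len(words))))
+     movable = [w for i, w in enumerate(words) if i not in fixed]
+     rng.shuffle(movable)
+     it = iter(movable)
+     return " ".join(w if i in fixed else next(it) for i, w in enumerate(words))
+ ```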
359
+
360
+ # C Additional Experiments
361
+
362
+ # C.1 Experiments on Wiki2023
363
+
364
+ To verify the robustness of PermAR on other datasets, we conduct experiments using Llama-2-7B on the Wiki2023 dataset (Jiang et al., 2024b).
365
+
366
+ <table><tr><td>Method</td><td>EM</td><td>R</td><td>R-L</td></tr><tr><td>PIT</td><td>46.5</td><td>52.3</td><td>61.9</td></tr><tr><td>PIT++</td><td>48.1</td><td>54.4</td><td>66.4</td></tr><tr><td>PermAR</td><td>55.5</td><td>63.8</td><td>69.3</td></tr></table>
367
+
368
+ Table C1: Comparison of QA performance between PermAR and PIT on the test set of the Wiki2023 dataset.
369
+
370
+ <table><tr><td rowspan="2">Model</td><td rowspan="2" colspan="2">Granularity</td><td colspan="3">InterSP+IntraSP (All)</td><td colspan="3">PermAR (All)</td></tr><tr><td>EM</td><td>R</td><td>R-L</td><td>EM</td><td>R</td><td>R-L</td></tr><tr><td rowspan="6">Llama2-7B</td><td colspan="2">Token</td><td>30.7</td><td>35.6</td><td>41.3</td><td>30.8</td><td>37.6</td><td>42.3</td></tr><tr><td colspan="2">Word</td><td>66.8</td><td>71.3</td><td>72.5</td><td>70.6</td><td>72.2</td><td>72.8</td></tr><tr><td rowspan="4">M-Word</td><td>2</td><td>68.9</td><td>72.8</td><td>73.2</td><td>73.3</td><td>75.9</td><td>76.2</td></tr><tr><td>3</td><td>72.0</td><td>75.8</td><td>75.8</td><td>75.3</td><td>78.3</td><td>78.6</td></tr><tr><td>4</td><td>69.5</td><td>73.4</td><td>73.8</td><td>74.4</td><td>77.1</td><td>77.9</td></tr><tr><td>5</td><td>69.3</td><td>73.1</td><td>73.4</td><td>73.5</td><td>75.8</td><td>76.1</td></tr><tr><td rowspan="6">Llama3-8B</td><td colspan="2">Token</td><td>29.6</td><td>38.8</td><td>40.2</td><td>28.4</td><td>37.2</td><td>38.2</td></tr><tr><td colspan="2">Word</td><td>51.0</td><td>57.1</td><td>56.1</td><td>60.5</td><td>70.9</td><td>69.8</td></tr><tr><td rowspan="4">M-Word</td><td>2</td><td>51.7</td><td>57.9</td><td>56.2</td><td>61.6</td><td>71.2</td><td>70.3</td></tr><tr><td>3</td><td>53.9</td><td>65.9</td><td>63.2</td><td>63.7</td><td>72.7</td><td>70.4</td></tr><tr><td>4</td><td>52.4</td><td>58.5</td><td>59.6</td><td>62.5</td><td>72.1</td><td>70.0</td></tr><tr><td>5</td><td>51.9</td><td>58.2</td><td>58.5</td><td>61.1</td><td>71.8</td><td>69.3</td></tr></table>
371
+
372
+
373
+
374
+ The Wiki2023 dataset, proposed by Jiang et al. (2024b), is a timestamp-based novel-knowledge dataset designed to explore how LLMs acquire new knowledge. It primarily focuses on the film domain and is currently compatible only with the Llama-2 series of models. The experimental results are presented in Table C1. As seen in the table, PermAR demonstrates good robustness on Wiki2023, outperforming PIT++ by 7.4 points in the EM metric.
375
+
376
+ # C.2 Ablation Experiment on Different Permutation Granularities
377
+
378
+ From Table C2, we observe that token-level permutation performs the worst, primarily because LLMs do not always tokenize complete words as a single token. Instead, prefixes and suffixes are often split, leading to increased complexity when different prefixes and suffixes are recombined. This disrupts the model's ability to extract meaningful patterns, forcing it to process disordered sequences, which negatively impacts knowledge learning.
379
+
380
+ In contrast, word-level permutation significantly improves model performance, indicating that LLMs can better learn novel knowledge at different positions when using complete-word permutations. Furthermore, considering the intrinsic characteristics of natural language, many words form tightly connected phrases that should not be arbitrarily disrupted.
381
+
382
+ Table C2: Comparison of QA performance for different permutation granularities in knowledge acquisition, where M-word refers to multi-word.
383
+
384
+ <table><tr><td rowspan="2">Start Epoch</td><td rowspan="2">End Epoch</td><td colspan="3">All</td></tr><tr><td>EM</td><td>R</td><td>R-L</td></tr><tr><td>0</td><td>0</td><td>20.9</td><td>26.8</td><td>29.7</td></tr><tr><td>0</td><td>20</td><td>25.9</td><td>36.2</td><td>38.8</td></tr><tr><td>0</td><td>50</td><td>33.5</td><td>39.9</td><td>45.1</td></tr><tr><td>0</td><td>100</td><td>39.8</td><td>47.4</td><td>53.0</td></tr><tr><td>0</td><td>150</td><td>41.2</td><td>47.8</td><td>55.3</td></tr><tr><td>50</td><td>50</td><td>55.4</td><td>65.1</td><td>66.3</td></tr><tr><td>50</td><td>100</td><td>59.2</td><td>66.8</td><td>67.0</td></tr><tr><td>50</td><td>150</td><td>60.2</td><td>67.3</td><td>68.9</td></tr><tr><td>100</td><td>100</td><td>62.3</td><td>71.2</td><td>69.4</td></tr><tr><td>100</td><td>120</td><td>63.7</td><td>72.7</td><td>70.4</td></tr><tr><td>100</td><td>150</td><td>62.6</td><td>71.9</td><td>70.1</td></tr><tr><td>120</td><td>120</td><td>61.0</td><td>70.8</td><td>68.9</td></tr><tr><td>120</td><td>140</td><td>61.7</td><td>71.5</td><td>68.6</td></tr><tr><td>150</td><td>150</td><td>61.4</td><td>71.3</td><td>68.2</td></tr></table>
385
+
386
+ Table C3: Comparison of QA performance for different start and end epochs in permutation annealing training.
387
+
388
+ To address this, we extend the permutation granularity from single words to multi-word phrases. Experimental results show that selecting an appropriate phrase length as the permutation granularity further enhances model performance.
389
+
390
+ It is worth noting that precisely determining which words exhibit strong interdependence falls beyond the scope of this study, and we leave this as a direction for future research.
391
+
392
+ # C.3 Ablation Experiment on Permutation Annealing Strategy
393
+
394
+ We conducted experiments using Llama-3-8B on the NovelHuman dataset, and the results are shown in Table C3. The following observations can be made: (1) When start = 0 and end = 0, the model starts permutation annealing right from the beginning, and the experimental results are almost identical to those of CPT+SFT. (2) When start = 150 and end = 150, the model remains in the permutation phase throughout, without termination due to early stopping. This indicates that there are too many possible permutations for the model to learn within a limited time. Although there is a noticeable performance improvement compared to CPT+SFT, it is still lower than the optimal parameter configuration. This suggests that the model is learning scattered knowledge in various contexts without integrating it into a complete knowledge system. (3) When start = 100 and end = 120, the model achieves the best performance, demonstrating the advantage of extensive permutation learning followed by appropriate annealing training, and finally focusing on the original knowledge.
395
+
396
+ <table><tr><td>Position-aware Embedding</td><td>Operation</td><td>EM</td><td>R</td><td>R-L</td></tr><tr><td rowspan="2">Dense</td><td>add</td><td>52.3</td><td>55.8</td><td>60.1</td></tr><tr><td>merge</td><td>48.3</td><td>50.6</td><td>53.7</td></tr><tr><td>RoPE-1D</td><td>add</td><td>63.7</td><td>72.7</td><td>70.4</td></tr></table>
397
+
398
+ Table C4: Comparison of QA performance across different methods for producing position-aware embeddings.
399
+
400
+
401
+
402
+ # C.4 Ablation Experiment on Position-aware Instruction Embedding.
403
+
404
+ We learn a single embedding and obtain position-aware instruction embeddings for all positions using RoPE-1D, which are directly fused with token embeddings via an additive operation. Additionally, we explore two alternative approaches: learning a separate dense vector for each position and training a linear fusion layer to merge position-aware instruction embeddings with token embeddings. The experimental results are presented in Table C4.
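+
+ A minimal sketch of this additive fusion, assuming a standard interleaved rotary formulation (dimensions are illustrative):
+
+ ```python
+ import torch
+
+ def rope_1d(x, positions, base=10000.0):
+     """Rotate a single (dim,) embedding by position-dependent angles,
+     yielding a (seq_len, dim) tensor of position-aware copies."""
+     dim = x.size(-1)
+     inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
+     angles = positions[:, None].float() * inv_freq[None, :]  # (seq, dim/2)
+     x1, x2 = x[0::2], x[1::2]                                # interleaved pairs
+     rot_even = x1 * torch.cos(angles) - x2 * torch.sin(angles)
+     rot_odd = x1 * torch.sin(angles) + x2 * torch.cos(angles)
+     return torch.stack((rot_even, rot_odd), dim=-1).flatten(-2)
+
+ # One learned instruction embedding, fused additively with token embeddings
+ # (the "add" operation with RoPE-1D in Table C4).
+ seq_len, dim = 8, 64
+ instruction = torch.nn.Parameter(torch.randn(dim))
+ token_embeds = torch.randn(seq_len, dim)
+ fused = token_embeds + rope_1d(instruction, torch.arange(seq_len))
+ ```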
405
+
406
+ Our findings show that the additive operation with RoPE-1D achieves the best performance. The main reason is that individually learning a dense vector for each position makes it difficult for the model to converge. Furthermore, RoPE-1D's additive fusion scales efficiently to sequences of varying lengths, making it a more flexible and effective solution.
407
+
408
+ # D Prompt Templates and Instances
409
+
410
+ Table D1 shows a prompt template for creating Wikipedia-style paragraphs from triples about individuals, including an example for Abelervéh Vill.
411
+
412
+ Table D2 provides a template for generating questions aimed at uncovering the object entity in a given triple. It includes detailed instructions and an example output format in JSON.
413
+
414
+ Table D3 and Table D4 show the prompts for checking triples for potential conflicts or common-sense violations.
415
+
416
+ Table D5 describes a novel human subject, Paul Von Guillaume, from the Novel Human dataset. It details his biographical information, key achievements, and associated knowledge triples. It includes novel knowledge such as his birth, career, and personal life, and structured data like relations and questions pertaining to his life events.
417
+
418
+ # Prompt Template for novel Knowledge Generation
419
+
420
+ Please convert the following collections of triples about individuals into detailed, cohesive paragraphs. Each paragraph should resemble the style of Wikipedia biographical entries, focusing on integrating the triples directly into the text. It's crucial to incorporate each triple as it is presented, without paraphrasing or altering the original wording, and without drawing attention to any elements that might not be accurate. The narrative should flow naturally, engaging the reader with a formal tone and structured content akin to Wikipedia's encyclopedic profiles. The triples cover various aspects of each person's life, including personal background, achievements, relationships, and impact. Ensure that the integration of these triples into the paragraphs is seamless, maintaining the integrity of the original information.
421
+
422
+ Here are the triples:
423
+
424
+ {The list of triples}
425
+
426
+ Ensure the generated text is rich with detail, mimicking the depth and formal tone of a Wikipedia entry, to provide a thorough and engaging profile.
427
+
428
+ # An instance of {the list of triples}
429
+
430
+ 1.(Abelerveh Vill,award received,Maitre d'art)2.(Abelerveh Vill,languages spoken, written or signed,Spanish)3.(Abelerveh Vill,social media followers,2471)4.(Abelerveh Vill,country of citizenship,France) 5.(Abelerveh Vill,date of birth,1963-11-23 22:01:48) 6.(Abelerveh Vill,place of birth,Pamplona) 7.(Abelerveh Vill,Directory of Maitres d'art,maitre-art/herve-obligi) 8.(Abelerveh Vill,instrument,viola) 9.(Abelerveh Vill,educated at,Hochschule fur Musik Freiburg) 10.(Abelerveh Vill,employer,Berlin University of the Arts) 11.(Abelerveh Vill,field of work,visual arts) 12.(Abelerveh Vill,student of,Harmut Rohde) 13.(Abelerveh Vill,residence,New York City) 14.(Abelerveh Vill,place of death,Boulogne-Billancourt) 15.(Abelerveh Vill,sibling,Julian Grosvenor,Viscount Grey de Wilton) 16.(Abelerveh Vill,social classification,nobility) 17.(Abelerveh Vill,native language,French) 18.(Abelerveh Vill,student,Pierre LΓ©nert) 19.(Abelerveh Vill,record label,Virgin Music) 20.(Abelerveh Vill,copyright status as a creator,copyrights on works have expired) 21.(Abelerveh Vill,has works in the collection,Metropolitan Museum of Art) 22.(Abelerveh Vill,work location,Prague)
431
+
432
+ Table D1: Prompt template for novel knowledge generation and the corresponding instance.
433
+
434
+ # Prompt Template of General Question Generation
435
+
436
+ I'm looking forward to generating at least five question templates specifically designed to uncover information about the 'object' entity within a given triple of (subject, relation, object). Each question template should effectively probe for details that lead to the 'object' as an answer. For this task, I will provide the 'subject', the 'relation', and a description of the relation to help you understand the relationship between subject and object.
437
+
438
+ Here are the details:
439
+
440
+ Subject:
441
+
442
+ {subject}
443
+
444
+ Relation:
445
+
446
+ {relation}
447
+
448
+ Relation Description:
449
+
450
+ {description}
451
+
452
+ Objectives:
453
+
454
+ 1. Direct Information Retrieval: Each interrogative question is carefully crafted to directly solicit the entire "object" entity without intermediate steps or answers that are not the entire "object" itself.
455
+ 2. Clarity and Precision: Ensure the questions are clear, concise, and precisely targeted at uncovering the whole 'object' entity based on the given 'relation' and its description.
456
+ 3. Ensure that the question is an interrogative sentence, while avoiding types of questions such as Could, Can, Does, Do.
457
+ 4. Questions can only consist of subjects, relations, and at most descriptions of relations.
458
+ 5. Produce your output as JSON. The format should be:
459
+
460
+ {
461
+
462
+ "question1": "Where was [T] born?",
463
+ "question2": "What is the birth location of [T]?"
464
+
465
+ }
466
+
467
+ Table D2: Prompt template of general question generation for novel knowledge.
468
+
469
+ # Prompt Template of Triple-checking
470
+
471
+ ```txt
472
+ I have provided a list of triples (subject, predicate, object) concerning the attributes of a "Novel Human." The data you receive pertains to the attributes of this hypothetical entity, and it is important to identify any contradictions between these attribute values or any violations of common sense (e.g., values that are unrealistic or cannot logically coexist with each other). Please carefully review the given triples and determine if any of the attribute values contradict each other, or if any values deviate significantly from what would be expected for a human in reality. This includes checking for conflicts or inconsistencies within the same set of triples and ensuring that the provided information aligns with what is commonly understood about humans. If contradictions or unrealistic values are identified, please make the necessary corrections. If a triple is fundamentally inconsistent with other triples or violates common sense, mark it for deletion. The provided triples are as follows: {The list of triples} Instructions: Identify Contradictions: Check for contradictions between the attribute values of the "Novel Human." For example: Conflicting birth or death dates. Inconsistent or impossible combinations of attributes (e.g., a person listed as both alive and dead). Identify Common Sense Violations: Check for attribute values that are unrealistic or violate common sense, such as: An age that is not plausible (e.g., someone born in 1800 but participating in modern activities). Attributes related to height, weight, or achievements that would be physically or logically impossible. Modify: Adjust any attribute values that are unrealistic, illogical, or contradict the context of the "Novel Human." Delete: Remove any triples that cannot logically coexist with other provided data, violate common sense, or are inconsistent with the characteristics of the "Novel Human." Output Format: The response should consist of one single JSON object, containing all modifications or deletions. The key in the JSON object should be the original triple, and the value should be either: The modified triple (if the triple needs to be adjusted to correct contradictions or violations). "Delete" (if the triple should be removed due to contradictions or common sense violations). Here's the structure of the JSON file you should output: {"(original triple 1)" : "(modified triple 1)", "(original triple 2)" : "(modified triple 2)", "(original triple 3)" : "Delete", ... }
473
+ ```
474
+
475
+ Table D3: Prompt template of checking the potential conflict or common sense violation of triples.
476
+
477
+ # Prompt Example of Triple-checking
478
+
479
+ ```txt
480
+ I have provided a list of triples (subject, predicate, object) concerning the attributes of a "Novel Human." The data you receive pertains to the attributes of this hypothetical entity, and it is important to identify any contradictions between these attribute values or any violations of common sense (e.g., values that are unrealistic or cannot logically coexist with each other). Please carefully review the given triples and determine if any of the attribute values contradict each other, or if any values deviate significantly from what would be expected for a human in reality. This includes checking for conflicts or inconsistencies within the same set of triples and ensuring that the provided information aligns with what is commonly understood about humans. If contradictions or unrealistic values are identified, please make the necessary corrections. If a triple is fundamentally inconsistent with other triples or violates common sense, mark it for deletion. The provided triples are as follows: 1.(Paul Von Guillaume,place of birth,Cologne) 2.(Paul Von Guillaume, date of death,1963-11-25 00:17:56) 3.(Paul Von Guillaume,sport,auto racing) 4.(Paul Von Guillaume,award received,National Inventors Hall of Fame) 5.(Paul Von Guillaume,place of death,Monteagle) 6.(Paul Von Guillaume,languages spoken,written or signed,German) 7.(Paul Von Guillaume,participant in,24 Hours of Le Mans) 8.(Paul Von Guillaume,country of citizenship,United States of America) 9.(Paul Von Guillaume, date of birth,1874-06-15 07:12:58) 10.(Paul Von Guillaume,educated at,Northwestern University) 11.(Paul Von Guillaume,position held,Alderman of Corporation of the City of Adelaide) 12.(Paul Von Guillaume,residence,North Adelaide) 13.(Paul Von Guillaume, writing language,English) 14.(Paul Von Guillaume,copyright status as a creator,works protected by copyrights) 15.(Paul Von Guillaume,employer,Bonanza Air Lines) 16.(Paul Von Guillaume,place of burial,Memory Gardens Memorial Park) 17.(Paul Von Guillaume,height,316) 18.(Paul Von Guillaume,mass,100) 19.(Paul Von Guillaume,social media followers,130605) 20.(Paul Von Guillaume,different from,Joan Hubbard Wolf) 21.(Paul Von Guillaume,member of political party,National Fascist Party) 22.(Paul Von Guillaume,number of children,4) 23.(Paul Von Guillaume,sibling,Jim Hubbard) 24.(Paul Von Guillaume,owner of,I.H.Farm) 25.(Paul Von Guillaume,stepparent,Pavel Tykač) 26.(Paul Von Guillaume,native language,Portuguese) 27.(Paul Von Guillaume,pseudonym,Jojo la Moto) 28.(Paul Von Guillaume,member of sports team,Bernhard-Webber-Hartley) 29.(Paul Von Guillaume,number of matches played/races/start,44) 30.(Paul Von Guillaume,number of wins,0) 31.(Paul Von Guillaume,podium finishes,1) 32.(Paul Von Guillaume,points for,5) 33.(Paul Von Guillaume,pole positions,0) 34.(Paul Von Guillaume,sponsor,Red Bull GmbH) 35.(Paul Von Guillaume,country for sport,Italy) Instructions: Identify Contradictions: Check for contradictions between the attribute values of the "Novel Human." For example: Conflicting birth or death dates. Inconsistent or impossible combinations of attributes (e.g., a person listed as both alive and dead). Identify Common Sense Violations: Check for attribute values that are unrealistic or violate common sense, such as: An age that is not plausible (e.g., someone born in 1800 but participating in modern activities). Attributes related to height, weight, or achievements that would be physically or logically impossible. 
Modify: Adjust any attribute values that are unrealistic, illogical, or contradict the context of the "Novel Human." Delete: Remove any triples that cannot logically coexist with other provided data, violate common sense, or are inconsistent with the characteristics of the "Novel Human." Output Format: The response should consist of one single JSON object, containing all modifications or deletions. The key in the JSON object should be the original triple, and the value should be either: The modified triple (if the triple needs to be adjusted to correct contradictions or violations). "Delete" (if the triple should be removed due to contradictions or common sense violations). Here's the structure of the JSON file you should output: {"(original triple 1)" : "(modified triple 1)", "(original triple 2)" : "(modified triple 2)", "(original triple 3)" : "Delete", } Response of GPT-4 { ("Paul Von Guillaume,height,316") : "(Paul Von Guillaume,height,176)", ("Paul Von Guillaume,mass,100") : "(Paul Von Guillaume,mass,76)", ("Paul Von Guillaume, place of birth, Cologne") : "(Paul Von Guillaume, place of birth, Germany)", ("Paul Von Guillaume, place of death, Monteagle") : "(Paul Von Guillaume, place of death, United States of America)", ("Paul Von Guillaume,languages spoken,written or signed,German") : "Delete", ("Paul Von Guillaume, member of political party, National Fascist Party") : "Delete", ("Paul Von Guillaume,sponsor,Red Bull GmbH") : "Delete", ("Paul Von Guillaume,country for sport, Italy") : "Delete" }
481
+ ```
482
+
483
+ Table D4: Prompt example of triple checking and the response of GPT-4.
484
+
485
+ Table D5: A specific novel human subject in the NovelHuman dataset.
486
+
487
+ <table><tr><td colspan="2">A Novel Person</td></tr><tr><td>Name</td><td>Paul Von Guillaume</td></tr><tr><td>Triple</td><td>1.(Paul Von Guillaume,place of birth,Germany) 2.(Paul Von Guillaume, date of death,1963-11-25 00:17:56) 3.(Paul Von Guillaume, sport,auto racing) 4.(Paul Von Guillaume, award received,National Inventors Hall of Fame) 5.(Paul Von Guillaume, place of death,United States of America) 6.(Paul Von Guillaume, participant in,24 Hours of Le Mans) 7.(Paul Von Guillaume,country of citizenship,United States of America) 8.(Paul Von Guillaume, date of birth,1874-06-15 07:12:58) 9.(Paul Von Guillaume, educated at,Northwestern University) 10.(Paul Von Guillaume, position held,Alderman of Corporation of the City of Adelaide) 11.(Paul Von Guillaume,residence,North Adelaide) 12.(Paul Von Guillaume, writing language,English) 13.(Paul Von Guillaume,copyright status as a creator,works protected by copyrights) 14.(Paul Von Guillaume, employer,Bonanza Air Lines) 15.(Paul Von Guillaume, place of burial,Memory Gardens Memorial Park) 16.(Paul Von Guillaume,height,176) 17.(Paul Von Guillaume,mass,76) 18.(Paul Von Guillaume,social media followers,130605) 19.(Paul Von Guillaume,different from,Joan Hubbard Wolf) 20.(Paul Von Guillaume,number of children,4) 21.(Paul Von Guillaume,sibling,Jim Hubbard) 22.(Paul Von Guillaume,owner of,I.H.Farm) 23.(Paul Von Guillaume,stepparent,Pavel Tykač) 24.(Paul Von Guillaume,native language,Portuguese) 25.(Paul Von Guillaume,pseudonym,Jojo la Moto) 26.(Paul Von Guillaume,member of sports team,Bernhard-Webber-Hartley) 27.(Paul Von Guillaume,number of matches played/races/starts,44) 28.(Paul Von Guillaume,number of wins,0) 29.(Paul Von Guillaume,podium finishes,1) 30.(Paul Von Guillaume,points for,5) 31.(Paul Von Guillaume,pole positions,0)</td></tr></table>
488
+
489
+
490
+
491
+
492
+
493
+
494
+
495
+
496
+
497
+ <table><tr><td colspan="2">A Novel Person</td></tr><tr><td>Generated knowledge</td><td>Paul Von Guillaume (born June 15, 1874 – November 25, 1963) was a German-born American auto racing driver, engineer, and public official. Over the course of his diverse career, he made significant contributions to both the automotive and aviation industries, while also serving in municipal government. He was recognized for his innovations with an induction into the National Inventors Hall of Fame. Von Guillaume was born in Germany and later became a citizen of the United States of America. He pursued higher education at Northwestern University, which laid the foundation for his varied professional endeavors. He established his residence in North Adelaide, where he became actively involved in public service, holding the position of Alderman of the Corporation of the City of Adelaide. His professional career extended into the aviation sector, where he worked for Bonanza Air Lines. In addition to his technical and administrative pursuits, Von Guillaume had a deep passion for auto racing. He was a participant in the prestigious 24 Hours of Le Mans and competed as a member of the Bernhard-Webber-Hartley racing team. Over his career, he took part in 44 races, achieving one podium finish and earning five points in competitive racing. Despite his dedication to the sport, he did not secure any victories or pole positions. His height and weight were recorded as 176 cm and 76 kg, respectively. He was widely known within the racing community by his pseudonym, Jojo la Moto. Beyond his career in motorsports and governance, Von Guillaume was also an individual of cultural and intellectual depth. His native language was Portuguese, though he was proficient in English, which he used for his writing. His creative works remained protected by copyrights, ensuring their intellectual integrity. He was the owner of I.H. Farm, suggesting an involvement in agriculture or rural affairs. Von Guillaume maintained a complex personal life. He had four children and was a sibling to Jim Hubbard. His stepparent was Pavel Tykač, though details regarding their relationship remain limited. He was noted to be distinct from Joan Hubbard Wolf, possibly in response to historical misidentifications. At the time of his passing, Von Guillaume resided in the United States, where he died on November 25, 1963. He was laid to rest at Memory Gardens Memorial Park. Despite his passing, his legacy in engineering, motorsports, and civic leadership endures, marked by both his professional achievements and his significant social media presence, where he had 130,605 followers.</td></tr><tr><td>Relation and the corresponding QA</td><td>&quot;place of birth&quot;:{ &quot;value&quot;: &quot;Germany&quot;, &quot;questions&quot;: [ [&quot;What is the most specific known birth location of Paul Von Guillaume?&quot;, &quot;Ger- many&quot;] ... ] } ...</td></tr></table>
acquisitionandapplicationofnovelknowledgeinlargelanguagemodels/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:547339633580ad56a74bff5db5b66bf84f7f526b061211b5147c5ccc26b55d1d
3
+ size 1262854
acquisitionandapplicationofnovelknowledgeinlargelanguagemodels/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a1d5d307226583360c3138fee3334ade75735a7060cf285328deb1e1754bb339
3
+ size 570413
activatingdistributedvisualregionwithinllmsforefficientandeffectivevisionlanguagetrainingandinference/32a3e3ce-4b95-43bf-ae7d-0a27c910f952_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:08b8894f47a66a6060090778cf1771fef9251ff066f091ad3fab09f38d7e5eb4
3
+ size 83516
activatingdistributedvisualregionwithinllmsforefficientandeffectivevisionlanguagetrainingandinference/32a3e3ce-4b95-43bf-ae7d-0a27c910f952_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:17c354ba0b60ad199ab9314960405977aaab336f4ffbfc6b02cc63e4fa35f3d1
3
+ size 101496
activatingdistributedvisualregionwithinllmsforefficientandeffectivevisionlanguagetrainingandinference/32a3e3ce-4b95-43bf-ae7d-0a27c910f952_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:509856a0cdf62efe3ab21f5a75dc09241777ef8e173fe7833c5d6161ebcd6652
3
+ size 1048216
activatingdistributedvisualregionwithinllmsforefficientandeffectivevisionlanguagetrainingandinference/full.md ADDED
@@ -0,0 +1,293 @@
 
 
 
 
1
+ # Activating Distributed Visual Region within LLMs for Efficient and Effective Vision-Language Training and Inference
2
+
3
+ Siyuan Wang $^{1*}$ , Dianyi Wang $^{2,3*}$ , Chengxing Zhou $^{4*}$ , Zejun Li $^{2}$ , Zhihao Fan $^{5}$ , Xuanjing Huang $^{2}$ , Zhongyu Wei $^{2,3\dagger}$
4
+
5
+ <sup>1</sup>University of Southern California, <sup>2</sup>Fudan University,
6
+
7
+ <sup>3</sup>Shanghai Innovation Institute, <sup>4</sup>Sun Yat-sen University, <sup>5</sup>Alibaba Inc.
8
+
9
+ sw_641@usc.edu; dywang24@m.fudan.edu.cn; zhouchx33@mail2.sysu.edu.cn
10
+
11
+ # Abstract
12
+
13
+ Large Vision-Language Models (LVLMs) typically learn visual capacity through visual instruction tuning, involving updates to both a projector and their LLM backbones. Inspired by the concept of a visual region in the human brain, we investigate the existence of an analogous visual region within LLMs that functions as a cognitive core, and explore the potential of efficient training of LVLMs via selective layer tuning. Using Bunny-Llama-3-8B-V for detailed analysis and three other LVLMs for validation across diverse visual and textual tasks, we find that selectively updating $25\%$ of LLM layers, when sparsely and uniformly distributed, can preserve nearly $99\%$ of visual performance and maintain or improve textual task results, while effectively reducing training time. Based on this targeted training approach, we further propose a novel visual region-based pruning paradigm, removing non-critical layers outside the visual region, which can achieve minimal performance loss. This study offers an effective and efficient strategy for LVLM training and inference by activating a layer-wise visual region within LLMs, which proves consistently effective across different models<sup>1</sup>.
14
+
15
+ # 1 Introduction
16
+
17
+ Large Vision-Language Models (LVLMs) (Li et al., 2023c; Zhu et al., 2023; Bai et al., 2023; Liu et al., 2024) have attracted increasing research interest for interpreting and interacting with the world through both visual and linguistic channels. Existing LVLMs generally utilize advanced Large Language Models (LLMs), like FlanT5 (Chung et al., 2022) and Vicuna (Chiang et al., 2023), as their cognitive core, and align visual features from visual encoders with LLMs' knowledge and reasoning abilities. This alignment has demonstrated remarkable performance across diverse visual tasks (Lu et al., 2022; Liu et al., 2023b; Fu et al., 2024).
18
+
19
+
20
+
21
+ LVLMs are primarily trained through visual instruction tuning (Liu et al., 2023a), which involves training both a projector and the LLM on visual instruction datasets, with optional updates to the visual encoder. Despite its efficacy, fully tuning all LLM layers remains computationally costly, even when using efficient strategies like Low-Rank Adaptation (LoRA) (Hu et al., 2021) and its quantized variant (QLoRA) (Dettmers et al., 2024). Additionally, extensive multimodal training risks degrading LLMs' pre-trained linguistic knowledge and reasoning capabilities (Dai et al., 2024; Agrawal et al., 2024), as evidenced by LVLMs' increased perplexity on textual tasks compared to their LLM backbone in the purple section of Fig. 1.
22
+
23
+ Inspired by specialized visual regions in the human brain (Grill-Spector and Malach, 2004) and LLMs' brain-like versatility across tasks, we propose an analogous concept of a visual region within LLMs. We hypothesize that visual alignment to LLMs can activate only this specific visual region while preserving LLMs' core language abilities, potentially manifesting as a layer-wise structure given the layer redundancy in LLMs (Men et al., 2024; Gromov et al., 2024). We further analyze LVLMs' layer redundancy in Fig. 1 (green part), which shows that reverting certain layers of an LVLM to its backbone LLM's parameters minimally impacts downstream visual performance. This suggests certain layers within LLMs are non-essential for visual tasks, thereby supporting our hypothesis.
24
+
25
+ Although layer-wise freezing techniques (Zhang et al., 2024b) enable efficient LLM fine-tuning by adapting later layers for specific language tasks, they cannot be directly applied to visual tasks, because visual alignment requires visual perception capabilities beyond textual understanding and reasoning.
26
+
27
+ <table><tr><td rowspan="2">Model Variants</td><td colspan="2">Visual</td><td colspan="2">Textual</td></tr><tr><td>OCRVQA</td><td>DocVQA</td><td>WikiText</td><td>Pile-10k</td></tr><tr><td>LLaVA</td><td>2.43</td><td>30.55</td><td>11.44</td><td>29.58</td></tr><tr><td>\( LLaVA_r \) (layer 0~7)</td><td>1.87</td><td>38.49 [↑]</td><td>11.37 [↑]</td><td>29.19 [↑]</td></tr><tr><td>\( LLaVA_r \) (layer 8~15)</td><td>1.93</td><td>32.35 [↑]</td><td>11.38 [↑]</td><td>29.21 [↑]</td></tr><tr><td>\( LLaVA_r \) (layer 16~23)</td><td>2.18</td><td>16.47</td><td>11.35 [↑]</td><td>29.33 [↑]</td></tr><tr><td>\( LLaVA_r \) (layer 24~31)</td><td>2.11</td><td>17.47</td><td>11.36 [↑]</td><td>29.27 [↑]</td></tr><tr><td>Vicuna (all layers)</td><td>80.75</td><td>175.10</td><td>11.32</td><td>28.38</td></tr></table>
28
+
29
+ ![](images/0e8f0715b4b59f4ba7393a8f3e61cdd3bc19aad1bc2763d47b4cbd18b9a1525a.jpg)
30
+ Figure 1: Left: Perplexity of LLaVA with selected layers (in parentheses) reverted to Vicuna parameters on visual and textual tasks. Arrows indicate perplexity increases relative to LLaVA (visual tasks) and Vicuna (textual tasks). (1) Perplexity increases in textual tasks after multimodal training compared to the LLM backbone, indicating multimodal training compromises LLMs' linguistic abilities. (2) Perplexity decreases in visual tasks when reverting certain layers (e.g., reverting layers 16-23 or 24-31 in LLaVA), suggesting these layers are redundant. Right: Accuracy of LLaVA-1.5-7B when pruning certain layers based on angular distance scores (Gromov et al., 2024).
31
+
32
+ eter localization for visual tasks, it remains highly task-specific and data-dependent, limiting its generalizability to versatile multimodal learning and neglecting the preservation of linguistic capabilities. To bridge this gap, we identify a general-purpose visual region within LLMs for efficient LVLM training across diverse tasks without diminishing linguistic performance. Specifically, we aim to investigate two key questions: (1) Where is this visual region located within LLMs? (2) What is the necessary scale of layers in this visual region to ensure effective and efficient LVLMs training?
33
+
34
+ To this end, we embark on empirical experiments with Bunny-Llama-3-8B-V (He et al., 2024) across diverse visual tasks. Our findings indicate that sparsely and uniformly distributed layers within LLMs are the optimal position for visual learning while simultaneously preserving textual performance. This strategic visual region selection also outperforms previous layer importance strategies. Notably, updating only $25\%$ of layers achieves nearly $99\%$ performance on visual tasks while effectively saving training time. We further validate this conclusion with LLaVA-1.5-7B, LLaVA-1.5-13B (Liu et al., 2023a) and Bunny-Phi3-mini-4B-V, demonstrating its consistent applicability across varying models and parameter scales. Specifically, we achieve time reductions of nearly $23\%$ for LLaVA-1.5-7B and LLaVA-1.5-13B, and $12\%$ for Bunny-Llama-3-8B-V.
35
+
36
+ Additionally, as shown in Figure 1 (right), we find that commonly used layer-pruning strategies are ineffective for LVLMs, with even minimal layer removal causing significant performance degradation. In response, we propose a visual region-based pruning paradigm that selectively prunes less-important layers outside the visual region after targeted training. Specifically, we follow the angular-distance-based layer importance strategy (Gromov et al., 2024) outside the visual region, and experimental results demonstrate that our paradigm effectively minimizes performance decline. Overall, our work highlights promising potential for more efficient LVLM training and inference. Notably, our approach is flexibly complementary to other efficient training techniques, such as LoRA, as demonstrated in our experiments.
37
+
38
+
39
+
40
+ # 2 Preliminary of LVLMs
41
+
42
+ # 2.1 Model Architecture
43
+
44
+ Mainstream LVLMs consist of three components: an LLM, a visual encoder, and a projector or connection module, aiming to effectively leverage the capabilities of both the pre-trained visual model and the LLM. The visual encoder extracts visual features from images, commonly utilizing pre-trained models such as CLIP ViT-L/14 (Radford et al., 2021). The connection module then projects these extracted features into a word-embedding space understandable by LLMs, commonly employing techniques such as linear projection (Tsimpoukelli et al., 2021), Q-Former (Li et al., 2023c), or cross-attention layers (Alayrac et al., 2022). This enables LVLMs based on LLM cores, like Vicuna (Chiang et al., 2023), FlanT5 (Chung et al., 2022), and LLaMA (Touvron et al., 2023), to process visual information in a similar manner to text.
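+
+ As a minimal sketch of the simplest such connection module (dimensions are illustrative, e.g., CLIP ViT-L/14 features projected into a Llama-scale embedding space):
+
+ ```python
+ import torch.nn as nn
+
+ class LinearProjector(nn.Module):
+     """Connection module: map visual-encoder features into the LLM's
+     word-embedding space so image patches can be consumed like tokens."""
+     def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
+         super().__init__()
+         self.proj = nn.Linear(vision_dim, llm_dim)
+
+     def forward(self, visual_feats):       # (batch, n_patches, vision_dim)
+         return self.proj(visual_feats)     # (batch, n_patches, llm_dim)
+ ```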
45
+
46
+ # 2.2 Model Training
47
+
48
+ The training of LVLMs can be broadly divided into two phases: pre-training and supervised fine-tuning. Unlike LLMs, both phases utilize supervised image-text pairs for visual instruction tuning. Pre-training primarily uses large-scale captioning instruction data, guiding the model to briefly describe images. This phase enables the model to interpret image content, usually with the LLM's weights frozen and the visual encoder optionally updated. Some works, such as Qwen-VL (Bai et al., 2023), expand the pre-training to include additional tasks like visual question answering, updating the LLM component accordingly. Supervised fine-tuning employs high-quality instruction data to enhance the LVLMs' ability to follow diverse visual instructions and engage in conversations. The visual encoder in this stage is typically kept static while the LLM is tuned. During both stages, the projector is consistently updated, ensuring the model effectively bridges visual and textual data.
49
+
50
+ # 3 Experimental Setup
51
+
52
+ In this study, we conduct empirical experiments on Bunny-Llama-3-8B-V to investigate our hypothesis regarding the existence of a specific visual region within LLMs (Sec. $4.1\sim 4.3$ ), and apply our findings on LLaVA-1.5-7B, LLaVA-1.5-13B and Bunny-Phi3-mini-4B-V to validate its general applicability across different models (Sec. 5.1).
53
+
54
+ # 3.1 LVLM Implementation
55
+
56
+ We employ Bunny-Llama-3-8B-V for investigation, which builds upon the 32-layer Llama3-8B (Touvron et al., 2023), and LLaVA-1.5-7B/13B, built on the 32/40-layer Vicuna-1.5-7B/13B (Chiang et al., 2023), Bunny-Phi3-mini-4B-V based on 32-layer Phi-3-mini for validation. Since the LLM components remain frozen during pre-training, we focus on the supervised fine-tuning stage using 695K and 665K language-image instruction-following instances for Bunny and LLaVA. Considering computational constraints, we use LoRA (Hu et al., 2021), highlighting that our approach is complementary to other efficient training methods. Additional implementation details are available in the Appendix.
57
+
58
+ # 3.2 Evaluation Tasks
59
+
60
+ Our investigation spans 10 visual tasks involving both perception and cognition, to comprehensively evaluate models and examine our hypothesis.
61
+
62
+ Visual perception tasks assess models' ability to interpret and understand surface-level visual features, like object identification and scene recognition, mirroring the human sensory perception process. (1) OCRVQA (Mishra et al., 2019): VQA by reading text in images through optical character recognition (OCR). We follow Bai et al. (2023) for accuracy calculation on the test set, allowing a margin of error. (2) DocVQA (Mathew et al., 2021): VQA by interpreting document images. We use the same evaluation method and metric as OCRVQA on the validation set. (3) RefCOCOg (Yu et al., 2016): A variant of RefCOCO (Kazemzadeh et al., 2014) featuring more complex object referring expressions. We assess reference expression generation on the test set using the Intersection over Union metric. (4) TDIUC (Kafle and Kanan, 2017): covering 12 categories, primarily perception tasks (e.g., object presence, counting, recognition) with some cognition tasks (e.g., positional reasoning, affordance). Accuracy is measured on the validation set.
63
+
64
+ Visual cognition tasks require deeper reasoning based on visual stimuli, drawing on prior knowledge and decision-making abilities learned within LLMs, mirroring human cognitive thinking and manipulation. (5) MMBench (Liu et al., 2023b): focuses on cognition tasks, with some fine-grained perception tasks requiring knowledge and reasoning. For model variant comparison, we report accuracy on the dev subset instead of submitting to the evaluation server. (6) GQA (Hudson and Manning, 2019): real-world visual reasoning and compositional question answering. (7) ScienceQA (Lu et al., 2022): sourced from elementary and high school science curricula, requiring external knowledge and reasoning. We evaluate only image-based questions. (8) TextVQA (Singh et al., 2019): requiring reasoning about text in images. (9) MMMU (Yue et al., 2024): covering math, science, and commonsense reasoning with accuracy calculated. (10) SEED-IMG: The image-based QA from SEED-Bench (Li et al., 2023a).
65
+
66
+ # 4 Visual Region Investigation
67
+
68
+ We first analyze the position and scale of the layer-wise visual region within the LLM core of Bunny-Llama-3-8B-V, to answer the following two questions.
69
+
70
+ <table><tr><td>Model Version</td><td>OCRVQA</td><td>DocVQA</td><td>RefCOCOg</td><td>TDIUC</td><td>MMBench</td><td>GQA</td><td>ScienceQA</td><td>TextVQA</td><td>MMMU</td><td>SEED-IMG</td><td>Avg</td></tr><tr><td>All layers</td><td>64.26%</td><td>29.45%</td><td>50.12%</td><td>83.84%</td><td>74.74%</td><td>64.29%</td><td>79.28%</td><td>62.11%</td><td>40.6%</td><td>73.13%</td><td>62.18%</td></tr><tr><td colspan="12">Heuristic Selections</td></tr><tr><td>Sparse &amp; Uniform</td><td>62.65%</td><td>29.51%</td><td>48.33%</td><td>83.68%</td><td>73.88%</td><td>63.68%</td><td>78.78%</td><td>62.43%</td><td>42.1%</td><td>72.61%</td><td>61.82%</td></tr><tr><td>Consecutive Lower</td><td>61.38%</td><td>22.47%</td><td>46.49%</td><td>83.27%</td><td>73.63%</td><td>62.33%</td><td>75.26%</td><td>62.26%</td><td>42.6%</td><td>72.66%</td><td>60.24%</td></tr><tr><td>Consecutive Lower-middle</td><td>62.54%</td><td>26.13%</td><td>48.17%</td><td>83.77%</td><td>72.51%</td><td>62.81%</td><td>77.14%</td><td>60.96%</td><td>38.8%</td><td>72.16%</td><td>60.50%</td></tr><tr><td>Consecutive Upper-middle</td><td>62.32%</td><td>28.06%</td><td>43.12%</td><td>83.40%</td><td>70.27%</td><td>61.28%</td><td>78.83%</td><td>59.33%</td><td>38.3%</td><td>70.45%</td><td>59.54%</td></tr><tr><td>Consecutive Top</td><td>60.48%</td><td>26.47%</td><td>39.92%</td><td>83.22%</td><td>67.96%</td><td>60.30%</td><td>77.54%</td><td>58.71%</td><td>37.0%</td><td>71.00%</td><td>57.26%</td></tr><tr><td>Hybrid Top-Lower</td><td>57.63%</td><td>29.76%</td><td>41.79%</td><td>83.26%</td><td>72.25%</td><td>62.71%</td><td>77.99%</td><td>62.74%</td><td>40.1%</td><td>72.59%</td><td>60.09%</td></tr><tr><td colspan="12">Importance-based Selections</td></tr><tr><td>Image Attention Score</td><td>63.65%</td><td>24.53%</td><td>43.62%</td><td>83.90%</td><td>72.59%</td><td>62.82%</td><td>77.59%</td><td>61.99%</td><td>39.3%</td><td>72.29%</td><td>60.23%</td></tr><tr><td>Parameter Change Ratio</td><td>63.94%</td><td>26.94%</td><td>47.67%</td><td>83.88%</td><td>73.54%</td><td>63.21%</td><td>78.68%</td><td>61.73%</td><td>42.0%</td><td>72.85%</td><td>61.45%</td></tr><tr><td>Block Influence Score</td><td>62.38%</td><td>28.45%</td><td>46.37%</td><td>83.73%</td><td>71.13%</td><td>61.93%</td><td>77.34%</td><td>59.93%</td><td>38.9%</td><td>71.66%</td><td>60.18%</td></tr><tr><td>Multimodal BI Score</td><td>61.48%</td><td>28.80%</td><td>46.68%</td><td>83.74%</td><td>73.02%</td><td>63.23%</td><td>77.24%</td><td>62.23%</td><td>41.0%</td><td>72.25%</td><td>60.97%</td></tr><tr><td>Angular Distance</td><td>60.95%</td><td>27.71%</td><td>46.74%</td><td>83.49%</td><td>73.88%</td><td>62.11%</td><td>77.14%</td><td>62.76%</td><td>39.9%</td><td>73.01%</td><td>60.77%</td></tr></table>
71
+
72
+ Table 1: Performance comparison of Bunny-LLaMA-3-8B-V tuned with different layer selection methods (8 layers). Bold numbers indicate the best performance in each column (excluding "all layers").
73
+
74
+ # 4.1 Where are visual region layers located within LLMs for effective visual learning?
75
+
76
+ To identify the optimal positioning of the visual region in LLMs for effective and efficient visual learning, we re-train Bunny-Llama-3-8B-V by updating $25\%$ of its layers (8 layers) ${}^{2}$ under various selection configurations. As pre-training does not involve LLM optimization, we focus on supervised fine-tuning, starting from the pre-trained checkpoint. We specifically explore different positional selection strategies as detailed below.
77
+
78
+ - Heuristic Layer Selection (1) We intuitively hypothesize that tuning sparsely and uniformly distributed layers (0,4,8,12,18,22,26,30) preserves LLMs' existing knowledge and reasoning abilities while enabling visual learning (a code sketch of this selective-freezing setup follows this list). (2) We experiment with tuning consecutive 8-layer blocks at different positions in the LLM: lower layers $(0\sim 7)$ , lower-middle layers $(8\sim 15)$ , upper-middle layers $(16\sim 23)$ , and top layers $(24\sim 31)$ , the latter being a common practice in efficient domain-specific fine-tuning (Liao et al., 2024). (3) We test a hybrid of lower and top layers $(0\sim 3, 28\sim 31)$ .
79
+ - Importance-based Layer Selection We compare layer selection strategies based on different importance metrics. (1) Image Attention Score: we compute the average attention score over all image tokens at each layer to gauge the layer's affinity for image information, and select the top 8 layers with the highest scores (1,2,3,4,5,27,29,31). (2) Parameter Change Ratio (Zhao et al., 2023): 8 layers with the highest relative parameter change ratios (averaged over all parameters in each layer) in Bunny-Llama-3-8B-V relative to its Llama backbone are selected (0,2,9,12,23,24,25,26). (3) Block Influence (BI) Score (Men et al., 2024): using the Flickr30k dataset, we compute hidden-state transformations at each layer as the BI score and select the 8 layers with the highest scores (12,15,18,25,27,29,30,31). (4) Multimodal BI Score: we propose a multimodal variant that averages the hidden-state transformations of visual tokens and textual tokens separately, and select the 8 layers with the highest scores (0,1,2,3,4,5,9,31). (5) Angular Distance Score (Gromov et al., 2024): the top 8 layers with the highest angular distances between consecutive layer inputs are selected (0,1,2,3,5,6,7,8). Detailed calculations for these metrics are provided in Appendix A.
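+
+ To make the heuristic concrete, below is a minimal sketch (not the authors' released code) of selecting a sparse, uniform layer subset and freezing everything else before fine-tuning; the helper names and the assumption that the decoder blocks are exposed as an `nn.ModuleList` are ours. Note that a purely uniform stride yields (0,4,8,12,16,20,24,28), which differs slightly from the 8-layer selection reported above.
+
+ ```python
+ import torch.nn as nn
+
+ def select_uniform_layers(num_layers: int, k: int) -> list[int]:
+     """Pick k layer indices spread uniformly across the stack."""
+     stride = num_layers / k
+     return sorted({round(i * stride) for i in range(k)})
+
+ def freeze_outside_visual_region(layers: nn.ModuleList, trainable: set[int]) -> None:
+     """Freeze every decoder layer whose index is not in `trainable`."""
+     for idx, layer in enumerate(layers):
+         for p in layer.parameters():
+             p.requires_grad = idx in trainable
+
+ print(select_uniform_layers(32, 8))  # -> [0, 4, 8, 12, 16, 20, 24, 28]
+ ```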
82
+
83
+ The results are shown in Table 1. We observe that tuning sparsely and uniformly distributed layers achieves the best overall performance across perception and cognition tasks, closely matching the all-layers upper bound. In contrast, consecutive layers generally underperform, likely due to limited diversity in similar representations across adjacent layers (Kornblith et al., 2019), which hinders adaptability to various tasks. This further underscores the superiority of sparsely and uniformly distributed layers. Notably, tuning top layers yields the worst performance, deviating from the conventional practice in domain-specific fine-tuning, where the last few layers are typically adjusted for downstream tasks (Liao et al., 2024). This highlights a significant distinction between adapting to
84
+
85
+ <table><tr><td>Tuned Layers</td><td>OCRVQA</td><td>DocVQA</td><td>RefCOCOg</td><td>TDIUC</td><td>MMBench</td><td>GQA</td><td>ScienceQA</td><td>TextVQA</td><td>MMMU</td><td>SEED-IMG</td><td>Avg</td></tr><tr><td>32 layers</td><td>64.26%</td><td>29.45%</td><td>50.12%</td><td>83.84%</td><td>74.74%</td><td>64.29%</td><td>79.28%</td><td>62.11%</td><td>40.6%</td><td>73.13%</td><td>62.18%</td></tr><tr><td>16 layers</td><td>62.42%</td><td>26.43%</td><td>49.15%</td><td>84.04%</td><td>74.83%</td><td>64.10%</td><td>78.93%</td><td>62.96%</td><td>42.6%</td><td>72.75%</td><td>61.82%(99.42%)</td></tr><tr><td>8 layers</td><td>62.65%</td><td>29.51%</td><td>48.33%</td><td>83.68%</td><td>73.88%</td><td>63.68%</td><td>78.78%</td><td>62.43%</td><td>42.1%</td><td>72.61%</td><td>61.78%(99.36%)</td></tr><tr><td>6 layers</td><td>62.25%</td><td>29.76%</td><td>47.71%</td><td>84.01%</td><td>75.00%</td><td>62.93%</td><td>77.54%</td><td>62.92%</td><td>40.6%</td><td>72.67%</td><td>61.55%(98.99%)</td></tr><tr><td>4 layers</td><td>62.40%</td><td>28.89%</td><td>46.00%</td><td>83.99%</td><td>73.71%</td><td>62.66%</td><td>77.69%</td><td>62.74%</td><td>39.2%</td><td>72.14%</td><td>60.94%(98.01%)</td></tr><tr><td>2 layers</td><td>57.96%</td><td>28.49%</td><td>44.67%</td><td>83.15%</td><td>72.68%</td><td>61.00%</td><td>78.48%</td><td>60.35%</td><td>40.8%</td><td>72.35%</td><td>60.00%(96.49%)</td></tr><tr><td>1 layer</td><td>53.68%</td><td>24.33%</td><td>38.47%</td><td>82.92%</td><td>68.64%</td><td>59.19%</td><td>77.69%</td><td>58.32%</td><td>37.4%</td><td>70.69%</td><td>57.14%(91.89%)</td></tr></table>
86
+
87
+ Table 2: Performance comparison of Bunny-Llama-3-8B-V fine-tuned with different numbers of layers. Bold numbers represent the best performance in each column. Values in parentheses denote the percentage relative to the performance achieved by tuning all layers.
88
+
89
+ ![](images/c332b08813fe272407872eaa7c0475a3d42c25f1e730202ac369d0127298a9ed.jpg)
90
+ Figure 2: Performance variation of the re-trained Bunny-Llama-3-8B-V model across different training data scales during the supervised fine-tuning stage, with tuning varying number of layers. Dashed lines indicate $98\%$ of the performance achieved by tuning all layers with the corresponding training data scale.
91
+
92
+ new modalities and new downstream domains.
93
+
94
+ While some importance-based selections, such as the parameter change ratio, yield close performance, all importance-based methods operate post hoc, requiring a fully trained model to compute importance metrics for layer selection. This makes them primarily suitable for inference; applying them during LVLM training incurs significantly higher computational costs. In contrast, our heuristic method is training-free, allowing for greater flexibility and direct transferability across different models, which enhances its practical applicability. We compare against importance-based selections to show that our sparsely and uniformly distributed layers outperform even these post-hoc strategies while also simplifying the process.
95
+
96
+ # 4.2 What is the necessary scale of layers for effective and efficient LVLMs training?
97
+
98
+ To investigate the necessary scale of this visual region for LVLMs to receive visual signals and align them with linguistic features, we re-train Bunny-Llama-3-8B-V by updating varying numbers of layers. We experiment with configurations of 32, 16, 8, 6, 4, 2, and 1 layers, with all selected layers uniformly distributed across all layers $^{3}$ . This selection strategy is based on our finding that sparsely and uniformly distributed layers are the optimal positions for effective visual learning.
101
+
102
+ The results of tuning varying scales of layers on visual perception and cognition tasks are summarized in Table 2. Tuning $20\sim 25\%$ of the layers (6 and 8 layers) retains approximately $98\%$ of the performance achieved by tuning all LLM layers of Bunny-Llama-3-8B-V, with $25\%$ (8 layers) preserving up to $99\%$ . However, updating fewer than 4 layers leads to a significant performance drop, particularly on perception tasks that rely heavily on visual interpretation, highlighting the necessity of tuning at least $12.5\%$ of the layers (4 layers) for effective visual alignment.
103
+
104
+ <table><tr><td>Tuned Layers</td><td>OCRVQA</td><td>DocVQA</td><td>RefCOCOg</td><td>TDIUC</td><td>MMBench</td><td>GQA</td><td>ScienceQA</td><td>TextVQA</td><td>MMMU</td><td>SEED-IMG</td><td>Avg</td></tr><tr><td colspan="12">LLaVA-1.5-7B</td></tr><tr><td>32 layers</td><td>61.51%</td><td>19.46%</td><td>49.01%</td><td>83.40%</td><td>66.67%</td><td>62.98%</td><td>68.47%</td><td>58.19%</td><td>35.3%</td><td>67.52%</td><td>57.25%</td></tr><tr><td>16 layers</td><td>64.01%</td><td>20.75%</td><td>48.02%</td><td>83.47%</td><td>64.00%</td><td>62.43%</td><td>67.53%</td><td>58.27%</td><td>35.4%</td><td>67.22%</td><td>57.11%(99.76%)</td></tr><tr><td>8 layers</td><td>62.19%</td><td>21.10%</td><td>47.71%</td><td>83.10%</td><td>63.92%</td><td>61.60%</td><td>68.17%</td><td>57.35%</td><td>34.6%</td><td>67.23%</td><td>56.70%(99.04%)</td></tr><tr><td>6 layers</td><td>61.39%</td><td>22.84%</td><td>46.54%</td><td>83.31%</td><td>61.77%</td><td>61.08%</td><td>68.32%</td><td>56.19%</td><td>33.2%</td><td>65.69%</td><td>56.04%(97.87%)</td></tr><tr><td>4 layers</td><td>63.28%</td><td>21.01%</td><td>43.47%</td><td>83.14%</td><td>60.82%</td><td>60.48%</td><td>67.97%</td><td>54.48%</td><td>33.8%</td><td>64.08%</td><td>55.25%(96.51%)</td></tr><tr><td>2 layers</td><td>54.54%</td><td>19.10%</td><td>41.90%</td><td>81.47%</td><td>57.22%</td><td>57.38%</td><td>65.84%</td><td>53.27%</td><td>33.7%</td><td>63.19%</td><td>52.76%(92.16%)</td></tr><tr><td>1 layer</td><td>53.16%</td><td>16.96%</td><td>33.29%</td><td>81.20%</td><td>51.89%</td><td>55.83%</td><td>64.50%</td><td>45.51%</td><td>30.1%</td><td>57.64%</td><td>49.01%(85.61%)</td></tr><tr><td colspan="12">LLaVA-1.5-13B</td></tr><tr><td>40 layers</td><td>67.60%</td><td>25.19%</td><td>50.26%</td><td>83.61%</td><td>68.38%</td><td>63.29%</td><td>71.64%</td><td>60.21%</td><td>37.2%</td><td>68.70%</td><td>59.61%</td></tr><tr><td>10 layers</td><td>65.17%</td><td>23.56%</td><td>48.27%</td><td>83.57%</td><td>66.58%</td><td>62.01%</td><td>70.75%</td><td>59.13%</td><td>36.9%</td><td>67.39%</td><td>58.33%(97.85%)</td></tr><tr><td>9 layers</td><td>66.47%</td><td>23.65%</td><td>49.29%</td><td>83.74%</td><td>65.61%</td><td>62.31%</td><td>72.14%</td><td>59.71%</td><td>37.7%</td><td>67.29%</td><td>58.80%(98.64%)</td></tr><tr><td colspan="12">Bunny-Phi3-mini-4B-V</td></tr><tr><td>32 layers</td><td>63.62%</td><td>29.19%</td><td>48.07%</td><td>83.69%</td><td>72.94%</td><td>62.35%</td><td>76.75%</td><td>60.64%</td><td>42.4%</td><td>72.09%</td><td>61.17%</td></tr><tr><td>8 layers</td><td>61.96%</td><td>27.21%</td><td>46.95%</td><td>83.11%</td><td>71.74%</td><td>61.38%</td><td>75.71%</td><td>59.69%</td><td>42.3%</td><td>71.53%</td><td>60.16%(98.35%)</td></tr></table>
105
+
106
+ Table 3: Performance of LVLMs with varying LLM backbones and parameter scales tuned with different numbers of layers. Values in parentheses denote the percentage relative to the performance achieved by tuning all layers.
107
+
108
+ # 4.3 Trend between Data Size and Visual Region Scale
109
+
110
+ We further explore the relationship between data size and the optimal layer count for effective visual instruction tuning. Using random subsets of $100\%$ , $25\%$ and $10\%$ of a pool of 695K visual instruction-following instances, we tune Bunny-Llama-3-8B-V with varying numbers of layers, following the same selection strategy as for the full dataset. We report the performance trends across four datasets: OCRVQA, TextVQA, TDIUC and GQA. As shown in Figure 2, tuning $25\%$ of the layers consistently achieves over $98\%$ of full performance across different data sizes while reducing training time. This offers a resource-efficient pathway for optimizing hyperparameters and selecting training data: one can tune such a visual region first and finalize the model with all layers afterwards. Moreover, even with smaller datasets, tuning fewer than 4 layers still results in notable performance declines.
111
+
112
+ # 5 Further Analysis
113
+
114
+ # 5.1 Generalizability Validation
115
+
116
+ To validate our findings on the visual region beyond Bunny-Llama-3-8B-V, we take LLaVA-1.5-7B, LLaVA-1.5-13B and Bunny-Phi3-mini-4B-V as additional testbeds to assess generalizability across LVLMs with different LLM backbones and parameter scales. Following the setup in Sec. 4.2, we re-train these models with different numbers of layers that are sparsely and uniformly distributed within their respective backbones, including Vicuna-1.5-7B, Vicuna-1.5-13B and Phi-3-mini-4B (Abdin et al., 2024). Results presented in Table 3 show that under our visual region positioning strategy, tuning approximately $25\%$ of the layers consistently yields $98\%$ of the full performance. This demonstrates that our approach generalizes effectively across varying LVLMs.
119
+
120
+ # 5.2 Computational Cost
121
+
122
+ ![](images/de354d59cf3ca06f1fc062499961abdb276045aff765da41eb36504aeac7aad2.jpg)
123
+ Figure 3: Computational costs for tuning LLaVA-1.5-7B, Bunny-Llama-3-8B-V, and LLaVA-1.5-13B with different numbers of layers using LoRA.
124
+
125
+ To demonstrate the efficiency of visual region-based tuning, we report the computational costs of tuning different numbers of layers across various models using the LoRA strategy. For a fair comparison across setups with different numbers of GPUs (A800 GPUs in this analysis), we use the product of the GPU count and running hours as the measure of computational cost. From Figure 3, Table 2 and Table 3, tuning a visual region comprising up to $25\%$ of the layers (8 layers for LLaVA-1.5-7B and Bunny-Llama3-8B-V, 10 layers for LLaVA-1.5-13B) achieves $98\%$ of full performance with substantial reductions in computational overhead. Specifically, we reduce training time by $23\%$ for LLaVA models and $13\%$ for Bunny. These results highlight the effectiveness of visual region-based tuning in training LVLMs efficiently with minimal performance trade-offs. Moreover, this relative reduction in computational cost becomes more significant as dataset and model sizes scale.
128
+
129
+ # 5.3 Evaluation of Textual Tasks
130
+
131
+ As highlighted in (Dai et al., 2024; Agrawal et al., 2024) and illustrated in Figure 1, multimodal training risks degrading LLMs' pretrained linguistic knowledge and reasoning capabilities. To verify whether training our sparsely and uniformly distributed visual region affects the model's linguistic capacity, we extend our evaluation to four text-only question answering datasets: MMLU (Hendrycks et al., 2020), C-Eval (Huang et al., 2023), CMMLU (Li et al., 2023b), and BIG-bench-Hard (Suzgun et al., 2022), covering diverse topics and fields. We use "Answer with the option's letter from the given choices directly" as the prompt for the first three and "Please answer this question in a word or phrase" for BIG-bench-Hard, and allow models to provide explanations alongside their responses. We adopt a five-shot prompting strategy for MMLU, C-Eval and CMMLU, and a zero-shot strategy for BIG-bench-Hard.
132
+
133
+ <table><tr><td>Model Version</td><td>MMLU</td><td>BIG-Bench-H</td><td>C-Eval</td><td>CMMLU</td></tr><tr><td colspan="5">Bunny-LLaMA3-8B-V</td></tr><tr><td>Fully-trained (32 layers)</td><td>60.27%</td><td>30.93%</td><td>45.84%</td><td>45.68%</td></tr><tr><td>Partially-trained (8 layers)</td><td>63.36%</td><td>31.50%</td><td>49.70%</td><td>48.39%</td></tr><tr><td>LLM-Backbone</td><td>66.01%</td><td>57.93%</td><td>50.52%</td><td>50.70%</td></tr><tr><td colspan="5">LLaVA-1.5-7B</td></tr><tr><td>Fully-trained (32 layers)</td><td>50.52%</td><td>26.85%</td><td>38.34%</td><td>37.27%</td></tr><tr><td>Partially-trained (8 layers)</td><td>50.74%</td><td>31.64%</td><td>39.08%</td><td>37.71%</td></tr><tr><td>LLM-Backbone</td><td>49.78%</td><td>29.33%</td><td>38.78%</td><td>36.60%</td></tr></table>
134
+
135
+ Table 4: Performance on text-only tasks. The LLM backbones of Bunny-LLaMA3-8B-V and LLaVA-1.5-7B are LLaMA3-8B and Vicuna-1.5-7B, respectively.
136
+
137
+ As shown in Table 4, fully-trained LVLMs generally exhibit decreased performance on text-only tasks compared to their LLM backbones, particularly with the more powerful LLaMA3-8B and on the challenging BIG-bench-Hard dataset. In contrast, our selectively trained LVLMs minimally compromise the models' linguistic capacity: they consistently outperform fully-trained LVLMs and sometimes even surpass their LLM backbones. These results support our hypothesis that positioning the visual region strategically, by tuning sparsely and uniformly distributed layers, better preserves LLMs' linguistic knowledge and reasoning capabilities, whereas full training may cause minor disruptions.
140
+
141
+ # 6 Visual Region-Based Layer Pruning
142
+
143
+ Beyond layer selection for efficient LVLM training, we explore whether the visual region can also benefit efficient LVLM inference. Although layer pruning techniques (Men et al., 2024; Ma et al., 2023) have been widely developed for LLM inference, they prove ineffective for LVLMs. As shown in Figure 1 (right), even minimal layer removal causes significant performance degradation on visual tasks, even when using the advanced angular distance based pruning strategy (Gromov et al., 2024).
144
+
145
+ ![](images/02db4e2807374e8952bdfda8de0709f1b5401c5ce72f88d8484ae682e4b979a1.jpg)
146
+ Figure 4: Results of pruning LLaVA-1.5-7B using the angular distance-based strategy with $0\sim 4$ layers removed. Dashed lines represent pruning applied to the fully trained model, while solid lines denote our visual region-based pruning within the targeted trained model.
147
+
148
+ Building on our visual region targeted training, we propose a visual region-based pruning paradigm that selectively prunes less important layers outside the visual region after training. Specifically, we follow the angular distance based layer importance metric and select $0 \sim 4$ layers with the lowest angular distance outside the visual region. We do not evaluate pruning beyond this range, as removing additional layers in LVLMs would lead to significant performance collapse. We evaluate this approach on LLaVA-1.5-7B across four datasets: OCRVQA, TextVQA, DocVQA and SciQA. As shown in Figure 4, our paradigm generally maintains higher performance, especially when pruning $3\sim 4$ layers, even though the visual region targeted trained model performs slightly worse than the fully trained model without pruning. This result demonstrates that our paradigm effectively minimizes performance degradation compared to pruning full-layer trained LVLMs, serving as an initial exploration into LVLM-specific pruning strategies.
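+
+ A minimal sketch of this selection step, assuming per-layer angular distances have already been computed (e.g., with the metric in Appendix A); the function name and signature are illustrative, not the authors' code:
+
+ ```python
+ def layers_to_prune(angular_dist: list[float], visual_region: set[int],
+                     n_prune: int) -> list[int]:
+     """Pick the n_prune layers with the lowest angular distance,
+     considering only layers outside the (kept) visual region."""
+     candidates = [i for i in range(len(angular_dist)) if i not in visual_region]
+     candidates.sort(key=lambda i: angular_dist[i])
+     return sorted(candidates[:n_prune])
+ ```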
151
+
152
+ # 7 Related Work
153
+
154
+ # 7.1 Efficient Training and Inference
155
+
156
+ The research community has recently witnessed an emergent interest in LLMs (Touvron et al., 2023; Chiang et al., 2023) and LVLMs (Li et al., 2023c; Zhu et al., 2023; Bai et al., 2023; Liu et al., 2024) due to their remarkable ability to interpret and interact with the world via linguistic and visual channels. With the continually increasing scale of LLMs and LVLMs, training or inference using all model parameters is costly for practical deployment. Numerous techniques exist for efficient model training and inference. For instance, quantization reduces the memory footprint of models by decreasing the precision of model weights (Dettmers et al., 2022; Dettmers and Zettlemoyer, 2023; Xiao et al., 2023). Low-rank adapters enable cost-effective fine-tuning by updating only a small set of adapter parameters (Hu et al., 2021; Karimi Mahabadi et al., 2021).
157
+
158
+ Moreover, LLMs exhibit significant redundancy at the layer level, making training or inference with all layers computationally wasteful; this redundancy holds for LVLMs as well, where LLMs serve as the core cognitive brain for visual learning. In response, layer-wise freezing techniques (Zhang et al., 2024b; Liang et al., 2023; Pan et al., 2024) and layer pruning strategies (Men et al., 2024; Ma et al., 2023; Gromov et al., 2024) have been proposed to enable efficient LLM fine-tuning and inference. However, they are designed for LLMs and fail to generalize effectively to visual learning, often resulting in substantial performance degradation. While Zhang et al. (2024a) introduce parameter localization for visual tasks, their approach is highly task-specific and data-dependent, limiting its applicability to versatile visual learning and neglecting the preservation of linguistic capabilities. In contrast, we propose a more efficient layer-selection strategy for LVLM training and inference.
161
+
162
+ # 7.2 Functional Regions in LLMs
163
+
164
+ The existing literature on cognitive science and brain localization indicates that different regions of the human brain are dedicated to specific functions (Fedorenko and Varley, 2016), such as the frontotemporal language-processing region localized by Scott et al. (2017), and Grill-Spector and Malach (2004) highlight the existence of visual regions in neuroscience. These insights have inspired an analogy with LLMs, which are increasingly viewed as a cognitive core delivering remarkable performance across diverse tasks, mirroring the human brain's functionality in terms of overall planning and processing. For example, Aw et al. (2023) propose that LLMs can be aligned to the human brain through instruction-tuning. Building upon this parallel, Zhao et al. (2023) unveil a core linguistic region within LLMs, accounting for approximately $1\%$ of the model's parameters. Li and Li (2024) identify a duality between Tulving's synergistic ecphory model (SEM) of memory and LLMs' emergent abilities. Drawing inspiration from these, our research focuses on defining a visual region within LLMs, suggesting a more effective and efficient pipeline for optimizing LVLMs for visual tasks.
165
+
166
+ # 8 Conclusion
167
+
168
+ In this study, we introduce an effective and efficient training paradigm for LVLMs by activating a specific visual region within LLMs. This offers a new pipeline for advancing LVLMs that first identifies such a visual region using limited data, followed by efficient continual training. Specifically, we investigate the necessity of tuning all layers within LLM cores and propose the concept of a specialized visual region within LLMs. We conduct extensive empirical experiments with Bunny-LLaMA-3-8B-V, covering a range of visual and textual tasks. Our results reveal that selectively updating no more than $25\%$ of sparsely and uniformly distributed layers can preserve nearly $99\%$ of visual performance, while also yielding comparable results on textual tasks. This targeted LVLM training approach is consistently effective across different models and parameter scales, reducing training time by $23\%$ for LLaVA models and $12\%$ for Bunny-LLaMA-3-8B-V. Additionally, we propose a visual region-based layer pruning strategy that removes non-critical layers outside the visual region and achieves minimal performance drop. Overall, our work presents a promising pathway for more efficient LVLM training and inference, while complementing existing efficient training methods.
171
+
172
+ # Limitations
173
+
174
+ Experimented Models Our work primarily focuses on the LLaVA-1.5 family, Bunny-Llama-3-8B-V and Bunny-Phi3-mini-4B-V to demonstrate the effectiveness and efficiency of our proposed training and inference paradigms for LVLMs. Future work will expand to a broader range of models to further validate the generalizability of our approach. Additionally, we will explore extensions to other modalities such as speech, and investigate the existence of other modality-specific regions to develop more versatile and scalable multimodal models.
175
+
176
+ Sparse Architectures While our approach effectively reduces training and inference costs by activating the visual region, it currently operates in a layer-wise dense manner. Future efforts will focus on integrating our method with sparse model architectures to optimize visual region activation, for example by exploring routing mechanisms that target modality-specific partitions within models to implement sparse mixture-of-experts architectures with specialized functional areas, analogous to the functional regions of the human brain.
177
+
178
+ # Acknowledgment
179
+
180
+ This research is supported by the National Natural Science Foundation of China (Grant No. 62176058). The project's computational resources are supported by the CFFF platform of Fudan University.
181
+
182
+ # References
183
+
184
+ Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, et al. 2024. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219.
185
+ Pravesh Agrawal, Szymon Antoniak, Emma Bou Hanna, Devendra Chaplot, Jessica Chudnovsky, Saurabh Garg, Theophile Gervet, Soham Ghosh, AmΓ©lie HΓ©liou, Paul Jacob, et al. 2024. Pixtral 12b. arXiv preprint arXiv:2410.07073.
186
+
187
+ Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. 2022. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716-23736.
188
+ Khai Loong Aw, Syrielle Montariol, Badr AlKhamissi, Martin Schrimpf, and Antoine Bosselut. 2023. Instruction-tuning aligns llms to the human brain. arXiv preprint arXiv:2312.00575.
189
+ Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. 2023. Qwen-vl: A frontier large vision-language model with versatile abilities. arXiv preprint arXiv:2308.12966.
190
+ Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing GPT-4 with $90\%$ * ChatGPT quality.
191
+ Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.
192
+ Wenliang Dai, Nayeon Lee, Boxin Wang, Zhuolin Yang, Zihan Liu, Jon Barker, Tuomas Rintamaki, Mohammad Shoeybi, Bryan Catanzaro, and Wei Ping. 2024. Nvlm: Open frontier-class multimodal llms. arXiv preprint arXiv:2409.11402.
193
+ Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. 2022. LLM.int8(): 8-bit matrix multiplication for transformers at scale. CoRR, abs/2208.07339.
194
+ Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2024. Qlora: Efficient finetuning of quantized llms. Advances in Neural Information Processing Systems, 36.
195
+ Tim Dettmers and Luke Zettlemoyer. 2023. The case for 4-bit precision: k-bit inference scaling laws. In International Conference on Machine Learning, pages 7750-7774. PMLR.
196
+ Evelina Fedorenko and Rosemary Varley. 2016. Language and thought are not the same thing: evidence from neuroimaging and neurological patients. Annals of the New York Academy of Sciences, 1369(1):132-153.
197
+ Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, and Rongrong Ji. 2024. Mme: A comprehensive evaluation benchmark for multimodal large language models. Preprint, arXiv:2306.13394.
198
+
199
+ Kalanit Grill-Spector and Rafael Malach. 2004. The human visual cortex. Annu. Rev. Neurosci., 27:649-677.
200
+ Andrey Gromov, Kushal Tirumala, Hassan Shapourian, Paolo Glorioso, and Daniel A Roberts. 2024. The unreasonable ineffectiveness of the deeper layers. arXiv preprint arXiv:2403.17887.
201
+ Muyang He, Yexin Liu, Boya Wu, Jianhao Yuan, Yueze Wang, Tiejun Huang, and Bo Zhao. 2024. Efficient multimodal learning from data-centric perspective. arXiv preprint arXiv:2402.11530.
202
+ Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.
203
+ Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
204
+ Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Yao Fu, et al. 2023. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. Advances in Neural Information Processing Systems, 36:62991-63010.
205
+ Drew A Hudson and Christopher D Manning. 2019. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700-6709.
206
+ Xu Jia, Efstratios Gavves, Basura Fernando, and Tinne Tuytelaars. 2015. Guiding long-short term memory for image caption generation. Preprint, arXiv:1509.04942.
207
+ Kushal Kafle and Christopher Kanan. 2017. An analysis of visual question answering algorithms. In Proceedings of the IEEE international conference on computer vision, pages 1965-1973.
208
+ Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. 2021. Compacter: Efficient low-rank hypercomplex adapter layers. Advances in Neural Information Processing Systems, 34:1022-1035.
209
+ Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. Referitgame: Referring to objects in photographs of natural scenes. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 787-798.
210
+ Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. 2019. Similarity of neural network representations revisited. In International conference on machine learning, pages 3519-3529. PMLR.
211
+
212
+ Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. 2023a. Seed-bench: Benchmarking multimodal llms with generative comprehension. arXiv preprint arXiv:2307.16125.
213
+ Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. 2023b. Cmmlu: Measuring massive multitask language understanding in chinese. arXiv preprint arXiv:2306.09212.
214
+ Jitang Li and Jinzheng Li. 2024. Memory, consciousness and large language model. arXiv preprint arXiv:2401.02509.
215
+ Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023c. Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597.
216
+ Chen Liang, Simiao Zuo, Qingru Zhang, Pengcheng He, Weizhu Chen, and Tuo Zhao. 2023. Less is more: Task-aware layer-wise distillation for language model compression. In International Conference on Machine Learning, pages 20852-20867. PMLR.
217
+ Baohao Liao, Shaomu Tan, and Christof Monz. 2024. Make pre-trained model reversible: From parameter to memory efficient fine-tuning. Advances in Neural Information Processing Systems, 36.
218
+ Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023a. Visual instruction tuning. arXiv preprint arXiv:2304.08485.
219
+ Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2024. Visual instruction tuning. Advances in neural information processing systems, 36.
220
+ Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. 2023b. Mmbench: Is your multi-modal model an all-around player? arXiv preprint arXiv:2307.06281.
221
+ Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. 2022. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507-2521.
222
+ Xinyin Ma, Gongfan Fang, and Xinchao Wang. 2023. Llm-pruner: On the structural pruning of large language models. Advances in neural information processing systems, 36:21702-21720.
223
+ Minesh Mathew, Dimosthenis Karatzas, and C. V. Jawahar. 2021. Docvqa: A dataset for vqa on document images. Preprint, arXiv:2007.00398.
224
+ Xin Men, Mingyu Xu, Qingyu Zhang, Bingning Wang, Hongyu Lin, Yaojie Lu, Xianpei Han, and Weipeng Chen. 2024. Shortgpt: Layers in large language models are more redundant than you expect. arXiv preprint arXiv:2403.03853.
225
+
226
+ Anand Mishra, Shashank Shekhar, Ajeet Kumar Singh, and Anirban Chakraborty. 2019. Ocr-vqa: Visual question answering by reading text in images. In ICDAR.
227
+ Rui Pan, Xiang Liu, Shizhe Diao, Renjie Pi, Jipeng Zhang, Chi Han, and Tong Zhang. 2024. Lisa: Layerwise importance sampling for memory-efficient large language model fine-tuning. arXiv preprint arXiv:2403.17919.
228
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR.
229
+ Terri L Scott, Jeanne Gallee, and Evelina Fedorenko. 2017. A new fun and robust version of an fmri localizer for the frontotemporal language system. Cognitive neuroscience, 8(3):167-176.
230
+ Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. 2019. Towards vqa models that can read. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
231
+ Mirac Suzgun, Nathan Scales, Nathanael SchΓ€rli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261.
232
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
233
+ Maria Tsimpoukelli, Jacob L Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. 2021. Multimodal few-shot learning with frozen language models. Advances in Neural Information Processing Systems, 34:200-212.
234
+ Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han. 2023. Smoothquant: Accurate and efficient post-training quantization for large language models. In International Conference on Machine Learning, pages 38087-38099. PMLR.
235
+ Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. 2016. Modeling context in referring expressions. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14, pages 69-85. Springer.
236
+ Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. 2024. Mmmu: A
237
+
238
+ massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9556-9567.
239
+ Wenxuan Zhang, Paul Janson, Rahaf Aljundi, and Mohamed Elhoseiny. 2024a. Overcoming generic knowledge loss with selective parameter update. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24046-24056.
240
+ Yulin Zhang, Yanhua Li, and Junhan Liu. 2024b. Unified efficient fine-tuning techniques for open-source large language models.
241
+ Jun Zhao, Zhihao Zhang, Yide Ma, Qi Zhang, Tao Gui, Luhui Gao, and Xuanjing Huang. 2023. Unveiling a core linguistic region in large language models. arXiv preprint arXiv:2310.14928.
242
+ Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592.
243
+
244
+ # A Details of Layer Importance Metrics
245
+
246
+ To demonstrate the effectiveness of our heuristically identified sparsely and uniformly distributed visual region, we conduct a comparative analysis against several other layer importance metrics (originally for layer pruning) by selecting 8 layers and re-training Bunny-Llama-3-8B-V. Below are the details of how these metrics are calculated.
247
+
248
+ - Block Influence (BI) Score (Men et al., 2024): serves as an indicator of layer importance by measuring the transformation of hidden states. We utilize the Flickr30k dataset (Jia et al., 2015) to calculate the BI score for each layer within LVLMs. The BI score of the $i^{th}$ layer is calculated as follows:
249
+
250
+ $$
251
+ \mathrm{BI}_{i} = 1 - \mathbb{E}_{X, t}\, \frac{X_{i,t}^{T} X_{i+1,t}}{\| X_{i,t} \|_{2}\, \| X_{i+1,t} \|_{2}}
252
+ $$
253
+
254
+ where $X_{i}$ represents the hidden states of the $i^{th}$ layer and $X_{i,t}$ denotes the hidden states of the $t^{th}$ token at the $i^{th}$ layer. By calculating the average cosine similarity of token states before and after passing through a layer, we measure the change magnitude across all tokens.
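+
+ In code, the BI score of one layer can be computed from its input and output hidden states as in the following NumPy sketch (ours, for illustration):
+
+ ```python
+ import numpy as np
+
+ def block_influence(h_in: np.ndarray, h_out: np.ndarray) -> float:
+     """BI_i = 1 - E_t[cos(X_{i,t}, X_{i+1,t})].
+
+     h_in, h_out: (num_tokens, hidden_dim) hidden states entering and
+     leaving layer i for one sequence.
+     """
+     cos = (h_in * h_out).sum(-1) / (
+         np.linalg.norm(h_in, axis=-1) * np.linalg.norm(h_out, axis=-1))
+     return float(1.0 - cos.mean())
+ ```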
255
+
256
+ - Multimodal BI Score: As the above method treats image and text as a single modality, we propose a multimodal variant that separately calculates the hidden state transformations of visual tokens and textual tokens and takes their average as the multimodal BI score. The multimodal BI score of the $i^{th}$ layer is calculated as follows.
259
+
260
+ $$
261
+ \mathrm{BI}_{i}^{\prime} = 1 - \frac{1}{2} \left( \mathbb{E}_{X, t}\, \frac{X_{i,t}^{T} X_{i+1,t}}{\| X_{i,t} \|_{2}\, \| X_{i+1,t} \|_{2}} + \mathbb{E}_{Y, l}\, \frac{Y_{i,l}^{T} Y_{i+1,l}}{\| Y_{i,l} \|_{2}\, \| Y_{i+1,l} \|_{2}} \right)
262
+ $$
263
+
264
+ $X_{i,t}$ and $Y_{i,l}$ respectively denote the hidden states of the $t^{th}$ visual token and the $l^{th}$ text token at the $i^{th}$ layer. We calculate the cosine similarity of each modality's tokens before and after passing through a layer, then average the results. This balances the contribution of modalities with different token counts.
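+
+ A short illustrative sketch of the multimodal variant, averaging the two per-modality cosine terms; the boolean `img_mask` marking image-token positions is an assumed input:
+
+ ```python
+ import numpy as np
+
+ def _mean_cos(h_in: np.ndarray, h_out: np.ndarray) -> float:
+     """Mean per-token cosine similarity between consecutive layers."""
+     num = (h_in * h_out).sum(-1)
+     den = np.linalg.norm(h_in, axis=-1) * np.linalg.norm(h_out, axis=-1)
+     return float((num / den).mean())
+
+ def multimodal_block_influence(h_in: np.ndarray, h_out: np.ndarray,
+                                img_mask: np.ndarray) -> float:
+     """BI'_i = 1 - (E_vis[cos] + E_txt[cos]) / 2; img_mask flags image tokens."""
+     return 1.0 - 0.5 * (_mean_cos(h_in[img_mask], h_out[img_mask])
+                         + _mean_cos(h_in[~img_mask], h_out[~img_mask]))
+ ```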
265
+
266
+ - Parameter Change Ratio (Zhao et al., 2023): We calculate the relative change ratio of the parameters in the LVLM against its backbone LLM for each layer (averaging over all parameters within the layer). The parameter change ratio of the $i^{th}$ layer is calculated as follows:
267
+
268
+ $$
269
+ R_{i} = \mathbb{E}_{\theta \in L_{i},\, j} \left| \frac{\theta_{j}^{\prime} - \theta_{j}}{\theta_{j}} \right|
270
+ $$
271
+
272
+ where $\theta_{j}$ and $\theta_j^{\prime}$ denote the $j^{th}$ parameter of layer $L_{i}$ in the LLM and the LVLM, respectively.
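+
+ A one-function sketch of $R_i$ for a single layer's flattened parameters; the small `eps` guard against zero-valued weights is our addition, not part of the paper's formula:
+
+ ```python
+ import numpy as np
+
+ def param_change_ratio(theta_llm: np.ndarray, theta_lvlm: np.ndarray,
+                        eps: float = 1e-12) -> float:
+     """R_i: mean |(theta' - theta) / theta| over one layer's parameters."""
+     # eps guards against division by exactly-zero backbone weights.
+     return float(np.abs((theta_lvlm - theta_llm) / (theta_llm + eps)).mean())
+ ```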
273
+
274
+ - Angular Distance (Gromov et al., 2024): We calculate the angular distance of the parameters in the LVLM against its backbone LLM for each layer (averaging over all parameters within the layer). The angular distance of the $i^{th}$ layer is calculated as follows:
275
+
276
+ $$
277
+ D_{i} = \frac{1}{\pi} \arccos \left( \frac{\theta_{j}^{\prime} \cdot \theta_{j}}{\| \theta_{j}^{\prime} \|\, \| \theta_{j} \|} \right)
278
+ $$
279
+
280
+ where $\theta_{j}$ and $\theta_j^{\prime}$ denote the $j^{th}$ parameter of layer $L_{i}$ in the LLM and the LVLM respectively, $\| \cdot \|$ denotes the $L^2$ norm, and the factor $\frac{1}{\pi}$ normalizes the distance to $[0, 1]$ .
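+
+ The corresponding computation, again for one layer's flattened parameters (the `clip` call is our numerical safeguard for `arccos`):
+
+ ```python
+ import numpy as np
+
+ def angular_distance(theta_llm: np.ndarray, theta_lvlm: np.ndarray) -> float:
+     """D_i = arccos(cosine similarity of the flattened layer parameters) / pi."""
+     a, b = theta_lvlm.ravel(), theta_llm.ravel()
+     cos = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
+     return float(np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi)
+ ```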
281
+
282
+ - Image Attention Score: We calculate the image attention score to measure each layer's affinity for image information. We utilize the DocVQA, OCRVQA, TDIUC, and RefCOCOg datasets, sampling 50 instances from each dataset to calculate the attention scores over all image tokens for each layer within Bunny-Llama-3-8B-V. The heat map of image attention scores of all instances for each layer in Bunny-Llama-3-8B-V is shown in Figure 5. The image attention score $A_{i}$ of one instance at the $i^{th}$ layer is calculated as follows:
285
+
286
+ $$
287
+ A_{i} = \frac{\sum_{t=k}^{k + N_{\mathrm{img}} - 1} \sum_{h=1}^{H} \sum_{j=1}^{T} \mathrm{Attn}[i][h, j, t]}{N_{\mathrm{img}}\, H}
288
+ $$
289
+
290
+ where $H$ represents the number of attention heads per layer and $T$ denotes the total number of tokens at the $i^{th}$ layer. $N_{\mathrm{img}}$ is the number of image tokens of the instance, whose indices range from $k$ to $k + N_{\mathrm{img}} - 1$ . $\mathrm{Attn}[i][h,j,t]$ denotes the attention score of the $h^{th}$ attention head at the $i^{th}$ layer from the $j^{th}$ query token to the $t^{th}$ key token.
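+
+ Given one layer's attention tensor for a single instance, $A_i$ can be computed as in this sketch (the `(H, T, T)` layout is an assumption about how the attention weights were extracted):
+
+ ```python
+ import numpy as np
+
+ def image_attention_score(attn: np.ndarray, k: int, n_img: int) -> float:
+     """A_i for one instance at one layer.
+
+     attn: (H, T, T) attention weights, attn[h, j, t] = attention of head h
+     from query token j to key token t; image tokens occupy [k, k + n_img).
+     """
+     num_heads = attn.shape[0]
+     return float(attn[:, :, k:k + n_img].sum() / (n_img * num_heads))
+ ```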
291
+
292
+ ![](images/078cd98d4e00b8fa91d3dca4fa9b7735a6c9738447452db1543b0f0d14a22921.jpg)
293
+ Figure 5: Visualization of image attention scores for all instances across all layers
activatingdistributedvisualregionwithinllmsforefficientandeffectivevisionlanguagetrainingandinference/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dfeaf71723fc7a3e4eaf6a19df4b43eafaefd8ab7166cdf9e91ac89ba88ec947
3
+ size 663761
activatingdistributedvisualregionwithinllmsforefficientandeffectivevisionlanguagetrainingandinference/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2f329b6103850cea921f4203d68ce6d8cfbb7b2e076c2d1192a290714d3c2612
3
+ size 373019
activationsteeringdecodingmitigatinghallucinationinlargevisionlanguagemodelsthroughbidirectionalhiddenstateintervention/72220249-57c6-447c-8612-88cea94afb52_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:df4cae74b75a9789e6daad06a7eca5f98fd4e92adc6a988c1bdc366c9a3a2153
3
+ size 80084
activationsteeringdecodingmitigatinghallucinationinlargevisionlanguagemodelsthroughbidirectionalhiddenstateintervention/72220249-57c6-447c-8612-88cea94afb52_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:77b5364f3bfb1d322f1dba1e3faa1bd79d5b6c1a21b3413711a74782bb55017c
3
+ size 94981
activationsteeringdecodingmitigatinghallucinationinlargevisionlanguagemodelsthroughbidirectionalhiddenstateintervention/72220249-57c6-447c-8612-88cea94afb52_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:85356397104932ae0ba269532f35857c6ea2411785e91e2a3a8fe5ac0964ffb8
3
+ size 1718238
activationsteeringdecodingmitigatinghallucinationinlargevisionlanguagemodelsthroughbidirectionalhiddenstateintervention/full.md ADDED
@@ -0,0 +1,304 @@
1
+ # Activation Steering Decoding: Mitigating Hallucination in Large Vision-Language Models through Bidirectional Hidden State Intervention
2
+
3
+ Jingran Su $^{*1}$ , Jingfan Chen $^{*1}$ , Hongxin Li $^{*2,3,5}$ , Yuntao Chen $^{\dagger 4}$ , Qing Li $^{\dagger 1}$ , Zhaoxiang Zhang $^{\dagger 2,3,5,6}$
4
+
5
+ <sup>1</sup>The Hong Kong Polytechnic University
6
+
7
+ $^{2}$ New Laboratory of Pattern Recognition, CASIA
8
+
9
+ $^{3}$ State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA
10
+
11
+ $^{4}$ Hong Kong Institute of Science & Innovation, CASIA
12
+
13
+ $^{5}$ University of Chinese Academy of Sciences $^{6}$ Shanghai Artificial Intelligence Laboratory
14
+
15
+ # Abstract
16
+
17
+ Large Vision-Language Models (LVLMs) have demonstrated impressive capabilities in multimodal understanding, but they frequently suffer from hallucination - generating content inconsistent with visual inputs. In this work, we explore a novel perspective on hallucination mitigation by examining the intermediate activations of LVLMs during generation. Our investigation reveals that hallucinated content manifests as distinct, identifiable patterns in the model's hidden state space. Motivated by this finding, we propose Activation Steering Decoding (ASD), a training-free approach that mitigates hallucination through targeted intervention in the model's intermediate activations. ASD operates by first identifying directional patterns of hallucination in the activation space using a small calibration set, then employing a contrast decoding mechanism that computes the difference between positive and negative steering predictions. This approach effectively suppresses hallucination patterns while preserving the model's general capabilities. Extensive experiments demonstrate that our method significantly reduces hallucination across multiple benchmarks while maintaining performance on general visual understanding tasks. Notably, our approach requires no model re-training or architectural modifications, making it readily applicable to existing deployed models.
18
+
19
+ # 1 Introduction
20
+
21
+ Large Vision Language Models (LVLMs), while demonstrating impressive capabilities, struggle with a fundamental issue known as hallucination, where generated textual descriptions fail to align accurately with visual semantics (Liu et al., 2024a; Zhai et al., 2023; Zhao et al., 2023). These failures not only degrade the performance of LVLMs in
22
+
23
+ practical scenarios but also undermine their credibility in high-stakes applications like medical imaging, autonomous driving, and legal systems (Wang, 2024; Magesh et al., 2024).
24
+
25
+ While existing approaches mitigate hallucination through enhanced data quality (Liu et al., 2023a; Yu et al., 2024a) and carefully designed training objectives (Chen et al., 2023; Jiang et al., 2024; Yue et al., 2024), such post-training solutions may present challenges for real-world deployments where models need to adapt rapidly to scenarios with minimal computational overhead and maximum flexibility.
26
+
27
+ Recent attempts have made significant progress in exploring training-free solutions as crucial alternatives. These approaches can be broadly categorized into module-level methods (Zhao et al., 2024; Deng et al., 2024; Yu et al., 2025; An et al., 2024) that leverage richer visual modules, and logit-level methods (Leng et al., 2024; Zhu et al., 2024) that reduce the model's reliance on language priors or statistical biases. Both approaches share a fundamental principle: strengthening visual evidence through either enhanced visual signals or additional visual cues during the inference process.
28
+
29
+ While these approaches provide valuable insights, they rest on specific assumptions (e.g., attention loss in image regions). In contrast, this work aims to address the problem in a more fundamental way. We propose to directly steer the model with a hallucination-aware distributional indicator so that it generates hallucination-free descriptions. We first analyze hallucination behavior in LVLMs by examining the distributions of intermediate activations, i.e. hidden states $^{1}$ . Our empirical investigation reveals that hallucinated content manifests as distinct, identifiable patterns in the model's intermediate activations. Building on this insight, and to achieve effective steering, we propose Activation
30
+
31
+ Steering Decoding (ASD), a training-free approach that directly intervenes in the model's intermediate activations to mitigate hallucination.
32
+
33
+ Our method operates by first identifying the directional patterns of hallucination in the intermediate activation space using a small calibration set, then employing a contrast decoding mechanism that computes the difference between positive and negative steering predictions. Extensive experiments demonstrate that our method achieves substantial reductions in hallucination rates (over $10.0\%$ improvement on CHAIR and over $10\%$ F1 score improvement on POPE) while maintaining or even enhancing performance on general visual understanding tasks. Notably, our method requires no re-training or architectural modifications, making it readily applicable to deployed models.
34
+
35
+ The main contributions of this paper include: 1) a systematic empirical study that reveals the distinct patterns of hallucination in LVLMs intermediate activation space, providing insights into the internal mechanisms of LVLMs; 2) ASD: a novel, training-free method for hallucination reduction through targeted intervention in intermediate activations; 3) comprehensive empirical evaluation demonstrating significant reduction in hallucination across diverse scenarios while maintaining model performance on standard tasks.
36
+
37
+ # 2 Related Works
38
+
39
+ Hallucination in LVLMs. Hallucination was initially studied and defined in the context of language models, describing outputs that deviate from factual or contextual information. In LVLMs, hallucination specifically refers to model outputs that are inconsistent with the input visual information. To address this challenge, various approaches have been proposed. Some works enhance visual features through diverse visual encoders or visual tools (Jain et al., 2024; He et al., 2024; Jiao et al., 2024), and employ specialized modules to control cross-modal alignment (Zhai et al., 2023). Other researchers have approached this problem from a data-centric perspective, introducing contrastive examples and adversarial samples to increase training data diversity (Liu et al., 2023a; Yu et al., 2024a), while also implementing denoising and regeneration strategies to improve overall data quality (Wang et al., 2024; Yue et al., 2024). Additional works have incorporated extra supervision signals during training to strengthen visual feature representations (Chen et al., 2023; Jiang et al., 2024; Yue et al., 2024), and some have employed reinforcement learning techniques to suppress model hallucination (Zhao et al., 2023; Zhou et al., 2024; Sun et al., 2023; Yu et al., 2024b). However, these methods either require substantial additional data or involve expensive training processes. Furthermore, several training-free methods have been proposed. These include interventions in the model's output process through contrast decoding (Leng et al., 2024; Zhu et al., 2024), guidance from auxiliary models (Zhao et al., 2024; Deng et al., 2024; Yu et al., 2025; An et al., 2024), and post-processing techniques to eliminate hallucinated content from the outputs (Yin et al., 2023; Lee et al., 2023; Zhou et al., 2023).
42
+
43
+ Activation Steering. Our method analyzes and intervenes in the model's representation space, which relates to the recent technique of activation steering (or representation engineering) in language models (Subramani et al., 2022; Turner et al., 2023; Jorgensen et al., 2023; Panickssery et al., 2023; Liu et al., 2023b; Zou et al., 2023). Activation steering is a technique used to guide model behavior by manipulating neuron activations. Most relevant to our work are several studies (Panickssery et al., 2023; Turner et al., 2023) that use semantically opposite prompt pairs (such as the prompts "Love" and "Hate") to generate steering vectors that, when added to model activations, can control model behavior. Different from these approaches, our method identifies hallucination-specific patterns through analysis of activations rather than prompt engineering, and presents a contrast decoding mechanism that enables robust hallucination mitigation while maintaining generation quality.
44
+
45
+ # 3 Preliminary
46
+
47
+ This section introduces the key notations used throughout this paper. Consider an LVLM $\pi(\cdot)$ that accepts image $v$ and language $x$ inputs and generates text sequences $\mathbf{y} = (y_1, \dots, y_n)$ . As the inputs pass through the model's transformer architecture, the model produces a series of intermediate activations $\mathbf{Z} = (\mathbf{z}_1, \dots, \mathbf{z}_L)$ , one per layer $l$ , with $\mathbf{z}_l \in \mathbb{R}^d$ . The model generates each token by sampling from the following distribution:
48
+
49
+ $$
50
+ y_{t} \sim \pi(y_{t} \mid x, v, y_{<t}) \propto \exp\big(\mathrm{logit}_{\pi}(y_{t} \mid x, v, y_{<t})\big),
51
+ $$
52
+
53
+ ![](images/7c906fcac20f3794cc477d548f9d1c19e93df727d00b45d93aade86dd05f17cb.jpg)
54
+ Figure 1: Overview of our proposed method. Left: The token-level hallucination feature collection process, where we extract hidden states from the model and annotate them based on whether they belong to sentences containing hallucinated objects (not present in the ground truth). The steering vector is computed as the difference between mean hidden states of hallucinated and non-hallucinated tokens. Right: Illustration of Activation Steering Decoding, which performs two forward passes with opposite steering directions and contrasts their logits to obtain the final output distribution, effectively suppressing hallucination patterns while preserving semantic information.
55
+
56
+ ![](images/4646a9e8dedeb56e40284d3bccfa714394623b23bbd7fe77d6c43f94bef43ca8.jpg)
57
+
58
+ where $\mathrm{logit}_{\pi}(y_t|\cdot)$ represents the unnormalized log probabilities for token $y_{t}$ .
59
+
60
+ # 4 How Do Hidden States Differ during Hallucination?
61
+
62
+ We start by analyzing how hallucinations manifest in the hidden states of LVLMs during generation. We hypothesize that hallucinated content exhibits distinct patterns in the model's hidden state space compared to factual generations. To investigate this hypothesis, we propose a framework that systematically extracts the model's hidden representations paired with labels indicating hallucination occurrences (Sec. 4.1), and we analyze these representations via linear probing (Sec. 4.2).
63
+
64
+ # 4.1 A Framework for Representation Collection
65
+
66
+ To systematically investigate hallucination patterns in the given base model $\pi_{\mathrm{base}}$ , we develop a scalable framework for collecting paired hidden states and hallucination labels for it. Our approach focuses specifically on object hallucination, a well-defined and measurable form of multimodal hallucination that occurs when a model generates references to objects not present in the input image. The following details our data collection process:
67
+
68
+ Image-Description Pair Generation. We utilize the MSCOCO dataset (Lin et al., 2014) as our primary data source due to its rich annotations for segmentation and diverse visual content. For each image $v_{i}$ in the dataset, we query the base model $\pi_{\mathrm{base}}$ with the prompt $x =$ "Please describe the image in detail." to generate a detailed description $\mathbf{y}_i$ .
71
+
72
+ The generated description $\mathbf{y}_i$ reflects the model's intrinsic perception of the input image $v_{i}$ , which may contain hallucinated content that deviates from the actual visual information.
73
+
74
+ Activation Collection and Annotation. Let $\mathcal{O} = \{o_1, o_2, \dots, o_{80}\}$ denote the set of 80 predefined object categories in the MSCOCO dataset. For each object category $o$ , we collect a set of synonyms $\mathcal{C}(o)$ to ensure comprehensive object extraction. Each image $v_{i}$ is associated with its ground truth object set $G(v_{i})\subseteq \mathcal{O}$ based on MSCOCO annotations. For each generated description $\mathbf{y}_i$ , we employ the Natural Language Toolkit library to segment it into individual sentences $\{\mathbf{s}_{i,1},\mathbf{s}_{i,2},\ldots ,\mathbf{s}_{i,j}\}$ , where each $\mathbf{s}_{i,j}$ is a subsequence of tokens representing a single sentence:
75
+
76
+ $$
77
+ \mathbf{s}_{i,j} = (y_{1}^{i,j}, y_{2}^{i,j}, \ldots, y_{p}^{i,j}), \quad \text{with } \bigcup_{j} \mathbf{s}_{i,j} = \mathbf{y}_{i}.
78
+ $$
79
+
80
+ We then identify all mentioned objects $O(\mathbf{s}_{i,j})$ in the sentence $\mathbf{s}_{i,j}$ by:
81
+
82
+ $$
83
+ O(\mathbf{s}_{i,j}) = \left\{ o \in \mathcal{O} \;\middle|\; \operatorname{substr}(o, \mathbf{s}_{i,j}) \text{ or } \exists\, c \in \mathcal{C}(o),\ \operatorname{substr}(c, \mathbf{s}_{i,j}) \right\},
84
+ $$
85
+
86
+ $$
87
+ \operatorname{substr}(x, y) \iff x \text{ is a substring of } y.
88
+ $$
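+
+ A plain-Python sketch of this matcher and the resulting sentence-level label; the `synonyms` mapping from each MSCOCO category to its synonym set is an assumed data structure:
+
+ ```python
+ def mentioned_objects(sentence: str, categories: set[str],
+                       synonyms: dict[str, set[str]]) -> set[str]:
+     """Categories whose name (or any synonym) occurs as a substring."""
+     s = sentence.lower()
+     return {o for o in categories
+             if o in s or any(c in s for c in synonyms.get(o, ()))}
+
+ def sentence_label(sentence: str, gt_objects: set[str], categories: set[str],
+                    synonyms: dict[str, set[str]]) -> int:
+     """1 if the sentence mentions any object outside the ground-truth set."""
+     return int(bool(mentioned_objects(sentence, categories, synonyms) - gt_objects))
+ ```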
89
+
90
+ ![](images/a8d7760876bacf3e7700d076a774e5028bcce74cec372d328ed88f909cf2285a.jpg)
+ Figure 2: Test accuracy and F1 scores for hallucination versus non-hallucination classification across different layers of LLaVA-1.5-7B with varying training sample sizes (0.2k, 2k, and 20k).
+
+ We define the hallucination label $L(y_{p}^{i,j})$ for a token $y_{p}^{i,j} \in \mathbf{s}_{i,j}$ based on whether the sentence $\mathbf{s}_{i,j}$ includes any non-existent objects. Mathematically:
96
+
97
+ $$
98
+ L(y_{p}^{i,j}) = \begin{cases} 1 & \text{if } O(\mathbf{s}_{i,j}) \setminus G(v_{i}) \neq \varnothing, \\ 0 & \text{otherwise.} \end{cases}
99
+ $$
100
+
101
+ Let $\mathbf{Z}(y)$ denote the hidden states of all layers for token $y$ . The final dataset of paired activations and hallucination labels is constructed as $\bigcup_{i,j,p}\left\{\left(\mathbf{Z}(y_{p}^{i,j}),L(y_{p}^{i,j})\right)\right\}$ .
102
+
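+ To make this pipeline concrete, below is a minimal sketch of the labeling step. It assumes the 80 MSCOCO categories, the synonym table $\mathcal{C}(o)$ , and each image's ground-truth set are already loaded; the names (`COCO_OBJECTS`, `SYNONYMS`, `token_labels`) are illustrative rather than the authors' code, and whitespace tokenization stands in for the model's tokenizer.
+
+ ```python
+ # Sketch of Sec. 4.1's sentence-level hallucination labeling (illustrative).
+ # Requires: nltk.download("punkt") for the sentence tokenizer.
+ from nltk.tokenize import sent_tokenize
+
+ COCO_OBJECTS = ["person", "bicycle", "car"]       # ... 80 categories in total
+ SYNONYMS = {"person": ["man", "woman", "child"]}  # C(o): synonyms per category
+
+ def mentioned_objects(sentence: str) -> set:
+     """O(s): every category whose name or a synonym is a substring of s."""
+     s = sentence.lower()
+     return {o for o in COCO_OBJECTS
+             if o in s or any(c in s for c in SYNONYMS.get(o, []))}
+
+ def token_labels(description: str, ground_truth: set) -> list:
+     """Label each token 1 if its sentence mentions any object outside the
+     image's ground-truth set G(v_i), and 0 otherwise."""
+     labels = []
+     for sentence in sent_tokenize(description):
+         hallucinated = bool(mentioned_objects(sentence) - ground_truth)
+         labels.extend(int(hallucinated) for _ in sentence.split())
+     return labels
+ ```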
103
+ # 4.2 Linear Probing of Hidden States
104
+
105
+ To investigate the patterns that hidden states exhibit when hallucination occurs, we perform linear probing of LLaVA1.5-7B across its entire architecture. Specifically, we randomly sample 500 images from the MSCOCO training set and employ the methodology described in Sec. 4.1 to extract hidden state representations across all 32 transformer layers. This initial collection yields an imbalanced dataset comprising 42,160 non-hallucinated samples and 12,113 hallucinated samples. We then construct a balanced dataset by randomly sampling 11,000 instances from each class, resulting in a final dataset of 22,000 samples. We reserve 2,000 samples as a held-out test set and use the remaining 20,000 for training. We conduct a series of linear probing experiments with varying amounts of training data, independently training linear classifiers on each of the 32 layers' hidden states to track how hallucination-related information is encoded across the model's layers.
106
+
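+ The probe itself can be as simple as one logistic-regression classifier per layer. The sketch below assumes the collected hidden states are stacked into an array `Z` of shape (num_tokens, num_layers, hidden_dim) with labels `y`; the scikit-learn classifier is our assumption, as the paper does not specify the probe implementation.
+
+ ```python
+ # Per-layer linear probing sketch (illustrative, not the authors' code).
+ import numpy as np
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.metrics import accuracy_score, f1_score
+ from sklearn.model_selection import train_test_split
+
+ def probe_all_layers(Z: np.ndarray, y: np.ndarray) -> list:
+     """Train one linear classifier per layer; return (accuracy, F1) pairs."""
+     results = []
+     for layer in range(Z.shape[1]):
+         X_tr, X_te, y_tr, y_te = train_test_split(
+             Z[:, layer, :], y, test_size=2000, stratify=y, random_state=0)
+         clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
+         pred = clf.predict(X_te)
+         results.append((accuracy_score(y_te, pred), f1_score(y_te, pred)))
+     return results
+ ```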
107
+ Fig. 2 presents the accuracy and F1 scores across model layers under varying training set sizes. Our analysis reveals several significant findings. First, the amount of training data has a substantial impact on the classifier's discriminative capability, with approximately 20k samples being necessary to establish reliable patterns. This suggests that hallucination signatures, while consistent, require sufficient data to be accurately characterized. Moreover, we observe that hidden states in the middle and later layers demonstrate superior representational power for hallucination detection, indicating a progressive accumulation of hallucination-relevant features across the model's hierarchy. Most notably, the probing performance reveals that hallucination-related information is remarkably well preserved and linearly separable in the hidden state space, achieving a probing accuracy of $82.49\%$ in the middle layers with just 20k training tokens. This pronounced linear separability provides compelling evidence that hallucinated content manifests as distinct, consistent patterns in the model's hidden state space, which in turn suggests that targeted intervention at the hidden state level could effectively mitigate hallucination behavior.
110
+
111
+ # 5 Activation Steering Decoding
112
+
113
+ Motivated by our empirical findings that hallucination patterns are distinctly encoded and linearly separable in the model's hidden states, we propose Activation Steering Decoding, a novel decoding strategy that directly intervenes in the model's hidden activations to mitigate hallucination.
114
+
115
+ Steering Vector Modeling. Given the paired data $\bigcup_{i,j,p}\left\{\left(\mathbf{Z}(y_{p}^{i,j}),L(y_{p}^{i,j})\right)\right\}$ collected in Sec. 4.1, we calculate a steering vector that captures the direction from hallucination to non-hallucination in the hidden state space. For each layer $l$ , we compute the difference between the mean activations of non-hallucinated and hallucinated tokens:
116
+
117
+ $$
118
+ \mathbf{v}^{l} = \frac{1}{N} \sum_{L(y) = 0} \mathbf{z}_{l}(y) - \frac{1}{P} \sum_{L(y) = 1} \mathbf{z}_{l}(y), \tag{1}
119
+ $$
120
+
121
+ where $N$ and $P$ are the numbers of non-hallucinated (factual) and hallucinated tokens, respectively.
122
+
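+ In code, Eq. (1) reduces to a per-layer mean difference. This sketch reuses the `Z` (num_tokens, num_layers, hidden_dim) and `y` arrays from the probing sketch above.
+
+ ```python
+ # Eq. (1) as a per-layer mean difference (illustrative sketch).
+ import numpy as np
+
+ def steering_vectors(Z: np.ndarray, y: np.ndarray) -> np.ndarray:
+     """Return v of shape (num_layers, hidden_dim): the direction from
+     hallucinated (label 1) toward non-hallucinated (label 0) activations."""
+     return Z[y == 0].mean(axis=0) - Z[y == 1].mean(axis=0)
+ ```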
123
+ Steering Vector Injection. The most straightforward approach to leveraging the extracted steering vectors is to intervene directly in the hidden states:
124
+
125
+ $$
126
+ \mathbf{z}_{l}^{\mathrm{steered}} = \mathbf{z}_{l} + \lambda \mathbf{v}^{l}, \tag{2}
127
+ $$
128
+
129
+ where $\lambda$ regulates the steering strength. While this approach effectively reduces hallucination as $\lambda$ increases, it risks distorting the semantic information encoded in the hidden states (see ablation studies in Sec. 6.5.3).
130
+
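+ One plausible way to realize Eq. (2) is a forward hook on each decoder layer. In the sketch below, the module path `model.model.layers` follows the Hugging Face LLaMA layout underlying LLaVA-1.5; this path is an assumption and may differ across implementations.
+
+ ```python
+ # Sketch of Eq. (2) via PyTorch forward hooks (illustrative).
+ import torch
+
+ def register_steering_hooks(model, v: torch.Tensor, lam: float) -> list:
+     """Add lam * v[l] to each decoder layer's hidden-state output."""
+     handles = []
+     for l, layer in enumerate(model.model.layers):
+         def hook(module, inputs, output, vl=v[l]):
+             hidden = output[0] if isinstance(output, tuple) else output
+             steered = hidden + lam * vl.to(hidden.device, hidden.dtype)
+             return (steered,) + output[1:] if isinstance(output, tuple) else steered
+         handles.append(layer.register_forward_hook(hook))
+     return handles  # call h.remove() on each handle to restore the base model
+ ```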
131
+ <table><tr><td rowspan="2">Method</td><td colspan="2">MSCOCO</td><td colspan="2">A-OKVQA</td><td colspan="2">GQA</td></tr><tr><td>% Accuracy</td><td>%F1 Score</td><td>% Accuracy</td><td>%F1 Score</td><td>% Accuracy</td><td>%F1 Score</td></tr><tr><td colspan="7">Greedy Decoding</td></tr><tr><td>LLaVA1.5-7B</td><td>85.13 ↑0.00</td><td>86.03 ↑0.00</td><td>78.99 ↑0.00</td><td>82.61 ↑0.00</td><td>76.60 ↑0.00</td><td>80.98 ↑0.00</td></tr><tr><td>+ VCD</td><td>85.16 ↑0.03</td><td>86.04 ↑0.01</td><td>78.92 ↓0.07</td><td>82.58 ↓0.03</td><td>76.49 ↓0.11</td><td>80.94 ↓0.04</td></tr><tr><td>+ VDD-None</td><td>86.87 ↑1.74</td><td>87.26 ↑1.23</td><td>82.02 ↑3.01</td><td>84.57 ↑1.96</td><td>79.99 ↑3.39</td><td>83.04 ↑2.06</td></tr><tr><td>+ ASD (Ours)</td><td>88.01 ↑2.88</td><td>87.87 ↑1.84</td><td>85.10 ↑6.11</td><td>85.65 ↑3.04</td><td>83.49 ↑6.89</td><td>83.98 ↑3.00</td></tr><tr><td>Qwen-VL-Chat</td><td>86.44 ↑0.00</td><td>86.12 ↑0.00</td><td>85.92 ↑0.00</td><td>85.80 ↑0.00</td><td>75.23 ↑0.00</td><td>67.70 ↑0.00</td></tr><tr><td>+ VCD</td><td>86.42 ↓0.02</td><td>86.31 ↑0.19</td><td>85.64 ↓0.28</td><td>85.70 ↓0.10</td><td>77.06 ↑1.83</td><td>71.19 ↑3.49</td></tr><tr><td>+ VDD-None</td><td>86.72 ↑0.28</td><td>86.45 ↑0.33</td><td>85.58 ↓0.34</td><td>85.58 ↓0.22</td><td>75.88 ↑0.65</td><td>68.94 ↑1.24</td></tr><tr><td>+ ASD (Ours)</td><td>88.09 ↑1.65</td><td>87.96 ↑1.84</td><td>87.29 ↑1.37</td><td>87.29 ↑1.49</td><td>83.77 ↑8.54</td><td>82.21 ↑14.51</td></tr><tr><td colspan="7">Direct Sampling</td></tr><tr><td>LLaVA1.5-7B</td><td>81.49 ↑0.00</td><td>82.93 ↑0.00</td><td>75.97 ↑0.00</td><td>80.04 ↑0.00</td><td>73.71 ↑0.00</td><td>78.48 ↑0.00</td></tr><tr><td>+ VCD</td><td>85.41 ↑3.92</td><td>86.27 ↑3.34</td><td>78.87 ↑2.90</td><td>82.55 ↑2.51</td><td>76.53 ↑2.82</td><td>80.97 ↑2.49</td></tr><tr><td>+ VDD-None</td><td>85.77 ↑4.28</td><td>86.28 ↑3.35</td><td>81.02 ↑5.05</td><td>83.73 ↑3.69</td><td>79.41 ↑5.70</td><td>82.45 ↑3.97</td></tr><tr><td>+ ASD (Ours)</td><td>87.19 ↑5.70</td><td>87.15 ↑4.22</td><td>84.63 ↑8.66</td><td>85.34 ↑5.30</td><td>83.19 ↑9.48</td><td>83.89 ↑5.41</td></tr><tr><td>Qwen-VL-Chat</td><td>84.16 ↑0.00</td><td>83.59 ↑0.00</td><td>83.01 ↑0.00</td><td>82.79 ↑0.00</td><td>74.54 ↑0.00</td><td>67.12 ↑0.00</td></tr><tr><td>+ VCD</td><td>86.47 ↑2.31</td><td>86.24 ↑2.65</td><td>85.52 ↑2.51</td><td>85.60 ↑2.81</td><td>77.42 ↑2.88</td><td>71.83 ↑4.71</td></tr><tr><td>+ VDD-None</td><td>86.10 ↑1.94</td><td>85.78 ↑2.19</td><td>84.96 ↑1.95</td><td>84.99 ↑2.20</td><td>75.71 ↑1.17</td><td>68.68 ↑1.56</td></tr><tr><td>+ ASD (Ours)</td><td>87.03 ↑2.87</td><td>86.86 ↑3.27</td><td>85.69 ↑2.68</td><td>85.52 ↑2.73</td><td>82.84 ↑8.30</td><td>80.77 ↑13.65</td></tr></table>
132
+
133
+ Table 1: Performance evaluation of our method against baselines and related approaches on POPE benchmark under two decoding strategies: Greedy Decoding and Direct Sampling. The base models (LLaVA1.5-7B and Qwen-VL-Chat) are compared with VCD and VDD-None (existing methods) as well as our proposed approach. Results are reported in terms of Accuracy (\%) and F1 Score (\%). The proposed method achieves consistent and notable improvements over all baselines and related methods, with the best results highlighted in bold.
134
+
135
+ Activation Steering Decoding. To achieve more stable hallucination reduction while preserving generation quality, we propose Activation Steering Decoding. Let $\pi^{+}$ and $\pi^{-}$ denote the model under positive (i.e., $\lambda > 0$ ) and negative (i.e., $\lambda < 0$ ) steering using Eq. (2) respectively, applying the same steering vector in opposite directions. The final logits for next-token prediction are obtained as follows:
136
+
137
+ $$
138
+ \operatorname{logit}_{\mathrm{ASD}} = (1 + \alpha) \cdot \operatorname{logit}_{\pi^{+}} - \alpha \cdot \operatorname{logit}_{\pi^{-}}, \tag{3}
139
+ $$
140
+
141
+ where $\alpha$ is the contrastive weight coefficient. This contrast mechanism is effective because the difference operation amplifies our steering's impact on output logits, while allowing us to use a relatively small steering intensity to better preserve semantic integrity in the hidden states. This property makes our approach more robust and less likely to disturb the model's normal generation process compared to direct steering.
142
+
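+ Putting the pieces together, one decoding step under Eq. (3) can be sketched as follows, reusing `register_steering_hooks` from the injection sketch above; the wrapper is illustrative rather than the authors' implementation.
+
+ ```python
+ # Sketch of one ASD decoding step (Eq. 3), assuming register_steering_hooks
+ # from the previous sketch is in scope.
+ import torch
+
+ @torch.no_grad()
+ def forward_with_steering(model, input_ids, v, lam):
+     """One forward pass under steering strength lam; next-token logits."""
+     handles = register_steering_hooks(model, v, lam)
+     try:
+         return model(input_ids).logits[:, -1, :]
+     finally:
+         for h in handles:
+             h.remove()
+
+ @torch.no_grad()
+ def asd_next_token_logits(model, input_ids, v, lam_pos, lam_neg, alpha):
+     """Contrast the positively and negatively steered branches."""
+     logits_pos = forward_with_steering(model, input_ids, v, +lam_pos)  # pi^+
+     logits_neg = forward_with_steering(model, input_ids, v, -lam_neg)  # pi^-
+     return (1 + alpha) * logits_pos - alpha * logits_neg
+ ```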
143
+ # 6 Experiments
144
+
145
+ In this section, we evaluate our proposed Activation Steering Decoding method on various multimodal benchmarks. Our experiments aim to assess both hallucination reduction and general visual comprehension capabilities.
148
+
149
+ # 6.1 Benchmarks
150
+
151
+ We conduct experiments on two categories of benchmarks:
152
+
153
+ Visual Hallucination. POPE evaluates object hallucination through yes/no questions about object presence. It contains 27,000 question-answer pairs sourced equally from the MS-COCO, A-OKVQA, and GQA datasets (9,000 each), and its questions fall into three types: Random, Popular, and Adversarial. CHAIR measures object hallucination in image captioning. It provides fine-grained annotations on MS-COCO captions, marking specific object mentions as either hallucinated or faithful, and reports two key metrics: $\mathrm{CHAIR}_S$ , the percentage of generated captions containing at least one hallucinated object, and $\mathrm{CHAIR}_I$ , the percentage of hallucinated object instances among all object mentions in the generated captions. Following previous work, we randomly select 500 samples from the MS-COCO validation set for our experiments.
154
+
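+ In symbols (our paraphrase of the standard CHAIR definitions, not notation from this paper):
+
+ $$
+ \mathrm{CHAIR}_S = \frac{|\{\text{captions with at least one hallucinated object}\}|}{|\{\text{all captions}\}|}, \qquad \mathrm{CHAIR}_I = \frac{|\{\text{hallucinated object mentions}\}|}{|\{\text{all object mentions}\}|}.
+ $$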
155
+ <table><tr><td>Model</td><td>CHAIRS↓</td><td>CHAIRI↓</td><td>Recall ↑</td></tr><tr><td>LLaVA-1.5</td><td>51.0 ↑0.0</td><td>14.7 ↑0.0</td><td>82.8 ↑0.0</td></tr><tr><td>+ VCD</td><td>47.8 ↓3.2</td><td>14.1 ↓0.6</td><td>82.7 ↓0.1</td></tr><tr><td>+ VDD-None</td><td>50.2 ↓0.8</td><td>14.3 ↓0.4</td><td>83.2 ↑0.4</td></tr><tr><td>+ ASD (Ours)</td><td>40.0 ↓11.0</td><td>11.3 ↓3.4</td><td>82.0 ↓0.8</td></tr></table>
156
+
157
+ Table 2: Comparison of different hallucination mitigation methods on CHAIR benchmark. $\mathrm{CHAIR}_S$ and $\mathrm{CHAIR}_I$ measure sentence-level and instance-level hallucination rates respectively (lower is better), while Recall measures the model's ability to describe actually present objects (higher is better). Our method achieves substantial reductions in hallucination rates with only minimal impact on recall performance.
158
+
159
+ General Visual Understanding. MME is a comprehensive benchmark designed to assess LVLMs through yes/no questions. It comprises 14 subsets: 10 perception tasks (e.g., color, count, position, scene, and action) and 4 reasoning tasks (e.g., commonsense, numerical, and mathematical reasoning). MMBench is a multiple-choice benchmark containing approximately 3,000 questions across 20 ability dimensions covering perception and reasoning; we use the DEV split of 1,164 English questions for evaluation. MMMU is a challenging multiple-choice benchmark containing 11.5K college-level questions spanning 30 academic subjects; even GPT-4V achieves less than $60\%$ accuracy on it. The TextVQA validation set consists of 5,000 questions that can only be answered correctly by reading and reasoning about text present in images. LLaVA-Bench consists of 60 carefully designed open-ended questions across 24 images, evaluating models' visual reasoning and understanding capabilities; responses are scored by GPT-4-1106-preview as an automatic evaluator, providing standardized scoring metrics. MM-Vet contains 217 challenging open-ended tasks that require models to simultaneously demonstrate multiple capabilities, including detailed perception, cross-modal reasoning, and world knowledge; we use the official online evaluator, powered by GPT-4-0613, to ensure fair comparison with existing approaches.
160
+
161
+ # 6.2 Implementation Details
162
+
163
+ We conduct experiments on two base models: LLaVA1.5-7B (Liu et al., 2024b) and Qwen-VL-Chat (Bai et al., 2023). For each model, we randomly sample 1,000 images from the MSCOCO training set for steering vector extraction via Eq. (1). We conduct a grid search over $\lambda \in \{0, 0.1, 0.2, 0.3, 0.4, 0.5\}$ for both $\pi^{+}$ and $\pi^{-}$ . For comparison, we implement VCD (Leng et al., 2024) with optimized hyperparameters, and VDD-None (Zhang et al., 2024) using their recommended parameters.
+
+ ![](images/94b742dc9423fc7049b52baf5eb55ab43785bd8edc07678426a98e857432d231.jpg)
+ Figure 3: Analysis of hallucination rates ($\mathrm{CHAIR}_S$ and $\mathrm{CHAIR}_I$ ) with respect to generated token length, with LLaVA1.5-7B as the base model.
169
+
170
+ # 6.3 Hallucination Reduction Performance
171
+
172
+ Tab. 1 presents a comprehensive evaluation of our method against existing approaches on the POPE benchmark. We evaluate performance under two decoding strategies, Greedy Decoding and Direct Sampling (which generates responses by sampling directly from the raw logit probability distribution without normalization), across three subsets (MSCOCO, A-OKVQA, and GQA), using both accuracy and F1 score as metrics. Our method demonstrates consistent and substantial improvements across all experimental settings. Under Greedy Decoding, when applied to LLaVA1.5-7B, our approach achieves absolute gains of $2.88\%$ , $6.11\%$ , and $6.89\%$ in accuracy on MSCOCO, A-OKVQA, and GQA respectively. The improvements are even more pronounced when applied to Qwen-VL-Chat, particularly on the GQA dataset, where we observe a remarkable $8.54\%$ increase in accuracy and a $14.51\%$ improvement in F1 score. Notably, our method not only surpasses the baseline models but also outperforms existing hallucination mitigation approaches (VCD and VDD-None) by a significant margin. The effectiveness of our method is further validated under Direct Sampling, where it maintains robust performance improvements; for instance, with LLaVA1.5-7B, our method achieves accuracy gains of $5.70\%$ , $8.66\%$ , and $9.48\%$ on the three subsets respectively. Unlike other methods, which show larger improvements only under direct sampling, our approach demonstrates robust effectiveness under both greedy decoding and direct sampling strategies, validating its stability and reliability across different inference settings. The superior performance can be attributed to our contrast decoding mechanism, which effectively isolates and suppresses hallucination patterns while preserving the model's ability to generate accurate and contextually appropriate responses. This is evidenced by the consistent improvements across both metrics and all datasets, suggesting that our method successfully addresses hallucination without compromising general visual understanding capabilities.
+
+ ![](images/a2c02935a26c6b1dcf1851c555bb88cd723c2ccbd2db19aa5c603e476df4ef6f.jpg)
+
+ Regular: The image features a woman lying on a couch, holding a black cat in her arms. The cat is resting comfortably on her chest, and the woman appears to be enjoying the moment. The couch is situated in a living room, and there is a chair nearby. In the background, there is a TV mounted on the wall, and a remote control can be seen placed on the couch. Additionally, there is a bowl placed on the couch, possibly containing snacks for the woman and her cat.
+
+ ASD: The image features a woman wearing glasses, lying on a couch and hugging her black cat. She is holding the cat tightly in her arms, showcasing a strong bond between them. The woman appears to be enjoying her time with her beloved feline companion. The couch they are on is brown and comfortable, providing a cozy environment for their embrace. The scene captures the affection and love shared between the woman and her cat as they cuddle together.
+
+ ![](images/ad7da3748319fed18efb2a5f1d52fef2b047c305536c29dd3074543f256f8643.jpg)
+
+ Regular: The image captures a lively scene in a grassy field where two zebras are fighting each other. In addition to the zebras, there are several other animals present in the scene. Two giraffes can be seen in the background, one closer to the left side and the other further to the right. There are also two birds, one located near the center of the image and the other towards the right side. The presence of these animals adds to the diversity and liveliness of the scene.
+
+ ASD: The image captures a lively scene in a grassy field where two zebras are fighting each other. They are both standing on their hind legs, with one zebra appearing to be the aggressor and the other defending itself. The zebras are surrounded by a dusty field, ... In the background, there are several other zebras, some of which are standing and others walking around. The presence of multiple zebras in the field suggests that this could be a social gathering or a natural habitat for these animals.
+
+ Figure 4: Illustration of ASD correcting hallucinations. Hallucinated objects (red) are removed while ASD adds accurate visual details (green).
186
+
187
+ The results on the CHAIR benchmark are reported in Tab. 2. Our method demonstrates substantial improvements in reducing hallucination rates compared to the baseline LLaVA1.5-7B model and other mitigation approaches. Specifically, we achieve a significant 11.0-point reduction in sentence-level hallucination $\left(\mathrm{CHAIR}_S\right)$ compared to the baseline, substantially outperforming both VCD (-3.2 points) and VDD-None (-0.8 points). The $\mathrm{CHAIR}_I$ metric exhibits a similar trend. Notably, while VDD-None achieves the best recall with a 0.4-point improvement over the baseline, our method still maintains competitive recall (-0.8 points) while achieving significantly better hallucination reduction, demonstrating a favorable trade-off between reliability and comprehensiveness. This minimal trade-off in recall suggests that our approach effectively reduces hallucination while largely preserving the model's ability to describe actually present objects in the images.
188
+
189
+ Fig. 3 illustrates the relationship between generated token length and hallucination rates across different methods, where the base model is LLaVA1.5-7B. Our analysis reveals that hallucination rates increase progressively with the length of generated content across all methods. A particularly concerning observation is a sharp increase in hallucination rates around the 80-token mark across all methods, suggesting that extended generation lengths pose heightened risks of hallucination. Notably, our approach demonstrates particularly strong advantages beyond this threshold, maintaining substantially lower hallucination rates with a notably smaller slope in both $\mathrm{CHAIR}_S$ and $\mathrm{CHAIR}_I$ metrics compared to the baseline and existing methods.
192
+
193
+ Visualization example. To provide concrete illustrations of how ASD mitigates hallucinations in practice, we present qualitative comparisons in Fig. 4. In the left example, the baseline hallucinates multiple objects (chair, TV, remote control, bowl, snacks) that are absent from the image, while ASD removes these errors and accurately describes the woman with her cat, adding new details like "wearing glasses" and "brown couch." Similarly, in the right example, ASD corrects the baseline's hallucinations of "giraffes" and "birds," properly describing only the zebras while introducing additional accurate details about the "dusty field" environment. These results illustrate how ASD not only suppresses hallucination patterns but also enhances descriptive richness with factually accurate details.
194
+
195
+ # 6.4 General Performance Maintenance
196
+
197
+ Tab. 3 presents the results on six general visual understanding benchmarks. Our method shows comparable or improved performance across most tasks for both models. For LLaVA1.5-7B, we observe notable improvements on MME (+16.51), MMMU (+3.34), and MMVet (+2.70) while maintaining performance on other benchmarks with minimal variation. Similarly, for Qwen-VL-Chat, our method achieves the best performance on MMMU (+3.00), MMVet (+0.50), and LLaVABench (+1.80), with negligible degradation on other benchmarks. This dual achievement - substantial hallucination reduction while preserving and sometimes improving general capabilities - validates the effectiveness of our contrast decoding mechanism in mitigating hallucination patterns without compromising essential visual understanding features.
+
+ <table><tr><td>Method</td><td>MME</td><td>MMBench</td><td>MMMU</td><td>TextVQA</td><td>MMVet</td><td>LLaVABench</td><td>Overall</td></tr><tr><td>LLaVA1.5-7B</td><td>1810.70 ↑0.00</td><td>65.46 ↑0.00</td><td>35.44 ↑0.00</td><td>45.76 ↑0.00</td><td>31.10 ↑0.00</td><td>58.90 ↑0.00</td><td>↑0.00</td></tr><tr><td>+ VCD</td><td>1800.41 ↓10.29</td><td>64.69 ↓0.77</td><td>36.00 ↑0.56</td><td>44.26 ↓1.50</td><td>30.90 ↓0.20</td><td>57.20 ↓1.70</td><td>↓4.18</td></tr><tr><td>+ VDD-None</td><td>1763.80 ↓46.90</td><td>63.75 ↓1.71</td><td>36.78 ↑1.34</td><td>42.19 ↓3.57</td><td>32.30 ↑1.20</td><td>62.10 ↑3.20</td><td>↓2.13</td></tr><tr><td>+ ASD (Ours)</td><td>1827.21 ↑16.51</td><td>65.38 ↓0.08</td><td>38.78 ↑3.34</td><td>46.40 ↑0.64</td><td>33.80 ↑2.70</td><td>61.60 ↑2.70</td><td>↑10.21</td></tr><tr><td>Qwen-VL-Chat</td><td>1839.55 ↑0.00</td><td>61.34 ↑0.00</td><td>33.56 ↑0.00</td><td>60.79 ↑0.00</td><td>46.10 ↑0.00</td><td>66.40 ↑0.00</td><td>↑0.00</td></tr><tr><td>+ VCD</td><td>1847.85 ↑8.30</td><td>60.40 ↓0.94</td><td>35.67 ↑2.11</td><td>59.31 ↓1.48</td><td>45.20 ↓0.90</td><td>67.50 ↑1.10</td><td>↑0.34</td></tr><tr><td>+ VDD-None</td><td>1861.01 ↑21.46</td><td>62.97 ↑1.63</td><td>33.67 ↑0.11</td><td>59.91 ↓0.88</td><td>41.40 ↓4.70</td><td>65.20 ↓1.20</td><td>↓3.87</td></tr><tr><td>+ ASD (Ours)</td><td>1825.20 ↓14.35</td><td>61.08 ↓0.26</td><td>36.56 ↑3.00</td><td>60.42 ↓0.37</td><td>46.60 ↑0.50</td><td>68.20 ↑1.80</td><td>↑3.89</td></tr></table>
+
+ Table 3: Performance comparison on general visual understanding benchmarks. Bold numbers indicate the best scores for each benchmark. When calculating overall improvements, percentage changes are used for MME scores and absolute changes for the other benchmarks due to scale differences. Results show that our method maintains or improves performance across diverse tasks compared to baseline models and other approaches.
+
+ ![](images/d0bfb3466d0017048e27d7f2f8dc1954c433572f5e9fdbfcb3ade1a1fa745086.jpg)
+ Figure 5: Impact of steering intensities on ASD, measured as percentage-point improvements over the LLaVA1.5-7B baseline $(85.13\%)$ on POPE-COCO accuracy. The optimal performance $(+2.88\%)$ is achieved with $\lambda = 0.2$ for $\pi^{+}$ and $\lambda = 0.4$ for $\pi^{-}$ .
206
+
207
+ # 6.5 Ablation Study
208
+
209
+ # 6.5.1 Impact of Steering Strength
210
+
211
+ Fig. 5 illustrates the effect of the steering intensity $\lambda$ in our ASD method. Most parameter combinations yield positive improvements over the baseline, demonstrating the robustness of our method. However, we observe that positive steering $(\pi^{+})$ requires more careful tuning - performance begins to degrade when $\lambda > 0.3$ , with accuracy dropping by $2.71\%$ at $\lambda = 0.5$ . In contrast, negative steering $(\pi^{-})$ shows greater tolerance to larger values, maintaining improvements even at $\lambda = 0.5$ . The optimal configuration is achieved with moderate positive steering ($\lambda = 0.2$ for $\pi^{+}$ ) and stronger negative steering ($\lambda = 0.4$ for $\pi^{-}$ ), achieving $88.01\%$ accuracy (a $2.88\%$ improvement over the baseline), which represents state-of-the-art performance on this benchmark.
+
+ <table><tr><td>Count</td><td>POPE</td><td>CHAIRS↓</td><td>MME</td><td>TextVQA</td></tr><tr><td>LLaVA1.5-7B</td><td>85.13</td><td>51.00</td><td>1810.70</td><td>45.76</td></tr><tr><td>100</td><td>87.72</td><td>40.40</td><td>1813.01</td><td>46.22</td></tr><tr><td>500</td><td>87.79</td><td>38.80</td><td>1821.98</td><td>46.24</td></tr><tr><td>1,000</td><td>88.01</td><td>40.00</td><td>1827.21</td><td>46.40</td></tr></table>
+
+ Table 4: Impact of calibration data size (number of images used for steering vector computation) on model performance across different benchmarks. POPE refers to the POPE-COCO subset.
220
+
221
+ # 6.5.2 Impact of Calibration Data Size
222
+
223
+ Tab. 4 examines the sensitivity of our method to the amount of calibration data used for computing steering vectors. Notably, our approach demonstrates strong performance even with minimal calibration data: using only 100 images already yields substantial improvements across all selected benchmarks. These results suggest that our method can effectively capture hallucination patterns from a very small calibration set, making it highly practical for real-world applications.
224
+
225
+ # 6.5.3 Direct Vector Steering
226
+
227
+ We investigate the effectiveness of vector steering without contrast decoding to understand its impact in isolation. Fig. 6 shows the accuracy improvements over the LLaVA1.5-7B baseline on the POPE benchmark; the y-axis represents the relative accuracy change in percentage points compared to the baseline performance. First, we observe that the optimal steering intensity varies significantly across datasets, with COCO achieving peak performance at $\lambda = 0.3$ , while A-OKVQA and GQA show improvements only at lower intensities. This variation suggests that the effectiveness of steering vectors is sensitive to the specific characteristics of each task. Second, we observe a consistent pattern in which performance deteriorates at higher steering intensities. This degradation becomes particularly pronounced at $\lambda = 0.5$ , where A-OKVQA and GQA show accuracy drops of approximately $3\%$ and $3.5\%$ respectively. This decline can be attributed to excessive distortion of the hidden state semantics, indicating that overly aggressive steering can disrupt the model's learned representations. While COCO shows substantial improvements of up to $1.8\%$ , the gains on A-OKVQA and GQA are notably smaller. This performance gap is expected, as the steering vectors are computed from COCO-defined object categories. These results suggest that direct vector steering alone has limitations in generalizing across different visual understanding tasks.
+
+ ![](images/f0d54a1d7382c32c08a5d9b218433886cc386f19c64bba4ae9241a7012b1f457.jpg)
+ Figure 6: Impact of steering intensity on Direct Vector Steering, measured as relative improvement over the LLaVA1.5-7B baseline.
233
+
234
+ # 6.6 Computational Efficiency Analysis
235
+
236
+ While ASD performs addition and subtraction operations on the hidden states at every layer and runs two decoding branches during inference, the computational overhead is relatively modest for several practical reasons.
237
+
238
+ First, the addition and subtraction operations on hidden states are extremely lightweight compared to the model's transformer operations (self-attention and feed-forward computations).
239
+
240
+ Second, although our method involves two branches for positive and negative steering, we leverage batch processing to run them in parallel during inference. Specifically, we concatenate the inputs for both positive and negative steering along the batch dimension, allowing them to be processed simultaneously. This parallel processing approach significantly mitigates the potential overhead compared to sequential execution.
241
+
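+ As an illustration of this batching trick, the sketch below duplicates the input along the batch dimension and applies opposite steering signs to the two rows. `register_steering_hooks_batched` is a hypothetical per-sample variant of the hook sketched in Sec. 5, in which row $b$ of the batch receives steering strength `lambdas[b]`.
+
+ ```python
+ # Sketch of the parallel two-branch trick (illustrative).
+ import torch
+
+ @torch.no_grad()
+ def asd_batched_logits(model, input_ids, v, lam_pos, lam_neg, alpha):
+     """Run pi^+ and pi^- as two rows of one batch in a single forward pass."""
+     batch = input_ids.repeat(2, 1)               # row 0: pi^+, row 1: pi^-
+     lambdas = torch.tensor([lam_pos, -lam_neg])  # opposite steering signs
+     # Hypothetical per-sample variant of register_steering_hooks.
+     handles = register_steering_hooks_batched(model, v, lambdas)
+     try:
+         logits = model(batch).logits[:, -1, :]
+     finally:
+         for h in handles:
+             h.remove()
+     return (1 + alpha) * logits[0] - alpha * logits[1]
+ ```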
242
+ To quantify the exact overhead, we conduct timing experiments on LLaVA-1.5-7B using an NVIDIA L20 GPU. As shown in Tab. 5, with the parallel processing strategy, our method introduces approximately $10$ - $20\%$ additional computational overhead. This characteristic is particularly beneficial for real-world applications on edge devices, where requests typically arrive one at a time and GPUs often have spare capacity to handle a batch size of 2. This makes our method an efficient solution that offers significant accuracy improvements at minimal computational cost.
+
+ <table><tr><td>Method</td><td>20 Tokens (ms)</td><td>50 Tokens (ms)</td><td>100 Tokens (ms)</td></tr><tr><td>LLaVA-1.5-7B</td><td>516.9 ±8.1</td><td>1153.1 ±7.4</td><td>2211.7 ±6.9</td></tr><tr><td>+ ASD (Ours)</td><td>612.8 ±7.2</td><td>1300.2 ±7.2</td><td>2467.8 ±8.1</td></tr><tr><td>Overhead (%)</td><td>18.6%</td><td>12.8%</td><td>11.6%</td></tr></table>
+
+ Table 5: Inference time (in milliseconds) comparison between the baseline and the ASD method across different output lengths. The ASD method introduces moderate additional latency, which becomes relatively smaller as the number of generated tokens increases.
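+ As a quick consistency check, the overhead percentages in Tab. 5 follow directly from the reported latencies:
+
+ $$
+ \frac{612.8 - 516.9}{516.9} \approx 18.6\%, \qquad \frac{1300.2 - 1153.1}{1153.1} \approx 12.8\%, \qquad \frac{2467.8 - 2211.7}{2211.7} \approx 11.6\%.
+ $$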
249
+
250
+ # 7 Conclusion
251
+
252
+ We present a systematic investigation of hallucination in LVLMs through the lens of intermediate activations, revealing that hallucinated content manifests as distinct patterns in the model's hidden state space. Building on this insight, we propose Activation Steering Decoding, a training-free approach that effectively mitigates hallucination through targeted intervention in model activations. Our extensive experiments demonstrate that our approach significantly reduces hallucination rates while maintaining model performance across general visual understanding tasks.
253
+
254
+ # Limitations
255
+
256
+ While our proposed Activation Steering Decoding demonstrates promising results in mitigating hallucination, several limitations warrant discussion.
257
+
258
+ First, our current approach primarily addresses object-level hallucination, as the steering vectors are extracted using only COCO object annotations. This focus on object categories limits the method's ability to address other types of hallucinations, such as attribute errors (e.g., incorrect colors or sizes), relational inaccuracies (e.g., wrong spatial relationships), or hallucinations involving abstract concepts and actions. Future work should explore leveraging richer annotations beyond object labels to develop more comprehensive hallucination mitigation strategies.
261
+
262
+ Second, our approach involves hyperparameters ( $\lambda$ for steering intensity and $\alpha$ for contrast coefficient), which may vary across different models and tasks for optimal performance. Developing adaptive approaches that can automatically determine optimal steering parameters based on the input or model confidence remains an interesting direction for future research.
263
+
264
+ # References
265
+
266
+ Wenbin An, Feng Tian, Sicong Leng, Jiahao Nie, Haonan Lin, QianYing Wang, Guang Dai, Ping Chen, and Shijian Lu. 2024. Agla: Mitigating object hallucinations in large vision-language models with assembly of global and local attention. arXiv preprint arXiv:2406.12718.
267
+ Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. 2023. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. arXiv preprint arXiv:2308.12966, 1(2):3.
268
+ Zhiyang Chen, Yousong Zhu, Yufei Zhan, Zhaowen Li, Chaoyang Zhao, Jinqiao Wang, and Ming Tang. 2023. Mitigating hallucination in visual language models with visual supervision. arXiv preprint arXiv:2311.16479.
269
+ Ailin Deng, Zhirui Chen, and Bryan Hooi. 2024. Seeing is believing: Mitigating hallucination in large vision-language models via clip-guided decoding. arXiv preprint arXiv:2402.15300.
270
+ Xin He, Longhui Wei, Lingxi Xie, and Qi Tian. 2024. Incorporating visual experts to resolve the information loss in multimodal large language models. arXiv preprint arXiv:2401.03105.
271
+ Jitesh Jain, Jianwei Yang, and Humphrey Shi. 2024. Vcoder: Versatile vision encoders for multimodal large language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 27992-28002.
272
+ Chaoya Jiang, Haiyang Xu, Mengfan Dong, Jiaxing Chen, Wei Ye, Ming Yan, Qinghao Ye, Ji Zhang, Fei Huang, and Shikun Zhang. 2024. Hallucination augmented contrastive learning for multimodal large language model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 27036-27046.
273
+ Qirui Jiao, Daoyuan Chen, Yilun Huang, Yaliang Li, and Ying Shen. 2024. Enhancing multimodal large language models with vision detection models: An empirical study. arXiv preprint arXiv:2401.17981.
274
+
275
+ Ole Jorgensen, Dylan Cope, Nandi Schoots, and Murray Shanahan. 2023. Improving activation steering in language models with mean-centring. arXiv preprint arXiv:2312.03813.
276
+ Seongyun Lee, Sue Hyun Park, Yongrae Jo, and Minjoon Seo. 2023. Volcano: mitigating multimodal hallucination through self-feedback guided revision. arXiv preprint arXiv:2311.07362.
277
+ Sicong Leng, Hang Zhang, Guanzheng Chen, Xin Li, Shijian Lu, Chunyan Miao, and Lidong Bing. 2024. Mitigating object hallucinations in large vision-language models through visual contrastive decoding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13872-13882.
278
+ Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740-755. Springer.
279
+ Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. 2023a. Mitigating hallucination in large multi-modal models via robust instruction tuning. In The Twelfth International Conference on Learning Representations.
280
+ Hanchao Liu, Wenyuan Xue, Yifei Chen, Dapeng Chen, Xiutian Zhao, Ke Wang, Liping Hou, Rongjun Li, and Wei Peng. 2024a. A survey on hallucination in large vision-language models. arXiv preprint arXiv:2402.00253.
281
+ Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2024b. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26296-26306.
282
+ Wenhao Liu, Xiaohua Wang, Muling Wu, Tianlong Li, Changze Lv, Zixuan Ling, Jianhao Zhu, Cenyuan Zhang, Xiaoqing Zheng, and Xuanjing Huang. 2023b. Aligning large language models with human preferences through representation engineering. arXiv preprint arXiv:2312.15997.
283
+ Varun Magesh, Faiz Surani, Matthew Dahl, Mirac Suzgun, Christopher D Manning, and Daniel E Ho. 2024. Hallucination-free? assessing the reliability of leading ai legal research tools. arXiv preprint arXiv:2405.20362.
284
+ Nina Panickssery, Nick Gabrieli, Julian Schulz, Meg Tong, Evan Hubinger, and Alexander Matt Turner. 2023. Steering llama 2 via contrastive activation addition. arXiv preprint arXiv:2312.06681.
285
+ Nishant Subramani, Nivedita Suresh, and Matthew E Peters. 2022. Extracting latent steering vectors from pretrained language models. arXiv preprint arXiv:2205.05124.
286
+
287
+ Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, LiangYan Gui, Yu-Xiong Wang, Yiming Yang, et al. 2023. Aligning large multimodal models with factually augmented rlhf. arXiv preprint arXiv:2309.14525.
288
+ Alexander Matt Turner, Lisa Thiergart, Gavin Leech, David Udell, Juan J Vazquez, Ulisse Mini, and Monte MacDiarmid. 2023. Activation addition: Steering language models without optimization. arXiv eprints, pages arXiv-2308.
289
+ Jue Wang. 2024. Hallucination reduction and optimization for large language model-based autonomous driving. Symmetry, 16(9):1196.
290
+ Lei Wang, Jiabang He, Shenshen Li, Ning Liu, and Ee-Peng Lim. 2024. Mitigating fine-grained hallucination by fine-tuning large vision-language models with caption rewrites. In International Conference on Multimedia Modeling, pages 32-45. Springer.
291
+ Shukang Yin, Chaoyou Fu, Sirui Zhao, Tong Xu, Hao Wang, Dianbo Sui, Yunhang Shen, Ke Li, Xing Sun, and Enhong Chen. 2023. Woodpecker: Hallucination correction for multimodal large language models. arXiv preprint arXiv:2310.16045.
292
+ Qifan Yu, Juncheng Li, Longhui Wei, Liang Pang, Wentao Ye, Bosheng Qin, Siliang Tang, Qi Tian, and Yueting Zhuang. 2024a. Hallucidoctor: Mitigating hallucinatory toxicity in visual instruction data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12944-12953.
293
+ Runpeng Yu, Weihao Yu, and Xinchao Wang. 2025. Attention prompting on image for large vision-language models. In European Conference on Computer Vision, pages 251-268. Springer.
294
+ Tianyu Yu, Yuan Yao, Haoye Zhang, Taiwen He, Yifeng Han, Ganqu Cui, Jinyi Hu, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun, et al. 2024b. Rlhf-v: Towards trustworthy mllms via behavior alignment from fine-grained correctional human feedback. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13807-13816.
295
+ Zihao Yue, Liang Zhang, and Qin Jin. 2024. Less is more: Mitigating multimodal hallucination from an eos decision perspective. arXiv preprint arXiv:2402.14545.
296
+ Bohan Zhai, Shijia Yang, Xiangchen Zhao, Chenfeng Xu, Sheng Shen, Dongdi Zhao, Kurt Keutzer, Manling Li, Tan Yan, and Xiangjun Fan. 2023. Halle-switch: Rethinking and controlling object existence hallucinations in large vision language models for detailed caption. arXiv preprint arXiv:2310.01779.
297
+ Yi-Fan Zhang, Weichen Yu, Qingsong Wen, Xue Wang, Zhang Zhang, Liang Wang, Rong Jin, and Tieniu Tan. 2024. Debiasing large visual language models. arXiv preprint arXiv:2403.05262.
298
+
299
+ Linxi Zhao, Yihe Deng, Weitong Zhang, and Quanquan Gu. 2024. Mitigating object hallucination in large vision-language models via classifier-free guidance. arXiv preprint arXiv:2402.08680.
300
+ Zhiyuan Zhao, Bin Wang, Linke Ouyang, Xiaoyi Dong, Jiaqi Wang, and Conghui He. 2023. Beyond hallucinations: Enhancing lvlms through hallucination-aware direct preference optimization. arXiv preprint arXiv:2311.16839.
301
+ Yiyang Zhou, Chenhang Cui, Rafael Rafailov, Chelsea Finn, and Huaxiu Yao. 2024. Aligning modalities in vision large language models via preference fine-tuning. arXiv preprint arXiv:2402.11411.
302
+ Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, and Huaxiu Yao. 2023. Analyzing and mitigating object hallucination in large vision-language models. arXiv preprint arXiv:2310.00754.
303
+ Lanyun Zhu, Deyi Ji, Tianrun Chen, Peng Xu, Jieping Ye, and Jun Liu. 2024. Ibd: Alleviating hallucinations in large vision-language models via image-biased decoding. arXiv preprint arXiv:2402.18476.
304
+ Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, et al. 2023. Representation engineering: A top-down approach to ai transparency. arXiv preprint arXiv:2310.01405.
activationsteeringdecodingmitigatinghallucinationinlargevisionlanguagemodelsthroughbidirectionalhiddenstateintervention/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:79e0fe8bd20e546525856004e23c91ece8d9509030abcdb9008676c7b6b17dc3
3
+ size 572283
activationsteeringdecodingmitigatinghallucinationinlargevisionlanguagemodelsthroughbidirectionalhiddenstateintervention/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6ca124b4dd0d66a29528fb6463cfa4b95be5156afb84bea9362ff53b83d63dbb
3
+ size 377694
activiewevaluatingactiveperceptionabilityformultimodallargelanguagemodels/b694f6c1-f9a0-49df-a72d-cebf955df78b_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ff0104d752ed98a585c9cc00cec83d70cab4ffc3aeabf2c717355547f8392aef
3
+ size 222903
activiewevaluatingactiveperceptionabilityformultimodallargelanguagemodels/b694f6c1-f9a0-49df-a72d-cebf955df78b_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a0440b3edb75644ae978adc959ae03bbe99bdff381f1131c7dd494a66bb41133
3
+ size 267835