SlowGuess committed
Commit ff3d32f · verified · 1 Parent(s): e8b3f5e

Add Batch 77b5efa1-b9ed-47f3-aee8-8fcac3ded6ae

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. abstractionofthoughtmakeslanguagemodelsbetterreasoners/167e13c2-2504-4746-bc1a-47f273480bf4_content_list.json +3 -0
  2. abstractionofthoughtmakeslanguagemodelsbetterreasoners/167e13c2-2504-4746-bc1a-47f273480bf4_model.json +3 -0
  3. abstractionofthoughtmakeslanguagemodelsbetterreasoners/167e13c2-2504-4746-bc1a-47f273480bf4_origin.pdf +3 -0
  4. abstractionofthoughtmakeslanguagemodelsbetterreasoners/full.md +0 -0
  5. abstractionofthoughtmakeslanguagemodelsbetterreasoners/images.zip +3 -0
  6. abstractionofthoughtmakeslanguagemodelsbetterreasoners/layout.json +3 -0
  7. acceptadaptivecodebookforcompositeandefficientprompttuning/2a27d109-693f-4134-9300-2d4b5f27a4f9_content_list.json +3 -0
  8. acceptadaptivecodebookforcompositeandefficientprompttuning/2a27d109-693f-4134-9300-2d4b5f27a4f9_model.json +3 -0
  9. acceptadaptivecodebookforcompositeandefficientprompttuning/2a27d109-693f-4134-9300-2d4b5f27a4f9_origin.pdf +3 -0
  10. acceptadaptivecodebookforcompositeandefficientprompttuning/full.md +384 -0
  11. acceptadaptivecodebookforcompositeandefficientprompttuning/images.zip +3 -0
  12. acceptadaptivecodebookforcompositeandefficientprompttuning/layout.json +3 -0
  13. acevalevaluatingancientchineselanguageunderstandinginlargelanguagemodels/5226d102-8eda-4b92-b3f6-fa16bc851d50_content_list.json +3 -0
  14. acevalevaluatingancientchineselanguageunderstandinginlargelanguagemodels/5226d102-8eda-4b92-b3f6-fa16bc851d50_model.json +3 -0
  15. acevalevaluatingancientchineselanguageunderstandinginlargelanguagemodels/5226d102-8eda-4b92-b3f6-fa16bc851d50_origin.pdf +3 -0
  16. acevalevaluatingancientchineselanguageunderstandinginlargelanguagemodels/full.md +417 -0
  17. acevalevaluatingancientchineselanguageunderstandinginlargelanguagemodels/images.zip +3 -0
  18. acevalevaluatingancientchineselanguageunderstandinginlargelanguagemodels/layout.json +3 -0
  19. achievingstrongergenerationviasimplecontrastivetuning/71f25873-57a8-4a24-87b9-4210826f239f_content_list.json +3 -0
  20. achievingstrongergenerationviasimplecontrastivetuning/71f25873-57a8-4a24-87b9-4210826f239f_model.json +3 -0
  21. achievingstrongergenerationviasimplecontrastivetuning/71f25873-57a8-4a24-87b9-4210826f239f_origin.pdf +3 -0
  22. achievingstrongergenerationviasimplecontrastivetuning/full.md +437 -0
  23. achievingstrongergenerationviasimplecontrastivetuning/images.zip +3 -0
  24. achievingstrongergenerationviasimplecontrastivetuning/layout.json +3 -0
  25. acoarsetofineprototypelearningapproachformultilabelfewshotintentdetection/0f966289-6db2-4a15-a218-dc1fc3e38066_content_list.json +3 -0
  26. acoarsetofineprototypelearningapproachformultilabelfewshotintentdetection/0f966289-6db2-4a15-a218-dc1fc3e38066_model.json +3 -0
  27. acoarsetofineprototypelearningapproachformultilabelfewshotintentdetection/0f966289-6db2-4a15-a218-dc1fc3e38066_origin.pdf +3 -0
  28. acoarsetofineprototypelearningapproachformultilabelfewshotintentdetection/full.md +558 -0
  29. acoarsetofineprototypelearningapproachformultilabelfewshotintentdetection/images.zip +3 -0
  30. acoarsetofineprototypelearningapproachformultilabelfewshotintentdetection/layout.json +3 -0
  31. acomprehensivesurveyofhallucinationinlargelanguageimagevideoandaudiofoundationmodels/4a2805cc-d2ed-426a-8913-2c39dd75d239_content_list.json +3 -0
  32. acomprehensivesurveyofhallucinationinlargelanguageimagevideoandaudiofoundationmodels/4a2805cc-d2ed-426a-8913-2c39dd75d239_model.json +3 -0
  33. acomprehensivesurveyofhallucinationinlargelanguageimagevideoandaudiofoundationmodels/4a2805cc-d2ed-426a-8913-2c39dd75d239_origin.pdf +3 -0
  34. acomprehensivesurveyofhallucinationinlargelanguageimagevideoandaudiofoundationmodels/full.md +424 -0
  35. acomprehensivesurveyofhallucinationinlargelanguageimagevideoandaudiofoundationmodels/images.zip +3 -0
  36. acomprehensivesurveyofhallucinationinlargelanguageimagevideoandaudiofoundationmodels/layout.json +3 -0
  37. acriticallookatmetaevaluatingsummarisationevaluationmetrics/43d432ad-b3f1-43fb-a85c-0b340864e890_content_list.json +3 -0
  38. acriticallookatmetaevaluatingsummarisationevaluationmetrics/43d432ad-b3f1-43fb-a85c-0b340864e890_model.json +3 -0
  39. acriticallookatmetaevaluatingsummarisationevaluationmetrics/43d432ad-b3f1-43fb-a85c-0b340864e890_origin.pdf +3 -0
  40. acriticallookatmetaevaluatingsummarisationevaluationmetrics/full.md +359 -0
  41. acriticallookatmetaevaluatingsummarisationevaluationmetrics/images.zip +3 -0
  42. acriticallookatmetaevaluatingsummarisationevaluationmetrics/layout.json +3 -0
  43. activationscalingforsteeringandinterpretinglanguagemodels/6c27c5f9-f677-42cf-8108-2898ec13a750_content_list.json +3 -0
  44. activationscalingforsteeringandinterpretinglanguagemodels/6c27c5f9-f677-42cf-8108-2898ec13a750_model.json +3 -0
  45. activationscalingforsteeringandinterpretinglanguagemodels/6c27c5f9-f677-42cf-8108-2898ec13a750_origin.pdf +3 -0
  46. activationscalingforsteeringandinterpretinglanguagemodels/full.md +438 -0
  47. activationscalingforsteeringandinterpretinglanguagemodels/images.zip +3 -0
  48. activationscalingforsteeringandinterpretinglanguagemodels/layout.json +3 -0
  49. activelearningforabstractivetextsummarizationviallmdeterminedcurriculumandcertaintygainmaximization/fcdd29d1-9610-4216-b75f-4ee42d7a9fe2_content_list.json +3 -0
  50. activelearningforabstractivetextsummarizationviallmdeterminedcurriculumandcertaintygainmaximization/fcdd29d1-9610-4216-b75f-4ee42d7a9fe2_model.json +3 -0
abstractionofthoughtmakeslanguagemodelsbetterreasoners/167e13c2-2504-4746-bc1a-47f273480bf4_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d4b13f75c02912317a785c4a014e31ac51537ebb2b7d3f4ad82d84ffd8bbb2a
+ size 237038
abstractionofthoughtmakeslanguagemodelsbetterreasoners/167e13c2-2504-4746-bc1a-47f273480bf4_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:219226a5e711ae9bc2cd98cc10044a1d87caf52ad80bb37589db2398a56b875e
+ size 276291
abstractionofthoughtmakeslanguagemodelsbetterreasoners/167e13c2-2504-4746-bc1a-47f273480bf4_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2c04b5c3986d6575ca39a472157b5eb772f572b8b52cfc0c240685b5fdf892f0
+ size 799475
abstractionofthoughtmakeslanguagemodelsbetterreasoners/full.md ADDED
The diff for this file is too large to render. See raw diff
 
abstractionofthoughtmakeslanguagemodelsbetterreasoners/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a27ee36ca501ce41db1bd8f74e53a49d9d4b12c8c8eff1e5c073ec2b06bfa3c9
+ size 2478935
abstractionofthoughtmakeslanguagemodelsbetterreasoners/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c40d454bb797fdca26ac2d69f08943e57ca2df5125caf55caa2c0225e89ac927
+ size 980600
acceptadaptivecodebookforcompositeandefficientprompttuning/2a27d109-693f-4134-9300-2d4b5f27a4f9_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:69b3cafdcc981e568d8cb0a5bc340ec151178bb5d275a7d9e6dcd9ff4d683189
+ size 102497
acceptadaptivecodebookforcompositeandefficientprompttuning/2a27d109-693f-4134-9300-2d4b5f27a4f9_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b01819626b2efa4dce4a2fb927994ca955d8bbbaa2037b933fac7291a93dc4bd
+ size 120962
acceptadaptivecodebookforcompositeandefficientprompttuning/2a27d109-693f-4134-9300-2d4b5f27a4f9_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d2761afdc3128ec09dbd07db902204cf1908cf36e11ff573f65468c65093b16d
+ size 676136
acceptadaptivecodebookforcompositeandefficientprompttuning/full.md ADDED
@@ -0,0 +1,384 @@
+ # ACCEPT: Adaptive Codebook for Composite and Efficient Prompt Tuning
+
+ Yu-Chen Lin $^{1,\ast}$, Wei-Hua Li $^{1,\ast}$, Jun-Cheng Chen $^{2}$, Chu-Song Chen $^{1}$
+
+ $^{1}$ National Taiwan University, $^{2}$ Academia Sinica
+
+ Correspondence: chusong@csie.ntu.edu.tw
+
+ # Abstract
+
+ Prompt Tuning has been a popular Parameter-Efficient Fine-Tuning method owing to its remarkable performance with few updated parameters on various large-scale pretrained Language Models (PLMs). Traditionally, each prompt has been considered indivisible and updated independently, causing the number of parameters to grow proportionally with prompt length. To address this issue, we propose Adaptive Codebook for Composite and Efficient Prompt Tuning (ACCEPT). Our method draws on the concept of product quantization (PQ), allowing all soft prompts to share a set of learnable codebook vectors in each subspace, with each prompt differentiated by a set of adaptive weights. We achieve superior performance on 17 diverse natural language tasks, including natural language understanding (NLU) and question answering (QA) tasks, by tuning only $0.3\%$ of the PLMs' parameters. Our approach also excels in few-shot and large-model settings, highlighting its significant potential.
+
+ # 1 Introduction
+
+ With the blooming of large language models, Parameter-Efficient Fine-Tuning has become an effective way to leverage the power of pretrained language models (LMs). Among various approaches, Prompt Tuning (PT) has been recognized for its simplicity and efficacy: it adds tokens in front of the inputs. Though prompting pretrained LMs with specific or human-designed instructions makes models transferable to downstream tasks, additional effort is needed to elaborate the prompts, as the outputs produced are often sensitive to them. To address this issue, learning the prompts becomes a solution. Prompt tuning (Lester et al., 2021), Prefix tuning (Li and Liang, 2021) and P-tuning (Liu et al., 2022b) replace explicit instructions with continuous prompt embeddings and give the pretrained models the flexibility to adapt themselves with superior performance. Following this concept, ATTEMPT (Asai et al., 2022), MPT (Wang et al., 2023), DePT (Shi and Lipani, 2024), and TPT (Wu et al., 2023) demonstrate the capability of learnable PT in both single- and multitask training scenarios.
+
+ However, previous studies often treat the prompts as independent units in learning. Though the learned prompts can be further clustered for noise filtering (Bhardwaj et al., 2022), the parameters needed for training are not reduced, since learning occurs before clustering. In this work, we introduce a method that represents each prompt with a set of learnable codewords. All prompts share a codebook with $\mathcal{N}$ codewords. Rather than updating the prompts independently, which prevents word embeddings from sharing information with each other, our codebooks are shared across all prompts in a downstream task, making the codebook's parameter count independent of the prompt length.
+
+ In addition, our approach does not follow the common practice of regarding each prompt as inseparable. When treating a prompt as an indivisible word embedding, we may overlook the possibility that, say, certain words align with other words in the first half of the embedding but match different words in the second half. To tackle this issue, we adopt the idea of product quantization (PQ) (Jegou et al., 2010), dividing a prompt's word embedding into several subsections and constructing a codebook for each subsection. PQ has proven effective for approximate nearest-neighbor search (Jegou et al., 2010; Yu et al., 2018) and neural network compression (Wu et al., 2016). However, if we directly apply PQ to the learned parameters, the number of trainable parameters is not lowered. Hence, we follow only PQ's concept of subsection-specific codebooks and provide a set of learnable codewords for each subsection. This lets prompts share identical subvectors, allowing some tokens to share the same characteristics at a more fine-grained level.
+
+ To ease the learning process and make it differentiable, we allow each subvector to be softly combined (via linear coefficients) with the codewords, rather than being assigned to only one codeword as in PQ. This increases both the diversity and the flexibility of the representation. Learning-based PT is typically done by extending the input with prepended prompts (Lester et al., 2021; Asai et al., 2022; Wu et al., 2023), or by additionally adding prompts of the same shape to the original word embeddings (Shi and Lipani, 2024). Our method, referred to as ACCEPT, is generally applicable and works for both. We conduct experiments on 17 natural language tasks and show that our method consistently outperforms previous PT approaches.
+
+ # 2 Related Work
+
+ Parameter-efficient fine-tuning enhances the capabilities of pretrained LMs by updating only a small set of parameters. Approaches vary, from training extra modules (Houlsby et al., 2019; Sung et al., 2022) to modifying specific parts such as biases or attention weights (Zaken et al., 2022; Hu et al., 2021). Among these, Prompt Tuning (PT) is popular for its simplicity and effectiveness, so we focus on PT.
+
+ # 2.1 Prompt Tuning Methods
+
+ This line of work focuses on enhancing the quality and efficiency of prompts. Schick and Schütze (2021) and Brown et al. (2020) incorporate manually crafted instructions into the input sequence to provide task-specific guidance that helps steer the model's output. When the instructions are well designed, models with frozen parameters exhibit excellent performance. However, additional human adjustment is required, since the output is sensitive to the prompts. To address this issue, Wang et al. (2021) and Gao et al. (2021) generate hard prompt templates automatically with a model. Nonetheless, optimizing discrete prompts is challenging. Thus, Prompt Tuning (Lester et al., 2021), Prefix tuning (Li and Liang, 2021), and P-tuning (Liu et al., 2022b) turn prompts into continuous vectors, known as soft prompts, which are prepended to the word embeddings. The learnable prompts are trained with the pretrained LM frozen. By moving discrete prompts into a continuous space, optimization can be performed by simple gradient descent. Recently, Su et al. (2021) and SPoT (Vu et al., 2022) explore the advantages of initializing prompts with pretrained ones from other tasks. They demonstrate that learning prompts on one or more source tasks and subsequently using these learned prompts as initializations for a target task is notably effective. ATTEMPT (Asai et al., 2022), MPT (Wang et al., 2023) and TPT (Wu et al., 2023) further design various architectures for multitask transfer learning. On the other hand, DePT (Shi and Lipani, 2024) focuses on reducing training and inference time by decomposing the prompt into a shorter one plus a low-rank matrix added to the word embeddings. Nevertheless, earlier approaches treat prompts as monolithic units, causing the number of trainable parameters to increase linearly with the prompt length. In contrast, our method introduces a shared codebook in each subspace, which is unaffected by the prompt length and facilitates information sharing among different prompts.
+
+ # 2.2 Quantization in NLP
+
+ Vector quantization (VQ) is a technique related to PQ that is widely employed in NLP. VQ provides an effective discretization of latent sentence representations, making it especially suitable for NLP tasks due to the inherently discrete nature of text, as demonstrated in Van Den Oord et al. (2017), Roy et al. (2018), Roy and Grangier (2019), Mercatali and Freitas (2021) and Angelidis et al. (2021). VQ has also been used in PT: Bhardwaj et al. (2022) first train a contextualized prompt for each input and then cluster the prompts using VQ to reduce variance.
+
+ However, in these previous approaches, the number of parameters remains substantial, since the training of the original representations occurs before clustering. In contrast, we introduce learnable codebooks and adaptive weights that enable end-to-end training, thereby maintaining parameter efficiency throughout the process.
+
+ # 3 Methodology
+
+ We first give preliminaries on PT for downstream tasks and on PQ, and then present our method.
+
+ # 3.1 Prompt Tuning for Downstream Tasks
+
+ Given a pretrained LM with parameters $\theta$, we want to transfer it to a target task with training data $D = \{(x_{j},y_{j})\}_{j = 1}^{|D|}$. We first map $x_{j}$ to a sequence of word embeddings $e_j$ as input, where $e_j\in R^{l\times d}$, $l$ is the maximal input sequence length and $d$ is the word-embedding dimension. PT prepends a set of trainable continuous prompt embeddings $P = \{p_1, p_2, \dots, p_m\}$ ($p_i \in R^d$) to the input embeddings while keeping the pretrained model parameters $\theta$ fixed. The training goal is to maximize the output probability of the target $y_j$:
+
+ $$
+ \max_{P} \sum_{j=1}^{|D|} p_{\theta}\left(y_{j} \mid \theta; [P, e_{j}]\right). \tag{1}
+ $$
+
+ ![](images/488b7332ef8da1e759da783db7a33a640141f78ac2a9025a75b8c1d82a9844ff.jpg)
+
+ ![](images/7ef9df321c1814e5cf7a3dc947955adc10ae760449a94b475118fde5b11cdd64.jpg)
+
+ ![](images/36ceaa518dd27de2f87d4d06dbd5f4c79169ed02ab14ef01aaf08dfa7e4fdb9f.jpg)
+
+ Figure 1: The overall model architecture of ACCEPT. We subdivide both (a) the Soft-weighted Codebook Prepended Prompt (SCPP) and (b) the Soft-weighted Codebook Added Prompt (SCAP) into $K$ subspaces. Each subspace has a codebook with $r$ codewords shared by all prompts. Each sub-prompt is a linear combination of the codewords and weights. (c) In the main architecture of ACCEPT, the final input is formed by prepending SCPP to the word embedding updated with SCAP. The pretrained model, with its parameters fixed, learns to output correct labels through the tunable SCPP and SCAP.
+
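+ To make Eq. 1 concrete, here is a minimal PyTorch sketch of how soft prompts are prepended to frozen word embeddings. The sizes and tensor names are illustrative assumptions, not part of the paper.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ d, m = 768, 100                                      # embedding dim, number of prompts
+ prompts = nn.Parameter(torch.randn(m, d) * 0.02)     # the only trainable tensor, P
+
+ def prepend_prompts(e: torch.Tensor) -> torch.Tensor:
+     """e: (batch, l, d) frozen word embeddings -> (batch, m + l, d) input [P, e]."""
+     P = prompts.unsqueeze(0).expand(e.size(0), -1, -1)  # broadcast P over the batch
+     return torch.cat([P, e], dim=1)
+ ```
+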
+ # 3.2 Review of PQ and Method Motivation
+
+ VQ is the process of mapping a vector $x$ to the closest codeword $c^*$ in a codebook $C = \{c_1, c_2, \dots, c_N\}$ containing $N$ codewords. As an extension, PQ divides the vector $x \in R^d$ into $K$ subspaces, $x = [x^1, x^2, \dots, x^K]$, with $d = tK$ and $x^k \in R^t$. Each subspace possesses a codebook $C^k$ containing $N_k$ codewords of dimension $t$, for $k = 1, \dots, K$. PQ thus exploits the Cartesian product of the codeword sets,
+
+ $$
+ C = C^{1} \times C^{2} \times \dots \times C^{K}, \tag{2}
+ $$
+
+ to encode the vector $x$. The total number of codewords becomes $\mathcal{N} = \Pi_{k=1}^{K} N_k$ for the entire space. When $K = 1$, PQ degenerates to VQ.
+
+ PQ has the advantage of enabling more codewords for the representation of $x$ while consuming fewer parameters. E.g., if $N_{k}$ is the same for all $k$, PQ can take a storage cost of only $O(tKN_{k}) = O(dN_{k})$ to provide $(N_{k})^{K}$ codewords. For VQ, however, only $N_{k}$ codewords are available at the same storage cost, or the storage must be increased to $O(d(N_k)^K)$ to obtain the same number of codewords. Hence, PQ is more parameter-efficient and suitable for PT. The codewords distributed over subsections can enrich the diversity and flexibility of the representation for solving the sub-problems.
+
+ However, as mentioned above, parameter-efficient learning is not attainable if we perform PT first and then PQ. Hence, our method does not actually perform the 'quantization' step but only takes PQ's idea of efficient representation and makes the codewords of all subspaces learnable for PT. Moreover, for each subspace, we do not use only one codeword to express the input $x$ for that subsection, but softly combine the codewords with linear coefficients for a more precise representation. Details are given below.
+
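+ As a quick sanity check on the storage argument above, the following sketch plugs in illustrative numbers close to our SCPP configuration ($d = 768$, $K = 24$, $N_k = 20$); the numbers are for illustration only.
+
+ ```python
+ d, K, N_k = 768, 24, 20     # illustrative: t = d // K = 32 dims per subspace
+ t = d // K
+
+ pq_storage = t * K * N_k    # O(d * N_k): 15,360 values for all K codebooks
+ pq_codewords = N_k ** K     # Cartesian product: 20**24 full-dimension codewords
+ vq_codewords = N_k          # VQ at the same storage cost offers only N_k codewords
+ print(pq_storage, pq_codewords, vq_codewords)
+ ```
+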
+ # 3.3 Proposed Method - ACCEPT
+
+ Previous methods often view each prompt as a single, indivisible word embedding, independent of other prompts. We suppose instead that tokens can share the same characteristics at a more fine-grained level. Our method leverages the concept of PQ and partitions the embedding space into $K$ smaller subspaces. The $k$-th subspace has a codebook $\mathcal{C}^k = \{c_1^k, c_2^k, \dots, c_r^k\}$ containing $r$ codewords of dimension $t$, with $t = d / K$. All $K$ codebooks are shared across all prompts.
+
+ Recall that in PT there is a set of trainable prompts $P = \{p_1, p_2, \dots, p_m\}$ ($p_i \in R^d$) for a downstream task. Similarly, we divide each $p_i$ into $K$ sub-prompts $p_i = \{p_i^1, p_i^2, \dots, p_i^K\}$ ($p_i^k \in R^t$). We assign a group of weights, $W_i = \{w_i^1, w_i^2, \dots, w_i^K\}$ ($w_i^k \in R^r$), to the $i$-th prompt in every subspace. A sub-prompt $p_i^k$ is then expressed as a linear combination of the codewords in $\mathcal{C}^k$ using the coefficient weights $w_i^k = \{w_{i1}^k, w_{i2}^k, \dots, w_{ir}^k\}$. Thus, the $k$-th subvector of the $i$-th prompt is calculated as
+
+ $$
+ p_{i}^{k} = c_{1}^{k} \times w_{i1}^{k} + c_{2}^{k} \times w_{i2}^{k} + \dots + c_{r}^{k} \times w_{ir}^{k}, \tag{3}
+ $$
+
+ where $c_{j}^{k}$ is the $j$-th codeword $(j = 1, \dots, r)$ in the $k$-th codebook and $w_{ij}^{k}$ is the weight for $c_{j}^{k}$ in the $i$-th prompt.
+
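+ Eq. 3 amounts to one small matrix product per subspace and can be computed for all $m$ prompts at once. A minimal sketch, assuming PyTorch; the sizes and variable names are ours, not the paper's:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ m, d, K, r = 100, 768, 24, 20                            # illustrative; t = d // K = 32
+ t = d // K
+ codebooks = nn.Parameter(torch.randn(K, r, t) * 0.02)    # shared codebooks C^k
+ weights = nn.Parameter(torch.randn(m, K, r) * 0.02)      # adaptive weights w_i^k
+
+ def build_prompts() -> torch.Tensor:
+     """Eq. 3 for all prompts: p_i^k = sum_j w_ij^k * c_j^k, then concatenate over k."""
+     sub = torch.einsum('mkr,krt->mkt', weights, codebooks)  # (m, K, t) sub-prompts
+     return sub.reshape(m, d)                                # (m, d) full prompts
+ ```
+
+ Because the codebooks are shared, every prompt is differentiated only by its $rK$ coefficients, which is what keeps the codebook cost independent of the prompt length.
+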
+ The prompts in $P$ thus depend on both the sets of codewords, $\mathbf{C} = \{\mathcal{C}^k \mid k = 1, \dots, K\}$, and the combination weights, $\mathbf{W} = \{W_i \mid i = 1, \dots, m\}$. To reflect this, we denote them as $P(\mathbf{C}, \mathbf{W})$ in the following. Our goal is to maximize the output probability of the ground-truth label $y_j$:
+
+ $$
+ \max_{\mathbf{C}, \mathbf{W}} \sum_{j=1}^{|D|} p_{\theta}\left(y_{j} \mid \theta; [P(\mathbf{C}, \mathbf{W}), e_{j}]\right), \tag{4}
+ $$
+
+ where $P(\mathbf{C},\mathbf{W}) \in R^{m\times d}$. We refer to Eq. 4 as Soft-Weighted codebook Prepended Prompt (SCPP) tuning, as the prompts are prepended to the inputs. Fig. 1(a) gives an illustration of SCPP learning.
+
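+ A minimal end-to-end sketch of an SCPP update: only the codebooks and weights receive gradients while the backbone stays frozen. The tiny linear "backbone" and the loss are illustrative stand-ins for the frozen LM and the likelihood of Eq. 4, not the actual training code.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ torch.manual_seed(0)
+ m, d, K, r, l = 8, 32, 4, 5, 10                          # tiny illustrative sizes
+ t = d // K
+ codebooks = nn.Parameter(torch.randn(K, r, t) * 0.02)    # C
+ weights = nn.Parameter(torch.randn(m, K, r) * 0.02)      # W
+
+ backbone = nn.Linear(d, 2)                               # stand-in for the frozen LM
+ for p in backbone.parameters():
+     p.requires_grad_(False)                              # theta is never updated
+
+ opt = torch.optim.Adam([codebooks, weights], lr=3e-1)
+
+ e = torch.randn(2, l, d)                                 # batch of word embeddings e_j
+ P = torch.einsum('mkr,krt->mkt', weights, codebooks).reshape(m, d)  # Eq. 3
+ x = torch.cat([P.unsqueeze(0).expand(2, -1, -1), e], dim=1)         # [P(C, W), e_j]
+ loss = backbone(x).logsumexp(-1).mean()                  # placeholder loss, not Eq. 4
+ loss.backward()
+ opt.step()                                               # updates only C and W
+ ```
+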
+ As mentioned in Sec. 1, PT can also be done by adding complementary prompts to the original embeddings (Shi and Lipani, 2024), and our method works for both. To achieve this, we construct another prompt set $Q$ that contains the same number ($l$) of prompts with the same dimension ($d$) as the input word embedding $e_j$ ($e_j \in R^{l \times d}$). The same mechanism is applied to $Q$, which likewise depends on learnable codewords $\mathbf{C}'$ and combination weights $\mathbf{W}'$. We optimize $\mathbf{C}'$ and $\mathbf{W}'$ by solving
+
+ $$
+ \max_{\mathbf{C}', \mathbf{W}'} \sum_{j=1}^{|D|} p_{\theta}\left(y_{j} \mid \theta; \left[e_{j} + Q(\mathbf{C}', \mathbf{W}')\right]\right), \tag{5}
+ $$
+
+ where $Q(\mathbf{C}', \mathbf{W}') \in R^{l \times d}$ has the same shape as $e_j$. We refer to Eq. 5 as Soft-Weighted codebook Added Prompt (SCAP) tuning, as the prompts are added to the original word embeddings as updates. Fig. 1(b) illustrates SCAP learning. Combining SCPP and SCAP then forms our final ACCEPT (shown in Fig. 1(c)). At the same scale of parameters, combining the two types of prompts reduces the total input length, which makes training and inference more efficient (Shi and Lipani, 2024). ACCEPT then learns by maximizing
+
+ $$
+ \max_{\mathbf{C}, \mathbf{W}, \mathbf{C}', \mathbf{W}'} \sum_{j=1}^{|D|} p_{\theta}\left(y_{j} \mid \theta; \left[P(\mathbf{C}, \mathbf{W}),\; e_{j} + Q(\mathbf{C}', \mathbf{W}')\right]\right), \tag{6}
+ $$
+
+ with only $[\mathbf{C},\mathbf{W}]$ and $[\mathbf{C}',\mathbf{W}']$ trainable and $\theta$ frozen.
+
+ Number of Parameters. With vanilla PT, for a model with embedding dimension $d$ and $m$ prompts, the number of parameters is $md$. As we subdivide the embedding into $K$ subspaces, each $t$-dimensional ($t = d / K$) subspace has $r$ codewords, so each codebook has $rt$ parameters and the $K$ codebooks together need $rtK$ parameters. As for the weights, each prompt has $r$ weights in each of the $K$ subspaces, i.e., $rK$ parameters per prompt; multiplied by the number of prompts $m$, this gives $rmK$ weight parameters in total. The total number of parameters of our method is therefore
+
+ $$
+ \underbrace{rtK}_{\#\text{para. of codebook}} + \underbrace{rmK}_{\#\text{para. of weight}} = rd + rmK. \tag{7}
+ $$
+
+ Note that the number of parameters for the codebook is independent of the number of prompts, preventing linear growth with $m$. To keep the trainable parameters at the same scale as vanilla PT for a fair comparison, we set $r$ such that $rd + rmK \leq md$. This ensures the parameter usage is no greater than that of vanilla PT.
+
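+ As a sanity check on Eq. 7, a small helper (ours, illustrative) computes the parameter count and the largest $r$ that satisfies $rd + rmK \leq md$:
+
+ ```python
+ def accept_params(d: int, m: int, K: int, r: int) -> int:
+     """Eq. 7: shared codebooks (r*t*K = r*d) plus per-prompt weights (r*m*K)."""
+     return r * d + r * m * K
+
+ def max_r(d: int, m: int, K: int) -> int:
+     """Largest r that keeps ACCEPT within the vanilla-PT budget of m*d."""
+     return (m * d) // (d + m * K)
+
+ d, m, K = 768, 100, 24                 # illustrative T5-base-like setting
+ r = max_r(d, m, K)                     # -> 24
+ print(r, accept_params(d, m, K, r))    # 76,032 <= m*d = 76,800
+ ```
+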
+ # 4 Experiments
+
+ We present the experimental results and comparisons to other approaches in this section.
+
+ # 4.1 Experimental Settings
+
+ Datasets and Tasks. Following previous works, we evaluate our method on 13 NLU tasks and 4 QA tasks, including (1) MNLI (Williams et al., 2018), QQP, QNLI (Demszky et al., 2018), SST-2 (Socher et al., 2013), STS-B (Cer et al., 2017), MRPC (Dolan and Brockett, 2005), RTE (Giampiccolo et al., 2007) and CoLA (Warstadt et al., 2019) from the GLUE (Wang et al., 2018) benchmark; (2) MultiRC (Khashabi et al., 2018), BoolQ (Clark et al., 2019), WiC (Pilehvar and Camacho-Collados, 2018), WSC (Levesque et al., 2012) and CB (De Marneffe et al., 2019) from the SuperGLUE (Wang et al., 2019) benchmark; (3) the MRQA 2019 Shared Task (Fisch et al., 2019), including Natural Questions (Kwiatkowski et al., 2019), HotpotQA (Yang et al., 2018), SearchQA (Dunn et al., 2017) and NewsQA (Trischler et al., 2016). We additionally use SciTail (Khot et al., 2018) for few-shot learning.
+
+ Baselines. We compare the proposed approach with various PEFT baselines, including: (1) full fine-tuning (FT), where all the parameters of the pretrained backbone models are updated; (2) Prompt Tuning (PT) (Lester et al., 2021), where prompts are initialized by randomly sampled top vocabularies; (3) variants of PT that change prompt architectures or utilize knowledge transfer from other tasks, such as SPoT (Vu et al., 2022), ATTEMPT (Asai et al., 2022), MPT (Wang et al., 2023), TPT (Wu et al., 2023) and DePT (Shi and Lipani, 2024); (4) other PEFT methods, including Adapter (Houlsby et al., 2019) and AdapterDrop (Rücklé et al., 2021), which insert lightweight modules into the middle blocks of the pretrained models; BitFit (Zaken et al., 2022), which updates the bias terms in the attention mechanism; LoRA (Hu et al., 2021), which updates the attention weights with two additional low-rank matrices; LST (Sung et al., 2022), which transfers via a ladder-side-network-based adapter; HyperFormer (Mahabadi et al., 2021) and HyperDecoder (Ivison and Peters, 2022), which train a module to output the weights of adapters; and $(\mathrm{IA})^3$ (Liu et al., 2022a), which scales activations by learned vectors.
+
+ Models. To provide a fair comparison with the previous methods, the main experiments are performed on the T5-base (Raffel et al., 2020) model with 220M parameters and $d = 768$. We also conduct experiments on models of various scales, including T5-small, T5-large, T5-3B and Flan-T5-11B with 60M, 770M, 3B, and 11B parameters, respectively; their model dimensions are 512, 1024, 1024 and 1024, respectively. Note that Flan-T5-11B is an enhanced version of T5 that has been fine-tuned on a collection of tasks.
+
+ Implementation Details. In the main experiments on the GLUE, SuperGLUE and MRQA datasets, we divide the embedding into $K = 24$ and $K = 2$ subsections for SCPP and SCAP, respectively, with the parameters chosen based on performance on the small RTE dataset (detailed in Sec. 4.4). We primarily use a grid search to determine the learning rates ($lr$) for both the codebooks and weights in SCPP and SCAP. For SCPP, the searched values are {3e-1, 4e-1, 5e-1}, while for SCAP we search {1e-4, 5e-4, 1e-3, 5e-3}. Additionally, we observe that a larger $lr$ is more suitable for SCAP on the MRQA 2019 Shared Task, so we extend the search to include the higher values {1, 5, 10} for SCAP. Note that for the experiments that train SCPP or SCAP alone, the backbone follows DePT (the prompt length is 60 and the rank of the LoRA matrices equals 30).
+
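+ For reference, the learning-rate grids above can be encoded as a small search space; the grid values are from this section, while the search harness and scoring stub are our illustrative assumptions.
+
+ ```python
+ import itertools
+
+ LR_GRID = {
+     "SCPP": [3e-1, 4e-1, 5e-1],
+     "SCAP": [1e-4, 5e-4, 1e-3, 5e-3, 1.0, 5.0, 10.0],  # last three: MRQA only
+ }
+
+ def dev_score(lr_scpp: float, lr_scap: float) -> float:
+     """Stub: would train with these rates and return validation accuracy."""
+     return 0.0
+
+ best = max(itertools.product(LR_GRID["SCPP"], LR_GRID["SCAP"]),
+            key=lambda cfg: dev_score(*cfg))
+ print(best)
+ ```
+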
+ ![](images/9f47489aeeff89f92c104f453fe927624bb217238a31f768f6f177720a356f81.jpg)
+
+ Figure 2: Average performance on the GLUE and SuperGLUE benchmarks relative to the number of trainable parameters for the T5-base model. ACCEPT achieves the best performance with the fewest parameters.
+
+ We train $30k$ steps for small datasets with fewer than $10k$ samples and $300k$ steps for large datasets with more than $10k$ samples, following Vu et al. (2022). We perform evaluations every 1,000 steps and save the best checkpoint based on performance on the evaluation dataset; results on the test dataset are then reported using these best checkpoints. We use a batch size of 16 for the T5-small, T5-base and T5-large models, 2 for T5-3B and 1 for Flan-T5-11B due to GPU memory limitations. The warmup steps and weight decay are 1,800 and 0.01, respectively. Experiments are conducted on a single Nvidia 3090 GPU with 24 GB memory or 2 Nvidia V100 GPUs with 32 GB memory.
+
+ We set the number of codewords $r$ so that the total number of parameters is no larger than that of PT with $m = 100$. We use three initialization strategies for the codebooks and weights of $SCPP$ and $SCAP$ in ACCEPT: (1) random initialization; (2) initialization from the pretrained weights of intermediate tasks (MNLI for GLUE/SuperGLUE tasks and SQuAD (Rajpurkar et al., 2016) for QA tasks, following Vu et al. (2022)); and (3) initialization from the target task itself. For the latter two strategies, we first train $SCPP$ and $SCAP$ and use the pretrained weights as initialization for ACCEPT. On each dataset, we select the best strategy for the final results, following Wu et al. (2023). In the few-shot experiments, following Mahabadi et al. (2021) and Asai et al. (2022), we sample $\gamma = \{4, 16, 32\}$ training instances three times with different seeds and report the mean and standard deviation of our results. $SCPP$ and $SCAP$ are pretrained on one of the selected source datasets (MNLI, QQP, SST-2, SQuAD (Rajpurkar et al., 2016), and ReCoRD (Zhang et al., 2018)) following previous methods (Su et al., 2021; Asai et al., 2022; Shi and Lipani, 2024). A more detailed experimental setup is given in Appendix A.
+
+ <table><tr><td rowspan="2">Method</td><td rowspan="2">#Para.</td><td colspan="9">GLUE</td><td colspan="6">SuperGLUE</td></tr><tr><td>MNLI</td><td>QQP</td><td>QNLI</td><td>SST-2</td><td>STS-B</td><td>MRPC</td><td>RTE</td><td>CoLA</td><td>Avg.</td><td>Multi</td><td>Bool</td><td>WiC</td><td>WSC</td><td>CB</td><td>Avg.</td></tr><tr><td>Fine-tuning</td><td>220M</td><td>86.8</td><td>91.6</td><td>93.0</td><td>94.6</td><td>89.7</td><td>90.2</td><td>71.9</td><td>61.8</td><td>84.9</td><td>72.8</td><td>81.1</td><td>70.2</td><td>59.6</td><td>85.7</td><td>73.9</td></tr><tr><td>Adapter</td><td>1.9M</td><td>86.5</td><td>90.2</td><td>93.2</td><td>93.8</td><td>90.7</td><td>85.3</td><td>71.9</td><td>64.0</td><td>84.5</td><td>75.9</td><td>82.5</td><td>67.1</td><td>67.3</td><td>85.7</td><td>75.7</td></tr><tr><td>AdapterDrop</td><td>1.1M</td><td>86.3</td><td>90.2</td><td>93.2</td><td>93.6</td><td>91.4</td><td>86.3</td><td>71.2</td><td>62.7</td><td>84.4</td><td>72.9</td><td>82.3</td><td>68.3</td><td>67.3</td><td>85.7</td><td>75.3</td></tr><tr><td>BitFit</td><td>280K</td><td>85.3</td><td>90.1</td><td>93.0</td><td>94.2</td><td>90.9</td><td>86.8</td><td>67.6</td><td>58.2</td><td>83.3</td><td>74.5</td><td>79.6</td><td>70.0</td><td>59.6</td><td>78.6</td><td>72.5</td></tr><tr><td>LoRA</td><td>3.8M</td><td>86.3</td><td>89.0</td><td>93.2</td><td>94.3</td><td>90.9</td><td>90.1</td><td>75.5</td><td>63.3</td><td>85.3</td><td>72.6</td><td>81.3</td><td>68.3</td><td>67.3</td><td>92.9</td><td>76.5</td></tr><tr><td>LST</td><td>3.8M</td><td>85.6</td><td>88.8</td><td>93.3</td><td>94.1</td><td>90.7</td><td>90.4</td><td>71.9</td><td>58.1</td><td>84.1</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>HyperFormer(m)</td><td>638K</td><td>85.7</td><td>90.0</td><td>93.0</td><td>94.0</td><td>89.7</td><td>87.2</td><td>75.4</td><td>63.7</td><td>84.8</td><td>72.9</td><td>82.5</td><td>69.0</td><td>67.3</td><td>85.7</td><td>75.4</td></tr><tr><td>HyperDecoder(m)</td><td>1.8M</td><td>86.0</td><td>90.5</td><td>93.4</td><td>94.0</td><td>90.5</td><td>87.7</td><td>71.7</td><td>55.9</td><td>83.7</td><td>70.4</td><td>78.8</td><td>67.1</td><td>61.5</td><td>82.1</td><td>72.0</td></tr><tr><td>PT</td><td>76.8K</td><td>81.3</td><td>89.7</td><td>92.8</td><td>90.9</td><td>89.5</td><td>68.1</td><td>54.7</td><td>10.6</td><td>72.2</td><td>58.7</td><td>61.7</td><td>48.9</td><td>51.9</td><td>67.9</td><td>57.8</td></tr><tr><td>PT$^{†}$</td><td>76.8K</td><td>83.4</td><td>90.2</td><td>93.1</td><td>91.9</td><td>90.2</td><td>90.1</td><td>78.8</td><td>60.7</td><td>84.8</td><td>65.7</td><td>63.7</td><td>50.8</td><td>51.9</td><td>67.9</td><td>60.0</td></tr><tr><td>SPoT</td><td>76.8K</td><td>85.4</td><td>90.1</td><td>93.0</td><td>93.4</td><td>90.0</td><td>79.7</td><td>69.8</td><td>57.1</td><td>82.3</td><td>74.0</td><td>77.2</td><td>67.0</td><td>50.0</td><td>46.4</td><td>62.9</td></tr><tr><td>ATTEMPT</td><td>232K</td><td>84.3</td><td>90.3</td><td>93.0</td><td>93.2</td><td>89.7</td><td>85.7</td><td>73.4</td><td>57.4</td><td>83.4</td><td>74.4</td><td>78.8</td><td>66.8</td><td>53.8</td><td>78.6</td><td>70.5</td></tr><tr><td>MPT</td><td>77.6K</td><td>85.9</td><td>90.3</td><td>93.1</td><td>93.8</td><td>90.4</td><td>89.1</td><td>79.4</td><td>62.4</td><td>85.6</td><td>74.8</td><td>79.6</td><td>69.0</td><td>67.3</td><td>79.8</td><td>74.1</td></tr><tr><td>DePT</td><td>76.8K</td><td>85.0</td><td>90.4</td><td>93.2</td><td>94.2</td><td>90.8</td><td>90.7</td><td>79.1</td><td>63.8</td><td>85.9</td><td>74.3</td><td>79.3</td><td>68.7</td><td>67.3</td><td>92.9</td><td>76.5</td></tr><tr><td>TPT$^{1}$</td><td>539K</td><td>85.5</td><td>90.1</td><td>93.2</td><td>94.7</td><td>89.8</td><td>89.7</td><td>82.3</td><td>59.8</td><td>85.6</td><td>74.4</td><td>80.1</td><td>69.8</td><td>67.3</td><td>94.6</td><td>77.2</td></tr><tr><td>ACCEPT (Ours)</td><td>74.9K</td><td>85.9</td><td>90.4</td><td>93.3</td><td>94.5</td><td>91.0</td><td>93.1</td><td>86.3</td><td>68.8</td><td>87.9</td><td>74.9</td><td>82.3</td><td>70.5</td><td>67.3</td><td>96.4</td><td>78.3</td></tr></table>
+
+ Table 1: Performance on GLUE and SuperGLUE with the T5-base model. For comparison with prior works, we use Pearson correlation for STS-B, Matthews correlation for CoLA, F1 for MultiRC (Multi), and accuracy for the other tasks as metrics. $^{1}$ sourced from Wu et al. (2023); the others are sourced from Shi and Lipani (2024). †The values are the improved results tuned by Shi and Lipani (2024). (m) refers to multi-task training.
+
+ <table><tr><td rowspan="2">Method</td><td rowspan="2">#Para.</td><td colspan="4">MRQA</td><td rowspan="2">Avg.</td></tr><tr><td>NQ</td><td>HP</td><td>SQA</td><td>News</td></tr><tr><td>Fine-tuning</td><td>220M</td><td>75.1</td><td>77.5</td><td>81.1</td><td>65.2</td><td>74.7</td></tr><tr><td>Adapter</td><td>1.9M</td><td>74.2</td><td>77.6</td><td>81.4</td><td>65.6</td><td>74.7</td></tr><tr><td>BitFit</td><td>280K</td><td>70.7</td><td>75.5</td><td>77.7</td><td>64.1</td><td>72.0</td></tr><tr><td>LoRA</td><td>3.8M</td><td>72.4</td><td>62.3</td><td>72.5</td><td>56.9</td><td>66.0</td></tr><tr><td>PT</td><td>76.8K</td><td>67.9</td><td>72.9</td><td>75.7</td><td>61.1</td><td>69.4</td></tr><tr><td>SPoT</td><td>76.8K</td><td>68.2</td><td>74.8</td><td>75.3</td><td>58.2</td><td>69.1</td></tr><tr><td>ATTEMPT</td><td>232K</td><td>70.4</td><td>75.2</td><td>77.3</td><td>62.8</td><td>71.4</td></tr><tr><td>MPT</td><td>77.6K</td><td>72.0±0.1</td><td>75.8±0.1</td><td>77.2±0.1</td><td>63.7±0.1</td><td>72.2</td></tr><tr><td>DePT</td><td>76.8K</td><td>73.2±0.1</td><td>76.8±0.3</td><td>77.6±0.2</td><td>64.4±0.1</td><td>73.0</td></tr><tr><td>ACCEPT (Ours)</td><td>74.2K</td><td>73.6±0.05</td><td>77.1±0.09</td><td>78.9±0.01</td><td>64.6±0.06</td><td>73.6</td></tr></table>
+
+ Table 2: Performance on the MRQA 2019 Shared Task. We report the average F1 score and standard deviation over three runs with different seeds. The proposed method achieves promising performance with a limited number of parameters.
+
+ # 4.2 Results on NLU and QA Tasks
+
+ In Tab. 1, we compare the performance and the number of trainable parameters of the proposed method against various methods on the GLUE and SuperGLUE benchmarks. As can be seen, our method outperforms previous PT methods by a large margin, especially on the MRPC, RTE and CoLA datasets of the GLUE benchmark, while consistently improving on other datasets such as MNLI and QQP. Similar results hold on the SuperGLUE benchmark: our method achieves a great improvement on the BoolQ, WiC and CB datasets, while also yielding promising performance on MultiRC and WSC. It is worth noting that our method surpasses previous PEFT methods that use far more tunable parameters, such as Adapter, and also outperforms FT by $3.0\%$ and $4.4\%$ on the average performance of GLUE and SuperGLUE, respectively, with only $0.3\%$ of parameters tuned. We further visualize the average performance against the number of trainable parameters for each method in Fig. 2. Our approach achieves the highest average accuracy while using the fewest parameters, making it preferable for both performance and parameter efficiency.
+
+ Besides the favorable results on the NLU tasks above, the proposed method also performs well on QA tasks. Tab. 2 shows that our method achieves a $4.2\%$ improvement over PT on the average of the MRQA 2019 Shared Task with fewer parameters, further reducing the performance gap between FT and PT methods.
+
+ To conclude, the proposed method achieves state-of-the-art performance on the challenging GLUE/SuperGLUE benchmarks and the MRQA 2019 Shared Task with fewer trainable parameters, highlighting its efficiency and effectiveness.
+
+ # 4.3 Results on Few-shot Adaptation
+
+ Following Gao et al. (2021), Asai et al. (2022), Wang et al. (2023), Wu et al. (2023) and Shi and Lipani (2024), we conduct experiments with a limited number of training samples on the BoolQ, CB, and SciTail datasets to verify the capability of ACCEPT in resource-limited scenarios. The experimental process involves first training prompts on the intermediate tasks (e.g., MNLI) and then transferring them to the target datasets with 4, 16, or 32 randomly sampled instances. In Tab. 3, our method accomplishes impressive results on the BoolQ dataset, consistent with Tab. 1. It also outperforms the previous methods on the CB dataset with 4 shots. Note that for the CB dataset with 16 and 32 shots, our approach outperforms most of the methods except ATTEMPT and TPT, both of which use far more parameters than ours. The results demonstrate that ACCEPT remains effective in few-shot adaptation scenarios.
+
+ <table><tr><td>Task</td><td>γ-shot<br>#Para.</td><td>FT<br>220M</td><td>AD<br>1.9M</td><td>PT<br>76.8K</td><td>ST<br>76.8K</td><td>HF<br>638K</td><td>(IA)$^3$<br>55.3K</td><td>ATP<br>232K</td><td>MPT<br>77.6K</td><td>TPT<br>538K</td><td>DePT<br>76.8K</td><td>ACCEPT (Ours)<br>74.9K</td></tr><tr><td rowspan="3">BoolQ</td><td>4</td><td>50.5</td><td>53.4</td><td>61.6</td><td>50.5</td><td>48.0</td><td>56.7</td><td>61.8</td><td>62.2</td><td>62.2</td><td>62.7±5.4</td><td>70.5±1.6</td></tr><tr><td>16</td><td>56.5</td><td>51.4</td><td>61.9</td><td>50.6</td><td>50.2</td><td>62.0</td><td>60.0</td><td>63.3</td><td>63.5</td><td>66.9±4.4</td><td>71.9±1.3</td></tr><tr><td>32</td><td>58.4</td><td>54.5</td><td>61.7</td><td>61.2</td><td>58.3</td><td>67.2</td><td>65.3</td><td>68.9</td><td>67.4</td><td>67.2±3.4</td><td>72.5±1.0</td></tr><tr><td rowspan="3">CB</td><td>4</td><td>57.7</td><td>51.1</td><td>53.5</td><td>71.4</td><td>60.7</td><td>65.5</td><td>82.1</td><td>73.6</td><td>78.6</td><td>75.0±5.1</td><td>78.6±3.6</td></tr><tr><td>16</td><td>77.0</td><td>74.8</td><td>63.5</td><td>64.3</td><td>76.3</td><td>71.4</td><td>78.5</td><td>78.6</td><td>80.4</td><td>78.6±4.3</td><td>81.0±2.0</td></tr><tr><td>32</td><td>80.0</td><td>74.8</td><td>67.8</td><td>64.3</td><td>81.4</td><td>75.0</td><td>85.7</td><td>82.1</td><td>86.3</td><td>82.1±2.3</td><td>83.3±2.0</td></tr><tr><td rowspan="3">SciTail</td><td>4</td><td>79.6</td><td>79.5</td><td>57.7</td><td>69.6</td><td>82.0</td><td>65.4</td><td>80.2</td><td>80.2</td><td>81.0</td><td>78.1±2.5</td><td>79.0±4.4</td></tr><tr><td>16</td><td>80.0</td><td>83.2</td><td>60.8</td><td>71.9</td><td>86.5</td><td>74.4</td><td>79.5</td><td>87.3</td><td>85.5</td><td>78.5±1.4</td><td>80.5±3.1</td></tr><tr><td>32</td><td>81.9</td><td>85.0</td><td>60.2</td><td>71.9</td><td>85.8</td><td>80.4</td><td>80.2</td><td>86.3</td><td>85.2</td><td>85.4±3.1</td><td>84.8±0.4</td></tr></table>
+
+ Table 3: Few-shot learning results with $\gamma = \{4,16,32\}$ on the BoolQ, CB, and SciTail datasets. FT: Fine-tuning, AD: Adapter, PT: Prompt tuning, ST: SPoT, HF: HyperFormer, ATP: ATTEMPT. ACCEPT significantly outperforms other methods on BoolQ and offers comparable performance on CB and SciTail with fewer parameters.
+
+ # 4.4 Ablation Study
+
+ Learnable Codebook and Subdivision. To demonstrate the effectiveness of ACCEPT, we first conduct an ablation study of PQ, utilizing the shared learnable codebook and prompt-embedding subdivision, with the prepended and added prompt tunings. Tab. 4 shows that sharing the learnable codebook among prompts brings a noticeable performance improvement over the original architectures. Moreover, dividing prompt embeddings into more fine-grained pieces enhances performance further. The results reveal the efficacy of PQ-style subdivision of the prompt embedding space.
+
+ <table><tr><td>LC</td><td>PS</td><td>PP</td><td>AP</td><td>SuperGLUE</td></tr><tr><td>X</td><td>X</td><td>✓</td><td>X</td><td>60.0</td></tr><tr><td>✓</td><td>X</td><td>✓</td><td>X</td><td>75.5</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>X</td><td>76.3</td></tr><tr><td>X</td><td>X</td><td>✓</td><td>✓</td><td>76.5</td></tr><tr><td>✓</td><td>X</td><td>✓</td><td>✓</td><td>77.7</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>78.3</td></tr></table>
+
+ Table 4: Effectiveness of learnable codebook and subdivision. Our designs of shared learnable codebook (LC) and prompt embedding subdivision (PS) allow a performance gain at the same scale of parameters (76.8k) as other approaches. PP and AP denote the prepended and added prompt tunings, respectively.
+
+ Different Granularity of Subdivision. We further study the impact of using different sub-dimension $(t)$ and codebook size $(r)$ pairs in our approach. We choose multiple values for $K$ and divide the embeddings of dimension $d$ into vectors of sub-dimension $t$. We then determine $r$ to satisfy $rtK + rmK \leq 100d$, ensuring fewer parameters are used compared to PT with $m = 100$. Note that with the T5-base model, $t = 768$ means no division of the embedding dimension. Tab. 5 shows that with an appropriate division, there is a performance gain compared to treating the embedding as a whole. SCPP achieves optimal performance with $t = 32$, chosen for its fewer parameters than $t = 384$. For SCAP, optimal performance is achieved with $t = 384$. More complete experiments are in Appendix C.2. Note that the optimal parameters ($t = 32, K = 24, r = 20$ for SCPP and $t = 384, K = 2, r = 24$ for SCAP) chosen on this small task (RTE) are then applied to ALL the datasets when using our approach with the T5-base model in the experiments.
+
+ <table><tr><td colspan="7">Soft-weighted Codebook Prepended Prompt (SCPP)</td></tr><tr><td>(t, r)</td><td>(32, 20)</td><td>(64, 30)</td><td>(128, 40)</td><td>(256, 48)</td><td>(384, 51)</td><td>(768, 55)</td></tr><tr><td>#Para.</td><td>74,880</td><td>75,360</td><td>75,840</td><td>76,224</td><td>76,008</td><td>76,260</td></tr><tr><td>Acc.</td><td>82.73</td><td>79.14</td><td>79.14</td><td>77.70</td><td>82.73</td><td>81.29</td></tr><tr><td colspan="7">Soft-weighted Codebook Added Prompt (SCAP)</td></tr><tr><td>(t, r)</td><td>(32, 4)</td><td>(64, 8)</td><td>(128, 13)</td><td>(256, 20)</td><td>(384, 24)</td><td>(768, 30)</td></tr><tr><td>#Para.</td><td>73,728</td><td>76,800</td><td>76,032</td><td>76,800</td><td>76,800</td><td>76,800</td></tr><tr><td>Acc.</td><td>77.70</td><td>76.98</td><td>79.86</td><td>81.29</td><td>82.73</td><td>78.42</td></tr></table>
+
+ Table 5: Performance on the RTE dataset when dividing SCPP and SCAP into different granularities. For T5-base, $t = 768$ means the prompt is NOT divided. Configurations that surpass the non-division setting are highlighted in gray.
+
+ Ablation on SCPP and SCAP. We train SCPP and SCAP individually, initializing them from a random Gaussian distribution. Tab. 6 shows that when using SCPP alone, the average performance on GLUE, SuperGLUE and MRQA improves by $1.2\%$, $1.3\%$ and $0.5\%$, respectively. Similarly, there are $1.1\%$, $1.0\%$ and $0.3\%$ performance gains when using SCAP alone. We find that the performance gain is relatively small on QA tasks; improving the generation of longer sentences with ACCEPT is left as future work. With both SCPP and SCAP, our approach achieves the best performance, with gains of $2.0\%$, $1.8\%$ and $0.6\%$ on the three benchmarks, indicating the importance of both of our designs. More detailed results on each dataset are in Appendix C.3.
+
+ <table><tr><td>SCPP</td><td>SCAP</td><td>GLUE</td><td>SuperGLUE</td><td>MRQA</td></tr><tr><td>X</td><td>X</td><td>85.9</td><td>76.5</td><td>73.0</td></tr><tr><td>✓</td><td>X</td><td>87.1</td><td>77.8</td><td>73.5</td></tr><tr><td>X</td><td>✓</td><td>87.0</td><td>77.5</td><td>73.3</td></tr><tr><td>✓</td><td>✓</td><td>87.9</td><td>78.3</td><td>73.6</td></tr></table>
+
+ Table 6: Effectiveness of Soft-weighted Codebook Prepended and Added prompts. The optimal performance is achieved with SCPP and SCAP combined.
+
+ Model Scaling. We explore the effect of different model sizes (T5-small, T5-base and T5-large) with our method on the BoolQ, MultiRC and WiC datasets in Fig. 3. Our method shows larger performance improvements with bigger language model backbones, highlighting ACCEPT's adaptability to larger models. We also provide the results of full fine-tuning (FT), Adapter, Prompt Tuning (PT), MPT, and TPT for comparison. ACCEPT demonstrates competitive performance across all model scales. Notably, the tunable parameters of our approach are much fewer than those of FT, Adapter, and ATTEMPT. Despite this, we achieve state-of-the-art performance on all three datasets with T5-large (770M), a highly encouraging result given the reduced parameter count.
+
+ ![](images/f4c55cb4cad9d3e7f4cdcdeb7c591b2f903c317df6ca2360061b303af4cae501.jpg)
+
+ Figure 3: Performance on the BoolQ, MultiRC and WiC datasets with different model sizes (T5-small, T5-base and T5-large). Our method shows improved performance as the model size increases and reaches SOTA on larger models, showcasing the potential of ACCEPT.
+
+ To further study the capabilities and possibilities of ACCEPT on large language models, we conduct experiments with billion-parameter models including T5-3B, Flan-T5-11B and Llama-2-7B. Tab. 7 shows that ACCEPT achieves the state-of-the-art average accuracy on the GLUE benchmark with T5-3B, surpassing vanilla PT and other prompt tuning methods including DePT and TPT. Notably, we achieve a $0.1\%$ improvement with much fewer parameters than TPT. Flan-T5 is an enhanced version of T5 obtained by fine-tuning T5 on 1,800 downstream tasks; we select the 11-billion-parameter version to investigate the effectiveness of ACCEPT on large language models. Due to the huge computational resources required, we select the RTE dataset for evaluation. Tab. 8 shows that ACCEPT outperforms both PT and DePT on the RTE dataset, indicating the potential and capability of ACCEPT with large-scale models. We also evaluate our method using Llama-based models. Initially, we attempted to reproduce the results of Shi and Lipani (2024), which use the auto-regressively generated output for classification, but we found it challenging to achieve the same level of accuracy with this approach. To tackle this, we added a trainable linear head to output the probability distribution for classification. The results are shown in Tab. 9. Our approach outperforms both PT and DePT on the SST-2 dataset by 1.16 and 0.69 points with Llama-2-7B. This demonstrates the capability of our method with large language models (LLMs), highlighting its potential for adaptation to future LLM architectures.
+
+ <table><tr><td>Method</td><td>GLUE</td></tr><tr><td>PT</td><td>85.6</td></tr><tr><td>DePT</td><td>86.4</td></tr><tr><td>TPT</td><td>88.4</td></tr><tr><td>ACCEPT (Ours)</td><td>88.5</td></tr></table>
+
+ Table 7: Performance on GLUE with T5-3B. We outperform PT and DePT and achieve a 0.1-point improvement over TPT with fewer parameters.
+
+ <table><tr><td>Method</td><td>RTE</td></tr><tr><td>PT†</td><td>88.49</td></tr><tr><td>DePT†</td><td>89.92</td></tr><tr><td>ACCEPT (Ours)</td><td>91.37</td></tr></table>
+
+ Table 8: Performance of Flan-T5-11B on the RTE dataset. Our method outperforms both PT and DePT. †The results are reproduced by us.
+
+ <table><tr><td>Method</td><td>#Para.</td><td>SST-2</td></tr><tr><td>PT</td><td>417.8K</td><td>94.48</td></tr><tr><td>DePT</td><td>413.4K</td><td>94.95</td></tr><tr><td>ACCEPT (Ours)</td><td>405K</td><td>95.64</td></tr></table>
+
+ Table 9: Performance of the Llama-2-7B model on the SST-2 dataset. Our method outperforms PT and DePT. PT and DePT results are sourced from Shi and Lipani (2024).
+
+ Prompt Initialization. We further analyze how initialization affects performance, using three settings: (1) Random initialization: both the codebooks and the weights are initialized from a random Gaussian distribution. (2) Intermediate-task initialization: SPoT (Vu et al., 2022) has shown that initializing prompts with the pretrained weights from tasks of a similar nature can benefit training on the target task. (3) Target-task initialization: the codebooks and weights of $SCPP$ and $SCAP$ are first pretrained on the target task itself and then serve as the initialization of ACCEPT. Tab. 10 shows that our method achieves better performance than PT and DePT under all three strategies, revealing the robustness and effectiveness of ACCEPT. Moreover, the intermediate-task initialization strategy yields the best performance. We conjecture that pretrained codebooks and weights from an intermediate task of a similar nature ease transfer to the target task, providing additional knowledge and thus surpassing random or target-task initialization. Detailed results are provided in Appendix C.1.
+
+ <table><tr><td>Method</td><td>GLUE</td><td>SuperGLUE</td></tr><tr><td>PT†</td><td>84.8</td><td>60.0</td></tr><tr><td>DePT</td><td>85.9</td><td>76.5</td></tr><tr><td>Init. method of ACCEPT</td><td></td><td></td></tr><tr><td>Random</td><td>87.1</td><td>76.5</td></tr><tr><td>Intermediate task</td><td>87.5</td><td>77.6</td></tr><tr><td>Target task</td><td>87.1</td><td>77.2</td></tr></table>
+
+ Table 10: Performance of ACCEPT on GLUE and SuperGLUE with different prompt initializations. All three strategies outperform PT and DePT, showing our method's robustness.
+
+ Prompt Length. We evaluate the impact of different prompt lengths $(m)$ on model performance and training time, as shown in Fig. 4. The experiments are conducted on the MRPC and STS-B datasets with $m$ values of $\{0,20,40,60,80,100\}$, while maintaining the same number of trainable parameters across all settings. The results indicate that as $m$ increases, the training time also rises. Notably, our approach achieves peak accuracy on both datasets with $m = 60$, making it our optimal choice for the prompt length.
+
+ ![](images/86a6b7095b2849795673ebb6db4f65879f3107624d47b2088441bfed28dc9078.jpg)
+
+ Figure 4: Performance on the MRPC and STS-B datasets and their relative training time (normalized to that of $m = 100$) for various prompt lengths $m = \{20, 40, 60, 80, 100\}$. Both datasets show the best performance at $m = 60$.
+
+ # 5 Conclusion
+
+ In this paper, we present ACCEPT, a novel prompt tuning method based on product quantization. Compared with other PT methods, the proposed method allows versatile and efficient prompt learning by subdividing prompt embeddings and computing each sub-prompt as a linear combination of learnable codewords and weights. Extensive experiments demonstrate that ACCEPT achieves outstanding performance across various NLP tasks. Furthermore, we show that the proposed approach can be effectively adapted to billion-parameter models and achieves decent results.
+
+ While we currently use all codewords in the linear combination, we aim to explore sparse representations in future work. Besides, we plan to extend our research scope by applying ACCEPT to a wider range of tasks with a more diverse set of LLMs.
+
+ # Limitations
+
+ While our extensive experiments across 17 datasets highlight the effectiveness of ACCEPT, it is important to acknowledge some additional considerations. Our method introduces some extra hyperparameters, such as the optimal sub-dimension $t$, whose selection requires some extra computational effort. Moreover, ACCEPT involves managing two distinct learning rates for SCPP and SCAP. Additionally, due to the significant resource requirements of models with tens of billions of parameters, our experiments on such models were conducted on a limited number of datasets. Future work will aim to explore ACCEPT on a broader range of datasets and larger models to further validate its efficacy.
+
+ # Acknowledgements
+
+ This work was supported in part under grants NSTC 112-2634-F-002-005, NSTC 112-2634-F-006-002, NSTC 112-2221-E-002-132-MY3, and NTU under grant 113L900902.
+
258
+ # References
259
+
260
+ Stefanos Angelidis, Reinald Kim Amplayo, Yoshihiko Suhara, Xiaolan Wang, and Mirella Lapata. 2021. Extractive opinion summarization in quantized transformer spaces. Transactions of the Association for Computational Linguistics, 9.
261
+ Akari Asai, Mohammadreza Salehi, Matthew E Peters, and Hannaneh Hajishirzi. 2022. Attempt: Parameter-efficient multi-task tuning via attentional mixtures of soft prompts. In EMNLP.
262
+ Rishabh Bhardwaj, Amrita Saha, Steven CH Hoi, and Soujanya Poria. 2022. Vector-quantized input-contextualized soft prompts for natural language understanding. In EMNLP.
263
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In NeurIPS.
264
+ Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017).
265
+ Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes/no questions. In NAACL.
266
+ Marie-Catherine De Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The commitmentbank: Investigating projection in naturally occurring discourse. In Proceedings of Sinn und Bedeutung.
267
+ Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming question answering datasets into natural language inference datasets. arXiv preprint arXiv:1809.02922.
268
+ Bill Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Third international workshop on paraphrasing (IWP2005).
269
+ Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179.
270
+ Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. Mrqa 2019 shared task: Evaluating generalization in reading comprehension. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering.
271
+ Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. IJCNLP.
272
+
273
+ Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and William B Dolan. 2007. The third pascal recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing.
274
+ Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In ICML. PMLR.
275
+ Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. In ICLR.
276
+ Hamish Ivison and Matthew E Peters. 2022. Hyperdecoders: Instance-specific decoders for multi-task nlp. In EMNLP.
277
+ Herve Jegou, Matthijs Douze, and Cordelia Schmid. 2010. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(1).
278
+ Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In NAACL.
279
+ Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. Scitail: A textual entailment dataset from science question answering. In AAAI, volume 32.
280
+ Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7.
281
+ Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In EMNLP.
282
+ Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Thirteenth international conference on the principles of knowledge representation and reasoning.
283
+ Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In IJCNLP.
284
+ Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin A Raffel. 2022a. Few-shot parameter-efficient finetuning is better and cheaper than in-context learning. NeurIPS, 35.
285
+ Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2022b. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. ACL.
286
+
287
+ Rabeeh Karimi Mahabadi, Sebastian Ruder, Mostafa Dehghani, and James Henderson. 2021. Parameter-efficient multi-task fine-tuning for transformers via shared hypernetworks. In ACL.
288
+ Giangiacomo Mercatali and André Freitas. 2021. Disentangling generative factors in natural language with discrete variational autoencoders. In EMNLP.
289
+ Mohammad Taher Pilehvar and Jose Camacho-Collados. 2018. Wic: the word-in-context dataset for evaluating context-sensitive meaning representations. In NAACL.
290
+ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140).
291
+ Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: $100,000+$ questions for machine comprehension of text. In EMNLP.
292
+ Aurko Roy and David Grangier. 2019. Unsupervised paraphrasing without translation. In ACL.
293
+ Aurko Roy, Ashish Vaswani, Arvind Neelakantan, and Niki Parmar. 2018. Theory and experiments on vector quantized autoencoders. arXiv preprint arXiv:1805.11063.
294
+ Andreas Rückle, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2021. Adapterdrop: On the efficiency of adapters in transformers. EMNLP.
295
+ Timo Schick and Hinrich Schütze. 2021. It's not just size that matters: Small language models are also few-shot learners. In NAACL.
296
+ Zhengxiang Shi and Aldo Lipani. 2024. Dept: Decomposed prompt tuning for parameter-efficient finetuning. In ICLR.
297
+ Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP.
298
+ Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, Huadong Wang, Kaiyue Wen, Zhiyuan Liu, Peng Li, Juanzi Li, et al. 2021. On transferability of prompt tuning for natural language processing. In NAACL.
299
+ Yi-Lin Sung, Jaemin Cho, and Mohit Bansal. 2022. Lst: Ladder side-tuning for parameter and memory efficient transfer learning. In NeurIPS, volume 35.
300
+ Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. Newsqa: A machine comprehension dataset. In ACL.
301
+
302
+ Aaron Van Den Oord, Oriol Vinyals, et al. 2017. Neural discrete representation learning. NeurIPS, 30.
303
+ Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, and Daniel Cer. 2022. Spot: Better frozen model adaptation through soft prompt transfer. In ACL.
304
+ Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. NeurIPS, 32.
305
+ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. In ICLR.
306
+ Sinong Wang, Han Fang, Madian Khabsa, Hanzi Mao, and Hao Ma. 2021. Entailment as few-shot learner. arXiv preprint arXiv:2104.14690.
307
+ Zhen Wang, Rameswar Panda, Leonid Karlinsky, Rogerio Feris, Huan Sun, and Yoon Kim. 2023. Multitask prompt tuning enables parameter-efficient transfer learning. In ICLR.
308
+ Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7.
309
+ Adina Williams, Nikita Nangia, and Samuel R Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL.
310
+ Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, and Jian Cheng. 2016. Quantized convolutional neural networks for mobile devices. In CVPR.
311
+ Muling Wu, Wenhao Liu, Jianhan Xu, Changze Lv, Zixuan Ling, Tianlong Li, Longtao Huang, Xiaoqing Zheng, and Xuan-Jing Huang. 2023. Parameter efficient multi-task fine-tuning by learning to transfer token-wise prompts. In EMNLP.
312
+ Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In EMNLP.
313
+ Tan Yu, Junsong Yuan, Chen Fang, and Hailin Jin. 2018. Product quantization network for fast image retrieval. In ECCV.
314
+ Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. 2022. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. ACL.
315
+ Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. Record: Bridging the gap between human and machine commonsense reading comprehension. arXiv preprint arXiv:1810.12885.
316
+
317
+ # Appendix
318
+
319
+ # A Experimental Setting
320
+
321
+ We use PyTorch<sup>1</sup>, huggingface transformers<sup>2</sup> and huggingface PEFT<sup>3</sup> to implement our work. GLUE<sup>4</sup>, SuperGLUE<sup>5</sup> and the MRQA 2019 Shared Task<sup>6</sup> are downloaded from huggingface datasets. We use the original T5 checkpoint rather than the LM-adapted 1.1 version (Lester et al., 2021). We build our code on DePT's repository<sup>7</sup> and mainly cite the experimental results from Wu et al. (2023) and Shi and Lipani (2024). We typically use $m = 60$ for the length of SCPP, and set the maximum sequence length $l$ to 256, which also corresponds to the length of SCAP (except for MultiRC, where we use 348 following Shi and Lipani (2024)). We partition SCPP and SCAP into $K = 24$ and $K = 2$ subsections, respectively. The associated $r$ is determined from the constraint $rd + rmK \leq md$ for each model with dimension $d$. For the experiments using the Llama-2-7B model, we build on Petals' repository<sup>8</sup>. We use a learning rate of 3e-3 for SCPP and 5e-5 for SCAP. The weight decay is 1e-2 and 1e-3, respectively, with a batch size of 32.
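+
+ To make this configuration concrete, the following is a minimal PyTorch sketch (our reconstruction for illustration, not the released code) of how an SCPP-style prompt can be assembled from $K$ codebooks of $r$ codewords with sub-dimension $t = d/K$. Its parameter count, $rd + rmK$, is exactly the left-hand side of the constraint above:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class SoftCodebookPrompt(nn.Module):
+     """Sketch of a soft-weighted codebook prompt (SCPP-style)."""
+     def __init__(self, m=60, d=768, K=24, r=20):
+         super().__init__()
+         assert d % K == 0
+         t = d // K                                                  # sub-dimension
+         self.codebooks = nn.Parameter(torch.randn(K, r, t) * 0.02)  # r*d params
+         self.weights = nn.Parameter(torch.randn(m, K, r) * 0.02)    # r*m*K params
+
+     def forward(self):
+         # Each subprompt is a linear combination of the r codewords of its
+         # subsection; concatenating the K subsections restores dimension d.
+         sub = torch.einsum("mkr,krt->mkt", self.weights, self.codebooks)
+         return sub.flatten(1)  # (m, d), prepended to the input embeddings
+
+ prompt = SoftCodebookPrompt()()  # torch.Size([60, 768])
+ ```
+
+ An SCAP-style prompt is built analogously with length $l = 256$ and $K = 2$, with the result added to the input embeddings (hence "Added Prompt") rather than prepended.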
322
+
323
+ # B Task and Dataset Details
324
+
325
+ We list the detailed information, including numbers of training, evaluation and testing samples, task types and evaluation metrics of each dataset which has been used in our experiments in Tab. 11. We utilize a diverse range of datasets covering various NLU tasks, including Natural Language Inference (NLI), Paraphrase Detection, and Sentiment Analysis. Additionally, we explore different types of Question Answering (QA) tasks, such as extractive and boolean QA. The effectiveness and generalizability of ACCEPT are demonstrated across these tasks in Tab. 1 and Tab. 2.
326
+
327
+ # C More Details of Experiments
328
+
329
+ In this section, we present more comprehensive experiments.
330
+
331
+ ![](images/c7ec8672e96a1ceff50adfeaccebfedd845da239d4bb667cd3614295cba6f1e4.jpg)
332
+
333
+ ![](images/e359dfa4dca9e46ac9c41e71cacfd4abee1c5d1abe1348276c5da4e86ff41c32.jpg)
334
+
335
+ ![](images/0c560c96a5fad910e91d3b96d8329a680740aa7b45b8d43be767a6fa84f8b099.jpg)
336
+
337
+ ![](images/a19a2c457a69ef3f944016a6b8dbf0566760e73689d8ad8098eb37b3c409e21b.jpg)
338
+
339
+ ![](images/a2caa7356f2d79e9e40cd584c751fdc28204e769e2ba635ad61f7efc09f3a85a.jpg)
340
+ Figure 5: Training curve (left) and validation accuracy curve (right) comparison between different prompt initialization strategies across QQP, QNLI and SST-2.
341
+
342
+ ![](images/361ed4076d50fd8929d22febf83de58bc517df0f4cde4b9f0a95494d3e77bac8.jpg)
343
+
344
+ # C.1 Details of Prompt Initialization
345
+
346
+ Tab. 13 and Tab. 15 present the results for each dataset using the three initialization strategies. The majority of performances improve with either intermediate task or target task initialization, demonstrating the benefit of acquiring knowledge before transferring it to the target tasks, in line with SPoT (Vu et al., 2022).
347
+
348
+ In addition, we compare the training and validation curves under the different prompt initialization methods on the QQP, QNLI and SST-2 datasets, as shown in Figure 5. It can be observed that initializing with an intermediate task or the target task helps the model transfer more easily, resulting in faster convergence and better performance.
349
+
350
+ # C.2 Details of Different Granularities of Subdivision
351
+
352
+ We have shown the performance of different sub-dimension $(t)$ and codebook size $(r)$ pairs in Tab. 5. We present more results in Tab. 12, selecting eight factors of the model dimension ($d = 768$ for T5-base) as sub-dimensions in addition to the undivided case, and conducting the experiments for each setting on the RTE dataset. Tab. 12 shows that with an appropriate division, multiple configurations surpass the performance of not dividing the prompts $(t = 768)$, which demonstrates the effectiveness of PQ. We select $t = 32$, $K = 24$, $r = 20$ for SCPP and $t = 384$, $K = 2$, $r = 24$ for SCAP
353
+
354
+ <table><tr><td>Dataset Name</td><td>Benchmark</td><td>#Train</td><td>#Valid</td><td>#Test</td><td>Task Type</td><td>Metric</td></tr><tr><td>MNLI</td><td>GLUE</td><td>392,702</td><td>9,832</td><td>9,815</td><td>Natural Language Inference (NLI)</td><td>accuracy</td></tr><tr><td>QQP</td><td>GLUE</td><td>362,846</td><td>1,000</td><td>40,431</td><td>Paraphrase Detection</td><td>accuracy/F1</td></tr><tr><td>QNLI</td><td>GLUE</td><td>103,743</td><td>1,000</td><td>5,463</td><td>NLI</td><td>accuracy</td></tr><tr><td>SST-2</td><td>GLUE</td><td>66,349</td><td>1,000</td><td>872</td><td>Sentiment Analysis</td><td>accuracy</td></tr><tr><td>STS-B</td><td>GLUE</td><td>5,749</td><td>750</td><td>750</td><td>Sentence Similarity</td><td>Pearson/Spearman corr.</td></tr><tr><td>MRPC</td><td>GLUE</td><td>3,668</td><td>204</td><td>204</td><td>Paraphrase Detection</td><td>accuracy/F1</td></tr><tr><td>RTE</td><td>GLUE</td><td>2,490</td><td>138</td><td>139</td><td>NLI</td><td>accuracy</td></tr><tr><td>CoLA</td><td>GLUE</td><td>8,551</td><td>521</td><td>522</td><td>Acceptability</td><td>Matthews corr.</td></tr><tr><td>MultiRC</td><td>SuperGLUE</td><td>27,243</td><td>2,424</td><td>2,424</td><td>Question Answering (QA)</td><td>F1/EM</td></tr><tr><td>BoolQ</td><td>SuperGLUE</td><td>9,427</td><td>1,635</td><td>1,635</td><td>Boolean QA</td><td>accuracy</td></tr><tr><td>WiC</td><td>SuperGLUE</td><td>5,428</td><td>319</td><td>319</td><td>Word Sense Disambiguation</td><td>accuracy</td></tr><tr><td>WSC</td><td>SuperGLUE</td><td>554</td><td>52</td><td>52</td><td>Commonsense Reasoning</td><td>accuracy</td></tr><tr><td>CB</td><td>SuperGLUE</td><td>250</td><td>28</td><td>28</td><td>NLI</td><td>accuracy</td></tr><tr><td>ReCoRD</td><td>SuperGLUE</td><td>137,484</td><td>1,370</td><td>15,176</td><td>Commonsense Reasoning</td><td>F1/EM</td></tr><tr><td>NaturalQuestions</td><td>MRQA 2019</td><td>103,071</td><td>1,000</td><td>12,836</td><td>Extractive QA</td><td>F1/EM</td></tr><tr><td>HotpotQA</td><td>MRQA 2019</td><td>71,928</td><td>1,000</td><td>5,901</td><td>Extractive QA</td><td>F1/EM</td></tr><tr><td>SearchQA</td><td>MRQA 2019</td><td>116,384</td><td>1,000</td><td>16,980</td><td>Extractive QA</td><td>F1/EM</td></tr><tr><td>NewsQA</td><td>MRQA 2019</td><td>73,160</td><td>1,000</td><td>4,212</td><td>Extractive QA</td><td>F1/EM</td></tr><tr><td>SQuAD</td><td>MRQA 2019</td><td>86,599</td><td>1,000</td><td>10,570</td><td>Extractive QA</td><td>F1/EM</td></tr></table>
355
+
356
+ Table 11: Detailed information of all datasets used in our experiments. For datasets that originally use two metrics, we designate the underlined metric as our primary evaluation measure following prior works (Asai et al., 2022; Shi and Lipani, 2024).
357
+
358
+ <table><tr><td colspan="10">Soft-weighted Codebook Prepended Prompt (SCPP)</td></tr><tr><td>(t, r)</td><td>(16, 12)</td><td>(32, 20)</td><td>(64, 30)</td><td>(96, 36)</td><td>(128, 40)</td><td>(192, 45)</td><td>(256, 48)</td><td>(384, 51)</td><td>(768, 55)</td></tr><tr><td>#Para.</td><td>74496</td><td>74880</td><td>75360</td><td>75648</td><td>75840</td><td>76080</td><td>76224</td><td>76008</td><td>76260</td></tr><tr><td>Acc.</td><td>77.70</td><td>82.73</td><td>79.14</td><td>82.73</td><td>79.14</td><td>81.29</td><td>77.70</td><td>82.73</td><td>81.29</td></tr><tr><td colspan="10">Soft-weighted Codebook Added Prompt (SCAP)</td></tr><tr><td>(t, r)</td><td>(16, 2)</td><td>(32, 4)</td><td>(64, 8)</td><td>(96, 10)</td><td>(128, 13)</td><td>(192, 17)</td><td>(256, 20)</td><td>(384, 24)</td><td>(768, 30)</td></tr><tr><td>#Para.</td><td>72192</td><td>73728</td><td>76800</td><td>74240</td><td>76032</td><td>76544</td><td>76800</td><td>76800</td><td>76800</td></tr><tr><td>Acc.</td><td>78.42</td><td>77.70</td><td>76.98</td><td>76.98</td><td>79.86</td><td>78.42</td><td>81.29</td><td>82.73</td><td>78.42</td></tr></table>
359
+
360
+ Table 12: Performance on RTE dataset with dividing the SCPP and SCAP into different granularities. Note that for T5-base, $t = 768$ means the prompt is NOT divided.
361
+
362
+ <table><tr><td rowspan="2">Init. Method</td><td colspan="9">GLUE</td><td colspan="7">SuperGLUE</td></tr><tr><td>MNLI</td><td>QQP</td><td>QNLI</td><td>SST-2</td><td>STS-B</td><td>MRPC</td><td>RTE</td><td>CoLA</td><td>Avg.</td><td>Multi</td><td>Bool</td><td>WiC</td><td>WSC</td><td>CB</td><td>Avg.</td><td></td></tr><tr><td>Random</td><td>85.7</td><td>90.2</td><td>93.0</td><td>94.3</td><td>91.0</td><td>93.1</td><td>84.2</td><td>65.2</td><td>87.1</td><td>74.5</td><td>81.0</td><td>70.5</td><td>67.3</td><td>92.9</td><td>77.2</td><td></td></tr><tr><td>Intermediate task</td><td>85.9</td><td>90.2</td><td>93.3</td><td>94.2</td><td>91.0</td><td>92.7</td><td>86.3</td><td>66.4</td><td>87.5</td><td>73.5</td><td>82.3</td><td>68.7</td><td>67.3</td><td>96.4</td><td>77.6</td><td></td></tr><tr><td>Target task</td><td>85.9</td><td>90.4</td><td>93.1</td><td>94.5</td><td>91.0</td><td>91.7</td><td>81.3</td><td>68.8</td><td>87.1</td><td>74.9</td><td>81.8</td><td>69.0</td><td>67.3</td><td>92.9</td><td>77.2</td><td></td></tr></table>
363
+
364
+ Table 13: Performance on GLUE and SuperGLUE with different prompt initialization.
365
+
366
+ <table><tr><td rowspan="2">SCPP</td><td rowspan="2">SCAP</td><td colspan="8">GLUE</td><td colspan="6">SuperGLUE</td><td></td></tr><tr><td>MNLI</td><td>QQP</td><td>QNLI</td><td>SST-2</td><td>STS-B</td><td>MRPC</td><td>RTE</td><td>CoLA</td><td>Avg.</td><td>Multi</td><td>Bool</td><td>WiC</td><td>WSC</td><td>CB</td><td>Avg.</td></tr><tr><td>X</td><td>X</td><td>85.0</td><td>90.4</td><td>93.2</td><td>94.2</td><td>90.8</td><td>90.7</td><td>79.1</td><td>63.8</td><td>85.9</td><td>74.3</td><td>79.3</td><td>68.7</td><td>67.3</td><td>92.9</td><td>76.5</td></tr><tr><td>✓</td><td>X</td><td>85.9</td><td>90.3</td><td>93.2</td><td>94.3</td><td>91.0</td><td>91.7</td><td>82.7</td><td>67.5</td><td>87.1</td><td>74.3</td><td>80.9</td><td>70.2</td><td>67.3</td><td>96.4</td><td>77.8</td></tr><tr><td>X</td><td>✓</td><td>86.0</td><td>90.4</td><td>93.2</td><td>94.3</td><td>91.1</td><td>90.7</td><td>82.7</td><td>66.8</td><td>87.0</td><td>75.4</td><td>81.2</td><td>67.4</td><td>67.3</td><td>96.4</td><td>77.5</td></tr><tr><td>✓</td><td>✓</td><td>85.9</td><td>90.4</td><td>93.1</td><td>94.5</td><td>91.0</td><td>93.1</td><td>86.3</td><td>68.8</td><td>87.9</td><td>74.9</td><td>82.3</td><td>70.5</td><td>67.3</td><td>96.4</td><td>78.3</td></tr></table>
367
+
368
+ Table 14: Ablation study of SCPP and SCAP on GLUE and SuperGLUE benchmarks. We provide the performance of each dataset.
369
+
370
+ as the final configuration, considering both performance and parameter efficiency, and apply it to all datasets.
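+
+ As a sanity check on the #Para. column of Tab. 12 (our reading of the counting, which the paper does not spell out), a soft-weighted codebook prompt of length $L$ contributes $rd + rLK$ parameters, and each SCPP row appears to be reported together with the default SCAP configuration $(t, r) = (384, 24)$:
+
+ ```python
+ def n_params(t, r, L, d=768):
+     K = d // t                 # number of subsections
+     return r * d + r * L * K   # K codebooks (r x t each) plus soft weights
+
+ scap_default = n_params(384, 24, 256)  # 30,720: the chosen SCAP setting
+ for t, r in [(16, 12), (32, 20), (64, 30), (768, 55)]:
+     print((t, r), n_params(t, r, 60) + scap_default)
+ # -> 74496, 74880, 75360, 76260, matching the corresponding SCPP rows
+ ```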
371
+
372
+ # C.3 Details of Ablation on SCPP and SCAP
373
+
374
+ In the main paper, we provide the average performance on the GLUE/SuperGLUE benchmarks and the
375
+
376
+ <table><tr><td rowspan="2">Init. Method</td><td colspan="5">MRQA</td></tr><tr><td>NQ</td><td>HP</td><td>SQA</td><td>News</td><td>Avg.</td></tr><tr><td>Random</td><td>73.47</td><td>76.74</td><td>78.59</td><td>64.63</td><td>73.36</td></tr><tr><td>Intermediate task</td><td>72.71</td><td>76.98</td><td>78.47</td><td>64.44</td><td>73.15</td></tr><tr><td>Target task</td><td>73.61</td><td>77.10</td><td>78.91</td><td>64.62</td><td>73.55</td></tr></table>
377
+
378
+ Table 15: Performance on MRQA 2019 Shared Task with different prompt initialization.
379
+
380
+ <table><tr><td rowspan="2">SCPP</td><td rowspan="2">SCAP</td><td colspan="5">MRQA</td></tr><tr><td>NQ</td><td>HP</td><td>SQA</td><td>News</td><td>Avg.</td></tr><tr><td>X</td><td>X</td><td>73.2±0.1</td><td>76.8±0.3</td><td>77.6±0.2</td><td>64.4±0.1</td><td>73.0</td></tr><tr><td>✓</td><td>X</td><td>73.8±0.05</td><td>76.9±0.01</td><td>78.8±0.2</td><td>64.7±0.1</td><td>73.5</td></tr><tr><td>X</td><td>✓</td><td>73.4±0.2</td><td>76.8±0.5</td><td>78.5±0.2</td><td>64.3±0.1</td><td>73.3</td></tr><tr><td>✓</td><td>✓</td><td>73.6±0.05</td><td>77.1±0.1</td><td>78.9±0.01</td><td>64.6±0.06</td><td>73.6</td></tr></table>
381
+
382
+ Table 16: Ablation study of SCPP and SCAP on MRQA 2019 Shared Task. We report the average F1 and standard deviation of three experiments with different seeds.
383
+
384
+ MRQA 2019 Shared Task in Tab. 6. Here we provide the performance on each dataset in Tab. 14 and Tab. 16. Most datasets show improvements when using either SCPP or SCAP individually, and the best performance is achieved when both are applied simultaneously, further validating the effectiveness of ACCEPT.
acceptadaptivecodebookforcompositeandefficientprompttuning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aef8530a6761c8e572a6e277d63ba14314d4db56f1410ed3de996bb2df8104a7
3
+ size 999715
acceptadaptivecodebookforcompositeandefficientprompttuning/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:038cef448922ccd66df32155c749c385abda64eb1c010fbc30861d8b773a24c9
3
+ size 540280
acevalevaluatingancientchineselanguageunderstandinginlargelanguagemodels/5226d102-8eda-4b92-b3f6-fa16bc851d50_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4b8e1c7b82ddea6656e3097b95f2ccd302578a57937311c42a57cec344f65c52
3
+ size 116608
acevalevaluatingancientchineselanguageunderstandinginlargelanguagemodels/5226d102-8eda-4b92-b3f6-fa16bc851d50_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:99c75ac57fa72d0dbaa6369577ec7273a3b9ae3285d9e4b415826b449ea7498c
3
+ size 137382
acevalevaluatingancientchineselanguageunderstandinginlargelanguagemodels/5226d102-8eda-4b92-b3f6-fa16bc851d50_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:28312e5ddf17eb0bb804df8058971726dc10b6ed021ef2576220e97f40c442f2
3
+ size 2777618
acevalevaluatingancientchineselanguageunderstandinginlargelanguagemodels/full.md ADDED
@@ -0,0 +1,417 @@
1
+ # AC-EVAL: Evaluating Ancient Chinese Language Understanding in Large Language Models
2
+
3
+ Yuting Wei*, Yuanxing Xu*, Xinru Wei, Simin Yang, Yangfu Zhu, Yuqing Li, Di Liu, Bin Wu†
4
+
5
+ Beijing Key Laboratory of Intelligent Telecommunication Software and Multimedia, Beijing University of Posts and Telecommunications
6
+
7
+ {yuting_wei, xyx, wxr2000, ysm2000, zhuyangfu, liyuqing, liudi, wubin} @bupt.edu.cn
8
+
9
+ # Abstract
10
+
11
+ Given the importance of ancient Chinese in capturing the essence of rich historical and cultural heritage, the rapid advancements in Large Language Models (LLMs) necessitate benchmarks that can effectively evaluate their understanding of ancient contexts. To meet this need, we present AC-EVAL, an innovative benchmark designed to assess the advanced knowledge and reasoning capabilities of LLMs within the context of ancient Chinese. AC-EVAL is structured across three levels of difficulty reflecting different facets of language comprehension: general historical knowledge, short text understanding, and long text comprehension. The benchmark comprises 13 tasks, spanning historical facts, geography, social customs, art, philosophy, classical poetry and prose, providing a comprehensive assessment framework. Our extensive evaluation of top-performing LLMs, tailored for both English and Chinese, reveals substantial room for improving ancient text comprehension. By highlighting the strengths and weaknesses of LLMs, AC-EVAL aims to drive their development and application in the realms of ancient Chinese language education and scholarly research.$^{1}$
12
+
13
+ # 1 Introduction
14
+
15
+ The advent of Large Language Models (LLMs) has significantly impacted Natural Language Processing (NLP), highlighting their importance in understanding and generating human languages (Wei et al., 2022a; Zhou et al., 2022; Zhao et al., 2023). With the rise of Chinese as a major global language, there has been a surge in Chinese-specific LLMs (Zeng et al., 2022; Bai et al., 2023; Baichuan, 2023). Ancient Chinese, a crucial part of the Chinese language, records a rich historical and cultural
16
+
17
+ heritage, and has garnered considerable attention from computational linguists (Li et al., 2022; Wang et al., 2023). LLMs present significant opportunities for enhancing the pedagogy of Chinese literary education through convenient text analysis and comprehension. Therefore, assessing the ancient Chinese comprehension capabilities of LLMs holds significant importance.
18
+
19
+ Initially, benchmarks for LLMs primarily targeted the assessment of English language understanding, exemplified by MMLU (Hendrycks et al., 2021), BIG-bench (Srivastava et al., 2023) and HELM (Liang et al.). Subsequently, several benchmarks focusing on Chinese, such as C-Eval (Huang et al., 2023), CMMLU (Li et al., 2024), and SuperCLUE (Xu et al., 2023), were introduced. These benchmarks aim to evaluate the reasoning performance of LLMs across a broad spectrum of fields including STEM, social sciences, and humanities. However, they tend to lean towards modern Chinese comprehension. While some include tasks related to Chinese language, literature and history, these are often relegated to minor categories, insufficient for comprehensive coverage of ancient Chinese knowledge and language assessment. Existing benchmarks for ancient Chinese understanding, such as CCLUE${}^{2}$ and WYWEB (Zhou et al., 2023), cover various aspects but primarily focus on linguistic feature analysis, frequently overlooking the assessment of historical knowledge hidden in the literature. Furthermore, these datasets vary in format, being tailored for specific tasks rather than providing a unified assessment framework, which complicates uniform evaluation of LLMs.
20
+
21
+ To bridge this gap, we propose AC-EVAL (as illustrated in Figure 1), a benchmark meticulously designed for a comprehensive evaluation of LLMs'
22
+
23
+ ![](images/02739f5239c7301ca047e877da9cf396e27e124efeca6c782dade972bdef182f.jpg)
24
+ Figure 1: Overview of AC-EVAL.
25
+
26
+ proficiency in ancient Chinese language understanding and historical knowledge. AC-EVAL comprises 3,245 multiple-choice questions, spanning three distinct dimensions and thirteen subjects, covering historical periods from the Pre-Qin to the Qing dynasty. These tasks, which progressively increase in difficulty, are categorized into general historical knowledge, short text understanding, and long text understanding. The general historical knowledge tasks address a diverse range of contents, including but not limited to, ancient historical facts, geography, social customs, art, religion and philosophy. Short text understanding covers lexical semantics and pragmatics, allusions and idioms, sentence translations, and event extraction. Long text understanding tasks focus on long text pauses, classical prose summarization and analysis, and the appreciation of themes, emotions and styles in poetry.
27
+
28
+ In our evaluation of LLMs on the AC-EVAL benchmark across answer-only (AO) and chain-of-thought (CoT) settings in zero- and few-shot scenarios, only ERNIE-Bot 4.0 and GLM-4 achieve accuracies over $70\%$. The results reveal significant room for improvement, especially in long text comprehension. Our analysis shows that Chinese LLMs outperform English ones on ancient Chinese. This distinction underscores that ancient Chinese is a low-resource domain for models like GPT-4, despite their commendable performance on other Chinese benchmarks. Moreover,
29
+
30
+ the broad range of knowledge required in our tasks reveals that LLMs encounter difficulties in grasping underlying rules, affecting few-shot learning outcomes. Interestingly, zero-shot CoT shows an advantage in larger models, underscoring the value of reasoning steps for complex tasks. Through the AC-EVAL benchmark, our goal is to provide a multidimensional evaluation tool, highlighting potential improvement areas to advance the development of LLMs in the understanding and education of ancient Chinese.
31
+
32
+ # 2 Related Work
33
+
34
+ # 2.1 Chinese benchmarks for LLMs
35
+
36
+ In the evolving landscape of NLP, the development of benchmarks to evaluate LLMs in comprehending Chinese has been a focal point of recent research (Chang et al., 2023). Benchmarks such as MMCU (Zeng, 2023), C-Eval (Huang et al., 2023), and CMMLU (Li et al., 2024) are derived primarily from official examination questions, spanning various disciplines including STEM, humanities, social sciences, and professional qualification tests for fields like law and medicine. These benchmarks aim to comprehensively assess the breadth of domains relevant to the Chinese language, primarily utilizing multiple-choice questions as their core components. Among these, CMMLU includes a small portion of ancient Chinese categories, approximately 0.7K questions, which cover topics like word-
37
+
38
+ level semantics, historical facts, and modern Chinese literature. However, it lacks a broad temporal scope and content diversity, such as examinations of ancient geography, art and cultural heritage, and religion. Additionally, it does not focus on long texts, such as ancient Chinese reading comprehension. AGIEval (Zhong et al., 2023) expanded upon these by incorporating fill-in-the-blank questions alongside multiple-choice. CG-Eval (Zeng et al., 2023) and CLEVA (Li et al., 2023), on the other hand, took a more holistic approach to measure models' generative abilities, including tasks such as noun explanation, short answer questions, and computational problems. SuperCLUE (Xu et al., 2023) evaluated models across three dimensions: foundational abilities, professional knowledge, and Chinese language characteristics by leveraging actual user queries and ratings, along with a mix of open- and closed-ended questions. Lastly, OpenCompass (Contributors, 2023) integrates over 100 public datasets into a unified leaderboard framework, standardizing the assessment of LLMs.
39
+
40
+ Despite the extensive range of current benchmarks, there is a significant gap in their coverage of the ancient Chinese language, literature, and history. Considering the depth and breadth of Chinese millennia-long history, which includes evolving social customs, religious beliefs, geographical boundaries, and linguistic changes, it is evident that a more comprehensive benchmark is necessary.
41
+
42
+ # 2.2 Ancient Chinese benchmarks
43
+
44
+ Ancient Chinese, a fundamental component of the Chinese linguistic heritage, encapsulates millennia of historical narratives and cultural wisdom. A multitude of traditional and diverse datasets has been proposed to evaluate the ancient Chinese language understanding capabilities with various specific tasks (Pan et al., 2022; Wang and Ren, 2022; Liu et al., 2022; Tang and Su, 2022).
45
+
46
+ For instance, tasks range from analyzing the sentiments and themes in poetry (as seen in FSPC (Shao et al., 2021) and TCCP (Liu et al., 2020)) to the intricate task of translating between classical and modern Chinese (illustrated by the Classical-Modern corpus<sup>3</sup> and the Erya dataset (Guo et al., 2023)). Furthermore, named entity recognition and relation extraction tasks, with datasets like C-CLUE (Ji et al., 2021) and GuNER 2023$^{4}$, provide a foundation for in-depth linguistic analysis within ancient
47
+
48
+ texts. GuwenEE $^5$ , an event extraction dataset, is annotated and constructed from the "Twenty-Four Histories," a collection of Chinese official historical literature. The word sense disambiguation dataset for ancient Chinese, introduced by Shu et al. (2021), encompasses texts from multiple dynasties. Additionally, the EvaHan series from 2022 to 2024 introduces a spectrum of tasks including sentence segmentation, POS tagging, and machine translation. Comprehensive benchmarks like CCLUE and WYWEB (Zhou et al., 2023) integrate a variety of language understanding tasks, ranging from text classification to poetry analysis and machine reading comprehension, offering a holistic evaluation of models' linguistic proficiency.
49
+
50
+ However, despite the breadth of these benchmarks, there remains a discernible gap in the assessment of models' grasp of the historical knowledge hidden within ancient texts. The varied formats of datasets, designed for specific tasks, hinder uniform LLM evaluation, highlighting the urgent need for an integrated benchmark to thoroughly assess LLMs' understanding of ancient Chinese literature and history knowledge.
51
+
52
+ # 3 AC-EVAL Overview
53
+
54
+ # 3.1 Design Principles
55
+
56
+ The motivation behind constructing AC-EVAL is to comprehensively assess LLMs' understanding and reasoning capabilities regarding the shifts in societal customs, culture, and language throughout millennia of history. It adheres to four foundational principles to ensure a holistic evaluation framework:
57
+
58
+ Temporal Coverage: It spans from the pre-Qin period to the Qing dynasty, offering a broad historical scope that covers thousands of years of evolution. For the pre-Qin period, our dataset ranges from China's primitive society era (e.g., the Three Sovereigns and Five Emperors) to the slave dynasty period, including the Xia, Shang, and Zhou dynasties, roughly covering the period from 5000 BC to 221 BC. The Qing dynasty lasted from AD 1616 to 1912.
59
+
60
+ Task Difficulty Diversity: The benchmark ranges from basic fragmented historical knowledge to complex tasks requiring the understanding of ancient Chinese texts of various lengths, providing a graded evaluation of model capabilities. Our task
61
+
62
+ <table><tr><td>Category</td><td>Difficulty</td><td># Subjects</td><td># Questions</td><td>Average Length</td></tr><tr><td>General Historical Knowledge</td><td>Easy</td><td>5</td><td>1014</td><td>62.78</td></tr><tr><td>Short Text Understanding</td><td>Normal</td><td>5</td><td>1215</td><td>214.19</td></tr><tr><td>Long Text Understanding</td><td>Hard</td><td>3</td><td>1016</td><td>536.95</td></tr></table>
63
+
64
+ Table 1: Statistics of AC-EVAL. The average length is measured in characters.
65
+
66
+ difficulty classification is based on the actual characteristics of the tasks, expert advice, and results from preliminary small-scale tests. For example, tasks on historical knowledge mainly test memory and knowledge reserves, hence defined as easier tasks; short text comprehension tasks involve understanding vocabulary and sentences in ancient texts, considered to have medium difficulty; long text comprehension tasks require deeper context analysis and reasoning, therefore classified as more difficult tasks.
67
+
68
+ Content Diversity: It encompasses a broad spectrum of knowledge areas including historical facts, geography, religion, philosophy, social customs, architecture, music, and handicrafts, along with tasks in ancient language understanding such as semantic and syntactic analysis.
69
+
70
+ Data Quality: While ensuring the authority of the data, we also take specific measures to mitigate data contamination, as detailed in Section 3.2.
71
+
72
+ Our benchmark is organized into 3 major categories and 13 subjects, encompassing general historical knowledge as well as both short- and long-text comprehension of ancient Chinese. In alignment with the methodology proposed by Huang et al. (2023), we adopt a uniform question format, presenting each question with four answer options. Each subject within the benchmark contains an average of over 200 questions, five of which, with explanations, are designated for the development set. The statistical summary of AC-EVAL is depicted in Table 1, and a more detailed statistical breakdown is available in Appendix A.
73
+
74
+ # 3.2 Data Collection
75
+
76
+ Subject Selection: Our benchmark encompasses general historical knowledge and ancient Chinese text comprehension. For the former, we have identified five subcategories, namely: Historical Facts, Geography, Social Customs, Art and Cultural Heritage, Philosophy and Religion. For the latter, we distinguish between short texts, which include tasks such as Lexical Pragmatics Analysis, Allusions and Idioms, Word Sense Disambiguation, Translation,
77
+
78
+ and Event Extraction, and long texts, which cover Sentence Pauses, Summarization and Analysis, and Poetry Appreciation, as illustrated in Figure 1.
79
+
80
+ Data Source: The dataset is derived from four main sources: (1) the Complete Library in Four Branches (Siku Quanshu), offering a comprehensive collection of ancient Chinese texts; (2) specialized books on ancient Chinese social customs, architectural history, music history, and geography; (3) official or mock examinations; and (4) existing non-multiple-choice datasets on ancient Chinese, such as GuwenEE. Further details on the data sources are provided in Appendix A.
81
+
82
+ Data processing: Initially, we recruited undergraduate students and linguistics experts as annotators to manually gather and compile preliminary questions and answers from these sources. The data then underwent a three-fold modification and review process: (1) Ethical Considerations: We categorized our data sources into reference materials (Sources 1, 2, and 4) and examinations (Source 3). The reference materials were manually adapted to create new questions and answers, and all materials are cited appropriately in Appendix A. The examination data, available freely online, were also included. (2) Data Contamination: We aimed to strike a balance between maintaining the authority of the data sources and minimizing data contamination. Aware that official examinations might be inadvertently captured and utilized in training LLMs, we had experts adapt them to retain the examinations' core focus while altering the content to some extent. (3) Coverage and Accuracy: We adhered strictly to our design principles to ensure the dataset's diversity and accuracy. A $5\%$ random sample of the data underwent a quality check, with any found inaccuracies necessitating rework until achieving $100\%$ accuracy. For a more detailed expert evaluation process, please refer to Appendix B.
83
+
84
+ # 3.3 Evaluation
85
+
86
+ Accuracy is the primary metric for our evaluation. The ground-truth labels of the development set are
87
+
88
+ 以下是中国古代艺术和文化传承领域的单项选择题。在查看这些示例之后,请直接给出接下来一道题目的正确答案所对应的选项。
89
+
90
+ The following are multiple-choice questions in the field of Ancient Chinese Art and Cultural Heritage. After reviewing these examples, please directly provide the option corresponding to the correct answer for the next question.
91
+
92
+ 示例1:五代南唐时期著名画家顾闳中的绘画名作是
93
+
94
+ Example 1: The famous painting masterpiece of Gu Hongzhong, a famous painter in the Southern Tang Dynasty during the Five Dynasties, is
95
+
96
+ A.《女史箴图》(Admonitions of the Instructress to the Court Ladies)
97
+ B.《五牛图》(Five Buffaloes)
98
+ C.《簪花仕女图》(Ladies with Flowers)
99
+ D.《韩熙载夜宴图》(Han Xizai Giving a Night Banquet)
100
+
101
+ 答案:D (Answer: D)
102
+
103
+ ...[other examples]
104
+
105
+ 题目:中国美术史上至今发现最古老的装饰品是什么?
106
+
107
+ Question: Which is the oldest ornament found so far in the history of Chinese art?
108
+
109
+ A.玉石装饰品 (Jade Ornament)
110
+ B.骨头装饰品 (Bone Ornament)
111
+ C.石墨装饰品 (Graphite Ornament)
112
+ D.贝壳装饰品 (Shell Ornament)
113
+
114
+ 答案:(Answer:)
115
+
116
+ ![](images/78c127ecaa2f1e6ec61f98406a4c9e1ad0af0e5c1dcbf2a27bd0e0134fd4a583.jpg)
117
+ Figure 2: Illustrative few-shot AO prompts from AC-EVAL with corresponding English translations for better readability.
118
+
119
+ public, while the labels of the test set remain confidential to avoid their unintended inclusion in the pre-training corpora. For a more detailed evaluation process, please refer to our GitHub link.
120
+
121
+ # 4 Experiment
122
+
123
+ # 4.1 Setup
124
+
125
+ For evaluation on the AC-EVAL benchmark, we assess LLMs in both zero-shot and few-shot settings, with the few-shot samples drawn from the development set. To extract the answer choices from the models' responses, we employ regular expressions followed by manual verification to ensure successful retrieval in nearly all cases.
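+
+ As an illustration, such an extraction step can be as simple as the sketch below (a hypothetical pattern; the benchmark's exact expressions are not published):
+
+ ```python
+ import re
+
+ def extract_choice(response):
+     # Prefer an explicit "答案: X" / "Answer: X" statement; otherwise fall
+     # back to the first option letter appearing anywhere in the response.
+     m = re.search(r"(?:答案|Answer)\s*[::]?\s*([ABCD])", response)
+     if m:
+         return m.group(1)
+     m = re.search(r"[ABCD]", response)
+     return m.group(0) if m else None  # None triggers manual verification
+
+ print(extract_choice("经过分析,答案:C"))  # -> "C"
+ ```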
126
+
127
+ We report the results for both answer-only (AO) and chain-of-thought (CoT) (Wei et al., 2022b; Dong et al., 2022; Zhang et al., 2022) settings in zero- and few-shot scenarios. For the zero-shot AO setting, we craft prompts in the format: "以下是中国古代[主题]领域的单项选择题,请直接给出正确答案对应的选项。(The following is a multiple-choice question in the field of Ancient Chinese [subject]. Please directly provide the option corresponding to the correct answer.)" For the few-shot AO setting, an example of its prompt is displayed in Figure 2. The logic behind our selection of few-shot examples is similar to the principles of data construction, requiring broad coverage across various dynasties and diverse content topics. For
128
+
129
+ instance, in the case of the five examples related to arts and cultural heritage, we provide content related to calligraphy, architecture, painting, sculpture, etc., from different historical periods. For the CoT settings, their prompts are shown in Appendix C.
130
+
131
+ Generally, few-shot defaults to five-shot. It is noteworthy that for both the five-shot and five-shot-CoT settings, input lengths sometimes surpass the maximum token limit of the models. To accommodate this, we dynamically adjust the number of samples to ensure they fit within the models' context window constraints.
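+
+ A simple way to implement this adjustment (a sketch assuming a Hugging Face-style tokenizer with an `encode` method, not the benchmark's released code) is to drop trailing examples until the prompt fits:
+
+ ```python
+ def fit_examples(question, examples, tokenizer, max_tokens):
+     kept = list(examples)
+     while True:
+         prompt = "\n\n".join(kept + [question])
+         if len(tokenizer.encode(prompt)) <= max_tokens or not kept:
+             return prompt
+         kept.pop()  # drop the last few-shot example and retry
+ ```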
132
+
133
+ # 4.2 Models
134
+
135
+ In our evaluation, we select 17 top-performing LLMs that demonstrate proficiency in Chinese language comprehension. These models represent a variety of organizations and encompass a range of parameter sizes. For commercial models, we evaluate via API calls, including (1) GPT-4 and GPT-3.5 Turbo (Achiam et al., 2023), (2) ERNIE-Bot 4.0 and ERNIE-Bot $^6$ , (3) GLM-4 and GLM-3-Turbo (Zeng et al., 2022), (4) Qwen-max (Bai et al., 2023). For models with open-sourced parameters, we evaluate (1) LLaMA2-70B (Touvron et al., 2023) (2) Qwen-7B/14B/72B-Chat (Bai et al., 2023), (3) Yi-6B/34B-Chat $^7$ , (4) Baichuan2-
136
+
137
+ <table><tr><td>Model</td><td>General Historical Knowledge</td><td>Short Text Understanding</td><td>Long Text Understanding</td><td>Average</td></tr><tr><td>GPT-4</td><td>66.11</td><td>55.11</td><td>47.38</td><td>56.20</td></tr><tr><td>GPT-3.5 Turbo</td><td>53.50</td><td>43.72</td><td>36.94</td><td>44.72</td></tr><tr><td>ERNIE-Bot 4.0</td><td>77.54</td><td>68.11</td><td>66.42</td><td>70.69</td></tr><tr><td>ERNIE-Bot</td><td>68.81</td><td>57.80</td><td>51.47</td><td>59.36</td></tr><tr><td>GLM-4</td><td>76.63</td><td>66.66</td><td>67.70</td><td>70.33</td></tr><tr><td>GLM-3-Turbo</td><td>75.21</td><td>60.52</td><td>59.77</td><td>65.17</td></tr><tr><td>Qwen-max</td><td>73.77</td><td>64.88</td><td>63.84</td><td>67.50</td></tr><tr><td>LLaMA2-70B</td><td>33.55</td><td>36.29</td><td>30.72</td><td>33.54</td></tr><tr><td>Qwen-72B-Chat</td><td>71.25</td><td>61.48</td><td>59.80</td><td>64.18</td></tr><tr><td>Yi-34B-Chat</td><td>72.66</td><td>61.33</td><td>58.36</td><td>64.12</td></tr><tr><td>Qwen-14B-Chat</td><td>69.51</td><td>56.53</td><td>57.38</td><td>61.14</td></tr><tr><td>Baichuan2-13B-Chat</td><td>65.57</td><td>49.24</td><td>35.40</td><td>50.07</td></tr><tr><td>Qwen-7B-Chat</td><td>62.74</td><td>48.76</td><td>44.97</td><td>52.16</td></tr><tr><td>Baichuan2-7B-Chat</td><td>64.38</td><td>46.77</td><td>40.33</td><td>50.49</td></tr><tr><td>Yi-6B-Chat</td><td>66.70</td><td>47.79</td><td>39.49</td><td>51.33</td></tr><tr><td>ChatGLM3-6B</td><td>58.04</td><td>43.01</td><td>39.73</td><td>46.93</td></tr><tr><td>Xunzi-Qwen-Chat</td><td>60.20</td><td>44.31</td><td>30.87</td><td>45.13</td></tr></table>
138
+
139
+ Table 2: Zero-shot AO average accuracy of all models. We report average accuracy over subjects within each category. "Average" = average over all categories. Models are ranked by model size.
140
+
141
+ 7B/13B-Chat (Baichuan, 2023), (5) ChatGLM3-6B $^{8}$ , and (6) Xunzi-Qwen-Chat $^{9}$ —an LLM that has been continually pre-trained on ancient Chinese corpora based on the Qwen-7B-Chat architecture. A detailed description of the evaluated models, including their architectural details, pretraining corpora, and versions, is available in Appendix D. We conduct timely evaluations to capture the latest performance levels of these models $^{10}$ .
142
+
143
+ # 5 Results
144
+
145
+ In this section, we explore the comparative performance of various models under four distinct settings: zero-shot AO as discussed in Section 5.1, few-shot AO in Section 5.2, zero- and few-shot CoT in Section 5.3.
146
+
147
+ # 5.1 Zero-shot AO
148
+
149
+ Given that zero-shot scenarios are among the most common use cases, understanding model performance in this context is crucial. Therefore, we first report the average accuracy in the zero-shot AO setting in Table 2, while detailed accuracy breakdowns by subject are provided in Appendix E. Our
150
+
151
+ comparison analysis focuses on two critical dimensions: model parameter size and task category.
152
+
153
+ Comparison by model. For large models: ERNIE-Bot 4.0 and GLM-4 stand out as top-performing models in ancient Chinese, with accuracies of $70.69\%$ and $70.33\%$, respectively, followed by Qwen-max at $67.50\%$. Despite primarily being trained on modern Chinese corpora, these LLMs show strong generalization abilities to ancient Chinese. Among models primarily trained on English corpora, GPT-4 and GPT-3.5 significantly outperform LLaMA2-70B. Considering our benchmark is entirely in Chinese, this suggests the GPT models' superior generalization capabilities over LLaMA2-70B in handling extensive Chinese content. Interestingly, the GPT series models perform worse than Chinese LLMs, diverging from conclusions drawn from previous benchmarks in the Chinese domain where GPT often ranked first (Li et al., 2024; Huang et al., 2023; Xu et al., 2023). This indicates that ancient Chinese acts as a low-resource language for English LLMs, highlighting the significant linguistic differences between ancient and modern Chinese. This observation also underscores the importance of our benchmark from another perspective.
154
+
155
+ For small models: The Yi-34B-Chat showcases remarkable parameter efficiency and performs comparably
156
+
157
+ <table><tr><td>Model</td><td>General Historical Knowledge</td><td>Short Text Understanding</td><td>Long Text Understanding</td><td>Average</td></tr><tr><td>GPT-4</td><td>65.91 (-0.20)</td><td>58.07 (+2.96)</td><td>48.36 (+0.98)</td><td>57.45 (+1.25)</td></tr><tr><td>GPT-3.5 Turbo</td><td>53.99 (+0.49)</td><td>43.21 (-0.51)</td><td>36.40 (-0.54)</td><td>44.54 (-0.18)</td></tr><tr><td>ERNIE-Bot 4.0</td><td>75.69 (-1.85)</td><td>69.59 (+1.48)</td><td>66.12 (-0.30)</td><td>70.47 (-0.22)</td></tr><tr><td>ERNIE-Bot</td><td>68.81 (+0.00)</td><td>57.62 (-0.18)</td><td>50.36 (-1.11)</td><td>58.93 (-0.43)</td></tr><tr><td>GLM-4</td><td>74.89 (-1.74)</td><td>65.48 (-1.18)</td><td>69.07 (+1.37)</td><td>69.81 (-0.52)</td></tr><tr><td>GLM-3-Turbo</td><td>72.99 (-2.22)</td><td>59.48 (-1.04)</td><td>59.66 (-0.11)</td><td>64.04 (-1.13)</td></tr><tr><td>Qwen-max</td><td>75.29 (+1.52)</td><td>65.48 (+0.60)</td><td>66.99 (+3.15)</td><td>69.25 (+1.75)</td></tr><tr><td>Qwen-72B-Chat</td><td>71.67 (+0.42)</td><td>61.30 (-0.18)</td><td>57.07 (-2.73)</td><td>63.35 (-0.83)</td></tr><tr><td>Yi-34B-Chat</td><td>66.62 (-6.04)</td><td>52.57 (-8.76)</td><td>41.90 (-16.46)</td><td>53.70 (-10.42)</td></tr><tr><td>Qwen-14B-Chat</td><td>70.60 (+1.09)</td><td>53.73 (-2.80)</td><td>45.91 (-11.47)</td><td>56.75 (-4.39)</td></tr><tr><td>Baichuan2-13B-Chat</td><td>63.75 (-1.82)</td><td>45.86 (-3.38)</td><td>32.74 (-2.66)</td><td>47.45 (-2.62)</td></tr><tr><td>Qwen-7B-Chat</td><td>61.42 (-1.32)</td><td>45.98 (-2.78)</td><td>30.78 (-14.19)</td><td>46.06 (-6.10)</td></tr><tr><td>Baichuan2-7B-Chat</td><td>63.37 (-1.01)</td><td>45.91 (-0.86)</td><td>39.94 (-0.39)</td><td>49.74 (-0.75)</td></tr><tr><td>Yi-6B-Chat</td><td>55.76 (-10.94)</td><td>35.97 (-11.82)</td><td>28.48 (-11.01)</td><td>40.07 (-11.26)</td></tr><tr><td>ChatGLM3-6B</td><td>55.74 (-2.30)</td><td>42.92 (-0.09)</td><td>38.45 (-1.28)</td><td>45.71 (-1.22)</td></tr><tr><td>Xunzi-Qwen-Chat</td><td>51.30 (-8.90)</td><td>41.25 (-3.06)</td><td>29.84 (-1.03)</td><td>40.80 (-4.33)</td></tr></table>
158
+
159
+ Table 3: Few-shot AO average accuracy of all models. We report average accuracy over subjects within each category. "Average" = average over all categories. The values in parentheses show the relative change compared to the zero-shot AO scenario.
160
+
161
+ to larger models like Qwen-72B-Chat. This efficiency can be attributed to their extensive training on large-scale Chinese corpora and architectural optimizations. Qwen-14B even surpasses the GPT series and ERNIE-Bot, presenting high cost-effectiveness in ancient Chinese comprehension as a relatively smaller open-source model. Qwen-7B-Chat achieves the best performance among models with fewer than 10B parameters. Baichuan2-13B-Chat, compared to Baichuan2-7B-Chat, does not show performance improvement despite the increased parameters, possibly due to a reduction in focus on ancient Chinese content in its training corpus. Xunzi-Qwen-Chat, despite being fine-tuned on ancient Chinese texts, shows a decline in performance compared to Qwen-7B-Chat. This highlights the trade-off between specialized knowledge and general applicability.
162
+
163
+ Comparison by Task Category. (1) General Historical Knowledge: Most models score highest on this category of tasks, likely because these tasks focus on the retrieval and understanding of factual information without necessitating deep textual analysis and reasoning. (2) Short Text Understanding: Compared to long text comprehension, models generally score higher on short text understanding, though still lower than on general historical knowledge tasks. This may be because short text understanding still requires models to capture
164
+
165
+ subtle semantic differences and contextual relationships, albeit with relatively lower complexity. (3) Long Text Comprehension: All models generally score lower on long text comprehension than on other tasks, indicating it as a challenging task that requires advanced understanding, reasoning, and synthesis capabilities.
166
+
167
+ # 5.2 Few-shot AO
168
+
169
+ Table 3 presents the results for the few-shot AO setting, alongside a comparison with the zero-shot AO setting.
170
+
171
+ For large models, only GPT-4 and Qwen have a $1 - 2\%$ improvement in this setting, while others slightly declined, diverging from previous Chinese benchmarks where few-shot usually excelled over zero-shot (Huang et al., 2023). We attribute this discrepancy primarily to the task specificity. Previous benchmarks often encompassed a broader range of subjects and task categories, including scientific, technological, and coding tasks. In such tasks, few-shot learning effectively aids models in capturing underlying patterns, thereby enhancing adaptability and generalization. However, our tasks focus on a broad spectrum of fragmented knowledge and require a deep understanding of ancient Chinese, including its cultural, historical backgrounds, and linguistic structures, leading to a unique challenge where few-shot learning might
172
+
173
+ ![](images/3e34500e6be772fff866b98d9a45832e063c6c3369f37f5dcf8afeef850c4a58.jpg)
174
+ Figure 3: Average accuracy for each category in various settings with different model sizes. Here, we have omitted the category names and shot types for brevity.
175
+
176
+ not provide the same level of benefit. Instead, the specificity and complexity of ancient Chinese tasks can result in the introduction of noise or irrelevant information through few-shot examples, potentially hindering model performance rather than enhancing it.
177
+
178
+ For small models, the decline in performance is more pronounced. The Yi-series models, although performing comparably to large models in the zero-shot AO scenario, show the most significant drop in the few-shot setting, with a performance decrease of $10\%$ . This aligns with conclusions from some previous studies (Li et al., 2024), suggesting that for smaller models, few-shot learning may introduce too much irrelevant content, potentially leading to information interference. In such instances, models might struggle to extract useful knowledge or patterns from a few samples, as the additional information introduced may not be entirely relevant to the task, thereby diluting the model's focus.
179
+
180
+ From this analysis, we conclude that despite the task specificity causing few-shot learning to sometimes act as interference, large models possess stronger language understanding capabilities and higher stability in processing distracting information, even achieving improvements on some tasks. However, smaller models struggle due to insufficient parameters to effectively encode and utilize contextual information.
181
+
182
+ # 5.3 Chain-of-thought
183
+
184
+ As shown in Figure 3, we conduct a series of experiments to explore the impact of CoT on LLMs of varying parameter sizes. Our experimental setup includes two scenarios: zero-shot CoT and few-shot
185
+
186
+ CoT (prompts are detailed in Appendix C). To facilitate a nuanced comparison, we select the Qwen series, encompassing four different sizes: 7B, 14B, 72B, and the non-public Qwen-max. Given that few-shot performance does not surpass zero-shot on AC-EVAL, our analysis primarily contrasts the two CoT formats against the zero-shot AO scenario.
187
+
188
+ Zero-shot CoT vs. Zero-shot AO: In zero-shot CoT, prompts are adjusted to encourage stepwise analysis. This method particularly benefits large models like Qwen-max and Qwen-72B-Chat in historical knowledge and short text understanding tasks but shows a decrease in long text comprehension. We attribute this to the increased reasoning steps needed for long text understanding in large models, where any small errors can accumulate, negatively impacting the final answer's accuracy.
189
+
190
+ As model size decreases, a downward trend in performance is evident across all tasks in the zero-shot CoT setting. This decline is likely due to the CoT method's demand for models to understand the question, generate intermediate reasoning steps, and ultimately formulate an answer. This process, more complex than direct answer generation, requires robust semantic understanding and logical reasoning capabilities. With reduced model parameters, the capability of models to perform these functions weakens, leading to diminished performance.
191
+
192
+ Few-shot CoT vs. Zero-shot AO/CoT: Few-shot CoT underperforms the zero-shot settings in both AO and CoT across all parameter sizes. This aligns with our above observation that few-shot learning generally offers less benefit in our benchmark, which demands a broad understanding of fragmented knowledge and deep comprehension of ancient Chinese, including its cultural and historical backgrounds and linguistic structures. The unique challenges posed by these requirements suggest that even structured CoT, when combined with few-shot examples, may be perceived as informational noise, thereby impeding the model's ultimate reasoning capability.
193
+
194
+ Through this analysis, it is evident that while CoT reasoning can enhance model performance in certain contexts, the effectiveness of this approach is contingent upon the model's capacity for complex information processing and logical deduction. The decline in performance with reduced model size and the limited impact of few-shot learning highlight the intricate balance required between
195
+
196
+ model abilities, task specificity, and the introduced format of CoT.
197
+
198
+ # 6 Conclusion
199
+
200
+ We introduce AC-EVAL, a benchmark designed to evaluate LLMs' proficiency in ancient Chinese, addressing a gap by covering historical knowledge and language understanding extensively. Our experiments reveal significant improvement areas for existing LLMs. We identify critical factors influencing LLM performance and suggest practical directions for enhancing these models. AC-EVAL aims to advance LLM application in ancient Chinese education, offering a valuable tool for assessing and developing Chinese LLMs.
201
+
202
+ # 7 Limitations
203
+
204
+ While our study introduces the AC-EVAL benchmark as a robust tool for evaluating LLMs in the domain of ancient Chinese, it is imperative to acknowledge several limitations that accompany our research:
205
+
206
+ Absence of Human Baseline: The lack of a human comparative standard impedes the evaluation of LLMs' depth of understanding, cultural acuity, and contextual sensitivity relative to the insights provided by scholars specializing in ancient Chinese literature. Consequently, while the AC-EVAL benchmark may offer quantitative evaluations of LLM proficiency, it might not capture the qualitative dimensions of linguistic and cultural comprehension that are crucial in the analysis of ancient Chinese texts.
207
+
208
+ Focus on Multiple-Choice Questions: The current iteration of AC-EVAL primarily utilizes a multiple-choice format to assess LLMs. This approach, while effective in certain assessments, does not measure the generative capabilities of LLMs, such as poetry generation (Chen et al., 2019; Zhipeng et al., 2019). As a result, our benchmark may not fully capture the models' ability to produce coherent and contextually relevant responses in an open-ended format.
209
+
210
+ In light of these limitations, future work will aim to incorporate human evaluation and expand the benchmark to include open-ended and generative tasks, thereby enhancing the comprehensive assessment of models' capabilities.
211
+
212
+ # Acknowledgments
213
+
214
+ This work is supported by the National Natural Science Foundation of China under Grants 61972047 and 62372060.
215
+
216
+ # References
217
+
218
+ Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
219
+ Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. 2023. Qwen technical report. arXiv preprint arXiv:2309.16609.
220
+ Baichuan. 2023. Baichuan 2: Open large-scale language models. arXiv preprint arXiv:2309.10305.
221
+ Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. 2023. A survey on evaluation of large language models. ACM Transactions on Intelligent Systems and Technology.
222
+ Huimin Chen, Xiaoyuan Yi, Maosong Sun, Wenhao Li, Cheng Yang, and Zhipeng Guo. 2019. Sentiment-controllable chinese poetry generation. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, pages 4925-4931.
223
+ OpenCompass Contributors. 2023. Opencompass: A universal evaluation platform for foundation models. https://github.com/open-compass/opencompass.
224
+ Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, and Zhifang Sui. 2022. A survey for in-context learning. arXiv preprint arXiv:2301.00234.
225
+ Geyang Guo, Jiarong Yang, Fengyuan Lu, Jiaxin Qin, Tianyi Tang, and Wayne Xin Zhao. 2023. Towards effective ancient chinese translation: Dataset, model, and evaluation. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 416-427. Springer.
226
+ Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In International Conference on Learning Representations.
227
+ Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. In Advances in Neural Information Processing Systems.
228
+
229
+ Zijing Ji, Yuxin Shen, Yining Sun, Tian Yu, and Xin Wang. 2021. C-clue: a benchmark of classical chinese based on a crowdsourcing system for knowledge graph construction. In Knowledge Graph and Semantic Computing: Knowledge Graph Empowers New Infrastructure Construction: 6th China Conference, CCKS 2021, Guangzhou, China, November 4-7, 2021, Proceedings 6, pages 295-301. Springer.
230
+ Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. 2024. CMMLU: Measuring massive multitask language understanding in Chinese. In Findings of the Association for Computational Linguistics: ACL 2024, pages 11260-11285.
231
+ Yanyang Li, Jianqiao Zhao, Duo Zheng, Zi-Yuan Hu, Zhi Chen, Xiaohui Su, Yongfeng Huang, Shijia Huang, Dahua Lin, Michael R. Lyu, and Liwei Wang. 2023. Cleva: Chinese language models evaluation platform. arXiv preprint arXiv:2308.04813.
232
+ Yuqing Li, Yuxin Zhang, Bin Wu, Ji-Rong Wen, Ruihua Song, and Ting Bai. 2022. A multi-modal knowledge graph for classical Chinese poetry. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2318-2326.
233
+ Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2023. Holistic evaluation of language models. Transactions on Machine Learning Research.
234
+ Maofu Liu, Junyi Xiang, Xu Xia, and Huijun Hu. 2022. Contrastive learning between classical and modern chinese for classical chinese machine reading comprehension. ACM Transactions on Asian and Low-Resource Language Information Processing, 22(2):1-22.
235
+ Yutong Liu, Bin Wu, and Ting Bai. 2020. The construction and analysis of classical chinese poetry knowledge graph (in Chinese). Journal of Computer Research and Development, 57(6):1252.
236
+ Xiaomeng Pan, Hongfei Wang, Teruaki Oka, and Mamoru Komachi. 2022. Zuo zhuan Ancient Chinese dataset for word sense disambiguation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop, pages 129-135.
237
+ Yizhan Shao, Tong Shao, Minghao Wang, Peng Wang, and Jie Gao. 2021. A sentiment and style controllable approach for chinese poetry generation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, page 4784-4788.
238
+ Lei Shu, Yiluan Guo, Huiping Wang, Xuetao Zhang, and Renfen Hu. 2021. The construction and application of Ancient Chinese corpus with word sense annotation. In Proceedings of the 20th Chinese National Conference on Computational Linguistics, pages 549-563.
239
+
240
+ Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research.
241
+ Xuemei Tang and Qi Su. 2022. That slepen al the nyght with open ye! cross-era sequence segmentation with switch-memory. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7830-7840.
242
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
243
+ Pengyu Wang and Zhichen Ren. 2022. The uncertainty-based retrieval framework for Ancient Chinese CWS and POS. In Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages, pages 164-168.
244
+ Ping Wang, Shitou Zhang, Zuchao Li, and Jingrui Hou. 2023. Enhancing Ancient Chinese understanding with derived noisy syntax trees. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), pages 83-92.
245
+ Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022a. Emergent abilities of large language models. Transactions on Machine Learning Research.
246
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022b. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837.
247
+ Liang Xu, Anqi Li, Lei Zhu, Hang Xue, Changtai Zhu, Kangkang Zhao, Haonan He, Xuanwei Zhang, Qiyue Kang, and Zhenzhong Lan. 2023. Superclue: A comprehensive chinese large language model benchmark. arXiv preprint arXiv:2307.15020.
248
+ Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2022. Glm-130b: An open bilingual pre-trained model. In The Eleventh International Conference on Learning Representations.
249
+ Hui Zeng. 2023. Measuring massive multitask chinese understanding. arXiv preprint arXiv:2304.12986.
250
+ Hui Zeng, Jingyuan Xue, Meng Hao, Chen Sun, Bin Ning, and Na Zhang. 2023. Evaluating the generation capabilities of large chinese language models. arXiv preprint arXiv:2308.04823.
251
+
252
+ Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. 2022. Automatic chain of thought prompting in large language models. In The Eleventh International Conference on Learning Representations.
253
+
254
+ Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223.
255
+
256
+ Guo Zhipeng, Xiaoyuan Yi, Maosong Sun, Wenhao Li, Cheng Yang, Jiannan Liang, Huimin Chen, Yuhui Zhang, and Ruoyu Li. 2019. Jiuge: A human-machine collaborative Chinese classical poetry generation system. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 25-30.
257
+
258
+ Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. Agieval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364.
259
+
260
+ Bo Zhou, Qianglong Chen, Tianyu Wang, Xiaomi Zhong, and Yin Zhang. 2023. WYWEB: A NLP evaluation benchmark for classical Chinese. In Findings of the Association for Computational Linguistics: ACL 2023, pages 3294-3319.
261
+
262
+ Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V Le, et al. 2022. Least-to-most prompting enables complex reasoning in large language models. In The Eleventh International Conference on Learning Representations.
263
+
264
+ # A Details of AC-EVAL
265
+
266
+ Table 5 provides a comprehensive overview of the AC-EVAL, detailing the data sources and the specific concepts addressed within each subject. Table 6 offers insights into the quantitative aspects of the dataset, including the number of questions per subject and their average length (accounting for both the questions and explanations, measured in characters). Furthermore, Table 4 shows the distribution of choices across the multiple-choice questions.
267
+
268
+ <table><tr><td>Option</td><td>C-EVAL</td><td>AC-EVAL</td></tr><tr><td>A</td><td>22.9%</td><td>26.2%</td></tr><tr><td>B</td><td>26.0%</td><td>26.5%</td></tr><tr><td>C</td><td>26.4%</td><td>23.6%</td></tr><tr><td>D</td><td>24.7%</td><td>23.7%</td></tr></table>
269
+
270
+ Table 4: Distribution of Answers
271
+
272
+ # B Construction process of AC-EVAL
273
+
274
+ In the process of developing a high-quality dataset for ancient Chinese natural language understanding tasks, a systematic approach is adopted that encompasses several key steps. These steps ensure the rigorous collection, annotation, and evaluation of data, ultimately leading to a robust and reliable dataset. The following outlines the major phases of this process:
275
+
276
+ Task Collection Experts systematically collect common tasks related to ancient Chinese, creating a clear list of tasks that are categorized into knowledge-based and language understanding categories to better meet research needs.
277
+
278
+ Purpose and Principle For different subjects, experts establish detailed purposes and principles for data collection and annotation, specifying requirements for time coverage, content diversity, and annotation consistency to ensure high-quality and representative data.
279
+
280
+ Data Collection and Annotation Data is collected and annotated manually for each task category. Figure 4 details the specific requirements for data annotation. All our annotators are Chinese undergraduate students and experts in Chinese linguistics, and they are compensated at a rate that meets market standards. Each data entry includes a question, four answer options, and the corresponding correct answer, striving to cover a wide range of scenarios and ensuring data comprehensiveness and accuracy.
281
+
282
+ Quality Check Experts conduct random sampling of the annotated data, typically reviewing $5\%$ of the samples for verification. If errors are found, feedback is provided to the relevant standard developers for a comprehensive data review until a $100\%$ pass rate is achieved, ensuring data quality.
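+
+ As an illustrative sketch only, the sampling loop described above can be expressed as follows; `entries`, `has_error`, and `full_review` are hypothetical stand-ins for the annotated data and the human expert steps, which cannot themselves be automated:
+
+ ```python
+ import random
+
+ SAMPLE_RATIO = 0.05  # experts typically review 5% of the annotated samples
+
+ def quality_check(entries, has_error, full_review):
+     """Sampling-based verification loop: repeat until a sampled batch is error-free."""
+     while True:
+         k = max(1, int(len(entries) * SAMPLE_RATIO))
+         sample = random.sample(entries, k)
+         if not any(has_error(e) for e in sample):
+             return entries  # 100% pass rate achieved on the sampled batch
+         # Errors trigger feedback to the standard developers and a
+         # comprehensive review of the whole dataset before re-sampling.
+         entries = full_review(entries)
+ ```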
283
+
284
+ Task Difficulty Division After data verification is complete, small-sample testing is performed to evaluate performance with a large model. Experts classify questions into easy, normal, and hard categories based on multiple dimensions, including question difficulty, model output content, and scoring results, facilitating subsequent analysis and application.
285
+
286
+ # C Prompts for Evaluation
287
+
288
+ Figures 5 and 6 display the chain-of-thought evaluation prompts used in the zero-shot and few-shot settings, respectively.
289
+
290
+ ![](images/599e8c22386d1350fb3af868bbd7445ef1c6d87614a71e01919e4ba7e3141470.jpg)
291
+ Figure 4: Illustration of the Annotation Process: An Example of Social Customs Data Annotation
292
+
293
+ # D Details of the LLMs being evaluated
294
+
295
+ We provide a detailed description of the Large Language Models (LLMs) that were rigorously evaluated during the period of 5-10 February 2024, ensuring the assessment of the latest model versions prior to submission.
296
+
297
+ The GPT (Achiam et al., 2023) series models, developed by OpenAI, are designed to be more aligned with human-like interaction, exhibiting helpful, safe, and truthful behavior enhanced by Reinforcement Learning from Human Feedback (RLHF). GPT-4, with its ability to process images, PDFs, and other file types, underwent a comprehensive post-training alignment process. We evaluate the gpt-3.5-turbo-0125 and gpt-4-0125-preview versions.
298
+
299
+ ERNIE-Bot is an industrial-grade, knowledge-enhanced LLM developed by Baidu. The 4.0 version represents a significant upgrade in understanding, generation, logic, and memory capabilities over its predecessors, supporting extensive input and output lengths (5K input + 2K output). Our evaluation included both ERNIE-Bot and ERNIE-Bot 4.0.
300
+
301
+ The GLM (Zeng et al., 2022) series, developed by Zhipu AI and Tsinghua University, consists of bidirectional dense models excelling in bilingual language processing. ChatGLM, a derivative of GLM, targets Chinese QA and dialogue tasks with enhanced fine-tuning and feedback. We evaluate ChatGLM3-6B and the commercial GLM-3-Turbo and GLM-4.
302
+
303
+ Qwen (Bai et al., 2023), developed by Alibaba, is trained on a vast corpus including 3 trillion tokens of text and code. The chat variants of Qwen
304
+
305
+ have been refined through RLHF to better align with human preferences. We conduct a comprehensive evaluation of the Qwen series, covering multiple versions with varying parameter sizes, including Qwen-7B/14B/72B-Chat and Qwen-max.
306
+
307
+ LLaMA2 (Touvron et al., 2023), developed and open-sourced by Meta AI, excels in encoding, inference, and knowledge application. It incorporates several enhancements over the vanilla Transformer architecture employed by preceding LLMs, optimizing for greater training efficiency. In our experiment, we evaluate the performance of the LLaMA2-70B version.
308
+
309
+ Yi series models by 01.AI are open-source bilingual models trained from scratch on a 3T multilingual corpus, featuring an extended context window of up to 200K tokens. We utilize the Yi-6B-Chat and Yi-34B-Chat versions, which support up to 32K tokens for context in inferences.
310
+
311
+ Baichuan2 (Baichuan, 2023) is developed by Baichuan Intelligence Inc., trained on a 2.6 trillion token high-quality corpus, and supports multiple languages, including Chinese and English. The versions evaluated are Baichuan2-7B-Chat and Baichuan2-13B-Chat.
312
+
313
+ Xunzi is a model collaboratively released by Nanjing Agricultural University and the Zhonghua Book Company. It is fine-tuned on ancient Chinese corpora such as the Siku Quanshu, based on foundations from Qwen, Baichuan, and GLM. We evaluate the Xunzi-Qwen-Chat, a model trained from Qwen-7B-Chat.
314
+
315
+ # E Breakdown of Model Performance
316
+
317
+ Table 7 provides a detailed accuracy breakdown by subject for four representative models under the AO setting in both zero-shot and few-shot scenarios. Comprehensive results for all models are made available on GitHub.
318
+
319
+ # F Error Analysis
320
+
321
+ We conduct an in-depth error analysis across different tasks using specific examples. Under the zero-shot setting, we compare four models from the Qwen series: Qwen-7B-Chat, Qwen-14B-Chat, Qwen-72B-Chat, and Qwen-max. Our objective is to evaluate their capabilities and identify areas for improvement.
322
+
323
+ Historical Knowledge LLMs often exhibit excellent performance due to their extensive parameters and rich knowledge bases. As shown in Table 8, we focused on tasks related to ancient geography. The 7B and 14B models provided incorrect answers, while the 72B model answered correctly but lacked adequate explanations. Qwen-Max offered explanations but included some erroneous information. This indicates that both large and small models have deficiencies in knowledge explanation for ancient Chinese, especially in historical contexts.
324
+
325
+ Short Text Understanding Using the lexical pragmatics analysis task as an example (see Table 9), the smaller models (7B and 14B) made errors in explaining and classifying special usages of words in ancient Chinese texts. In contrast, the larger models (72B and Qwen-Max) were able to grasp and explain these special usages more accurately. This suggests that larger model parameters may contribute to better understanding and analysis of texts with complex semantics and structures. However, errors still exist in larger models. For instance, in option C provided by Qwen-Max, the word "至" is identified as a verb meaning "to arrive," but it should refer to "those who arrived." Only 72B correctly recognized this subtle grammatical nuance.
326
+
327
+ Long Text Understanding We examined the poetry appreciation task (see Table 10) to assess how models handle long texts with complex literary and historical backgrounds. Smaller models struggled, often misinterpreting symbols and metaphors within the texts. Larger models demonstrated a better ability to comprehend deeper meanings and cultural implications but still require improvements in precision and consistency. Notably, larger models
328
+
329
+ like Qwen-Max performed better in handling complex texts, largely due to their enhanced ability to understand literary and historical contexts. This capability allows these models not only to recognize the direct meaning of the text but also to capture deeper symbolism and cultural significance.
330
+
331
+ Overall, LLMs still have significant shortcomings in understanding ancient Chinese language. One major issue is the hallucination of historical knowledge and the mishandling of deep literary and historical content, which affects their reliability in educational applications. Integrating more suitable Retrieval-Augmented Generation (RAG) methods presents a promising avenue for innovation. By enhancing models with accurate and context-specific external knowledge, we can improve their ability to handle complex historical and literary tasks, thereby increasing their trustworthiness in educational settings. Furthermore, the prompts currently used are generally generic, lacking specificity for different tasks. Designing task-specific and guiding prompts for various applications, and training models with these tailored prompts, can enable them to learn how to process ancient Chinese texts more effectively.
332
+
333
+ Additionally, given the high computational costs associated with large-parameter models, exploring model distillation techniques to transfer the capabilities of large models to smaller ones is a feasible research direction. This approach can reduce dependence on large-scale computational resources while maintaining or even enhancing performance levels. Training a smaller model (with 7B or 14B parameters) to support comprehensive knowledge understanding is a practical and valuable goal for future research. This not only makes the technology more accessible but also allows for broader applications in resource-constrained environments, ultimately advancing the field of natural language understanding in ancient Chinese.
334
+
335
+ <table><tr><td>Subject</td><td>Data Source</td><td>Concepts</td></tr><tr><td>Historical Facts</td><td>Official history exams</td><td>Historical facts, covering political, economic, and military developments across different periods.</td></tr><tr><td>Geography</td><td>10% from mock exams and 90% from ancient place names knowledge database</td><td>Administrative divisions, historical boundaries, changes in place names over time.</td></tr><tr><td>Social Customs</td><td>Relevant Books , e.g., Customs of the Qing Dynasty.</td><td>Changes in clothing, food, housing, transportation, traditional festivals, weddings and funerals, family etiquette, public customs, business practices, entertainment customs over time.</td></tr><tr><td>Art and Cultural Heritage</td><td>Mock exams and relevant books, e.g., History of Chinese Art, Architecture and Music History</td><td>Changes in calligraphy, painting, architecture, craftsmanship and music over different periods.</td></tr><tr><td>Philosophy and Religion</td><td>Mock exams and relevant books e.g., History of Chinese philosophy.</td><td>Changes in the content of Taoism, Confucianism, Buddhism, etc., and their rise and decline over time.</td></tr><tr><td>Lexical Pragmatic Analysis</td><td>Compiled by linguistics experts</td><td>Flexible usage of parts of speech and figures of speech.</td></tr><tr><td>Allusions and Idioms</td><td>Official and mock exam questions</td><td>Allusions and idioms and the cultural meaning behind them.</td></tr><tr><td>Word Sense Disambiguation</td><td>Word Sense Disambiguation Dataset (Shu et al., 2021)</td><td>Explanation of word meaning in a given text.</td></tr><tr><td>Translation</td><td>Classical-Modern Chinese translation dataset11</td><td>Overall understanding of the semantics and syntax of sentences.</td></tr><tr><td>Event Extraction</td><td>GuwenEE12</td><td>Identifying basic facts and information in short texts, such as time, location, characters, event types, etc.</td></tr><tr><td>Sentence Pauses</td><td>Siku Quanshu</td><td>Make pauses in reading unpunctuated ancient writings.</td></tr><tr><td>Summarization and Analysis</td><td>Official and mock exams</td><td>Overall understanding, analysis, and reasoning for ancient Chinese texts</td></tr><tr><td>Poetry Appreciation</td><td>Official and mock exams</td><td>Analysis of imagery, style, sentiment in classical Chinese poetry</td></tr></table>
336
+
337
+ Table 5: Data Sources and Concepts for All Subjects.
338
+
339
+ <table><tr><td rowspan="2">Subject</td><td colspan="2">Test</td><td colspan="3">Dev</td></tr><tr><td># Questions</td><td>Len. of Q</td><td># Questions</td><td>Len. of Q</td><td>Len. of E</td></tr><tr><td>Historical Facts</td><td>199</td><td>157.1</td><td>5</td><td>138.0</td><td>200.2</td></tr><tr><td>Geography</td><td>197</td><td>33.8</td><td>5</td><td>32.8</td><td>33.6</td></tr><tr><td>Social Customs</td><td>202</td><td>48.5</td><td>5</td><td>48.6</td><td>65.0</td></tr><tr><td>Art and Cultural Heritage</td><td>195</td><td>35.8</td><td>5</td><td>32.4</td><td>56.8</td></tr><tr><td>Philosophy and Religion</td><td>196</td><td>39.2</td><td>5</td><td>48.0</td><td>77.4</td></tr><tr><td>Lexical Pragmatic Analysis</td><td>198</td><td>62.5</td><td>5</td><td>69.6</td><td>75.4</td></tr><tr><td>Allusions and Idioms</td><td>206</td><td>191.2</td><td>5</td><td>79.6</td><td>132.4</td></tr><tr><td>Word Sense Disambiguation</td><td>402</td><td>176.6</td><td>5</td><td>163.2</td><td>91.4</td></tr><tr><td>Translation</td><td>199</td><td>409.1</td><td>5</td><td>315.0</td><td>79.4</td></tr><tr><td>Event Extraction</td><td>185</td><td>238.8</td><td>5</td><td>150.4</td><td>109.0</td></tr><tr><td>Sentence Pauses</td><td>202</td><td>390.2</td><td>5</td><td>404.2</td><td>294.2</td></tr><tr><td>Summarization and Analysis</td><td>598</td><td>880.5</td><td>5</td><td>856.0</td><td>341.4</td></tr><tr><td>Poetry Appreciation</td><td>201</td><td>339.4</td><td>5</td><td>371.8</td><td>109.0</td></tr></table>
340
+
341
+ Table 6: Quantitative Statistics for All Subjects.
342
+
343
+ 以下是中国古代艺术和文化传承领域的单项选择题,请逐步分析并给出正确答案对应的选项。The following is a multiple-choice question in the field of Ancient Chinese Art and Cultural Heritage. Please analyze step by step and provide the option corresponding to the correct answer.
344
+
345
+ 题目:中国美术史上至今发现最古老的装饰品是什么?Question: Which is the oldest ornament found so far in the history of Chinese art?
346
+
347
+ A.玉石装饰品 (Jade Ornament)
348
+ B.骨头装饰品 (Bone Ornament)
349
+ C.石墨装饰品 (Graphite Ornament)
350
+ D.贝壳装饰品 (Shell Ornament)
351
+
352
+ 答案:(Answer:)
353
+
354
+ Figure 5: Illustrative zero-shot CoT prompts from AC-EVAL with corresponding English translations for better readability.
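+
+ A small sketch of how such a zero-shot CoT prompt could be assembled programmatically; the field names (`subject`, `question`, `choices`) are hypothetical, and the template strings mirror Figure 5:
+
+ ```python
+ def build_zero_shot_cot_prompt(subject: str, question: str, choices: dict) -> str:
+     # Header instructs the model to analyze step by step, as in Figure 5.
+     header = f"以下是中国古代{subject}领域的单项选择题,请逐步分析并给出正确答案对应的选项。"
+     options = "\n".join(f"{label}.{text}" for label, text in choices.items())
+     return f"{header}\n\n题目:{question}\n\n{options}\n\n答案:"
+
+ prompt = build_zero_shot_cot_prompt(
+     subject="艺术和文化传承",
+     question="中国美术史上至今发现最古老的装饰品是什么?",
+     choices={"A": "玉石装饰品", "B": "骨头装饰品", "C": "石墨装饰品", "D": "贝壳装饰品"},
+ )
+ ```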
355
+
356
+ 以下是中国古代艺术和文化传承领域的单项选择题。在查看这些示例之后,请逐步分析接下来一道题目并给出正确答案所对应的选项。
357
+
358
+ The following are multiple-choice questions in the field of Ancient Chinese Art and Cultural Heritage. After reviewing these examples, please analyze the next question step by step and provide the option corresponding to the correct answer.
359
+
360
+ 示例1:五代南唐时期著名画家顾闳中的绘画名作是
361
+
362
+ Example 1: The famous painting masterpiece of Gu Hongzhong, a famous painter in the Southern Tang Dynasty during the Five Dynasties, is
363
+
364
+ A.《女史箴图》(Admonitions of the Instructress to the Court Ladies)
365
+ B.《五牛图》(Five Buffaloes)
366
+ C.《簪花仕女图》(Ladies with Flowers)
367
+ D.《韩熙载夜宴图》(Han Xizai Giving a Night Banquet)
368
+
369
+ 答案:
370
+
371
+ 让我们逐步分析。顾闳中的绘画名作是《韩熙载夜宴图》。《五牛图》是韩滉的作品,《簪花仕女图》是周昉的作品,《女史箴图》是顾恺之的作品。
372
+
373
+ 所以答案是D。
374
+
375
+ Answer:
376
+
377
+ Let's analyze step by step. The famous painting by Gu Hongzhong is 'Han Xizai Giving a Night Banquet.' 'Five Buffaloes' is a work by Han Huang, 'Ladies with Flowers' is by Zhou Fang, and 'Admonitions of the Instructress to the Court Ladies' is by Gu Kaizhi.
378
+
379
+ Therefore, the answer is D.
380
+
381
+ ...[other examples]
382
+
383
+ 题目:中国美术史上至今发现最古老的装饰品是什么?
384
+
385
+ Question: Which is the oldest ornament found so far in the history of Chinese art?
386
+
387
+ A.玉石装饰品 (Jade Ornament)
388
+ B. 骨头装饰品 (Bone Ornament)
389
+ C.石墨装饰品 (Graphite Ornament)
390
+ D.贝壳装饰品 (Shell Ornament)
391
+
392
+ 答案:(Answer:)
393
+
394
+ Figure 6: Illustrative few-shot CoT prompts from AC-EVAL with corresponding English translations for better readability.
395
+
396
+ <table><tr><td>Subject</td><td>ERNIE-Bot 4.0</td><td>GLM-4</td><td>Yi-34B-Chat</td><td>Qwen-7B-Chat</td></tr><tr><td>Historical Facts</td><td>78.39/76.76</td><td>78.89/84.42</td><td>75.88/58.29</td><td>61.81/61.31</td></tr><tr><td>Geography</td><td>78.17/74.11</td><td>75.13/75.13</td><td>71.57/75.63</td><td>66.50/62.44</td></tr><tr><td>Social Customs</td><td>79.21/78.22</td><td>77.23/75.74</td><td>76.73/70.79</td><td>70.79/71.29</td></tr><tr><td>Art and Cultural Heritage</td><td>74.87/72.82</td><td>76.92/75.90</td><td>70.26/68.21</td><td>59.49/61.03</td></tr><tr><td>Philosophy and Religion</td><td>77.04/76.53</td><td>75.00/63.27</td><td>68.88/60.20</td><td>55.10/51.02</td></tr><tr><td>Lexical Pragmatics Analysis</td><td>78.22/83.17</td><td>75.25/79.70</td><td>77.72/57.43</td><td>44.55/27.23</td></tr><tr><td>Allusions and Idioms</td><td>66.67/70.20</td><td>58.59/59.60</td><td>48.48/44.95</td><td>33.84/32.83</td></tr><tr><td>Word Sense Disambiguation</td><td>67.96/70.39</td><td>69.90/68.93</td><td>70.39/64.08</td><td>59.71/52.43</td></tr><tr><td>Translation</td><td>56.22/56.22</td><td>59.45/52.49</td><td>49.50/35.82</td><td>39.80/36.57</td></tr><tr><td>Event Extraction</td><td>71.86/74.37</td><td>71.86/72.86</td><td>65.33/47.74</td><td>47.74/43.22</td></tr><tr><td>Sentence Pauses</td><td>56.86/52.01</td><td>59.20/59.36</td><td>42.64/32.44</td><td>37.63/31.77</td></tr><tr><td>Summarization and Analysis</td><td>64.18/63.18</td><td>68.66/68.16</td><td>54.73/35.82</td><td>52.74/33.33</td></tr><tr><td>Poetry Appreciation</td><td>77.84/76.76</td><td>73.51/73.51</td><td>72.97/70.27</td><td>62.70/64.86</td></tr></table>
397
+
398
+ Table 7: Accuracy per subject in the answer-only setting: a comparison of zero-shot (left) and few-shot (right) performance.
399
+
400
+ <table><tr><td>Example 1</td><td>古地名“澶州”这一名字,在历史上首次出现于哪个朝代?
401
+ The ancient place name &quot;Chanzhou&quot; first appeared in history during which dynasty?
402
+ A. 南宋 (Southern Song)
403
+ B. 北宋 (Northern Song)
404
+ C. 唐 (Tang)
405
+ D. 东汉 (Eastern Han)
406
+ 答案:C (Answer: C)</td></tr><tr><td>Explanation</td><td>隋开皇十六年(公元596年),置澶渊县。唐初,为避唐高祖李渊讳改名澶水县。唐武德四年(621)置澶州。(In the 16th year of the Kaihuang era of the Sui Dynasty (AD 596), Chanyuan County was established. In early Tang, to avoid the personal name taboo of Emperor Gaozu of Tang, Li Yuan, it was renamed Chanshui County. In the 4th year of the Wude era of the Tang Dynasty (AD 621), Chanzhou Prefecture was established.)</td></tr><tr><td>Qwen-7B-Chat</td><td>...澶州这个名字首次出现在北宋时期。... (...The name &quot;Chanzhou&quot; first appeared during the Northern Song period...)</td></tr><tr><td>Qwen-14B-Chat</td><td>...北宋时期(960年-1127年),曾设置过澶州,这是它首次出现在历史上的记载。...
407
+ (...During the Northern Song period (960-1127 AD), Chanzhou was established, marking its first appearance in historical records...)</td></tr><tr><td>Qwen-72B-Chat</td><td>根据历史记载,“澶州”这个名字最早出现在唐朝时期。... (According to historical records, the name &quot;Chanzhou&quot; first appeared during the Tang Dynasty...)</td></tr><tr><td>Qwen-max</td><td>...根据历史记载,澶州在唐初为澶水县,唐玄宗开元二十二年(734年)设置澶州,因此“澶州”这一地名首次出现于唐朝。... (...According to historical records, Chanzhou was initially Chanshui County in early Tang. In the 22nd year of the Kaiyuan era under Emperor Xuanzong of Tang (AD 734), Chanzhou Prefecture was established. Therefore, the place name &quot;Chanzhou&quot; first appeared during the Tang Dynasty...)</td></tr></table>
408
+
409
+ Table 8: Error analysis for general historical knowledge.
410
+
411
+ <table><tr><td>Example 2</td><td>下列各组句子中[]内的词用法不同于其他的一项是:Which of the words in brackets has a different usage compared to the others?A. [歌]窈窕之章 ([Singing] articles related to the bright moon.)B. 假舟楫者, 非能[水]也 (Those who use a boat cannot [swim].)C. 则其[至]又加少矣 (Therefore, its [arrival] is even less.)D. 唐浮图慧褒始[舍]于其址 (The Tang Dynasty&#x27;s stupa Huibao began to [settle] at its site.)答案:C (Answer:C)</td></tr><tr><td>Explanation</td><td>A.名词作动词, 唱; B.名词作动词, 游水; C.动词作名词, 到达的人; D.名词作动词, 筑舍定居。(A. Noun used as a verb, meaning &quot;to sing&quot;; B. Noun used as a verb, meaning &quot;to swim&quot;; C. Verb used as a noun, meaning &quot;the one who arrives&quot;; D. Noun used as a verb, meaning &quot;to build a shelter and settle down.&quot;)</td></tr><tr><td>Qwen-7B-Chat</td><td>...我们逐一分析每个选项。A. “歌”在此处是一个动词, 指唱歌或请求歌唱, 与其他三个选项不同。B. “水”在这里也是动词, 表示游泳, 与其他选项相同。C. “至”在这里也是动词, 表示到达, 与其他选项相同。D. “舍”在这里也是动词, 表示筑舍定居, 与其他选项相同。综上所述, 答案为A。(Let&#x27;s analyze each option one by one. A. &quot;歌&quot; here is a verb, meaning &quot;to sing,&quot; which differs from the other three options. B. &quot;水&quot; here is also a verb, meaning &quot;to swim,&quot; similar to the others. C. &quot;至&quot; here is also a verb, meaning &quot;to arrive,&quot; similar to the others. D. &quot;舍&quot; here is also a verb, meaning &quot;to build a shelter and settle down,&quot; similar to the others. In summary, the answer is A.)</td></tr><tr><td>Qwen-14B-Chat</td><td>...[水]和[舍]在古文中都为名词作动词的用法, 表示“游泳”和“建造”的意思。[至]在古文中则是形容词作名词的用法, 表示“到达的人数”。而[歌]则是在这里作为动词, 表示歌唱的意思, 与其他三个词不同。综上所述, 本题答案选A... (In classical Chinese, [水] and [舍] are both nouns used as verbs, meaning &quot;to swim&quot; and &quot;to build,&quot; respectively. In classical Chinese, [至] is an adjective used as a noun, meaning &quot;the number of arrivals.&quot; However, [歌] is used as a verb here, meaning &quot;to sing,&quot; which is different from the other three words. In summary, the answer to this question is A...)</td></tr><tr><td>Qwen-72B-Chat</td><td>...A项: “歌”在此处是动词, 意为“歌唱”; B项: “水”在此处也是动词, 意为“游泳”; C项: “至”在此处是名词, 意为“到达的人”; D项: “舍”在此处是动词, 意为“筑舍定居”。综上所述, 本题答案选C... (A: &quot;歌&quot; here is a verb, meaning &quot;to sing&quot;; B: &quot;水&quot; here is also a verb, meaning &quot;to swim&quot;; C: &quot;至&quot; here is a noun, meaning &quot;the one who arrives&quot;; D: &quot;舍&quot; here is a verb, meaning &quot;to build a shelter and settle down.&quot; In summary, the answer to this question is C...)</td></tr><tr><td>Qwen-max</td><td>A选项中, “歌”是名词活用为动词, 意为“吟诵”; B选项中, “水”是名词活用为动词, 意为“游泳”; C选项中, “至”是动词, 意为“到”; D选项中, “舍”是名词活用为动词, 意为“筑舍居住”。通过分析可以发现, C选项中的“至”没有发生词类活用, 用法不同于其他项。因此, 正确答案是C。(In option A, &quot;歌&quot; is a noun used as a verb, meaning &quot;to recite;&quot; In option B, &quot;水&quot; is a noun used as a verb, meaning &quot;to swim;&quot; In option C, &quot;至&quot; is a verb, meaning &quot;to arrive;&quot; In option D, &quot;舍&quot; is a noun used as a verb, meaning &quot;to build a shelter and live in.&quot; Through analysis, it can be found that &quot;至&quot; in option C has not undergone a change in word class, making its usage different from the others. Therefore, the correct answer is C.)</td></tr></table>
412
+
413
+ Table 9: Error analysis for short text understanding.
414
+
415
+ <table><tr><td>Example 3</td><td>...多景楼中昼掩扉,画梁不敢住乌衣。禅房花木兵烧杀,佛寺干戈僧怕归。山雨欲来淮树立,潮风初起海云飞。酒尊未尽登舟急,更过金焦看落晖。...下列对这首诗的理解和赏析,不正确的一项是() (...Even in Duojing Tower, I have to close the doors and windows in broad daylight. Even in the richly decorated Wuyi Alley, I dare not stay overnight. The once tranquil meditation rooms and temples, now after the ravages of war, are still filled with warfare and withered flowers and trees; even the monks living there are afraid to return. The mountain rain is about to come, the trees on the banks of the Huai River stand tall; the tidal wind has just risen, clouds fly over the vast water surface. Before finishing the wine in my cup, I hurriedly boarded the boat, once again facing the boundless sunset alone at Jinshan and Jiaoshan...Among the following understandings and appreciations of this poem, which one is incorrect?)A.白日闭门,诗人所见多景楼已非昔时繁华。战乱驱使人群四散,首句中通过&quot;&quot;画梁&quot;&quot;、&quot;&quot;乌衣&quot;&quot;的意象运用了借代手法以展现这一景象。(Closing doors in broad daylight shows that Duojing Tower seen by the poet is no longer as prosperous as before. War drives people to scatter; in the first sentence, the imagery of &quot;painted beams&quot; and &quot;Wuyi&quot; uses metonymy to present this scene.)B.元军侵袭南方,肆虐破坏,即使是佛家清修的寺庙禅林也未能避免其害,&quot;僧怕归&quot;这一描绘直接而具体地展现了该情境。(The Yuan army invaded the south, wreaking havoc; even the pure temples and meditation forests of Buddhism could not avoid harm. The depiction of &quot;monks afraid to return&quot; directly and concretely presents this situation.)C.自然界中的&quot;&quot;山雨欲来&quot;&quot;与&quot;&quot;潮风初起&quot;&quot;预示着社会动荡不安的局势,诗人刚从惊恐中稍定,又因战事突发,匆匆登上船只逃离。(The natural phenomena of &quot;mountain rain about to come&quot; and &quot;tidal wind just rising&quot; foreshadow the turbulent social situation. The poet, just slightly calmed from fear, hurriedly boarded the boat to flee due to the sudden outbreak of war.)D.面对国土丧失、战火弥漫,诗人心中满溢悲愤与无奈。回望西沉落日余晖,那既是皇朝衰败的形象象征,亦是诗人内心深切悲痛情感的折射镜像。(Facing the loss of the country and pervasive war, the poet&#x27;s heart is filled with grief and helplessness. Looking back at the setting sun in the west, it is both a symbolic representation of the dynasty&#x27;s decline and a reflection of the poet&#x27;s deep sorrowful emotions.)答案:B (Answer: B)</td></tr><tr><td>Explanation</td><td>B选项理解错误,&quot;僧怕归&quot;是通过僧人的心理状态侧面反映出战乱的场景,而非直接描述。(Option B is incorrectly understood; &quot;monks afraid to return&quot; indirectly reflects the scene of war through the psychological state of the monks, rather than direct description.)</td></tr><tr><td>Qwen-7B-Chat</td><td>...选项C则错误。&quot;山雨欲来淮树立,潮风初起海云飞&quot;这两句描绘了自然界即将发生的变化,预示着将要到来的战事。并没有描述诗人刚刚从惊恐中恢复过来。...综上所述,本题答案选C.(...Option C is incorrect. The lines &quot;mountain rain about to come, trees stand by the Huai River; tidal wind just rising, clouds fly over the sea&quot; depict upcoming changes in nature, foreshadowing the impending war. There is no description of the poet just recovering from fear....In summary, the answer to this question is C.)</td></tr><tr><td>Qwen-14B-Chat</td><td>...选项A中,&quot;白日闭门&quot;确实表现出战乱导致人们生活的不稳定和恐惧,但&quot;画梁&quot;和&quot;乌衣&quot;并非借代手法。&quot;画梁&quot;指的是装饰华丽的屋梁,&quot;乌衣&quot;则是指黑色的衣服,这两个词在这里都是具体的实物描写...因此,选项A是错误的。综上所述,本题答案选A...(...In Option A, &quot;closing doors in broad daylight&quot; indeed shows the instability and fear in people&#x27;s lives caused by war, but &quot;painted beams&quot; and &quot;Wuyi&quot; are not using metonymy.&quot;Painted beams&quot; refers to gorgeously decorated beams, &quot;Wuyi&quot; refers to black clothes; these two terms here are concrete descriptions.... Therefore, Option A is incorrect. In summary, the answer is A...)</td></tr><tr><td>Qwen-72B-Chat</td><td>...B项,根据“兵烧杀”、“僧怕归”可以推断出,“僧怕归”并不是直接描写,而是通过对僧人的描绘间接展现出战乱的情况;C项,“山雨欲来”与“潮风初起”是自然界的现象,而非社会动荡不安的局势;...综上所述,本题答案选B。(...For Option B, based on &quot;soldiers burning and killing&quot; and &quot;monks afraid to return,&quot; we can infer that &quot;monks afraid to return&quot; is not a direct description but indirectly reflects the war situation through the depiction of monks; For Option C, &quot;mountain rain about to come&quot; and &quot;tidal wind just rising&quot; are natural phenomena, not the turbulent social situation; ...In summary, the answer is B.)</td></tr><tr><td>Qwen-max</td><td>...B选项分析:“僧怕归”一词是通过僧人的心理状态侧面反映出战乱对佛寺的破坏和影响,而非直接具体描绘情境。故B选项错误。...综上所述,本题答案选B。(...Analysis of Option B: The phrase &quot;monks afraid to return&quot; reflects the destruction and impact of war on Buddhist temples through the psychological state of monks, rather than directly and concretely depicting the situation. Therefore, Option B is incorrect. ...In summary, the answer is B.)</td></tr></table>
416
+
417
+ Table 10: Error analysis for long text understanding. 1617
acevalevaluatingancientchineselanguageunderstandinginlargelanguagemodels/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:84c793c87203f9353c1317226097271bc036ddc1c28ef810e0f656d3729c77e0
3
+ size 2068297
acevalevaluatingancientchineselanguageunderstandinginlargelanguagemodels/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d737140a500ddcd6e6c4d6ffbd6fbd6dbb15d3b6c3f804123fa4ce53c60dc430
3
+ size 417990
achievingstrongergenerationviasimplecontrastivetuning/71f25873-57a8-4a24-87b9-4210826f239f_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:97acaf4c6d8fc9c392584797f6dcfaf9262ec397634eca79d8eeb72e42012e5e
3
+ size 94537
achievingstrongergenerationviasimplecontrastivetuning/71f25873-57a8-4a24-87b9-4210826f239f_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bbc13b22a5b8597c8ab932d9843b7a7c21ac9a86a4f8a8890afb6fff9c039e49
3
+ size 111879
achievingstrongergenerationviasimplecontrastivetuning/71f25873-57a8-4a24-87b9-4210826f239f_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e474545c2bdcf46141f5a26a7ed8ec6b87ce919a12f218ba0eeb32ea8e3f6ee0
3
+ size 1104586
achievingstrongergenerationviasimplecontrastivetuning/full.md ADDED
@@ -0,0 +1,437 @@
 
 
 
 
1
+ # Achieving Stronger Generation via Simple Contrastive Tuning
2
+
3
+ Zhimeng Wang $^{1}$ , Pinzheng Wang $^{1}$ , Juntao Li $^{1*}$ , Yibin Chen $^{2}$ , Min Zhang $^{1}$
4
+
5
+ <sup>1</sup>Harbin Institute of Technology, Shenzhen, China
6
+
7
+ $^{2}$ Huawei Technologies
8
+
9
+ zmwang03@gmail.com; chenyibin4@huawei.com;
10
+
11
+ zhangmin2021@hit.edu.cn;
12
+
13
+ # Abstract
14
+
15
+ Instruction tuning is widely used to unlock the abilities of Large Language Models (LLMs) in following human instructions, resulting in substantial performance improvements across various downstream tasks. Furthermore, contrastive decoding methods are employed to enhance instruction-tuned models. To further explore the potential of contrastive decoding, we introduce the Contrastive Tuning and Decoding (CTD) framework, which enhances model performance without requiring additional data or significant computational resources. When performing Contrastive Tuning, we optimize a correction model by targeting discrepancies between the original outputs and labels. During Contrastive Decoding, the correction model adjusts the logits of the SFT model using the same input to ensure better adherence to instructions. With the lightweight CTD framework, we refine the behavior of instruction-tuned models, improving their performance on the challenging SUPNATINST dataset with unfamiliar data distributions across various models and prompt formats.
16
+
17
+ # 1 Introduction
18
+
19
+ Recent years (Zhang et al., 2023b) have witnessed remarkable progress in large language models (LLMs). Some LLMs, such as LLaMA (Touvron et al., 2023a), GPT-3 (Brown et al., 2020), and Mistral (Jiang et al., 2023), have acquired general abilities for solving various tasks. Instruction tuning is a crucial technique to enhance the capabilities and controllability of LLMs (Zhang et al., 2023b) by further fine-tuning LLMs with instruction-formatted data, resulting in better performance on downstream tasks.
20
+
21
+ However, there is a concern that instruction-tuned models show significant improvements mainly on tasks related to the instruction datasets
22
+
23
+ and may not perform as well on others (Gudibande et al., 2023). This limitation indicates that instruction-tuned models struggle to effectively generalize to tasks with entirely different data distributions. Additionally, in practical scenarios, instruction tuning often faces challenges with limited data and tends to overfit, leading to performance degradation after multiple training epochs (Xue et al., 2024; Muennighoff et al., 2024).
24
+
25
+ To further enhance the performance of LLMs on text generation tasks, some researchers have explored decoding methods in a contrastive manner (Chuang et al., 2023; Kim et al., 2023; Shi et al., 2024; Li et al., 2022). These methods focus on modifying the model's initial output distribution with a specific distracted distribution. Although existing contrastive decoding methods do not require additional training, they lack robustness across various models and scenarios and are sensitive to specific hyperparameters (Kim et al., 2023).
26
+
27
+ To address this issue and further explore the potential of instruction-tuned models, we propose a novel Contrastive Tuning and Decoding (CTD) framework to achieve stronger generation without additional data or significant computational resources, as shown in Figure 1. With a Supervised Fine-Tuned (SFT) model derived from a pre-trained model as the original model, we initialize the correction model using the SFT model and fine-tune it with parameter-efficient methods, utilizing data sampled from the SFT dataset to obtain additional parameters for correction. Finally, we apply Contrastive Decoding, which leverages the differences between the original and correction models' outputs. This process results in significant performance improvements with minimal resource requirements, as we use lightweight prompt tuning for Contrastive Tuning. We are the first to explore employing contrastive tuning before contrastive decoding, which optimizes the correction model from a unique perspective, resulting in more reliable improvements
28
+
29
+ ![](images/fc04f2565170d9f901462046c2d2e0f4c402d44825b149007600c48b25ae7163.jpg)
30
+ Figure 1: Overview of the Contrastive Tuning and Decoding (CTD) framework. Given an instruction-tuned model as the original model, we employ low-cost Contrastive Tuning with data sampled from the SFT dataset to obtain additional parameters for the correction model. Then, we use Contrastive Decoding to refine the original model's predictions, resulting in stronger outputs.
31
+
32
+ on challenging tasks with unfamiliar data distributions.
33
+
34
+ In summary, our contributions are as follows: (1) We introduce Contrastive Tuning, which develops a specific correction model for the original SFT model. (2) We propose Contrastive Decoding with a tunable prefix, which is efficient as it introduces almost no inference latency. (3) Our method achieves stronger generation by integrating Contrastive Tuning and Contrastive Decoding into the CTD framework.
35
+
36
+ # 2 Related Work
37
+
38
+ Instruction Tuning Instruction tuning is a method for fine-tuning pre-trained LLMs on a collection of formatted instances presented in natural language (Wei et al., 2021). This approach enables LLMs to follow human instructions (Wei et al., 2022) and perform specific tasks without requiring demonstrations, even for unseen tasks (Chung et al., 2024). However, in data-constrained scenarios, many prior works on LLMs (Chung et al., 2024; Brown et al., 2020) show that training an LLM for multiple epochs on repeated data leads to overfitting. Our work also employs multiple rounds of tuning with repeated data and achieves better performance.
39
+
40
+ # Contrastive Decoding & Instructive Decoding
41
+
42
+ The idea of using contrast to improve the text generation performance of LLMs has been studied in various ways (Yona et al., 2023; Li et al., 2022; Kim et al., 2023; Liu et al., 2021a; Shi et al., 2024). Our work is mainly motivated by Contrastive Decoding (CD) (Li et al., 2022) and Instructive Decoding
43
+
44
+ (ID) (Kim et al., 2023). CD contrasts expert LLMs with amateur LLMs by taking the difference of their log probabilities to improve generation quality without training, using larger LLMs as experts and smaller LLMs as amateurs. Instructive Decoding, in contrast, exploits the effect of different instructions: it contrasts normal instructions with noisy instructions to adjust the logits of the next-token prediction and achieves considerable performance.
45
+
46
+ Prompt-tuning Prompt-tuning (Lester et al., 2021) is a Parameter-Efficient Fine-Tuning (PEFT) method that can efficiently adapt large models to various downstream tasks (Han et al., 2024). It adds a learnable soft prompt (also called a continuous prompt) before the input of the model. During training, only the soft prompt is updated, and the model parameters are frozen. Prompt-tuning works well on many tasks, but it does not perform as effectively as fine-tuning (Liu et al., 2021b) and has largely been superseded by other PEFT methods such as LoRA (Hu et al., 2021), QLoRA (Dettmers et al., 2024), and LLaMA-Adapter (Zhang et al., 2023a). In our work, we use prompt tuning to show the feasibility of tuning in a contrastive way, as this approach is sufficiently straightforward.
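+
+ As a minimal sketch of this mechanism (not the exact implementation used in this paper), a soft prompt can be realized as a learnable embedding matrix prepended to the token embeddings of a frozen model:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class SoftPrompt(nn.Module):
+     """Learnable soft prompt prepended to the input embeddings."""
+
+     def __init__(self, n_tokens: int, hidden_size: int):
+         super().__init__()
+         self.embeddings = nn.Parameter(torch.randn(1, n_tokens, hidden_size) * 0.02)
+
+     def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
+         batch_size = input_embeds.size(0)
+         prefix = self.embeddings.expand(batch_size, -1, -1)
+         return torch.cat([prefix, input_embeds], dim=1)
+
+ # During training, only the soft prompt receives gradients:
+ # for p in base_model.parameters():
+ #     p.requires_grad_(False)
+ ```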
47
+
48
+ # 3 Motivation
49
+
50
+ We draw inspiration from Instructive Decoding, which suggests that distracted logits from the SFT model itself can be used to refine its original output (Kim et al., 2023). However, this approach is based on intuition, and the improvement is not stable (Kim et al., 2023). It has been observed that
51
+
52
+ ![](images/3adbe25e45e903392effd66662db13af949ef3672ddbf5f1397181a1322d45b9.jpg)
53
+ Figure 2: Illustration of Contrastive Tuning and Contrastive Decoding with a soft prefix. The instruction-tuned model serves as the original model, with the soft prefix added to create the correction model. Both models receive the same input. During training and inference, we leverage the differences between the original and corrected logits, optimizing the soft prefix throughout the training process.
54
+
55
+ Instructive Decoding has misaligned training and decoding objectives. Therefore, we design a novel optimization objective that aligns the language modeling objectives between training and decoding, enhancing the model's ability to correct its own outputs more effectively. As a result, the Contrastive Tuning and Decoding framework can correct high-confidence errors, which Instructive Decoding alone cannot. By incorporating prompt tuning, we avoid storing two sets of model parameters, reducing memory requirements.
56
+
57
+ # 4 Methodology
58
+
59
+ # 4.1 Overall Framework
60
+
61
+ Motivated by Instructive Decoding (Kim et al., 2023), we start from the idea that a model can correct its own output given only a specific instruction. In our exploration setting, the correction model is developed from the Supervised Fine-Tuned (SFT) model. Our goal is to efficiently construct a correction model based on the given SFT model and leverage these two models to achieve stronger model predictions. Specifically, given a pre-trained autoregressive language model, we treat its SFT version $M_{\Phi}$ on the instruction dataset $\mathcal{D}$ as the original model, where $\Phi$ denotes the model parameters. The Contrastive Tuning and Decoding (CTD) framework first performs low-cost Contrastive Tuning on the instruction-tuned model $M_{\Phi}$ to obtain a correction model $M_{\Phi + \theta}$ using an instruction dataset $\mathcal{D}'$ sampled from $\mathcal{D}$ . Then, CTD utilizes Contrastive Decoding to achieve stronger model predictions based on the given original and correction models. It refines the behavior of LLMs, achieving stable improvements without extra data through low-cost training. We further introduce Contrastive Tuning and Contrastive Decoding in Sections 4.2 and 4.3, as illustrated in Figure 2.
62
+
63
+ # 4.2 Contrastive-Tuning
64
+
65
+ Given a text sequence $t$ of length $N$ , $t_{<i} = (t_1, t_2, \ldots, t_{i-1})$ denotes the sequence preceding the $i$ -th token ( $i < N$ ). The causal language modeling (CLM) objective for tuning a language model parameterized by $\Phi$ is defined as minimizing the negative log-likelihood:
66
+
67
+ $$
68
+ p_{\Phi}(t_i | t_{<i}) = \mathrm{SOFTMAX}[z_i],
69
+ $$
70
+
71
+ $$
72
+ \mathcal{L}_{\mathrm{CLM}} = -\mathbb{E}_{t \sim \mathcal{C}}\left[\sum_{i} \log p_{\Phi}(t_i | t_{<i})\right],
73
+ $$
74
+
75
+ where $p_{\Phi}(t_i|t_{< i})$ is the predicted probability for token $t_i$ , derived from the output logits $z_i$ via the SOFTMAX function.
76
+
77
+ For Contrastive Tuning, we employ prompt tuning (Lester et al., 2021; Liu et al., 2021b) by introducing a tunable prefix denoted as $\theta$ while keeping the main model frozen. The original model generates logits $z_{i}$ without $\theta$ , and modified logits $z_{i}^{-}$ with $\theta$ . We consider the original model with $\theta$ as the correction model and treat the logits $z_{i}^{-}$ as learnable noise. To minimize the noise in the original outputs, we compute the difference between $z_{i}$ and $z_{i}^{-}$ . To optimize the parameter $\theta$ for correction, we modify the conventional CLM loss as follows:
78
+
79
+ $$
80
+ p_{\Phi, \Phi+\theta}(t_i | t_{<i}) = \mathrm{SOFTMAX}[z_i - \alpha \cdot z_i^{-}],
81
+ $$
82
+
83
+ $$
84
+ \mathcal{L}_{\mathrm{CLM-CTD}} = -\mathbb{E}_{t \sim \mathcal{C}}\left[\sum_{i} \log p_{\Phi, \Phi+\theta}(t_i | t_{<i})\right],
85
+ $$
86
+
87
+ where $\alpha$ represents the contrast intensity, as shown in the logits subtraction part of Figure 2. Specifically, $\theta$ is a tunable prefix for contrast, making this process lightweight and easy to perform.
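+
+ A minimal PyTorch sketch of this objective, assuming a Hugging Face-style causal LM that accepts `inputs_embeds` and a tunable prefix tensor `prefix_embeds` of shape `(1, n_prefix, hidden)`; the wiring below is illustrative rather than the authors' exact implementation:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def ctd_tuning_loss(model, prefix_embeds, input_ids, labels, alpha=0.3):
+     """Cross-entropy over SOFTMAX[z - alpha * z^-], optimizing only the prefix."""
+     embeds = model.get_input_embeddings()(input_ids)
+     with torch.no_grad():  # original model: frozen, no prefix
+         z = model(inputs_embeds=embeds).logits
+     # Correction model: the same frozen weights plus the tunable prefix theta;
+     # drop the prefix positions so the logits align with the original sequence.
+     prefixed = torch.cat([prefix_embeds.expand(embeds.size(0), -1, -1), embeds], dim=1)
+     z_neg = model(inputs_embeds=prefixed).logits[:, prefix_embeds.size(1):]
+     contrasted = z - alpha * z_neg  # logits subtraction from the equation above
+     # Standard next-token shift, as in conventional CLM training.
+     return F.cross_entropy(
+         contrasted[:, :-1].reshape(-1, contrasted.size(-1)),
+         labels[:, 1:].reshape(-1),
+         ignore_index=-100,
+     )
+ ```
+
+ Only `prefix_embeds` requires gradients here: because $z_i^{-}$ enters the loss with a negative sign, the prefix learns to emit exactly the "noise" that, once subtracted, makes the contrasted distribution match the labels.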
88
+
89
+ By learning to refine the original output with learnable noise, we obtain a correction model that can be used to adjust the original model's predictions. In Section 7.3, we also conduct a pilot experiment with LoRA to obtain the parameter $\theta$ , demonstrating the generalization of Contrastive Tuning.
90
+
91
+ # 4.3 Contrastive Decoding
92
+
93
+ Motivated by Instructive Decoding (Kim et al., 2023) and Contrastive Decoding (Li et al., 2022), we propose Contrastive Decoding with a soft prefix. We develop a correction model based on the soft prefix $\theta$ obtained from Contrastive Tuning. During generation with Contrastive Decoding, we take the difference between the original model and the correction model given the same input, resulting in stronger model predictions, as described in Algorithm 1. Unlike other methods that only decode in a contrastive manner, our approach achieves stronger predictions by aligning the CLM objective between training and contrastive decoding.
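+
+ A greedy-decoding sketch of Algorithm 1 under the same assumptions as the tuning sketch above; since $\arg\max$ is invariant under SOFTMAX, the subtraction can be applied directly to the logits:
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def contrastive_decode(model, tokenizer, prefix_embeds, prompt,
+                        max_new_tokens=128, beta=0.3):
+     ids = tokenizer(prompt, return_tensors="pt").input_ids
+     for _ in range(max_new_tokens):
+         embeds = model.get_input_embeddings()(ids)
+         z = model(inputs_embeds=embeds).logits[:, -1]           # original logits
+         prefixed = torch.cat([prefix_embeds, embeds], dim=1)
+         z_tilde = model(inputs_embeds=prefixed).logits[:, -1]   # correction logits
+         next_id = (z - beta * z_tilde).argmax(dim=-1, keepdim=True)
+         if next_id.item() == tokenizer.eos_token_id:
+             break
+         ids = torch.cat([ids, next_id], dim=-1)
+     return tokenizer.decode(ids[0], skip_special_tokens=True)
+ ```
+
+ Note that no second set of model weights is stored: the correction model is just one extra forward pass of the same frozen model with the tuned prefix prepended.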
94
+
95
+ # 5 Experiments
96
+
97
+ # 5.1 Experiment Settings
98
+
99
+ Models Our approach focuses primarily on instruction-tuned models. In this study, we evaluate our method on Alpaca-7B (Taori et al., 2023), Mistral-7B-SlimOrca (Lian et al., 2023b), DeciLM-7B-instruct (Team, 2023), Llama3-8b-alpaca, claude2-alpaca-7B, and claude2-alpaca-13B (Chen et al., 2023).
100
+
101
+ The Alpaca-7B and Llama3-8b-alpaca models are trained from Llama-7B and Llama-3-8B on the Alpaca dataset (Taori et al., 2023). claude2-alpaca-7B and claude2-alpaca-13B are trained from Llama-2-7B (Touvron et al., 2023a) and Llama-2-13B (Touvron et al., 2023b) on the claude2_alpaca dataset (Chen et al., 2023). Mistral-7B-SlimOrca and
102
+
103
+ # Algorithm 1 Contrastive Decoding with Soft Prefix
104
+
105
+ Input: Original model $\mathcal{M}_{\Phi}$ , correction model $\mathcal{M}_{\Phi + \theta}$ , input text sequence $I$ , target sequence length $T$ and intensity coefficient $\beta$ .
106
+
107
+ 1: Initialize $t \gets 1$
108
+ 2: while $t \le T$ do
109
+ 3: $z_{t} \gets \mathcal{M}_{\Phi}(y_{t}|I, y_{<t})$
110
+ 4: $\tilde{z}_t\gets \mathcal{M}_{\Phi +\theta}(y_t|I,y_{< t})$
111
+ 5: $y_{t} = \arg \max (\mathrm{SOFTMAX}[z_{t} - \beta \cdot \tilde{z}_{t}])$
112
+ 6: set $t\gets t + 1$
113
+ 7: end while
114
+
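+ A rough Python rendering of Algorithm 1, assuming greedy decoding and a `gen_neg_logits` helper that returns the correction model's next-token logits for the same context (e.g., via the prefix-prepended forward sketched earlier); `beta` is the intensity coefficient $\beta$. Note that taking the argmax of the softmax is equivalent to taking the argmax of the contrasted logits directly.
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def contrastive_decode(model, tokenizer, prompt, gen_neg_logits, beta=0.3, max_new_tokens=64):
+     ids = tokenizer(prompt, return_tensors="pt").input_ids
+     for _ in range(max_new_tokens):
+         z = model(input_ids=ids).logits[:, -1]   # original next-token logits z_t
+         z_neg = gen_neg_logits(ids)              # correction-model logits z~_t
+         # argmax(SOFTMAX[z - beta * z~]) == argmax(z - beta * z~)
+         next_id = (z - beta * z_neg).argmax(dim=-1, keepdim=True)
+         ids = torch.cat([ids, next_id], dim=-1)
+         if next_id.item() == tokenizer.eos_token_id:
+             break
+     return tokenizer.decode(ids[0], skip_special_tokens=True)
+ ```
+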
115
+ Mistral-7B-SlimOrca and DeciLM-7B-instruct are trained from Mistral-7B-v0.1 (Jiang et al., 2023) and DeciLM-7B (Team, 2023) on the SlimOrca dataset (Lian et al., 2023c), a subset of the OpenOrca dataset (Mukherjee et al., 2023; Lian et al., 2023a). All the models mentioned above are derived from publicly available checkpoints on Huggingface. These instruction-tuned models encompass three different prompt formats: the Alpaca format (Taori et al., 2023), the ChatML format, and another commonly used format, detailed in Appendix C. In our experiments, greedy decoding is primarily employed for these models.
116
+
117
+ Evaluation and Baselines We examine the models and compare the Contrastive Tuning and Decoding (CTD) framework with two baselines: 1) Original Decoding (OD) using the greedy decoding strategy and 2) Instructive Decoding (ID). We choose the 'opposite' setting for the noisy instructions, as it consistently outperforms other Instructive Decoding settings (Kim et al., 2023).
118
+
119
+ Following Instructive Decoding (Kim et al., 2023), we also utilize SUPNATINST (Wang et al., 2022) to assess the models' performance on unseen task generalization using the Rouge-L metric (Lin, 2004). We evaluate the models on 119 tasks from SUPNATINST, categorized into 12 groups, as outlined in Appendix B.
120
+
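+ As an illustration, Rouge-L can be computed with the rouge-score package (a common choice; the paper does not name its implementation):
+
+ ```python
+ from rouge_score import rouge_scorer
+
+ scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
+ score = scorer.score(target="send the email to my manager",
+                      prediction="send an email to my manager")
+ print(score["rougeL"].fmeasure)  # longest-common-subsequence F-measure
+ ```
+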
121
+ Additionally, DROP (Discrete Reasoning Over Paragraphs) (Dua et al., 2019) and AlpacaEval (Li et al., 2023) are also used for further evaluations.
122
+
123
+ DROP is a benchmark where models need to extract relevant information from English text paragraphs and then perform discrete reasoning steps on them. We use exact match as the metric for DROP, and the results are shown in Appendix E.
124
+
125
+ AlpacaEval is a widely used benchmark for evaluating large language models (LLMs) on their ability to follow instructions and align with human preferences (Li et al., 2023). It assesses the preference likelihood of an LLM-based evaluator favoring a model's output compared to a GPT-4 baseline, offering a cost-effective alternative to manual human preference annotations. In our evaluation, we utilize the gpt-4-1106-preview version as the automated evaluator.
126
+
127
+ Training We perform Contrastive Tuning on the above models for 3 epochs using 4 A100-40G GPUs. We use 20 tokens to initialize the soft contrastive prefix. The intensity coefficient $\alpha$ is set to 0.3 for training.
128
+
129
+ Table 1: Zero-shot Rouge-L scores on unseen tasks from a subset of the SUPNATINST dataset are evaluated with different models using Original Decoding (OD), Instructive Decoding (ID), and our Contrastive Tuning and Decoding (CTD) framework.
130
+
131
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Methods</td><td colspan="13">Rouge-L</td></tr><tr><td>Overall</td><td>TG</td><td>CR</td><td>TE</td><td>QR</td><td>CEC</td><td>DAR</td><td>AC</td><td>KT</td><td>DT</td><td>WA</td><td>OE</td><td>GEC</td></tr><tr><td rowspan="3">Alpaca-7b</td><td>OD</td><td>35.97</td><td>23.13</td><td>26.22</td><td>43.37</td><td>51.10</td><td>50.52</td><td>30.32</td><td>38.74</td><td>32.41</td><td>39.07</td><td>21.55</td><td>29.11</td><td>79.89</td></tr><tr><td>ID</td><td>37.01</td><td>25.09</td><td>26.60</td><td>42.34</td><td>58.67</td><td>50.89</td><td>30.43</td><td>39.48</td><td>33.03</td><td>39.01</td><td>22.78</td><td>28.82</td><td>79.52</td></tr><tr><td>CTD</td><td>38.73</td><td>26.49</td><td>28.59</td><td>42.84</td><td>60.19</td><td>50.62</td><td>32.21</td><td>46.93</td><td>36.09</td><td>39.13</td><td>22.24</td><td>29.60</td><td>80.30</td></tr><tr><td rowspan="3">Mistral-7B-SlimOrca</td><td>OD</td><td>53.58</td><td>32.38</td><td>53.72</td><td>66.43</td><td>58.02</td><td>70.47</td><td>64.23</td><td>54.96</td><td>41.54</td><td>41.44</td><td>44.21</td><td>58.63</td><td>87.66</td></tr><tr><td>ID</td><td>54.10</td><td>33.44</td><td>53.79</td><td>66.65</td><td>58.00</td><td>70.87</td><td>65.42</td><td>56.76</td><td>42.18</td><td>41.62</td><td>43.89</td><td>58.10</td><td>88.35</td></tr><tr><td>CTD</td><td>56.57</td><td>34.46</td><td>56.49</td><td>69.38</td><td>57.81</td><td>71.92</td><td>71.85</td><td>61.26</td><td>46.36</td><td>41.17</td><td>47.48</td><td>66.35</td><td>87.72</td></tr><tr><td rowspan="3">DeciLM-7B-instruct</td><td>OD</td><td>53.41</td><td>34.23</td><td>43.87</td><td>70.03</td><td>62.90</td><td>69.20</td><td>68.25</td><td>36.38</td><td>42.84</td><td>46.58</td><td>56.91</td><td>67.35</td><td>86.47</td></tr><tr><td>ID</td><td>54.49</td><td>34.44</td><td>45.81</td><td>71.15</td><td>63.17</td><td>69.91</td><td>69.36</td><td>39.67</td><td>41.74</td><td>46.75</td><td>58.40</td><td>68.77</td><td>85.98</td></tr><tr><td>CTD</td><td>60.46</td><td>35.84</td><td>60.30</td><td>76.65</td><td>61.25</td><td>73.38</td><td>74.45</td><td>57.61</td><td>51.67</td><td>46.13</td><td>61.72</td><td>70.36</td><td>86.25</td></tr><tr><td rowspan="3">Llama3-8b-alpaca</td><td>OD</td><td>32.38</td><td>25.88</td><td>18.98</td><td>35.76</td><td>57.85</td><td>45.32</td><td>21.06</td><td>23.95</td><td>24.77</td><td>40.31</td><td>27.17</td><td>31.54</td><td>88.56</td></tr><tr><td>ID</td><td>32.55</td><td>26.07</td><td>18.73</td><td>36.01</td><td>58.27</td><td>44.76</td><td>21.91</td><td>23.97</td><td>24.78</td><td>40.24</td><td>27.73</td><td>33.54</td><td>88.62</td></tr><tr><td>CTD</td><td>36.05</td><td>27.93</td><td>23.75</td><td>39.74</td><td>58.12</td><td>50.36</td><td>29.01</td><td>33.09</td><td>25.96</td><td>40.56</td><td>27.35</td><td>37.97</td><td>88.64</td></tr><tr><td rowspan="3">claude2-alpaca-7B</td><td>OD</td><td>34.46</td><td>28.23</td><td>25.31</td><td>33.19</td><td>58.96</td><td>52.47</td><td>23.46</td><td>35.80</td><td>27.07</td><td>40.51</td><td>21.42</td><td>21.01</td><td>86.44</td></tr><tr><td>ID</td><td>35.08</td><td>28.80</td><td>25.57</td><td>34.80</td><td>59.37</td><td>54.69</td><td>24.07</td><td>35.78</td><td>27.92</td><td>39.98</td><td>21.14</td><td>20.69</td><td>86.07</td></tr><tr><td>CTD</td><td>35.72</td><td>28.88</td><td>24.97</td><td>33.77</td><td>60.07</td><td>55.17</td><td>26.33</td><td>39.97</td><td>30.91</td><td>40.53</td><td>21.50</td><td>22.71</td><td>85.92</td></tr><tr><td rowspan="3">claude2-alpaca-13B</td><td>OD</td><td>40.36</td><td>33.07</td><td>28.65</td><td>37.43</td><td>59.66</td><td>59.99</td><td>34.03</td><td>45.28</td><td>39.09</td><td>43.87</td><td>34.34</td><td>26.63</td><td>88.81</td></tr><tr><td>ID</td><td>40.14</td><td>33.09</td><td>31.06</td><td>34.85</td><td>61.44</td><td>60.34</td><td>34.20</td><td>44.16</td><td>38.87</td><td>43.68</td><td>34.09</td><td>26.17</td><td>87.45</td></tr><tr><td>CTD</td><td>45.11</td><td>35.35</td><td>34.67</td><td>45.11</td><td>63.53</td><td>62.10</td><td>37.88</td><td>55.97</td><td>41.64</td><td>44.13</td><td>33.98</td><td>35.38</td><td>87.26</td></tr></table>
132
+
133
+ The data for Contrastive Tuning is sampled from the same dataset used to train the instruction-tuned model. The batch size is set to $4 \times 8$. We optimize the prefix using the AdamW (Loshchilov and Hutter, 2017) optimizer with a learning rate of 9e-4.
134
+
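+ For reference, a minimal setup sketch using the HuggingFace peft library; the 20 virtual tokens and the AdamW learning rate of 9e-4 follow the settings above, while the model id (taken from Table 3) stands in for any of the evaluated checkpoints and the data loading is omitted.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM
+ from peft import PromptTuningConfig, TaskType, get_peft_model
+
+ model = AutoModelForCausalLM.from_pretrained("chavinlo/alpaca-native")  # Alpaca-7B (Table 3)
+ config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)
+ peft_model = get_peft_model(model, config)  # only the 20-token soft prefix is trainable
+ optimizer = torch.optim.AdamW(
+     (p for p in peft_model.parameters() if p.requires_grad), lr=9e-4
+ )
+ # The no-prefix logits z_i can be obtained from the same weights with
+ # `with peft_model.disable_adapter(): ...`, so one model copy suffices.
+ ```
+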
135
+ # 5.2 Performance on Unseen Task Generalization
136
+
137
+ Result Overview Table 1 displays the results of applying the Contrastive Tuning and Decoding (CTD) framework to these instruction-tuned models. CTD consistently outperforms the Original Decoding and Instructive Decoding methods, particularly for the DeciLM-7B-instruct and claude2-alpaca-13B models. CTD achieves remarkable performance in adhering to instructions when facing these unseen, challenging tasks.
138
+
139
+ Winning Rate We also assess the tasks in which CTD outperforms the baselines in the SUPNATINST dataset, measured by the Rouge-L score, as depicted in Figure 3. CTD consistently outperforms the Original Decoding and Instructive Decoding methods across various tasks, model sizes, and prompt formats.
142
+
143
+ Text Generation Quality We evaluate our CTD framework on the AlpacaEval dataset with gpt-4-1106-preview as the evaluator, and the results are shown in Figure 4. The CTD framework performs better than both Original Decoding (OD) and Instructive Decoding (ID) on this benchmark, highlighting its robustness and effectiveness in improving text generation quality. While ID performs reasonably well on the SUPNATINST tasks, its question-answering quality deteriorates. In contrast, CTD consistently enhances generation quality, making it suitable for real-world applications.
+
+ ![](images/97fb8d5abe9dee0fab8187e67b8c759e244d37be38901c718413555fb05f12af.jpg)
148
+
149
+ # 5.3 Implementation Details
150
+
151
+ Training Epochs and Dataset Proportion The choice of epochs for Contrastive Tuning is critical, as excessive training can lead to overfitting. As shown in Figure 5, one or two epochs of contrastive tuning yield the best results for most models, with performance dropping after the third epoch.
+
+ ![](images/2bd71d5f19d3d08eb577d202a099f4cbf51bd966d0cae44f67cf0697010a4934.jpg)
+ Figure 4: Winning rates of Original Decoding (OD), Instructive Decoding (ID), and Contrastive Tuning and Decoding (CTD) compared to baseline in the AlpacaEval dataset.
+
+ ![](images/ef13e8d85d2e404fe4fb6411a3dfebed87509109dd32b0883a757db0868deb55.jpg)
+ Figure 3: Comparative winning rates among Original Decoding (OD), Instructive Decoding (ID), and Contrastive Tuning and Decoding (CTD) across 119 tasks in the SUPNATINST dataset (panels: Alpaca-7B, Mistral-7B-SlimOrca, Llama3-8b-alpaca, claude2-alpaca-7B, DeciLM-7B-instruct, claude2-alpaca-13B).
+
+ ![](images/d22be2af9f93518659de04c4811a1e7dd2d8bdaf8d4e530567d720d0b714aa65.jpg)
+ Figure 5: Improvement of the Contrastive Tuning and Decoding (CTD) framework compared to Original Decoding (OD) for different models on the SUPNATINST dataset across various sampled data proportions and training epochs.
187
+
188
+ Contrastive Tuning is efficient due to its minimal training epochs and parameters, as well as its requirement for only a small amount of data sampled from the original SFT dataset, as shown in Figure 5. We sample data proportions ranging from 0.1 to 1.0 for Alpaca-7B, claude2-alpaca-13B, claude2-alpaca-7B, and Llama3-8b-alpaca. For Mistral-7B-SlimOrca and DeciLM-7B-instruct, data proportions of 0.01 to 0.1 are sampled. Typically, data proportions of 0.1 or 0.01 already lead to significant improvements. It is important to note that overfitting can occur with excessive data due to our limited tunable parameters.
189
+
190
+ The Coefficient $\alpha$ The coefficient $\alpha$ controls the intensity of contrast in both training and decoding, and the relationship between its training and decoding values is significant. We train all models with $\alpha = 0.3$; during decoding, performance improves as $\alpha$ increases up to 0.5, after which it begins to decline, as shown in Appendix D.
191
+
192
+ However, there are some exceptions. The performance of claude2-alpaca-7B does not decline with larger $\alpha$ values, and DeciLM-7B-instruct achieves its best performance at $\alpha = 0.3$. Theoretically, performance should be optimal when the training and inference $\alpha$ values are the same. We believe these anomalies arise from differences between the initial SFT settings and our Contrastive Tuning settings (as we do not have access to the original training code). Nevertheless, this also demonstrates the robustness of our method.
195
+
196
+ # 5.4 Analysis
197
+
198
+ To further illustrate the impact of the correction model on the original instruction-tuned model, we analyze the prediction tokens that are changed or unchanged by the correction model. Figure 6 shows the density of the maximum probability from the token distribution of original predictions on SUPNATINST tasks, highlighting tokens that are changed or remain unchanged by the correction model. We notice that confident base predictions typically stay unchanged, whereas those lacking confidence are frequently modified through Contrastive Decoding. This tendency is consistent with Instructive Decoding (ID) (Kim et al., 2023), demonstrating how Contrastive Decoding influences the original outputs.
199
+
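+ The density statistics above can be reproduced in spirit with a few lines of Python; here `all_logits` and `all_neg_logits` stand for per-token logits stacked during decoding (placeholders), and seaborn's `kdeplot` is assumed for the density curves.
+
+ ```python
+ import torch
+ import seaborn as sns
+
+ def changed_and_maxprob(z, z_neg, beta=0.3):
+     p = torch.softmax(z, dim=-1)                   # original token distribution
+     changed = z.argmax(-1) != (z - beta * z_neg).argmax(-1)
+     return p.max(dim=-1).values, changed
+
+ # `all_logits` / `all_neg_logits`: per-token logits gathered during decoding
+ max_p, changed = changed_and_maxprob(all_logits, all_neg_logits)
+ sns.kdeplot(max_p[changed].numpy(), label="Changed")
+ sns.kdeplot(max_p[~changed].numpy(), label="Unchanged")
+ ```
+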
200
+ However, the Contrastive Tuning and Decoding (CTD) framework performs significantly better than Instructive Decoding. We compare the density of the original probabilities of the changed tokens between CTD and ID. The KDE plot for ID shows that the maximum probabilities of the changed tokens are confined to a narrower range, whereas the KDE plot for CTD spans a wider range of probabilities. When it comes to high-confidence false predictions, Instructive Decoding fails to correct them, whereas CTD effectively adjusts these predictions, leading to stronger generation. This indicates a greater correction ability of the CTD framework, which we attribute to Contrastive Tuning.
+
+ ![](images/94c373e053ef46b9b643bba5b74ac773bb7e50e9f74b56e692fd7797704d1fa1.jpg)
+ (a) CTD and ID: Unchanged vs. Changed
+
+ ![](images/894f075acbffe77007bd10c5880a6fc450c8abd36686c9f33dc866dfb808d889.jpg)
+ (b) Changed Maximum Probability: CTD vs. ID
+
+ ![](images/5608ac429827057a1ca34c567de892d388e31e74de8c5eb5403171669a5524ad.jpg)
+ Figure 6: Kernel density estimation (KDE) of predictions from the original model (claude2-alpaca-13B) on the SUPNATINST dataset. 'Maximum Probability' refers to the highest value in the token distribution derived from the original model. 'Changed' denotes tokens changed by the correction model, while 'Unchanged' represents tokens that remain unchanged after the correction.
212
+
213
+ <table><tr><td>Method</td><td>Correction Times</td><td>Number of Tokens</td><td>Tokens &gt;10 times</td></tr><tr><td>ID-before</td><td>6063</td><td>1318</td><td>70</td></tr><tr><td>CTD-before</td><td>11154</td><td>1411</td><td>116</td></tr><tr><td>ID-after</td><td>6063</td><td>1746</td><td>89</td></tr><tr><td>CTD-after</td><td>11154</td><td>2976</td><td>131</td></tr></table>
216
+
217
+ Figure 7: Token correction frequency and variety for Claude2-alpaca-13B on the SUPNATINST dataset. ID/CTD-before and ID/CTD-after represent corrections before and after contrastive decoding, where Correction Times indicates the total number of corrected tokens, and Number of Tokens refers to the distinct tokens affected.
218
+
219
+ Figure 7 shows the frequency of token changes and the variety of tokens corrected for Claude2-alpaca-13B on the SUPNATINST task. ID changes the original prediction 6063 times, correcting 1318 unique tokens to 1746 unique tokens. In contrast, CTD makes 11154 corrections, adjusting 1411 unique tokens to 2976 unique tokens. This demonstrates that CTD not only performs more changes on token predictions but also covers a wider range of token types.
220
+
221
+ # 6 Ablation Study
222
+
223
+ # 6.1 Processed by Two Sets of Model Parameters
224
+
225
+ The CTD framework processes the input using two sets of main model parameters (the original model and the correction model), achieving better outputs than vanilla decoding, which uses only one set of model parameters. Although CTD employs an additional process with two sets of model parameters, it significantly outperforms Instructive Decoding, which also uses two sets of model parameters but shows less improvement and even degradation, as shown in Figure 1. This demonstrates that the effectiveness of CTD stems not merely from the additional process with two sets of model parameters but from the strength of Contrastive Tuning.
228
+
229
+ # 6.2 Impact of the Additional Prefix
230
+
231
+ We compare Prompt Tuning (Liu et al., 2021b; Lester et al., 2021) with the CTD framework, both using a tunable 20-token prefix, to explore the impact of the additional prefix on the model's performance.
232
+
233
+ As shown in Figure 8, prompt tuning with a 20-token prefix tends to either overfit or underfit, resulting in poor performance on the SUPNATINST dataset. However, our CTD framework, which employs a 20-token prefix for Contrastive Tuning, consistently improves performance with minimal training data.
234
+
235
+ In summary, the combination of Contrastive Decoding and Contrastive Tuning is essential for achieving optimal results. Simply using contrastive decoding or adding a tunable prefix alone does not yield significant improvements.
236
+
237
+ # 6.3 Training Data
238
+
239
+ Contrastive Tuning does not inject new knowledge into the model; rather, it teaches the model to correct its own outputs with learnable noise. This process can be viewed as aligning the original language modeling objective with Contrastive Decoding methods.
240
+
241
+ We use data sampled from the original SFT dataset for Contrastive Tuning to avoid introducing unfamiliar data that may hinder optimization.
+
+ ![](images/443f6c36cd3ef0cc32a3fa12f90b8899ab1105989ba6d5eef1be6958c1b36dc9.jpg)
+ Figure 8: Impact of the tunable 20-token prefix for Prompt Tuning (PT) and the Contrastive Tuning and Decoding (CTD) framework on the SUPNATINST dataset, with Original Decoding (OD) as the baseline. The same data proportion and training settings are used across methods.
257
+
258
+ To illustrate this, we conduct an experiment using additional data for Contrastive Tuning on Alpaca-7B. The results, shown in Figure 9, indicate that attempting to introduce new knowledge through contrastive tuning leads to performance degradation.
259
+
260
+ ![](images/9803ac1725981a0aa1d998e66f81e2e3bc0234cca261394797278f7c7d0c31b4.jpg)
+ Figure 9: Different datasets used for contrastive tuning. SFT Data refers to tuning with previous SFT data, while Other Data refers to tuning with unknown out-of-distribution data.
268
+
269
+ # 7 Discussion
270
+
271
+ # 7.1 Training Cost & Inference Latency
272
+
273
+ Although our CTD framework is effective, the training cost is critical for its practical application. Firstly, we achieve data efficiency by sampling a small amount of data from the SFT data. Secondly, we are committed to finding the most parameter-efficient methods that use the least amount of memory. We therefore employ prompt tuning for Contrastive Tuning, which uses the least memory among parameter-efficient methods. When performing Contrastive Decoding, we simply double the input batch and prepend the tunable prefix to half of the inputs, which requires only one set of main model parameters in GPU memory. In contrast, other PEFT methods, such as LoRA (Hu et al., 2021), require storing two sets of model parameters in memory simultaneously.
276
+
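+ A sketch of this doubled-batch trick, assuming a HuggingFace-style causal LM: the plain sequence is left-padded and masked so that a single forward pass over one copy of the weights yields both $z_t$ and $\tilde{z}_t$. Names and padding details are illustrative.
+
+ ```python
+ import torch
+
+ def paired_logits(model, input_ids, prefix_embeds):
+     tok = model.get_input_embeddings()(input_ids)               # (1, L, H)
+     P, L = prefix_embeds.size(0), tok.size(1)
+     pad = torch.zeros_like(tok[:, :1]).expand(-1, P, -1)        # left padding
+     plain = torch.cat([pad, tok], dim=1)                        # row without prefix
+     prefixed = torch.cat([prefix_embeds.unsqueeze(0), tok], dim=1)
+     mask = torch.ones(2, P + L, dtype=torch.long)
+     mask[0, :P] = 0                                             # ignore the padding
+     out = model(inputs_embeds=torch.cat([plain, prefixed], dim=0),
+                 attention_mask=mask).logits
+     return out[0, -1], out[1, -1]                               # z_t, z~_t
+ ```
+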
277
+ We use 20 tokens to initialize the input prefix for the 7B, 8B, and 13B models alike, which corresponds to tuning about $0.0012\%$, $0.0009\%$, and $0.0008\%$ of the model parameters, respectively.
278
+
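+ A quick back-of-the-envelope check of these fractions, under the assumption of Llama-style hidden sizes (4096 for the 7B/8B models, 5120 for 13B); the exact percentages depend on the precise parameter counts.
+
+ ```python
+ for total, hidden in [(6.7e9, 4096), (8.0e9, 4096), (13.0e9, 5120)]:
+     prefix_params = 20 * hidden            # 20 virtual tokens x hidden dim
+     print(f"{prefix_params:>7} params -> {100 * prefix_params / total:.4f}%")
+ # ~0.0012%, ~0.0010%, ~0.0008%: roughly consistent with the quoted figures
+ ```
+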
279
+ Compared to the original decoding method, Contrastive Decoding introduces only the 20 prefix tokens of additional sequence length and the memory of a doubled input batch. This overhead is equal to that of Instructive Decoding (Kim et al., 2023) and remains affordable.
280
+
281
+ # 7.2 Difference between Contrastive Tuning and Contrastive Learning
282
+
283
+ To further understand Contrastive Tuning, it is necessary to discuss the differences between Contrastive Tuning and contrastive learning (Gao et al., 2021).
284
+
285
+ Contrastive learning enables models to map similar instances close together in a latent space while pushing apart dissimilar ones via a contrastive loss (Gao et al., 2021). Both supervised and self-supervised contrastive learning require designed tasks that construct similar and dissimilar pairs.
286
+
287
+ In contrast, Contrastive Tuning aims to create a correction model that modifies the instruction-tuned model's output for the same input, computing the cross-entropy loss against the labels.
288
+
289
+ # 7.3 LoRA for Contrastive Tuning
290
+
291
+ To further demonstrate the effectiveness of the correction framework, we employ LoRA (Hu et al., 2021) for Contrastive Tuning, with the results presented in Table 2.
292
+
293
+ CTD-LoRA also performs well on MMLU (Hendrycks et al., 2021) and SUPNATINST, showing the potential of our CTD framework. However, CTD-LoRA requires more memory: Contrastive Tuning with LoRA needs two sets of main model parameters, whereas the prompt-tuning variant needs only one, making it less efficient than our contrastive prompt tuning.
294
+
295
+ <table><tr><td></td><td>MMLU</td><td>SUPNATINST</td></tr><tr><td>OD</td><td>33.25</td><td>35.46</td></tr><tr><td>CTD-LoRA</td><td>35.60</td><td>37.29</td></tr></table>
296
+
297
+ Table 2: Performance of CTD-LoRA on MMLU and SUPNATINST
298
+
299
+ # 8 Conclusion
300
+
301
+ In this paper, we propose the Contrastive Tuning and Decoding (CTD) framework for instruction-tuned models to achieve stronger generation with minimal cost and to ensure better generalization on challenging tasks with unfamiliar data distributions. Compared to Instructive Decoding, we demonstrate the necessity of Contrastive Tuning. Through our CTD framework, instruction-tuned models with different backbones, prompt formats, and SFT data show stable improvements on various text generation tasks.
302
+
303
+ # Limitations
+
+ Our experiments show the feasibility of the Contrastive Tuning and Decoding (CTD) framework. However, there is much more to do, given our limited time and resources. Firstly, the impact of the intensity parameter $\alpha$ on contrastive tuning is not clear, nor is how it influences the optimization process. Secondly, we do not yet know the minimum amount of data sampled from the SFT data that contrastive tuning requires, which could further simplify the CTD framework. Finally, we have only tried prompt tuning and LoRA for contrastive tuning; other parameter-efficient tuning methods may have more potential.
304
+
305
+ # Acknowledgments
306
+
307
+ We want to thank all the anonymous reviewers for their valuable comments. This work was supported by the National Science Foundation of China (NSFC No. 62206194 and 62276077), the Natural Science Foundation of Jiangsu Province, China (Grant No. BK20220488), the Young Elite Scientists Sponsorship Program by CAST (2023QNRC001), and Huawei Technologies.
310
+
311
+ # References
312
+
313
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.
314
+ Lichang Chen, Khalid Saifullah, Ming Li, Tianyi Zhou, and Heng Huang. 2023. Claude2-alpaca: Instruction tuning datasets distilled from claude. https://github.com/Lichang-Chen/claude2-alpaca.
315
+ Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James Glass, and Pengcheng He. 2023. Dola: Decoding by contrasting layers improves factuality in large language models. arXiv preprint arXiv:2309.03883.
316
+ Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2024. Scaling instruction-finetuned language models. Journal of Machine Learning Research, 25(70):1-53.
317
+ Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2024. Qlora: Efficient finetuning of quantized llms. Advances in Neural Information Processing Systems, 36.
318
+ Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proc. of NAACL.
319
+ Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. arXiv preprint arXiv:2104.08821.
320
+ Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, and Dawn Song. 2023. The false promise of imitating proprietary llms. arXiv preprint arXiv:2305.15717.
321
+ Zeyu Han, Chao Gao, Jinyang Liu, Sai Qian Zhang, et al. 2024. Parameter-efficient fine-tuning for large models: A comprehensive survey. arXiv preprint arXiv:2403.14608.
322
+ Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. Proceedings of the International Conference on Learning Representations (ICLR).
323
+
324
+ Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
325
+ Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. arXiv preprint arXiv:2310.06825.
326
+ Taehyeon Kim, Joonkee Kim, Gihun Lee, and Se-Young Yun. 2023. Distort, distract, decode: Instruction-tuned model can refine its response from noisy instructions. arXiv preprint arXiv:2311.00233.
327
+ Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691.
328
+ Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, and Mike Lewis. 2022. Contrastive decoding: Open-ended text generation as optimization. arXiv preprint arXiv:2210.15097.
329
+ Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval.
330
+ Wing Lian, Bleys Goodson, Eugene Pentland, Austin Cook, Chanvichet Vong, and "Teknium". 2023a. Openorca: An open dataset of gpt augmented flan reasoning traces. https://huggingface.co/Open-Orca/OpenOrca.
331
+ Wing Lian, Bleys Goodson, Guan Wang, Eugene Pentland, Austin Cook, Chanvichet Vong, and "Teknium". 2023b. Mistralslimorca: Mistral-7b model instruct-tuned on filtered, corrected, openorcav1 gpt-4 dataset.
332
+ Wing Lian, Guan Wang, Bleys Goodson, Eugene Pentland, Austin Cook, Chanvichet Vong, and "Teknium". 2023c. Slimorca: An open dataset of gpt-4 augmented flan reasoning traces, with verification.
333
+ Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.
334
+ Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A Smith, and Yejin Choi. 2021a. Dexperts: Decoding-time controlled text generation with experts and anti-experts. arXiv preprint arXiv:2105.03023.
335
+ Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2021b. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. arXiv preprint arXiv:2110.07602.
336
+
337
+ Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
338
+ Niklas Muennighoff, Alexander Rush, Boaz Barak, Teven Le Scao, Nouamane Tazi, Aleksandra Piktus, Sampo Pyysalo, Thomas Wolf, and Colin A Raffel. 2024. Scaling data-constrained language models. Advances in Neural Information Processing Systems, 36.
339
+ Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. 2023. Orca: Progressive learning from complex explanation traces of gpt-4. Preprint, arXiv:2306.02707.
340
+ Chenyu Shi, Xiao Wang, Qiming Ge, Songyang Gao, Xianjun Yang, Tao Gui, Qi Zhang, Xuanjing Huang, Xun Zhao, and Dahua Lin. 2024. Navigating the overkill in large language models. arXiv preprint arXiv:2401.17633.
341
+ Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca.
342
+ DeciAI Research Team. 2023. Decilm-7b-instruct.
343
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
344
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
345
+ Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. 2022. Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks. arXiv preprint arXiv:2204.07705.
346
+ Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.
347
+ Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682.
348
+
349
+ Fuzhao Xue, Yao Fu, Wangchunshu Zhou, Zangwei Zheng, and Yang You. 2024. To repeat or not to repeat: Insights from scaling llm under token-crisis. Advances in Neural Information Processing Systems, 36.
350
+ Gal Yona, Or Honovich, Itay Laish, and Roee Aharoni. 2023. Surfacing biases in large language models using contrastive input decoding. arXiv preprint arXiv:2305.07378.
351
+ Renrui Zhang, Jiaming Han, Chris Liu, Peng Gao, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, and Yu Qiao. 2023a. Llama-adapter: Efficient fine-tuning of language models with zero-init attention. arXiv preprint arXiv:2303.16199.
352
+ Shengyu Zhang, Linfeng Dong, Xiaoya Li, Sen Zhang, Xiaofei Sun, Shuhe Wang, Jiwei Li, Runyi Hu, Tianwei Zhang, Fei Wu, et al. 2023b. Instruction tuning for large language models: A survey. arXiv preprint arXiv:2308.10792.
353
+
354
+ # A Open-Source Models Utilized in Our Study
355
+
356
+ We provide a list of the open-source models used in our work, as shown in Table 3. These models are popular checkpoints from Huggingface, instruction-tuned on open-source datasets, and have demonstrated strong performance on downstream tasks. Although we are unable to obtain the training code for these models, the strong results achieved under this constraint underscore the robustness and practicality of our approach.
357
+
358
+ # B Overview of the SUPNATINST Dataset
359
+
360
+ SUPNATINST (Wang et al., 2022) is a large-scale dataset comprising over 1,600 natural language processing (NLP) tasks designed to enhance model generalization through declarative instructions.
361
+
362
+ Each task in SUPNATINST includes a 'Definition' prompt, which serves as an instructional guide. For zero-shot evaluations, only the 'Definition' is provided. Following the approach from Instructive Decoding (Kim et al., 2023), our experiments focus on the English portion of the dataset, evaluating 100 instances per task as outlined by Wang et al. (2022). This subset consists of 119 evaluation tasks, categorized into the 12 groups shown in Table 4.
363
+
364
+ <table><tr><td>Abbreviation</td><td>Task Category</td></tr><tr><td>AC</td><td>Answerability Classification</td></tr><tr><td>CEC</td><td>Cause-Effect Classification</td></tr><tr><td>DT</td><td>Data-to-Text</td></tr><tr><td>GEC</td><td>Grammar Error Correction</td></tr><tr><td>CR</td><td>Coreference Resolution</td></tr><tr><td>KT</td><td>Keyword Tagging</td></tr><tr><td>DAR</td><td>Dialogue Act Recognition</td></tr><tr><td>OE</td><td>Overlap Extraction</td></tr><tr><td>QR</td><td>Question Rewriting</td></tr><tr><td>TE</td><td>Textual Entailment</td></tr><tr><td>TG</td><td>Title Generation</td></tr><tr><td>WA</td><td>Word Analogy</td></tr></table>
365
+
366
+ Table 4: Task Categories in the SUPNATINST Dataset
367
+
368
+ # C Prompt Formats
369
+
370
+ Our experiments cover three prompt formats: Alpaca format, ChatML format, and another commonly used format, as shown in Table 6.
371
+
372
+ # D The Coefficient $\alpha$
373
+
374
+ Figure 11 illustrates the impact of the coefficient $\alpha$ on model performance during Contrastive Decoding.
375
+
376
+ <table><tr><td>Model</td><td>Huggingface Model ID</td></tr><tr><td>Alpaca-7B</td><td>chavinlo/alpaca-native</td></tr><tr><td>DeciLM-7B-instruct</td><td>Deci/DeciLM-7B-instruct</td></tr><tr><td>Mistral-7B-SlimOrca</td><td>Open-Orca/Mistral-7B-SlimOrca</td></tr><tr><td>Llama3-8b-alpaca</td><td>lainshower/Llama3-8b-alpaca</td></tr><tr><td>claude2-alpaca-7B</td><td>umd-zhou-lab/claude2-alpaca-7B</td></tr><tr><td>claude2-alpaca-13B</td><td>umd-zhou-lab/claude2-alpaca-13B</td></tr></table>
+
+ Table 3: Open-Source Models Used in Our Work
377
+
378
+ # E Performance on DROP
379
+
380
+ DROP (Discrete Reasoning Over Paragraphs) (Dua et al., 2019) is a reading comprehension benchmark designed to challenge language models with 96,000 crowdsourced, adversarially created questions. Unlike previous datasets, DROP requires language models to perform discrete operations such as addition, counting, and sorting by resolving references across multiple positions within a paragraph. This demands a deeper and more comprehensive understanding of the paragraph's content.
381
+
382
+ We evaluate the Contrastive Tuning and Decoding framework on the DROP dataset, where it also outperforms both Original Decoding and Instructive Decoding, as shown in Figure 10 and Table 5.
383
+
384
385
+
386
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Methods</td><td>DROP</td></tr><tr><td>Exact Match</td></tr><tr><td rowspan="3">Alpaca-7B</td><td>OD</td><td>14.01</td></tr><tr><td>ID</td><td>14.44</td></tr><tr><td>CTD</td><td>16.39</td></tr><tr><td rowspan="3">DeciLM-7B-instruct</td><td>OD</td><td>47.00</td></tr><tr><td>ID</td><td>46.75</td></tr><tr><td>CTD</td><td>49.52</td></tr><tr><td rowspan="3">claude2-alpaca-7B</td><td>OD</td><td>18.08</td></tr><tr><td>ID</td><td>17.88</td></tr><tr><td>CTD</td><td>19.10</td></tr><tr><td rowspan="3">claude2-alpaca-13B</td><td>OD</td><td>13.53</td></tr><tr><td>ID</td><td>12.91</td></tr><tr><td>CTD</td><td>14.51</td></tr><tr><td rowspan="3">Mistral-7B-SlimOrca</td><td>OD</td><td>46.11</td></tr><tr><td>ID</td><td>46.48</td></tr><tr><td>CTD</td><td>46.64</td></tr></table>
387
+
388
+ Table 5: Performance on DROP using Original Decoding (OD), Instructive Decoding (ID), and the Contrastive Tuning and Decoding (CTD) framework, measured by exact match.
389
+
390
+ # F Comparison and Integration of the CTD Framework, Instructive Decoding, and Prompt Tuning
391
+
392
+ Instructive Decoding is a variant of contrastive decoding that generally outperforms other contrastive methods. All contrastive decoding approaches aim to leverage distracting output logits to enhance the original output. However, a significant limitation of these methods is the lack of clear evidence explaining why distracting logits effectively influence the original output, as the concept has mostly been accepted based on intuition.
395
+
396
+ ![](images/9735b21d3e5cb6cd962a822fb883cd85e03a8ff5b8fd889ba6266a023b4e2a74.jpg)
397
+ Figure 10: Impact of Training Epochs on Model Performance for the DROP Dataset
398
+
399
+ One key issue with contrastive decoding methods is the misalignment between language modeling objectives during training and decoding. To address this, our Contrastive Tuning technique bridges the gap between the training and decoding phases.
400
+
401
+ Our approach is unique in training a correction model with only a few parameters, based on the source model and guided by a clear optimization objective. This correction model refines the source model's output by verifying correct tokens and adjusting incorrect ones. Through contrastive tuning, the model gains this corrective ability, an innovation that has not been explored before. Prior methods, lacking such a well-defined objective, leave it ambiguous when the distracting logits should correct or affirm the original outputs.
402
+
403
+ Instructive Decoding is primarily effective for tasks that align closely with the distribution of the SFT data, which limits its improvement potential. By contrast, the CTD framework has been evaluated across multiple models trained with diverse SFT data and tested on more challenging and unfamiliar tasks. When applied to AlpacaEval, Instructive Decoding shows a performance drop compared to original decoding methods, revealing its weak generalization and limited practical value. Nevertheless, it provides useful insights by indicating that models can potentially recognize their own errors and self-correct. We are the first to unlock this potential through contrastive tuning.
+
+ ![](images/df165b5a86e39a943dfdac973c69b466810169b4d4071ffcd9ff6cfd803ffe45.jpg)
+ Figure 11: Impact of the coefficient $\alpha$ on decoding performance (measured by Rouge-L) for various models on the SUPNATINST dataset.
419
+
420
+ Prompt tuning, a parameter-efficient fine-tuning (PEFT) method, is currently underutilized due to its comparatively lower performance. Within the CTD framework, prompt tuning serves as one method of implementing contrastive tuning. Preliminary experiments with LoRA in our study suggest that other PEFT methods also show promise.
421
+
422
+ However, we believe that prompt tuning is what makes our method stand out, as it enables contrastive tuning to be highly efficient. Prompt tuning allows the model to store only a single set of parameters in memory during both contrastive tuning and contrastive decoding. In contrast, methods like LoRA would require additional parameters and would need to store two sets of model parameters during contrastive tuning. With contrastive prompt tuning, updating as little as $0.0012\%$ of the parameters for a 7B model leads to outstanding results. We attribute this efficiency to the model's inherent ability to self-correct (as demonstrated with Instructive Decoding), combined with our clear optimization objective that activates this capability at minimal cost.
425
+
426
+ # G Decoding Strategy and Evaluation Metric
427
+
428
+ We employ greedy decoding due to the diverse nature of the SUPNATINST dataset, which consists of 119 tasks spanning generation, classification, and question-answering. While sampling-based decoding can enhance output diversity, its inherent randomness makes it less suitable for tasks like classification and question-answering.
429
+
430
+ Initially, we consider both Rouge-L and exact match metrics for evaluation. However, our research is primarily focused on out-of-distribution text generation, classification, and question-answering tasks. Even though some tasks have a constrained output space, the outputs do not always perfectly match the reference answers. As a result, exact match is excluded, as it does not accurately reflect model performance in this context.
431
+
432
+ <table><tr><td>Prompt Format</td></tr><tr><td>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.</td></tr><tr><td>### Instruction:</td></tr><tr><td>{instruction}</td></tr><tr><td>### Input:</td></tr><tr><td>{input}</td></tr><tr><td>### Response:</td></tr><tr><td>{output}</td></tr><tr><td>&lt;|im_start|&gt;system
+ {system_prompt}&lt;|im_end|&gt;</td></tr><tr><td>&lt;|im_start|&gt;user
+ {user_prompt}&lt;|im_end|&gt;</td></tr><tr><td>&lt;|im_start|&gt;assistant
+ {output}&lt;|im_end|&gt;</td></tr><tr><td>System:</td></tr><tr><td>{system_prompt}</td></tr><tr><td>User:</td></tr><tr><td>{user_prompt}</td></tr><tr><td>Assistant:</td></tr><tr><td>{output}</td></tr></table>
436
+
437
+ Table 6: Prompt Formats Used In Our Work
achievingstrongergenerationviasimplecontrastivetuning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:12ce91e885b2756bd3fc8652343e08f022bca05b2038cf8530229a7fb1151aa8
3
+ size 720719
achievingstrongergenerationviasimplecontrastivetuning/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6ad7eefbb071f7ef7c029927729166cb27d5223fbad1c5d7ab89d3006a2a92e3
3
+ size 451123
acoarsetofineprototypelearningapproachformultilabelfewshotintentdetection/0f966289-6db2-4a15-a218-dc1fc3e38066_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b75f059a7b5b011a1d90326c4095a4f9450a8cd796948e8adff8f69b3e71741e
3
+ size 114095
acoarsetofineprototypelearningapproachformultilabelfewshotintentdetection/0f966289-6db2-4a15-a218-dc1fc3e38066_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a046ada63695577912960e4e7080f84fe3c9cd04e8833312e2fb7b4c068e8be5
3
+ size 135370
acoarsetofineprototypelearningapproachformultilabelfewshotintentdetection/0f966289-6db2-4a15-a218-dc1fc3e38066_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:48748fc567194e291bd3c49ece5836943f7c4c2ebf3a98b3e7905f8405ae7d9e
3
+ size 832416
acoarsetofineprototypelearningapproachformultilabelfewshotintentdetection/full.md ADDED
@@ -0,0 +1,558 @@
1
+ # A Coarse-to-Fine Prototype Learning Approach for Multi-Label Few-Shot Intent Detection
2
+
3
+ Xiaotong Zhang $^{1}$ , Xinyi Li $^{1}$ , Feng Zhang $^{2}$ , Zhiyi Wei $^{1}$ , Junfeng Liu $^{1}$ , Han Liu $^{*1}$
4
+
5
+ $^{1}$ School of Software Technology, Dalian University of Technology
6
+
7
+ $^{2}$ School of Computer Science, Peking University
8
+
9
+ {zhangxt,hanliu}@dlut.edu.cn, {lixinyi_dlut,zhiyi.wei,junfeng.liu}@hotmail.com, zfeng.maria@gmail.com
12
+
13
+ # Abstract
14
+
15
+ Few-shot intent detection is a challenging task, particularly in scenarios involving multiple labels and diverse domains. This paper presents a novel prototype learning approach that combines label synset augmentation and coarse-to-fine prototype distillation for multi-label few-shot intent detection. To tackle the data scarcity issue and the lack of information for unseen domains, we propose to enhance the representations of utterances with label synset augmentation and refine the prototypes by distilling coarse domain knowledge from a universal teacher model. To address multilingual intent detection in real-world dialogue systems, we fine-tune a cross-lingual teacher model to make our method adapt quickly to different languages, and we re-annotate two non-English task-oriented dialogue datasets, CrossWOZ and JMultiWOZ, in multi-label form. Experimental results on one English and two non-English datasets demonstrate that our approach significantly outperforms existing methods in terms of accuracy and generalization across different domains.
16
+
17
+ # 1 Introduction
18
+
19
+ Intent detection, which aims to identify the intents behind user utterances, is a core component of task-oriented dialogue systems (Shen et al., 2021; Chen et al., 2017), as its performance directly affects downstream decisions and policies. In real-world conversation scenarios, a single utterance can contain multiple intents, and the ways of expressing an intent are diverse. The data scarcity issue makes intent detection rather challenging, as user intents constantly emerge in rapidly changing domains (Vulić et al., 2022). Therefore, the imperative to accurately recognize multiple intents in the low-data regime motivates multi-label few-shot intent detection (FS-MLID).
20
+
21
+ Existing works mainly focus on the popular metric-based meta-learning paradigm for the FS-MLID task, which aims to learn a metric space that can make label predictions by calculating distances between query samples and prototypes of different classes. By training on a set of sampled FS-MLID tasks, the model learns general knowledge to rapidly generalize to new tasks with novel intent classes. In particular, CTLR (Hou et al., 2021) proposes to estimate label-instance relevance scores and uses a meta-calibrated threshold to select multiple associated intent labels. DCKPN (Zhang et al., 2023) constructs a dual class knowledge propagation network that combines label information and feature structure to guide intent prediction.
22
+
23
+ However, existing methods neglect that it is difficult to estimate the class prototypes in low-resource settings, and they also lack domain information for predicting novel classes (Wang et al., 2024), thereby diminishing the discriminability of the metric space and the model's generalization. In addition, previous works merely focus on the monolingual setting and conduct experiments only on English datasets (Khalil et al., 2019). In contrast to English, most other languages lack sufficient annotated data to train high-quality intent detection models, which ultimately hinders the application of task-oriented dialogue systems to a much wider spectrum of languages.
24
+
25
+ To address the aforementioned issues, we revisit the FS-MLID task from a multilingual perspective and propose a novel Coarse-to-Fine Prototype Learning method (CFPL), which is shown in Figure 1. Considering the scarcity of samples in few-shot learning and the rich semantic information beneath class labels, we propose to enhance the representations of utterances with label synset augmentation. Specifically, we first generate a synset for each intent label using Open Multilingual Wordnet (Bond et al., 2016), then we propose a refinement method to further eliminate the noise in the expanded label set.
26
+
27
+ ![](images/217c446af36574529e0a6ca9bef22c0894acc7996ab16e76bc5a2ab880259136.jpg)
28
+
29
+ ![](images/7e520665dae7235ff652d907c9b55889aa75648eb599f4b4d30e179cbacc9a2c.jpg)
30
+ Figure 1: The framework of the CFPL method. First, a label synset is generated from WordNet, using semantic similarity to select the top augmented labels for each class. These augmented labels are then concatenated with support utterances to form enhanced embeddings. In the coarse-to-fine prototype distillation and inference stage, the enhanced embeddings of support data, the query embeddings, and the label embeddings are combined to learn instance-level and class-level representations for the support and query data in each episode. Then these representations are interacted to obtain the intra-domain prototypes via self-attention and cross-attention. Meanwhile, the inter-domain prototypes are learned by fine-tuning a cross-lingual teacher model, and are further distilled into fine-grained prototypes via cross-attention. Finally, the intra-domain and fine-grained prototypes are fused into calibrated prototypes, which are input into a prototypical classifier to predict multiple intents for each query data.
31
+
32
+ To precisely estimate the class prototypes in the absence of domain knowledge, we further devise the coarse-to-fine prototype distillation. In particular, intra-domain student prototypes are first learned through feature interactions at the instance and class levels for a specific dataset. Then the inter-domain teacher prototypes that contain coarse domain knowledge from the teacher model are distilled into fine-grained prototypes for a specific dataset, which are further fused with the intra-domain student prototypes to constitute more precise prototypes. Moreover, we fine-tune a cross-lingual teacher model to make our method adapt quickly to different languages. To verify this, we re-annotate two non-English task-oriented dialogue datasets in multi-label form, i.e., CrossWOZ (Zhu et al., 2020) and JMultiWOZ (Ohashi et al., 2024), which contain multiple domains and thus can simulate the few-shot scenario in unseen domains. The contributions of this paper can be summarized as follows:
33
+
34
+ (1) We propose a coarse-to-fine prototype learning approach to recognize multiple intents for an utterance in low-resource settings. We first design a label augmentation strategy with semantic similarity refinement to generate enhanced data representations. During the prototype distillation, we first conduct feature interactions between samples at the instance and class levels to learn intra-domain student prototypes, then we distill related coarse domain knowledge from the universal teacher model into fine-grained prototypes for a specific dataset.
37
+
38
+ (2) To bridge the multilingual gap, we propose a simple but efficient fine-tuning method that enables the teacher model to fast adapt to different languages. Furthermore, we introduce and release two non-English FS-MLID datasets, which is an important attempt towards multilingual intent detectors for task-oriented dialogues.
39
+ (3) Extensive experiments demonstrate that our proposed methods outperform competitive baselines on three FS-MLID benchmarks, and is adept at handling low-resource situations.
40
+
41
+ # 2 Related Works
42
+
43
+ # 2.1 Multi-Label Intent Detection
44
+
45
+ Intent detection aims to mine the main purpose behind user utterances. Many studies (Goo et al., 2018; Qin et al., 2019; Liu et al., 2021) have achieved promising performance for intent detection. However, they neglect the more practical and challenging scenario, multi-label intent detection, which aims to assign multiple intents to samples. Rychalska et al. (2018) first propose to conduct multi-label intent detection. Considering the close relationship between intent detection and slot filling, Gangadharaiah and Narayanaswamy (2019), Qin et al. (2020) and Qin et al. (2021) design different strategies to leverage slot information to enhance multi-intent detection. Zhu et al. (2024) introduce global static and local dynamic heterogeneous label graphs to model interactions among samples. Wu et al. (2021) propose to construct a label embedding space by using label words. Vulić et al. (2022) conduct contrastive conversational fine-tuning on pre-trained sentence encoders.
48
+
49
+ # 2.2 Multi-Label Few-Shot Learning
50
+
51
+ Few-shot learning (FSL) aims to learn from limited labeled samples and recognize novel classes that have not been seen during the training process. Compared with single-label FSL, multi-label FSL is more common in many real scenarios, but only a few works have addressed it. Previous works focus on the image domain (Alfassy et al., 2019) or the audio domain (Cheng et al., 2019). In the natural language processing domain, Liu et al. (2022) propose to address the multi-label aspect category detection task with a novel label-enhanced prototypical network. Only a few studies, such as CTLR (Hou et al., 2021) and DCKPN (Zhang et al., 2023), have addressed the FS-MLID scenario, but they cannot well solve the data scarcity issue and the lack of information for unseen domains. Moreover, they only focus on the monolingual setting.
52
+
53
+ # 3 The Proposed Method
54
+
55
+ # 3.1 Problem Definition
56
+
57
+ Few-shot learning aims to train a model that can recognize unknown categories with few labeled examples (Snell et al., 2017). In accordance with prior works, we follow the episodic paradigm on account of its effectiveness (Yang et al., 2021). Given a set of training classes $\mathcal{C}_{train}$ and testing classes $\mathcal{C}_{test}$ with $\mathcal{C}_{train} \cap \mathcal{C}_{test} = \emptyset$, the model is trained with numerous samples from $\mathcal{C}_{train}$, then directly adopted to unseen classes $\mathcal{C}_{test}$ with few labeled samples. In each episode, we have a support set $\mathcal{S} = \{(x_i, \pmb{y}_i)\}_{i=1}^{N \times K}$, where $x_i$ represents a data sample, $\pmb{y}_i$ is the corresponding class label, $N$ is the number of classes and $K$ is the number of support data in each class, and a query set $\mathcal{Q} = \{(x_j,\pmb{y}_j)\}_{j = 1}^{Q}$, where $Q$ is the number of query samples.
60
+
61
+ Multi-label few-shot intent detection allows each utterance to be associated with multiple intents. Given an utterance $x$, its label can be represented with a vector $\mathbf{y} = [y^{1}, y^{2}, \dots, y^{N}]$, where $y^{i} \in \{0, 1\}$ and $N$ is the number of possible intents.
62
+
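+ As a toy illustration of this setup, an $N$-way $K$-shot multi-label episode can be sampled as follows, with each label encoded as a binary vector $\mathbf{y} \in \{0,1\}^N$; the dataset layout is hypothetical.
+
+ ```python
+ import random
+
+ def sample_episode(data_by_class, classes, N=5, K=1, Q=16):
+     """Toy N-way K-shot multi-label episode sampler (hypothetical data layout)."""
+     episode_classes = random.sample(classes, N)
+     to_vec = lambda labels: [int(c in labels) for c in episode_classes]
+     support = [(utt, to_vec(labels))
+                for c in episode_classes
+                for utt, labels in random.sample(data_by_class[c], K)]
+     # Query utterances may carry several of the N sampled intents at once.
+     pool = list({utt: labels for c in episode_classes
+                  for utt, labels in data_by_class[c]}.items())
+     query = [(utt, to_vec(labels)) for utt, labels in random.sample(pool, Q)]
+     return support, query
+ ```
+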
63
+ # 3.2 Framework Overview
64
+
65
+ Our method consists of three components: Label Synset Augmentation, Coarse-to-Fine Prototype Distillation, and Optimization and Inference. We begin with Label Synset Augmentation to conduct label augmentation for each class and enhance the representations of support data with these augmented labels. Then we implement Coarse-to-Fine Prototype Distillation, which involves four parts. Inter-Domain Prototype Learning applies a pretrained teacher model to capture coarse domain features. In the Cross-Lingual Teacher part, a cross-lingual teacher model is fine-tuned to adapt quickly to different languages. Intra-Domain Prototype Learning conducts feature interactions for the support and query data within each episode. Prototype Distillation produces fine-grained prototypes from the coarse inter-domain prototypes and combines them with the intra-domain student prototypes to obtain the final prototypes. During Optimization and Inference, we adopt a prototypical classifier and multi-label inference to train the whole model and achieve multi-intent prediction for each query data sample.
66
+
67
+ # 3.3 Label Synset Augmentation
68
+
69
+ Many prior methods on data augmentation have validated the effectiveness of label enhancement in the few-shot learning setting (Luo et al., 2021; Zhang et al., 2022; Liu et al., 2022). For intent detection, the core issue is to extract the user intents related to the utterance across all aspects and granularities. Hence, we choose Open Multilingual Wordnet (Bond et al., 2016), a large lexical database of synsets for over 150 languages, to generate multiple synonyms for each original label. The process of label synset augmentation is shown in Figure 1. For each training label $y_{i}$, we generate a set of augmented labels $\mathcal{Y}_i = \{\tilde{y}_{i,1},\tilde{y}_{i,2},\dots, \tilde{y}_{i,n}\}$. Different from previous research that only uses Synonym Replacement (Wei and Zou, 2019), which randomly selects an augmented label to replace the original intent label, we further propose a label refinement method to eliminate noisy labels from the set $\mathcal{Y}_i$.
70
+
71
72
+
73
+ An ideal augmented label $\tilde{y}$ should have high semantic similarity with both the original label $y$ and the corresponding user utterance $x$ (Hu et al., 2022). Thus we concatenate the utterance and its original label into a sequence $s$ as the basis for selecting enhanced labels. We introduce an auxiliary semantic similarity function $sim(\cdot, \cdot)$ to guide the selection, where $sim(\cdot, \cdot)$ is a model that outputs the semantic similarity between two text samples. In particular, for a sequence $s$, we use multilingual USE (Chidambaram et al., 2019) to compute the similarity between $s$ and each augmented label, and select the top-$k$ augmented labels to enhance the representations of the support data.
74
+
75
+ $$
76
+ \mathcal{J} = \operatorname{top}\text{-}k\left(\left[\operatorname{sim}(s, \tilde{y}_{i,j})\right]_{j=1}^{n}\right), \tag{1}
77
+ $$
78
+
79
+ where top-$k(\mathcal{A})$ returns the indices of the $k$ largest elements of the set $\mathcal{A}$, and $k$ is a hyperparameter.
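+
+ A minimal sketch of the selection in Eq. (1) is given below; `encode` is a stand-in for a multilingual sentence encoder (the paper uses multilingual USE), and cosine similarity plays the role of $sim(\cdot,\cdot)$.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+
+ def select_topk_labels(utterance, label, augmented_labels, encode, k=2):
+     """Eq. (1): keep the k augmented labels most similar to s = [utterance; label].
+
+     `encode` is assumed to map a string to a 1-D embedding tensor.
+     """
+     s_vec = encode(utterance + " " + label)                       # (d,)
+     y_vecs = torch.stack([encode(y) for y in augmented_labels])   # (n, d)
+     sims = F.cosine_similarity(y_vecs, s_vec.unsqueeze(0), dim=-1)
+     top = torch.topk(sims, k=min(k, len(augmented_labels))).indices
+     return [augmented_labels[j] for j in top.tolist()]
+ ```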
80
+
81
+ Denoting $\{\tilde{y}_{i,j}\}_{j\in \mathcal{J}}$ as the subset of $k$ augmented labels from $\mathcal{Y}_i$, we prepend them to the utterance and obtain the label-enhanced embedding of each utterance $x_{i}$ through the student model:
82
+
83
+ $$
84
+ \boldsymbol{u}_{i} = \operatorname{AvgPooling}\left(\left[\boldsymbol{e}_{1}; \dots; \boldsymbol{e}_{j}; \boldsymbol{h}_{x_{i}}\right]\right), \tag{2}
85
+ $$
86
+
87
+ where $;$ represents concatenation along the token length dimension, and AvgPooling$(\cdot)$ obtains a single vector by average pooling a set of vectors along the token length dimension. $e_j$ is the embedding of the label $\tilde{y}_{i,j}$, and $h_{x_i}$ is the feature vector of the utterance $x_{i}$; both are obtained from the student model $f_{\theta}(\cdot)$, such as BERT.
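+
+ The following sketch approximates Eq. (2) with a Hugging Face BERT student; as a simplification, it encodes the prepended labels and the utterance jointly and average-pools the token states, rather than concatenating separately computed label embeddings.
+
+ ```python
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
+ student = AutoModel.from_pretrained("bert-base-uncased")
+
+
+ def label_enhanced_embedding(utterance, selected_labels):
+     """Eq. (2): prepend the selected labels, then average-pool the token states."""
+     text = " ".join(selected_labels) + " " + utterance
+     batch = tokenizer(text, return_tensors="pt")
+     states = student(**batch).last_hidden_state        # (1, L, 768)
+     mask = batch["attention_mask"].unsqueeze(-1)       # (1, L, 1)
+     u = (states * mask).sum(dim=1) / mask.sum(dim=1)   # masked average pooling
+     return u.squeeze(0)                                # (768,)
+ ```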
88
+
89
+ Finally, we learn the label enhanced representations of all the support data through the student model.
90
+
91
+ # 3.4 Coarse-to-Fine Prototype Distillation
92
+
93
+ In this section, we propose to learn the prototypes in each episode with knowledge distillation. Knowledge distillation algorithms aim to exploit the hidden knowledge of a large teacher model, denoted as T, to guide the training of a small student model, denoted as S (Hinton et al., 2015). Different from conventional distillation, where the teacher provides soft targets for the student, we propose to distill features so as to help the student leverage the semantic knowledge of related domains from the teacher.
94
+
95
+ Inter-Domain Prototype Learning To pre-train the teacher model, we adopt multilingual-BERT as the backbone.
96
+
97
+ ![](images/90480e6c5a8800ad197dd8fcad26ac1b32548c5bf6ec81099f0836a191647754.jpg)
98
+ Figure 2: Fine-tuning cross-lingual teacher.
99
+
100
+ We train it on the English portion of the MASSIVE dataset (FitzGerald et al., 2023), which covers 60 intent classes from 18 domains.
101
+
102
+ Then for each utterance $x_{i}$ in the support set, we learn its representation through the teacher encoder $f_{\phi}$ :
103
+
104
+ $$
105
+ \boldsymbol {g} _ {i} = f _ {\phi} \left(x _ {i}\right), \tag {3}
106
+ $$
107
+
108
+ where $\pmb{g}_i$ is the hidden state vector corresponding to the [CLS] token. Based on its label $y_i$, we can construct an inter-domain teacher prototype, which incorporates related semantic information from the 18 domains:
109
+
110
+ $$
111
+ \boldsymbol {p} _ {\mathcal {T}} ^ {c} = \frac {1}{| \mathcal {S} _ {c} |} \sum_ {y _ {i} = c} \boldsymbol {g} _ {i}, \tag {4}
112
+ $$
113
+
114
+ where $c$ is an intent category of the given domain, and $|\mathcal{S}_{c}|$ is the number of support samples belonging to category $c$ in the support set.
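+
+ A minimal sketch of Eqs. (3)-(4): given the teacher's [CLS] states of the support samples, each inter-domain prototype is simply the class-wise mean (single-label class ids are assumed here for simplicity).
+
+ ```python
+ import torch
+
+
+ def teacher_prototypes(cls_states, class_ids, num_classes):
+     """Eq. (4): average the teacher's [CLS] states of each class.
+
+     cls_states: (|S|, d) outputs g_i of the teacher f_phi (Eq. (3));
+     class_ids:  (|S|,) integer intent ids of the support samples.
+     """
+     protos = []
+     for c in range(num_classes):
+         members = cls_states[class_ids == c]   # support samples of class c
+         protos.append(members.mean(dim=0))
+     return torch.stack(protos)                 # (N, d) inter-domain prototypes
+ ```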
115
+
116
+ Cross-Lingual Teacher The vast majority of previous methods focus on developing models on English datasets (Wu et al., 2022; Chen et al., 2017), which can hardly adapt to datasets in other languages unless a model is trained from scratch for the target language. In this paper, we propose to fine-tune the teacher model $f_{\phi}$ so that it can quickly adapt to different languages, as shown in Figure 2. In particular, given a teacher model pre-trained in a source language such as English, we add an extra parameter matrix $W_{lm}$ and fine-tune the teacher model to convert it from the source language to the target language. The predicted soft labels (logits) of $x_{i}$ in the source and target languages are calculated by:
117
+
118
+ $$
119
+ \begin{aligned} \hat{\boldsymbol{y}}_{i}^{s} &= \operatorname{Softmax}\left(\boldsymbol{W}_{lm} \boldsymbol{g}_{i}^{s}\right), \\ \hat{\boldsymbol{y}}_{i}^{t} &= \operatorname{Softmax}\left(\boldsymbol{W}_{lm} \boldsymbol{g}_{i}^{t}\right), \end{aligned} \tag{5}
120
+ $$
121
+
122
+ where $\pmb{g}_i^s$ and $\pmb{g}_i^t$ are the embeddings of an utterance $x_i$ in the source language and target language, respectively. We aim to align the labels of each support sample across languages, so that the teacher model in the source language can be converted into the one in the target language:
123
+
124
125
+
126
+ $$
127
+ \mathcal{L}_{KL} = \frac{1}{N} \sum_{i=1}^{N} \left(\mathrm{KL}\left(\hat{\boldsymbol{y}}_{i}^{s} \,\|\, \hat{\boldsymbol{y}}_{i}^{t}\right) + \mathrm{KL}\left(\hat{\boldsymbol{y}}_{i}^{t} \,\|\, \hat{\boldsymbol{y}}_{i}^{s}\right)\right), \tag{6}
128
+ $$
129
+
130
+ where $\mathcal{L}_{KL}$ is a Kullback-Leibler divergence loss that enforces the label probability distributions in the source and target languages to be similar. Moreover, we require the predicted labels to be close to the ground truth:
131
+
132
+ $$
133
+ \mathcal{L}_{s} = \operatorname{CrossEntropy}\left(\boldsymbol{y}_{i}, \hat{\boldsymbol{y}}_{i}^{s}\right), \tag{7}
134
+ $$
135
+
136
+ $$
137
+ \mathcal{L}_{t} = \operatorname{CrossEntropy}\left(\boldsymbol{y}_{i}, \hat{\boldsymbol{y}}_{i}^{t}\right). \tag{8}
138
+ $$
139
+
140
+ Finally, we obtain the overall loss for fine-tuning the cross-lingual teacher model:
141
+
142
+ $$
143
+ \mathcal {L} _ {C L} = \mathcal {L} _ {K L} + \mathcal {L} _ {s} + \mathcal {L} _ {t}. \tag {9}
144
+ $$
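+
+ A minimal sketch of the fine-tuning objective in Eqs. (5)-(9), assuming paired source/target encodings of the same utterances and single-label gold intents:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+
+ def cross_lingual_loss(g_s, g_t, labels, W_lm):
+     """Eqs. (5)-(9). g_s, g_t: (B, d) source/target encodings of the same
+     utterances; labels: (B,) gold intent ids; W_lm: (C, d) extra projection."""
+     logits_s, logits_t = g_s @ W_lm.T, g_t @ W_lm.T
+     log_p_s = F.log_softmax(logits_s, dim=-1)          # Eq. (5)
+     log_p_t = F.log_softmax(logits_t, dim=-1)
+     # Eq. (6): symmetric KL between the two label distributions
+     l_kl = (F.kl_div(log_p_t, log_p_s, log_target=True, reduction="batchmean")
+             + F.kl_div(log_p_s, log_p_t, log_target=True, reduction="batchmean"))
+     l_s = F.cross_entropy(logits_s, labels)            # Eq. (7)
+     l_t = F.cross_entropy(logits_t, labels)            # Eq. (8)
+     return l_kl + l_s + l_t                            # Eq. (9)
+ ```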
145
+
146
+ Intra-Domain Prototype Learning The inter-domain teacher prototypes learned through Eq. (4) capture high-level semantic knowledge from 18 domains and incorporate the related inter-domain information into the prototype of each class. However, these teacher prototypes fail to utilize the data of each episode (which comes from a single domain), so we further propose to learn intra-domain prototypes from the support and query data within each episode.
147
+
148
+ Intuitively, if samples with the same label have similar representations, prediction becomes easier and more precise. Therefore, we use an attention mechanism to perform message passing among the data of each episode, so that the support embeddings and query embeddings can interact with each other.
149
+
150
+ Firstly, we merge the label enhanced representations of the support set and the original representations of the query set, and obtain the instance-level representations for the data of each episode:
151
+
152
+ $$
153
+ \mathcal {I} = \left[ \boldsymbol {u} _ {1}, \boldsymbol {u} _ {2}, \dots , \boldsymbol {u} _ {| S |}, \boldsymbol {h} _ {1}, \boldsymbol {h} _ {2}, \dots , \boldsymbol {h} _ {| Q |} \right], \tag {10}
154
+ $$
155
+
156
+ where $\pmb{u}_i$ is a label-enhanced support embedding, and $h_i$ is a query embedding obtained from the original utterance. $|\mathcal{S}|$ and $|\mathcal{Q}|$ are the sizes of the support and query sets in each episode.
157
+
158
+ Similarly, according to the labels of support data, we merge the label embeddings of the data in each episode, and obtain the class-level representations:
159
+
160
+ $$
161
+ \mathcal {C} = \left[ e _ {1}, e _ {2}, \dots , e _ {| \mathcal {S} |}, \mathbf {0} _ {1}, \mathbf {0} _ {2}, \dots , \mathbf {0} _ {| \mathcal {Q} |} \right], \tag {11}
162
+ $$
163
+
164
+ where $\pmb{e}_i = f_{\theta}(y_i)$ is the label embedding for the label $y_i$ . Since the labels for the query data are
165
+
166
+ unknown, we represent the label embeddings of query data with a zero vector $\mathbf{0}$ .
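+
+ A minimal sketch of Eqs. (10)-(11); the zero block stands in for the unknown query labels:
+
+ ```python
+ import torch
+
+
+ def episode_matrices(u_support, h_query, e_support):
+     """Eqs. (10)-(11). u_support: (|S|, d) label-enhanced support embeddings;
+     h_query: (|Q|, d) raw query embeddings; e_support: (|S|, d) label embeddings."""
+     I_mat = torch.cat([u_support, h_query], dim=0)                    # Eq. (10)
+     C_mat = torch.cat([e_support, torch.zeros_like(h_query)], dim=0)  # Eq. (11)
+     return I_mat, C_mat
+ ```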
167
+
168
+ Then we perform self-attention on instance-level representations and class-level representations respectively to achieve information interaction within each episode:
169
+
170
+ $$
171
+ \boldsymbol{A}^{I} = \operatorname{Softmax}\left(\frac{\left(\mathcal{I} \boldsymbol{W}_{Q}^{1}\right)\left(\mathcal{I} \boldsymbol{W}_{K}^{1}\right)^{T}}{\sqrt{d_{h}}}\right)\left(\mathcal{I} \boldsymbol{W}_{V}^{1}\right), \tag{12}
172
+ $$
173
+
174
+ $$
175
+ \boldsymbol{A}^{C} = \operatorname{Softmax}\left(\frac{\left(\mathcal{C} \boldsymbol{W}_{Q}^{2}\right)\left(\mathcal{C} \boldsymbol{W}_{K}^{2}\right)^{T}}{\sqrt{d_{h}}}\right)\left(\mathcal{C} \boldsymbol{W}_{V}^{2}\right), \tag{13}
176
+ $$
177
+
178
+ where $\mathbf{A}^I$ and $\mathbf{A}^C$ are the interacted instance-level representations and class-level representations of the data in each episode, respectively.
179
+
180
+ We further use cross-attention operations to select the most related class-level representations according to the instance-level representations:
181
+
182
+ $$
183
+ \hat{\boldsymbol{A}}^{C} = \operatorname{Softmax}\left(\frac{\left(\boldsymbol{A}^{I} \boldsymbol{W}_{Q}^{3}\right)\left(\boldsymbol{A}^{C} \boldsymbol{W}_{K}^{3}\right)^{T}}{\sqrt{d_{h}}}\right)\left(\boldsymbol{A}^{C} \boldsymbol{W}_{V}^{3}\right). \tag{14}
184
+ $$
185
+
186
+ We fuse the instance-level representations with the aligned class-level representations through concatenation operations and obtain the representations of all the data in each episode:
187
+
188
+ $$
189
+ \boldsymbol {A} = \boldsymbol {A} ^ {I} \| \hat {\boldsymbol {A}} ^ {C}. \tag {15}
190
+ $$
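+
+ Eqs. (12)-(15) all instantiate the same scaled dot-product pattern; a minimal single-head sketch (ours, with one parameter set $W^1$-$W^3$ per module instance and toy inputs) is:
+
+ ```python
+ import math
+ import torch
+ import torch.nn as nn
+
+
+ class ScaledDotAttention(nn.Module):
+     """One head of Softmax(Q K^T / sqrt(d_h)) V with learned projections."""
+
+     def __init__(self, d_h):
+         super().__init__()
+         self.W_Q = nn.Linear(d_h, d_h, bias=False)
+         self.W_K = nn.Linear(d_h, d_h, bias=False)
+         self.W_V = nn.Linear(d_h, d_h, bias=False)
+         self.scale = math.sqrt(d_h)
+
+     def forward(self, query, key_value):
+         q = self.W_Q(query)                    # (m, d_h)
+         k = self.W_K(key_value)                # (n, d_h)
+         v = self.W_V(key_value)                # (n, d_h)
+         attn = torch.softmax(q @ k.T / self.scale, dim=-1)
+         return attn @ v
+
+
+ d_h = 768
+ self_attn_I = ScaledDotAttention(d_h)   # W^1, Eq. (12)
+ self_attn_C = ScaledDotAttention(d_h)   # W^2, Eq. (13)
+ cross_attn = ScaledDotAttention(d_h)    # W^3, Eq. (14)
+
+ I_mat = torch.randn(12, d_h)            # toy episode: |S| + |Q| = 12
+ C_mat = torch.randn(12, d_h)
+ A_I = self_attn_I(I_mat, I_mat)         # Eq. (12): instance self-attention
+ A_C = self_attn_C(C_mat, C_mat)         # Eq. (13): class self-attention
+ A_C_hat = cross_attn(A_I, A_C)          # Eq. (14): instances attend to classes
+ A = torch.cat([A_I, A_C_hat], dim=-1)   # Eq. (15): concatenation fusion
+ ```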
191
+
192
+ Finally, we use the fused data representations $\mathbf{A}$ to construct the intra-domain student prototype for each class $c$ :
193
+
194
+ $$
195
+ \boldsymbol {p} _ {\mathcal {S}} ^ {c} = \frac {1}{| \mathcal {S} _ {c} |} \sum_ {y _ {i} = c} \boldsymbol {A} _ {i}. \tag {16}
196
+ $$
197
+
198
+ Prototype Distillation Finally, we distill the most representative inter-domain teacher prototypes into fine-grained prototypes with cross-attention:
199
+
200
+ $$
201
+ \boldsymbol{p}_{\mathcal{T}}^{\prime} = \operatorname{Softmax}\left(\frac{\left(\boldsymbol{p}_{\mathcal{T}} \boldsymbol{W}_{Q}^{4}\right)\left(\boldsymbol{p}_{\mathcal{S}} \boldsymbol{W}_{K}^{4}\right)^{T}}{\sqrt{d_{h}}}\right)\left(\boldsymbol{p}_{\mathcal{S}} \boldsymbol{W}_{V}^{4}\right), \tag{17}
202
+ $$
203
+
204
+ where $p_{\mathcal{S}}$ and $p_{\mathcal{T}}$ are the intra-domain student prototype matrix and the inter-domain teacher prototype matrix, respectively. Then we combine the distilled prototypes with the intra-domain student prototypes to obtain the final prototypes for each episode:
205
+
206
+ $$
207
+ \boldsymbol {p} = \alpha \boldsymbol {p} _ {\mathcal {S}} + (1 - \alpha) \boldsymbol {p} _ {\mathcal {T}} ^ {\prime}, \tag {18}
208
+ $$
209
+
210
+ where $\alpha$ is a trade-off hyperparameter ranging between 0 and 1.
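+
+ A minimal sketch of Eqs. (17)-(18), assuming for simplicity that the teacher and student prototypes share one hidden size (in the full model the $W^4$ projections can also reconcile differing dimensions):
+
+ ```python
+ import math
+ import torch
+
+
+ def distill_prototypes(p_T, p_S, W_Q, W_K, W_V, alpha=0.8):
+     """p_T, p_S: (N, d) teacher/student prototype matrices; W_*: (d, d)."""
+     d_h = p_T.size(-1)
+     scores = (p_T @ W_Q) @ (p_S @ W_K).T / math.sqrt(d_h)
+     p_T_fine = torch.softmax(scores, dim=-1) @ (p_S @ W_V)   # Eq. (17)
+     return alpha * p_S + (1.0 - alpha) * p_T_fine            # Eq. (18)
+ ```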
211
+
212
+ # 3.5 Optimization and Inference
213
+
214
+ Prototypical Classifier Given a query utterance $x_{i} \in \mathcal{Q}$, we compute the conditional probability $p(y = c|x_i, \mathcal{S})$ to predict its labels based on the negative squared Euclidean distance:
215
+
216
+ $$
217
+ p (y = c \mid x _ {i}, \mathcal {S}) = \frac {\exp \left(- \| \boldsymbol {a} _ {i} - \boldsymbol {p} _ {c} \| _ {2} ^ {2}\right)}{\sum_ {c ^ {\prime} \in \mathcal {C}} \exp \left(- \| \boldsymbol {a} _ {i} - \boldsymbol {p} _ {c ^ {\prime}} \| _ {2} ^ {2}\right)}, \tag {19}
218
+ $$
219
+
220
+ where $\pmb{p}_c$ is the prototype of class $c$ from $\pmb{p}$ , and $\pmb{a}_i$ is the representation of $x_i$ from $\pmb{A}$ .
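+
+ A minimal sketch of Eq. (19):
+
+ ```python
+ import torch
+
+
+ def proto_probs(a_i, protos):
+     """Eq. (19). a_i: (d,) query representation from A; protos: (N, d) from p."""
+     sq_dist = ((protos - a_i.unsqueeze(0)) ** 2).sum(dim=-1)   # squared distances
+     return torch.softmax(-sq_dist, dim=-1)                     # class probabilities
+ ```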
221
+
222
+ Note that in the multi-label setting, as an utterance may have multiple labels, we need to consider all $N$ labels for each query sample. We compute the cross-entropy loss over all the query samples as:
223
+
224
+ $$
225
+ \mathcal{L}_{ce} = -\frac{1}{|\mathcal{Q}|} \sum_{x_{i} \in \mathcal{Q}} \sum_{j=1}^{N} y_{i}^{j} \log p(y = j \mid x_{i}, \mathcal{S}), \tag{20}
226
+ $$
227
+
228
+ where $\pmb{y}_{i} = [y_{i}^{1},\dots,y_{i}^{N}]$ is the label vector of $x_{i}$ and $y_{i}^{j} \in \{0,1\}$.
229
+
230
+ Multi-Label Inference Inspired by Sun et al. (2020), we introduce a class-specific circle loss to conduct multi-label prediction for each query sample $x_{i}$:
231
+
232
+ $$
233
+ \mathcal{L}_{cir} = \frac{1}{N} \sum_{c=1}^{N} \left(\log\left(e^{\sigma(\tau_{c})} + \sum_{z_{i} \in \Lambda_{c}} e^{\sigma(z_{i})}\right) + \log\left(e^{\sigma(-\tau_{c})} + \sum_{z_{j} \in \Gamma_{c}} e^{\sigma(-z_{j})}\right)\right), \tag{21}
234
+ $$
235
+
236
+ where $\sigma$ is the temperature scale parameter, $\Lambda_{c} = \{p(y = c|x_{i},\mathcal{S}) \mid y_{i}^{c} = 0\}$ is the negative score set, $\Gamma_c = \{p(y = c|x_i,\mathcal{S}) \mid y_i^c = 1\}$ is the positive score set, and $\tau_{c}$ is the threshold of class $c$. The goal of $\mathcal{L}_{cir}$ is to push the positive scores of class $c$ above $\tau_{c}$ and the negative scores of class $c$ below $\tau_{c}$.
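+
+ A minimal sketch of Eq. (21) and the resulting inference rule (our illustration, not the released code), reading $\sigma(\cdot)$ as scaling by the temperature parameter; `scores` holds $p(y=c|x_i,\mathcal{S})$ for a batch of queries and `tau` the per-class thresholds:
+
+ ```python
+ import torch
+
+
+ def circle_loss(scores, targets, tau, sigma=0.05):
+     """Eq. (21). scores, targets: (B, N); tau: (N,) learnable thresholds."""
+     neg = (sigma * scores).masked_fill(targets.bool(), float("-inf"))    # Lambda_c
+     pos = (-sigma * scores).masked_fill(~targets.bool(), float("-inf"))  # Gamma_c
+     lse_neg = torch.logsumexp(
+         torch.cat([(sigma * tau).unsqueeze(0), neg], dim=0), dim=0)
+     lse_pos = torch.logsumexp(
+         torch.cat([(-sigma * tau).unsqueeze(0), pos], dim=0), dim=0)
+     return (lse_neg + lse_pos).mean()          # average over the N classes
+
+
+ def predict_intents(scores, tau):
+     """Multi-label inference: keep every class whose score exceeds tau_c."""
+     return (scores > tau.unsqueeze(0)).long()
+ ```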
237
+
238
+ The overall training objective is:
239
+
240
+ $$
241
+ \mathcal {L} = \mathcal {L} _ {c e} + \lambda \mathcal {L} _ {c i r}, \tag {22}
242
+ $$
243
+
244
+ where $\lambda \in (0,1)$ is a trade-off hyperparameter.
245
+
246
+ # 4 Experiments
247
+
248
+ # 4.1 Datasets and Experimental Setups
249
+
250
+ Datasets We follow Zhang et al. (2023) to evaluate our method on the public English FS-MLID dataset StanfordLU, and introduce two new non-English FS-MLID datasets, CrossWOZ and JMultiWOZ.
251
+
252
+ <table><tr><td>Dataset</td><td colspan="3">StanfordLU</td><td colspan="3">CrossWOZ</td><td colspan="3">JMultiWOZ</td></tr><tr><td>Domain</td><td>Sc</td><td>Na</td><td>We</td><td>At</td><td>Ho</td><td>Re</td><td>Sh</td><td>Ho</td><td>Re</td></tr><tr><td>Ns</td><td>14</td><td>10</td><td>8</td><td>8</td><td>10</td><td>9</td><td>8</td><td>10</td><td>10</td></tr><tr><td>Prop.</td><td>21%</td><td>25%</td><td>4%</td><td>48%</td><td>48%</td><td>53%</td><td>45%</td><td>60%</td><td>79%</td></tr></table>
253
+
254
+ Table 1: Dataset statistics. Ns denotes the number of classes in each domain and Prop. denotes the proportion of multi-label utterances.
255
+
256
+ (1) StanfordLU is a dataset of Stanford dialogues (Eric et al., 2017), which contains 8038 utterances re-annotated by Hou et al. (2021) from 3 domains: Sc (Schedule), Na (Navigate) and We (Weather). (2) CrossWOZ is a re-annotated version of the first large-scale Chinese multi-domain task-oriented dialogue dataset (Zhu et al., 2020); it contains 8697 user utterances and includes 3 domains: At (Attraction), Ho (Hotel) and Re (Restaurant). (3) JMultiWOZ consists of 8076 utterances and includes three travel-related domains: Sh (Shopping), Ho (Hotel) and Re (Restaurant); it is re-annotated from the first Japanese multi-domain task-oriented dialogue dataset JMultiWOZ (Ohashi et al., 2024). We re-annotate the two publicly available non-English datasets into multi-label form and follow Hou et al. (2020) to construct few-shot episodes. For each dataset, we take two domains as the training set and validation set respectively, and take the remaining domain as the test set. We construct 200, 50, and 50 episodes for training, validation and testing, respectively. Table 1 shows detailed dataset statistics<sup>1</sup>.
257
+
258
+ Implementation Details The proposed approach CFPL is implemented in PyTorch, and all experiments are conducted on an NVIDIA GeForce RTX 3090. For feature extraction, we use bert-base-uncased, bert-base-Chinese and bert-base-Japanese as the student models, and multilingual-BERT (mBERT) as the teacher model (Devlin et al., 2019). The hidden state size is 768 and the number of hidden layers is 12. We use AdamW (Loshchilov and Hutter, 2019) for optimization with an initial learning rate of 2e-5 on StanfordLU, 1e-4 on CrossWOZ, and 5e-5 on JMultiWOZ. We set the dropout rate to 0.2, the hyperparameter $\alpha$ to 0.8, $\sigma$ to 0.05 and the number of augmented labels $k$ to 2 (a detailed analysis is given in Section 4.5). For the loss function, we set $\lambda$ to 0.1. All the hyperparameters are determined by the performance on the validation domains.
259
+
260
+ <table><tr><td rowspan="2">Models</td><td colspan="4">StanfordLU 1-shot</td><td colspan="4">StanfordLU 5-shot</td></tr><tr><td>Sc</td><td>Na</td><td>We</td><td>Avg.</td><td>Sc</td><td>Na</td><td>We</td><td>Avg.</td></tr><tr><td>TransferM</td><td>18.00±0.62</td><td>24.65±0.79</td><td>22.26±0.64</td><td>21.64±0.68</td><td>16.62±0.18</td><td>23.69±0.46</td><td>26.64±2.04</td><td>22.31±0.89</td></tr><tr><td>MMN</td><td>39.18±0.52</td><td>35.35±1.72</td><td>45.87±2.81</td><td>40.13±1.68</td><td>43.65±6.24</td><td>51.94±1.03</td><td>46.65±0.48</td><td>47.41±2.58</td></tr><tr><td>MPN</td><td>39.34±1.38</td><td>36.09±0.77</td><td>45.86±2.50</td><td>40.43±1.55</td><td>41.45±2.83</td><td>50.51±2.94</td><td>54.96±9.76</td><td>48.97±5.18</td></tr><tr><td>CTLR</td><td>42.55±0.40</td><td>56.95±0.77</td><td>53.14±1.89</td><td>50.88±1.02</td><td>52.17±1.29</td><td>60.36±1.55</td><td>59.63±2.23</td><td>57.39±1.69</td></tr><tr><td>DCKPN</td><td>53.81±0.72</td><td>58.48±0.31</td><td>74.02±0.74</td><td>62.10±0.59</td><td>57.81±0.62</td><td>63.71±0.35</td><td>93.83±0.36</td><td>71.78±0.44</td></tr><tr><td>CFPL</td><td>67.11±0.93</td><td>68.04±1.07</td><td>80.57±1.26</td><td>71.91±1.09</td><td>70.28±1.03</td><td>75.89±0.32</td><td>93.56±0.10</td><td>79.91±0.48</td></tr></table>
261
+
262
+ Table 2: F1 scores on the StanfordLU dataset under $N$ -way 1-shot and $N$ -way 5-shot settings.
263
+
264
+ <table><tr><td rowspan="2">Models</td><td colspan="4">CrossWOZ 1-shot</td><td colspan="4">CrossWOZ 5-shot</td></tr><tr><td>At</td><td>Ho</td><td>Re</td><td>Avg.</td><td>At</td><td>Ho</td><td>Re</td><td>Avg.</td></tr><tr><td>TransferM</td><td>19.31±0.65</td><td>18.24±0.58</td><td>18.57±0.77</td><td>18.71±0.67</td><td>19.79±0.56</td><td>18.92±0.86</td><td>19.21±0.63</td><td>19.31±0.68</td></tr><tr><td>MMN</td><td>37.16±1.25</td><td>35.20±0.83</td><td>36.38±2.12</td><td>36.25±1.40</td><td>39.14±1.48</td><td>36.39±0.96</td><td>38.21±2.01</td><td>37.91±1.48</td></tr><tr><td>MPN</td><td>38.29±2.07</td><td>36.47±1.37</td><td>37.42±0.95</td><td>37.39±1.46</td><td>46.35±3.46</td><td>43.26±2.57</td><td>49.58±2.33</td><td>46.40±2.79</td></tr><tr><td>CTLR</td><td>47.51±2.18</td><td>40.23±1.97</td><td>43.78±2.42</td><td>43.84±2.19</td><td>56.77±3.97</td><td>52.34±2.17</td><td>71.03±2.94</td><td>60.05±3.03</td></tr><tr><td>DCKPN</td><td>80.62±3.80</td><td>68.67±3.33</td><td>81.49±3.30</td><td>76.93±3.48</td><td>83.36±4.62</td><td>72.20±3.20</td><td>81.40±3.28</td><td>78.99±3.70</td></tr><tr><td>CFPL</td><td>91.41±0.25</td><td>77.32±0.57</td><td>87.09±0.53</td><td>85.27±0.45</td><td>90.89±0.49</td><td>80.45±0.23</td><td>89.93±0.31</td><td>87.09±0.35</td></tr></table>
265
+
266
+ Table 3: F1 scores on the CrossWOZ dataset under $N$ -way 1-shot and $N$ -way 5-shot settings.
267
+
268
+ For the baseline results on CrossWOZ and JMultiWOZ, we reimplement all the baselines with their official code.
269
+
270
+ Evaluation Metrics Following previous multi-label few-shot intent detection methods (Zhang et al., 2023), we adopt micro F1 as the metric to evaluate the overall performance. All reported results are the average of 5 different runs.
271
+
272
+ # 4.2 Baselines
273
+
274
+ We evaluate and compare our proposed method with the following strong baselines. (1) TransferM is a transfer learning framework (Dai et al., 2007) with a pre-trained language model as the encoder and a multi-layer perceptron as the classifier. It trains on source domains and fine-tunes with support sets from target domains. (2) Multi-label Prototypical Network (MPN) is a modification of the vanilla prototypical network (Snell et al., 2017), which measures the negative Euclidean distance between queries and prototypes and applies a fixed threshold, tuned on the dev set, for multi-label classification. (3) Multi-label Matching Network (MMN) closely resembles MPN but utilizes the Matching Network (Vinyals et al., 2016) to calculate label-instance relevance scores, resulting in classification based on cosine similarity. (4) CTLR (Hou et al., 2021) proposes a method for estimating label-instance relevance scores and selecting
275
+
276
+ multiple intent labels using a meta-calibrated threshold, which involves learning universal experience on data-rich domains and adapting thresholds to specific few-shot domains. (5) DCKPN (Zhang et al., 2023) constructs a dual class knowledge propagation network that integrates label information and feature structure into a graph neural network to guide intent prediction, and employs a multi-label inference method to adaptively predict the intent count of each utterance.
277
+
278
+ # 4.3 Main Results
279
+
280
+ The main results on StanfordLU, CrossWOZ, and JMultiWOZ are shown in Tables 2, 3 and 4, respectively. Most baseline results are taken from Zhang et al. (2023) and the best results are highlighted in bold. We have the following observations: (1) CFPL achieves significantly superior average results on the three benchmarks across all domains compared to the baseline methods, demonstrating the superiority of our approach. (2) The improvements under the 1-shot setting are larger than under the 5-shot setting, which further confirms the efficiency of our method in few-shot tasks. (3) CFPL shows larger average 1-shot and 5-shot gains in domains that contain more novel classes (12.9% in the Sc domain of StanfordLU and 8.5% in the Ho domain of CrossWOZ), indicating that the teacher model introduces more domain features that help recognize unseen classes.
281
+
282
+ <table><tr><td rowspan="2">Models</td><td colspan="4">JMultiWOZ 1-shot</td><td colspan="4">JMultiWOZ 5-shot</td></tr><tr><td>Sh</td><td>Ho</td><td>Re</td><td>Avg.</td><td>Sh</td><td>Ho</td><td>Re</td><td>Avg.</td></tr><tr><td>TransferM</td><td>16.32±1.04</td><td>15.85±0.87</td><td>14.56±1.52</td><td>15.58±1.14</td><td>17.11±0.93</td><td>16.03±1.08</td><td>16.34±1.26</td><td>16.49±1.09</td></tr><tr><td>MMN</td><td>27.21±0.61</td><td>23.29±1.54</td><td>25.64±1.97</td><td>25.38±1.37</td><td>32.56±0.74</td><td>25.78±1.83</td><td>29.30±1.25</td><td>29.21±1.27</td></tr><tr><td>MPN</td><td>30.87±2.84</td><td>28.61±3.07</td><td>29.67±2.46</td><td>29.72±2.79</td><td>36.35±2.98</td><td>29.14±3.14</td><td>34.71±2.77</td><td>33.40±2.96</td></tr><tr><td>CTLR</td><td>27.09±1.47</td><td>36.13±2.13</td><td>28.30±2.25</td><td>30.51±1.95</td><td>37.41±1.25</td><td>31.95±2.69</td><td>31.90±0.57</td><td>33.75±1.50</td></tr><tr><td>DCKPN</td><td>71.61±2.69</td><td>57.93±2.22</td><td>57.40±2.69</td><td>62.31±2.53</td><td>70.22±3.38</td><td>63.11±1.51</td><td>62.36±3.47</td><td>65.23±2.79</td></tr><tr><td>CFPL</td><td>74.40±0.52</td><td>63.40±0.48</td><td>62.91±0.52</td><td>66.90±0.51</td><td>76.69±0.66</td><td>64.13±0.24</td><td>65.65±0.51</td><td>68.86±0.47</td></tr></table>
283
+
284
+ Table 4: F1 scores on the JMultiWOZ dataset under $N$ -way 1-shot and $N$ -way 5-shot settings.
285
+
286
+ <table><tr><td rowspan="2">Setting</td><td colspan="2">StanfordLU</td><td colspan="2">CrossWOZ</td><td colspan="2">JMultiWOZ</td></tr><tr><td>1-shot</td><td>5-shot</td><td>1-shot</td><td>5-shot</td><td>1-shot</td><td>5-shot</td></tr><tr><td>CFPL</td><td>71.91</td><td>79.91</td><td>85.27</td><td>87.09</td><td>66.90</td><td>68.86</td></tr><tr><td>- LSA</td><td>66.07</td><td>74.15</td><td>78.29</td><td>80.54</td><td>60.37</td><td>62.79</td></tr><tr><td>- Inter-</td><td>65.43</td><td>73.52</td><td>76.17</td><td>78.73</td><td>59.85</td><td>63.14</td></tr><tr><td>- Intra-</td><td>63.28</td><td>69.40</td><td>72.14</td><td>73.77</td><td>56.60</td><td>58.13</td></tr><tr><td>- Lcir</td><td>66.87</td><td>75.12</td><td>77.63</td><td>80.38</td><td>61.02</td><td>64.51</td></tr></table>
287
+
288
+ Table 5: Ablation study results. The average F1 scores of all domains are reported.
289
+
290
+ (4) On the non-English datasets, CFPL outperforms DCKPN by $8.2\%$ on CrossWOZ and $4.1\%$ on JMultiWOZ, while eliminating the need for the part-of-speech tagging required by CTLR, demonstrating both the effectiveness and the convenience of our method.
291
+
292
+ # 4.4 Ablation Study
293
+
294
+ To examine the influence of each component, we conduct ablation studies on the three datasets, as shown in Table 5. When removing the Intra-Domain Prototype Learning (denoted as -Intra-), the model performs the worst, indicating the effectiveness of intra-domain feature interaction within each episode for classification. Similarly, when excluding the Inter-Domain Prototype Learning (denoted as -Inter-), the model exhibits a significant performance decline, indicating that the teacher model introduces unseen domain knowledge, which enhances the ability to recognize novel classes. When we omit the Label Synset Augmentation (denoted as -LSA), the model performance decreases, indicating that augmented labels help to provide more discriminative representations. When removing the class-specific circle loss (denoted as -$\mathcal{L}_{cir}$), the model performance degrades, indicating the superiority of the contrastive loss for multi-label classification.
295
+
296
+ # 4.5 Analysis and Discussions
297
+
298
+ Impact of the Hyperparameter $k$ We conduct a sensitivity analysis with different values of $k$ for label augmentation. Figure 3 shows the results in different domains and the average results on each dataset. Performance improves significantly as $k$ increases from 0 to 2 in most domains, which indicates the effectiveness of augmented labels. As $k$ continues to increase, the model performance grows very slowly or even decreases after reaching a maximum, which implies that excessive label expansion may introduce confusing information or even noise. In addition, an oversized $k$ results in substantial resource consumption. Therefore, we set $k = 2$ for all the datasets in our experiments.
299
+
300
+ Parameter Efficiency Due to the computational intensity of BERT-base (110M parameters) in real industrial deployments, we further assess our model with the much lighter BERT-tiny (7M parameters), which is nearly 16 times smaller. Experiments are conducted under different sizes of training set ($N$-way $1/3/5$-shot). The comparison with the strong baselines in Figure 4 demonstrates that our model maintains competitive performance even with this reduced computational load, reflecting the superiority of our method.
301
+
302
+ Exploration of LLMs for Multilingual FS-MLID Large language models (LLMs) have achieved significant advancements in numerous few-shot tasks via in-context learning. We conduct a preliminary experiment to explore the performance of LLMs on non-English FS-MLID tasks using gpt-3.5-turbo. Restricted by the input length, we only evaluate the N-way 1-shot setting. The experimental results and the in-context learning prompt template, which includes the task description, demonstrations and queries, are detailed in Appendix B.
303
+
304
+ ![](images/e7f0ff47dfc2a97a4a5b4d7050fa44023468db32ac855b5c9dd34244de798c98.jpg)
305
+
306
+ ![](images/6384ae6dba4b2e099d69ed8a9e12c3fffc03bcf60cae7dc35f778be114503cfe.jpg)
307
+
308
+ ![](images/5eaeea1f2896095b361b631fef5d628b03dd2977bbe243d5dfe90b35f7af774e.jpg)
309
+ Figure 3: Experimental results of different values of $k$ .
310
+
311
+ ![](images/3a01c8ffa90d73043254b329d5a4dbd9ba36d228fb8e54500a9eb70517e92141.jpg)
312
+
313
+ ![](images/1cf6874bd1ed1053bb2a3b59f37cc4ca7911ad6a5917adafdc2968f479ad05ce.jpg)
314
+ Figure 4: Comparison with CTLR and DCKPN on StanfordLU when using Bert-tiny as the backbone.
315
+
316
+ # 5 Conclusion
317
+
318
+ In this paper, we propose CFPL, a method for multi-label few-shot intent detection, which designs a label synset augmentation strategy to enhance the representations of the support data against the data scarcity issue and refines the prototypes with knowledge distillation from a universal teacher model. To handle multilingual intent detection, we fine-tune a cross-lingual teacher model that enables our method to adapt quickly to different languages. To verify our proposed method on multilingual dialogues, we re-annotate two non-English task-oriented dialogue datasets, CrossWOZ and JMultiWOZ, into multi-label form. Experimental results demonstrate the superiority of our method.
319
+
320
+ # 6 Limitations
321
+
322
+ In this paper, we leverage a multilingual BERT pre-trained on intent corpora as the teacher model. However, we do not explore larger, more powerful
323
+
324
+ generalist language models like LLaMA (Touvron et al., 2023) and Claude (Bai et al., 2022). On the other hand, we hypothesize that related target-domain knowledge is compressed in the teacher model, but it may be insufficient for all new domains. Retrieval from related corpora could be a good alternative. We leave the exploration of better teacher models and richer target knowledge sources for future study. Additionally, since our method involves several self-attention and cross-attention operations, we plan to speed up the runtime by optimizing the attention mechanisms in future work.
325
+
326
+ # 7 Ethics Statement
327
+
328
+ We re-annotate two publicly available non-English task-oriented dialogue datasets, i.e., CrossWOZ and JMultiWOZ, for future multilingual intent detection studies. During the re-annotation process, we make sure that there is no sensitive information in these datasets, so our work poses no risks to society or individuals.
329
+
330
+ # Acknowledgment
331
+
332
+ This work was supported by National Natural Science Foundation of China (No. 62206038, 62106035), Liaoning Binhai Laboratory Project (No. LBLF-2023-01), and Chunhui Project Foundation of the Education Department of China (No. HZKY20220419).
333
+
334
+ # References
335
+
336
+ Amit Alfassy, Leonid Karlinsky, Amit Aides, Joseph Shtok, Sivan Harary, Rogerio Feris, Raja Giryes, and Alex M Bronstein. 2019. Laso: Label-set operations networks for multi-label few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6548-6557.
337
+ Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Chris Olah, Benjamin Mann, and Jared Kaplan. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. CoRR, abs/2204.05862.
338
+ Francis Bond, Piek Vossen, John P. McCrae, and Christiane Fellbaum. 2016. CILI: the collaborative interlingual index. In GWC 2016, pages 50-57.
339
+
340
+ Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017. A survey on dialogue systems: Recent advances and new frontiers. ACM SIGKDD Explorations, 19(2):25-35.
341
+ Kai-Hsiang Cheng, Szu-Yu Chou, and Yi-Hsuan Yang. 2019. Multi-label few-shot learning for sound event recognition. In 21st IEEE International Workshop on Multimedia Signal Processing.
342
+ Muthuraman Chidambaram, Yinfei Yang, Daniel Cer, Steve Yuan, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2019. Learning cross-lingual sentence representations via a multi-task dual-encoder model. In RepL4NLP@ACL 2019, pages 250–259. Association for Computational Linguistics.
343
+ Wenyuan Dai, Qiang Yang, Gui-Rong Xue, and Yong Yu. 2007. Boosting for transfer learning. In Proceedings of the 24th International Conference on Machine Learning, pages 193-200.
344
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, pages 4171-4186.
345
+ Mihail Eric, Lakshmi Krishnan, François Charette, and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 37-49.
346
+ Jack FitzGerald, Christopher Hench, Charith Peris, Scott Mackie, Kay Rottmann, Ana Sanchez, Aaron Nash, Liam Urbach, Vishesh Kakarala, Richa Singh, Swetha Ranganath, Laurie Crist, Misha Britan, Wouter Leeuwis, Gokhan Tur, and Prem Natarajan. 2023. MASSIVE: A 1m-example multilingual natural language understanding dataset with 51 typologically-diverse languages. In ACL 2023, pages 4277-4302.
347
+ Rashmi Gangadharaiah and Balakrishnan Narayanaswamy. 2019. Joint multiple intent detection and slot labeling for goal-oriented dialog. In NAACL-HLT, pages 564-569.
348
+ Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and YunNung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In NAACL-HLT, pages 753-757.
349
+ Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531.
350
+ Yutai Hou, Wanxiang Che, Yongkui Lai, Zhihan Zhou, Yijia Liu, Han Liu, and Ting Liu. 2020. Few-shot slot tagging with collapsed dependency transfer and label-enhanced task-adaptive projection network. In
351
+
352
+ Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1381-1393.
353
+ Yutai Hou, Yongkui Lai, Yushan Wu, Wanxiang Che, and Ting Liu. 2021. Few-shot learning for multi-label intent detection. In Proceedings of the 25th AAAI Conference on Artificial Intelligence, pages 13036-13044.
354
+ Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Jingang Wang, Juanzi Li, Wei Wu, and Maosong Sun. 2022. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. In ACL 2022, pages 2225-2240.
355
+ Haoyang Huang, Tianyi Tang, Dongdong Zhang, Xin Zhao, Ting Song, Yan Xia, and Furu Wei. 2023. Not all languages are created equal in llms: Improving multilingual capability by cross-lingual-thought prompting. In EMNLP 2023, pages 12365-12394. Association for Computational Linguistics.
356
+ Talaat Khalil, Kornel Kielczewski, Georgios Christos Chouliaras, Amina Keldibek, and Maarten Versteegh. 2019. Cross-lingual intent classification in a low resource industrial setting. In EMNLP-IJCNLP 2019, pages 6418-6423.
357
+ Han Liu, Feng Zhang, Xiaotong Zhang, Siyang Zhao, Junjie Sun, Hong Yu, and Xianchao Zhang. 2022. Label-enhanced prototypical network with contrastive learning for multi-label few-shot aspect category detection. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 1079-1087.
358
+ Han Liu, Feng Zhang, Xiaotong Zhang, Siyang Zhao, and Xianchao Zhang. 2021. An explicit-joint and supervised-contrastive learning framework for few-shot intent classification and slot filling. In Findings of the Association for Computational Linguistics: EMNLP, pages 1945-1955.
359
+ Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In Proceedings of the 7th International Conference on Learning Representations.
360
+ Qiaoyang Luo, Lingqiao Liu, Yuhao Lin, and Wei Zhang. 2021. Don't miss the labels: Label-semantic augmented meta-learner for few-shot text classification. In Findings of the Association for Computational Linguistics: ACL/IJCNLP, pages 2773-2782.
361
+ Atsumoto Ohashi, Ryu Hirai, Shinya Iizuka, and Ryuichiro Higashinaka. 2024. Jmultiwoz: A large-scale japanese multi-domain task-oriented dialogue dataset. In LREC/COLING 2024, pages 9554-9567. ELRA and ICCL.
362
+ Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, and Ting Liu. 2019. A stack-propagation framework with token-level intent detection for spoken language understanding. In EMNLP-IJCNLP, pages 2078-2087.
363
+
364
+ Libo Qin, Fuxuan Wei, Tianbao Xie, Xiao Xu, Wanxiang Che, and Ting Liu. 2021. GL-GIN: fast and accurate non-autoregressive model for joint multiple intent detection and slot filling. In ACL/IJCNLP, pages 178-188.
365
+ Libo Qin, Xiao Xu, Wanxiang Che, and Ting Liu. 2020. Towards fine-grained transfer: An adaptive graph-interactive framework for joint multiple intent detection and slot filling. In Findings of the Association for Computational Linguistics: EMNLP, pages 1807-1816.
366
+ Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. In AAAI 2020, pages 8689-8696. AAAI Press.
367
+ Barbara Rychalska, Helena T. Glabska, and Anna Wróblewska. 2018. Multi-intent hierarchical natural language understanding for chatbots. In International Conference on Social Networks Analysis, Management and Security, SNAMS, pages 256-259.
368
+ Yilin Shen, Yen-Chang Hsu, Avik Ray, and Hongxia Jin. 2021. Enhancing the generalization for intent classification and out-of-domain detection in SLU. In ACL 2021, pages 2443-2453.
369
+ Jake Snell, Kevin Swersky, and Richard S. Zemel. 2017. Prototypical networks for few-shot learning. In Proceedings of the Annual Conference on Neural Information Processing Systems, pages 4077-4087.
370
+ Yifan Sun, Changmao Cheng, Yuhan Zhang, Chi Zhang, Liang Zheng, Zhongdao Wang, and Yichen Wei. 2020. Circle loss: A unified perspective of pair similarity optimization. In CVPR, pages 6397-6406.
371
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinez, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. CoRR, abs/2307.09288.
372
+ Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. 2016. Matching networks for
373
+
374
+ one shot learning. In Proceedings of the Annual Conference on Neural Information Processing Systems, pages 3630-3638.
375
+ Ivan Vulić, Inigo Casanueva, Georgios Spithourakis, Avishek Mondal, Tsung-Hsien Wen, and Pawel Budzianowski. 2022. Multi-label intent detection via contrastive task specialization of sentence encoders. In EMNLP 2022, pages 7544-7559.
377
+ Zichen Wang, Bo Yang, Haonan Yue, and Zhenghao Ma. 2024. Fine-grained prototypes distillation for few-shot object detection. In AAAI 2024, pages 5859-5866.
378
+ Jason W. Wei and Kai Zou. 2019. EDA: easy data augmentation techniques for boosting performance on text classification tasks. In EMNLP-IJCNLP 2019, pages 6381-6387.
379
+ Ting-Wei Wu, Ruolin Su, and Biing-Hwang Juang. 2021. A label-aware BERT attention network for zero-shot multi-intent detection in spoken language understanding. In EMNLP, pages 4884-4896.
380
+ Yangjun Wu, Han Wang, Dongxiang Zhang, Gang Chen, and Hao Zhang. 2022. Incorporating instructional prompts into a unified generative framework for joint multiple intent detection and slot filling. In COLING 2022, pages 7203-7208.
381
+ Shuo Yang, Lu Liu, and Min Xu. 2021. Free lunch for few-shot learning: Distribution calibration. In ICLR 2021. OpenReview.net.
382
+ Feng Zhang, Wei Chen, Fei Ding, and Tengjiao Wang. 2023. Dual class knowledge propagation network for multi-label few-shot intent detection. In ACL 2023, pages 8605-8618.
383
+ Haoxing Zhang, Xiaofeng Zhang, Haibo Huang, and Lei Yu. 2022. Prompt-based meta-learning for few-shot text classification. In EMNLP 2022, pages 1342-1357.
384
+ Qi Zhu, Kaili Huang, Zheng Zhang, Xiaoyan Zhu, and Minlie Huang. 2020. Crosswoz: A large-scale chinese cross-domain task-oriented dialogue dataset. Trans. Assoc. Comput. Linguistics, 8:281-295.
385
+ Zhihong Zhu, Xuxin Cheng, Hongxiang Li, Yaowei Li, and Yuexian Zou. 2024. Dance with labels: Dual-heterogeneous label graph interaction for multi-intent spoken language understanding. In WSDM, pages 1022-1031.
386
+
387
+ # A Dataset Re-annotation Details
388
+
389
+ We re-annotate the two public multi-domain task-oriented dialogue datasets, i.e., CrossWOZ (Zhu et al., 2020) and JMultiWOZ (Ohashi et al., 2024), into multi-label form. Specifically, in order to align with the setting of the multi-label intent detection task, we expand the basic user intent annotations of the original task-oriented dialogue datasets into the form of multiple labels. Each utterance in the two non-English multi-domain task-oriented dialogue datasets is originally labeled with one simple user intent, but it often expresses more than one intent. The entire re-annotation process consists of three stages: first, we follow the existing multi-label intent datasets (Qin et al., 2020; Rastogi et al., 2020) to determine multiple, more precise intent labels; then, we re-annotate the two public multi-domain task-oriented dialogue datasets; finally, we carefully review the re-annotated data.
390
+
391
+ In our re-annotation process, we first identify the domains of the FS-MLID task in the two non-English datasets. The principle is to make the domains of the two datasets as similar as possible so as to evaluate the generalizability of our method across different languages. For the CrossWOZ dataset, the three re-annotated domains are At (Attraction), Ho (Hotel), and Re (Restaurant). For the JMultiWOZ dataset, the three re-annotated domains are Sh (Shopping), Ho (Hotel), and Re (Restaurant). Due to the constraints imposed by the length of user utterances, the intents expressed by users in the two non-English datasets are not as diverse as those in StanfordLU. This leads to fewer label classes being identified in our study compared to StanfordLU. For the two non-English datasets, we engage three native speakers of each dataset's language as annotators to re-annotate the sentences with multi-label annotations. This process spans five days. For the CrossWOZ dataset, a total of 8697 user utterances are re-annotated, and for the JMultiWOZ dataset, a total of 8076 utterances are re-annotated. Finally, the re-annotated results undergo a manual review by two proficient speakers of each language to ensure accuracy.
392
+
393
+ # B LLMs for Multilingual FS-MLID Tasks
394
+
395
+ LLMs Prompt Template in N-way 1-shot Setting Given a target user intent list from task-oriented dialogue and a user utterance, please identify all intents behind the user utterance. Note that the setting of this task conforms to the $N$-way 1-shot setting,
396
+
397
+ which includes two stages: meta-training and meta-testing. In the meta-training phase, there are 200 few-shot episodes from the source domain. In the meta-testing phase, there are 50 few-shot episodes from the target domain. Each episode contains a support set and a query set, and the query set size is 32.
398
+
399
+ Target user intent list of source domain:
+
+ <1>: <User intent 1>
+ <2>: <User intent 2>
+ ...
+ $<N_s>$: <User intent $N_s$>
+
+ Episodes in meta-training phase from source domain (each episode contains a support set and a query set):
+
+ <Episode 1>: {"support": {
+ <Utterance 1>: <User intent 1 of Utterance 1>; <User intent 2 of Utterance 1>; ...
+ <Utterance 2>: <User intent 1 of Utterance 2>; <User intent 2 of Utterance 2>; ...
+ ......}
+ "query": {
+ <Utterance 1>: <User intent 1 of Utterance 1>; <User intent 2 of Utterance 1>; ...
+ <Utterance 2>: <User intent 1 of Utterance 2>; <User intent 2 of Utterance 2>; ...
+ ...
+ <Utterance 32>: <User intent 1 of Utterance 32>; <User intent 2 of Utterance 32>; ...} };
+
+ <Episode 2>: {"support": {
+ <Utterance 1>: <User intent 1 of Utterance 1>; <User intent 2 of Utterance 1>; ...
+ <Utterance 2>: <User intent 1 of Utterance 2>; <User intent 2 of Utterance 2>; ...
+ ......}
+ "query": {
+ <Utterance 1>: <User intent 1 of Utterance 1>; <User intent 2 of Utterance 1>; ...
+ <Utterance 2>: <User intent 1 of Utterance 2>; <User intent 2 of Utterance 2>; ...
+ ...
+ <Utterance 32>: <User intent 1 of Utterance 32>; <User intent 2 of Utterance 32>; ...} };
+
+ ...
+
+ <Episode 200>: {"support": {
+ <Utterance 1>: <User intent 1 of Utterance 1>; <User intent 2 of Utterance 1>; ...
+ <Utterance 2>: <User intent 1 of Utterance 2>; <User intent 2 of Utterance 2>; ...
+ ......}
+ "query": {
+ <Utterance 1>: <User intent 1 of Utterance 1>; <User intent 2 of Utterance 1>; ...
+ <Utterance 2>: <User intent 1 of Utterance 2>; <User intent 2 of Utterance 2>; ...
+ ...
+ <Utterance 32>: <User intent 1 of Utterance 32>; <User intent 2 of Utterance 32>; ...} }
+
+ Target user intent list of target domain:
+
+ <1>: <User intent 1>
+ <2>: <User intent 2>
+ ...
+ $<N_t>$: <User intent $N_t$>
+
+ Episodes in meta-testing phase from target domain (each episode contains a support set and a query set):
+
+ <Episode 1>: {"support": {
+ <Utterance 1>: <User intent 1 of Utterance 1>; <User intent 2 of Utterance 1>; ...
+ <Utterance 2>: <User intent 1 of Utterance 2>; <User intent 2 of Utterance 2>; ...
+ ......}
+ "query": {
+ <Utterance 1>: <User intent ID>; <User intent ID>; ...
+ <Utterance 2>: <User intent ID>; <User intent ID>; ...
+ ...
+ <Utterance 32>: <User intent ID>; <User intent ID>; ...} };
+
+ <Episode 2>: {"support": {
+ <Utterance 1>: <User intent 1 of Utterance 1>; <User intent 2 of Utterance 1>; ...
+ <Utterance 2>: <User intent 1 of Utterance 2>; <User intent 2 of Utterance 2>; ...
+ ......}
+ "query": {
+ <Utterance 1>: <User intent ID>; <User intent ID>; ...
+ <Utterance 2>: <User intent ID>; <User intent ID>; ...
+ ...
+ <Utterance 32>: <User intent ID>; <User intent ID>; ...} };
+
+ ...
+
+ <Episode 50>: {"support": {
+ <Utterance 1>: <User intent 1 of Utterance 1>; <User intent 2 of Utterance 1>; ...
+ <Utterance 2>: <User intent 1 of Utterance 2>; <User intent 2 of Utterance 2>; ...
+ ......}
+ "query": {
+ <Utterance 1>: <User intent ID>; <User intent ID>; ...
+ <Utterance 2>: <User intent ID>; <User intent ID>; ...
+ ...
+ <Utterance 32>: <User intent ID>; <User intent ID>; ...} }
534
+
535
+ Experimental Results and Analysis We conduct experiments using gpt-3.5-turbo$^2$ on the two non-English datasets, CrossWOZ and JMultiWOZ. We input the data and instructions into gpt-3.5-turbo (GPT-3.5) according to the $N$-way 1-shot setting, where $N$ is the number of classes in each domain. To simulate the meta-learning paradigm of few-shot learning, we divide the entire process into a meta-training phase and a meta-testing phase. All few-shot episode construction is consistent with our main experiments: we construct 200 few-shot episodes for each source domain in the meta-training phase and 50 few-shot episodes for each target domain in the meta-testing phase, and the size of the query set is 32. We provide GPT-3.5 with the user intent list of each source or target domain and the task descriptions, and the requested response is the intent IDs of each query user utterance.
536
+
537
+ The experimental results on CrossWOZ and JMultiWOZ are shown in Figure 5. From the results, it can be observed that CFPL performs much better than GPT-3.5, which indicates the superiority of our method. During the experiments, we observe that some of the responses given by GPT-3.5 are blank, which has a substantial impact on accuracy. We also observe that the capability of GPT-3.5 to predict the number of labels per utterance still requires improvement. It is intuitive to observe that the performance of GPT-3.5 on the FS-MLID task depends more on the language than on the specific domain. Specifically, the average F1 scores of GPT-3.5 are $68.54\%$ on CrossWOZ and $62.48\%$ on JMultiWOZ. However, for different domains within the same dataset, the maximum and minimum F1 scores differ by only $1.2\%$ on the CrossWOZ dataset and $2.2\%$ on the JMultiWOZ dataset. Inspired by Huang et al. (2023) and combining this with our experimental results, we conclude that LLMs do not have equal capability in handling all languages, leading to imbalanced performance across languages. This further underscores the importance of dealing with intent detection from a multilingual perspective.
538
+
539
+ ![](images/2514c4bb2bb204f71df4e8ca6d4107f5b55f58f990113d395de3093d35e67a73.jpg)
540
+ Figure 5: Results of LLMs for multilingual FS-MLID tasks.
541
+
542
+ ![](images/8020829c1e61d1cd461a2238a5b13c99879602607651df30a3458702c5a95081.jpg)
543
+
544
+ <table><tr><td>Datasets</td><td>CTLR (train)</td><td>CTLR (test)</td><td>DCKPN (train)</td><td>DCKPN (test)</td><td>CFPL (train)</td><td>CFPL (test)</td><td>Cross-Lingual (CFPL)</td></tr><tr><td>StanfordLU (1-shot)</td><td>25</td><td>10</td><td>24</td><td>6</td><td>86</td><td>26</td><td>/</td></tr><tr><td>StanfordLU (5-shot)</td><td>32</td><td>13</td><td>28</td><td>5</td><td>87</td><td>27</td><td>/</td></tr><tr><td>CrossWOZ (1-shot)</td><td>44</td><td>11</td><td>36</td><td>7</td><td>128</td><td>27</td><td>350</td></tr><tr><td>CrossWOZ (5-shot)</td><td>56</td><td>10</td><td>38</td><td>8</td><td>130</td><td>28</td><td>350</td></tr></table>
545
+
546
+ Table 6: Total training and inference time (wall-clock time in seconds).
547
+
548
+ # C Error Rate Analysis
549
+
550
+ The corresponding class label sets for the schedule, navigate and weather domains are as follows: ['request_location', 'inform', 'query', 'confirm', 'appreciate', 'command_appointment', 'remind', 'request_information', 'list_schedule', 'request_time', 'request_party', 'request_agenda', 'schedule', 'request_date'], ['request_poi', 'inform', 'query', 'confirm', 'appreciate', 'request_address', 'request_route', 'request_traffic', 'show_in_screen', 'navigate'], and ['request_low_temperature', 'request_time', 'appreciate', 'request_temperature', 'request_weather', 'inform', 'request_high_temperature', 'query']. All three domains contain semantically highly similar class labels, such as 'request_time' and 'request_date' in the schedule domain. Moreover, there are also class labels with highly similar structures and meanings, such as 'list_schedule' and 'schedule' in the schedule domain, and 'request_poi' and 'request_address' in the navigate domain. Our error rate on similar categories is significantly lower than that of the strong baseline DCKPN. Taking the schedule domain as an example, the probability that a 'request_time' sample is misclassified as 'request_date' is $6.7\%$ for our method CFPL, whereas DCKPN has a higher error rate of $9.1\%$. The class label with the highest error rate in our method is 'query' in all three domains.
551
+
552
+ Its error rate is $16.2\%$ in the schedule domain, $12.1\%$ in the navigate domain, and $8.3\%$ in the weather domain. The underlying reason is that users express questions in diverse forms, so the method is weaker at classifying 'query'. We also observe that in the schedule domain, the error rate of 'schedule' is very high at $14.9\%$; the reason is that the label set contains several labels with similar meanings (e.g., 'request_agenda', 'list_schedule'), which interferes with the classification of data belonging to 'schedule'.
553
+
554
+ # D Runtime Analysis
555
+
556
+ CFPL involves several non-trivial steps, which impact the total wall-clock time. The total training and inference time is summarized in Table 6. Our method requires more time than the baselines, particularly in the training phase; the increase can be attributed to the multiple components of CFPL. Although CFPL requires more time, it provides more accurate and robust results, especially in cross-lingual scenarios.
557
+
558
+ Our method involves several self-attention and cross-attention operations during inference, which take up a large share of the runtime. In future work, we plan to speed up inference by optimizing the attention mechanisms. This will involve reducing both the computational and memory overheads associated with attention calculations, as well as minimizing the memory access costs related to IO operations. By focusing on these areas, our method could achieve significant speed improvements.
acoarsetofineprototypelearningapproachformultilabelfewshotintentdetection/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3f3250be31bd0b8be5280ec429c9b2b387bb7151703815a28d99c9f6d2bc3b9b
3
+ size 655110
acoarsetofineprototypelearningapproachformultilabelfewshotintentdetection/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4f3e2e5becd78758e9ccf21a4a38f50a42ca6f04a48c40b0f60cdb2a7f94b1fb
3
+ size 569599
acomprehensivesurveyofhallucinationinlargelanguageimagevideoandaudiofoundationmodels/4a2805cc-d2ed-426a-8913-2c39dd75d239_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:29bea8923e828c58e5b40a43e7d6eafaaa49b217a9af0eac86afa1f43935aa2d
3
+ size 106631
acomprehensivesurveyofhallucinationinlargelanguageimagevideoandaudiofoundationmodels/4a2805cc-d2ed-426a-8913-2c39dd75d239_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:da7a399e76807a768af2a0a5194dd88a26883e23648e0e277b89099dc38ab8ae
3
+ size 148765
acomprehensivesurveyofhallucinationinlargelanguageimagevideoandaudiofoundationmodels/4a2805cc-d2ed-426a-8913-2c39dd75d239_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a0e0b284148a88d8d1c0b3ab6c81fab215db0d75607e97628dd0000ec9531fa5
3
+ size 1857625
acomprehensivesurveyofhallucinationinlargelanguageimagevideoandaudiofoundationmodels/full.md ADDED
@@ -0,0 +1,424 @@
 
 
 
 
1
+ # A Comprehensive Survey of Hallucination in Large Language, Image, Video and Audio Foundation Models
2
+
3
+ Pranab Sahoo<sup>1</sup>, Prakash Meharia<sup>1</sup>, Akash Ghosh<sup>1</sup>, Sriparna Saha<sup>1</sup>, Vinija Jain<sup>2</sup>, and Aman Chadha<sup>2,3*</sup>
4
+
5
+ $^{1}$ Department of Computer Science and Engineering, Indian Institute of Technology Patna
6
+
7
+ $^{2}$ Stanford University, $^{3}$ Amazon GenAI
8
+
9
+ pranab_2021cs25@iitp.ac.in, prabhash_2211cs12@iitp.ac.in, sriparna@iitp.ac.in, hi@vinija.ai, hi@aman.ai
10
+
11
+ # Abstract
12
+
13
+ The rapid advancement of foundation models (FMs) across language, image, audio, and video domains has shown remarkable capabilities in diverse tasks. However, the proliferation of FMs brings forth a critical challenge: the potential to generate hallucinated outputs, particularly in high-stakes applications. The tendency of foundation models to produce hallucinated content arguably represents the biggest hindrance to their widespread adoption in real-world scenarios, especially in domains where reliability and accuracy are paramount. This survey paper presents a comprehensive overview of recent developments that aim to identify and mitigate the problem of hallucination in FMs, spanning text, image, video, and audio modalities. By synthesizing recent advancements in detecting and mitigating hallucination across various modalities, the paper aims to provide valuable insights for researchers, developers, and practitioners. Essentially, it establishes a clear framework encompassing definition, taxonomy, and detection strategies for addressing hallucination in multimodal foundation models, laying the foundation for future research and development in this pivotal area.
14
+
15
+ # 1 Introduction
16
+
17
+ The rapid progress in large-scale foundation models (FMs), spanning language, image, audio, and video domains, has revolutionized the field of artificial intelligence (AI). Models such as GPT-3 (Brown et al., 2020), MiniGPT-4 (Zhu et al., 2023), AudioLLM (Borsos et al., 2023), and LaViLa (Zhao et al., 2022) have demonstrated remarkable abilities across diverse tasks, from text generation to multimodal understanding. As these models find wider applications in critical domains, there is a growing imperative to comprehend and alleviate their propensity to produce hallucinated outputs.
18
+
19
+ # 1.1 Hallucination
20
+
21
+ Hallucination refers to instances where FMs generate plausible but factually incoherent or absurd content lacking proper context understanding (Xu et al., 2024b). These hallucinated outputs can range from minor inaccuracies to completely imaginary content, manifesting across text, images, videos, and audio generated by large models. Root causes may include biases in training data, limited access to up-to-date information, or inherent model constraints in contextual comprehension and response generation. Deploying such powerful models without addressing hallucinations risks perpetuating misinformation, incorrect conclusions, and adverse effects in critical applications. Mitigating hallucinations has become an active research focus, with strategies like fine-tuning with domain-specific data, using diverse, robust training data, and developing improved evaluation metrics to identify and reduce hallucination tendencies.
22
+
23
+ # 1.2 Types of Hallucination
24
+
25
+ Hallucinations in large FMs can manifest in various forms and can be categorized as follows. Contextual disconnection (Zhang et al., 2023d): the output produced by a model across different modalities is inconsistent or out of sync with the context that the user or the input data provided or expected. Semantic distortion (Tjio et al., 2022): an inconsistency or error in generated content where the semantics or underlying meaning of the input is misrepresented or altered in the output. Content hallucination (Moernaut et al., 2018): generated features or elements that are either unreal given the context or absent from the input data. Factual inaccuracy (Zhang et al., 2023d): information that is inaccurate, deceptive, or at odds with known facts appears in the generated output. Figure 1 illustrates these types of hallucination with examples.
26
+
27
+ Figure 1: Illustration of hallucination types. Explanations of hallucinations are indicated as hallucinated elements (HE) and are highlighted in bold red text in the original figure.
+
+ Type A: Context Hallucination. Prompt: How does yoga help with reducing stress and anxiety? Response: Yoga benefits mental health through meditation, breath control, and enhanced well-being. Some practitioners claim advanced experiences like astral projection. HE: The shift in the model's response from the benefits of yoga for mental health to a tangent about astral projection represents a context hallucination.
+
+ Type B: Semantic Hallucination. Prompt: How does a car engine work? Response: A car engine works by attributing its power to miniature unicorns harnessing stardust to propel the vehicle through enchanted mechanisms. HE: The content is purely a semantic hallucination, as car engines operate through fuel combustion and mechanical processes, not magical creatures or stardust.
+
+ Type C: Content Hallucination. Prompt: What are some solutions to mitigate the effects of climate change? Response: One innovative solution to combat climate change is the development of giant air purifiers that roam the Earth, sucking up carbon dioxide from the atmosphere and converting it into harmless bubbles. HE: The model offers entirely imaginary solutions that are not grounded in scientific reality, representing content hallucination, where the model generates creative but entirely fictional content unrelated to feasible climate-change solutions.
+
+ Type D: Factual Hallucination. Prompt: What was the outcome of the Battle of Waterloo? Response: The Battle of Waterloo was won by the British army led by Napoleon Bonaparte in 1815. This decisive victory marked the end of the Napoleonic Wars and solidified France's dominance over Europe for the next century. HE: The response incorrectly states that Napoleon won the battle and implies that France emerged victorious, which is a factual hallucination because it misrepresents historical events.
58
+
60
+
61
+ # 1.3 Motivation and Contributions
62
+
63
+ Most of the existing survey papers have explored hallucination in the context of large language models (LLMs) (Huang et al., 2023), (Chen et al., 2024b), (Tonmoy et al., 2024). Recent studies have shown that hallucination can also occur in vision, audio, and video foundation models, highlighting the need for a comprehensive understanding of this challenge across multiple modalities (Liu et al., 2024a), (Sahoo et al., 2024b), (Rawte et al., 2023). To address this gap, the present survey aims to provide a holistic and multimodal perspective on the hallucination challenge in FMs across language, vision, video, and audio domains. It serves as a vital resource for researchers and practitioners, aiding in the development of more robust AI solutions. Additionally, it includes a detailed taxonomy diagram in Fig. 2 and a summarized Table 1 (refer to section 9.1 of the appendix) illustrating recent advancements across different modalities. The contributions of this survey paper are as follows:
64
+
65
+ - This survey provides a comprehensive overview of hallucination detection and mitigation techniques designed specifically for multimodal foundation models, extending beyond the typical focus solely on language models.
66
+
67
+ - It establishes a precise definition and structured taxonomy of hallucination in the context of large-scale foundation models.
68
+ - It presents the various detection and mitigation strategies that have been proposed to address the hallucination problem in a multimodal setting.
69
+ - It highlights the open challenges and future research directions in this critical area.
70
+
71
+ # 2 Hallucination in Large Language Models
72
+
73
+ Despite the progress of LLMs, a notable challenge persists in their proneness to hallucinate, impeding their practical implementation. For instance, the illustration in Figure 3 exemplifies the generated response by the LLM, showing indications of hallucination.
74
+
75
+ # 2.1 Hallucination Detection and Mitigation
76
+
77
+ Detecting hallucinations in LLMs is crucial for ensuring the credibility and reliability of their results, especially in scenarios requiring factual correctness. Existing fact-checking methods often rely on complex modules or external databases, requiring either output probability distributions or interfacing with external sources. The SelfCheck-GPT method (Manakul et al., 2023) offers a zero-resource black-box solution for detecting hallucinations in any LLM without relying on external resources. This method operates on the principle that an LLM familiar with a topic will produce consistent and comparable facts in its responses. In contrast, randomly sampled responses from an unfamiliar topic are likely to contain contradicting and hallucinated facts. Continuing the exploration of methods for passage-level hallucination detection, Yang et al. (2023) proposed a novel self-check approach based on reverse validation, aiming to automatically identify factual errors without external resources. They introduced a benchmark, Passage-level Hallucination Detection (PHD), generated using ChatGPT and annotated by human experts to assess different methods. Assessing the accuracy of long text generated by LLMs is challenging because it often contains both accurate and inaccurate information, making simple quality judgments insufficient. To address this, Min et al. (2023) introduced FACTSCORE (Factual Precision
78
+
79
+ ![](images/9b45287add0c0b29e92556ac14fe018f97fa570987ef0851ee58c16fd84e708b.jpg)
80
+ Figure 2: Taxonomy of hallucination in large foundation models, organized around detection and mitigation techniques.
81
+
82
+ in Atomicity Score), a new evaluation method that breaks down text into individual facts and measures their reliability. Huang and Chang (2023) introduced a unique strategy to mitigate hallucination risks in LLMs by drawing parallels with established web systems. They identified the absence of a "citation" mechanism in LLMs, i.e., acknowledging or referencing sources or evidence, as a significant gap.
83
+
85
+
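+ As a rough sketch of the sampling-based consistency idea behind SelfCheckGPT: the actual method scores sentences with BERTScore, question-answering, or NLI variants rather than the crude token overlap used here, and `generate` is a hypothetical stand-in for sampling from any black-box LLM.
+
+ ```python
+ from typing import Callable, List
+
+ def token_overlap(a: str, b: str) -> float:
+     """Crude similarity: Jaccard overlap of lowercased token sets."""
+     sa, sb = set(a.lower().split()), set(b.lower().split())
+     return len(sa & sb) / max(len(sa | sb), 1)
+
+ def selfcheck_scores(answer_sentences: List[str],
+                      generate: Callable[[], str],
+                      n_samples: int = 5) -> List[float]:
+     """Higher score = less supported by resampled outputs = more likely hallucinated."""
+     samples = [generate() for _ in range(n_samples)]
+     scores = []
+     for sent in answer_sentences:
+         # A fact the model "knows" should recur across stochastic samples.
+         support = max(token_overlap(sent, s) for s in samples)
+         scores.append(1.0 - support)
+     return scores
+ ```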
86
+ Addressing the need to identify factual inaccuracies in LLM-generated content, Rawte et al. (2024b) developed a multi-task learning (MTL) framework, integrating advanced long-text embeddings like e5-mistral-7b-instruct along with models such as GPT-3, SpanBERT, and RoFormer.
87
+
88
+ ![](images/315bdcdc2181adc988890dc159210c55d99c97576213487ece4c21f09296c4d7.jpg)
+
+ Figure 3: LLM responses showing the types of hallucinations, highlighted in red, green, and blue (Zhang et al., 2023d). User input: "Can you recommend a delicious recipe for dinner?" LLM response: "Yes, here is a delicious recipe for lunch. So how about fried chicken with mashed potatoes? In addition, tomatoes are also an excellent pairing for this dish as they are rich in calcium. Enjoy this steak!" Hallucination explanation: input-conflicting, the user wants a recipe for dinner while the LLM provides one for lunch; context-conflicting, steak has not been mentioned in the preceding context; fact-conflicting, tomatoes are not in fact rich in calcium.
+
+ ![](images/3d5a25d981cafa0baa103c703d4753b66645b97b1631b4d81ee2bdbfc160d870.jpg)
+
+ Figure 4: Four IVL-Hallu examples in the Prompted Hallucination Dataset (PhD) (Liu et al., 2024b), including visuals, the matching question-answer pairs, and hallucination elements (HE). Words annotated in red do not exist or do not match within the image, while words annotated in green have correspondences within the image. Question, Answer, and Statement are denoted by the letters Q, A, and S, respectively.
109
+
110
+ This MTL approach demonstrated a $40\%$ average improvement in accuracy on the FACTOID benchmark compared to leading textual-entailment methods. Hallucination mitigation efforts have predominantly relied on empirical methods, leaving uncertainty about whether hallucination can be eliminated entirely. To tackle this challenge, Xu et al. (2024b) introduced a formal framework defining hallucination as inconsistencies between computable LLMs and a ground-truth function. Through this framework, the study examines existing hallucination mitigation strategies and their practical implications for real-world LLM deployment. Rawte et al. (2024c) introduced the Sorry, Come Again (SCA) prompting technique to address hallucination in contemporary LLMs. SCA enhances comprehension through optimal paraphrasing and the injection of [PAUSE] tokens to delay LLM generation. It analyzes linguistic nuances in prompts and their impact on hallucinated generation, emphasizing how prompts with lower readability, formality, or concreteness pose challenges.
111
+
112
+ Benchmark Evaluation: In certain instances, LLMs engage in a phenomenon termed "hallucination snowballing," where they fabricate false claims to rationalize prior hallucinations despite acknowledging their inaccuracy (Li et al., 2023c).
113
+
114
+ ![](images/cbb74c9ce87f3bf80efa77eb2d8b261f7dfbfd76a896c5eaa2b7202fbae6747c.jpg)
+
+ Panel content for the four IVL-Hallu types (cf. Figure 4). Object Hallucination: Q: Is there a bike in the image? A: No. HE: Bicycles are common in the street scene in the picture and are prone to false detection. Attribute Hallucination: Q: Is there a red car in the image? A: No. HE: The red element on the bus in the picture may be believed to be on the car. Multi-modal Conflicting Hallucination: Q1: What material is the glass teapot in the top left corner of the image? (Unanswerable) S1: The teapot is definitely made of glass. S2: The teapot might be made of glass. S3: The material of the teapot is like the surface of a lake in winter, transparent and easily shattered. HE: The teapot in the upper left corner is not made of glass; the cups around it are made of glass. Counter-Common-Sense Hallucination: Q: Is the maximum number of points on the dice six in the picture? A: No. HE: Normally, the maximum number of points is six, and asking such a common-sense question may trigger the model to respond directly from existing knowledge.
137
+
138
+ To empirically explore this phenomenon, Zhang et al. (2023a) devised three question-answering datasets spanning diverse domains, wherein ChatGPT and GPT-4 often furnish inaccurate answers alongside explanations featuring at least one false claim. Significantly, the study suggests that the language model can discern these false claims as incorrect. Another benchmark dataset, FactCHD (Chen et al., 2023b), was introduced to detect fact-conflicting hallucinations within intricate inferential contexts. It encompasses a range of datasets capturing different factuality patterns and integrates fact-based evidence chains to improve assessment accuracy.
139
+
140
+ # 3 Hallucination in Large Vision-Language Models
141
+
142
+ Large Vision-Language Models (LVLMs) have garnered significant attention in the AI community for their ability to handle visual and textual data simultaneously (Ghosh et al., 2024c), (Ghosh et al., 2024b), (Sahoo et al., 2024a), (Ghosh et al., 2024d), (Ghosh et al., 2024a). Nonetheless, similar to LLMs, LVLMs also confront the issue of hallucination.
143
+
144
+ Figure 4 illustrates an example of visual hallucination.
145
+
146
+ # 3.1 Hallucination Detection and Mitigation
147
+
148
+ Dai et al. (2022) investigated the issue of object hallucinations in Vision-Language Pre-training (VLP) models, where textual descriptions generated by these models contain non-existent or inaccurate objects based on input images. Li et al. (2023g) revealed widespread and severe object hallucination issues and suggested that visual instructions may influence hallucination, noting that objects frequently appearing in visual instructions or co-occurring with image objects are more likely to be hallucinated. To enhance the evaluation of object hallucination, they introduced a polling-based query method called POPE, which demonstrates improved stability and flexibility in assessing object hallucination. The absence of a standardized metric for assessing object hallucination has hindered progress in understanding and addressing this issue. To address this gap, Lovenia et al. (2023) introduced NOPE (Negative Object Presence Evaluation), a novel benchmark for evaluating object hallucination in vision-language (VL) models through visual question answering (VQA). Utilizing LLMs, the study generates 29.5k synthetic negative pronoun (NegP) data for NOPE. It extensively evaluates the performance of 10 VL models in discerning the absence of objects in visual questions, alongside their standard performance on visual questions across nine other VQA datasets. Most existing efforts focused primarily on object hallucination, overlooking the diverse types of LVLM hallucinations. Liu et al. (2024b) delved into Intrinsic Vision-Language Hallucination (IVL-Hallu) and proposed several novel IVL-Hallu tasks categorized into four types: attribute, object, multi-modal conflicting, and counter-common-sense hallucination. To assess and explore IVL-Hallu, they introduced a challenging benchmark dataset and conducted experiments on five LVLMs, revealing their incapacity to effectively address the proposed IVL-Hallu tasks. To mitigate object hallucination in LVLMs without resorting to costly training or API reliance, Zhao et al. (2024) introduced MARINE, which is both training-free and API-free. MARINE enhances the visual understanding of LVLMs by integrating existing open-source vision models and utilizing classifier-free guidance to integrate object-grounding features, thereby improving the
149
+
150
+ precision of the generated outputs. Evaluations across six LVLMs reveal MARINE's effectiveness in reducing hallucinations and enhancing output detail, validated through assessments using GPT-4V. Deng et al. (2024) introduced CLIP-Guided Decoding (CGD), a training-free approach to reduce object hallucination at decoding time.
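+ A simplified sketch of POPE-style polling is given below; `ask_model` is a hypothetical yes/no VQA interface, and the negative sampling shown is the random variant (the original also uses popular and co-occurrence-based sampling):
+
+ ```python
+ import random
+ from typing import Callable, List, Tuple
+
+ def build_pope_queries(present: List[str], candidate_pool: List[str],
+                        n_neg: int) -> List[Tuple[str, bool]]:
+     """Yes/no presence questions: positives from annotations, sampled negatives."""
+     negatives = random.sample([o for o in candidate_pool if o not in present], n_neg)
+     queries = [(f"Is there a {obj} in the image?", True) for obj in present]
+     queries += [(f"Is there a {obj} in the image?", False) for obj in negatives]
+     return queries
+
+ def pope_accuracy(queries: List[Tuple[str, bool]],
+                   ask_model: Callable[[str], bool]) -> float:
+     """Fraction of presence questions the model answers correctly."""
+     correct = sum(ask_model(q) == gold for q, gold in queries)
+     return correct / len(queries)
+ ```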
151
+
152
+ HalluciDoctor (Yu et al., 2023a) tackled hallucinations in Multi-modal Large Language Models (MLLMs) by using human error detection to identify and eliminate various types of hallucinations. By rebalancing data distribution via counterfactual visual instruction expansion, they successfully mitigate $44.6\%$ of hallucinations while maintaining competitive performance. Despite proficiency in visual semantic comprehension and meme humor, MLLMs struggle with chart analysis and understanding. Addressing this, Xu et al. (2023b) proposed ChartBench, a benchmark assessing chart comprehension. ChartBench exposes MLLMs' limited reasoning with complex charts, prompting the need for novel evaluation metrics like Acc+ and a handcrafted prompt, ChartCoT. Zhang et al. (2023b) introduced InternLM-XComposer, an LVLM designed to address the challenge of hallucination in image-text comprehension and composition. The performance of InternLM-XComposer's text-image composition is evaluated through a robust procedure involving both human assessment and comparison to GPT4-Vision, with the model demonstrating competitive performance against solutions like GPT4-V and GPT3.5. Wang et al. (2024b) proposed the Instruction Contrastive Decoding (ICD) method to reduce hallucinations during LVLM inference. A recent study (Huang et al., 2024a) proposed a novel decoding approach that introduces an over-confidence penalty and a retrospective allocation strategy to mitigate hallucination issues without requiring additional data or retraining.
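+ A generic sketch of the contrastive-decoding idea underlying such ICD-style methods follows: logits obtained under a perturbed (distraction) condition are subtracted from the standard condition, so tokens favored only under the perturbation are down-weighted. The tensors and the alpha value are illustrative placeholders, not the authors' exact formulation:
+
+ ```python
+ import torch
+
+ def contrastive_logits(logits_std: torch.Tensor,
+                        logits_perturbed: torch.Tensor,
+                        alpha: float = 1.0) -> torch.Tensor:
+     """Amplify the standard distribution and penalize the perturbed one."""
+     return (1 + alpha) * logits_std - alpha * logits_perturbed
+
+ vocab = 32000
+ logits_std = torch.randn(1, vocab)   # next-token logits, standard instruction
+ logits_pert = torch.randn(1, vocab)  # logits under a disturbed instruction
+ next_token = contrastive_logits(logits_std, logits_pert).argmax(dim=-1)
+ ```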
153
+
154
+ # 3.2 Benchmark Evaluation
155
+
156
+ The current methods of developing LVLMs rely heavily on annotated benchmark datasets, which can exhibit domain bias and limit model generative capabilities. To address this, Li et al. (2023f) proposed a novel data collection approach that synthesizes images and dialogues synchronously for visual instruction tuning, yielding a large dataset of image-dialogue pairs and multi-image instances. Huang et al. (2024b) introduced VHTest, a benchmark
157
+
158
+ dataset with 1,200 diverse visual hallucination (VH) instances across 8 VH modes. Evaluation of three SOTA MLLMs showed varying performance, with GPT-4V exhibiting lower hallucination than MiniGPT-v2. Rawte et al. (2024a) categorized visual hallucination in VLMs into eight orientations and introduced a dataset of 2,000 samples covering these types. They proposed three main categories of methods to mitigate hallucination: data-driven approaches, training adjustments, and post-processing techniques. Wang et al. (2024a) proposed the Visual Instruction Generation and Correction (VIGC) framework to address the scarcity of high-quality instruction-tuning data for MLLMs. VIGC enables MLLMs to generate diverse instruction-tuning data while iteratively refining its quality through Visual Instruction Correction (VIC), mitigating hallucination risks. The framework produces diverse, high-quality data for fine-tuning models, validated through evaluations, improving benchmark performance, and overcoming language-only data limitations.
159
+
160
+ # 4 Hallucinations in Large Video Models
161
+
162
+ Large video models (LVMs) represent a significant advancement, allowing for processing video data at scale. Despite their potential for various applications like video understanding and generation, LVMs face challenges with hallucinations, where misinterpretations of video frames can result in artificial or inaccurate visual data. Figure 5 demonstrates the instances of hallucination observed in LVMs.
163
+
164
+ # 4.1 Hallucination Detection and Mitigation
165
+
166
+ The intricate task of dense video captioning, involving the creation of descriptions for multiple events within a continuous video, necessitates a thorough understanding of video content and contextual reasoning to ensure accurate description generation. However, this endeavor faces numerous challenges, potentially resulting in instances of inaccuracies and hallucinations (Iashin and Rahtu, 2020), (Suin and Rajagopalan, 2020). Traditional methods detect event proposals first, then caption subsets, risking hallucinations due to overlooking temporal dependencies. To address this, Mun et al. (2019) introduced a novel approach to modeling temporal dependencies and leveraging context for coherent storytelling. By integrating an event sequence generation network and a sequential video captioning
167
+
168
+ network trained with reinforcement learning and two-level rewards, the model captures contextual information more effectively, yielding coherent and accurate captions while minimizing the risk of hallucinations. Liu and Wan (2023) introduced a novel weakly-supervised, model-based factuality metric called FactVC, which outperforms previous metrics. Furthermore, they provided two annotated datasets to promote further research in assessing the factuality of video captions. Wu and Gao (2023) proposed a context-aware model that incorporates information from past and future events to influence the description of the current event conditionally. Their approach utilizes a robust pre-trained context encoder to encode information about the surrounding context events, which is then integrated into the captioning module using a gate-attention mechanism. Experimental findings on the YouCookII and ActivityNet datasets demonstrate that the proposed context-aware model outperforms existing context-aware and pre-trained models by a significant margin. To enhance dense video captioning, Zhou et al. (2024) introduced a streaming model comprising a memory module for long video handling and a streaming decoding algorithm enabling predictions before video completion. This approach notably boosts performance on dense video captioning benchmarks such as ActivityNet, YouCook2, and ViTT.
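+ The gate-attention fusion used by such context-aware captioners can be sketched as a learned gate deciding how much encoded context to mix into the current event representation; the dimensions and module below are illustrative, not the authors' exact architecture:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class GatedContextFusion(nn.Module):
+     """Learned gate mixing context features into the current event features."""
+     def __init__(self, dim: int):
+         super().__init__()
+         self.gate = nn.Linear(2 * dim, dim)
+
+     def forward(self, event: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
+         g = torch.sigmoid(self.gate(torch.cat([event, context], dim=-1)))
+         return g * context + (1 - g) * event  # element-wise gated mixture
+
+ fusion = GatedContextFusion(dim=256)
+ event_feat = torch.randn(4, 256)    # current-event features (placeholder)
+ context_feat = torch.randn(4, 256)  # encoded surrounding-event features
+ fused = fusion(event_feat, context_feat)
+ ```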
169
+
170
+ Video infilling and prediction tasks are crucial for assessing a model's ability to comprehend and anticipate the temporal dynamics within video sequences (Höppe et al., 2022). To address this, Himakunthala et al. (2023) introduced an inference-time challenge dataset whose keyframes are supplemented with unstructured dense captions and structured FAMOUS (Focus, Action, Mood, Objects, and Setting) scene descriptions, providing valuable contextual information to support the models' understanding of the video content. They employed various language models like GPT-3, GPT-4, and Vicuna with greedy decoding to mitigate hallucination risks. Prominent developments in video inpainting have been observed recently, especially in situations where explicit guidance like optical flow helps to propagate missing pixels across frames (Ouyang et al., 2021). However, difficulties and constraints arise from a lack of cross-frame information. Yu et al. (2023b) aimed to tackle the
171
+
172
+ ![](images/d40c80c6c0ea6cd7529fd962f9e0a85d53fa084fdf032c4686f18645f2f409d3.jpg)
173
+ Figure 5: A video featuring descriptions generated by the VLTinT model and ground truth (GT), with description errors highlighted in red italics (Chuang and Fazli, 2023).
174
+
175
+ GT: We then see one man climbing a sheer cliff.
176
+
177
+ VLTinT: He is talking to the camera and showing off his climbing wall.
178
+
179
+ ![](images/db2a80ca8e895d81a439f098a02b90f764413ac9bd1c1b5a1b11abae572b505f.jpg)
180
+
181
+ GT: The man then pours several liquids out into a glass, shakes it up, and then pours it into a glass with a lemon on top.
182
+
183
+ VLTinT: The man then drinks from a cup and pours it into a glass.
184
+
185
+ opposite issue, rather than depending on pixels from other frames. The suggested method presents a Deficiency-aware Masked Transformer (DMT), a dual-modality-compatible inpainting framework. This approach improves the handling of scenarios with incomplete information by pre-training an image inpainting model to serve as a prior for training the video model.
186
+
187
+ Understanding scene affordances, which involve potential actions and interactions within a scene, is crucial for comprehending images and videos. Kulal et al. (2023) introduced a method for realistically inserting people into scenes. The model seamlessly integrates individuals into scenes by deducing realistic poses based on the context and ensuring visually pleasing compositions. Chuang and Fazli (2023) introduced CLearViD, a transformer-based model that utilizes curriculum learning techniques to enhance performance. By adopting this approach, the model acquires more robust and generalizable features. Furthermore, CLearViD incorporates the Mish activation function to address issues like vanishing gradients, reducing the risk of hallucinations by introducing nonlinearity and non-monotonicity. Extensive experiments and ablation studies validate the effectiveness of CLearViD, with evaluations on ActivityNet Captions and YouCook2 datasets showcasing significant improvements over existing SOTA models in terms of diversity metrics.
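+ Mish itself is simple to state: mish(x) = x * tanh(softplus(x)). The snippet below computes it from this definition and checks it against PyTorch's built-in `torch.nn.Mish`:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def mish(x: torch.Tensor) -> torch.Tensor:
+     """Mish activation: smooth and non-monotonic, x * tanh(softplus(x))."""
+     return x * torch.tanh(F.softplus(x))
+
+ x = torch.linspace(-3, 3, 7)
+ print(mish(x))
+ print(torch.nn.Mish()(x))  # matches the built-in implementation
+ ```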
188
+
189
+ # 4.2 Benchmark Evaluation
190
+
191
+ Zhang et al. (2006) created a novel two-level hierarchical fusion method to hallucinate facial expression sequences from training video samples
192
+
193
+ using only one frontal face image with a neutral expression. To effectively train the system, they introduced a dataset specifically designed for facial expression hallucination, which included 112 video sequences covering four types of facial expressions (happy, angry, surprise, and fear) from 28 individuals, resulting in the generation of reasonable facial expression sequences in both the temporal and spatial domains with fewer artifacts. In the realm of video understanding, the development of end-to-end chat-centric systems has become a growing area of interest. Zhou et al. (2018) assembled the YouCook2 dataset, an extensive set of cooking videos with temporally localized and described procedural segments, to facilitate procedure learning tasks. Li et al. (2023d) introduced "VideoChat," a novel approach integrating video foundation models and LLMs through a learnable neural interface to enhance spatiotemporal reasoning, event localization, and causal relationship inference in video understanding. The researchers constructed a video-centric instruction dataset with detailed descriptions and conversations, emphasizing spatiotemporal reasoning and causal relationships. To counteract model hallucination, they employed a multi-step process to condense video descriptions into coherent narratives using GPT-4 and refined them to improve clarity and coherence. To explore the challenge of deducing scene affordances, Kulal et al. (2023) curated a dataset of 2.4M video clips, showcasing a variety of plausible poses that align with the scene context.
194
+
195
+ Figure 6: Audio hallucination examples for each class. Type A: involving hallucinations of both objects and actions; Type B: featuring accurate objects but hallucinated actions; Type C: displaying correct actions but hallucinated objects (Nishimura et al., 2024).
+
+ Type (A) examples: (A.1) a baby does not appear in the video; a second example where the sound is not the clock, but a man talking about it. Type (B) examples: (B.1) "I hear a woman playing a harp": the object "woman" is correct, but the action is not playing a harp (she is talking); (B.2) "I hear the sound of a chainsaw being used to cut wood": the object "chainsaw" is correct, but the action is not cutting the wood (it is put on the floor). Type (C) examples: (C.1) "I hear a trumpet playing a song": the action "playing" is correct, but the object is not a trumpet (it is a tuba); (C.2) "I hear a man speaking at a podium with a microphone in front of a red and white flag": the action "speaking" is correct, but a microphone does not appear.
231
+
232
+ # 5 Hallucinations in Large Audio Models
233
+
234
+ Large audio models (LAMs) have emerged as a powerful tool in the realm of audio processing and generation, with a wide range of applications like speech recognition, music analysis, audio synthesis, and captioning (Latif et al., 2023), (Ghosal et al., 2023). While demonstrating remarkable capabilities, LAMs are susceptible to hallucinations: anomalies ranging from generating unrealistic audio by combining fabricated snippets to injecting false information like quotes or facts into summaries. Additionally, they may fail to accurately capture the inherent features of audio signals, such as timbre, pitch, or background noise (Shen et al., 2023). Figure 6 presents examples of audio hallucinations.
235
+
236
+ # 5.1 Hallucination Detection and Mitigation
237
+
238
+ In the realm of audio captioning, where natural language descriptions for audio clips are automatically generated, a significant challenge arises from the over-reliance on the visual modality during the pre-training of audio-text models. This reliance introduces data noise and hallucinations, ultimately undermining the accuracy of the resulting captions. To address this issue, Xu et al. (2023a) introduced an AudioSet tag-guided model designed to bootstrap large-scale audio-text data (BLAT). Notably, this model sidesteps the incorporation of video, thus minimizing noise associated with the visual modality. The experimental findings across a range of tasks, including retrieval, generation, and classification, validate the effectiveness of BLAT in mitigating hallucination issues.
239
+
240
+ Speech emotions play a crucial role in human communication and find extensive applications in areas such as speech synthesis and natural language understanding. However, traditional categorization approaches may fall short of capturing the
241
+
242
+ nuanced and intricate nature of emotions conveyed in human speech (Jiang et al., 2019), (Han et al., 2021), (Ye et al., 2021). SECap (Xu et al., 2024a) is a framework designed for speech emotion captioning that aims to capture the intricate emotional nuances of speech using natural language. SECap utilizes various components, including LLaMA as the text decoder, HuBERT as the audio encoder, and Q-Former as the Bridge-Net, to generate coherent emotion captions based on speech features. Audio-language models, despite their capability for zero-shot inference, confront challenges like hallucinating task-specific details despite strong performance. To address this, Elizalde et al. (2024) introduced the Contrastive Language-Audio Pretraining (CLAP) model. Pre-trained with 4.6 million diverse audio-text pairs, CLAP features a dual-encoder architecture, enhancing representation learning for improved task generalization across sound, music, and speech domains.
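+ The dual-encoder objective behind CLAP-style pretraining can be sketched as a symmetric InfoNCE loss over a batch of paired audio and text embeddings; the embeddings below are random placeholders standing in for actual encoder outputs:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def clip_style_loss(audio_emb: torch.Tensor, text_emb: torch.Tensor,
+                     temperature: float = 0.07) -> torch.Tensor:
+     """Symmetric contrastive loss: matched audio-text pairs sit on the diagonal."""
+     a = F.normalize(audio_emb, dim=-1)
+     t = F.normalize(text_emb, dim=-1)
+     logits = a @ t.T / temperature
+     targets = torch.arange(a.size(0))
+     return (F.cross_entropy(logits, targets) +
+             F.cross_entropy(logits.T, targets)) / 2
+
+ audio_emb = torch.randn(8, 512)  # batch of audio-encoder outputs (placeholder)
+ text_emb = torch.randn(8, 512)   # batch of text-encoder outputs (placeholder)
+ loss = clip_style_loss(audio_emb, text_emb)
+ ```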
243
+
244
+ # 5.2 Benchmark Evaluation
245
+
246
+ To address the scarcity of data in the specific domain of music captioning, Doh et al. (2023) introduced LP-MusicCaps, a comprehensive dataset comprising 0.5 million audio clips accompanied by approximately 2.2 million captions. Leveraging LLMs, they trained a transformer-based music captioning model with the dataset and assessed its performance under zero-shot and transfer-learning scenarios, demonstrating its superiority over supervised baseline models. Nishimura et al. (2024) investigated audio hallucinations in large audio-video language models, where audio descriptions are generated primarily based on visual information, neglecting audio content. They classified these hallucinations into three distinct types: hallucinations involving both objects and actions; accurate objects but hallucinated
247
+
248
+ actions; and correct actions but hallucinated objects, as represented in Fig. 6. They gathered 1,000 sentences by soliciting audio information and then annotated them to determine whether they contained auditory hallucinations. To assess compositional reasoning in LAMs, Ghosh et al. (2023) introduced CompA, consisting of two expert-annotated benchmarks, which they employed to fine-tune CompA-CLAP with a novel learning approach, leading to improved compositional reasoning abilities compared to baseline models on related tasks.
249
+
250
+ # 6 Future Directions
251
+
252
+ Researchers are actively investigating techniques to mitigate hallucinations, which is crucial for sensitive applications (Tonmoy et al., 2024), (Rawte et al., 2023). Key future directions include:
253
+
254
+ Data Resources: Fine-tuning carefully curated high-quality data, integrating structured knowledge from knowledge graphs, and employing task/domain-specific alignment techniques to enhance accuracy and relevance.
255
+
256
+ Automated Evaluation: Developing specialized metrics for factual accuracy and coherence, combining automated evaluation with human judgments, adversarial testing to identify weaknesses, and fine-tuning fact-checking datasets.
257
+
258
+ Improving Detection and Mitigation: Leveraging reasoning mechanisms (e.g., Chain of Thought (Wei et al., 2022), Tree of Thought (Yao et al., 2024)), knowledge graph integration, specialized fact-checking models, bias mitigation techniques, active learning methodologies, and ethical guidelines/regulatory frameworks.
259
+
260
+ Multimodal Hallucination: Data-centric initiatives, cross-modal alignment, architectural innovations, standardized benchmarking, reframing hallucination as a feature, and enhancing interpretability and trust for reliable multimodal AI systems.
261
+
262
+ # 7 Conclusion
263
+
264
+ This survey paper systematically categorizes existing research on hallucination within FMs, providing comprehensive insights into critical aspects such as detection, mitigation, tasks, datasets, and evaluation metrics. It addresses the pressing issue of hallucination in FMs, acknowledging its widespread impact across various domains. The paper underscores the importance of addressing this challenge by examining recent advancements
265
+
266
+ in detection and mitigation techniques, given FMs' indispensable role in critical tasks. Its primary contribution is presenting a structured taxonomy for classifying hallucination in FMs, spanning text, image, video, and audio domains.
267
+
268
+ # 8 Limitation
269
+
270
+ Previous survey papers primarily focused on hallucination in large language models and did not extensively cover hallucinations in vision, audio, and video modalities. This survey paper aims to provide a comprehensive overview of hallucinations across all modalities, considering that hallucinations can occur in any large foundation model. Despite our efforts to provide a comprehensive summary of recent advancements related to hallucination techniques in all foundational models, we acknowledge that we may have missed some relevant work in the field; we have covered papers up to May 2024.
271
+
272
+ # References
273
+
274
+ Muhammad Aurangzeb Ahmad, Ilker Yaramis, and Taposh Dutta Roy. 2023. Creating trustworthy llms: Dealing with hallucinations in healthcare ai. arXiv preprint arXiv:2311.01463.
275
+ Amos Azaria and Tom Mitchell. 2023. The internal state of an llm knows when its lying. arXiv preprint arXiv:2304.13734.
276
+ Shuai Bai, Shusheng Yang, Jinze Bai, Peng Wang, Xingxuan Zhang, Junyang Lin, Xinggang Wang, Chang Zhou, and Jingren Zhou. 2023. Touchstone: Evaluating vision-language models by language models. arXiv preprint arXiv:2308.16890.
277
+ Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Dominik Roblek, Olivier Teboul, David Grangier, Marco Tagliasacchi, and Neil Zeghidour. 2023. Audiolm: a language modeling approach to audio generation.
278
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.
279
+ Soravit Changpinyo, Linting Xue, Idan Szpektor, Ashish V Thapliyal, Julien Amelot, Michal Yarom, Xi Chen, and Radu Soricut. 2022. Maxm: Towards multilingual visual question answering. arXiv preprint arXiv:2209.05401.
280
+ Anthony Chen, Panupong Pasupat, Sameer Singh, Hongrae Lee, and Kelvin Guu. 2023a. Purr: Efficiently
281
+
282
+ editing language model hallucinations by denoising language model corruptions. arXiv preprint arXiv:2305.14908.
283
+ Ke Chen, Yusong Wu, Haohe Liu, Marianna Nezhurina, Taylor Berg-Kirkpatrick, and Shlomo Dubnov. 2024a. Musicldm: Enhancing novelty in text-to-music generation using beat-synchronous mixup strategies. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1206-1210. IEEE.
284
+ Xiang Chen, Duanzheng Song, Honghao Gui, Chengxi Wang, Ningyu Zhang, Fei Huang, Chengfei Lv, Dan Zhang, and Huajun Chen. 2023b. Unveiling the siren's song: Towards reliable fact-conflicting hallucination detection. arXiv preprint arXiv:2310.12086.
285
+ Xiang Chen, Chenxi Wang, Yida Xue, Ningyu Zhang, Xiaoyan Yang, Qiang Li, Yue Shen, Lei Liang, Jinjie Gu, and Huajun Chen. 2024b. Unified hallucination detection for multimodal large language models.
286
+ Qinyuan Cheng, Tianxiang Sun, Wenwei Zhang, Siyin Wang, Xiangyang Liu, Mozhi Zhang, Junliang He, Mianqiu Huang, Zhangyue Yin, Kai Chen, et al. 2023. Evaluating hallucinations in Chinese large language models. arXiv preprint arXiv:2310.03368.
287
+ I Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu, et al. 2023. Factool: Factuality detection in generative ai-a tool augmented framework for multi-task and multi-domain scenarios. arXiv preprint arXiv:2307.13528.
288
+ Cheng-Yu Chuang and Pooyan Fazli. 2023. Clearvid: Curriculum learning for video description. arXiv preprint arXiv:2311.04480.
289
+ Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James Glass, and Pengcheng He. 2023. Dola: Decoding by contrasting layers improves factuality in large language models. arXiv preprint arXiv:2309.03883.
290
+ Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, and Li Yuan. 2023. Chatlaw: Open-source legal large language model with integrated external knowledge bases. arXiv preprint arXiv:2306.16092.
291
+ Matthew Dahl, Varun Magesh, Mirac Suzgun, and Daniel E Ho. 2024. Large legal fictions: Profiling legal hallucinations in large language models. arXiv preprint arXiv:2401.01301.
292
+ Wenliang Dai, Zihan Liu, Ziwei Ji, Dan Su, and Pascale Fung. 2022. Plausible may not be faithful: Probing object hallucination in vision-language pre-training. arXiv preprint arXiv:2210.07688.
293
+ Ailin Deng, Zhirui Chen, and Bryan Hooi. 2024. Seeing is believing: Mitigating hallucination in large vision-language models via clip-guided decoding.
294
+
295
+ Soham Deshmukh, Dareen Alharthi, Benjamin Elizalde, Hannes Gamper, Mahmoud Al Ismail, Rita Singh, Bhiksha Raj, and Huaming Wang. 2024. Pam: Prompting audio-language models for audio quality assessment. arXiv preprint arXiv:2402.00282.
296
+ Shehzaad Dhuliawala, Mojtaba Komeili, Jing Xu, Roberta Raileanu, Xian Li, Asli Celikyilmaz, and Jason Weston. 2023. Chain-of-verification reduces hallucination in large language models. arXiv preprint arXiv:2309.11495.
297
+ SeungHeon Doh, Keunwoo Choi, Jongpil Lee, and Juhan Nam. 2023. Lp-musiccaps: Llm-based pseudo music captioning. arXiv preprint arXiv:2307.16372.
298
+ Mohamed Elaraby, Mengyin Lu, Jacob Dunn, Xueying Zhang, Yu Wang, and Shizhu Liu. 2023. Halo: Estimation and reduction of hallucinations in open-source weak large language models. arXiv preprint arXiv:2308.11764.
299
+ Benjamin Elizalde, Soham Deshmukh, and Huaming Wang. 2024. Natural language supervision for general-purpose audio representations. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 336-340. IEEE.
300
+ Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, et al. 2022. Rarr: Researching and revising what language models say, using language models. arXiv preprint arXiv:2210.08726.
301
+ Deepanway Ghosal, Navonil Majumder, Ambuj Mehrish, and Soujanya Poria. 2023. Text-to-audio generation using instruction-tuned llm and latent diffusion model. arXiv preprint arXiv:2304.13731.
302
+ Akash Ghosh, Arkadeep Acharya, Raghav Jain, Sriparna Saha, Aman Chadha, and Setu Sinha. 2024a. Clipsntel: clip and llm synergy for multimodal question summarization in healthcare. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 22031-22039.
303
+ Akash Ghosh, Arkadeep Acharya, Prince Jha, Sriparna Saha, Aniket Gaudgaul, Rajdeep Majumdar, Aman Chadha, Raghav Jain, Setu Sinha, and Shivani Agarwal. 2024b. Medsumm: A multimodal approach to summarizing code-mixed Hindi-english clinical queries. In European Conference on Information Retrieval, pages 106–120. Springer.
304
+ Akash Ghosh, Arkadeep Acharya, Sriparna Saha, Vinija Jain, and Aman Chadha. 2024c. Exploring the frontier of vision-language models: A survey of current methodologies and future directions. arXiv preprint arXiv:2404.07214.
305
+ Akash Ghosh, Mohit Tomar, Abhisek Tiwari, Sriparna Saha, Jatin Salve, and Setu Sinha. 2024d. From sights to insights: Towards summarization of multimodal clinical documents. In Proceedings of the
306
+
307
+ 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13117-13129.
308
+ Sreyan Ghosh, Sonal Kumar, Chandra Kiran Reddy Evuru, Ramani Duraiswami, and Dinesh Manocha. 2024e. Recap: retrieval-augmented audio captioning. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1161-1165. IEEE.
309
+ Sreyan Ghosh, Ashish Seth, Sonal Kumar, Utkarsh Tyagi, Chandra Kiran Evuru, S Ramaneswaran, S Sakshi, Oriol Nieto, Ramani Duraiswami, and Dinesh Manocha. 2023. Compa: Addressing the gap in compositional reasoning in audio-language models. arXiv preprint arXiv:2310.08753.
310
+ Tianrui Guan, Fuxiao Liu, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, et al. 2023. Hallusionbench: An advanced diagnostic suite for entangled language hallucination & visual illusion in large vision-language models. arXiv preprint arXiv:2310.14566.
311
+ Nuno M Guerreiro, Duarte M Alves, Jonas Waldendorf, Barry Haddow, Alexandra Birch, Pierre Colombo, and Andre FT Martins. 2023. Hallucinations in large multilingual translation models. Transactions of the Association for Computational Linguistics, 11:1500-1517.
312
+ Anisha Gunjal, Jihan Yin, and Erhan Bas. 2024. Detecting and preventing hallucinations in large vision language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 18135-18143.
313
+ Qichen Han, Weiqiang Yuan, Dong Liu, Xiang Li, and Zhen Yang. 2021. Automated audio captioning with weakly supervised pre-training and word selection methods. In DCASE, pages 6-10.
314
+ Mengge He, Wenjing Du, Zhiquan Wen, Qing Du, Yutong Xie, and Qi Wu. 2022. Multi-granularity aggregation transformer for joint video-audio-text representation learning. IEEE Transactions on Circuits and Systems for Video Technology.
315
+ Vaishnavi Himakunthala, Andy Ouyang, Daniel Rose, Ryan He, Alex Mei, Yujie Lu, Chinmay Sonar, Michael Saxon, and William Wang. 2023. Let's think frame by frame with VIP: A video infilling and prediction dataset for evaluating video chain-of-thought. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 204-219.
316
+ Tobias Hoppe, Arash Mehrjou, Stefan Bauer, Didrik Nielsen, and Andrea Dittadi. 2022. Diffusion models for video prediction and infilling. arXiv preprint arXiv:2206.07696.
317
+
318
+ Hongyu Hu, Jiyuan Zhang, Minyi Zhao, and Zhenbang Sun. 2023. Ciem: Contrastive instruction evaluation method for better instruction tuning. arXiv preprint arXiv:2309.02301.
319
+ Jie Huang and Kevin Chen-Chuan Chang. 2023. Citation: A key to building responsible and accountable large language models. arXiv preprint arXiv:2307.02185.
320
+ Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2023. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions.
321
+ Qidong Huang, Xiaoyi Dong, Pan Zhang, Bin Wang, Conghui He, Jiaqi Wang, Dahua Lin, Weiming Zhang, and Nenghai Yu. 2024a. Opera: Alleviating hallucination in multi-modal large language models via over-trust penalty and retrospection-allocation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13418-13427.
322
+ Wen Huang, Hongbin Liu, Minxin Guo, and Neil Zhenqiang Gong. 2024b. Visual hallucinations of multimodal large language models. arXiv preprint arXiv:2402.14683.
323
+ Vladimir Iashin and Esa Rahtu. 2020. Multi-modal dense video captioning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pages 958-959.
324
+ Susmit Jha, Sumit Kumar Jha, Patrick Lincoln, Nathaniel D Bastian, Alvaro Velasquez, and Sandeep Neema. 2023. Dehallucinating large language models using formal methods guided iterative prompting. In 2023 IEEE International Conference on Assured Autonomy (ICAA), pages 149-152. IEEE.
325
+ Ziwei Ji, Tiezheng Yu, Yan Xu, Nayeon Lee, Etsuko Ishii, and Pascale Fung. 2023. Towards mitigating llm hallucination via self reflection. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1827-1843.
326
+ Pengxu Jiang, Hongliang Fu, Huawei Tao, Peizhi Lei, and Li Zhao. 2019. Parallelized convolutional recurrent neural network with spectral features for speech emotion recognition. IEEE access, 7:90368-90377.
327
+ Liqiang Jing, Ruosen Li, Yunmo Chen, Mengzhao Jia, and Xinya Du. 2023. Faithscore: Evaluating hallucinations in large vision-language models. arXiv preprint arXiv:2311.01477.
328
+ Haoqiang Kang and Xiao-Yang Liu. 2023. Deficiency of large language models in finance: An empirical examination of hallucination. arXiv preprint arXiv:2311.15548.
329
+
330
+ Jaeyeon Kim, Jaeyoon Jung, Jinjoo Lee, and Sang Hoon Woo. 2024. Enclap: Combining neural audio codec and audio-text joint embedding for automated audio captioning. arXiv preprint arXiv:2401.17690.
331
+ Grounding Knowledge. The knowledge alignment problem: Bridging human and external knowledge for large language models.
332
+ Sumith Kulal, Tim Brooks, Alex Aiken, Jiajun Wu, Jimei Yang, Jingwan Lu, Alexei A Efros, and Krishna Kumar Singh. 2023. Putting people in their place: Affordance-aware human insertion into scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17089-17099.
333
+ Siddique Latif, Moazzam Shoukat, Fahad Shamshad, Muhammad Usama, Heriberto Cuayahuitl, and Björn W Schuller. 2023. Sparks of large audio models: A survey and outlook. arXiv preprint arXiv:2308.12792.
334
+ Sicong Leng, Hang Zhang, Guanzheng Chen, Xin Li, Shijian Lu, Chunyan Miao, and Lidong Bing. 2023. Mitigating object hallucinations in large vision-language models through visual contrastive decoding. arXiv preprint arXiv:2311.16922.
335
+ Juncheng B Li, Jackson Sam Michaels, Laura Yao, Lijun Yu, Zach Wood-Doughty, and Florian Metze. 2023a. Audio-journey: Efficient visual+ llm-aided audio encode diffusion. In Workshop on Efficient Systems for Foundation Models@ ICML2023.
336
+ Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2023b. Halueval: A large-scale hallucination evaluation benchmark for large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6449-6464.
337
+ Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. 2023c. Inference-time intervention: Eliciting truthful answers from a language model. In Advances in Neural Information Processing Systems, volume 36, pages 41451-41530. Curran Associates, Inc.
338
+ KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. 2023d. Videochat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355.
339
+ Xingxuan Li, Ruochen Zhao, Yew Ken Chia, Bosheng Ding, Lidong Bing, Shafiq Joty, and Soujanya Poria. 2023e. Chain of knowledge: A framework for grounding large language models with structured knowledge bases. arXiv preprint arXiv:2305.13269.
340
+ Y Li, R Panda, Y Kim, C Chen, R Feris, D Cox, and N Vasconcelos. 2022. Valhalla: Visual hallucination for machine translation. in 2022 IEEE. In CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5206-5216.
341
+
342
+ Yanda Li, Chi Zhang, Gang Yu, Zhibin Wang, Bin Fu, Guosheng Lin, Chunhua Shen, Ling Chen, and Yunchao Wei. 2023f. Stabilellava: Enhanced visual instruction tuning with synthesized image-dialogue data. arXiv preprint arXiv:2308.10253.
343
+ Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 2023g. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355.
344
+ Ning Liao, Shaofeng Zhang, Renqiu Xia, Bo Zhang, Min Cao, Yu Qiao, and Junchi Yan. 2023. Revo-lion: Evaluating and refining vision-language instruction tuning datasets. arXiv preprint arXiv:2310.06594.
345
+ Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. 2023. Mitigating hallucination in large multi-modal models via robust instruction tuning. In The Twelfth International Conference on Learning Representations.
346
+ Hanchao Liu, Wenyuan Xue, Yifei Chen, Dapeng Chen, Xiutian Zhao, Ke Wang, Liping Hou, Rongjun Li, and Wei Peng. 2024a. A survey on hallucination in large vision-language models.
347
+ Hui Liu and Xiaojun Wan. 2023. Models see hallucinations: Evaluating the factuality in video captioning. arXiv preprint arXiv:2303.02961.
348
+ Jiazhen Liu, Yuhan Fu, Ruobing Xie, Runquan Xie, Xingwu Sun, Fengzong Lian, Zhanhui Kang, and Xirong Li. 2024b. PhD: A prompted visual hallucination evaluation dataset. arXiv preprint arXiv:2403.11116.
349
+ Holy Lovenia, Wenliang Dai, Samuel Cahyawijaya, Ziwei Ji, and Pascale Fung. 2023. Negative object presence evaluation (nope) to measure object hallucination in vision-language models. arXiv preprint arXiv:2310.05338.
350
+ Jiaying Lu, Jinmeng Rao, Kezhen Chen, Xiaoyuan Guo, Yawen Zhang, Baochen Sun, Carl Yang, and Jie Yang. 2024. Evaluation and enhancement of semantic grounding in large vision-language models. In AAAI-ReLM Workshop.
351
+ Junyu Luo, Cao Xiao, and Fenglong Ma. 2023. Zero-resource hallucination prevention for large language models. arXiv preprint arXiv:2309.02654.
352
+ Potsawee Manakul, Adian Liusie, and Mark JF Gales. 2023. Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models. arXiv preprint arXiv:2303.08896.
353
+ Nick McKenna, Tianyi Li, Liang Cheng, Mohammad Javad Hosseini, Mark Johnson, and Mark Steedman. 2023. Sources of hallucination by large language models on inference tasks. arXiv preprint arXiv:2305.14552.
354
+
355
+ Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2023. Factscore: Fine-grained atomic evaluation of factual precision in long form text generation. arXiv preprint arXiv:2305.14251.
356
+ Nienke Moernaut, Isabella Leudar, and Thomas Verdooren. 2018. Content matters: A qualitative analysis of verbal hallucinations. Frontiers in Psychology, 9:123.
357
+ Dor Muhlgay, Ori Ram, Inbal Magar, Yoav Levine, Nir Ratner, Yonatan Belinkov, Omri Abend, Kevin Leyton-Brown, Amnon Shashua, and Yoav Shoham. 2023. Generating benchmarks for factuality evaluation of language models. arXiv preprint arXiv:2307.06908.
358
+ Jonghwan Mun, Linjie Yang, Zhou Ren, Ning Xu, and Bohyung Han. 2019. Streamlined dense video captioning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6588-6597.
359
+ Niels Mündler, Jingxuan He, Slobodan Jenko, and Martin Vechev. 2023. Self-contradictory hallucinations of large language models: Evaluation, detection and mitigation. arXiv [cs.CL].
360
+ Taichi Nishimura, Shota Nakada, and Masayoshi Kondo. 2024. On the audio hallucinations in large audio-video language models. arXiv preprint arXiv:2401.09774.
361
+ Hao Ouyang, Tengfei Wang, and Qifeng Chen. 2021. Internal video inpainting by implicit long-range propagation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 14579-14588.
362
+ Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. 2023. Med-halt: Medical domain hallucination test for large language models.
363
+ Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, et al. 2023. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813.
364
+ Vipula Rawte, Anku Rani, Harshad Sharma, Neeraj Anand, Krishnav Rajbangshi, Amit Sheth, and Amitava Das. 2024a. Visual hallucination: Definition, quantification, and prescriptive remediations. arXiv preprint arXiv:2403.17306.
365
+ Vipula Rawte, Amit Sheth, and Amitava Das. 2023. A survey of hallucination in large foundation models. arXiv preprint arXiv:2309.05922.
366
+ Vipula Rawte, SM Tonmoy, Krishnav Rajbangshi, Shravani Nag, Aman Chadha, Amit P Sheth, and Amitava Das. 2024b. Factoid: Factual entailment for hallucination detection. arXiv preprint arXiv:2403.19113.
367
+
368
+ Vipula Rawte, SM Tonmoy, SM Zaman, Prachi Priya, Aman Chadha, Amit P Sheth, and Amitava Das. 2024c. "Sorry, come again?" prompting: Enhancing comprehension and diminishing hallucination with [pause]-injected optimal paraphrasing. arXiv preprint arXiv:2403.18976.
369
+ Sohini Roychowdhury. 2024. Journey of hallucination-minimized generative ai solutions for financial decision makers. In Proceedings of the 17th ACM International Conference on Web Search and Data Mining, pages 1180-1181.
370
+ Pranab Sahoo, Ayush Singh, Sriparna Saha, Aman Chadha, and Samrat Mondal. 2024a. Enhancing adverse drug event detection with multimodal dataset: Corpus creation and model development. In Findings of the Association for Computational Linguistics: ACL 2024, pages 11214-11226, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
371
+ Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, and Aman Chadha. 2024b. A systematic survey of prompt engineering in large language models: Techniques and applications.
372
+ Jaromir Savelka, Kevin D Ashley, Morgan A Gray, Hannes Westermann, and Huihui Xu. 2023. Explaining legal concepts with augmented large language models (gpt-4). arXiv preprint arXiv:2306.09525.
373
+ Kai Shen, Zeqian Ju, Xu Tan, Yanqing Liu, Yichong Leng, Lei He, Tao Qin, Sheng Zhao, and Jiang Bian. 2023. Naturalspeech 2: Latent diffusion models are natural and zero-shot speech and singing synthesizers. arXiv preprint arXiv:2304.09116.
374
+ Yaya Shi, Xu Yang, Haiyang Xu, Chunfeng Yuan, Bing Li, Weiming Hu, and Zheng-Jun Zha. 2022. Emscore: Evaluating video captioning via coarse-grained and fine-grained embedding matching. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 17929-17938.
375
+ Maitreya Suin and AN Rajagopalan. 2020. An efficient framework for dense video captioning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 12039-12046.
376
+ Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, LiangYan Gui, Yu-Xiong Wang, Yiming Yang, et al. 2023. Aligning large multimodal models with factually augmented rlhf. arXiv preprint arXiv:2309.14525.
377
+ Gabriel Tjio, Ping Liu, Joey Tianyi Zhou, and Rick Siow Mong Goh. 2022. Adversarial semantic hallucination for domain generalized semantic segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 318-327.
378
+ Shengbang Tong, Zhuang Liu, Yuexiang Zhai, Yi Ma, Yann LeCun, and Saining Xie. 2024. Eyes wide shut? exploring the visual shortcomings of multimodal llms. arXiv preprint arXiv:2401.06209.
379
+
380
+ SM Tonmoy, SM Zaman, Vinija Jain, Anku Rani, Vipula Rawte, Aman Chadha, and Amitava Das. 2024. A comprehensive survey of hallucination mitigation techniques in large language models. arXiv preprint arXiv:2401.01313.
381
+ Neeraj Varshney, Wenlin Yao, Hongming Zhang, Jianshu Chen, and Dong Yu. 2023. A stitch in time saves nine: Detecting and mitigating hallucinations of llms by validating low-confidence generation. arXiv preprint arXiv:2307.03987.
383
+ Bin Wang, Fan Wu, Xiao Han, Jiahui Peng, Huaping Zhong, Pan Zhang, Xiaoyi Dong, Weijia Li, Wei Li, Jiaqi Wang, et al. 2024a. Vigc: Visual instruction generation and correction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 5309-5317.
384
+ Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, et al. 2023. Evaluation and analysis of hallucination in large vision-language models. arXiv preprint arXiv:2308.15126.
385
+ Xintong Wang, Jingheng Pan, Liang Ding, and Chris Biemann. 2024b. Mitigating hallucinations in large vision-language models with instruction contrastive decoding.
386
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837.
387
+ Weilun Wu and Yang Gao. 2023. A context-aware model with a pre-trained context encoder for dense video captioning. In International Conference on Cyber Security, Artificial Intelligence, and Digital Economy (CSAIDE 2023), volume 12718, pages 387-396. SPIE.
388
+ Xuanan Xu, Zhiling Zhang, Zelin Zhou, Pingyue Zhang, Zeyu Xie, Mengyue Wu, and Kenny Q Zhu. 2023a. Blat: Bootstrapping language-audio pre-training based on audioset tag-guided synthetic data. In Proceedings of the 31st ACM International Conference on Multimedia, pages 2756-2764.
389
+ Yaoxun Xu, Hangting Chen, Jianwei Yu, Qiaochu Huang, Zhiyong Wu, Shi-Xiong Zhang, Guangzhi Li, Yi Luo, and Rongzhi Gu. 2024a. Secap: Speech emotion captioning with large language model. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 19323-19331.
390
+ Zhengzhuo Xu, Sinan Du, Yiyan Qi, Chengjin Xu, Chun Yuan, and Jian Guo. 2023b. Chartbench: A benchmark for complex visual reasoning in charts. arXiv preprint arXiv:2312.15915.
391
+
392
+ Ziwei Xu, Sanjay Jain, and Mohan Kankanhalli. 2024b. Hallucination is inevitable: An innate limitation of large language models. arXiv preprint arXiv:2401.11817.
393
+ Siming Yan, Min Bai, Weifeng Chen, Xiong Zhou, Qixing Huang, and Li Erran Li. 2024. Vigor: Improving visual grounding of large vision language models with fine-grained reward modeling. arXiv preprint arXiv:2402.06118.
394
+ Shiping Yang, Renliang Sun, and Xiaojun Wan. 2023. A new benchmark and reverse validation method for passage-level hallucination detection. arXiv preprint arXiv:2310.06498.
395
+ Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. 2024. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36.
396
+ Zhongjie Ye, Helin Wang, Dongchao Yang, and Yuexian Zou. 2021. Improving the performance of automated audio captioning via integrating the acoustic and semantic information. arXiv preprint arXiv:2110.06100.
397
+ Qifan Yu, Juncheng Li, Longhui Wei, Liang Pang, Wentao Ye, Bosheng Qin, Siliang Tang, Qi Tian, and Yueting Zhuang. 2023a. Hallucidoctor: Mitigating hallucinatory toxicity in visual instruction data. arXiv preprint arXiv:2311.13614.
398
+ Yongsheng Yu, Heng Fan, and Libo Zhang. 2023b. Deficiency-aware masked transformer for video inpainting. arXiv preprint arXiv:2307.08629.
399
+ Yi Yuan, Haohe Liu, Xubo Liu, Qiushi Huang, Mark D Plumbley, and Wenwu Wang. 2024. Retrieval-augmented text-to-audio generation. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 581-585. IEEE.
400
+ Jian Zhang, Yueting Zhuang, and Fei Wu. 2006. Video-based facial expression hallucination: A two-level hierarchical fusion approach. In International Conference on Advanced Concepts for Intelligent Vision Systems, pages 513-521. Springer.
401
+ Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A Smith. 2023a. How language model hallucinations can snowball. arXiv preprint arXiv:2305.13534.
402
+ Pan Zhang, Xiaoyi Dong, Bin Wang, Yuhang Cao, Chao Xu, Linke Ouyang, Zhiyuan Zhao, Shuangrui Ding, Songyang Zhang, Haodong Duan, Hang Yan, et al. 2023b. Internlm-xcomposer: A vision-language large model for advanced text-image comprehension and composition. arXiv preprint arXiv:2309.15112.
403
+ Shuo Zhang, Liangming Pan, Junzhou Zhao, and William Yang Wang. 2023c. Mitigating language model hallucination with interactive question-knowledge alignment. arXiv preprint arXiv:2305.13669.
404
+
405
+
406
+ Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. 2023d. Siren's song in the ai ocean: a survey on hallucination in large language models. arXiv preprint arXiv:2309.01219.
407
+ Linxi Zhao, Yihe Deng, Weitong Zhang, and Quanquan Gu. 2024. Mitigating object hallucination in large vision-language models via classifier-free guidance. arXiv preprint arXiv:2402.08680.
408
+ Yue Zhao, Ishan Misra, Philipp Krahenbuhl, and Rohit Girdhar. 2022. Learning video representations from large language models.
409
+ Zhiyuan Zhao, Bin Wang, Linke Ouyang, Xiaoyi Dong, Jiaqi Wang, and Conghui He. 2023. Beyond hallucinations: Enhancing lvlms through hallucination-aware direct preference optimization. arXiv preprint arXiv:2311.16839.
410
+ Luowei Zhou, Chenliang Xu, and Jason Corso. 2018. Towards automatic learning of procedures from web instructional videos. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
411
+ Xingyi Zhou, Anurag Arnab, Shyamal Buch, Shen Yan, Austin Myers, Xuehan Xiong, Arsha Nagrani, and Cordelia Schmid. 2024. Streaming dense video captioning. arXiv preprint arXiv:2404.01297.
412
+ Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, and Huaxiu Yao. 2023. Analyzing and mitigating object hallucination in large vision-language models. arXiv preprint arXiv:2310.00754.
413
+ Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models.
414
+ Ge Zhu and Zhiyao Duan. 2024. Cacophony: An improved contrastive audio-text model. arXiv preprint arXiv:2402.06986.
415
+
416
+ # 9 Appendix
417
+
418
+ # 9.1 Table
419
+
420
+ Table 1 provides a comprehensive summary of hallucination detection and mitigation methodologies for large foundation models, detailing the tasks considered, the datasets utilized, and the evaluation metrics employed. It offers readers a concise overview of recent advances in this field.
421
+
422
+ <table><tr><td rowspan="30">TEXT</td><td>Paper</td><td>Detection</td><td>Mitigation</td><td>Task</td><td>Dataset(s)</td><td>Evaluation Metric(s)</td></tr><tr><td>(Manakul et al., 2023)</td><td>Yes</td><td>No</td><td>QA</td><td>Wikibio</td><td>Entropy</td></tr><tr><td>(Li et al., 2022)</td><td>Yes</td><td>Yes</td><td>QA, Dialog summarization</td><td>Halueval</td><td>Automatic</td></tr><tr><td>(Mündler et al.)</td><td>Yes</td><td>Yes</td><td>Text generation</td><td>Manual</td><td>F1 Score</td></tr><tr><td>(Chen et al., 2023a)</td><td>No</td><td>Yes</td><td>Editing for attribution</td><td>MCQ, Dialog</td><td>Attribution, Preservation</td></tr><tr><td>(Zhang et al., 2023c)</td><td>No</td><td>Yes</td><td>Question knowledge alignment</td><td>Fuzzy QA</td><td>Attributable to Identified Sources</td></tr><tr><td>(Zhang et al., 2023a)</td><td>Yes</td><td>No</td><td>QA</td><td>Manual</td><td>Accuracy</td></tr><tr><td>(Peng et al., 2023)</td><td>No</td><td>Yes</td><td>Task-oriented dialog</td><td>News, Customer service</td><td>F1 Score, Bleu-4</td></tr><tr><td>(Cui et al., 2023)</td><td>No</td><td>Yes</td><td>QA</td><td>Manual</td><td>Ranking</td></tr><tr><td>(Azaria and Mitchell, 2023)</td><td>Yes</td><td>No</td><td>Classification</td><td>Manual</td><td>Accuracy</td></tr><tr><td>(Li et al., 2023e)</td><td>Yes</td><td>Yes</td><td>Knowledge-intensive tasks</td><td>Fever, QA</td><td>Accuracy</td></tr><tr><td>(Elaraby et al., 2023)</td><td>Yes</td><td>Yes</td><td>Consistency, Actuality, QA</td><td>Manual NBA domain</td><td>Pearson Correlation Coefficient</td></tr><tr><td>(Varshney et al.)</td><td>Yes</td><td>Yes</td><td>Text generation</td><td>Wikibio</td><td>Percentage of mitigated hallucination</td></tr><tr><td>(Jha et al., 2023)</td><td>Yes</td><td>No</td><td>Dialog</td><td>N/A</td><td>N/A</td></tr><tr><td>(Pal et al., 2023)</td><td>No</td><td>No</td><td>Reasoning hallucination</td><td>Med-Halt</td><td>Accuracy, Pointwise Score</td></tr><tr><td>(McKenna et al., 2023)</td><td>Yes</td><td>No</td><td>Textual entailment</td><td>Altered Directional Inference</td><td>Entailment Probability</td></tr><tr><td>(Guerreiro et al., 2023)</td><td>Yes</td><td>Yes</td><td>MT</td><td>FLOres 101, WMT ,TICO</td><td>BLEU</td></tr><tr><td>(Huang and Chang, 2023)</td><td>Yes</td><td>Yes</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>(Luo et al., 2023)</td><td>Yes</td><td>Yes</td><td>Concept extraction</td><td>Concept-7</td><td>AUC, Accuracy, F1 Score</td></tr><tr><td>(Gao et al., 2022)</td><td>Yes</td><td>Yes</td><td>Editing attribution</td><td>NQ, SQA</td><td>Auto-AIS (Attr auto)</td></tr><tr><td>(Yang et al., 2023)</td><td>Yes</td><td>No</td><td>Detect factual errors automatically</td><td>PHD, WikiBio-GPT3</td><td>Precision, Recall, F1 Score, Accuracy</td></tr><tr><td>(Min et al., 2023)</td><td>Yes</td><td>Yes</td><td>Fact verification</td><td>Manual(Wikipedia)</td><td>FACTScore</td></tr><tr><td>(Rawte et al., 2024b)</td><td>Yes</td><td>Yes</td><td>Factual inaccuracies detection</td><td>FACTOID</td><td>HV I auto</td></tr><tr><td>(Ahmad et al., 2023)</td><td>Yes</td><td>Yes</td><td>Hallucination in healthcare</td><td>N/A</td><td>FACTScores</td></tr><tr><td>(Ji et al., 2023)</td><td>Yes</td><td>Yes</td><td>Generative and knowledge-intensive</td><td>PubMedQA, MEDQA2019, MedQuAD, and MASH-QA</td><td>Unigram F1, ROUGE-L, Med-NLI, and CTRLEval</td></tr><tr><td>(Kang and Liu, 2023)</td><td>Yes</td><td>Yes</td><td>Hallucination in finance</td><td>N/A</td><td>FACTScores</td></tr><tr><td>(Roychowdhury, 
2024)</td><td>No</td><td>Yes</td><td>QA</td><td>N/A</td><td>N/A</td></tr><tr><td>(Savelka et al., 2023)</td><td>No</td><td>Yes</td><td>Factual evaluation in legislation</td><td>N/A</td><td>N/A</td></tr><tr><td>(Dahl et al., 2024)</td><td>Yes</td><td>No</td><td>Legal hallucination</td><td>Manual</td><td>N/A</td></tr><tr><td>(Rawte et al., 2024c)</td><td>No</td><td>Yes</td><td>Comprehension enhancement</td><td>SCA-90K</td><td>Cosine similarity</td></tr><tr><td rowspan="26">IMAGE</td><td>(Li et al., 2023g)</td><td>Yes</td><td>No</td><td>Evaluation of object hallucination</td><td>MSCOCO</td><td>CHAIR, POPE</td></tr><tr><td>(Gunjal et al., 2024)</td><td>Yes</td><td>Yes</td><td>VQA</td><td>M-Hall Detect</td><td>Accuracy</td></tr><tr><td>(Dai et al., 2022)</td><td>No</td><td>Yes</td><td>Image captioning</td><td>CHAIR</td><td>CIDEr</td></tr><tr><td>(Lovenia et al., 2023)</td><td>Yes</td><td>No</td><td>Object hallucination</td><td>NOPE</td><td>METEOR, Exact match accuracy, NegP Accuracy</td></tr><tr><td>(Liu et al., 2024b)</td><td>Yes</td><td>No</td><td>Intrinsic vision-language hallucination</td><td>PhD</td><td>Accuracy</td></tr><tr><td>(Zhao et al., 2024)</td><td>Yes</td><td>Yes</td><td>Non-existing object hallucination</td><td>MSCOCO</td><td>CHAIR, POPE, GPT-4V, recall</td></tr><tr><td>(Huang et al., 2024b)</td><td>Yes</td><td>No</td><td>Visual hallucination</td><td>YNQ, OEQ</td><td>Accuracy</td></tr><tr><td>(Rawte et al., 2024a)</td><td>Yes</td><td>No</td><td>Video captioning</td><td>ActivityNet-Fact, YouCook2-Fact</td><td>FactVC</td></tr><tr><td>(Wang et al., 2024a)</td><td>No</td><td>Yes</td><td>Generate instruction data for vision-language</td><td>VIGC-LLAVA-COCO, VIGC-LLAVA-Objects365</td><td>Conv, Detail, Complex</td></tr><tr><td>(Yu et al., 2023a)</td><td>Yes</td><td>Yes</td><td>Machine-generated visual instruction</td><td>LLVA-Instruction-158K</td><td>CHAIR</td></tr><tr><td>(Guan et al., 2023)</td><td>No</td><td>Yes</td><td>Visual questions</td><td>HallusionBench</td><td>Accuracy</td></tr><tr><td>(Liu et al., 2023)</td><td>Yes</td><td>Yes</td><td>Vision language</td><td>LRV-Instruction</td><td>GAVIE</td></tr><tr><td>(Xu et al., 2023b)</td><td>Yes</td><td>No</td><td>Evaluation of MLLMs on chart comprehension</td><td>ChartBench</td><td>Acc+</td></tr><tr><td>(Lu et al., 2024)</td><td>Yes</td><td>Yes</td><td>Vision language</td><td>MSG-MCQ</td><td>Accuracy</td></tr><tr><td>(Tong et al., 2024)</td><td>Yes</td><td>No</td><td>Visual question answering</td><td>MMVP, VQA</td><td>Accuracy</td></tr><tr><td>(Liao et al., 2023)</td><td>Yes</td><td>No</td><td>Vision language</td><td>REVO-LION</td><td>Meta Quality (MQ)</td></tr><tr><td>(Hu et al., 2023)</td><td>Yes</td><td>Yes</td><td>Visual captioning, Visual question answering</td><td>CIEM</td><td>Accuracy, Precision, Recall, F1 Score</td></tr><tr><td>(Jing et al., 2023)</td><td>Yes</td><td>No</td><td>Meta-evaluation</td><td>LLLA-1k, MSCOCO-Cap</td><td>FAITHSCORE</td></tr><tr><td>(Changpinyo et al., 2022)</td><td>No</td><td>Yes</td><td>Multilingual visual question answering</td><td>MaXM</td><td>Accuracy</td></tr><tr><td>(Wang et al., 2023)</td><td>Yes</td><td>No</td><td>Content generation</td><td>N/A</td><td>Precision, Recall, F1 Score</td></tr><tr><td>(Sun et al., 2023)</td><td>No</td><td>Yes</td><td>Visual-language alignment</td><td>MMHAL-BENCH</td><td>N/A</td></tr><tr><td>(Bai et al., 2023)</td><td>Yes</td><td>No</td><td>Evaluate hallucination of vision language model</td><td>TouchStone</td><td>Hallucination Score</td></tr><tr><td>(Zhou et al., 
2023)</td><td>No</td><td>Yes</td><td>Hallucination mitigation in LVMs</td><td>MSCOCO</td><td>CHAIR, BLEU,CLIP</td></tr><tr><td>(Yan et al., 2024)</td><td>No</td><td>Yes</td><td>Visual grounding</td><td>MMViG</td><td>HL, CA, AA, RA, RL, RS, DL</td></tr><tr><td>(Zhao et al., 2023)</td><td>Yes</td><td>Yes</td><td>Overcome hallucination in LVMs</td><td>POPE, SHR</td><td>Accuracy, Precision, F1 Score</td></tr><tr><td>(Zhang et al., 2023b)</td><td>No</td><td>Yes</td><td>Image text comprehension and composition</td><td>MMBench, SeedBench, QBench, MMBench-CN, Chinese Bench</td><td>LR, AR, RR, FP-C, FP-S, CP</td></tr><tr><td rowspan="6">VIDEO</td><td>(Kulal et al., 2023)</td><td>No</td><td>Yes</td><td>Affordance prediction</td><td>Manual</td><td>FID, FCKh</td></tr><tr><td>(Himakunthala et al., 2023)</td><td>No</td><td>Yes</td><td>Video infilling, Scene prediction</td><td>Manual</td><td>N/A</td></tr><tr><td>(Li et al., 2023d)</td><td>No</td><td>Yes</td><td>Visual dialogue</td><td>Manual</td><td>N/A</td></tr><tr><td>(Zhou et al., 2024)</td><td>No</td><td>Yes</td><td>Video captioning</td><td>ActivityNet Captions, YouCook2, ViTT</td><td>CIDER, METEOR, SODAc</td></tr><tr><td>(Höppe et al., 2022)</td><td>Yes</td><td>No</td><td>Video prediction</td><td>BAIR, Kinetics 600, UCF-101</td><td>Frechet Video Distance</td></tr><tr><td>(Chuang and Fazli, 2023)</td><td>No</td><td>Yes</td><td>Video description</td><td>Activity Net Captions, YouCook2</td><td>METEOR, ROUGE_L, CIDER, BLEU_4, DIV-2, RE_4</td></tr><tr><td rowspan="5">AUDIO</td><td>(Li et al., 2023a)</td><td>No</td><td>Yes</td><td>Classification</td><td>Manual</td><td>Mean avg precision</td></tr><tr><td>(Doh et al., 2023)</td><td>No</td><td>Yes</td><td>Audio captioning</td><td>LP MusicCaps</td><td>BLEU</td></tr><tr><td>(Xu et al., 2023a)</td><td>No</td><td>Yes</td><td>Caption generation</td><td>AudioCaps</td><td>R@K, COCO &amp; FENCE</td></tr><tr><td>(Liu and Wan, 2023)</td><td>No</td><td>Yes</td><td>Audio captioning</td><td>MusciCaps</td><td>BLEU</td></tr><tr><td>(Nishimura et al., 2024)</td><td>Yes</td><td>No</td><td>Evaluation of LAMs</td><td>LAION_CLAPMS_CLAP</td><td>Recall, Precision, F1 Score</td></tr></table>
423
+
424
+ Table 1: Overview of the hallucination detection and mitigation landscape in FMs across modalities (Text, Image, Video, and Audio). Each work is categorized based on factors such as detection, mitigation, tasks, datasets, and evaluation metrics.
acomprehensivesurveyofhallucinationinlargelanguageimagevideoandaudiofoundationmodels/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4972deee3f8fc342d93df94860e08daa6b17a16a3c3b8e93c0fe3a04fff27d7f
3
+ size 586433
acomprehensivesurveyofhallucinationinlargelanguageimagevideoandaudiofoundationmodels/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8370f99e71efd1dc7aa7d806c85a204a8a3621512a838cea19ce31ef9a856282
3
+ size 493315
acriticallookatmetaevaluatingsummarisationevaluationmetrics/43d432ad-b3f1-43fb-a85c-0b340864e890_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:caf09116fe4bc0504e6d9b210604fbd3c235ff00f8338fe21c22abd62a368e3b
3
+ size 91691
acriticallookatmetaevaluatingsummarisationevaluationmetrics/43d432ad-b3f1-43fb-a85c-0b340864e890_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6f133e4b2f3a14984a571e65080c150cb0572746d9b2e1b7140e4fc18f0a876d
3
+ size 115774
acriticallookatmetaevaluatingsummarisationevaluationmetrics/43d432ad-b3f1-43fb-a85c-0b340864e890_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b312fcc3973972cd4fd0400ec0d557b91727957f94e3cf4f0aec1fe0eaac652e
3
+ size 402110
acriticallookatmetaevaluatingsummarisationevaluationmetrics/full.md ADDED
@@ -0,0 +1,359 @@
 
 
 
 
1
+ # A Critical Look at Meta-evaluating Summarisation Evaluation Metrics
2
+
3
+ Xiang Dai and Sarvnaz Karimi and Biaoyan Fang
4
+
5
+ CSIRO Data61
6
+
7
+ Sydney, Australia
8
+
9
+ {dai.dai,sarvnaz.karimi,byron.fang}@csiro.au
10
+
11
+ # Abstract
12
+
13
+ Effective summarisation evaluation metrics enable researchers and practitioners to compare different summarisation systems efficiently. Estimating the effectiveness of an automatic evaluation metric, termed meta-evaluation, is a critically important research question. In this position paper, we review recent meta-evaluation practices for summarisation evaluation metrics and find that (1) evaluation metrics are primarily meta-evaluated on datasets consisting of examples from news summarisation datasets, and (2) there has been a noticeable shift in research focus towards evaluating the faithfulness of generated summaries. We argue that the time is ripe to build more diverse benchmarks that enable the development of more robust evaluation metrics and analyse the generalisation ability of existing evaluation metrics. In addition, we call for research focusing on user-centric quality dimensions that consider the generated summary's communicative goal and the role of summarisation in the workflow.
14
+
15
+ # 1 Introduction
16
+
17
+ The evaluation of natural language processing systems is crucial to ensure their effectiveness and reliability in real-world applications. It helps compare systems, validate whether the designed properties work as intended, understand the strengths and weaknesses of the underlying model, and often guide iterative improvements (Ribeiro et al., 2020). Although human evaluation, especially for natural language generation systems, is considered the most reliable evaluation method (Huang et al., 2020; Iskender et al., 2021; Khashabi et al., 2022), automatic evaluation metrics are more widely used due to their cost-effectiveness, ease of use, repeatability, and speed (Graham, 2015; Gehrmann et al., 2023).
18
+
19
+ In addition to assessing the performance of summarisation systems, automatic summarisation evaluation metrics are also used for other purposes during summarisation system development, such as filtering noisy datasets to improve the quality of training data (Chaudhury et al., 2022; Aharoni et al., 2023), ranking sampled candidates to output the best summary (Falke et al., 2019; Chaudhury et al., 2022), and integrating with a reinforcement learning framework as a reward function (Zhang et al., 2020b; Stiennon et al., 2020).
20
+
21
+
22
+
23
+ A critically important question is how effective these automatic summarisation evaluation metrics are. In other words, do the evaluation results obtained using these automatic metrics reflect the genuine quality of the summaries and summarisation systems under examination? For example, Goyal et al. (2022) conclude that existing automatic metrics cannot reliably evaluate summaries generated using the instruction-tuned GPT-3 model (Ouyang et al., 2022), because they find that GPT-3 summaries receive much lower scores than state-of-the-art fine-tuned models (Liu et al., 2022) on automatic metrics while outperforming them on human evaluation using A/B testing.
24
+
25
+ Meta-evaluating summarisation evaluation metrics, especially building resources that enable assessing the automatic metrics, has become urgent and attracted significant research interest (Fabbri et al., 2020; Bhandari et al., 2020; Clark et al., 2023; Liu et al., 2023b; Laban et al., 2023). However, these resources were built and used in various ways, leading to inconsistent and confusing conclusions about the usefulness of these metrics.
26
+
27
+ In this position paper, we take a critical look at the practices of meta-evaluating summarisation evaluation metrics. Our paper is organised as follows: we first review recent meta-evaluation practices for summarisation evaluation metrics (Section 2); then, in Section 3, we discuss research trends and gaps around four critical decisions that must be made when we assess the automatic metrics, namely, choosing data to annotate, defining quality dimensions, collecting human judgements, and comparing automatic metrics against human judgements. Finally, we provide recommendations in Section 5.
28
+
29
+
30
+
31
+ # 2 Preliminaries
32
+
33
+ The task of summarisation aims to generate a summary $\hat{y}$ given a source text $x$ , where $\hat{y}$ encapsulates the key information in $x$ . The summarisation evaluation metric typically takes the generated summary $\hat{y}$ , optionally the source $x$ or a (few) reference summary $y$ , as input and produces a numeric value, which is a proxy of the overall quality or a particular dimension of quality, of $\hat{y}$ .
34
+
35
+ # 2.1 Summarisation Evaluation Metrics
36
+
37
+ Summarisation evaluation metrics can be roughly grouped into categories based on what input data they use (e.g., source text, reference summary), what intermediate data they generate (e.g., auto-generated questions based on the source text), and what underlying models they rely on (e.g., textual entailment models):
38
+
39
+ Summary-only metrics take the generated summary $\hat{y}$ as input and focus on how well the generated text can be read, e.g., free of syntactic errors or spelling errors (Mani, 2001; Goldsack et al., 2023);
40
+
41
+ Similarity-based metrics take $\hat{y}$ and one or a few reference summaries $y$ as input and measure how similar $\hat{y}$ and $y$ are (Lin, 2004; Zhang et al., 2020a);
42
+
43
+ Entailment-based metrics take both $\hat{y}$ and the source $x$ as input and use entailment models to determine whether the information in $\hat{y}$ is supported by $x$ (Laban et al., 2022; Honovich et al., 2022), as illustrated in the sketch after this list;
44
+
45
+ QA-based metrics use both $\hat{y}$ and $x$ and aim to compare the factual information in $\hat{y}$ and $x$ by eliciting answers from them for the same question (Durmus et al., 2020; Deutsch et al., 2021a);
46
+
47
+ Learning-based metrics aim to train an evaluation model, using human annotations (Aharoni et al., 2023) or weak supervision signals (Kryscinski et al., 2020; Wu et al., 2023), that directly outputs the quality score of $\hat{y}$ given $x$ ; and,
48
+
49
+ LLM-based metrics directly instruct large language models to generate the quality score of $\hat{y}$ (Tam et al., 2023; Shen et al., 2023).
50
+
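+ To make one of these categories concrete, the snippet below is a minimal sketch of an entailment-based faithfulness score. The model choice ("roberta-large-mnli"), the label handling, and the lack of sentence-level chunking are illustrative assumptions, not the implementation of any published metric; metrics such as SummaC additionally split long sources into sentence pairs and aggregate the pairwise entailment scores.
+
+ ```python
+ # Minimal sketch of an entailment-based faithfulness score; the model name,
+ # label names, and single-pass scoring are assumptions for illustration only.
+ from transformers import pipeline
+
+ nli = pipeline("text-classification", model="roberta-large-mnli", top_k=None)
+
+ def entailment_score(source: str, summary: str) -> float:
+     """Probability that the summary is entailed by the source text."""
+     # In practice, long sources should be truncated or split into sentences.
+     scores = nli({"text": source, "text_pair": summary})
+     for item in scores:
+         if item["label"].lower() == "entailment":
+             return item["score"]
+     return 0.0
+ ```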
51
+ # 2.2 Meta-evaluation of Automatic Metrics
52
+
53
+ Estimating the effectiveness and reliability of an automatic evaluation metric is a critically important research question. To distinguish it from summarisation evaluation, researchers usually use the term meta-evaluation to refer to this task, which is the focus of our position paper.
54
+
55
+ Early studies of summarisation meta-evaluation focus on assessing evaluation metrics according to their ability to distinguish between human-written and system-generated summaries (Rankel et al., 2011). However, more recently, a widely accepted belief about meta-evaluation is that an effective evaluation metric should mirror human judgements (Graham, 2015; Huang et al., 2020; Fabbri et al., 2020; Gao and Wan, 2022). This is often approximated by calculating the correlation between the evaluation results using the automatic evaluation metric $X$ and human judgements $Z$ across a set of summaries generated using various systems.
56
+
57
+ Assuming there are $N$ source texts, and $J$ summarisation systems are employed, resulting in a total of $N\times J$ output summaries. We use $d_{i}$ to represent the $i$ -th source text and $s_i^j$ the summary generated by the $j$ -th summarisation system on $d_{i}$ . We use $x_{i}^{j}$ to represent the score assigned to $s_i^j$ by the evaluation metric $X$ and $z_{i}^{j}$ the corresponding human judgement. To measure the correlation between $X$ and $Z$ , a correlation function (Corr, such as Pearson, Kendall, or Spearman) is needed.
58
+
59
+ System-level protocol aggregates the evaluation scores for a given summarisation system first via:
60
+
61
+ $$
62
+ x^{j} = \frac{1}{N} \sum_{i=1}^{N} x_{i}^{j}, \tag{1}
63
+ $$
64
+
65
+ where $x^{j}$ is an approximation of the judgement of the $j$ -th summarisation system by metric $X$ . Similarly, the human judgement can be aggregated via:
66
+
67
+ $$
68
+ z^{j} = \frac{1}{N} \sum_{i=1}^{N} z_{i}^{j}. \tag{2}
69
+ $$
70
+
71
+ Then, the two lists of judgements, each containing $J$ values, are taken as input to calculate the system-level correlation coefficient and the corresponding $p$ -value:
72
+
73
+ $$
74
+ r, p = \mathbf{Corr}\left(\left[x^{1}, \cdots, x^{J}\right], \left[z^{1}, \cdots, z^{J}\right]\right). \tag{3}
75
+ $$
76
+
77
+ <table><tr><td></td><td>Data</td><td>Quality dimensions</td><td>Comparison protocol</td></tr><tr><td>SUMMEVAL (Fabbri et al., 2020)</td><td></td><td>Coherence, Faithfulness, Fluency, Relevance</td><td>Correlation</td></tr><tr><td>REALSUMM (Bhandari et al., 2020)</td><td>Model-generated and (transformed) reference summaries on news articles, such as those in CNN/DM (Nallapati et al., 2016), XSUM (Narayan et al., 2018), XL-SUM (Hasan et al., 2021), and MLSUM (Scialom et al., 2020)</td><td>Relevance</td><td>Correlation</td></tr><tr><td>FRANK (Pagnoni et al., 2021)</td><td></td><td>Faithfulness</td><td>Correlation</td></tr><tr><td>FFCI (Koto et al., 2022)</td><td></td><td>Focus, Coverage, Coherence</td><td>Correlation</td></tr><tr><td>FIB (Tam et al., 2023)</td><td></td><td>Factual consistency</td><td>Ranking</td></tr><tr><td>BUMP (Ma et al., 2023)</td><td></td><td>Faithfulness</td><td>Ranking, Classification</td></tr><tr><td>SEAHORSE (Clark et al., 2023)</td><td></td><td>Comprehensibility, Repetition, Grammar, Attribution, Main ideas, and Conciseness</td><td>Correlation, Classification</td></tr><tr><td>DIALSUMMEVAL (Gao and Wan, 2022)</td><td>Model-generated summaries on dialogues, such as those in SAMSUM (Gliwa et al., 2019), QMSUM (Zhong et al., 2021), and MTSDIALOG (Ben Abacha et al., 2023a)</td><td>Coherence, Consistency, Fluency, Relevance</td><td>Correlation</td></tr><tr><td>DIASUMFACT (Zhu et al., 2023)</td><td></td><td>Factual consistency</td><td>Classification</td></tr><tr><td>(Ben Abacha et al., 2023b)</td><td></td><td>Factual consistency</td><td>Correlation</td></tr><tr><td>GO FIGURE (Gabriel et al., 2021)</td><td>Model-generated and (transformed) reference summaries on news articles and dialogues</td><td>Faithfulness</td><td>Correlation</td></tr><tr><td>ROSE (Liu et al., 2023b)</td><td></td><td>Salience</td><td>Correlation</td></tr><tr><td>SUMMEDITS (Laban et al., 2023)</td><td>Model-generated summaries, and LLM-edited reference summaries on diverse domains, such as news articles, scholarly articles, meeting transcripts, government reports, legal bills, etc.</td><td>Faithfulness</td><td>Classification</td></tr><tr><td>DIVERSUMM (Zhang et al., 2024a)</td><td></td><td>Faithfulness</td><td>Classification</td></tr><tr><td>(Ramprasad et al., 2024)</td><td></td><td>Faithfulness</td><td>Correlation</td></tr></table>
78
+
79
+ Table 1: A summary of recent benchmarks for meta-evaluating summarisation evaluation metrics.
80
+
81
+ Summary-level protocol calculates the correlation between $X$ and $Z$ on each summary first:
82
+
83
+ $$
84
+ r_{i}, p_{i} = \mathbf{Corr}\left(\left[x_{i}^{1}, \cdots, x_{i}^{J}\right], \left[z_{i}^{1}, \cdots, z_{i}^{J}\right]\right), \tag{4}
85
+ $$
86
+
87
+ and then apply an average operation to obtain the summary-level correlation coefficient:
88
+
89
+ $$
90
+ r = \frac{1}{N} \sum_{i=1}^{N} r_{i}. \tag{5}
91
+ $$
92
+
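+ Both protocols are mechanical enough to state in a few lines of code. The sketch below assumes `metric` and `human` are $N \times J$ arrays of per-summary scores and uses Spearman's rank correlation; the array layout and function names are our own, not from any benchmark implementation.
+
+ ```python
+ # Sketch of the system-level (Eqs. 1-3) and summary-level (Eqs. 4-5) protocols,
+ # assuming N x J score arrays for N source texts and J summarisation systems.
+ import numpy as np
+ from scipy.stats import spearmanr
+
+ def system_level(metric: np.ndarray, human: np.ndarray):
+     # Average over source texts to get one score per system (Eqs. 1-2),
+     # then correlate the two J-dimensional score vectors (Eq. 3).
+     r, p = spearmanr(metric.mean(axis=0), human.mean(axis=0))
+     return r, p
+
+ def summary_level(metric: np.ndarray, human: np.ndarray) -> float:
+     # Correlate across systems separately for each source text (Eq. 4),
+     # then average the N correlation coefficients (Eq. 5).
+     rs = [spearmanr(metric[i], human[i])[0] for i in range(metric.shape[0])]
+     return float(np.mean(rs))
+ ```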
93
+ In addition to this common "correlation" perspective, recent studies focusing on evaluating the faithfulness of summaries also use classification or ranking protocols. That is, the generated summary (or more fine-grained elements, such as a sentence) is labelled by human annotators, for example as "faithful" or "unfaithful",
94
+
95
+ "faithful" or "unfaithful", and then automatic evaluation metrics are evaluated by whether they can predict accurately the label of a given summary (i.e., classification) or assigning a higher score to the faithful summary than the unfaithful summary (i.e., ranking).
96
+
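+ The ranking and classification protocols reduce to simple counting, as in the illustrative helpers below; the pairing scheme and the decision threshold are our assumptions, and benchmarks such as FIB and SUMMEDITS each define their own exact variants.
+
+ ```python
+ # Illustrative (not benchmark-specific) ranking and classification protocols
+ # for faithfulness metrics; labels are 1 = faithful, 0 = unfaithful.
+ def ranking_accuracy(faithful_scores, unfaithful_scores):
+     # Fraction of paired summaries where the metric ranks the faithful one higher.
+     pairs = list(zip(faithful_scores, unfaithful_scores))
+     return sum(f > u for f, u in pairs) / len(pairs)
+
+ def classification_accuracy(scores, labels, threshold=0.5):
+     # Fraction of summaries whose thresholded metric score matches the label;
+     # the threshold value is an assumption and would be tuned in practice.
+     preds = [int(s >= threshold) for s in scores]
+     return sum(p == y for p, y in zip(preds, labels)) / len(labels)
+ ```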
97
+ We summarise recent benchmarks for meta-evaluating summarisation evaluation metrics in Table 1. A more detailed description of these benchmarks can be found in Appendix C.
98
+
99
+ # 3 Discussion
100
+
101
+ We identify four critical decisions that must be made when assessing an automatic metric: (1) what source texts and summaries to use; (2) what quality dimensions to consider; (3) how to collect human judgements; and, (4) how to compare the automatic metric against human judgements. In this section, we discuss research trends and gaps around these four aspects.
102
+
103
+
104
+
105
+ # 3.1 Choosing Data to Annotate
106
+
107
+ Source texts From Table 1, we can see that most of the widely used meta-evaluation benchmarks use source texts from news summarisation datasets, followed by dialogue summarisation datasets. This is not ideal because, first, evaluation metrics tailored for evaluating news articles and summaries may not be portable to other domains due to the lack of respective resources. For example, QA-based evaluation metrics (Wang et al., 2020; Durmus et al., 2020) usually start by extracting named entities (e.g., person names) from the source text and/or generated summary, around which questions are generated. However, entities of interest vary in different domains, and effective named entity recognition tools may not exist for specialised entity categories in niche domains, making these evaluation metrics hard to use. Secondly, the distribution of automatic evaluation scores usually differs across texts from various domains (Figure 1), and the generalisation ability of these evaluators, which are calibrated to the news domain, is underexplored (Laban et al., 2023). Finally, evaluation metrics usually show different correlation trends in different datasets, making their practical utility unclear. For example, Ramprasad et al. (2024) find that both QA-based and NLI-based evaluation metrics correlate well (Spearman's rank correlation coefficients of $0.45 \sim 0.59$ ) with human judgements on examples from the news domain, but no correlation on the biomedical domain (coefficients of $-0.03 \sim 0.11$ ).
108
+
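+ As a toy illustration of this dependency, the first step of a typical QA-based metric might look like the sketch below; it leans on a general-purpose NER model (here spaCy's `en_core_web_sm`, our assumption) and therefore inherits that model's domain limitations.
+
+ ```python
+ # Toy sketch of the entity-extraction step that QA-based metrics build on;
+ # niche domains may lack an NER model of comparable quality.
+ import spacy
+
+ nlp = spacy.load("en_core_web_sm")
+
+ def cloze_questions(summary: str):
+     # Mask each detected entity to form a cloze-style question whose answer
+     # is then elicited from both the source text and the summary.
+     doc = nlp(summary)
+     return [(summary.replace(ent.text, "[MASK]", 1), ent.text) for ent in doc.ents]
+ ```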
109
+ Output summaries A common strategy for collecting summaries is assembling outputs from diverse summarisation systems, which are expected to cover different error types. For example, Clark et al. (2023) collect summaries from models of various sizes (e.g., 220M parameters of T5 (Raffel et al., 2020) and 540B parameters of PaLM (Aakanksha et al., 2024)) and employ both 1-shot in-context learning and fine-tuning approaches to generate the summaries. They also select both fully optimised and under-trained (i.e., trained for only 250 steps) checkpoints, ensuring differences in model quality.
110
+
111
+ Although these studies seek to diversify the summarisation systems, they often operate under a uniform summarisation formulation.
112
+
113
+ ![](images/5e8549365b9614c34ac8158ebdbdfb56ff8ceced6c5b71e2bd81c66e6b5e1c4d.jpg)
114
+ Figure 1: The distribution of consistency scores, measured using WeCheck (Wu et al., 2023), between source text and reference summary from different datasets. A score of 1 indicates a higher consistency level, while 0 indicates inconsistency. CNN/DM and XSUM datasets (Zhang et al., 2024b) include news articles, SAMSUM (Gliwa et al., 2019) messenger-like conversations, ARXIV (Cohan et al., 2018) scholarly articles, and MTSDIALOG (Ben Abacha et al., 2023a) from Doctor-Patient encounters.
115
+
116
+ In other words, the communicative goal and user preferences (e.g., the desired style and summary length) are disregarded when generating the summary. For example, Ramprasad et al. (2024) use the same prompt 'Article: [article]. Summarize the above article' for generating summaries across domains. This simplified task formulation might be problematic when translating findings to build real-world summarisation applications. Summarisation involves compressing information in the source text by definition, and one key factor in this process is the compression ratio. Figure 2 shows that, under various constraints such as summary length (Koh et al., 2022), evaluation metrics may exhibit varying characteristics since generating shorter summaries (with a higher compression ratio) and evaluating these summaries is more challenging.
117
+
118
+ Summary Because of the lack of meta-evaluation benchmarks covering various data distributions (i.e., source texts from different domains and output summaries from different systems under different task constraints), NLP practitioners may take the risk of overestimating the generalisation ability of automatic metrics (Chen et al., 2021). That is, practitioners may employ the top-performing evaluation metrics, e.g., for evaluating news summaries, and hope they work well for evaluating other types of summaries.
119
+
120
+ ![](images/196c48e61267114b786aacc2e831dadceb04b565c9862a62eed94fa0088525bb.jpg)
121
+ Figure 2: Evaluation results using WeCheck (Wu et al., 2023) on two tasks proposed in Multi-LexSum (Shen et al., 2022), where summaries are generated at different target levels of granularity: tiny (25 words, on average), and short (130 words). Prompts used to generate summaries can be found in Appendix Section A.
122
+
123
+ To fill this gap, we call for building more diverse benchmarks that support the development of more robust evaluation metrics and the analysis of the generalisation ability of existing metrics across different domains.
124
+
125
+ # 3.2 Defining Quality Dimensions
126
+
127
+ Mani (2001) divides summarisation evaluation into two categories: intrinsic evaluation—testing the summarisation system in and of itself—and extrinsic evaluation—testing based on how the generated summary affects the completion of some downstream tasks (e.g., efforts required to post-edit the generated summary to an acceptable, task-dependent state). We notice that most—if not all—recent benchmarks focus on quality dimensions relating to intrinsic evaluation but overlook extrinsic evaluation.
128
+
129
+ From Table 1, we observe that the quality dimensions considered in recent benchmarks can be roughly grouped into two categories: (1) content quality, concerning the extent to which the generated summary accurately reflects the most important information in the source text, and (2) language quality (e.g., coherency, fluency, comprehensibility) of the generated summary itself. We also notice that there is a shift in research focus towards content quality, especially the faithfulness of generated summaries.
130
+
131
+ Fonseca and Cohen (2024) argue that summarisation evaluation should consider the variability in communicative intentions.
132
+
133
+ They choose three intentional aspects: conciseness (e.g., Write a summary of the article above in 3 sentences), narrative perspective (e.g., Write in third person), and keyword coverage (e.g., Focus on the keywords: Thompson, sampling, sequential, variational). They define intention control metrics to assess whether the generated summaries follow these intentions accurately. Zhang et al. (2024b) also point out that the summarisation evaluation should depend on the application scenarios and align with user values. They argue that, for example, the bullet point style summaries in CNN/DM (Nallapati et al., 2016) are rated by human annotators with low coherence scores; however, they may suffice for being displayed on news websites.
134
+
135
+ It is also worth noting that some quality dimensions are user-centric by nature, but most existing studies have overlooked the subjectivity of these dimensions. For example, when we define the readability of plain-language summaries of scientific articles (Goldsack et al., 2022), the end users' language and domain background should be taken into consideration. Another example is clinical conversation summarisation (Ben Abacha et al., 2023b); depending on whether the summary is provided for the clinicians or the patients, the same quality dimension (e.g., comprehensibility) should be defined differently.
136
+
137
+ Subtle differences behind the same term We observe a clear shift of research focus towards the content quality, especially faithfulness, of summarisation, mainly because recent LLM-based summarisation models have shown a remarkable capability to produce text of high language quality but still struggle with generating accurate content in a conditional-generation setting (Gao and Wan, 2022).
138
+
139
+ However, we also notice that different studies may investigate the same quality dimension following slightly different definitions, resulting in confusing conclusions. For example, Fabbri et al. (2020) define "consistency" as "whether the facts in the summary are consistent with the facts in the original article" but also "consider whether the summary does reproduce all facts accurately and does not make up untrue information". Honovich et al. (2022) define a text to be factually consistent with its grounding text (i.e., source text) "if all the factual information it conveys is consistent with the factual information conveyed by the grounding text" but "exclude personal and social statements".
140
+
141
+ These subtle differences usually result in different judgements on the same summary due to “partial faithful” or “factual but not faithful” issues.
142
+
143
+ Summary We believe the quality dimensions considered in recent benchmarks are too narrow to reflect the various application scenarios where summarisation is used. Even worse, there is no consensus on the precise definition of these quality dimensions—different terms reflect the same underlying meaning and the same term refers to slightly different meanings—making comparisons against previous work difficult and unreliable.
144
+
145
+ # 3.3 Collecting Human Judgements
146
+
147
+ Who are expert annotators? Most previous studies, especially those that focus on news summarisation, refer to expert annotators as people who have experience in summarisation or NLP. Correspondingly, annotation guidelines are also heavily linguistic-oriented, for instance in their error categories and examples. For example, Pagnoni et al. (2021) collect human annotations based on a typology of factual errors, including "Relation Error", "Entity Error", "Circumstance Error", "Discourse Link Error", etc. Although this perspective can help developers understand the weaknesses of different summarisation models by examining the common errors these systems may generate, we argue that these errors may not always reflect real users' perspectives; instead, the real writers and readers of summaries should be more involved in the annotation process and the development of annotation guidelines.
148
+
149
+ Trade-off between annotation quality and cost Crowdsourcing is a common approach to reduce the time and cost associated with data annotation, though it often comes at the cost of sacrificing the reliability of the collected annotations. Most recent efforts that build meta-evaluation benchmarks rely on crowd annotators because crowd annotations can typically be collected quickly. In contrast, expert annotators may require significantly more time, even when fully dedicated to the annotation task. For example, Gao and Wan (2022) reported that they initially conducted the annotation via a crowd-sourcing platform and collected 7,000 annotations from five different annotators in one day. In contrast, it took approximately 10 days to collect 4,200 annotations from three student annotators.
150
+
151
+ Using LLMs as surrogate evaluators or combining LLM-as-evaluators with human evaluation to obtain an unbiased estimator with a lower cost than human evaluation alone is a promising but controversial research direction. Its effectiveness needs careful investigation as it depends not only on the correlation between the human and LLM-as-evaluator judgements but also on the choice of evaluation prompts (e.g., reference-free evaluation, pair-wise comparison, Likert survey) (Chaganty et al., 2018). Deutsch et al. (2022a) point out that when we use one generation model to evaluate another, they are biased against higher-quality outputs, including those written by humans. Liu et al. (2023a); Panickssery et al. (2024) also show that LLM-as-evaluators may have the problem of self-preference—they favour their own outputs or outputs from similar model families.
152
+
153
+ Ensuring annotation quality and detecting noisy annotations are then essential to building a reliable benchmark using crowd-sourcing or combining LLM-as-evaluators with human evaluation. However, we notice that only a limited number of quality control practices were commonly adopted in eliciting human annotations, such as filtering annotators based on their previous experience (Liu et al., 2023b), providing annotator training (Aharoni et al., 2023) and measuring inter-annotator agreement (Laban et al., 2023). Moreover, many studies overlook this issue and place blind trust in the collected data. For instance, Koto et al. (2022) found that only 7 out of 71 papers on summarisation human evaluation describe the quality control mechanisms used.
154
+
155
+ Another overlooked practice is reporting failed attempts, which we believe can provide valuable insights to the following studies. For example, Gao and Wan (2022) hired 5 annotators using a crowdsourcing platform to assess summaries generated from 14 different summarisation models on a Likert scale from 1 to 5. However, the model scores, which are calculated by averaging across 5 annotators on 100 summaries, are very close to each other (e.g., the averaged consistency score of the worst model is 3.206, whereas the best is 3.400), which they believe does not reflect reality.
156
+
157
+ The role of reference summary Intuitively, some quality dimensions can be assessed by reading the summary only. For example, Goldsack et al. (2022) instruct the annotators to rate layness (to what extent is the summary comprehensible to a non-expert) using a 1 to 5 Likert scale.
158
+
159
+ However, these annotations usually suffer from inconsistency issues, as even the same annotator may make different assessments at different times.
160
+
161
+ Relative assessment, instead of direct assessment, is generally considered to improve agreement among annotators (Novikova et al., 2018). However, existing work uses reference summaries to aid human judgements mainly from a cost-saving consideration because annotators can rate the quality of a generated summary by comparing it against a short reference summary without reading a relatively long source text. Also, using a reference summary may reduce the annotation complexity for non-expert annotators. For example, Koto et al. (2022) argue that assessing relevance—the generated summary concisely captures all salient information—without a reference summary is difficult, as it requires annotators to implicitly construct their own summary of the source text.
162
+
163
+ However, the impact of reference summaries on human judgements and thus on meta-evaluation results is not well understood or examined. Regarding the same set of quality dimensions (fluency, coherence, faithfulness and relevance), Fabbri et al. (2020) provide annotators with summaries grouped in sets of 6 (i.e., 1 reference summary and 5 model-generated summaries), where the reference summary acts as an anchor between groups. But Zhuang et al. (2024) find that annotators tend to assign a lower score to the summary if it is shown along with a reference summary—even with a false reference summary. Automatic metric performance might also differ greatly depending on whether reference summaries are used during human annotations. For example, Liu et al. (2023b) find that reference-based metrics generally perform better when they are compared against human judgements collected using protocols with a reference summary but can have negative correlations with those collected without a reference summary.
164
+
165
+ Human preferences vs quality judgement Instead of scoring summaries based on the description of quality dimensions, Goyal et al. (2022) adopt the approach of soliciting human preferences among summaries. However, this approach may be questionable when involving summaries generated using LLMs, which are usually pre-trained with human preference feedback. Liu et al. (2023b) point out that LLMs may have learned the prior preferences of human annotators but not necessarily captured the task-specific quality of summaries.
166
+
167
+ Liu et al. designed two studies, asking human annotators: (a) to evaluate the summary without knowing the input text and (b) to evaluate if the summary covers the salient information of the input text. Results show that LLM-generated summaries received higher scores than human-written summaries under the first study, and the scores obtained from the first study are a good predictor of the results of the second study (Pearson's correlation of 0.926 between these two results). Zhang et al. (2024b); Shaib et al. (2024) also identify that annotators usually have their own consistent preference (e.g., based on summary length), when simply asked to rank the summaries.
168
+
169
+ Summary Given the costly nature of eliciting human judgements and the rapid pace of ongoing development in summarisation models, we believe there is an urgent need to standardise human evaluation practices. Developing a mechanism for producing reproducible human judgements over time and across different annotators (Khashabi et al., 2022) is paramount because it allows the collected resources to be reusable and easily extensible to new summarisation models. The resulting resources, which are more comprehensive, enable the development of effective and robust automatic metrics.
170
+
171
+ # 3.4 Comparing Automatic Metrics Against Human Judgements
172
+
173
+ Is a high correlation with human judgements enough to indicate the effectiveness of automatic metrics? A common way of reporting the effectiveness of automatic metrics is to tabulate the correlation between the results obtained using automatic metrics and human judgements, and metrics that achieve higher correlation are considered to be better (Fabbri et al., 2020; Ramprasad et al., 2024). However, Ernst et al. (2023) find that some evaluation metrics, although highly correlating with human judgements on a particular quality dimension, are, in fact, ineffective in measuring the considered dimension. For example, reference-based evaluation metrics correlate well with human judgements on Fluency and Consistency in the SUMMEVAL (Fabbri et al., 2020) benchmark but fail to detect even drastic summary corruptions, such as replacing all verbs with lemma form (resulting in ungrammatical summaries) and all person names with different names from the source text (resulting in unfaithful summaries).
174
+
175
+ One reason behind this phenomenon is that human judgements across quality dimensions may correlate with each other (Table 2 in Appendix B). Therefore, it is necessary to rule out the impact of confounding factors when comparing automatic metrics against human judgements for a particular quality dimension. Ernst et al. (2023) propose a bucketing-based approach where they divide all document-summary pairs into buckets where the human judgements of an anchor dimension have low variance; the correlations are calculated inside each bucket and then averaged with weights according to bucket size, resulting in more reliable meta-evaluation results for dimensions other than the anchor dimension.
176
+
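+ A rough sketch of this bucketing idea follows; bucketing by quantiles of the anchor-dimension scores and the minimum bucket size are our simplifying assumptions, not details taken from Ernst et al. (2023).
+
+ ```python
+ # Sketch of bucketed correlation: correlate within buckets where an anchor
+ # dimension has low variance, then average weighted by bucket size.
+ import numpy as np
+ from scipy.stats import spearmanr
+
+ def bucketed_correlation(metric, human, anchor, n_buckets=5):
+     metric, human, anchor = map(np.asarray, (metric, human, anchor))
+     edges = np.quantile(anchor, np.linspace(0, 1, n_buckets + 1))
+     bucket_ids = np.digitize(anchor, edges[1:-1])  # assign pairs to buckets
+     weighted_sum, total_weight = 0.0, 0
+     for b in range(n_buckets):
+         mask = bucket_ids == b
+         if mask.sum() < 3:
+             continue  # too few pairs for a stable correlation
+         r, _ = spearmanr(metric[mask], human[mask])
+         if np.isnan(r):
+             continue  # e.g., constant scores within the bucket
+         weighted_sum += r * mask.sum()
+         total_weight += mask.sum()
+     return weighted_sum / total_weight if total_weight else float("nan")
+ ```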
177
+ Another reason is that most existing benchmarks include summaries generated from systems of varying quality. Therefore, high correlation is usually attributed to the capability of distinguishing between systems with large performance gaps. Deutsch et al. (2022b); Liu et al. (2023b); Shen et al. (2023) point out that discriminating between systems of similar quality is much more difficult than between systems of diverse quality, and a good metric should reliably indicate a difference in quality when a small difference in evaluation scores is observed. For example, Deutsch et al. found that the average improvement over baseline models reported in recent papers on the CNN/DM (Nallapati et al., 2016) dataset was a ROUGE-1 score of 0.5. However, the correlation of ROUGE-1 to human judgements is near 0 when ranking systems whose evaluation scores are so close. On the other hand, a large gap (e.g., 5-10) in ROUGE scores does correctly rank system pairs, enabling ROUGE to achieve moderately strong correlations on standard benchmarks.
178
+
179
+ Statistical Power concerns the chance a significant difference (e.g., evaluation metrics score differently in a meta-evaluation benchmark) will be observed, given there is a real difference (i.e., genuinely different evaluation metrics) (Card et al., 2020). Deutsch et al. (2021b) find that high uncertainty (large confidence intervals) exists when evaluating automatic metrics using existing benchmarks. This is also observed in human evaluation of similar-performing systems (Liu et al., 2023b). Although increasing the dataset's sample size (but requiring a significant human effort) can effectively raise statistical power (Shaib et al., 2024), other cheap alternatives are needed.
180
+
181
+ Deutsch et al. (2022b) propose to calculate automatic scores on a much larger set instead of only the subset of summaries judged by humans.
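+
+ The sketch below shows the kind of resampling analysis behind such power and uncertainty findings, on synthetic system scores; the benchmark size and noise level are arbitrary:
+
+ ```python
+ import numpy as np
+ from scipy.stats import kendalltau
+
+ rng = np.random.default_rng(2)
+ n_systems = 15
+ human = rng.uniform(0, 1, size=n_systems)
+ metric = human + rng.normal(scale=0.3, size=n_systems)
+
+ taus = []
+ for _ in range(2000):                               # bootstrap over systems
+     idx = rng.integers(0, n_systems, size=n_systems)
+     tau, _ = kendalltau(human[idx], metric[idx])
+     if not np.isnan(tau):
+         taus.append(tau)
+ lo, hi = np.percentile(taus, [2.5, 97.5])
+ print(f"95% CI for system-level tau: [{lo:.2f}, {hi:.2f}]")
+ # A wide interval means low statistical power: on a benchmark this small,
+ # genuinely different metrics can be statistically indistinguishable.
+ ```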
182
+
183
+ Summary We argue that assessing the effectiveness of automatic metrics can be conducted in multiple stages, each requiring a different level of human annotation effort. First, evaluation metrics should be tested on their effectiveness in detecting significant errors, e.g., corruptions in human-written summaries (Gabriel et al., 2021; Chen et al., 2021; Ernst et al., 2023). Secondly, they can be meta-evaluated against existing human judgements on summaries from systems of varying quality. Thirdly, human judgements should be constantly gathered on summaries generated by state-of-the-art systems, which are presumably of increasingly similar quality (Peyrard, 2019), and automatic metrics should be tested on their ability to discriminate between these systems. Finally, metrics should be tested on reproducing human preferences between pairs of summaries (i.e., summary-level effectiveness) and on their capability to identify more fine-grained problems (Chen et al., 2021).
184
+
185
+ # 4 Related Work
186
+
187
+ Similar to summarisation, other natural language generation tasks, such as machine translation (MT), dialogue, and data-to-text generation, also have a long history of employing automatic evaluation metrics, such as BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005), to assess the quality of machine-generated texts. Assessing the effectiveness and reliability of these automatic metrics is also an active research area, and regular shared tasks (e.g., the WMT Metrics Shared Task<sup>1</sup>) are organised to encourage researchers to explore the strengths and weaknesses of automatic metrics. Unfortunately, similar efforts to meta-evaluate summarisation evaluation metrics have not been sustained, partially due to the complexity of the summarisation task itself. As observed by Graham (2015), although there are obvious parallels between summarisation and MT, methodologies applied to meta-evaluate MT metrics have not been well explored in summarisation.
188
+
189
+ With the advancement of large-scale generative models, evaluating text generated by LLMs and meta-evaluating the corresponding evaluation metrics have also attracted significant interest (Gehrmann
190
+
191
+ et al., 2021; Zhao et al., 2024; Li et al., 2023; Pal et al., 2023; Mishra et al., 2024). These studies usually concern quality dimensions similar to those in the summarisation field, and share a similar desire to find cost-effective ways to collect human judgements.
192
+
193
+ # 5 Conclusions and Recommendations
194
+
195
+ In this position paper, we critically examine the practices of meta-evaluating summarisation evaluation metrics in the literature. We identify several practices in the field that can be further improved, regarding the choice of data to annotate, the definition of quality dimensions, the collection of human judgements, and the comparison of automatic metrics against human judgements.
196
+
197
+ For practitioners aiming to assess the effectiveness of automatic metrics for their particular use case, we suggest starting by considering the role a summarisation system plays in the real-world workflow. This includes identifying the readers of the generated summaries, understanding what information they seek, and what decisions they might make after reading the summary. Once we have a clear picture of this, we can create document-summary pairs that meet these requirements and focus on the quality dimensions that end-users value most. Human judgements can be collected from real end-users regarding both their perceived quality of the summary and the effect of these summaries on the actual downstream tasks they perform. Finally, automatic metrics can be assessed according to the purpose of the evaluation, such as determining which summarisation system is better (system-level correlation is informative), choosing the best summary from multiple candidates (summary-level correlation/ranking), or detecting problematic summaries (binary classification).
198
+
199
+ For researchers who aim to develop meta-evaluation resources and novel evaluation metrics, we believe it is time to build more diverse benchmarks using data sampled from different domains and considering various summarisation constraints. That is to say, the generality of evaluation metrics should be tested to mitigate the risk of overestimating the effectiveness of automatic metrics across domains and applications (Gabriel et al., 2021). Secondly, we believe there is an urgent need to standardise human evaluation practices to ensure reproducible human judgements over time and, more importantly, to make the collected resources
200
+
201
+ extensible to new summarisation models. We recommend some best (basic) practices: (1) being aware of previous work and reusing previous resources (taxonomies, guidelines, interfaces, etc.) whenever possible (Tang et al., 2023); (2) adopting quality controls, such as training annotators to make sure they understand the annotation task and filtering out unqualified annotators and their annotations (Koto et al., 2022); (3) documenting the creation process (e.g., preprocessing, annotating) and recommended uses (Gebru et al., 2021). Finally, we argue that claims about the usefulness of evaluation metrics should be based on comprehensive and reliable assessment under various usage scenarios, such as detecting summaries with significant errors, distinguishing between summarisation systems of similar quality, or identifying more fine-grained issues in the generated summary.
202
+
203
+ # Limitations
204
+
205
+ The main limitation of this study is that we did not contribute any new resources or procedures for meta-evaluation. This study states our opinions, based on our review of the current literature.
206
+
207
+ # References
208
+
209
+ Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sashank Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2024. PaLM: Scaling language modeling with pathways. JMLR, 24.
210
+
211
+ Roee Aharoni, Shashi Narayan, Joshua Maynez, Jonathan Herzig, Elizabeth Clark, and Mirella Lapata. 2023. Multilingual Summarization with Factual Consistency Evaluation. In Findings of ACL.
212
+
213
+ Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization.
214
+ Asma Ben Abacha, Wen-wai Yim, Yadan Fan, and Thomas Lin. 2023a. An Empirical Study of Clinical Note Generation from Doctor-Patient Encounters. In EACL.
215
+ Asma Ben Abacha, Wen-wai Yim, George Michalopoulos, and Thomas Lin. 2023b. An Investigation of Evaluation Methods in Automatic Medical Note Generation. In Findings of ACL.
216
+ Manik Bhandari, Pranav Narayan Gour, Atabak Ashfaq, Pengfei Liu, and Graham Neubig. 2020. Re-evaluating Evaluation in Text Summarization. In EMNLP.
217
+ Dallas Card, Peter Henderson, Urvashi Khandelwal, Robin Jia, Kyle Mahowald, and Dan Jurafsky. 2020. With Little Power Comes Great Responsibility. In EMNLP.
218
+ Arun Chaganty, Stephen Mussmann, and Percy Liang. 2018. The price of debiasing automatic metrics in natural language evaluation. In ACL.
219
+ Subhajit Chaudhury, Sarathkrishna Swaminathan, Chulaka Gunasekara, Maxwell Crouse, Srinivas Ravishankar, Daiki Kimura, Keerthiram Murugesan, Ramón Fernandez Astudillo, Tahira Naseem, Pavan Kapanipathi, and Alexander Gray. 2022. X-FACTOR: A Cross-metric Evaluation of Factual Correctness in Abstractive Summarization. In EMNLP.
220
+ Yiran Chen, Pengfei Liu, and Xipeng Qiu. 2021. Are Factuality Checkers Reliable? Adversarial Meta-evaluation of Factuality in Summarization. In Findings of EMNLP.
221
+ Elizabeth Clark, Shruti Rijhwani, Sebastian Gehrmann, Joshua Maynez, Roee Aharoni, Vitaly Nikolaev, Thibault Sellam, Aditya Siddhant, Dipanjan Das, and Ankur Parikh. 2023. SEAHORSE: A Multilingual, Multifaceted Dataset for Summarization Evaluation. In EMNLP.
222
+ Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents. In NAACL.
223
+ Daniel Deutsch, Tania Bedrax-Weiss, and Dan Roth. 2021a. Towards Question-Answering as an Automatic Metric for Evaluating the Content Quality of a Summary. TACL, 9.
224
+ Daniel Deutsch, Rotem Dror, and Dan Roth. 2021b. A Statistical Analysis of Summarization Evaluation Metrics Using Resampling Methods. TACL, 9.
225
+
226
+ Daniel Deutsch, Rotem Dror, and Dan Roth. 2022a. On the Limitations of Reference-Free Evaluations of Generated Text. In EMNLP.
227
+ Daniel Deutsch, Rotem Dror, and Dan Roth. 2022b. Re-Examining System-Level Correlations of Automatic Summarization Evaluation Metrics. In NAACL.
228
+ Esin Durmus, He He, and Mona Diab. 2020. FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization. In ACL.
229
+ Ori Ernst, Ori Shapira, Ido Dagan, and Ran Levy. 2023. Re-Examining Summarization Evaluation across Multiple Quality Criteria. In Findings of EMNLP.
230
+ Alexander R. Fabbri, Wojciech Kryscinski, Bryan McCann, Richard Socher, and Dragomir R. Radev. 2020. SummEval: Re-evaluating Summarization Evaluation. TACL, 9.
231
+ Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking Generated Summaries by Correctness: An Interesting but Challenging Application for Natural Language Inference. In ACL.
232
+ Marcio Fonseca and Shay B Cohen. 2024. Can Large Language Model Summarizers Adapt to Diverse Scientific Communication Goals? In Findings of ACL.
233
+ Saadia Gabriel, Asli Celikyilmaz, Rahul Jha, Yejin Choi, and Jianfeng Gao. 2021. GO FIGURE: A Meta Evaluation of Factuality in Summarization. In Findings of ACL.
234
+ Mingqi Gao and Xiaojun Wan. 2022. DialSummEval: Revisiting Summarization Evaluation for Dialogues. In NAACL.
235
+ Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé Iii, and Kate Crawford. 2021. Datasheets for datasets. Commun. ACM, 64.
236
+ Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Anuoluwapo Aremu, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna-Adriana Cinciu, Dipanjan Das, Kaustubh Dhole, Wanyu Du, Esin Durmus, Ondrej Dušek, Chris Chinenye Emezue, Varun Gangal, Cristina Garbacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jhamtani, Yangfeng Ji, Shailza Jolly, Mihir Kale, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pedro Henrique Martins, Angelina McMillan-Major, Simon Mille, Emiel van Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Andre Niyongabo Rubungo, Salomey Osei, Ankur Parikh, Laura Perez-Beltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, João Sedoc,
237
+
238
+ Thibault Sellam, Samira Shaikh, Anastasia Shimorina, Marco Antonio Sobrevilla Cabezudo, Hendrik Strobelt, Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, and Jiawei Zhou. 2021. The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics. In Workshop on Natural Language Generation, Evaluation, and Metrics.
239
+ Sebastian Gehrmann, Elizabeth Clark, and Thibault Sellam. 2023. Repairing the cracked foundation: A survey of obstacles in evaluation practices for generated text. JAIR, 77.
240
+ Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization. In Workshop on New Frontiers in Summarization.
241
+ Tomas Goldsack, Zheheng Luo, Qianqian Xie, Carolina Scarton, Matthew Shardlow, Sophia Ananiadou, and Chenghua Lin. 2023. Overview of the BioLaySumm 2023 Shared Task on Lay Summarization of Biomedical Research Articles. In Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks.
242
+ Tomas Goldsack, Zhihao Zhang, Chenghua Lin, and Carolina Scarton. 2022. Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature. In EMNLP.
243
+ Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022. News Summarization and Evaluation in the Era of GPT-3. arXiv, 2209.12356.
244
+ Yvette Graham. 2015. Re-evaluating Automatic Summarization with BLEU and 192 Shades of ROUGE. In EMNLP.
245
+ Tahmid Hasan, Abhik Bhattacharjee, Md. Saiful Islam, Kazi Mubasshir, Yuan-Fang Li, Yong-Bin Kang, M. Sohel Rahman, and Rifat Shahriyar. 2021. XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages. In Findings of ACL-IJCNLP.
246
+ Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022. TRUE: Re-evaluating Factual Consistency Evaluation. In NAACL.
247
+ Dandan Huang, Leyang Cui, Sen Yang, Guangsheng Bao, Kun Wang, Jun Xie, and Yue Zhang. 2020. What Have We Achieved on Text Summarization? In EMNLP.
248
+ Neslihan Iskender, Tim Polzehl, and Sebastian Möller. 2021. Reliability of Human Evaluation for Text Summarization: Lessons Learned and Challenges Ahead. In Workshop on Human Evaluation of NLP Systems.
249
+ Daniel Khashabi, Gabriel Stanovsky, Jonathan Bragg, Nicholas Lourie, Jungo Kasai, Yejin Choi, Noah A. Smith, and Daniel Weld. 2022. GENIE: Toward
250
+
251
+ Reproducible and Standardized Human Evaluation for Text Generation. In EMNLP.
252
+ Huan Yee Koh, Jiaxin Ju, He Zhang, Ming Liu, and Shirui Pan. 2022. How Far are We from Robust Long Abstractive Summarization? In EMNLP.
253
+ Fajri Koto, Timothy Baldwin, and Jey Han Lau. 2022. FFCI: A Framework for Interpretable Automatic Evaluation of Summarization. JAIR, 73.
254
+ Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the Factual Consistency of Abstractive Text Summarization. In EMNLP.
255
+ Philippe Laban, Wojciech Kryscinski, Divyansh Agarwal, Alexander Fabbri, Caiming Xiong, Shafiq Joty, and Chien-Sheng Wu. 2023. SummEdits: Measuring LLM Ability at Factual Reasoning Through The Lens of Summarization. In EMNLP.
256
+ Philippe Laban, Tobias Schnabel, Paul N. Bennett, and Marti A. Hearst. 2022. SummaC: Re-Visiting NLI-based Models for Inconsistency Detection in Summarization. TACL, 10.
257
+ Junyi Li, Xiaoxue Cheng, Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2023. HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models. In EMNLP.
258
+ Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In Text Summarization Branches Out.
259
+ Yiqi Liu, Nafise Sadat Moosavi, and Chenghua Lin. 2023a. LLMs as Narcissistic Evaluators: When Ego Inflates Evaluation Scores. In Findings of ACL.
260
+ Yixin Liu, Alexander R. Fabbri, Pengfei Liu, Yilun Zhao, Linyong Nan, Ruilin Han, Simeng Han, Shafiq R. Joty, Chien-Sheng Wu, Caiming Xiong, and Dragomir R. Radev. 2023b. Revisiting the Gold Standard: Grounding Summarization Evaluation with Robust Human Evaluation. In ACL.
261
+ Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. 2022. BRIO: Bringing Order to Abstractive Summarization. In ACL.
262
+ Liang Ma, Shuyang Cao, Robert L. Logan IV, Di Lu, Shihao Ran, Ke Zhang, Joel Tetreault, and Alejandro Jaimes. 2023. BUMP: A Benchmark of Unfaithful Minimal Pairs for Meta-Evaluation of Faithfulness Metrics. In ACL.
263
+ Inderjeet Mani. 2001. Summarization Evaluation: An Overview. In Workshop on Research in Chinese and Japanese Text Retrieval and Text Summarization.
264
+ Abhika Mishra, Akari Asai, Vidhisha Balachandran, Yizhong Wang, Graham Neubig, Yulia Tsvetkov, and Hannaneh Hajishirzi. 2024. Fine-grained hallucination detection and editing for language models. In COLM.
265
+
266
+ Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond. In CoNLL.
267
+ Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization. In EMNLP.
268
+ Jekaterina Novikova, Ondrej Dušek, and Verena Rieser. 2018. RankME: Reliable Human Ratings for Natural Language Generation. In NAACL.
269
+ Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Francis Christiano, Jan Leike, and Ryan J. Lowe. 2022. Training language models to follow instructions with human feedback. In NeurIPS.
270
+ Artidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov. 2021. Understanding Factuality in Abstractive Summarization with FRANK: A Benchmark for Factuality Metrics. In NAACL.
271
+ Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. 2023. Med-HALT: Medical Domain Hallucination Test for Large Language Models. In CoNLL.
272
+ Arjun Panickssery, Samuel R. Bowman, and Shi Feng. 2024. LLM Evaluators Recognize and Favor Their Own Generations. In NeurIPS.
273
+ Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a Method for Automatic Evaluation of Machine Translation. In ACL.
274
+ Maxime Peyrard. 2019. Studying Summarization Evaluation Metrics in the Appropriate Scoring Range. In ACL.
275
+ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. JMLR, 21.
276
+ Sanjana Ramprasad, Kundan Krishna, Zachary C Lipton, and Byron C Wallace. 2024. Evaluating the Factuality of Zero-shot Summarizers Across Varied Domains. In EACL.
277
+ Peter Rankel, John Conroy, Eric Slud, and Dianne O'Leary. 2011. Ranking Human and Machine Summarization Systems. In EMNLP.
278
+ Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond Accuracy: Behavioral Testing of NLP Models with CheckList. In ACL.
279
+
280
+ Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2020. MLSUM: The Multilingual Summarization Corpus. In EMNLP.
281
+ Chantal Shaib, Joe Barrow, Alexa F Siu, Byron C Wallace, and Ani Nenkova. 2024. How Much Annotation is Needed to Compare Summarization Models? In Workshop on Bridging Human-Computer Interaction and Natural Language Processing.
282
+ Chenhui Shen, Liying Cheng, Yang You, and Lidong Bing. 2023. Large Language Models are Not Yet Human-Level Evaluators for Abstractive Summarization. In Findings of EMNLP.
283
+ Zejiang Shen, Kyle Lo, Lauren Yu, Nathan Dahlberg, Margo Schlanger, and Doug Downey. 2022. MultiLexSum: Real-World Summaries of Civil Rights Lawsuits at Multiple Granularities. In NeurIPS.
284
+ Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. In NeurIPS.
285
+ Derek Tam, Anisha Mascarenhas, Shiyue Zhang, Sarah Kwan, Mohit Bansal, and Colin Raffel. 2023. Evaluating the Factual Consistency of Large Language Models Through News Summarization. In Findings of ACL.
286
+ Liyan Tang, Tanya Goyal, Alex Fabbri, Philippe Laban, Jiacheng Xu, Semih Yavuz, Wojciech Kryscinski, Justin Rousseau, and Greg Durrett. 2023. Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors. In ACL.
287
+ Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and Answering Questions to Evaluate the Factual Consistency of Summaries. In ACL.
288
+ Wenhao Wu, Wei Li, Xinyan Xiao, Jiachen Liu, Sujian Li, and Yajuan Lyu. 2023. WeCheck: Strong Factual Consistency Checker via Weakly Supervised Learning. In ACL.
289
+ Huajian Zhang, Yumo Xu, and Laura Perez-Beltrachini. 2024a. Fine-Grained Natural Language Inference Based Faithfulness Evaluation for Diverse Summarisation Tasks. In EACL.
290
+ Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2020a. BERTScore: Evaluating Text Generation with BERT. In ICLR.
291
+ Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B Hashimoto. 2024b. Benchmarking Large Language Models for News Summarization. TACL, 12.
292
+ Yuhao Zhang, Derek Merck, Emily Tsai, Christopher D. Manning, and Curtis Langlotz. 2020b. Optimizing the Factual Correctness of a Summary: A Study of Summarizing Radiology Reports. In ACL.
293
+
294
+ Yiran Zhao, Jinghan Zhang, I. Chern, Siyang Gao, Pengfei Liu, and Junxian He. 2024. Felm: Benchmarking factuality evaluation of large language models. In NeurIPS.
295
+ Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, and Dragomir Radev. 2021. QMSum: A New Benchmark for Query-based Multi-domain Meeting Summarization. In NAACL.
296
+ Rongxin Zhu, Jianzhong Qi, and Jey Han Lau. 2023. Annotating and Detecting Fine-grained Factual Errors for Dialogue Summarization. In ACL.
297
+ Haojie Zhuang, Wei Emma Zhang, Leon Xie, Weitong Chen, Jian Yang, and Quan Z Sheng. 2024. Automatic, Meta and Human Evaluation for Multimodal Summarization with Multimodal Output. In NAACL.
298
+
299
+ # A Implementation Details
300
+
301
+ # A.1 Prompts used to generate summaries on tasks proposed in MULTI-LEXSUM
302
+
303
+ - Short summary (GPT-3.5)
304
+
305
+ System: As a junior legal intern, please craft a summary (approximately 130 words) for the given legal case.
306
+
307
+ User: [article]
308
+
309
+ - Tiny summary (GPT-3.5)
310
+
311
+ System: As a junior legal intern, please craft a summary (approximately 25 words) for the given legal case.
312
+
313
+ User: [article]
314
+
315
+ - Short summary (GPT-4)
316
+
317
+ System: As a senior legal professional, please craft a summary (approximately 130 words) for the given legal case.
318
+
319
+ User: [article]
320
+
321
+ - Tiny summary (GPT-4)
322
+
323
+ System: As a senior legal professional, please craft a summary (approximately 25 words) for the given legal case.
324
+
325
+ User: [article]
326
+
327
+ # B Additional Results
328
+
329
+ Table 2 shows the correlations between different quality dimensions within the same annotator group, and across different groups for the same dimension.
330
+
331
+ <table><tr><td></td><td>Coher.</td><td>Faith.</td><td>Fluen.</td><td>Rele.</td><td>Expert</td></tr><tr><td colspan="6">Expert annotators</td></tr><tr><td>Coherence</td><td>-</td><td>0.300</td><td>0.544</td><td>0.700</td><td>0.877</td></tr><tr><td>Faithfulness</td><td>0.300</td><td>-</td><td>0.594</td><td>0.500</td><td>0.900</td></tr><tr><td>Fluency</td><td>0.544</td><td>0.594</td><td>-</td><td>0.745</td><td>0.810</td></tr><tr><td>Relevance</td><td>0.700</td><td>0.500</td><td>0.745</td><td>-</td><td>0.857</td></tr><tr><td colspan="6">Crowd annotators</td></tr><tr><td>Coherence</td><td>-</td><td>0.310</td><td>0.500</td><td>0.393</td><td>-0.083</td></tr><tr><td>Faithfulness</td><td>0.310</td><td>-</td><td>0.343</td><td>0.168</td><td>0.059</td></tr><tr><td>Fluency</td><td>0.500</td><td>0.343</td><td>-</td><td>0.326</td><td>0.142</td></tr><tr><td>Relevance</td><td>0.393</td><td>0.168</td><td>0.326</td><td>-</td><td>-0.159</td></tr></table>
332
+
333
+ Table 2: System-level Kendall's $\tau$ correlation coefficients between different quality dimensions within the same annotator group, and correlation coefficients between different annotator groups for the same quality dimension. underline: the correlation coefficient is significant $(p\leq 0.05)$ . The human judgements are from SUMMEVAL (Fabbri et al., 2020).
334
+
335
+ # C Meta-evaluation Benchmarks
336
+
337
+ SUMMEVAL Fabbri et al. (2020) assembled a collection of summaries generated by 16 models trained on the CNN/DM (Nallapati et al., 2016) dataset and collected human judgements from 3 expert judges and 5 crowd-source workers. Judges were asked to evaluate the summaries along four dimensions: relevance (concerning the selection of important content), consistency (concerning factual alignment between the summary and the source), fluency (concerning the quality of individual sentences), and coherence (concerning the collective quality of all sentences).
338
+
339
+ REALSUMM Bhandari et al. (2020) released a dataset of human judgements on the relevance of summaries collected from 25 neural summarization systems. Bhandari et al. create Semantic Content Units (SCUs) for each reference summary and then hire crowd workers to annotate each generated summary, determining whether each SCU can be inferred from the generated summary.
340
+
341
+ FRANK Pagnoni et al. (2021) devise a typology of factual errors (e.g., predicate errors, entity errors, circumstance errors, etc.) and use it to collect human annotations of generated summaries for the CNN/DM (Nallapati et al., 2016) and XSUM (Narayan et al., 2018) datasets. They conduct the annotation task on the Amazon Mechanical
342
+
343
+ Turk platform and find nearly perfect agreement (a Cohen's kappa of 0.86) between the majority class of the three crowd annotators and one expert annotator on 20 summaries.
344
+
345
+ GO FIGURE Gabriel et al. (2021) introduce a meta-evaluation framework for evaluating factuality evaluation metrics. Gabriel et al. build one diagnostic dataset that consists of transformed reference summaries with simulated factuality errors (i.e., pronoun entity errors, verb tense or negation errors, intrinsic entity errors, extrinsic entity errors, sentiment errors, false quotes). They also use finetuned T5 summarization models to generate summaries and annotate them for fine-grained factual errors based on the above-mentioned error types.
346
+
347
+ DIALSUMMEVAL Gao and Wan (2022) sample 100 dialogues from the SAMSUM (Gliwa et al., 2019) test set and evaluate the summaries generated by 14 summarization models. Three college students fluent in English were recruited to assess the relevance, consistency, fluency, and coherence of the generated summaries.
348
+
349
+ BUMP Ma et al. (2023) introduce a dataset of 889 summary pairs, where a single error is introduced to a reference summary from the CNN/DM (Nallapati et al., 2016) dataset to produce an unfaithful summary. Ma et al. define a taxonomy of seven unfaithful error types (i.e., intrinsic/extrinsic predicate error, intrinsic/extrinsic entity error, intrinsic/extrinsic circumstance error, and coreference error) and instruct annotators to introduce errors of a specific type.
350
+
351
+ ROSE Liu et al. (2023b) propose a more objective human evaluation protocol for assessing the salience of summaries, which dissects the summaries into fine-grained content units and defines the annotation task based on those units. Using the protocol, Liu et al. curate a large human evaluation dataset consisting of 22,000 summary-level annotations over 28 systems on samples from CNN/DM (Nallapati et al., 2016), XSUM (Narayan et al., 2018), and SAMSUM (Gliwa et al., 2019).
352
+
353
+ SEAHORSE Clark et al. (2023) collect annotations along 6 dimensions: comprehensibility (can be read and understood by the rater), repetition (free of unnecessarily repeated information), grammar (grammatically correct), attribution (fully attributable to the source article), main ideas (captures the main
354
+
355
+ ideas of the source article), and conciseness (concisely represents the information in the source article). Annotators can answer 'Yes,' 'No,' or 'Unsure' to the first three questions given only the summary, and to the last three questions given both the article and the summary. Their annotations provide both a benchmark for meta-evaluation and a resource for training learning-based evaluation metrics.
356
+
357
+ SUMMEDITS Laban et al. (2023) propose a new protocol for creating inconsistency detection benchmarks. First, they manually verify the factual consistency of a small set of seed summaries. Then, they use LLMs to generate numerous edited versions (e.g., via entity modification, antonym swap, hallucinated fact insertion, and negation insertion) of these consistent seed summaries. Finally, human annotators determine whether each edit introduces a factual inconsistency. Laban et al. implement the protocol on ten diverse textual domains, including the legal, dialogue, academic, financial, and sales domains.
358
+
359
+ FIB Tam et al. (2023) propose a factual inconsistency benchmark, where each example consists of a document and two summaries (one factually consistent summary and one factually inconsistent summary). For the factually consistent summaries, they take reference summaries from CNN/DM (Nallapati et al., 2016) and XSUM (Narayan et al., 2018) and manually fix the factually inconsistent ones using minimal edits. They also manually choose factually inconsistent summaries from model-generated summaries.
acriticallookatmetaevaluatingsummarisationevaluationmetrics/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7f4665f506b88d12eaecbc5003b2dc5fbedb286b4107a899a3c057576f9bbd3a
3
+ size 283874
acriticallookatmetaevaluatingsummarisationevaluationmetrics/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b96dfa2127f252eba10dbacbcf153929410315ad9b3fdbb6cffe0dd31358b5f4
3
+ size 394773
activationscalingforsteeringandinterpretinglanguagemodels/6c27c5f9-f677-42cf-8108-2898ec13a750_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d6ef76184d0269c0194deae1c219457e7596fa8e61ee1003cb89252f1845e590
3
+ size 94091
activationscalingforsteeringandinterpretinglanguagemodels/6c27c5f9-f677-42cf-8108-2898ec13a750_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d9da522178bf0e37075c5e27d66929e0758339442ac2c085bc144b402408b42c
3
+ size 110421
activationscalingforsteeringandinterpretinglanguagemodels/6c27c5f9-f677-42cf-8108-2898ec13a750_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c3626f72138255d2ccb806affed10dc300fe0539b2cf440939e4b4220ac0212d
3
+ size 954655
activationscalingforsteeringandinterpretinglanguagemodels/full.md ADDED
@@ -0,0 +1,438 @@
 
 
 
 
1
+ # Activation Scaling for Steering and Interpreting Language Models
2
+
3
+ # Niklas Stoehr<sup>1</sup> Kevin Du<sup>1</sup> Vésteinn Snæbjarnarson<sup>2</sup> Robert West<sup>3</sup> Ryan Cotterell<sup>1</sup> Aaron Schein<sup>4</sup>
4
+
5
+ $^{1}$ ETH Zürich $^{2}$ University of Copenhagen $^{3}$ EPFL $^{4}$ The University of Chicago niklas.stoehr@inf.ethz.ch kevin.du@inf.ethz.ch vesn@di.ku.dk robert.west@epfl.ch ryan.cotterell@inf.ethz.ch schein@uchicago.edu
6
+
7
+ # Abstract
8
+
9
+ Given the prompt "Rome is in", can we steer a language model to flip its prediction of an incorrect token "France" to a correct token "Italy" by only multiplying a few relevant activation vectors with scalars? We argue that successfully intervening on a model is a prerequisite for interpreting its internal workings. Concretely, we establish a three-term objective: a successful intervention should flip the correct with the wrong token and vice versa (effectiveness), and leave other tokens unaffected (faithfulness), all while being sparse (minimality). Using gradient-based optimization, this objective lets us learn (and later evaluate) a specific kind of efficient and interpretable intervention: activation scaling only modifies the signed magnitude of activation vectors to strengthen, weaken, or reverse the steering directions already encoded in the model. On synthetic tasks, this intervention performs comparably with steering vectors in terms of effectiveness and faithfulness, but is much more minimal allowing us to pinpoint interpretable model components. We evaluate activation scaling from different angles, compare performance on different datasets, and make activation scalars a learnable function of the activation vectors themselves to generalize to varying-length prompts.<sup>1</sup>
10
+
11
+ # 1 Introduction
12
+
13
+ Understanding which components of a language model play which roles in which tasks is a core aim of mechanistic interpretability. Given the prompt Rome is in, for instance, one might ask which components of the model most influence it to favor Italy over some incorrect answer token, such as France. In addressing this question, a natural axiom is that a given component can only be understood as influential for a given task if intervening on it meaningfully alters the model's task-specific behavior.
14
+
15
+ ![](images/fca56349f72f7d6e0d0f4e90bcb930a0517fe9c592cb99c12e4db3fede5db05b.jpg)
16
+ Figure 1: We show that it is often sufficient to scale a few influential activation vectors $\mathbf{h}_{l,i}^{(s)}$ for a model to favor one answer token over another token. This could be the MLP output at layer $l$ for token position $i$ . We learn multiplicative scalars, $\alpha_{l,i}^{(s)}$ , using gradient-based optimization. These correspond to interpretable interventions that generalize to test set prompts while relying on fewer parameters than additive steering vectors.
17
+
18
+ Building on this basic axiom, a growing literature seeks to both generate and test hypotheses about where certain behaviors are localized in a model by employing targeted interventions with methods such as activation patching (Lakretz et al., 2019; Vig et al., 2020; Meng et al., 2022), among others. Studies in this literature often produce a set of attribution scores associated with various locations in the model which represent how much the model's output changed after editing the activation vectors at each location. Although an effective intervention may be necessary to believe a given localization hypothesis, it is not sufficient, as interventions to other model locations may be similarly effective. Indeed, recent work has questioned the relationship between interpretability and intervention on this basis, and has advocated for more rigorous and deliberate methodology for connecting the two (Hase et al., 2023; Wang and Veitch, 2024; Hanna et al., 2024).
19
+
20
+ A parallel literature on model steerability also seeks to develop effective interventions, not for the primary purpose of interpretability, but to steer models toward desirable behaviors, like factuality (Li et al., 2023), or away from undesirable behaviors, like toxicity (Ilharco et al., 2023; Turner
21
+
22
+ et al., 2023). Methods in this literature are typically designed to be maximally effective for steering and thus tend to involve model-wide (i.e., not necessarily localized) interventions to the weights (Houlsby et al., 2019; Hu et al., 2021; Ilharco et al., 2023) or activation vectors (Subramani et al., 2022; Hernandez et al., 2024).
23
+
24
+ This paper seeks to synthesize the goals of model-wide interventions while still offering mechanistic insights into the model. Specifically, we seek interventions that are effective, and not localized a priori, but are nevertheless minimal and faithful. We define an intervention to be effective if it flips the prediction of the correct answer token (e.g., Italy) to an incorrect token (e.g., France). We define an intervention to be faithful if it does not substantially alter the probabilities of tokens unrelated to the given task. And, we define an intervention to be minimal if it alters activation vectors sparsely, in only a small number of locations. Our notions of minimality and faithfulness follow from recent work in the Transformer circuits literature (Wang et al., 2023; Bhaskar et al., 2024). We operationalize these three desiderata via a three-term objective allowing us to learn model-wide interventions using gradient-based optimization.
25
+
26
+ More specifically, we propose a kind of modelwide yet parsimonious intervention that associates a single scalar parameter with each of many locations in a model. The scalar parameters are learned so that the model can be effectively steered—e.g., away from Italy and toward France—by simply scaling the activation vector at each location. We call this approach activation scaling (ACTIVSCALAR), inspired by the idea that some model components are highly specialized for certain task-relevant computations (Voita et al., 2019; Geva et al., 2022). Existing work (Yu et al., 2023; Merullo et al., 2024; Ortu et al., 2024) has identified individual components, e.g., name mover heads (Wang et al., 2023), in a first step and successfully steered a model by scaling only the contributions of those individual components in a second step.
27
+
28
+ To evaluate our approach, we construct a baseline method that is the same in all aspects except it learns additive vectors at all locations, rather than single multiplicative scalars. We refer to this approach as steering vectors (STEERVEC). In a suite of experiments, we find overall that activation scaling learns interventions which are as effective and faithful as steering vectors, while requiring far fewer learnable parameters. Our results suggest
29
+
30
+ that merely scaling the signed magnitude of activation vectors, without further affecting their direction, is sufficient for effective steering on simple tasks. Moreover, we find activation scalars are highly interpretable. They are easy to understand as simply strengthening or weakening the steering directions already encoded in the model (Subramani et al., 2022; Ferrando and Voita, 2024), and when visualized, they provide sparse and localized descriptions of important model components. Finally, to extend our approach beyond fixed prompt templates, we develop a dynamic version of activation scaling. DYNSCALAR makes the activation scalars learned functions of the activation vectors themselves, thus allowing learned interventions to transfer to test set prompts of varying length.
31
+
32
+ # 2 Transformer Language Models
33
+
34
+ Let $\Sigma$ be an alphabet of tokens, a finite, non-empty set, and let $\Sigma^{*}$ be the set of all strings with tokens drawn from $\Sigma$ . A language model $p$ is a probability distribution over $\Sigma^{*}$ . As is current practice, most language models are defined autoregressively. Let $\boldsymbol{v} = v_{1} \cdots v_{I} \in \Sigma^{*}$ be a string; then the autoregressive factorization of $p$ is given by
35
+
36
+ $$
37
+ p(\boldsymbol{v}) = p(\operatorname{EOS} \mid \boldsymbol{v}) \prod_{i=1}^{I} p(v_{i} \mid \boldsymbol{v}_{<i}). \tag{1}
38
+ $$
39
+
40
+ Each local conditional distribution $p(\cdot \mid \boldsymbol{v}_{< i})$ is a distribution over $\overline{\Sigma} = \Sigma \cup \mathrm{EOS}$ , where EOS $\notin \Sigma$ is the end-of-string token. In the context of an autoregressive language model, we call $\boldsymbol{v}_{< i}$ a prompt.
41
+
42
+ Let $x_{n}\in \Sigma^{*}$ be a prompt of length $I_{n}$ and $y_{n}\in \overline{\Sigma}$ the next token. A common way to define the local conditional is via the softmax function $\sigma$, which maps from $\mathbb{R}^{|\overline{\Sigma}|}$ to the probability simplex $\Delta^{|\overline{\Sigma}| - 1}$:
43
+
44
+ $$
45
+ p(y_{n} \mid x_{n}) = \sigma(f(x_{n}))_{y_{n}} = \frac{\exp\big(f(x_{n})_{y_{n}}\big)}{\sum_{y' \in \overline{\Sigma}} \exp\big(f(x_{n})_{y'}\big)}, \tag{2}
46
+ $$
47
+
48
+ where $f(x_{n}) = \mathbf{E}\mathbf{h}_{L,I_{n}}$ defines the logit function $f\colon \Sigma^{*}\to \mathbb{R}^{\left|\overline{\Sigma}\right|}$ of a language model, where $\mathbf{E}\in \mathbb{R}^{\left|\overline{\Sigma}\right|\times D}$ is the projection (or unembedding) matrix, and where $\mathbf{h}_{L,I_n}\in \mathbb{R}^D$ is the activation vector at the final model layer $L$ and last token position $I_{n}$ of the prompt.
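+
+ A small numerical sketch of Eq. (2), with a random matrix and vector standing in for the trained unembedding $\mathbf{E}$ and activation $\mathbf{h}_{L,I_{n}}$:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ V, D = 11, 8                     # vocab size incl. EOS, hidden size
+ E = rng.normal(size=(V, D))      # unembedding (projection) matrix
+ h = rng.normal(size=D)           # activation at final layer, last position
+
+ logits = E @ h                   # f(x_n): one logit per token
+ probs = np.exp(logits - logits.max())
+ probs /= probs.sum()             # softmax -> next-token distribution
+ print(probs.sum(), probs.argmax())
+ ```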
49
+
50
+ Most state-of-the-art language models rely on the Transformer architecture (Vaswani et al., 2017) to compute $f$ . Transformers are composed of $L$ layers of Transformer blocks, each of which consists
51
+
52
+ <table><tr><td>Dataset</td><td>Prompt</td><td>Answer Toks</td></tr><tr><td>Country-Capital Conflict (CCC)</td><td>The capital of Germany is Paris. Q: What is the capital of Germany? A:</td><td>Berlin / Paris</td></tr><tr><td>Indirect Object Identification (IOI)</td><td>When Anne met with Tom, Tom gave the book to</td><td>Anne / Tom</td></tr></table>
53
+
54
+ Table 1: We study two datasets commonly used for mechanistic interpretability: conflicts (CCC) between correct and incorrect facts about country capitals; Indirect Object Identification (IOI) requiring weak syntactic reasoning.
55
+
56
+ of a multi-headed attention $\mathrm{ATTN}_l$ and a multilayer perceptron $\mathrm{MLP}_l$ function that read from and write into the residual stream. For instance, a Transformer block in a GPT2 (Radford et al., 2019) or Pythia (Biderman et al., 2023) model is given by
57
+
58
+ $$
59
+ \mathbf {H} _ {l} ^ {(1)} = \operatorname {A T T N} _ {l} \left(\operatorname {L N} _ {l} ^ {(1)} \left(\mathbf {H} _ {l - 1} ^ {(4)}\right)\right) \tag {3a}
60
+ $$
61
+
62
+ $$
63
+ \mathbf {h} _ {l, i} ^ {(2)} = \mathbf {h} _ {l, i} ^ {(1)} + \mathbf {h} _ {l - 1, i} ^ {(4)} \tag {3b}
64
+ $$
65
+
66
+ $$
67
+ \mathbf {h} _ {l, i} ^ {(3)} = \operatorname {M L P} _ {l} \left(\operatorname {L N} _ {l} ^ {(2)} \left(\mathbf {h} _ {l, i} ^ {(2)}\right)\right) \tag {3c}
68
+ $$
69
+
70
+ $$
71
+ \mathbf {h} _ {l, i} ^ {(4)} = \mathbf {h} _ {l, i} ^ {(3)} + \mathbf {h} _ {l, i} ^ {(2)} \tag {3d}
72
+ $$
73
+
74
+ $$
75
+ \mathbf {H} _ {l} ^ {(4)} = \left[ \mathbf {h} _ {l, 1} ^ {(4)}, \dots , \mathbf {h} _ {l, I _ {n}} ^ {(4)} \right], \tag {3e}
76
+ $$
77
+
78
+ where $\mathbf{h}_{l,i}^{(s)}\in \mathbb{R}^D$ is an activation (column) vector at a specific site $s$ at layer $l$ and token position $i$ , and $\mathrm{LN}_l^{(s)}$ is the pre-layer normalization (Ba et al., 2016; Xiong et al., 2020). We stack activation vectors across token positions to obtain a layer-wise activation matrix $\mathbf{H}_l^{(s)}\in \mathbb{R}^{D\times I_n}$ . The matrix $\mathbf{H}_1^{(4)}$ is initialized to a representation of the input $x_{n}$ .
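+
+ The block structure of Eq. (3) is compact enough to sketch directly; below, small randomly initialized modules stand in for the trained $\mathrm{ATTN}_l$ and $\mathrm{MLP}_l$, and positions are stored as rows rather than columns:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ D, I = 16, 5                                # hidden size, prompt length
+ ln1, ln2 = nn.LayerNorm(D), nn.LayerNorm(D)
+ attn = nn.Linear(D, D)                      # toy stand-in for ATTN_l
+ mlp = nn.Sequential(nn.Linear(D, 4 * D), nn.GELU(), nn.Linear(4 * D, D))
+
+ H_prev = torch.randn(I, D)                  # H_{l-1}^{(4)}
+ h1 = attn(ln1(H_prev))                      # (3a) attention output
+ h2 = h1 + H_prev                            # (3b) first residual add
+ h3 = mlp(ln2(h2))                           # (3c) MLP output
+ h4 = h3 + h2                                # (3d) second residual add
+ print(h4.shape)                             # (3e) H_l^{(4)}: all positions
+ ```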
79
+
80
+ # 3 Activation-Level Interventions
81
+
82
+ # 3.1 Choosing Intervention Points
83
+
84
+ We focus on a class of interventions which modify one or more of the activation vectors in Eq. (3). This level of abstraction is motivated by our desire to interpret larger components of the Transformer. However, we note that the granularity at which we seek to intervene and understand the model is a choice which depends on specific use cases. Our intervention targets a set of layer indices $\mathcal{L}$ , token positions $\mathcal{I}$ , and sites $\mathcal{S}$ . We denote the Cartesian product of the layer indices, token positions and sites, $\mathcal{K} = \mathcal{L} \times \mathcal{I} \times \mathcal{S}$ , as the intervention points.
85
+
86
+ # 3.2 Defining an Intervention
87
+
88
+ We intervene on the activation vectors of the model $f(x_{n})$ by specifying an intervention $\widetilde{f}_{\pmb{\theta}}^{\beta}(x_{n})$ that involves a set of learnable parameters $\pmb{\theta}$ and a hyperparameter $\beta \in \mathbb{R}$ which controls the strength
89
+
90
+ and direction of the intervention. For instance, setting $\beta = 0$ removes an intervention, resulting in $\widetilde{f}_{\pmb{\theta}}^{\beta}(x_n) = f(x_n)$ , while switching the sign of $\beta$ reverses the intervention's direction. We compute $\widetilde{f}_{\pmb{\theta}}^{\beta}(x_n)$ via $\widetilde{\mathbf{h}}_{l,i}^{(s)}$ as follows.
91
+
92
+ Additive Vectors. We first consider an intervention based on steering vectors (STEERVEC). Specifically, we define a set of intervention parameters $\pmb{\theta} = \{\mathbf{v}_{l,i}^{(s)}\}_{(l,i,s)\in \mathcal{K}}$ that associate a vector $\mathbf{v}_{l,i}^{(s)}$ with each intervention point. The intervention adds this vector to its corresponding activation vector:
93
+
94
+ $$
95
+ \widetilde {\mathbf {h}} _ {l, i} ^ {(s)} = \mathbf {h} _ {l, i} ^ {(s)} + \beta \mathbf {v} _ {l, i} ^ {(s)}. \tag {4}
96
+ $$
97
+
98
+ Multiplicative Scalars. Applying an intervention vector modifies both the direction and magnitude of the activation vector. We instead propose a more parameter-efficient intervention which is restricted to scaling the signed magnitude of each activation vector via a single multiplicative scalar. This approach, which we call activation scalars (ACTIVSCALAR), is given by
99
+
100
+ $$
101
+ \widetilde {\mathbf {h}} _ {l, i} ^ {(s)} = \mathbf {h} _ {l, i} ^ {(s)} \left(1 + \beta \alpha_ {l, i} ^ {(s)}\right). \tag {5}
102
+ $$
103
+
104
+ where $\pmb{\theta} = \{\alpha_{l,i}^{(s)}\}_{(l,i,s)\in \mathcal{K}}$ are the parameters.
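+
+ The two interventions differ only in how they edit a cached activation tensor; a sketch follows (the sizes are illustrative, and in practice the edits run inside forward hooks during the model's forward pass):
+
+ ```python
+ import torch
+
+ L, I, D = 12, 19, 768                   # layers, token positions, hidden size
+ h = torch.randn(L, I, D)                # activations at one site s
+ beta = 1.0                              # +1 steers one way, -1 reverses
+
+ # STEERVEC (Eq. 4): a learnable vector per (layer, position).
+ v = torch.zeros(L, I, D, requires_grad=True)
+ h_steervec = h + beta * v
+
+ # ACTIVSCALAR (Eq. 5): a learnable scalar per (layer, position).
+ alpha = torch.zeros(L, I, 1, requires_grad=True)
+ h_activscalar = h * (1 + beta * alpha)  # broadcast over the hidden dimension
+
+ print(v.numel(), alpha.numel())         # 175104 vs. 228 parameters
+ ```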
105
+
106
+ # 3.3 Learning an Intervention
107
+
108
+ What qualities does an interpretable intervention possess? In this article, we focus on interventions that are effective, faithful, and minimal, drawing on analogous concepts established in the Transformer circuits literature (Wang et al., 2023; Bhaskar et al., 2024).
109
+
110
+ We intervene on the model to steer its prediction on a selected task with data points $\mathcal{T} = \{(x_n, c_n, w_n)\}_{n=1}^N$. For each data point, the model is prompted by $x_n$ to choose between two competing answer tokens $c_n, w_n \in \Sigma$. The answer tokens are selected such that $c_n$ and $w_n$ always represent correct and wrong continuations of the prompt, respectively. For instance, given the prompt $x_n$ = "The capital of Poland is London. Q: What is the capital of Poland? A:", the competing answer tokens could be $c_n$ = Warsaw versus $w_n$ = London.
111
+
112
+ ![](images/52f0e54428c35de9b02bf4852ef9bfc334f4b96634bdf75990ded193906f63ae.jpg)
113
+
114
+ ![](images/14fb510cc7a81ffece5d12962437a077468e9522938717e6a435fe7b5188f293.jpg)
115
+
116
+ ![](images/76647b7d6137d3617aed4afd39913ffd247cf9925b1b4a71ff3a84b777ea9e2b.jpg)
117
+ Figure 2: Pareto fronts that visualize the trade-off between effectiveness and faithfulness (left) and effectiveness and minimality (right) on train (crosses) and test sets (points). We compare ACTIVSCALAR and STEERVEC for different hyperparameter combinations of $\lambda_{\mathrm{F}}, \lambda_{\mathrm{M}}, m \in \{0,1,10,100\}$. We learn interventions for the sites attnOut and mlpOut on all layers and token positions of GPT2-Small. We find that ACTIVSCALAR does not fall behind STEERVEC in terms of effectiveness and faithfulness, but is much more minimal on average.
118
+
119
+ ![](images/c470de350acd0ef280f10338f9ced5454580076c11a16ce11f18199c95046ee6.jpg)
120
+
121
+ Effectiveness. A popular objective for learning interventions is the logit difference between candidate tokens: $\widetilde{f}_{\pmb{\theta}}^{\beta}(x_{n})_{c_{n}} - \widetilde{f}_{\pmb{\theta}}^{\beta}(x_{n})_{w_{n}}$ . We extend this objective to allow the sign of $\beta$ to control the sign of the logit difference. Concretely, we want the logit of $c_{n}$ to be larger than that of $w_{n}$ by some margin $m \geq 0$ when setting $\beta > 0$ and smaller for $\beta < 0$ . Define $\widetilde{f}_{\pmb{\theta}}^{+}(x_{n})$ to be the intervention with $\beta = 1$ , and $\widetilde{f}_{\pmb{\theta}}^{-}(x_{n})$ to be the intervention with $\beta = -1$ . The following objective then encourages learned interventions to yield both $\widetilde{f}_{\pmb{\theta}}^{+}(x_{n})_{c_{n}} > \widetilde{f}_{\pmb{\theta}}^{+}(x_{n})_{w_{n}}$ and $\widetilde{f}_{\pmb{\theta}}^{-}(x_{n})_{c_{n}} < \widetilde{f}_{\pmb{\theta}}^{-}(x_{n})_{w_{n}}$ :
122
+
123
+ $$
124
+ \mathrm{E}_{m}(\boldsymbol{\theta}, \mathcal{T}) = -\frac{1}{N} \sum_{n=1}^{N} \Big[ \max\big(0,\, \widetilde{f}_{\boldsymbol{\theta}}^{+}(x_{n})_{w_{n}} - \widetilde{f}_{\boldsymbol{\theta}}^{+}(x_{n})_{c_{n}} + m\big) + \max\big(0,\, \widetilde{f}_{\boldsymbol{\theta}}^{-}(x_{n})_{c_{n}} - \widetilde{f}_{\boldsymbol{\theta}}^{-}(x_{n})_{w_{n}} + m\big) \Big]. \tag{6}
129
+ $$
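+
+ As a loss to minimize, the effectiveness term is a pair of hinge penalties; a sketch with random logits standing in for the two intervened forward passes:
+
+ ```python
+ import torch
+
+ def effectiveness_loss(logits_pos, logits_neg, c_idx, w_idx, m=1.0):
+     """Equals -E_m: zero once both interventions flip the tokens by margin m."""
+     n = torch.arange(logits_pos.shape[0])
+     # beta = +1 should put c above w by margin m ...
+     pos = torch.clamp(logits_pos[n, w_idx] - logits_pos[n, c_idx] + m, min=0)
+     # ... and beta = -1 should put w above c by margin m.
+     neg = torch.clamp(logits_neg[n, c_idx] - logits_neg[n, w_idx] + m, min=0)
+     return (pos + neg).mean()
+
+ logits_pos = torch.randn(4, 100)   # batch of 4 prompts, vocab of 100
+ logits_neg = torch.randn(4, 100)
+ c_idx = torch.tensor([3, 7, 1, 9])
+ w_idx = torch.tensor([5, 2, 8, 0])
+ print(effectiveness_loss(logits_pos, logits_neg, c_idx, w_idx))
+ ```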
130
+
131
+ Faithfulness. We say an intervention is faithful if it only affects the answer tokens (Wang et al., 2023; Hanna et al., 2024). We promote faithfulness via the following objective
132
+
133
+ $$
134
+ \mathrm{F}(\boldsymbol{\theta}, \mathcal{T}) = -\frac{1}{N} \sum_{n=1}^{N} \Big[ D_{\mathrm{KL}}\big(\sigma(\widetilde{f}_{\boldsymbol{\theta}}^{+}(x_{n})) \,\big\|\, \sigma(f(x_{n}))\big) + D_{\mathrm{KL}}\big(\sigma(\widetilde{f}_{\boldsymbol{\theta}}^{-}(x_{n})) \,\big\|\, \sigma(f(x_{n}))\big) \Big], \tag{7}
135
+ $$
136
+
137
+ where $D_{\mathrm{KL}}$ is the Kullback-Leibler divergence.
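+
+ The corresponding loss term compares each intervened distribution against the clean one; note the argument order of `torch.nn.functional.kl_div`, whose first argument is the log-probabilities of the distribution on the right of Eq. (7):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def faithfulness_loss(logits_pos, logits_neg, logits_clean):
+     """Equals -F: KL(intervened || clean) for beta = +1 and beta = -1."""
+     log_clean = F.log_softmax(logits_clean, dim=-1)
+     kl_pos = F.kl_div(log_clean, F.log_softmax(logits_pos, dim=-1),
+                       log_target=True, reduction="batchmean")
+     kl_neg = F.kl_div(log_clean, F.log_softmax(logits_neg, dim=-1),
+                       log_target=True, reduction="batchmean")
+     return kl_pos + kl_neg
+
+ print(faithfulness_loss(torch.randn(4, 100), torch.randn(4, 100),
+                         torch.randn(4, 100)))
+ ```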
138
+
139
+ Minimality. Thirdly, the intervention should be minimal (Wang et al., 2023), which we promote with the following regularizing term
140
+
141
+ $$
142
+ \mathrm{M}_{p}(\boldsymbol{\theta}) = -\left\| \operatorname{vec}(\boldsymbol{\theta}) \right\|_{p} \tag{8}
143
+ $$
144
+
145
+ which penalizes a (pseudo)norm of the parameters. Here, vec maps the set of parameters $\theta$ to a vector, and the subscript $p$ indicates which (pseudo)norm of the vector is penalized. Setting $p = 0$ corresponds to $\ell_0$ -regularization, which encourages sparsity directly but is difficult to optimize. We instead take $p = 1$ , which corresponds to $\ell_1$ -regularization, a widely-studied and effective convex relaxation of $\ell_0$ -regularization, which forms the basis of the sparsity-inducing LASSO method (Tibshirani, 1996).
146
+
147
+ Gradient-based Parameter Learning. Putting it all together, we directly optimize for an intervention that is simultaneously effective, faithful, and minimal. Specifically, we choose intervention parameters $\theta$ using gradient-based optimization on the multi-term objective
148
+
149
+ $$
150
+ \Psi(\boldsymbol{\theta}, \mathcal{T}) = \mathrm{E}_{m}(\boldsymbol{\theta}, \mathcal{T}) + \lambda_{\mathrm{F}}\, \mathrm{F}(\boldsymbol{\theta}, \mathcal{T}) + \lambda_{\mathrm{M}}\, \mathrm{M}_{1}(\boldsymbol{\theta}). \tag{9}
155
+ $$
156
+
157
+ We can tune the hyperparameters $\lambda_{\mathrm{F}} \geq 0$ and $\lambda_{\mathrm{M}} \geq 0$ to control the degree to which the three terms in the objective trade off. The margin $m \geq 0$ constitutes a third hyperparameter that controls the strength of the effectiveness term.
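+
+ A self-contained sketch of one training run on this objective: a frozen random read-out plays the role of everything downstream of the intervened site, so only the ACTIVSCALAR parameters are learned (all sizes and hyperparameter values are illustrative):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ torch.manual_seed(0)
+ L, I, V = 4, 6, 50
+ h = torch.randn(L, I)                    # frozen toy activations at one site
+ W = torch.randn(V, L * I) * 0.1          # frozen toy read-out to logits
+ alpha = torch.zeros(L, I, requires_grad=True)
+ opt = torch.optim.Adam([alpha], lr=5e-2)
+ lam_f, lam_m, m = 0.1, 0.01, 1.0         # grid-searched in practice
+ c, w = 3, 7                              # correct / wrong answer token ids
+
+ def logits(beta):                        # Eq. (5), then project to logits
+     return W @ (h * (1 + beta * alpha)).flatten()
+
+ clean = logits(0.0).detach()             # beta = 0 removes the intervention
+ for _ in range(300):
+     lp, ln = logits(1.0), logits(-1.0)
+     eff = (torch.clamp(lp[w] - lp[c] + m, min=0)        # -E_m: hinge both ways
+            + torch.clamp(ln[c] - ln[w] + m, min=0))
+     kl = sum(F.kl_div(F.log_softmax(clean, 0), F.log_softmax(x, 0),
+                       log_target=True, reduction="sum") for x in (lp, ln))
+     loss = eff + lam_f * kl + lam_m * alpha.abs().sum()  # minimizing -Psi
+     opt.zero_grad(); loss.backward(); opt.step()
+
+ print(float(lp[c] - lp[w]), float(ln[c] - ln[w]))  # opposite-sign logit gaps
+ ```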
158
+
159
+ # 3.4 Evaluating an Intervention
160
+
161
+ We can evaluate an intervention based on our operationalizations of effectiveness, faithfulness and minimality. To evaluate effectiveness, we set the margin $m = 0$ to obtain a metric ranging from $-\infty$ to 0, where $\mathrm{E}_0(\theta, \mathcal{T}) = 0$ indicates that an intervention always successfully flips the answer tokens. The faithfulness objective, which also ranges from $-\infty$ to 0, can be treated as an evaluation metric without any modification. Finally, to evaluate minimality, we count the number of non-negligible intervention parameters—i.e., those taking values sufficiently far from 0. In practice, we consider absolute values less than 0.01 to be negligible.
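+
+ For the minimality count, a one-liner suffices (threshold 0.01, as stated above):
+
+ ```python
+ import torch
+
+ def count_nonnegligible(params: torch.Tensor, eps: float = 0.01) -> int:
+     """Number of intervention parameters with |value| >= eps."""
+     return int((params.detach().abs() >= eps).sum())
+
+ alpha = torch.tensor([0.0, 0.003, -0.8, 0.02, 0.0])  # toy learned scalars
+ print(count_nonnegligible(alpha))                     # -> 2
+ ```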
162
+
163
+ # 4 Experiments
164
+
165
+ We fit the two intervention methods, ACTIVSCALAR and STEERVEC, on the three-term objective in Eq. (9) and conduct evaluations for effectiveness, faithfulness and minimality.
166
+
167
+ # 4.1 Tasks
168
+
169
+ We consider two synthetic tasks presented in Tab. 1. The Country-Capital Conflicts (CCC) task, designed by Du et al. (2024), prompts models to resolve an entity-based knowledge conflict (Longpre et al., 2021), which pits information provided in-context against prior parametric knowledge that models can be assumed to have acquired during training. The Indirect Object Identification (IOI) task, which we adapt slightly from Wang et al. (2023), prompts models to choose which of two tokens is the indirect object in sentences with potentially complex syntactic structure. For both tasks, we select prompts to be of the same length $I_{n}$ and ensure the candidate answers are single tokens.
170
+
171
+ # 4.2 Quantitative Results
172
+
173
+ We do not expect there to be a single solution which is optimal for all three objectives. An optimally effective intervention might not be very faithful, while a highly minimal intervention might not be very effective. We seek to understand the trade-off between the different intervention desiderata by finding Pareto-optimal solutions. To this end, we run a grid search over the hyperparameters $\lambda_{\mathrm{F}}, \lambda_{\mathrm{M}}$ and $m$. We evaluate the learned interventions on the test set and visualize the Pareto frontier in Fig. 2.
174
+
175
+ Effectiveness versus Faithfulness. We find that ACTIVSCALAR and STEERVEC perform comparably when it comes to trading off effectiveness and
176
+
177
+ ![](images/43c96980262a41e37154a48a0a8852b7f8241e72fee9a10f7f74dc5ab2c1586b.jpg)
178
+ Figure 3: The magnitude and sign of learned activation scalars highlight task-relevant locations within the model, here for the residPost site on GPT2-XL. For instance, the correct answer token Tom is promoted around layers 15 to 20 while activation vectors at the token Anne are scaled down. The intervention is successful according to our effectiveness objective illustrated by the red and blue lines crossing between $\beta = -1$ and $\beta = 1$ , which reverses the sign of the logit difference.
179
+
180
+ ![](images/d73351b2de716e8555ab18faf4496abacd3c1fd7918a4d7d6f46df2bbb20ca30.jpg)
181
+
182
+ faithfulness. Of all points on the Pareto frontier, around half come from each approach. STEERVEC is generally more effective on the train set, but also exhibits large decreases in test set performance; this is likely due to it having many more parameters and being more prone to overfitting. We also see that high values of effectiveness on the train set are associated with low values of faithfulness on the test set, also suggestive of overfitting.
183
+
184
+ Effectiveness versus Minimality. We find that ACTIVSCALAR is generally more minimal on multiple levels. For CCC and IOI, ACTIVSCALAR accounts for about $85\%$ of the points on the effectiveness-minimality Pareto frontier in Fig. 2. Thus, ACTIVSCALAR learns interventions that are in fact more parsimonious. This is in addition to having far fewer learnable parameters. As a simple illustration, consider learning an intervention on the $L = 48$ layers of GPT2-XL, for a prompt consisting of $I_{n} = 19$ tokens. If we only intervene on a single site per layer—e.g., residPost, where the activation vector has dimensionality $D = 1600$ —then STEERVEC has $19 \times 48 \times 1600 = 1,459,200$ learnable parameters while ACTIVSCALAR has only $48 \times 19 = 912$ learnable scalars. ACTIVSCALAR is also more minimal qualitatively, as it is limited to affecting only the signed magnitude of the activation vectors without otherwise affecting their direction.
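+
+ The parameter-count comparison from this example, as a quick check:
+
+ ```python
+ n_layers, n_tokens, d_hidden = 48, 19, 1600    # GPT2-XL, residPost example
+ print(n_layers * n_tokens * d_hidden)          # STEERVEC:    1,459,200
+ print(n_layers * n_tokens)                     # ACTIVSCALAR:       912
+ ```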
185
+
186
+ ![](images/c02b12ca7f8a7b6c0e2d429ccee2f21d3b843cdf9a0dfc1f1f06af889f094feb.jpg)
187
+ Figure 4: We interpret the learned steering vectors by comparing their difference in norm and direction, before and after training, here for the residPost site of GPT2-XL. We observe that the change in norm highlights similar token positions as ACTIVSCALAR in Fig. 3. When fitting the vectors multiple times, the vectors converge to different directions every time, while changes in norm are more stable. We quantify this using the Kendall correlation $\rho$ between the orderings of locations by change in norm, as well as the cosine distance between different runs.
188
+
189
+ ![](images/f6a7529a710392178891909a61f8ca763c47ec3c157217ea8c440b4f29ea0fd8.jpg)
190
+ Figure 5: DYNSCALAR generalizes to prompts of varying length and yields interpretable activation scalars. Here for residPost of GPT2-Small, the correct answer token is identified despite being in a different position.
191
+
192
+ ![](images/810d06d7bc682d2e83d082fbd319d1cb9006a32d6db50ca425c8b817cf5aa014.jpg)
193
+
194
+ # 4.3 Interpretation of Scalars and Vectors
195
+
196
+ Activation Scalars. As shown in Fig. 3, activation scalars highlight task-relevant locations while performing effective interventions at the same time. The norm of activation vectors at the correct in-context tokens is increased, while it is decreased at the incorrect in-context tokens.
197
+
198
+ Steering Vectors. We seek to better understand the properties of the learned steering vectors by analyzing their change in norm and direction in terms of cosine distance before and after training. As visualized in Fig. 4, the change in norm reveals a structured, interpretable pattern, while the directional change appears more arbitrary. In fact, when
199
+
200
+ ![](images/18d265a4354b1be2879e083734e6a9f2c437620b986c63d22375668d2e16990d.jpg)
201
+ Figure 6: Generalization performance in terms of effectiveness of DYNSCALAR for varying-length prompts from different tasks fitted on mlpOut and attnOut of GPT2-Small (closer to 0 is better). As expected, training and testing on the same task performs best overall, but training on all tasks generalizes comparatively well.
202
+
203
+ <table><tr><td>Effectiveness on CCC</td><td>A: train</td><td>B: test</td><td>C: templ</td></tr><tr><td>ACTIVSCALAR</td><td>-0.99</td><td>-1.57</td><td>-2.66</td></tr><tr><td>STEERVEC</td><td>-0.06</td><td>-0.12</td><td>-2.16</td></tr><tr><td>DYNSCALAR</td><td>-0.11</td><td>-0.30</td><td>-2.10</td></tr></table>
204
+
205
+ Table 2: We test generalization performance by learning interventions on CCC and evaluating effectiveness on prompts of the A: CCC train set; B: CCC test set; C: a different template for the CCC task. All intervention parameters are learned for attnOut and mlpOut of GPT2-Small, using the same set of hyperparameters.
206
+
207
+ fitting the steering vectors five times on the same prompt, we can show quantitatively that vectors converge to different solutions in terms of direction but less so in terms of norm.
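+
+ The run-to-run comparison just described can be sketched as follows; the random stand-in vectors and the use of scipy's `kendalltau` are illustrative assumptions about how the analysis could be implemented.
+
+ ```python
+ import numpy as np
+ from scipy.stats import kendalltau
+
+ # Stand-ins for steering vectors from five training runs: shape
+ # (runs, locations, D), before and after training at each (layer, token).
+ rng = np.random.default_rng(0)
+ runs, locations, D = 5, 48 * 19, 1600
+ before = rng.normal(size=(runs, locations, D))
+ after = before + rng.normal(size=(runs, locations, D))
+
+ # Change in norm per location, and its ordering consistency across runs.
+ dnorm = np.linalg.norm(after, axis=-1) - np.linalg.norm(before, axis=-1)
+ tau, _ = kendalltau(dnorm[0], dnorm[1])
+
+ # Directional agreement of the same location's vector across two runs.
+ a, b = after[0, 0], after[1, 0]
+ cos_dist = 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
+ print(tau, cos_dist)
+ ```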
208
+
209
+ # 5 Extension to Variable-Length Prompts
210
+
211
+ ACTIVSCALAR and STEERVEC, as well as most existing interpretability methods such as activation patching, learn intervention parameters that are tied to specific token positions $i$ in the prompt $x_{n}$ . To reuse learned parameters for intervention with a new prompt, the test prompt must match the length $I_{n}$ of the training prompt, and ideally match it syntactically. Existing work circumvents the requirement of matched prompts by intervening only on the last token position (Yu et al., 2023; Li et al., 2023; Jin et al., 2024; Stoehr et al., 2024a), or on token positions that can be easily aligned across prompts, like the main verb, or the last token of the subject (Meng et al., 2022; Geva et al., 2023; Ortu et al., 2024; Merullo et al., 2024).
212
+
213
+ Dynamic Activation Scalars. We propose an extension of ACTIVSCALAR which defines each scalar to be a function of the corresponding activation vector. Each function is tied to a particular layer $l$ and site $s$ , but shared across token positions $i$ , and parameterized by a (column) vector $\mathbf{g}_l^{(s)} \in \mathbb{R}^D$ .
214
+
215
+ ![](images/dbd373c16af936dc0b0028b5795d4fd3820d18affda41a9b0ff9909ea6e248ab.jpg)
216
+
217
+ ![](images/30283dbe42b7c648dc33de6e2e7e0a65c855bc79e9fcd8b646b8ffc1dbfa9c56.jpg)
218
+
219
+ ![](images/24b9381a55e5afdb66dcd9d618c64678a9307a029f84b6a91580ea6d27670b2a.jpg)
220
+
221
+ ![](images/02f4a25fb395ddbca5764cbab93428c31ca34ddb80552563df50bb9c6c5f537e.jpg)
222
+ Figure 8: We compare ACTIVSCALAR on GPT2-Small with a loosely replicated version of the attribution scores in Fig. 4(a) in Ortu et al. (2024). In our version, we study interventions on the o activation vectors at the last token position. Given the prompt iPhone was developed by Google. iPhone was developed by, {Google, Apple}, both approaches single out similar attention heads, e.g., layer $l = 11$ , head $t = 8$ .
223
+
224
+ ![](images/0143683a07013364baeff3781adf67c089f841a4295f30b4c53d9391e75915b5.jpg)
225
+ Figure 7: We follow Yu et al. (2023) in their approach to identify a memory head in layer $l = 16$ , head $t = 8$ in Pythia-1.4B based on Direct Logit Attribution (DLA) on the last token position. Analogously, we learn activation scalars $\alpha_{l,i}^{c}$ for the last token position using ACTIVSCALAR and observe considerable overlap in the attribution scores. Overall, intervening on the last token position appears to be highly effective.
226
+
227
+ ![](images/7a9b3cfa940085e914e05dfae9866f789fa00468dd1d7c3ee1202904bcfbd0bc.jpg)
228
+
229
+ The set of learnable intervention parameters is thus $\theta = \{\mathbf{g}_l^{(s)}\}_{(l,s)\in \mathcal{L}\times \mathcal{S}}$ . The scalar $\alpha_{l,i}^{(s)}$ for a given position $i$ is then the following function of the corresponding activation vector:
230
+
231
+ $$
232
+ \alpha_{l,i}^{(s)} \triangleq \mathbf{g}_l^{(s)\top} \left(\frac{\mathbf{h}_{l,i}^{(s)}}{\left\|\mathbf{h}_{l,i}^{(s)}\right\|_2}\right). \tag{10}
233
+ $$
234
+
235
+ When combined with Eq. (5), this defines a dynamic intervention (DYNSCALAR). Learning this intervention can be understood as learning probes that identify task-relevant activation vectors and then strengthen or weaken their magnitude via multiplicative scalars. In Fig. 5, we show how DYNSCALAR generalizes to prompts of varying length while still offering interpretable insights highlighting task-relevant tokens. In Fig. 6, we evaluate the generalization performance of DYNSCALAR in terms of effectiveness. As expected, performance is highest when test sets are of the same task as the train set; however, DYNSCALAR also obtains good results when trained on a mix of tasks.
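+
+ A minimal sketch of DYNSCALAR in PyTorch: Eq. (10) maps each activation vector to a scalar via a shared probe vector. The multiplicative form of the intervention, scaling each vector by $1 + \beta\alpha$, is our assumption about Eq. (5), which is defined earlier in the paper; shapes and names are illustrative.
+
+ ```python
+ import torch
+
+ def dyn_scalar(h, g):
+     # Eq. (10): alpha_{l,i} = g^T (h_{l,i} / ||h_{l,i}||_2), with g shared
+     # across token positions. h: (I_n, D) activations, g: (D,) probe.
+     unit = h / h.norm(dim=-1, keepdim=True)
+     return unit @ g  # (I_n,) one scalar per token position
+
+ def intervene(h, g, beta):
+     # Assumed multiplicative intervention in the spirit of Eq. (5):
+     # rescale each activation vector by (1 + beta * alpha).
+     alpha = dyn_scalar(h, g)
+     return h * (1.0 + beta * alpha).unsqueeze(-1)
+
+ I_n, D = 19, 768
+ h = torch.randn(I_n, D)
+ g = torch.randn(D, requires_grad=True)  # learned by gradient descent
+ h_steered = intervene(h, g, beta=1.0)
+ ```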
236
+
237
+ In Tab. 2, we quantify how well ACTIVSCALAR, STEERVEC and DYNSCALAR generalize to varying degrees of domain shift when trained on the CCC task. All prompts are of the same length in order for ACTIVSCALAR and STEERVEC to be applicable, but the prompt template and thus the
238
+
239
+ ![](images/8558ff4306745c30982bf9cbeb69abb4966a79b921c289fc0aec2b21767c32a7.jpg)
240
+
241
+ ![](images/a0f6088237ed99bd8ace3b30520ddc8df215602d1625e6592bb3236b118c655a.jpg)
242
+
243
+ ![](images/71f0949efb47995d67b6f0ac3068d0cf9f13b372e1a2df4bbff8ce17397430a1.jpg)
244
+
245
+ ![](images/eab94f46487dbcb72b73c3c3f964ae71d80845fb4a4c2ebfa54473eefec9a910.jpg)
246
+
247
+ position $i$ of task-relevant tokens change. For the selected hyperparameter setting, we find that DYNSCALAR and STEERVEC show better generalization performance than ACTIVSCALAR. The strong generalization of STEERVEC is surprising given that the learned steering vectors are tied to specific token positions. We hypothesize that much of the steering performance should be attributed to the information processed at the last token position (Yu et al., 2023; Wu et al., 2024; Ortu et al., 2024), which is corroborated by the heatmap in Fig. 3. We further explore this hypothesis in the next section.
248
+
249
+ Scaling at the Last Token Position. ACTIVSCALAR and STEERVEC show high steering effectiveness even when the prompt template is changed. This suggests a gate-keeping role of computations at the last token position which suppresses or promotes task-relevant information. Yu et al. (2023) rely on direct logit attribution (DLA; Elhage et al., 2021) on the output activation vector (o) at the last token position. They identify what they call a memory head in layer $l = 16$ , head $t = 8$ of Pythia-1.4B. They then intervene on this head by scaling the value activation vector (v). We replicate a version of their method and contrast it with ACTIVSCALAR in Fig. 7. We find high attribution
250
+
251
+ ![](images/b9a9e252cf8f1af197da522782c9f73d227af8e5f9b38faf31197ba27f858b9b.jpg)
252
+ Panel labels (all four panels): Warsaw, London
253
+
254
+ ![](images/ce7370be95e6586e8e3cf7b99601b1716f4d9d0db82a47c607651f13bb751d36.jpg)
255
256
+
257
+ ![](images/05cf39b824b4b8a878752dff825b8541bc65e7e493476e68ffc6f5af6ea64f74.jpg)
258
259
+
260
+ ![](images/3b7f5ae5770467e81fabc306ad7e949520daeee498866cb1ecb122e5a8189cf3.jpg)
261
262
+
263
+ ![](images/d80dbdc2f0b91ee62c57c3caf5e7151134e03b070860d07e7d3e2c9b390da2d7.jpg)
264
+
265
+ ![](images/87122de8d1095828cd67cfb6e64c15b3689bc0caf0eab3a80d6e1adbb19ad1d8.jpg)
266
+ Figure 9: [Top] We fit a variant of ACTIVPATCH, ATTRPATCH, DLA, and ACTIVSCALAR (left to right) on the mlpOut and attnOut sites of GPT2-Small. We find that the highlighted locations overlap, across methods, at important tokens such as London. [Middle] Intervening on the training prompt is effective for all methods except DLA. [Bottom] All methods (except DLA) generalize to a test set prompt of a different country-capital conflict.
267
+
268
+ ![](images/b2b453bca970f5375a4be0a5f09662701dc074eb688dd4becfc047d6c9889d49.jpg)
269
+
270
+ ![](images/e41c2582046dbcc50d74ba1388b94e52bbc445cb9e3cc4d6d8332e584e8423f1.jpg)
271
+
272
+ ![](images/3bbb363e5e931834badd3ce6d229ad8c1622f78ff11d6445b511c4487df680ff.jpg)
273
+
274
+ ![](images/7d6885dcf365bdd981aa176c2390e56ed780ca09b22d26437908e03ef9551723.jpg)
275
+
276
+ ![](images/35afd8b985fdc444d79c2ecbe99e1e53a30b212c9d6c7161149945718e5cf999.jpg)
277
+
278
+ ![](images/2f8c123936e3353c8d278bf8e1dc7833582ce10222ee7679b1fede8760a04582.jpg)
279
+
280
+ scores for similar heads, but ACTIVSCALAR also singles out heads in lower layers, which is less typical for DLA. We conduct another case study on the role of the last token position comparing ACTIVSCALAR with a version of Ortu et al. (2024), presented in Fig. 8.
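+
+ For concreteness, a sketch of the DLA-style attribution at the last token position: each head's output vector is projected onto the unembedding difference of the two answer tokens. The dimensions, variable names, and the omission of the final layer norm are simplifying assumptions.
+
+ ```python
+ import torch
+
+ def dla_score(o_head, W_U, correct_id, wrong_id):
+     # Contribution of one head's output vector (at the last token
+     # position) to the logit difference between two answer tokens;
+     # the final layer norm is ignored for simplicity.
+     direction = W_U[:, correct_id] - W_U[:, wrong_id]
+     return (o_head @ direction).item()
+
+ D, V = 2048, 50304  # Pythia-1.4B-like dimensions (assumed)
+ o_head = torch.randn(D)  # per-head output activation vector o
+ W_U = torch.randn(D, V)  # unembedding matrix
+ print(dla_score(o_head, W_U, correct_id=123, wrong_id=456))
+ ```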
281
+
282
+ # 6 Related Work
283
+
284
+ # 6.1 Repurposing Interpretability Methods
285
+
286
+ There exist various methods producing activation-level attribution scores based off a logit difference metric between two answer tokens. Activation patching (ACTIVPATCH; Lakretz et al., 2019; Vig et al., 2020; Meng et al., 2022) swaps activation vectors in one forward pass with activation vectors from another (corrupted) forward pass to study (causal) effects on the logit difference. Attribution patching (ATTRPATCH; Nanda, 2023a; Syed et al., 2023; Kramár et al., 2024) represents a gradient-based, efficient approximation of ACTIVPATCH. Direct logit attribution (DLA; Elhage et al., 2021) identifies activation vectors whose directions are correlated with the vectors of the answer tokens in the projection matrix.
287
+
288
+ All three methods produce attribution scores for
289
+
290
+ individual model components, but whether these scores are informative for model-wide steering is unclear. Our work offers a convenient framework to study this question. We can repurpose attribution scores obtained with ACTIVPATCH, ATTRPATCH and DLA and treat them as activation scalars $\alpha_{l,i}^{(s)}$ by plugging them directly into Eq. (5). Since these scores were not specifically trained for our task, we can then tune the $\beta$ parameter to globally strengthen, weaken or flip the interventional effect as presented in Fig. 9.
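+
+ A sketch of this repurposing step, again assuming the multiplicative form of Eq. (5): precomputed attribution scores stand in for the activation scalars, and a global $\beta$ is swept to control the interventional effect.
+
+ ```python
+ import numpy as np
+
+ # Precomputed attribution scores, one per (layer, token), e.g., from
+ # activation patching, attribution patching, or DLA (random stand-ins).
+ L, I_n, D = 12, 19, 768
+ scores = np.random.default_rng(0).normal(size=(L, I_n))
+ h_all = np.random.default_rng(1).normal(size=(L, I_n, D))
+
+ def steer_with_scores(h_all, scores, beta):
+     # Reuse attribution scores as activation scalars; beta globally
+     # strengthens, weakens, or (with beta < 0) flips the effect.
+     return h_all * (1.0 + beta * scores[..., None])
+
+ for beta in (-1.0, -0.5, 0.0, 0.5, 1.0):
+     h_steered = steer_with_scores(h_all, scores, beta)
+ ```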
291
+
292
+ For the given prompt, we find that ACTIVPATCH and ATTRPATCH facilitate successful steering according to our effectiveness metric Eq. (6). This is surprising as these methods are not trained on this objective and their interventional strength may be on a different scale, i.e., not flipping the answer tokens between $\beta = -1$ and $\beta = 1$ . In Fig. 9 for instance, we find that attribution scores from DLA are on a much larger scale. This points to a conceptual difference between ACTIVSCALAR and existing methods: ACTIVSCALAR is trained on an explicit objective serving steerability purposes, thus learning an intervention with a well-defined scale and steering direction. Besides, gradient-
293
+
294
+ based learning is faster than activation patching in most cases and incorporates inter-dependencies between multiple activation vectors.
295
+
296
+ # 6.2 Other Related Work
297
+
298
+ Transformer Circuits. Joint interpretability and steerability are also desired properties in circuit discovery, which seeks to identify a minimal subgraph responsible for the behavior of the full model when solving a specific task (Wang et al., 2023; Bhaskar et al., 2024). Isolating the subgraph typically requires thresholding the attribution scores associated with model components and then zeroing out (De Cao et al., 2022; Wang et al., 2023; Conmy et al., 2023; Syed et al., 2023) or corrupting them (Geiger et al., 2021; Bhaskar et al., 2024). However, this means that the intervention can be considered discrete, as it cannot smoothly facilitate different intervention strengths and directions. The scaling aspect of the $\beta$ hyperparameters in ACTIVSCALAR and STEERVEC, in turn, is continuous.
299
+
300
+ Gradient-Based Steering and Interpretability. This work relies on gradient-based optimization to localize model components that are relevant with respect to a specifically designed objective. This is similar to Subramani et al. (2022) and Hernandez et al. (2024), who learn steering vectors to edit activation vectors. Other related work analyzes the direction and magnitude of weight or activation gradients (Du et al., 2023; Stoehr et al., 2024b; Katz et al., 2024) to identify task-relevant components.
301
+
302
+ Pruning, Masking and Adapters. Learning activation scalars bears resemblance to work on pruning neural networks (Li et al., 2021), fine-tuning (low-rank) adapters (Houlsby et al., 2019; Hu et al., 2021) and (hard) masking (Louizos et al., 2018; Bondarenko et al., 2023). In this work, however, we do not pursue the typical goals of the pruning literature, which often focuses on reducing the computational cost of inference. Instead, ACTIVSCALAR can be seen as learning a soft mask that strengthens or weakens components for the purpose of obtaining an interpretable map of locations.
303
+
304
+ # 7 Conclusion
305
+
306
+ We show that scaling the signed magnitude of a few relevant activation vectors is often sufficient to flip a model's prediction between a correct and a wrong answer token. Besides being effective at steering, activation scaling requires many fewer parameters
307
+
308
+ than additive steering vectors, which intervene on both the magnitude and direction of activation vectors. Our gradient-based multi-objective learning scheme can be understood as reversing the interpretability pipeline, putting steering performance based on clearly defined objectives first and interpretability as a natural by-product second.
309
+
310
+ # Acknowledgments
311
+
312
+ We thank Clément Dumas and Alessandro Stolfo for helpful early-stage discussions. Niklas Stoehr is supported by the Swiss Data Science Center (SDSC) PhD fellowship. Vésteinn Snæbjarnarson is funded by the Pioneer Centre for AI, DNRF grant number P1.
313
+
314
+ # Limitations
315
+
316
+ STEERVEC, ACTIVSCALAR and DYNSCALAR are controllable via different hyperparameters: the margin $m$ in the effectiveness objective, $\lambda_{\mathrm{F}}$ weighing the faithfulness term and $\lambda_{\mathrm{M}}$ weighing the strength of the $\ell_1$ -regularization. There are additional training-related hyperparameters, such as the learning rate, the number of epochs, the batch size, the number of data instances, and the standard deviation of the Gaussian noise initialization of the intervention parameters, all of which have a strong influence on the results. For instance, increasing the noise or the margin, training for more epochs, or weakening the $\ell_1$ -regularization results in activation scalars that deviate more from zero.
317
+
318
+ Access to many hyperparameters can make a method more difficult to deploy. On the other hand, hyperparameters with well-founded semantics can offer desirable, fine-grained controls. For instance, a larger $\lambda_{\mathrm{M}}$ hyperparameter leads to sparser activation scalars that are easier to interpret, a level of control not offered by many existing methods. Yet, methods like ACTIVPATCH or ATTRPATCH also require finicky hyperparameter choices such as how to corrupt the prompt, e.g., deciding on the standard deviation of the Gaussian noise (Meng et al., 2022) to obtain a second corrupted prompt for patching.
319
+
320
+ A limitation of this work is the size of models studied and the small size and synthetic character of the tasks. The two largest models considered in this work are GPT2-XL (1.5 billion parameters) and Pythia-1.4B (1.4 billion parameters). Beyond the required compute, we do not anticipate problems applying activation scaling to larger models. Testing ACTIVSCALAR on more real-world
321
+
322
+ datasets with longer prompts is another future avenue. To boost the performance of DYNSCALAR, one could expand the computational expressivity of the activation vector-to-scalar function $\mathbf{g}_l^{(s)}$ . Finally, more work is needed in extending our evaluation based on effectiveness to existing methods such as activation patching. For instance, a promising direction is to fix the attribution scores obtained from activation patching and then learn a suitable $\beta$ parameter post hoc that facilitates the answer token flipping behavior.
323
+
324
+ # Impact Statement
325
+
326
+ This work aims to better understand the internal workings of language models. This understanding may serve the post-hoc identification of harmful properties such as hallucination, illicit memorization, and undesired biases. It should ideally help in taking preemptive action to guide the design and training of future models. The required compute to apply activation scaling is predominantly dictated by the size of the studied language models.
327
+
328
+ # References
329
+
330
+ Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. arXiv, 1607.06450.
331
+ Adithya Bhaskar, Alexander Wettig, Dan Friedman, and Danqi Chen. 2024. Finding transformer circuits with edge pruning. arXiv, 2406.16778.
332
+ Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling. arXiv, 2304.01373.
333
+ Yelysei Bondarenko, Markus Nagel, and Tijmen Blankevoort. 2023. Quantizable transformers: Removing outliers by helping attention heads do nothing. In Neural Information Processing Systems.
334
+ Arthur Conmy, Augustine N. Mavor-Parker, Aengus Lynch, Stefan Heimersheim, and Adrià Garriga-Alonso. 2023. Towards automated circuit discovery for mechanistic interpretability. In Neural Information Processing Systems.
335
+ Nicola De Cao, Leon Schmid, Dieuwke Hupkes, and Ivan Titov. 2022. Sparse interventions in language models with differentiable masking. In Workshop on Analyzing and Interpreting Neural Networks for NLP at EMNLP.
336
+ Kevin Du, Vésteinn Snæbjarnarson, Niklas Stoehr, Jennifer C. White, Aaron Schein, and Ryan Cotterell.
337
+
338
+ 2024. Context versus prior knowledge in language models. In Annual Meeting of the Association for Computational Linguistics.
339
+ Kevin Du, Lucas Torroba Hennigen, Niklas Stoehr, Alex Warstadt, and Ryan Cotterell. 2023. Generalizing backpropagation for gradient-based interpretability. In Annual Meeting of the Association for Computational Linguistics.
340
+ Nelson Elhage, Neel Nanda, Catherine Olsson, and Tom Henighan. 2021. A mathematical framework for transformer circuits.
341
+ Javier Ferrando and Elena Voita. 2024. Information flow routes: Automatically interpreting language models at scale. arXiv, 2403.00824.
342
+ Atticus Geiger, Hanson Lu, Thomas Icard, and Christopher Potts. 2021. Causal abstractions of neural networks. In Neural Information Processing Systems.
343
+ Mor Geva, Jasmijn Bastings, Katja Filippova, and Amir Globerson. 2023. Dissecting recall of factual associations in auto-regressive language models. In Conference on Empirical Methods in Natural Language Processing.
344
+ Mor Geva, Avi Caciularu, Kevin Wang, and Yoav Goldberg. 2022. Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space. In Conference on Empirical Methods in Natural Language Processing.
345
+ Michael Hanna, Sandro Pezzelle, and Yonatan Belinkov. 2024. Have faith in faithfulness: Going beyond circuit overlap when finding model mechanisms. In Conference on Language Modeling.
346
+ Peter Hase, Mohit Bansal, Been Kim, and Asma Ghandeharioun. 2023. Does localization inform editing? Surprising differences in causality-based localization vs. knowledge editing in language models. In Neural Information Processing Systems.
347
+ Evan Hernandez, Belinda Z. Li, and Jacob Andreas. 2024. Inspecting and editing knowledge representations in language models. In Conference on Language Modeling.
348
+ Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning.
349
+ Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. arXiv, 2106.09685.
350
+ Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. 2023. Editing models with task arithmetic. In International Conference on Learning Representations.
351
+
352
+ Zhuoran Jin, Pengfei Cao, Hongbang Yuan, Yubo Chen, Jiexin Xu, Huaijun Li, Xiaojian Jiang, Kang Liu, and Jun Zhao. 2024. Cutting off the head ends the conflict: A mechanism for interpreting and mitigating knowledge conflicts in language models. In Findings of the ACL.
353
+ Shahar Katz, Yonatan Belinkov, Mor Geva, and Lior Wolf. 2024. Backward lens: Projecting language model gradients into the vocabulary space. arXiv, 2402.12865.
354
+ Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations.
355
+ János Kramár, Tom Lieberum, Rohin Shah, and Neel Nanda. 2024. AtP*: An efficient and scalable method for localizing LLM behaviour to components. arXiv, 2403.00745.
356
+ Yair Lakretz, German Kruszewski, Theo Desbordes, Dieuwke Hupkes, Stanislas Dehaene, and Marco Baroni. 2019. The emergence of number and syntax units in LSTM language models. In North American Chapter of the ACL.
357
+ Jiaoda Li, Ryan Cotterell, and Mrinmaya Sachan. 2021. Differentiable subset pruning of transformer heads. Transactions of the Association for Computational Linguistics, 9:1442-1459.
358
+ Jiaoda Li, Jennifer White, Mrinmaya Sachan, and Ryan Cotterell. 2024. A transformer with stack attention. In Findings of the ACL.
359
+ Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. 2023. Inference-time intervention: Eliciting truthful answers from a language model. In Neural Information Processing Systems.
360
+ Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, and Sameer Singh. 2021. Entity-based knowledge conflicts in question answering. In Conference on Empirical Methods in Natural Language Processing.
361
+ Christos Louizos, Max Welling, and Diederik P. Kingma. 2018. Learning sparse neural networks through L0 regularization. In International Conference on Learning Representations.
362
+ Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022. Locating and editing factual associations in GPT. In Neural Information Processing Systems.
363
+ Jack Merullo, Carsten Eickhoff, and Ellie Pavlick. 2024. Circuit component reuse across tasks in transformer language models. In International Conference on Learning Representations.
364
+ Neel Nanda. 2023a. Attribution patching: Activation patching at industrial scale.
365
+
366
+ Neel Nanda. 2023b. TransformerLens—A library for mechanistic interpretability of generative language models.
367
+ Francesco Ortu, Zhijing Jin, Diego Doimo, Mrinmaya Sachan, Alberto Cazzaniga, and Bernhard Scholkopf. 2024. Competition of mechanisms: Tracing how language models handle facts and counterfactuals. In Annual Meeting of the Association for Computational Linguistics.
368
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI.
369
+ Niklas Stoehr, Pengxiang Cheng, Jing Wang, Daniel Preotiuc-Pietro, and Rajarshi Bhowmik. 2024a. Unsupervised contrast-consistent ranking with language models. In Conference of the European Chapter of the ACL.
370
+ Niklas Stoehr, Mitchell Gordon, Chiyuan Zhang, and Owen Lewis. 2024b. Localizing paragraph memorization in language models. arXiv, 2403.19851.
371
+ Nishant Subramani, Nivedita Suresh, and Matthew Peters. 2022. Extracting latent steering vectors from pretrained language models. In Findings of the ACL.
372
+ Aaquib Syed, Can Rager, and Arthur Conmy. 2023. Attribution patching outperforms automated circuit discovery. arXiv, 2310.10348.
373
+ Robert Tibshirani. 1996. Regression shrinkage and selection via the LASSO. Journal of the Royal Statistical Society, 58:267-288.
374
+ Alexander Matt Turner, Lisa Thiergart, Gavin Leech, David Udell, Juan J. Vazquez, Ulisse Mini, and Monte MacDiarmid. 2023. Activation addition: Steering language models without optimization. arXiv, 2308.10248.
375
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Neural Information Processing Systems.
376
+ Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Simas Sakenis, Jason Huang, Yaron Singer, and Stuart Shieber. 2020. Causal mediation analysis for interpreting neural NLP: The case of gender bias. In Neural Information Processing Systems.
377
+ Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Annual Meeting of the Association for Computational Linguistics.
378
+ Kevin Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt. 2023. Interpretability in the wild: A circuit for indirect object identification in GPT-2 small. In International Conference on Learning Representations.
379
+
380
+ Zihao Wang and Victor Veitch. 2024. Does editing provide evidence for localization? In ICML 2024 Workshop on Mechanistic Interpretability.
381
+
382
+ Wilson Wu, John X. Morris, and Lionel Levine. 2024. Do language models plan ahead for future tokens? In Conference on Language Modeling.
383
+
384
+ Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tie-Yan Liu. 2020. On layer normalization in the transformer architecture. In International Conference on Machine Learning.
385
+
386
+ Qinan Yu, Jack Merullo, and Ellie Pavlick. 2023. Characterizing mechanisms for factual recall in language models. In Conference on Empirical Methods in Natural Language Processing.
387
+
388
+ # A Appendix
389
+
390
+ # A.1 Technical Details
391
+
392
+ We implement all steering and interpretability methods using TransformerLens (Nanda, 2023b) and choose hyperparameters based on a combination of the grid search displayed in Fig. 2 as well as steering and interpretability desiderata specific to each setting. We train gradient-based methods using the Adam optimizer (Kingma and Ba, 2015) for 25 epochs. When training on a single data point, we typically choose a learning rate of 0.0001 for STEERVEC and a larger rate of 0.001 for ACTIVSCALAR. A similarly influential hyperparameter is the initialization of the trainable intervention parameters $\theta$ , which we initialize with Gaussian noise $\mathcal{N}(0, 10^{-5})$ .
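+
+ The training configuration described above can be sketched as follows; the loss here is a stand-in, since the full multi-objective loss is defined in the main text.
+
+ ```python
+ import torch
+
+ L, I_n = 48, 19
+ # Trainable activation scalars, initialized with small Gaussian noise.
+ theta = torch.normal(0.0, 1e-5, size=(L, I_n)).requires_grad_()
+
+ optimizer = torch.optim.Adam([theta], lr=0.001)  # 0.0001 for SteerVec
+
+ for epoch in range(25):
+     optimizer.zero_grad()
+     # Stand-in for the full objective combining effectiveness,
+     # faithfulness (lambda_F), and l1-minimality (lambda_M).
+     loss = theta.abs().sum()
+     loss.backward()
+     optimizer.step()
+ ```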
393
+
394
+ # A.2 Multi-headed Attention
395
+
396
+ We now describe the multi-headed attention mechanism $\mathrm{ATTN}_l$ (Vaswani et al., 2017) in more detail. Given a $D \times I_n$ activation matrix $\mathbf{H}_{l-1}^{(4)} = [\mathbf{h}_{l-1,1}^{(4)}, \ldots, \mathbf{h}_{l-1,I_n}^{(4)}]$ , the multi-headed attention mechanism at layer $l$ computes
397
+
398
+ $$
399
+ \begin{aligned} \mathbf{H}_l^{(1)} &= \operatorname{ATTN}_l\left(\operatorname{LN}_l^{(1)}\left(\mathbf{H}_{l-1}^{(4)}\right)\right) && (11\text{a}) \\ &= \sum_{t=1}^{T} \mathbf{W}_{t,l}^{(z)} A_{t,l}\left(\operatorname{LN}_l^{(1)}\left(\mathbf{H}_{l-1}^{(4)}\right)\right) && (11\text{b}) \\ &= \sum_{t=1}^{T} \mathbf{W}_{t,l}^{(z)} \mathbf{H}_{t,l}^{(z)} && (11\text{c}) \\ &= \sum_{t=1}^{T} \mathbf{H}_{t,l}^{(o)}. && (11\text{d}) \end{aligned}
400
+ $$
401
+
402
+ Essentially, each individual attention head $A_{t,l}$ with head index $t \in \{1, \dots, T\}$ computes an activation matrix $\mathbf{H}_{t,l}^{(z)} \in \mathbb{R}^{D' \times I_n}$ . This per-head
403
+
404
+ matrix is then multiplied with $\mathbf{W}_{t,l}^{(z)}\in \mathbb{R}^{D\times D^{\prime}}$ to obtain a per-head output activation matrix $\mathbf{H}_{t,l}^{(o)}\in \mathbb{R}^{D\times I_n}$ . Note that $\mathbf{h}_{t,l,i}^{(z)}\in \mathbb{R}^{D^{\prime}}$ and $\mathbf{h}_{t,l,i}^{(o)}\in \mathbb{R}^{D}$ are column vectors of $\mathbf{H}_{t,l}^{(z)}$ and $\mathbf{H}_{t,l}^{(o)}$ , respectively.
405
+
406
+ To zoom in on individual attention heads, we now omit the head index $t$ and layer index $l$ for notational simplicity. Under this simplified notation, we define $A(\mathbf{H})$ as follows. Each activation (column) vector $\mathbf{h}_i\in \mathbb{R}^D$ of the matrix $\mathbf{H}\in \mathbb{R}^{D\times I_n}$ is linearly projected to compute query, key and value activation vectors according to
407
+
408
+ $$
409
+ \mathbf{h}_i^{(q)} = \mathbf{W}^{(q)} \mathbf{h}_i \tag{12a}
410
+ $$
411
+
412
+ $$
413
+ \mathbf{h}_i^{(k)} = \mathbf{W}^{(k)} \mathbf{h}_i \tag{12b}
414
+ $$
415
+
416
+ $$
417
+ \mathbf{h}_i^{(v)} = \mathbf{W}^{(v)} \mathbf{h}_i \tag{12c}
418
+ $$
419
+
420
+ where $\mathbf{W}^{(q)},\mathbf{W}^{(k)},\mathbf{W}^{(v)}\in \mathbb{R}^{D^{\prime}\times D}$ . The query and key vectors are then used to compute $I_{n}$ different self-attention distributions $\kappa_{i}$ (Li et al., 2024) over the probability simplex $\Delta^{I_n - 1}$ following
421
+
422
+ $$
423
+ \bar{\kappa}_i(j) = \frac{\mathbf{h}_i^{(q)\top} \mathbf{h}_j^{(k)}}{\sqrt{D^{\prime}}} \tag{13a}
424
+ $$
425
+
426
+ $$
427
+ \boldsymbol{\kappa}_i = \sigma\left(\left[\bar{\kappa}_i(1), \dots, \bar{\kappa}_i(I_n)\right]^{\top}\right) \tag{13b}
428
+ $$
429
+
430
+ where $\bar{\kappa}_i(j)$ represents the (unnormalized) attention score that token position $i$ pays to token position $j$ and $\sigma$ is the softmax function. Importantly, in autoregressive language modeling, it is common to apply the hard constraint of an attention mask that prevents each token position from attending to later positions in the prompt, i.e., position $i$ may only attend to positions $j \leq i$ .
431
+
432
+ Finally, the self-attention distributions are used to construct a weighted average of the value vectors $\mathbf{h}_i^{(v)}$ according to
433
+
434
+ $$
435
+ \mathbf{h}_i^{(z)} = \sum_{j=1}^{I_n} \kappa_i(j) \mathbf{h}_j^{(v)} \tag{14a}
436
+ $$
437
+
438
+ In the main part of this paper, Fig. 7 and Fig. 8 specifically, we refer to $\mathbf{h}_{t,l,i}^{(z)}$ as $z$ , $\mathbf{h}_{t,l,i}^{(o)}$ as $o$ and $\mathbf{h}_{t,l,i}^{(\mathrm{v})}$ as $v$ activation vectors.
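+
+ Putting Eqs. (12) to (14) together, a single attention head with a causal mask can be sketched as below. Matrix shapes follow the column-vector convention above; the code is an illustrative reimplementation rather than the TransformerLens internals.
+
+ ```python
+ import torch
+
+ def attention_head(H, W_q, W_k, W_v, W_z):
+     # H: (D, I_n) activation matrix with column vectors h_i.
+     # W_q, W_k, W_v: (D', D) projections; W_z: (D, D') output projection.
+     Q, K, V = W_q @ H, W_k @ H, W_v @ H    # Eq. (12a-c)
+     D_prime, I_n = Q.shape
+     scores = (Q.T @ K) / D_prime**0.5      # kappa_bar_i(j) at entry [i, j]
+     # Causal mask: position i may only attend to positions j <= i.
+     mask = torch.triu(torch.ones(I_n, I_n), diagonal=1).bool()
+     scores = scores.masked_fill(mask, float("-inf"))
+     kappa = torch.softmax(scores, dim=-1)  # Eq. (13b), rows on the simplex
+     Z = V @ kappa.T                        # Eq. (14): columns are h_i^(z)
+     return W_z @ Z                         # per-head output H^(o)
+
+ D, D_prime, I_n = 64, 16, 5
+ H = torch.randn(D, I_n)
+ W_q, W_k, W_v = (torch.randn(D_prime, D) for _ in range(3))
+ O = attention_head(H, W_q, W_k, W_v, torch.randn(D, D_prime))
+ ```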
activationscalingforsteeringandinterpretinglanguagemodels/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5f47ed41518329fb07e83d20ed894ecef1027dc721ec6aea63f3faf9e032aaa0
3
+ size 634775
activationscalingforsteeringandinterpretinglanguagemodels/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f5b346578029583d64ef41f617a6ebe315166a51dd426dc34e756a632bb4c904
3
+ size 531058
activelearningforabstractivetextsummarizationviallmdeterminedcurriculumandcertaintygainmaximization/fcdd29d1-9610-4216-b75f-4ee42d7a9fe2_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:123e60580b87bad9e4509712a6b96cfe5415f25371ef5b57f339e74eeba97d1b
3
+ size 95630
activelearningforabstractivetextsummarizationviallmdeterminedcurriculumandcertaintygainmaximization/fcdd29d1-9610-4216-b75f-4ee42d7a9fe2_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f639eebe079a5ef8ca33a6bfadb0259816a392261092d00f9bea84956cd151d0
3
+ size 120658