# ABC: Attention with Bounded-Memory Control

Hao Peng, Jungo Kasai, Nikolaos Pappas, Dani Yogatama, Zhaofeng Wu, Lingpeng Kong, Roy Schwartz, Noah A. Smith

Paul G. Allen School of Computer Science & Engineering, University of Washington; Amazon Web Services; DeepMind; Allen Institute for Artificial Intelligence; School of Computer Science & Engineering, Hebrew University of Jerusalem; Department of Computer Science, The University of Hong Kong

{hapeng, jkasai, npappas, zfw7, nasmith}@cs.washington.edu; dyogatama@deepmind.com; lpk@cs.hku.hk; roy.schwartz1@mail.huji.ac.il
# Abstract

Transformer architectures have achieved state-of-the-art results on a variety of natural language processing (NLP) tasks. However, their attention mechanism comes with a quadratic complexity in sequence lengths, making the computational overhead prohibitive, especially for long sequences. Attention context can be seen as a random-access memory with each token taking a slot. Under this perspective, the memory size grows linearly with the sequence length, and so does the overhead of reading from it. One way to improve the efficiency is to bound the memory size. We show that disparate approaches can be subsumed into one abstraction, attention with bounded-memory control (ABC), and that they vary in their organization of the memory. ABC reveals new, unexplored possibilities. First, it connects several efficient attention variants that would otherwise seem distinct. Second, this abstraction gives new insights: an established approach (Wang et al., 2020b), previously thought to not be applicable in causal attention, actually is. Last, we present a new instance of ABC, which draws inspiration from existing ABC approaches, but replaces their heuristic memory-organizing functions with a learned, contextualized one. Our experiments on language modeling, machine translation, and masked language model finetuning show that our approach outperforms previous efficient attention models; compared to strong transformer baselines, it significantly improves the inference time and space efficiency with no or negligible accuracy loss.
# 1 Introduction

Transformer architectures are now central in natural language processing (Vaswani et al., 2017). They rely on the attention mechanism (Bahdanau et al., 2015) to contextualize the input. The context can be seen as a random-access memory whose size grows linearly with the sequence length; each query reads from it using a softmax-normalized linear combination, with overhead linear in the memory size. This amounts to a quadratic complexity overall, making transformers' computational overhead prohibitive, especially for long sequences.

One way to improve attention's efficiency is to bound its memory size. Imposing a constant-sized constraint over the memory ensures that reading from it has constant time and space overhead, yielding a linear overall complexity in sequence lengths. This is in fact a common strategy adopted by several recent works. In this work, we show that some of these works are closely connected in ways that, to date, have gone unremarked. We propose attention with bounded-memory control (ABC), a unified abstraction over them. In ABC, constant-sized memories are organized with various control strategies, e.g., induced from heuristic patterns (Beltagy et al., 2020; Zaheer et al., 2020; Ainslie et al., 2020; Rae et al., 2020, inter alia), locality assumptions (Parmar et al., 2018; Liu et al., 2018), or positions (Wang et al., 2020b).
These strategies, by and large, are "context-agnostic." In response to this, we propose $\mathrm{ABC}_{\mathrm{MLP}}$, a particular instance of ABC that learns a contextualized control strategy from data. Specifically, $\mathrm{ABC}_{\mathrm{MLP}}$ uses a neural network to determine how to store each token into the memory (if at all). Compared to previous bounded-memory models, it strikes a better trade-off between accuracy and efficiency: controlling for the accuracy, $\mathrm{ABC}_{\mathrm{MLP}}$ can get away with much smaller memory sizes.
ABC models (including $\mathrm{ABC}_{\mathrm{MLP}}$) come with a linear complexity in sequence lengths, and admit recurrent computation graphs in causal attention (self-attention over the prefix). Therefore they are appealing choices in a variety of applications, including text encoding, language modeling, and text generation. This leads to a surprising finding. Linformer (Wang et al., 2020b), an established efficient attention method, was previously thought not to be applicable in causal attention or autoregressive decoding (Tay et al., 2020). Through the ABC view, we show that it actually is, and achieves competitive performance in our machine translation experiments.

ABC connects existing models that would otherwise seem distinct, reveals new insights into established methods, and inspires new efficient attention architectures. We explore its applications in transformers, as a drop-in substitute for the canonical softmax attention. ABC offers a novel lens that can help future research in the analysis of transformers, where the theoretical insights are still catching up with empirical success. Experiments on language modeling, machine translation, and masked language model finetuning show that our $\mathrm{ABC}_{\mathrm{MLP}}$ model outperforms previous ABC approaches in accuracy with a much smaller memory size. Compared to the strong transformer baseline, $\mathrm{ABC}_{\mathrm{MLP}}$ achieves a significant speedup and memory savings at inference time, with no or negligible accuracy loss. The efficiency improvements are more prominent for long sequences, suggesting that the asymptotic savings are even more appealing in applications involving long sequences. We release our code at https://github.com/Noahs-ARK/ABC.
# 2 An Outer-Product View of Attention

This section presents our outer-product memory perspective of attention, which allows for a smooth transition to the later discussion.
In attention, a sequence of queries $\{\mathbf{q}_i\}_{i=1}^N$ attend to a memory with $N$ slots, each storing a key and value pair: $\mathbf{K} = [\mathbf{k}_1, \dots, \mathbf{k}_N]^\top$, $\mathbf{V} = [\mathbf{v}_1, \dots, \mathbf{v}_N]^\top \in \mathbb{R}^{N \times d}$. Query $\mathbf{q}$ reads from the memory using a softmax-normalized linear combination, producing a $d$-dimensional vector:

$$
\operatorname{attn}(\mathbf{q}, \{\mathbf{k}_i\}, \{\mathbf{v}_i\}) = \mathbf{V}^{\top} \operatorname{softmax}(\mathbf{K}\mathbf{q}). \tag{1}
$$
This takes $\mathcal{O}(N)$ time and space. When the attention with $N$ queries can be parallelized (e.g., in text encoding), it takes linear time and quadratic space; when it cannot be (e.g., in decoding), it takes quadratic time and linear space.
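As a concrete sketch of Eq. 1 (a minimal NumPy illustration with made-up sizes, not the paper's implementation):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def attn(q, K, V):
    # Eq. 1: attn(q, {k_i}, {v_i}) = V^T softmax(K q).
    return V.T @ softmax(K @ q)

rng = np.random.default_rng(0)
N, d = 6, 4                      # illustrative sizes
K = rng.normal(size=(N, d))      # keys, one per memory slot
V = rng.normal(size=(N, d))      # values
q = rng.normal(size=d)           # a single query

out = attn(q, K, V)
assert out.shape == (d,)         # reading returns a d-dimensional vector
```

Each query pays $\mathcal{O}(N)$ to read the memory, which is where the overall quadratic cost comes from.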
The memory can be equivalently represented as sums of vector outer products: $\mathbf{K} = \mathbf{I}\mathbf{K} = \sum_{i=1}^{N} \mathbf{e}_i \otimes \mathbf{k}_i$, $\mathbf{V} = \sum_{i=1}^{N} \mathbf{e}_i \otimes \mathbf{v}_i$. $\mathbf{I}$ is the identity matrix, and $\otimes$ denotes the outer product: $[\mathbf{x} \otimes \mathbf{y}]_{i,j} = x_i y_j$. The $N$-dimensional vectors $\{\mathbf{e}_i\}$ form the standard basis: $\mathbf{e}_i$ has its $i$th element equal to one and all others zero. We can view the $\mathbf{e}_i$ as control vectors that determine where to store $\mathbf{k}_i$ and $\mathbf{v}_i$:

$$
\begin{aligned}
\mathbf{e}_i \otimes \mathbf{k}_i &= \big[\underbrace{0, \ldots, 0}_{i-1}, 1, \underbrace{0, \ldots, 0}_{N-i}\big]^{\top} \otimes \mathbf{k}_i \\
&= \big[\underbrace{\mathbf{0}}_{d \times (i-1)}; \mathbf{k}_i; \underbrace{\mathbf{0}}_{d \times (N-i)}\big]^{\top}. \tag{2}
\end{aligned}
$$

The $N$-by-$d$ matrix on the last line has its $i$th row equal to $\mathbf{k}_i^\top$ and all others zero; in this sense, $\mathbf{e}_i$ stores $\mathbf{k}_i$ in the $i$th slot without affecting the others.
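The outer-product identity $\mathbf{K} = \sum_i \mathbf{e}_i \otimes \mathbf{k}_i$ can be checked numerically (a small sanity check, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 3
K = rng.normal(size=(N, d))

E = np.eye(N)  # rows are the standard basis vectors e_1, ..., e_N

# e_i ⊗ k_i is an N-by-d matrix whose i-th row is k_i and all others zero;
# summing over i reassembles K exactly.
K_rebuilt = sum(np.outer(E[i], K[i]) for i in range(N))
assert np.allclose(K_rebuilt, K)
```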
# 3 Attention with Bounded Memory

A straightforward way to improve attention's efficiency is to bound its memory size. Our outer-product view of attention suggests a natural way to devise this: replace $\{\mathbf{e}_i\}$ with control vectors that select $n \ll N$ vectors to attend to. We dub this approach attention with bounded-memory control (ABC). Concretely, let $\widetilde{\mathbf{K}}, \widetilde{\mathbf{V}} \in \mathbb{R}^{n \times d}$ denote a constant-size memory with $n$ slots, with $n$ set a priori.
$$
\widetilde{\mathbf{K}} = \sum_{i=1}^{N} \phi_i \otimes \mathbf{k}_i, \quad \widetilde{\mathbf{V}} = \sum_{i=1}^{N} \phi_i \otimes \mathbf{v}_i. \tag{3}
$$

$\{\phi_i \in \mathbb{R}^n\}_{i=1}^N$ denotes a sequence of control vectors. The output is calculated by attending to $\widetilde{\mathbf{K}}$ and $\widetilde{\mathbf{V}}$:

$$
\operatorname{ABC}(\mathbf{q}, \{\mathbf{k}_i\}, \{\mathbf{v}_i\}, \{\phi_i\}) = \widetilde{\mathbf{V}}^{\top} \operatorname{softmax}(\widetilde{\mathbf{K}}\mathbf{q}). \tag{4}
$$

We will discuss various ways to construct $\{\phi_i\}$ in the subsequent sections. Reading from the memory takes constant $\mathcal{O}(n)$ time and space; therefore ABC's overall complexity is $\mathcal{O}(Nn)$, linear in the sequence length.
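Stacking the control vectors $\phi_i$ as rows of an $N$-by-$n$ matrix makes Eqs. 3 and 4 a pair of matrix products. A hedged NumPy sketch (the matrix `Phi` and its random contents are illustrative; any control strategy plugs in there):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def abc_attn(q, K, V, Phi):
    # Eq. 3: with phi_i stacked as rows of Phi (N x n),
    # K~ = sum_i phi_i ⊗ k_i = Phi^T K, and likewise for V~.
    K_mem = Phi.T @ K            # (n, d) bounded memory
    V_mem = Phi.T @ V            # (n, d)
    # Eq. 4: attend over the n-slot memory instead of all N tokens.
    return V_mem.T @ softmax(K_mem @ q)

rng = np.random.default_rng(0)
N, n, d = 16, 4, 8               # memory size n << N
K = rng.normal(size=(N, d))
V = rng.normal(size=(N, d))
Phi = rng.random(size=(N, n))    # placeholder control strategy
q = rng.normal(size=d)

out = abc_attn(q, K, V, Phi)
assert out.shape == (d,)         # reading now costs O(n) per query
```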
Eq. 3 offers an equivalent recurrent computation, which is particularly useful in causal attention where only the prefix is attended to:

$$
\widetilde{\mathbf{K}}_{t+1} = \widetilde{\mathbf{K}}_t + \phi_{t+1} \otimes \mathbf{k}_{t+1}, \tag{5}
$$

and likewise for $\widetilde{\mathbf{V}}_t$. $\widetilde{\mathbf{K}}_t$ and $\widetilde{\mathbf{V}}_t$ can be seen as a recurrent hidden state that encodes the prefix.
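The equivalence between the recurrence of Eq. 5 and the one-shot sum of Eq. 3 is easy to verify numerically (an illustrative sketch; `Phi` again stacks the $\phi_i$ as rows):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, d = 10, 3, 4
K = rng.normal(size=(N, d))
Phi = rng.random(size=(N, n))    # control vectors phi_1, ..., phi_N as rows

# Eq. 5: the memory is a recurrent state, updated one token at a time.
K_mem = np.zeros((n, d))
for t in range(N):
    K_mem = K_mem + np.outer(Phi[t], K[t])

# The final state matches the one-shot sum of Eq. 3, K~ = Phi^T K.
assert np.allclose(K_mem, Phi.T @ K)
```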
In what follows, we study several existing efficient attention approaches and show that they are in fact instances of the ABC abstraction.
# 3.1 Linformer

Linformer (Wang et al., 2020b) is an established efficient transformer variant that has proven successful in masked language modeling and text encoding. It assumes fixed-length inputs and learns a low-rank approximation of the attention weights. A learned $n$-by-$N$ matrix $\mathbf{W}^{\mathrm{LF}}$ down-projects the $N$-by-$d$ keys and values along the timestep dimension to an $n$-by-$d$ memory: $\widetilde{\mathbf{K}}^{\mathrm{LF}} = \mathbf{W}^{\mathrm{LF}}\mathbf{K}$, $\widetilde{\mathbf{V}}^{\mathrm{LF}} = \mathbf{W}^{\mathrm{LF}}\mathbf{V}$; these are then used for attention computation with Eq. 4. This yields a linear complexity in the input length. Linformer is an ABC instance with $\phi_i^{\mathrm{LF}} = \mathbf{W}_{:,i}^{\mathrm{LF}}$ (the $i$th column); in this sense, it learns a control vector for each position.

Previous works have noted that Linformer cannot be efficiently applied in causal attention (Table 1 of Tay et al., 2020). Indeed, it is less straightforward to avoid mixing the future with the past when projecting along the timestep dimension. ABC reveals that Linformer is, in fact, applicable in causal attention. Like all ABC models, it admits a linear-complexity recurrent computation (Eq. 5): $\widetilde{\mathbf{K}}_{t+1}^{\mathrm{LF}} = \widetilde{\mathbf{K}}_t^{\mathrm{LF}} + \phi_{t+1}^{\mathrm{LF}} \otimes \mathbf{k}_{t+1}$. This confirms ABC's benefits: it reveals new insights about existing models and reassesses their applicability and impact. Our experiments show that Linformer achieves competitive performance in machine translation.
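The causal reading of Linformer can be sketched as follows (an illustrative check, assuming a randomly initialized stand-in for the learned projection `W_lf`): at step $t$, only column $t$ of $\mathbf{W}^{\mathrm{LF}}$ touches token $t$, so the running memory never mixes in future positions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, d = 12, 4, 8
W_lf = rng.normal(size=(n, N))   # stand-in for Linformer's learned projection
K = rng.normal(size=(N, d))

K_mem = np.zeros((n, d))
prefix_states = []
for t in range(N):
    K_mem = K_mem + np.outer(W_lf[:, t], K[t])   # phi_t^LF = W^LF[:, t]
    prefix_states.append(K_mem.copy())

# Each prefix state equals the projection restricted to the prefix,
# and the last one recovers the full (non-causal) projection W^LF K.
assert np.allclose(prefix_states[4], W_lf[:, :5] @ K[:5])
assert np.allclose(prefix_states[-1], W_lf @ K)
```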
# 3.2 Clustering-Based Attention

Improving attention's efficiency with clustering has received an increasing amount of interest (Kitaev et al., 2020; Roy et al., 2020; Wang et al., 2020a, inter alia). ABC bears interesting connections to clustering-based methods. Here we discuss an approach that closely follows Vyas et al. (2020), except that it clusters keys and values instead of queries, and only attends to the centroids to reduce the effective context size. Formally, keys and values are grouped into $n < N$ clusters $\{\widetilde{\mathbf{k}}_j^{\mathrm{CL}}\}_{j=1}^n$, $\{\widetilde{\mathbf{v}}_j^{\mathrm{CL}}\}_{j=1}^n$. Let an $N$-by-$n$ binary matrix $\mathbf{M}$ denote the cluster membership shared between keys and values: $M_{i,j} = 1$ iff $\mathbf{k}_i$ is assigned to cluster $\widetilde{\mathbf{k}}_j^{\mathrm{CL}}$ and $\mathbf{v}_i$ to $\widetilde{\mathbf{v}}_j^{\mathrm{CL}}$. The $j$th centroid for the keys is

$$
\widetilde{\mathbf{k}}_j^{\mathrm{CL}} = \sum_{i=1}^{N} \frac{M_{i,j}}{\sum_{\ell=1}^{N} M_{\ell,j}} \mathbf{k}_i; \tag{6}
$$

likewise for the values. It then attends over the centroids using Eq. 4, with

$$
\begin{aligned}
\widetilde{\mathbf{K}}^{\mathrm{CL}} = \big[\widetilde{\mathbf{k}}_1^{\mathrm{CL}}, \ldots, \widetilde{\mathbf{k}}_n^{\mathrm{CL}}\big]^{\top}
&= \sum_{j=1}^{n} \mathbf{e}_j \otimes \widetilde{\mathbf{k}}_j^{\mathrm{CL}}
= \sum_{j=1}^{n} \mathbf{e}_j \otimes \sum_{i=1}^{N} \frac{M_{i,j}}{\sum_{\ell=1}^{N} M_{\ell,j}} \mathbf{k}_i \\
&= \sum_{i=1}^{N} \bigg(\sum_{j=1}^{n} \frac{M_{i,j}}{\sum_{\ell=1}^{N} M_{\ell,j}} \mathbf{e}_j\bigg) \otimes \mathbf{k}_i.
\end{aligned}
$$

The last line indicates that this model is an instance of ABC: $\phi_i = \sum_{j=1}^{n} \big(M_{i,j} / \sum_{\ell=1}^{N} M_{\ell,j}\big) \mathbf{e}_j$. The stack of centroids can be seen as the constant-size memory. Putting aside the clustering overhead (i.e., constructing $\mathbf{M}$ and computing centroids), it has a linear complexity in the sequence length.
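The equivalence between centroid computation (Eq. 6) and the ABC form of $\phi_i$ can be sketched numerically (an illustrative check; the random cluster assignment is made up):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, d = 8, 3, 4
K = rng.normal(size=(N, d))

assign = rng.integers(0, n, size=N)
assign[:n] = np.arange(n)        # avoid empty clusters in this toy example
M = np.eye(n)[assign]            # (N, n) binary membership matrix

# Eq. 6: centroid j is the mean of the keys assigned to cluster j.
centroids = (M / M.sum(axis=0)).T @ K          # (n, d)

# ABC form: phi_i = sum_j (M_ij / sum_l M_lj) e_j; stacking the phi_i as
# rows of Phi, the memory Phi^T K reproduces the centroids exactly.
Phi = M / M.sum(axis=0)
assert np.allclose(Phi.T @ K, centroids)
```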
# 3.3 Sliding-Window Attention

In some applications, being able to remove entries from the memory can be beneficial: clearing out older context frees slots for more recent tokens, promoting a locality inductive bias. ABC offers this capability if augmented with an additional matrix multiplication. We use sliding-window attention as an example.

Attending to the most recent $n$ input tokens (Beltagy et al., 2020; Zaheer et al., 2020; Sukhbaatar et al., 2021, inter alia) can be seen as a first-in-first-out queue that "pops" the oldest token while "pushing" in the most recent one: $\widetilde{\mathbf{K}}_t^{\mathrm{WD}} = [\mathbf{k}_{t-n+1}, \dots, \mathbf{k}_t]^\top$. The pop operation can be achieved by multiplying by an $n$-by-$n$ upper shift matrix $\mathbf{U}$ with $U_{i,j} = \delta_{i+1,j}$, where $\delta$ is the Kronecker delta (i.e., $\mathbf{U}$ has ones only on the superdiagonal and zeros elsewhere). Left-multiplying $\widetilde{\mathbf{K}}_t^{\mathrm{WD}}$ by $\mathbf{U}$ shifts its rows up one position, with zeros appearing in the last row:

$$
\begin{aligned}
\mathbf{U}\widetilde{\mathbf{K}}_t^{\mathrm{WD}} &= \mathbf{U}\big[\underbrace{\mathbf{k}_{t-n+1}, \ldots, \mathbf{k}_t}_{n}\big]^{\top} \\
&= \big[\underbrace{\mathbf{k}_{t-n+2}, \ldots, \mathbf{k}_{t-1}, \mathbf{k}_t}_{n-1}, \mathbf{0}\big]^{\top} \in \mathbb{R}^{n \times d}.
\end{aligned}
$$

Then the most recent token can be put into the freed-up slot: $\widetilde{\mathbf{K}}_{t+1}^{\mathrm{WD}} = \mathbf{U}\widetilde{\mathbf{K}}_t^{\mathrm{WD}} + \mathbf{e}_n \otimes \mathbf{k}_{t+1}$. Together, $\mathbf{U}$ and $\phi_t = \mathbf{e}_n$ implement a first-in-first-out queue. Dilated and strided convolution patterns (Beltagy et al., 2020) can be similarly recovered (§A.4).
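The pop-then-push recurrence can be sketched in a few lines (illustrative NumPy, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, d = 9, 4, 3
K = rng.normal(size=(N, d))

U = np.eye(n, k=1)               # upper shift matrix: ones on the superdiagonal
e_n = np.eye(n)[-1]              # phi_t = e_n writes into the last slot

K_wd = np.zeros((n, d))
for t in range(N):
    # Pop the oldest key (rows shift up), then push the newest into slot n.
    K_wd = U @ K_wd + np.outer(e_n, K[t])

# After at least n steps, the memory holds exactly the n most recent keys.
assert np.allclose(K_wd, K[-n:])
```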
Recurrently multiplying by $\mathbf{U}$ simulates the discrete pop operation (Grefenstette et al., 2015; Joulin and Mikolov, 2015; Yogatama et al., 2018) in a differentiable way. This is reminiscent of recurrent neural networks, although here $\mathbf{U}$ is fixed rather than updated as a parameter. Learning $\mathbf{U}$ is an exciting direction, but beyond the scope of this work.

Discussion. Besides the models discussed above, certain variants of Rae et al. (2020) and sparse attention patterns (local-to-global attention; Beltagy et al., 2020; Zaheer et al., 2020; Ainslie et al., 2020) can also be seen as instances of ABC (§A). ABC provides a unified perspective of them, and at the same time points out their limitation: their control strategies are context-agnostic. In response, in §4 we propose to learn a contextualized strategy from data. Table 1 analyzes various ABC models, and Table 2 details their complexity.
# 4 Learned Memory Control

The ABC abstraction connects several existing approaches that would otherwise seem distinct, and this inspires the design of new architectures. We hypothesize that learning a contextualized control strategy can achieve better performance. This section introduces $\mathrm{ABC}_{\mathrm{MLP}}$, which parameterizes $\phi$ with a single-layer multi-layer perceptron (MLP) that takes as input the token's representation $\mathbf{x}_i$ and determines which slots to write it into, and how much:

$$
\boldsymbol{\alpha}_i = \exp(\mathbf{W}_{\phi}\mathbf{x}_i), \quad \phi_i = \boldsymbol{\alpha}_i \bigg/ \sum_{j=1}^{N} \boldsymbol{\alpha}_j. \tag{7}
$$

Matrix $\mathbf{W}_{\phi}$ is learned, and $\exp$ is an elementwise activation function. The motivation is to allow storing a "fractional" (but never negative) amount of input into the memory. Using a non-negative activation, however, has a drawback: the scales of $\sum_i \phi_i \otimes \mathbf{k}_i$ and $\sum_i \phi_i \otimes \mathbf{v}_i$ would grow with the sequence length, making training less stable. To overcome this, we divide the $\boldsymbol{\alpha}_i$ vectors by their sum. This functions as normalization and offsets the impact of varying sequence lengths. $\mathrm{ABC}_{\mathrm{MLP}}$ admits the recurrent computation graph of Eq. 5 and has a linear complexity in the sequence length.
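Eq. 7's scores and normalization can be sketched as follows (an illustrative NumPy fragment; the random `W_phi` stands in for the learned matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, d = 10, 4, 8
X = rng.normal(size=(N, d))          # token representations x_1, ..., x_N
W_phi = rng.normal(size=(n, d))      # stand-in for the learned matrix

# Eq. 7: non-negative scores, then normalize across the N timesteps so the
# memory's scale does not grow with the sequence length.
A = np.exp(X @ W_phi.T)              # row i holds alpha_i, shape (N, n)
Phi = A / A.sum(axis=0)              # phi_i = alpha_i / sum_j alpha_j

# Each memory slot's write weights over the sequence sum to one.
assert np.allclose(Phi.sum(axis=0), np.ones(n))
```

In causal attention, the normalizing sum runs over the prefix up to $t$ rather than over all $N$ timesteps.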
A key design choice of $\mathrm{ABC}_{\mathrm{MLP}}$ is that its $\phi_i$ depends only on the current input $\mathbf{x}_i$. This helps (1) keep the recurrent computation efficient in practice (Lei et al., 2018), and (2) make it applicable not only in encoder self-attention and cross attention, but also in causal attention. Concurrently to this work, Goyal et al. (2021) and Ma et al. (2021) also proposed methods to learn contextualized control. They compute $\phi_i$ from the previous layer's memory, revealing the full sequence to the control vectors. As a result, these two approaches are unsuitable for causal attention.

$\mathrm{ABC}_{\mathrm{MLP}}$, like other ABC models, can be used as a drop-in replacement for the canonical softmax attention, and we apply its multihead variant in transformers. With proper parameter sharing, the number of additional parameters $\mathrm{ABC}_{\mathrm{MLP}}$ incurs is small: inspired by Wang et al. (2020b), we tie the $\phi$-MLP's parameters across different layers, which adds less than $1\%$ parameters to the models.
$\mathrm{ABC}_{\mathrm{MLP}}$: context-agnostic then context-dependent attention. We now dissect $\mathrm{ABC}_{\mathrm{MLP}}$ and show that it can be seen as a cascade of two attention mechanisms: one with a learned, context-agnostic "pseudo-query" followed by one with a context-dependent query. Our analysis starts with a one-dimensional example; the conclusion generalizes to higher-dimensional cases.

Example 1. Consider $\mathrm{ABC}_{\mathrm{MLP}}$ with a single memory slot ($n = 1$). It is parameterized with a learned vector $\mathbf{w}_{\phi}$, and $\phi_i = \exp(\mathbf{w}_{\phi} \cdot \mathbf{x}_i) / \sum_{j=1}^{N} \exp(\mathbf{w}_{\phi} \cdot \mathbf{x}_j)$. Since $\phi_i$ is a scalar here, $\phi_i \otimes \mathbf{k}_i = \phi_i \mathbf{k}_i^{\top}$, and

$$
\begin{aligned}
\widetilde{\mathbf{K}}^{\top} &= \sum_{i=1}^{N} \left(\phi_i \otimes \mathbf{k}_i\right)^{\top}
= \sum_{i=1}^{N} \frac{\exp(\mathbf{w}_{\phi} \cdot \mathbf{x}_i)}{\sum_{j=1}^{N} \exp(\mathbf{w}_{\phi} \cdot \mathbf{x}_j)} \mathbf{k}_i \\
&= \operatorname{attn}\left(\mathbf{w}_{\phi}, \{\mathbf{x}_i\}_{i=1}^{N}, \{\mathbf{k}_i\}_{i=1}^{N}\right).
\end{aligned}
$$

In other words, $\widetilde{\mathbf{K}}$ uses $\mathbf{w}_{\phi}$ as a "pseudo-query" to attend to $\{\mathbf{x}_i\}$ and $\{\mathbf{k}_i\}$. Likewise, $\widetilde{\mathbf{V}}^{\top} = \operatorname{attn}(\mathbf{w}_{\phi}, \{\mathbf{x}_i\}_{i=1}^{N}, \{\mathbf{v}_i\}_{i=1}^{N})$. Despite its similarity to standard softmax attention, Example 1 has a more efficient, linear complexity in sequence lengths; $\mathbf{w}_{\phi}$'s being context-independent is the key to the savings. Table 2 details its complexity.
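Example 1's identity can be checked directly (an illustrative fragment; the random `w_phi` stands in for the learned pseudo-query):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
N, d = 7, 5
X = rng.normal(size=(N, d))          # token representations
K = rng.normal(size=(N, d))          # keys
w_phi = rng.normal(size=d)           # stand-in for the learned pseudo-query

# ABC_MLP with a single slot: phi_i is a softmax over timesteps of w_phi . x_i.
phi = softmax(X @ w_phi)             # (N,)
K_mem = (phi[:, None] * K).sum(axis=0)   # sum_i phi_i ⊗ k_i, one slot

# This is exactly attention with pseudo-query w_phi over {x_i} and {k_i}:
# attn(w_phi, {x_i}, {k_i}) = K^T softmax(X w_phi).
assert np.allclose(K_mem, K.T @ softmax(X @ w_phi))
```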
Example 1's conclusion generalizes to higher-dimensional cases: the $j$th dimension of $\{\phi_i\}$ attends to $\{\mathbf{x}_i\}$ and $\{\mathbf{k}_i\}$ using the $j$th row of $\mathbf{W}_{\phi}$ as the context-independent pseudo-query; $n$ such attention mechanisms run in parallel, stacking the results into $n$-by-$d$ memories $\widetilde{\mathbf{K}}$ and $\widetilde{\mathbf{V}}$. Intuitively, it is the "real queries" $\{\mathbf{q}_i\}$ that encode "what information is useful for the prediction task." Without access to them, $\mathrm{ABC}_{\mathrm{MLP}}$ summarizes the input $n$ times using different pseudo-queries, aiming to preserve enough information in the memory for onward computation. The attention output is then calculated with the context-dependent real queries using Eq. 4. §B.2 presents a detailed derivation.

| Model | Section | $\phi_t$ | Mem. Control |
|---|---|---|---|
| Sliding-window | §3.3 | $\mathbf{e}_n$ | $\widetilde{\mathbf{K}}_{t+1} = \mathbf{U}\widetilde{\mathbf{K}}_t + \phi_{t+1} \otimes \mathbf{k}_{t+1}$ |
| Linformer | §3.1 | $\mathbf{W}_{:,t}^{\mathrm{LF}}$ | $\widetilde{\mathbf{K}}_{t+1} = \widetilde{\mathbf{K}}_t + \phi_{t+1} \otimes \mathbf{k}_{t+1}$ |
| L2G Pattern | §A.1 | $\mathbf{e}_i$ if $\mathbf{x}_t$ is the $i$th global token | $\widetilde{\mathbf{K}}_{t+1} = \widetilde{\mathbf{K}}_t + \phi_{t+1} \otimes \mathbf{k}_{t+1}$ |
| ABCD | §A.2 | $\mathbf{e}_{i_t}$, where $i_t \sim \operatorname{unif}\{1, n\}$ | $\widetilde{\mathbf{K}}_{t+1} = \widetilde{\mathbf{K}}_t + \phi_{t+1} \otimes \mathbf{k}_{t+1}$ |
| Comp. Trans. | §A.3 | $\mathbf{e}_{\lceil nt/N \rceil}$ | $\widetilde{\mathbf{K}}_{t+1} = \widetilde{\mathbf{K}}_t + \phi_{t+1} \otimes \mathbf{k}_{t+1}$ |
| Clustering | §3.2 | $\sum_{j=1}^{n} \big(M_{t,j} / \sum_{\ell=1}^{N} M_{\ell,j}\big) \mathbf{e}_j$ | $\widetilde{\mathbf{K}}_{t+1} = \widetilde{\mathbf{K}}_t + \phi_{t+1} \otimes \mathbf{k}_{t+1}$ |
| $\mathrm{ABC}_{\mathrm{MLP}}$ | §4 | $\exp(\mathbf{W}_{\phi}\mathbf{x}_t) \big/ \sum_{i=1}^{t} \exp(\mathbf{W}_{\phi}\mathbf{x}_i)$ | $\widetilde{\mathbf{K}}_{t+1} = \widetilde{\mathbf{K}}_t + \phi_{t+1} \otimes \mathbf{k}_{t+1}$ |

Table 1: A comparison of different ABC models. $N$ denotes the sequence length, and $n$ the memory size. $\phi_t$ denotes the memory control vector for $\mathbf{k}_t$ and $\mathbf{v}_t$, and unif is the discrete uniform distribution.

| Model | Time: Mem. | Time: Per Query | Time: Overall | Space: Mem. | Space: Per Query | Space: Overall |
|---|---|---|---|---|---|---|
| Softmax Attention | – | $\mathcal{O}(N)$ | $\mathcal{O}(N^2)$ | – | $\mathcal{O}(N)$ | $\mathcal{O}(N^2)$ |
| ABC | $\mathcal{O}(N)$ | $\mathcal{O}(n)$ | $\mathcal{O}(nN)$ | $\mathcal{O}(n)$ | $\mathcal{O}(n)$ | $\mathcal{O}(nN)$ |

Table 2: ABC's time and space complexity in sequence length against the softmax attention's. "Mem." indicates the time and space needed for calculating and storing memory $\widetilde{\mathbf{K}}, \widetilde{\mathbf{V}}$. $N$ denotes the sequence length, and $n$ the memory size. The time complexity analysis assumes that the softmax attention cannot be parallelized across the queries. In practice, this is common in autoregressive decoding or for long sequences where the accelerators (e.g., GPUs) do not have enough threads to fully parallelize softmax attention's computation across different queries.
155
+
156
+ Connections to other prior works. Although starting from distinct motivations, $\mathrm{ABC}_{\mathrm{MLP}}$ closely relates to hierarchical attention (HA; Yang et al., 2016). HA summarizes the context into higher-level representations with a cascade of attention mechanisms, e.g., words to sentences, and then to documents. $\mathrm{ABC}_{\mathrm{MLP}}$ applies two types of attention. The first learns context-agnostic pseudo-queries and attends to the same sequence for $n$ times in parallel, while the second retrieves from the memory with real queries. HA, in contrast, summarizes non-overlapping segments at each level.
157
+
158
+ The learned pseudo-queries closely relate to the inducing point method in set attention (ISA; Lee et al., 2019). ISA applies a non-linear feedforward network between a cascade of two attention modules. This precludes the outer-product memory computation and efficient recurrences in ABC.
161
+
162
+ Another line of work "linearizes" attention through kernel tricks and also applies bounded memory: their feature map dimensions are analogous to memory sizes. They substitute the softmax with approximations (Peng et al., 2021; Choromanski et al., 2021), heuristically designed (Katharopoulos et al., 2020; Schlag et al., 2021), or learned (Kasai et al., 2021b) functions. $\mathrm{ABC}_{\mathrm{MLP}}$ keeps the softmax, but over a smaller constant-sized context. This can be useful in practice: (1) ABC provides a unified perspective of several efficient attention methods, allowing for borrowing from existing wisdom to design new architectures; (2) it draws a close analogy to the canonical softmax attention, and is better-suited as its drop-in substitute in various application settings, as we will show in the experiments; (3) empirically, we find that $\mathrm{ABC}_{\mathrm{MLP}}$ can get away with a much smaller memory size while retaining accuracy. Peng et al. (2021) and Schlag et al. (2021) use gating to promote recency bias. The same technique is equally applicable in ABC models.
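To make the contrast concrete, here is a toy sketch (our own notation, not any paper's implementation) of the two output rules: ABC keeps a softmax over an $n$-slot memory, while kernel-linearized attention replaces the softmax with a feature map applied to queries and keys:

```python
import numpy as np

def abc_output(q, K_mem, V_mem):
    # ABC: an exact softmax, but over the n memory slots
    s = K_mem @ q
    a = np.exp(s - s.max())
    return V_mem.T @ (a / a.sum())

def linear_attn_output(q, S, z, feature_map):
    # kernel-linearized attention drops the softmax entirely;
    # S = sum_i feature_map(k_i) v_i^T and z = sum_i feature_map(k_i)
    f = feature_map(q)
    return (S.T @ f) / (f @ z)
```

With a single key-value pair, both rules return that value exactly; they diverge in how the normalization over longer contexts is computed.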
163
+
164
+ The learned contextualized memory control is reminiscent of the content-based addressing in neural Turing machines (NTM; Graves et al., 2014). $\mathrm{ABC_{MLP}}$ computes the control vectors $\{\phi_i\}$ as a function of the input, but not of the memory as in NTM. This ensures that the control vectors at different timesteps can be computed in parallel, improving the time efficiency in practice (Lei et al., 2018; Peng et al., 2018). Analogies between memory and neural architectures are also made by other previous works (Hochreiter and Schmidhuber, 1997; Weston et al., 2015; Le et al., 2020, inter alia).
167
+
168
+ # 5 Experiments
169
+
170
+ We evaluate ABC models on language modeling (§5.1), sentence-level and document-level machine translation (§5.2), and masked language model finetuning (§5.3). Dataset statistics and implementation details are summarized in §C.
171
+
172
+ # 5.1 Language Modeling
173
+
174
+ Setting. We experiment with WikiText-103, sampled text from English Wikipedia (Merity et al., 2017). The BASE model with standard softmax attention is the strong transformer-based language model by Baevski and Auli (2019). We compare the following ABC variants, which build on BASE but replace the softmax attention with linear-complexity bounded-memory attention alternatives, keeping other components the same.
175
+
176
+ - $\mathrm{ABC}_{\mathrm{MLP}}$ , as described in §4, learns a contextualized exp-MLP as the $\phi$ function.
177
+ - Linformer (§3.1; Wang et al., 2020b).
178
+ - $\mathrm{ABC}_{\mathrm{RD}}$ stores each token in a randomly-selected memory slot with $\phi_t = \mathbf{e}_{i_t}$ . $i_t$ is uniformly drawn from $\{1, \dots, n\}$ at each time step. This helps us quantify the differences between random and learned bounded-memory controls.
179
+
180
+ We consider two model size settings:
181
+
182
+ - 16 layers (Baevski and Auli, 2019). All models have $\sim 242\mathrm{M}$ parameters. They train with 512-token segments, and evaluate with 0 or 480 context sizes: a 0- or 480-length prefix precedes each evaluation segment.
183
+ - 32 layers (Kasai et al., 2021b). All models have $\sim 484\mathrm{M}$ parameters. This setting applies layer dropout (Fan et al., 2020), and evaluates with a 256 context size. It aims to compare $\mathrm{ABC}_{\mathrm{MLP}}$ to several kernel-based efficient attention variants: ELU (Katharopoulos et al., 2020), RFA (Peng et al., 2021), and T2R (Kasai et al., 2021b).
184
+
185
+ Results. Table 3a compares ABC variants using Baevski and Auli (2019)'s 16-layer setting. Among ABC models, $\mathrm{ABC}_{\mathrm{MLP}}$ achieves the best performance for both context sizes. With a memory size $n = 64$, $\mathrm{ABC}_{\mathrm{MLP}}$ outperforms both Linformer and $\mathrm{ABC}_{\mathrm{RD}}$ by more than 2.9 test perplexity, and the gap is larger with the longer 480-length context: more than 3.6 test perplexity. $\mathrm{ABC}_{\mathrm{MLP}}$-32 outperforms its larger-memory ABC counterparts by more than 2.1 test perplexity. These results confirm the advantage of $\mathrm{ABC}_{\mathrm{MLP}}$'s contextualized strategy. Surprisingly, Linformer underperforms $\mathrm{ABC}_{\mathrm{RD}}$, and its performance drops with the larger 480-length context window. This suggests that, while successful in text encoding, Linformer's position-based strategy is a suboptimal design choice for causal attention, at least for long context. All ABC models underperform BASE, with $\mathrm{ABC}_{\mathrm{MLP}}$-64 having the smallest gap of 0.5 perplexity. $\mathrm{ABC}_{\mathrm{MLP}}$-32 outperforms kernel-based methods by more than 0.9 test perplexity under Kasai et al. (2021b)'s 32-layer setting (Table 3b).
+
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">n</td><td colspan="2">Dev.</td><td colspan="2">Test</td></tr><tr><td>0</td><td>480</td><td>0</td><td>480</td></tr><tr><td>BASE</td><td>-</td><td>19.8</td><td>18.4</td><td>20.5</td><td>19.0</td></tr><tr><td>Linformer</td><td>64</td><td>26.5</td><td>27.1</td><td>27.2</td><td>30.7</td></tr><tr><td>ABCRD</td><td>64</td><td>23.2</td><td>22.3</td><td>24.0</td><td>23.1</td></tr><tr><td>ABCMLP</td><td>32</td><td>21.2</td><td>19.7</td><td>21.9</td><td>20.5</td></tr><tr><td>ABCMLP</td><td>64</td><td>20.4</td><td>18.9</td><td>21.1</td><td>19.5</td></tr></table>
+
+ (a) 16-layer setting. $0/480$ indicate evaluation context sizes.
+
+ <table><tr><td>Model</td><td>n</td><td>Dev.</td><td>Test</td></tr><tr><td>†BASE</td><td>-</td><td>17.9</td><td>18.5</td></tr><tr><td>†ELU</td><td>128</td><td>22.0</td><td>22.8</td></tr><tr><td>†RFA</td><td>32</td><td>20.4</td><td>21.3</td></tr><tr><td>†T2R</td><td>32</td><td>20.1</td><td>20.8</td></tr><tr><td>ABCMLP</td><td>32</td><td>19.2</td><td>19.9</td></tr></table>
+
+ (b) 32-layer setting. A 256-length context is used at evaluation time. $\dagger$ numbers are due to Kasai et al. (2021b).
+
+ Table 3: WikiText-103 language modeling perplexity (lower is better). $n$ denotes the memory size. Bold numbers perform the best among linear-complexity models.
197
+
198
+ # 5.2 Machine Translation
199
+
200
+ Datasets. To assess their performance over various output lengths, we compare ABC models on sentence- and document-level machine translation.
201
+
202
+ - Sentence-level translation with WMT14 EN-DE (Bojar et al., 2014). The preprocessing and data splits follow Vaswani et al. (2017).
+ - Document-level translation with IWSLT14 ES-EN (Cettolo et al., 2014). We use Miculicich et al. (2018)'s data splits and preprocessing. Following standard practice (Voita et al., 2019), a 4-sentence sliding window is used to create the dataset, i.e., each instance has 4 sentences.
+
+ <table><tr><td>Model</td><td>Cross n</td><td>Causal n</td><td>BLEU</td></tr><tr><td>BASE</td><td>-</td><td>-</td><td>27.2</td></tr><tr><td>ABCRD</td><td>32</td><td>32</td><td>25.7</td></tr><tr><td>ABCRD</td><td>64</td><td>64</td><td>26.2</td></tr><tr><td>Linformer</td><td>32</td><td>32</td><td>26.6</td></tr><tr><td>Linformer</td><td>64</td><td>64</td><td>26.7</td></tr><tr><td>ABCMLP</td><td>32</td><td>8</td><td>27.1</td></tr><tr><td>ABCMLP</td><td>32</td><td>32</td><td>27.3</td></tr></table>
+
+ (a) Bolded number outperforms BASE.
+
+ <table><tr><td>Model</td><td>Cross n</td><td>Causal n</td><td>BLEU</td></tr><tr><td>BASE</td><td>-</td><td>-</td><td>39.9</td></tr><tr><td>Linformer</td><td>128</td><td>64</td><td>-</td></tr><tr><td>ABCRD</td><td>128</td><td>64</td><td>38.6</td></tr><tr><td>ABCMLP</td><td>128</td><td>64</td><td>39.7</td></tr></table>
+
+ (b) Linformer fails to converge even with multiple random seeds. Bold number performs the best among ABC models.
+
+ Table 4: Machine translation test SacreBLEU. Left: sentence-level translation with WMT14 EN-DE; right: document-level translation with IWSLT14 ES-EN.
216
+
217
+ Setting. We compare ABC variants as in §5.1. §C.2 further compares to the clustering-based (§3.2) and sliding-window (§3.3) ABC variants.
218
+
219
+ The BASE model they build on is our implementation of transformer-base (Vaswani et al., 2017). ABC variants replace decoder cross attention and causal attention with bounded-memory attention, while keeping softmax attention for the encoder, since its overhead is much less significant (Kasai et al., 2021a); other components are kept the same. §C.2 studies a model that replaces all softmax attention with $\mathrm{ABC_{MLP}}$ . It performs on par with BASE, confirming $\mathrm{ABC_{MLP}}$ 's broad applicability in various application scenarios. We evaluate with SacreBLEU (Post, 2018).
220
+
221
+ Results. Table 4a summarizes sentence-level machine translation results on the WMT14 EN-DE test set. Overall, $\mathrm{ABC}_{\mathrm{MLP}}$ performs on par with BASE, with either 32-32 or 32-8 cross-causal memory sizes. Even with smaller memory sizes, it outperforms other ABC variants by more than 1.1 BLEU. Unlike the trend in the language modeling experiment (§5.1), Linformer outperforms $\mathrm{ABC}_{\mathrm{RD}}$ by more than 0.5 BLEU. We attribute this to the smaller sequence lengths of this dataset. $\mathrm{ABC}_{\mathrm{MLP}}$ outperforms other ABC models by more than 0.4 BLEU, even with smaller memory sizes.
222
+
223
+ The trend is similar on document-level translation with IWSLT14 ES-EN (Table 4b), except that $\mathrm{ABC}_{\mathrm{MLP}}$ slightly underperforms BASE by 0.2 BLEU. This suggests that even with longer sequences, $\mathrm{ABC}_{\mathrm{MLP}}$ is effective despite its bounded memory size. Linformer fails to converge even with multiple random seeds, suggesting the limitations of its purely position-based strategy in tasks involving decoding varying-length text.
224
+
225
+ # 5.3 Masked Language Model Finetuning
226
+
227
+ Setting. We compare the ABC variants as in §5.1. It would be interesting to pretrain ABC from scratch, but we lack the resources to do so. Instead, we warm-start from a pretrained RoBERTa-base (Liu et al., 2019) trained with the softmax transformer, swap its attention with ABC variants, and continue pretraining with the masked language modeling (MLM) objective on a concatenation of BookCorpus (Zhu et al., 2015), English Wikipedia, OpenWebText (Gokaslan and Cohen, 2019), and RealNews (Zellers et al., 2019). The models are then finetuned and evaluated on downstream classification datasets from the GLUE benchmark (Wang et al., 2019). This is an appealing setting, since it avoids reinvesting the huge amounts of resources already put into pretraining.
228
+
229
+ Results. Table 5 compares downstream text classification performance. BASE indicates a baseline that continues pretraining RoBERTa-base on our data. Following standard practice, we report development accuracy. Linformer achieves competitive performance, aligned with Wang et al. (2020b)'s results. $\mathrm{ABC}_{\mathrm{MLP}}$ outperforms Linformer, and performs on par with or better than BASE, affirming the benefits of using contextualized memory organization in MLM. $\mathrm{ABC}_{\mathrm{RD}}$ fails to converge in continued pretraining even with multiple seeds.
+
+ <table><tr><td>Model</td><td>n</td><td>MNLI</td><td>QNLI</td><td>QQP</td><td>SST</td><td>Avg.</td></tr><tr><td>BASE</td><td>-</td><td>87.2</td><td>92.4</td><td>91.7</td><td>94.3</td><td>91.4</td></tr><tr><td>Linformer</td><td>64</td><td>85.3</td><td>91.8</td><td>90.8</td><td>92.4</td><td>90.1</td></tr><tr><td>Linformer</td><td>128</td><td>86.1</td><td>91.9</td><td>91.4</td><td>93.7</td><td>90.8</td></tr><tr><td>ABCMLP</td><td>64</td><td>85.6</td><td>91.8</td><td>91.7</td><td>93.8</td><td>90.7</td></tr><tr><td>ABCMLP</td><td>128</td><td>87.1</td><td>92.6</td><td>91.8</td><td>94.4</td><td>91.5</td></tr></table>
+
+ Table 5: Text classification development set accuracy. All models continue pretraining RoBERTa-base on our data with the MLM objective. Bold numbers perform the best among ABC models, and underlined ones perform on par with or better than BASE.
236
+
237
+ Based on the above results, we think $\mathrm{ABC}_{\mathrm{MLP}}$ can achieve competitive performance when pretrained from scratch, just as Linformer does (Wang et al., 2020b). Further empirical exploration is beyond our budget and left for future work.
238
+
239
+ # 6 Analysis
240
+
241
+ Decoding efficiency over varying sequence lengths. ABC's efficiency gains can be more prominent for long sequences. We study $\mathrm{ABC}_{\mathrm{MLP}}$'s decoding overhead with varying sequence lengths. Following Kasai et al. (2021b), we consider a sequence-to-sequence generation experiment. Three linear-complexity models are compared: RFA (with 256/128 cross/causal memory sizes; Peng et al., 2021), T2R (32/4; Kasai et al., 2021b), and $\mathrm{ABC}_{\mathrm{MLP}}$ (32/8). The sizes are chosen to maximize efficiency without an accuracy drop. T2R needs to be finetuned from a pretrained transformer to match this performance, while the others do not.
242
+
243
+ All linear-time models achieve consistent decoding speed for different lengths (Figure 1a), substantially outpacing the softmax attention baseline, especially for long sequences. In particular, $\mathrm{ABC}_{\mathrm{MLP}}$ decodes $\sim 1.25$ times faster than RFA, another competitive model that can match the transformer's accuracy without a warm start from a pretrained model. This can be attributed to the fact that $\mathrm{ABC}_{\mathrm{MLP}}$ achieves similar accuracy with a much smaller memory. T2R's memory sizes are similar to $\mathrm{ABC}_{\mathrm{MLP}}$'s, but it decodes about $20\%$ faster. This is because it does not compute the softmax when calculating the attention output, while $\mathrm{ABC}_{\mathrm{MLP}}$ does (Eq. 4). These results show that $\mathrm{ABC}_{\mathrm{MLP}}$ is an appealing modeling choice for decoding tasks, especially when training from scratch is desired.
+
+ ![](images/923142fc51328c0d7e65ae5d2e884d74e92245d0e28bd2d89f783eb9c9bb75b2.jpg)
+ (a) Decoding speed.
+
+ ![](images/00cb2bc3d969902934c975f32ac77ef9a1952aded12123a2fe0a694c05e9b17b.jpg)
+ (b) Decoding memory overhead.
+
+ Figure 1: Sequence-to-sequence decoding speed (top) and memory consumption (bottom) with varying sequence lengths. Greedy decoding is used, with batch size 16.
253
+
254
+ $\mathrm{ABC}_{\mathrm{MLP}}$ also achieves significant savings in memory overhead (Figure 1b); the curves for $\mathrm{ABC}_{\mathrm{MLP}}$, RFA, and T2R are similar.
255
+
256
+ Text encoding efficiency. We compare the efficiency of $\mathrm{ABC_{MLP}}$ against softmax attention and Linformer when used as text encoders. The model sizes mirror those in the MLM experiment (§5.3). Table 6 summarizes inference time and memory overhead with 512-length inputs and batch size 16, measured on the same V100 GPU. Both $\mathrm{ABC_{MLP}}$ and Linformer achieve inference speed gains and memory savings over BASE. Linformer is faster, since its linear projection is cheaper to compute than $\mathrm{ABC_{MLP}}$'s MLP; the trend in memory overhead is similar.
257
+
258
+ Although $\mathrm{ABC}_{\mathrm{MLP}}$ slightly underperforms Linformer in terms of inference speed, it can be a more appealing architectural choice in practice: in all of our 5 experiments, $\mathrm{ABC}_{\mathrm{MLP}}$ outperforms other ABC models in accuracy. Linformer, in contrast, fails to converge or yields sub-optimal performance on some tasks. This confirms $\mathrm{ABC}_{\mathrm{MLP}}$'s flexibility and applicability in various settings.
+
+ <table><tr><td rowspan="2">n</td><td rowspan="2">BASE -</td><td colspan="2">Linformer</td><td colspan="2">ABCMLP</td></tr><tr><td>64</td><td>128</td><td>64</td><td>128</td></tr><tr><td>Speed</td><td>1.0×</td><td>1.7×</td><td>1.5×</td><td>1.5×</td><td>1.3×</td></tr><tr><td>Memory</td><td>1.0×</td><td>0.5×</td><td>0.6×</td><td>0.5×</td><td>0.6×</td></tr></table>
+
+ Table 6: Text encoding inference speed (higher is better) and memory (lower is better). Inputs are text segments with 512 tokens and batch size 16.
+
+ <table><tr><td rowspan="2" colspan="2"></td><td colspan="4">Cross n</td></tr><tr><td>8</td><td>16</td><td>32</td><td>64</td></tr><tr><td rowspan="4">Causal n</td><td>8</td><td>24.7</td><td>25.2</td><td>25.6</td><td>25.5</td></tr><tr><td>16</td><td>-</td><td>25.4</td><td>25.7</td><td>25.6</td></tr><tr><td>32</td><td>-</td><td>-</td><td>25.7</td><td>25.8</td></tr><tr><td>64</td><td>-</td><td>-</td><td>-</td><td>25.8</td></tr></table>
+
+ Table 7: $\mathrm{ABC}_{\mathrm{MLP}}$'s SacreBLEU on the WMT14 EN-DE development data with varying memory sizes.
269
+
270
+ Memory size's impact on accuracy. Practically, one may want to minimize the memory size to improve efficiency. We use the WMT14 EN-DE experiment to investigate how memory size affects accuracy. Using §5.2's setup, we vary $\mathrm{ABC}_{\mathrm{MLP}}$'s cross and causal attention memory sizes and compare translation quality on the development data. The sizes are selected from $\{8, 16, 32, 64\}$, with cross attention's equal to or larger than causal's, since cross attention is more important than causal attention in machine translation (Michel et al., 2019). Our results (Table 7) align with this observation: when the cross attention memory is large enough, reducing the causal attention memory size from 64 to 8 incurs only a minor 0.3 BLEU drop. Surprisingly, $\mathrm{ABC}_{\mathrm{MLP}}$ with 8-8 sized cross-causal memory is only 1.1 BLEU behind the best-performing configuration.
271
+
272
+ # 7 Conclusion
273
+
274
+ We presented attention with bounded-memory control (ABC). It provides a unified perspective of several recently-proposed models, and shows that they vary in the organization of the bounded memory. ABC reveals new insights into established methods and inspires new architectures. We proposed $\mathrm{ABC}_{\mathrm{MLP}}$, a particular instance of ABC that learns a contextualized memory control. On language modeling, machine translation, and masked language model finetuning, $\mathrm{ABC}_{\mathrm{MLP}}$ outperforms previous ABC models. Compared to the strong transformer baseline, $\mathrm{ABC}_{\mathrm{MLP}}$ achieves substantial efficiency improvements with no or negligible accuracy loss.
277
+
278
+ # Acknowledgments
279
+
280
+ We would like to thank the ARK group at the University of Washington for their helpful feedback, and the anonymous reviewers for their thoughtful comments. This work was supported in part by NSF grant 2113530 and a Google Fellowship. Nikolaos Pappas was supported by the Swiss National Science Foundation grant P400P2_183911.
281
+
282
+ # References
283
+
284
+ Joshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. 2020. ETC: Encoding long and structured inputs in transformers. In Proc. of EMNLP.
285
+ Alexei Baevski and Michael Auli. 2019. Adaptive input representations for neural language modeling. In Proc. of ICLR.
286
+ Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR.
287
+ Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer.
288
+ Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleš Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proc. of WMT.
289
+ Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT evaluation campaign. In Proc. of IWSLT.
290
+ Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, and Adrian Weller. 2021. Rethinking attention with performers. In Proc. of ICLR.
291
+ Kornél Csernai. 2017, accessed September 1, 2020. First Quora Dataset Release: Question Pairs.
292
+ Angela Fan, Edouard Grave, and Armand Joulin. 2020. Reducing transformer depth on demand with structured dropout. In Proc. of ICLR.
293
+ Aaron Gokaslan and Vanya Cohen. 2019. Openwebtext corpus. http://Skylion007.github.io/OpenWebTextCorpus.
294
+
295
+ Anirudh Goyal, Aniket Didolkar, Alex Lamb, Kartikeya Badola, Nan Rosemary Ke, Nasim Rahaman, Jonathan Binas, Charles Blundell, Michael Mozer, and Yoshua Bengio. 2021. Coordination among neural modules through a shared global workspace.
296
+ Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural Turing machines.
297
+ Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. 2015. Learning to transduce with unbounded memory. In Proc. of NeurIPS.
298
+ Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation.
299
+ Armand Joulin and Tomás Mikolov. 2015. Inferring algorithmic patterns with stack-augmented recurrent nets. In Proc. of NeurIPS.
300
+ Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah A. Smith. 2021a. Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation. In Proc. of ICLR.
301
+ Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, and Noah A. Smith. 2021b. Finetuning pretrained transformers into RNNs. In Proc. of EMNLP.
302
+ Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and Francois Fleuret. 2020. Transformers are RNNs: Fast autoregressive transformers with linear attention. In Proc. of ICML.
303
+ Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. In Proc. of ICLR.
304
+ Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL.
305
+ Hung Le, Truyen Tran, and Svetha Venkatesh. 2020. Self-attentive associative memory. In Proc. of ICML.
306
+ Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. 2019. Set transformer: A framework for attention-based permutation-invariant neural networks. In Proc. of ICML.
307
+ Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, and Yoav Artzi. 2018. Simple recurrent units for highly parallelizable recurrence. In Proc. of EMNLP.
308
+ Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating Wikipedia by summarizing long sequences. In Proc. of ICLR.
309
+
310
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach.
311
+ Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, and Luke Zettlemoyer. 2021. Luna: Linear unified nested attention. In Proc. of NeurIPS.
312
+ Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In Proc. of ICLR.
313
+ Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In Proc. of NeurIPS.
314
+ Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention networks. In Proc. of EMNLP.
315
+ Sebastian Nagel. 2016. News dataset available. https://commoncrawl.org/2016/10/news-dataset-available/.
316
+ Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. 2018. Image transformer. In Proc. of ICML.
317
+ Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah Smith, and Lingpeng Kong. 2021. Random feature attention. In Proc. of ICLR.
318
+ Hao Peng, Roy Schwartz, Sam Thomson, and Noah A. Smith. 2018. Rational recurrences. In Proc. of EMNLP.
319
+ Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proc. of WMT.
320
+ Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Chloe Hillier, and Timothy P. Lillicrap. 2020. Compressive transformers for long-range sequence modelling. In Proc. of ICLR.
321
+ Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proc. of EMNLP.
322
+ Aurko Roy, Mohammad Taghi Saffar, David Grangier, and Ashish Vaswani. 2020. Efficient content-based sparse attention with routing transformers. TACL.
323
+ Imanol Schlag, Kazuki Irie, and Jürgen Schmidhuber. 2021. Linear transformers are secretly fast weight programmers. In Proc. of ICML.
324
+ Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proc. of ACL.
325
+
326
+ Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. of EMNLP.
327
+ Sainbayar Sukhbaatar, Da Ju, Spencer Poff, Stephen Roller, Arthur Szlam, Jason Weston, and Angela Fan. 2021. Not all memories are created equal: Learning to forget by expiring. In Proc. of ICML.
328
+ Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. 2020. Efficient transformers: A survey.
329
+ Trieu H. Trinh and Quoc V. Le. 2018. A simple method for commonsense reasoning.
330
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. of NeurIPS.
331
+ Elena Voita, Rico Sennrich, and Ivan Titov. 2019. When a good translation is wrong in context: Context-aware machine translation improves on deixis, ellipsis, and lexical cohesion. In Proc. of ACL.
332
+ Apoorv Vyas, Angelos Katharopoulos, and François Fleuret. 2020. Fast transformers with clustered attention. In Proc. of NeurIPS.
333
+ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proc. of ICLR.
334
+ Shuohang Wang, Luowei Zhou, Zhe Gan, Yen-Chun Chen, Yuwei Fang, Siqi Sun, Yu Cheng, and Jingjing Liu. 2020a. Cluster-Former: Clustering-based sparse transformer for long-range dependency encoding. Findings of ACL.
335
+ Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, and Hao Ma. 2020b. Linformer: Self-attention with linear complexity.
336
+ Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In Proc. of ICLR.
337
+ Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proc. of NAACL.
338
+ Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proc. of NAACL.
339
+ Dani Yogatama, Yishu Miao, Gabor Melis, Wang Ling, Adhiguna Kuncoro, Chris Dyer, and Phil Blunsom. 2018. Memory architectures in recurrent neural network language models. In Proc. of ICLR.
340
+
341
+ Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big bird: Transformers for longer sequences. In Proc. of NeurIPS.
342
+ Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In Proc. of NeurIPS.
343
+ Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proc. of ICCV.
344
+
345
+ # Appendices
346
+
347
+ # A Other ABC Models
348
+
349
+ # A.1 Sparse Local-to-global Attention
350
+
351
+ This approach sparsifies the attention pattern to reduce the number of tokens that are attended to (Beltagy et al., 2020; Zaheer et al., 2020, inter alia). All queries attend to a subset of $n < N$ "global tokens," while ignoring others, so the effective context size is reduced to $n$. The global tokens are usually pre-selected by position according to heuristics. Local-to-global attention is an instance of ABC: it is recovered by letting $\phi_t = \mathbf{e}_i$ if $x_t$ is the $i$th global token $(i = 1, \dots, n)$, and the zero vector otherwise.
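As an illustration, the control matrix for local-to-global attention can be built explicitly; this is a toy sketch with hypothetical global positions:

```python
import numpy as np

N, n = 10, 3
global_pos = [0, 4, 8]                 # pre-selected global token positions (made up)
Phi = np.zeros((N, n))
for slot, pos in enumerate(global_pos):
    Phi[pos, slot] = 1.0               # phi_t = e_i for the i-th global token
# all other rows stay zero: non-global tokens never enter the memory
K = np.random.default_rng(0).standard_normal((N, 4))
K_mem = Phi.T @ K                      # (n, d): the memory holds exactly the global keys
```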
352
+
353
+ # A.2 Random Memory Control
354
+
355
+ As a baseline, $\mathrm{ABC}_{\mathrm{RD}}$ stores each token in a randomly-selected memory slot. This is achieved by letting $\phi_t = \mathbf{e}_{i_t}$, where $i_t$ is uniformly drawn from $\{1,\dots,n\}$ for each $t$. Comparing it against $\mathrm{ABC}_{\mathrm{MLP}}$ and Linformer quantifies the differences between random and learned bounded-memory control.
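A toy sketch of this control (variable names are ours): tokens assigned to the same slot are simply summed together in the memory.

```python
import numpy as np

N, n, d = 8, 4, 5
rng = np.random.default_rng(0)
slots = rng.integers(0, n, size=N)   # i_t drawn uniformly for each timestep
Phi = np.eye(n)[slots]               # (N, n): phi_t = e_{i_t}, a one-hot row
K = rng.standard_normal((N, d))
K_mem = Phi.T @ K                    # tokens sharing a slot are summed into it
```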
356
+
357
+ Random sparse attention patterns are explored by Zaheer et al. (2020), where a subset of $n < N$ tokens is randomly selected to be attended to by all tokens. $\mathrm{ABC}_{\mathrm{RD}}$ is different: it attends to all tokens, but randomly "squashes" them into an $n$-slot memory.
358
+
359
+ # A.3 Compressive Transformer with Mean Pooling
360
+
361
+ The compressive transformer (Rae et al., 2020) explores various ways to "squash" long context into smaller and more compact representations. It achieves state-of-the-art performance on several language modeling benchmarks. We show that at least the mean-pooling variant of the compressive transformer can be seen as an ABC instance.
362
+
363
+ The mean-pooling variant of the compressive transformer compresses the context by
364
+
365
+ $$
366
+ \mathbf{K} = \left[ \mathbf{k}_1, \dots, \mathbf{k}_N \right]^{\top} \in \mathbb{R}^{N \times d} \;\rightarrow\; \widetilde{\mathbf{K}} = \Big[ (\mathbf{k}_1 + \cdots + \mathbf{k}_c)/c, \; (\mathbf{k}_{c+1} + \cdots + \mathbf{k}_{2c})/c, \; \dots, \; (\mathbf{k}_{N-c+1} + \cdots + \mathbf{k}_N)/c \Big]^{\top} \in \mathbb{R}^{n \times d},
367
+ $$
368
+
369
+ where $c = N / n$ is the compression ratio. Here $N \bmod n = 0$ is assumed; otherwise the sequence can be padded so that it holds.
370
+
371
+ The above model is an ABC instance by letting
372
+
373
+ $$
374
+ \phi_ {i} = \mathbf {e} _ {\lfloor (i - 1) / c \rfloor + 1} / c. \tag {8}
375
+ $$
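A quick numerical check that Eq. 8 recovers the mean-pooled memory (Eq. 8 is 1-indexed; the code below uses the 0-indexed form):

```python
import numpy as np

N, n = 12, 3
c = N // n                              # compression ratio
K = np.random.default_rng(0).standard_normal((N, 4))
# Eq. 8: phi_i = e_{floor((i-1)/c)+1} / c, written 0-indexed:
Phi = np.zeros((N, n))
for i in range(N):
    Phi[i, i // c] = 1.0 / c
K_mem = Phi.T @ K                       # (n, d): segment-wise means of the keys
```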
376
+
377
+ # A.4 Dilated Convolution Attention Patterns
378
+
379
+ The dilated attention pattern is similar to the sliding window attention and only considers the context within a predefined window. It differs in that it attends to every other token:
380
+
381
+ $$
382
+ \widetilde {\mathbf {K}} _ {t} = \left[ \mathbf {k} _ {t - 2 n + 2}, \mathbf {k} _ {t - 2 n + 4}, \dots , \mathbf {k} _ {t - 2}, \mathbf {k} _ {t} \right] ^ {\top}. \tag {9}
383
+ $$
384
+
385
+ It can be simulated with two separate queues $\widetilde{\mathbf{K}}^{\mathrm{odd}}$ and $\widetilde{\mathbf{K}}^{\mathrm{even}}$ :
386
+
387
+ $$
388
+ \widetilde{\mathbf{K}}_t^{\mathrm{odd}} = \begin{cases} \mathbf{U} \widetilde{\mathbf{K}}_{t-1}^{\mathrm{odd}} + \mathbf{e}_n \otimes \mathbf{k}_t, & \text{if } t \text{ is odd} \\ \widetilde{\mathbf{K}}_{t-1}^{\mathrm{odd}}, & \text{otherwise} \end{cases}
389
+ $$
390
+
391
+ $$
392
+ \widetilde {\mathbf {K}} _ {t} ^ {\mathrm {even}} = \left\{ \begin{array}{l l} \mathbf {U} \widetilde {\mathbf {K}} _ {t - 1} ^ {\mathrm {even}} + \mathbf {e} _ {n} \otimes \mathbf {k} _ {t}, & \text {if } t \text { is even} \\ \widetilde {\mathbf {K}} _ {t - 1} ^ {\mathrm {even}}, & \text {otherwise} \end{array} \right.
393
+ $$
394
+
395
+ Likewise for the values. Depending on $t$ , the query attends to one of the two queues: output =
396
+
397
+ $$
398
+ \left\{ \begin{array}{l l} \left(\widetilde {\mathbf {V}} ^ {\mathrm {odd}}\right) ^ {\top} \operatorname {softmax} (\widetilde {\mathbf {K}} ^ {\mathrm {odd}} \mathbf {q} _ {t}), & \text {if } t \text { is odd} \\ \left(\widetilde {\mathbf {V}} ^ {\mathrm {even}}\right) ^ {\top} \operatorname {softmax} (\widetilde {\mathbf {K}} ^ {\mathrm {even}} \mathbf {q} _ {t}), & \text {otherwise.} \end{array} \right.
399
+ $$
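The two-queue construction above can be sketched in a few lines. The following is a conceptual illustration with made-up shapes and data (not an efficient or official implementation); the shift matrix $\mathbf{U}$ and the outer product with $\mathbf{e}_n$ reduce to dropping the oldest slot and appending the newest key:

```python
import numpy as np

# Sketch of the odd/even bounded queues for the dilated attention pattern.
n, d = 3, 4
rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

K_odd = np.zeros((n, d));  V_odd = np.zeros((n, d))
K_even = np.zeros((n, d)); V_even = np.zeros((n, d))

for t in range(1, 9):                       # positions 1..8
    k, v, q = rng.normal(size=(3, d))
    if t % 2 == 1:                          # update the odd queue only
        K_odd = np.vstack([K_odd[1:], k])   # U shifts slots; e_n appends k_t
        V_odd = np.vstack([V_odd[1:], v])
        Kq, Vq = K_odd, V_odd
    else:                                   # update the even queue only
        K_even = np.vstack([K_even[1:], k])
        V_even = np.vstack([V_even[1:], v])
        Kq, Vq = K_even, V_even
    out = Vq.T @ softmax(Kq @ q)            # attend within the matching queue
```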
400
+
401
+ The above implementation could incur a considerable amount of overhead and may actually be more expensive than the original dilated-window formulation. It therefore has more conceptual than practical value.
402
+
403
+ # A.5 Shared Workspace and Linear Unified Nested Attention
404
+
405
+ Concurrently with this work, shared workspace (SW; Goyal et al., 2021) and linear unified nested attention (LUNA; Ma et al., 2021) also proposed methods to learn contextualized memory control strategies. Both can be seen as instances of ABC. At layer $\ell$ , their $\phi_i^\ell$ is a function of the previous layer's memory $\widetilde{\mathbf{X}}^{\ell -1} \in \mathbb{R}^{n \times d}$ and the current layer's input $\mathbf{X}^\ell \in \mathbb{R}^{N \times d}$ :
406
+
407
+ $$
408
+ \phi_ {i} = \left[ \operatorname {s o f t m a x} \left(\widetilde {\mathbf {X}} ^ {\ell - 1} \mathbf {X} ^ {\ell^ {\top}}\right) \right] _ {:, i}, \tag {10}
409
+ $$
410
+
411
+ where $[\cdot ]_{:,i}$ denotes the $i$th column of a matrix. Query, key, and value projections are suppressed for notational clarity.
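Under the (assumed) convention that the softmax in Eq. 10 normalizes over the $n$ memory slots, the construction can be sketched as follows (illustrative shapes, not either paper's code):

```python
import numpy as np

# Sketch of Eq. 10: SW/LUNA derive the control vectors from the previous
# layer's memory, so every phi_i depends on the whole sequence.
N, n, d = 5, 3, 4
rng = np.random.default_rng(4)
X_mem_prev = rng.normal(size=(n, d))   # previous layer's memory X~^{l-1}
X_cur = rng.normal(size=(N, d))        # current layer's input X^l

scores = X_mem_prev @ X_cur.T          # (n, N)
A = np.exp(scores - scores.max(axis=0, keepdims=True))
A = A / A.sum(axis=0, keepdims=True)   # softmax over the n memory slots
phi_1 = A[:, 0]                        # phi_i is the i-th column of A
```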
412
+
413
+ SW and LUNA reveal the entire sequence to the control vectors by constructing $\phi$ as a function of the previous layer's memory. Although both admit recurrent computation, as all ABC models do, they are ill-suited for causal attention and autoregressive decoding, since future information is "leaked" to $\phi_{i}$ from the previous layer. LUNA resorts to a variant of Katharopoulos et al. (2020) for causal attention (Ma et al., 2021). In contrast, $\mathrm{ABC}_{\mathrm{MLP}}$ never conditions $\phi_{i}$ on the previous layer's memory, but only on the current layer's input.
414
+
415
+ # B More Details about ABC-MLP
416
+
417
+ # B.1 Normalization in Causal Attention
418
+
419
+ An equivalent implementation to Eq. 7 is to normalize $\widetilde{\mathbf{K}}$ and $\widetilde{\mathbf{V}}$ instead of $\phi_{i}$ vectors:
420
+
421
+ $$
422
+ \boldsymbol {\alpha} _ {i} = \exp \left(\mathbf {W} _ {\phi} \mathbf {x} _ {i}\right), \quad \phi_ {i} = \boldsymbol {\alpha} _ {i},
423
+ $$
424
+
425
+ $$
426
+ \bar {\mathbf {K}} = \widetilde {\mathbf {K}} \Big/ \sum_ {j = 1} ^ {N} \boldsymbol {\alpha} _ {j}, \qquad \bar {\mathbf {V}} = \widetilde {\mathbf {V}} \Big/ \sum_ {j = 1} ^ {N} \boldsymbol {\alpha} _ {j},
427
+ $$
428
+
429
+ $\mathrm{output} = \bar{\mathbf{V}}^{\top}\operatorname{softmax}(\bar{\mathbf{K}}\mathbf{q})$.
430
+
431
+ Here $\mathbf{M} / \mathbf{z}$ divides the $\ell$th row of matrix $\mathbf{M}$ by the $\ell$th dimension of vector $\mathbf{z}$ . This admits a linear-complexity computation graph for the causal variant of $\mathrm{ABC}_{\mathrm{MLP}}$ .
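The equivalence between the two normalizations can be checked numerically. The sketch below (illustrative shapes, not the released code) compares normalizing the $\phi_i$ against dividing the accumulated memory by $\sum_j \boldsymbol{\alpha}_j$:

```python
import numpy as np

# Sketch: the two normalizations in B.1 yield identical memories, since
# dividing row l of K~ by sum_j alpha_j[l] equals using softmax-normalized
# phi_i in the first place.
N, n, d = 6, 3, 4
rng = np.random.default_rng(2)
X = rng.normal(size=(N, d))      # inputs x_i
K = rng.normal(size=(N, d))      # keys k_i
W_phi = rng.normal(size=(n, d))

alpha = np.exp(X @ W_phi.T)      # alpha_i = exp(W_phi x_i), shape (N, n)

# Variant 1: normalize phi_i across positions (per memory slot).
phi = alpha / alpha.sum(axis=0)
K_tilde_1 = phi.T @ K            # sum_i phi_i (outer) k_i

# Variant 2: accumulate unnormalized, then divide each row by sum_j alpha_j.
K_tilde_2 = (alpha.T @ K) / alpha.sum(axis=0)[:, None]

assert np.allclose(K_tilde_1, K_tilde_2)
```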
432
+
433
+ # B.2 Higher-Dimensional Case of Example 1
434
+
435
+ This section generalizes Example 1 to higher-dimensional cases. Assume that the constant-sized memory has $n$ slots and $\phi_{i}$ is calculated as in Eq. 7. Then $\widetilde{\mathbf{K}} = \sum_{i = 1}^{N}\phi_{i}\otimes \mathbf{k}_{i}\in \mathbb{R}^{n\times d}$ . Each row of $\widetilde{\mathbf{K}}$ can be seen as a separate attention mechanism with a pseudo-query. Let $[\cdot ]_{\ell}$ denote the $\ell$ th row/dimension of a matrix/vector. Then for any $\ell = 1,\ldots ,n$ ,
436
+
437
+ $$
438
+ \begin{array}{l} \left[ \widetilde {\mathbf {K}} \right] _ {\ell} = \sum_ {i = 1} ^ {N} \left[ \phi_ {i} \right] _ {\ell} \mathbf {k} _ {i} ^ {\top} \\ = \sum_ {i = 1} ^ {N} \frac {\exp \left(\left[ \mathbf {W} _ {\phi} \right] _ {\ell} \cdot \mathbf {x} _ {i}\right)}{\sum_ {j = 1} ^ {N} \exp \left(\left[ \mathbf {W} _ {\phi} \right] _ {\ell} \cdot \mathbf {x} _ {j}\right)} \mathbf {k} _ {i} ^ {\top} \\ = \mathrm {a t t n} \left([ \mathbf {W} _ {\phi} ] _ {\ell}, \{\mathbf {x} _ {i} \} _ {i = 1} ^ {N}, \{\mathbf {k} _ {i} \} _ {i = 1} ^ {N}\right) ^ {\top} \in \mathbb {R} ^ {1 \times d}. \\ \end{array}
439
+ $$
440
+
441
+ In other words, there are $n$ attention mechanisms in total, each with a separately-parameterized pseudo-query $[\mathbf{W}_{\phi}]_{\ell}$ . They summarize the context $n$ times in parallel, each producing a $d$ -dimensional vector. These output vectors are then stacked into the $n$ -by- $d$ memory $\widetilde{\mathbf{K}}$ . $\widetilde{\mathbf{V}}$ is analogous.
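This view can be verified directly: each row of $\widetilde{\mathbf{K}}$ matches an attention call with pseudo-query $[\mathbf{W}_\phi]_\ell$. A minimal sketch with illustrative shapes (not the authors' code):

```python
import numpy as np

# Sketch: each memory row of ABC_MLP equals attention with a learned
# pseudo-query [W_phi]_l over the inputs, as derived above.
N, n, d = 6, 3, 4
rng = np.random.default_rng(3)
X = rng.normal(size=(N, d))      # inputs x_i
K = rng.normal(size=(N, d))      # keys k_i
W_phi = rng.normal(size=(n, d))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

phi = np.exp(X @ W_phi.T)
phi = phi / phi.sum(axis=0)      # normalize over positions (Eq. 7)
K_tilde = phi.T @ K              # (n, d) memory

for l in range(n):               # row l == attention with pseudo-query l
    attn_l = softmax(X @ W_phi[l]) @ K
    assert np.allclose(K_tilde[l], attn_l)
```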
442
+
443
+ # C Experimental Details
444
+
445
+ # C.1 Language Modeling
446
+
447
+ We closely build on Baevski and Auli (2019) and Kasai et al. (2021b). The hyperparameters are summarized in Table 10. All models are trained on 4 A100 GPUs.
448
+
449
+ # C.2 Machine Translation
450
+
451
+ We experiment with a sentence-level (WMT14 EN-DE, Bojar et al., 2014) and a document-level benchmark (IWSLT14 ES-EN, Cettolo et al., 2014) to assess model performance over various sequence lengths. The preprocessing and data splits of WMT14 EN-DE follow Vaswani et al. (2017). A 32,768 byte pair encoding (BPE; Sennrich et al., 2016) vocabulary is shared between source and target languages. For IWSLT14, we follow Miculicich et al. (2018) and use the dev2010 subset for development and tst2010-2012 for testing. The tokenization is also the same as Miculicich et al. (2018): we tokenize and truecase Spanish and English with Moses (Koehn et al., 2007) and run byte-pair encoding with 30k splits, shared between the two languages. The final dataset contains 1421, 8, and 42 documents for training, development, and testing. On average, each document contains 126.7 sentences, and each sentence contains 21.7(ES)/22.5(EN) BPE subwords. We use a sliding window of length 4 with stride 1 to generate our dataset. During inference, we use predicted context on the target side.
452
+
453
+ We average the checkpoints from the last five epochs to obtain the final model (Vaswani et al., 2017). In inference, we apply beam search with size 5 and length penalty 0.6. Other hyperparameters are summarized in Table 11. All models are trained on 4 RTX 2080 Ti GPUs.
454
+
455
+ Additional machine translation results. In addition to the results presented in §5.2, Table 8 further compares, on the WMT14 EN-DE dataset, the clustering-based (§3.2) and sliding-window (§3.3) variants of ABC, as well as ReLU and sigmoid variants of $\mathsf{ABC}_{\mathsf{MLP}}$ . The clustering and sliding-window ABC variants underperform $\mathsf{ABC}_{\mathsf{MLP}}$ with the same memory sizes by 0.5 BLEU or more. Both ReLU and sigmoid underperform their exp counterpart.
456
+
457
+ MLP-exp-all replaces the encoder's softmax attention modules with ABC, in addition to the decoder's. It underperforms $\mathrm{ABC}_{\mathrm{MLP}}$ by only 0.3 BLEU.
458
+
459
+ <table><tr><td>Model</td><td>φ</td><td>Cross n</td><td>Causal n</td><td>Encoder n</td><td>BLEU</td></tr><tr><td>BASE</td><td>-</td><td>-</td><td>-</td><td>-</td><td>27.2</td></tr><tr><td rowspan="9">ABC</td><td>Window</td><td>32</td><td>32</td><td>-</td><td>26.3</td></tr><tr><td>Cluster</td><td>32</td><td>32</td><td>-</td><td>26.8</td></tr><tr><td>MLP-ReLU</td><td>32</td><td>8</td><td>-</td><td>-</td></tr><tr><td>MLP-ReLU</td><td>32</td><td>32</td><td>-</td><td>26.4</td></tr><tr><td>MLP-sigmoid</td><td>32</td><td>8</td><td>-</td><td>26.8</td></tr><tr><td>MLP-sigmoid</td><td>32</td><td>32</td><td>-</td><td>27.0</td></tr><tr><td>MLP-exp</td><td>32</td><td>8</td><td>-</td><td>27.1</td></tr><tr><td>MLP-exp</td><td>32</td><td>32</td><td>-</td><td>27.3</td></tr><tr><td>MLP-exp-all</td><td>32</td><td>32</td><td>32</td><td>27.0</td></tr></table>
460
+
461
+ Table 8: ABC variants' performance (SacreBLEU) on the WMT14 EN-DE test set for sentence-level machine translation. MLP-ReLU with 32/8 memory sizes fails to converge. MLP-exp-all applies ABC in both the encoder and the decoder, while the others apply it only in the decoder.
462
+
463
+ Figure 1b compares $\mathrm{ABC}_{\mathrm{MLP}}$ 's (32-8 memory sizes) attention memory overhead with softmax attention's. Following Kasai et al. (2021b), we consider a synthetic sequence-to-sequence generation task with varying sequence lengths. A batch size of 16 and greedy decoding is used. The models are of the same size as those in §5.2.
464
+
465
+ # C.3 Masked Language Model Finetuning
466
+
467
+ Our data for continued pretraining is a concatenation of BookCorpus (Zhu et al., 2015), English Wikipedia, OpenWebText (Gokaslan and Cohen, 2019), and RealNews (Zellers et al., 2019). Our data differs from RoBERTa's pretraining data, which we do not have access to. We replace their CC-News (Nagel, 2016) with RealNews, and drop Stories (Trinh and Le, 2018); at the time of this project, public access to the Stories dataset was broken.[10] Our machine does not have a large enough memory to load all the data, so we split the training data into 20 shards after shuffling. Other preprocessing is the same as Liu et al. (2019).[11] The hyperparameters for continued pretraining follow base-sized RoBERTa, part of which are summarized in Table 12. All models are trained on a single TPU v3 accelerator.
468
+
469
+ For downstream task finetuning, we use the same
470
+
471
+ hyperparameters as Liu et al. (2019).<sup>12</sup> Table 13 briefly describes the tasks. Readers are referred to Wang et al. (2019) for further details.
472
+
473
+ <table><tr><td>Data</td><td>Train</td><td>Dev.</td><td>Test</td><td>Vocab.</td><td>Sent./doc</td></tr><tr><td>WikiText-103</td><td>103M</td><td>218K</td><td>246K</td><td>268K</td><td>-</td></tr><tr><td>WMT14 EN-DE</td><td>4.5M</td><td>3K</td><td>3K</td><td>32K</td><td>-</td></tr><tr><td>IWSLT14 ES-EN</td><td>1713</td><td>8</td><td>56</td><td>30K</td><td>121.5</td></tr></table>
474
+
475
+ Table 9: Statistics for the datasets. WikiText-103 split sizes are in number of tokens, WMT14 in number of sentences, and IWSLT14 in number of documents.
476
+
477
+ <table><tr><td>Hyperparams.</td><td>B&amp;A</td><td>Kasai</td></tr><tr><td># Layers</td><td>16</td><td>32</td></tr><tr><td># Heads</td><td>8</td><td>8</td></tr><tr><td>Embedding Size</td><td>1024</td><td>1024</td></tr><tr><td>Head Size</td><td>128</td><td>128</td></tr><tr><td>FFN Size</td><td>4096</td><td>4096</td></tr><tr><td>Batch Size</td><td>64</td><td>64</td></tr><tr><td>Learning Rate</td><td>1.0</td><td>1.0</td></tr><tr><td>Dropout</td><td>0.3</td><td>0.3</td></tr><tr><td>Layer Dropout</td><td>-</td><td>0.2</td></tr><tr><td>Memory size</td><td>[32, 64]</td><td>64</td></tr></table>
478
+
479
+ Table 10: Hyperparameters used in the language modeling experiments. B&A: Baevski and Auli (2019); Kasai: Kasai et al. (2021b).
480
+
481
+ <table><tr><td>Hyperparams.</td><td>WMT14</td><td>IWSLT14</td></tr><tr><td># Layers</td><td>6</td><td>6</td></tr><tr><td># Heads</td><td>8</td><td>8</td></tr><tr><td>Embedding Size</td><td>512</td><td>512</td></tr><tr><td>Head Size</td><td>64</td><td>64</td></tr><tr><td>FFN Size</td><td>2048</td><td>1024</td></tr><tr><td>Warmup Steps</td><td>6000</td><td>4000</td></tr><tr><td>Dropout</td><td>0.1</td><td>0.3</td></tr><tr><td>Cross Attn. n</td><td>32</td><td>128</td></tr><tr><td>Causal Attn. n</td><td>8</td><td>64</td></tr></table>
482
+
483
+ Table 11: Hyperparameters used in the machine translation experiments.
484
+
485
+ <table><tr><td>Hyperparams.</td><td>Values</td></tr><tr><td># Layers</td><td>12</td></tr><tr><td># Heads</td><td>12</td></tr><tr><td>Embedding Size</td><td>768</td></tr><tr><td>Head Size</td><td>64</td></tr><tr><td>FFN Size</td><td>3072</td></tr><tr><td>Dropout</td><td>0.1</td></tr><tr><td>Memory Size</td><td>[64, 128]</td></tr></table>
486
+
487
+ Table 12: Hyperparameters for continued pretraining in the masked language model finetuning experiments.
488
+
489
+ <table><tr><td>Data</td><td>Task</td><td>Train</td><td>Dev.</td></tr><tr><td>MNLI</td><td>Entailment</td><td>392K</td><td>9.8K</td></tr><tr><td>QNLI</td><td>Entailment</td><td>105K</td><td>5.5K</td></tr><tr><td>QQP</td><td>Paraphrase</td><td>363K</td><td>40K</td></tr><tr><td>SST-2</td><td>Sentiment</td><td>67K</td><td>873</td></tr></table>
490
+
491
+ Table 13: GLUE datasets and statistics. MNLI: Williams et al. (2018); QNLI is compiled by GLUE's authors using Rajpurkar et al. (2016); QQP: Csernai (2017, accessed September 1, 2020); SST-2: Socher et al. (2013).
abcattentionwithboundedmemorycontrol/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2cc3c202f904c4379f6c715884fb4b0e3c5a146ba0c07295aabdfbf975c644ee
3
+ size 569880
abcattentionwithboundedmemorycontrol/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a1932cf95d144d87e710c51efb208f75fa89b5f61c585cd0bfb624e43b0fc344
3
+ size 675351
acceleratingcodesearchwithdeephashingandcodeclassification/995cb5a3-7043-440d-8e30-2e5ef65f0c39_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e4c58deb93ed24478726419898949609975b955303226aa218d2e4bf0f04213b
3
+ size 73965
acceleratingcodesearchwithdeephashingandcodeclassification/995cb5a3-7043-440d-8e30-2e5ef65f0c39_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9b67ec3aa903f5d9505046a308fd2965c96bab2275f92c242f3dd2ae10905dff
3
+ size 86935
acceleratingcodesearchwithdeephashingandcodeclassification/995cb5a3-7043-440d-8e30-2e5ef65f0c39_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0e3b6a44ef31e504231cd13d0622c2b5a06cf5f53291811e2810a3d3060714d7
3
+ size 480761
acceleratingcodesearchwithdeephashingandcodeclassification/full.md ADDED
@@ -0,0 +1,261 @@
 
 
 
 
 
 
 
 
1
+ # Accelerating Code Search with Deep Hashing and Code Classification
2
+
3
+ Wenchao Gu $^{1*}$ , Yanlin Wang $^{2,\dagger}$ , Lun Du $^{2}$ , Hongyu Zhang $^{3}$ , Shi Han $^{2}$ , Dongmei Zhang $^{2}$ , and Michael R. Lyu $^{1}$
4
+
5
+ $^{1}$ Department of Computer Science and Engineering, The Chinese University of Hong Kong, China.
6
+
7
+ $^{2}$ Microsoft Research Asia, Beijing, China
8
+
9
+ <sup>3</sup> The University of Newcastle, Australia
10
+
11
+ # Abstract
12
+
13
+ Code search is the task of retrieving reusable code snippets from a source code corpus given natural language queries. Deep learning-based methods for code search have shown promising results. However, previous methods focused on retrieval accuracy and paid little attention to the efficiency of the retrieval process. We propose a novel method, CoSHC, to accelerate code search with deep hashing and code classification, aiming to perform efficient code search without sacrificing too much accuracy. To evaluate the effectiveness of CoSHC, we apply our method to five code search models. Extensive experimental results indicate that, compared with previous code search baselines, CoSHC can save more than $90\%$ of retrieval time while preserving at least $99\%$ of retrieval accuracy.
14
+
15
+ # 1 Introduction
16
+
17
+ Code reuse is a common practice during the software development process. It improves programming productivity, as developers' time and energy can be saved by reusing existing code. According to previous studies (Brandt et al., 2009; Lv et al., 2015), many developers tend to use natural language to describe the functionality of desired code snippets and search the Internet/code corpus for code reuse.
18
+
19
+ Many code search approaches (Brandt et al., 2009; McMillan et al., 2011; Lv et al., 2015; Du et al., 2021) have been proposed over the years. With the rapid growth of open source code bases and the development of deep learning technology, recently deep learning based approaches have become popular for tackling the code search problem (Gu et al., 2018; Husain et al., 2019; Gu et al., 2021). Some of these approaches adopt neural network models to encode source code and query descriptions into representation vectors in the same
20
+
21
+ embedding space. The distance between the representation vectors whose original code or description are semantically similar should be small. Other approaches (Feng et al., 2020; Guo et al., 2021; Du et al., 2021) regard the code search task as a binary classification task, and calculate the probability of code matching the query.
22
+
23
+ In the past, deep learning-based methods focused on retrieval accuracy but paid little attention to the efficiency of retrieval on large-scale code corpora. However, both types of deep learning-based approaches directly rank all the source code snippets in the corpus during searching, which incurs a large computational cost. For the approaches that separately encode code and description representation vectors, the similarity between the target query vector and all code representation vectors in the corpus needs to be calculated for every single retrieval. In pursuit of high retrieval accuracy, a high dimension is often chosen for the representation vectors. For example, in CodeBERT, the dimension of the final representation vector is 768, so the similarity calculation between one pair of code and query vectors takes 768 multiplications and 768 additions between two double-precision variables. The total computation of a single linear scan over a code corpus containing around 1 million code snippets is extremely large: around 1 billion multiplications and additions. As for the approaches adopting binary classification, no representation vectors are stored in advance, and the inference of the target token sequence with all the description token sequences needs to be done in real time for every single retrieval. Due to the large number of parameters in current deep learning models, this computational cost is significant.
24
+
25
+ Hashing is a promising approach to improve the retrieval efficiency and widely adopted in other retrieval tasks such as image-text search and image-image search. Hashing techniques can convert high
26
+
27
+ dimensional vectors into low dimensional binary hash codes, which greatly reduces the cost of storage and calculation (Luo et al., 2020). The Hamming distance between two binary hash codes can also be calculated very efficiently by running XOR instructions on modern computer architectures (Wang et al., 2016). However, performance degradation is unavoidable during the conversion from representation vectors to binary hash codes, even when state-of-the-art hashing models are adopted. Most users have a low tolerance for performance degradation, and few are willing to trade accuracy for efficiency. In order to preserve, as much as possible, the performance of the original code search models that adopt bi-encoders for code-query encoding, we integrate deep hashing techniques with code classification, which mitigates the performance degradation of the hashing model in the recall stage by filtering out irrelevant data.
28
+
29
+ Specifically, in this paper, we propose a novel approach, CoSHC (Accelerating Semantic Code Search with Deep Hashing and Code Classification), to accelerate the retrieval of deep learning-based code search approaches. CoSHC first clusters the representation vectors into different categories. It then generates binary hash codes for both source code and queries from the representation vectors of the original models. Finally, given a query, CoSHC predicts a normalized probability for each category and uses this distribution to decide the number of code candidates to recall from each category. Comprehensive experiments have been conducted to validate the performance of the proposed approach. The evaluation results show that CoSHC can preserve more than $99\%$ of the performance of most baseline models. We summarize the main contributions of this paper as follows:
30
+
31
+ - We propose a novel approach, CoSHC, to improve the retrieval efficiency of previous deep learning based approaches. CoSHC is the first approach that adopts the recall and re-rank mechanism with the integration of code clustering and deep hashing to improve the retrieval efficiency of deep learning based code search models.
32
+ - We conduct comprehensive experimental evaluation on public benchmarks. The results demonstrate that CoSHC can greatly improve retrieval efficiency while preserving almost the
33
+
34
+ same performance as the baseline models.
35
+
36
+ # 2 Background
37
+
38
+ # 2.1 Code Search
39
+
40
+ In this subsection, we briefly review some deep learning based code search approaches. Sachdev et al. (2018) firstly propose the neural network based model NCS to retrieve the source code from a large source code corpus according to the given natural language descriptions. Cambronero et al. (2019) propose a neural network model UNIF based on bag-of-words, which embeds code snippets and natural language descriptions into a shared embedding space. Gu et al. (2018) propose to encode source code representation with API sequences, method name tokens and code tokens. Yao et al. (2019) treat code annotation and code search as dual tasks and utilize the generated code annotations to improve code search performance. Husain et al. (2019) explore different neural architectures for source code representation and discover that the self-attention model achieves the best performance. Gu et al. (2021) extract the program dependency graph from the source code and adopt long short term memory (LSTM) networks to model this relationship. Feng et al. (2020) propose a pre-trained model for source code representation and demonstrate its effectiveness on the code search task.
41
+
42
+ # 2.2 Deep Hashing
43
+
44
+ In this subsection, we briefly introduce some representative unsupervised cross-modal hashing methods. In order to learn a unified hash code, Ding et al. (2014) propose to adopt collective matrix factorization with latent factor model from different modalities to merge multiple view information sources. Zhou et al. (2014) firstly utilize sparse coding and matrix factorization to extract the latent features for images and texts, respectively. Then the learned latent semantic features are mapped to a shared space and quantized to the binary hash codes. Wang et al. (2014) suggest using stacked auto-encoders to capture the intra- and inter-modal semantic relationships of data from heterogeneous sources. He et al. (2017) and Zhang et al. (2018) adopt adversarial learning for cross-modal hash codes generation. Wu et al. (2018) propose an approach named UDCMH that integrates deep learning and matrix factorization with binary latent factor models to generate binary hash codes for multi
45
+
46
+ modal data retrieval. By incorporating Laplacian constraints into the objective function, UDCMH preserve not only the nearest neighbors but also the farthest neighbors of data. Unlike using Laplacian constraints in the loss function, Su et al. (2019) construct a joint-semantic affinity matrix that integrates the original neighborhood information from different modalities to guide the learning of unified binary hash codes.
47
+
48
+ # 3 Method
49
+
50
+ We propose a general framework to accelerate existing Deep Code Search (DCS) models by decoupling the search procedure into a recall stage and a re-rank stage. Our main technical contribution lies in the recall stage. Figure 1 illustrates the overall framework of the proposed approach. CoSHC consists of two components, i.e., Offline and Online. In Offline, we take the code and description embeddings learned in the given DCS model as input, and learn the corresponding hash codes by preserving the relations between the code and description embeddings. In Online, we recall a candidate set of code snippets according to the Hamming distance between the query and code, and then we use the original DCS model to re-rank the candidates.
51
+
52
+ # 3.1 Offline Stage
53
+
54
+ Multiple Code Hashing Design with Code Classification Module Since the capacity of the binary hashing space is very limited compared to Euclidean space, the Hamming distances between similar code snippets will be too small to be distinguishable if we adopt a single hashing model. We therefore first classify the codebase into categories and later recall candidates from each category. To be specific, we cluster the codebase using the K-Means algorithm with the code embeddings learned from the given DCS model. Source code snippets whose representation vectors are close to each other will fall into the same category after clustering.
55
+
56
+ Deep Hashing Module The deep hashing module aims at generating the corresponding binary hash codes for the embeddings of code and description from the original DCS model. Figure 2 illustrates the framework of the deep hashing module. To be specific, three fully-connected (FC) layers with $\tanh (\cdot)$ activation function are adopted to replace the output layer in the original DCS model to convert the original representation vectors into a soft binary hash code.
57
+
58
+ The objective of the deep hashing module is to force the Hamming distance between hashing
59
+
60
+ representations of code pairs and description pairs to approach the Euclidean distance between the corresponding embeddings. Thus, we first need to calculate the ground-truth similarity matrix between code pairs and description pairs. For performance considerations, we calculate the similarity matrix within a mini-batch.
61
+
62
+ To construct such a matrix, we first define the code representation vectors and the description representation vectors in the original code search model as $V_{C} = \{v_{c}^{(1)},\dots,v_{c}^{(n)}\}$ and $V_{D} = \{v_{d}^{(1)},\dots,v_{d}^{(n)}\}$ , respectively. $V_{C}$ and $V_{D}$ represent the representation vectors matrix for the entire batch, while $v_{c}^{(i)}$ and $v_{d}^{(i)}$ represent the representation vector for the single code snippet or query. After normalizing $V_{C}, V_{D}$ to $\hat{V}_{C}, \hat{V}_{D}$ with $l_{2}$ -norm, we can calculate the code similarity matrices $S_{C} = \hat{V}_{C}\hat{V}_{C}^{T}$ and summary similarity matrices $S_{D} = \hat{V}_{D}\hat{V}_{D}^{T}$ to describe the similarity among code representation vectors and summary representation vectors, respectively. In order to integrate the similarity information in both $S_{C}$ and $S_{D}$ , we combine them with a weighted sum:
63
+
64
+ $$
65
+ \tilde {S} = \beta S _ {C} + (1 - \beta) S _ {D}, \beta \in [ 0, 1 ] \tag {1}
66
+ $$
67
+
68
+ where $\beta$ is the weight parameter. Since the pairwise similarity among the code representation vectors and description representation vectors still cannot comprehensively present the distribution condition of them in the whole embedding space, we involve a matrix $\tilde{S}\tilde{S}^T$ to describe a high order neighborhood similarity that two vectors with high similarity should also have the close similarity to other vectors. Finally, we utilize a weighted equation to combine both of these two matrices as follows:
69
+
70
+ $$
71
+ S = (1 - \eta) \tilde {S} + \eta \frac {\tilde {S} \tilde {S} ^ {T}}{m}, \tag {2}
72
+ $$
73
+
74
+ where $\eta$ is a hyper-parameter and $m$ is the batch size which is utilized to normalize the second term in the equation. Since we hope the binary hash codes of the source code and its corresponding description to be the same, we replace the diagonal elements in the similarity matrix with one. The final high order similarity matrix is:
75
+
76
+ $$
77
+ S _ {F _ {i j}} = \left\{ \begin{array}{l l} 1, & i = j \\ S _ {i j}, & \text {o t h e r w i s e} \end{array} \right. \tag {3}
78
+ $$
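Putting Eqs. 1-3 together, the construction of the similarity target can be sketched as follows (hypothetical batch size and hyper-parameter values, random stand-in embeddings; not the authors' released code):

```python
import numpy as np

# Sketch: build the high-order similarity target S_F from in-batch code and
# description embeddings (Eqs. 1-3).
m, dim = 8, 16
beta, eta = 0.6, 0.4          # hypothetical hyper-parameter values
rng = np.random.default_rng(0)
V_c = rng.normal(size=(m, dim))   # code embeddings from the DCS model
V_d = rng.normal(size=(m, dim))   # description embeddings

V_c = V_c / np.linalg.norm(V_c, axis=1, keepdims=True)   # l2-normalize
V_d = V_d / np.linalg.norm(V_d, axis=1, keepdims=True)

S_tilde = beta * (V_c @ V_c.T) + (1 - beta) * (V_d @ V_d.T)   # Eq. 1
S = (1 - eta) * S_tilde + eta * (S_tilde @ S_tilde.T) / m     # Eq. 2
np.fill_diagonal(S, 1.0)                                      # Eq. 3
```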
79
+
80
+ Binary Hash Code Training We propose to replace the output layer of the original code search
81
+
82
+ ![](images/72687c5e757dbc13f2bf9bf40eade1ea38242f67d4a4bc0b82a53c03d18da92b.jpg)
83
+ Figure 1: Overview of the proposed CoSHC. 1 Encoding the code token sequence and description token sequence via the original code retrieval models. 2 Clustering the code representation vectors into several categories. 3 Converting the original code representation vectors into binary hash codes. 5 Predicting the category of the query given by users and setting the number of code candidates for different categories. 7 Converting the input query into a binary hash code. 8 Recalling the code candidates according to the Hamming distance and the number of code candidates for each category. 9 Re-ranking all the code candidates according to the cosine similarity between the input query description vectors and code candidates' representation vectors and returning the results to the user.
84
+
85
+ model with three FC layers with a $\tanh (\cdot)$ activation function. We define the trained binary hash codes for code and descriptions as $B_{C} = \{b_{c}^{(1)},\dots,b_{c}^{(n)}\}$ and $B_{D} = \{b_{d}^{(1)},\dots,b_{d}^{(n)}\}$ , respectively. To ensure that the relative distribution of binary hash codes is similar to the distribution of representation vectors in the original embedding space, the following equation is utilized as the loss function of the deep hashing module:
86
+
87
+ $$
88
+ \begin{array}{l} \mathcal {L} (\theta) = \min _ {B _ {C}, B _ {D}} \| \min (\mu S _ {F}, 1) - \frac {B _ {C} B _ {D} ^ {T}}{d} \| _ {F} ^ {2} \\ + \lambda_ {1} \| \min \left(\mu S _ {F}, 1\right) - \frac {B _ {C} B _ {C} ^ {T}}{d} \| _ {F} ^ {2} \tag {4} \\ + \lambda_ {2} \| \min (\mu S _ {F}, 1) - \frac {B _ {D} B _ {D} ^ {T}}{d} \| _ {F} ^ {2}, \\ \end{array}
89
+ $$
90
+
91
+ $$
92
+ s. t. B _ {C}, B _ {D} \in \{- 1, + 1 \} ^ {m \times d},
93
+ $$
94
+
95
+ where $\theta$ are model parameters, $\mu$ is a weight parameter that adjusts the similarity score between different pairs of code and description, $\lambda_1, \lambda_2$ are trade-off parameters weighting the terms in the loss function, and $d$ is the dimension of the binary hash code generated by the deep hashing module. The three terms in the loss function restrict the similarity between the binary hash codes of source code and descriptions, the similarity among
96
+
97
+ binary hash codes of the source code, and the similarity among binary hash codes of the descriptions, respectively.
98
+
99
+ Note that we adopt $B_{C}B_{D}^{T} / d$ in place of $\cos (B_C,B_D)$ because $\cos (B_C,B_D)$ only measures the angle between two vectors and neglects their length, so $\cos (B_C,B_D)$ can still be large even when the value of every hash bit is close to zero. Unlike $\cos (B_C,B_D)$ , $B_{C}B_{D}^{T} / d$ can only achieve a high value when every bit of the binary hash code is 1 or -1, since $B_{C}B_{D}^{T} / d$ will be close to zero if the value of every hash bit is close to zero.
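For illustration, the objective of Eq. 4 can be sketched as follows (hypothetical hyper-parameter values and random stand-in data; not the released training code):

```python
import numpy as np

# Sketch of the Eq. 4 objective: push the hash-code similarity B B^T / d
# toward the clipped target min(mu * S_F, 1).
m, d = 8, 32
mu, lam1, lam2 = 1.5, 0.1, 0.1        # assumed hyper-parameter values
rng = np.random.default_rng(1)
S_F = np.clip(rng.normal(0.3, 0.2, size=(m, m)), -1, 1)
np.fill_diagonal(S_F, 1.0)
B_c = np.tanh(rng.normal(size=(m, d)))  # relaxed hash codes in (-1, 1)
B_d = np.tanh(rng.normal(size=(m, d)))

target = np.minimum(mu * S_F, 1.0)
loss = (np.linalg.norm(target - B_c @ B_d.T / d) ** 2        # code-description
        + lam1 * np.linalg.norm(target - B_c @ B_c.T / d) ** 2   # code-code
        + lam2 * np.linalg.norm(target - B_d @ B_d.T / d) ** 2)  # desc-desc
```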
100
+
101
+ Since it is impractical to constrain the output of a neural network to discrete values such as 1 and -1, we adopt the following equation to convert the output of the deep hashing module into a strict binary hash code:
102
+
103
+ $$
104
+ B = \operatorname{sgn}(H) \in \{-1, +1\}^{m \times d}, \tag{5}
105
+ $$
106
+
107
+ where $H$ is the output of the last hidden layer of the deep hashing module before the activation function, and $\operatorname{sgn}(\cdot)$ is the sign function, which outputs 1 if the input is positive and -1 otherwise.
108
+
109
+ However, the gradient of the sign function will be zero in backward propagation which will induce
110
+
111
+ ![](images/c83080d41e1ef725532a13143912876a63e406d7036934e4644e5ac9ac2986ea.jpg)
112
+ Figure 2: Architecture of the hashing module. The original representation vectors are first used to construct the joint-similarity matrix. The joint-similarity matrix is then used as the label for training the binary hash code generation. The training objective is to make the Hamming distance similarity matrix identical to the joint-similarity matrix.
113
+
114
+ the vanishing gradients problem and affect model convergence. To address this problem, we follow the previous research (Cao et al., 2017; Hu et al., 2019) and adopt a scaling function:
115
+
116
+ $$
117
+ B = \tanh(\alpha H) \in \{-1, +1\}^{m \times d}, \tag{6}
118
+ $$
119
+
120
+ where $\alpha$ is a parameter that is increased during training. $\tanh(\alpha H)$ approximates $\operatorname{sgn}(H)$ when $\alpha$ is large enough, so the output of Eq. 6 will eventually converge to 1 or -1 as $\alpha$ increases during training, which addresses the above problem.
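A quick numerical check of this continuation scheme (an illustrative sketch with toy values, not the paper's implementation):

```python
import numpy as np

H = np.array([-2.0, -0.3, 0.05, 1.5])  # toy pre-activation outputs

# As alpha grows across epochs, tanh(alpha * H) sharpens toward sgn(H),
# while keeping a non-zero gradient during training.
for alpha in (1, 5, 50):
    print(alpha, np.tanh(alpha * H))

# For a large alpha the relaxation matches the sign function bit for bit.
assert np.array_equal(np.round(np.tanh(50 * H)), np.sign(H))
```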
121
+
122
+ # 3.2 Online Stage
123
+
124
+ Recall and Re-rank Mechanism The incoming query from a user is first fed into the description category prediction module to calculate the normalized probability distribution over categories. The number of code candidates $R_{i}$ for each category $i$ is then determined according to this probability distribution. The Hamming distance between the hash code of the given query and the hash codes of all the code snippets in the database is calculated, the code candidates are sorted by Hamming distance in ascending order, and the top $R_{i}$ candidates in each category $i$ are recalled. In the re-rank step, the original representation vectors of these recalled code candidates are retrieved and used for cosine similarity calculation. Finally, code snippets are returned to the user in descending order of cosine similarity.
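The recall-and-re-rank flow above can be sketched as follows (our own illustrative code with hypothetical names, not the paper's implementation). For $\{-1,+1\}$ codes, the Hamming distance reduces to $(d - b_q \cdot b_c)/2$:

```python
import numpy as np

def recall_and_rerank(query_hash, query_vec, code_hashes, code_vecs,
                      categories, recall_per_cat):
    """Recall candidates per category by Hamming distance, then
    re-rank the union of recalled candidates by cosine similarity."""
    d = code_hashes.shape[1]
    # Hamming distance for {-1,+1} codes: d_H = (d - b_q . b_c) / 2
    hamming = (d - code_hashes @ query_hash) / 2

    recalled = []
    for cat, r in recall_per_cat.items():
        idx = np.flatnonzero(categories == cat)
        recalled.extend(idx[np.argsort(hamming[idx])[:r]])
    recalled = np.array(recalled)

    # Re-rank with the original (full-precision) representation vectors.
    sims = code_vecs[recalled] @ query_vec / (
        np.linalg.norm(code_vecs[recalled], axis=1) * np.linalg.norm(query_vec))
    return recalled[np.argsort(-sims)]  # best match first
```

With a toy database of four snippets in two categories, recalling one candidate per category and re-ranking returns the cosine-best snippet first.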
125
+
126
+ Description Category Prediction Module The description category prediction module aims to predict the category of source code that meets the user's requirements according to the given natural language description. The model adopted for category prediction is the same as the original code search model, except that the output layer is replaced with a one-hot category prediction layer and the cross-entropy function is adopted as the loss function.
129
+
130
+ Since the accuracy of the description category prediction module is not perfect, we use the probability distribution over categories, instead of only the category with the highest predicted probability, as the recall strategy for code search. We define the total recall number of source code as $N$ and the normalized predicted probability for each code category as $P = \{p_1, \dots, p_k\}$, where $k$ is the number of categories. The recall number of source code in each category is:
131
+
132
+ $$
133
+ R_{i} = \max \left( \left\lfloor p_{i} \cdot (N - k) \right\rfloor , 1 \right), \quad i \in \{1, \dots , k\}, \tag{7}
134
+ $$
135
+
136
+ where $R_{i}$ is the recall number of source code in category $i$ . To ensure that the proposed approach can recall at least one source code from each category, we set the minimum recall number for a single category to 1.
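Under the stated rule that each category receives at least one candidate, the allocation can be sketched as follows (our own illustrative code, not the authors'):

```python
import numpy as np

def recall_counts(probs, total=100):
    """Allocate the recall budget N across k categories: roughly
    proportional to the predicted probabilities, with a floor of
    one candidate per category (Eq. 7)."""
    k = len(probs)
    return [max(int(np.floor(p * (total - k))), 1) for p in probs]

print(recall_counts([0.6, 0.3, 0.05, 0.05]))  # [57, 28, 4, 4]
```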
137
+
138
+ # 4 Experiments
139
+
140
+ # 4.1 Dataset
141
+
142
+ We use two datasets (Python and Java) provided by CodeBERT (Feng et al., 2020) to evaluate the performance of CoSHC. CodeBERT selects the data from the CodeSearchNet (Husain et al., 2019) dataset and creates both positive and negative examples of <description, code> pairs. Since all the baselines in our experiments are bi-encoder models, we do not need to predict the relevance score for mismatched pairs, so we remove all the negative examples from the dataset. Finally, we obtain 412,178 <description, code> pairs as the training set, 23,107 pairs as the validation set, and 22,176 pairs as the test set for the Python dataset, and 454,451 pairs as the training set, 15,328 pairs as the validation set, and 26,909 pairs as the test set for the Java dataset.
145
+
146
+ # 4.2 Experimental Setup
147
+
148
+ In the code classification module, we set the number of clusters to 10. In the deep hashing module, we add three fully connected (FC) layers to all the baselines; the hidden size of each FC layer is the same as the dimension of the original representation vectors. Specifically, the hidden size of the FC layers is 768 for CodeBERTa, CodeBERT, and GraphCodeBERT, 512 for UNIF, and 2048 for RNN. The size of the output binary hash code for all the baselines is 128. The hyperparameters $\beta, \eta, \mu, \lambda_1, \lambda_2$ are 0.6, 0.4, 1.5, 0.1, 0.1, respectively. The parameter $\alpha$ is set to the epoch number and is linearly increased during training. In the query category prediction module, the cross-entropy function is adopted as the loss function and the total recall number is 100.
149
+
150
+ The learning rate for CodeBERTa, CodeBERT, and GraphCodeBERT is 1e-5, and the learning rate for UNIF and RNN is 1.34e-4. All the models are trained with the AdamW algorithm (Kingma and Ba, 2015).
151
+
152
+ We train our models on a server with four Tesla V100 GPUs with NVLink and 32GB of memory. Each module based on CodeBERT, GraphCodeBERT, and CodeBERTa is trained for 10 epochs, and each module based on RNN and UNIF is trained for 50 epochs. The early stopping strategy is adopted to avoid overfitting for all the baselines. The time efficiency experiment is conducted on a server with an Intel Xeon E5-2698v4 2.2GHz 20-core CPU. The evaluation program is written in C++ and restricted to a single CPU thread.
153
+
154
+ # 4.3 Baselines
155
+
156
+ We apply CoSHC to several state-of-the-art and representative baseline models. UNIF (Cambronero et al., 2019) regards code as a sequence of tokens and embeds the sequences of code tokens and description tokens into representation vectors via fully connected layers with an attention mechanism. The RNN baseline adopts a two-layer bi-directional LSTM (Cho et al., 2014) to encode the input sequences. CodeBERTa is a 6-layer Transformer-based model trained on the CodeSearchNet dataset. CodeBERT (Feng et al., 2020) is a pre-trained Transformer-based model with 12 layers. Similar to CodeBERT, GraphCodeBERT (Guo et al., 2021) is a pre-trained Transformer-based model that is pre-trained with not only token information but also the dataflow of the code snippets. As we introduced, the inference efficiency of cross-encoder models like CodeBERT is quite low, and the purpose of our approach is to improve the efficiency of similarity calculation between the representation vectors of code and queries. We therefore slightly change the model structure of CodeBERTa, CodeBERT, and GraphCodeBERT: rather than concatenating code and query and feeding them into a single encoder to predict the relevance score of the pair, we adopt a bi-encoder architecture, which uses independent encoders to encode the code and queries into representation vectors. Also, cosine similarity between the representation vector pairs is adopted as the training loss function, replacing the cross-entropy loss on the output relevance score.
157
+
158
+ # 4.4 Evaluation Metric
159
+
160
+ SuccessRate@k is widely used by many previous studies (Haldar et al., 2020; Shuai et al., 2020; Fang et al., 2021; Heyman and Cutsem, 2020). The metric is calculated as follows:
161
+
162
+ $$
163
+ \mathrm{SuccessRate@}k = \frac{1}{|Q|} \sum_{q = 1}^{|Q|} \delta \left(F\mathrm{Rank}_{q} \leq k\right), \tag{8}
164
+ $$
165
+
166
+ where $Q$ denotes the query set and $F\mathrm{Rank}_q$ is the rank of the correct answer for query $q$. If the correct result is within the top $k$ returned results, $\delta(F\mathrm{Rank}_q \leq k)$ returns 1; otherwise it returns 0. A higher SuccessRate@$k$ (abbreviated $R@k$ below) indicates better performance.
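Eq. 8 amounts to the following one-liner (an illustrative sketch with made-up ranks):

```python
def success_rate_at_k(first_ranks, k):
    """SuccessRate@k: fraction of queries whose correct answer
    appears within the top-k returned results (Eq. 8)."""
    return sum(rank <= k for rank in first_ranks) / len(first_ranks)

ranks = [1, 3, 12, 2, 57]  # FRank_q for five toy queries
print(success_rate_at_k(ranks, 1))   # 0.2
print(success_rate_at_k(ranks, 5))   # 0.6
print(success_rate_at_k(ranks, 10))  # 0.6
```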
167
+
168
+ <table><tr><td></td><td>Python</td><td>Java</td></tr><tr><td colspan="3">Total Time</td></tr><tr><td>CodeBERT</td><td>572.97s</td><td>247.78s</td></tr><tr><td>CoSHC</td><td>33.87s (↓94.09%)</td><td>15.78s (↓93.51%)</td></tr><tr><td colspan="3">(1) Vector Similarity Calculation</td></tr><tr><td>CodeBERT</td><td>531.95s</td><td>234.08s</td></tr><tr><td>CoSHC</td><td>14.43s (↓97.29%)</td><td>7.25s (↓96.90%)</td></tr><tr><td colspan="3">(2) Array Sorting</td></tr><tr><td>CodeBERT</td><td>41.02s</td><td>13.70s</td></tr><tr><td>CoSHC</td><td>19.44s (↓53.61%)</td><td>8.53s (↓37.74%)</td></tr></table>
169
+
170
+ Table 1: Time Efficiency of CoSHC.
171
+
172
+ # 4.5 Experimental Results
173
+
174
+ In this section, we present the experimental results and evaluate the performance of CoSHC from the aspects of retrieval efficiency, overall retrieval performance, and the effectiveness of the internal classification module.
175
+
176
+ # 4.5.1 RQ1: How much faster is CoSHC than the original code search models?
177
+
178
+ Table 1 illustrates the efficiency comparison between the original code search models and CoSHC. Once the representation vectors of code and description are stored in memory, the retrieval efficiency mainly depends on the dimension of the representation vectors rather than on the complexity of the original retrieval model. Therefore, we select CodeBERT as the baseline model for the efficiency comparison. Since the code search process in both approaches consists of vector similarity calculation and array sorting, we split the retrieval process into these two steps to measure the time cost.
179
+
180
+ In the vector similarity calculation step, CoSHC reduces $97.29\%$ and $96.90\%$ of the time cost on the Python and Java datasets, respectively, which demonstrates that the utilization of binary hash codes can effectively reduce the vector similarity calculation cost in the code retrieval process.
181
+
182
+ In the array sorting step, CoSHC reduces $53.61\%$ and $37.74\%$ of the time cost on the Python and Java datasets, respectively. The classification module makes the main contribution to the improvement in sorting efficiency. The sorting algorithm applied in both the original code search model and CoSHC is quicksort, whose time complexity is $O(n \log n)$. The classification module divides a large code dataset into several small code datasets, reducing the average time complexity of sorting to $O(n \log \frac{n}{m})$. The reason why the improvement in sorting on the Java dataset is not as significant as on the Python dataset is that the Java dataset is much smaller than the Python dataset. Note, however, that big-data sorting in practice widely applies a combination of divide-and-conquer and max-heap algorithms rather than quicksort, which can greatly shrink the retrieval efficiency gap between the two approaches; the efficiency improvement in the sorting process may therefore not be as large as what is shown in Table 1.
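The difference alluded to here can be seen with Python's heap-based selection (an illustration, not the paper's implementation): selecting the top $k$ of $n$ distances via a heap is $O(n \log k)$, versus $O(n \log n)$ for a full sort, while both return the same candidates.

```python
import heapq
import random

random.seed(0)
dists = [(random.random(), i) for i in range(100_000)]  # (distance, code id)

top_sorted = sorted(dists)[:100]        # full sort: O(n log n)
top_heap = heapq.nsmallest(100, dists)  # heap selection: O(n log k)

print(top_sorted == top_heap)  # same top-100, less work
```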
185
+
186
+ In the overall code retrieval process, the time cost is reduced by $94.09\%$ and $93.51\%$ on the Python and Java datasets, respectively. Since the vector similarity calculation accounts for most of the time cost in the code retrieval process, CoSHC can still reduce at least $90\%$ of the time cost, which demonstrates its effectiveness in improving the efficiency of the code search task.
187
+
188
+ # 4.5.2 RQ2: How does CoSHC affect the accuracy of the original models?
189
+
190
+ Table 2 illustrates the retrieval performance comparison between the original code search models and CoSHC. We have noticed that the performance of conventional approaches like BM25 (Robertson and Zaragoza, 2009) is not good enough. For example, when we set the token length for both code and queries to 50, the same setting as in CodeBERT, and apply BM25 to recall the top 100 code candidates for the re-rank step, BM25 retains only $99.3\%$, $95.6\%$, and $92.4\%$ of CodeBERT's retrieval accuracy in terms of $R@1$, $R@5$, and $R@10$ on the Python dataset. Here we only compare the performance of our approach with the original code search models, since the purpose of our approach is to preserve their performance. As can be observed, CoSHC retains at least $99.5\%$, $99.0\%$, and $98.4\%$ of the retrieval accuracy of most original code search models in terms of $R@1$, $R@5$, and $R@10$ on the Python dataset, and at least $99.2\%$, $98.2\%$, and $97.7\%$ of the retrieval accuracy of all original code search baselines in terms of $R@1$, $R@5$, and $R@10$ on the Java dataset. Overall, CoSHC retains more than $97.7\%$ of the performance on all metrics. $R@1$ is the most important of these metrics, since most users hope that the first returned answer is the correct one. CoSHC retains at least $99.2\%$ of the performance on $R@1$ on both datasets, which demonstrates that CoSHC preserves almost the same performance as the original code search models.
191
+
192
+ <table><tr><td rowspan="2">Model</td><td colspan="3">Python</td><td colspan="3">Java</td></tr><tr><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td></tr><tr><td>UNIF</td><td>0.071</td><td>0.173</td><td>0.236</td><td>0.084</td><td>0.193</td><td>0.254</td></tr><tr><td>CoSHCUNIF</td><td>0.072 (↑1.4%)</td><td>0.177 (↑2.3%)</td><td>0.241 (↑2.1%)</td><td>0.086 (↑2.4%)</td><td>0.198 (↑2.6%)</td><td>0.264 (↑3.9%)</td></tr><tr><td>-w/o classification</td><td>0.071 (0.0%)</td><td>0.174 (↑0.6%)</td><td>0.236 (0.0%)</td><td>0.085 (↑1.2%)</td><td>0.193 (0.0%)</td><td>0.254 (0.0%)</td></tr><tr><td>-one classification</td><td>0.069 (↓2.8%)</td><td>0.163 (↓5.8%)</td><td>0.216 (↓8.5%)</td><td>0.083 (↓1.2%)</td><td>0.183 (↓5.2%)</td><td>0.236 (↓7.1%)</td></tr><tr><td>-ideal classification</td><td>0.077 (↑6.9%)</td><td>0.202 (↑16.8%)</td><td>0.277 (↑17.4%)</td><td>0.093 (↑10.7%)</td><td>0.222 (↑15.0%)</td><td>0.296 (↑16.5%)</td></tr><tr><td>RNN</td><td>0.111</td><td>0.253</td><td>0.333</td><td>0.073</td><td>0.184</td><td>0.250</td></tr><tr><td>CoSHCRNN</td><td>0.112 (↑0.9%)</td><td>0.259 (↑2.4%)</td><td>0.343 (↑5.0%)</td><td>0.076 (↑4.1%)</td><td>0.194 (↑5.4%)</td><td>0.265 (↑6.0%)</td></tr><tr><td>-w/o classification</td><td>0.112 (↑0.9%)</td><td>0.254 (↑0.4%)</td><td>0.335 (↑0.6%)</td><td>0.073 (0.0%)</td><td>0.186 (↑1.1%)</td><td>0.253 (↑1.2%)</td></tr><tr><td>-one classification</td><td>0.112 (↑0.9%)</td><td>0.243 (↓4.0%)</td><td>0.311 (↓6.6%)</td><td>0.075 (↑2.7%)</td><td>0.182 (↓1.1%)</td><td>0.240 (↓4.0%)</td></tr><tr><td>-ideal classification</td><td>0.123 (↑10.8%)</td><td>0.289 (↑14.2%)</td><td>0.385 (↑15.6%)</td><td>0.084 (↑15.1%)</td><td>0.221 (↑20.1%)</td><td>0.302 (↑20.8%)</td></tr><tr><td>CodeBERTa</td><td>0.124</td><td>0.250</td><td>0.314</td><td>0.089</td><td>0.203</td><td>0.264</td></tr><tr><td>CoSHCCodeBERTa</td><td>0.123 (↓0.8%)</td><td>0.247 (↓1.2%)</td><td>0.309 (↓1.6%)</td><td>0.090 (↑1.1%)</td><td>0.210 (↑3.4%)</td><td>0.272 (↑3.0%)</td></tr><tr><td>-w/o classification</td><td>0.122 (↓1.6%)</td><td>0.242 (↓3.2%)</td><td>0.302 (↓3.8%)</td><td>0.089 (0.0%)</td><td>0.201 (↓1.0%)</td><td>0.258 (↓2.3%)</td></tr><tr><td>-one classification</td><td>0.116 (↓6.5%)</td><td>0.221 (↓11.6%)</td><td>0.271 (↓13.7%)</td><td>0.085 (↓4.5%)</td><td>0.189 (↓6.9%)</td><td>0.238 (↓9.8%)</td></tr><tr><td>-ideal classification</td><td>0.135 (↑8.9%)</td><td>0.276 (↑10.4%)</td><td>0.346 (↑10.2%)</td><td>0.100 (↑12.4%)</td><td>0.235 (↑15.8%)</td><td>0.305 (↑15.5%)</td></tr><tr><td>CodeBERT</td><td>0.451</td><td>0.683</td><td>0.759</td><td>0.319</td><td>0.537</td><td>0.608</td></tr><tr><td>CoSHCCodeBERT</td><td>0.451 (0.0%)</td><td>0.679 (↓0.6%)</td><td>0.750 (↓1.2%)</td><td>0.318 (↓0.3%)</td><td>0.533 (↓0.7%)</td><td>0.602 (↓1.0%)</td></tr><tr><td>-w/o classification</td><td>0.449 (↓0.4%)</td><td>0.673 (↓1.5%)</td><td>0.742 (↓2.2%)</td><td>0.316 (↓0.9%)</td><td>0.527 (↓1.9%)</td><td>0.593 (↓2.5%)</td></tr><tr><td>-one classification</td><td>0.425 (↓5.8%)</td><td>0.613 (↓10.2%)</td><td>0.665 (↓12.4%)</td><td>0.304 (↓4.7%)</td><td>0.483 (↓10.1%)</td><td>0.532 (↓12.5%)</td></tr><tr><td>-ideal classification</td><td>0.460 (↑2.0%)</td><td>0.703 (↑2.9%)</td><td>0.775 (↑2.1%)</td><td>0.329 (↑3.1%)</td><td>0.555 (↑3.4%)</td><td>0.627 (↑3.1%)</td></tr><tr><td>GraphCodeBERT</td><td>0.485</td><td>0.726</td><td>0.792</td><td>0.353</td><td>0.571</td><td>0.640</td></tr><tr><td>CoSHCGraphCodeBERT</td><td>0.483 (↓0.4%)</td><td>0.719 (↓1.0%)</td><td>0.782 (↓1.3%)</td><td>0.350 (↓0.8%)</td><td>0.561 (↓1.8%)</td><td>0.625 (↓2.3%)</td></tr><tr><td>-w/o classification</td><td>0.481 (↓0.8%)</td><td>0.713 (↓1.8%)</td><td>0.774 (↓2.3%)</td><td>0.347 (↓1.7%)</td><td>0.553 (↓3.2%)</td><td>0.616 (↓3.7%)</td></tr><tr><td>-one classification</td><td>0.459 (↓5.4%)</td><td>0.653 (↓10.1%)</td><td>0.698 (↓11.9%)</td><td>0.329 (↓7.8%)</td><td>0.505 (↓11.6%)</td><td>0.551 (↓13.9%)</td></tr><tr><td>-ideal classification</td><td>0.494 (↑1.9%)</td><td>0.741 (↑2.1%)</td><td>0.803 (↑1.4%)</td><td>0.361 (↑2.3%)</td><td>0.585 (↑2.5%)</td><td>0.649 (↑1.4%)</td></tr></table>
193
+
194
+ It is interesting that CoSHC presents relatively better performance when the performance of the original code retrieval model is worse. $\mathrm{CoSHC_{CodeBERTa}}$ even outperforms the original baseline model on the Java dataset, and $\mathrm{CoSHC_{RNN}}$ and $\mathrm{CoSHC_{UNIF}}$ outperform the original models on both the Python and Java datasets. The integration of deep hashing and code classification in the recall stage contributes to this result. Worse performance indicates more misalignment between the code representation vectors and the description representation vectors. Since code classification and deep hashing filter out most of the irrelevant code in the recall stage, some irrelevant code representation vectors that have high cosine similarity with the target description representation vectors are filtered out, which leads to the improvement in the final retrieval performance.
195
+
196
+ # 4.5.3 RQ3: Can the classification module help improve performance?
197
+
198
+ Table 2 illustrates the performance comparison between the CoSHC variants, which adopt different recall strategies based on the query category prediction results. $\mathrm{CoSHC_{w/o\ classification}}$ represents CoSHC without the code classification and description category prediction modules. $\mathrm{CoSHC_{one\ classification}}$ represents the CoSHC variant that recalls $N - k + 1$ candidates in the code category with the highest prediction probability and one candidate in each of the remaining categories. $\mathrm{CoSHC_{ideal\ classification}}$ is an ideal classification setting: assuming the correct description category is known, $N - k + 1$ candidates are recalled in the correct category and one candidate is recalled in each of the remaining categories. Note that $\mathrm{CoSHC_{ideal\ classification}}$ is reported only to explore the upper bound of the performance improvement attainable through category prediction and is not counted as a CoSHC variant in the comparison.
+
+ Table 2: Results of code search performance comparison. The best results among the three CoSHC variants are highlighted in bold font.
+
+ <table><tr><td>Model</td><td>Python Acc.</td><td>Java Acc.</td></tr><tr><td>CoSHCUNIF</td><td>0.558</td><td>0.545</td></tr><tr><td>CoSHCRNN</td><td>0.610</td><td>0.535</td></tr><tr><td>CoSHCCodeBERTa</td><td>0.591</td><td>0.571</td></tr><tr><td>CoSHCCodeBERT</td><td>0.694</td><td>0.657</td></tr><tr><td>CoSHCGraphCodeBERT</td><td>0.713</td><td>0.653</td></tr></table>
+
+ Table 3: Classification accuracy of the code classification module in each model.
207
+
208
+ By comparing the performance between $\mathrm{CoSHC_{ideal\ classification}}$ and $\mathrm{CoSHC_{w/o\ classification}}$, we can find that correct classification can significantly improve the retrieval performance. With the ideal category labels, CoSHC can even outperform all baseline models. As mentioned in Sec. 4.5.2, code classification can mitigate the vector pair misalignment problem by filtering out, in the recall stage, wrong candidates whose representation vectors have high cosine similarity with the target representation vectors. The more serious the misalignment problem, the more effective the code classification. This is why the improvement of CoSHC with ground-truth labels on UNIF, RNN, and CodeBERTa is more significant than on CodeBERT and GraphCodeBERT: the retrieval accuracy of the former models is much lower than that of the latter. Similar conclusions can also be drawn from the distribution of binary hash codes by comparing CoSHC and $\mathrm{CoSHC_{ideal\ classification}}$, since CoSHC uses the distribution of the original representation vectors as guidance for model training, so the distribution of binary hash codes is similar to the distribution of the original representation vectors.
211
+
212
+ Having explored the theoretical upper limit of the effectiveness of code classification for code retrieval, we now validate its effectiveness in a real application. By comparing the experimental results of $\mathrm{CoSHC_{w/o\ classification}}$ and $\mathrm{CoSHC_{one\ classification}}$, we can find that the performance of CoSHC with predicted labels is even worse than that of CoSHC without the code classification module. The reason is that the accuracy of description category prediction is far from satisfactory. Table 3 shows the accuracy of the description category prediction module for all baseline models. We regard the category with the highest probability as the predicted category and check whether the module gives a correct prediction. It can be seen that the classification accuracy is not very high (less than $75\%$). Observing the results of CoSHC on GraphCodeBERT on the Java dataset, we can also find that low accuracy greatly hurts $\mathrm{CoSHC_{one\ classification}}$, causing performance drops of $7.8\%$, $11.6\%$, and $13.9\%$ in terms of $R@1$, $R@5$, and $R@10$, respectively.
213
+
214
+ Fortunately, although the description category prediction module cannot accurately tell the exact category a description belongs to, it can still assign a relatively high predicted probability to the correct category. By comparing the experimental results among all the variants of CoSHC, we can find that performance increases significantly once the recall strategy is replaced by one in which the number of code candidates for each category is determined by the normalized prediction probability. CoSHC with this recall strategy achieves almost the best performance on all metrics for all baseline models. Even on RNN on the Python dataset, CoSHC still achieves the same performance as CoSHC without classification under $R@1$ and similar performance on the other metrics. The above experimental results demonstrate the effectiveness of adopting code classification in code search.
215
+
216
+ # 5 Conclusion
217
+
218
+ To accelerate code search, we present CoSHC, a general method that incorporates deep hashing techniques and code classification. We leverage the two-stage recall and re-rank paradigm from the information retrieval field and apply deep hashing techniques for fast recall. Furthermore, we propose a code classification module to retrieve better-quality code snippets. Experiments on five code search models show that, compared with the original models, CoSHC greatly improves retrieval efficiency while preserving almost the same performance.
219
+
220
+ # 6 Acknowledgement
221
+
222
+ Wenchao Gu's and Michael R. Lyu's work described in this paper was in part supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 14210920 of the General Research Fund).
223
+
224
+ # References
225
+
226
+ Joel Brandt, Philip J. Guo, Joel Lewenstein, Mira Dontcheva, and Scott R. Klemmer. 2009. Two studies of opportunistic programming: interleaving web foraging, learning, and writing code. In Proceedings of the 27th International Conference on Human Factors in Computing Systems, CHI 2009, Boston, MA, USA, April 4-9, 2009, pages 1589-1598. ACM.
227
+ José Cambronero, Hongyu Li, Seohyun Kim, Koushik Sen, and Satish Chandra. 2019. When deep learning met code search. In Proceedings of the ACM
228
+
229
+ Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2019, Tallinn, Estonia, August 26-30, 2019, pages 964-974. ACM.
230
+ Zhangjie Cao, Mingsheng Long, Jianmin Wang, and Philip S. Yu. 2017. Hashnet: Deep learning to hash by continuation. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 5609-5618. IEEE Computer Society.
231
+ Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST@EMNLP 2014, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, Doha, Qatar, 25 October 2014, pages 103-111. Association for Computational Linguistics.
232
+ Guiguang Ding, Yuchen Guo, and Jile Zhou. 2014. Collective matrix factorization hashing for multimodal data. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 23-28, 2014, pages 2083-2090. IEEE Computer Society.
233
+ Lun Du, Xiaozhou Shi, Yanlin Wang, Ensheng Shi, Shi Han, and Dongmei Zhang. 2021. Is a single model enough? mucos: A multi-model ensemble learning approach for semantic code search. In CIKM '21: The 30th ACM International Conference on Information and Knowledge Management, Virtual Event, Queensland, Australia, November 1 - 5, 2021, pages 2994-2998. ACM.
234
+ Sen Fang, Youshuai Tan, Tao Zhang, and Yepang Liu. 2021. Self-attention networks for code search. Inf. Softw. Technol., 134:106542.
235
+ Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. Codebert: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 1536-1547. Association for Computational Linguistics.
236
+ Wenchao Gu, Zongjie Li, Cuiyun Gao, Chaozheng Wang, Hongyu Zhang, Zenglin Xu, and Michael R. Lyu. 2021. Cradle: Deep code retrieval based on semantic dependency learning. Neural Networks, 141:385-394.
237
+ Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. In Proceedings of the 40th International Conference on Software Engineering, ICSE 2018, Gothenburg, Sweden, May 27 - June 03, 2018, pages 933-944. ACM.
238
+
239
+ Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin B. Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, and Ming Zhou. 2021. Graphcodebert: Pre-training code representations with data flow. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
240
+ Rajarshi Haldar, Lingfei Wu, Jinjun Xiong, and Julia Hockenmaier. 2020. A multi-perspective architecture for semantic code search. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8563-8568. Association for Computational Linguistics.
241
+ Li He, Xing Xu, Huimin Lu, Yang Yang, Fumin Shen, and Heng Tao Shen. 2017. Unsupervised cross-modal retrieval through adversarial learning. In 2017 IEEE International Conference on Multimedia and Expo, ICME 2017, Hong Kong, China, July 10-14, 2017, pages 1153-1158. IEEE Computer Society.
242
+ Geert Heyman and Tom Van Cutsem. 2020. Neural code search revisited: Enhancing code snippet retrieval through natural language intent. CoRR, abs/2008.12193.
243
+ Di Hu, Feiping Nie, and Xuelong Li. 2019. Deep binary reconstruction for cross-modal hashing. IEEE Trans. Multim., 21(4):973-985.
244
+ Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Code-searchnet challenge: Evaluating the state of semantic code search. CoRR, abs/1909.09436.
245
+ Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
246
+ Xiao Luo, Chong Chen, Huasong Zhong, Hao Zhang, Minghua Deng, Jianqiang Huang, and Xiansheng Hua. 2020. A survey on deep hashing methods. CoRR, abs/2003.03369.
247
+ Fei Lv, Hongyu Zhang, Jian-Guang Lou, Shaowei Wang, Dongmei Zhang, and Jianjun Zhao. 2015. Codehow: Effective code search based on API understanding and extended boolean model (E). In 30th IEEE/ACM International Conference on Automated Software Engineering, ASE 2015, Lincoln, NE, USA, November 9-13, 2015, pages 260-270. IEEE Computer Society.
+ Collin McMillan, Mark Grechanik, Denys Poshyvanyk, Qing Xie, and Chen Fu. 2011. Portfolio: finding relevant functions and their usage. In Proceedings of the 33rd International Conference on Software Engineering, ICSE 2011, Waikiki, Honolulu, HI, USA, May 21-28, 2011, pages 111-120. ACM.
+ Stephen E. Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Found. Trends Inf. Retr., 3(4):333-389.
+ Saksham Sachdev, Hongyu Li, Sifei Luan, Seohyun Kim, Koushik Sen, and Satish Chandra. 2018. Retrieval on source code: a neural code search. In Proceedings of the 2nd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages, MAPL@PLDI 2018, Philadelphia, PA, USA, June 18-22, 2018, pages 31-41. ACM.
+ Jianhang Shuai, Ling Xu, Chao Liu, Meng Yan, Xin Xia, and Yan Lei. 2020. Improving code search with co-attentive representation learning. In ICPC '20: 28th International Conference on Program Comprehension, Seoul, Republic of Korea, July 13-15, 2020, pages 196-207. ACM.
+ Shupeng Su, Zhisheng Zhong, and Chao Zhang. 2019. Deep joint-semantics reconstructing hashing for large-scale unsupervised cross-modal retrieval. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 3027-3035. IEEE.
+ Jun Wang, Wei Liu, Sanjiv Kumar, and Shih-Fu Chang. 2016. Learning to hash for indexing big data - A survey. Proc. IEEE, 104(1):34-57.
+ Wei Wang, Beng Chin Ooi, Xiaoyan Yang, Dongxiang Zhang, and Yueting Zhuang. 2014. Effective multi-modal retrieval based on stacked autoencoders. Proc. VLDB Endow., 7(8):649-660.
+ Gengshen Wu, Zijia Lin, Jungong Han, Li Liu, Guiguang Ding, Baochang Zhang, and Jialie Shen. 2018. Unsupervised deep hashing via binary latent factor models for large-scale cross-modal retrieval. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 2854-2860. ijcai.org.
+ Ziyu Yao, Jayavardhan Reddy Peddamail, and Huan Sun. 2019. Coacor: Code annotation for code retrieval with reinforcement learning. In The World Wide Web Conference, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 2203-2214. ACM.
+ Jian Zhang, Yuxin Peng, and Mingkuan Yuan. 2018. Unsupervised generative adversarial cross-modal hashing. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 539-546. AAAI Press.
+ Jile Zhou, Guiguang Ding, and Yuchen Guo. 2014. Latent semantic sparse hashing for cross-modal similarity search. In The 37th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '14, Gold Coast, QLD, Australia - July 06 - 11, 2014, pages 415-424. ACM.
acceleratingcodesearchwithdeephashingandcodeclassification/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d6c2115a7261dd4dac78f4e30c98086af6d38a4426254124d519bec8ec7dbc8b
+ size 545067
acceleratingcodesearchwithdeephashingandcodeclassification/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e5abf14dfe5d6449ee90a5eed0a0af7a97de32eef5fb724ce08020234380f4a2
+ size 349931
accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/1acb5d0e-647e-4740-bb65-fdd39858d3eb_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1fa3c6cfb6b0c994c5f1f9fd0ff323f2a0b5dc9b72b0ecfba2fe241a97dd5ed4
+ size 118094
accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/1acb5d0e-647e-4740-bb65-fdd39858d3eb_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:237f117da61941e700cec1783b27d65e767cad9f9ea4d60025a0e8affa486664
+ size 136573
accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/1acb5d0e-647e-4740-bb65-fdd39858d3eb_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e6eb5571a3720093fd894c24d7e53e2c9cc2d5c36662318db06f190f5ccc91ab
+ size 533807
accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/full.md ADDED
@@ -0,0 +1,424 @@
+ # Accurate Online Posterior Alignments for Principled Lexically-Constrained Decoding
+
+ Soumya Chatterjee
+
+ IIT Bombay
+
+ Sunita Sarawagi
+
+ IIT Bombay
+
+ Preethi Jyothi
+
+ IIT Bombay
+
+ soumya@cse.iitb.ac.in
+
+ sunita@iitb.ac.in
+
+ pjyothi@cse.iitb.ac.in
+
+ # Abstract
+
+ Online alignment in machine translation refers to the task of aligning a target word to a source word when the target sequence has only been partially decoded. Good online alignments facilitate important applications such as lexically constrained translation, where user-defined dictionaries are used to inject lexical constraints into the translation model. We propose a novel posterior alignment technique that is truly online in its execution and superior in terms of alignment error rates compared to existing methods. Our proposed inference technique jointly considers alignment and token probabilities in a principled manner and can be seamlessly integrated within existing constrained beam-search decoding algorithms. On five language pairs, including two distant language pairs, we achieve a consistent drop in alignment error rates. When deployed on seven lexically constrained translation tasks, we achieve significant improvements in BLEU, specifically around the constrained positions.
+
+ # 1 Introduction
+
+ Online alignment seeks to align a target word to a source word at the decoding step when the word is output in an auto-regressive neural translation model (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014). This is unlike the more popular offline alignment task that uses the entire target sentence (Och and Ney, 2003). State of the art methods of offline alignment based on matching of whole source and target sentences (Jalili Sabet et al., 2020; Dou and Neubig, 2021) are not applicable for online alignment where we need to commit on the alignment of a target word based on only the generated prefix thus far.
+
+ An important application of online alignment is lexically constrained translation, which allows injection of domain-specific terminology and other phrasal constraints during decoding (Hasler et al., 2018; Hokamp and Liu, 2017; Alkhouli et al., 2018; Crego et al., 2016). Other applications include preservation of markings between the source and target (Müller, 2017), and supporting source word edits in summarization (Shen et al., 2019). These applications need to infer the specific source token which aligns with each output token. Thus, alignment and translation must be done simultaneously.
+
+ Existing online alignment methods can be categorized into Prior and Posterior alignment methods. Prior alignment methods (Garg et al., 2019; Song et al., 2020) extract alignment based on the attention at time step $t$ when outputting token $y_{t}$ . The attention probabilities at time-step $t$ are conditioned on tokens output before time $t$ . Thus, the alignment is estimated prior to observing $y_{t}$ . Naturally, the quality of alignment can be improved if we condition on the target token $y_{t}$ (Shankar and Sarawagi, 2019). This motivated Chen et al. (2020) to propose a posterior alignment method where alignment is calculated from the attention probabilities at the next decoder step $t + 1$ . While alignment quality improved as a result, their method is not truly online since it does not generate alignment synchronously with the token. The delay of one step makes it difficult and cumbersome to incorporate terminology constraints during beam decoding.
+
+ We propose a truly online posterior alignment method that provides higher alignment accuracy than existing online methods, while also being synchronous. Because of this, we can easily integrate posterior alignment to improve lexicon-constrained translation in state-of-the-art constrained beam-search algorithms such as VDBA (Hu et al., 2019). Our method (Align-VDBA) presents a significant departure from existing papers on alignment-guided constrained translation (Chen et al., 2020; Song et al., 2020) that employ a greedy algorithm with a poor constraint satisfaction rate (CSR). For example, on the ja→en task their CSR is 20 points lower than ours. Moreover, the latter does not benefit from larger beam sizes, unlike VDBA-based methods that significantly improve with larger beam widths. Compared to Chen et al. (2020), our method improves average overall BLEU scores by 1.2 points and average BLEU scores around the constrained span by up to 9 points. In the evaluations performed in these earlier works, VDBA was not allocated the slightly higher beam size needed to pro-actively enforce constraints without compromising BLEU. Compared to Hu et al. (2019) (VDBA), this paper's contributions include online alignments and their use in more fluent constraint placement and efficient allocation of beams.
+
+ # Contributions
+
+ - A truly online posterior alignment method that integrates into existing NMT systems via a trainable lightweight module.
+ - Higher online alignment accuracy on five language pairs including two distant language pairs where we improve over the best existing method in seven out of ten translation tasks.
+ - Principled method of modifying VDBA to incorporate posterior alignment probabilities in lexically-constrained decoding. VDBA enforces constraints ignoring source alignments; our change (Align-VDBA) leads to more fluent constraint placement and significant BLEU increase particularly for smaller beams.
+ - Establishing that VDBA-based pro-active constrained inference should be preferred over prevailing greedy alignment-guided inference (Chen et al., 2021; Song et al., 2020). Further, VDBA and our Align-VDBA inference with beam size 10 provide a 1.2 BLEU increase over these methods with the same beam size.
+
+ # 2 Posterior Online Alignment
+
+ Given a sentence $\mathbf{x} = x_{1},\ldots ,x_{S}$ in the source language and a sentence $\mathbf{y} = y_{1},\dots ,y_{T}$ in the target language, an alignment $\mathcal{A}$ between the word strings is a subset of the Cartesian product of the word positions (Brown et al., 1993; Och and Ney, 2003): $\mathcal{A}\subseteq \{(s,t):s = 1,\dots ,S;t = 1,\dots ,T\}$ such that the aligned words can be considered translations of each other. An online alignment at timestep $t$ commits on alignment of the $t^{\mathrm{th}}$ output token conditioned only on $\mathbf{x}$ and $\mathbf{y}_{< t} = y_1,y_2,\dots y_{t - 1}$ . Additionally, if token $y_{t}$ is also available we call it a posterior online alignment. We seek to embed online alignment within existing NMT systems. We will first briefly describe the architecture of state-of-the-art NMT systems, then elaborate on how alignments are computed from attention distributions in prior work and highlight some limitations, before describing our proposed approach.
+
+ # 2.1 Background
+
+ Transformers (Vaswani et al., 2017) adopt the popular encoder-decoder paradigm used for sequence-to-sequence modeling (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015). The encoder and decoder are both multi-layered networks with each layer consisting of a multi-headed self-attention and a feedforward module. The decoder layers additionally use multi-headed attention to encoder states. We elaborate on this mechanism next since it plays an important role in alignments.
+
+ # 2.1.1 Decoder-Encoder Attention in NMTs
+
+ The encoder transforms the $S$ input tokens into a sequence of token representations $\mathbf{H} \in \mathbb{R}^{S \times d}$ . Each decoder layer (indexed by $\ell \in \{1, \dots, L\}$ ) computes multi-head attention over $\mathbf{H}$ by aggregating outputs from a set of $\eta$ independent attention heads. The attention output from a single head $n \in \{1, \dots, \eta\}$ in decoder layer $\ell$ is computed as follows. Let the output of the self-attention sub-layer in decoder layer $\ell$ at the $t^{\text{th}}$ target token be denoted as $\mathbf{g}_t^\ell$ . Using three projection matrices $\mathbf{W}_Q^{\ell,n}$ , $\mathbf{W}_V^{\ell,n}$ , $\mathbf{W}_K^{\ell,n} \in \mathbb{R}^{d \times d_n}$ , the query vector $\mathbf{q}_t^{\ell,n} \in \mathbb{R}^{1 \times d_n}$ and key and value matrices, $\mathbf{K}^{\ell,n} \in \mathbb{R}^{S \times d_n}$ and $\mathbf{V}^{\ell,n} \in \mathbb{R}^{S \times d_n}$ , are computed using the following projections: $\mathbf{q}_t^{\ell,n} = \mathbf{g}_t^\ell \mathbf{W}_Q^{\ell,n}$ , $\mathbf{K}^{\ell,n} = \mathbf{H} \mathbf{W}_K^{\ell,n}$ , and $\mathbf{V}^{\ell,n} = \mathbf{H} \mathbf{W}_V^{\ell,n}$ . These are used to calculate the attention output from head $n$ , $\mathbf{Z}_t^{\ell,n} = P(\mathbf{a}_t^{\ell,n} | \mathbf{x}, \mathbf{y}_{<t}) \mathbf{V}^{\ell,n}$ , where:
+
+ $$
+ P(\mathbf{a}_{t}^{\ell,n} \mid \mathbf{x}, \mathbf{y}_{<t}) = \mathrm{softmax}\left(\frac{\mathbf{q}_{t}^{\ell,n} (\mathbf{K}^{\ell,n})^{\intercal}}{\sqrt{d}}\right) \tag{1}
+ $$
+
+ For brevity, the conditioning on $\mathbf{x},\mathbf{y}_{< t}$ is dropped and $P(\mathbf{a}_t^{\ell ,n})$ is used to refer to $P(\mathbf{a}_t^{\ell ,n}|\mathbf{x},\mathbf{y}_{< t})$ in the following sections.
+
+ Finally, the multi-head attention output is given by $[\mathbf{Z}_t^{\ell,1},\dots,\mathbf{Z}_t^{\ell,\eta}]\mathbf{W}^O$ where $[ ]$ denotes the column-wise concatenation of matrices and $\mathbf{W}^O \in \mathbb{R}^{d\times d}$ is an output projection matrix.
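As a concrete reference point, the single-head computation above can be sketched in a few lines of NumPy. All sizes, weights, and inputs below are toy placeholders (not the paper's trained parameters); the sketch only illustrates the query/key projections and the scaled softmax of Eq. (1).

```python
import numpy as np

def attention_probs(g_t, H, W_Q, W_K, d):
    # Eq. (1): softmax(q_t K^T / sqrt(d)) over the S source positions.
    q_t = g_t @ W_Q                       # (1, d_n) query from decoder state
    K = H @ W_K                           # (S, d_n) keys from encoder output H
    scores = (q_t @ K.T) / np.sqrt(d)     # (1, S) scaled dot products
    e = np.exp(scores - scores.max())     # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
S, d, d_n = 4, 8, 2                       # toy sizes: 4 source tokens
H = rng.normal(size=(S, d))               # encoder output
g_t = rng.normal(size=(1, d))             # decoder self-attention output
W_Q = rng.normal(size=(d, d_n))
W_K = rng.normal(size=(d, d_n))
p = attention_probs(g_t, H, W_Q, W_K, d)  # one head's P(a_t | x, y_<t)
```

The multi-head output then concatenates `eta` such heads' value-weighted outputs before the output projection, as described above.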
+
+ # 2.1.2 Alignments from Attention
+
+ Several prior works have proposed to extract word alignments from the above attention probabilities. For example, Garg et al. (2019) propose a simple method called NAIVEATT that aligns a source word to the $t^{\text{th}}$ target token using $\operatorname{argmax}_j \frac{1}{\eta} \sum_{n=1}^{\eta} P(a_{t,j}^{\ell,n} | \mathbf{x}, \mathbf{y}_{<t})$ where $j$ indexes the source tokens. In NAIVEATT, we note that the attention probabilities $P(a_{t,j}^{\ell,n} | \mathbf{x}, \mathbf{y}_{<t})$ at decoding step $t$ are not conditioned on the current output token $y_t$ . Alignment quality would benefit from conditioning on $y_t$ as well. This observation prompted Chen et al. (2020) to extract the alignment of token $y_t$ using attention $P(a_{t,j}^{\ell,n} | \mathbf{x}, \mathbf{y}_{\leq t})$ computed at time step $t+1$ . The asynchronicity inherent to this shift-by-one approach (SHIFTATT) makes it difficult and more computationally expensive to incorporate lexical constraints during beam decoding.
+
+ # 2.2 Our Proposed Method: POSTALN
+
+ We propose POSTALN that produces posterior alignments synchronously with the output tokens, while being more computationally efficient compared to previous approaches like SHIFTATT. We incorporate a lightweight alignment module to convert prior attention to posterior alignments in the same decoding step as the output. Figure 1 illustrates how this alignment module fits within the standard Transformer architecture.
+
+ The alignment module is placed at the penultimate decoder layer $\ell = L - 1$ and takes as input (1) the encoder output $\mathbf{H}$ , (2) the output of the self-attention sub-layer of decoder layer $\ell$ , $\mathbf{g}_t^\ell$ and, (3) the embedding of the decoded token $\mathbf{e}(y_t)$ . Like in standard attention it projects $\mathbf{H}$ to obtain a key matrix, but to obtain the query matrix it uses both decoder state $\mathbf{g}_t^\ell$ (that summarizes $\mathbf{y}_{< t}$ ) and $\mathbf{e}(y_t)$ to compute the posterior alignment $P(\mathbf{a}_t^{\mathrm{post}})$ as:
+
+ $$
+ P(\mathbf{a}_{t}^{\mathrm{post}}) = \frac{1}{\eta} \sum_{n=1}^{\eta} \mathrm{softmax}\left(\frac{\mathbf{q}_{t,\mathrm{post}}^{n} (\mathbf{K}_{\mathrm{post}}^{n})^{\intercal}}{\sqrt{d}}\right),
+ $$
+
+ $$
+ \mathbf{q}_{t,\mathrm{post}}^{n} = [\mathbf{g}_{t}^{\ell}, \mathbf{e}(y_{t})] \mathbf{W}_{Q,\mathrm{post}}^{n}, \quad \mathbf{K}_{\mathrm{post}}^{n} = \mathbf{H} \mathbf{W}_{K,\mathrm{post}}^{n}
+ $$
+
+ Here $\mathbf{W}_{Q,\mathrm{post}}^n\in \mathbb{R}^{2d\times d_n}$ and $\mathbf{W}_{K,\mathrm{post}}^n\in \mathbb{R}^{d\times d_n}$ .
+
+ This computation is synchronous with producing the target token $y_{t}$ , thus making it compatible with beam search decoding (as elaborated further in Section 3). It also accrues minimal computational overhead since $P(\mathbf{a}_t^{\mathrm{post}})$ is defined using $\mathbf{H}$ and $\mathbf{g}_t^{L - 1}$ , both of which are already cached during a standard decoding pass. Note that if the query vector $\mathbf{q}_{t,\mathrm{post}}^n$ is computed using only $\mathbf{g}_t^{L - 1}$ , without concatenating $\mathbf{e}(y_t)$ , then we get prior alignments that we refer to as PRIORATT. In our experiments, we explicitly compare PRIORATT with POSTALN to show the benefits of using $y_{t}$ in deriving alignments while keeping the rest of the architecture intact.
+
+ ![](images/b5e4ec500e94f378c3a7ccc76f3c6fdcebc33910b56c8b30f4722a8112c4b355.jpg)
+
+ Figure 1: Our alignment module is an encoder-decoder attention sub-layer, similar to the existing cross-attention sub-layer. It takes as inputs the encoder output $\mathbf{H}$ as the key, and the concatenation of the output of the previous self-attention layer $\mathbf{g}_t^\ell$ and the currently decoded token $y_{t}$ as the query, and outputs posterior alignment probabilities $\mathbf{a}_t^{\mathrm{post}}$ .
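The posterior alignment computation can be sketched in NumPy as follows. The dimensions, random weights, and inputs are toy stand-ins for the trained $\mathbf{W}_{Q,\mathrm{post}}^n$ , $\mathbf{W}_{K,\mathrm{post}}^n$ and cached activations; the sketch only shows the concatenated query and the head-averaged softmax.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def postaln(g_t, e_yt, H, W_Q_post, W_K_post, d):
    # Query: decoder state g_t concatenated with the embedding of the
    # *current* token y_t, so the alignment is posterior to emitting y_t.
    q_in = np.concatenate([g_t, e_yt], axis=-1)          # (1, 2d)
    per_head = [softmax((q_in @ Wq) @ (H @ Wk).T / np.sqrt(d))
                for Wq, Wk in zip(W_Q_post, W_K_post)]
    return np.mean(per_head, axis=0)                     # (1, S), mean over heads

rng = np.random.default_rng(1)
S, d, d_n, eta = 5, 8, 4, 2                              # toy sizes
H = rng.normal(size=(S, d))                              # encoder output (cached)
g_t = rng.normal(size=(1, d))                            # decoder self-attn output
e_yt = rng.normal(size=(1, d))                           # embedding of y_t
W_Q_post = [rng.normal(size=(2 * d, d_n)) for _ in range(eta)]
W_K_post = [rng.normal(size=(d, d_n)) for _ in range(eta)]
p_post = postaln(g_t, e_yt, H, W_Q_post, W_K_post, d)    # P(a_t^post), shape (1, S)
```

Dropping `e_yt` from the concatenation recovers the PRIORATT variant described above.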
+
+ Training Our posterior alignment sub-layer is trained using alignment supervision, while freezing the rest of the translation model parameters. Specifically, we train a total of $3d^{2}$ additional parameters across the matrices $\mathbf{W}_{K,\mathrm{post}}^{n}$ and $\mathbf{W}_{Q,\mathrm{post}}^{n}$ . Since gold alignments are very tedious and expensive to create for large training datasets, alignment labels are typically obtained using existing techniques. We use bidirectional symmetrized SHIFTATT alignments, denoted by $S_{i,j}$ that refers to an alignment between the $i^{\mathrm{th}}$ target word and the $j^{\mathrm{th}}$ source word, as reference labels to train our alignment sub-layer. Then the objective (following Garg et al. (2019)) can be defined as:
+
+ $$
+ \max_{\mathbf{W}_{Q,\mathrm{post}}^{n}, \mathbf{W}_{K,\mathrm{post}}^{n}} \frac{1}{T} \sum_{i=1}^{T} \sum_{j=1}^{S} S_{i,j} \log\left(P(a_{i,j}^{\mathrm{post}} \mid \mathbf{x}, \mathbf{y}_{\leq i})\right)
+ $$
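As a numerical check of the objective, the (negated, minimization-form) loss can be computed directly; the reference matrix and model probabilities below are made-up toy values, with the reference links standing in for symmetrized SHIFTATT labels.

```python
import numpy as np

def alignment_loss(S_ref, P_post, eps=1e-9):
    # Negated training objective, for minimization:
    # -(1/T) * sum_i sum_j S[i, j] * log P(a_{i,j}^post | x, y_<=i)
    T = S_ref.shape[0]
    return -np.sum(S_ref * np.log(P_post + eps)) / T

# Toy case: T=2 target words, S=3 source words. S_ref marks the
# (hypothetical) reference alignment links.
S_ref = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
P_post = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.7, 0.1]])
loss = alignment_loss(S_ref, P_post)
```

Only the $3d^2$ parameters of the alignment sub-layer receive gradients from this loss; the translation model stays frozen.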
+
+ Next, we demonstrate the role of posterior online alignments on an important downstream task.
+
+ # 3 Lexicon Constrained Translation
+
+ In the lexicon constrained translation task, for each to-be-translated sentence $\mathbf{x}$ , we are given a set of source text spans and the corresponding target tokens in the translation. A constraint $\mathcal{C}_j$ comprises a pair $(\mathcal{C}_j^x,\mathcal{C}_j^y)$ where $\mathcal{C}_j^x = (p_j,p_j + 1,\ldots ,p_j + \ell_j)$ indicates input token positions, and $\mathcal{C}_j^y = (y_1^j,y_2^j\dots ,y_{m_j}^j)$ denote target tokens that are translations of the input tokens $x_{p_j}\dots x_{p_j + \ell_j}$ . For the output tokens we do not know their positions in the target sentence. The different constraints are non-overlapping and each is expected to be used exactly once. The goal is to translate the given sentence $\mathbf{x}$ and satisfy as many constraints in $\mathcal{C} = \bigcup_{j}\mathcal{C}_{j}$ as possible while ensuring fluent and correct translations. Since the constraints do not specify target token position, it is natural to use online alignments to guide when a particular constraint is to be enforced.
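One possible in-memory representation of such constraints is sketched below; the class and field names are ours, and the example spans and tokens are hypothetical.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Constraint:
    # C_j = (C_j^x, C_j^y): source token positions and the target tokens
    # they must translate to; target-side positions are unknown a priori.
    src_span: Tuple[int, ...]    # (p_j, ..., p_j + l_j), contiguous
    tgt_tokens: Tuple[str, ...]  # (y_1^j, ..., y_{m_j}^j)

# Hypothetical example: two non-overlapping constraints, each expected
# to be used exactly once during decoding.
constraints = [
    Constraint(src_span=(2, 3), tgt_tokens=("neural", "network")),
    Constraint(src_span=(6,), tgt_tokens=("decoder",)),
]
spans = [c.src_span for c in constraints]
all_tgt = [t for c in constraints for t in c.tgt_tokens]
```

Since only the source-side positions are known, online alignments are the natural signal for deciding when each `tgt_tokens` sequence should be emitted.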
+
+ # 3.1 Background: Constrained Decoding
+
+ Existing inference algorithms for incorporating lexicon constraints differ in how pro-actively they enforce the constraints. A passive method is used in Song et al. (2020) where constraints are enforced only when the prior alignment is at a constrained source span. Specifically, if at decoding step $t$ , $i = \operatorname{argmax}_{i'} P(a_{t,i'})$ is present in some constraint $C_j^x$ , the output token is fixed to the first token $y_1^j$ from $C_j^y$ . Otherwise, the decoding proceeds as usual. Also, if the translation of a constraint $C_j$ has started, the same is completed ( $y_2^j$ through $y_{m_j}^j$ ) for the next $m_j - 1$ decoding steps before resuming unconstrained beam search. The pseudocode for this method is provided in Appendix G.
+
+ For the posterior alignment methods of Chen et al. (2020) this leads to a rather cumbersome inference (Chen et al., 2021). First, at step $t$ they predict a token $\hat{y}_t$ , then start decoding step $t + 1$ with $\hat{y}_t$ as input to compute the posterior alignment from attention at step $t + 1$ . If the maximum alignment is to the constrained source span $\mathcal{C}_j^x$ they revise the output token to be $y_1^j$ from $\mathcal{C}_j^y$ , but the output score for further beam-search continues to be of $\hat{y}_t$ . In this process both the posterior alignment and token probabilities are misrepresented since they are both based on $\hat{y}_t$ instead of the finally output token $y_1^j$ . The decoding step at $t + 1$ needs to be restarted after the revision. The overall algorithm continues to be normal beam-search, which implies that the constraints are not enforced pro-actively.
+
+ Several prior works have proposed more pro-active enforcement of constraints, including Grid Beam Search (GBA, Hokamp and Liu (2017)), Dynamic Beam Allocation (DBA, Post and Vilar (2018)) and Vectorized Dynamic Beam Allocation (VDBA, Hu et al. (2019)). The latest of these, VDBA, is efficient and available in public NMT systems (Ott et al., 2019; Hieber et al., 2020). Here multiple banks, each corresponding to a particular number of completed constraints, are maintained. At each decoding step, a hypothesis can either start a new constraint and move to a new bank or continue in the same bank (either by not starting a constraint or progressing on a constraint mid-completion). This allows them to achieve near $100\%$ enforcement. However, VDBA enforces the constraints by considering only the target tokens of the lexicon and totally ignores the alignment of these tokens to the source span. This can place constraints at unnatural locations, hurting fluency. Examples appear in Table 4 where we find that VDBA just attaches the constrained tokens at the end of the sentence.
+
+ # 3.2 Our Proposal: Align-VDBA
+
+ We modify VDBA with alignment probabilities to better guide constraint placement. The score of a constrained token is now the joint probability of the token, and the probability of the token being aligned with the corresponding constrained source span. Formally, if the current token $y_{t}$ is a part of the $j^{\mathrm{th}}$ constraint i.e., $y_{t} \in \mathcal{C}_{j}^{y}$ , the generation probability of $y_{t}$ , $P(y_{t}|\mathbf{x},\mathbf{y}_{< t})$ is scaled by multiplying with the alignment probabilities of $y_{t}$ with $\mathcal{C}_j^x$ , the source span for constraint $j$ . Thus, the updated probability is given by:
+
+ $$
+ \underbrace{P\left(y_{t}, \mathcal{C}_{j}^{x} \mid \mathbf{x}, \mathbf{y}_{<t}\right)}_{\text{Joint Prob.}} = \underbrace{P\left(y_{t} \mid \mathbf{x}, \mathbf{y}_{<t}\right)}_{\text{Token Prob.}} \underbrace{\sum_{r \in \mathcal{C}_{j}^{x}} P\left(a_{t,r}^{\mathrm{post}} \mid \mathbf{x}, \mathbf{y}_{\leq t}\right)}_{\text{Src Align. Prob.}} \tag{2}
+ $$
+
+ $P(y_{t},\mathcal{C}_{j}^{x}|\mathbf{x},\mathbf{y}_{< t})$ denotes the joint probability of outputting the constrained token and the alignment being on the corresponding source span. Since the supervision for the alignment probabilities was noisy, we found it useful to recalibrate the alignment distribution using a temperature scale $T$ , so that the recalibrated probability is $\propto \operatorname*{Pr}(a_{t,r}^{\mathrm{post}}|\mathbf{x},\mathbf{y}_{\leq t})^{\frac{1}{T}}$ . We used $T = 2$ i.e., square-root of the alignment probability.
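The joint score with temperature recalibration can be sketched as below. The alignment vector and token probability are toy numbers; note the text only states the recalibrated probability is proportional to $p^{1/T}$ , so renormalizing the scaled distribution is our assumption.

```python
import numpy as np

def constrained_score(tok_prob, align_probs, span, T=2.0):
    # Joint score in the spirit of Eq. (2), with the alignment
    # distribution recalibrated as p^(1/T). Renormalization after
    # scaling is an implementation assumption.
    a = align_probs ** (1.0 / T)
    a = a / a.sum()
    return tok_prob * a[list(span)].sum()

align = np.array([0.1, 0.5, 0.3, 0.1])   # toy posterior alignment at step t
score = constrained_score(0.4, align, span=(1, 2), T=2.0)
```

With $T = 2$ the square-root flattens the alignment distribution, so a constraint with moderate alignment mass is not over-penalized relative to the token probability.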
+
+ Align-VDBA also uses posterior alignment probabilities to improve the efficiency of VDBA.
+
+ Algorithm 1 Align-VDBA: Modifications to DBA shown in blue. (Adapted from Post and Vilar (2018))
+ 1: Inputs beam: $K$ hypotheses in beam, scores: $K \times |V_{T}|$ matrix of scores where scores $[k,y]$ denotes the score of the $k^{\mathrm{th}}$ hypothesis extended with token $y$ at this step, constraints: $\left\{\left(\mathcal{C}_j^x,\mathcal{C}_j^y\right)\right\}$ , threshold
+ 2: candidates $\leftarrow [(k,y,\text{scores}[k,y],\text{beam}[k].\text{constraints.add}(y))$ for $k,y$ in ARGMAX_K(scores)]
+ 3: for $1 \leq k \leq K$ do
+ 4: for all $y \in V_{T}$ that are unmet constraints for beam[k] do
+ 5: alignProb $\leftarrow \Sigma_{\text{constraint\_xs}(y)}$ POSTALN $(k,y)$
+ 6: if alignProb > threshold then
+ 7: candidates.append $(k,y,\text{scores}[k,y] \times \text{alignProb},\text{beam}[k].\text{constraints.add}(y))$
+ 8: candidates.append $(k,y,\text{scores}[k,y],\text{beam}[k].\text{constraints.add}(y))$
+ 9: $w = \text{ARGMAX}(\text{scores}[k,:])$
+ 10: candidates.append $(k,w,\text{scores}[k,w],\text{beam}[k].\text{constraints.add}(w))$
+ 11: newBeam $\leftarrow$ ALLOCATE(candidates, K)
+
+ Currently, VDBA attempts beam allocation for each unmet constraint since it has no way to discriminate. In Align-VDBA we allocate only when the alignment probability is greater than a threshold. When the beam size is small (say 5) this yields higher accuracy due to more efficient beam utilization. We used a threshold of 0.1 for all language pairs other than ro→en for which a threshold of 0.3 was used. Further, the thresholds were used for the smaller beam size of 5 and not for larger beam sizes of 10 and 20.
+
+ We present the pseudocode of our modification (steps 5, 6 and 7, in blue) to DBA in Algorithm 1. Other details of the algorithm, including the handling of constraints and the allocation step (step 11), are involved and we refer the reader to Post and Vilar (2018) and Hu et al. (2019) for these details. The point of this code is to show that our proposed posterior alignment method can be easily incorporated into these algorithms so as to provide a more principled scoring of constrained hypotheses in a beam than the ad hoc revision-based method of Chen et al. (2021). Additionally, posterior alignments lead to better placement of constraints than in the original VDBA algorithm.
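The alignment-gated expansion (steps 4-7 of Algorithm 1) for a single hypothesis can be sketched as below; the scores and alignment masses are hypothetical, and bank bookkeeping and the ALLOCATE step are omitted.

```python
def extend_with_constraints(scores_k, align_k, unmet, threshold=0.1):
    # Steps 4-7 of Algorithm 1 for one hypothesis k: propose an unmet
    # constraint token y only if its summed posterior alignment on the
    # constraint's source span clears `threshold`; score it jointly as
    # in Eq. (2). Bank allocation (step 11) is not modeled here.
    out = []
    for y in unmet:
        align_prob = align_k[y]
        if align_prob > threshold:
            out.append((y, scores_k[y] * align_prob))
    return out

# Hypothetical numbers: "neural" is well aligned to its source span,
# "decoder" is not, so only the former becomes a constrained candidate.
scores_k = {"neural": 0.30, "decoder": 0.05}
align_k = {"neural": 0.60, "decoder": 0.02}
cands = extend_with_constraints(scores_k, align_k, ["neural", "decoder"])
```

This gating is what frees up beam slots at small beam sizes: poorly aligned constraint starts are simply not proposed at this step.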
+
+ # 4 Experiments
+
+ We first compare our proposed posterior online alignment method against existing methods on alignment quality in Section 4.2, and in Section 4.3, we demonstrate the impact of the improved alignments on the lexicon-constrained translation task.
+
+ # 4.1 Setup
+
+ We deploy the fairseq toolkit (Ott et al., 2019) and use the transformer_iwslt_de_en preconfigured model for all our experiments. Other configuration parameters include: Adam optimizer with $\beta_{1} = 0.9$ , $\beta_{2} = 0.98$ , a learning rate of $5\mathrm{e}{-4}$ with 4000 warm-up steps, an inverse square root schedule, weight decay of $1\mathrm{e}{-4}$ , label smoothing of 0.1, 0.3 probability dropout and a batch size of 4500 tokens. The transformer models are trained for 50,000 iterations. Then, the alignment module is trained for 10,000 iterations, keeping the other model parameters fixed. A joint byte pair encoding (BPE) is learned for the source and the target languages with 10k merge operations (Sennrich et al., 2016) using subword-nmt.
+
+ <table><tr><td></td><td>de-en</td><td>en-fr</td><td>ro-en</td><td>en-hi</td><td>ja-en</td></tr><tr><td>Training</td><td>1.9M</td><td>1.1M</td><td>0.5M</td><td>1.6M</td><td>0.3M</td></tr><tr><td>Validation</td><td>994</td><td>1000</td><td>999</td><td>25</td><td>1166</td></tr><tr><td>Test</td><td>508</td><td>447</td><td>248</td><td>140</td><td>1235</td></tr></table>
+
+ Table 1: Number of sentence pairs for the five datasets used. Note that gold alignments are available only for the handful of sentence pairs in the test set.
+
+ All experiments were done on a single 11GB Nvidia GeForce RTX 2080 Ti GPU on a machine with 64 core Intel Xeon CPU and 755 GB memory. The vanilla Transformer models take between 15 to 20 hours to train for different datasets. Starting from the alignments extracted from these models, the POSTALN alignment module trains in about 3 to 6 hours depending on the dataset.
+
+ # 4.2 Alignment Task
+
+ We evaluate online alignments on ten translation tasks spanning five language pairs. Three of these are popular in alignment papers (Zenkel et al., 2019): German-English (de-en), English-French (en-fr), Romanian-English (ro-en). These are all European languages that follow the same subject-verb-object (SVO) ordering. We also present results on two distant language pairs, English-Hindi (en-hi) and English-Japanese (ja-en), that follow an SOV word order which is different from the SVO word order of English. Data statistics are shown in Table 1 and details are in Appendix C.
+
+ <table><tr><td rowspan="2">Method</td><td rowspan="2">Delay</td><td colspan="2">de-en</td><td colspan="2">en-fr</td><td colspan="2">ro-en</td><td colspan="2">en-hi</td><td colspan="2">ja-en</td></tr><tr><td>de→en</td><td>en→de</td><td>en→fr</td><td>fr→en</td><td>ro→en</td><td>en→ro</td><td>en→hi</td><td>hi→en</td><td>ja→en</td><td>en→ja</td></tr><tr><td colspan="12">Statistical Methods (Not Online)</td></tr><tr><td>GIZA++ (Och and Ney, 2003)</td><td>End</td><td>18.9</td><td>19.7</td><td>7.3</td><td>7.0</td><td>27.6</td><td>28.3</td><td>35.9</td><td>36.4</td><td>41.8</td><td>39.0</td></tr><tr><td>FastAlign (Dyer et al., 2013)</td><td>End</td><td>28.4</td><td>32.0</td><td>16.4</td><td>15.9</td><td>33.8</td><td>35.5</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan="12">No Alignment Training</td></tr><tr><td>NAIVEATT (Garg et al., 2019)</td><td>0</td><td>32.4</td><td>40.0</td><td>24.0</td><td>31.2</td><td>37.3</td><td>33.2</td><td>49.1</td><td>53.8</td><td>62.2</td><td>63.5</td></tr><tr><td>SHIFTATT (Chen et al., 2020)</td><td>+1</td><td>20.0</td><td>22.9</td><td>14.7</td><td>20.4</td><td>26.9</td><td>27.4</td><td>35.3</td><td>38.6</td><td>53.6</td><td>48.6</td></tr><tr><td colspan="12">With Alignment Training</td></tr><tr><td>PRIORATT</td><td>0</td><td>23.4</td><td>25.8</td><td>14.0</td><td>16.6</td><td>29.3</td><td>27.2</td><td>36.4</td><td>35.1</td><td>52.7</td><td>50.9</td></tr><tr><td>SHIFTAET (Chen et al., 2020)</td><td>+1</td><td>15.8</td><td>19.5</td><td>10.3</td><td>10.4</td><td>22.4</td><td>23.7</td><td>29.3</td><td>29.3</td><td>42.5</td><td>41.9</td></tr><tr><td>POSTALN [Ours]</td><td>0</td><td>15.5</td><td>19.5</td><td>9.9</td><td>10.4</td><td>21.8</td><td>23.2</td><td>28.7</td><td>28.9</td><td>41.2</td><td>42.2</td></tr></table>
+
+ Table 2: AER for de-en, en-fr, ro-en, en-hi, ja-en language pairs. "Delay" indicates the decoding step at which the alignment of the target token is available. NAIVEATT, PRIORATT and POSTALN are truly online and output alignment at the same time step (delay=0), while SHIFTATT and SHIFTAET output one decoding step later.
+
+ Evaluation Method: For evaluating alignment performance, the target sentence must be exactly the one for which the gold alignments are provided. Thus, for the alignment experiments, we force the output token to be from the gold target and only infer the alignment. We then report the Alignment Error Rate (AER) (Och and Ney, 2000) between the gold alignments and the predicted alignments for different methods. Though our focus is online alignment, for comparison to previous works, we also report results on bidirectional symmetrized alignments in Appendix D.
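The AER metric of Och and Ney (2000) is computed from sure and possible gold links as sketched below; the example alignment sets are toy values.

```python
def aer(sure, possible, predicted):
    # Alignment Error Rate (Och and Ney, 2000):
    # AER = 1 - (|A & S| + |A & P|) / (|A| + |S|),
    # with S = sure links, P = possible links (S is a subset of P),
    # and A = predicted links, each a set of (target, source) pairs.
    A, S, P = set(predicted), set(sure), set(possible)
    return 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))

# Toy gold standard: two sure links plus one extra possible link.
sure = [(0, 0), (1, 2)]
possible = [(0, 0), (1, 2), (1, 1)]
predicted = [(0, 0), (1, 1)]
err = aer(sure, possible, predicted)
```

A prediction matching all sure links exactly yields AER 0; links outside the possible set raise the error.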
+
+ Methods compared: We compare our method with existing statistical alignment models, namely GIZA++ (Och and Ney, 2003) and FastAlign (Dyer et al., 2013), and with the recent Transformer-based alignment methods of Garg et al. (2019) (NAIVEATT) and Chen et al. (2020) (SHIFTATT and SHIFTAET). SHIFTAET is a variant of SHIFTATT that likewise delays alignment computation by one time step, but additionally includes a learned attention sublayer to compute alignment probabilities. We also present results on PRIORATT, which is similar to POSTALN but does not use $\mathbf{y}_t$.
+
+ Results: The alignment results are shown in Table 2. First, AERs using the statistical methods FastAlign and GIZA++ are shown. For a fair comparison, the IBM models used by GIZA++ are trained on the same sub-word units as the Transformer models, and sub-word alignments are converted to word-level alignments for AER calculation. (GIZA++ has remained a state-of-the-art alignment technique and continues to be compared against.) Next, we present alignment results for two vanilla Transformer models, NAIVEATT and SHIFTATT, that do not train a separate alignment module. The high AER of NAIVEATT shows that attention-as-is is very distant from alignment, but posterior attention is closer to alignment than prior attention. We then turn to methods that train alignment-specific parameters: PRIORATT, a prior attention method, and SHIFTAET and POSTALN, both posterior alignment methods. With training, even the prior-attention method PRIORATT surpasses the untrained posterior method, and within each group the methods with a trained alignment module outperform those without by a large margin. The posterior attention methods in turn outperform the prior attention methods by 4.0 to 8.0 AER points. POSTALN matches or improves on SHIFTAET (achieving the lowest AER in nine out of ten cases in Table 2) while avoiding the one-step delay in alignment generation. Even on the distant language pairs, POSTALN achieves significant error reductions; for ja→en, it reduces AER by 1.3 points compared to SHIFTAET, which is not a truly online method. Figure 2 shows examples illustrating the superior alignments of POSTALN compared to NAIVEATT and PRIORATT.
+
+ # 4.3 Impact of POSTALN on Lexicon-Constrained Translation
+
+ We next depict the impact of improved AERs from our posterior alignment method on a downstream lexicon-constrained translation task. Following previous work (Hokamp and Liu, 2017; Post and Vilar, 2018; Song et al., 2020; Chen et al., 2020, 2021), we extract constraints using the gold alignments and gold translations. Up to three constraints of up to three words each are used for each sentence. Spans correctly translated by a greedy decoding are not selected as constraints.
+
+ ![](images/5ada8c4056d6ffa85ed4f9251ef27356d9c05993dc89fd400bce9b8673f56003.jpg)
+
+ ![](images/b8602ebee6625cbcac78935ae093d2b3eeb849394d9950adf577d5623d32859b.jpg)
+
+ ![](images/f441193c38b833860755f27fea41f6de0c9ca0fbd36d61cee011adb39f76f695.jpg)
+
+ ![](images/b14c704d764aff2c7ffc35bc20b4b81cdb4f5bcb1b839f51d8ad9316f45d7470.jpg)
+
+ ![](images/0630ee0e5bf20214d8d50aa5cce1e8475b13b6bfbbf4f1a4a5282dd5288f3ddf.jpg)
+
+ ![](images/92c91267867736f679785ebdc39754dfd3cd2a2c742dfffebb887001543d4a47.jpg)
+
+ ![](images/9bd1e2f7dd7331369b910e5e9f6d479ea9a3703a514aeceac32614497c937e40.jpg)
+
+ ![](images/91037571b76b311e7de4b926aa38e512e5359ca080b12395950fc43d1ce2e211.jpg)
+
+ Figure 2: Alignments for de→en (top row) and en→hi (bottom row) by NAIVEATT, PRIORATT, and POSTALN. Note that POSTALN is most similar to the Gold alignments in the last column.
+
+ <table><tr><td rowspan="2">Method</td><td colspan="4">de→en</td><td colspan="4">en→fr</td><td colspan="4">ro→en</td><td colspan="4">en→hi</td><td colspan="4">ja→en</td></tr><tr><td>BLEU-C</td><td>CSR</td><td>BLEU</td><td>Time</td><td>BLEU-C</td><td>CSR</td><td>BLEU</td><td>Time</td><td>BLEU-C</td><td>CSR</td><td>BLEU</td><td>Time</td><td>BLEU-C</td><td>CSR</td><td>BLEU</td><td>Time</td><td>BLEU-C</td><td>CSR</td><td>BLEU</td><td>Time</td></tr><tr><td>No constraints</td><td>0.0</td><td>4.6</td><td>32.9</td><td>87</td><td>0.0</td><td>8.7</td><td>34.8</td><td>64</td><td>0.0</td><td>8.8</td><td>33.4</td><td>47</td><td>0.0</td><td>6.3</td><td>19.7</td><td>21</td><td>0.0</td><td>8.8</td><td>18.9</td><td>237</td></tr><tr><td>NAIVEATT</td><td>28.7</td><td>86.1</td><td>36.6</td><td>147</td><td>36.5</td><td>88.0</td><td>38.3</td><td>93</td><td>33.3</td><td>92.3</td><td>36.5</td><td>99</td><td>22.5</td><td>88.4</td><td>23.6</td><td>27</td><td>15.1</td><td>75.9</td><td>20.2</td><td>315</td></tr><tr><td>PRIORATT</td><td>35.0</td><td>92.8</td><td>37.6</td><td>159</td><td>42.1</td><td>94.4</td><td>38.9</td><td>97</td><td>36.0</td><td>91.2</td><td>37.2</td><td>100</td><td>27.2</td><td>91.5</td><td>24.4</td><td>28</td><td>16.7</td><td>79.7</td><td>20.4</td><td>326</td></tr><tr><td>SHIFTATT</td><td>41.0</td><td>96.6</td><td>38.7</td><td>443</td><td>45.0</td><td>93.5</td><td>38.7</td><td>239</td><td>39.2</td><td>94.2</td><td>37.4</td><td>241</td><td>23.2</td><td>78.7</td><td>21.9</td><td>58</td><td>15.2</td><td>72.7</td><td>19.3</td><td>567</td></tr><tr><td>SHIFTAET</td><td>43.1</td><td>97.5</td><td>39.1</td><td>458</td><td>46.6</td><td>94.3</td><td>39.0</td><td>235</td><td>40.8</td><td>94.4</td><td>37.6</td><td>263</td><td>24.3</td><td>80.2</td><td>22.0</td><td>62</td><td>18.1</td><td>75.9</td><td>19.7</td><td>596</td></tr><tr><td>POSTALN</td><td>42.7</td><td>97.2</td><td>39.0</td><td>399</td><td>46.3</td><td>94.1</td><td>38.7</td><td>218</td><td>40.0</td><td>93.5</td><td>37.4</td><td>226</td><td>23.8</td><td>79.0</td><td>22.0</td><td>47</td><td>18.2</td><td>75.7</td><td>19.7</td><td>460</td></tr><tr><td>VDBA</td><td>44.5</td><td>98.9</td><td>38.5</td><td>293</td><td>51.9</td><td>98.5</td><td>39.5</td><td>160</td><td>43.1</td><td>99.1</td><td>37.9</td><td>165</td><td>29.8</td><td>92.3</td><td>24.5</td><td>49</td><td>24.3</td><td>95.6</td><td>21.6</td><td>494</td></tr><tr><td>Align-VDBA</td><td>44.5</td><td>98.6</td><td>38.6</td><td>357</td><td>52.9</td><td>98.4</td><td>39.7</td><td>189</td><td>44.1</td><td>98.9</td><td>38.1</td><td>203</td><td>30.5</td><td>91.5</td><td>24.7</td><td>70</td><td>25.1</td><td>95.5</td><td>21.8</td><td>630</td></tr></table>
+
+ Table 3: Constrained translation results showing BLEU-C, CSR (Constraint Satisfaction Rate), BLEU scores and total decoding time (in seconds) for the test set. Align-VDBA has the highest BLEU-C on all datasets.
+
+ Metrics: Following prior work (Song et al., 2020), we report BLEU (Papineni et al., 2002), the time to translate all test sentences, and the Constraint Satisfaction Rate (CSR). However, since it is trivial to reach $100\%$ CSR by simply copying the constraints into the output, we report a further metric that evaluates the appropriateness of constraint placement: BLEU-C, computed as the BLEU of the constraint (when satisfied) and a window of three words around it. All numbers are averages over five different sets of randomly sampled constraints. The beam size is set to ten by default; results for other beam sizes appear in Appendix E.
+
+ Methods Compared: First, we compare all the alignment methods presented in Section 4.2 on the constrained translation task, using the alignment-based token-replacement algorithm of Song et al. (2020) described in Section 3.1. Next, we present a comparison between VDBA (Hu et al., 2019) and our modification, Align-VDBA.
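As a rough illustration of how such lazy, alignment-based constraint enforcement works (a toy simplification of the idea in Song et al. (2020), not their exact algorithm; the inputs are hypothetical): whenever the decoder's predicted alignment points at a source position covered by a constraint, the constraint's target tokens are substituted for the model's own token.

```python
def constrained_greedy_decode(steps, constraints):
    """steps: iterable of (model_token, aligned_source_pos) pairs from a
    greedy decode; constraints: dict mapping a source position to the list
    of target tokens that must be produced when that position is aligned.
    Lazily replaces the model's token by the constraint tokens the first
    time the decoder aligns to a constrained source position."""
    output, satisfied = [], set()
    for token, src_pos in steps:
        if src_pos in constraints and src_pos not in satisfied:
            output.extend(constraints[src_pos])  # inject constraint tokens
            satisfied.add(src_pos)
        else:
            output.append(token)
    return output

steps = [("the", 0), ("dealer", 1), ("arrived", 2)]
constraints = {1: ["pusher"]}
print(constrained_greedy_decode(steps, constraints))
# ['the', 'pusher', 'arrived']
```

Note that this replacement is only as good as the predicted alignments: a wrong alignment places the constraint at the wrong position, which is exactly what BLEU-C is designed to penalize.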
+
+ Results: Table 3 shows that VDBA and our Align-VDBA, which pro-actively enforce constraints, achieve much higher CSR and BLEU-C than the other, lazy constraint-enforcement methods. For example, for ja→en, the greedy methods achieve a CSR of only $76\%$ compared to $96\%$ for the VDBA-based methods. In terms of overall BLEU too, these methods provide an average increase of 1.2 BLEU points and an average increase of 5 BLEU-C points. On average, Align-VDBA has a 0.7 point higher BLEU-C than VDBA, and a higher BLEU than VDBA on all five datasets. In Table 9 of the Appendix we show that for the smaller beam size of 5, the gap between Align-VDBA and VDBA is even larger (2.1 points higher BLEU-C and 0.4 points higher BLEU). Table 4 lists some example translations by VDBA vs. Align-VDBA. We observe that VDBA places constraints at the end of the translated sentence (e.g., "pusher", "development"), unlike Align-VDBA. In some cases where constraints contain frequent words (like of, the, etc.), VDBA picks a token in the wrong position to tack on the constraint (e.g., "strong backing of", "of qualified"), while Align-VDBA places the constraint correctly.
+
+ <table><tr><td>Constraints</td><td>(gesetz zur, law also), (dealer, pusher)</td></tr><tr><td>Gold</td><td>of course, if a drug addict becomes a pusher, then it is right and necessary that he should pay and answer before the law also.</td></tr><tr><td>VDBA</td><td>certainly, if a drug addict becomes a dealer, it is right and necessary that he should be brought to justice before the law also pusher.</td></tr><tr><td>Align-VDBA</td><td>certainly, if a drug addict becomes a pusher, then it is right and necessary that he should be brought to justice before the law also.</td></tr><tr><td>Constraints</td><td>(von mehrheitsverfahren, of qualified)</td></tr><tr><td>Gold</td><td>... whether this is done on the basis of a vote or of consensus, and whether unanimity is required or some form of qualified majority.</td></tr><tr><td>VDBA</td><td>... whether this is done by means of qualified votes or consensus, and whether unanimity or form of majority procedure apply.</td></tr><tr><td>Align-VDBA</td><td>... whether this is done by voting or consensus, and whether unanimity or form of qualified majority voting are valid.</td></tr><tr><td>Constraints</td><td>(zustimmung der, strong backing of)</td></tr><tr><td>Gold</td><td>... which were adopted with the strong backing of the ppe group and the support of the socialist members.</td></tr><tr><td>VDBA</td><td>... which were then adopted with broad agreement from the ppe group and with the strong backing of the socialist members.</td></tr><tr><td>Align-VDBA</td><td>... which were then adopted with strong backing of the ppe group and with the support of the socialist members.</td></tr><tr><td>Constraints</td><td>(den usa, the usa), (sicherheitssystems an, security system that), (entwicklung, development)</td></tr><tr><td>Gold</td><td>matters we regard as particularly important are improving the working conditions between the weu and the eu and the development of a European security system that is not dependent on the usa .</td></tr><tr><td>VDBA</td><td>we consider the usa ’s european security system to be particularly important in improving working conditions between the weu and the eu and developing a European security system that is independent of the united states development .</td></tr><tr><td>Align-VDBA</td><td>we consider the development of the security system that is independent of the usa to be particularly important in improving working conditions between the weu and the eu .</td></tr></table>
+
+ Table 4: Anecdotes showing constrained translations produced by VDBA vs. Align-VDBA.
+
+ <table><tr><td>Dataset →</td><td colspan="2">IATE.414</td><td colspan="2">Wiktionary.727</td></tr><tr><td>Method (Beam Size) ↓</td><td>BLEU (Δ)</td><td>CSR</td><td>BLEU (Δ)</td><td>CSR</td></tr><tr><td>Baseline (5)</td><td>25.8</td><td>76.3</td><td>26.0</td><td>76.9</td></tr><tr><td>Train-by-app. (5)</td><td>26.0 (+0.2)</td><td>92.9</td><td>26.9 (+0.9)</td><td>90.7</td></tr><tr><td>Train-by-rep. (5)</td><td>26.0 (+0.2)</td><td>94.5</td><td>26.3 (+0.3)</td><td>93.4</td></tr><tr><td>No constraints (10)</td><td>29.7</td><td>77.0</td><td>29.9</td><td>72.4</td></tr><tr><td>SHIFTAET (10)</td><td>29.9</td><td>95.9</td><td>30.4</td><td>97.2</td></tr><tr><td>VDBA (10)</td><td>30.9</td><td>99.8</td><td>30.9</td><td>99.4</td></tr><tr><td>Align-VDBA (10)</td><td>30.9 (+1.2)</td><td>99.8</td><td>31.1 (+1.2)</td><td>99.5</td></tr></table>
+
+ Table 5: Constrained translation results on the two real-world constraint datasets from Dinu et al. (2019).
+
+ Real World Constraints: We also evaluate our method using real-world constraints extracted from the IATE and Wiktionary datasets by Dinu et al. (2019). Table 5 compares Align-VDBA with the soft-constraints method of Dinu et al. (2019), which requires special retraining to teach the model to copy constraints. We reproduce the numbers from their paper in the first three rows. Their baseline is almost 4 BLEU points worse than ours since they used a smaller Transformer NMT model, making running times incomparable. When we compare the increment $\Delta$ in BLEU over the respective baselines, Align-VDBA shows much greater gains of $+1.2$ vs. their $+0.5$. Align-VDBA also provides a larger CSR of 99.6 compared to their 92. Results for other beam sizes, methods and metrics appear in Appendix F.
+
+ # 5 Related Work
+
+ Online Prior Alignment from NMTs: Zenkel et al. (2019) find alignments using a single-head attention submodule, optimized to predict the next token. Garg et al. (2019) and Song et al. (2020) supervise a single alignment head from the penultimate multi-head attention layer with prior alignments from GIZA++ or FastAlign. Bahar et al. (2020) and Shankar et al. (2018) treat alignment as a latent variable and impose a joint distribution over tokens and alignments while supervising on the token marginal of the joint distribution.
+
+ Online Posterior Alignment from NMTs: Shankar and Sarawagi (2019) first identify the role of posterior attention for more accurate alignment. However, their NMT was a single-headed RNN. Chen et al. (2020) implement posterior attention in a multi-headed Transformer, but they incur a delay of one step between token output and alignment. We are not aware of any prior work that extracts truly online posterior alignments in modern NMTs.
+
+ Offline Alignment Systems: Several recent methods apply only in the offline setting: Zenkel et al. (2020) extend an NMT with an alignment module; Nagata et al. (2020) frame alignment as a question answering task; and Jalili Sabet et al. (2020) and Dou and Neubig (2021) leverage similarity between contextual embeddings from pretrained multilingual models (Devlin et al., 2019).
+
+ Lexicon Constrained Translation: Hokamp and Liu (2017), Post and Vilar (2018) and Hu et al. (2019) modify beam search to ensure that target phrases from a given constraint lexicon are present in the translation. These methods ignore alignment with the source but ensure a high success rate for the appearance of the target constraint phrases in the output. Song et al. (2020) and Chen et al. (2021) do consider source alignment but do not enforce constraints, leading to lower CSR. Dinu et al. (2019) and Lee et al. (2021) propose alternative training strategies for constraints, whereas we focus on working with existing models. Recently, non-autoregressive methods have been proposed for enforcing target constraints, but they require that the constraints be given in the order in which they appear in the target translation (Susanto et al., 2020).
+
+ # 6 Conclusion
+
+ In this paper we proposed a simple architectural modification to modern NMT systems to obtain accurate online alignments. The key idea that led to high alignment accuracy was conditioning on the output token. Further, our alignment module enables such conditioning to be performed synchronously with token generation. This property led us to Align-VDBA, a principled decoding algorithm for lexically constrained translation based on the joint distribution of target tokens and source alignments. Future work includes increasing the efficiency of constrained inference and harnessing such joint distributions for other forms of constraints, for example, nested constraints.
+
+ Limitations: All existing methods for hard-constrained inference, including ours, incur considerable runtime overheads, while soft-constrained methods are not accurate enough.
+
+ # Acknowledgements
+
+ We are grateful to the reviewers for their detailed analysis, thoughtful comments and insightful questions which have helped us improve the paper. We are grateful to Priyesh Jain for providing alignment annotations for 50 English-Hindi sentences.
+
+ # References
+
+ Tamer Alkhouli, Gabriel Bretschner, and Hermann Ney. 2018. On the alignment problem in multi-head attention-based neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 177-185, Brussels, Belgium. Association for Computational Linguistics.
+
+ Parnia Bahar, Nikita Makarov, and Hermann Ney. 2020. Investigation of transformer-based latent attention models for neural machine translation. In Proceedings of the 14th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 7-20, Virtual. Association for Machine Translation in the Americas.
+
+ Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+
+ Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.
+
+ Guanhua Chen, Yun Chen, and Victor O.K. Li. 2021. Lexically constrained neural machine translation with explicit alignment guidance. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):12630-12638.
+
+ Yun Chen, Yang Liu, Guanhua Chen, Xin Jiang, and Qun Liu. 2020. Accurate word alignment induction from neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 566-576, Online. Association for Computational Linguistics.
+
+ Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103-111, Doha, Qatar. Association for Computational Linguistics.
+
+ Josep Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, Satoshi Enoue, Chiyo Geiss, Joshua Johanson, Ardas Khalsa, Raoum Khiari, Byeongil Ko, Catherine Kobus, Jean Lorieux, Leidiana Martins, Dang-Chuan Nguyen, Alexandra Priori, Thomas Riccardi, Natalia Segal, Christophe Servan, Cyril Tiquet, Bo Wang, Jin Yang, Dakun Zhang, Jing Zhou, and Peter Zoldan. 2016. Systran's pure neural machine translation systems.
+
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+ Shuoyang Ding, Hainan Xu, and Philipp Koehn. 2019. Saliency-driven word alignment interpretation for neural machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 1-12, Florence, Italy. Association for Computational Linguistics.
+
+ Georgiana Dinu, Prashant Mathur, Marcello Federico, and Yaser Al-Onaizan. 2019. Training neural machine translation to apply terminology constraints. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3063-3068, Florence, Italy. Association for Computational Linguistics.
+
+ Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2112-2128, Online. Association for Computational Linguistics.
+
+ Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648, Atlanta, Georgia. Association for Computational Linguistics.
+
+ Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, and Matthias Paulik. 2019. Jointly learning to align and translate with transformer models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4453-4462, Hong Kong, China. Association for Computational Linguistics.
+
+ Eva Hasler, Adrià de Gispert, Gonzalo Iglesias, and Bill Byrne. 2018. Neural machine translation decoding with terminology constraints. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 506-512, New Orleans, Louisiana. Association for Computational Linguistics.
+
+ Felix Hieber, Tobias Domhan, Michael Denkowski, and David Vilar. 2020. Sockeye 2: A toolkit for neural machine translation. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 457-458, Lisboa, Portugal. European Association for Machine Translation.
+
+ Chris Hokamp and Qun Liu. 2017. Lexically constrained decoding for sequence generation using grid beam search. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1535-1546, Vancouver, Canada. Association for Computational Linguistics.
+
+ J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, and Benjamin Van Durme. 2019. Improved lexically constrained decoding for translation and monolingual rewriting. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 839-850, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+ Masoud Jalili Sabet, Philipp Dufter, François Yvon, and Hinrich Schütze. 2020. SimAlign: High quality word alignments without parallel training data using static and contextualized embeddings. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1627-1643, Online. Association for Computational Linguistics.
+
+ Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700-1709, Seattle, Washington, USA. Association for Computational Linguistics.
+
+ Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388-395, Barcelona, Spain. Association for Computational Linguistics.
+
+ Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh system description for the 2005 IWSLT speech translation evaluation. In International Workshop on Spoken Language Translation (IWSLT) 2005.
+
+ Anoop Kunchukuttan, Pratik Mehta, and Pushpak Bhattacharyya. 2018. The IIT Bombay English-Hindi parallel corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
+
+ Gyubok Lee, Seongjun Yang, and Edward Choi. 2021. Improving lexically constrained neural machine translation with source-conditioned masked span prediction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 743-753, Online. Association for Computational Linguistics.
+
+ Joel Martin, Rada Mihalcea, and Ted Pedersen. 2005. Word alignment for languages with scarce resources. In Proceedings of the ACL Workshop on Building and Using Parallel Texts, pages 65-74, Ann Arbor, Michigan. Association for Computational Linguistics.
+
+ Rada Mihalcea and Ted Pedersen. 2003. An evaluation exercise for word alignment. In Proceedings of the HLT-NAACL 2003 Workshop on Building and Using Parallel Texts: Data Driven Machine Translation and Beyond, pages 1-10.
+
+ Mathias Müller. 2017. Treatment of markup in statistical machine translation. In Proceedings of the Third Workshop on Discourse in Machine Translation, pages 36-46, Copenhagen, Denmark. Association for Computational Linguistics.
+
+ Masaaki Nagata, Katsuki Chousa, and Masaaki Nishino. 2020. A supervised word alignment method based on cross-language span prediction using multilingual BERT. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 555-565, Online. Association for Computational Linguistics.
+
+ Graham Neubig. 2011. The Kyoto free translation task.
+
+ Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 440-447, Hong Kong. Association for Computational Linguistics.
+
+ Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.
+
+ Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+ Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
+
+ Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.
+
+ Matt Post and David Vilar. 2018. Fast lexically constrained decoding with dynamic beam allocation for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1314-1324, New Orleans, Louisiana. Association for Computational Linguistics.
+
+ Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
+
+ Shiv Shankar, Siddhant Garg, and Sunita Sarawagi. 2018. Surprisingly easy hard-attention for sequence to sequence learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 640-645, Brussels, Belgium. Association for Computational Linguistics.
+
+ Shiv Shankar and Sunita Sarawagi. 2019. Posterior attention models for sequence to sequence learning. In International Conference on Learning Representations.
+
+ Xiaoyu Shen, Yang Zhao, Hui Su, and Dietrich Klakow. 2019. Improving latent alignment in text summarization by generalizing the pointer generator. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3762-3773, Hong Kong, China. Association for Computational Linguistics.
+
+ Kai Song, Kun Wang, Heng Yu, Yue Zhang, Zhongqiang Huang, Weihua Luo, Xiangyu Duan, and Min Zhang. 2020. Alignment-enhanced transformer for constraining nmt with pre-specified translations. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8886-8893.
+
+ Raymond Hendy Susanto, Shamil Chollampatt, and Liling Tan. 2020. Lexically constrained neural machine translation with Levenshtein transformer. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3536-3543, Online. Association for Computational Linguistics.
+
+ Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc.
+
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
+
+ David Vilar, Maja Popović, and Hermann Ney. 2006. AER: Do we need to "improve" our alignments? In International Workshop on Spoken Language Translation (IWSLT) 2006.
+
+ Thomas Zenkel, Joern Wuebker, and John DeNero. 2019. Adding interpretable attention to neural translation models improves word alignment.
+
+ Thomas Zenkel, Joern Wuebker, and John DeNero. 2020. End-to-end neural word alignment outperforms GIZA++. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1605-1617, Online. Association for Computational Linguistics.
+
+ # A Alignment Error Rate
+
+ Given gold alignments consisting of sure alignments $\mathcal{S}$ and possible alignments $\mathcal{P}$, and the predicted alignments $\mathcal{A}$, the Alignment Error Rate (AER) is defined as (Och and Ney, 2000):
+
+ $$
+ \mathrm{AER} = 1 - \frac{|\mathcal{A} \cap \mathcal{P}| + |\mathcal{A} \cap \mathcal{S}|}{|\mathcal{A}| + |\mathcal{S}|}
+ $$
+
+ Note that here $\mathcal{S} \subseteq \mathcal{P}$ . Also note that since our models are trained on sub-word units but gold alignments are over words, we need to convert alignments between word pieces to alignments between words. A source word and a target word are said to be aligned if there exists an alignment link between any of their respective word pieces.
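The two steps described above, projecting word-piece alignments onto words and computing AER, can be sketched as follows (a toy illustration with made-up piece-to-word maps, not the evaluation scripts actually used):

```python
def pieces_to_words(piece_links, src_word_of, tgt_word_of):
    """Project sub-word alignment links onto words: a source word and a
    target word are aligned if any of their word pieces are linked."""
    return {(src_word_of[i], tgt_word_of[j]) for i, j in piece_links}

def aer(predicted, sure, possible):
    """Alignment Error Rate (Och and Ney, 2000); assumes sure <= possible."""
    num = len(predicted & possible) + len(predicted & sure)
    return 1 - num / (len(predicted) + len(sure))

# Toy example: source has 3 pieces forming 2 words, target has 2 pieces/words.
src_word_of = {0: 0, 1: 0, 2: 1}   # source piece index -> word index
tgt_word_of = {0: 0, 1: 1}         # target piece index -> word index
pred = pieces_to_words({(0, 0), (2, 0)}, src_word_of, tgt_word_of)
sure = {(0, 0), (1, 1)}
possible = sure | {(1, 0)}
print(sorted(pred), aer(pred, sure, possible))  # [(0, 0), (1, 0)] 0.25
```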
+
+ # B BLEU-C
+
+ Given a reference sentence, a predicted translation and a set of constraints, for each constraint a segment of the sentence is chosen which contains the constraint and up to window-size words (where available) surrounding the constraint on either side. Such segments, called spans, are collected from the reference and predicted sentences in the test set, and BLEU is computed over these spans. If a constraint is not satisfied in the prediction, the corresponding span is taken to be the empty string. An example is shown in Table 6. Table 7 shows how BLEU-C varies as a function of the window size for a fixed English-French constraint set with beam size set to 10.
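A minimal sketch of the span extraction is shown below; BLEU would then be computed over the collected spans, e.g. with sacrebleu. The sentence and constraint are illustrative, not taken from the evaluation data.

```python
def constraint_span(tokens, constraint, window=3):
    """Return the span covering the constraint plus up to `window` words on
    either side, or "" if the constraint does not occur in the sentence."""
    n, m = len(tokens), len(constraint)
    for i in range(n - m + 1):
        if tokens[i:i + m] == constraint:
            lo, hi = max(0, i - window), min(n, i + m + window)
            return " ".join(tokens[lo:hi])
    return ""  # constraint unsatisfied -> empty span

hyp = "certainly , if a drug addict becomes a pusher , then it is right".split()
print(constraint_span(hyp, ["pusher"], window=3))
# addict becomes a pusher , then it
```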
+
+ <table><tr><td>Window Size →</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td></tr><tr><td>No constraints</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td></tr><tr><td>NAIVEATT</td><td>34.4</td><td>32.0</td><td>30.4</td><td>29.5</td><td>29.4</td><td>29.5</td><td>29.7</td></tr><tr><td>PRIORATT</td><td>41.5</td><td>38.7</td><td>36.4</td><td>35.1</td><td>34.9</td><td>35.0</td><td>35.2</td></tr><tr><td>SHIFTATT</td><td>44.9</td><td>41.5</td><td>38.9</td><td>37.3</td><td>36.4</td><td>36.2</td><td>36.0</td></tr><tr><td>SHIFTAET</td><td>47.0</td><td>43.2</td><td>40.4</td><td>38.7</td><td>38.0</td><td>37.6</td><td>37.4</td></tr><tr><td>POSTALN</td><td>46.4</td><td>42.7</td><td>39.8</td><td>38.0</td><td>37.1</td><td>36.9</td><td>36.6</td></tr><tr><td>VDBA</td><td>54.9</td><td>50.5</td><td>46.8</td><td>44.6</td><td>43.5</td><td>43.0</td><td>42.6</td></tr><tr><td>Align-VDBA</td><td>56.4</td><td>51.7</td><td>47.9</td><td>45.6</td><td>44.4</td><td>43.7</td><td>43.3</td></tr></table>
322
+
323
+ # C Description of the Datasets
324
+
325
+ The European languages consist of parallel sentences for three language pairs from the Europarl Corpus and alignments from Mihalcea and Pedersen (2003), Och and Ney (2000), Vilar et al. (2006). Following previous works (Ding et al., 2019; Chen et al., 2020), the last 1000 sentences of the training data are used as validation data.
326
+
327
+ For English-Hindi, we use the dataset from Martin et al. (2005) consisting of 3440 training sentence pairs, 25 validation and 90 test sentences with gold alignments. Since training Transformers requires much larger datasets, we augment the training set with 1.6 million sentences from the IIT Bombay Parallel Corpus (Kunchukuttan et al., 2018). We also add the first 50 sentences from the dev set of the IIT Bombay Parallel Corpus, with manually annotated alignments, to the test set, giving a total of 140 test sentences.
330
+
331
+ For Japanese-English, we use The Kyoto Free Translation Task (Neubig, 2011). It comprises roughly 330K training, 1166 validation and 1235 test sentences. As with other datasets, gold alignments are available only for the test sentences. The Japanese text is already segmented and we use it without additional changes.
332
+
333
+ The real world constraints datasets of Dinu et al. (2019) are extracted from the German-English WMT newstest 2017 task, with the IATE dataset consisting of 414 sentences (451 constraints) and the Wiktionary dataset of 727 sentences (879 constraints). The constraints come from the IATE and Wiktionary terminology databases.
334
+
335
+ All datasets were processed using the scripts provided by Zenkel et al. (2019) at https://github.com/lilt/alignment-scripts. Computation of BLEU and BLEU-C, and the paired test were performed using sacrebleu (Post, 2018).
336
+
337
+ # D Bidirectional Symmetrized Alignment
338
+
339
+ We report AERs using bidirectional symmetrized alignments in Table 8 in order to provide fair comparisons to results in prior literature. The symmetrization is done using the grow-diagonal heuristic (Koehn et al., 2005; Och and Ney, 2000). Since bidirectional alignments need the entire text in both languages, these are not online alignments.
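A minimal sketch of the growing step of the grow-diag heuristic under these definitions (the grow-diag-final variant additionally adds leftover union links in a final pass, omitted here; the set-based representation is ours):

```python
def grow_diag(forward, backward):
    """Symmetrize two directional alignment sets (Koehn et al., 2005):
    start from the intersection and repeatedly add union links that
    neighbour a current link and cover a source or target word that is
    still unaligned."""
    union = forward | backward
    aligned = forward & backward
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                 (0, 1), (1, -1), (1, 0), (1, 1)]
    grew = True
    while grew:
        grew = False
        for i, j in sorted(aligned):
            for di, dj in neighbors:
                cand = (i + di, j + dj)
                if cand in union and cand not in aligned:
                    src_free = all(cand[0] != a for a, _ in aligned)
                    tgt_free = all(cand[1] != b for _, b in aligned)
                    if src_free or tgt_free:
                        aligned.add(cand)
                        grew = True
    return aligned

fwd = {(0, 0), (1, 1)}
bwd = {(0, 0), (1, 2)}
print(sorted(grow_diag(fwd, bwd)))  # [(0, 0), (1, 1), (1, 2)]
```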
340
+
341
+ Table 7: BLEU-C vs Window Size
342
+
343
+ <table><tr><td>Method</td><td colspan="5">de-en en-fr ro-en en-hi ja-en</td></tr><tr><td colspan="6">Statistical Methods</td></tr><tr><td>GIZA++</td><td>18.6</td><td>5.5</td><td>26.3</td><td>35.9</td><td>39.7</td></tr><tr><td>FastAlign</td><td>27.0</td><td>10.5</td><td>32.1</td><td>-</td><td>-</td></tr><tr><td colspan="6">No Alignment Training</td></tr><tr><td>NAIVEATT</td><td>29.2</td><td>16.9</td><td>31.4</td><td>43.8</td><td>57.1</td></tr><tr><td>SHIFTATT</td><td>16.9</td><td>7.8</td><td>24.3</td><td>30.9</td><td>46.2</td></tr><tr><td colspan="6">With Alignment Training</td></tr><tr><td>PRIORATT</td><td>22.0</td><td>10.1</td><td>26.3</td><td>32.1</td><td>48.2</td></tr><tr><td>SHIFTAET</td><td>15.4</td><td>5.6</td><td>21.0</td><td>26.7</td><td>40.1</td></tr><tr><td>POSTALN</td><td>15.3</td><td>5.5</td><td>21.0</td><td>26.1</td><td>39.5</td></tr></table>
344
+
345
+ Table 8: AERs for bidirectional symmetrized alignments. POSTALN consistently performs the best.
346
+
347
+ <table><tr><td>Reference</td><td colspan="2">we consider the development of a robust security system that is independent of the</td></tr><tr><td>Prediction</td><td colspan="2">we consider developing a robust security system which is independent of the</td></tr><tr><td colspan="3">BLEU-C (Window Size = 2)</td></tr><tr><td>Cons. No.</td><td>Reference Spans</td><td>Predicted Spans</td></tr><tr><td>1</td><td rowspan="2">consider the development of a robust security system that is</td><td>(empty string)</td></tr><tr><td>2</td><td>a robust security system which is</td></tr><tr><td colspan="3">BLEU-C = BLEU(Reference Spans, Predicted Spans)</td></tr></table>
348
+
349
+ Table 6: An example BLEU-C computation
350
+
351
+ # E Additional Lexicon-Constrained Translation Results
352
+
353
+ Constrained translation results for beam sizes 5 and 10 are shown in Table 9. We also present results for Align-VDBA without the alignment probability based beam allocation, denoted Align-VDBA*, in Table 9. We can see that our beam allocation technique results in better beam utilization, as evidenced by improvements in BLEU and BLEU-C and a reduction in total decoding time.
354
+
355
+ Paired bootstrap resampling test (Koehn, 2004) results with respect to Align-VDBA for beam size 10 are shown in Table 10.
356
+
357
+ # F Additional Real World Constrained Translation Results
358
+
359
+ Results on the real world constrained translation datasets of Dinu et al. (2019) for all the methods in Table 3 with beam sizes 5, 10 and 20 are presented in Table 11. Paired bootstrap resampling test (Koehn, 2004) results with respect to Align-VDBA for beam size 5 are shown in Table 12.
360
+
361
+ # G Alignment-based Token Replacement Algorithm
362
+
363
+ The pseudocode for the algorithm used in Song et al. (2020); Chen et al. (2021) and our non-VDBA based methods in Section 4.3 is presented in Algorithm 2. As described in Section 3.1, at each decoding step, if the source token having the maximum alignment at the current step lies in some constraint span, the constraint in question is decoded until completion before resuming normal decoding.
364
+
365
+ Though different alignment methods are represented using a call to the same ATTENTION function in Algorithm 2, these methods incur varying computational overheads. For instance, NAIVEATT incurs little additional cost, while PRIORATT and POSTALN involve a multi-head attention computation. For SHIFTATT and SHIFTAET, an entire decoder pass is done when ATTENTION is called, thereby incurring a large overhead, as shown in Table 3.
368
+
369
+ # H Layer Selection for Alignment Supervision of Distant Language Pairs
370
+
371
+ For the alignment supervision, we used alignments extracted from vanilla Transformers using the SHIFTATT method. To do so, however, we need to choose the decoder layers from which to extract the alignments. The validation AERs could be used for this purpose, but since gold validation alignments are not available, Chen et al. (2020) suggest selecting the layers that have the best consistency between the alignment predictions from the two translation directions.
372
+
373
+ For the European language pairs, this turns out to be layer 3 as suggested by Chen et al. (2020). However, for the distant language pairs Hindi-English and Japanese-English, this is not the case and layer selection needs to be done. The AER between the two translation directions on the validation set, with alignments obtained from different decoder layers, are shown in Tables 13 and 14.
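The consistency criterion can be computed with the standard AER formula, using one direction's predicted alignments as both the sure and possible sets for the other direction. A minimal sketch (the function names are ours):

```python
def aer(sure, possible, predicted):
    """Alignment Error Rate (Och and Ney, 2000); assumes sure is a
    subset of possible, as noted in Appendix A."""
    a = set(predicted)
    return 1.0 - (len(a & set(sure)) + len(a & set(possible))) / (len(a) + len(sure))

# Consistency between translation directions: score one direction's
# alignments against the other's, treating the latter as both S and P.
fwd = {(0, 0), (1, 1), (2, 2)}
rev = {(0, 0), (1, 1), (2, 3)}
print(round(aer(rev, rev, fwd), 3))  # 0.333
```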
374
+
375
+ <table><tr><td rowspan="2">Beam Size</td><td rowspan="2">Method</td><td colspan="4">de→en</td><td colspan="4">en→fr</td><td colspan="4">ro→en</td><td colspan="4">en→hi</td><td colspan="4">ja→en</td></tr><tr><td>BLEU-C</td><td>CSR</td><td>BLEU</td><td>Time</td><td>BLEU-C</td><td>CSR</td><td>BLEU</td><td>Time</td><td>BLEU-C</td><td>CSR</td><td>BLEU</td><td>Time</td><td>BLEU-C</td><td>CSR</td><td>BLEU</td><td>Time</td><td>BLEU-C</td><td>CSR</td><td>BLEU</td><td>Time</td></tr><tr><td rowspan="9">5</td><td>No constraints</td><td>0.0</td><td>5.0</td><td>32.9</td><td>78</td><td>0.0</td><td>8.7</td><td>34.6</td><td>61</td><td>0.0</td><td>8.4</td><td>33.3</td><td>45</td><td>0.0</td><td>5.6</td><td>19.7</td><td>18</td><td>0.0</td><td>7.9</td><td>19.1</td><td>221</td></tr><tr><td>NAIVEATT</td><td>28.9</td><td>86.2</td><td>36.7</td><td>127</td><td>36.7</td><td>88.6</td><td>38.0</td><td>87</td><td>32.9</td><td>91.8</td><td>36.3</td><td>88</td><td>23.0</td><td>89.9</td><td>23.9</td><td>25</td><td>15.1</td><td>77.0</td><td>20.3</td><td>398</td></tr><tr><td>PRIORATT</td><td>35.3</td><td>93.0</td><td>37.7</td><td>136</td><td>42.2</td><td>94.7</td><td>38.6</td><td>89</td><td>36.0</td><td>91.6</td><td>37.0</td><td>89</td><td>27.6</td><td>91.7</td><td>24.7</td><td>26</td><td>16.8</td><td>80.2</td><td>20.6</td><td>353</td></tr><tr><td>SHIFTATT</td><td>41.0</td><td>96.7</td><td>38.7</td><td>268</td><td>45.2</td><td>93.8</td><td>38.4</td><td>167</td><td>39.2</td><td>94.4</td><td>37.2</td><td>160</td><td>23.8</td><td>81.8</td><td>22.0</td><td>42</td><td>15.1</td><td>72.6</td><td>19.3</td><td>664</td></tr><tr><td>SHIFTAET</td><td>43.1</td><td>97.6</td><td>39.1</td><td>291</td><td>46.5</td><td>94.8</td><td>38.6</td><td>165</td><td>40.8</td><td>94.7</td><td>37.5</td><td>163</td><td>24.5</td><td>83.6</td><td>22.1</td><td>44</td><td>18.0</td><td>76.5</td><td>19.6</td><td>583</td></tr><tr><td>POSTALN</td><td>42.7</td><td>97.3</td><td>39.0</td><td>252</td><td>46.1</td><td>93.9</td><td>38.5</td>
<td>151</td><td>39.8</td><td>93.5</td><td>37.3</td><td>141</td><td>23.3</td><td>79.7</td><td>21.7</td><td>39</td><td>17.9</td><td>75.3</td><td>19.6</td><td>469</td></tr><tr><td>VDBA</td><td>39.6</td><td>99.4</td><td>37.8</td><td>203</td><td>45.9</td><td>99.5</td><td>38.5</td><td>109</td><td>36.6</td><td>99.2</td><td>36.7</td><td>117</td><td>27.3</td><td>96.6</td><td>24.2</td><td>37</td><td>22.1</td><td>96.9</td><td>20.9</td><td>397</td></tr><tr><td>Align-VDBA*</td><td>40.3</td><td>99.0</td><td>38.0</td><td>244</td><td>47.4</td><td>99.3</td><td>38.7</td><td>132</td><td>37.6</td><td>99.7</td><td>36.8</td><td>139</td><td>27.2</td><td>95.6</td><td>24.1</td><td>46</td><td>22.5</td><td>97.2</td><td>21.0</td><td>460</td></tr><tr><td>Align-VDBA</td><td>41.3</td><td>98.8</td><td>38.2</td><td>236</td><td>48.0</td><td>98.9</td><td>38.7</td><td>128</td><td>42.0</td><td>96.6</td><td>37.5</td><td>134</td><td>28.2</td><td>91.3</td><td>24.7</td><td>45</td><td>22.6</td><td>93.9</td><td>21.2</td><td>445</td></tr><tr><td rowspan="8">10</td><td>No 
constraints</td><td>0.0</td><td>4.6</td><td>32.9</td><td>87</td><td>0.0</td><td>8.7</td><td>34.8</td><td>64</td><td>0.0</td><td>8.8</td><td>33.4</td><td>47</td><td>0.0</td><td>6.3</td><td>19.7</td><td>21</td><td>0.0</td><td>8.8</td><td>18.9</td><td>237</td></tr><tr><td>NAIVEATT</td><td>28.7</td><td>86.1</td><td>36.6</td><td>147</td><td>36.5</td><td>88.0</td><td>38.3</td><td>93</td><td>33.3</td><td>92.3</td><td>36.5</td><td>99</td><td>22.5</td><td>88.4</td><td>23.6</td><td>27</td><td>15.1</td><td>75.9</td><td>20.2</td><td>315</td></tr><tr><td>PRIORATT</td><td>35.0</td><td>92.8</td><td>37.6</td><td>159</td><td>42.1</td><td>94.4</td><td>38.9</td><td>97</td><td>36.0</td><td>91.2</td><td>37.2</td><td>100</td><td>27.2</td><td>91.5</td><td>24.4</td><td>28</td><td>16.7</td><td>79.7</td><td>20.4</td><td>326</td></tr><tr><td>SHIFTATT</td><td>41.0</td><td>96.6</td><td>38.7</td><td>443</td><td>45.0</td><td>93.5</td><td>38.7</td><td>239</td><td>39.2</td><td>94.2</td><td>37.4</td><td>241</td><td>23.2</td><td>78.7</td><td>21.9</td><td>58</td><td>15.2</td><td>72.7</td><td>19.3</td><td>567</td></tr><tr><td>SHIFTAET</td><td>43.1</td><td>97.5</td><td>39.1</td><td>458</td><td>46.6</td><td>94.3</td><td>39.0</td><td>235</td><td>40.8</td><td>94.4</td><td>37.6</td><td>263</td><td>24.3</td><td>80.2</td><td>22.0</td><td>62</td><td>18.1</td><td>75.9</td><td>19.7</td><td>596</td></tr><tr><td>POSTALN</td><td>42.7</td><td>97.2</td><td>39.0</td><td>399</td><td>46.3</td><td>94.1</td><td>38.7</td><td>218</td><td>40.0</td><td>93.5</td><td>37.4</td><td>226</td><td>23.8</td><td>79.0</td><td>22.0</td><td>47</td><td>18.2</td><td>75.7</td><td>19.7</td><td>460</td></tr><tr><td>VDBA</td><td>44.5</td><td>98.9</td><td>38.5</td><td>293</td><td>51.9</td><td>98.5</td><td>39.5</td><td>160</td><td>43.1</td><td>99.1</td><td>37.9</td><td>165</td><td>29.8</td><td>92.3</td><td>24.5</td><td>49</td><td>24.3</td><td>95.6</td><td>21.6</td><td>494</td></tr><tr><td>Align-VDBA</td><td>44.5</td><td>98.6</td><td>38.6</td><td>
357</td><td>52.9</td><td>98.4</td><td>39.7</td><td>189</td><td>44.1</td><td>98.9</td><td>38.1</td><td>203</td><td>30.5</td><td>91.5</td><td>24.7</td><td>70</td><td>25.1</td><td>95.5</td><td>21.8</td><td>630</td></tr></table>
376
+
377
+ Table 9: Lexically Constrained Translation Results with different beam sizes. All numbers are averages over 5 randomly sampled constraint sets, and running times are in seconds. Align-VDBA* denotes Align-VDBA without alignment probability based beam allocation (i.e. with the threshold set to 0).
378
+
379
+ <table><tr><td></td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td></tr><tr><td>1</td><td>65.5</td><td>55.8</td><td>56.1</td><td>95.2</td><td>94.6</td><td>96.6</td></tr><tr><td>2</td><td>59.2</td><td>47.5</td><td>44.5</td><td>95.1</td><td>91.9</td><td>95.8</td></tr><tr><td>3</td><td>62.6</td><td>52.1</td><td>48.3</td><td>93.7</td><td>91.4</td><td>95.2</td></tr><tr><td>4</td><td>88.6</td><td>83.3</td><td>82.1</td><td>89.9</td><td>88.0</td><td>90.3</td></tr><tr><td>5</td><td>91.6</td><td>87.7</td><td>88.5</td><td>91.4</td><td>88.8</td><td>90.2</td></tr><tr><td>6</td><td>93.5</td><td>91.1</td><td>92.5</td><td>92.5</td><td>90.5</td><td>90.7</td></tr></table>
380
+
381
+ Table 13: AER between en→hi and hi→en SHIF-TATT alignments on the validation set for EnHi
382
+
383
+ <table><tr><td></td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td></tr><tr><td>1</td><td>93.5</td><td>90.0</td><td>94.4</td><td>92.2</td><td>95.1</td><td>95.1</td></tr><tr><td>2</td><td>86.5</td><td>58.7</td><td>86.9</td><td>69.4</td><td>87.2</td><td>86.2</td></tr><tr><td>3</td><td>87.4</td><td>59.4</td><td>87.1</td><td>69.1</td><td>87.1</td><td>86.2</td></tr><tr><td>4</td><td>89.1</td><td>69.1</td><td>85.9</td><td>74.2</td><td>84.9</td><td>85.4</td></tr><tr><td>5</td><td>93.4</td><td>88.5</td><td>89.1</td><td>87.1</td><td>86.8</td><td>88.1</td></tr><tr><td>6</td><td>93.5</td><td>89.4</td><td>90.0</td><td>88.1</td><td>87.7</td><td>88.7</td></tr></table>
384
+
385
+ Table 14: AER between ja→en and en→ja SHIFTATT alignments on the validation set for Ja-En
386
+
387
+ <table><tr><td></td><td>de→en</td><td>en→fr</td><td>ro→en</td></tr><tr><td>No constraints</td><td>0.0001*</td><td>0.0001*</td><td>0.0001*</td></tr><tr><td>NAIVEATT</td><td>0.0001*</td><td>0.0001*</td><td>0.0001*</td></tr><tr><td>PRIORATT</td><td>0.0001*</td><td>0.0001*</td><td>0.0001*</td></tr><tr><td>SHIFTATT</td><td>0.1700</td><td>0.0001*</td><td>0.0001*</td></tr><tr><td>SHIFTAET</td><td>0.0015*</td><td>0.0001*</td><td>0.0018*</td></tr><tr><td>POSTALN</td><td>0.0032*</td><td>0.0001*</td><td>0.0003*</td></tr><tr><td>VDBA</td><td>0.2666</td><td>0.0020*</td><td>0.0229*</td></tr></table>
388
+
389
+ Table 10: $p$ -values from paired bootstrap resampling tests with 10000 bootstrap samples for BLEU on Table 3 datasets for beam size 10. Tests are performed with respect to Align-VDBA. * denotes a statistically significant difference from Align-VDBA at level 0.05 (p-value < 0.05).
390
+
391
+ <table><tr><td></td><td>Dataset →</td><td colspan="4">IATE.414</td><td colspan="4">Wiktionary.727</td></tr><tr><td>Beam Size</td><td>Method ↓</td><td>BLEU-C</td><td>CSR</td><td>BLEU</td><td>Time</td><td>BLEU-C</td><td>CSR</td><td>BLEU</td><td>Time</td></tr><tr><td rowspan="9">5</td><td>No constraints</td><td>27.9</td><td>76.6</td><td>29.7</td><td>134</td><td>26.3</td><td>72.0</td><td>29.9</td><td>217</td></tr><tr><td>NAIVEATT</td><td>29.2</td><td>96.9</td><td>29.2</td><td>175</td><td>29.0</td><td>95.3</td><td>29.1</td><td>341</td></tr><tr><td>PRIORATT</td><td>31.2</td><td>97.1</td><td>29.7</td><td>198</td><td>32.2</td><td>95.9</td><td>29.9</td><td>306</td></tr><tr><td>SHIFTATT</td><td>34.9</td><td>96.7</td><td>29.9</td><td>355</td><td>35.3</td><td>96.5</td><td>30.0</td><td>568</td></tr><tr><td>SHIFTAET</td><td>35.2</td><td>96.3</td><td>30.0</td><td>378</td><td>35.8</td><td>97.1</td><td>30.2</td><td>637</td></tr><tr><td>POSTALN</td><td>35.3</td><td>96.7</td><td>30.0</td><td>272</td><td>35.8</td><td>96.7</td><td>30.2</td><td>467</td></tr><tr><td>VDBA</td><td>35.3</td><td>98.8</td><td>29.8</td><td>258</td><td>35.0</td><td>99.2</td><td>30.4</td><td>442</td></tr><tr><td>Align-VDBA*</td><td>35.4</td><td>99.8</td><td>29.8</td><td>280</td><td>35.1</td><td>99.3</td><td>30.3</td><td>534</td></tr><tr><td>Align-VDBA</td><td>36.1</td><td>98.3</td><td>30.1</td><td>268</td><td>35.9</td><td>98.8</td><td>30.6</td><td>523</td></tr><tr><td rowspan="8">10</td><td>No constraints</td><td>28.3</td><td>77.0</td><td>29.7</td><td>113</td><td>26.3</td><td>72.4</td><td>29.9</td><td>164</td></tr><tr><td>NAIVEATT</td><td>28.9</td><td>97.3</td><td>29.1</td><td>145</td><td>29.2</td><td>95.3</td><td>29.1</td><td>269</td></tr><tr><td>PRIORATT</td><td>31.3</td><td>96.9</td><td>29.5</td><td>155</td><td>32.3</td><td>96.0</td><td>29.9</td><td>260</td></tr><tr><td>SHIFTATT</td><td>34.9</td><td>96.3</td><td>29.8</td><td>345</td><td>35.3</td><td>96.8</td><td>30.3</td><td>600</td></tr><tr><td>SHIFTAET</td><td>35.2</td><td>95.9</td><td>29.9</td><td>350</td><td>35.9</td><td>97.2</td><td>30.4</td><td>664</td></tr><tr><td>POSTALN</td><td>35.1</td><td>95.9</td><td>29.9</td><td>287</td><td>35.8</td><td>97.0</td><td>30.3</td><td>458</td></tr><tr><td>VDBA</td><td>37.6</td><td>99.8</td><td>30.9</td><td>257</td><td>36.9</td><td>99.4</td><td>30.9</td><td>451</td></tr><tr><td>Align-VDBA</td><td>37.5</td><td>99.8</td><td>30.9</td><td>353</td><td>37.2</td><td>99.5</td><td>31.1</td><td>540</td></tr><tr><td rowspan="8">20</td><td>No constraints</td><td>28.4</td><td>77.2</td><td>29.9</td><td>103</td><td>26.3</td><td>72.1</td><td>30.0</td><td>177</td></tr><tr><td>NAIVEATT</td><td>28.9</td><td>96.9</td><td>29.0</td><td>188</td><td>29.1</td><td>95.4</td><td>29.3</td><td>325</td></tr><tr><td>PRIORATT</td><td>31.3</td><td>96.9</td><td>29.6</td><td>203</td><td>32.6</td><td>96.4</td><td>30.1</td><td>338</td></tr><tr><td>SHIFTATT</td><td>34.7</td><td>96.1</td><td>29.8</td><td>528</td><td>35.3</td><td>96.8</td><td>30.2</td><td>892</td></tr><tr><td>SHIFTAET</td><td>35.0</td><td>95.8</td><td>29.9</td><td>539</td><td>36.1</td><td>97.3</td><td>30.4</td><td>923</td></tr><tr><td>POSTALN</td><td>35.1</td><td>96.1</td><td>29.9</td><td>420</td><td>36.0</td><td>97.0</td><td>30.4</td><td>751</td></tr><tr><td>VDBA</td><td>37.8</td><td>99.8</td><td>30.9</td><td>381</td><td>37.4</td><td>99.2</td><td>31.2</td><td>680</td></tr><tr><td>Align-VDBA</td><td>37.9</td><td>99.8</td><td>30.9</td><td>465</td><td>38.0</td><td>99.5</td><td>31.3</td><td>818</td></tr></table>
392
+
393
+ Table 11: Additional results for the real world constraints for all methods and different beam sizes. Align-VDBA* denotes Align-VDBA without alignment probability based beam allocation.
394
+
395
+ Algorithm 2 $k$ -best extraction with argmax replacement decoding.
396
+ Inputs: A $k \times |V_T|$ matrix of scores (for all tokens up to the currently decoded ones). $k$ beam states.
397
+ 1: function SEARCH_STEP(beam, scores)
398
+ 2: next_toks, next_scores $\leftarrow$ ARGMAX_K(scores, k=2, dim=1) ▷ Best 2 tokens for each beam
399
+ 3: candidates $\leftarrow []$
400
+ 4: for $0 \leq h < 2 \cdot k$ do
401
+ 5: candidate $\leftarrow$ beam[h/2]
402
+ 6: candidate.tokens.append(next_toks[h/2, h%2])
403
+ 7: candidate.score $\leftarrow$ next_scores[h/2, h%2]
404
+ 8: candidates.append(candidate)
405
+ 9: attention $\leftarrow$ ATTENTION(candidates)
406
+ 10: aligned_x $\leftarrow$ ARGMAX(attention, dim=1)
407
+ 11: for $0 \leq h < 2 \cdot k$ do
408
+ 12: if aligned_x[h] $\in \mathcal{C}_i^x$ for some $i$ and not candidates[h].inprogress then ▷ Start constraint
409
+ 13: candidates[h].inprogress $\leftarrow$ True
410
+ 14: candidates[h].constraintNum $\leftarrow i$
411
+ 15: candidates[h].tokenNum $\leftarrow 0$
412
+ 16: if candidates[h].inprogress then ▷ Replace token with constraint tokens
413
+ 17: consNum $\leftarrow$ candidates[h].constraintNum
414
+ 18: candidates[h].tokens[-1] $\leftarrow$ constraints[consNum][candidates[h].tokenNum]
415
+ 19: candidates[h].tokenNum $\leftarrow$ candidates[h].tokenNum + 1
416
+ 20: if constraints[consNum].length == candidates[h].tokenNum then
417
+ 21: candidates[h].inprogress $\leftarrow$ False ▷ Finish current constraint
418
+ 22: candidates $\leftarrow$ REMOVE_DUPLICATE(candidates)
419
+ 23: newBeam $\leftarrow$ TOP_K(candidates)
420
+ 24: return newBeam
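The constraint-replacement portion of Algorithm 2 (lines 12-21) can be sketched in Python for a single candidate as follows; the dict fields and the `span_of` mapping are illustrative, not the authors' implementation:

```python
def apply_constraint_step(cand, aligned_src, constraints, span_of):
    """One iteration of the replacement loop for one candidate.
    span_of maps a source position to the index of the constraint
    covering it."""
    # Start a constraint if the aligned source token lies in a span.
    if not cand["inprogress"] and aligned_src in span_of:
        cand.update(inprogress=True, cons=span_of[aligned_src], tok=0)
    # While in progress, overwrite the model's token with constraint tokens.
    if cand["inprogress"]:
        c = constraints[cand["cons"]]
        cand["tokens"][-1] = c[cand["tok"]]
        cand["tok"] += 1
        if cand["tok"] == len(c):
            cand["inprogress"] = False  # constraint finished
    return cand

cand = {"tokens": ["the", "show"], "inprogress": False, "cons": None, "tok": 0}
# Source position 3 is covered by constraint 0 = ["movie", "night"].
cand = apply_constraint_step(cand, 3, [["movie", "night"]], {3: 0})
print(cand["tokens"])  # ['the', 'movie']
cand["tokens"].append("x")  # next step: the model proposes some token
cand = apply_constraint_step(cand, 0, [["movie", "night"]], {3: 0})
print(cand["tokens"])  # ['the', 'movie', 'night']
```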
421
+
422
+ <table><tr><td>Dataset</td><td colspan="3">IATE.414</td><td colspan="3">Wiktionary.727</td></tr><tr><td>Method</td><td>BLEU</td><td>μ±95% CI</td><td>p-value</td><td>BLEU</td><td>μ±95% CI</td><td>p-value</td></tr><tr><td>Align-VDBA</td><td>30.1</td><td>(30.0±1.7)</td><td></td><td>30.6</td><td>(30.6±1.2)</td><td></td></tr><tr><td>No constraints</td><td>29.7</td><td>(29.7±1.7)</td><td>0.1059</td><td>29.9</td><td>(29.9±1.2)</td><td>0.0054*</td></tr><tr><td>NAIVEATT</td><td>29.2</td><td>(29.2±1.7)</td><td>0.0121*</td><td>29.1</td><td>(29.1±1.2)</td><td>0.0001*</td></tr><tr><td>PRIORATT</td><td>29.7</td><td>(29.6±1.6)</td><td>0.0829</td><td>29.9</td><td>(29.8±1.2)</td><td>0.0041*</td></tr><tr><td>SHIFTATT</td><td>29.9</td><td>(29.8±1.6)</td><td>0.1827</td><td>30.0</td><td>(30.0±1.2)</td><td>0.0229*</td></tr><tr><td>SHIFTAET</td><td>30.0</td><td>(29.9±1.6)</td><td>0.2824</td><td>30.2</td><td>(30.2±1.2)</td><td>0.0588</td></tr><tr><td>POSTALN</td><td>30.0</td><td>(30.0±1.6)</td><td>0.3813</td><td>30.2</td><td>(30.2±1.2)</td><td>0.0646</td></tr><tr><td>VDBA</td><td>29.8</td><td>(29.7±1.6)</td><td>0.0849</td><td>30.4</td><td>(30.4±1.2)</td><td>0.0960</td></tr></table>
423
+
424
+ Table 12: Paired bootstrap resampling tests with 10000 bootstrap samples for BLEU on Dinu et al. (2019) datasets for beam size 5. * denotes a statistically significant difference from Align-VDBA at level 0.05 (p-value < 0.05).
accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f1de25ed45d623b2e00b370aa5e31122353ea00db5ac7c22c00dc25d52b99c68
3
+ size 1097251
accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f6f484f523440918794737c0380e4ef146a8a2af1bb98bd39e206c39ba3ff53e
3
+ size 550967
achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/978b9df5-66d6-42b3-b508-717e0470bb99_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6022bc848ad7f268a099b52bb4298add09ecf1440ef71ebfee897aaeea75deea
3
+ size 95797
achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/978b9df5-66d6-42b3-b508-717e0470bb99_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2cccc7ba31469633572054dde38c532f14c9b5babe5e67946ec55f1b5b76fa1f
3
+ size 119244
achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/978b9df5-66d6-42b3-b508-717e0470bb99_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e84029a1ea6a23a5b8a4677043f209455eb092f65707eada0ab510dcf90df80d
3
+ size 3142005
achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/full.md ADDED
@@ -0,0 +1,392 @@
 
 
 
 
1
+ # Achieving Conversational Goals with Unsupervised Post-hoc Knowledge Injection
2
+
3
+ Bodhisattwa Prasad Majumder\* Harsh Jhamtani\* Taylor Berg-Kirkpatrick\* Julian McAuley\*
4
+
5
+ \*Department of Computer Science and Engineering, UC San Diego ({bmajumde, tberg, jmcauley}@eng.ucsd.edu); School of Computer Science, Carnegie Mellon University (jharsh@cs.cmu.edu)
6
+
7
+ # Abstract
8
+
9
+ A limitation of current neural dialog models is that they tend to suffer from a lack of specificity and informativeness in generated responses, primarily due to dependence on training data that covers a limited variety of scenarios and conveys limited knowledge. One way to alleviate this issue is to extract relevant knowledge from external sources at decoding time and incorporate it into the dialog response. In this paper, we propose a post-hoc knowledge-injection technique where we first retrieve a diverse set of relevant knowledge snippets conditioned on both the dialog history and an initial response from an existing dialog model. We construct multiple candidate responses, individually injecting each retrieved snippet into the initial response using a gradient-based decoding method, and then select the final response with an unsupervised ranking step. Our experiments in goal-oriented and knowledge-grounded dialog settings demonstrate that human annotators judge the outputs from the proposed method to be more engaging and informative compared to responses from prior dialog systems. We further show that knowledge-augmentation promotes success in achieving conversational goals in both experimental settings.
10
+
11
+ # 1 Introduction
12
+
13
+ Generic responses which lack specificity have been a major issue in existing dialog models (Hosseini-Asl et al., 2020; Dinan et al., 2019a). The issue in part stems from bottlenecks in dialog models due to a limited scope of scenarios and access to limited knowledge available during training. On the other hand, encoding all possible world knowledge at training time is not feasible, and even undesirable in cases where knowledge sources are dynamically varying (Ghazvininejad et al., 2018; Majumder et al., 2020b; Zhao et al., 2020; Bruyn et al., 2020; Kim et al., 2020; Prabhumoye et al., 2021). One possible approach is to incorporate
14
+
15
+ ![](images/605870a9693eb116ad5ee6d555907df83a639f40f60ac3df4aa918b444e52a7f.jpg)
16
+ Figure 1: Augmenting initial response from an existing dialog model with relevant external knowledge leads to more engaging and informative responses improving the success in achieving the conversational goal (here, finding a fun activity).
17
+
18
+ relevant knowledge at decoding-time. For example, in Figure 1, the user is seeking options for a fun activity around Cambridge. While the initial dialog response suggests watching a movie as an option, it does not provide any information behind that choice.
19
+
20
+ We propose and evaluate an approach for unsupervised knowledge injection into a dialog model's response at decoding time $^{1}$ —not addressed in any previous work. We first sample a response from the model (trained on dialog data) conditioned on the dialog context. Next, we utilize the dialog context and the sampled response to query external knowledge sources. Finally, the retrieved knowledge is used to construct a more informative and engaging response (Figure 1). A major advantage of such post-hoc knowledge injection is its flexibility in adding newer knowledge sources especially where the success of achieving conversational goals relies upon the availability of relevant knowledge. Post-hoc injection also promotes efficiency in NLP applications (Schwartz et al., 2020; Strubell et al., 2019): it mitigates the need to retrain dialog models to accommodate dynamically evolving knowledge.
21
+
22
+ We experiment with two types of knowledge sources: language models, which we treat as parametric knowledge bases (Petroni et al., 2019;
23
+
24
+ ![](images/66bc2899ce4157ed224430910be2ccf6278a61be564a963ca285740a7f897a92.jpg)
25
+ Figure 2: Pipeline of POKI: It first retrieves post-hoc knowledge from external sources based on dialog history and an initial response from a dialog model. Then the most relevant and diverse knowledge snippets are selected from the retrieved set. Each selected snippet is individually combined with the initial response through constrained decoding to generate a candidate final response. At last, the final response is selected via an unsupervised ranking step. Note that POKI requires no additional training.
26
+
27
+ ![](images/67e0570ee422b6ddb70d172e9ce39cb2bf584dcc4167f201aa7c4b43e9eba63b.jpg)
28
+
29
+ ![](images/0a5382d41d6deff58d6c5e173417548c284df2a78b27a80a7c492f82609e6fe5.jpg)
30
+
31
+ Brown et al., 2020); and user review datasets such as Yelp reviews (Hajas et al., 2014) as nonparametric knowledge sources ( $\S 2$ ). Since it is possible to gather a large amount of related knowledge given a query, we select a relevant and diverse (estimated via information-theoretic measures) subset of knowledge snippets using an unsupervised method ( $\S 3.1$ ). Then, a gradient-based inference approach is used to construct an updated response that incorporates the selected knowledge ( $\S 3.2$ ). Note that our framework does not require retraining the existing dialog model—it only relies upon updating the model's output hidden states at decoding time for unsupervised knowledge injection.
32
+
33
+ We experiment with two scenarios: goal-oriented and knowledge-grounded dialog where the training data covers only a fraction of the needed knowledge. Automatic evaluation reveals that our method is capable of generating highly diverse responses in both settings. In some cases, the generated response shows high overlap with the original target response showing that our unsupervised method bridges the knowledge gap between available knowledge and human-written responses present in the existing dialog corpus. An extensive human evaluation confirms that generated responses are indeed engaging, interesting, and human-like without any loss in fluency.
34
+
35
+ To pinpoint the usefulness of knowledge injection in the above settings, we design a real-time study (§5.3) where users interact with our system to reach a conversational goal (e.g. planning a holiday or knowing more about the solar system). We find that external knowledge enables users to achieve their goals more efficiently. Additionally, we observe that our approach of sub-selecting relevant but diverse knowledge leads to responses that promote success in achieving conversational goals.
36
+
37
+ # 2 Post-hoc Knowledge for Dialog
38
+
39
+ Our goal is to construct a dialog response by injecting knowledge (from external textual sources) at decoding time, without having to retrain the models. Consider a dialog model $\mathcal{M}$ from which we can sample a dialog response $x^{d}$ given a dialog history $\mathcal{H}$ . We shall refer to the response $x^{d}$ sampled from such a model without any decoding time knowledge injection as the initial response.
40
+
41
+ However, as motivated earlier, samples from such a dialog model often lack detail. To improve such responses, we retrieve and incorporate relevant external knowledge $k$ into the initial response. To achieve our goal, we construct a query using both dialog history $\mathcal{H}$ and the initial response $x^{d}$ , and gather a relevant knowledge candidate $k$ from a knowledge source $\mathcal{K}$ . The retrieved snippet can provide useful information to the end-user to achieve the conversational goal (see §5.3). We explore both parametric (e.g. querying a language model) and non-parametric (e.g. deterministic retrieval using word-overlap) ways to obtain post-hoc knowledge.
42
+
43
+ # 2.1 Parametric knowledge sources
44
+
45
+ Pretrained language models (PTLM) are typically trained on a vast amount of text that spans a diverse range of domains. Petroni et al. (2019); Brown et al. (2020) showed that such PTLMs can be used as a source of knowledge when queried with suitable textual prompts (e.g. "Seattle is famous for ___"). To use PTLMs in our use-case, we construct useful prompts from the dialog history and the initial response. We assemble simple prompts inspired by various knowledge-seeking situations in dialog (Shwartz et al., 2020), such as "[KP] is famous for ___" and "Here is what I know about [KP]: ___", where $[\mathrm{KP}]$ is a key-phrase $^2$ extracted from dialog context. We use gpt2-large as the PTLM. For example, a query "Here is what I know about fun things around Cambridge:" results in "There are plenty of museums to visit around Cambridge. If you love hiking, you can enjoy the trails alongside the river..." as shown in Figure 1. A complete list of prompts is provided in Appendix B. We finally rank each knowledge snippet $k$ using the likelihood obtained from the PTLM for a concatenated input of $k$ and the dialog history, and choose the most likely snippet.
48
+
49
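The prompt construction and ranking scheme above can be sketched as follows. This is a minimal illustration: the two templates follow the paper, while `log_likelihood` is a hypothetical stand-in for scoring text with a PTLM such as gpt2-large.

```python
# Prompt templates from the paper; [KP] is a key-phrase from the dialog context.
PROMPT_TEMPLATES = [
    "{kp} is famous for",
    "Here is what I know about {kp}:",
]

def build_prompts(key_phrase):
    """Fill each template with an extracted key-phrase."""
    return [t.format(kp=key_phrase) for t in PROMPT_TEMPLATES]

def rank_snippets(snippets, history, log_likelihood):
    """Rank candidate snippets by LM likelihood of the snippet given the history.

    `log_likelihood(text)` would normally come from the PTLM; any callable
    returning a float works for this sketch.
    """
    scored = [(log_likelihood(history + " " + s), s) for s in snippets]
    scored.sort(reverse=True)
    return [s for _, s in scored]
```

In practice the key-phrase extraction and the LM scorer would come from the dialog pipeline itself; the ranking step is just an argmax over per-snippet likelihoods.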
# 2.2 Non-parametric knowledge sources

External knowledge in the form of a text corpus can serve as a non-parametric knowledge source available at decoding time. Unlike parametric knowledge sources, such sources do not generate text as knowledge snippets, but they offer the high quality and reliability of human-written text. We treat the dialog history and the initial response as a query to retrieve relevant knowledge instances from the corpus. We then identify the top relevant instances in the given corpus with respect to the constructed query using cosine similarity on TF-IDF-based representations (Robertson et al., 1995).
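A minimal TF-IDF retriever in this spirit can be sketched as follows; the tokenization and weighting here are deliberately simple stand-ins for whatever the actual retrieval stack uses.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Return one {term: tf * idf} dict per document."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    return [{t: c * idf[t] for t, c in Counter(d.split()).items()} for d in docs]

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query, corpus, top_k=2):
    """Score every corpus document against the query; return the top_k."""
    vecs = tfidf_vectors(corpus + [query])
    qvec, dvecs = vecs[-1], vecs[:-1]
    ranked = sorted(range(len(corpus)),
                    key=lambda i: cosine(qvec, dvecs[i]), reverse=True)
    return [corpus[i] for i in ranked[:top_k]]
```

Here the query would be the concatenation of the dialog history and the initial response, and the corpus a collection of human-written snippets.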
# 3 Unsupervised Knowledge Injection in Generated Dialog

Effectively utilizing the retrieved knowledge snippets to construct an enriched dialog response poses two major challenges. First, it is not practical to use the potentially hundreds of knowledge snippets returned by the retrieval step for a single response generation; we must find a relevant but diverse subset of the snippets. Second, the dialog model $\mathcal{M}$ is trained to condition only on the dialog context, not on external knowledge. Hence, to leverage the knowledge snippets, we need a decoding strategy that rewrites the initial response $x^{d}$ such that the resulting final response $x^{f}$ closely follows the knowledge snippet to be injected without losing fluency or consistency. Our method thus requires no additional training and only assumes a language model trained on dialog context (i.e., $\mathcal{M}$). We refer to our proposed framework (Figure 2) as POKI (Post-hoc Knowledge Injection in Generated Dialog).
# 3.1 Relevance-Redundancy Tradeoff for Knowledge Selection

At each turn, we obtain $N$ knowledge snippets from both the parametric and non-parametric sources. We wish to select a subset of $B$ (out of $N$) relevant but diverse knowledge snippets.

We define the relevance score of a snippet $k_{i}$ with respect to the dialog history $\mathcal{H}$ using pointwise mutual information (PMI) as follows:

$$
\mathrm{REL}_{i} = \mathrm{PMI}(k_{i}, \mathcal{H}) = \log\left(\frac{p(\mathcal{H} \mid k_{i})}{p(\mathcal{H})}\right).
$$

A high PMI score thus implies a larger semantic similarity between the snippet $k_{i}$ and $\mathcal{H}$. To account for redundancy between a snippet pair $k_{i}, k_{j}$, we again use the PMI score:

$$
\mathrm{RED}_{ij} = \mathrm{PMI}(k_{i}, k_{j}) = \log\left(\frac{p(k_{j} \mid k_{i})}{p(k_{j})}\right), \quad j > i.
$$

The redundancy score is symmetric, i.e., $\mathrm{RED}_{ij} = \mathrm{RED}_{ji}$, as PMI is a symmetric measure.
We estimate the probabilities (both conditional and marginal) $p(\cdot)$ in the above equations using the GPT-2 language model, following past work (Padmakumar and He, 2021). The PMI measure is often considered better than n-gram-overlap metrics for measuring the degree of association between two sentences (Kedzie et al., 2018; Padmakumar and He, 2021): semantically similar phrases that occur in both sentences can easily be missed by overlap-based metrics.
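Both scores reduce to differences of log-probabilities, which can be sketched directly. The two probability callables below are hypothetical stand-ins for conditional and marginal likelihoods computed with a language model such as GPT-2.

```python
def rel(k, history, log_prob, log_prob_cond):
    """Relevance: REL_i = log p(H | k_i) - log p(H)."""
    return log_prob_cond(history, k) - log_prob(history)

def red(k_i, k_j, log_prob, log_prob_cond):
    """Redundancy: RED_ij = log p(k_j | k_i) - log p(k_j)."""
    return log_prob_cond(k_j, k_i) - log_prob(k_j)
```

A positive score means conditioning on the snippet raises the probability of the other text, i.e., the two are associated.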
Selection via Determinantal Point Processes. To select $B$ knowledge snippets out of $N$ with a relevance-redundancy trade-off, we use a subset selection process called a Determinantal Point Process (DPP) (Kulesza and Taskar, 2011). A DPP performs non-uniform selection, assigning low probability to subsets (here, of knowledge snippets) that are less diverse by modeling the repulsive correlation between independently occurring datapoints (see Figure 2).

We build an $N \times N$ kernel matrix $\mathcal{D}$, which is real, symmetric, and positive semi-definite. The diagonal entries $\mathcal{D}_{ii}$ are the squared relevance scores $\mathrm{REL}_{i}^{2}$, and the off-diagonal entries $\mathcal{D}_{ij}$ are $\beta$ times the squared redundancy scores $\mathrm{RED}_{ij}^{2}$. We adjust $\beta$ such that $\mathcal{D}$ always remains positive semi-definite (see Wilhelm et al., 2018, for details). To select a subset of size $B$, a DPP assigns a sampling probability proportional to the determinant of the submatrix $\mathcal{D}_{B}$ of $\mathcal{D}$, constructed from the indices of the subsetted items. This probability is geometrically related to the volume of the parallelepiped spanned by the selected knowledge snippets: diverse snippets tend to be orthogonal in their space and hence span a larger volume (Kulesza and Taskar, 2012).
Choosing a size-$B$ submatrix from the size-$N$ matrix $\mathcal{D}$ is a combinatorial problem and becomes prohibitively costly when $N$ is large. Hence, we use a greedy method (Wilhelm et al., 2018): we initialize the selection with the most relevant $k_{i}$ and subsequently select the next $k_{j}$ that maximizes the determinant of the resulting submatrix.
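A minimal sketch of this greedy selection, assuming toy REL/RED scores. The naive Laplace-expansion determinant is for illustration only; a real implementation would use the incremental Cholesky-based update of Wilhelm et al. (2018) instead of recomputing determinants.

```python
def det(m):
    """Determinant by Laplace expansion (fine for the small subset sizes here)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def build_kernel(rel, red, beta=0.5):
    """D_ii = REL_i^2; D_ij = beta * RED_ij^2 (beta small enough to keep D PSD)."""
    n = len(rel)
    return [[rel[i] ** 2 if i == j else beta * red[i][j] ** 2
             for j in range(n)] for i in range(n)]

def greedy_dpp(kernel, b):
    """Start from the most relevant item; add items maximizing the determinant."""
    n = len(kernel)
    selected = [max(range(n), key=lambda i: kernel[i][i])]
    while len(selected) < b:
        def gain(j):
            idx = selected + [j]
            return det([[kernel[p][q] for q in idx] for p in idx])
        rest = [j for j in range(n) if j not in selected]
        selected.append(max(rest, key=gain))
    return sorted(selected)
```

With two highly relevant but mutually redundant snippets, the large off-diagonal entry shrinks the determinant, so the greedy step prefers a less relevant but diverse third snippet.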
# 3.2 Gradient-based Constrained Decoding for Knowledge Injection

Having selected $B$ knowledge snippets, we inject each snippet individually into $x^{d}$ to construct a candidate final response $x^{f}$ at inference time.
Previous work has addressed unsupervised modification of already-generated text using gradient-based decoding (Dathathri et al., 2020; Qin et al., 2020), which employs an iterative procedure consisting of a forward and a backward pass. The forward pass through the generative model (here, $\mathcal{M}$) encourages fluency of the generated text, while the backward pass performs gradient ascent on certain desired constraints. Due to the discrete nature of $x^{d}$, it is not possible to update it directly via back-propagation. Therefore, we maintain the sequence of hidden representations of the output tokens as $z$ from the dialog model. Each output token $x_{(t)}^{d}$ is realized via $p(x_{(t)}^{d}) \sim \mathrm{softmax}(Wz_{(t)} / \tau)$, where $\tau$ is a temperature hyperparameter, $W$ is the output embedding matrix (shared with the input), and $Wz_{(t)} \in \mathbb{R}^{V}$ ($V$ is the vocabulary size).
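The token realization step can be made concrete with toy dimensions. The sketch below computes per-vocabulary logits $Wz$ and a temperature-scaled softmax; matrix shapes and values are illustrative only.

```python
import math

def softmax(logits, tau=1.0):
    """Temperature-scaled softmax: tau < 1 sharpens, tau > 1 flattens."""
    scaled = [x / tau for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def token_logits(W, z):
    """Compute W z: one logit per vocabulary entry (W is V x d, z is d)."""
    return [sum(wi * zi for wi, zi in zip(row, z)) for row in W]
```

Sampling a token then amounts to drawing from the resulting distribution over the $V$ vocabulary entries.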
Constraints. Following Majumder et al. (2021a), we define a knowledge fidelity objective that encourages $x^{f}$ to be minimally different from the knowledge snippet $k$. We achieve this by minimizing the cross-entropy loss (CE) between the knowledge tokens $k_{(1)}, \ldots, k_{(T)}$ as labels and $Wz_{(1)}, \ldots, Wz_{(T)}$ as the logits.

We further notice that injected knowledge can influence the generation such that it contradicts responses uttered during previous turns. Hence, we also want $x^{f}$ to be entailed by the dialog history $\mathcal{H}$. We build an entailment classifier $\theta(z, \mathcal{H})$ that predicts the probability that $x^{f}$ (via its hidden representation $z$) entails $\mathcal{H}$. The classifier is a bag-of-words classification layer over the hidden states $z$ from $\mathcal{M}$, fine-tuned on the DNLI dataset (Welleck et al., 2019) to predict whether the current response is entailed by previous responses.
Decoding. In the subsequent forward and backward passes, the hidden representation $z$ is gradually perturbed via gradient ascent on the respective objectives. During the backward pass, the objective with constraints is

$$
\mathcal{L}(\mathcal{H}, k; z) = \alpha \log \theta(z, \mathcal{H}) - \lambda\, \mathrm{CE}(k, Wz)
$$
with hyperparameters $\alpha$ and $\lambda$. We use back-propagation to update $z$ with the gradient $\nabla_{z}\mathcal{L}(\mathcal{H}, k; z)$ while the parameters of $\mathcal{M}$ remain fixed. We denote the updated latent representations after the backward pass by $z^{bw}$.

A forward pass with $\mathcal{M}$ is then required to regularize the hidden states $z$ toward the original dialog model objective, yielding $z^{fw}$. Corresponding to the $t^{\mathrm{th}}$ token, the hidden states for time step $t + 1$ are computed as a weighted sum of the backward and forward hidden states, i.e., $z_{(t + 1)} = \gamma \times z_{(t)}^{bw} + (1 - \gamma) \times z_{(t)}^{fw}$, where $\gamma \in (0,1)$ is a hyperparameter.
During generation, we first sample the initial response $x^{d}$ with greedy decoding from $\mathcal{M}$. The hidden states $z$ (of $x^{d}$) are iteratively updated by alternating backward and forward passes. The final response is sampled as $x^{f} \sim \mathrm{softmax}(Wz / \tau)$. The number of iterations (5) and $\gamma$ (0.45) were chosen by maximizing the Z-normalized sum of dialog model perplexity and linguistic diversity (% of distinct bigrams) in a greedy hyperparameter search. More details are in Appendix B.
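The alternating update can be illustrated on a toy vector $z$. The real backward pass back-propagates through $\mathcal{M}$ and the constraint objectives; here both passes are replaced by stand-in gradient and forward functions so only the update and mixing rules are shown.

```python
def backward_step(z, grad_fn, step=0.1):
    """Gradient *ascent* step on the constraint objective L(H, k; z)."""
    g = grad_fn(z)
    return [zi + step * gi for zi, gi in zip(z, g)]

def mix(z_bw, z_fw, gamma=0.45):
    """Weighted combination: z_{t+1} = gamma * z_bw + (1 - gamma) * z_fw."""
    return [gamma * b + (1 - gamma) * f for b, f in zip(z_bw, z_fw)]

def constrained_decode(z, grad_fn, forward_fn, iters=5, gamma=0.45):
    """Alternate backward (constraint) and forward (fluency) passes."""
    for _ in range(iters):
        z_bw = backward_step(z, grad_fn)
        z_fw = forward_fn(z)
        z = mix(z_bw, z_fw, gamma)
    return z
```

With a quadratic toy objective, repeated iterations move $z$ toward the constraint optimum while the forward term anchors it near its original value.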
# 3.3 Unsupervised Ranking of Candidate Final Responses

Several previous works over-generate and use an additional ranking step to select the final candidate in unsupervised text generation (Qin et al., 2020; Shwartz et al., 2020; Paranjape and Manning, 2021). Similarly, we rank the generated candidate final responses according to the diversity of the generated text as well as the conditional likelihood of generation given the dialog history. For diversity, we measure the percentage of distinct bigrams in the response. For conditional likelihood, we use the pre-trained GPT-2 model to obtain the log probability of the concatenated input consisting of the dialog history followed by the generated response. Since these two scores can have different scales, we Z-normalize each score and add them to obtain a single ranking score. The highest-ranked candidate response is finally rendered to the user.

<table><tr><td>System</td><td>Acc</td><td>BLEU</td><td>BRTSc</td><td>D-2</td><td>ENTR</td></tr><tr><td>KCopy</td><td>70.1</td><td>4.1</td><td>62.3</td><td>3.16</td><td>2.41</td></tr><tr><td>SimpleTOD (2020)</td><td>70.1</td><td>15.0</td><td>79.2</td><td>0.56</td><td>0.90</td></tr><tr><td>SimpleTOD+ (2021)</td><td>69.8</td><td>12.1</td><td>68.1</td><td>0.81</td><td>1.11</td></tr><tr><td>Arranger (2021)</td><td>70.2</td><td>12.3</td><td>68.5</td><td>0.93</td><td>1.15</td></tr><tr><td>Rewriter (2021)</td><td>70.2</td><td>12.1</td><td>69.4</td><td>1.03</td><td>1.45</td></tr><tr><td>POKI</td><td>71.1</td><td>13.7</td><td>74.5</td><td>3.78</td><td>2.67</td></tr><tr><td>w/o Entailment</td><td>69.9</td><td>10.9</td><td>67.8</td><td>3.67</td><td>2.56</td></tr><tr><td>w/o Kw Fidelity</td><td>70.0</td><td>12.3</td><td>71.2</td><td>0.95</td><td>1.19</td></tr><tr><td>Gold</td><td>100</td><td>100</td><td>100</td><td>0.78</td><td>0.86</td></tr></table>
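The ranking step above can be sketched directly: compute the distinct-bigram ratio and a log-likelihood per candidate, Z-normalize each score across candidates, and take the argmax of the sum. `loglik` below is a stand-in for GPT-2 scoring of history plus response.

```python
import math

def distinct_bigrams(text):
    """Fraction of bigrams in the text that are distinct."""
    toks = text.split()
    bigrams = list(zip(toks, toks[1:]))
    return len(set(bigrams)) / len(bigrams) if bigrams else 0.0

def z_norm(xs):
    """Z-normalize a list of scores (degenerate std falls back to 1)."""
    mu = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs)) or 1.0
    return [(x - mu) / sd for x in xs]

def rank_candidates(cands, loglik):
    """Return the index of the candidate with the best combined score."""
    div = z_norm([distinct_bigrams(c) for c in cands])
    lik = z_norm([loglik(c) for c in cands])
    scores = [d + l for d, l in zip(div, lik)]
    return max(range(len(cands)), key=lambda i: scores[i])
```

Because both scores are Z-normalized over the same candidate pool, neither diversity nor likelihood dominates purely by scale.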
# 4 Experimental Setup

# 4.1 Scenarios and Datasets

We experiment with two dialog scenarios: goal-oriented and knowledge-grounded. Both setups are knowledge-intensive, but the training data in such setups often contains only a fraction of the needed knowledge. For the goal-oriented setting, we use the Multi-domain Wizard-of-Oz (Budzianowski et al., 2018) dataset. For knowledge-grounded dialog, we use the Wizard-of-Wikipedia (Dinan et al., 2019b) dataset. More details are in Appendix A.

Multi-domain Wizard-of-Oz (MultiWOZ) is a multi-domain dialog dataset (we use v2.0 (Hosseini-Asl et al., 2020)) consisting of goal-oriented human-human conversations. The dataset spans seven domains (restaurant, train, attraction, hotel, taxi, hospital, police) and contains 10,438 dialogs with an average of 13.68 turns. Since we do not need any training data, we only use an evaluation set (of 7K utterances).

Wizard-of-Wikipedia (WoW) is a knowledge-grounded dialog dataset that involves retrieving relevant knowledge from Wikipedia, reading and conditioning on it, and finally generating dialog responses (Dinan et al., 2019b). The dataset contains 201K utterances from 22K dialogues spanning 1,300 diverse topics, of which we use only the test set. The associated Wikipedia knowledge base has 5.4M articles and 93M sentences.
Table 1: Automatic metrics on the test set of MultiWOZ. Difference between bold and non-bold numbers is statistically significant $(p < 0.001)$.
<table><tr><td>System</td><td>BLEU</td><td>BRTSc</td><td>D-2</td><td>ENTR</td></tr><tr><td>KCopy</td><td>13.4</td><td>74.3</td><td>3.64</td><td>3.12</td></tr><tr><td>KGuide (2017)</td><td>16.7</td><td>71.5</td><td>2.54</td><td>2.12</td></tr><tr><td>KGround (2019)</td><td>18.3</td><td>72.5</td><td>2.87</td><td>2.35</td></tr><tr><td>BART (2020a)</td><td>19.8</td><td>73.4</td><td>2.97</td><td>2.55</td></tr><tr><td>RAG (2020b)</td><td>19.9</td><td>73.1</td><td>1.03</td><td>1.45</td></tr><tr><td>POKI</td><td>19.4</td><td>76.8</td><td>3.65</td><td>3.44</td></tr><tr><td>w/o Entailment</td><td>18.1</td><td>74.2</td><td>3.17</td><td>3.39</td></tr><tr><td>w/o Kw Fidelity</td><td>18.8</td><td>73.3</td><td>2.75</td><td>2.54</td></tr><tr><td>Gold</td><td>100</td><td>100</td><td>2.98</td><td>2.59</td></tr></table>

Table 2: Automatic metrics on the test set of Wizard-of-Wikipedia. Difference between bold and non-bold numbers is statistically significant $(p < 0.001)$.
# 4.2 Baselines and Ablations

Baselines for MultiWOZ. For MultiWOZ, we consider several baselines for knowledge injection following Sun et al. (2021). First, we use the current state-of-the-art model for goal-oriented dialog, SimpleTOD (Hosseini-Asl et al., 2020). Sun et al. (2021) extend SimpleTOD by adding chitchat candidates to dialog histories during training. They also propose variants that either concatenate the output of SimpleTOD with candidate chitchats (Arranger) or rewrite the response by combining the output and the chitchat snippets (Rewriter). We also include a trivial baseline (KCopy) that appends the knowledge snippet $k$ retrieved by POKI to the initial response $x^{d}$.

Baselines for WoW. For WoW, we use two current-best knowledge-grounded models, KGround (Wolf et al., 2019) and BART (Lewis et al., 2020a), which concatenate the associated knowledge snippets (present in WoW) and the dialog history as inputs to generate the response with supervision. KGuide (Zhao et al., 2017) and RAG (Lewis et al., 2020b) add a knowledge selection step, modeled by a latent variable, before response generation. We also use the KCopy baseline described for MultiWOZ.

Variants of POKI. To investigate the impact of the decoding constraints in POKI, we consider two ablated variants: w/o Entailment and w/o Knowledge (Kw) Fidelity (§3.2). In POKI, we use SimpleTOD as the base dialog model in the goal-oriented scenario and BART (the state-of-the-art model for WoW) as the base dialog model in the knowledge-grounded scenario. For all variants of POKI, we use gradient-based inference to decode the final response.
<table><tr><td rowspan="2"></td><td>POKI vs</td><td colspan="3">SimpleTOD</td><td colspan="3">Rewriter</td><td colspan="3">w/o Entailment</td><td colspan="3">w/o Kw Fidelity</td><td colspan="3">Gold</td></tr><tr><td>Criteria</td><td>win</td><td>loss</td><td>κ</td><td>win</td><td>loss</td><td>κ</td><td>win</td><td>loss</td><td>κ</td><td>win</td><td>loss</td><td>κ</td><td>win</td><td>loss</td><td>κ</td></tr><tr><td rowspan="4">MultiWOZ</td><td>Coherent</td><td>93.2</td><td>4.4</td><td>0.76</td><td>85.6</td><td>10.2</td><td>0.75</td><td>98.7</td><td>0.8</td><td>0.72</td><td>77.8</td><td>17.8</td><td>0.78</td><td>26.2</td><td>34.4</td><td>0.69</td></tr><tr><td>Engaging</td><td>94.3</td><td>4.5</td><td>0.78</td><td>89.7</td><td>7.9</td><td>0.79</td><td>98.7</td><td>0.6</td><td>0.80</td><td>71.5</td><td>20.5</td><td>0.80</td><td>42.4</td><td>37.4</td><td>0.78</td></tr><tr><td>Interesting</td><td>92.7</td><td>5.4</td><td>0.72</td><td>91.2</td><td>8.3</td><td>0.73</td><td>88.6</td><td>8.9</td><td>0.68</td><td>98.7</td><td>0.8</td><td>0.75</td><td>49.7</td><td>45.6</td><td>0.67</td></tr><tr><td>Humanlike</td><td>85.4</td><td>10.7</td><td>0.68</td><td>87.4</td><td>7.3</td><td>0.65</td><td>61.9</td><td>30.5</td><td>0.71</td><td>81.7</td><td>14.0</td><td>0.74</td><td>29.7</td><td>37.8</td><td>0.66</td></tr><tr><td></td><td></td><td colspan="3">RAG</td><td colspan="3">BART</td><td colspan="3">w/o Entailment</td><td colspan="3">w/o Kw Fidelity</td><td colspan="3">Gold</td></tr><tr><td rowspan="4">WoW</td><td>Coherent</td><td>95.4</td><td>4.5</td><td>0.78</td><td>88.5</td><td>9.6</td><td>0.72</td><td>94.3</td><td>3.4</td><td>0.68</td><td>83.6</td><td>10.7</td><td>0.65</td><td>23.8</td><td>25.3</td><td>0.73</td></tr><tr><td>Engaging</td><td>89.3</td><td>7.7</td><td>0.72</td><td>87.8</td><td>8.3</td><td>0.71</td><td>97.7</td><td>0.8</td><td>0.70</td><td>71.5</td><td>25.4</td><td>0.69</td><td>25.4</td><td>26.7</td><td>0.73</td></tr><tr><td>Interesting</td><td>96.3</td><td>3.5</td><td>0.74</td><td>83.3</td><td>9.9</td><td>0.75</td><td>79.8</td><td>17.2</td><td>0.70</td><td>93.5</td><td>4.5</td><td>0.71</td><td>35.9</td><td>37.8</td><td>0.76</td></tr><tr><td>Humanlike</td><td>91.4</td><td>7.1</td><td>0.68</td><td>92.4</td><td>6.5</td><td>0.66</td><td>84.5</td><td>10.5</td><td>0.67</td><td>81.8</td><td>13.5</td><td>0.71</td><td>42.3</td><td>41.9</td><td>0.68</td></tr></table>

Table 3: Pairwise comparison (% win/loss cases, ties not reported) between responses from POKI and from other baselines as well as the ground truth. Difference between bold and non-bold numbers is statistically significant $(p < 0.001)$. $\kappa$ denotes Cohen's Kappa (Cohen, 1960) between a pair of annotators. Complete details of the human evaluation are in Appendix C.
# 5 Results and Discussion

# 5.1 Automatic Evaluation

Our primary goal is to generate responses enriched with relevant external knowledge. Arguably, a system that can effectively leverage additional knowledge at decoding time should generate more diverse responses. For diversity, we measure the percentage of distinct bigrams, Distinct-2 (D-2) (Li et al., 2016), and the geometric mean of the entropies of the empirical frequency distributions of $n$-grams ($n = 1, 2, 3$), Entropy (ENTR) (Jhamtani et al., 2018). Additionally, we report the overlap between generated responses and the corresponding ground truth with BLEU and BERTScore (BRTSc). For MultiWOZ, we also report the final goal accuracy (Acc) following Hosseini-Asl et al. (2020).
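The two diversity metrics can be sketched as follows: Distinct-2 is the ratio of distinct bigrams, and ENTR is the geometric mean of the $n$-gram entropies for $n = 1, 2, 3$.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def distinct_2(tokens):
    """Distinct-2: fraction of bigrams that are unique."""
    bg = ngrams(tokens, 2)
    return len(set(bg)) / len(bg) if bg else 0.0

def entropy(counts):
    """Shannon entropy of an empirical n-gram frequency distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def entr(tokens):
    """ENTR: geometric mean of the 1-, 2-, and 3-gram entropies."""
    hs = [entropy(Counter(ngrams(tokens, n))) for n in (1, 2, 3)]
    return math.prod(hs) ** (1 / 3)
```

A fully repetitive response has zero ENTR, while varied wording raises both metrics.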
MultiWOZ. Table 1 shows that POKI outperforms all baselines in terms of the diversity of generated responses. More importantly, POKI improves the accuracy of reaching the final dialog state, i.e., the goal. For the ablated versions of POKI, we find that the entailment constraint has little effect on diversity, while dropping the knowledge fidelity constraint hurts both accuracy and diversity. All variants of SimpleTOD and all versions of POKI depart from SimpleTOD's BLEU and BERTScore results, since they all add external knowledge that was not explicitly present in the data. However, the departure is not large, and POKI achieves a BERTScore much closer to SimpleTOD's than the baselines do.
WoW. Although all systems for WoW use knowledge explicitly in the knowledge-grounded dialog generation task, Table 2 shows that POKI generates the most diverse responses. As with MultiWOZ, the knowledge fidelity constraint remains a significant factor in increasing diversity, one of the main goals of knowledge injection. For WoW, POKI even outperforms BART (the previous state of the art) in terms of BERTScore when injected with external knowledge, indicating the value of external knowledge for modeling WoW dialogs.
# 5.2 Human Evaluation

We conduct a comparative human evaluation with 300 samples to evaluate the quality of generated dialog responses, following ACUTE-Eval (Li et al., 2019). We show an annotator a response generated by POKI together with its associated dialog history and ask whether knowledge injection makes the final response more engaging, interesting, and humanlike than a baseline response. As a sanity check, we also ask whether the response remains coherent after knowledge injection. Each sample is evaluated by two annotators<sup>3</sup>.
MultiWOZ. Table 3 reports the pairwise comparison, showing that POKI consistently outperforms the baselines on all criteria. Responses from POKI are more engaging and interesting than those from SimpleTOD and Rewriter, demonstrating that gradient-based decoding is effective for knowledge injection. In POKI, the entailment constraint mostly influences coherence, whereas the knowledge fidelity constraint is important for engagingness and interestingness.
WoW. Table 3 shows that POKI outperforms baselines that use grounding knowledge during training on all criteria, showing that external knowledge can be useful even in the knowledge-grounded setting to make the conversation engaging and interesting. It also indicates the limitations of the training signal, or a lack of access to sufficient knowledge, and room for improvement in how knowledge is utilized. A large gap in win percentages in favor of POKI on the 'humanlike' criterion, when compared to state-of-the-art methods, suggests that knowledge injection leads to more natural conversation. Here too, both decoding constraints show trends similar to MultiWOZ.

<table><tr><td>: I need a place to eat that is cheap.
: Do you have a location preference? I have several options for inexpensive restaurants.</td><td>Do you have a location preference? I have several options for you.</td><td>None</td></tr><tr><td>: Center of the town in Cambridge.
: I think you can go with Asian cuisines such as Chinese or Indian cuisines. They are value for money since they are known for their great taste.</td><td>I can book a Chinese or Indian restaurant near centre for you.</td><td>Asian cuisines such as Chinese or Indian cuisines are inexpensive. They are value for money since they are known for their great taste.</td></tr><tr><td>: Do you have any restaurants that serve English food?
: I do not have an inexpensive restaurant that serves English food in the centre area. Indian food is very popular in the centre area. Many people who like English food also enjoy Indian food.</td><td>I do not have an inexpensive restaurant that serves English food in the centre area.</td><td>Indian food is very popular in the centre area, Cambridge. I know many friends who like English food also enjoy Indian food. Indian food is great value for money.</td></tr><tr><td>: Awesome. I can go with Indian food.
: Great. I can book a reservation for you at Kohinoor. The table will be reserved for 15 minutes. They have a vibrant environment and they are known for their starters.</td><td>I can book a reservation for you at Kohinoor. The table will be reserved for 15 minutes.</td><td>Kohinoor has a vibrant environment in the evening. They are best known for their starters. Do check them out. Also Indian sweets are great as desserts.</td></tr></table>

Figure 3: POKI converses with a user who is looking for some restaurant options (left column). In each turn, a knowledge snippet (right column) is injected into an initial response (middle column). More examples are in Appendix D.
Qualitative Analysis. Figure 3 shows a conversation between POKI and a user who seeks restaurant options around Cambridge. In most turns, the injected knowledge appears as additional justification on top of the initial response, making the dialog engaging and effective in reaching the user's goal (also noted by human judges in §5.3). For example, in turn 3, adding the extra information about Indian cuisine helped the user reach a conclusion when their original choice of English cuisine was unavailable.
Effect of Response Length. Qualitatively, as seen in Figure 3, responses generated by POKI are longer than the initial responses due to the post-hoc knowledge injection. In the human evaluation sample, we found that $37\%$ of responses from POKI are similar in length to or shorter than responses from the best baseline. We investigated whether response length acted as a confounding factor during human evaluation. Among the cases where POKI lost to a baseline, $45\%$ ($\pm 2\%$ when bootstrapped with 1000 subsets of size 50) of POKI's responses were longer than those of the comparison baseline. Among the win cases for POKI, $49\%$ ($\pm 3\%$ when bootstrapped with 1000 subsets of size 50) of POKI's responses were longer than those of the comparison method. This indicates that human judges did not simply prefer longer responses.
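The bootstrap estimate used above can be sketched as follows: repeatedly resample fixed-size subsets of the win (or loss) cases and measure the spread of the "POKI response was longer" percentage. The data here is synthetic for illustration.

```python
import random

def bootstrap_pct(flags, n_subsets=1000, subset_size=50, seed=0):
    """Mean and std of the percentage of True flags over resampled subsets."""
    rng = random.Random(seed)
    pcts = [100.0 * sum(rng.choices(flags, k=subset_size)) / subset_size
            for _ in range(n_subsets)]
    mean = sum(pcts) / len(pcts)
    var = sum((p - mean) ** 2 for p in pcts) / len(pcts)
    return mean, var ** 0.5
```

The standard deviation across subsets gives the ± interval reported in the text.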
# 5.3 User Study for Effectiveness of Knowledge Injection

Relevant knowledge injection can add justification to terse dialog outputs and hence influence the task outcome positively. Mirroring observations from Ghandeharioun et al. (2019), a real-time evaluation over full conversations is needed to investigate whether POKI achieves the conversational goal better than the baselines.
We recruited 60 users for this study. Half of the users interacted with POKI, while the other half interacted with the best baseline model that does not augment dialog responses with external knowledge. We constructed a speculative goal for each user to accomplish via the conversation. Users could end the conversation whenever they liked, and we asked them whether the system helped them reach their conversational goal, along with additional comments justifying their rating. Users who interacted with a knowledge-augmented system were also asked whether the system provided any knowledge they had not explicitly asked for that nevertheless helped them reach the conversational goal (Majumder et al., 2021b). Finally, we asked whether they would like to engage with the system again in the future.
For goal-oriented dialog, we construct speculative goals (e.g., looking for entertainment options) manually from the ground truth for 300 dialog samples. Since we are not using the underlying databases, we made sure that speculative goals do not require specific information (e.g., booking availability, flight information, etc.). For knowledge-grounded dialog, we provide the intended topic of discussion (e.g., science fiction) present in the data; the speculative goal here is to learn more about, or to have an engaging conversation about, the topic.

<table><tr><td>MultiWOZ</td><td># turns ↓</td><td>Goal</td><td>Know</td><td>Would use</td></tr><tr><td>Rewriter</td><td>8 ± 2</td><td>69%</td><td>35%</td><td>56%</td></tr><tr><td>POKI</td><td>4 ± 3</td><td>86%</td><td>84%</td><td>76%</td></tr><tr><td>WoW</td><td># turns ↑</td><td>Goal</td><td>Know</td><td>Would use</td></tr><tr><td>BART</td><td>10 ± 2</td><td>56%</td><td>70%</td><td>48%</td></tr><tr><td>POKI</td><td>16 ± 3</td><td>76%</td><td>89%</td><td>71%</td></tr></table>
Results. First, we find that POKI is unanimously preferred by users over the baseline in the user study. More importantly, in the goal-oriented setting (MultiWOZ), when users successfully accomplished their goal, $84\%$ of the time they found the additional knowledge helpful, as compared to a baseline (Rewriter) that did not use any external knowledge. Most importantly, POKI takes significantly fewer turns for users to accomplish the goal than Rewriter, implicitly indicating that injected knowledge contributes to more efficient conversations (we observe a high correlation of 0.67).
In the knowledge-grounded setting (WoW), both BART and POKI have access to external knowledge sources. However, with POKI, $89\%$ of success scenarios (compared to $70\%$) were directly influenced by the additional post-hoc knowledge. For knowledge-grounded dialog, a longer conversation is indicative of engagement with a topic (Gopalakrishnan et al., 2019); accordingly, users preferred to converse with POKI for more turns than with the BART baseline. We quote a user who found a conversation with POKI about Korean culture particularly engaging: "Before this conversation, I had less knowledge about Korean movies and art-forms. This gave me a new perspective and a handful of popular opinions to look at it."
# 5.4 Discussion

Performance of Knowledge Selection. The knowledge selection step in POKI acts as an information bottleneck: the quality of the generated response directly depends on the quality of the selected knowledge<sup>5</sup>. We perform a human evaluation on 200 snippets to measure relevance and factual correctness in two scenarios: selecting a retrieved snippet at random, or selecting via DPP. In Table 5, we see that the parametric knowledge source (gpt2-large) generates more relevant knowledge snippets than the non-parametric one. We attribute this to 1) the large and diverse dataset (WebText) used during pretraining of GPT-2, as compared to the Yelp reviews (restricted domains) we used for retrieval, and 2) the limited recall of relevant knowledge when using word-overlap-based retrieval. However, large language models are still prone to generating non-factual knowledge. We observe that DPP-based selection in POKI sub-selects more factual knowledge, which in turn positively influences final response quality. For WoW, we also compare the selected snippets with the gold knowledge available in the dataset, which shows high fidelity in terms of BERTScore.

Table 4: Real-time user study: average # of turns for successful goal completion, % of times the goal was achieved, % of success cases in which users were helped by additional knowledge (Know) they had not explicitly asked for, and whether users would like to use the system in the future.

<table><tr><td rowspan="2">Source</td><td colspan="2">Relevant</td><td colspan="2">Factual</td><td colspan="2">BRTSc for WoW</td></tr><tr><td>Random</td><td>DPP</td><td>Random</td><td>DPP</td><td>Random</td><td>DPP</td></tr><tr><td>Parametric</td><td>82%</td><td>89%</td><td>65%</td><td>83%</td><td>74.2</td><td>81.3</td></tr><tr><td>Non-parametric</td><td>81%</td><td>83%</td><td>97%</td><td>98%</td><td>65.2</td><td>76.8</td></tr></table>

Table 5: Evaluation of the quality of the knowledge snippets for random and DPP-based selection.

<table><tr><td>System</td><td>MultiWOZ</td><td>WoW</td></tr><tr><td>Supervised</td><td>17.6 ± 5.2 ms</td><td>23.6 ± 4.6 ms</td></tr><tr><td>PPCM (2020)</td><td>30.9 ± 7.5 ms</td><td>32.6 ± 4.2 ms</td></tr><tr><td>POKI</td><td>34.2 ± 8.4 ms</td><td>35.7 ± 5.7 ms</td></tr><tr><td>POKI, only decoding</td><td>31.6 ± 2.7 ms</td><td>32.3 ± 3.4 ms</td></tr></table>

Table 6: Mean and standard error of clock time taken per token.
+ Time Complexity. Madotto et al. (2020) show that iterative gradient-based decoding can be slower than generating a response with a single forward pass of an existing model. When we benchmark POKI on an Nvidia 2080Ti GPU (Table 6), we see that knowledge generation (or retrieval) can be a computational bottleneck for POKI. However, the greedy selection and the constrained decoding step do not add significant computational load. Furthermore, POKI's performance is comparable with PPCM (Madotto et al., 2020), a more efficient version of gradient-based decoding. The efficiency of the knowledge retrieval step can be improved with better indexing (Johnson et al., 2021), which we leave as future work.
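The per-token clock times in Table 6 can be measured with a simple harness that times each decoding step and reports the mean and standard error; `decode_step` below is a hypothetical stand-in for one token's forward (or backward-forward) computation:

```python
import time
import statistics

def per_token_latency(decode_step, n_tokens=100):
    """Time each decoding step; return (mean, std. error) in milliseconds."""
    times_ms = []
    for _ in range(n_tokens):
        t0 = time.perf_counter()
        decode_step()  # one token's worth of computation
        times_ms.append((time.perf_counter() - t0) * 1000.0)
    mean = statistics.mean(times_ms)
    sem = statistics.stdev(times_ms) / (len(times_ms) ** 0.5)  # std. error of the mean
    return mean, sem
```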
212
+
213
+ # 6 Related Work
214
+
215
+ Knowledge grounded dialog datasets such as Wizard-of-Wikipedia (Dinan et al., 2019a) and Topical chat (Gopalakrishnan et al., 2019) typically consist of dialog responses paired with relevant knowledge available as collected annotations. Hence, models trained on such datasets are restricted to the knowledge sources they were exposed to at training time. Past work (Sun et al., 2021; Majumder et al., 2020a; Su et al., 2020; Komeili et al., 2021; Adolphs et al., 2021; Ghazvininejad et al., 2018; Tuan et al., 2020; Lewis et al., 2020c; Guu et al., 2020) has looked into injecting extra knowledge sources at training time in a bid to add knowledge not available originally as paired to dialog responses. However, such approaches require re-training the model if some new knowledge source were to be used. Moreover, while previous work focuses on just improving specificity of dialog response using external knowledge, we also study the effect of additional knowledge in achieving conversational goals.
216
+
217
+ Improving the diversity of dialog responses by using diversity-promoting sampling has been explored in past work (Fan et al., 2018; Holtzman et al., 2020). We use a gradient-based decoding method, building on past work in this direction (Dathathri et al., 2020; Qin et al., 2020; Madotto et al., 2020; Majumder et al., 2021a). However, we propose new objectives to inject post-hoc knowledge obtained based on already generated dialog—an unsupervised knowledge injection method that has not been explored so far.
218
+
219
+ # 7 Conclusion
220
+
221
+ We propose a framework for unsupervised knowledge injection into dialog responses. We show that knowledge obtained post-hoc from any knowledge source can improve users' ability to reach their conversational goals more effectively. In the future, our idea can be generalized to setups where external knowledge can justify a model's predictions, such as conversational recommendation.
222
+
223
+ # Acknowledgements
224
+
225
+ We thank anonymous reviewers for providing valuable feedback. BPM is partly supported by a Qualcomm Innovation Fellowship, a Friends of the International Center Fellowship-UC San Diego, NSF Award #1750063, and MeetElise.
226
+
227
+ # References
228
+
229
+ Leonard Adolphs, Kurt Shuster, Jack Urbanek, Arthur Szlam, and Jason Weston. 2021. Reason first, then respond: Modular generation for knowledge-infused dialogue. CoRR, abs/2111.05204.
230
+ Tom B. Brown, Benjamin Mann, Nick Ryder, et al. 2020. Language models are few-shot learners. In NeurIPS.
231
+ Maxime De Bruyn, Ehsan Lotfi, Jeska Buhmann, and Walter Daelemans. 2020. BART for knowledge grounded conversations. In Converse@KDD, volume 2666. CEUR-WS.org.
232
+ Pawel Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Inigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. MultiWOZ - A large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. In EMNLP.
233
+ Ricardo Campos, Vitor Mangaravite, Arian Pasquali, Alipio Jorge, Celia Nunes, and Adam Jatowt. 2020. Yake! keyword extraction from single documents using multiple local features. Information Sciences, 509.
234
+ Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological measurement, 20(1):37-46.
235
+ Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models: A simple approach to controlled text generation. In ICLR.
236
+ Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019a. Wizard of wikipedia: Knowledge-powered conversational agents. In ICLR.
237
+ Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019b. Wizard of wikipedia: Knowledge-powered conversational agents. In ICLR.
238
+ Angela Fan, Mike Lewis, and Yann N. Dauphin. 2018. Hierarchical neural story generation. In ACL.
239
+ Guillaume Gautier, Guillermo Polito, Rémi Bardenet, and Michal Valko. 2019. DPPy: DPP sampling with Python. Journal of Machine Learning Research - Machine Learning Open Source Software (JMLR-MLOSS).
240
+ Asma Ghandeharioun, Judy Hanwen Shen, Natasha Jaques, Craig Ferguson, Noah Jones, Àgata Lapedriza, and Rosalind W. Picard. 2019. Approximating interactive human evaluation with self-play for open-domain dialog systems. In NeurIPS.
241
+ Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In AAAI.
242
+
243
+ Karthik Gopalakrishnan, Behnam Hedayatnia, Qinglang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tur. 2019. Topical-chat: Towards knowledge-grounded open-domain conversations. In Interspeech.
244
+ Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: retrieval-augmented language model pre-training. CoRR, abs/2002.08909.
245
+ Peter Hajas, Louis Gutierrez, and Mukkai S. Krishnamoorthy. 2014. Analysis of yelp reviews. CoRR, abs/1407.1443.
246
+ Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In ICLR.
247
+ Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue. In NeurIPS.
248
+ Harsh Jhamtani, Varun Gangal, Eduard Hovy, Graham Neubig, and Taylor Berg-Kirkpatrick. 2018. Learning to generate move-by-move commentary for chess games from large-scale social forum data. In ACL 2018.
249
+ Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2021. Billion-scale similarity search with gpus. IEEE Trans. Big Data.
250
+ Chris Kedzie, Kathleen R. McKeown, and Hal Daumé III. 2018. Content selection in deep learning models of summarization. In EMNLP.
251
+ Byeongchang Kim, Jaewoo Ahn, and Gunhee Kim. 2020. Sequential latent knowledge selection for knowledge-grounded dialogue. In ICLR. OpenReview.net.
252
+ Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2021. Internet-augmented dialogue generation. CoRR, abs/2107.07566.
253
+ Alex Kulesza and Ben Taskar. 2011. k-dpps: Fixed-size determinantal point processes. In ICML. Omni Press.
254
+ Alex Kulesza and Ben Taskar. 2012. Determinantal point processes for machine learning. Found. Trends Mach. Learn., 5(2-3):123-286.
255
+ Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In ACL.
256
+ Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020b. Retrieval-augmented generation for knowledge-intensive NLP tasks. In NeurIPS.
259
+ Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020c. Retrieval-augmented generation for knowledge-intensive NLP tasks. In NeurIPS.
260
+ Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In NAACL HLT.
261
+ Margaret Li, Jason Weston, and Stephen Roller. 2019. ACUTE-EVAL: improved dialogue evaluation with optimized questions and multi-turn comparisons. CoRR, abs/1909.03087.
262
+ Andrea Madotto, Etsuko Ishii, Zhaojiang Lin, Sumanth Dathathri, and Pascale Fung. 2020. Plug-and-play conversational models. In Findings of EMNLP.
263
+ Bodhisattwa Prasad Majumder, Taylor Berg-Kirkpatrick, Julian J. McAuley, and Harsh Jhamtani. 2021a. Unsupervised enrichment of persona-grounded dialog with background stories. In ACL.
264
+ Bodhisattwa Prasad Majumder, Harsh Jhamtani, Taylor Berg-Kirkpatrick, and Julian J. McAuley. 2020a. Like hiking? you probably enjoy nature: Persona-grounded dialog with commonsense expansions. In EMNLP.
265
+ Bodhisattwa Prasad Majumder, Shuyang Li, Jianmo Ni, and Julian J. McAuley. 2020b. Interview: Large-scale modeling of media dialog with discourse patterns and knowledge grounding. In EMNLP.
266
+ Bodhisattwa Prasad Majumder, Sudha Rao, Michel Galley, and Julian J. McAuley. 2021b. Ask what's missing and what's useful: Improving clarification question generation using global knowledge. In NAACL.
267
+ Vishakh Padmakumar and He He. 2021. Unsupervised extractive summarization using pointwise mutual information. In EACL.
268
+ Ashwin Paranjape and Christopher D. Manning. 2021. Human-like informative conversations: Better acknowledgements using conditional mutual information. In NAACL-HLT.
269
+ Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. 2019. Language models as knowledge bases? In EMNLP-IJCNLP.
270
+ Shrimai Prabhumoye, Kazuma Hashimoto, Yingbo Zhou, Alan W. Black, and Ruslan Salakhutdinov. 2021. Focused attention improves document-grounded generation. In NAACL-HLT.
271
+
272
+ Lianhui Qin, Vered Shwartz, Peter West, Chandra Bhagavatula, Jena D. Hwang, Ronan Le Bras, Antoine Bosselut, and Yejin Choi. 2020. Back to the future: Unsupervised backprop-based decoding for counterfactual and abductive commonsense reasoning. In EMNLP.
273
+ Stephen E. Robertson, Steve Walker, and Micheline Hancock-Beaulieu. 1995. Large test collection experiments on an operational, interactive system: Okapi at TREC. Inf. Process. Manag., 31(3):345-360.
274
+ Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2020. Green AI. Commun. ACM, 63(12):54-63.
275
+ Vered Shwartz, Peter West, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Unsupervised commonsense question answering with self-talk. In EMNLP.
276
+ Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In ACL.
277
+ Hui Su, Xiaoyu Shen, Sanqiang Zhao, Xiao Zhou, Pengwei Hu, Randy Zhong, Cheng Niu, and Jie Zhou. 2020. Diversifying dialogue generation with non-conversational text. In ACL.
278
+ Kai Sun, Seungwhan Moon, Paul A. Crook, Stephen Roller, Becka Silvert, Bing Liu, Zhiguang Wang, Honglei Liu, Eunjoon Cho, and Claire Cardie. 2021. Adding chit-chats to enhance task-oriented dialogues. NAACL.
279
+ Yi-Lin Tuan, Wei Wei, and William Yang Wang. 2020. Unsupervised injection of knowledge into dialogue generation via language models. CoRR, abs/2004.14614.
280
+ Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue natural language inference. In ACL.
281
+ Mark Wilhelm, Ajith Ramanathan, Alexander Bonomo, Sagar Jain, Ed H. Chi, and Jennifer Gillenwater. 2018. Practical diversified recommendations on youtube with determinantal point processes. In CIKM. ACM.
282
+ Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. Transfertransfo: A transfer learning approach for neural network based conversational agents. CoRR, abs/1901.08149.
283
+ Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In ACL.
284
+ Xueliang Zhao, Wei Wu, Can Xu, Chongyang Tao, Dongyan Zhao, and Rui Yan. 2020. Knowledge-grounded dialogue generation with pre-trained language models. In EMNLP.
285
+
286
+ # A Datasets
287
+
288
+ MultiWOZ. To compare with previous work, we use MultiWOZ 2.0 following Hosseini-Asl et al. (2020). Note that we do not need any training data for our models since we perform post-hoc knowledge injection.
289
+
290
+ WoW. For Wizard-of-Wikipedia, all baselines and the original dialog model for POKI use the paired knowledge available in the training data (not a part of our pipeline). However, POKI additionally uses the external knowledge snippets selected via DPP.
291
+
292
+ # B Implementation Details
293
+
294
+ We open-source our code at: https://github.com/majumderb/poki. We use the publicly available implementation for DPP (Gautier et al., 2019)$^{6}$.
295
+
296
+ We obtain MultiWOZ 2.0 from the official release$^{7}$. Similarly, we obtain Wizard-of-Wikipedia from the ParlAI repository$^{8}$. We adapted code from the original PPLM (Dathathri et al., 2020) repository$^{9}$ and modified it for our own objective function. We obtained the Yelp review dataset from the official website$^{10}$; it contains 8,635,403 reviews. For the diversity calculation (in automatic evaluation), we use NLTK$^{11}$ to extract n-grams.
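The distinct n-gram diversity used in the automatic evaluation reduces to the fraction of unique n-grams among all n-grams; the paper extracts n-grams with NLTK's `ngrams` utility, sketched here with only the standard library:

```python
def distinct_n(tokens, n=2):
    """Distinct-n: fraction of unique n-grams among all n-grams in a token list."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(grams)) / len(grams) if grams else 0.0

# "a b a b" has bigrams (a,b), (b,a), (a,b): 2 unique out of 3.
```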
297
+
298
+ Network architecture For MultiWOZ, we use SimpleTOD<sup>12</sup> as the base model, whereas for WoW, we use BART<sup>13</sup>. For the parametric knowledge source, we use gpt2-large<sup>14</sup>.
299
+
300
+ Hyperparameters POKI does not require any training since we perform gradient-based decoding at inference time. For hyperparameters involved in the decoding stage, we maximize the
301
+
302
+ ```txt
303
+ $^{6}$ https://github.com/guilgautier/DPPy
304
+ $^{7}$ https://github.com/budzianowski/multiwoz
305
+ $^{8}$ https://parl.ai/projects/wizard_of_wikipedia/
306
+ $^{9}$ https://github.com/uber-research/PPLM
307
+ $^{10}$ https://www.yelp.com/dataset
308
+ $^{11}$ https://www.nltk.org/_modules/nltk/util.html
309
+ $^{12}$ https://github.com/salesforce/simpletod
310
+ $^{13}$ https://huggingface.co/transformers/model_doc/bart.html
311
+ $^{14}$ https://huggingface.co/transformers/model_doc/gpt2.html
312
+ ```
313
+
314
+ Z-normalized sum of the dialog model perplexity and linguistic diversity (% of distinct bigrams) of the generated response in a greedy fashion to select the best values. For our best method, in the objective function $\mathcal{L}$, we set both $\alpha$ and $\lambda$ to 1. We set the generation length to 100 to encourage longer generations. We train the entailment classifier using code from the PPLM repository<sup>15</sup>. The weight $\gamma$ for mixing forward and backward passes was set to 0.45. We run 5 backward-forward passes to obtain a candidate final response.
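The greedy hyperparameter selection described above combines two criteria on different scales via Z-normalization. A minimal sketch, assuming perplexity is negated so that lower perplexity scores higher (the `best_config` helper name is ours, not from the paper's code):

```python
import statistics

def z_normalize(values):
    """Standardize values to zero mean and unit (population) std. deviation."""
    mu, sd = statistics.mean(values), statistics.pstdev(values)
    return [(v - mu) / sd if sd else 0.0 for v in values]

def best_config(perplexities, diversities):
    """Index of the config maximizing the Z-normalized sum of the two criteria."""
    z_ppl = z_normalize([-p for p in perplexities])  # lower perplexity is better
    z_div = z_normalize(diversities)                 # higher diversity is better
    scores = [a + b for a, b in zip(z_ppl, z_div)]
    return scores.index(max(scores))

# Config 1 trades a little perplexity for much higher bigram diversity.
```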
315
+
316
+ Filtering knowledge candidates from PTLMs Our initial experiments suggest that knowledge generated from PTLMs can be inappropriate (containing bias or toxic content) or misleading/non-factual. Sun et al. (2021) collected annotations of dialog responses with positive labels (useful, social) and negative labels (inappropriate, misleading). We train a binary classifier to classify a knowledge snippet as positive or negative and use it as a filtering criterion.
317
+
318
+ Key-phrase extraction Given a sentence from the context, we first extract n-gram $(n \in \{1,2,3,4\})$ key-phrases using YAKE (Yet-Another-Keyword-Extractor) (Campos et al., 2020) and retain only those that contain at least one noun.
319
+
320
+ **Prompts** We curated prompts inspired by various knowledge-seeking situations (such as seeking more information, opinions, or reviews) (Shwartz et al., 2020); they are listed in Table 7.
321
+
322
+ [KP] is famous for
323
+ The popular opinion about [KP] is
324
+ Here is what I know about [KP]:
325
+ My friend says that [KP] is:
326
+ Here is some information about [KP]:
327
+ Here are some reviews about [KP]:
328
+ I think [KP] is:
329
+ I read on the internet about [KP] and found that
330
+ Today I learned about [KP] that
331
+
332
+ Table 7: Manually curated prompts to query the PTLM
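Instantiating the templates in Table 7 is a simple substitution of an extracted key-phrase for the `[KP]` slot before querying the PTLM; a sketch using a subset of the prompts:

```python
# A subset of the manually curated query templates from Table 7.
PROMPTS = [
    "[KP] is famous for",
    "Here is what I know about [KP]:",
    "I read on the internet about [KP] and found that",
]

def fill_prompts(key_phrase, prompts=PROMPTS):
    """Substitute a key-phrase into each template to form PTLM queries."""
    return [p.replace("[KP]", key_phrase) for p in prompts]
```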
333
+
334
+ Statistics on generated and selected knowledge snippets For both datasets, we retrieve the 100 most relevant knowledge snippets from the non-parametric source (here, yelp reviews) and generate 5 candidate knowledge snippets using nucleus sampling (Holtzman et al., 2020) with $p = 0.95$ for each key-phrase extracted from an input instance (dialog history + initial response). After knowledge selection by DPP, on average (over the validation set), 5 snippets were selected for MultiWOZ and 8 snippets were selected for WoW.
335
+
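Nucleus (top-p) sampling keeps the smallest set of highest-probability tokens whose cumulative mass reaches p, renormalizes, and samples from that set (Holtzman et al., 2020). A minimal sketch over an explicit probability list; real decoders operate on model logits rather than a precomputed distribution:

```python
import random

def nucleus_sample(probs, p=0.95, rng=random):
    """Sample a token index via top-p (nucleus) sampling."""
    # Sort indices by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, total = [], 0.0
    for i in order:
        nucleus.append(i)
        total += probs[i]
        if total >= p:  # smallest set whose cumulative mass reaches p
            break
    # Renormalize over the nucleus and sample.
    mass = sum(probs[i] for i in nucleus)
    r = rng.random() * mass
    for i in nucleus:
        r -= probs[i]
        if r <= 0:
            return i
    return nucleus[-1]
```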
337
+
338
+ # C Human Evaluation and User Study Setup
339
+
340
+ Human Evaluation We hired two Anglophone annotators (lifetime HIT acceptance $> 85\%$) for every test sample. Figure 4 shows a sample question for the pairwise comparison between a response generated by POKI and a baseline for informativeness. The exact formulations for all criteria are provided below:
341
+
342
+ - Coherent: Which version is more consistent with the dialog history?
343
+ - Engaging: Which version is more likely to hold your attention and make you want to hear more?
344
+ - Interesting: Which version arouses your curiosity or tells you something new or useful?
345
+ - Humanlike: Which version is more natural and personable?
346
+
347
+ All differences in values from human evaluations are significant with $p < 0.05$ under bootstrap tests on 1000 subsets of size 50. A snapshot of our human evaluation interface is shown in Figure 4. The order of the two candidate responses (R1 and R2) is randomized for each question.
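The significance claim above relies on bootstrap resampling over subsets of the evaluated items. A sketch of a one-sided paired bootstrap over per-item preference differences; the function name, the win/loss encoding, and the fixed seed are our choices, not the paper's exact protocol:

```python
import random

def bootstrap_pvalue(wins_a, wins_b, n_boot=1000, subset=50, seed=0):
    """Fraction of resampled subsets in which system A does not beat system B.

    wins_a[i] / wins_b[i] encode which system won pairwise comparison i (1/0).
    A small returned fraction (< 0.05) indicates a significant preference for A.
    """
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(wins_a, wins_b)]
    losses = 0
    for _ in range(n_boot):
        sample = [rng.choice(diffs) for _ in range(subset)]
        if sum(sample) <= 0:  # A failed to beat B on this resampled subset
            losses += 1
    return losses / n_boot
```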
348
+
349
+ User Study For the user study, we similarly recruited 60 Anglophone users who have at least a high-school level of education and are comfortable with internet-based technologies. Each session lasted on average 30 minutes (for MultiWOZ) or 60 minutes (for WoW), depending on the systems the user interacted with, including on-boarding, performing the actual task, and answering post-task questions.
350
+
351
+ # D Qualitative Examples
352
+
353
+ Figure 5 shows a complete dialog in the knowledge-grounded scenario where the user discusses 'science fiction'. Figure 6 shows more utterance-level examples for both goal-oriented and knowledge-grounded scenarios.
354
+
355
+ # Instructions (Click to collapse)
356
+
357
+ This task requires basic English language understanding.
358
+
359
+ For each instance, you will have to read the dialog history between two people A and B. We expect you to respond on the following for the candidates shown for B's response:
360
+
361
+ Interestingness: Which version arouses your curiosity or tells you something new or useful?
362
+
363
+ # 1. Dialog History:
364
+
365
+ A's turn: I need a place to eat that is cheap.
366
+ B's turn: Do you have a location preference? I have several options for inexpensive restaurants.
367
+ A's turn: Center of the town in Cambridge.
368
+ B's turn: I think you can go with Chinese or Indian cuisine. Both are inexpensive but have great taste. Should I look for Chinese or Indian restaurants?
369
+ A's turn: Do you have any restaurants that serve English food?
370
+
371
+ Candidates for B's next turn:
372
+
373
+ Response R1: I do not have an inexpensive restaurant that serves English food in the centre area. You can try with Indian food since it is very popular in the centre area. Many people who like
374
+
375
+ English food also like Indian food.
376
+
377
+ Response R2: I do not have an inexpensive restaurant that serves English food in the centre area.
378
+
379
+ 1.1 Which version seems more well informed and confident in the information?
380
+ ○ R1 is better ○ Both have similar interestingness ○ R1 is worse
381
+
382
+ Figure 4: Human evaluation setup for pairwise comparison between POKI and another baseline
383
+
384
+ # Ethical considerations
385
+
386
+ We do not foresee any immediate ethical concerns for our method as we use several constraints (less divergence from the extracted knowledge, consistency with the dialog context) that allow the generation to be restricted to the context. In general, we expect our dialog system to be more engaging and accessible to the user. Since we use PTLMs as knowledge source, we inherit the general risk of generating biased or toxic language, which should be carefully filtered. In our work, we perform explicit filtering steps to make sure that the knowledge is appropriate. Furthermore, our selection step promotes more factually correct knowledge to be selected. However, the generations may incorporate biases that are already present in the dialog datasets due to crowd-sourced data collection. Finally, our generations are limited only to the English language. Hence we suggest that a system like ours should likely not be used as a 'black box,' but would best be used in a setting where its outputs can be 'audited'. Carbon footprint: Our system uses post-hoc knowledge injection which refrains from retraining newer dialog models to accommodate dynamically evolving external knowledge. This promotes green NLP applications (Schwartz et al., 2020; Strubell et al., 2019) reducing carbon footprints that stem from training (or even finetuning) large language models.
387
+
388
+ ![](images/3e37705892ede9ea552471afcfaef47c9f5ccae094c19fa6278d87eea0564000.jpg)
389
+ Figure 5: POKI converses with a user who is discussing science fiction, in a knowledge-grounded dialog scenario (left column). In each turn, an initial response (middle column) is augmented with a knowledge snippet (right column) using constrained gradient-based decoding. Human judges unanimously noted this conversation as more engaging compared to the initial responses.
390
+
391
+ ![](images/04aeca8fb457359904dcbdd223135d3e3530ebebde82479ebd296df8e3fd9fe0.jpg)
392
+ Figure 6: Utterance-level examples (left column) in (a) and (b) the goal-oriented scenario, and (c) the knowledge-grounded scenario. POKI updates the initial response (middle column) with a knowledge snippet (right column) using constrained gradient-based decoding.
achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:880d8efb10be6522bd11328f0e38401c76ea4c7825454456c7474d4c5f1373cf
3
+ size 681056
achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9cc0f130e2735f9334cfb625b48e56720442bce0e12590023336e507f6f52424
3
+ size 495327
achievingreliablehumanassessmentofopendomaindialoguesystems/5444e2a3-57e0-4320-be73-35ddf3f82038_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aeec0f7a5e397b124374202315d8ed5109f38c2853c3445e429cb28e657adddd
3
+ size 115784
achievingreliablehumanassessmentofopendomaindialoguesystems/5444e2a3-57e0-4320-be73-35ddf3f82038_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:094fa6e7d845f3216dceed65efb368c6d021ac0fded2ac1c0735e588ff904fcd
3
+ size 136871
achievingreliablehumanassessmentofopendomaindialoguesystems/5444e2a3-57e0-4320-be73-35ddf3f82038_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fbe3ec8d17e8a7abbee6d034b0e36462f4c83749801b67afa46691ed6bacecf6
3
+ size 1494507
achievingreliablehumanassessmentofopendomaindialoguesystems/full.md ADDED
@@ -0,0 +1,412 @@
1
+ # Achieving Reliable Human Assessment of Open-Domain Dialogue Systems
2
+
3
+ Tianbo Ji $^{1,2}$ , Yvette Graham $^{1,3}$ , Gareth Jones $^{1,2}$ , Chenyang Lyu $^{2}$ , and Qun Liu $^{4}$
4
+
5
+ $^{1}$ ADAPT Centre
6
+
7
+ $^{2}$ School of Computing, Dublin City University
8
+
9
+ $^{3}$ School of Computer Science and Statistics, Trinity College Dublin
10
+
11
+ $^{4}$ Noah's Ark Lab, Huawei
12
+
13
+ {tianbo.ji,yvette.graham,gareth.jones}@adaptcentre.ie, chenyang.lyu2@mail.dcu.ie, qun.liu@huawei.com
16
+
17
+ # Abstract
18
+
19
+ Evaluation of open-domain dialogue systems is highly challenging and development of better techniques is highlighted time and again as desperately needed. Despite substantial efforts to carry out reliable live evaluation of systems in recent competitions, annotations have been abandoned and reported as too unreliable to yield sensible results. This is a serious problem since automatic metrics are not known to provide a good indication of what may or may not be a high-quality conversation. Answering the distress call of competitions that have emphasized the urgent need for better evaluation techniques in dialogue, we present the successful development of human evaluation that is highly reliable while still remaining feasible and low cost. Self-replication experiments reveal almost perfectly repeatable results with a correlation of $r = 0.969$ . Furthermore, due to the lack of appropriate methods of statistical significance testing, the likelihood of potential improvements to systems occurring due to chance is rarely taken into account in dialogue evaluation, and the evaluation we propose facilitates application of standard tests. Since we have developed a highly reliable evaluation method, new insights into system performance can be revealed. We therefore include a comparison of state-of-the-art models (i) with and without personas, to measure the contribution of personas to conversation quality, as well as (ii) prescribed versus freely chosen topics. Interestingly with respect to personas, results indicate that personas do not positively contribute to conversation quality as expected.
20
+
21
+ # 1 Introduction
22
+
23
+ Evaluation of open-domain dialogue is particularly challenging and has been cited in high-profile competitions as a known open problem (Dinan et al., 2019). Challenges arise primarily from the fact that in real-world conversations there exists such a vast number of possible appropriate responses.
24
+
25
+ Subsequently, dialogue evaluation that relies on comparison with pre-created reference dialogues incurs substantial false-negative rates, as many appropriate responses are unfairly penalized simply for not corresponding closely with references. In addition, evaluation faces further challenges with respect to the ability to fully take into account dialogue history.$^{1}$
26
+
27
+ In this paper, we present a new method of open-domain dialogue evaluation based on human assessment of live conversations with models that avoids the need for pre-created reference dialogues and ensures full familiarity with dialogue history, ticking two important boxes in terms of validity. Although live human evaluation of models has the advantage of being highly valid, reliability unfortunately cannot be assumed, and developing methods of evaluation for language tasks that achieve high rater consistency has been challenging, often resulting in low levels of agreement between annotators (Finch and Choi, 2020; Callison-Burch et al., 2011, 2012; Bojar et al., 2013, 2014; Mehri and Eskenazi, 2020b). Despite challenges in this respect, our proposed method provides highly reliable evaluation, achieving a correlation of $r = 0.969$ in self-replication experiments. Additionally, the evaluation can be carried out cheaply and on a large scale through strictly quality-controlled crowd-sourcing, as well as including score standardization for fairer ranking of competing models. We make the data and code publicly available to aid future research.$^{2}$
28
+
29
+ # 2 Problems in Past Evaluations
30
+
31
+ A common issue that can potentially impact the validity of results is the filtering of the set of systems to be evaluated via automatic metric scores. Since metric scores are known to be a poor substitute for human assessment, this risks that the best system according to human judges is inadvertently filtered out at this stage. For example, ConvAI2 (Dinan et al., 2019) ranked models first using automatic metrics before the top models according to metric scores were assessed by crowd-sourced workers on Mechanical Turk; similarly, in the sixth Dialog System Technology Challenge (DSTC6), systems were filtered according to metric scores prior to human evaluation.
34
+
35
+ In terms of live evaluation, competitions such as ConvAI2 report such evaluations as highly challenging, with many of the resulting dialogues reported to be senseless, offensive, or simply not in line with instructions; ultimately, the live evaluation results were discarded.
36
+
37
+ Despite challenges, competitions that operate in the public domain, making data and evaluation techniques available to researchers (such as ourselves) should be applauded for such efforts.
38
+
39
+ On the other hand, competitions that (for one reason or another) do not release data and evaluation techniques into the public domain have reported relative success in terms of human evaluation. However, until such methods can be accessed and independently verified through replication studies, they will unfortunately have little impact. The first Amazon Alexa Socialbot Grand Challenge asked human assessors to score how coherent and engaging conversations were on a 1-5 rating scale, using two distinct groups of assessors: volunteer Amazon employees (experts) and general Alexa users (crowds) (Ram et al., 2018); overall scores from the two groups are reported to correlate at 0.93. The absolute average rating across all chatbots was reported to be $20\%$ lower for experts compared to general users. In an additional effort to evaluate models, conversational user experience, coherence, engagement, domain coverage, topical diversity, and conversational depth were assessed (1-5 scale), with combined scores reported to correlate with those of general users at $r = 0.66$. In addition to methods and data not being publicly available, these correlations are difficult to interpret since no detail is provided about, for example, the number of judgments on which each correlation is calculated.
40
+
41
+ In addition to competitions that generally aim to include human evaluation of systems, automatic metrics are often proposed for dialogue evaluation, themselves requiring a human evaluation data set on which to evaluate the proposed metric. However, inappropriate statistics are often applied. For example, Pang et al. (2020) propose a holistic metric to automatically evaluate four distinct aspects of dialogue, and a human evaluation experiment is deployed on Mechanical Turk using a 1-5 rating scale. The mean correlation between human assessors is reported as $r = 0.61$. However, mean correlations are difficult to interpret: correlation coefficients are not additive, so averages calculated in the usual way cannot be assumed to reflect central tendency, and unfortunately the distribution of correlations is not reported (Alexander, 1990).
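Because correlation coefficients are not additive, a plain arithmetic mean of correlations misrepresents central tendency; one standard remedy (our illustration, not a technique from the paper) is to average in Fisher z-space:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def fisher_mean_r(rs):
    """Average correlations via the Fisher z-transform, then map back."""
    zs = [math.atanh(r) for r in rs]
    return math.tanh(sum(zs) / len(zs))

# Averaging r = 0.9 and r = 0.1 naively gives 0.5; the Fisher-z mean is larger
# because atanh stretches correlations near +/-1.
```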
+
+ Mehri and Eskenazi (2020b) propose USR (Unsupervised and Reference-free), an unsupervised model that predicts the quality of dialog for a range of criteria using various rating scales: understandable (0-1 rating scale), natural (1-3), maintains context (1-3), interesting (1-3), uses knowledge (0-1); overall quality (1-5). Despite human evaluation being carried out by experts, inter-annotator agreement levels varied depending on the criterion being measured, ranging from as low as 0.298. Additionally, although correlations between human assessments are reported as significant at $p < 0.01$, such statistics, despite often being reported for correlations, are not very meaningful in terms of correlation interpretation and can be somewhat misleading. Contrary to common expectations, even small effect sizes (low $r$) can produce very low p-values (apparently strong significance) in such tests. Aiming merely for a significant correlation is an extremely low bar in terms of consistency, since a low p-value in this case simply rejects the null hypothesis that the correlation is zero.
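The point that significance is a low bar can be checked numerically. The sketch below (our illustration, stdlib only) draws a large sample with a deliberately weak linear relationship; the resulting Pearson correlation is small, yet a standard Fisher-z test against $\rho = 0$ yields a tiny p-value.

```python
import math
import random

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def p_value(r, n):
    """Two-sided p-value for H0: rho = 0, via the Fisher z-transform."""
    z = math.atanh(r) * math.sqrt(n - 3)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
n = 10000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [0.1 * x + random.gauss(0, 1) for x in xs]  # weak dependence on x

r = pearson_r(xs, ys)  # small effect size, around 0.1
p = p_value(r, n)      # yet far below the 0.01 "significance" threshold
```

With $n$ in the thousands, an $r$ of roughly 0.1 is already "significant" at any conventional level, which says nothing about whether the agreement is practically meaningful.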
+
+ In addition to the above issues, human evaluation of dialogue systems rarely takes into account the fact that differences in performance can occur simply by chance. The method of human evaluation we propose provides a means of applying standard tests of statistical significance, avoiding conclusions about differences that are likely to have occurred simply by chance.
+
+ # 3 Crowd-sourcing Reliable Human Assessment of Open-Domain Dialogue
+
+ Crowd-sourcing with highly accurate quality control provides a potential mechanism to ensure the three most important criteria that make an evaluation meaningful while still remaining feasible: validity, reliability and scalability. We therefore ask crowd-workers to carry out live text-based chat with models, with the same worker then rating the quality of the immediately preceding conversation.
+
+ # 3.1 Human Ratings of Dialogue Quality
+
+ A continuous (0-100) rating scale is employed for three main reasons (Graham et al., 2013; Novikova et al., 2018; Li et al., 2019; Santhanam and Shaikh, 2019; Santhanam et al., 2020; Mille et al., 2020; Barrault et al., 2020; Howcroft et al., 2020). Firstly, continuous scales reduce potential bias when comparing the performance of competing models by enabling score standardization: the score distribution of each human assessor is standardized according to the overall mean and standard deviation of all ratings provided by that assessor, removing any adverse effects of overly harsh (or indeed lenient) scoring strategies. Secondly, the 0-100 rating scale allows standard significance tests to be applied to the score distributions of models, helping to determine which models significantly outperform others. Thirdly, and possibly most importantly, a continuous rating scale facilitates highly accurate quality control of crowd-sourced workers, so that the evaluation can scale at low cost while still maintaining validity.
+
+ Each human assessor is firstly asked to carry out a live conversation with a randomly selected model, comprising a minimum of 10 conversational inputs, before rating the quality of the conversation that just took place under the criteria shown in Figure 1. Note that the measurement criteria we employed are not immutable, and we encourage future studies to extend or adjust them as necessary.
+
+ A continuous rating scale is advantageous for several reasons, but employing such a scale raises the question of how it should be labeled. In evaluation of language tasks, adjectival scale labels, such as poor, low, medium, high, perfect/okay, good, excellent, and so on, are often employed despite their likely contribution to annotator inconsistency (Loukina et al., 2020; Sorodoc et al., 2017). This is despite evidence of adjectival scale labels being problematic in terms of bias, resulting from positively and negatively worded items not being true opposites of one another, and from items intended to have neutral intensity in fact proving to have specific conceptual meanings. Alexandrov (2010) provides a summary of issues associated with adjectival labels.
+
+ <table><tr><td>Robotic:</td><td>It was obvious that I was talking to a chatbot as opposed to another human user.</td></tr><tr><td>Interesting:</td><td>The conversation with the chatbot was interesting.</td></tr><tr><td>Fun:</td><td>The conversation with the chatbot was fun/enjoyable.</td></tr><tr><td>Consistent:</td><td>The chatbot was consistent throughout the conversation.</td></tr><tr><td>Fluent:</td><td>The chatbot&#x27;s English was fluent and natural throughout the conversation.</td></tr><tr><td>Repetitive:</td><td>I felt that the chatbot kept being repetitive during the conversation.</td></tr><tr><td>Topic:</td><td>The chatbot stays on topic.</td></tr></table>
+
+ Figure 1: Criteria employed to assess models in our human evaluation in the form of Likert statements; corresponding evaluation labels (left) not shown to human assessors.
+
+ To avoid any such causes of inconsistency, we structure each rating as a simple Likert declarative statement and ask human assessors to rate the degree to which they agree with each statement, making it possible to keep the rating scale constant while only changing the statement for each measurement criterion. We ask judges to rate each conversation under the seven aforementioned measurement criteria (Figure 1) on a continuous rating scale labeled only at each extreme: strongly disagree (left); strongly agree (right).
+
+ # 3.2 Quality Controlling the Crowd for Open-Domain Dialogue
+
+ We structure Human Intelligence Tasks (HITs) so that a sufficiently rich score distribution is collected from each individual worker who participated, asking each to hold six conversations, comprised of a shuffled arrangement of five dialogue models and a single quality control model.
+
+ Many approaches to quality controlling the crowd employ gold-standard items as quality checks (Liu et al., 2013; Lasecki et al., 2014). This approach is, however, highly likely to allow low quality data to pollute the resulting evaluation, since any worker willing to assign high scores to all items will undeservedly pass such a check. The approach also runs contrary to our aim of having the same individual who took part in a live conversation also assess its quality, as it relies on the use of pre-created gold standard conversations.
+
+ Our quality control approach overcomes these challenges by deploying, in live conversations, models with known distinct performance levels, instead of asking workers to assess the quality of pre-existing known high quality conversations. Within a HIT, the five models $m$ produce conversations of some unknown quality, while the model $l$ produces dialogues of known lower quality (lower than the five models). For a single worker who takes part in conversations with $m$ and $l$, we then check how consistently the worker rated the conversations of $l$ lower than those of $m$. This results in a quality control mechanism that does not ask workers to be consistent with other workers, nor to correctly rate gold standard dialogues, but assesses worker consistency only by how reliably they distinguish between models of known distinct performance, and only with respect to their own conversation ratings.
+
+ From a practical standpoint, creating a low performance model, $l$, is far less challenging and costly than pre-creating a known set of high quality dialogues, and degraded models operate fully automatically. The low quality model produces outputs by generating random responses, with meaning distortion additionally applied.
+
+ For random response degradation, low quality responses are generated by randomly sampling responses from training set dialogues, disregarding any previous input from the user; responses from the model are thus likely to be perceived as low quality since they have low relevance. To reduce the quality of conversations further, we apply meaning distortion: each response, $r$, is altered to distort its meaning by randomly selecting a sequence of words within that response and replacing it with a sequence of words sampled from a distinct training set dialogue, with the length of the replaced word sequence determined by the number of words in $r$. The specific details are provided in Appendix A.1, and Figure 4 in Appendix A.4 gives a typical example.
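A minimal sketch of this degradation (our own simplification; the exact sampling and span-length rules are those of Appendix A.1, and the "about half the words" rule below is our assumption):

```python
import random

def degraded_response(training_responses, rng):
    """Produce a deliberately low-quality response: sample a random
    response from training dialogues (ignoring the user's input), then
    distort its meaning by replacing a randomly chosen word span with
    words taken from a different training response."""
    r = rng.choice(training_responses).split()
    donor = rng.choice(training_responses).split()
    span = max(1, len(r) // 2)  # assumed span length: about half of r
    start = rng.randrange(0, len(r) - span + 1)
    replacement = [donor[i % len(donor)] for i in range(span)]
    return " ".join(r[:start] + replacement + r[start + span:])

rng = random.Random(42)
pool = [
    "i love taking my dog to the park",
    "pizza is my favourite food for sure",
    "we watched a great movie last night",
]
print(degraded_response(pool, rng))
```

Because both the response and the spliced-in span ignore the conversation so far, the result is irrelevant and internally incoherent, which is exactly what the quality control model needs.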
+
+ HITs subsequently consist of a total of six dialogues: five genuine models and a single quality control model that generates meaning-distorted and random responses. Crowd-sourced workers converse with each model before rating conversation quality (model order is shuffled and blind). Statistical significance tests are then applied to each worker's score distributions for the ratings they attributed to ordinary models, $m$, relative to the low quality model, $l$. The resulting $p$-value is then employed as a measure of worker consistency: any worker with $p \geq 0.05$ shows no significant difference between low and ordinary model quality and is filtered out.
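This filter can be sketched as follows (a stdlib-only stand-in: we assume a Wilcoxon rank-sum test, as used elsewhere in the paper, and use its normal approximation rather than a library implementation):

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value (normal approximation,
    midranks for ties) comparing two samples of ratings."""
    pooled = sorted(a + b)
    def midrank(v):
        less = sum(1 for x in pooled if x < v)
        equal = sum(1 for x in pooled if x == v)
        return less + (equal + 1) / 2
    w = sum(midrank(v) for v in a)
    n1, n2 = len(a), len(b)
    mean = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mean) / sd
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def worker_passes(low_ratings, ordinary_ratings, alpha=0.05):
    """A worker passes quality control only if they rated the degraded
    model l significantly lower than the ordinary models m."""
    lower = (sum(low_ratings) / len(low_ratings)
             < sum(ordinary_ratings) / len(ordinary_ratings))
    return lower and rank_sum_p(low_ratings, ordinary_ratings) < alpha
```

A worker whose ratings clearly separate the degraded model from the genuine ones passes; a worker whose ratings do not separate them at all is filtered out.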
+
+ # 3.3 Calculating System-Level Scores
+
+ Scores are collected from workers who rate models on a 0-100 rating scale; we refer to these as raw scores. Scores for negative attributes, i.e. robotic and repetitive, are then reversed (100 minus the original rating) for ease of comparison. A distribution of scores is extracted for each worker, and raw scores are standardized according to each worker's mean and standard deviation, in order to iron out differences in worker scoring strategies.
+
+ Average standardized scores for each criterion are calculated, and an overall score is calculated as the average over all measurement criteria.
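The whole scoring pipeline of this section can be sketched as follows (the rating tuples and criteria names are hypothetical illustrations; criteria follow Figure 1):

```python
from statistics import mean, pstdev

NEGATIVE = {"robotic", "repetitive"}  # negative attributes are reversed

def system_scores(ratings):
    """ratings: iterable of (worker, model, criterion, raw score 0-100).
    Returns {model: {criterion: average z-score, ..., "overall": ...}}."""
    # 1) reverse negative attributes: 100 minus the original rating
    rows = [(w, m, c, 100 - s if c in NEGATIVE else s)
            for w, m, c, s in ratings]
    # 2) standardize each worker's scores by that worker's mean/std dev
    by_worker = {}
    for w, _, _, s in rows:
        by_worker.setdefault(w, []).append(s)
    stats = {w: (mean(v), pstdev(v) or 1.0) for w, v in by_worker.items()}
    # 3) average z-scores per model and criterion, then overall
    table = {}
    for w, m, c, s in rows:
        z = (s - stats[w][0]) / stats[w][1]
        table.setdefault(m, {}).setdefault(c, []).append(z)
    out = {m: {c: mean(zs) for c, zs in crits.items()}
           for m, crits in table.items()}
    for crits in out.values():
        crits["overall"] = mean(crits.values())
    return out
```

Standardizing within each worker before averaging means a harsh scorer and a lenient scorer contribute comparably to each model's final z-score.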
+
+ # 4 Meta-Evaluation
+
+ In order to assess the reliability of the proposed method of human evaluation, we carry out a meta-evaluation in which we firstly examine individual human assessor consistency, before conducting a self-replication experiment. A number of models are required to function as a sample set of test systems, and for this purpose we employ available pre-trained models from ParlAI:4 Poly-Encoder Transformer (Humeau et al., 2019), Bi-Encoder Transformer (Dinan et al., 2018), Sequence to Sequence (Sutskever et al., 2014), Key-Value Memory Networks (Miller et al., 2016), and an LSTM-based Model (Hochreiter and Schmidhuber, 1997). Within the evaluation setting of ConvAI2, each model is equipped with a persona consisting of approximately five textual statements to emulate a personality. However, to increase the number of models and to provide an interesting comparison, we additionally include a version of each of the above models without any persona, resulting in 10 competing models.
+
+ HITs were posted on the crowd-sourcing platform Amazon Mechanical Turk.5 Firstly, and in order to evaluate the open-domain models in as realistic a setting as possible, we allow workers to choose the topic of conversation and input their chosen topic in a text field. The open nature of conversations should be noted, however, as something that influences the difficulty of producing consistent results in our self-replication experiment. The fact that we allow human assessors to freely choose the topic of conversation means that differences in ratings could result from legitimate differences in performance when different topics are chosen by human assessors. We nonetheless test our evaluation allowing the user to choose the topic, as this is part of our core aim of developing evaluation of dialogue truly in the open domain.
+
+ Besides choosing a topic, we additionally asked workers to input their opinion of the topic they chose to discuss with models, categorizing it as liked, ambivalent about, or disliked. For example, if the topic they chose to discuss was dogs, we were curious to know whether this was motivated by the fact that the worker liked or disliked dogs, or indeed that they had chosen to discuss something they had no particular feeling about. Table 2 shows the subsequent proportions (\%) of workers, and the detailed instructions are shown in Figure 5 in Appendix A.4. Perhaps unsurprisingly, the vast majority of workers chose to discuss something they liked (84% of workers who passed quality control). Nonetheless, 7% of good workers were ambivalent about the topic they chose and 9% chose a topic they reported as disliking.
+
+ <table><tr><td rowspan="2">Run</td><td colspan="3">Workers</td><td colspan="3">Ave. Duration (min)</td><td colspan="3">Dialogues</td></tr><tr><td>Total</td><td>Passed</td><td>Pass Rate</td><td>Passed</td><td>Failed</td><td>All</td><td>Total</td><td>Passed</td><td>Pass Rate</td></tr><tr><td>Free Run 1</td><td>249</td><td>173</td><td>69.5%</td><td>6.53</td><td>7.04</td><td>6.68</td><td>1,525</td><td>1,075</td><td>70.5%</td></tr><tr><td>Free Run 2</td><td>248</td><td>139</td><td>56.0%</td><td>6.87</td><td>7.58</td><td>7.18</td><td>1,480</td><td>838</td><td>56.6%</td></tr><tr><td>Ice-breaker</td><td>248</td><td>171</td><td>69.0%</td><td>6.60</td><td>6.70</td><td>6.63</td><td>1,450</td><td>1,030</td><td>71.0%</td></tr></table>
+
+ Table 1: Numbers of workers who took part in human evaluation of models, average time taken per dialogue in minutes (min), and total number of dialogues assessed before and after quality control, for the run in which workers freely chose the topic (Free run 1); precisely the same experiment set-up repeated (Free run 2); and the run where the topic was prescribed via an ice-breaker statement (Ice-breaker) selected directly from the persona of the model.
+
+ <table><tr><td></td><td colspan="2">Free run 1</td><td colspan="2">Free run 2</td></tr><tr><td></td><td>Pass</td><td>Fail</td><td>Pass</td><td>Fail</td></tr><tr><td>Like</td><td>83.9</td><td>88.6</td><td>86.4</td><td>93.8</td></tr><tr><td>Ambivalent</td><td>7.4</td><td>3.8</td><td>6.2</td><td>2.3</td></tr><tr><td>Dislike</td><td>8.7</td><td>7.7</td><td>7.4</td><td>3.9</td></tr></table>
+
+ Table 2: Proportions $(\%)$ of topics that are reported as liked, ambivalent about or disliked by workers who passed and failed quality control.
+
+ Table 1 shows the number of workers who participated in the initial data collection run in which workers freely chose the topic of conversation with models (Free run 1), amounting to 1,525 dialogues $\times$ 7 criteria $= 10,675$ human ratings. The details of payment to each worker and the total experiment cost are provided in Appendix A.2. Table 1 also shows the proportion of workers who passed quality checks, the numbers of dialogues assessed in total before and after quality filtering, as well as the average time taken for workers to complete a HIT and to assess dialogues. As mentioned previously, we carry out a second data collection run with precisely the same settings (Free run 2) to measure the reliability of results; Table 1 shows equivalent statistics for Free run 2, in which a total of 1,480 dialogues $\times$ 7 ratings $= 10,360$ human ratings were collected.
+
+ # 4.1 Human Assessor Consistency
+
+ Although the overall aim of our evaluation is to produce reliable results at the system level, which we test later in Section 4.2, we firstly examine workers' ratings at the level of individual dialogues. Technically speaking, the most meaningful reliability measures for continuous rating scales test the consistency of aggregate (system-level) results: although a high level of random error is expected in individual continuous rating scale scores, when aggregates are calculated over large samples of ratings, positive and negative error that is truly random effectively cancels out and does not negatively impact consistency. In other words, the rating scale we employ does not rely on consistency at the level of individual ratings. We nonetheless examine individual rater consistency, since it is the standard approach, but keep in mind that results in this part of our meta-evaluation are not crucial when testing reliability of an evaluation carried out via a continuous rating scale, where consistency of overall system-level results is more important.
+
+ The distribution of Pearson correlation coefficients for pairs of workers who assessed the same HIT is depicted in Figure 2.
+
+ ![](images/d05e1db6ab751cb802565da67b753e3b7e10ce6d5bcdfd834748264f60f8e488.jpg)
+ Figure 2: Agreement between pairs of human assessors as measured by the Pearson correlation $(r)$ of ratings provided by workers who passed (blue) and failed quality control (orange).
+
+ As can be seen from Figure 2, the likelihood of agreement between pairs of workers who failed quality control is close to random, as the distribution approaches uniformity across almost the entire range of possible coefficients. In contrast, for pairs of workers who passed quality control, the peak of agreement lies between an $r$ of 0.6 and 0.7, showing high agreement in general between such annotator pairs.
+
+ <table><tr><td></td><td>Model</td><td>n</td><td>Overall</td><td>Interesting</td><td>Fun</td><td>Consistent</td><td>Fluent</td><td>Topic</td><td>Robotic</td><td>Repetitive</td></tr><tr><td rowspan="10">Free Run I</td><td>A</td><td>798</td><td>0.534</td><td>0.564</td><td>0.602</td><td>0.711</td><td>0.863</td><td>0.964</td><td>-0.038</td><td>0.069</td></tr><tr><td>B</td><td>798</td><td>0.419</td><td>0.474</td><td>0.481</td><td>0.614</td><td>0.875</td><td>0.994</td><td>-0.431</td><td>-0.075</td></tr><tr><td>Ap</td><td>707</td><td>0.318</td><td>0.399</td><td>0.372</td><td>0.443</td><td>0.821</td><td>0.404</td><td>-0.330</td><td>0.116</td></tr><tr><td>C</td><td>791</td><td>0.262</td><td>0.491</td><td>0.379</td><td>0.028</td><td>0.636</td><td>-0.066</td><td>-0.316</td><td>0.680</td></tr><tr><td>Cp</td><td>714</td><td>0.189</td><td>0.409</td><td>0.373</td><td>0.159</td><td>0.672</td><td>-0.114</td><td>-0.521</td><td>0.349</td></tr><tr><td>Bp</td><td>707</td><td>0.173</td><td>0.230</td><td>0.197</td><td>0.369</td><td>0.673</td><td>0.320</td><td>-0.395</td><td>-0.187</td></tr><tr><td>D</td><td>707</td><td>-0.087</td><td>-0.190</td><td>-0.208</td><td>0.166</td><td>0.311</td><td>0.401</td><td>-0.637</td><td>-0.449</td></tr><tr><td>Dp</td><td>798</td><td>-0.201</td><td>-0.308</td><td>-0.234</td><td>0.092</td><td>0.312</td><td>0.025</td><td>-0.625</td><td>-0.669</td></tr><tr><td>Ep</td><td>763</td><td>-0.217</td><td>-0.181</td><td>-0.201</td><td>-0.196</td><td>0.380</td><td>-0.455</td><td>-0.605</td><td>-0.264</td></tr><tr><td>E</td><td>742</td><td>-0.243</td><td>-0.165</td><td>-0.160</td><td>-0.142</td><td>0.329</td><td>-0.407</td><td>-0.745</td><td>-0.411</td></tr><tr><td></td><td>r</td><td>-</td><td>0.969</td><td>0.952</td><td>0.927</td><td>0.899</td><td>0.960</td><td>0.951</td><td>0.646</td><td>0.936</td></tr></table>
+
+ Table 3: Average standardized scores for models in the initial data collection run, in which workers were free to choose the topic of conversation (Free run 1), and the correlation $(r)$ between system scores in this and a second, distinct data collection run; A=Bi-Encoder Transformer, B=Poly-Encoder Transformer, C=Key-Value Memory Network, D=Sequence to Sequence, E=LSTM-based Model; models marked with $p$ are models with a persona; scores for robotic and repetitive have been reversed; $n$ is the number of ratings; models are ordered by overall average score.
+
+ Some of the observed disagreement is likely the result of legitimate differences between the scores of two workers who chose distinct topics to discuss with the same model, an unavoidable source of inconsistency when testing models with respect to the open domain. Interestingly, in $5\%$ of dialogues, worker pairs assigned the same HIT happened to both freely choose an identical topic to discuss with the same model. Furthermore, remaining disagreement at the level of individual ratings is not necessarily problematic at the level of overall scores, given the aggregation of ratings collected on a continuous rating scale.
+
+ # 4.2 System-level Consistency
+
+ Table 3 shows results of the system-level evaluation resulting from the initial data collection run on Mechanical Turk (Free run 1), where competing models are ordered by highest overall average z-score.
+
+ Table 3 additionally shows the consistency of the evaluation between the two experiment runs via the Pearson correlation of scores for each measurement criterion, as well as consistency overall. Across the board, consistency is very high, exceeding a correlation of 0.94 in almost all cases, with the exception of robotic, which nonetheless achieved a correlation of over 0.7. Besides individual criteria, of crucial importance is the consistency of overall results, as this is the means by which models would ordinarily be ranked in terms of overall performance. As can be observed from Table 3, the correlation in terms of overall scores for systems is 0.969, very close to a perfect correlation, showing extremely high reliability for the evaluation and evidence that the approach overcomes substantial challenges with respect to annotator consistency and the expected difficulties of evaluating models in the open domain, where assessors are legitimately free to choose distinct topics of conversation.
+
+ <table><tr><td></td><td>Model</td><td>n</td><td>Overall</td><td>Interesting</td><td>Fun</td><td>Consistent</td><td>Fluent</td><td>Topic</td><td>Robotic</td><td>Repetitive</td></tr><tr><td rowspan="10">Ice-breaker</td><td>A</td><td>721</td><td>0.552</td><td>0.565</td><td>0.527</td><td>0.873</td><td>1.018</td><td>1.011</td><td>-0.287</td><td>0.156</td></tr><tr><td>Ap</td><td>742</td><td>0.422</td><td>0.589</td><td>0.560</td><td>0.518</td><td>0.718</td><td>0.527</td><td>0.009</td><td>0.034</td></tr><tr><td>B</td><td>721</td><td>0.376</td><td>0.379</td><td>0.340</td><td>0.634</td><td>0.769</td><td>0.820</td><td>-0.221</td><td>-0.087</td></tr><tr><td>C</td><td>784</td><td>0.322</td><td>0.615</td><td>0.537</td><td>0.190</td><td>0.631</td><td>0.061</td><td>-0.344</td><td>0.565</td></tr><tr><td>Bp</td><td>658</td><td>0.273</td><td>0.406</td><td>0.340</td><td>0.414</td><td>0.633</td><td>0.423</td><td>-0.369</td><td>0.063</td></tr><tr><td>Cp</td><td>700</td><td>0.222</td><td>0.402</td><td>0.337</td><td>0.089</td><td>0.654</td><td>-0.068</td><td>-0.376</td><td>0.514</td></tr><tr><td>D</td><td>728</td><td>-0.139</td><td>-0.277</td><td>-0.204</td><td>0.123</td><td>0.349</td><td>0.295</td><td>-0.638</td><td>-0.620</td></tr><tr><td>Ep</td><td>714</td><td>-0.198</td><td>-0.172</td><td>-0.203</td><td>-0.054</td><td>0.316</td><td>-0.343</td><td>-0.533</td><td>-0.396</td></tr><tr><td>E</td><td>721</td><td>-0.240</td><td>-0.125</td><td>-0.161</td><td>-0.196</td><td>0.318</td><td>-0.393</td><td>-0.631</td><td>-0.489</td></tr><tr><td>Dp</td><td>721</td><td>-0.267</td><td>-0.426</td><td>-0.402</td><td>-0.011</td><td>0.234</td><td>0.000</td><td>-0.628</td><td>-0.636</td></tr><tr><td></td><td>r</td><td>-</td><td>0.984</td><td>0.967</td><td>0.944</td><td>0.958</td><td>0.951</td><td>0.981</td><td>0.715</td><td>0.950</td></tr></table>
+
+ Table 4: Average standardized scores for models in human evaluation where workers were prescribed an ice-breaker topic of conversation sampled from the persona of the model; the correlation $(r)$ between these scores and Free run 1 in Table 3; model labels are consistent with Table 3; $n$ is the number of ratings; models without $p$ did not have a persona (the ice-breaker statement was subsequently unknown to these models).
+
+ ![](images/3af7a8227c480414c7f55ae0f5c0e3a2f7fbd39c77490132cf2457645565b2dc.jpg)
+ Figure 3: Pairwise significance test results for systems concluded from Free run 1, where a colored cell indicates that the system in that row significantly outperformed the system in that column. Model labels are consistent with Table 3.
+
+ In any empirical evaluation, statistical significance tests should be applied to take into account the fact that small differences in scores between systems can occur simply by chance. We provide pairwise significance test results in Figure 3, applying a standard significance test, the Wilcoxon rank-sum test, to the rating distributions of each pair of competing models in each data collection run; corresponding results for run 2 appear in Figure 6 in Appendix A.4. Results showed a very high proportion of identical conclusions, $84\%$, drawn from pairwise significance tests applied to data from the two data collection runs at $p < 0.1$. Results for $p < 0.05$ additionally showed high correspondence between pairwise significance test conclusions, only marginally lower, with $82\%$ of the same conclusions drawn for pairs of models in the two runs. We additionally provide correlations between measurement criteria and overall scores in Table 8 of Appendix A.4.
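Producing a matrix like Figure 3 can be sketched as follows (a stdlib-only sketch using the normal approximation to the Wilcoxon rank-sum test; the rating lists in the test are illustrative, not the paper's data):

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    pooled = sorted(a + b)
    midrank = lambda v: (sum(x < v for x in pooled)
                         + (sum(x == v for x in pooled) + 1) / 2)
    w = sum(midrank(v) for v in a)
    n1, n2 = len(a), len(b)
    z = (w - n1 * (n1 + n2 + 1) / 2) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def significance_matrix(scores, alpha=0.05):
    """beats[(i, j)] is True iff system i's ratings are significantly
    higher than system j's, as in the colored cells of Figure 3."""
    beats = {}
    for i, a in scores.items():
        for j, b in scores.items():
            higher = i != j and sum(a) / len(a) > sum(b) / len(b)
            beats[(i, j)] = higher and rank_sum_p(a, b) < alpha
    return beats
```

Each colored cell of the figure then corresponds to a `True` entry of `beats`.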
+
+ # 5 Persona Contribution to System Performance
+
+ Since we have verified the reliability of the human evaluation, we take a closer look at the results and investigate dialogue quality when models employ a persona. Results in Table 3 reveal that, perhaps unexpectedly, models in general are either rated more favorably by human assessors when they carry out dialogues without a persona, or a tie occurs between the versions with and without a persona.
+
+ # 6 Evaluating with Prescribed Topics
+
+ In contrast to the initial experiment, in which workers were permitted to choose the topic of conversation, we further investigate the performance of models in a slightly easier setting where the topic under discussion is known to the model: a statement is selected from its persona, which we refer to as an ice-breaker topic statement. The ice-breaker topic statement, corresponding to a randomly selected persona statement belonging to the agent, is provided to human assessors at the beginning of each conversation, and the assessor is instructed to talk about this topic with the model. Again, we run this experiment on MTurk, this time contrasting results from our initial data collection run, where workers freely chose a topic, with one in which workers were instructed to talk about the ice-breaker statement with models.
+
+ Numbers of workers who participated in the Ice-breaker run are provided in Table 1, while a breakdown of results for each model and overall average scores are shown in Table 4, along with the correlation between these scores and those when a topic is freely chosen. Interestingly, in terms of absolute differences in raw scores, the best performing model achieves higher fluency and consistency and is deemed less repetitive when evaluated in ice-breaker conversations compared to those with freely chosen topics. Raw average scores for models in the Ice-breaker run are additionally provided in Table 11 in Appendix A.4. Relatively speaking, in terms of system rankings, no meaningful difference in relative performance is observed between the scenario where the worker chooses a topic and that where one is prescribed via an ice-breaker statement, as can be seen from the strong correlation between scores for models in Free run 1 and the Ice-breaker evaluation shown in Table 4. Additionally, significance test results for the Ice-breaker evaluation are provided in Figure 7 in Appendix A.4.
+
+ # 7 Comparison with Automatic Evaluation Metrics
+
+ # 7.1 Word-overlap-based Metrics
+
+ In this experiment, we employed the four prevailing word-overlap-based metrics described below, whose scores are computed on the ConvAI2 test set.
+
+ BLEU BLEU (Bilingual Evaluation Understudy) evaluates the quality of a system output by computing n-gram precision against human-generated references (Papineni et al., 2002). It also uses a brevity penalty to penalize short outputs.
+
+ ROUGE-L ROUGE (Recall-Oriented Understudy for Gisting Evaluation) is a recall-oriented adaptation of BLEU, whose widely-applied variant is ROUGE-L (Lin and Hovy, 2003). It computes precision and recall using the longest common subsequence (LCS) instead of n-grams, and the F1 score of precision and recall is reported as the final score.
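A minimal sentence-level ROUGE-L sketch (our simplification; real implementations add options such as stemming):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(hypothesis, reference):
    """F1 of LCS-based precision and recall, as in ROUGE-L."""
    hyp, ref = hypothesis.split(), reference.split()
    lcs = lcs_length(hyp, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(hyp), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Using the LCS rather than fixed n-grams rewards in-order word overlap without requiring the matched words to be contiguous.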
+
+ METEOR METEOR (Metric for Evaluation of Translation with Explicit ORdering) was first proposed to overcome flaws of BLEU, such as its lack of recall (Denkowski and Lavie, 2011). It computes unigram precision and recall, and has a different mechanism for applying the brevity penalty.
+
+ GLEU GLEU (Google-BLEU) is a variant of BLEU (Wu et al., 2016) which computes n-gram recall as well as precision, reporting the minimum of the two as the final GLEU score.
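Sentence-level GLEU can be sketched as follows (our simplification of Wu et al. (2016); the corpus-level metric aggregates counts over all segments):

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def gleu(hypothesis, reference, max_n=4):
    """min(precision, recall) over all 1..max_n n-gram matches."""
    hyp, ref = hypothesis.split(), reference.split()
    matches = hyp_total = ref_total = 0
    for n in range(1, max_n + 1):
        h, r = ngram_counts(hyp, n), ngram_counts(ref, n)
        matches += sum((h & r).values())  # clipped n-gram overlap
        hyp_total += sum(h.values())
        ref_total += sum(r.values())
    if hyp_total == 0 or ref_total == 0:
        return 0.0
    return min(matches / hyp_total, matches / ref_total)
```

Taking the minimum of precision and recall penalizes both padded and truncated outputs, which is why GLEU needs no separate brevity penalty.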
+
+ <table><tr><td>Metric</td><td>r</td></tr><tr><td>BLEU-4</td><td>-0.883</td></tr><tr><td>BLEU-1</td><td>-0.707</td></tr><tr><td>ROUGE-L</td><td>-0.799</td></tr><tr><td>METEOR</td><td>-0.321</td></tr><tr><td>GLEU</td><td>-0.816</td></tr></table>
+
+ Table 5: Pearson correlation $(r)$ of word-overlap metric scores and human evaluation.
+
+ # 7.2 Reference-free Metrics
+
+ The following introduces the two reference-free automatic metrics we employed, FED and USR. Their scores are computed using the conversations collected in our experiment.
+
+ <table><tr><td></td><td>FEDm</td><td>FEDl</td><td>USR</td><td>USR-MLM</td><td>USR-DR(c)</td><td>USR-DR(f)</td></tr><tr><td>Overall</td><td>0.590</td><td>0.530</td><td>-0.230</td><td>-0.419</td><td>0.046</td><td>0.205</td></tr><tr><td>Interesting</td><td>0.028</td><td>-0.042</td><td>-0.451</td><td>-0.235</td><td>-0.238</td><td>-0.081</td></tr><tr><td>Fun</td><td>-0.339</td><td>0.115</td><td>-0.378</td><td>-0.319</td><td>-0.131</td><td>0.032</td></tr><tr><td>Consistent</td><td>0.236</td><td>0.227</td><td>0.214</td><td>-0.620</td><td>0.518</td><td>0.652</td></tr><tr><td>Fluent</td><td>-0.138</td><td>-0.054</td><td>-0.227</td><td>-0.374</td><td>0.028</td><td>0.151</td></tr><tr><td>Robotic</td><td>0.528</td><td>0.461</td><td>-0.070</td><td>-0.290</td><td>0.106</td><td>0.191</td></tr><tr><td>Repetitive</td><td>0.841</td><td>0.752</td><td>-0.713</td><td>0.182</td><td>-0.690</td><td>-0.568</td></tr><tr><td>Topic</td><td>0.046</td><td>0.004</td><td>0.222</td><td>-0.754</td><td>0.606</td><td>0.746</td></tr></table>
+
+ Table 6: Pearson correlation $(r)$ of reference-free metric scores and human evaluation, where $\mathrm{FED}_m$ and $\mathrm{FED}_l$ respectively use medium and large DialoGPT, and USR is the overall USR score computed from three sub-metrics: USR-MLM, USR-DR(c) and USR-DR(f).
+
+ FED FED (Fine-grained Evaluation of Dialog) is a pre-trained-model based metric for evaluating a textual conversation history (Mehri and Eskenazi, 2020a). Given a conversation $c$, a pre-trained model $m$, and two predefined responses $r_p$ and $r_n$ ($p$ = positive, $n$ = negative), the FED score is $\mathcal{L}_m(r_p|c) - \mathcal{L}_m(r_n|c)$, where $\mathcal{L}_m(r|c)$ is the likelihood that model $m$ generates response $r$ given conversation $c$. We employed medium and large DialoGPT (Zhang et al., 2020) as FED scorers; the full list of predefined positive and negative responses is available in Table 7 in the Appendix.
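The FED computation can be sketched as follows. To keep the sketch self-contained we substitute a toy context-unigram model for DialoGPT's $\mathcal{L}_m$; everything except the score's form (positive-response likelihood minus negative-response likelihood) is our own stand-in.

```python
import math
from collections import Counter

def toy_loglik(response, context):
    """Stand-in for log L_m(r | c): an add-one-smoothed unigram model
    built from the conversation context (the paper uses DialoGPT)."""
    counts = Counter(context.split())
    total = sum(counts.values())
    vocab = len(counts) + 1
    return sum(math.log((counts[w] + 1) / (total + vocab))
               for w in response.split())

def fed_score(context, positive, negative, loglik=toy_loglik):
    """FED: L_m(r_p | c) - L_m(r_n | c); higher means the model judges
    the positive follow-up more likely than the negative one."""
    return loglik(positive, context) - loglik(negative, context)

context = "i love my dog . my dog loves the park"
score = fed_score(context,
                  positive="your dog sounds great",      # shares content words
                  negative="stocks fell sharply today")  # unrelated
```

Under the toy model, a follow-up sharing vocabulary with the context scores higher than an unrelated one, mirroring the intuition behind the real metric.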
196
+
197
+ USR USR (an UnSupervised Reference-free metric) uses the pre-trained model RoBERTa (Liu et al., 2019) to assess the quality of a conversation (Mehri and Eskenazi, 2020b). It consists of three sub-metrics: USR-MLM evaluates understandability and naturalness, while USR-DR(c) and USR-DR(f) evaluate interestingness and consistency. The sub-metric scores are then combined into an overall score by a regression model.
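The combination step can be sketched as a weighted sum of standardized sub-metric scores; note that the actual USR metric learns its combination with a regression model fit to human ratings, so the equal weights below are purely a stand-in.

```python
def zscore(xs):
    """Standardize a list of scores to zero mean, unit variance."""
    m = sum(xs) / len(xs)
    s = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / s for x in xs]

def usr_overall(mlm, dr_c, dr_f, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Combine the three USR sub-metric score lists (one entry per
    conversation) into overall scores via a weighted sum of the
    standardized sub-scores. Equal weights are illustrative only."""
    cols = [zscore(mlm), zscore(dr_c), zscore(dr_f)]
    return [sum(w * col[i] for w, col in zip(weights, cols))
            for i in range(len(mlm))]
```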
198
+
199
+ # 7.3 Correlation between Automatic Metrics and Human Evaluation
200
+
201
+ We compute the correlation between commonly applied automatic metrics and our human evaluation methods, including word-overlap-based metrics and reference-free metrics, as shown in Tables 5 and 6 respectively.
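For reference, the Pearson correlations reported in these tables pair each system's metric score with its human-evaluation score; the coefficient can be computed with a small self-contained helper:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    lists of paired (metric, human) scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linearly related scores correlate at r = 1.0.
assert abs(pearson_r([1.0, 2.0, 4.0], [2.0, 4.0, 8.0]) - 1.0) < 1e-12
```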
202
+
203
+ As can be seen from Table 5, unfortunately no word-overlap metric achieves a strong positive correlation with human assessment, confirming once again the invalidity of system rankings currently produced by automatic metric scores.
204
+
205
206
+
207
+ In terms of reference-free metrics, the results correspond better and are more encouraging. FED is able to distinguish "repetitive" models, but for the other criteria it correlates weakly or even negatively with human judgments. Meanwhile, although the overall USR score correlates only marginally with human judgments of consistency and topic loyalty, USR-DR(f) correlates most closely with human judgments among the three sub-metrics, performing best on those two criteria.
208
+
209
+ # 8 Conclusion
210
+
211
+ Development of reliable evaluation for open-domain dialogue has been highlighted as a known open problem. We overcome previous challenges and provide a new human evaluation methodology shown to be highly consistent, with model results correlating at $r = 0.969$ across two separate data collection runs. Our evaluation offers highly accurate quality control of crowd-sourced data, score standardization that irons out differences in individual scoring strategies, and applicability of standard significance testing, while increasing the reliability of results.
212
+
213
+ # Acknowledgments
214
+
215
+ Support was provided by Noah's Ark Lab, Huawei, and Science Foundation Ireland in the ADAPT Centre for Digital Content Technology (www.adaptcentre.ie) at Trinity College Dublin and Dublin City University funded under the SFI Research Centres Programme (Grants 13/RC/2106_P2; 13/RC/2106) co-funded under the European Regional Development Fund.
216
+
217
+ # References
218
+
219
+ Ralph A Alexander. 1990. A note on averaging correlations. Bulletin of the Psychonomic Society, 28(4):335-336.
220
+ Aliosha Alexandrov. 2010. Characteristics of single-item measures in likert scale format. The Electronic Journal of Business Research Methods, 8(1):1-12.
221
+ Loic Barrault, Magdalena Biesialska, Ondrej Bojar, Marta R. Costa-jussa, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljubesic, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (wmt20). In Proceedings of the Fifth Conference on Machine Translation, pages 1-54, Online. Association for Computational Linguistics.
222
+ Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Ales Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12-58, Baltimore, Maryland, USA. Association for Computational Linguistics.
223
+ Ondrej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 Workshop on Statistical Machine Translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 1-44, Sofia, Bulgaria. Association for Computational Linguistics.
224
+ Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 workshop on statistical machine translation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 10-51, Montreal, Canada. Association for Computational Linguistics.
225
+ Chris Callison-Burch, Philipp Koehn, Christof Monz, and Omar Zaidan. 2011. Findings of the 2011 workshop on statistical machine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 22-64, Edinburgh, Scotland. Association for Computational Linguistics.
226
+ M. Denkowski and A. Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 85-91. Association for Computational Linguistics.
227
+ Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander H. Miller, Kurt Shuster, Jack Urbanek,
228
+
229
+ Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W. Black, Alexander I. Rudnicky, Jason Williams, Joelle Pineau, Mikhail Burtsev, and Jason Weston. 2019. The second conversational intelligence challenge (convai2). CoRR, abs/1902.00098.
230
+ Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. CoRR, abs/1811.01241.
231
+ Sarah E. Finch and Jinho D. Choi. 2020. Towards unified dialogue system evaluation: A comprehensive analysis of current evaluation protocols. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 236-245, 1st virtual meeting. Association for Computational Linguistics.
232
+ Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Crowd-sourcing of human judgments of machine translation fluency. In Proceedings of the Australasian Language Technology Association Workshop 2013 (ALTA 2013), pages 16-24, Brisbane, Australia.
233
+ Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735-1780.
234
+ David M. Howcroft, Anya Belz, Miruna-Adriana Clinciu, Dimitra Gkatzia, Sadid A. Hasan, Saad Mahamood, Simon Mille, Emiel van Miltenburg, Sashank Santhanam, and Verena Rieser. 2020. Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised definitions. In Proceedings of the 13th International Conference on Natural Language Generation, pages 169-182, Dublin, Ireland. Association for Computational Linguistics.
235
+ Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring. CoRR, abs/1905.01969.
236
+ Walter S. Lasecki, Jaime Teevan, and Ece Kamar. 2014. Information extraction and manipulation threats in crowd-powered systems. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, CSCW '14, page 248-256, New York, NY, USA. Association for Computing Machinery.
237
+ Margaret Li, Jason Weston, and Stephen Roller. 2019. Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087.
238
+ Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association
239
+
240
+ for Computational Linguistics on Human Language Technology-Volume 1, pages 71-78. Association for Computational Linguistics.
241
+ Qiang Liu, Alexander T Ihler, and Mark Steyvers. 2013. Scoring workers in crowdsourcing: How many control questions are enough? In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc.
242
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
243
+ Anastassia Loukina, Nitin Madnani, Aoife Cahill, Lili Yao, Matthew S. Johnson, Brian Riordan, and Daniel F. McCaffrey. 2020. Using PRMSE to evaluate automated scoring systems in the presence of label noise. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 18-29, Seattle, WA, USA $\rightarrow$ Online. Association for Computational Linguistics.
244
+ Shikib Mehri and Maxine Eskenazi. 2020a. Unsupervised evaluation of interactive dialog with DialoGPT. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 225-235, 1st virtual meeting. Association for Computational Linguistics.
245
+ Shikib Mehri and Maxine Eskenazi. 2020b. USR: An unsupervised and reference free evaluation metric for dialog generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 681-707, Online. Association for Computational Linguistics.
246
+ Simon Mille, Anya Belz, Bernd Bohnet, Thiago Castro Ferreira, Yvette Graham, and Leo Wanner. 2020. The third multilingual surface realisation shared task (SR'20): Overview and evaluation results. In Proceedings of the Third Workshop on Multilingual Surface Realisation, pages 1-20, Barcelona, Spain (Online). Association for Computational Linguistics.
247
+ Alexander H. Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. CoRR, abs/1606.03126.
248
+ Jekaterina Novikova, Ondrej Dušek, and Verena Rieser. 2018. RankME: Reliable human ratings for natural language generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 72-78, New Orleans, Louisiana. Association for Computational Linguistics.
249
+ Bo Pang, Erik Nijkamp, Wenjuan Han, Linqi Zhou, Yixian Liu, and Kewei Tu. 2020. Towards holistic and automatic evaluation of open-domain dialogue generation. In Proceedings of the 58th Annual Meeting of
250
+
251
+ the Association for Computational Linguistics, pages 3619-3629, Online. Association for Computational Linguistics.
252
+ Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pages 311-318, Stroudsburg, PA, USA. Association for Computational Linguistics.
253
+ Ashwin Ram, Rohit Prasad, Chandra Khatri, Anu Venkatesh, Raefer Gabriel, Qing Liu, Jeff Nunn, Behnam Hedayatnia, Ming Cheng, Ashish Nagar, Eric King, Kate Bland, Amanda Wartick, Yi Pan, Han Song, Sk Jayadevan, Gene Hwang, and Art Pettigrue. 2018. Conversational AI: the science behind the alexa prize. CoRR, abs/1801.03604.
254
+ Sashank Santhanam, Alireza Karduni, and Samira Shaikh. 2020. Studying the effects of cognitive biases in evaluation of conversational agents. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1-13.
255
+ Sashank Santhanam and Samira Shaikh. 2019. Towards best experiment design for evaluating dialogue system output. In Proceedings of the 12th International Conference on Natural Language Generation, pages 88-94, Tokyo, Japan. Association for Computational Linguistics.
256
+ Ionut Sorodoc, Jey Han Lau, Nikolaos Aletras, and Timothy Baldwin. 2017. Multimodal topic labelling. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 701-706, Valencia, Spain. Association for Computational Linguistics.
257
+ Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, page 3104-3112, Cambridge, MA, USA. MIT Press.
258
+ Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.
259
+ Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT: Large-scale generative pre-training for conversational response
260
+
261
+ generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270-278, Online. Association for Computational Linguistics.
262
+
263
+ # A Appendix
264
+
265
+ # A.1 Further Details of Meaning Distortion Degradation Procedure
266
+
267
+ To distort the meaning of responses in our quality-control degraded performance model, a sequence of $r$ words is first selected from a response of length $n$ and replaced with a distinct sequence, also of length $r$ , randomly selected from a different dialogue in the training set. The position of the replaced word sequence is also random, with the exception that for responses of length $n \geq 3$ the replaced sequence does not include the response-initial or response-final words:
268
+
269
+ - for $1 \leq n \leq 3$: $r$ is 1 word;
270
+ - for $4 \leq n \leq 5$: $r$ is 2 words;
271
+ - for $6 \leq n \leq 8$: $r$ is 3 words;
272
+ - for $9 \leq n \leq 15$: $r$ is 4 words;
273
+ - for $16 \leq n \leq 29$: $r$ is 5 words;
274
+ - for $n \geq 30$: $r$ is $\lfloor n / 5 \rfloor$ words.
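The rules above can be sketched as follows, assuming tokenized (word-list) responses and a donor word sequence of sufficient length taken from a distinct training dialogue; this is an illustrative reconstruction, not the authors' exact code.

```python
import random

def span_length(n):
    """Length r of the replaced span for a response of n words,
    following the rules listed above."""
    if n <= 3:
        return 1
    if n <= 5:
        return 2
    if n <= 8:
        return 3
    if n <= 15:
        return 4
    if n <= 29:
        return 5
    return n // 5

def distort(response, donor, rng=random):
    """Replace a random span of the response with an equal-length
    span from a distinct dialogue (donor must have >= r words);
    for responses of 3+ words, the first and last words of the
    response are never replaced."""
    n = len(response)
    r = span_length(n)
    start = rng.randint(1, n - r - 1) if n >= 3 else rng.randint(0, n - r)
    d = rng.randint(0, len(donor) - r)
    return response[:start] + donor[d:d + r] + response[start + r:]
```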
275
+
276
+ # A.2 Worker Payment
277
+
278
+ Each worker was paid 0.99 USD per HIT, consisting of 6 conversations. The total cost of one run of our evaluation did not exceed 250 USD, or 25 USD per model. Note that the quality control method we applied for removing unreliable data was not used to decide worker payment: a worker whose data is filtered out is still paid.
279
+
280
+ # A.3 Automatic Metrics
281
+
282
+ <table><tr><td></td><td>Positive</td><td>Negative</td></tr><tr><td>Interesting</td><td>Wow that is really interesting.<br>That&#x27;s really interesting!<br>Cool! That sounds super interesting.</td><td>That&#x27;s not very interesting.<br>That&#x27;s really boring.<br>That was a really boring response.</td></tr><tr><td>Fun</td><td>Wow that is very fun.<br>Chat with you is enjoyable.<br>You are fun.</td><td>That&#x27;s not very fun.<br>I am not having fun.</td></tr><tr><td>Consistent</td><td>-</td><td>That&#x27;s not what you said earlier!<br>Stop contradicting yourself!</td></tr><tr><td>Fluent</td><td>That makes sense!<br>You have a good point.</td><td>Is that real English?<br>I&#x27;m so confused right now!<br>That makes no sense!</td></tr><tr><td>Topic</td><td>-</td><td>Stop changing the topic so much.<br>Don&#x27;t change the topic!</td></tr><tr><td>Robotic</td><td>-</td><td>You are robot.<br>You do not sound like a person.</td></tr><tr><td>Repetitive</td><td>-</td><td>Stop saying the same thing repeatedly.<br>Why are you repeating yourself?<br>Stop repeating yourself!</td></tr></table>
298
+
299
+ Table 7: Positive and negative utterances employed for the FED metric. For criteria that are available in original FED (Interesting, Consistent, Fluent, Topic and Repetitive), we use their utterances off-the-shelf. In addition, we adapt the utterances for criteria Fun and Robotic.
300
+
301
+ # A.4 Additional Experiment Details and Results
302
+
303
+ We additionally provide a comparison of conversations collected in our live human evaluation with those of ConvAI2 in Figures 8, 9, 10, and 11, as well as the median numbers of words and characters for conversations and inputs in Table 12, showing how the challenges reported in the ConvAI2 live evaluation, where workers produced very short conversations, are overcome by our quality control technique.
304
+
305
+ # A.5 User Interface
306
+
307
+ Figures 12, 13, 15, 16 and 17 show the user interfaces designed for the experiments of our human evaluation method. In addition, Figure 18 provides an example of a live chat between a worker and a chatbot.
308
+
309
+ <table><tr><td></td><td>Normal</td><td></td><td>Degraded</td></tr><tr><td>User:</td><td>Hello, how are you?</td><td>User:</td><td>How are you?</td></tr><tr><td>Bot:</td><td>i am good , a little tired but good !</td><td>Bot:</td><td>I do not have a job. I have broken have a job.</td></tr><tr><td>User:</td><td>I am tired as well, but good too</td><td>User:</td><td>Sorry about that?</td></tr><tr><td>Bot:</td><td>i am tired too , but from being out all day .</td><td>Bot:</td><td>Thanks. I the love watching old.</td></tr><tr><td>User:</td><td>What do you like doing?</td><td>User:</td><td>Tell me about it?</td></tr><tr><td>Bot:</td><td>i enjoy movies , reading books , and even just doing nothing</td><td>Bot:</td><td>that is cool, do you i supposed to I you travel?</td></tr><tr><td>User:</td><td>I love reading books</td><td>User:</td><td>I do not like travelling</td></tr><tr><td>Bot:</td><td>i enjoy reading too ! what is your favorite book ?</td><td>Bot:</td><td>I teach meet you, sorryy elementary school.</td></tr><tr><td>User:</td><td>I love the novel &quot;voice of the heart&quot;</td><td>User:</td><td>Okay, do you like to swim?</td></tr><tr><td>Bot:</td><td>i have been wanting to read that , i love reading too</td><td>Bot:</td><td>do you live sports are u skin am in a house.</td></tr><tr><td>User:</td><td>You should read too</td><td>User:</td><td>I love sports, especially soccer</td></tr><tr><td>Bot:</td><td>i do love to read when i have time</td><td>Bot:</td><td>I like but ishuold go, books</td></tr><tr><td>User:</td><td>What book was the last book you read?</td><td>User:</td><td>How often do you read?</td></tr><tr><td>Bot:</td><td>the notebook , a reread lol</td><td>Bot:</td><td>i are your think ur i am a pilot , so i do not have much time .</td></tr><tr><td>User:</td><td>I haven&#x27;t read that</td><td>User:</td><td>I envy you, I would love to be a pilot</td></tr><tr><td>Bot:</td><td>i read it yesterday . 
what do you do for fun ?</td><td>Bot:</td><td>I i am 4 felicia!</td></tr><tr><td>User:</td><td>I love going hiking</td><td>User:</td><td>I am Ben</td></tr><tr><td>Bot:</td><td>i have not gone hiking a while</td><td>Bot:</td><td>Oh, how nice. do you do for you have?</td></tr><tr><td>User:</td><td>I think you should</td><td>User:</td><td>I do not understand</td></tr><tr><td>Bot:</td><td>i should</td><td>Bot:</td><td>I is your favorite don&#x27;t I am sure you will find out soon.</td></tr></table>
310
+
311
+ Figure 4: A typical human-model conversation (left) and a human quality-degraded-model conversation (right), where random response and meaning distortion techniques have been applied to degrade model responses.
312
+
313
+ <table><tr><td>1.</td><td>Your task is to have 6 conversations with a chatbot, and a different chatbot will talk to you in each conversation.</td></tr><tr><td>2.</td><td>Before each conversation, you should think of a topic to talk about with the chatbot (your choice of topic). You will be asked to enter this topic before the conversation starts.</td></tr><tr><td>3.</td><td>The current topic will be displayed to you throughout the conversation.</td></tr><tr><td>4.</td><td>If the chatbot changes the topic to a new one, you should record this by updating the conversation topic using the Topic button (bottom left).</td></tr><tr><td>5.</td><td>You are also allowed to change the topic, you should use the same button to do this (bottom left).</td></tr><tr><td>6.</td><td>At the end of each conversation, you should tell us what you think about the chatbot.</td></tr><tr><td>7.</td><td>In each conversation, you should type in a minimum of 10 inputs/sentences.</td></tr><tr><td>8.</td><td>The purpose of these HITs where you will generate conversations with chatbots is to test how realistic their conversations are with users. In order for your data to be useful to us we require that your half of the conversation is also realistic. For example, your data will not be useful to us if you do the following:<br>User: Hi<br>Bot: Hi<br>User: Hi<br>Bot: Hi<br>... and so on.</td></tr><tr><td>9.</td><td>Another example, if you are too repetitive or your responses are not appropriate given what the chatbot has just said, this will not be a useful test for them. For example, the following conversation is not ok:<br>User: Hi<br>Bot: Hi<br>User: wow (not appropriate response)<br>Bot: I saw a good movie last night<br>User: wow (repetitive)<br>Bot: Do you like football?<br>User: I have two children and one dog. (not appropriate response)<br>... and so on.</td></tr><tr><td>10.</td><td>We need realistic conversations, so please do your best to talk to the bot as if the bot was another person you actually want to talk to. 
Obvious attempts to game the process and ones that don&#x27;t make a real effort will unfortunately be rejected.</td></tr><tr><td>11.</td><td>The chatbot may take a few seconds to respond, please be patient.</td></tr><tr><td>12.</td><td>Please use Chrome and avoid special symbols if possible.</td></tr><tr><td>13.</td><td>There is a feedback box at the end of the HIT. If you encounter any problems, please enter them in this box or email our MTurk account.</td></tr></table>
314
+
315
+ Figure 5: Instructions shown to Mechanical Turk workers before starting the open-domain dialogue human evaluation.
316
+
317
+ <table><tr><td></td><td>Overall</td><td>Interesting</td><td>Fun</td><td>Consistent</td><td>Fluent</td><td>Topic</td><td>Robotic</td><td>Repetitive</td></tr><tr><td>Overall</td><td>-</td><td>0.959</td><td>0.976</td><td>0.861</td><td>0.966</td><td>0.796</td><td>0.916</td><td>0.674</td></tr><tr><td>Interesting</td><td>0.927</td><td>-</td><td>0.992</td><td>0.691</td><td>0.949</td><td>0.599</td><td>0.875</td><td>0.840</td></tr><tr><td>Fun</td><td>0.903</td><td>0.988</td><td>-</td><td>0.753</td><td>0.961</td><td>0.660</td><td>0.889</td><td>0.783</td></tr><tr><td>Consistent</td><td>0.842</td><td>0.673</td><td>0.636</td><td>-</td><td>0.811</td><td>0.969</td><td>0.770</td><td>0.210</td></tr><tr><td>Fluent</td><td>0.879</td><td>0.939</td><td>0.915</td><td>0.648</td><td>-</td><td>0.724</td><td>0.857</td><td>0.667</td></tr><tr><td>Topic</td><td>0.745</td><td>0.552</td><td>0.503</td><td>0.915</td><td>0.503</td><td>-</td><td>0.676</td><td>0.122</td></tr><tr><td>Robotic</td><td>0.867</td><td>0.830</td><td>0.782</td><td>0.648</td><td>0.867</td><td>0.491</td><td>-</td><td>0.642</td></tr><tr><td>Repetitive</td><td>0.673</td><td>0.770</td><td>0.782</td><td>0.261</td><td>0.770</td><td>0.055</td><td>0.758</td><td>-</td></tr></table>
318
+
319
+ Table 8: Correlation of assessed criteria with others when the human dialogue participant is allowed to freely choose a topic (run 1); correlations in the upper right half correspond to Pearson's $r$ while lower left are Spearman Correlation Coefficients.
320
+
321
+ <table><tr><td></td><td>Model</td><td>n</td><td>Overall</td><td>Interesting</td><td>Fun</td><td>Consistent</td><td>Fluent</td><td>Topic</td><td>Robotic</td><td>Repetitive</td></tr><tr><td rowspan="10">Free Run 2</td><td>A</td><td>623</td><td>0.455</td><td>0.635</td><td>0.629</td><td>0.728</td><td>0.924</td><td>0.922</td><td>-0.443</td><td>-0.212</td></tr><tr><td>Ap</td><td>539</td><td>0.423</td><td>0.747</td><td>0.763</td><td>0.555</td><td>0.728</td><td>0.474</td><td>-0.348</td><td>0.040</td></tr><tr><td>B</td><td>553</td><td>0.344</td><td>0.464</td><td>0.407</td><td>0.554</td><td>0.763</td><td>0.822</td><td>-0.338</td><td>-0.266</td></tr><tr><td>Bp</td><td>630</td><td>0.260</td><td>0.464</td><td>0.372</td><td>0.560</td><td>0.581</td><td>0.496</td><td>-0.412</td><td>-0.238</td></tr><tr><td>C</td><td>539</td><td>0.245</td><td>0.576</td><td>0.492</td><td>0.229</td><td>0.585</td><td>0.043</td><td>-0.545</td><td>0.337</td></tr><tr><td>Cp</td><td>609</td><td>0.154</td><td>0.453</td><td>0.390</td><td>0.027</td><td>0.544</td><td>-0.200</td><td>-0.515</td><td>0.382</td></tr><tr><td>D</td><td>595</td><td>0.002</td><td>0.009</td><td>-0.064</td><td>0.389</td><td>0.282</td><td>0.656</td><td>-0.720</td><td>-0.541</td></tr><tr><td>E</td><td>567</td><td>-0.202</td><td>-0.063</td><td>-0.044</td><td>-0.075</td><td>0.300</td><td>-0.346</td><td>-0.646</td><td>-0.539</td></tr><tr><td>Ep</td><td>511</td><td>-0.218</td><td>-0.152</td><td>-0.143</td><td>0.043</td><td>0.426</td><td>-0.352</td><td>-0.702</td><td>-0.646</td></tr><tr><td>Dp</td><td>679</td><td>-0.258</td><td>-0.285</td><td>-0.304</td><td>0.033</td><td>0.209</td><td>-0.226</td><td>-0.550</td><td>-0.683</td></tr><tr><td></td><td>r</td><td>-</td><td>0.969</td><td>0.952</td><td>0.927</td><td>0.899</td><td>0.960</td><td>0.951</td><td>0.646</td><td>0.936</td></tr></table>
322
+
323
+ Table 9: Average standardized scores for models in the secondary data collection run, where workers were free to choose the topic of conversation (Free Run 2); the correlation $(r)$ is between system scores in this run and those of the first data collection run; A=Bi-Encoder Transformer, B=Poly-Encoder Transformer, C=Key-Value Memory Network, D=Sequence to Sequence, and E=Language Model; models with subscript $p$ have access to a persona; $n$ denotes the total number of ratings; scores for robotic and repetitive have been reversed; models ordered by overall average score.
324
+
325
+ <table><tr><td></td><td>Model</td><td>n</td><td>Overall</td><td>Interesting</td><td>Fun</td><td>Consistent</td><td>Fluent</td><td>Topic</td><td>Robotic</td><td>Repetitive</td></tr><tr><td rowspan="10">Free run 1</td><td>A</td><td>798</td><td>52.49</td><td>53.03</td><td>54.07</td><td>58.12</td><td>61.78</td><td>65.24</td><td>35.73</td><td>39.47</td></tr><tr><td>B</td><td>798</td><td>50.41</td><td>51.39</td><td>51.68</td><td>56.37</td><td>64.50</td><td>67.84</td><td>25.63</td><td>35.45</td></tr><tr><td>Ap</td><td>707</td><td>45.53</td><td>47.38</td><td>46.23</td><td>48.52</td><td>60.17</td><td>47.50</td><td>28.30</td><td>40.62</td></tr><tr><td>C</td><td>791</td><td>43.96</td><td>50.50</td><td>47.53</td><td>35.85</td><td>55.73</td><td>33.98</td><td>27.35</td><td>56.76</td></tr><tr><td>Cp</td><td>714</td><td>41.21</td><td>47.13</td><td>46.26</td><td>39.25</td><td>55.05</td><td>32.07</td><td>21.85</td><td>46.84</td></tr><tr><td>Bp</td><td>707</td><td>39.93</td><td>41.35</td><td>40.06</td><td>44.93</td><td>53.74</td><td>43.72</td><td>25.25</td><td>30.49</td></tr><tr><td>D</td><td>707</td><td>33.71</td><td>30.28</td><td>29.95</td><td>41.72</td><td>45.92</td><td>49.07</td><td>17.30</td><td>21.72</td></tr><tr><td>Dp</td><td>798</td><td>29.38</td><td>26.19</td><td>27.97</td><td>37.53</td><td>44.19</td><td>35.26</td><td>17.46</td><td>17.06</td></tr><tr><td>E</td><td>742</td><td>28.99</td><td>30.75</td><td>30.65</td><td>31.27</td><td>46.42</td><td>23.60</td><td>15.10</td><td>25.13</td></tr><tr><td>Ep</td><td>763</td><td>28.65</td><td>29.34</td><td>28.50</td><td>29.13</td><td>47.07</td><td>21.30</td><td>17.82</td><td>27.41</td></tr><tr><td rowspan="10">Free Run 
2</td><td>A</td><td>623</td><td>51.67</td><td>56.62</td><td>56.27</td><td>59.21</td><td>64.69</td><td>64.04</td><td>27.11</td><td>33.74</td></tr><tr><td>B</td><td>539</td><td>49.07</td><td>52.42</td><td>50.66</td><td>54.88</td><td>60.86</td><td>63.73</td><td>29.57</td><td>31.38</td></tr><tr><td>Ap</td><td>553</td><td>50.56</td><td>59.95</td><td>60.23</td><td>54.28</td><td>60.61</td><td>52.06</td><td>27.59</td><td>39.22</td></tr><tr><td>C</td><td>630</td><td>45.87</td><td>55.60</td><td>53.02</td><td>45.16</td><td>54.70</td><td>38.72</td><td>24.40</td><td>49.50</td></tr><tr><td>Cp</td><td>539</td><td>42.27</td><td>51.19</td><td>49.61</td><td>37.90</td><td>54.17</td><td>30.42</td><td>22.74</td><td>49.84</td></tr><tr><td>Bp</td><td>609</td><td>46.71</td><td>51.92</td><td>49.95</td><td>54.62</td><td>56.01</td><td>52.85</td><td>28.48</td><td>33.10</td></tr><tr><td>D</td><td>595</td><td>38.17</td><td>38.31</td><td>35.39</td><td>50.99</td><td>46.38</td><td>57.94</td><td>16.09</td><td>22.08</td></tr><tr><td>Dp</td><td>567</td><td>30.89</td><td>31.07</td><td>30.37</td><td>38.37</td><td>44.64</td><td>31.47</td><td>21.85</td><td>18.48</td></tr><tr><td>E</td><td>679</td><td>31.70</td><td>35.67</td><td>36.32</td><td>35.26</td><td>46.91</td><td>26.79</td><td>18.98</td><td>21.99</td></tr><tr><td>Ep</td><td>511</td><td>31.66</td><td>33.63</td><td>33.26</td><td>38.77</td><td>51.53</td><td>26.99</td><td>17.63</td><td>19.79</td></tr><tr><td></td><td>r</td><td>-</td><td>0.959</td><td>0.947</td><td>0.919</td><td>0.880</td><td>0.951</td><td>0.951</td><td>0.783</td><td>0.945</td></tr></table>
326
+
327
+ Table 10: Average raw Direct Assessment scores for each assessed dimension of a range of dialogue systems in two distinct data collection runs where workers are free to choose the topic (Free run 1; Free run 2), as well as the correlations between the two runs for each aspect; A=Bi-Encoder Transformer, B=Poly-Encoder Transformer, C=Key-Value Memory Network, D=Sequence to Sequence, and E=LSTM-based; models with subscript $p$ have a persona, while those without do not; $n$ denotes the total number of ratings; scores for robotic and repetitive have been reversed; models ordered by overall average score.
328
+
329
+ <table><tr><td></td><td>Model</td><td>n</td><td>Overall</td><td>Interesting</td><td>Fun</td><td>Consistent</td><td>Fluent</td><td>Topic</td><td>Robotic</td><td>Repetitive</td></tr><tr><td rowspan="10">Ice-breaker</td><td>A</td><td>721</td><td>53.43</td><td>53.65</td><td>52.35</td><td>63.24</td><td>67.28</td><td>66.97</td><td>28.17</td><td>42.32</td></tr><tr><td>Ap</td><td>721</td><td>50.21</td><td>54.53</td><td>53.50</td><td>52.84</td><td>58.83</td><td>53.18</td><td>38.87</td><td>39.70</td></tr><tr><td>B</td><td>742</td><td>49.55</td><td>49.23</td><td>47.76</td><td>57.79</td><td>60.64</td><td>62.22</td><td>32.56</td><td>36.65</td></tr><tr><td>C</td><td>784</td><td>47.93</td><td>56.18</td><td>53.69</td><td>43.15</td><td>56.88</td><td>40.46</td><td>29.61</td><td>55.54</td></tr><tr><td>Bp</td><td>700</td><td>44.94</td><td>48.83</td><td>46.70</td><td>49.58</td><td>55.86</td><td>49.21</td><td>25.82</td><td>38.61</td></tr><tr><td>Cp</td><td>658</td><td>42.41</td><td>47.98</td><td>45.48</td><td>37.66</td><td>54.51</td><td>32.50</td><td>26.00</td><td>52.72</td></tr><tr><td>D</td><td>728</td><td>35.14</td><td>30.32</td><td>33.13</td><td>42.90</td><td>49.92</td><td>48.51</td><td>20.11</td><td>21.09</td></tr><tr><td>Ep</td><td>721</td><td>31.58</td><td>31.73</td><td>30.82</td><td>35.44</td><td>47.12</td><td>27.06</td><td>21.90</td><td>26.97</td></tr><tr><td>E</td><td>721</td><td>30.09</td><td>33.17</td><td>31.95</td><td>31.14</td><td>47.12</td><td>24.90</td><td>19.10</td><td>23.23</td></tr><tr><td>Dp</td><td>714</td><td>27.22</td><td>22.56</td><td>22.53</td><td>35.22</td><td>41.70</td><td>34.98</td><td>17.44</td><td>16.09</td></tr><tr><td></td><td>r</td><td>-</td><td>0.970</td><td>0.955</td><td>0.918</td><td>0.949</td><td>0.928</td><td>0.972</td><td>0.738</td><td>0.968</td></tr></table>
330
+
331
+ Table 11: Average raw Direct Assessment scores when the topic is selected via an ice-breaker statement drawn from the persona assigned to the model, as well as the correlation between ice-breaker and freely chosen topic (Free run 1) scores; A=Bi-Encoder Transformer, B=Poly-Encoder Transformer, C=Key-Value Memory Network, D=Sequence to Sequence, and E=LSTM-based; systems with subscript $p$ correspond to the performance of the corresponding model when the persona is available to the dialogue system; $n$ denotes the total number of ratings combined to produce each score; scores for robotic and repetitive have been reversed; models ordered by overall average score.
332
+
333
+ ![](images/61ff096f992a9d26396664f9f8ac14e1f8581919a5b16fe62f5da5cd91b7d206.jpg)
334
+ Figure 6: Pairwise significance test results for systems concluded from Free Run 2, where a colored cell indicates that the system in that row significantly outperformed the system in that column. Models are consistent with Table 3.
335
+
336
+ ![](images/f9584bfdca61d2db587147806c1050ac34bdf1c84d4d38ab99a3698295b107e8.jpg)
337
+ Figure 7: Significance test results for Ice-breaker evaluation of models, where a darker colored cell indicates a stronger win in terms of statistical significance for the system in a given row over the system in a given column. Models are consistent with Table 3.
338
+
339
+ <table><tr><td></td><td></td><td>Passed QC</td><td>Failed QC</td><td>ConvAI2</td></tr><tr><td rowspan="2">Characters</td><td>Median in an Input</td><td>27</td><td>22</td><td>16</td></tr><tr><td>Median in a Conversation</td><td>249</td><td>188</td><td>105</td></tr><tr><td rowspan="2">Words</td><td>Median in an Input</td><td>8</td><td>6</td><td>4</td></tr><tr><td>Median in a Conversation</td><td>63</td><td>48</td><td>28</td></tr></table>
340
+
341
+ Table 12: Median numbers of words and characters in conversations and inputs provided by workers who passed quality control; failed quality control in our human evaluation; ConvAI2 live evaluation.
342
+
343
+ ![](images/ca8b6eed043d0343d6de7bbb2d6611526200b311ff29adf7e2eeb73c1663a3f6.jpg)
344
+ (a) Pass Quality Control
345
+
346
+ ![](images/cffebff63a384bd3855e2882293d3e77487054374c3781226c6dc8fd3c174362.jpg)
347
+ (b) Fail Quality Control
348
+
349
+ ![](images/9cf8e9ea3b110cb6ca649ea4db5ffed572063c1f68c6602f2c096d50d52fefff.jpg)
350
+ (c) ConvAI2 Live
351
+
352
+ ![](images/a1df663fbf46e7fd9bab322ecd21f56bae312097faa82ee4f9f3ff9f71630a4a.jpg)
353
+ Figure 8: Characters per conversation from workers who (a) passed quality control; (b) failed quality control in our human evaluation; (c) ConvAI2 live evaluation.
354
+ (a) Pass Quality Control
355
+
356
+ ![](images/1128041d9dd5ea6e7074ba91f3bad25a9b4a6a04688aa3076019b1402cad7f78.jpg)
357
+ (b) Fail Quality Control
358
+
359
+ ![](images/4841b88192143b24d88feff0a40acb2fc335cfa3c55010c30275fa60e8dc8242.jpg)
360
+ (c) ConvAI2 Live
361
+
362
+ ![](images/9e3bb66cd6dffa04a7ea4fc9bc75cff6c820bc28eb852f515f9ffa63b98c01ba.jpg)
363
+ (a) Pass Quality Control
364
+ Figure 10: Characters per input from workers who (a) passed quality control; (b) failed quality control in our human evaluation; (c) ConvAI2 live evaluation.
365
+
366
+ ![](images/de4fe8700c4ccdc4de75e0a3447590718b99440126109d593ae78c407fbdede0.jpg)
367
+ (b) Fail Quality Control
368
+
369
+ ![](images/2cecdc4c3817f8cbd3a54d78796b48450e44883621b711437effa28c109418f9.jpg)
370
+ Figure 9: Words per conversation from workers who (a) passed quality control; (b) failed quality control in our human evaluation; (c) ConvAI2 live evaluation.
371
+ (c) ConvAI2 Live
372
+
373
+ ![](images/62e72b84ba6e2aa07ce9dfa38081b26308a52d8c5ccf567c923c861ec65fa2cd.jpg)
374
+ (a) Pass Quality Control
375
+
376
+ ![](images/b757441e665319788a74df63adf116664437777bf15e9d5b1622060232ed0bcb.jpg)
377
+ (b) Fail Quality Control
378
+
379
+ ![](images/f656ae7544c2d4df30cac3886170da73a60826ec4472deb96ea75eb9cf430e61.jpg)
380
+ (c) ConvAI2 Live
381
+ Figure 11: Words per input from workers who (a) passed quality control; (b) failed quality control in our human evaluation; (c) ConvAI2 live evaluation.
382
+
383
+ ![](images/e6e5e19109c73b550b9f9736118d703b259383cd580ac0f695178fcf1e1bb8d1.jpg)
384
+ Figure 12: The user interface for workers to interact with a chatbot.
385
+
386
+ ![](images/b5decdcc8d28e1f04bb22b33b69b19a78edb6a1358a16020a8c2088e55717b46.jpg)
387
+ Figure 13: The popup window for user to type a topic before the conversation starts.
388
+
389
+ ![](images/71693e337d487f3cfbc0bd776070481d600857dde83f6a1ce02a601859fe5c5c.jpg)
390
+ Figure 14: The popup window if the Topic button is clicked.
391
+
392
+ You have completed conversations with 0 chatbots.
393
+
394
+ # Not enough inputs yet!
395
+
396
+ Please make sure that you have entered at least 10 inputs/sentences before going to the next chatbot, thanks! The number of inputs you've entered so far is displayed at the top of the screen.
397
+
398
+ Close
399
+
400
+ Please say how much you agree with each of the following statements:
401
+
402
+ ![](images/d3a297bada4c4605ec062a5e06ebda64b5a94332a3a29c7f8d533732f0e5d8f4.jpg)
403
+ Figure 15: The popup warning when a worker clicks the Next Chatbot button without enough inputs.
404
+ Figure 16: The interface shown to a worker to evaluate the conversation with a chatbot after clicking the Next Chatbot button in Figure 12. Once the evaluation of the current conversation is done, the worker clicks the NEXT button to move to the next chatbot. When all conversations are completed, the worker is redirected to end the entire HIT and leave feedback, as shown in Figure 17.
405
+
406
+ NEXT
407
+
408
+ Submit
409
+
410
+ ![](images/1ee87e0f42fe2b575dec0f74eb0c1c42fe04d1b590a23c1d3f5da443af43dbba.jpg)
411
+ Figure 17: The interface shown to workers when a HIT is completed. Workers are welcome to leave their feedback on this page.
412
+ Figure 18: Screenshot of an example live chat with a Mechanical Turk worker who chose books as the conversation topic in the human evaluation.
achievingreliablehumanassessmentofopendomaindialoguesystems/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b8f75d8cc57e62682b9ee3c53598c6656d7df6ecf64225de8e13afc792922436
3
+ size 1696326
achievingreliablehumanassessmentofopendomaindialoguesystems/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d6f7a8272c1d75bba98aa2ee49b247e9d8fc6bdb29c14b6a1752c5d620e0deef
3
+ size 503706
acloserlookathowfinetuningchangesbert/a29e25fd-a048-4bed-83a7-98f9591e2daa_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5d77a79e5d3de8164163f91e5b6f2ad478937bde46bcf3508c0340269a28714b
3
+ size 124776
acloserlookathowfinetuningchangesbert/a29e25fd-a048-4bed-83a7-98f9591e2daa_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3852f7ea2fa104816b8da0e95bb68c12489a95efe4ac860fff64a7f1a767fdeb
3
+ size 138706
acloserlookathowfinetuningchangesbert/a29e25fd-a048-4bed-83a7-98f9591e2daa_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:842858e366fd8aaaf5cbb5f798b294d49217fbdcd9cb3f87971ae13f959fd273
3
+ size 3074149
acloserlookathowfinetuningchangesbert/full.md ADDED
@@ -0,0 +1,470 @@
 
 
 
 
1
+ # A Closer Look at How Fine-tuning Changes BERT
2
+
3
+ Yichu Zhou
4
+
5
+ School of Computing
6
+
7
+ University of Utah
8
+
9
+ flyaway@cs.utah.edu
10
+
11
+ Vivek Srikumar
12
+
13
+ School of Computing
14
+
15
+ University of Utah
16
+
17
+ svivek@cs.utah.edu
18
+
19
+ # Abstract
20
+
21
+ Given the prevalence of pre-trained contextualized representations in today's NLP, there have been many efforts to understand what information they contain, and why they seem to be universally successful. The most common approach to use these representations involves fine-tuning them for an end task. Yet, how fine-tuning changes the underlying embedding space is less studied. In this work, we study the English BERT family and use two probing techniques to analyze how fine-tuning changes the space. We hypothesize that fine-tuning affects classification performance by increasing the distances between examples associated with different labels. We confirm this hypothesis with carefully designed experiments on five different NLP tasks. Via these experiments, we also discover an exception to the prevailing wisdom that "fine-tuning always improves performance". Finally, by comparing the representations before and after fine-tuning, we discover that fine-tuning does not introduce arbitrary changes to representations; instead, it adjusts the representations to downstream tasks while largely preserving the original spatial structure of the data points.
22
+
23
+ # 1 Introduction
24
+
25
+ Pre-trained transformer-based language models (e.g., Devlin et al., 2019) form the basis of state-of-the-art results across NLP. The relative opacity of these models has prompted the development of many probes to investigate linguistic regularities captured in them (e.g., Kovaleva et al., 2019; Conneau et al., 2018; Jawahar et al., 2019).
26
+
27
+ Broadly speaking, there are two ways to use a pre-trained representation (Peters et al., 2019): as a fixed feature extractor (where the pre-trained weights are frozen), or by fine-tuning it for a task. The probing literature has largely focused on the former (e.g., Kassner and Schütze, 2020; Perone et al., 2018; Yaghoobzadeh et al., 2019; Krasnowska-Kieras and Wroblewska, 2019; Wallace et al., 2019; Pruksachatkun et al., 2020; Aghajanyan et al., 2021). Some previous work (Merchant et al., 2020; Mosbach et al., 2020b; Hao et al., 2020) does provide insights about fine-tuning: fine-tuning changes higher layers more than lower ones and linguistic information is not lost during fine-tuning. However, relatively less is understood about how the representation changes during the process of fine-tuning and why fine-tuning invariably seems to improve task performance.
30
+
31
+ In this work, we investigate the process of fine-tuning of representations using the English BERT family (Devlin et al., 2019). Specifically, we ask:
32
+
33
+ 1. Does fine-tuning always improve performance?
34
+ 2. How does fine-tuning alter the representation to adjust for downstream tasks?
35
+ 3. How does fine-tuning change the geometric structure of different layers?
36
+
37
+ We apply two probing techniques—classifier-based probing (Kim et al., 2019; Tenney et al., 2019) and DIRECTPROBE (Zhou and Srikumar, 2021)—on variants of BERT representations that are fine-tuned on five tasks: part-of-speech tagging, dependency head prediction, preposition supersense role & function prediction and text classification. Beyond confirming previous findings about fine-tuning, our analysis reveals several new findings, briefly described below.
38
+
39
+ First, we find that fine-tuning introduces a divergence between training and test sets, which is not severe enough to hurt generalization in most cases. However, we do find one exception where fine-tuning hurts the performance; this setting also has the largest divergence between training and test set after fine-tuning (§4.1).
40
+
41
+ Second, we examine how fine-tuning changes labeled regions of the representation space. For a representation where task labels are not linearly separable, we find that fine-tuning adjusts it by grouping points with the same label into a small number of clusters (ideally one), thus simplifying the underlying representation. Doing so makes it easier to linearly separate labels with fine-tuned representations than untuned ones (§4.2). For a representation whose task labels are already linearly separable, we find that fine-tuning pushes the clusters of points representing different labels away from each other, thus introducing large separating regions between labels. Rather than simply scaling the points, clusters move in different directions and with different extents (measured by Euclidean distance). Overall, these clusters become distant compared to the untuned representation. We conjecture that the enlarged region between groups admits a bigger set of classifiers that can separate them, leading to better generalization (§4.3).
44
+
45
+ We verify our distance hypothesis by investigating the effect of fine-tuning across tasks. We observe that fine-tuning for related tasks can also provide useful signal for the target task by altering the distances between clusters representing different labels (§4.4).
46
+
47
+ Finally, fine-tuning does not change the higher layers arbitrarily. This confirms previous findings. Additionally, we find that fine-tuning largely preserves the relative positions of the label clusters, while reconfiguring the space to adjust for downstream tasks (§4.5). Informally, we can say that fine-tuning only "slightly" changes higher layers.
48
+
49
+ These findings help us understand fine-tuning better, and justify why fine-tuned representations can lead to improvements across many NLP tasks<sup>1</sup>.
50
+
51
+ # 2 Preliminaries: Probing Methods
52
+
53
+ In this work, we probe representations in the BERT family during and after fine-tuning. First, let us look at the two supervised probes we will employ: a classifier-based probe (e.g., Tenney et al., 2019; Jullien et al., 2022) to assess how well a representation supports classifiers for a task, and DIRECTPROBE (Zhou and Srikumar, 2021) to analyze the geometry of the representation.
54
+
55
+ # 2.1 Classifiers as Probes
56
+
57
+ Trained classifiers are the most commonly used probes in the literature (e.g. Hewitt et al., 2021; Whitney et al., 2021; Belinkov, 2021). To understand how well a representation encodes the labels for a task, a probing classifier is trained over it, with the embeddings themselves kept frozen when the classifier is trained.
60
+
61
+ For all our experiments, we use two-layer neural networks as our probe classifiers. We use grid-search to choose the best hyperparameters. Each best classifier is trained five times with different initializations. We report the average accuracy and its standard deviation for each classifier.
62
+
63
+ The hidden layer sizes are selected from $\{32,64,128,256\} \times \{32,64,128,256\}$ , and the regularizer weight from the range $10^{-7}$ to $10^{0}$ . All models use ReLUs as the activation function for the hidden layer and are optimized by Adam (Kingma and Ba, 2015). We set the maximum number of learning iterations to 1000. We use scikit-learn v0.22 (Pedregosa et al., 2011) for these experiments.
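As a rough sketch of this probe setup, the search over a two-layer ReLU network can be written with scikit-learn's `MLPClassifier` and `GridSearchCV`. The synthetic data and the shrunken grid below are placeholders, not the paper's embeddings or full search space:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Stand-in for frozen contextual embeddings and their task labels.
X, y = make_classification(n_samples=300, n_features=32, n_informative=10,
                           n_classes=3, random_state=0)

# Two-layer ReLU probe trained with Adam; the grid here is a shrunken
# version of the {32,64,128,256}^2 hidden sizes and 1e-7..1e0 regularizer
# sweep described above, to keep the demo fast.
param_grid = {
    "hidden_layer_sizes": [(32, 32), (64, 32)],
    "alpha": [1e-4, 1e-2, 1e0],
}
probe = MLPClassifier(activation="relu", solver="adam", max_iter=1000,
                      random_state=0)
search = GridSearchCV(probe, param_grid, cv=3).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

In the paper's protocol the best configuration is then retrained five times with different seeds and the mean accuracy reported; here the cross-validated score stands in for that step.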
64
+
65
+ Classifier probes aim to measure how well a contextualized representation captures a linguistic property. The classification performance can help us assess the effect of fine-tuning.
66
+
67
+ # 2.2 DIRECTPROBE: Probing the Geometric Structure
68
+
69
+ Classifier probes treat the representation as a black box and only focus on the final task performance; they do not reveal how fine-tuning changes the underlying geometry of the space. To this end, we use DIRECTPROBE (Zhou and Srikumar, 2021) $^2$ , a recently proposed technique which analyzes embeddings from a geometric perspective. We briefly summarize the technique and refer the reader to the original work for details.
70
+
71
+ For a given labeling task, DIRECTPROBE returns a set of clusters such that each cluster only contains the points with the same label, and there are no overlaps between the convex hulls of these clusters. Any decision boundary must cross the regions between the clusters that have different labels (see Figure 1). Since fine-tuning a contextualized representation creates different representations for different tasks, it is reasonable to probe the representation based on a given task. These clusters allow us to measure three properties of interest.
72
+
73
+ ![](images/0e83ff3e5b8a997f983676a43a15a147430c1a45e4c78dc88f29b199c78645f5.jpg)
74
+ Figure 1: Using the clustering to approximate the set of all decision boundaries. The left subfigure is a simple binary classification problem with a dashed circular decision boundary. The right subfigure is the result of DIRECTPROBE where the gray area is the region that a separator must cross. The connected points represent the clusters that DIRECTPROBE produces.
75
+
76
+ ![](images/638e17dedad92d65de2e92fbc2b6939b6e419d8ae6eca828507c728723087338.jpg)
77
+
78
+ Number of Clusters: The number of clusters indicates the linearity of the representation for a task. If the number of clusters equals the number of labels, then examples with the same label are grouped into one cluster; a simple linear multi-class classifier will suffice. If, however, there are more clusters than labels, then at least two clusters of examples with the same label can not be grouped together (as in Figure 1, right). This scenario calls for a non-linear classifier.
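A crude proxy for this linearity question can be obtained without DIRECTPROBE by checking whether a multi-class linear model fits the training data perfectly; the blob data below is a made-up illustration (DIRECTPROBE itself answers this geometrically, without training a classifier):

```python
from sklearn.datasets import make_blobs
from sklearn.svm import LinearSVC

# Four well-separated synthetic label clusters standing in for embeddings.
X, y = make_blobs(n_samples=200, centers=[[0, 0], [5, 0], [0, 5], [5, 5]],
                  cluster_std=0.4, random_state=0)

# If a linear one-vs-rest classifier reaches 100% training accuracy, one
# cluster per label suffices; otherwise the labels need a non-linear boundary.
clf = LinearSVC(max_iter=10000).fit(X, y)
train_acc = clf.score(X, y)
print("linearly separable (proxy):", train_acc == 1.0)
```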
81
+
82
+ Distances between Clusters: Distances<sup>3</sup> between clusters can reveal the internal structure of a representation. By tracking these distances during fine-tuning, we can study how the representation changes. To compute these distances, we use the fact that each cluster represents a convex object. This allows us to use max-margin separators to compute distances. We train a linear SVM (Chang and Lin, 2011) to find the maximum margin separator and compute its margin. The distance between the two clusters is twice the margin.
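The distance computation above can be sketched with scikit-learn's linear SVM; the two Gaussian point clouds are synthetic stand-ins for a pair of convex label clusters:

```python
import numpy as np
from sklearn.svm import SVC

# Two synthetic point clouds standing in for the convex label clusters.
rng = np.random.default_rng(0)
cluster_a = rng.normal(loc=0.0, scale=0.3, size=(50, 2))
cluster_b = rng.normal(loc=5.0, scale=0.3, size=(50, 2))
X = np.vstack([cluster_a, cluster_b])
y = np.array([0] * 50 + [1] * 50)

# A (near) hard-margin linear SVM finds the max-margin separator; its
# margin on each side is 1 / ||w||, so the gap between the clusters,
# i.e. twice the margin, is 2 / ||w||.
svm = SVC(kernel="linear", C=1e6).fit(X, y)
distance = 2.0 / np.linalg.norm(svm.coef_[0])
print(f"distance between clusters: {distance:.2f}")
```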
83
+
84
+ Spatial Similarity: Distances between clusters can also reveal the spatial similarity of two representations. Intuitively, if two representations have similar relative distances between clusters, the representations themselves are similar to each other for the task at hand.
85
+
86
+ We use these distances to construct a distance vector $\mathbf{v}$ for a representation, where each element $\mathbf{v}_i$ is the distance between the clusters of a pair of labels. With $n$ labels in a task, the size of $\mathbf{v}$ is $\frac{n(n - 1)}{2}$ . This construction works only when the number of clusters equals the number of labels (i.e., the dataset is linearly separable under the representation). Surprisingly, we find this to be the case for most representations we studied. As a measure of the similarity of two representations for a labeling task, we compute the Pearson correlation coefficient between their distance vectors. Note that this coefficient can also be used to measure the similarity between two labeled datasets with respect to the
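The distance-vector construction and the Pearson similarity can be sketched as follows; the centroids are hypothetical, and centroid distances stand in for the max-margin separations that DIRECTPROBE actually uses:

```python
from itertools import combinations

import numpy as np
from scipy.stats import pearsonr

def distance_vector(centroids):
    """Pairwise inter-cluster distances in a fixed label order.
    Centroid distances stand in for the max-margin separations here."""
    labels = sorted(centroids)
    return np.array([np.linalg.norm(centroids[a] - centroids[b])
                     for a, b in combinations(labels, 2)])

# Hypothetical cluster centroids for a 4-label task under two
# representations; rep2 is a scaled, lightly perturbed copy of rep1.
rng = np.random.default_rng(1)
rep1 = {lab: rng.normal(size=8) for lab in "ABCD"}
rep2 = {lab: rep1[lab] * 2.0 + rng.normal(scale=0.1, size=8) for lab in "ABCD"}

v1, v2 = distance_vector(rep1), distance_vector(rep2)
assert len(v1) == 4 * 3 // 2          # n(n-1)/2 entries for n = 4 labels
similarity, _ = pearsonr(v1, v2)
print(round(similarity, 3))           # close to 1: similar relative structure
```

Because rep2 only rescales rep1, its distance vector is nearly proportional to rep1's and the correlation is high, which is exactly the sense in which two representations count as "spatially similar" here.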
87
+
88
+ <table><tr><td></td><td>Layers</td><td>#heads</td><td>Dim</td><td>#Param</td></tr><tr><td>BERTtiny</td><td>2</td><td>2</td><td>128</td><td>4.4M</td></tr><tr><td>BERTmini</td><td>4</td><td>4</td><td>256</td><td>11.3M</td></tr><tr><td>BERTsmall</td><td>4</td><td>8</td><td>512</td><td>29.1M</td></tr><tr><td>BERTmedium</td><td>8</td><td>8</td><td>512</td><td>41.7M</td></tr><tr><td>BERTbase</td><td>12</td><td>12</td><td>768</td><td>110.1M</td></tr></table>
89
+
90
+ Table 1: Statistics of five different BERT models.
91
+
92
+ same representation. We exploit this observation to analyze the divergence between training and test sets for fine-tuned representations (§4.1).
93
+
94
+ # 3 Experimental Setup
95
+
96
+ In this section, we describe the representations and tasks we will encounter in our experiments.
97
+
98
+ # 3.1 Representations
99
+
100
+ We investigate several models from the BERT family (Devlin et al., 2019; Turc et al., 2019). These models all share the same basic architecture but with different capacities, i.e., different layers and hidden sizes. Table 1 summarizes the models we investigate in this work<sup>4</sup>. All of these models are for English text and uncased.
101
+
102
+ For tokens that are broken into subwords by the tokenizer, we average the subword embeddings for the token representation. We use the models provided by HuggingFace v4.2.1 (Wolf et al., 2020), and Pytorch v1.6.0 (Paszke et al., 2019) for our experiments.
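The subword-averaging step can be sketched as below; the vectors are made up, and the `word_ids` list plays the role of the subword-to-word mapping that HuggingFace fast tokenizers expose via `word_ids()`:

```python
import numpy as np

# Toy subword vectors; values are made up. `word_ids` maps each subword
# to the index of the original token.
subword_vecs = np.arange(12, dtype=float).reshape(6, 2)  # 6 subwords, dim 2
word_ids = [0, 0, 1, 2, 2, 2]  # e.g. "play ##ing", "the", "gui ##tar ##s"

def pool_subwords(vecs, word_ids):
    tokens = []
    for w in sorted(set(word_ids)):
        rows = [i for i, wid in enumerate(word_ids) if wid == w]
        tokens.append(vecs[rows].mean(axis=0))  # average the subword vectors
    return np.stack(tokens)

token_vecs = pool_subwords(subword_vecs, word_ids)
print(token_vecs.shape)  # (3, 2): one pooled vector per original token
```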
103
+
104
+ # 3.2 Tasks
105
+
106
+ We instantiate our analysis of the BERT models on a diverse set of five NLP tasks, which covers syntactic and semantic predictions. Here, we briefly describe the tasks, and refer the reader to the original sources of the data for further details.<sup>5</sup>
107
+
108
+ Part-of-speech tagging (POS) predicts the part-of-speech tag for each word in a sentence. The task helps us understand if a representation captures coarse grained syntactic categorization. We use the English portion of the parallel universal dependencies treebank (ud-pud, Nivre et al., 2016).
109
+
110
+ Dependency relation (DEP) predicts the syntactic dependency relation between two tokens, i.e., ($w_{head}$, $w_{mod}$). This task can help us understand if, and how well, a representation can characterize syntactic relationships between words. This task involves assigning a category to a pair of tokens. We concatenate their contextualized representations from BERT and treat the concatenation as the representation of the pair. We use the same dataset as the POS task for dependencies.
113
+
114
+ Preposition supersense disambiguation involves two categorization tasks: predicting a preposition's semantic role (PS-role) and its semantic function (PS-fxn). These tasks are designed to disambiguate the semantic meanings of prepositions. Following previous work (Liu et al., 2019), we only train and evaluate on single-token prepositions from the Streusle v4.2 corpus (Schneider et al., 2018).
115
+
116
+ Text classification, in general, is the task of categorizing sentences or documents. We use the TREC-50 dataset (Li and Roth, 2002) with 50 semantic labels for sentences. As is the standard practice, we use the representation of the [CLS] token as the sentence representation. This task can show how well a representation characterizes a sentence.
117
+
118
+ # 3.3 Fine-tuning Setup
119
+
120
+ We fine-tune the models in §3.1 on the five tasks from §3.2 separately. $^{6}$ The fine-tuned models (along with the original models) are then used to generate contextualized representations. The probing techniques described in §2 are applied to study both original and fine-tuned representations.
121
+
122
+ Our preliminary experiments showed that the commonly used 3-5 epochs of fine-tuning are insufficient for the smaller representations, such as $\mathrm{BERT}_{\mathrm{tiny}}$ , and they require more epochs. We fine-tuned all the representations for 10 epochs except $\mathrm{BERT}_{\mathrm{base}}$ , which we fine-tuned for the usual three epochs. Note that the fine-tuning phase is separate from the classifier training phase for probing; for the probe classifiers, we train two-layer neural networks (described in §2.1) from scratch on both original and fine-tuned representations<sup>7</sup>, ensuring a fair comparison between them.
123
+
124
+ # 4 Observations and Analysis
125
+
126
+ In this section, we will use classifier probes to examine if fine-tuning always improves classifier performance (§4.1). Then we propose a geometric explanation for why fine-tuning improves classification performance using DIRECTPROBE (§4.2 and §4.3). Next, we will confirm this geometric explanation by investigating cross-task fine-tuning (§4.4). Finally, we will analyze how fine-tuning changes the geometry of different layers of $\mathrm{BERT}_{\mathrm{base}}$ (§4.5).
129
+
130
+ # 4.1 Fine-tuned Performance
131
+
132
+ It is commonly accepted that fine-tuning improves task performance. Does this always hold? Table 2 summarizes the relevant observations from our experiments. Appendix C presents the complete fine-tuning results.
133
+
134
+ Fine-tuning causes the training and test sets to diverge. In Table 2, the last column shows the spatial similarity between the training and test set for each representation. We apply DIRECTPROBE to the training and test sets separately. The spatial similarity is calculated as the Pearson correlation coefficient between the distance vectors of the training and test sets (described in §2). We observe that after fine-tuning, all the similarities decrease, implying that the training and test sets diverge as a result of fine-tuning. In most cases, this divergence is not severe enough to decrease performance.
135
+
136
+ There are exceptions, where fine-tuning hurts performance. An interesting observation in Table 2 is that $\mathrm{BERT}_{\mathrm{small}}$ does not show improvement on the PS-fxn task after fine-tuning, which breaks the well-accepted impression that fine-tuning always improves performance. However, only one such exception is observed across all our experiments (see Appendix C), which is insufficient to draw any concrete conclusions about why this happens. We do observe that $\mathrm{BERT}_{\mathrm{small}}$ shows the smallest similarity (0.44) between the training and test set after fine-tuning on the PS-fxn task. We conjecture that controlling the divergence between the training and test sets can help ensure that fine-tuning helps. Verifying or refuting this conjecture requires further study.
137
+
138
+ # 4.2 Linearity of Representations
139
+
140
+ Next, let us examine the geometry of the representations before and after fine-tuning using DIRECTPROBE and counting the number of clusters. We will focus on the overwhelming majority of cases where fine-tuning does improve performance.
141
+
142
+ <table><tr><td>Task</td><td></td><td>Acc</td><td>Sim</td></tr><tr><td rowspan="2">POS</td><td>original</td><td>94.25</td><td>0.96</td></tr><tr><td>tuned</td><td>94.43</td><td>0.72</td></tr><tr><td rowspan="2">DEP</td><td>original</td><td>92.93</td><td>0.93</td></tr><tr><td>tuned</td><td>94.48</td><td>0.78</td></tr><tr><td rowspan="2">PS-fxn</td><td>original</td><td>86.26</td><td>0.82</td></tr><tr><td>tuned</td><td>85.08</td><td>0.44</td></tr><tr><td rowspan="2">PS-role</td><td>original</td><td>74.22</td><td>0.84</td></tr><tr><td>tuned</td><td>74.57</td><td>0.54</td></tr><tr><td rowspan="2">TREC-50</td><td>original</td><td>81.32</td><td>-</td></tr><tr><td>tuned</td><td>89.60</td><td>-</td></tr></table>
143
+
144
+ Table 2: Fine-tuned performance of $\mathrm{BERT}_{\mathrm{small}}$ based on the last layer. The last column shows the spatial similarity (described in §2) between the training and test set. A complete table of all representations and tasks can be found in Appendix C.
145
+
146
+ Smaller representations require more complex classifiers. Table 3 summarizes the results. For brevity, we only present the results on $\mathrm{BERT}_{\mathrm{tiny}}$ . The full results are in Appendix C. We observe that before fine-tuning, small representations (i.e., $\mathrm{BERT}_{\mathrm{tiny}}$ ) are non-linear for most tasks. Although a non-linearity does not imply poor generalization, it represents a more complex spatial structure, and requires a more complex classifier. This suggests that to use small representations (say, due to limited resources), it would be advisable to use a non-linear classifier rather than a simple linear one.
147
+
148
+ Fine-tuning makes the space simpler. In Table 3, we observe that the number of clusters decreases after fine-tuning. This tells us that after fine-tuning, the points associated with different labels are in a simpler spatial configuration. The same trend holds for TREC-50 (Table 4), even when the final representation is not linearly separable.
149
+
150
+ <table><tr><td>Task</td><td></td><td>#clusters</td><td>is linear</td><td>Acc</td></tr><tr><td rowspan="2">POS</td><td>original</td><td>30</td><td>N</td><td>90.76</td></tr><tr><td>tuned</td><td>18</td><td>N</td><td>91.67</td></tr><tr><td rowspan="2">DEP</td><td>original</td><td>50</td><td>N</td><td>86.74</td></tr><tr><td>tuned</td><td>46</td><td>Y</td><td>89.04</td></tr><tr><td rowspan="2">PS-fxn</td><td>original</td><td>42</td><td>N</td><td>74.14</td></tr><tr><td>tuned</td><td>40</td><td>Y</td><td>74.40</td></tr><tr><td rowspan="2">PS-role</td><td>original</td><td>46</td><td>Y</td><td>58.38</td></tr><tr><td>tuned</td><td>46</td><td>Y</td><td>60.31</td></tr><tr><td rowspan="2">TREC-50</td><td>original</td><td>58</td><td>N</td><td>68.12</td></tr><tr><td>tuned</td><td>51</td><td>N</td><td>84.04</td></tr></table>
151
+
152
+ Table 3: The linearity of the last layer of $\mathrm{BERT}_{\mathrm{tiny}}$ for each task. Other results are in Appendix C.
153
+
154
+ <table><tr><td>Rep</td><td></td><td>#clusters</td><td>is linear</td><td>Acc</td></tr><tr><td rowspan="2">BERTtiny</td><td>original</td><td>58</td><td>N</td><td>68.12</td></tr><tr><td>tuned</td><td>51</td><td>N</td><td>84.04</td></tr><tr><td rowspan="2">BERTmini</td><td>original</td><td>52</td><td>N</td><td>74.12</td></tr><tr><td>tuned</td><td>52</td><td>N</td><td>88.36</td></tr><tr><td rowspan="2">BERTsmall</td><td>original</td><td>52</td><td>N</td><td>81.32</td></tr><tr><td>tuned</td><td>51</td><td>N</td><td>89.60</td></tr><tr><td rowspan="2">BERTmedium</td><td>original</td><td>52</td><td>N</td><td>80.68</td></tr><tr><td>tuned</td><td>52</td><td>N</td><td>89.80</td></tr><tr><td rowspan="2">BERTbase</td><td>original</td><td>52</td><td>N</td><td>85.24</td></tr><tr><td>tuned</td><td>51</td><td>N</td><td>90.36</td></tr></table>
155
+
156
+ Table 4: The linearity of the last layer of all models on TREC-50 task. The number of clusters is always more than the number of labels (50).
157
+
158
+ # 4.3 Spatial Structure of Labels
159
+
160
+ To better understand the changes in spatial structure, we apply DIRECTPROBE to every intermediate representation encountered during fine-tuning. Here, we focus on $\mathrm{BERT}_{\mathrm{base}}$ . Since all representations we considered are linearly separable<sup>8</sup>, the number of clusters equals the number of labels. As a result, each cluster exclusively corresponds to one label. Going forward, we will use clusters and labels interchangeably.
161
+
162
+ Fine-tuning pushes each label far away from each other. This confirms the observation of Zhou and Srikumar (2021), who pointed out that fine-tuning pushes each label away from the others. However, they use the global minimum distance between clusters to support this argument, which only partially supports the claim: the distance between some clusters might decrease even though the global minimum distance increases.
163
+
164
+ We track the minimum distance of each label to all other labels during fine-tuning. We find that all the minimum distances are increasing. Figure 2 shows how these distances change in the last layer of $\mathrm{BERT}_{\mathrm{base}}$ for the PS-role and POS tagging tasks. Appendix D includes the plots for all tasks. For clarity, we only show the three labels where the distance increases the most, and the three where it increases the least. We also observe that although the trend is increasing, the minimum distance associated with a label may decrease during the course of fine-tuning, e.g., the label STUFF in PS-role task, suggesting a potential instability of fine-tuning.
165
+
166
+ ![](images/c890558cd5e1bc2f8be43dda01692196903ef57d894ab7b523f87f3c6f53bc1c.jpg)
167
+ Figure 2: The dynamics of the minimum distances of the three labels where the distance increases the most, and the three where it increases the least. The horizontal axis is the number of fine-tuning updates; the vertical axis is the chosen label's minimum distance to the other labels. These results come from the last layer of $\mathrm{BERT}_{\mathrm{base}}$ . Full plots for all four tasks can be found in Appendix D.
168
+
169
+ ![](images/dc1272d63b6970ab7b5c066012bde50cbdff244847b8c79c3b66bda79cc5c580.jpg)
170
+
171
+ ![](images/64aec6647c83eabbda8880f48eafdb345b40fa71bf06af27900a341a621708ad.jpg)
172
+ Figure 3: The PCA projection of the three closest labels in the POS tagging task, based on the first (left) and last (right) layer of $\mathrm{BERT}_{\mathrm{base}}$. The lines show the paths of the centroids of each label cluster during fine-tuning; the markers indicate the starting points. This figure is best seen in color.
173
+
174
+ To further see how labels move during fine-tuning, we track the centroids of each cluster. We select the three closest labels from the POS tagging task and track the paths of their cluster centroids in the last layer of $\mathrm{BERT}_{\mathrm{base}}$ during fine-tuning. Figure 3 (right) shows the 2D PCA projection of these paths. Before fine-tuning, the centroids of all three labels are close to each other; as fine-tuning proceeds, they move in different directions, away from each other.
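A minimal sketch of this centroid-trajectory visualization, assuming representation snapshots of the same examples are saved at each checkpoint (PCA is done via the SVD so only numpy is needed; names are ours):

```python
import numpy as np

def centroid_paths_2d(snapshots, y, track_labels):
    """Project each tracked label's centroid trajectory into 2D with PCA.
    `snapshots` is a list of (n, d) representation matrices, one per
    fine-tuning checkpoint, over the same n examples with labels `y`."""
    paths = {lab: np.stack([X[y == lab].mean(axis=0) for X in snapshots])
             for lab in track_labels}
    stacked = np.concatenate(list(paths.values()))
    mean = stacked.mean(axis=0)
    # Top-2 principal directions of the centered centroid matrix.
    _, _, vt = np.linalg.svd(stacked - mean, full_matrices=False)
    return {lab: (p - mean) @ vt[:2].T for lab, p in paths.items()}
```

Plotting each returned (num_checkpoints, 2) array as a line gives the paths shown in Figure 3.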
175
+
176
+ We conclude that fine-tuning enlarges the gaps between label clusters and admits more classifiers consistent with the labels, allowing for better generalization. Note that neither the loss nor the optimizer explicitly mandates this change. Indeed,
177
+
178
+ since the labels were originally linearly separable, the learner need not adjust the representation at all.
179
+
180
+ # 4.4 Cross-task Fine-tuning
181
+
182
+ In §4.3, we hypothesized that fine-tuning improves performance because it enlarges the gaps between label clusters. A natural corollary of this hypothesis is that fine-tuning may shrink the gaps between the labels of an unrelated task, hurting performance on that task. In this subsection, we investigate how fine-tuning on one task affects another.
183
+
184
+ We fine-tune $\mathrm{BERT}_{\mathrm{base}}$ on the PS-role and POS tagging tasks separately and use the fine-tuned models to generate contextualized representations for the PS-fxn task. Our choice of tasks is motivated by the observation that PS-role and PS-fxn are similar tasks: both predict supersense tags for prepositions. In contrast, POS tagging can adversely affect the PS-fxn task, because POS tagging requires all prepositions to be grouped together (label ADP), while PS-fxn requires different prepositions to be far apart. We apply DIRECTPROBE to both representations to analyze the geometric changes with respect to PS-fxn.
185
+
186
+ The effects of cross-task fine-tuning depend on how close the two tasks are. The third and fourth columns of Table 5 report the number of labels whose minimum distance increased or decreased after fine-tuning. The second column from the right shows the average distance change over all labels; e.g., fine-tuning on POS decreases the minimum distances of the PS-fxn labels by 1.68 on average. We observe that fine-tuning on the same dataset (PS-fxn) increases the distances between labels (second row), which is consistent with the observations from §4.3; fine-tuning on a similar task also increases the distances between clusters (third row), but to a lesser extent. However, fine-tuning on an "opposing" task decreases the distances between clusters (last row). These observations suggest that cross-task fine-tuning can add or remove information from the representation, depending on how close the source and target tasks are.
187
+
188
+ Small distances between label clusters indicate poor performance. Based on our conclusion in §4.3 that larger gaps between labels lead to better generalization, we expect that the performance
189
+
190
+ <table><tr><td>fine-tuning</td><td>probing</td><td>#inc</td><td>#dec</td><td>average inc</td><td>Acc</td></tr><tr><td>-</td><td>PS-fxn</td><td>-</td><td>-</td><td>-</td><td>87.75</td></tr><tr><td>PS-fxn</td><td>PS-fxn</td><td>40</td><td>0</td><td>5.29</td><td>89.58</td></tr><tr><td>PS-role</td><td>PS-fxn</td><td>27</td><td>13</td><td>1.02</td><td>88.53</td></tr><tr><td>POS</td><td>PS-fxn</td><td>0</td><td>40</td><td>-1.68</td><td>83.24</td></tr></table>
191
+
192
+ Table 5: Classification performance on the PS-fxn task using the last layer of $\mathrm{BERT}_{\mathrm{base}}$ after fine-tuning on different tasks. The first row corresponds to the untuned model. The third and fourth columns report the number of labels whose minimum distance increased or decreased after fine-tuning. The second-to-last column (average inc) shows the average change in minimum distance over all labels. The last column reports the probing accuracy.
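Given per-label minimum distances measured before and after fine-tuning, the #inc, #dec, and average-inc columns of Table 5 reduce to a few lines. A sketch (the dictionaries map each label to its minimum distance; the function name is ours):

```python
def distance_change_summary(before, after):
    """Count labels whose minimum distance increased or decreased, and
    the average change over all labels, as in Table 5's #inc, #dec,
    and average-inc columns."""
    deltas = [after[lab] - before[lab] for lab in before]
    n_inc = sum(d > 0 for d in deltas)
    n_dec = sum(d < 0 for d in deltas)
    return n_inc, n_dec, sum(deltas) / len(deltas)
```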
193
+
194
+ of PS-fxn after fine-tuning on PS-role would be higher than after fine-tuning on POS tagging. To verify this, we train two-layer neural networks on the PS-fxn task using the representations fine-tuned on the PS-role and POS tagging tasks. Importantly, we do not further fine-tune the representations for PS-fxn. The last column of Table 5 shows the results. Fine-tuning on PS-fxn enlarges the gaps between all PS-fxn labels, explaining the highest performance; fine-tuning on PS-role enlarges the gaps between some PS-fxn labels, leading to a slight improvement; fine-tuning on POS shrinks the gaps between all PS-fxn labels, leading to a decrease in performance.
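This frozen-representation probe can be sketched with scikit-learn; the hidden sizes and solver settings below are illustrative, not the paper's tuned values:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def probe_accuracy(train_X, train_y, test_X, test_y, hidden=(32, 32)):
    """Train a two-layer neural-network probe on frozen representations
    and report test accuracy; the representations are never updated."""
    clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=1000,
                        random_state=0)
    clf.fit(train_X, train_y)
    return clf.score(test_X, test_y)
```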
195
+
196
+ In summary, based on the results of §4.2, §4.3 and §4.4, we conclude that fine-tuning injects or removes task-related information from representations by adjusting the distances between label clusters, even when the original representation is linearly separable (i.e., when there is no need to change the representation at all). When the original representation does not support a linear classifier, fine-tuning tries to group points with the same label into a small number of clusters, ideally one.
197
+
198
+ # 4.5 Layer Behavior
199
+
200
+ Previous work (Merchant et al., 2020; Mosbach et al., 2020b) showed that, during fine-tuning, lower layers change little compared to higher layers. In the following experiments, we confirm their findings and further show that: (i) fine-tuning does not change the representation arbitrarily, even in the higher layers; and (ii) the changes across layers can be characterized by a visual comparison between lower and higher layers. Here, we focus on the POS tagging task with $\mathrm{BERT}_{\mathrm{base}}$. Our conclusions extend to the other tasks, whose results are in Appendix E.
201
+
202
+ Higher layers do not change arbitrarily. Although previous work (Mosbach et al., 2020b) shows that higher layers change more than lower layers, we find that the higher layers still remain close to the original representations. To study the dynamics of fine-tuning, we compare each layer during fine-tuning to its original pre-trained counterpart. The spatial similarity between two representations is calculated as the Pearson correlation coefficient of their distance vectors, as described in §2. Intuitively, a classifier learns a decision boundary that traverses the region between clusters, which makes the distances between clusters more relevant to our analysis than the spatial structure of points within each cluster.
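As a rough sketch of this similarity measure, here we use inter-centroid distances in place of DIRECTPROBE's inter-cluster distances; the function name is ours:

```python
import numpy as np

def spatial_similarity(X1, X2, y):
    """Pearson correlation between the pairwise label-centroid distance
    vectors of two representations of the same labeled examples."""
    def dist_vector(X):
        cents = np.stack([X[y == lab].mean(axis=0) for lab in np.unique(y)])
        d = np.linalg.norm(cents[:, None] - cents[None, :], axis=-1)
        return d[np.triu_indices(len(cents), k=1)]  # strict upper triangle
    return np.corrcoef(dist_vector(X1), dist_vector(X2))[0, 1]
```

A value near 1 means the relative positions of the label clusters are preserved, even if the space as a whole has been stretched or shifted.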
203
+
204
+ Figure 4 shows the results for all four tasks.<sup>10</sup> To avoid visual clutter, we only show the plots for every alternate layer. For the higher layers, we find that the Pearson correlation coefficient between the original representation and the fine-tuned one is surprisingly high (more than 0.5), reinforcing the notion that fine-tuning does not change the representation arbitrarily. Instead, it attempts to preserve the relative positions of the labels. That is, fine-tuning encodes task-specific information while largely preserving the pre-trained information already encoded in the representation.
205
+
206
+ ![](images/e39d307eda2acf045937543c6cc2db55d4798dc902368113825e4d92c8d459f6.jpg)
207
+ Figure 4: Dynamics of spatial similarity during the fine-tuning process based on $\mathrm{BERT}_{\mathrm{base}}$. The horizontal axis is the number of updates during fine-tuning. The vertical axis is the Pearson correlation coefficient between the current space and its original version (before fine-tuning).
208
+
209
+ ![](images/5b2709d30b31ac29e65318279a9a6472c464f0810e5301d84e9151ea2e436ee9.jpg)
210
+
211
+ The labels in lower layers move only within a small region and in almost the same directions. The relatively unchanged nature of lower layers raises the question: do they change at all? To answer this question, for every label, we compute the difference between its centroids before and after fine-tuning.
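These per-label difference vectors are straightforward to compute. A sketch, assuming `X_before` and `X_after` hold the same examples' representations before and after fine-tuning (names are ours):

```python
import numpy as np

def centroid_shifts(X_before, X_after, y):
    """Per-label difference vectors between a label's centroid after and
    before fine-tuning; these are the vectors PCA-projected in Figure 5."""
    return {lab: X_after[y == lab].mean(axis=0)
                 - X_before[y == lab].mean(axis=0)
            for lab in np.unique(y).tolist()}
```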
212
+
213
+ ![](images/0c779e05cc762bbb95a7a30f58fb19764920a67ef319554eaea55501846de0bf.jpg)
214
+ Figure 5: The PCA projection of the difference vector between the centroids of labels before and after finetuning based on POS tagging task and $\mathrm{BERT}_{\mathrm{base}}$ . Lower layers have a much smaller projection range than the higher layers. This figure is best seen in color.
215
+
216
+ Figure 5 shows the 2D PCA projection of these difference vectors. For brevity, we only present the plots for every alternate layer; plots for all layers can be found in Appendix E. We observe that the movements of labels in the lower layers concentrate in a few directions compared to the higher layers, suggesting that the lower layers do change, but do not separate the labels as much as the higher layers do. We also observe that the labels INTJ and SYM move in distinctive directions in the lower layers.
217
+
218
+ Note that, in Figure 5, the motion range of the lower layers is much smaller than that of the higher layers. The two projected dimensions range from $-1$ to 3 and from $-3$ to 3 for layer 2, while for layer 12 they range from $-12$ to 13 and from $-12$ to 8, indicating that labels in the lower layers move only within a small region. Figure 3 shows an example of this difference: compared to the layer-12 paths (right), the layer-1 paths (left) traverse almost identical trajectories, which is consistent with the observations from Figure 5.
219
+
220
+ # 5 Discussion
221
+
222
+ Does fine-tuning always improve performance? Fine-tuning almost always improves task performance, but rare cases exist where it decreases performance. Fine-tuning introduces a divergence between the training set and unseen examples (§4.1). However, it is unclear how this divergence affects the generalization ability of representations; e.g., does it suggest a new kind of overfitting that is driven by representations rather than classifiers?
223
+
224
+ How does fine-tuning alter the representation to adjust for downstream tasks? Fine-tuning alters the representation by grouping points with the
225
+
226
+ same label into a small number of clusters (§4.2) and pushing each label cluster away from the others (§4.3). We hypothesize that the distances between label clusters correlate with classification performance, and we confirm this hypothesis by investigating cross-task fine-tuning (§4.4). Our findings are surprising because fine-tuning for a classification task need not alter the geometry of a representation at all if the data is already linearly separable in the original representation. What we observe reveals geometric properties that characterize good representations. We do not provide a theoretical analysis connecting our geometric findings to representation learnability, but they may serve as a starting point for a learning theory of representations.
227
+
228
+ How does fine-tuning change the underlying geometric structure of different layers? It is established that higher layers change more than the lower ones. In this work, we analyze this behavior more closely and discover that higher layers do not change arbitrarily; instead, they remain similar to their untuned versions. Informally, fine-tuning only "slightly" changes even the higher layers (§4.5). Nevertheless, our analysis does not reveal why higher layers change more than lower layers; a deeper analysis of model parameters during fine-tuning is needed to understand this difference.
229
+
230
+ Limitations of this work. Our experiments use the BERT family of models on English tasks. Given the architectural similarity of transformer language models, we may be able to extrapolate our results to other models, but further work is needed to extend our findings to other languages and model architectures. In our analysis, we ignore the structure within each cluster, which is another source of information for studying representations; we plan to investigate these aspects in future work. We make our code available for replication and extension by the community.
231
+
232
+ # 6 Related Work
233
+
234
+ There are many lines of work that focus on analyzing and understanding representations. The most commonly used technique is the classifier-based method. Early work (Alain and Bengio, 2017; Kulmizev et al., 2020) starts with using linear classifiers as the probe. Hewitt and Liang (2019) pointed out that a linear probe is not sufficient to evaluate a representation. Some recent work
235
+
236
+ also employs non-linear probes (Tenney et al., 2019; Eger et al., 2019). There are also efforts to inspect representations from a geometric perspective (e.g., Ethayarajh, 2019; Mimno and Thompson, 2017), including the recently proposed DIRECTPROBE (Zhou and Srikumar, 2021), which we use in this work. Another line of probing work designs control tasks (Ravichander et al., 2021; Lan et al., 2020) to reverse-engineer the internal mechanisms of representations (Kovaleva et al., 2019; Wu et al., 2020). However, in contrast to our work, most studies (Zhong et al., 2021; Li et al., 2021; Chen et al., 2021) focus on pre-trained representations, not fine-tuned ones.
237
+
238
+ While fine-tuning pre-trained representations usually provides strong empirical performance (Wang et al., 2018; Talmor et al., 2020), how fine-tuning manages to do so has remained an open question. Moreover, the instability (Mosbach et al., 2020a; Dodge et al., 2020; Zhang et al., 2020) and forgetting problems (Chen et al., 2020; He et al., 2021) make fine-tuned representations harder to analyze. Despite these difficulties, previous work (Merchant et al., 2020; Mosbach et al., 2020b; Hao et al., 2020) draws valuable conclusions about fine-tuning. This work extends this line of effort and provides a deeper understanding of how fine-tuning changes representations.
239
+
240
+ # 7 Conclusions
241
+
242
+ In this work, we take a close look at how fine-tuning a contextualized representation for a task modifies it. We investigate the fine-tuned representations of several BERT models using two probing techniques: classifier-based probing and DIRECTPROBE. First, we show that fine-tuning introduces a divergence between the training and test sets, and in at least one case hurts generalization. Next, we show that fine-tuning alters the geometry of a representation by pushing points belonging to the same label closer to each other and pushing label clusters apart, thereby admitting simpler and better classifiers. We confirm this hypothesis with cross-task fine-tuning experiments. Finally, we discover that while adjusting representations to downstream tasks, fine-tuning largely preserves the original spatial structure of points across all layers. Taken collectively, the empirical study presented in this work not only helps justify the impressive performance of fine-tuning, but may also lead to a better understanding of learned representations.
243
+
244
+ # Acknowledgments
245
+
246
+ We thank the ARR reviewers and the Utah NLP group for their constructive feedback. This work is partially supported by NSF grants #1801446 (SaTC) and #1822877 (Cyberlearning), and a generous gift from Verisk Inc.
247
+
248
+ # References
249
+
250
+ Armen Aghajanyan, Sonal Gupta, and Luke Zettlemoyer. 2021. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7319-7328, Online. Association for Computational Linguistics.
251
+ Guillaume Alain and Yoshua Bengio. 2017. Understanding intermediate layers using linear classifier probes. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. OpenReview.net.
252
+ Yonatan Belinkov. 2021. Probing classifiers: Promises, shortcomings, and alternatives. CoRR, abs/2102.12452.
253
+ Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1-27.
254
+ Boli Chen, Yao Fu, Guangwei Xu, Pengjun Xie, Chuanqi Tan, Mosha Chen, and Liping Jing. 2021. Probing BERT in hyperbolic spaces. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
255
+ Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, and Xiangzhan Yu. 2020. Recall and learn: Fine-tuning deep pretrained language models with less forgetting. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7870-7881, Online. Association for Computational Linguistics.
256
+ Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single \$&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Australia. Association for Computational Linguistics.
257
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of
258
+
259
+ the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
260
+ Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah A. Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. CoRR, abs/2002.06305.
261
+ Steffen Eger, Andreas Rückle, and Iryna Gurevych. 2019. Pitfalls in the evaluation of sentence embeddings. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 55–60, Florence, Italy. Association for Computational Linguistics.
262
+ Kawin Ethayarajh. 2019. How contextual are contextualized word representations? comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics.
263
+ Yaru Hao, Li Dong, Furu Wei, and Ke Xu. 2020. Investigating learning dynamics of BERT fine-tuning. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 87-92, Suzhou, China. Association for Computational Linguistics.
264
+ Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, and Fuchun Peng. 2021. Analyzing the forgetting problem in pretrain-finetuning of open-domain dialogue response models. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1121-1133, Online. Association for Computational Linguistics.
265
+ John Hewitt, Kawin Ethayarajh, Percy Liang, and Christopher D. Manning. 2021. Conditional probing: measuring usable information beyond a baseline. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 1626-1639. Association for Computational Linguistics.
266
+ John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Linguistics.
267
+ Ganesh Jawahar, Benoit Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of
268
+
269
+ language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.
270
+ Mael Jullien, Marco Valentino, and André Freitas. 2022. Do transformers encode a foundational ontology? probing abstract classes in natural language. CoRR, abs/2201.10262.
271
+ Nora Kassner and Hinrich Schütze. 2020. Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811-7818, Online. Association for Computational Linguistics.
272
+ Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, and Ellie Pavlick. 2019. Probing what different NLP tasks teach machines about function word comprehension. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM* 2019), pages 235-249, Minneapolis, Minnesota. Association for Computational Linguistics.
273
+ Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
274
+ Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4365-4374, Hong Kong, China. Association for Computational Linguistics.
275
+ Katarzyna Krasnowska-Kieras and Alina Wroblewska. 2019. Empirical linguistic study of sentence embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5729-5739, Florence, Italy. Association for Computational Linguistics.
276
+ Artur Kulmizev, Vinit Ravishankar, Mostafa Abdou, and Joakim Nivre. 2020. Do neural language models show preferences for syntactic formalisms? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4077-4091, Online. Association for Computational Linguistics.
277
+ Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
278
+
279
+ Bai Li, Zining Zhu, Guillaume Thomas, Yang Xu, and Frank Rudzicz. 2021. How is BERT surprised? layerwise detection of linguistic anomalies. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4215-4228, Online. Association for Computational Linguistics.
280
+ Xin Li and Dan Roth. 2002. Learning question classifiers. In COLING 2002: The 19th International Conference on Computational Linguistics.
281
+ Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073-1094, Minneapolis, Minnesota. Association for Computational Linguistics.
282
+ Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, and Ian Tenney. 2020. What happens to BERT embeddings during fine-tuning? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 33-44, Online. Association for Computational Linguistics.
283
+ David Mimno and Laure Thompson. 2017. The strange geometry of skip-gram with negative sampling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2873–2878, Copenhagen, Denmark. Association for Computational Linguistics.
284
+ Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2020a. On the stability of fine-tuning BERT: misconceptions, explanations, and strong baselines. CoRR, abs/2006.04884.
285
+ Marius Mosbach, Anna Khokhlova, Michael A. Hedderich, and Dietrich Klakow. 2020b. On the interplay between fine-tuning and sentence-level probing for linguistic knowledge in pre-trained transformers. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 68-82, Online. Association for Computational Linguistics.
286
+ Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666.
287
+ Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang,
288
+
289
+ Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.
290
+ F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.
291
+ Christian S Perone, Roberto Silveira, and Thomas S Paula. 2018. Evaluation of sentence embeddings in downstream and linguistic probing tasks. arXiv preprint arXiv:1806.06259.
292
+ Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7-14, Florence, Italy. Association for Computational Linguistics.
293
+ Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bowman. 2020. Intermediate-task transfer learning with pretrained language models: When and why does it work? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5231-5247, Online. Association for Computational Linguistics.
294
+ Abhilasha Ravichander, Yonatan Belinkov, and Eduard Hovy. 2021. Probing the probing paradigm: Does probing accuracy entail task relevance? In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3363-3377, Online. Association for Computational Linguistics.
295
+ Nathan Schneider, Jena D. Hwang, Vivek Srikumar, Jakob Prange, Austin Blodgett, Sarah R. Moeller, Aviram Stern, Adi Bitan, and Omri Abend. 2018. Comprehensive supersense disambiguation of English prepositions and possessives. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 185-196, Melbourne, Australia. Association for Computational Linguistics.
296
+ Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2020. oLMpics - on what language model pre-training captures. Trans. Assoc. Comput. Linguistics, 8:743-758.
297
+ Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? probing for sentence structure in contextualized word representations. In 7th International
298
+
299
+ Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
300
+
301
+ Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: On the importance of pre-training compact models. arXiv preprint arXiv:1908.08962.
302
+
303
+ Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do NLP models know numbers? probing numeracy in embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5307-5315, Hong Kong, China. Association for Computational Linguistics.
304
+
305
+ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.
306
+
307
+ William F. Whitney, Min Jae Song, David Brandfonbrener, Jaan Altosaar, and Kyunghyun Cho. 2021. Evaluating representations by the complexity of learning low-loss predictors. In Neural Compression: From Information Theory to Applications - Workshop @ ICLR 2021.
308
+
309
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
310
+
311
+ Zhiyong Wu, Yun Chen, Ben Kao, and Qun Liu. 2020. Perturbed masking: Parameter-free probing for analyzing and interpreting BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4166-4176, Online. Association for Computational Linguistics.
312
+
313
+ Yadollah Yaghoobzadeh, Katharina Kann, T. J. Hazen, Eneko Agirre, and Hinrich Schütze. 2019. Probing for semantic classes: Diagnosing the meaning content of word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5740-5753, Florence, Italy. Association for Computational Linguistics.
314
+
315
+ Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q Weinberger, and Yoav Artzi. 2020. Revisiting few-sample bert fine-tuning. arXiv preprint arXiv:2006.05987.
316
+
317
+ Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021. Factual probing is [MASK]: Learning vs. learning to recall. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5017-5033, Online. Association for Computational Linguistics.
318
+
319
+ Yichu Zhou and Vivek Srikumar. 2021. DirectProbe: Studying representations without classifiers. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5070-5083, Online. Association for Computational Linguistics.
320
+
321
+ # A Fine-tuning Details
322
+
323
+ In this work, we fine-tune all tasks and representations using the HuggingFace library. We use a linear weight scheduler with a learning rate of $3 \times 10^{-4}$, where $10\%$ of the total update steps serve as warmup steps. The same scheduler is used for all tasks. All models are optimized by Adam (Kingma and Ba, 2015) with a batch size of 32. All fine-tuning is run on a single Titan GPU. The best hidden-layer sizes for each task are shown in Table 7.
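The schedule described above (linear warmup over the first 10% of updates, then linear decay) can be sketched as a plain function. This is an illustrative sketch, not the paper's code: the function name `linear_schedule_lr` is ours, and in practice the HuggingFace scheduler would be used.

```python
def linear_schedule_lr(step, total_steps, base_lr=3e-4, warmup_frac=0.1):
    """Learning rate at `step`: linear warmup over the first `warmup_frac`
    of updates, then linear decay down to zero."""
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        # warmup: ramp from 0 up to base_lr
        return base_lr * step / warmup_steps
    # decay: ramp from base_lr down to 0 over the remaining steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```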
324
+
325
+ # B Summary of Tasks
326
+
327
+ In this work, we conduct experiments on five NLP tasks, which are chosen to cover different usages of the representations we study. Table 6 summarizes these tasks.
328
+
329
+ # C Probing Performance
330
+
331
+ Table 7 shows the complete table of probing results in our experiments. The last column is the spatial similarity between the training set and test set. Some entries are missing because the similarity can only be computed on the representations that are linearly separable for the given task.
332
+
333
+ # D Dynamics of Minimum Distances
334
+
335
+ Figure 6 shows the dynamics of the minimum distances for labels on all four tasks. For clarity, we only present the distances for the three labels whose distances increase the most and the three whose distances decrease the most.
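As a rough sketch of the quantity tracked here, the minimum distance from one label's points to the points of all other labels can be computed as follows. This is a simplification of the cluster distances produced by DirectProbe; the function name is ours.

```python
import numpy as np

def min_distance_to_other_labels(X, y, label):
    """Smallest Euclidean distance from any point carrying `label`
    to any point carrying a different label."""
    A = X[y == label]   # points of the chosen label
    B = X[y != label]   # points of all other labels
    # all pairwise distances via broadcasting, then take the minimum
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float(d.min())
```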
336
+
337
+ # E PCA Projections of the Movements
338
+
339
+ Figures 7-10 show the PCA projections of the difference vector between the centroids of labels before and after fine-tuning based on $\mathrm{BERT}_{\mathrm{base}}$ .
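A minimal sketch of how such projections can be produced, assuming numpy arrays of token representations before and after fine-tuning; the function names are ours, not the paper's code.

```python
import numpy as np

def centroid_shifts(X_before, X_after, y):
    """Per-label difference vectors between centroids after vs. before fine-tuning."""
    labels = np.unique(y)
    V = np.stack([X_after[y == l].mean(axis=0) - X_before[y == l].mean(axis=0)
                  for l in labels])
    return V, labels

def pca_project(V, k=2):
    """Project the row vectors of V onto their top-k principal components."""
    Vc = V - V.mean(axis=0)                      # center the vectors
    _, _, Vt = np.linalg.svd(Vc, full_matrices=False)
    return Vc @ Vt[:k].T                         # coordinates in PC space
```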
340
+
341
+ <table><tr><td>Task</td><td>#Training</td><td>#Test</td><td>#Labels</td><td>Token-based</td><td>Sentence-based</td><td>Pair-wise</td><td>Semantic</td><td>Syntax</td></tr><tr><td>Supersense-role</td><td>4282</td><td>457</td><td>47</td><td>✓</td><td></td><td></td><td>✓</td><td></td></tr><tr><td>Supersense-function</td><td>4282</td><td>457</td><td>40</td><td>✓</td><td></td><td></td><td>✓</td><td></td></tr><tr><td>POS</td><td>16860</td><td>4323</td><td>17</td><td>✓</td><td></td><td></td><td></td><td>✓</td></tr><tr><td>Dependency Relation</td><td>16054</td><td>4122</td><td>46</td><td></td><td></td><td>✓</td><td></td><td>✓</td></tr><tr><td>TREC-50</td><td>5452</td><td>500</td><td>50</td><td></td><td>✓</td><td></td><td>✓</td><td></td></tr></table>
342
+
343
+ Table 6: Statistics of the five tasks with their different characteristics.
344
+
345
+ ![](images/d5d49ba769e3a1940f3b383805817e685504b72b692f23c095a0ee0764fbda3b.jpg)
346
+
347
+ ![](images/a259b66260f0948d8640bce203d6a90a60d27bd4f3adbed42cc43b74ccc05d7a.jpg)
348
+
349
+ ![](images/210f94fbe17b2868ecd8466e5569a82b34915999cf1c2d86454af757d7c0bef6.jpg)
350
+ Figure 6: The dynamics of the minimum distance for the three labels where the distance increases the most and the three labels where it increases the least. The horizontal axis is the number of fine-tuning updates; the vertical axis is the chosen label's minimum distance to other labels. These results come from the last layer of $\mathrm{BERT}_{\mathrm{base}}$.
351
+
352
+ ![](images/796094fbffc7337130771bd4dcf2f2a87ee358da06c4c48dd9bd321103d8eaee.jpg)
353
+
354
+ ![](images/067d5c53579856b2d7cc62c97b26705ded04df7b37e2c37b7711016e79dbbb69.jpg)
355
+
356
+ ![](images/fba067cc6bd96f9f094cc81bd86040f1696f980541ddba4e9bffdb0fc1058f87.jpg)
357
+
358
+ ![](images/6c0ba7fab09e1e37fe20b3a92b1d2b222e7fee45c0967a9bbc50ca14f3796008.jpg)
359
+
360
+ ![](images/1ab9dbfcb518beabcdf12b1086982199f7a8dd0e8d030e718ca3aa03b2723dc0.jpg)
361
+
362
+ ![](images/91c13d618668bcf36c4887fec3af75e1e0ec42f1ef3695788251512349381a49.jpg)
363
+
364
+ ![](images/45f713234a74882c2c47e19af056f910cc6b2c40b7f632843ef0527b66327297.jpg)
365
+
366
+ ![](images/85accce6038a40b84be26839d853c7d32dd42a7cc3f2685649ddf08647710c99.jpg)
367
+
368
+ ![](images/38bf42979fa88063c462c2dda666c36b0c426b39cdd85a0b09c4aa33188c02a8.jpg)
369
+
370
+ ![](images/f09950216f112a8dbc0c97cf35cc827e23c290b1a9ddd912854d7982d8073b07.jpg)
371
+
372
+ ![](images/d4e4b72c26ebde463159f975fbcb2c3356629503221979f226ba1586f709cbb8.jpg)
373
+ Figure 7: The PCA projection of the difference vector between the centroids of labels before and after fine-tuning based on POS tagging task and $\mathrm{BERT}_{\mathrm{base}}$ .
374
+
375
+ ![](images/27bbaf03474ac96f40e2c71eff3b649ab2f63f42ef1b85284e919fcb94b520fb.jpg)
376
+
377
+ <table><tr><td>Representations</td><td>Task</td><td></td><td>Acc</td><td>Std</td><td>Best Layer Size</td><td>#Cluster</td><td>is Linear</td><td>Similarity</td></tr><tr><td rowspan="10">BERTtiny</td><td rowspan="2">POS</td><td>original</td><td>90.76</td><td>0.24</td><td>(256, 64)</td><td>30</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>91.67</td><td>0.29</td><td>(64, 64)</td><td>18</td><td>N</td><td>-</td></tr><tr><td rowspan="2">DEP</td><td>original</td><td>86.74</td><td>0.22</td><td>(256, 256)</td><td>50</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>89.04</td><td>0.20</td><td>(256, 256)</td><td>46</td><td>Y</td><td>0.88</td></tr><tr><td rowspan="2">PS-fxn</td><td>original</td><td>74.14</td><td>1.42</td><td>(256, 256)</td><td>42</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>74.40</td><td>0.68</td><td>(256, 128)</td><td>40</td><td>Y</td><td>0.72</td></tr><tr><td rowspan="2">PS-role</td><td>original</td><td>58.38</td><td>0.78</td><td>(256, 64)</td><td>46</td><td>Y</td><td>0.76</td></tr><tr><td>fine-tuned</td><td>60.31</td><td>0.29</td><td>(64, 64)</td><td>46</td><td>Y</td><td>0.70</td></tr><tr><td rowspan="2">TREC-50</td><td>original</td><td>68.12</td><td>0.82</td><td>(256, 256)</td><td>58</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>84.04</td><td>0.93</td><td>(256, 256)</td><td>51</td><td>N</td><td>-</td></tr><tr><td rowspan="10">BERTmini</td><td rowspan="2">POS</td><td>original</td><td>93.81</td><td>0.10</td><td>(256, 32)</td><td>19</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>94.91</td><td>0.03</td><td>(256, 32)</td><td>17</td><td>Y</td><td>0.70</td></tr><tr><td rowspan="2">DEP</td><td>original</td><td>91.82</td><td>0.09</td><td>(256, 128)</td><td>46</td><td>Y</td><td>0.93</td></tr><tr><td>fine-tuned</td><td>93.55</td><td>0.07</td><td>(256, 128)</td><td>46</td><td>Y</td><td>0.86</td></tr><tr><td rowspan="2">PS-fxn</td><td>original</td><td>82.45</td><td>1.07</td><td>(256, 
256)</td><td>40</td><td>Y</td><td>0.77</td></tr><tr><td>fine-tuned</td><td>84.25</td><td>0.39</td><td>(256, 128)</td><td>40</td><td>Y</td><td>0.53</td></tr><tr><td rowspan="2">PS-role</td><td>original</td><td>68.05</td><td>1.08</td><td>(256, 256)</td><td>46</td><td>Y</td><td>0.81</td></tr><tr><td>fine-tuned</td><td>71.90</td><td>1.06</td><td>(256, 64)</td><td>46</td><td>Y</td><td>0.59</td></tr><tr><td rowspan="2">TREC-50</td><td>original</td><td>74.12</td><td>1.25</td><td>(256, 256)</td><td>52</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>88.36</td><td>0.50</td><td>(64, 32)</td><td>52</td><td>N</td><td>-</td></tr><tr><td rowspan="10">BERTsmall</td><td rowspan="2">POS</td><td>original</td><td>94.26</td><td>0.13</td><td>(256, 32)</td><td>17</td><td>Y</td><td>0.96</td></tr><tr><td>fine-tuned</td><td>95.43</td><td>0.06</td><td>(128, 64)</td><td>17</td><td>Y</td><td>0.72</td></tr><tr><td rowspan="2">DEP</td><td>original</td><td>92.93</td><td>0.14</td><td>(256, 64)</td><td>46</td><td>Y</td><td>0.93</td></tr><tr><td>fine-tuned</td><td>94.48</td><td>0.14</td><td>(256, 64)</td><td>46</td><td>Y</td><td>0.78</td></tr><tr><td rowspan="2">PS-fxn</td><td>original</td><td>86.26</td><td>0.54</td><td>(256, 256)</td><td>40</td><td>Y</td><td>0.82</td></tr><tr><td>fine-tuned</td><td>85.08</td><td>0.35</td><td>(256, 256)</td><td>40</td><td>Y</td><td>0.44</td></tr><tr><td rowspan="2">PS-role</td><td>original</td><td>74.22</td><td>1.03</td><td>(256, 256)</td><td>46</td><td>Y</td><td>0.84</td></tr><tr><td>fine-tuned</td><td>74.57</td><td>0.61</td><td>(128, 128)</td><td>46</td><td>Y</td><td>0.54</td></tr><tr><td rowspan="2">TREC-50</td><td>original</td><td>81.32</td><td>0.61</td><td>(256, 128)</td><td>52</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>89.60</td><td>0.22</td><td>(256, 64)</td><td>51</td><td>N</td><td>-</td></tr><tr><td rowspan="10">BERTmedium</td><td rowspan="2">POS</td><td>original</td><td>94.40</td><td>0.08</td><td>(256, 
128)</td><td>17</td><td>Y</td><td>0.97</td></tr><tr><td>fine-tuned</td><td>95.56</td><td>0.05</td><td>(64, 32)</td><td>17</td><td>Y</td><td>0.67</td></tr><tr><td rowspan="2">DEP</td><td>original</td><td>92.54</td><td>0.14</td><td>(256, 256)</td><td>46</td><td>Y</td><td>0.94</td></tr><tr><td>fine-tuned</td><td>94.76</td><td>0.20</td><td>(128, 128)</td><td>46</td><td>Y</td><td>0.79</td></tr><tr><td rowspan="2">PS-fxn</td><td>original</td><td>86.56</td><td>0.41</td><td>(256, 128)</td><td>40</td><td>Y</td><td>0.80</td></tr><tr><td>fine-tuned</td><td>88.45</td><td>0.45</td><td>(128, 256)</td><td>40</td><td>Y</td><td>0.59</td></tr><tr><td rowspan="2">PS-role</td><td>original</td><td>76.28</td><td>1.00</td><td>(256, 32)</td><td>46</td><td>Y</td><td>0.83</td></tr><tr><td>fine-tuned</td><td>78.86</td><td>0.58</td><td>(128, 128)</td><td>46</td><td>Y</td><td>0.58</td></tr><tr><td rowspan="2">TREC-50</td><td>original</td><td>80.68</td><td>1.16</td><td>(256, 64)</td><td>52</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>89.80</td><td>0.33</td><td>(32, 64)</td><td>52</td><td>N</td><td>-</td></tr><tr><td rowspan="10">BERTbase</td><td rowspan="2">POS</td><td>original</td><td>93.39</td><td>0.31</td><td>(256, 128)</td><td>17</td><td>Y</td><td>0.97</td></tr><tr><td>fine-tuned</td><td>95.68</td><td>0.02</td><td>(128, 64)</td><td>17</td><td>Y</td><td>0.70</td></tr><tr><td rowspan="2">DEP</td><td>original</td><td>89.39</td><td>0.08</td><td>(256, 128)</td><td>46</td><td>Y</td><td>0.92</td></tr><tr><td>fine-tuned</td><td>94.76</td><td>0.05</td><td>(64, 256)</td><td>46</td><td>Y</td><td>0.76</td></tr><tr><td rowspan="2">PS-fxn</td><td>original</td><td>87.75</td><td>0.41</td><td>(256, 128)</td><td>40</td><td>Y</td><td>0.84</td></tr><tr><td>fine-tuned</td><td>89.58</td><td>0.67</td><td>(32, 256)</td><td>40</td><td>Y</td><td>0.57</td></tr><tr><td rowspan="2">PS-role</td><td>original</td><td>74.49</td><td>0.84</td><td>(256, 
128)</td><td>46</td><td>Y</td><td>0.82</td></tr><tr><td>fine-tuned</td><td>81.14</td><td>0.26</td><td>(256, 128)</td><td>46</td><td>Y</td><td>0.52</td></tr><tr><td rowspan="2">TREC-50</td><td>original</td><td>85.24</td><td>0.85</td><td>(256, 128)</td><td>52</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>90.36</td><td>0.32</td><td>(64, 32)</td><td>51</td><td>N</td><td>-</td></tr></table>
378
+
379
+ Table 7: A complete table of the probing results of five representations on five tasks.
380
+
381
+ ![](images/0ef249a23407bd3ee6a49d4c179bbd5c5253bfd5f872bdd3d5de7fa6841074a4.jpg)
382
+
383
+ ![](images/d641e9197b86dd0c152478194aa466a66334359ee031df3cd0b6c1c3eb7fc347.jpg)
384
+
385
+ ![](images/96aab14fb7e1e8e732ac3f2dfc875633d01d1b38f4e6361d6dce4d2e867a1fde.jpg)
386
+
387
+ <table><tr><td>acl</td><td>expl</td></tr><tr><td>acl:relcl</td><td>fixed</td></tr><tr><td>advcl</td><td>flat</td></tr><tr><td>advmod</td><td>goeswith</td></tr><tr><td>amod</td><td>iobj</td></tr><tr><td>appos</td><td>mark</td></tr><tr><td>aux</td><td>nmod</td></tr><tr><td>aux:pass</td><td>nmod:npmod</td></tr><tr><td>case</td><td>nmod:poss</td></tr><tr><td>cc</td><td>nmod:tmod</td></tr><tr><td>cc:preconj</td><td>nsubj</td></tr><tr><td>ccomp</td><td>nsubj:pass</td></tr><tr><td>compound</td><td>nummod</td></tr><tr><td>compound:prt</td><td>obj</td></tr><tr><td>conj</td><td>obl</td></tr><tr><td>cop</td><td>obl:npmod</td></tr><tr><td>csubj</td><td>obl:tmod</td></tr><tr><td>csubj:pass</td><td>orphan</td></tr><tr><td>dep</td><td>parataxis</td></tr><tr><td>det</td><td>punct</td></tr><tr><td>det:predet</td><td>reparandum</td></tr><tr><td>discourse</td><td>vocative</td></tr><tr><td>dislocated</td><td>xcomp</td></tr></table>
388
+
389
+ ![](images/3f008914d76f038becde7b66a0f2444bf3109ef458192a090f6c27414c206b22.jpg)
390
+
391
+ ![](images/d967a40dc3f3ad90b64d7291d491392309c9dd4e3d480890b496cc67c49856c6.jpg)
392
+
393
+ ![](images/395639ce1a4614c0babd3056d4ab53d6ba5ff6c39bbbde9e12c20b78c53317c2.jpg)
394
+
395
+ ![](images/5f84b51b6ee8a464915b151fe9835fe8204ae3bacc9d7efd3536e917659b5deb.jpg)
396
+
397
+ ![](images/43903fa1be25c8b11f2cc0a422bb1badffc8d8689e67fa7616bc388142947875.jpg)
398
+
399
+ ![](images/e9416e7298c720327d2933114e1cb01a16348ae4a12e7a081487a0ad038e9f03.jpg)
400
+
401
+ ![](images/6dff42346715a590239e8f27171dc136e1e2ff76da7345454f277f5b9c2c94c1.jpg)
402
+ Figure 8: The PCA projection of the difference vector between the centroids of labels before and after fine-tuning based on dependency prediction task and $\mathrm{BERT}_{\mathrm{base}}$ .
403
+
404
+ ![](images/b42995037f57fad0dd6a735f2e946d3e61a7c14c6d071a9777be14afc4ecb690.jpg)
405
+
406
+ ![](images/a23557583b98dc80a6cba768c6a81860211390606b9848752373eee1cde6c1be.jpg)
407
+
408
+ ![](images/82863353807a27ef4f36140d3a23f193939e4fa72077bc129d7783cda71a834c.jpg)
409
+
410
+ ![](images/f4a70bd0c70814149f78535eed76f87500e5f7d85de6ac06b7b3bb6f711c5915.jpg)
411
+
412
+ ![](images/0da861d4b7978095b33481e98922a068f6e1298fe2597eece5e5527a8c9bc351.jpg)
413
+
414
+ <table><tr><td>Accompanier</td><td>Instrument</td></tr><tr><td>Agent</td><td>Interval</td></tr><tr><td>Approximator</td><td>Locus</td></tr><tr><td>Beneficiary</td><td>Manner</td></tr><tr><td>Causer</td><td>Means</td></tr><tr><td>Characteristic</td><td>PartPortion</td></tr><tr><td>Circumstance</td><td>Path</td></tr><tr><td>Co-Agent</td><td>Possession</td></tr><tr><td>Co-Theme</td><td>Possessor</td></tr><tr><td>ComparisonRef</td><td>Purpose</td></tr><tr><td>Cost</td><td>Quantity</td></tr><tr><td>Direction</td><td>RateUnit</td></tr><tr><td>Duration</td><td>Source</td></tr><tr><td>EndTime</td><td>Species</td></tr><tr><td>Explanation</td><td>StartTime</td></tr><tr><td>Extent</td><td>Stuff</td></tr><tr><td>Frequency</td><td>Theme</td></tr><tr><td>Gestalt</td><td>Time</td></tr><tr><td>Goal</td><td>Topic</td></tr><tr><td>Identity</td><td>Whole</td></tr></table>
415
+
416
+ ![](images/91c0e6ff0dda9ade24c20a5899629fe54c4674d95569cbf0017b8944aa788270.jpg)
417
+
418
+ ![](images/56f536e357478f5af935a5424465fbc505a0b557d6d160f486d1b97a486a3df6.jpg)
419
+
420
+ ![](images/3672c2c3b731a484e58ee282e295db034db5a2e8dc06db9836fb4f6e44f88a71.jpg)
421
+
422
+ ![](images/07623ec1d9885aac05f74f96b8c0ab5bc922cbcd6f8b168981e8a7d2f7f10909.jpg)
423
+
424
+ ![](images/80eda48a727a7bb1f0843c9632cd10257983440ba525789c5762c17498ee73aa.jpg)
425
+
426
+ ![](images/850a589d649259acf3f47e8350d3eb7ede86b70f3656e40ce7d7d8abb5612554.jpg)
427
+
428
+ ![](images/8215f8202c10932ffee03234ca154c54999a9e3faa866874c4dc1fd6614546c1.jpg)
429
+ Figure 9: The PCA projection of the difference vector between the centroids of labels before and after fine-tuning based on Supersense function task and $\mathrm{BERT}_{\mathrm{base}}$.
430
+
431
+ ![](images/59c1333424427cfe345c9336efb6dc440291e89cc7c81a5bbd7ca9867a0f3783.jpg)
432
+
433
+ ![](images/bf6c4f2686232a4e30d84d234964a718209d10b63fcf10200ef2932ab02692ff.jpg)
434
+
435
+ ![](images/ade8ad6158fa09ac079db9b4e3971474c43e6608c151ad317b5e681c67f466be.jpg)
436
+
437
+ ![](images/2af68a25f61f3913929bcfefac67ffaf44d82d364b80ab40f2fcaebf8d8c0f45.jpg)
438
+
439
+ ![](images/5cb088ef83ec09897daeed90923eabb1000cc864521d589c8fea2c762597422d.jpg)
440
+
441
+ ![](images/64d8a9a1d054b8faa8304c7355f2e7c8e963302766d9728a601f3f6764dd4884.jpg)
442
+
443
+ ![](images/5fab0b301a610939a218c5922f9e8babd02aa69a596c8d69e9c6cec9aef71581.jpg)
444
+
445
+ ![](images/1252ed2073572dc071e6d4ecb111cd2f5f1dc0fa8cad722452820f123c9b2fa6.jpg)
446
+
447
+ ![](images/fc7e4901eb83a9f0e6c8cd42a4fd293a31a0722cbdc0038e20560513ccec5cc7.jpg)
448
+
449
+ ![](images/8ec67e2aefc83f8025a03893372b06ade0564dd4b8d864efe9859473ae6fca50.jpg)
450
+
451
+ ![](images/7ce499a8874aae6c0e2046b563d12b76b49e246b7bb78f3393abbc778d5fedb9.jpg)
452
+
453
+ ![](images/3399811f5ef7d799d68392cd19215af898365ccbd5cdc0f5bb4319f027201ed1.jpg)
454
+
455
+ ![](images/baeae33d0ebf7e7d47215e4af488011009ad7c08ab1a9bc05d71340eb2422d1a.jpg)
456
+ Figure 10: The PCA projection of the difference vector between the centroids of labels before and after fine-tuning based on Supersense role task and $\mathrm{BERT}_{\mathrm{base}}$.
457
+
458
+ ![](images/8d70f717fa7ae7def3ddc7f9fe7796a6219fb62404a6cf24d53986f9177c12ee.jpg)
459
+
460
+ ![](images/31ea42ad50c101e77e6913e803007e65ee54e051ecb3253454819ff066852571.jpg)
461
+
462
+ # F Cluster Number Revision
463
+
464
+ We discovered a bug in the implementation of DIRECTPROBE which causes the merging to stop early while the remaining clusters are still mergeable. The main paper (Table 3, Table 4, and Table 7) has been updated to report the correct results. Table 8 shows the original results.
465
+
466
+ This bug does not change the nature of the linearity of the datasets and representations. All the findings from the original experiments remain the same. The bug only affects the number of clusters when the representation is non-linear for a given task.
467
+
468
+ <table><tr><td>Representations</td><td>Task</td><td></td><td>Acc</td><td>Std</td><td>Best Layer Size</td><td>#Cluster</td><td>is Linear</td><td>Similarity</td></tr><tr><td rowspan="10">BERTtiny</td><td rowspan="2">POS</td><td>original</td><td>90.76</td><td>0.24</td><td>(256, 64)</td><td>3936</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>91.67</td><td>0.29</td><td>(64, 64)</td><td>20</td><td>N</td><td>-</td></tr><tr><td rowspan="2">DEP</td><td>original</td><td>86.74</td><td>0.22</td><td>(256, 256)</td><td>653</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>89.04</td><td>0.20</td><td>(256, 256)</td><td>46</td><td>Y</td><td>0.88</td></tr><tr><td rowspan="2">PS-fxn</td><td>original</td><td>74.14</td><td>1.42</td><td>(256, 256)</td><td>402</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>74.40</td><td>0.68</td><td>(256, 128)</td><td>40</td><td>Y</td><td>0.72</td></tr><tr><td rowspan="2">PS-role</td><td>original</td><td>58.38</td><td>0.78</td><td>(256, 64)</td><td>46</td><td>Y</td><td>0.76</td></tr><tr><td>fine-tuned</td><td>60.31</td><td>0.29</td><td>(64, 64)</td><td>46</td><td>Y</td><td>0.70</td></tr><tr><td rowspan="2">TREC-50</td><td>original</td><td>68.12</td><td>0.82</td><td>(256, 256)</td><td>399</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>84.04</td><td>0.93</td><td>(256, 256)</td><td>51</td><td>N</td><td>-</td></tr><tr><td rowspan="10">BERTmini</td><td rowspan="2">POS</td><td>original</td><td>93.81</td><td>0.10</td><td>(256, 32)</td><td>2429</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>94.91</td><td>0.03</td><td>(256, 32)</td><td>17</td><td>Y</td><td>0.70</td></tr><tr><td rowspan="2">DEP</td><td>original</td><td>91.82</td><td>0.09</td><td>(256, 128)</td><td>46</td><td>Y</td><td>0.93</td></tr><tr><td>fine-tuned</td><td>93.55</td><td>0.07</td><td>(256, 128)</td><td>46</td><td>Y</td><td>0.86</td></tr><tr><td rowspan="2">PS-fxn</td><td>original</td><td>82.45</td><td>1.07</td><td>(256, 
256)</td><td>40</td><td>Y</td><td>0.77</td></tr><tr><td>fine-tuned</td><td>84.25</td><td>0.39</td><td>(256, 128)</td><td>40</td><td>Y</td><td>0.53</td></tr><tr><td rowspan="2">PS-role</td><td>original</td><td>68.05</td><td>1.08</td><td>(256, 256)</td><td>46</td><td>Y</td><td>0.81</td></tr><tr><td>fine-tuned</td><td>71.90</td><td>1.06</td><td>(256, 64)</td><td>46</td><td>Y</td><td>0.59</td></tr><tr><td rowspan="2">TREC-50</td><td>original</td><td>74.12</td><td>1.25</td><td>(256, 256)</td><td>127</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>88.36</td><td>0.50</td><td>(64, 32)</td><td>52</td><td>N</td><td>-</td></tr><tr><td rowspan="10">BERTsmall</td><td rowspan="2">POS</td><td>original</td><td>94.26</td><td>0.13</td><td>(256, 32)</td><td>17</td><td>Y</td><td>0.96</td></tr><tr><td>fine-tuned</td><td>95.43</td><td>0.06</td><td>(128, 64)</td><td>17</td><td>Y</td><td>0.72</td></tr><tr><td rowspan="2">DEP</td><td>original</td><td>92.93</td><td>0.14</td><td>(256, 64)</td><td>46</td><td>Y</td><td>0.93</td></tr><tr><td>fine-tuned</td><td>94.48</td><td>0.14</td><td>(256, 64)</td><td>46</td><td>Y</td><td>0.78</td></tr><tr><td rowspan="2">PS-fxn</td><td>original</td><td>86.26</td><td>0.54</td><td>(256, 256)</td><td>40</td><td>Y</td><td>0.82</td></tr><tr><td>fine-tuned</td><td>85.08</td><td>0.35</td><td>(256, 256)</td><td>40</td><td>Y</td><td>0.44</td></tr><tr><td rowspan="2">PS-role</td><td>original</td><td>74.22</td><td>1.03</td><td>(256, 256)</td><td>46</td><td>Y</td><td>0.84</td></tr><tr><td>fine-tuned</td><td>74.57</td><td>0.61</td><td>(128, 128)</td><td>46</td><td>Y</td><td>0.54</td></tr><tr><td rowspan="2">TREC-50</td><td>original</td><td>81.32</td><td>0.61</td><td>(256, 128)</td><td>113</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>89.60</td><td>0.22</td><td>(256, 64)</td><td>51</td><td>N</td><td>-</td></tr><tr><td rowspan="10">BERTmedium</td><td rowspan="2">POS</td><td>original</td><td>94.40</td><td>0.08</td><td>(256, 
128)</td><td>17</td><td>Y</td><td>0.97</td></tr><tr><td>fine-tuned</td><td>95.56</td><td>0.05</td><td>(64, 32)</td><td>17</td><td>Y</td><td>0.67</td></tr><tr><td rowspan="2">DEP</td><td>original</td><td>92.54</td><td>0.14</td><td>(256, 256)</td><td>46</td><td>Y</td><td>0.94</td></tr><tr><td>fine-tuned</td><td>94.76</td><td>0.20</td><td>(128, 128)</td><td>46</td><td>Y</td><td>0.79</td></tr><tr><td rowspan="2">PS-fxn</td><td>original</td><td>86.56</td><td>0.41</td><td>(256, 128)</td><td>40</td><td>Y</td><td>0.80</td></tr><tr><td>fine-tuned</td><td>88.45</td><td>0.45</td><td>(128, 256)</td><td>40</td><td>Y</td><td>0.59</td></tr><tr><td rowspan="2">PS-role</td><td>original</td><td>76.28</td><td>1.00</td><td>(256, 32)</td><td>46</td><td>Y</td><td>0.83</td></tr><tr><td>fine-tuned</td><td>78.86</td><td>0.58</td><td>(128, 128)</td><td>46</td><td>Y</td><td>0.58</td></tr><tr><td rowspan="2">TREC-50</td><td>original</td><td>80.68</td><td>1.16</td><td>(256, 64)</td><td>110</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>89.80</td><td>0.33</td><td>(32, 64)</td><td>52</td><td>N</td><td>-</td></tr><tr><td rowspan="10">BERTbase</td><td rowspan="2">POS</td><td>original</td><td>93.39</td><td>0.31</td><td>(256, 128)</td><td>17</td><td>Y</td><td>0.97</td></tr><tr><td>fine-tuned</td><td>95.68</td><td>0.02</td><td>(128, 64)</td><td>17</td><td>Y</td><td>0.70</td></tr><tr><td rowspan="2">DEP</td><td>original</td><td>89.39</td><td>0.08</td><td>(256, 128)</td><td>46</td><td>Y</td><td>0.92</td></tr><tr><td>fine-tuned</td><td>94.76</td><td>0.05</td><td>(64, 256)</td><td>46</td><td>Y</td><td>0.76</td></tr><tr><td rowspan="2">PS-fxn</td><td>original</td><td>87.75</td><td>0.41</td><td>(256, 128)</td><td>40</td><td>Y</td><td>0.84</td></tr><tr><td>fine-tuned</td><td>89.58</td><td>0.67</td><td>(32, 256)</td><td>40</td><td>Y</td><td>0.57</td></tr><tr><td rowspan="2">PS-role</td><td>original</td><td>74.49</td><td>0.84</td><td>(256, 
128)</td><td>46</td><td>Y</td><td>0.82</td></tr><tr><td>fine-tuned</td><td>81.14</td><td>0.26</td><td>(256, 128)</td><td>46</td><td>Y</td><td>0.52</td></tr><tr><td rowspan="2">TREC-50</td><td>original</td><td>85.24</td><td>0.85</td><td>(256, 128)</td><td>162</td><td>N</td><td>-</td></tr><tr><td>fine-tuned</td><td>90.36</td><td>0.32</td><td>(64, 32)</td><td>51</td><td>N</td><td>-</td></tr></table>
469
+
470
+ Table 8: Original table of the probing results of five representations on five tasks. These results were in the original version of the paper before we found a bug in the implementation of DIRECTPROBE. The updated results are in Table 7. See Appendix C for details.
acloserlookathowfinetuningchangesbert/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9e58d4ae66e38eb43649c817bcd283b047aef84222fe02fc5421842b99111106
3
+ size 1502547
acloserlookathowfinetuningchangesbert/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fbaa4c402757ea48b6bf5ad8b55ccdc364ea109e568f9c86342ed77c8b345c3f
3
+ size 490860
acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/cd6f3e84-3472-47c7-bdac-ddca1c2e6b83_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ae072ee98d0ac5d2d04985fe71b94009dde25dc4b7a4a4d98a2ccd0efa641201
3
+ size 82565
acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/cd6f3e84-3472-47c7-bdac-ddca1c2e6b83_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5ec403515d103b234c70b627167cfb3c8cd5406d238a682bcfa2424640fd6305
3
+ size 96813
acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/cd6f3e84-3472-47c7-bdac-ddca1c2e6b83_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a44572ba47ae27d68ab32dc3479429112a924ed2a87c47a32a7c71fdcb76749b
3
+ size 394200
acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/full.md ADDED
@@ -0,0 +1,397 @@
 
1
+ # A Comparative Study of Faithfulness Metrics for Model Interpretability Methods
2
+
3
+ Chun Sik Chan, Huanqi Kong, Guanqing Liang
4
+
5
+ Wisers AI Lab, Wisers Information Limited
6
+
7
+ {tonychan, katekong, quincyliang}@wisers.com
8
+
9
+ # Abstract
10
+
11
+ Interpretation methods to reveal the internal reasoning processes behind machine learning models have attracted increasing attention in recent years. To quantify the extent to which the identified interpretations truly reflect the intrinsic decision-making mechanisms, various faithfulness evaluation metrics have been proposed. However, we find that different faithfulness metrics show conflicting preferences when comparing different interpretations. Motivated by this observation, we aim to conduct a comprehensive and comparative study of the widely adopted faithfulness metrics. In particular, we introduce two assessment dimensions, namely diagnosticity and time complexity. Diagnosticity refers to the degree to which the faithfulness metric favours relatively faithful interpretations over randomly generated ones, and time complexity is measured by the average number of model forward passes. According to the experimental results, we find that sufficiency and comprehensiveness metrics have higher diagnosticity and lower time complexity than the other faithfulness metrics.
12
+
13
+ # 1 Introduction
14
+
15
+ NLP has made tremendous progress in recent years. However, the increasing complexity of the models makes their behaviour difficult to interpret. To disclose the rationale behind the models, various interpretation methods have been proposed.
16
+
17
+ Interpretation methods can be broadly classified into two categories: model-based methods and post-hoc methods. Model-based approaches refer to designing simple and white-box machine learning models whose internal decision logic can be easily interpreted, such as linear regression models, decision trees, etc. A post-hoc method is applied after model training and aims to disclose the relationship between feature values and predictions. As pretrained language models (Devlin et al., 2019a; Liu et al., 2019; Brown et al., 2020) become more popular, deep learning models are becoming more and more complex. Therefore, post-hoc methods are the only option for model interpretations. Post-hoc interpretation methods can be divided into two categories: gradient-based (Simonyan et al., 2014; Sundararajan et al., 2017; Shrikumar et al., 2019) and perturbation-based (Robnik-Sikonja and Kononenko, 2008; Zeiler and Fergus, 2013; Ribeiro et al., 2016). Gradient-based methods assume the model is differentiable and attempt to interpret the model outputs through the gradient information. Perturbation-based methods interpret model outputs by perturbing the input data.
18
+
20
+
21
+ To verify whether, and to what extent, the interpretations reflect the intrinsic reasoning process, various faithfulness metrics have been proposed. Most faithfulness metrics use a removal-based criterion, i.e., removing or retaining only the important tokens identified by the interpretation and observing the changes in model outputs (Serrano and Smith, 2019; Chrysostomou and Aletras, 2021; Arras et al., 2017; DeYoung et al., 2020).
22
+
23
+ However, we observe that the existing faithfulness metrics are not always consistent with each other and can even lead to contradictory conclusions. As shown in the example from our experiments (Table 1), the conclusions drawn by two different faithfulness metrics, Sufficiency (SUFF) and Decision Flip - Fraction of Tokens (DFFOT), conflict with each other. More specifically, DFFOT concludes that the interpretation by the LIME method is the best among the four interpretations, while SUFF ranks it as the worst. In this case, which faithfulness metric(s) should we adopt to compare interpretations?
24
+
25
+ Motivated by the above observation, we aim to conduct a comprehensive and comparative study of faithfulness metrics. We argue that a good faithfulness metric should be able to effectively and efficiently distinguish between faithful and unfaithful interpretations. To quantitatively assess this capability, we introduce two dimensions, namely diagnosticity and time complexity.
26
+
27
+ <table><tr><td rowspan="2">Method</td><td rowspan="2" colspan="7">Interpretation Visualization</td><td colspan="2">Faithfulness Metric</td></tr><tr><td>SUFF</td><td>DFFOT</td></tr><tr><td>LIME</td><td>A cop story that</td><td>understands</td><td>the</td><td>medium</td><td colspan="3">amazingly well</td><td>4</td><td>1</td></tr><tr><td>Word Omission</td><td>A cop story that</td><td colspan="6">that understands the medium amazingly well</td><td>1</td><td>4</td></tr><tr><td>Saliency Map</td><td>A cop story that</td><td>understands the medium</td><td colspan="5">amazingly well</td><td>3</td><td>3</td></tr><tr><td>Integrated Gradients</td><td>A cop story that</td><td>understands the medium</td><td colspan="5">amazingly well</td><td>2</td><td>2</td></tr></table>
28
+
29
+ Table 1: An example where different interpretation methods assign different importance scores for the same trained CNN model on the SST dataset. The tints of blue mark the magnitude of importance scores for positive sentiment. The numbers 1, 2, 3 and 4 are the rankings of the faithfulness values evaluated by the corresponding faithfulness metrics, where rank 1 indicates the best and 4 the worst.
30
+
32
+
33
+ Diagnosticity refers to the extent to which a faithfulness metric prefers faithful rather than unfaithful interpretations. However, due to the opaque nature of deep learning models, it is not easy to obtain the ground truth for faithful interpretation (Jacovi and Goldberg, 2020). To concretize this issue, we use random interpretations, i.e., randomly assigning importance scores to tokens regardless of the internal processes of the model, as the relatively unfaithful interpretations. In contrast, we treat interpretations generated by interpretation methods as relatively faithful interpretations. In this way, we construct the hypothesis that a faithfulness metric is diagnostic only if it can clearly distinguish between random interpretations and interpretations generated by interpretation methods. In addition, we introduce time complexity to estimate the computational cost of each metric, measured by the average number of model forward passes.
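Under this hypothesis, one simple estimate of diagnosticity is the fraction of instances on which a metric scores the method's interpretation above a random one. The sketch below is illustrative only: the callable `faithfulness` and both helper names are assumptions, not the paper's exact formulation.

```python
import random

def random_interpretation(n, rng):
    """Importance scores drawn independently of the model:
    the relatively unfaithful baseline."""
    return [rng.random() for _ in range(n)]

def diagnosticity_estimate(faithfulness, instances, method_interps, seed=0):
    """Fraction of instances on which `faithfulness(instance, interp)`
    scores the method's interpretation above a random one."""
    rng = random.Random(seed)
    wins = 0
    for inst, interp in zip(instances, method_interps):
        rand = random_interpretation(len(interp), rng)
        wins += faithfulness(inst, interp) > faithfulness(inst, rand)
    return wins / len(instances)
```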
34
+
35
+ In this paper, we evaluate six commonly adopted faithfulness metrics. First, we find that the sufficiency and comprehensiveness metrics outperform the other faithfulness metrics: they are more diagnostic and less complex. Second, the two correlation-based metrics, namely Correlation between Importance and Output Probability and Monotonicity, have promising diagnosticity but suffer from high time complexity. Last but not least, the decision flip metrics, namely Fraction of Tokens and Most Informative Token, perform the worst in our assessments.
36
+
37
+ The main contributions of this paper are as follows:
38
+
39
+ - We conduct a comparative study of six widely used faithfulness metrics and identify their inconsistency issues.
40
+ - We propose a quantitative approach to assess faithfulness metrics through two perspectives,
41
+
42
+ namely diagnosticity and time complexity.
43
+
44
+ # 2 Terminology and Notations
45
+
46
+ We first introduce the prerequisite terminology and notations for our discussions.
47
+
48
+ Terminology A "classification instance" is the input and output values of a classification model, which we apply interpretation methods on. An "interpretation" of a classification instance is a sequence of scores where each score quantifies the importance of the input token at the corresponding position. An "interpretation pair" is a pair of interpretations of the same classification instance. An "interpretation method" is a function that generates an interpretation from a classification instance with the associated classification model.
49
+
50
+ Notations Let $x$ be the input tokens. Denote the number of tokens of $x$ as $l_{x}$ . Denote the predicted class of $x$ as $c(x)$ , and the predicted probability corresponding to class $j$ as $p_{j}(x)$ .
51
+
52
+ Assume an interpretation is given. Denote the $k$ -th important token as $x_{k}$ . Denote the input sequence containing only the top $k$ (or top $q\%$ ) important tokens as $x_{:k}$ (or $x_{:q\%}$ ). Denote the modified input sequence from which a token sub-sequence $x'$ is removed as $x \setminus x'$ .
53
+
54
+ Let $(x,y)$ be a classification instance associated with classification model $m$ , and $g$ be an interpretation method. Denote the interpretation of $(x,y)$ generated by $g$ as $g(x,y,m)$ . Let $u$ be an interpretation, $(u,v)$ be an interpretation pair, and $F$ be a faithfulness metric. Denote the importance score that $u$ assigns to the $i$ -th input token as $[u]_i$ . Denote the statement " $u$ is more faithful than $v$ " as " $u \succ v$ ", and the statement " $F$ considers $u$ as more faithful than $v$ " as " $u \succ_F v$ ".
55
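To make this notation concrete, here is a minimal Python sketch against a hypothetical bag-of-words sentiment classifier; the `WEIGHTS` vocabulary and all function names are illustrative, not part of the paper:

```python
import math

# Hypothetical toy classifier: a bag-of-words score with a 2-class sigmoid.
WEIGHTS = {"amazingly": 2.0, "well": 1.0, "boring": -2.0}

def p(x):
    """p_j(x): predicted probabilities over classes {0: negative, 1: positive}."""
    s = sum(WEIGHTS.get(t, 0.0) for t in x)
    pos = 1.0 / (1.0 + math.exp(-s))
    return [1.0 - pos, pos]

def c(x):
    """c(x): the predicted class of the input tokens x."""
    probs = p(x)
    return probs.index(max(probs))

def top_k(x, scores, k):
    """x_{:k}: the input sequence keeping only the k most important tokens."""
    keep = set(sorted(range(len(x)), key=lambda i: -scores[i])[:k])
    return [t for i, t in enumerate(x) if i in keep]

def remove(x, idx):
    """x \\ x': the input sequence with the tokens at positions idx removed."""
    return [t for i, t in enumerate(x) if i not in idx]
```

The removal-based metrics in the next section are all built from these few operations.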
+
56
+ # 3 Faithfulness Metrics
57
+
58
+ An interpretation is called faithful if the identified important tokens truly contribute to the decision making process of the model. Mainstream faithfulness metrics are removal-based metrics, which measure the changes in model outputs after removing important tokens.
59
+
60
+ We compare the most widely adopted faithfulness metrics, introduced as follows.
61
+
62
+ Decision Flip - Most Informative Token (DFMIT) Introduced by Chrysostomou and Aletras (2021), this metric focuses on only the most important token. It assumes that the interpretation is faithful only if the prediction label is changed after removing the most important token, i.e.
63
+
64
+ $$
65
+ \mathrm{DFMIT} = \left\{ \begin{array}{ll} 1 & \text{if } c(x) \neq c(x \setminus x_{:1}) \\ 0 & \text{if } c(x) = c(x \setminus x_{:1}) \end{array} \right.
66
+ $$
67
+
68
+ A score of 1 implies that the interpretation is faithful.
69
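DFMIT can be sketched as follows; the toy bag-of-words classifier below is a hypothetical stand-in for $c(\cdot)$, not any model from the paper:

```python
import math

# Hypothetical toy classifier standing in for c(.).
WEIGHTS = {"cop": 0.2, "amazingly": 2.0, "well": 1.0}

def predicted_class(tokens):
    s = sum(WEIGHTS.get(t, 0.0) for t in tokens)
    pos = 1.0 / (1.0 + math.exp(-s))
    return int(pos > 0.5)

def dfmit(tokens, importance):
    """1 if removing the single most important token flips the decision, else 0."""
    top = max(range(len(tokens)), key=lambda i: importance[i])
    reduced = [t for i, t in enumerate(tokens) if i != top]
    return int(predicted_class(reduced) != predicted_class(tokens))
```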
+
70
+ Decision Flip - Fraction of Tokens (DFFOT) This metric measures faithfulness as the minimum fraction of important tokens needed to be erased in order to change the model decision (Serrano and Smith, 2019), i.e.
71
+
72
+ $$
73
+ \mathrm{DFFOT} = \left\{ \begin{array}{ll} \min \frac{k}{l_x} & \text{s.t. } c(x) \neq c(x \setminus x_{:k}) \\ 1 & \text{if } c(x) = c(x \setminus x_{:k}) \text{ for any } k \end{array} \right.
74
+ $$
75
+
76
+ If the predicted class change never occurs even if all tokens are deleted, then the score will be 1. A lower value of DFFOT means the interpretation is more faithful.
77
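A sketch of DFFOT under the same kind of hypothetical toy classifier: tokens are removed cumulatively in descending order of importance until the prediction flips:

```python
import math

# Hypothetical toy classifier standing in for c(.).
WEIGHTS = {"cop": 0.2, "amazingly": 2.0, "well": 1.0}

def predicted_class(tokens):
    s = sum(WEIGHTS.get(t, 0.0) for t in tokens)
    return int(1.0 / (1.0 + math.exp(-s)) > 0.5)

def dffot(tokens, importance):
    """Minimum fraction of top-ranked tokens whose removal flips the decision."""
    orig = predicted_class(tokens)
    order = sorted(range(len(tokens)), key=lambda i: -importance[i])
    for k in range(1, len(tokens) + 1):
        removed = set(order[:k])
        reduced = [t for i, t in enumerate(tokens) if i not in removed]
        if predicted_class(reduced) != orig:
            return k / len(tokens)
    return 1.0  # the decision never flips
```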
+
78
+ Comprehensiveness (COMP) As proposed by DeYoung et al. (2020), comprehensiveness assumes that an interpretation is faithful if the important tokens are broadly representative of the entire input sequence. It measures the faithfulness score by the change in the output probability of the original predicted class after the important tokens are removed, i.e.
79
+
80
+ $$
81
+ \text{COMP} = \frac{1}{|B|}\sum_{q\in B}(p_{c(x)}(x) - p_{c(x)}(x\setminus x_{:q\%}))
82
+ $$
83
+
84
+ We use $q \in B = \{1, 5, 10, 20, 50\}$ as in the original paper. A higher comprehensiveness score implies a more faithful interpretation.
85
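COMP can be sketched as below. The toy model is hypothetical, and rounding $q\%$ of the tokens up to at least one token is an implementation assumption, since the rounding rule is not specified here:

```python
import math

WEIGHTS = {"cop": 0.2, "amazingly": 2.0, "well": 1.0}  # hypothetical model
B = [1, 5, 10, 20, 50]  # percentages, as in DeYoung et al. (2020)

def predict_proba(tokens):
    s = sum(WEIGHTS.get(t, 0.0) for t in tokens)
    pos = 1.0 / (1.0 + math.exp(-s))
    return [1.0 - pos, pos]

def comp(tokens, importance):
    """Average drop in p_c(x) after removing the top q% important tokens."""
    probs = predict_proba(tokens)
    c = probs.index(max(probs))
    p_orig = probs[c]
    order = sorted(range(len(tokens)), key=lambda i: -importance[i])
    drops = []
    for q in B:
        k = math.ceil(len(tokens) * q / 100)  # assumption: round q% up to >= 1 token
        removed = set(order[:k])
        reduced = [t for i, t in enumerate(tokens) if i not in removed]
        drops.append(p_orig - predict_proba(reduced)[c])
    return sum(drops) / len(B)
```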
+
86
+ Sufficiency (SUFF) Also proposed by DeYoung et al. (2020), this metric measures whether the important tokens contain sufficient information to retain the prediction. It keeps only the important tokens and calculates the change in output probability compared to the original specific predicted class, i.e.
87
+
88
+ $$
89
+ \mathrm{SUFF} = \frac{1}{|B|} \sum_{q \in B} \left( p_{c(x)}(x) - p_{c(x)}(x_{:q\%}) \right)
90
+ $$
91
+
92
+ We use $q \in B = \{1, 5, 10, 20, 50\}$ as in the original paper. A lower value of SUFF means the interpretation is more faithful.
93
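A sketch of SUFF under the same assumptions (hypothetical toy model; $q\%$ rounded up to at least one token). It mirrors COMP, but keeps rather than removes the top-ranked tokens:

```python
import math

WEIGHTS = {"cop": 0.2, "amazingly": 2.0, "well": 1.0}  # hypothetical model
B = [1, 5, 10, 20, 50]

def predict_proba(tokens):
    s = sum(WEIGHTS.get(t, 0.0) for t in tokens)
    pos = 1.0 / (1.0 + math.exp(-s))
    return [1.0 - pos, pos]

def suff(tokens, importance):
    """Average change in p_c(x) keeping only the top q% tokens (lower = more faithful)."""
    probs = predict_proba(tokens)
    c = probs.index(max(probs))
    p_orig = probs[c]
    order = sorted(range(len(tokens)), key=lambda i: -importance[i])
    diffs = []
    for q in B:
        k = math.ceil(len(tokens) * q / 100)  # assumption: round q% up to >= 1 token
        kept = set(order[:k])
        reduced = [t for i, t in enumerate(tokens) if i in kept]
        diffs.append(p_orig - predict_proba(reduced)[c])
    return sum(diffs) / len(B)
```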
+
94
+ Correlation between Importance and Output Probability (CORR) This metric assumes that an interpretation is faithful if the importance of a token is positively correlated with the drop in output probability when that token is removed (Arya et al., 2019), i.e.
95
+
96
+ $$
97
+ \mathrm{CORR} = -\rho(\boldsymbol{u}, \boldsymbol{p})
98
+ $$
99
+
100
+ where $\pmb{u}$ denotes the token importance in descending order and $\pmb{p} = [p_{c(x)}(x\setminus x_1),p_{c(x)}(x\setminus x_2),\dots,p_{c(x)}(x\setminus x_{l_x})]$ , i.e. the output probabilities after removing each token individually. $\rho (\cdot)$ denotes the Pearson correlation. A higher CORR value indicates a more faithful interpretation.
101
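A sketch of CORR with a hand-rolled Pearson correlation (the toy model is hypothetical; in practice a library routine such as `scipy.stats.pearsonr` would serve the same purpose):

```python
import math

WEIGHTS = {"cop": 0.2, "amazingly": 2.0, "well": 1.0}  # hypothetical model

def predict_proba(tokens):
    s = sum(WEIGHTS.get(t, 0.0) for t in tokens)
    pos = 1.0 / (1.0 + math.exp(-s))
    return [1.0 - pos, pos]

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def corr(tokens, importance):
    """-rho(u, p): u = importances (descending), p = p_c after deleting each token alone."""
    probs = predict_proba(tokens)
    c = probs.index(max(probs))
    order = sorted(range(len(tokens)), key=lambda i: -importance[i])
    u = [importance[i] for i in order]
    p = [predict_proba([t for j, t in enumerate(tokens) if j != i])[c] for i in order]
    return -pearson(u, p)
```

For a faithful importance ranking, removing highly ranked tokens lowers the probability the most, so $\rho(\boldsymbol{u}, \boldsymbol{p})$ is strongly negative and CORR is close to 1.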
+
102
+ Monotonicity (MONO) This metric assumes that an interpretation is faithful if the probability of the predicted class monotonically increases when incrementally adding more important tokens (Arya et al., 2019). Starting from an empty vector, the features are gradually added in ascending order of importance, and the corresponding classification probabilities are noted. Monotonicity is calculated as the correlation between the feature importance and the probability after adding the feature, i.e.
103
+
104
+ $$
105
+ \mathrm{MONO} = \rho(\boldsymbol{u}, \boldsymbol{p})
106
+ $$
107
+
108
+ where $\pmb{u}$ denotes the token importance in descending order and $\pmb{p} = [p_{c(x)}(x), p_{c(x)}(x \setminus x_{:1}), p_{c(x)}(x \setminus x_{:2}), \dots, p_{c(x)}(x \setminus x_{:(l_x - 1)})]$ . $\rho(\cdot)$ denotes the Pearson correlation. A higher MONO value indicates a more faithful interpretation.
109
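A sketch of MONO under the same assumptions (hypothetical toy model), following the formula above: $\boldsymbol{p}$ holds $p_{c(x)}(x \setminus x_{:k})$ for $k = 0, \dots, l_x - 1$:

```python
import math

WEIGHTS = {"cop": 0.2, "amazingly": 2.0, "well": 1.0}  # hypothetical model

def predict_proba(tokens):
    s = sum(WEIGHTS.get(t, 0.0) for t in tokens)
    pos = 1.0 / (1.0 + math.exp(-s))
    return [1.0 - pos, pos]

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def mono(tokens, importance):
    """rho(u, p): p holds p_c(x \\ x_{:k}) for k = 0 .. l_x - 1 (cumulative removal)."""
    probs = predict_proba(tokens)
    c = probs.index(max(probs))
    order = sorted(range(len(tokens)), key=lambda i: -importance[i])
    u = [importance[i] for i in order]
    p = []
    for k in range(len(tokens)):
        removed = set(order[:k])
        p.append(predict_proba([t for i, t in enumerate(tokens) if i not in removed])[c])
    return pearson(u, p)
```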
+
110
+ # 4 Evaluation of Faithfulness Metrics
111
+
112
+ In this section, we propose an evaluation paradigm for faithfulness metrics by addressing two aspects: (1) diagnosticity and (2) time complexity. They are the two complementary and important factors in selecting a faithfulness metric for assessing the faithfulness of interpretations.
115
+
116
+ # 4.1 Diagnosticity of Faithfulness Metric
117
+
118
+ As we have observed in Table 1, faithfulness metrics might disagree with each other on faithfulness assessment. This naturally raises a question: Which faithfulness metric(s) should we trust?
119
+
120
+ To the best of our knowledge, there is no preceding work in quantifying the effectiveness of faithfulness metrics. As a first attempt, we introduce *diagnosticity*, which is intended to measure "the degree to which a faithfulness metric favours faithful interpretations over unfaithful interpretations". Intuitively, the higher the diagnosticity, the more effective the faithfulness metric is.
121
+
122
+ # 4.1.1 Definition of Diagnosticity
123
+
124
+ Definition 4.1 (Diagnosticity). We define the diagnosticity of a faithfulness metric as the probability that given an interpretation pair $(u, v)$ such that $u$ is more faithful than $v$ , the faithfulness metric also considers $u$ as more faithful than $v$ , i.e.
125
+
126
+ $$
127
+ \mathrm {D} (F) = \mathrm {P} (u \succ_ {F} v | u \succ v)
128
+ $$
129
+
130
+ As we will see later in this section, a set of interpretation pairs $(u,v)$ such that $u\succ v$ is required for estimating diagnosticity. Constructing such a dataset leads us to a paradox: we cannot guarantee that one generated interpretation is more faithful than another when the measurement of faithfulness is still under debate. It is more realistic to assume that we can generate an interpretation pair $(u,v)$ such that $u$ is very likely to be more faithful than $v$ . Thus, we relax the condition in Definition 4.1 to a probabilistic one as follows.
131
+
132
+ Definition 4.2 ( $\varepsilon$ -diagnosticity). Let $(u, v)$ be any interpretation pair, and $0 \leq \varepsilon \leq 1$ . The $\varepsilon$ -diagnosticity of a faithfulness metric $F$ is defined as
133
+
134
+ $$
135
+ \mathrm {D} _ {\varepsilon} (F) = \mathrm {P} (u \succ_ {F} v | \mathrm {P} (u \succ v) > 1 - \varepsilon)
136
+ $$
137
+
138
+ In the above definition, $\varepsilon$ represents the uncertainty in comparing the faithfulness of $u$ and $v$ . In the next Theorem, we show that $\varepsilon$ -diagnosticity effectively approximates diagnosticity as long as $\varepsilon$ is small enough.
139
+
140
+ Theorem 4.1 (Error Bound of $\varepsilon$ -diagnosticity). We can approximate diagnosticity with $\varepsilon$ -diagnosticity with error less than $\varepsilon$ , i.e.
141
+
142
+ $$
143
+ | \mathrm {D} _ {\varepsilon} (F) - \mathrm {D} (F) | < \varepsilon
144
+ $$
145
+
146
+ The proof is provided in Appendix A.
147
+
148
+ # 4.1.2 Estimation of Diagnosticity
149
+
150
+ In the following, we show how we estimate $\varepsilon$ -diagnosticity with a set of interpretation pairs $(u,v)$ where $u$ is very likely to be more faithful than $v$ , namely an $\varepsilon$ -faithfulness golden set with small $\varepsilon$ .
151
+
152
+ Definition 4.3 ( $\varepsilon$ -faithfulness golden set). Let $0 \leq \varepsilon \leq 1$ . A set $Z_{\varepsilon}$ of interpretation pairs is called an $\varepsilon$ -faithfulness golden set if it satisfies the following conditions.
153
+
154
+ 1. All interpretation pairs in $Z_{\varepsilon}$ are independent and identically distributed (i.i.d.).
155
+ 2. $\mathrm{P}(u\succ v) > 1 - \varepsilon$ for any interpretation pair $(u,v)\in Z_{\varepsilon}$
156
+
157
+ Lemma 4.2. Let $\mathbb{1}(\cdot)$ be the indicator function which takes a value 1 when the input statement is true and a value 0 when it is false. Then $\mathbb{1}(u\succ_F v)|(\mathrm{P}(u\succ v) > 1 - \varepsilon)$ is a random variable and its expected value is equal to $\varepsilon$ -diagnosticity, i.e.
158
+
159
+ $$
160
+ \mathrm {D} _ {\varepsilon} (F) = \mathbb {E} \left[ \mathbb {1} \left(u \succ_ {F} v\right) | \mathrm {P} (u \succ v) > 1 - \varepsilon \right]
161
+ $$
162
+
163
+ The proof is provided in Appendix B.
164
+
165
+ As a result, given an $\varepsilon$ -faithfulness golden set $Z_{\varepsilon}$ , we can estimate the $\varepsilon$ -diagnosticity of a faithfulness metric $F$ by estimating the expected value in Lemma 4.2. Then by the law of large numbers, we can simply estimate the expected value by computing the average value of $\mathbb{1}(u\succ_F v)$ on $Z_{\varepsilon}$ , i.e.
166
+
167
+ $$
168
+ \mathrm {D} _ {\varepsilon} (F) \approx \frac {1}{| Z _ {\varepsilon} |} \sum_ {(u, v) \in Z _ {\varepsilon}} \mathbb {1} (u \succ_ {F} v) \tag {1}
169
+ $$
170
+
171
+ When $|Z_{\varepsilon}|$ is large enough, we will have $\left|\frac{1}{|Z_{\varepsilon}|}\sum_{(u,v)\in Z_{\varepsilon}}\mathbb{1}(u\succ_F v) - D(F)\right| < \varepsilon$ according to Theorem 4.1.
172
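Equation 1 reduces to a win-rate over the golden set. A minimal sketch, assuming the metric scores $(F(u), F(v))$ for each pair have already been computed:

```python
def estimate_diagnosticity(score_pairs, higher_is_better=True):
    """Empirical Equation 1: the fraction of golden-set pairs where the metric
    scores u (method-generated) as more faithful than v (random).
    score_pairs holds precomputed (F(u), F(v)) values for one metric F."""
    if higher_is_better:
        wins = sum(fu > fv for fu, fv in score_pairs)
    else:  # e.g. SUFF and DFFOT, where lower scores mean more faithful
        wins = sum(fu < fv for fu, fv in score_pairs)
    return wins / len(score_pairs)
```

The `higher_is_better` flag is an implementation detail needed because the metrics in Section 3 disagree on whether larger values mean more faithful.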
+
173
+ # 4.1.3 Generation of an $\varepsilon$ -faithfulness golden set
174
+
175
+ According to Theorem 4.1 and Lemma 4.2, we can estimate the diagnosticity of any faithfulness metric using Equation 1 as long as we have an $\varepsilon$ -faithfulness golden set where $\varepsilon$ is small enough.
176
+
177
+ We call the $u$ and $v$ in Definition 4.3 a relatively faithful interpretation and a relatively unfaithful interpretation, respectively. Next, we discuss the processes to generate each of them.
178
+
179
+ Generating Relatively Unfaithful Interpretations By definition, a faithful interpretation is an interpretation that truly reflects the underlying decision making process of the classification model. Therefore, an unfaithful interpretation is one that is completely irrelevant to the underlying decision making process of the classification model. We propose to generate relatively unfaithful interpretations by assigning a random importance score to each token in the input sequence, i.e. $[v]_i \sim \mathrm{Uniform}(0,1)$ for any token $1 \leq i \leq l_x$ , where Uniform denotes the uniform distribution.
180
+
181
+ Generating Relatively Faithful Interpretations We propose to generate relatively faithful interpretations with the interpretation methods that infer interpretations from the underlying mechanism of the classification model. There are two mainstream categories of interpretation methods that satisfy this requirement (Alvarez-Melis and Jaakkola, 2018):
182
+
183
+ - Perturbation-based: Relying on querying the model around the classification instance to infer the importance of input features.
184
+ - Gradient-based: Using information from gradients to infer the importance of input features.
185
+
186
+ We select the representative methods from both categories and introduce them in the following.
187
+
188
+ - Perturbation-based - LIME (Ribeiro et al., 2016): For each classification instance, a linear model on the input space is trained to approximate the local decision boundary, so that the learned coefficients can be used to quantify the importance of the corresponding input features on the model prediction.
189
+ - Perturbation-based - Word Omission (WO) (Robnik-Sikonja and Kononenko, 2008): For each $i$ -th input token, WO quantifies the importance of the input token by the change in output probability after removing it from the original input sequence, i.e. $p_{c(x)}(x) - p_{c(x)}(x \setminus \{i\})$ .
190
+ - Gradient-based - Saliency Map (SA) (Simonyan et al., 2014): For each $i$ -th input token, SA computes the gradients of the original model output with respect to the embedding associated with the input token, i.e. $\frac{\partial p_{c(x)}(z)}{\partial e(z)_i} |_{z = x}$ , and quantifies the importance of the input token by taking either the mean or the $l2$ norm of the gradients in the embedding dimension. We denote the former approach as $\mathrm{SA}_{\mu}$ and the latter as $\mathrm{SA}_{l2}$ .
+ - Gradient-based - Integrated Gradients (IG) (Sundararajan et al., 2017): Integrated Gradients generally provides more robust interpretations than Saliency Map. For each $i$ -th input token, it approximates the integral of the gradients of the original model output with respect to the embedding corresponding to the input token along a straight line from a reference point $x_0$ to the original input sequence, i.e. $\int_{x_0 \to x} \frac{\partial p_{c(x)}(z)}{\partial e(z)_i} dz$ , and quantifies the importance of the input token by taking either the mean or the $l2$ norm of the integral in the embedding dimension. We denote the former approach as $\mathrm{IG}_{\mu}$ and the latter as $\mathrm{IG}_{l2}$ .
+
+ Algorithm 1 An $\varepsilon$ -faithfulness golden set generation mechanism.
+
+ Input: $X$ : a set of i.i.d. classification instances associated with classification model $m$ ; $G$ : the set of interpretation methods for generating relatively faithful interpretations, i.e. $\{\mathrm{LIME}, \mathrm{WO}, \mathrm{SA}_{\mu}, \mathrm{SA}_{l2}, \mathrm{IG}_{\mu}, \mathrm{IG}_{l2}\}$ ; $K$ : sample size.
+ Output: an $\varepsilon$ -faithfulness golden set $Z$ .
+
+ 1. $Z \leftarrow \{\}$
+ 2. For $1$ to $K$ :
+ 3. $\quad (x, y) \leftarrow \mathrm{RandomSampler}(X)$
+ 4. $\quad g \leftarrow \mathrm{RandomSampler}(G)$
+ 5. $\quad u \leftarrow g(x, y, m)$
+ 6. $\quad v \leftarrow r \in \mathbb{R}^{l_x}$ where $[r]_i \sim \mathrm{Uniform}(0, 1)$
+ 7. $\quad Z \leftarrow Z \cup \{(u, v)\}$
+ 8. Return $Z$
235
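The golden set generation mechanism of Algorithm 1 can be sketched in Python as follows; `instances`, `methods`, and the callable signature `g(x, y, m)` mirror the notation above but are otherwise illustrative:

```python
import random

def generate_golden_set(instances, methods, model, K, seed=0):
    """Sketch of Algorithm 1: pair a method-generated interpretation u with a
    uniformly random interpretation v over the same tokens. `instances` holds
    (x, y) pairs; `methods` holds callables g(x, y, m)."""
    rng = random.Random(seed)
    Z = []
    for _ in range(K):
        x, y = rng.choice(instances)   # RandomSampler(X)
        g = rng.choice(methods)        # RandomSampler(G)
        u = g(x, y, model)             # relatively faithful interpretation
        v = [rng.random() for _ in x]  # [v]_i ~ Uniform(0, 1)
        Z.append((u, v))
    return Z
```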
+
236
+ The interpretations generated using the above interpretation methods are highly likely to be more faithful than the randomly generated interpretations because the generation processes of the former ones actually involve inferences from model behaviours, while the random generation process is independent of any model behaviour. Therefore, in principle, the set of generated interpretation pairs will have a small value of $\varepsilon$ in Definition 4.3.
237
+
238
+ In Algorithm 1, we propose a mechanism to generate an $\varepsilon$ -faithfulness golden set from a set of i.i.d. classification instances based on the above processes. Note that the generated interpretation pairs will satisfy the first condition in Definition 4.3 because they are generated from i.i.d. samples, and will satisfy the second condition in Definition 4.3 with a presumably small $\varepsilon$ as we have discussed.
+
+ # 4.2 Time Complexity of Faithfulness Metric
+
+ Two of the main applications of faithfulness metrics are (1) evaluating interpretation methods based on their average faithfulness scores on a dataset; and (2) gauging the quality of individual interpretations by spotting "unfaithful" interpretations.
+
+ Time complexity is an important aspect in evaluating faithfulness metrics because a fast faithfulness metric will shorten the feedback loop in developing faithful interpretation methods, and would allow runtime faithfulness checking of individual interpretations in a production environment.
+
+ Measurement of time complexity From the definitions of the faithfulness metrics in Section 3, we observe that their computations are dominated by model forward passes, which are denoted as $c(\cdot)$ or $p(\cdot)$ . Thus, we measure the time complexities of the faithfulness metrics in the number of model forward passes.
+
+ # 5 Experimental Setup
+
+ Datasets We conduct experiments on three text classification datasets used in (Wiegreffe and Pinter, 2019): (i) Stanford Sentiment Treebank (SST) (Socher et al., 2013); (ii) IMDB Large Movie Reviews (IMDB) (Maas et al., 2011); (iii) AG News Corpus (AG) (Zhang et al., 2015). We summarize the dataset statistics in Table 2.
+
+ <table><tr><td rowspan="2">Dataset</td><td rowspan="2">Splits (Train / Test)</td><td colspan="2">Model perf. (F1)</td></tr><tr><td>BERT</td><td>CNN</td></tr><tr><td>SST</td><td>6,920 / 1,821</td><td>.917</td><td>.804</td></tr><tr><td>IMDB</td><td>25,000 / 25,000</td><td>.918</td><td>.864</td></tr><tr><td>AG</td><td>120,000 / 7,600</td><td>.946</td><td>.919</td></tr></table>
+
+ Table 2: Dataset statistics and model performances (Macro-F1) on test sets.
+
+ Text classification models We adopt the two most common model architectures for text classification: (i) BERT (Devlin et al., 2019b); (ii) CNN (Kim, 2014). The former encodes contextualized representations of tokens and has higher accuracy in general, but at the cost of consuming more memory and computational resources. The latter uses pretrained word embeddings as token representations and is lighter and faster. Their performances on the test sets are shown in Table 2. The implementation details of both models can be found in Appendix C.1.
+
+ <table><tr><td rowspan="2">Faithfulness metric</td><td colspan="4">Diagnosticity (%)</td></tr><tr><td>SST</td><td>IMDB</td><td>AG</td><td>Average</td></tr><tr><td colspan="5">BERT</td></tr><tr><td>DFMIT</td><td>14.79</td><td>6.07</td><td>3.34</td><td>8.07</td></tr><tr><td>DFFOT</td><td>65.16</td><td>72.02</td><td>65.68</td><td>67.62</td></tr><tr><td>SUFF</td><td>71.03</td><td>79.33</td><td>70.42</td><td>73.60</td></tr><tr><td>COMP</td><td>75.38</td><td>80.44</td><td>74.23</td><td>76.69</td></tr><tr><td>CORR</td><td>65.46</td><td>68.06</td><td>67.23</td><td>66.91</td></tr><tr><td>MONO</td><td>75.87</td><td>75.82</td><td>68.33</td><td>73.34</td></tr><tr><td colspan="5">CNN</td></tr><tr><td>DFMIT</td><td>17.29</td><td>9.27</td><td>4.84</td><td>10.47</td></tr><tr><td>DFFOT</td><td>63.76</td><td>70.74</td><td>57.61</td><td>64.04</td></tr><tr><td>SUFF</td><td>71.54</td><td>75.91</td><td>77.97</td><td>75.14</td></tr><tr><td>COMP</td><td>71.39</td><td>73.46</td><td>81.73</td><td>75.53</td></tr><tr><td>CORR</td><td>72.17</td><td>68.92</td><td>71.82</td><td>70.97</td></tr><tr><td>MONO</td><td>72.39</td><td>77.09</td><td>75.12</td><td>74.87</td></tr></table>
+
+ Table 3: Diagnosticities of all faithfulness metrics on all datasets for both BERT and CNN models. The right-most column states the average diagnosticities over the three datasets. In each column, we underline the highest value.
265
+
266
+ $\varepsilon$ -faithfulness golden set For each dataset and text classification model, we transform the test set into a set of classification instances and feed it into Algorithm 1 to generate an $\varepsilon$ -faithfulness golden set with a size of 8,000 ( $K$ in Algorithm 1). The implementation details of interpretation methods can be found in Appendix C.2.
267
+
268
+ # 6 Results and Discussion
269
+
270
+ Diagnosticity We estimate the diagnosticities of the faithfulness metrics in Section 3 on all datasets for both CNN and BERT models. The results are shown in Table 3.
271
+
272
+ COMP and SUFF have the highest and the second highest average diagnosticities for both models. Hence, they are the most effective faithfulness metrics. We also observe that COMP has higher diagnosticities than SUFF on all datasets for the BERT model. This can be explained by the contextualization property of Transformer encoders (Vaswani et al., 2017): the hidden state of each token depends on all other tokens in the input sequence. Removing a portion of the important tokens will alter the whole context, and is likely to cause a dramatic change in model output.
275
+
276
+ DFMIT and DFFOT have the lowest and the second lowest average diagnosticities. Removing the most important token usually does not create enough perturbation to flip the original model decision. In fact, the probability of decision flipping by removing the most important token is $\leq 14\%$ for recent state-of-the-art interpretation methods (Chrysostomou and Aletras, 2021). As a result, up to $86\%$ of interpretations are treated as indistinguishable by DFMIT. For DFFOT, the probability of decision flipping by removing the important tokens in order does not only depend on the quality of the interpretation but also on any model bias towards certain classes. For instance, decision flipping will be less likely to occur if the predicted class on the original input is the same as the one on the empty input sequence. Therefore, we find that decision flip metrics (DFMIT, DFFOT) are less effective than the metrics that operate on output probabilities (SUFF, COMP, CORR, MONO).
277
+
278
+ Time complexity We compare the time complexities of the faithfulness metrics in Section 3, measured in the number of model forward passes. We first analyze their time complexities based on their definitions in Table 4 and then measure their actual time complexities on all datasets in Table 5.
279
+
280
+ DFMIT is the fastest faithfulness metric, requiring only one model forward pass. DFFOT has a non-deterministic time complexity, which depends on how quickly the decision flip occurs, and it is the second slowest faithfulness metric on all datasets. SUFF and COMP are the second fastest faithfulness metrics on average, requiring at most 5 model forward passes. CORR and MONO are the slowest faithfulness metrics, with time complexity equal to the number of input tokens.
281
+
282
+ Which faithfulness metric(s) should we adopt?
283
+
284
+ In Figure 1, we evaluate the faithfulness metrics by both their diagnosticities and time complexities.
285
+
286
+ Figure 1 suggests that we should always adopt COMP and SUFF, because (i) they have higher diagnosticities and lower time complexities than DFFOT; (ii) they have a similar level of diagnosticity and much lower time complexities than CORR and MONO; and (iii) DFMIT has diagnosticity less than 0.1, which is below an acceptable level.
287
+
288
+ <table><tr><td rowspan="2">Faithfulness metric</td><td colspan="2">Time complexity - Analysis (#(model forward passes))</td></tr><tr><td>Deterministic</td><td>Value or range</td></tr><tr><td>DFMIT</td><td>✓</td><td>1</td></tr><tr><td>DFFOT</td><td>✘</td><td>[1, lx]</td></tr><tr><td>SUFF</td><td>✓</td><td>min(5, lx)</td></tr><tr><td>COMP</td><td>✓</td><td>min(5, lx)</td></tr><tr><td>CORR</td><td>✓</td><td>lx</td></tr><tr><td>MONO</td><td>✓</td><td>lx</td></tr></table>
289
+
290
+ Table 4: Analysis of the time complexities of faithfulness metrics. $l_{x}$ denotes the number of input tokens.
291
+
292
+ <table><tr><td rowspan="2">Faithfulness metric</td><td colspan="4">Time complexity - Actual (#(model forward passes))</td></tr><tr><td>SST</td><td>IMDB</td><td>AG</td><td>Average</td></tr><tr><td>DFMIT</td><td>1.0</td><td>1.0</td><td>1.00</td><td>1.0</td></tr><tr><td>DFFOT</td><td>9.3</td><td>78.7</td><td>30.0</td><td>39.4</td></tr><tr><td>SUFF</td><td>5.0</td><td>5.0</td><td>5.0</td><td>5.0</td></tr><tr><td>COMP</td><td>5.0</td><td>5.0</td><td>5.0</td><td>5.0</td></tr><tr><td>CORR</td><td>20.3</td><td>193.1</td><td>47.7</td><td>87.1</td></tr><tr><td>MONO</td><td>20.3</td><td>193.1</td><td>47.7</td><td>87.1</td></tr></table>
293
+
294
+ Table 5: Actual time complexities of faithfulness metrics, measured by the average number of model forward passes on each dataset.
295
+
296
+ ![](images/982d8c5dfd4149603ecc20d798b65bd8594738c1372c7f00d3e99094736cd564.jpg)
297
+ Figure 1: Diagnosticity vs. time complexity for faithfulness metrics. The values are averages over all datasets and classification models. The faithfulness metrics near the top-right corner are more desirable than those near the bottom-left corner.
298
+
299
+ We would prefer COMP and SUFF over DFMIT even though the latter has the lowest time complexity.
300
+
301
+ Note that our evaluation framework can be used to compare any faithfulness metrics. In general, we prefer faithfulness metrics that have higher diagnosticity and lower time complexities, i.e. closer to the top-right corner in Figure 1. But what if one has a higher diagnosticity and the other one has a lower time complexity? In this case, we should consider diagnosticity first: a faithfulness metric should not be used if it cannot effectively assess faithfulness, i.e. diagnosticity below a certain threshold. In scenarios where we are subject to constraints of hardware or timeliness, we might need to select a faster metric with a lower but acceptable level of diagnosticity.
304
+
305
+ # 7 Related Work
306
+
307
+ Interpretation methods Interpretation methods can be roughly classified into two categories: model-based methods and post-hoc methods. Model-based methods refer to the construction of simple machine learning models whose internal decision logic can be easily interpreted, such as linear regression models, decision trees, etc. Post-hoc methods interpret the internal reasoning process behind the model after training. Generally, post-hoc methods can be divided into gradient-based and perturbation-based. A gradient-based interpretation method assumes the deep learning model is differentiable and discloses the decision making mechanism of the model according to the gradient information (Simonyan et al., 2014; Sundararajan et al., 2017; Shrikumar et al., 2019). A perturbation-based interpretation method interprets the model by perturbing the input of data samples and measuring how the predictions change (Robnik-Sikonja and Kononenko, 2008; Zeiler and Fergus, 2013; Ribeiro et al., 2016).
308
+
309
+ Interpretation method evaluation To assess the quality of different interpretation methods, various evaluation metrics have been proposed. Existing evaluation methods on interpretations can be broadly classified into two categories, plausibility and faithfulness. Plausibility measures if the interpretation agrees with human judgments on how a model makes a decision (Ribeiro et al., 2016; Doshi-Velez and Kim, 2017; Lundberg and Lee, 2017; DeYoung et al., 2020). However, even if the interpretation conforms to human criteria, it is not certain that it truly reflects the underlying decision mechanism behind the model. To this end, faithfulness measures the extent to which the inner decision-making mechanism actually relies on the identified important features (Arras et al., 2017; Serrano and Smith, 2019; Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; DeYoung et al., 2020; Chrysostomou and Aletras, 2021).
310
+
311
+ In general, existing faithfulness metrics are developed through a removal-based criterion, which measures the changes in model output when perturbing or removing tokens identified as important
312
+
313
+ by the interpretation. Serrano and Smith (2019) proposed a decision flipping metric that evaluates the proportion of tokens that need to be erased in order to change the model decision. Also using decision flip as an indicator, Chrysostomou and Aletras (2021) introduce a metric that counts the average flips that occur when removing the most important token marked by the interpretation method. In addition to decision flips, changes in model output probabilities by removing or retaining important tokens are also widely used to measure faithfulness (Arras et al., 2017; Arya et al., 2019; DeYoung et al., 2020).
314
+
315
+ Some recent work also focuses on the study of faithfulness metrics. Jacovi and Goldberg (2020) argued that the definition of faithfulness remains inconsistent and informal, and provided concrete guidelines on how evaluations of interpretation methods should and should not be conducted. More recently, Yin et al. (2021) discussed the limitations of removal-based faithfulness metrics and proposed two other quantitative criteria, namely sensitivity and stability. Different from the aforementioned previous work that does not focus on assessing faithfulness metrics, we mainly focus on the measurement of faithfulness and conduct a comprehensive study of existing faithfulness metrics.
316
+
317
+ # 8 Conclusion
318
+
319
+ In this paper, we propose a framework to quantitatively evaluate six widely adopted faithfulness metrics in terms of diagnosticity and time complexity. In particular, diagnosticity measures whether the faithfulness metric correctly favours relatively faithful interpretations over random ones; time complexity is concerned with computational efficiency, estimated by the average number of model forward passes. The experimental results show that sufficiency and comprehensiveness metrics outperform the other faithfulness metrics with higher diagnosticity and lower time complexity. For this reason, we suggest using these two metrics for faithfulness evaluation. We hope our work will bring more awareness to the standardization of faithfulness measurement. For future work, we would like to explore evaluating faithfulness metrics using a white-box model such as linear regression, from which we can derive an intrinsically faithful interpretation as the "ground truth".
320
+
321
+ # References
322
+
323
+ David Alvarez-Melis and Tommi S. Jaakkola. 2018. On the robustness of interpretability methods. arXiv:1806.08049. Presented at the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden.
324
+ Leila Arras, Franziska Horn, Grégoire Montavon, Klaus-Robert Müller, and Wojciech Samek. 2017. "What is relevant in a text document?": An interpretable machine learning approach. PLoS ONE, 12:e0181142.
325
+ Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, and Yunfeng Zhang. 2019. One explanation does not fit all: A toolkit and taxonomy of ai explainability techniques.
326
+ Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.
327
+ George Chrysostomou and Nikolaos Aletras. 2021. Improving the faithfulness of attention-based explanations with task-specific information for text classification. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 477-488, Online. Association for Computational Linguistics.
328
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019a. Bert: Pre-training of deep bidirectional transformers for language understanding.
329
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019b. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
330
+ Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to evaluate rationalized NLP models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4443-4458, Online. Association for Computational Linguistics.
331
+
332
+
333
+
334
+ Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning.
335
+ Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4198-4205, Online. Association for Computational Linguistics.
336
+ Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3543-3556, Minneapolis, Minnesota. Association for Computational Linguistics.
337
+ Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Linguistics.
338
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach.
339
+ Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization.
340
+ Scott Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions.
341
+ Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics.
342
+ Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.
343
+ Marco Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 97-101, San Diego, California. Association for Computational Linguistics.
344
+
345
+ Marko Robnik-Šikonja and Igor Kononenko. 2008. Explaining classifications for individual instances. IEEE Transactions on Knowledge and Data Engineering, 20(5):589-600.
346
+
347
+ Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2931-2951, Florence, Italy. Association for Computational Linguistics.
348
+
349
+ Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2019. Learning important features through propagating activation differences.
350
+
351
+ Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep inside convolutional networks: Visualising image classification models and saliency maps.
352
+
353
+ Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.
354
+
355
+ Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, page 3319-3328. JMLR.org.
356
+
357
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
358
+
359
+ Sarah Wiegrefe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11-20, Hong Kong, China. Association for Computational Linguistics.
360
+
361
+ Fan Yin, Zhouxing Shi, Cho-Jui Hsieh, and Kai-Wei Chang. 2021. On the faithfulness measurements for model interpretations.
362
+
363
+ Matthew D Zeiler and Rob Fergus. 2013. Visualizing and understanding convolutional networks.
364
+
365
+ Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS'15, page 649-657, Cambridge, MA, USA. MIT Press.
366
+
367
+ # A Proof of Theorem 4.1
368
+
369
+ Proof. Let $(u, v)$ be an interpretation pair. Then
370
+
371
+ $$
372
+ \begin{array}{l} \mathrm{P}(u \succ_{F} v \mid \mathrm{P}(u \succ v) = 1 - \varepsilon) \\ = \mathrm{P}(u \succ_{F} v \mid u \succ v)(1 - \varepsilon) + \mathrm{P}(u \succ_{F} v \mid u \nsucc v)\,\varepsilon \\ = \mathrm{D}(F) + \left[ \mathrm{P}(u \succ_{F} v \mid u \nsucc v) - \mathrm{P}(u \succ_{F} v \mid u \succ v) \right] \varepsilon \end{array}
373
+ $$
374
+
375
+ Since $-1 \leq \mathrm{P}(u \succ_{F} v \mid u \nsucc v) - \mathrm{P}(u \succ_{F} v \mid u \succ v) \leq 1$, we have
376
+
377
+ $$
378
+ \left| \mathrm{P}(u \succ_{F} v \mid \mathrm{P}(u \succ v) = 1 - \varepsilon) - \mathrm{D}(F) \right| \leq \varepsilon
379
+ $$
380
+
381
+
382
+
383
+ # B Proof of Lemma 4.2
384
+
385
+ Proof. From Definition 4.2, we have $\mathbb{1}(u\succ_F v)|(\mathrm{P}(u\succ v) > 1 - \varepsilon)\sim \mathrm{Bernoulli}(p)$ , where $p = \mathrm{D}(F)$ . Then based on the property of Bernoulli distribution, we know that the expected value of the random variable is equal to $p$ .
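The lemma's estimator can be sketched numerically: by the Bernoulli property above, diagnosticity $\mathrm{D}(F)$ is recovered as the sample mean of the indicator over interpretation pairs. The function and predicate names below are illustrative, not from the paper; `favours_faithful` stands in for the event $u \succ_F v$.

```python
def estimate_diagnosticity(favours_faithful, pairs):
    """Sample-mean estimator of diagnosticity D(F) (a sketch).

    `favours_faithful(pair)` is a hypothetical predicate for the event
    that metric F ranks the more faithful interpretation of the pair
    higher (u >_F v); by the lemma, the mean of this Bernoulli indicator
    over pairs is an unbiased estimate of D(F).
    """
    return sum(1 for pair in pairs if favours_faithful(pair)) / len(pairs)
```

With a predicate that fires on 8 of 10 pairs, the estimate is exactly 0.8, matching the underlying Bernoulli parameter.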
386
+
387
+ # C Implementation Details
388
+
389
+ # C.1 Text classification models
390
+
391
+ The text classification models are all implemented in PyTorch $^{2}$ . For BERT, we use "bert-base-uncased" from Huggingface transformers $^{3}$ as the pretrained model. We use the same set of hyperparameters regardless of dataset for fine-tuning: dropout rate 0.2, AdamW (Loshchilov and Hutter, 2019) with an initial learning rate of 2e-5, and batch size 32 with no warmup steps. We set the maximum number of fine-tuning epochs to 10 and perform early stopping when the performance on the test set does not improve for 3 consecutive epochs.
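The early-stopping rule used here (stop after 3 consecutive epochs without test-set improvement) can be sketched framework-independently; the class name and interface below are illustrative, not part of the paper's implementation.

```python
class EarlyStopping:
    """Stop training when the monitored score fails to improve for
    `patience` consecutive epochs (a sketch of the rule described above)."""

    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("-inf")
        self.bad_epochs = 0

    def step(self, score):
        """Record one epoch's score; return True when training should stop."""
        if score > self.best:
            self.best = score
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

For example, with scores 0.5, 0.6, 0.6, 0.55, 0.58 and patience 3, training stops after the third non-improving epoch.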
392
+
393
+ For the CNN classifier, we use a one-layer CNN encoder with a linear classifier. The embedding is initialized with the 300-dimensional pretrained GloVe word embeddings (Pennington et al., 2014). The CNN layer has 256 kernels of size 3. We use max-pooling and AdamW with an initial learning rate of 1e-3 and batch size 32, with no warmup steps. The maximum number of epochs is 40, with early stopping after 3 consecutive non-improving epochs.
394
+
395
+ # C.2 Interpretation methods
396
+
397
+ For LIME, Saliency Map, Integrated Gradients and DeepLift, we apply the implementation in Captum $^{4}$ . For Word Omission, we use our own implementation.
acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c26a71900c7ce14e299a9cd37b41b57ec187677bc46bc8873f061ee7d719f41e
3
+ size 291333
acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:72729f0641447f12abf78119d24a2b980caca28cf33568f65bde408ac284e306
3
+ size 421304
acomparisonofstrategiesforsourcefreedomainadaptation/43cd72cc-79a3-465b-85f7-d5b5c2d8e605_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b62984f2b20d33110c9d630ea2bbbc0fe9969f670d8e4ee656511ec3273fb004
3
+ size 99758
acomparisonofstrategiesforsourcefreedomainadaptation/43cd72cc-79a3-465b-85f7-d5b5c2d8e605_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e54506502c737e1a971315489d181552181a1d9f3d16493a0178405fb477ff94
3
+ size 122689
acomparisonofstrategiesforsourcefreedomainadaptation/43cd72cc-79a3-465b-85f7-d5b5c2d8e605_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cc3451b459a9d2eabd0c4dfa652a53d1da05b02fca3299192a960a9ffa4c92f6
3
+ size 454338
acomparisonofstrategiesforsourcefreedomainadaptation/full.md ADDED
@@ -0,0 +1,389 @@
1
+ # A Comparison of Strategies for Source-Free Domain Adaptation
2
+
3
+ Xin Su Yiyun Zhao Steven Bethard
4
+
5
+ University of Arizona
6
+
7
+ Tucson, AZ, USA
8
+
9
+ {xinsu, yiyunzhao, bethard}@email.arizona.edu
10
+
11
+ # Abstract
12
+
13
+ Data sharing restrictions are common in NLP, especially in the clinical domain, but there is limited research on adapting models to new domains without access to the original training data, a setting known as source-free domain adaptation. We take algorithms that traditionally assume access to the source-domain training data—active learning, self-training, and data augmentation—and adapt them for source-free domain adaptation. Then we systematically compare these different strategies across multiple tasks and domains. We find that active learning yields consistent gains across all SemEval 2021 Task 10 tasks and domains, but though the shared task saw successful self-trained and data augmented models, our systematic comparison finds these strategies to be unreliable for source-free domain adaptation.
14
+
15
+ # 1 Introduction
16
+
17
+ Deep neural networks achieve high performance in many tasks, but typically require annotated training data for each new domain. Domain adaptation algorithms aim to take models trained on one domain (the "source domain") and transfer the model's knowledge to another domain (the "target domain"). They typically try to do this without a huge amount of annotated data in the target domain. Domain adaptation can be easy if the source and target domain have similar distributions, but domains often differ substantially (Wilson and Cook, 2020).
18
+
19
+ While there has been much progress in domain adaptation methods (Kouw, 2018) and even in unsupervised domain adaptation where there are no target-domain labels (Ramponi and Plank, 2020), most methods assume access to the labeled source data. Yet this assumption is often not satisfied, especially in the clinical domain due to privacy concerns (Laparra et al., 2020).
20
+
21
+ SemEval 2021 Task 10 (Laparra et al., 2021), on source-free domain adaptation, called attention to
22
+
23
+ this challenging but more realistic scenario where labeled source data are not accessible, only the model trained on the source domain data can be shared<sup>1</sup>, and little or no labeled target data are available. Participants explored methods including self-training, active learning, and data augmentation (Laparra et al., 2021) but it is hard to make fair comparisons between algorithms since different teams varied in their base implementations.
24
+
25
+ We therefore conducted experiments to provide a systematic comparison of algorithms for source-free domain adaptation. Our contributions are:
26
+
27
+ 1. The first systematic comparison of self-training, active learning, and data augmentation for source-free domain adaptation, carried out across multiple tasks and domains.
28
+ 2. We identify a formulation of source-free active learning that consistently improves performance of the source-domain model, and sometimes even outperforms fine-tuning on a large set of labeled target domain data.
29
+ 3. We perform an error analysis across tasks and domains and show that the selected formulation of active learning corrects several types of errors that self-training does not.
30
+
31
+ Our code is publicly available.2
32
+
33
+ # 2 Related Work
34
+
35
+ # 2.1 Source-free Domain Adaptation
36
+
37
+ Recently, there has been rising interest in computer vision in developing methods for unsupervised source-free domain adaptation. Several works utilize a generative framework with a classifier trained on source data to generate labeled training examples (Kurmi et al., 2021; Li et al., 2020) or transfer the target examples to match the source style (Hou and Zheng, 2020; Sahoo et al., 2020). Other works use self-supervised pseudo-labeling. Liang et al. (2020) propose source hypothesis transfer, which freezes the classifier of the source model but fine-tunes its encoder, with the goal of reducing the entropy of individual output predictions while maintaining global diversity. They further augment this strategy with self-supervised pseudo-labels from a nearest-centroid classifier. Kim et al. (2020) select low self-entropy instances as class prototypes, pseudo-label the remaining target instances based on their distance to the class prototypes, and progressively update the model on target data in the manner of self-training.
38
+
39
+
40
+
41
+ Despite a growing number of computer vision studies on source-free domain adaptation, there is limited NLP research into this challenging but realistic scenario. Though there is partially related research on continual learning (de Masson d'Autume et al., 2019; Sun et al., 2020) and generalization of pre-trained models (Hendrycks et al., 2020), the only work to explicitly test source-free domain adaptation is SemEval 2021 Task 10 (Laparra et al., 2021), which asked participants to perform source-free domain adaptation on negation detection and time expression recognition. A variety of techniques were applied to this task, including active learning, self-training, and data augmentation. However, different techniques were applied by different participants with different baseline models, so the shared task results do not allow us to make fair comparisons between different techniques. In the current article, we implement and then systematically compare these different techniques.
42
+
43
+ # 2.2 Self-training
44
+
45
+ Self-training (Yarowsky, 1995; McClosky et al., 2006) trains a model on a labeled dataset $L$ and then iteratively makes predictions ("pseudo-labels") on an unlabeled dataset $U$ and re-trains. On each iteration, the examples in $U$ that the model labels with high confidence ("silver labels") are added to $L$ , and the model is retrained on the new, larger $L$ . This process is repeated until no more predictions are highly confident. Self-training has been applied to a variety of domain adaptation scenarios (Ruder and Plank, 2018; Yu et al., 2015; Cui and Bollegala, 2019), but always with the assumption that the original labeled data $L$ is available at each iteration. In source-free domain adaptation, $L$ is not available,
46
+
47
+ so source-free self-training could train on only the pseudo-labels, and it is unclear whether that would yield a superior or inferior model.
48
+
49
+ # 2.3 Active Learning
50
+
51
+ Active learning selects a small number of examples to be manually annotated, using strategies designed to select the examples that should most benefit the model. Various active learning selection strategies have been developed (see the survey of Settles, 2009), and recent work has shown the benefits of active learning even with pre-trained transformer models (Ein-Dor et al., 2020). Active learning is also frequently used in domain adaptation. For example, Chan and Ng (2007) applied uncertainty sampling for domain adaptation of word sense disambiguation models, and Rai et al. (2010) combined model confidence and a domain discriminator to select target-domain examples for sentiment analysis. As with self-training, active learning algorithms typically assume that the source-domain training data is available and can be combined with target-domain examples. Thus, the efficacy of source-free active learning is currently unclear.
52
+
53
+ # 2.4 Data Augmentation
54
+
55
+ Data Augmentation enhances limited data by using existing resources (WordNet, similar datasets, etc.) and/or rule-based transformations of the training data to create new training examples. A variety of data augmentation techniques have been proposed (see the survey of Liu et al., 2020) including back-translation (Sennrich et al., 2016; Wang et al., 2021), lexical-substitution (Zhou et al., 2019; Arefyev et al., 2020; Wei and Zou, 2019; Miao et al., 2020), noise injection (Wei and Zou, 2019), conditional generation (Juuti et al., 2020; Malandrakis et al., 2019; Kobayashi, 2018), and data transformation with task-specific rules or templates (Sahin and Steedman, 2018; Wang et al., 2021; Xu et al., 2020). Data augmentation assumes access to the source-domain training data, so cannot be used by itself in source-free domain adaptation. It could be coupled with source-free self-training or source-free active learning, but researchers have not yet systematically explored such combinations.
56
+
57
+ # 3 Data
58
+
59
+ We base our experiments off of the data and source-domain models from the tasks of SemEval 2021 Task 10: negation detection and time expression
60
+
61
+ <table><tr><td>Domain</td><td>Data Source</td><td>#</td></tr><tr><td colspan="3">Negation Detection Data</td></tr><tr><td>Source</td><td>SHARP Seed</td><td>10,259 sentences</td></tr><tr><td>Target: development</td><td>i2b2 2010</td><td>1109 sentences</td></tr><tr><td>Target: test</td><td>i2b2 2010</td><td>4436 sentences</td></tr><tr><td>Target: development</td><td>MIMIC III</td><td>1916 sentences</td></tr><tr><td>Target: test</td><td>MIMIC III</td><td>7664 sentences</td></tr><tr><td colspan="3">Time Expression Detection Data</td></tr><tr><td>Source</td><td>SemEval 2018 Task 6 clinical notes</td><td>278 documents</td></tr><tr><td>Target: development</td><td>SemEval 2018 Task 6 news articles</td><td>20 documents</td></tr><tr><td>Target: test</td><td>SemEval 2018 Task 6 news articles</td><td>79 documents</td></tr><tr><td>Target: development</td><td>Food security reports</td><td>4 documents</td></tr><tr><td>Target: test</td><td>Food security reports</td><td>13 documents</td></tr></table>
62
+
63
+ Table 1: Data summary for negation detection and time expression recognition tasks.
64
+
65
+ recognition. We select these tasks because:
66
+
67
+ 1. They represent real-world data-sharing problems: the negation source-domain data "cannot currently be distributed" and the time expression source-domain data is "difficult to gain access to due to the complex data use agreements" (Laparra et al., 2021). Only the task organizers had access to the data and permission to distribute models trained on the (de-identified) data.
68
+ 2. The annotation schemes are complex enough that the problem cannot be easily solved by manually annotating the target domain. Su et al. (2021) found that annotations from annotators given only the time annotation guidelines yielded no gains to models, while annotations from heavily trained annotators did yield gains.
69
+ 3. These two tasks suffer a large performance loss under domain shift: the source-trained model is $15+$ points of F1 lower on the target test set than on the source test set (Laparra et al., 2021).
70
+
71
+ The popular Amazon reviews sentiment analysis dataset (Blitzer et al., 2007) violates the points above: labeled source and target data are easily available, the annotation scheme is easy (it is artificially balanced and removes reviews with neutral labels, as others have noted (He et al., 2018; Miller, 2019)), and the source domain model performs well on the target domain (within 0-4 points of F1). We nonetheless include some experiments on this dataset in appendix A.3. We find that with simple data preprocessing and source-domain hyperparameter tuning, the source-domain model alone outperforms all domain adaptation models from Ye et al. (2020) and Ben-David et al. (2020).
72
+
73
+ SemEval 2021 Task 10 negation detection is a "span-in-context" classification task. The goal is to predict whether an event (denoted by two special tokens $<\mathsf{e}>$ and $</\mathsf{e}>$ ) in the sentence is negated by its context. For example, given the sentence:
74
+
75
+
76
+
77
+ Has no $<e>$ diarrhea $</e>$ and no new lumps or masses
78
+
79
+ the goal is to predict that diarrhea is negated by its context. The source-domain negation detection model was trained on Mayo clinic clinical notes. The target domains are Partners HealthCare clinical notes from the i2b2 2010 Challenge and Beth Israel ICU progress notes from the MIMIC III corpus.
80
+
81
+ SemEval 2021 Task 10 time expression recognition is a sequence-tagging task. The goal is to identify the time entities in the document and label them with SCATE types (Bethard and Parker, 2016). For example, given the sentence:
82
+
83
+ the patient underwent appendicitis surgery on August 29, 2018,
84
+
85
+ the goal is to label August as Month-Of-Year, 29 as Day-Of-Month, and 2018 as Year. The source-domain time expression recognition model was trained on the Mayo Clinic clinical notes of SemEval 2018 Task 6 (Laparra et al., 2018). The target domains are news articles (also from SemEval 2018 Task 6) and reports from food security warning systems including the UN World Food Programme and the Famine Early Warning Systems Network.
86
+
87
+ Each task has a model trained from a source domain and a test set for each of two target domains. For each target domain, we split the data into $20\%$ as a development set and $80\%$ as a test set. Detailed data information is shown in table 1.
88
+
89
+ Source data We do not use source domain data. We use only the English RoBERTa-base models (Liu et al., 2019) (approx. 125M parameters) that the task organizers fine-tuned on the source domain data sets via the Huggingface Transformers library v3.5.1 (Wolf et al., 2020).
90
+
91
+
92
+
93
+ Target development data We use the development data for fine-tuning the model. For active learning, to simulate manual annotation, we fine-tune on a small number of automatically selected labeled examples. For self-training, no labels are used; we fine-tune on predictions (pseudo-labels) generated by the model on the development data. For oracle experiments, we fine-tune the model on all labeled examples in the development set.
94
+
95
+ Target test data We evaluate on the test data. No fine-tuning is performed. Models always treat this data as unlabeled<sup>3</sup>. Its labels are used only during evaluation. We use the same evaluation metrics as in SemEval 2021 Task 10: precision, recall, and F1 score.
96
+
97
+ # 4 Research Questions
98
+
99
+ We aim for a systematic analysis of three strategies with many different implementations in SemEval 2021 Task 10: self-training, active learning, and data augmentation. Our research questions are:
100
+
101
+ 1. How much can we gain from having human intervention (active learning) and not just the model alone (self-training)?
102
+ 2. For active learning, given a fixed annotation budget, is it better to do several iterations of selecting examples for annotation and retraining the model, or to select and retrain just once?
103
+ 3. For self training, given a fixed confidence threshold, is it better to do several iterations of generating pseudo-labels and retraining the model, or to generate and train only once?
104
+ 4. In each iteration of active learning or self-training, should we use the training data from the previous iteration or start anew?
105
+ 5. In each iteration of active learning or self-training, should we continue training the model from the previous iteration or the model from the source-domain?
106
+ 6. Do active learning and self-training improve with data augmentation or work better alone?
107
+
108
+ # 5 Method
109
+
110
+ We design source-free variants of self-training, active learning, and data augmentation that incorporate the following parameters, allowing us to investigate the questions above.
111
+
112
+ Algorithm 1: Source-Free Self-training Algorithm
113
+ Input: M: the source-domain model D: the unlabeled target domain data $\tau$ : the self-training threshold T: the maximum number of iterations $S_{D}$ : the data construction strategy $S_{M}$ : the model training strategy $S_{A}$ : the data augmentation strategy
114
+ 1 $M_0\gets Copy(M)$
115
+ 2 $D_0\gets Copy(D)$
116
+ 3 $L\gets \emptyset$
117
+ 4 for $i\gets 0$ to $T$ do
118
+ 5 if $D = \emptyset$ then
119
+ 6 Stop training
120
+ 7 if $S_{D} =$ ResetData then
121
+ 8 $L = \emptyset$
122
+ 9 $D = D_0$
123
+ 10 $L_{C_i}\gets \{(d,M(d))$ for $d\in D$ if $M(d)$ confidence $> \tau \}$
124
+ 11 if $L_{C_i} = \emptyset$ or $L_{C_i} = L_{C_{i - 1}}$ then
125
+ 12 Stop training
126
+ 13 $L = L\cup L_{C_i}$
127
+ 14 if $S_{D} =$ KeepData then
128
+ 15 $D\gets D - \{d$ for $(d,l)\in L_{C_i}\}$
129
+ 16 if $S_A =$ Augment then
130
+ 17 $L\gets L\cup$ Augment $(L_{C_i})$
131
+ 18 if $S_M =$ ResetModel then
132
+ 19 $M\gets M_0$
133
+ 20 Fine-tune M on $L$
134
+
135
+ $T$ the maximum number of iterations for self-training or active learning
136
+
137
+ $S_{D}$ the data construction strategy: KeepData to keep the training data from the previous iteration, or ResetData to start anew on each iteration.
138
+
139
+ $S_{M}$ the model training strategy: KeepModel to continue training the model from the previous iteration, or ResetModel to continue training from the source-domain model.
140
+
141
+ $S_{A}$ whether or not to use data augmentation.
142
+
143
+ # 5.1 Source-Free Self-training
144
+
145
+ Algorithm 1 presents our self-training algorithm. It follows standard self-training (Yarowsky, 1995) in using the model to add pseudo-labels to the unlabeled data (line 10). However, there is no source-domain labeled data, so the model can fine-tune only on the pseudo-labels. The remainder of the code ensures that models and/or data are kept, reset, or augmented as per the selected strategies.
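The core loop of Algorithm 1 (under the KeepData/KeepModel strategies, without augmentation) might be sketched as follows. `predict`, `confidence`, and `finetune` are stand-ins for the model's prediction, its confidence score, and a fine-tuning step; none of them are specified by this sketch.

```python
def source_free_self_train(predict, confidence, finetune, data,
                           tau=0.9, max_iters=10):
    """Sketch of source-free self-training: with no source labels, the
    model is fine-tuned only on its own confident pseudo-labels."""
    pseudo_labeled = {}            # example -> pseudo-label
    remaining = list(data)
    for _ in range(max_iters):
        confident = {d: predict(d) for d in remaining if confidence(d) > tau}
        if not confident:
            break                  # no new confident predictions: stop
        pseudo_labeled.update(confident)
        remaining = [d for d in remaining if d not in confident]
        finetune(pseudo_labeled)   # train on pseudo-labels only
    return pseudo_labeled
```

With stub functions in place of the model, only examples whose confidence exceeds the threshold $\tau$ ever enter the training set, and the loop terminates once no new confident predictions appear.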
146
+
147
+ Self-training requires a measure of model confidence on each prediction. In both tasks, we add pseudo-labeled training data a sentence at a time, so we measure confidence at the sentence level. In negation detection, we use the predicted probability
148
+
149
+ Algorithm 2: Source-Free Active Learning Algorithm
150
+ Input: M: the source-domain model D: the development set of the target domain T: the maximum number of iterations K: the number of annotations per iteration $S_{D}$ : the data construction strategy $S_{M}$ : the model training strategy $S_{A}$ : the data augmentation strategy
151
+ 1 $M_0\gets \mathrm{Copy}(M)$
152
+ 2 $D_0\gets \mathrm{Copy}(D)$
153
+ 3 $L\gets \emptyset$
154
+ 4 for $i\gets 0$ to $T$ do
155
+ 5 if $S_{D} = \mathrm{ResetData}$ then
156
+ 6 $L = \emptyset$
157
+ 7 $D = D_0$
158
+ 8 $D_U\gets$ [d for $d\in D$ sorted by uncertainty of $M(d)]$
159
+ 9 $L_{U}\leftarrow$ $\{(d,\mathrm{Annotate}(d))$ for $d\in$ top $K$ of $D_U\}$
160
+ 10 $L\gets L\cup L_U$
161
+ 11 if $S_{D} = \mathrm{KeepData}$ then
162
+ 12 $D\gets D - \{d$ for $(d,l)\in L_U\}$
163
+ 13 if $S_A = \mathrm{Augment}$ then
164
+ 14 $L\gets L\cup \mathrm{Augment}(L_U)$
165
+ 15 if $S_M = \mathrm{ResetModel}$ then
166
+ 16 $M\gets M_0$
167
+ 17 Fine-tune $M$ on $L$
168
+
169
+ at RoBERTa's special sentence-initial token $\langle \mathrm{s} \rangle$ . In time expression recognition, we use the average of the predicted probabilities of the most probable class of each token.
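For time expression recognition, the sentence-level confidence just described (the mean over tokens of the top-class probability) could be computed as follows; the function name and input format are illustrative.

```python
def sentence_confidence(token_probs):
    """Average over tokens of the probability of each token's most
    probable class (a sketch; in the paper the per-token distributions
    come from the RoBERTa tagger's softmax outputs)."""
    return sum(max(dist) for dist in token_probs) / len(token_probs)
```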
170
+
171
+ # 5.2 Source-Free Active Learning
172
+
173
+ Algorithm 2 presents our active learning algorithm. It follows an approach similar to Su et al. (2021). Like most active learning algorithms, the core is to select examples the model is uncertain of (line 8) and then manually annotate them (line 9). Since our development sets are already annotated, we simulate annotation by simply revealing the (previously hidden) labels for the selected examples.
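The selection step of Algorithm 2 (lines 8-9) amounts to uncertainty sampling; a minimal sketch, with `uncertainty` an abstract per-example score:

```python
def select_for_annotation(data, uncertainty, k):
    """Rank unlabeled sentences by model uncertainty (highest first) and
    return the top k for (simulated) annotation, as in Algorithm 2."""
    return sorted(data, key=uncertainty, reverse=True)[:k]
```

The selected examples are then annotated (here, by revealing their hidden labels) and added to the fine-tuning set.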
174
+
175
+ Active learning requires a measure of model uncertainty on each prediction. In both tasks, we add annotations a sentence at a time, so we measure uncertainty at the sentence level. In negation detection, we use the predicted entropy at RoBERTa's special sentence-initial token, $\langle \mathrm{s} \rangle$ . In time expression recognition, we use the average of the predicted entropies of the tokens in the sentence.
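The sentence-level uncertainty just described, the mean per-token entropy of the predicted distribution, can be sketched as:

```python
import math

def sentence_uncertainty(token_probs):
    """Average over tokens of the entropy of each predicted distribution
    (a sketch of the uncertainty measure described above)."""
    def entropy(dist):
        return -sum(p * math.log(p) for p in dist if p > 0)
    return sum(entropy(d) for d in token_probs) / len(token_probs)
```

A uniform two-class distribution yields the maximal entropy $\ln 2$, while a fully confident prediction yields zero, so high scores mark sentences the model is least sure about.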
+
+ # 5.3 Data Augmentation
+
+ Inspired by Miao et al. (2020), we use a pool-based data augmentation method to automatically increase the size of the training set.
+
+ In negation detection, we construct a pool of all event words in the unlabeled target domain test data. For each development data example to be augmented, we substitute its event with $n$ randomly-sampled words from the pool. For example, if data augmentation is performed on the sentence Has no `<e>`diarrhea`</e>`, we replace diarrhea with random words from the pool, resulting in sentences like Has no `<e>`asthma`</e>`.
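This event substitution can be sketched in a few lines. The sketch assumes the event span is marked with literal `<e>` and `</e>` tags, as in the example above; the function name and arguments are hypothetical.

```python
import random

def augment_negation(sentence, event_pool, n, rng=random):
    """Pool-based augmentation for negation detection: replace the marked
    event span (between <e> and </e>) with n randomly sampled pool words."""
    before, rest = sentence.split("<e>", 1)
    _event, after = rest.split("</e>", 1)
    sampled = rng.sample(event_pool, n)  # n distinct words from the pool
    return [before + "<e>" + word + "</e>" + after for word in sampled]
```

Each call yields $n$ new sentences that keep the original negation context but vary the event word.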
+
+ In time expression recognition, we construct a pool of words for each time entity type using the guidelines of the SCATE annotation schema, excluding words that do not appear in the unlabeled target domain test data. For each entity in a development data example to be augmented, we substitute it with $n$ randomly-sampled words from the pool for its entity type. For example, in the sentence, the patient underwent appendicitis surgery on August 29, 2018, there are three time entities (August: Month-Of-Year, 29: Day-Of-Month, 2018: Year). Data augmentation can therefore generate up to $n \times 3$ sentences with different years, months, and days, e.g., the patient underwent appendicitis surgery on September 1st, 2017.
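For the token-labeled time expression task, the same idea uses one pool per entity type. The sketch below simplifies to single-token entities and hypothetical pool contents; it keeps the token-level labels unchanged while swapping entity tokens.

```python
import random

def augment_timex(tokens, labels, pools, n, rng=random):
    """Pool-based augmentation for time expression recognition: for each
    single-token time entity, substitute n words sampled from the pool of
    that entity's type, keeping the token-level labels unchanged."""
    variants = []
    for i, label in enumerate(labels):
        if label == "O" or label not in pools:
            continue  # only time entities with a pool are substituted
        for word in rng.sample(pools[label], n):
            new_tokens = list(tokens)
            new_tokens[i] = word
            variants.append((new_tokens, list(labels)))
    return variants
```

A sentence with three time entities therefore yields up to $n \times 3$ augmented sentences, matching the example in the text.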
+
+ # 6 Experiments
+
+ The input to the source-domain models for both tasks is a sentence. The output of the negation detection model is a sentence label (negated or not negated). The output of the time expression model is one label per token (its time entity type). For both tasks, we use the conventional RoBERTa input format, surrounding the sentence with the special tokens $\langle \mathrm{s} \rangle$ and $\langle \mathrm{/s} \rangle$ . The negation detection data is already split into sentences. For the time expression recognition data, we split it into sentences using the English sentencizer from spaCy v2.3.2 (Honnibal et al., 2020).
+
+ When we fine-tune the source-domain model on the target domain, we keep the same training hyperparameters that the shared task organizers used when training the models on the source domains. In source-free domain adaptation, there is no (or very little) labeled development data available, so it is not possible to tune hyperparameters. All hyperparameters are given in appendix A.1. All experiments are run on a single Nvidia P100 GPU, for a total of approximately 70 GPU hours.
+
+ In self-training, we set the threshold $\tau$ to 0.95, and experiment with running just a single iteration and with running 30 iterations with the different $S_{D}$ and $S_{M}$ strategies. The threshold and the number of iterations are adapted from Su et al. (2021). Training may run for fewer iterations when the stopping conditions are met. In active learning, we set our annotation budget to 96 sentences, and experiment with spending these 96 sentences at once and in 8 iterations with the different $S_{D}$ and $S_{M}$ strategies. For all experiments, we run one version with data augmentation (with $n = 5$ ) and one without.
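The self-training selection step with the confidence threshold $\tau$ can be sketched as follows; `predict` is a hypothetical stand-in returning the model's most probable label and its confidence (the sentence-level confidence described in Section 5.1).

```python
def select_pseudo_labeled(examples, predict, tau=0.95):
    """Self-training selection sketch: keep model predictions whose
    confidence is at least tau as pseudo-labeled training examples."""
    pseudo = []
    for x in examples:
        label, confidence = predict(x)  # e.g. (average) max softmax probability
        if confidence >= tau:
            pseudo.append((x, label))
    return pseudo
```

Only high-confidence predictions enter the pseudo-labeled set; the rest of the unlabeled data is reconsidered in later iterations.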
+
+ For each source and target domain pair, we compare our adapted model with the following models.
+
+ 1. Source-Domain Model: The baseline. It is unadapted, trained only on the source domain.
+ 2. Fine-Tuned Source-Domain Model: The oracle. It is fine-tuned on the target domain using the entire labeled development set.
+ 3. Self-Distilled Model: A RoBERTa-base model fine-tuned on the development set using pseudo labels generated by the source-domain model.
+ 4. Passive Learning Model: The source-domain model fine-tuned on 96 randomly sampled examples from the labeled development set.
+
+ # 7 Discussion
+
+ Tables 2 and 3 show the results of our experiments. We are less interested in the best model for a particular configuration than in which configurations are successful across multiple tasks and domains. This is because in source-free domain adaptation, there is typically no (or very little) labeled target domain data available for hyperparameter tuning. Therefore, what we need is a universal strategy that does not require careful tuning.
+
+ For source-free active learning, we find that even small amounts of annotated data are useful, and that smart data selection (e.g., using uncertainty scores) is usually helpful. The active learning KeepData models (rows 6, 8, 11, and 13 in tables 2 and 3) have higher F1s than the baseline source-domain models across all tasks and domains (0.054 F1 higher on average). Active learning KeepData models also outperform passive learning models (which randomly select data) in 14 out of 16 cases, and are at least as good as, and typically much better than, the self-training models (rows 15-24 in tables 2 and 3). The ResetModel+ResetData models always have the worst F1s of the active learning models (rows 7 and 12 in tables 2 and 3).
+
+ Several active learning models achieve higher F1s than the "oracle" model that was fine-tuned on the full labeled development set (rows 8, 10, 11, 13, 14 in table 3 Time: News and rows 8, 11, 14 in table 3 Time: Food). This emphasizes a challenge of source-free domain adaptation: more data is not always better data. Since we do not have access to the source domain training data, if we fine-tune on too much target domain data the model may start to forget what it learned on the source domain, i.e., "catastrophic forgetting" (McCloskey and Cohen, 1989). In these cases, the active learning models, by selecting a small set of just the most uncertain examples, reap the benefits of knowing something about the target domain without losing what they learned from the source domain.
+
+ For source-free self-training, we find that iteratively updating both model and data performs slightly above baseline, and that it is better to start from the source-domain model than from RoBERTa without fine-tuning. The KeepModel+KeepData configuration (without data augmentation) is slightly above the source-domain model across all tasks and domains (0.013 F1 higher on average). Every other configuration, even if it outperforms KeepModel+KeepData in one task or domain, falls below the source-domain baseline in another. All self-trained models without data augmentation (which start from the source-domain model) do at least outperform self-distilled models (which start from the RoBERTa model without fine-tuning; row 3 in tables 2 and 3). The small gains from the only self-training configuration that consistently outperformed the source-domain model suggest that self-training may not be worthwhile for source-free domain adaptation.
+
+ Data augmentation helped in some cases (e.g., self-training for time expression recognition on news) and hurt in others (e.g., self-training for time expression recognition on food security). Data augmentation sometimes led to ill-behaved models: on the negation MIMIC-III dataset, data augmentation made the self-trained model predict all examples as not negated, resulting in 0.000 F1 (rows 21-24 in table 2: Negation: MIMIC-III). This suggests that data augmentation (or at least the variants of it that we explored) is probably not viable for source-free domain adaptation, where no labeled data for tuning strategies is available.
+
+ We thus make the following suggestions for source-free domain adaptation:
+
+ 1. If there is sufficient expertise to label the data, use active learning and iteratively adapt the model with the KeepModel+KeepData strategy instead of spending the annotation budget all at once. This is the best model without data augmentation in three of the four domains (Negation: MIMIC-III, Time: News, Time: Food). Note that expertise is important: Su et al. (2021) found that active learning with non-experts in the face of a complex annotation scheme did not yield performance improvements.

+ <table><tr><td rowspan="2">#</td><td rowspan="2">Strategy</td><td colspan="3">Negation: MIMIC-III</td><td colspan="3">Negation: i2b2</td></tr><tr><td>F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td></tr><tr><td>1</td><td>Source-Domain Model (baseline)</td><td>0.656</td><td>0.921</td><td>0.510</td><td>0.837</td><td>0.855</td><td>0.820</td></tr><tr><td>2</td><td>Fine-Tuned Source-Domain Model (oracle)</td><td>0.868</td><td>0.875</td><td>0.862</td><td>0.925</td><td>0.928</td><td>0.922</td></tr><tr><td>3</td><td>Self-Distilled Model</td><td>0.623</td><td>0.825</td><td>0.501</td><td>0.846</td><td>0.849</td><td>0.842</td></tr><tr><td>4</td><td>Passive Learning Model</td><td>0.722</td><td>0.792</td><td>0.663</td><td>0.882</td><td>0.914</td><td>0.853</td></tr><tr><td colspan="8">Active Learning</td></tr><tr><td>5</td><td>AL (96 × 1)</td><td>0.759</td><td>0.901</td><td>0.656</td><td>0.886</td><td>0.943</td><td>0.836</td></tr><tr><td>6</td><td>AL (12 × 8) + ResetModel + KeepData</td><td>0.800</td><td>0.828</td><td>0.774</td><td>0.891</td><td>0.951</td><td>0.838</td></tr><tr><td>7</td><td>AL (12 × 8) + ResetModel + ResetData</td><td>0.618</td><td>0.842</td><td>0.489</td><td>0.778</td><td>0.972</td><td>0.649</td></tr><tr><td>8</td><td>AL (12 × 8) + KeepModel + KeepData</td><td>0.817</td><td>0.867</td><td>0.773</td><td>0.859</td><td>0.852</td><td>0.865</td></tr><tr><td>9</td><td>AL (12 × 8) + KeepModel + ResetData</td><td>0.777</td><td>0.890</td><td>0.689</td><td>0.877</td><td>0.928</td><td>0.831</td></tr><tr><td colspan="8">Active Learning + Data Augmentation</td></tr><tr><td>10</td><td>AL (96 × 1) + DA (5)</td><td>0.708</td><td>0.652</td><td>0.773</td><td>0.883</td><td>0.937</td><td>0.834</td></tr><tr><td>11</td><td>AL (12 × 8) + ResetModel + KeepData + DA (5)</td><td>0.805</td><td>0.803</td><td>0.806</td><td>0.891</td><td>0.960</td><td>0.831</td></tr><tr><td>12</td><td>AL (12 × 8) + ResetModel + ResetData + DA (5)</td><td>0.586</td><td>0.489</td><td>0.730</td><td>0.817</td><td>0.960</td><td>0.710</td></tr><tr><td>13</td><td>AL (12 × 8) + KeepModel + KeepData + DA (5)</td><td>0.805</td><td>0.878</td><td>0.744</td><td>0.881</td><td>0.925</td><td>0.841</td></tr><tr><td>14</td><td>AL (12 × 8) + KeepModel + ResetData + DA (5)</td><td>0.745</td><td>0.882</td><td>0.645</td><td>0.889</td><td>0.929</td><td>0.852</td></tr><tr><td colspan="8">Self-training</td></tr><tr><td>15</td><td>ST (1)</td><td>0.677</td><td>0.916</td><td>0.537</td><td>0.854</td><td>0.871</td><td>0.838</td></tr><tr><td>16</td><td>ST (30) + ResetModel + KeepData</td><td>0.679</td><td>0.937</td><td>0.533</td><td>0.857</td><td>0.876</td><td>0.839</td></tr><tr><td>17</td><td>ST (30) + ResetModel + ResetData</td><td>0.695</td><td>0.912</td><td>0.562</td><td>0.861</td><td>0.880</td><td>0.843</td></tr><tr><td>18</td><td>ST (30) + KeepModel + KeepData</td><td>0.664</td><td>0.906</td><td>0.525</td><td>0.864</td><td>0.890</td><td>0.840</td></tr><tr><td>19</td><td>ST (30) + KeepModel + ResetData</td><td>0.654</td><td>0.879</td><td>0.521</td><td>0.858</td><td>0.883</td><td>0.834</td></tr><tr><td colspan="8">Self-training + Data Augmentation</td></tr><tr><td>20</td><td>ST (1) + DA (5)</td><td>0.654</td><td>0.943</td><td>0.501</td><td>0.863</td><td>0.894</td><td>0.833</td></tr><tr><td>21</td><td>ST (30) + ResetModel + KeepData + DA (5)</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.861</td><td>0.887</td><td>0.838</td></tr><tr><td>22</td><td>ST (30) + ResetModel + ResetData + DA (5)</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.864</td><td>0.897</td><td>0.834</td></tr><tr><td>23</td><td>ST (30) + KeepModel + KeepData + DA (5)</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.854</td><td>0.869</td><td>0.839</td></tr><tr><td>24</td><td>ST (30) + KeepModel + ResetData + DA (5)</td><td>0.000</td><td>0.000</td><td>0.000</td><td>0.855</td><td>0.885</td><td>0.827</td></tr></table>

+ Table 2: Performance of domain adaptation strategies on the negation detection target domains. AL $(k\times i)$ is active learning with $k$ samples and $i$ iterations. ST $(i)$ is self-training up to $i$ iterations. DA $(n)$ is augmenting each example with up to $n$ new examples. The best scores are in bold and the worst scores are underlined.
+
+ 2. Self-training and data augmentation, at least as implemented here, are not good choices for source-free domain adaptation: sometimes they led to gains, and sometimes they led to losses. While a good strategy could be found by labeling some target domain data and performing hyperparameter search, such annotation effort would have a higher payoff if used for active learning instead.
+ 3. Active learning is better than passive learning: smart example selection is better than random example selection.
+ 4. Self-training is better than self-distillation: the models benefit from the task knowledge learned from the source domain.
+
+ Our systematic analysis allowed us to make the above more specific suggestions than the shared task's main suggestion that "the best performing [systems] incorporated... active-learning, handcrafted heuristics or semiautomatically building a training set" (Laparra et al., 2021).
+
+ # 8 Error Analysis
+
+ We performed an error analysis to try to determine if different adaptation strategies resulted in different types of errors being corrected (as compared to the source domain model). For negation detection, we sampled and categorized around 200 errors of the source-domain model for each target domain. When the model failed to predict a negation, we manually categorized the error by the negation cue (no, free, absent, etc.). When the model predicted a negation it should not have, we manually categorized the error into "wrong cue" (there was a negation cue in the sentence but it did not apply to the target event) or "short sentence" (especially on the i2b2 domain, the model tended to predict all short sentences as negated). For time expression recognition, we categorized all errors of the source-domain model by entity type (inside-outside-beginning format) for each target domain.

+ <table><tr><td rowspan="2">#</td><td rowspan="2">Strategy</td><td colspan="3">Time: News</td><td colspan="3">Time: Food</td></tr><tr><td>F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td></tr><tr><td>1</td><td>Source-Domain Model (baseline)</td><td>0.771</td><td>0.772</td><td>0.770</td><td>0.781</td><td>0.834</td><td>0.734</td></tr><tr><td>2</td><td>Fine-Tuned Source-Domain Model (oracle)</td><td>0.844</td><td>0.826</td><td>0.864</td><td>0.851</td><td>0.841</td><td>0.861</td></tr><tr><td>3</td><td>Self-Distilled Model</td><td>0.572</td><td>0.590</td><td>0.555</td><td>0.766</td><td>0.831</td><td>0.711</td></tr><tr><td>4</td><td>Passive Learning Model</td><td>0.796</td><td>0.783</td><td>0.809</td><td>0.770</td><td>0.755</td><td>0.785</td></tr><tr><td colspan="8">Active Learning</td></tr><tr><td>5</td><td>AL (96 × 1)</td><td>0.812</td><td>0.800</td><td>0.825</td><td>0.819</td><td>0.821</td><td>0.818</td></tr><tr><td>6</td><td>AL (12 × 8) + ResetModel + KeepData</td><td>0.812</td><td>0.794</td><td>0.830</td><td>0.842</td><td>0.844</td><td>0.840</td></tr><tr><td>7</td><td>AL (12 × 8) + ResetModel + ResetData</td><td>0.771</td><td>0.771</td><td>0.770</td><td>0.781</td><td>0.832</td><td>0.737</td></tr><tr><td>8</td><td>AL (12 × 8) + KeepModel + KeepData</td><td>0.861</td><td>0.844</td><td>0.879</td><td>0.872</td><td>0.866</td><td>0.879</td></tr><tr><td>9</td><td>AL (12 × 8) + KeepModel + ResetData</td><td>0.772</td><td>0.758</td><td>0.787</td><td>0.781</td><td>0.797</td><td>0.765</td></tr><tr><td colspan="8">Active Learning + Data Augmentation</td></tr><tr><td>10</td><td>AL (96 × 1) + DA (5)</td><td>0.856</td><td>0.829</td><td>0.884</td><td>0.840</td><td>0.824</td><td>0.855</td></tr><tr><td>11</td><td>AL (12 × 8) + ResetModel + KeepData + DA (5)</td><td>0.860</td><td>0.830</td><td>0.893</td><td>0.856</td><td>0.840</td><td>0.873</td></tr><tr><td>12</td><td>AL (12 × 8) + ResetModel + ResetData + DA (5)</td><td>0.790</td><td>0.748</td><td>0.836</td><td>0.793</td><td>0.782</td><td>0.805</td></tr><tr><td>13</td><td>AL (12 × 8) + KeepModel + KeepData + DA (5)</td><td>0.849</td><td>0.820</td><td>0.881</td><td>0.841</td><td>0.821</td><td>0.863</td></tr><tr><td>14</td><td>AL (12 × 8) + KeepModel + ResetData + DA (5)</td><td>0.853</td><td>0.828</td><td>0.879</td><td>0.856</td><td>0.831</td><td>0.881</td></tr><tr><td colspan="8">Self-training</td></tr><tr><td>15</td><td>ST (1)</td><td>0.753</td><td>0.733</td><td>0.774</td><td>0.777</td><td>0.807</td><td>0.750</td></tr><tr><td>16</td><td>ST (30) + ResetModel + KeepData</td><td>0.786</td><td>0.791</td><td>0.782</td><td>0.780</td><td>0.815</td><td>0.747</td></tr><tr><td>17</td><td>ST (30) + ResetModel + ResetData</td><td>0.727</td><td>0.688</td><td>0.770</td><td>0.787</td><td>0.815</td><td>0.761</td></tr><tr><td>18</td><td>ST (30) + KeepModel + KeepData</td><td>0.784</td><td>0.777</td><td>0.792</td><td>0.786</td><td>0.832</td><td>0.745</td></tr><tr><td>19</td><td>ST (30) + KeepModel + ResetData</td><td>0.633</td><td>0.551</td><td>0.743</td><td>0.789</td><td>0.829</td><td>0.752</td></tr><tr><td colspan="8">Self-training + Data Augmentation</td></tr><tr><td>20</td><td>ST (1) + DA (5)</td><td>0.800</td><td>0.794</td><td>0.805</td><td>0.756</td><td>0.787</td><td>0.726</td></tr><tr><td>21</td><td>ST (30) + ResetModel + KeepData + DA (5)</td><td>0.789</td><td>0.790</td><td>0.788</td><td>0.754</td><td>0.780</td><td>0.730</td></tr><tr><td>22</td><td>ST (30) + ResetModel + ResetData + DA (5)</td><td>0.795</td><td>0.792</td><td>0.798</td><td>0.765</td><td>0.788</td><td>0.744</td></tr><tr><td>23</td><td>ST (30) + KeepModel + KeepData + DA (5)</td><td>0.794</td><td>0.801</td><td>0.788</td><td>0.759</td><td>0.786</td><td>0.734</td></tr><tr><td>24</td><td>ST (30) + KeepModel + ResetData + DA (5)</td><td>0.797</td><td>0.791</td><td>0.802</td><td>0.747</td><td>0.771</td><td>0.724</td></tr></table>

+ Table 3: Performance of domain adaptation strategies on the time expression recognition target domains. AL $(k\times i)$ is active learning with $k$ samples and $i$ iterations. ST $(i)$ is self-training up to $i$ iterations. DA $(n)$ is augmenting each time entity with up to $n$ new examples. The best scores are in bold and the worst scores are underlined.
+
+ For both tasks, we then calculated how many of these source-domain model errors the best adapted models continued to make. Heatmaps of these analyses are plotted in appendix A.2. Across all tasks and domains, we see that the best self-trained models correct errors roughly evenly across source-domain error categories, while the best active learning models correct different errors, more like the oracle (target-fine-tuned) model. For example, the oracle model and active learning adapted models correct many more "wrong cue" errors in the negation i2b2 domain, more denies and none errors in the negation MIMIC-III domain, more B-Period and B-Month-Of-Year entities in the time news domain, and more B-Season-Of-Year, I-Season-Of-Year, and B-This entities in the time food domain.
+
+ Some error types appear to be learnable only with substantially more data. Only the oracle model is able to correct errors with the non and afebrile negation cues in the i2b2 domain and with the hold negation cue in the MIMIC-III domain. This suggests that the source-domain model may be very confident in some types of wrong examples, causing them not to be selected in active learning and to generate poor pseudo-labels in self-training.
+
+ # 9 Conclusion
+
+ In this paper, we present a detailed comparison of the use of active learning, self-training, and data augmentation to adapt a source-domain model to a target domain when the source-domain training data is unavailable. We identify a specific formulation of source-free active learning that consistently improves performance of the source-domain model. We believe our work highlights the interesting challenges of source-free domain adaptation, and its systematic comparison provides a solid base for future research in this area.
+
+ # Acknowledgements
+
+ Research reported in this publication was supported by the National Library of Medicine of the National Institutes of Health under Award Numbers R01LM012918 and R01LM010090. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
+
+ # Ethical Considerations
+
+ Our comparison experiments and proposed formulation are intended to encourage model sharing in source-free domain adaptation while avoiding the risk of privacy leakage caused by direct data sharing. The data we use in these experiments are publicly available and from a shared task; however, some of the data come from health institutions and require a data use agreement. Though recent research has found it difficult to recover protected information from trained models (Lehman et al., 2021), there is still some small risk that more complex models may be able to do so. However, as our research is a comparative study, we are not directly releasing models, and thus not risking any release of protected health information.
+
+ # References
+
+ Nikolay Arefyev, Boris Sheludko, Alexander Podolskiy, and Alexander Panchenko. 2020. Always keep your target in mind: Studying semantics and improving performance of neural lexical substitution. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1242-1255, Barcelona, Spain (Online). International Committee on Computational Linguistics.
267
+ Eyal Ben-David, Carmel Rabinovitz, and Roi Reichart. 2020. PERL: Pivot-based domain adaptation for pre-trained deep contextualized embedding models. Transactions of the Association for Computational Linguistics, 8:504-521.
268
+ Steven Bethard and Jonathan Parker. 2016. A semantically compositional annotation scheme for time normalization. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3779-3786, Portoorž,
269
+
270
+ Slovenia. European Language Resources Association (ELRA).
271
+ John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 440-447, Prague, Czech Republic. Association for Computational Linguistics.
272
+ Yee Seng Chan and Hwee Tou Ng. 2007. Domain adaptation with active learning for word sense disambiguation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 49-56, Prague, Czech Republic. Association for Computational Linguistics.
273
+ Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle-moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
274
+ Xia Cui and Danushka Bollegala. 2019. Self-adaptation for unsupervised domain adaptation. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 213-222, Varna, Bulgaria. INCOMA Ltd.
275
+ Cyprien de Masson d'Autume, Sebastian Ruder, Lingpeng Kong, and Dani Yogatama. 2019. Episodic memory in lifelong language learning. In NeurIPS.
276
+ Liat Ein-Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, and Noam Slonim. 2020. Active Learning for BERT: An Empirical Study. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7949-7962, Online. Association for Computational Linguistics.
277
+ Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018. Adaptive semi-supervised learning for cross-domain sentiment classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3467-3476, Brussels, Belgium. Association for Computational Linguistics.
278
+ Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020. Pretrained transformers improve out-of-distribution robustness. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2744-2751, Online. Association for Computational Linguistics.
279
+
280
+ Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.
281
+ Yunzhong Hou and Liang Zheng. 2020. Source freedom domain adaptation with image translation.
282
+ Mika Juuti, Tommi Gröndahl, Adrian Flanagan, and N. Asokan. 2020. A little goes a long way: Improving toxic language classification despite data scarcity. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 2991-3009, Online. Association for Computational Linguistics.
283
+ Youngeun Kim, Sungeun Hong, Donghyeon Cho, Hyoungseob Park, and Priyadarshini Panda. 2020. Domain adaptation without source data. CoRR, abs/2007.01524.
284
+ Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 452-457, New Orleans, Louisiana. Association for Computational Linguistics.
285
+ Wouter M. Kouw. 2018. An introduction to domain adaptation and transfer learning. CoRR, abs/1812.11806.
286
+ Vinod K. Kurmi, Venkatesh K. Subramanian, and Vinay P. Namboodiri. 2021. Domain impression: A source data free domain adaptation method. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 615-625.
287
+ Egoitz Laparra, Steven Bethard, and Timothy A Miller. 2020. Rethinking domain adaptation for machine learning over clinical language. JAMIA open, 3(2):146-150.
288
+ Egoitz Laparra, Xin Su, Yiyun Zhao, Ozlem Uzuner, Timothy Miller, and Steven Bethard. 2021. SemEval-2021 task 10: Source-free domain adaptation for semantic processing. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 348-356, Online. Association for Computational Linguistics.
289
+ Egoitz Laparra, Dongfang Xu, Ahmed Elsayed, Steven Bethard, and Martha Palmer. 2018. SemEval 2018 task 6: Parsing time normalizations. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 88-96, New Orleans, Louisiana. Association for Computational Linguistics.
290
+ Eric Lehman, Sarthak Jain, Karl Pichotta, Yoav Goldberg, and Byron Wallace. 2021. Does BERT pretrained on clinical notes reveal sensitive data? In Proceedings of the 2021 Conference of the North
291
+
292
+ American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 946-959, Online. Association for Computational Linguistics.
293
+ Rui Li, Qianfen Jiao, Wenming Cao, Hau-San Wong, and Si Wu. 2020. Model adaptation: Unsupervised domain adaptation without source data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
294
+ Jian Liang, Dapeng Hu, and Jiashi Feng. 2020. Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 6028-6039. PMLR.
295
+ Pei Liu, Xuemin Wang, Chao Xiang, and Weiye Meng. 2020. A survey of text data augmentation. In 2020 International Conference on Computer Communication and Network Security (CCNS), pages 191-195.
296
+ Y. Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692.
297
+ Nikolaos Malandrakis, Minmin Shen, Anuj Goyal, Shuyang Gao, Abhishek Sethi, and Angeliki Metallinou. 2019. Controlled text generation for data augmentation in intelligent artificial agents. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 90-98, Hong Kong. Association for Computational Linguistics.
298
+ Michael McCloskey and Neal J. Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In Gordon H. Bower, editor, *Psychology of Learning and Motivation*, volume 24, pages 109-165. Academic Press.
299
+ David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 152-159, New York City, USA. Association for Computational Linguistics.
300
+ Zhengjie Miao, Yuliang Li, Xiaolan Wang, and Wang-Chiew Tan. 2020. Snippext: Semi-supervised opinion mining with augmented data. CoRR, abs/2002.03049.
301
+ Timothy Miller. 2019. Simplified neural unsupervised domain adaptation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 414-419, Minneapolis, Minnesota. Association for Computational Linguistics.
302
+
303
+ Piyush Rai, Avishek Saha, Hal Daumé, and Suresh Venkatasubramanian. 2010. Domain adaptation meets active learning. In Proceedings of the NAACL HLT 2010 Workshop on Active Learning for Natural Language Processing, pages 27-32, Los Angeles, California. Association for Computational Linguistics.
304
+ Alan Ramponi and Barbara Plank. 2020. Neural unsupervised domain adaptation in NLP—A survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6838-6855, Barcelona, Spain (Online). International Committee on Computational Linguistics.
305
+ Sebastian Ruder and Barbara Plank. 2018. Strong baselines for neural semi-supervised learning under domain shift. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1044-1054, Melbourne, Australia. Association for Computational Linguistics.
306
+ Gözde Gül Şahin and Mark Steedman. 2018. Data augmentation via dependency tree morphing for low-resource languages. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5004-5009, Brussels, Belgium. Association for Computational Linguistics.
307
+ Roshni Sahoo, Divya Shanmugam, and John V. Guttag. 2020. Unsupervised domain adaptation in the absence of source data. CoRR, abs/2007.10233.
308
+ Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics.
309
+ Burr Settles. 2009. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences.
310
+ Xin Su, Yiyun Zhao, and Steven Bethard. 2021. The University of Arizona at SemEval-2021 task 10: Applying self-training, active learning and data augmentation to source-free domain adaptation. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 458-466, Online. Association for Computational Linguistics.
311
+ Fan-Keng Sun, Cheng-Hao Ho, and Hung-Yi Lee. 2020. LAMAL: LLanguage modeling is all you need for lifelong language learning. In International Conference on Learning Representations.
312
+ Bailin Wang, Wenpeng Yin, Xi Victoria Lin, and Caiming Xiong. 2021. Learning to synthesize data for semantic parsing. CoRR, abs/2104.05827.
313
+ Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382-6388, Hong Kong, China. Association for Computational Linguistics.
316
+ Garrett Wilson and Diane J. Cook. 2020. A survey of unsupervised deep domain adaptation. ACM Trans. Intell. Syst. Technol., 11(5).
317
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
318
+ Silei Xu, Sina Semnani, Giovanni Campagna, and Monica Lam. 2020. AutoQA: From databases to QA semantic parsers with only synthetic training data. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 422-434, Online. Association for Computational Linguistics.
319
+ David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd Annual Meeting of the Association for Computational Linguistics, pages 189-196, Cambridge, Massachusetts, USA. Association for Computational Linguistics.
320
+ Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, and Lidong Bing. 2020. Feature adaptation of pre-trained language models across languages and domains with robust self-training. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7386-7399, Online. Association for Computational Linguistics.
321
+ Juntao Yu, Mohab Elkaref, and Bernd Bohnet. 2015. Domain adaptation for dependency parsing via self-training. In Proceedings of the 14th International Conference on Parsing Technologies, pages 1-10, Bilbao, Spain. Association for Computational Linguistics.
322
+ Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, and Ming Zhou. 2019. BERT-based lexical substitution. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3368-3373, Florence, Italy. Association for Computational Linguistics.
323
+ Yftah Ziser and Roi Reichart. 2017. Neural structural correspondence learning for domain adaptation. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 400-410, Vancouver, Canada. Association for Computational Linguistics.
326
+
327
+ # A Appendix
328
+
329
+ # A.1 Hyperparameters
330
+
331
+ For both tasks, when we continue training the source-domain model on the target domain, we keep the same training hyperparameters as were used when the shared task organizers trained the models on the source domains. Those hyperparameters are shown in tables A1 and A2.
332
+
333
+ <table><tr><td>Hyperparameter</td><td>Value</td></tr><tr><td>maximum sequence length</td><td>128</td></tr><tr><td>batch size</td><td>8</td></tr><tr><td>epochs</td><td>10</td></tr><tr><td>gradient accumulation steps</td><td>4</td></tr><tr><td>learning rate warm up steps</td><td>0</td></tr><tr><td>weight decay</td><td>0.0</td></tr><tr><td>learning rate</td><td>5e-5</td></tr><tr><td>adam epsilon</td><td>1e-08</td></tr><tr><td>maximum gradient norm</td><td>1.0</td></tr></table>
334
+
335
+ Table A1: Hyperparameters for negation detection systems.
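As a minimal sketch of how these settings combine (the dictionary key names are illustrative, not the exact keys used by the shared task code), note that gradient accumulation multiplies the per-step batch size, so Table A1 implies an effective batch size of 8 × 4 = 32:

```python
# Table A1 settings for the negation detection systems, as a plain dict.
# Key names are illustrative; the shared task code may name them differently.
negation_hparams = {
    "max_seq_length": 128,
    "batch_size": 8,
    "epochs": 10,
    "gradient_accumulation_steps": 4,
    "warmup_steps": 0,
    "weight_decay": 0.0,
    "learning_rate": 5e-5,
    "adam_epsilon": 1e-8,
    "max_grad_norm": 1.0,
}

def effective_batch_size(hparams):
    """Gradient accumulation multiplies the per-step batch size."""
    return hparams["batch_size"] * hparams["gradient_accumulation_steps"]

print(effective_batch_size(negation_hparams))  # 8 * 4 = 32
```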
336
+
337
+ <table><tr><td>Hyperparameter</td><td>Value</td></tr><tr><td>maximum sequence length</td><td>271</td></tr><tr><td>batch size</td><td>2</td></tr><tr><td>epochs</td><td>3</td></tr><tr><td>gradient accumulation steps</td><td>1</td></tr><tr><td>learning rate warm up steps</td><td>500</td></tr><tr><td>weight decay</td><td>0.01</td></tr><tr><td>learning rate</td><td>5e-5</td></tr><tr><td>adam epsilon</td><td>1e-08</td></tr><tr><td>maximum gradient norm</td><td>1.0</td></tr></table>
338
+
339
+ Table A2: Hyperparameters for time expression recognition systems.
340
+
341
+ # A.2 Heat Maps for Error Analysis
342
+
343
+ For both tasks, we calculated how many source-domain model errors the best adapted models continued to make, and plotted them as heatmaps, where the rows are types of errors, and the columns are different models. Figures A1 to A4 show these analyses.
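The counting behind these heat maps can be sketched as follows; the example ids, error types, and model names below are invented purely for illustration:

```python
from collections import Counter

# Hypothetical data: each model maps to the set of example ids it gets wrong,
# and each id is labeled with an error type.
error_type = {1: "missed cue", 2: "missed cue", 3: "wrong scope", 4: "wrong scope"}
errors = {
    "Source": {1, 2, 3, 4},
    "AL":     {1, 3},
    "ST":     {2, 3, 4},
}

def persisting_errors(errors, error_type, baseline="Source"):
    """For each adapted model, count the baseline errors it still makes, by type."""
    table = {}
    for model, wrong in errors.items():
        if model == baseline:
            continue
        persisted = wrong & errors[baseline]  # errors shared with the baseline
        table[model] = Counter(error_type[i] for i in persisted)
    return table

print(persisting_errors(errors, error_type))
```

Each row of such a table (error type) against each column (model) is then one cell of a heat map.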
344
+
345
+ ![](images/993f070e4692e4a27c9d4e750332c713d431a01e32d1fa61df8fbabc4db79552.jpg)
346
Figure A1: Negation i2b2 target domain error heat map. Source is source-domain model. Oracle is oracle model. AL is the best performing active learning model. ALDA is the best performing active learning with data augmentation model. ST is the best self-training model. STDA is the best self-training with data augmentation model. The numbers in parentheses are the F1 scores of the models.
347
+
348
+ ![](images/7a130a9d7ee971764fc8a6759e52029d79c7c68f010766ff2a3bc2cdb21f8841.jpg)
349
+
350
+ ![](images/f6ea37944cb6be53d9c28e2731b9dcbe48fc60e24af9dfcddd072a3047642604.jpg)
351
+ Figure A2: Negation MIMIC-III target domain error heat map. Source is source-domain model. Oracle is oracle model. AL is the best performing active learning model. ALDA is the best performing active learning with data augmentation model. ST is the best self-training model. STDA is the best self-training with data augmentation model. The numbers in parentheses are the F1 scores of the models.
352
+
353
+ ![](images/0f336386a2efc9ab28fdbca015e32333e46aa55dd0be0d389a0a6637ba6f0427.jpg)
354
+ Figure A3: Time news target domain error heat map. Source is source-domain model. Oracle is oracle model. AL is the best performing active learning model. ALDA is the best performing active learning with data augmentation model. ST is the best self-training model. STDA is the best self-training with data augmentation model. The numbers in parentheses are the F1 scores of the models.
355
+
356
+ ![](images/982b294d11d139854d2030fba506d61352fb8b1173eb2ea5331dd7c05185b847.jpg)
357
+ Figure A4: Time food security target domain error heat map. Source is source-domain model. Oracle is oracle model. AL is the best performing active learning model. ALDA is the best performing active learning with data augmentation model. ST is the best self-training model. STDA is the best self-training with data augmentation model. The numbers in parentheses are the F1 scores of the models.
358
+
359
+ ![](images/cb17328b1c2f3488ad8839947b776a2164c50f7e62049078ea36e2e4c5d35c94.jpg)
360
+
361
<table><tr><td>Strategy</td><td>B→D</td><td>B→E</td><td>B→K</td><td>D→B</td><td>D→E</td><td>D→K</td><td>E→B</td><td>E→D</td><td>E→K</td><td>K→B</td><td>K→D</td><td>K→E</td></tr><tr><td>Source-Domain Model (baseline)</td><td>88.5</td><td>92.0</td><td>93.8</td><td>90.2</td><td>91.7</td><td>90.7</td><td>89.0</td><td>89.2</td><td>93.5</td><td>92.0</td><td>90.5</td><td>94.8</td></tr><tr><td>Fine-Tuned Source-Domain Model (oracle)</td><td>89.7</td><td>93.0</td><td>94.5</td><td>91.5</td><td>93.5</td><td>94.3</td><td>93.2</td><td>91.0</td><td>94.0</td><td>92.2</td><td>90.5</td><td>94.3</td></tr><tr><td>Self-Distilled Model</td><td>88.0</td><td>91.7</td><td>95.5</td><td>92.5</td><td>90.5</td><td>93.0</td><td>89.2</td><td>90.5</td><td>94.0</td><td>90.5</td><td>90.0</td><td>92.5</td></tr><tr><td>Passive Learning Model</td><td>86.5</td><td>92.5</td><td>92.5</td><td>91.5</td><td>89.2</td><td>91.2</td><td>90.0</td><td>90.2</td><td>93.2</td><td>91.5</td><td>89.7</td><td>91.2</td></tr><tr><td>Best model from Ye et al. (2020)</td><td>87.9</td><td>91.3</td><td>92.5</td><td>91.5</td><td>91.6</td><td>92.5</td><td>88.7</td><td>88.2</td><td>93.6</td><td>89.8</td><td>87.9</td><td>92.6</td></tr><tr><td colspan="13">Active Learning</td></tr><tr><td>AL (96 x 1)</td><td>87.7</td><td>90.2</td><td>92.7</td><td>90.7</td><td>91.0</td><td>93.0</td><td>90.2</td><td>90.7</td><td>93.2</td><td>91.7</td><td>90.0</td><td>93.8</td></tr><tr><td>AL (12 x 8) + KeepModel + KeepData</td><td>88.2</td><td>90.0</td><td>91.0</td><td>90.2</td><td>90.5</td><td>94.8</td><td>91.0</td><td>88.2</td><td>94.0</td><td>89.7</td><td>91.0</td><td>92.7</td></tr><tr><td>AL (12 x 8) + KeepModel + ResetData</td><td>87.5</td><td>93.0</td><td>79.0</td><td>83.5</td><td>90.5</td><td>91.0</td><td>86.8</td><td>78.5</td><td>89.0</td><td>85.3</td><td>83.8</td><td>89.5</td></tr><tr><td>AL (12 x 8) + ResetModel + KeepData</td><td>87.5</td><td>92.2</td><td>93.5</td><td>92.5</td><td>91.2</td><td>94.0</td><td>91.2</td><td>89.0</td><td>94.5</td><td>91.0</td><td>89.2</td><td>94.8</td></tr><tr><td>AL (12 x 8) + ResetModel + ResetData</td><td>75.0</td><td>84.0</td><td>67.2</td><td>91.7</td><td>62.5</td><td>90.0</td><td>89.2</td><td>87.5</td><td>91.0</td><td>93.0</td><td>69.0</td><td>94.5</td></tr><tr><td colspan="13">Self-training</td></tr><tr><td>ST (1)</td><td>87.5</td><td>91.7</td><td>94.3</td><td>91.5</td><td>90.5</td><td>92.5</td><td>90.2</td><td>91.7</td><td>92.5</td><td>91.5</td><td>91.5</td><td>94.3</td></tr><tr><td>ST (30) + KeepModel + KeepData</td><td>87.5</td><td>92.5</td><td>94.0</td><td>90.5</td><td>91.0</td><td>92.0</td><td>89.5</td><td>89.5</td><td>94.5</td><td>90.2</td><td>89.7</td><td>93.2</td></tr><tr><td>ST (30) + KeepModel + ResetData</td><td>90.0</td><td>91.2</td><td>94.3</td><td>91.2</td><td>90.2</td><td>92.7</td><td>90.7</td><td>90.5</td><td>94.5</td><td>91.2</td><td>90.5</td><td>93.5</td></tr><tr><td>ST (30) + ResetModel + KeepData</td><td>88.2</td><td>91.0</td><td>94.3</td><td>91.7</td><td>91.0</td><td>91.7</td><td>90.7</td><td>92.2</td><td>95.3</td><td>91.0</td><td>92.0</td><td>92.7</td></tr><tr><td>ST (30) + ResetModel + ResetData</td><td>89.0</td><td>92.5</td><td>94.0</td><td>90.7</td><td>90.5</td><td>92.2</td><td>90.0</td><td>90.7</td><td>94.8</td><td>91.5</td><td>91.2</td><td>94.3</td></tr></table>
362
+
363
+ Table A3: Accuracy on the Amazon benchmark dataset from Ye et al. (2020). B is Books. D is DVDs. E is Electronics. K is Kitchen. The bolded score is the highest score for the entire column. The underlined score is the worst score for the entire column.
364
+
365
+ # A.3 Results on Amazon Benchmark
366
+
367
+ The Amazon Sentiment Analysis dataset has been used as a domain adaptation benchmark dataset by a large number of previous works (Blitzer et al., 2007; Ziser and Reichart, 2017; He et al., 2018; Ye et al., 2020; Ben-David et al., 2020). The data consists of reviews of four different product types (domains): Books, DVDs, Electronics, and Kitchen appliances. For the labeled portion, there are 1000 positive reviews and 1000 negative reviews for each domain. From these 4 domains, we construct 12 source-free domain adaptation tasks. For better comparison we directly use the data and split from the software release of Ye et al. (2020). The data of each source domain is split into $80\%$ as source-domain training set and $20\%$ as source-domain development set. The source-domain model is trained on the source-domain training set and its hyperparameters are tuned using the source-domain development set. The data of each target domain is split into $80\%$ as target-domain development set and $20\%$ as target-domain test set. The use of target-domain development set and target-domain test set is the same as in section 3.
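The task construction and splits above can be sketched as follows. This is a minimal illustration: the actual split comes directly from the software release of Ye et al. (2020), so the shuffling below is only a stand-in.

```python
import itertools
import random

domains = ["Books", "DVDs", "Electronics", "Kitchen"]

# All ordered (source, target) pairs of distinct domains: 4 * 3 = 12 tasks.
tasks = list(itertools.permutations(domains, 2))
assert len(tasks) == 12

def split_80_20(examples, seed=0):
    """Shuffle and split one domain's labeled data 80% / 20%."""
    examples = list(examples)
    random.Random(seed).shuffle(examples)
    cut = int(0.8 * len(examples))
    return examples[:cut], examples[cut:]

# 1000 positive + 1000 negative reviews per domain.
train, dev = split_80_20(range(2000))
print(len(train), len(dev))  # 1600 400
```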
368
+
369
+ When training the source-domain model, we used RoBERTa-base as a starting point and used grid search to tune the hyperparameters within the space of:
370
+
371
+ Learning Rate (Adam): 1e-5, 2e-5, 3e-5
372
+
373
+ Batch Size: 8
374
+
375
+ Gradient Accumulation Steps: 2, 4
+
+ Epochs: 10
376
+
377
+ Table A3 shows the results of these 12 source-free domain adaptations. In 9 of 12 cases, our unadapted source-domain models score higher than the best adaptation model from Ye et al. (2020) $^{4}$ . The gap between these unadapted source-domain models and the fully target-domain adapted (oracle) models is also very small: the average difference is only 1.3 points, much smaller than the 11.1 point average difference in tables 2 and 3. In essence, no domain adaptation is needed for this data, so it is a poor dataset for evaluating source-free domain adaptation. Unsurprisingly, we thus see no source-free domain adaptation models that consistently improve performance, though we do see that the active learning ResetData models are typically poor, as they were in tables 2 and 3.
378
+
379
+ To make sure that it is not a specific split or a smaller test set that leads to good source-domain models, we also use the data from Ben-David et al. (2020) to train and test the source-domain models again. The source-domain data split and usage here is the same as before. The only difference is that there is no target-domain development set and the entire target domain is used as a test set. We show the results in table A4. All source-domain models outperform the best adapted models from Ben-David et al. (2020). It is worth noting that when we
380
+
381
+ <table><tr><td>Strategy</td><td>B→D</td><td>B→E</td><td>B→K</td><td>D→B</td><td>D→E</td><td>D→K</td><td>E→B</td><td>E→D</td><td>E→K</td><td>K→B</td><td>K→D</td><td>K→E</td></tr><tr><td>SD</td><td>91.8</td><td>93.5</td><td>95.0</td><td>93.0</td><td>93.0</td><td>94.6</td><td>92.8</td><td>90.8</td><td>94.7</td><td>92.1</td><td>90.2</td><td>94.4</td></tr><tr><td>Best model from Ben-David et al. (2020)</td><td>87.8</td><td>87.2</td><td>90.2</td><td>85.6</td><td>89.3</td><td>90.4</td><td>84.3</td><td>85.0</td><td>91.2</td><td>83.0</td><td>85.6</td><td>91.2</td></tr></table>
382
+
383
+ Table A4: Accuracy on the Amazon benchmark dataset from Ben-David et al. (2020). B is Books. D is DVDs. E is Electronics. K is Kitchen. The bolded score is the highest score for the entire column. The underlined score is the worst score for the entire column.
384
+
385
+ trained the source-domain model, we found that the large number of punctuation marks and special symbols in the data from Ben-David et al. (2020) caused the model to overfit severely (reaching an accuracy of 1.0 on the source-domain development set). After removing these symbols, the problem was resolved.
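A minimal sketch of this kind of cleaning step is shown below; the exact character set we removed is an illustrative assumption, not the precise preprocessing used in our experiments:

```python
import re

def strip_special_symbols(text):
    """Replace punctuation and special symbols with spaces, keeping word
    characters, then collapse runs of whitespace. The character class here
    is an illustrative choice, not the exact one used in our experiments."""
    cleaned = re.sub(r"[^\w\s]", " ", text)
    return re.sub(r"\s+", " ", cleaned).strip()

print(strip_special_symbols("Great product!!! ***** <br /> 10/10 :-)"))
# Great product br 10 10
```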
386
+
387
+ # A.4 Other Experimental Methods
388
+
389
+ We also tried to adapt the source-domain model by continuing to pre-train it with masked language modeling on the target domain. We removed the classification layer of the source-domain model, replaced it with a randomly initialized masked language modeling layer, trained the language model on the unlabeled target-domain data, and then replaced the masked language modeling layer with the original classification layer. The hope was that this would bring the internal representations of the source-domain model closer to the target domain. However, whether we pre-trained all layers or only selected layers, the performance of this model was always much worse than that of the source-domain model. In the future, we plan to experiment with different initialization methods for the masked language modeling layer.
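Schematically, this head-swapping procedure looks like the sketch below. The model here is a toy stand-in object, not real Transformers code, and the training callback is a placeholder:

```python
# Schematic only: "ToyModel" is a toy stand-in, not the real Transformers API.
class ToyModel:
    def __init__(self, encoder, head):
        self.encoder = encoder  # shared representation layers
        self.head = head        # task-specific output layer

def adapt_with_mlm(model, train_mlm):
    """Swap in an MLM head, continue pre-training, then restore the classifier."""
    classification_head = model.head
    model.head = "randomly initialized MLM head"
    train_mlm(model)                  # would update model.encoder on target text
    model.head = classification_head  # restore the original classification layer
    return model

m = adapt_with_mlm(ToyModel("encoder", "classifier"), train_mlm=lambda m: None)
print(m.head)  # classifier
```

The key point is that only the encoder is updated during the MLM phase; the classification layer is frozen aside and restored unchanged.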
acomparisonofstrategiesforsourcefreedomainadaptation/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c8bc360e34f434d034a77c6bf01aa9d6a53540a6a14844799d7a3f7dbb5620b1
3
+ size 976063
acomparisonofstrategiesforsourcefreedomainadaptation/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1006f4760fefb1bcf1d932abf9230efbc6b65618953c3d7d4299664708de1626
3
+ size 471659
acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/d4c1df00-d1c5-44d5-b0ad-114778b38fdd_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f3114f01379d10e7058e02f3e0e227947dca3bfe1ec1d190d296f80f836a3284
3
+ size 78317
acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/d4c1df00-d1c5-44d5-b0ad-114778b38fdd_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3f4aeb3b379e5d13966b51106eb7af524487b61b9f5dfd70dc0eab4143b6bb75
3
+ size 97364